
Lookup (Type 2) #

Recap of types #

| Type | Description | Recap | This |
|---|---|---|---|
| lookup1 | Arr[i] ∈ {0,1} | Each element of array Arr is in {0,1} (or another small set). | |
| lookup2 | Arr[i] ∈ Table | Each element of array Arr is in a disclosed table of values Table. | ✓ |

Relation #

$$\mathcal{R}_{\mathrm{lookup2}} := \left\{ (\mathsf{Arr}, T) \;\middle|\; \begin{array}{l} \mathsf{Arr}[i] \in T,\ 0 \leq i \leq n-1,\\ \mathsf{Poly_{Arr}} = \mathsf{FFT.Interp}(\omega, \mathsf{Arr}),\\ \mathsf{Poly_{T}} = \mathsf{FFT.Interp}(\omega, T),\\ K_{\mathsf{Arr}} = \mathsf{KZG.Commit}(\mathsf{Poly_{Arr}}),\\ K_{T} = \mathsf{KZG.Commit}(\mathsf{Poly_{T}}) \end{array} \right\}$$

Intuition #

The prover (P) holds an array Arr of n integers from Z_q: [a_0, a_1, a_2, …, a_{n−1}]. It will produce a succinct (independent of n) proof that each value in Arr is an element of a public table T. The prover will encode Arr and T into polynomials: Poly_Arr and Poly_T. If Arr were equal to T, the prover could complete the proof by simply performing a product check or a permutation check between Arr and T. However, if Arr is not equal to T (this is the case we want to solve), the prover would need to reveal the points where Arr equals T, which makes zero-knowledge impossible. Thus, the prover constructs auxiliary polynomial(s) to prove that the desired constraints between Arr and T hold.

There are different approaches to achieving the lookup argument, e.g., halo2 and Plookup. We will discuss these two approaches as follows.

halo2 #

The lookup argument in halo2 requires the prover to construct two auxiliary vectors, Arr′ and T′, where Arr′ is a permutation of Arr, sorted (ascending or descending does not matter), and T′ is a permutation of T, sorted by Arr′. For any two sets A, B such that A ⊆ B, we say B is sorted by A when the elements of A appear in the same order in B as they do in A. Let us demonstrate the halo2 lookup argument with a concrete example: Arr = [1, 2, 1, 6, 4, 5, 3, 0], T = Z_8. The prover constructs Arr′ and T′ such that

Arr′ = [0, 1, 1, 2, 3, 4, 5, 6], T′ = [0, 1, 7, 2, 3, 4, 5, 6]

We can observe that Arr′[i] is equal to either T′[i] or Arr′[i−1]. Since Arr′[i−1] does not exist when i = 0, we also enforce the rule Arr′[0] = T′[0]. With these two constraints, together with proofs that Arr′ is a permutation of Arr and T′ is a permutation of T, the prover can show that each element in Arr exists in T.
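
To make this concrete, here is a minimal Python sketch (the helper logic and variable names are ours, not halo2's specification) that builds Arr′ and T′ for the example above and checks the two rules. Note this toy example has |Arr| = |T|; in general the lengths may differ and padding handles the gap.

```python
# A minimal sketch (helper logic is ours) of the halo2 auxiliary vectors.
Arr = [1, 2, 1, 6, 4, 5, 3, 0]
T = list(range(8))          # T = Z_8

# Arr' is Arr sorted (ascending here; the direction does not matter).
Arr_p = sorted(Arr)

# T' is a permutation of T "sorted by Arr'": wherever Arr' introduces a value
# for the first time, T' shows that value at the same index; the unused rows
# of T (here only the value 7) fill the remaining positions.
first_use = {}
for i, a in enumerate(Arr_p):
    first_use.setdefault(a, i)
T_p = [None] * len(T)
for a, i in first_use.items():
    T_p[i] = a
leftovers = iter(t for t in T if t not in first_use)
T_p = [v if v is not None else next(leftovers) for v in T_p]

assert sorted(Arr_p) == sorted(Arr)      # Arr' is a permutation of Arr
assert sorted(T_p) == sorted(T)          # T' is a permutation of T
assert Arr_p[0] == T_p[0]                # rule for the first entry
assert all(Arr_p[i] == T_p[i] or Arr_p[i] == Arr_p[i - 1]
           for i in range(1, len(Arr_p)))
print(Arr_p)  # [0, 1, 1, 2, 3, 4, 5, 6]
print(T_p)    # [0, 1, 7, 2, 3, 4, 5, 6]
```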

Plookup #

We start with an unoptimized version of Plookup that is conceptually simpler than the final version and is still fully succinct. The inputs are: an array containing the values of the lookup table T, of size n_1; and an array Arr of length n_2 (typically longer than n_1, but not necessarily) that will be proven to contain only values from somewhere in T.

The prover will create 5 helper arrays (Acc1 to Acc5) to demonstrate this overall property (Arr[i] ∈ T for all i). The helper arrays will each be of size n_1 + n_2, with the exception of Acc4, which is of size n_1. They are as follows:

  1. Acc1 = Arr || T
  2. Acc2 = Sort(Acc1)
  3. Acc3[i] = Acc2[i+1] − Acc2[i]
  4. Acc4[i] = T[i+1] − T[i]
  5. Acc5 = Acc4 || {0}^{n_2}, where n_2 = |Arr|

It is probably worth an example at this point.

  • T = [7, 0, 15, 3]
  • Arr = [7, 0, 15, 15, 7, 7, 15, 0, 0, 7, 15, 7]
  • Acc1 = [7, 0, 15, 15, 7, 7, 15, 0, 0, 7, 15, 7, 7, 0, 15, 3]
  • Acc2 = [7, 7, 7, 7, 7, 7, 0, 0, 0, 0, 15, 15, 15, 15, 15, 3]
  • Acc3 = [0, 0, 0, 0, 0, −7, 0, 0, 0, 15, 0, 0, 0, 0, −12, ⊥]
  • Acc4 = [−7, 15, −12, ⊥]
  • Acc5 = [−7, 15, −12, ⊥, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

The first array Acc1 is the concatenation of Arr and T. The intuition is as follows. Every element of Arr is in T (this is what is being proven), but the converse is not true: not every element of T is necessarily in Arr. By constructing Acc1, the prover has an array where every element of T appears at least once, which will be convenient later. Additionally, if the prover can show every element in Acc1 is in T, this implies every element of the original Arr is in T, so it can move forward with Acc1.

How does Acc1 compare to T? They both have the same elements, but Acc1 has a bunch of extra duplicates of values, and the elements of Acc1 appear in a different (arbitrary) order. Next the prover sorts the values of Acc1, grouping all duplicates together and having them appear in the same order as in the original T (which does not itself have to be sorted).

Now how does Acc2 compare to T? They have the same elements in the same order (including 3, which is not in the original Arr), however Acc2 has a bunch of duplicates of some of the elements. Next we want to “flag” all the elements that are duplicates. The way we do this is to take the difference between each pair of neighbouring elements. If the neighbouring elements are the same (duplicates), this will place a 0 in that position. If they are not, a non-zero number will appear instead.

This is straightforward until we hit the last element of Acc2 (and thus of Acc3), which has no “next” element in the array. We will leave it for now as an arbitrary integer ⊥.

We have now marked the duplicate elements, but other integers appear wherever neighbouring elements differ. The idea is to do the same thing with T to create Acc4, which we then pad with n_2 zeros to obtain Acc5:

  • Acc5 = [−7, 15, −12, ⊥, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

If Acc5 is a permutation of Acc3 and everything else is correctly constructed, then all elements in Arr are from T.
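
A toy Python check of this construction for the example above (variable names are ours; BOT stands in for the arbitrary final entry ⊥, and the real argument deals with those unconstrained entries explicitly):

```python
# Toy check of the unoptimized-Plookup helper arrays for the example above.
BOT = object()   # placeholder for the arbitrary last entry (⊥ in the text)

T   = [7, 0, 15, 3]
Arr = [7, 0, 15, 15, 7, 7, 15, 0, 0, 7, 15, 7]
n1, n2 = len(T), len(Arr)

Acc1 = Arr + T                                    # concatenation
Acc2 = sorted(Acc1, key=T.index)                  # duplicates grouped, in T's order
Acc3 = [Acc2[i + 1] - Acc2[i] for i in range(len(Acc2) - 1)] + [BOT]
Acc4 = [T[i + 1] - T[i] for i in range(n1 - 1)] + [BOT]
Acc5 = Acc4 + [0] * n2

assert Acc2 == [7] * 6 + [0] * 4 + [15] * 5 + [3]
assert Acc3[:-1] == [0, 0, 0, 0, 0, -7, 0, 0, 0, 15, 0, 0, 0, 0, -12]

# Acc5 should be a permutation of Acc3 (comparing the constrained entries only).
multiset = lambda xs: sorted(x for x in xs if x is not BOT)
assert multiset(Acc5) == multiset(Acc3)
```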

The prover will show that Acc1 is constructed correctly with the concat gadget. It will not prove that Acc2 is sorted correctly (if it does not sort correctly, the protocol will simply fail), but it will prove that Acc2 is a permutation of Acc1 using shuffle1(Acc1, Acc2). It will prove Acc3 is constructed correctly with add1(Acc2[i+1], −Acc2[i]) and that Acc4 is constructed correctly with add1(T[i+1], −T[i]). Last, it will prove that Acc5 is correctly formed with concat and, finally, that it is a permutation of Acc3 with shuffle1(Acc5, Acc3).

Note that appending T to the end of Arr does not change anything about the argument: the appended values are trivially in T. In fact, the only difference between Acc1 and T itself is that Acc1 contains several duplicates of T's values.

With these arrays, the following constraints are demonstrated:

  1. Uses the concat gadget
  2. Uses shuffle1(Acc1, Acc2)
  3. Uses add1(Acc2[i+1], −Acc2[i])
  4. Uses add1(T[i+1], −T[i])
  5. Uses concat and then shuffle1(Acc5, Acc3)

Unlike halo2, the final Plookup requires only one auxiliary vector, S, where S is the multiset union of Arr and T, sorted by T. The prover encodes Arr, T, and S into polynomials Poly_Arr, Poly_T, and Poly_S, and computes a grand product over (randomized terms of) Poly_Arr and Poly_T and a grand product over Poly_S, using random challenges from the verifier. The Plookup theorem tells us the two products are equal if and only if Arr ⊂ T and S = (Arr, T) sorted by T.

Protocol Details #

halo2 #

Array Level #

  • P and V are given a public table T
  • P holds an array Arr = [a_0, a_1, a_2, …, a_{n−1}] of n integers (a_i ∈ Z_q)
  • P computes an array Arr′ = [a′_0, a′_1, a′_2, …, a′_{n−1}] of n integers (a′_i ∈ Z_q) such that:
    • Arr′ is a permutation of Arr, sorted in ascending or descending order
  • P computes an array T′ such that:
    • T′ is a permutation of T, sorted by Arr′
  • Arr′ and T′ have the following relations:
    • Arr′[0] = T′[0]
    • Arr′[i] = T′[i] ∨ Arr′[i] = Arr′[i−1], for i ≠ 0

Polynomial Level #

We assume arrays Arr, Arr′, T, and T′ are encoded as the y-coordinates into a univariate polynomial where the x-coordinates (called the domain H_κ) are chosen as the multiplicative group of order κ with generator ω ∈ G_κ (see Background for more). In short, ω⁰ is the first element and ω^{κ−1} is the last element of H_κ. If κ is larger than the length of the array, the array can be padded with elements of value 1 (which will not change the product).
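
The encoding step can be pictured with a tiny Lagrange-interpolation stand-in for FFT.Interp. The field F_17 and the order-8 element ω = 9 below are toy choices of ours for illustration only; real systems use a large field and an actual FFT.

```python
# Toy stand-in for FFT.Interp: Lagrange-interpolate an array over the domain
# H_kappa = {omega^0, ..., omega^{kappa-1}} in a small prime field.
p = 17                     # toy field F_17
omega = 9                  # element of multiplicative order 8 in F_17
kappa = 8
H = [pow(omega, i, p) for i in range(kappa)]

def polymul(a, b):
    """Multiply two coefficient lists (lowest degree first) mod p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def interp(ys):
    """Coefficients of the unique degree < kappa polynomial with Poly(H[i]) = ys[i]."""
    coeffs = [0] * kappa
    for i, yi in enumerate(ys):
        basis, denom = [1], 1
        for j, xj in enumerate(H):
            if j != i:
                basis = polymul(basis, [(-xj) % p, 1])   # multiply by (X - xj)
                denom = denom * (H[i] - xj) % p
        scale = yi * pow(denom, -1, p) % p
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

Arr = [1, 2, 1, 6, 4, 5, 3, 0]
Poly_Arr = interp(Arr)
assert all(evaluate(Poly_Arr, H[i]) == Arr[i] % p for i in range(kappa))
```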

Recall the four constraints we want to prove:

  1. Arr′ is a permutation of Arr
  2. T′ is a permutation of T
  3. The first value in Arr′ equals the first value in T′
  4. The rest of the values in Arr′ are of the form Arr′[i] = T′[i] or Arr′[i] = Arr′[i−1], for i ≠ 0

In polynomial form, the constraints are (α,β are challenges from V):

  1. $\prod_{X \in H_\kappa}\big[\alpha - \mathsf{Poly_{Arr}}(X)\big] = \prod_{X \in H_\kappa}\big[\alpha - \mathsf{Poly_{Arr'}}(X)\big]$
  2. $\prod_{X \in H_\kappa}\big[\beta - \mathsf{Poly_{T}}(X)\big] = \prod_{X \in H_\kappa}\big[\beta - \mathsf{Poly_{T'}}(X)\big]$
  3. For $X = \omega^0$: $\mathsf{Poly_{Arr'}}(X) - \mathsf{Poly_{T'}}(X) = 0$
  4. For all $X \in H_\kappa \setminus \{\omega^0\}$: $\big[\mathsf{Poly_{Arr'}}(X) - \mathsf{Poly_{T'}}(X)\big]\cdot\big[\mathsf{Poly_{Arr'}}(X) - \mathsf{Poly_{Arr'}}(X\omega^{-1})\big] = 0$

We take care of the “for X” conditions by zeroing out the rest of the polynomial that is not zero. See the gadget zero1 for more on why this works.

  1. $\mathsf{PolyVanish_1} = \prod_{X \in H_\kappa}\big[\alpha - \mathsf{Poly_{Arr'}}(X)\big] - \prod_{X \in H_\kappa}\big[\alpha - \mathsf{Poly_{Arr}}(X)\big] = 0$
  2. $\mathsf{PolyVanish_2} = \prod_{X \in H_\kappa}\big[\beta - \mathsf{Poly_{T'}}(X)\big] - \prod_{X \in H_\kappa}\big[\beta - \mathsf{Poly_{T}}(X)\big] = 0$
  3. $\mathsf{PolyVanish_3}(X) = \big[\mathsf{Poly_{Arr'}}(X) - \mathsf{Poly_{T'}}(X)\big]\cdot\frac{X^{\kappa}-1}{X-\omega^0} = 0$
  4. $\mathsf{PolyVanish_4}(X) = \big[\mathsf{Poly_{Arr'}}(X) - \mathsf{Poly_{T'}}(X)\big]\cdot\big[\mathsf{Poly_{Arr'}}(X) - \mathsf{Poly_{Arr'}}(X\omega^{-1})\big]\cdot(X-\omega^0) = 0$

Instead of proving PolyVanish1 and PolyVanish2 are vanishing through two separate permutation checks, we can construct an accumulator Poly_Z to make it more efficient (the domain needs to be expanded to size 2κ, which we denote by k):

  • $\mathsf{Poly_Z}(\omega^0) = \mathsf{Poly_Z}(\omega^{k-1}) = 1$
  • For all $X \in H_k \setminus \{\omega^{k-1}\}$: $\mathsf{Poly_Z}(X\omega) = \mathsf{Poly_Z}(X)\cdot\frac{[\mathsf{Poly_{Arr}}(X)+\alpha][\mathsf{Poly_{T}}(X)+\beta]}{[\mathsf{Poly_{Arr'}}(X)+\alpha][\mathsf{Poly_{T'}}(X)+\beta]}$
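
At the array level, this recurrence just walks the two randomized products in lockstep. The sketch below (toy field, toy challenge values, names ours) uses Arr, T, Arr′, T′ from the earlier example and checks the boundary conditions.

```python
# Array-level sketch of the halo2 accumulator: Z starts at 1, is multiplied by
# (Arr[i]+alpha)(T[i]+beta) / (Arr'[i]+alpha)(T'[i]+beta) at each step, and
# returns to 1 at the end because Arr'/T' are permutations of Arr/T.
p = 101                       # toy field
alpha, beta = 7, 13           # stand-ins for the verifier's random challenges

Arr   = [1, 2, 1, 6, 4, 5, 3, 0]
T     = list(range(8))
Arr_p = [0, 1, 1, 2, 3, 4, 5, 6]
T_p   = [0, 1, 7, 2, 3, 4, 5, 6]

Z = [1]
for a, t, ap, tp in zip(Arr, T, Arr_p, T_p):
    num = (a + alpha) * (t + beta) % p
    den = (ap + alpha) * (tp + beta) % p
    Z.append(Z[-1] * num * pow(den, -1, p) % p)

# Boundary conditions Poly_Z(omega^0) = Poly_Z(omega^{k-1}) = 1.
assert Z[0] == 1 and Z[-1] == 1
```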

Now we have the new PolyVanish1 and PolyVanish2:

  1. $\mathsf{PolyVanish_1}(X) = [\mathsf{Poly_Z}(X) - 1]\cdot\frac{X^{k}-1}{(X-\omega^0)(X-\omega^{k-1})} = 0$
  2. $\mathsf{PolyVanish_2}(X) = \big\{\mathsf{Poly_Z}(X\omega)[\mathsf{Poly_{Arr'}}(X)+\alpha][\mathsf{Poly_{T'}}(X)+\beta] - \mathsf{Poly_Z}(X)[\mathsf{Poly_{Arr}}(X)+\alpha][\mathsf{Poly_{T}}(X)+\beta]\big\}\cdot(X-\omega^{k-1}) = 0$

These equations are true for every value of X ∈ H_k (but not necessarily outside of these values). To show this, we divide each polynomial by X^k − 1, which is a minimal vanishing polynomial for H_k that does not require interpolation to create. If the quotients are polynomials (and not rational functions), then PolyVanish1(X), PolyVanish2(X), PolyVanish3(X), and PolyVanish4(X) must indeed be vanishing on H_k. Specifically, the prover computes (a small divisibility check follows the list below):

  1. $Q_1(X) = \frac{\mathsf{PolyVanish_1}(X)}{X^k-1}$
  2. $Q_2(X) = \frac{\mathsf{PolyVanish_2}(X)}{X^k-1}$
  3. $Q_3(X) = \frac{\mathsf{PolyVanish_3}(X)}{X^k-1}$
  4. $Q_4(X) = \frac{\mathsf{PolyVanish_4}(X)}{X^k-1}$
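
A quick way to see the divisibility condition in action: reducing a polynomial modulo X^k − 1 folds the coefficient of X^i into position i mod k (because X^k ≡ 1), so the quotient exists exactly when that fold is all zeros. The values below are toy choices of ours.

```python
# X^k - 1 divides W(X) exactly when folding W's coefficients modulo k
# (coefficient of X^i added into position i mod k) gives the zero vector.
p, k = 101, 4

def divisible_by_xk_minus_1(w):
    rem = [0] * k
    for i, c in enumerate(w):
        rem[i % k] = (rem[i % k] + c) % p
    return all(r == 0 for r in rem)

# W(X) = (X^k - 1)(3 + 5X) = -3 - 5X + 3X^4 + 5X^5 vanishes on all of H_k ...
W_good = [(-3) % p, (-5) % p, 0, 0, 3, 5]
# ... while perturbing one coefficient breaks divisibility.
W_bad = list(W_good)
W_bad[0] = (W_bad[0] + 1) % p

assert divisible_by_xk_minus_1(W_good)
assert not divisible_by_xk_minus_1(W_bad)
```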

Instead of proving the four polynomials are zero polynomials one by one, we can linearly combine the four polynomials with a random challenge ρ sent by V to compute:

  • $W(X) = \mathsf{PolyVanish_1}(X) + \rho\,\mathsf{PolyVanish_2}(X) + \rho^2\,\mathsf{PolyVanish_3}(X) + \rho^3\,\mathsf{PolyVanish_4}(X) = 0$

When PolyVanish1(X), PolyVanish2(X), PolyVanish3(X), and PolyVanish4(X) are vanishing on the domain H_k, W(X) is also vanishing on H_k; conversely, if W(X) vanishes on H_k, then with high probability (over the choice of ρ) each PolyVanish_i(X) vanishes on H_k as well. Again, W(X) is vanishing on the domain H_k if and only if Q(X) = W(X)/(X^k − 1) exists as a polynomial.

By rearranging, we can get PolyZero(X) as a true zero polynomial (zero at every value both in H_k and outside of it):

$\mathsf{PolyZero}(X) = \mathsf{PolyVanish_1}(X) + \rho\,\mathsf{PolyVanish_2}(X) + \rho^2\,\mathsf{PolyVanish_3}(X) + \rho^3\,\mathsf{PolyVanish_4}(X) - Q(X)\cdot(X^{k}-1) = 0$

Ultimately the halo2 lookup argument will satisfy the following constraints at the Commitment Level:

  1. Show Q(X) exists
  2. Show PolyZero(X) is correctly constructed from Poly_Z(X), Poly_Arr(X), Poly_T(X), Poly_Arr′(X), and Poly_T′(X)
  3. Show PolyZero(X) is a zero polynomial

Commitment Level #

The verifier will never see the arrays or polynomials themselves. They are undisclosed because they either (i) contain private data or (ii) are too large to examine and maintain a succinct proof system. Instead, the prover will use commitments.

The prover will write the following commitments to the transcript:

  • K_Z = KZG.Commit(Poly_Z(X))
  • K_Arr = KZG.Commit(Poly_Arr(X))
  • K_T = KZG.Commit(Poly_T(X))
  • K_Arr′ = KZG.Commit(Poly_Arr′(X))
  • K_T′ = KZG.Commit(Poly_T′(X))

The challenges α and β used to construct Poly_Z(X) are obtained with strong Fiat-Shamir once the commitments to Poly_Arr(X), Poly_T(X), Poly_Arr′(X), and Poly_T′(X) are in the transcript. The prover will then generate a random challenge outside of H_k (again using strong Fiat-Shamir); call this point ρ. It will be used by the prover to create polynomial Q(X) (see above), and the prover will write to the transcript:

  • ρ
  • KQ=KZG.Commit(Q(X))

The prover will generate a second random challenge evaluation point (using strong Fiat-Shamir) that is outside of H_k. Call this point ζ. The prover will write the following evaluation points and opening proofs to the transcript:

  • ζ
  • Poly_Z(ζ) = KZG.Open(K_Z, ζ)
  • Poly_Z(ζω) = KZG.Open(K_Z, ζω)
  • Poly_Arr(ζ) = KZG.Open(K_Arr, ζ)
  • Poly_Arr′(ζ) = KZG.Open(K_Arr′, ζ)
  • Poly_Arr′(ζω⁻¹) = KZG.Open(K_Arr′, ζω⁻¹)
  • Poly_T(ζ) = KZG.Open(K_T, ζ)
  • Poly_T′(ζ) = KZG.Open(K_T′, ζ)
  • Q(ζ) = KZG.Open(K_Q, ζ)

To check the proof, the verifier uses the transcript to construct the value YZero as follows:

  • $\mathsf{YVanish_1} = [\mathsf{Poly_Z}(\zeta) - 1]\cdot\frac{\zeta^{k}-1}{(\zeta-\omega^0)(\zeta-\omega^{k-1})}$
  • $\mathsf{YVanish_2} = \big\{\mathsf{Poly_Z}(\zeta\omega)[\mathsf{Poly_{Arr'}}(\zeta)+\alpha][\mathsf{Poly_{T'}}(\zeta)+\beta] - \mathsf{Poly_Z}(\zeta)[\mathsf{Poly_{Arr}}(\zeta)+\alpha][\mathsf{Poly_{T}}(\zeta)+\beta]\big\}\cdot(\zeta-\omega^{k-1})$
  • $\mathsf{YVanish_3} = [\mathsf{Poly_{Arr'}}(\zeta) - \mathsf{Poly_{T'}}(\zeta)]\cdot\frac{\zeta^{k}-1}{\zeta-\omega^0}$
  • $\mathsf{YVanish_4} = [\mathsf{Poly_{Arr'}}(\zeta) - \mathsf{Poly_{T'}}(\zeta)]\cdot[\mathsf{Poly_{Arr'}}(\zeta) - \mathsf{Poly_{Arr'}}(\zeta\omega^{-1})]\cdot(\zeta-\omega^0)$
  • $\mathsf{YZero} = \mathsf{YVanish_1} + \rho\,\mathsf{YVanish_2} + \rho^2\,\mathsf{YVanish_3} + \rho^3\,\mathsf{YVanish_4} - Q(\zeta)\cdot(\zeta^{k}-1)$

Finally, if the constraint system is true, the following constraint will be true (and will be false otherwise with overwhelming probability, due to the Schwartz-Zippel lemma on ρ and ζ) :

  • YZero=?0
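
Schematically, the verifier's work is just the field arithmetic below; the opened evaluations and the challenges are its inputs. The function and argument names are ours, and a real verifier must additionally check the KZG opening proofs themselves.

```python
# Schematic of the verifier's final arithmetic for the halo2 lookup argument.
# Arguments: field modulus p, expanded domain size k, generator omega, the
# challenges, and the values opened from the commitments at zeta (and shifts).
def y_zero(p, k, omega, alpha, beta, rho, zeta,
           Z_z, Z_zw, Arr_z, Arrp_z, Arrp_zwinv, T_z, Tp_z, Q_z):
    inv = lambda x: pow(x % p, -1, p)
    last = pow(omega, k - 1, p)               # omega^{k-1}
    vanish = (pow(zeta, k, p) - 1) % p        # zeta^k - 1
    # omega^0 = 1, so (zeta - omega^0) is written (zeta - 1) below.
    y1 = (Z_z - 1) * vanish * inv((zeta - 1) * (zeta - last)) % p
    y2 = (Z_zw * (Arrp_z + alpha) * (Tp_z + beta)
          - Z_z * (Arr_z + alpha) * (T_z + beta)) * (zeta - last) % p
    y3 = (Arrp_z - Tp_z) * vanish * inv(zeta - 1) % p
    y4 = (Arrp_z - Tp_z) * (Arrp_z - Arrp_zwinv) * (zeta - 1) % p
    return (y1 + rho * y2 + rho**2 * y3 + rho**3 * y4 - Q_z * vanish) % p

# The verifier accepts only if y_zero(...) == 0.
```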

Plookup #

Array Level #

  • P and V are given a public table T = [t_0, t_1, t_2, …, t_{d−1}] of d integers (t_i ∈ Z_q)
  • P holds an array Arr = [a_0, a_1, a_2, …, a_{n−1}] of n integers (a_i ∈ Z_q)
  • P constructs an array S = (Arr, T) = [s_0, s_1, s_2, …, s_{n+d−1}] of n + d integers (s_i ∈ Z_q) such that:
    • S is the multiset union of Arr and T
    • S is sorted by T
  • Arr, T, and S have the following relations:
    • For each i ∈ [0, d−1), there exists a j ∈ [0, n+d−1) such that (t_i, t_{i+1}) = (s_j, s_{j+1})
    • Let I be the set of those d−1 indices j, and let Ī := [0, n+d−1) ∖ I. For each i ∈ Ī, there exists a j ∈ [0, n) such that s_i = s_{i+1} = a_j
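
A small sketch (variable names ours) that builds S for the earlier example and checks the two adjacency conditions just listed:

```python
# Build S as the multiset union of Arr and T, ordered by T, and check the two
# adjacency conditions from the list above.
T   = [7, 0, 15, 3]
Arr = [7, 0, 15, 15, 7, 7, 15, 0, 0, 7, 15, 7]
d, n = len(T), len(Arr)

S = sorted(Arr + T, key=T.index)      # "sorted by T": values appear in T's order
assert len(S) == n + d

# Condition 1: every adjacent pair of T shows up as an adjacent pair of S.
I = []
for i in range(d - 1):
    j = next(j for j in range(n + d - 1) if (S[j], S[j + 1]) == (T[i], T[i + 1]))
    I.append(j)

# Condition 2: every other adjacent pair of S duplicates some value of Arr.
for i in set(range(n + d - 1)) - set(I):
    assert S[i] == S[i + 1] and S[i] in Arr
```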

Polynomial Level #

We assume arrays Arr, T, and S are encoded as the y-coordinates into a univariate polynomial where the x-coordinates (called the domain H_κ) are chosen as the multiplicative group of order κ with generator ω ∈ G_κ (see Background for more). In short, ω⁰ is the first element and ω^{κ−1} is the last element of H_κ. If κ is larger than the length of the array, the array can be padded with elements of value 1 (which will not change the product).

Recall the two constraints we want to prove:

  1. For each i ∈ [0, d−1), there exists a j ∈ [0, n+d−1) such that (t_i, t_{i+1}) = (s_j, s_{j+1})
  2. Let I be the set of those d−1 indices j, and let Ī := [0, n+d−1) ∖ I. For each i ∈ Ī, there exists a j ∈ [0, n) such that s_i = s_{i+1} = a_j

In polynomial form, the constraints are (α,β are challenges from V):

  1. $(1+\beta)^n \prod_{i<n}\big[\alpha + \mathsf{Poly_{Arr}}(\omega^i)\big]\prod_{i<d-1}\big[\alpha(1+\beta) + \mathsf{Poly_{T}}(\omega^i) + \beta\,\mathsf{Poly_{T}}(\omega^{i+1})\big] = \prod_{i<n+d-1}\big[\alpha(1+\beta) + \mathsf{Poly_{S}}(\omega^i) + \beta\,\mathsf{Poly_{S}}(\omega^{i+1})\big]$
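
This identity can be checked numerically on the earlier example over a toy prime field with arbitrary challenge values (all values below are illustrative choices of ours):

```python
# Numeric check of the Plookup product identity for the earlier example.
p = 10007                      # toy prime field
alpha, beta = 1234, 5678       # arbitrary challenge values

T   = [7, 0, 15, 3]
Arr = [7, 0, 15, 15, 7, 7, 15, 0, 0, 7, 15, 7]
S   = sorted(Arr + T, key=T.index)    # union of Arr and T, sorted by T
n, d = len(Arr), len(T)

lhs = pow(1 + beta, n, p)
for a in Arr:
    lhs = lhs * (alpha + a) % p
for i in range(d - 1):
    lhs = lhs * (alpha * (1 + beta) + T[i] + beta * T[i + 1]) % p

rhs = 1
for i in range(n + d - 1):
    rhs = rhs * (alpha * (1 + beta) + S[i] + beta * S[i + 1]) % p

# Equality holds because every element of Arr appears in T and S is built
# as the union of Arr and T sorted by T.
assert lhs == rhs
```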

To efficiently prove the above identity holds, we can use a trick similar to the one in the halo2 lookup by constructing an accumulator:

  • $\mathsf{Poly_Z}(\omega^0) = \mathsf{Poly_Z}(\omega^{n+d-1}) = 1$
  • For $i \in [0, n+d-1)$: $\mathsf{Poly_Z}(X\omega) = \mathsf{Poly_Z}(X)\cdot\frac{(1+\beta)\,[\alpha+\mathsf{Poly_{Arr}}(X)]\,[\alpha(1+\beta)+\mathsf{Poly_{T}}(X)+\beta\,\mathsf{Poly_{T}}(X\omega)]}{\alpha(1+\beta)+\mathsf{Poly_{S}}(X)+\beta\,\mathsf{Poly_{S}}(X\omega)}$

However, the above accumulator does not work directly because the denominator polynomial, Poly_S, encodes an array of a different length than Poly_Arr and Poly_T. Specifically, there are n elements in Arr, d elements in T, but n + d elements in S. Thus we have to decompose the denominator so that it can be traversed in the same number of steps as Arr and T. It is worth noting that, in the numerator, the left term contains Poly_Arr(X) while the right term contains Poly_T(X) and Poly_T(Xω); therefore, it is convenient to assume d = n + 1. The length of S is then 2n + 1. To traverse S in n steps, we can halve it into S_l = [s_0, s_1, …, s_{n−1}, s_n] and S_h = [s_n, s_{n+1}, …, s_{2n−1}, s_{2n}], and prove S_l[n] = S_h[0]. Then we can compute the accumulator such that (a numeric sketch follows the list below):

  • $\mathsf{Poly_Z}(\omega^0) = \mathsf{Poly_Z}(\omega^{\kappa-1}) = 1$
  • For $X \in H_n \setminus \{\omega^{\kappa-1}\}$: $\mathsf{Poly_Z}(X\omega) = \mathsf{Poly_Z}(X)\cdot\frac{(1+\beta)\,[\alpha+\mathsf{Poly_{Arr}}(X)]\,[\alpha(1+\beta)+\mathsf{Poly_{T}}(X)+\beta\,\mathsf{Poly_{T}}(X\omega)]}{[\alpha(1+\beta)+\mathsf{Poly_{S_l}}(X)+\beta\,\mathsf{Poly_{S_l}}(X\omega)]\,[\alpha(1+\beta)+\mathsf{Poly_{S_h}}(X)+\beta\,\mathsf{Poly_{S_h}}(X\omega)]}$
  • For $X = \omega^{\kappa-1}$: $\mathsf{Poly_{S_l}}(X) = \mathsf{Poly_{S_h}}(X\omega)$
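
An array-level sketch of the halved accumulator, assuming d = n + 1 as in the text (the arrays, field, and challenge values below are toy choices of ours):

```python
# Array-level sketch of the Plookup accumulator with S split into S_l and S_h.
p = 10007
alpha, beta = 321, 654

T   = [1, 2, 3, 4]                  # d = 4
Arr = [2, 2, 4]                     # n = 3 = d - 1, every value is in T
S   = sorted(Arr + T, key=T.index)  # [1, 2, 2, 2, 3, 4, 4], length 2n + 1
n = len(Arr)
S_l, S_h = S[:n + 1], S[n:]         # halves overlap in the middle
assert S_l[n] == S_h[0]             # the "stitching" condition

Z = [1]
for i in range(n):
    num = (1 + beta) * (alpha + Arr[i]) % p
    num = num * (alpha * (1 + beta) + T[i] + beta * T[i + 1]) % p
    den = (alpha * (1 + beta) + S_l[i] + beta * S_l[i + 1]) % p
    den = den * (alpha * (1 + beta) + S_h[i] + beta * S_h[i + 1]) % p
    Z.append(Z[-1] * num * pow(den, -1, p) % p)

# Boundary conditions on Poly_Z: start and end at 1.
assert Z[0] == 1 and Z[-1] == 1
```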

Similarly, we take care of the “for X” conditions by zeroing out the rest of the polynomial that is not zero. See the gadget zero1 for more on why this works.

  1. $\mathsf{PolyVanish_1}(X) = [\mathsf{Poly_Z}(X)-1]\cdot\frac{X^{n}-1}{(X-\omega^0)(X-\omega^{\kappa-1})} = 0$
  2. $\mathsf{PolyVanish_2}(X) = \big\{\mathsf{Poly_Z}(X\omega)\,[\alpha(1+\beta)+\mathsf{Poly_{S_l}}(X)+\beta\,\mathsf{Poly_{S_l}}(X\omega)]\,[\alpha(1+\beta)+\mathsf{Poly_{S_h}}(X)+\beta\,\mathsf{Poly_{S_h}}(X\omega)] - \mathsf{Poly_Z}(X)\,(1+\beta)\,[\alpha+\mathsf{Poly_{Arr}}(X)]\,[\alpha(1+\beta)+\mathsf{Poly_{T}}(X)+\beta\,\mathsf{Poly_{T}}(X\omega)]\big\}\cdot(X-\omega^{\kappa-1}) = 0$
  3. $\mathsf{PolyVanish_3}(X) = [\mathsf{Poly_{S_l}}(X) - \mathsf{Poly_{S_h}}(X\omega)]\cdot\frac{X^{n}-1}{X-\omega^{\kappa-1}} = 0$

These equations are true for every value of X ∈ H_n (but not necessarily outside of these values). To show this, we divide each polynomial by X^n − 1, which is a minimal vanishing polynomial for H_n that does not require interpolation to create. If the quotients are polynomials (and not rational functions), then PolyVanish1(X), PolyVanish2(X), and PolyVanish3(X) must indeed be vanishing on H_n. Specifically, the prover computes:

  1. $Q_1(X) = \frac{\mathsf{PolyVanish_1}(X)}{X^n-1}$
  2. $Q_2(X) = \frac{\mathsf{PolyVanish_2}(X)}{X^n-1}$
  3. $Q_3(X) = \frac{\mathsf{PolyVanish_3}(X)}{X^n-1}$

Instead of proving the three polynomials are zero polynomials one by one, we can linearly combine the three polynomials with a random challenge ρ sent by V to compute:

  • $W(X) = \mathsf{PolyVanish_1}(X) + \rho\,\mathsf{PolyVanish_2}(X) + \rho^2\,\mathsf{PolyVanish_3}(X) = 0$

When PolyVanish1(X), PolyVanish2(X), and PolyVanish3(X) are vanishing on the domain H_n, W(X) is also vanishing on H_n; conversely, if W(X) vanishes on H_n, then with high probability (over the choice of ρ) each PolyVanish_i(X) vanishes on H_n as well. Again, W(X) is vanishing on the domain H_n if and only if Q(X) = W(X)/(X^n − 1) exists as a polynomial.

By rearranging, we can get PolyZero(X) as a true zero polynomial (zero at every value both in H_n and outside of it):

$\mathsf{PolyZero}(X) = \mathsf{PolyVanish_1}(X) + \rho\,\mathsf{PolyVanish_2}(X) + \rho^2\,\mathsf{PolyVanish_3}(X) - Q(X)\cdot(X^{n}-1) = 0$

Ultimately the Plookup will satisfy the following constraints at the Commitment Level:

  1. Show Q(X) exists
  2. Show PolyZero(X) is correctly constructed from Poly_Z(X), Poly_Arr(X), Poly_T(X), Poly_{S_l}(X), and Poly_{S_h}(X)
  3. Show PolyZero(X) is a zero polynomial

Commitment Level #

The verifier will never see the arrays or polynomials themselves. They are undisclosed because they either (i) contain private data or (ii) are too large to examine and maintain a succinct proof system. Instead, the prover will use commitments.

The prover will write the following commitments to the transcript:

  • KZ=KZG.Commit(PolyZ(X))
  • KArr=KZG.Commit(PolyArr(X))
  • KT=KZG.Commit(PolyT(X))
  • KSl=KZG.Commit(PolySl(X))
  • KSh=KZG.Commit(PolySh(X))

As in the halo2 argument, the challenges α and β are obtained with strong Fiat-Shamir once the commitments above are in the transcript. The prover will then generate a random challenge outside of H_n (again using strong Fiat-Shamir); call this point ρ. It will be used by the prover to create polynomial Q(X) (see above), and the prover will write to the transcript:

  • ρ
  • KQ=KZG.Commit(Q(X))

The prover will generate a second random challenge evaluation point (using strong Fiat-Shamir) that is outside of H_n. Call this point ζ. The prover will write the following evaluation points and opening proofs to the transcript:

  • ζ
  • PolyZ(ζ)=KZG.Open(KZ,ζ)
  • PolyZ(ζω)=KZG.Open(KZ,ζω)
  • PolyArr(ζ)=KZG.Open(KArr,ζ)
  • PolyT(ζ)=KZG.Open(KT,ζ)
  • PolyT(ζω)=KZG.Open(KT,ζω)
  • PolySl(ζ)=KZG.Open(KSl,ζ)
  • PolySl(ζω)=KZG.Open(KSl,ζω)
  • PolySh(ζ)=KZG.Open(KSh,ζ)
  • PolySh(ζω)=KZG.Open(KSh,ζω)
  • Q(ζ)=KZG.Open(KQ,ζ)

To check the proof, the verifier uses the transcript to construct the value YZero as follows:

  • $\mathsf{YVanish_1} = [\mathsf{Poly_Z}(\zeta)-1]\cdot\frac{\zeta^{n}-1}{(\zeta-\omega^0)(\zeta-\omega^{\kappa-1})}$
  • $\mathsf{YVanish_2} = \big\{\mathsf{Poly_Z}(\zeta\omega)\,[\alpha(1+\beta)+\mathsf{Poly_{S_l}}(\zeta)+\beta\,\mathsf{Poly_{S_l}}(\zeta\omega)]\,[\alpha(1+\beta)+\mathsf{Poly_{S_h}}(\zeta)+\beta\,\mathsf{Poly_{S_h}}(\zeta\omega)] - \mathsf{Poly_Z}(\zeta)\,(1+\beta)\,[\alpha+\mathsf{Poly_{Arr}}(\zeta)]\,[\alpha(1+\beta)+\mathsf{Poly_{T}}(\zeta)+\beta\,\mathsf{Poly_{T}}(\zeta\omega)]\big\}\cdot(\zeta-\omega^{\kappa-1})$
  • $\mathsf{YVanish_3} = [\mathsf{Poly_{S_l}}(\zeta) - \mathsf{Poly_{S_h}}(\zeta\omega)]\cdot\frac{\zeta^{n}-1}{\zeta-\omega^{\kappa-1}}$
  • $\mathsf{YZero} = \mathsf{YVanish_1} + \rho\,\mathsf{YVanish_2} + \rho^2\,\mathsf{YVanish_3} - Q(\zeta)\cdot(\zeta^{n}-1)$

Finally, if the constraint system is true, the following constraint will be true (and will be false otherwise with overwhelming probability, due to the Schwartz-Zippel lemma on ρ and ζ) :

  • YZero=?0

Security Proof #

halo2 #

Completeness #

Any honest prover can do the computations explained above and create an accepting proof.

Soundness #

We prove knowledge soundness in the Algebraic Group Model (AGM). To do so, we must prove that there exists an efficient extractor E such that for any algebraic adversary A the probability of A winning the following game is negl(λ).

  1. Given $[g, g^\tau, g^{\tau^2}, \dots, g^{\tau^{n-1}}]$, A outputs commitments to Poly_Arr(X), Poly_Arr′(X), Poly_T(X), Poly_T′(X), Q
  2. E, given access to A's outputs from the previous step, outputs Poly_Arr(X), Poly_Arr′(X), Poly_T(X), Poly_T′(X), Q
  3. A plays the part of the prover in showing that YZero is zero at a random challenge ζ
  4. A wins if
    • V accepts at the end of the protocol
    • For some i ∈ [0, κ−1], Arr[i] ∉ T

Our proof is as follows:

To make YZero a zero polynomial, A has to prove the four vanishing polynomials are correct. For PolyVanish1 and PolyVanish2, the permutation check tells us the probability that V accepts is negligible unless Arr′ is a permutation of Arr and T′ is a permutation of T. By the Schwartz-Zippel lemma, we know the probability that PolyVanish3 is vanishing is negligible if Poly_Arr′(ω⁰) ≠ Poly_T′(ω⁰). Therefore, to win the KS game, A has to prove PolyVanish4 is correct while the winning condition holds (some element of Arr, and hence of Arr′, does not appear in T). Assume Arr′[i] ∉ T for some i > 0. For such a PolyVanish4 to exist, Arr′[i] must equal either T′[i] or Arr′[i−1]; since Arr′[i] ∉ T, it must equal Arr′[i−1]. But then Arr′[i−1] ∉ T, so Arr′[i−1] ≠ T′[i−1] and Arr′[i−1] must equal Arr′[i−2]. Repeating this argument, Arr′[0] = Arr′[i] ∉ T must hold, which contradicts the condition enforced by PolyVanish3, namely Arr′[0] = T′[0] ∈ T.

Zero-Knowledge #

Before we prove the above protocol is zero-knowledge, it is worth noting the protocol is different from the lookup argument of halo2 in the real world. Specifically, the real halo2 lookup optimizes the two permutation checks into one and fills the table with some random numbers for the PLONK-based proof system. We refer to the official halo2 handbook for the details. To prove the above protocol is zero-knowledge, we construct a probabilistic polynomial time simulator S which, for every (possibly malicious) verifier V, can output a view of the execution of the protocol that is indistinguishable from the view produced by the real execution of the program.

The simulator S generates an array Arr by randomly filling it with elements from T, then follows the same steps a prover would follow to prove the lookup argument. S computes Arr′, T′, and the accumulator Z, and interpolates these arrays into their respective polynomials: Poly_Arr(X), Poly_Arr′(X), Poly_T(X), Poly_T′(X), and Poly_Z(X). It computes Q(X) and finally outputs the commitments to each of these polynomials (and writes the commitments to the transcript). Then, it generates the random challenge ζ (once again this is by strong Fiat-Shamir). It creates opening proofs for Poly_Z(ζ), Poly_Z(ζω), Poly_Arr(ζ), Poly_Arr′(ζ), Poly_Arr′(ζω⁻¹), Poly_T(ζ), Poly_T′(ζ), and Q(ζ), and writes these to the transcript as well. Since S knows each of the above polynomials, it can honestly compute this step and the proof will be accepted by V. The transcript it generates this way will be indistinguishable from a transcript generated from a real execution, since PolyCommitPed has the property of Indistinguishability of Commitments due to the randomization by $h^{\hat{\phi}(x)}$.

Plookup #

Completeness #

Any honest prover can do the computations explained above and create an accepting proof.

Soundness #

We prove knowledge soundness in the Algebraic Group Model (AGM). To do so, we must prove that there exists an efficient extractor E such that for any algebraic adversary A the probability of A winning the following game is negl(λ).

  1. Given $[g, g^\tau, g^{\tau^2}, \dots, g^{\tau^{n-1}}]$, A outputs commitments to Poly_Arr(X), Poly_T(X), Poly_S(X), Q
  2. E, given access to A's outputs from the previous step, outputs Poly_Arr(X), Poly_T(X), Poly_S(X), Q
  3. A plays the part of the prover in showing that YZero is zero at a random challenge ζ
  4. A wins if
    • V accepts at the end of the protocol
    • For some i ∈ [0, κ−1], Arr[i] ∉ T

Our proof is as follows:

To make YZero a zero polynomial, A has to prove the three vanishing polynomials are correct. It is easy to observe that PolyVanish1 and PolyVanish3 can be constructed even if some elements of Arr do not appear in T, so we focus on PolyVanish2. By the soundness of the product check, we know S must be the multiset union of Arr and T if the product over S is to equal the (randomized) product over Arr and T. Recall the equation A needs to prove:

$(1+\beta)^n \prod_{i<n}\big[\alpha + \mathsf{Poly_{Arr}}(\omega^i)\big]\prod_{i<d-1}\big[\alpha(1+\beta) + \mathsf{Poly_{T}}(\omega^i) + \beta\,\mathsf{Poly_{T}}(\omega^{i+1})\big] = \prod_{i<n+d-1}\big[\alpha(1+\beta) + \mathsf{Poly_{S}}(\omega^i) + \beta\,\mathsf{Poly_{S}}(\omega^{i+1})\big]$

Thus, for each i ∈ [0, d−1), there must exist a factor on the right-hand side of the equation equal to α(1+β) + Poly_T(ω^i) + β·Poly_T(ω^{i+1}); that is, α(1+β) + Poly_T(ω^i) + β·Poly_T(ω^{i+1}) = α(1+β) + Poly_S(ω^j) + β·Poly_S(ω^{j+1}) for some j, from which we get Poly_T(ω^i) = Poly_S(ω^j) and Poly_T(ω^{i+1}) = Poly_S(ω^{j+1}). Similarly, each remaining factor on the right-hand side must match a factor (1+β)[α + Poly_Arr(ω^i)] on the left, which forces Poly_Arr(ω^i) = Poly_S(ω^j) = Poly_S(ω^{j+1}) for some j. In other words, every adjacent pair of S is either an adjacent pair of T or a duplicated value of Arr; since S is sorted by T, every value of Arr must therefore also appear in T, which contradicts the winning assumption. Therefore, the protocol is sound.

Zero-Knowledge #

To prove the above protocol is zero-knowledge, we construct a probabilistic polynomial time simulator S which, for every (possibly malicious) verifier V, can output a view of the execution of the protocol that is indistinguishable from the view produced by the real execution of the program.

The simulator S generates an array Arr by randomly filling it with elements from T, then follows the same steps a prover would follow to prove the lookup argument. S computes S_l, S_h, and Z and interpolates the five arrays into their respective polynomials: Poly_Arr(X), Poly_T(X), Poly_{S_l}(X), Poly_{S_h}(X), and Poly_Z(X). It computes Q(X) and finally outputs the commitments to each of these polynomials (and writes the commitments to the transcript). Then, it generates the random challenge ζ (once again this is by strong Fiat-Shamir). It creates opening proofs for Poly_Arr(ζ), Poly_T(ζ), Poly_T(ζω), Poly_{S_l}(ζ), Poly_{S_l}(ζω), Poly_{S_h}(ζ), Poly_{S_h}(ζω), Q(ζ), Poly_Z(ζ), and Poly_Z(ζω), and writes these to the transcript as well. Since S knows each of the above polynomials, it can honestly compute this step and the proof will be accepted by V. The transcript it generates this way will be indistinguishable from a transcript generated from a real execution, since PolyCommitPed has the property of Indistinguishability of Commitments due to the randomization by $h^{\hat{\phi}(x)}$.