1 Introduction

NIZK in Bilinear Groups. Non-interactive zero-knowledge (NIZK) proofs allow a prover to convince any party of the truth of a statement without revealing any other information. They are a very useful building block in the construction of cryptographic protocols. Since the first pairing-friendly NIZK proof system of Groth, Ostrovsky and Sahai [19], many different constructions have emerged in different models and under different assumptions, for various types of statements. Compared to a plain discrete logarithm setting, bilinear groups have a rich structure which is especially amenable to constructing NIZK proofs.

Among this variety of results, there are three particularly interesting families with different advantages in terms of generality, efficiency or strength of the assumptions. On the one hand, there is a line of research initiated by Groth, Ostrovsky and Sahai [19] and which culminated in the Groth-Sahai proof system [21]. The latter result provides relatively efficient proofs for proving satisfiability of several types of quadratic equations in bilinear groups based on standard assumptions. Although several works have tried to improve the efficiency of Groth-Sahai proofs [8, 30], for many equation types they still remain the best alternative based on falsifiable assumptions.

Another family of results are the constructions of quasi-adaptive NIZK (QA-NIZK) arguments, initiated by Jutla and Roy [22] and leading to very efficient proofs for very concrete statements. Most notably, given a bilinear group \({gk:=(p,{\mathbb {G}}_1,{\mathbb {G}}_2,{\mathbb {G}}_T,e,\mathcal {P}_1,\mathcal {P}_2)}\), proving membership in linear spaces in \({{\mathbb {G}}_1^m}\) or \({{\mathbb {G}}_2^m}\), for some \(m \in \mathbb {N}\), requires only one group element [23, 24]. The power of the quasi-adaptive notion of zero-knowledge allows one to specialize the common reference string to the language one is proving membership in, trading generality for efficiency under very weak computational assumptions. Other works have constructed proofs for different languages in the QA-NIZK setting, such as the proof for bilateral spaces (linear spaces in \({\mathbb {G}}_1^{m} \times {\mathbb {G}}_2^{n}\)) [14] or, beyond linear spaces, for the language of vector commitments to integers opening to a boolean vector [14] and for shuffles and range proofs [15].

Finally, in the last few years, an extremely successful line of research has constructed succinct non-interactive arguments of knowledge (zk-SNARKs) [7, 11, 16, 17, 27] for NP-complete languages, which are not only constant-size (independent of the witness size) but also very efficient in a concrete sense. One of the main downsides of SNARKs is that their security relies on knowledge-of-exponent assumptions, a very strong type of assumption classified as non-falsifiable [29]. However, one cannot achieve succinctness (proofs essentially independent of the size of the statement being proved and its witness) and security based on falsifiable assumptions at the same time, as per the impossibility result of Gentry and Wichs [12].

Commit-and-Prove. In a broad sense, we can think of many of the results in these three families as commit-and-prove schemes [5]. This is very clear for the Groth-Sahai proof system, which has even been recast in the commit-and-prove formalism by Escala and Groth [8]. This is probably less obvious for some results in the QA-NIZK setting. However, as noted already in the first QA-NIZK construction of membership in linear spaces [22], in these cases one can often think of the statement as a commitment to the witness. For instance, in the case of proving that a vector \({\varvec{y}}\) in the exponent is in the linear span of the columns of some matrix \(\mathbf {{A}}\), this means that \({\varvec{y}}=\mathbf {{A}}{\varvec{w}}\), and we can think of \({\varvec{y}}\) as a commitment to \({\varvec{w}}\). Finally, in many SNARK constructions, e.g. [7], the commitment is usually a “knowledge commitment” (from which the witness is extracted in the soundness proof using knowledge assumptions), while the rest can be considered the “proof”.

With this idea in mind, it is interesting to compare these three approaches for constructing proofs of satisfiability of d equations in n variables in bilinear groups in terms of proof size. We observe that for linear equations, while the original Groth-Sahai proof system required O(n) group elements for the “commit” step and O(d) for the “prove” step, recent works have shown how to aggregate the proof in the quasi-adaptive setting [14, 23], reducing the “prove” step to O(1) in many cases. For quadratic equations, on the other hand, we summarize the three different approaches in Table 1.

Table 1. Three different approaches for proving quadratic equations in bilinear groups. For concreteness, assume that one wants to prove that a set of values \(x_1,\ldots ,x_n\) form a bitstring, that is, satisfiability of \(x_i(x_i-1)=0\).

Motivation. Quadratic equations are much more powerful than linear ones. In particular, they allow one to prove boolean Circuit Sat, but they are also important for proving other statements such as range proofs, shuffle proofs or the validity of an encrypted vote. While for proving statements about large circuits non-falsifiable assumptions are necessary to get around impossibility results, it would be desirable to eliminate them in less demanding settings, so as to better understand what the security claims mean in a concrete sense. As in the QA-NIZK arguments for linear spaces, there are even natural situations in which the statement is already “an encrypted witness”, and it seems unnatural to use the full power of knowledge-of-exponent assumptions in these cases (for instance, in the case of vote validity).

In summary, it is worth investigating efficiency improvements for quadratic equations under falsifiable assumptions. In particular, aggregating the “prove” step would be an important step towards this goal. The techniques for the linear case do not apply to the quadratic one, and we are only aware of one result on aggregating proofs of quadratic equations, namely the bitstring argument of González et al. [14] for proving that a set of commitments to integers opens to boolean values. There is a large concrete gap between this result and the others in the non-falsifiable setting, both in terms of the size of the proof and of the common reference string. Thus, it is natural to ask whether it is possible to reduce this gap and improve on this result by importing techniques from SNARKs into the falsifiable setting.

1.1 Our Results

We introduce new techniques to aggregate proofs of quadratic equations. First, in Sect. 3.1, we construct a proof system for proving that d equations of the type \(X_i(X_i-2)=0\) are satisfied, where \(X_i\) is an affine combination of some variables \(a_1, \ldots , a_n\). The proof is of constant size, the set of commitments to the variables is of size linear in n, and the size of the CRS is linear in d. The prover computes a number of exponentiations linear in \(n+d\), while the verifier computes a number of pairings linear in d. Our proof system is perfect zero-knowledge and computationally sound under a variant of the so-called target strong Diffie-Hellman assumption. These assumptions belong to the broader class of q-assumptions, where each instance of the problem is of size proportional to some integer q, which in our case is the number of equations. In particular, the bitstring language of [14] can be formulated as such a system of equations. In Sect. 3.2 we discuss as a particular case an argument for unit vectors, and argue how to modify our general proof system so that it can be proven sound under static assumptions (the full details are in the full version). A typical application of membership in these languages is computing disjunctions of statements such as “the committed verification key is equal to \(\mathcal {V}_1\), or \(\mathcal {V}_2\), ..., or \(\mathcal {V}_m\)”, which might be expressed as \(vk = \sum _{i=1}^m b_i \mathcal {V}_i\), where \(b_i\in \{0,1\}\) and \((b_1,\ldots ,b_m)\) is a unit vector.

Next, in Sect. 4, we generalize the previous argument to prove that d equations of the type \((X_i-z_1)(X_i-z_2)\cdots (X_i-z_m)=0\) are satisfied, where \(X_i\) is an affine combination of the variables \(a_1, \ldots , a_n\). For this we combine techniques from the interactive setting of [4], for proving membership in a set of m elements of \(\mathbb {Z}_p\), with ideas from Sect. 3.1 and from quasi-adaptive aggregation [23]. In the full version, we illustrate how to use this for improved range proofs in bilinear groups under falsifiable assumptions.

Finally, in Sect. 5 we discuss two approaches to construct shuffle arguments. They are the most efficient in terms of proof size in the common reference string model under falsifiable assumptions in bilinear groups (comparing favorably even to the best constructions in the generic bilinear group model [10]), but they have large public parameters (quadratic in the shuffle size) (Tables 2 and 3).

Table 2. The table shows the proof sizes (not including commitments) and CRS sizes of our constructions. We consider n variables and d equations, and m is the size of the set from the set membership proof. The Assumptions 6, 7 and 8 are new.
Table 3. Comparison of our shuffle arguments with state-of-the-art arguments. Note that PPA stands for the Pairing Permutation Assumption and SPA for the Simultaneous Pairing Assumption.

1.2 Our Techniques

Let \({\mathbb {G}}_1,{\mathbb {G}}_2,{\mathbb {G}}_T\) be groups of prime order p and let \(e:{\mathbb {G}}_1\times {\mathbb {G}}_2\rightarrow {\mathbb {G}}_T\) be a bilinear map. Both SNARKs and our schemes can be seen as “commit-and-prove” schemes [8]: in the first step we commit to the solution of the equations. In the case of SNARKs, a knowledge assumption allows the solutions to be extracted from a constant-size commitment during the soundness proof; since we are trying to avoid such assumptions, we instead require perfectly binding commitments to each element of the solution. The second step is a proof that the openings of the commitments satisfy the equations.

Let \(r_1,\ldots ,r_d\in \mathbb {Z}_p\) be pairwise different points. The “prove” part is handled with a polynomial aggregation technique in which satisfiability of a set of d equations is encoded into a polynomial p(X) such that \(p(r_j)=0\) if and only if the jth equation is satisfied. To prove that all d equations are satisfied, one proves that p(X) is divisible by \(\prod _{j=1}^{d} (X-r_j)\). The key to succinctness is that the divisibility condition is only checked at a secret point s chosen by the trusted party who generates the CRS. This preserves soundness as long as the prover knows the powers of s only as elements of \({\mathbb {G}}_{1}\) or \({\mathbb {G}}_2\), and never s itself.
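As a toy illustration of this divisibility-based aggregation (a sketch with a made-up small prime, interpolation points and quotient, not part of the actual scheme), the following Python snippet builds \(\prod _j(X-r_j)\), checks that it divides a p(X) vanishing at every \(r_j\), and verifies the quotient relation at a single point s standing in for the CRS secret:

```python
# Toy model of the aggregation step: satisfiability of all d equations is
# equivalent to t(X) = prod_j (X - r_j) dividing p(X); succinct verification
# only checks p(s) = t(s)h(s) at one (secret) point s.
P = 101  # toy prime; a real scheme uses a group of ~256-bit prime order

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_divmod(num, den):
    # schoolbook long division over Z_P; coefficients are stored low -> high
    num, inv = num[:], pow(den[-1], -1, P)
    quot = [0] * (len(num) - len(den) + 1)
    for i in reversed(range(len(quot))):
        c = quot[i] = num[i + len(den) - 1] * inv % P
        for j, dj in enumerate(den):
            num[i + j] = (num[i + j] - c * dj) % P
    return quot, num  # num now holds the remainder

def ev(a, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(a)) % P

r = [3, 7, 12]                     # one interpolation point per equation
t = [1]
for rj in r:
    t = poly_mul(t, [(-rj) % P, 1])

p = poly_mul(t, [5, 1])            # a p(X) vanishing at every r_j
h, rem = poly_divmod(p, t)
assert all(c == 0 for c in rem)    # t(X) | p(X): all equations "hold"
s = 42                             # stand-in for the CRS secret point
assert ev(p, s) == ev(t, s) * ev(h, s) % P
```

In the real scheme the last check is carried out inside the groups via the pairing, so the verifier never sees s itself.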

In the soundness proof of these SNARKs, the witness is extracted from the knowledge commitment and then used to find some \(r_j\) such that \(p(r_j) \ne 0\), and to compute auxiliary information which, together with the proof, allows one to break a hard problem, e.g. the q-Target Strong Diffie-Hellman Assumption in [7]. Without non-falsifiable assumptions, the commitments, even if perfectly binding, can only be opened in the source groups, and not in \(\mathbb {Z}_p\). This has an impact on the soundness proof: it is no longer possible to eliminate some terms in the proof to find a solution to the q-TSDH problem, so we need to consider a more flexible assumption. Furthermore, since the solutions define the coefficients of the polynomial p(X), our access to this polynomial is much more limited.

For our set-membership proof we start from the following insight: satisfiability of the equation \(b(b-1)=0\) can be proven by showing knowledge of a signature on b, when only signatures on 0 and 1 are available. This approach easily extends to larger sets of solutions, as done by Camenisch et al. [4]. To express the validity of many signature and message pairs, we again encode the signature verification equations as a problem of divisibility of polynomials.

This requires the signature verification to be expressible as a set of quadratic equations. While structure-preserving signatures clearly solve this problem, they are overkill, since we only need unforgeability against static queries. Further, even the generic-group construction of [17] requires at least 3 group elements. We choose basic Boneh-Boyen signatures, since each signature consists of a single group element. Our argument needs to solve other technical difficulties, which are explained in more detail in Sect. 4.

1.3 Related Works

The recent line of research in SNARKs started with [16], in which the first sub-linear arguments without random oracles were presented, but with CRS of quadratic size. Subsequent works have defined alternative models for the encoding of the circuit [7, 11, 17, 26], reducing the CRS size to linear and obtaining smaller proofs, going as small as 3 group elements in the case of [17]. In particular, our encodings are based on those of [7, 11].

When considering falsifiable assumptions, one classic way to prove quadratic equations in the non-interactive setting makes use of Groth-Sahai proofs [20], which are quite efficient and can be aggregated to obtain a constant-size proof of many equations.

In this work, we also use techniques from QA-NIZK proofs. This model was introduced in [22] to build proofs of membership in linear subspaces over \({\mathbb {G}}_1\) or \({\mathbb {G}}_2\). It was later improved to make proofs constant-size (independent of the size of the witness) [23,24,25] and adapted to the asymmetric setting [14]. Although introduced initially to build proofs of linear equations, the QA-NIZK setting has also been used to build the first constant-size aggregated proofs of some quadratic equations under standard assumptions [14], in particular the proof that a set of commitments opens to bits.

The usage of signatures for proving membership in a set dates back to the work of Camenisch et al. [4] in the interactive setting, and to Rial et al. [31] in the non-interactive setting. Both works achieve constant-size proofs but without aggregation (i.e. proving n instances requires O(n) communication). Set membership proofs were also recently investigated by Bootle and Groth [3] in the interactive setting. They construct proofs logarithmic in the size of the set and aggregate n instances with a multiplicative overhead of \(O(\sqrt{n})\). In the non-interactive setting, González et al. constructed set membership proofs of size linear in the size of the set, and aggregated many instances without any overhead [15].

1.4 Organization

In Sect. 2 we establish the assumptions required for our proofs, present the relevant security definitions and recall the schemes that we will make use of, namely ElGamal encryption, Boneh-Boyen signatures, Groth-Sahai proofs and proofs of membership in linear spaces. In Sect. 3, we present our proof system for satisfiability of quadratic equations. In Sect. 4 we present an aggregated argument to prove membership in a set of \(\mathbb {Z}_p\). In Sect. 5 we discuss new approaches to construct shuffle arguments. In the full version, we give an argument to prove that a commitment opens to a unit vector which can be proven secure based on a static assumption. We also discuss the application of the set membership argument in \(\mathbb {Z}_p\) to range proofs.

2 Preliminaries

2.1 Bilinear Groups and Implicit Notation

Let \(\mathcal {G}\) be some probabilistic polynomial time algorithm which on input \(1^{\lambda }\), where \(\lambda \) is the security parameter, returns the group key which is the description of an asymmetric bilinear group \({gk:=(p,{\mathbb {G}}_1,{\mathbb {G}}_2,{\mathbb {G}}_T,e,\mathcal {P}_1,\mathcal {P}_2)}\), where \({{\mathbb {G}}_1,{\mathbb {G}}_2}\) and \({{\mathbb {G}}_T}\) are additive groups of prime order p, the elements \(\mathcal {P}_1, \mathcal {P}_2\) are generators of \({{\mathbb {G}}_1,{\mathbb {G}}_2}\) respectively, \({e:{\mathbb {G}}_1\times {\mathbb {G}}_2\rightarrow {\mathbb {G}}_T}\) is an efficiently computable, non-degenerate bilinear map, and there is no efficiently computable isomorphism between \({\mathbb {G}}_1\) and \({\mathbb {G}}_2\).

Elements in \({{\mathbb {G}}_{\gamma }}\) are denoted implicitly as \([a]_\gamma :=a \mathcal {P}_\gamma \), where \(\gamma \in \{1,2,T\}\) and \(\mathcal {P}_T:=e(\mathcal {P}_1,\mathcal {P}_2)\). For simplicity, we often write \([a]_{1,2}\) for the pair \([a]_1,[a]_2\). The pairing operation will be written as a product \(\cdot \), that is \([a]_1 \cdot [b]_2=[a]_1 [b]_2=e([a]_1,[b]_2)=[ab]_T\). Vectors and matrices are denoted in boldface. Given a matrix \(\mathbf {{T}}=(t_{i,j})\), \([\mathbf {{T}}]_\gamma \) is the natural embedding of \(\mathbf {{T}}\) in \({{\mathbb {G}}_{\gamma }}\), that is, the matrix whose (i, j)th entry is \(t_{i,j}\mathcal {P}_\gamma \). We denote by \({|{\mathbb {G}}_{\gamma }|}\) the bit-size of the elements of \({{\mathbb {G}}_{\gamma }}\).

\(\mathbf {{I}}_{n}\) refers to the identity matrix in \(\mathbb {Z}_p^{n\times n}\), \(\mathbf {{0}}_{m\times n}\) refers to the all-zero matrix in \(\mathbb {Z}_p^{m\times n}\), and \({\varvec{e}}^{n}_i\) to the ith element of the canonical basis of \(\mathbb {Z}_p^{n}\) (simply \(\mathbf {{I}}\), \(\mathbf {{0}}\), and \({\varvec{e}}_i\), respectively, if n, m are clear from the context).

Given a set \(\mathcal {R}=\{r_1,\ldots ,r_d\} \subset \mathbb {Z}_p\), we denote by \(\ell _i(X)=\prod _{j \ne i} \dfrac{(X-r_j)}{(r_i-r_j)}\) the ith Lagrange interpolation polynomial associated to \(\mathcal {R}\).
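A minimal numeric sketch of this definition (over a toy prime, with made-up points; purely illustrative) checks that \(\ell _i(r_i)=1\), that \(\ell _i(r_j)=0\) for \(j\ne i\), and that interpolation through prescribed values reproduces them:

```python
# Toy check of the Lagrange basis over Z_P: ell_i(r_i) = 1 and ell_i(r_j) = 0
# for j != i, so a sum of bases interpolates any values at r_1, ..., r_d.
P = 101  # small prime for illustration

def lagrange_basis(points, i):
    """Return a closure evaluating ell_i(X) = prod_{j != i} (X - r_j)/(r_i - r_j)."""
    def ell(x):
        num, den = 1, 1
        for j, rj in enumerate(points):
            if j != i:
                num = num * (x - rj) % P
                den = den * (points[i] - rj) % P
        return num * pow(den, -1, P) % P
    return ell

r = [3, 7, 12]
ell0 = lagrange_basis(r, 0)
assert ell0(3) == 1 and ell0(7) == 0 and ell0(12) == 0

# interpolating the values (5, 9, 2) at the points of r:
vals = [5, 9, 2]
f = lambda x: sum(v * lagrange_basis(r, i)(x) for i, v in enumerate(vals)) % P
assert [f(x) for x in r] == vals
```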

2.2 Hardness Assumptions

Definition 1

Let \(\ell ,k \in \mathbb {N}\). We call \(\mathcal {D}_{\ell ,k}\) a matrix distribution if it outputs (in PPT time, with overwhelming probability) matrices in \(\mathbb {Z}_p^{\ell \times k}\). We define \(\mathcal {D}_k := \mathcal {D}_{k+1,k}\).

The following applies for \({\mathbb {G}}_{\gamma }\), where \(\gamma \in \{1,2\}\).

Assumption 1

(Matrix Decisional Diffie-Hellman Assumption in \({{\mathbb {G}}_{\gamma }}\), \(\mathcal {D}_{\ell ,k}\)-\(\mathsf {MDDH}\) [9]). For all non-uniform PPT adversaries \(\mathcal {A}\),

$$\begin{aligned} \left| \Pr \left[ \mathcal {A}(gk,[\mathbf {{A}}]_{\gamma },[\mathbf {{A}}{\varvec{w}}]_{\gamma })=1\right] -\Pr \left[ \mathcal {A}(gk,[\mathbf {{A}}]_{\gamma },[{\varvec{z}}]_{\gamma })=1\right] \right| \approx 0, \end{aligned}$$

where the probability is taken over \( gk \leftarrow \mathcal {G}(1^\lambda )\), \(\mathbf {{A}} \leftarrow \mathcal {D}_{\ell ,k}\), \({\varvec{w}}\leftarrow \mathbb {Z}_p^{k}\), \({\varvec{z}}\leftarrow \mathbb {Z}_p^{\ell }\), and the coin tosses of adversary \(\mathcal {A}\).

Intuitively, the \(\mathcal {D}_{\ell ,k}\)-MDDH assumption means that it is hard to decide whether a vector is in the image space of a matrix drawn from \(\mathcal {D}_{\ell ,k}\) or is a uniformly random vector. In this paper we will refer to the following matrix distributions:

$$ \mathcal {L}_{k}:\mathbf {{A}}= \left( {\begin{matrix} a_1 & 0 & \ldots & 0 \\ 0 & a_2 & \ldots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & a_{k}\\ 1 & 1 & \ldots & 1 \end{matrix}} \right) , \qquad \mathcal {U}_{\ell ,k}: \mathbf {{A}}= \left( {\begin{matrix} a_{1,1} & \ldots & a_{1,k} \\ \vdots & \ddots & \vdots \\ a_{\ell ,1} & \ldots & a_{\ell ,k} \end{matrix}} \right) , $$

where \(a_i,a_{i,j}\leftarrow \mathbb {Z}_p\). The \(\mathcal {L}_{k}\)-\(\mathsf {MDDH}\) Assumption is the k-linear family of Decisional Assumptions and corresponds to the Decisional Diffie-Hellman (DDH) Assumption in \({{\mathbb {G}}_{\gamma }}\) when \(k=1\). The SXDH Assumption states that DDH holds in \({{\mathbb {G}}_{\gamma }}\) for all \(\gamma \in \{1,2\}\). The \(\mathcal {U}_{\ell ,k}\)-MDDH assumption is the Uniform Assumption and is the weakest of all matrix assumptions of size \(\ell \times k\).

Additionally, we will be using the following family of computational assumptions:

Assumption 2

(Kernel Diffie-Hellman Assumption in \({{\mathbb {G}}_{\gamma }}\) [28]). For all non-uniform PPT adversaries \(\mathcal {A}\):

$$\begin{aligned} \Pr \left[ [{\varvec{x}}]_{3-\gamma }\leftarrow \mathcal {A}(gk,[\mathbf {{A}}]_\gamma ):{\varvec{x}}\ne 0\wedge {\varvec{x}}^\top \mathbf {{A}}={\varvec{0}}\right] \approx 0, \end{aligned}$$

where the probability is taken over \( gk \leftarrow \mathcal {G}(1^\lambda )\), \(\mathbf {{A}} \leftarrow \mathcal {D}_{\ell ,k}\) and the coin tosses of adversary \(\mathcal {A}\).

The \(\mathcal {D}_{\ell ,k}\)-\(\mathsf {KerMDH}\) Assumption is not stronger than the \(\mathcal {D}_{\ell ,k}\)-\(\mathsf {MDDH}\) Assumption, since a solution to the former allows one to decide membership in \(\mathbf {Im}([\mathbf {{A}}]_{\gamma })\). In asymmetric bilinear groups, there is a natural variant of this assumption.

Assumption 3

(Split Kernel Diffie-Hellman Assumption [14]). For all non-uniform PPT adversaries \(\mathcal {A}\):

$$\begin{aligned} \Pr \left[ [{\varvec{r}}]_1,[{\varvec{s}}]_2\leftarrow \mathcal {A}(gk,[\mathbf {{A}}]_{1,2}):{\varvec{r}}\ne {\varvec{s}}\wedge {\varvec{r}}^\top \mathbf {{A}}={\varvec{s}}^\top \mathbf {{A}}\right] \approx 0, \end{aligned}$$

where the probability is taken over \( gk \leftarrow \mathcal {G}(1^\lambda )\), \(\mathbf {{A}} \leftarrow \mathcal {D}_{\ell ,k}\) and the coin tosses of adversary \(\mathcal {A}\).

While the Kernel Diffie-Hellman Assumption says one cannot find a non-zero vector in one of the groups which is in the co-kernel of \(\mathbf {{A}}\), the split assumption says one cannot find different vectors in \({{\mathbb {G}}_1^{\ell } \times {\mathbb {G}}_2^{\ell }}\) such that the difference of the vector of their discrete logarithms is in the co-kernel of \(\mathbf {{A}}\). As a particular case, [14] considers the Split Simultaneous Double Pairing Assumption in \({{\mathbb {G}}_1,{\mathbb {G}}_2}\) (\(\mathsf {SSDP}\)), which is the \(\mathcal {RL}_{2}\text {-}\mathsf {SKerMDH}\) Assumption, where \(\mathcal {RL}_{2}\) is the distribution which results from sampling a matrix from \(\mathcal {L}_{2}\) and replacing the last row by random elements.

\({{\varvec{q}}}\)-Assumptions. We first recall the q-Strong Diffie-Hellman and q-Target Strong Diffie-Hellman assumptions, which essentially state that inversion in the exponent is hard, even when given q powers of the element to invert.

Assumption 4

(\(\varvec{q}\)-Strong Diffie-Hellman Assumption in \({\mathbb {G}}_\gamma \), \(\varvec{q}\)-SDH [2]). For all non-uniform PPT adversaries \(\mathcal {A}\):

$$\begin{aligned} \Pr \left[ \left( r,\left[ \nu \right] _\gamma \right) \leftarrow \mathcal {A}(gk,\{[s^i]_\gamma \}_{i=1}^q):\nu =\frac{1}{s-r}\right] \approx 0, \end{aligned}$$

where the probability is taken over \( gk \leftarrow \mathcal {G}(1^\lambda )\), \(s\leftarrow \mathbb {Z}_p\) and the coin tosses of adversary \(\mathcal {A}\).

Assumption 5

(\(\varvec{q}\)-Target Strong Diffie-Hellman Assumption, \(\varvec{q}\)-TSDH [1]). For all non-uniform PPT adversaries \(\mathcal {A}\):

$$\Pr \left[ \left( r,\left[ \nu \right] _T\right) \leftarrow \mathcal {A}(gk,\{[s^i]_{1,2}\}_{i=1}^q): \nu =\frac{1}{s-r} \right] \approx 0,$$

where the probability is taken over \( gk \leftarrow \mathcal {G}(1^\lambda )\), \(s\leftarrow \mathbb {Z}_p\) and the coin tosses of adversary \(\mathcal {A}\).

The soundness proofs of our schemes will rely on the following variations of the two assumptions above.

Assumption 6

(\(\varvec{\mathcal {Z}}\)-Group Strong DH Assumption in \({\mathbb {G}}_{\gamma }\), \(\varvec{\mathcal {Z}}\)-GSDH). Let \(\mathcal {Z}\subset \mathbb {Z}_p\) such that \(\#\mathcal {Z}=q\). For all non-uniform PPT adversaries \(\mathcal {A}\):

$$\Pr \left[ \left( [z_1]_1,[z_2]_\gamma , [\nu ]_2\right) \leftarrow \mathcal {A}\left( gk,\mathcal {Z},[\varepsilon ]_{1,2},\{[s^i]_{1,2}\}_{i=1}^q\right) : \begin{array}{c}z_1\not \in \mathcal {Z}\wedge z_2=\varepsilon z_1\\ \nu =\frac{\prod _{z\in \mathcal {Z}}(s-z)}{s-z_1}\end{array}\right] \approx 0,$$

where the probability is taken over \( gk \leftarrow \mathcal {G}(1^\lambda )\), \(s,\varepsilon \leftarrow \mathbb {Z}_p\) and the coin tosses of adversary \(\mathcal {A}\).

The name is motivated by the fact that it is a variant of the q-SDH Assumption in which the adversary need only give \([z_1]_1\) in the group \({\mathbb {G}}_1\), instead of giving \(z_1\) in \(\mathbb {Z}_p\) as in the q-SDH Assumption.

Assumption 7

(\(\varvec{q}\)-Square TSDH Assumption, \(\varvec{q}\)-STSDH). For all non-uniform PPT adversaries \(\mathcal {A}\):

$$\Pr \left[ \left( r,[\beta _1]_1, [\beta _2]_2,[\nu ]_T\right) \leftarrow \mathcal {A}\left( gk,[\varepsilon ]_2,\{[s^i]_{1,2}\}_{i=1}^q\right) :\begin{array}{c} \beta _1\ne \pm 1\\ \beta _2=\varepsilon \beta _1\wedge \nu =\frac{\beta _1^2-1}{s-r} \end{array}\right] \approx 0,$$

where the probability is taken over \( gk \leftarrow \mathcal {G}(1^\lambda )\), \(s,\varepsilon \leftarrow \mathbb {Z}_p\) and the coin tosses of adversary \(\mathcal {A}\).

Note that the challenger knows \(\varepsilon , s\), so this assumption is falsifiable. Indeed, upon receiving \((r,[\beta _1]_1,[\beta _2]_2,[\nu ]_T)\), the challenger verifies that \([\beta _1]_1\ne [\pm 1]_1\), that \(e([1]_1,[\beta _2]_2)=e(\varepsilon [\beta _1]_1,[1]_2)\), and that \(\varepsilon (s-r)[\nu ]_T=e([\beta _1]_1,[\beta _2]_2)-e([\varepsilon ]_1,[1]_2)\). A similar argument can be made for the other assumptions in this section.
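The algebraic identity underlying the challenger's last check can be sanity-checked in a toy model that represents each group element by its discrete logarithm, so that the pairing becomes multiplication mod a small prime (all constants below are made up; this verifies only the arithmetic, not the assumption itself):

```python
# Dlog-model check of the q-STSDH verification identity: with
# beta2 = eps*beta1 and nu = (beta1^2 - 1)/(s - r), we should have
# eps*(s - r)*nu = beta1*beta2 - eps, which mirrors the pairing equation
# eps*(s-r)*[nu]_T = e([beta1]_1,[beta2]_2) - e([eps]_1,[1]_2).
P = 101
s, eps, r = 42, 17, 9           # toy CRS secret, toy epsilon, adversary's r
beta1 = 5
beta2 = eps * beta1 % P
nu = (beta1 * beta1 - 1) * pow(s - r, -1, P) % P
assert eps * (s - r) * nu % P == (beta1 * beta2 - eps) % P
```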

Assumption 8

(\(\varvec{q}\)-Quadratic TSDH Assumption, \(\varvec{q}\)-QTSDH). For all non-uniform PPT adversaries \(\mathcal {A}\):

$$\Pr \left[ \begin{array}{c} \left( r,[\beta _1]_1, [\beta _2]_1,[\tilde{\beta }_1]_2,[\tilde{\beta }_2]_2,[\nu ]_T\right) \leftarrow \mathcal {A}\left( gk, [\varepsilon ]_{1,2},\{[s^i]_{1,2}\}_{i=1}^q\right) :\\ \begin{array}{c} \beta _1\tilde{\beta }_1\ne 1\\ \beta _2=\varepsilon \beta _1\wedge \tilde{\beta }_2=\varepsilon \tilde{\beta }_1\wedge \nu =\frac{\beta _1\tilde{\beta }_1-1}{s-r} \end{array} \end{array}\right] \approx 0,$$

where the probability is taken over \( gk \leftarrow \mathcal {G}(1^\lambda )\), \(s,\varepsilon \leftarrow \mathbb {Z}_p\) and the coin tosses of adversary \(\mathcal {A}\).

2.3 Building Blocks

ElGamal Encryption. We denote by \(\mathsf {Enc}_{[\mathsf {sk}]}(\mathsf {m},r)\) the lifted ElGamal encryption of message m with randomness r and public key \([\mathsf {sk}]\). Using implicit group notation, ElGamal encryption is as follows:

$$\left[ \begin{array}{c}c_1\\ c_2 \end{array}\right] =\mathsf {Enc}_{[\mathsf {sk}]}(\mathsf {m},r)=\mathsf {m}[{\varvec{e}}_2]+r\left[ \begin{array}{c} 1\\ \mathsf {sk}\end{array}\right] ,$$

where if one knows the secret key \(\mathsf {sk}\) in \(\mathbb {Z}_p\), then one can recover the message in \({\mathbb {G}}\) by computing \([c_2]-\mathsf {sk}[c_1]=[\mathsf {m}]\). ElGamal encryption is semantically secure under the DDH assumption. It can be seen as a commitment scheme, in which case it is perfectly binding and computationally hiding under the DDH assumption, and in fact this is how we will use it in our schemes.
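The scheme above can be sketched in a toy multiplicative subgroup (the parameters Q = 467 and the generator 4 of its order-233 subgroup are made-up illustration values; real instantiations use an elliptic-curve group of large prime order):

```python
# Toy lifted ElGamal: c1 = [r], c2 = [m + sk*r] in multiplicative notation.
# Decryption only recovers [m]; for small messages the dlog is brute-forced.
import secrets

Q, ORD, G = 467, 233, 4   # G generates the subgroup of prime order 233

def keygen():
    sk = 1 + secrets.randbelow(ORD - 1)
    return sk, pow(G, sk, Q)                  # (sk, pk = [sk])

def enc(pk, m, r):
    return pow(G, r, Q), pow(G, m, Q) * pow(pk, r, Q) % Q

def dec(sk, c1, c2):
    gm = c2 * pow(c1, -sk, Q) % Q             # [c2] - sk*[c1] = [m]
    return next(m for m in range(ORD) if pow(G, m, Q) == gm)

sk, pk = keygen()
c1, c2 = enc(pk, 5, secrets.randbelow(ORD))
assert dec(sk, c1, c2) == 5
```

Viewed as a commitment, (c1, c2) perfectly binds m (it determines m uniquely), while hiding rests on DDH in the subgroup.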

Boneh-Boyen Signatures [2]. We briefly recall Boneh-Boyen signatures. Let \({\mathbb {G}}_1,{\mathbb {G}}_2,{\mathbb {G}}_T,e:{\mathbb {G}}_1\times {\mathbb {G}}_2\rightarrow {\mathbb {G}}_T\) be a bilinear group. Messages are elements of \(\mathbb {Z}_p\), and signatures are elements of \({\mathbb {G}}_2\). The secret key is \(\mathsf {sk}\in \mathbb {Z}_p\), and the public key (verification key) is \([\mathsf {sk}]_1\in {\mathbb {G}}_1\). To sign a message \(x\in \mathbb {Z}_p\), the signer computes

$$\begin{aligned}{}[\sigma ]_2=\left[ \frac{1}{\mathsf {sk}-x}\right] _2. \end{aligned}$$

The verifier accepts the signature if the equation \(e([\mathsf {sk}]_1-[x]_1,[\sigma ]_2)=[1]_T\) holds. Boneh-Boyen signatures are existentially unforgeable under the q-SDH assumption.
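The verification equation can be sanity-checked in a toy model where each group element is represented by its discrete logarithm, so the pairing \(e([a]_1,[b]_2)=[ab]_T\) becomes multiplication mod a small prime (the key and message values are made up; this is an algebraic check, not an implementation):

```python
# Dlog-model Boneh-Boyen: sigma is the dlog of [1/(sk - x)]_2, and
# verification e([sk]_1 - [x]_1, [sigma]_2) = [1]_T becomes (sk - x)*sigma = 1.
P = 101

def sign(sk, x):
    return pow(sk - x, -1, P)          # dlog of [1/(sk - x)]_2

def verify(vk, x, sigma):
    return (vk - x) * sigma % P == 1   # e([sk - x]_1, [sigma]_2) == [1]_T

sk = 37
vk = sk                                # dlog of the verification key [sk]_1
sigma = sign(sk, 12)
assert verify(vk, 12, sigma)
assert not verify(vk, 13, sigma)       # the signature is bound to its message
```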

Dual-Mode Commitments and Groth-Sahai Proofs [20]. Groth-Sahai proofs allow one to prove satisfiability of quadratic equations in bilinear groups in the non-interactive setting. More precisely, Groth-Sahai proofs deal with equations of the form

$$\begin{aligned} \sum _{j=1}^{m_y} a_j \mathsf {y}_j + \sum _{i=1}^{m_x} b_i \mathsf {x}_i + \sum _{i,j=1}^{m_x,m_y} \gamma _{i,j} \mathsf {x}_i \mathsf {y}_j = t, \end{aligned}$$

in which the set of variables is divided into two disjoint subsets \(\mathsf {X}=\{\mathsf {x}_1,\ldots ,\mathsf {x}_{m_x}\}\) and \(\mathsf {Y}=\{\mathsf {y}_1,\ldots ,\mathsf {y}_{m_y}\}\), and depending on the type of equation \(\mathsf {X},\mathsf {Y}\subset \mathbb {Z}_p\) (quadratic equations in \(\mathbb {Z}_p\)), \(\mathsf {X}\subset \mathbb {Z}_p,\mathsf {Y}\subset {\mathbb {G}}_\gamma \) (multi-exponentiation equations in \({\mathbb {G}}_\gamma \)) for \(\gamma \in \{1,2\}\) or \(\mathsf {X}\subset {\mathbb {G}}_1\) and \(\mathsf {Y}\subset {\mathbb {G}}_2\) (pairing product equations). Here the product means a bilinear operation which is multiplication in \(\mathbb {Z}_p\), exponentiation or the pairing operation.

The scheme can be seen as a commit-and-prove scheme [8], where in the first step the prover gives commitments to the solutions, and in the second provides a proof that these commitments verify the corresponding equation. In particular, the commitments used are dual-mode commitments, that is, commitments that can be either perfectly binding or perfectly hiding, and we can switch from one to the other with an indistinguishable change of security game. More precisely, Groth-Sahai commitments to field elements \(z\in \mathbb {Z}_p\) and group elements \([z]\in {\mathbb {G}}\) are, respectively:

$$\begin{aligned} \mathsf {Com}(z;w)=z\left[ {\varvec{u}}\right] +w[{\varvec{u}}_1],\quad \mathsf {Com}([z];w_1,w_2)=\left[ \begin{array}{c}0\\ z\end{array}\right] +w_1[{\varvec{u}}_1]+w_2[{\varvec{u}}_2], \end{aligned}$$

where \([{\varvec{u}}],[{\varvec{u}}_1],[{\varvec{u}}_2]\) are vectors in \({\mathbb {G}}^2\) given in the commitment key, and their definitions depend on whether we want the commitments to be perfectly binding or perfectly hiding.
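The dual-mode behaviour of these commitments can be sketched in a toy discrete-log model, where length-2 lists stand for the dlogs of vectors in \({\mathbb {G}}^2\) (the trapdoor values are made up; one standard choice, assumed here, is hiding keys with \({\varvec{u}}=t{\varvec{u}}_1\) and binding keys with \({\varvec{u}}=t{\varvec{u}}_1+(0,1)^\top \)):

```python
# Toy dual-mode commitment over Z_P: in hiding mode u = t*u1, so
# Com(z; w) = (z*t + w)*u1 and any z' can explain the commitment; in binding
# mode u = t*u1 + (0,1), so z is recoverable from the trapdoor alpha.
P = 101

def com(z, w, u, u1):
    return [(z * u[i] + w * u1[i]) % P for i in range(2)]

alpha, t = 30, 7
u1 = [1, alpha]

# hiding mode: equivocate a commitment to z = 5 into an opening for z' = 9
u_hide = [t * x % P for x in u1]
c = com(5, 11, u_hide, u1)
w_equiv = (5 * t + 11 - 9 * t) % P
assert com(9, w_equiv, u_hide, u1) == c

# binding mode: Com(z; w) = (z*t + w)*u1 + z*(0,1) determines z uniquely
u_bind = [t * u1[0] % P, (t * u1[1] + 1) % P]
c0 = com(5, 11, u_bind, u1)
assert (c0[1] - alpha * c0[0]) % P == 5
```

Switching between the two key types is exactly the indistinguishable game change mentioned above: under DDH the two commitment keys cannot be told apart.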

Groth-Sahai proofs are sound, witness-indistinguishable and, in many cases, zero-knowledge. More precisely, the proof is always zero-knowledge for quadratic equations in \(\mathbb {Z}_p\) and multi-exponentiation equations, and also for pairing product equations provided that \(t=1\).

QA-NIZK Arguments of Membership in Linear Spaces [22]. We describe some languages for which there exist constant-size QA-NIZK arguments of membership which will be used as building blocks in our constructions. These languages are (i) linear subspaces of \({{\mathbb {G}}_{\gamma }^m}\), \(\gamma \in \{1,2\}\) [23, 24], and (ii) bilateral linear subspaces, that is, linear subspaces of \({{\mathbb {G}}_1^m\times {\mathbb {G}}_2^n}\) [14]. For \(\gamma \in \{1,2\}\) and matrices \(\mathbf {{M}}\in \mathbb {Z}_p^{m\times t}\), \(\mathbf {{N}}\in \mathbb {Z}_p^{n\times t}\), these languages are

$$\mathcal {L}_{[\mathbf {{M}}]_\gamma }=\left\{ [{\varvec{x}}]_\gamma \in {\mathbb {G}}_{\gamma }^{m}: \exists {\varvec{w}}\in \mathbb {Z}_p^{t} \text { s.t. } {\varvec{x}}=\mathbf {{M}}{\varvec{w}}\right\} ,$$

$$\mathcal {L}_{[\mathbf {{M}}]_1,[\mathbf {{N}}]_2}=\left\{ ([{\varvec{x}}]_1,[{\varvec{y}}]_2)\in {\mathbb {G}}_1^{m}\times {\mathbb {G}}_2^{n}: \exists {\varvec{w}}\in \mathbb {Z}_p^{t} \text { s.t. } {\varvec{x}}=\mathbf {{M}}{\varvec{w}} \text { and } {\varvec{y}}=\mathbf {{N}}{\varvec{w}}\right\} .$$

We use \(\mathsf {LS}\) (resp. \(\mathsf {BLS}\)) to designate linear (resp. bilateral linear) subspace proof systems for the languages \(\mathcal {L}_{[\mathbf {{M}}]_\gamma }\) (resp. \(\mathcal {L}_{[\mathbf {{M}}]_1,[\mathbf {{N}}]_2}\)). These proof systems satisfy strong soundness, which essentially means that they remain sound even when the discrete logarithms of the matrices are given. This property is formally defined in González et al. [14].

Case (i) can be instantiated based on the Kernel Diffie-Hellman Assumption 2, and the proof has size \(|{\mathbb {G}}_{\gamma }|\), whereas (ii) can be based on the Split Kernel Diffie-Hellman Assumption 3, and the proof has size \(2|{\mathbb {G}}_1|+2|{\mathbb {G}}_2|\).

3 Proving Satisfiability of Quadratic Equations

In this section we present a scheme in which soundness is based on the \(q\text {-}\mathsf {STSDH}\) Assumption.

3.1 Arguments for Quadratic Equations from q-Assumptions

Intuition. Given \(n,d\in \mathbb {N}\), the number of variables and equations, respectively, we build a proof system for the family of languages

$$\mathcal {L}_{\text {quad},ck}=\left\{ \left( [{\varvec{c}}]_1,\mathbf {{V}},{\varvec{b}}\right) \in \mathbb {G}_1^{2n}\times \mathbb {Z}_p^{n\times d}\times \mathbb {Z}_p^d \ \left| \ \begin{array}{l} \exists {\varvec{a}},{\varvec{w}}\in \mathbb {Z}_p^n \text { s.t. }\\ {[{\varvec{c}}]_1}=\mathsf {Com}_{ck}({\varvec{a}};{\varvec{w}})\text { and }\\ {\varvec{a}}^\top {\varvec{v}}_j+b_j \in \{0,2\} \text { for all } j\in \{1,\ldots ,d\} \end{array}\right. \right\} $$

where \([{\varvec{c}}]_1=\mathsf {Com}_{ck}({\varvec{a}};{\varvec{w}})\) is a vector of ElGamal encryptions. This generalizes to any other perfectly binding commitment of the form \([{\varvec{c}}]_1=\mathsf {Com}_{ck}({\varvec{a}};{\varvec{w}})=[\mathbf {{U}}_1{\varvec{a}}+\mathbf {{U}}_2{\varvec{w}}]_1\), where \(ck=([\mathbf {{U}}_1]_1,[\mathbf {{U}}_2]_1)\) and \([\mathbf {{U}}_1]_1,[\mathbf {{U}}_2]_1\) are drawn from a witness-sampleable distribution.

We follow the approach of Danezis et al. [7] and encode the equations

$$\begin{aligned} {\varvec{a}}^\top {\varvec{v}}_j+b_j \in \{0,2\} \end{aligned}$$

into a Square Span Program (SSP): we construct \(n+1\) polynomials \(v_0(X),\ldots ,v_n(X)\) and a target polynomial t(X), where \(\deg (v_i) < \deg (t)=d\) for all \(i\in \{0,\dots ,n\}\). This encoding ensures that a witness \({\varvec{a}}\) satisfies the set of equations if and only if t(X) divides p(X), where

$$ p(X)=\left( v_0(X)+\sum _{i=1}^n a_iv_i(X)\right) ^2-1. $$

The polynomials \(v_i(X)\), \(i\in \{1,\dots ,n\}\), are defined as the interpolation polynomials of the coefficients \(v_{ij}\) of \(\mathbf {{V}}\) at \(r_1,\ldots ,r_d\), which are fixed, arbitrary, pairwise different points of \(\mathbb {Z}_p\). Similarly, \(v_0(X)\) is the interpolation polynomial of \(b_j-1\) at the same points. That is, if \({\varvec{v}}_j\) is the jth column of \(\mathbf {{V}}\),

$$ {\varvec{a}}^\top {\varvec{v}}_j+b_j-1 = \sum _{i=1}^n a_iv_{ij} + b_j -1= \sum _{i=1}^n a_iv_i(r_j) + v_0(r_j). $$

Note that the statement \(Z\in \{0,2\}\) is equivalent to \((Z-1)^2-1=0\), and hence the polynomial p(X) interpolates the left-hand side of this equation at \(r_1,\ldots ,r_d\) when Z is replaced by \({\varvec{a}}^\top {\varvec{v}}_j+b_j\) for each \(j\in \{1,\dots ,d\}\). The target polynomial \(t(X)=\prod _{i=1}^d(X-r_i)\) vanishes at \(r_1,\ldots ,r_d\) and therefore encodes the right-hand sides. This encoding gives us the equivalence: the equations hold if and only if t(X) divides p(X).

Danezis et al. constructed a SNARK for this statement, “t(X) divides p(X)”, which is very efficient because it just checks that the divisibility relation holds at a single secret point \(s \in \mathbb {Z}_p\) whose powers \([s]_1,[s]_2,\ldots ,[s^d]_1,[s^d]_2\) are published in the CRS. That is, the proof essentially shows “in the exponent” that

$$\begin{aligned} p(s)=h(s)t(s), \end{aligned}$$

where \(h(X)=p(X)/t(X)\). When all the equations hold, h(X) is a polynomial and the evaluation at s can be constructed as a linear combination of the powers of s in the CRS. When some equation does not hold, h(X) is a rational function, and its evaluation at s is no longer efficiently computable from the CRS. The actual proof system has some additional randomization elements to achieve Zero-Knowledge, but its soundness follows from this argument.

In the scheme of Danezis et al., the prover outputs a perfectly hiding commitment to the witness. In the soundness proof, one uses a knowledge of exponent assumption to extract the witness in \(\mathbb {Z}_p^n\) from the commitment. The witness is used to derive a reduction from breaking soundness to the \(d\text {-}\mathsf {TSDH}\) Assumption. More precisely, it follows from the SSP characterization that if the equation with index \(j^*\) does not hold, then \(p(X)=q(X)(X-r_{j^*})+b\), for some \(b \ne 0\). From the extracted value of the witness \({\varvec{a}}\) one can identify at least one such \(j^*\) and also recover the coefficients of q(X) and the value b in \(\mathbb {Z}_p\). From the verification equation, the reduction can obtain

$$\begin{aligned} \left[ \dfrac{p(s)}{s-r_{j^*}}\right] _T=\left[ q(s)+\dfrac{b}{s-r_{j^*}}\right] _T\end{aligned}$$
(1)

and, since it knows \(b\) and \(q(s)\), derive \(\left[ \dfrac{1}{s-r_{j^*}}\right] _T\).
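Spelling out this last step: subtracting \([q(s)]_T\), which the reduction can compute from the extracted witness, and then scaling by \(b^{-1}\) isolates the target value,

```latex
\left[\frac{1}{s-r_{j^*}}\right]_T
  = b^{-1}\left(\left[\frac{p(s)}{s-r_{j^*}}\right]_T - \left[q(s)\right]_T\right).
```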

In other words, there are two ways in which Danezis et al.’s scheme (as well as most other SNARKs) uses knowledge assumptions: (a) extracting vectors of committed values from one single group element (beyond what is information-theoretically possible), and (b) extracting in the base field, that is, computing discrete logarithms. Our goal is to avoid knowledge-of-exponent assumptions, so to circumvent (a) we change the scheme to include perfectly binding commitments to the witness. However, we still have to deal with (b), as our commitments to \({\varvec{a}}\) can only be opened to the group elements \([{\varvec{a}}]_1\). Therefore, we are no longer able to compute \([q(s)]_T\), since this requires computing terms of the form \([a_ia_js^k]_T\) from \([a_i]_1,[a_j]_2\) and powers of s in one of the groups; in any case it would amount to a product of three group elements.

At this point, we would like to include in the proof a commitment that allows the reduction to extract q(s), but the fact that q(s) is “quadratic” in the witness makes this difficult. For this reason, we factor q(X) into two polynomials \(q_1(X)\) and \(q_2(X)\). In the soundness game we will program the CRS to depend on an index \(j^*\) and let the prover compute a binding commitment to \([q_2(s)]_2\), while \([q_1(s)]_1\) can be directly computed from the proof. From these factors we are able to compute \([q(s)]_T\). However, extracting b in \(\mathbb {Z}_p\) to obtain a reduction to the \(q\text {-}\mathsf {TSDH}\) problem seems difficult, so we rely on a more flexible security assumption where we do not need to remove b. The idea of the new assumption is to give the adversary powers of s in the source groups and ask it to output

$$ \left( r_{j^*},[\beta ]_1,\left[ \frac{b}{s-r_{j^*}}\right] _T\right) ,\text { where}\, \beta ^2-1=b. $$

However, this is not a hard problem, as the adversary can set b to be a multiple of \(s-r_{j^*}\) so that the denominator in \(\frac{b}{s-r_{j^*}}\) cancels. For example, if an adversary sets \(\beta =s-r_{j^*}+1\), it can compute a valid solution as \((r_{j^*}, [\beta ]_1,[s-r_{j^*}+2]_T)\). To prevent this type of attack, we add an element \([\varepsilon ]_2\in {\mathbb {G}}_2\) to the challenge and ask the adversary to output \([\varepsilon \beta ]_2\) too, so that \(\beta \) cannot be set as a function of s (since the adversary is unable to compute \(\varepsilon s\) in \({\mathbb {G}}_2\)). We call the modified assumption the \(q\text {-}\mathsf {STSDH}\) Assumption, which is proven generically secure (see the full version). Further, it can be easily checked that the assumption is falsifiable, as we note in Sect. 2.2. To make sure that we can extract \([\varepsilon \beta ]_2\) from the prover’s output, and also that the rest of the elements of the proof are of the right form, we will require the prover to show that its output is in a given linear space.

Scheme Description. Given \(n,d\in \mathbb {N}\) we construct a QA-NIZK argument for the language \( \mathcal {L}_{\text {quad}, ck }\).

Setup.

  • Algorithm \(\mathsf {K}_0( gk ,n,d)\) samples \( ck =[{\varvec{u}}]_1\leftarrow \mathcal {L}_1\). A commitment \(\mathsf {Com}_{ ck }({\varvec{a}};{\varvec{w}})\) is the concatenation of \(\mathsf {Enc}_{ ck }(a_i;w_i)=[a_i {\varvec{e}}_2+w_i {\varvec{u}}]_1\). That is, \(\mathsf {Com}_{ ck }({\varvec{a}};{\varvec{w}})=[\mathbf {{U}}_1 {\varvec{a}}+ \mathbf {{U}}_2 {\varvec{w}}]_1\), where \(\mathbf {{U}}_1,\mathbf {{U}}_2\) are \(2n \times n\) matrices such that \(\mathbf {{U}}_1\) has \({\varvec{e}}_2\) on the diagonal and \(\mathbf {{U}}_2\) has \({\varvec{u}}\) on the diagonal.

  • Algorithm \(\mathsf {K}_1( gk , ck ,n,d)\) picks \(s \leftarrow \mathbb {Z}_p\), \(\left\{ \hat{\varvec{\phi }}_{i} \right\} _{i\in \{1,\dots ,n+1\}} \leftarrow \mathbb {Z}_p^3\), \(\mathbf {{Q}}_2\leftarrow \mathcal {U}_{3,3}\) and generates also the CRS for proving membership in bilateral linear spaces of Sect. 2, \(\mathsf {BLS.CRS}\), for the linear spaces generated by the matrices:

    The CRS includes the elements

    $$ \left( gk , ck ,\left\{ \left[ s^{i}\right] _{1,2}\right\} _{i\in \{1,\dots ,d\}},\left\{ \left[ \hat{\varvec{\phi }}_{i}\right] _2\right\} _{i\in \{1,\dots ,n+1\}},[\mathbf {{Q}}_2]_2, \mathsf {BLS.CRS}\right) . $$

Prover. The prover \(\mathsf {P}\) with input \((\text {CRS},[{\varvec{c}}]_1,\mathbf {{V}},{\varvec{b}},{\varvec{a}})\) picks \(\delta \leftarrow \mathbb {Z}_p, {\varvec{r}}_{q.2} \leftarrow \mathbb {Z}_p^3\) and defines the polynomial

$$p(X) = \left( v_0(X)+\sum _{i=1}^na_iv_i(X)+\delta t(X)\right) ^2 -1 \in \mathbb {Z}_p[X],$$

where each \(v_i(X)\), for \(i\in \{1,\dots ,n\}\), is the interpolation polynomial of the components \(v_{ij}\) of \(\mathbf {{V}}\) at points \(r_j\), for \(j\in \{1,\dots ,d\}\), and \(v_0(X)\) is the interpolation polynomial of \(b_j-1\) at the same points. It then computes \(h(X)=\frac{p(X)}{t(X)}\), which is a polynomial in \(\mathbb {Z}_p[X]\) because \({\varvec{a}}\) satisfies the equations, and the following elements:

The prover can compute all these elements as linear combinations of the powers of s in the CRS. The prover also computes a \(\mathsf {BLS}\) proof \(\psi \) of

$$ \left( [{\varvec{c}}]_1,\left[ V\right] _1,\left[ V\right] _2, [{\varvec{q}}_2]_2\right) ^{\top } \in \mathbf {Im}\left( \begin{array}{c} \left[ \mathbf {M}\right] _1 \\ \left[ \mathbf {N}\right] _2 \end{array}\right) $$

with witness \(\left( {\varvec{a}},{\varvec{w}},\delta ,{\varvec{r}}_{q.2} \right) ^{\top } \in \mathbb {Z}_p^{2n+4}\).

Finally, it sends the proof \(\pi \) to the verifier, where \(\pi := \bigg (\left[ H\right] _1, \left[ V\right] _{1,2},\) \(\left[ {\varvec{q}}_2\right] _2, \psi \bigg ).\)

Verifier. The verifier \(\mathsf {V}\) with input \((\text {CRS},[{\varvec{c}}]_1,\mathbf {{V}},{\varvec{b}}, \pi )\) checks whether the equation

$$\begin{aligned} e(\left[ v_0(s)+V\right] _1,\left[ v_0(s)+V\right] _2)-[1]_T=e(\left[ H\right] _1, [t(s)]_2)\end{aligned}$$
(2)

holds and \(\mathsf {BLS.verify}(\psi )=1\). If both conditions hold, it returns 1; otherwise it returns 0.

Completeness. This property is based on the perfect completeness of membership in bilateral spaces, and the observation that the left hand side of the verification equation is \(e\left( \left[ v_0(s)+V\right] _1,\left[ v_0(s)+V\right] _2\right) -[1]_T=\left[ (v_0(s)+V)^2-1\right] _T=\left[ p(s)\right] _T\), and the right hand side is \(e\left( \left[ H\right] _1,[t(s)]_2\right) =e\left( \left[ h(s)\right] _1,[t(s)]_2\right) = \left[ p(s)\right] _T\).

Soundness. We first introduce a technical lemma that will be used below to prove soundness of the scheme.

Lemma 1

Let v(X) be a polynomial in \(\mathbb {Z}_p[X]\). For any \(r\in \mathbb {Z}_p\), we define \(q_2(X)\) and \(\beta \) as the quotient and remainder, respectively, of the polynomial division of v(X) by \(X-r\), i.e. \(v(X)=q_2(X)(X-r)+\beta \). If \(p(X)=v(X)^2-1\), then

$$p(X)=\left( v(X)+\beta \right) q_2(X)(X-r)+ \beta ^2-1.$$

Proof

By definition, \(p(X)=v(X)^2-1\). Expanding this expression using the definition of \(q_2(X)\), we have:

$$\begin{aligned} \begin{aligned} p(X)=~&v(X)\left( q_2(X)(X-r)+\beta \right) -1 = v(X)q_2(X)(X-r)+v(X)\beta -1\\ =~&v(X)q_2(X)(X-r)+q_2(X)(X-r)\beta +\beta ^2-1\\ =~&(v(X)+\beta )q_2(X)(X-r)+\beta ^2-1.\,\, \end{aligned} \end{aligned}$$

   \(\square \)
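Since Lemma 1 is a polynomial identity, it can also be spot-checked numerically over a toy field (a sketch of ours, not part of the proof; the polynomial coefficients and field size are arbitrary toy values):

```python
# Check Lemma 1 over F_101: with v(X) = q2(X)(X-r) + beta and p(X) = v(X)^2 - 1,
# verify p(x) = (v(x)+beta)q2(x)(x-r) + beta^2 - 1 at every field point.
p = 101
v = [5, 7, 3, 11]      # coefficients of v(X), low degree first (toy values)
r = 9

def ev(f, x):          # Horner evaluation mod p
    acc = 0
    for c in reversed(f):
        acc = (acc * x + c) % p
    return acc

def div_linear(f, r):  # synthetic division of f(X) by (X - r)
    out, acc = [], 0
    for c in reversed(f):
        acc = (acc * r + c) % p
        out.append(acc)
    rem = out.pop()    # the last value is f(r), the remainder
    return list(reversed(out)), rem

q2, beta = div_linear(v, r)
lemma_holds = all(
    (ev(v, x) ** 2 - 1) % p
    == ((ev(v, x) + beta) * ev(q2, x) * (x - r) + beta * beta - 1) % p
    for x in range(p)
)
```

By the remainder theorem, `beta` equals \(v(r)\), and the identity holds at all p field points, i.e. as polynomials.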

Theorem 1

Let \(\mathsf {Adv}_{\text {Sound}}(\mathcal {A})\) be the advantage of any PPT adversary \(\mathcal {A}\) against the soundness of the scheme. There exist PPT adversaries \(\mathcal {B}_1, \mathcal {B}_3\) against the \(\mathcal {L}_{1}\text {-}\mathsf {MDDH}_{{\mathbb {G}}_2}\) and \(d\text {-}\mathsf {STSDH}\) Assumptions, respectively, and an adversary \(\mathcal {B}_2\) against strong soundness of the \(\mathsf {BLS}\) proof such that

$$ \mathsf {Adv}_{\text {Sound}}(\mathcal {A}) \le d\Big ( 2\mathsf {Adv}_{\mathcal {L}_1\text {-}\mathsf {MDDH},{\mathbb {G}}_2}(\mathcal {B}_1) + \, \mathsf {Adv}_{\mathsf {BLS}}(\mathcal {B}_2)+\mathsf {Adv}_{d\text {-}\mathsf {STSDH}}(\mathcal {B}_3)\Big ). $$

Proof. In order to prove soundness we will prove indistinguishability of the following games.

  • \(\mathsf {Real}\): This is the real soundness game. The output is 1 if the adversary produces a false accepting proof, i.e. if there is some equation \({\varvec{a}}^{\top }{\varvec{v}}_i+b_i \not \in \{0,2\}\) and the verifier accepts the proof.

  • \(\mathsf {Game}_0\): This game is identical to the previous one, except that the commitment key \({\varvec{u}}\) is chosen by the game.

  • \(\mathsf {Game}_1\): This game is identical to the previous one, except that some \(j^* \leftarrow \{1,\dots ,d\}\) is chosen and the game aborts if \({\varvec{a}}\) satisfies the \(j^*\)-th equation, i.e. \([{\varvec{a}}]_1^\top {\varvec{v}}_{j^*}+[b_{j^*}]_1\in \{[0]_1,[2]_1\}\).

  • \(\mathsf {Game}_2\): For \(r=r_{j^*}\) and \(i\in \{1,\dots ,n+1\}\), let \(\alpha _i(X)\) and \(\beta _i\) be the quotient and the remainder of the polynomial division of \(v_i(X)\) by \(X-r_{j^*}\) if \(i\in \{1,\dots ,n\}\), and of t(X) by \(X-r_{j^*}\) if \(i=n+1\). This game is identical to the previous one, except that \(\mathbf {{Q}}_2\) is now a uniformly random matrix conditioned on having rank 1, and each \(\left[ \hat{\varvec{\phi }}_{i}\right] _2\) is changed to

    $$\left[ \hat{\varvec{\phi }}_{i}\right] _2= [\alpha _i(s)]_2{\varvec{e}}_2+ \beta _i[\varepsilon ]_2{\varvec{e}}_3+[\mathbf {{Q}}_2]_2 {\varvec{r}}_i,$$

    where \(\varepsilon \leftarrow \mathbb {Z}_p, {\varvec{r}}_{i} \leftarrow \mathbb {Z}_p^3\) and \({\varvec{e}}_i\) is the ith vector of the canonical basis of \(\mathbb {Z}_p^3\).

Obviously, the games \(\mathsf {Real}\) and \(\mathsf {Game}_0\) are indistinguishable.

Lemma 2

\(\text {Pr}[\mathsf {Game}_0(\mathcal {A})=1] \le d \cdot \text {Pr}[\mathsf {Game}_1(\mathcal {A}) = 1]\).

Proof. If \(\mathcal {A}\) breaks soundness, at least one equation does not hold. Thus the challenger has at least a probability of \(\frac{1}{d}\) of guessing this equation.   \(\square \)

Lemma 3

There exists a \(\mathcal {L}_{1}\text {-}\mathsf {MDDH}_{{\mathbb {G}}_2}\) adversary \(\mathcal {B}_1\) such that

$$|\text {Pr}[\mathsf {Game}_1(\mathcal {A})=1] - \text {Pr}[\mathsf {Game}_{2}(\mathcal {A}) = 1]| \le 2\text {Adv}_{\mathcal {L}_{1}\text {-}\mathsf {MDDH},{\mathbb {G}}_2}(\mathcal {B}_1).$$

We use a direct application of the rank problem, which is reducible to \(\mathsf {MDDH}\), to prove the above Lemma. See the full version for the details.

Lemma 4

There exists an adversary \(\mathcal {B}_2\) against the strong soundness of the \(\mathsf {BLS}\) proof and a \(d\text {-}\mathsf {STSDH}\) adversary \(\mathcal {B}_3\) such that

$$\text {Pr}[\mathsf {Game}_2(\mathcal {A})=1]\le \mathsf {Adv}_{\mathsf {BLS}}(\mathcal {B}_2)+\mathsf {Adv}_{d\text {-}\mathsf {STSDH}}(\mathcal {B}_3).$$

Proof

For any adversary \(\mathcal {A}\) which breaks soundness, let E be the event that \(([{\varvec{c}}]_1, [V]_1, [V]_2, [{\varvec{q}}_2]_2)^{\top } \in \mathbf {Im} \left( \begin{array}{c} \left[ \mathbf {{M}}\right] _1\\ \left[ \mathbf {{N}}\right] _2 \end{array}\right) \), for the matrices \(\mathbf {{M}},\mathbf {{N}}\) defined in the setup, and let \(\overline{E}\) be the complementary event. Obviously,

$$\begin{aligned} \text {Pr}[\mathsf {Game}_2(\mathcal {A})=1]\le \text {Pr}[\mathsf {Game}_2(\mathcal {A})=1 | E] + \text {Pr}[\mathsf {Game}_2(\mathcal {A})=1 | \overline{E}]. \end{aligned}$$
(3)

We can bound the second summand by the advantage of an adversary \(\mathcal {B}_2\) against the strong soundness of \(\mathsf {BLS}\). Such an adversary receives \([\mathbf {{M}}]_1,[\mathbf {{N}}]_2\) sampled according to the distribution specified by \(\mathsf {Game}_2\), together with the witness that proves that \(\mathbf {{M}},\mathbf {{N}}\) are sampled according to this distribution, namely s (see strong soundness, defined in the full version). It also generates \(\mathsf {BLS.CRS}\), and the rest of the CRS is chosen in the usual way. Adversary \(\mathcal {B}_2\) can use the output of \(\mathcal {A}\) to break the soundness of \(\mathsf {BLS}\) in a straightforward way.

In the following, we bound the first term of the sum in Eq. (3) by constructing an adversary \(\mathcal {B}_3\) which breaks the \(d\text {-}\mathsf {STSDH}\) Assumption in the case that E happens. Note that in this case there exists a witness \(\left( {\varvec{a}},{\varvec{w}},\delta ,{\varvec{r}}_{q.2} \right) ^{\top }\) of membership in \(\mathbf {Im}\left( \begin{array}{c} \left[ \mathbf {M}\right] _1 \\ \left[ \mathbf {N}\right] _2 \end{array}\right) \). Further, this witness is partially unique, because \([{\varvec{c}}]_1\) is a perfectly binding commitment, so \({\varvec{a}},{\varvec{w}},\delta \) are uniquely determined, and in particular this uniquely determines the polynomial p(X).

We now describe the full reduction. Adversary \(\mathcal {B}_3\) receives a challenge of the \(d{\text {-}}\mathsf {STSDH}\) Assumption and plugs it in the CRS. The rest of the elements are chosen by adversary \(\mathcal {B}_3\) with the distribution specified by the game. The CRS is then sent to the soundness adversary \(\mathcal {A}\), who eventually outputs \(\pi \) for the corresponding \([{\varvec{c}}]_1\).

Adversary \(\mathcal {B}_3\) extracts \([{\varvec{a}}]_1 \in {\mathbb {G}}_1^n\) from \([{\varvec{c}}]_1\) using its knowledge of \({\varvec{u}} \in \mathbb {Z}_p^2\), and aborts if the \(j^*\)-th equation is satisfied. By definition, \(e(\left[ v_0(s)+V\right] _1,\left[ v_0(s)+V\right] _2)-[1]_T=[p(s)]_T\). If we divide both sides of the verification equation (2) by \(s-r_{j^*}\), we obtain

$$\begin{aligned} \left[ \dfrac{p(s)}{s-r_{j^*}}\right] _T =e\left( \left[ H\right] _1, \left[ \dfrac{t(s)}{s-r_{j^*}}\right] _2\right) =e\left( \left[ H\right] _1, \left[ \prod _{i\ne j^*}(s-r_i)\right] _2\right) , \end{aligned}$$
(4)

so the adversary \(\mathcal {B}_3\) can compute \(\left[ \dfrac{p(s)}{s-r_{j^*}}\right] _T\) from \([H]_1\) and the powers of \([s]_{1,2}\) in the CRS. On the other hand, if we apply Lemma 1 to p(X), we have

$$\begin{aligned} \left[ \dfrac{p(s)}{s-r_{j^*}}\right] _T =\left[ (v(s)+\beta )q_2(s)+ \dfrac{\beta ^2 -1}{s-r_{j^*}} \right] _T, \end{aligned}$$
(5)

and we have \(\beta ^2-1 \ne 0\) (otherwise the \(j^*\)-th equation is satisfied, in which case the game aborts). We describe in the following how \(\mathcal {B}_3\) can compute the right-hand side of (5) and the elements needed to break the \(d\text {-}\mathsf {STSDH}\) Assumption.

\(\mathcal {B}_3\) can compute \([\beta ]_1 = \sum _{i=0}^{n}[a_i]_1 \beta _i\) (with \(a_0:=1\)) and also \([v(s)+\beta ]_1 = [V]_1+ [\beta ]_1\), because it knows \([V]_1\) from the proof \(\pi \) and the extracted values \([a_i]_1\), and the \(\beta _i\) are the remainders of dividing \(v_i(X)\) by \(X-r_{j^*}\).

Since \(\mathcal {B}_3\) sampled \(\mathbf {{Q}}_2\) itself, it can recover \(\left[ q_2(s)\right] _2\) and \(\left[ \varepsilon \beta \right] _2\) from \([{\varvec{q}}_2]_2\) because it can compute two vectors \({\varvec{v}}_2, {\varvec{v}}_3 \in \mathbb {Z}_p^3\) such that \({\varvec{v}}_i^{\top } [\mathbf {{Q}}_2]_2={\varvec{0}}\), \({\varvec{v}}_i^{\top }{\varvec{e}}_j = 0\) if \(i\ne j\) and \({\varvec{v}}_i^{\top }{\varvec{e}}_j = 1\) if \(i=j\). \(\mathcal {B}_3\) multiplies these vectors by \({\varvec{q}}_2\) (which is correctly computed, because E holds), resulting in:

$$ {\varvec{v}}_2^{\top }[{\varvec{q}}_2]_2=\left[ \sum _{i=1}^{n+1}a_i\alpha _{i}(s)\right] _2,\qquad {\varvec{v}}_3^{\top }[{\varvec{q}}_2]_2=\left[ \varepsilon \sum _{i=1}^{n+1}a_i\beta _i\right] _2, $$

where \(a_{n+1}:=\delta \).

From these values, \(\mathcal {B}_3\) can compute \([q_2(s)]_2\) and \([\varepsilon \beta ]_2\) by adding \([\alpha _0(s)]_2\) and \(\beta _0[\varepsilon ]_2\) to the above extracted elements, respectively:

$$ \left[ \alpha _0(s)+\sum _{i=1}^{n+1}a_i\alpha _{i}(s)\right] _2=[q_2(s)]_2,\qquad \beta _0[\varepsilon ]_2+\left[ \varepsilon \sum _{i=1}^{n+1}a_i\beta _i\right] _2=[\varepsilon \beta ]_2. $$

From these values and \([v(s)+\beta ]_1\), computed above, \(\mathcal {B}_3\) can derive \(\left[ (v(s)+\beta )q_2(s)\right] _T\) as \(e([v(s)+\beta ]_1,[q_2(s)]_2)\), and from Eq. (5) recover \(\left[ \dfrac{\beta ^2-1}{s-r_{j^*}} \right] _T\).

Finally, \(\mathcal {B}_3\) returns \(\left( r_{j^*}, [\beta ]_1, [\varepsilon \beta ]_2, \left[ \dfrac{\beta ^2-1}{s-r_{j^*}}\right] _T\right) \), breaking the \(d\text {-}\mathsf {STSDH}\) Assumption.   \(\square \)
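The kernel-vector extraction step above admits a compact numerical sketch (ours, with toy values; `u`, `w`, `alpha`, `eps`, `beta` and the field size are illustrative): since a rank-1 matrix factors as \(\mathbf {{Q}}_2={\varvec{u}}{\varvec{w}}^{\top }\), any \({\varvec{v}}\) with \({\varvec{v}}^{\top }{\varvec{u}}=0\) annihilates \(\mathbf {{Q}}_2\), and imposing the unit conditions on the \({\varvec{e}}_2,{\varvec{e}}_3\) coordinates peels off the two masked values.

```python
# Rank-1 kernel extraction mod p: recover alpha and eps*beta from
# q2 = alpha*e2 + eps*beta*e3 + Q r, using vectors v with v^T Q = 0.
p = 101
u, w = [3, 7, 11], [5, 2, 9]            # Q = u w^T (rank 1; u[0] invertible)
Q = [[ui * wj % p for wj in w] for ui in u]
inv = lambda x: pow(x % p, p - 2, p)

# v2^T Q = 0 reduces to v2 . u = 0; fix the e2-coordinate to 1, e3 to 0.
v2 = [(-u[1]) * inv(u[0]) % p, 1, 0]
# v3 . u = 0 with e2-coordinate 0 and e3-coordinate 1.
v3 = [(-u[2]) * inv(u[0]) % p, 0, 1]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b)) % p

alpha, eps, beta = 42, 17, 6            # toy stand-ins for the hidden data
r = [8, 1, 13]
Qr = [dot(row, r) for row in Q]
q2 = [(alpha * (i == 1) + eps * beta * (i == 2) + Qr[i]) % p for i in range(3)]

extracted_alpha = dot(v2, q2)           # recovers alpha
extracted_epsbeta = dot(v3, q2)         # recovers eps*beta mod p
```

The masking term \(\mathbf {{Q}}_2{\varvec{r}}\) vanishes under both \({\varvec{v}}_2,{\varvec{v}}_3\), which is exactly why the rank-1 switch in \(\mathsf {Game}_2\) enables the extraction.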

Zero-Knowledge. We describe the simulation algorithm \((\mathsf {S}_1,\mathsf {S}_2)\). \(\mathsf {S}_1( gk )\) outputs \((\text {CRS},\tau =\{s\},\tau _{\mathsf {BLS}})\), the common reference string computed in the usual way plus the simulation trapdoor \(s \in \mathbb {Z}_p\) and the simulation trapdoor of the bilateral spaces membership proof.

\(\mathsf {Simulator}\) \(\mathsf {S}_2(\text {CRS},[{\varvec{c}}]_1,\tau ,\tau _{\mathsf {BLS}})\): This algorithm samples \(V^S \leftarrow \mathbb {Z}_p\), \(\left[ {\varvec{q}}_2^S\right] _2 \leftarrow {\mathbb {G}}_2^3\), and defines:

$$\begin{aligned} \left[ H^S\right] _1= \left[ \frac{(V^S)^2-1}{t(s)}\right] _1. \end{aligned}$$

\(\mathsf {S}_2\) also constructs \(\psi ^S \leftarrow \mathsf {BLS.simulator}(\text {CRS},\left[ {\varvec{c}}\right] _1,\left[ V^S\right] _1,\left[ V^S\right] _2,\left[ {\varvec{q}}_2^S\right] _2,\tau _{\mathsf {BLS}})\). The algorithm outputs \(\pi :=(\left[ {\varvec{c}}\right] _1,\left[ V^S\right] _1,\left[ V^S\right] _2,\left[ {\varvec{q}}_2^S\right] _2,\psi ^S).\)

Theorem 2

The scheme above is Perfect Zero-Knowledge.

Proof

The key idea behind the proof is that all the elements of the proof can be seen as perfectly hiding commitments to \({\varvec{a}}\), where \({\varvec{a}}\) is the opening of \([{\varvec{c}}]_1\). For any \(V^S\) and any \({\varvec{a}}\), there always exists a compatible \(\delta \). Further, since \(\mathbf {{Q}}_2\) has full rank, \(\left[ {\varvec{q}}_2^S\right] _2\) is compatible with any values \({\varvec{a}}\), \(\delta \). \(\left[ H^S\right] _1\) is uniquely determined by \(V^S\) and the rest of the elements of the CRS. Finally, perfect zero-knowledge follows from the perfect zero-knowledge property of the bilateral space membership proof.    \(\square \)

3.2 Unit Vector from Static Assumptions

In our argument for aggregating quadratic equations, we obtain succinctness following the usual polynomial aggregation technique used in most SNARK constructions (e.g. [7, 11]): the set of interpolation points \(r_1,\dots ,r_d \in \mathbb {Z}_p\) is public, while the evaluation point s is only known in the exponent. We can consider a dual approach in which \(s \in \mathbb {Z}_p\) is public but \(r_1,\dots ,r_d\) are only known in the exponent. We observe that this leads to a trade-off between the type of assumption (q-type vs. static) and the size of the CRS (linear vs. quadratic). The second construction is reminiscent of the beginnings of SNARKs, where the CRS was quadratic in the circuit size. The construction is still interesting for proving that a set of n binding commitments to integers opens to n binary values \((b_1,\ldots ,b_n) \in \{0,1\}^n\) such that \(\sum b_i=1\). In this case, a simple modification of the proof system of Sect. 3.1 leads to a scheme with computational soundness based on static assumptions and a linear CRS. The full scheme and its security proof are presented in the full version. The unit vector argument can be used, for instance, to improve the best pairing-based constructions of ring signature schemes without random oracles based on falsifiable assumptions [6, 13]. It also leads to a shuffle argument, described in Sect. 5.1.

4 Aggregated Set Membership Arguments

In the construction of Sect. 3.1, if \(\mathbf {{V}}\) is the identity matrix and \({\varvec{b}}={\varvec{0}}\), the equations \({\varvec{a}}^{\top }\mathbf {{V}}+{\varvec{b}}^{\top }\in \{0,2\}^{d}\) just prove that each \(a_i \in \{0,2\}\). In this section we consider a generalization and build a proof system which proves that some perfectly binding commitments open to \(a_i \in \mathcal {Z}=\{z_1,\ldots ,z_m\} \subset \mathbb {Z}_p\). The proof is constant-size and uses the Boneh-Boyen signature scheme (the basic scheme from [2, Sect. 4.3]) together with a technique to aggregate quadratic equations similar to the one of Sect. 3.1 and inspired by the quadratic span programs of Gennaro et al. [11].

First, in Sect. 4.1, we describe how to construct an argument of membership for a single \(a \in \mathcal {Z}\) and then in Sect. 4.2 we show how to aggregate the argument. In the full version we show how to apply these ideas to construct a range proof.

4.1 Non-aggregated Set Membership Argument

Intuition. We build a constant-size proof of membership for polynomially-large sets in \(\mathbb {Z}_p\) with linear CRS. The idea is to give in the common reference string Boneh-Boyen signatures to each element of the set. The proof of membership is just a proof of knowledge of a valid signature. Recall that \([\sigma ]_2\) is a valid signature for x if and only if

$$\begin{aligned} e([\mathsf {sk}-x]_1,[\sigma ]_2)-[1]_T=[0]_T. \end{aligned}$$

The statement \(x\in \mathcal {Z}\) is proven by committing to x and to \([\sigma ]_2=\left[ \frac{1}{\mathsf {sk}-x}\right] _2\), and giving a Groth-Sahai proof of satisfiability of the verification equation.
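A minimal sketch (ours) of this verification equation in a toy model: every group element is represented by its discrete logarithm, so the pairing becomes multiplication mod p and the check reduces to field arithmetic. The key, set, and field size are toy values.

```python
# Boneh-Boyen verification "in the exponent" over a toy field:
# sigma = 1/(sk - x) passes iff (sk - x) * sigma == 1 (mod p).
p = 101
sk = 37                                   # toy signing key
Z = [4, 15, 23]                           # the public set Z (toy values)

def sign(x):
    return pow((sk - x) % p, p - 2, p)    # discrete log of [1/(sk-x)]_2

def verify(x, sigma):
    return (sk - x) * sigma % p == 1      # e([sk-x]_1,[sigma]_2) = [1]_T

sigs = {z: sign(z) for z in Z}
```

Every signature in the CRS verifies for its own message only; a signature for one element of \(\mathcal {Z}\) fails for any other message.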

The problem with this approach is that it is not possible to extract \(x\in \mathbb {Z}_p\) from its Groth-Sahai commitment, but only the group element \([x]_1\). Therefore, it is not clear how to reduce soundness to the EUF-CMA security of Boneh-Boyen signatures, as the reduction can only output a “relaxed form” of forgery \(([x]_1,[\sigma ]_2)\), for some \(x \notin \mathcal {Z}\), instead of \((x,[\sigma ]_2)\).

It turns out that Boneh-Boyen signatures are not unforgeable when purported forgeries are pairs of the form \(([x]_1,[\sigma ]_2)\). The problem is that \([x]_1\) may depend on \(\mathsf {sk}\), whereas this is impossible when \(x \in \mathbb {Z}_p\) must be given. Indeed, for any message of the form \([\mathsf {sk}-x]_1\) one can compute a forgery as \([1/x]_2\).

To solve this issue, we force the prover to commit to \([\varepsilon x]_1\), where the discrete logarithm of \([\varepsilon ]_1\) remains hidden. Since \([\mathsf {sk}\cdot \varepsilon ]_1\) is not given, the adversary cannot choose x to be a function of \(\mathsf {sk}\).

Scheme Description. We give a proof of membership in \(\mathcal {Z}=\{z_1,\dots ,z_m\}\subset \mathbb {Z}_p\). More precisely, we build a proof for the family of languages:

Setup. Parameters for the Boneh-Boyen signature scheme are generated. Choose \(\varepsilon \leftarrow \mathbb {Z}_p\). The CRS contains \([\varepsilon ]_2\), signatures \([\sigma _j]_2= \left[ \frac{1}{\mathsf {sk}-z_j}\right] _2\) for each \(z_j\in \mathcal {Z}\), and the Groth-Sahai CRS. The simulation trapdoor is \(\varepsilon \) and the GS simulation trapdoor for equations which are right-simulatable.

Prover. If \(x\in \mathcal {Z}\), then there is some pair \(([y]_2,[\sigma ]_2)\), where \([\sigma ]_2\) is in the CRS, such that

$$e([\mathsf {sk}]_1-[x]_1,[\sigma ]_2)=[1]_T \,\,\,\text { and }\,\,\, [y]_2 = x[\varepsilon ]_2.$$

The prover produces a Groth-Sahai proof of the equations:

$$ e\left( [\mathsf {sk}]_1-[X]_1,[\varSigma ]_2\right) =[1]_T \,\,\,\text { and }\,\,\, [Y]_2 = X[\varepsilon ]_2 $$

where \(X,Y,\varSigma \) are the variables.

Verifier. Accept if and only if both proofs are valid.

Theorem 3

The argument above is computationally quasi-adaptively sound under the \(\mathcal {Z}\text {-}\mathsf {GSDH}\) Assumption in \({\mathbb {G}}_2\) and the soundness of Groth-Sahai proofs.

Proof

We construct an adversary \(\mathcal {B}\) against the \(\mathcal {Z}\text {-}\mathsf {GSDH}\) assumption, which receives \({gk:=(p,{\mathbb {G}}_1,{\mathbb {G}}_2,{\mathbb {G}}_T,e,\mathcal {P}_1,\mathcal {P}_2)}\) together with \([\varepsilon ]_{1,2}\) and \(\{[s^i]_{1,2}\}_{i=1}^m\) from the challenger. The adversary defines a new generator for \({{\mathbb {G}}_2}\), \(\overline{\mathcal {P}}_2=[\prod _{i=1}^m(s-z_i)]_2\), defines a new group key \(\overline{gk}:=(p,{\mathbb {G}}_1,{\mathbb {G}}_2,{\mathbb {G}}_T,e,\mathcal {P}_1,\overline{\mathcal {P}}_2)\), and defines \([\mathsf {sk}]_1 = [s]_1\). Note that we use implicit notation with respect to \(\mathcal {P}_1,\mathcal {P}_2\) and not with respect to the new generators.

The adversary can now build the signatures

$$\begin{aligned} \left( z_j[\varepsilon ]_2, \left[ \prod _{\begin{array}{c} i=1\\ i\ne j \end{array}}^{m}(s-z_i)\right] _2 \right) =\left( z_j[\varepsilon ]_2, \frac{1}{\mathsf {sk}-z_j} \overline{\mathcal {P}}_2 \right) \end{aligned}$$

which are valid with respect to the group key \(\overline{gk}\).
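The re-based signatures can be sanity-checked in the same discrete-log toy model used above (our sketch; here s plays the role of \(\mathsf {sk}\), and all concrete values are illustrative):

```python
# With sk = s and new generator P2bar = prod_i (s - z_i), the element
# sigma_j = prod_{i != j} (s - z_i) verifies as 1/(sk - z_j) w.r.t. P2bar.
p = 101
s = 37
Z = [4, 15, 23]

def prod(vals):
    out = 1
    for v in vals:
        out = out * v % p
    return out

P2bar = prod((s - z) % p for z in Z)
sigmas = {z: prod((s - w) % p for w in Z if w != z) for z in Z}
# valid w.r.t. P2bar: (s - z) * sigma_z == P2bar, i.e. sigma_z = P2bar/(s - z)
valid = all((s - z) * sig % p == P2bar for z, sig in sigmas.items())
```

The point of the reduction is that these signatures are computable from the powers \([s^i]_2\) alone, without knowing \(\mathsf {sk}=s\) in the clear.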

Let \(\mathcal {A}\) be an adversary against our set membership proof. Adversary \(\mathcal {B}\) runs \(\mathcal {A}\) with the new group key \(\overline{gk}\), Groth-Sahai commitment keys for which it knows the discrete logarithms (in order to open commitments), and signatures \(([\sigma _1]_2,\dots ,[\sigma _m]_2)\). Suppose that \(\mathcal {A}\) wins by producing an accepting proof for some \(x\not \in \mathcal {Z}\). From the adversary’s proof and committed values one can extract \([x]_1\) and \(([y^*]_2,[\sigma ^*]_2)\), and from the perfect soundness of Groth-Sahai proofs it follows that

$$ e([\mathsf {sk}]_1-[x]_1,[\sigma ^*]_2)=e(\mathcal {P}_1,\overline{\mathcal {P}}_2) \,\,\,\text { and }\,\,\, [y^*]_2 = x[\varepsilon ]_2. $$

This implies that \([\sigma ^*]_2 = \left[ \frac{\prod _{j=1}^m(\mathsf {sk}-z_j)}{\mathsf {sk}-x}\right] _2\), and hence \(\left( [x]_1,[y^*]_2,[\sigma ^*]_{2}\right) \) is a solution to the \(\mathcal {Z}\text {-}\mathsf {GSDH}\) problem.

   \(\square \)

Theorem 4

The argument above is composable zero-knowledge under the composable zero-knowledge property of Groth-Sahai proofs.

Proof

The proof simulator uses the Groth-Sahai trapdoor and \(\varepsilon \) to simulate the Groth-Sahai proof of both equations (note that even though the commitment \([{\varvec{c}}]_1\) is part of the statement, both equations are right-simulatable when \(\varepsilon \) is known).    \(\square \)

4.2 Aggregated Set Membership Argument

Let \(\mathcal {Z}\subset \mathbb {Z}_p\), \(m=|\mathcal {Z}|\), and \(n\in \mathbb {N}\). We construct a QA-NIZK argument for the following language

$$\mathcal {L}_{\textsf {memb},\mathcal {Z}, ck }=\left\{ [{\varvec{c}}]_1 \in \mathbb {G}_1^{2n} \;\middle |\; \exists {\varvec{x}}\in \mathcal {Z}^n,{\varvec{w}}\in \mathbb {Z}_p^n \text { s.t. } [{\varvec{c}}]_1=\mathsf {Com}_{ck}({\varvec{x}};{\varvec{w}})\right\} ,$$

where \([{\varvec{c}}]_1 = \mathsf {Com}_{ck}({\varvec{x}};{\varvec{w}})\) is a vector of ElGamal encryptions. The generalization to other perfectly binding commitments is straightforward.

Intuition. To express the validity of n signature and message pairs, we construct polynomials v(X), y(X), which encode the set of n verification equations for the Boneh-Boyen signatures. Given the set \(\mathcal {R}=\{r_1,\ldots ,r_n\} \subset \mathbb {Z}_p\), recall that we denote as \(\ell _i(X)\) the ith Lagrange interpolation polynomial associated to \(\mathcal {R}\).

We define \(v_0(X)\) as the constant polynomial \(v_0(X)=\mathsf {sk}\), and \(t(X)=\prod _{r_j \in \mathcal {R}} (X-r_j)\). The set of polynomials \(v_0(X),\{\ell _i(X)\}_{i=1}^n, t(X)\) accepts \(x_1,\dots ,x_n\) if and only if t(X) divides \( (v_0(X)-v(X))y(X)-1\), where

$$\begin{aligned} v(X) = \sum _{i=1}^n x_i\ell _i(X),\qquad y(X)=\sum _{i=1}^n \sigma _{k(i)} \ell _i(X), \end{aligned}$$

and \(\sigma _{k(i)}\) is the signature of some \(z_{k(i)}\) such that \(x_i = z_{k(i)}\).

That is, at any point \(r_j \in \mathcal {R}\), if \(x_j=v(r_j)\), then \(y(r_j)\) is a valid signature of \(x_j\). This follows from

$$\begin{aligned}&(v_0(X)-v(X))y(X)-1 = h(X)t(X) \text { for some polynomial }h(X)\\ \implies&(v_0(r_j)-v(r_j))y(r_j)-1 = 0 \iff (\mathsf {sk}-x_j)y(r_j) -1 = 0. \end{aligned}$$

In particular, if \(j\in [n]\) is such that \(x_j\notin \mathcal {Z}\), then \(y(r_j)\) is a forgery for \(x_j\). For simplicity, in this exposition we ignore the issue mentioned in previous section about commitment extractability, but this is taken into account in the argument.
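Since t(X) is squarefree, the divisibility condition is equivalent to \(f(r_j)=0\) for every j, where \(f(X)=(v_0(X)-v(X))y(X)-1\). A toy numerical sketch of this encoding (ours; the key, set, points, and helper names are illustrative, and group elements are again represented by their discrete logs):

```python
# Toy check of the aggregated encoding over F_101: t(X) divides
# (v0(X)-v(X))y(X)-1 iff every committed x_j carries a valid signature.
p = 101
sk = 37
Z = [4, 15, 23]
R = [1, 2, 3]                                          # interpolation points
sigma = {z: pow((sk - z) % p, p - 2, p) for z in Z}    # BB sigs, in the exponent

def poly_add(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(a + b) % p for a, b in zip(f, g)]

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % p
    return out

def lagrange(points):
    total = [0]
    for r_i, v_i in points:
        term = [v_i]
        for r_j, _ in points:
            if r_j != r_i:
                inv = pow((r_i - r_j) % p, p - 2, p)
                term = poly_mul(term, [(-r_j * inv) % p, inv])
        total = poly_add(total, term)
    return total

def ev(f, x):
    acc = 0
    for c in reversed(f):
        acc = (acc * x + c) % p
    return acc

def f_poly(xs, sigs):
    v = lagrange(list(zip(R, xs)))
    y = lagrange(list(zip(R, sigs)))
    v0_minus_v = poly_add([sk], [(-c) % p for c in v])
    return poly_add(poly_mul(v0_minus_v, y), [p - 1])  # (v0 - v) y - 1

good = f_poly([4, 23, 15], [sigma[4], sigma[23], sigma[15]])
bad = f_poly([5, 23, 15], [sigma[4], sigma[23], sigma[15]])  # 5 is not in Z
good_divisible = all(ev(good, r) == 0 for r in R)
bad_divisible = all(ev(bad, r) == 0 for r in R)
```

With all \(x_j \in \mathcal {Z}\) the polynomial vanishes on \(\mathcal {R}\), so t(X) divides it; substituting a single \(x_j \notin \mathcal {Z}\) breaks divisibility at \(r_j\).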

Note that to compute y(X) given \(\ell _i(X)\) in some source group, the prover would need to know the discrete logarithms of the signatures. To render these terms efficiently computable, we include in the CRS the elements \([\sigma _is^j]_2\), where \(\sigma _i = \frac{1}{\mathsf {sk}-z_i}\), for all \(i\in \{1,\dots ,m\},j\in \{1,\dots ,n\}\), as well as all other values which would require the signatures’ discrete logarithms. Consequently, our CRS is of size O(nm).

A direct instantiation of the techniques from Sect. 3.1 requires perfectly binding commitments to each of the signatures and hence a proof of size linear in the number of statements. But it turns out that perfectly binding commitments to signatures are not necessary for proving membership in \(\mathcal {Z}\). To avoid them, we use a trick similar to Sect. 3.1: we program the CRS in order to extract a valid signature for \(x_{j^*}\), for a random \(j^*\in \{1,\dots ,n\}\), in such a way that the adversary can detect the change in the CRS only with negligible probability.

Scheme Description. Given \(m,n \in \mathbb {N}\) and a set \(\mathcal {Z}\subset \mathbb {Z}_p\), \(|\mathcal {Z}|=m\), we construct a QA-NIZK argument for the language \(\mathcal {L}_{\textsf {memb},\mathcal {Z}, ck }\).

Setup.

  • Algorithm \(\mathsf {K}_0( gk )\) sets \( ck =[{\varvec{u}}]_1\leftarrow \mathcal {L}_1\).

  • Algorithm \(\mathsf {K}_1( gk , ck )\) picks \(s \leftarrow \mathbb {Z}_p\), \(\left\{ \varvec{\phi }_{i}, \hat{\varvec{\phi }}_{i} \right\} _{i\in \{1,\dots ,n+1\}} \leftarrow \mathbb {Z}_p^3\times \mathbb {Z}_p^4\), \(\mathbf {{Q}}_1\leftarrow \mathcal {U}_{3,3}, \mathbf {{Q}}_2\leftarrow \mathcal {U}_{4,4}\), picks a Boneh-Boyen secret key \(\mathsf {sk}\leftarrow \mathbb {Z}_p\), generates signatures \([\sigma _1]_2,\ldots ,[\sigma _m]_2\) for each element in \(\mathcal {Z}\) and generates also \(\mathsf {crs}_{\varPi _1}\) and \(\mathsf {crs}_{\varPi _2}\) for proving membership in the linear spaces generated, respectively, by the matrices \(\mathbf {{M}},\mathbf {{N}}\), where:

    [Figure b: definition of the matrices \(\mathbf {{M}}\) and \(\mathbf {{N}}\)]

    The CRS includes the elements

    [Figure c: the elements included in the CRS]

Prover. The prover \(\mathsf {P}(\text {CRS},[{\varvec{c}}]_1,{\varvec{x}},{\varvec{w}})\) picks \(\delta _v,\delta _y \leftarrow \mathbb {Z}_p, {\varvec{r}}_{q.1} \leftarrow \mathbb {Z}_p^3,{\varvec{r}}_{q.2}\leftarrow \mathbb {Z}_p^4\) and defines the polynomials

$$\begin{aligned}&v(X) = \sum _{i=1}^n x_{i}\ell _i(X)+\delta _v t(X),\qquad&y(X) = \sum _{i=1}^n \sigma _{k(i)}\ell _i(X)+\delta _y t(X)\\&h(X) = \frac{(v_0(X)-v(X))y(X)-1}{t(X)} \end{aligned}$$

where \(v_0(X) = \mathsf {sk}\) is the constant polynomial, \(t(X)=\prod _{r \in \mathcal {R}} (X-r)\) and \(\ell _i(X)\) is the ith Lagrange interpolation polynomial associated to \(\mathcal {R}\). By definition of the language, each \(x_i\) is equal to \(z_{k(i)}\), for some \(k(i) \in \{1,\ldots ,m\}\).
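The prover's polynomial arithmetic can be sketched in the field alone (toy values throughout; in the scheme these computations happen in the exponent using the CRS terms): the blinding summands \(\delta _v t(X), \delta _y t(X)\) do not disturb divisibility by t(X), because they vanish at every \(r_j\), and the resulting h satisfies the verification identity at any point s.

```python
# Toy sketch of the prover's polynomials over Z_p; p, R, Z, sk, s and the
# blinding values are illustrative. Polynomials are coefficient lists, low-to-high.
p = 10007

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def padd(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a): out[i] = (out[i] + c) % p
    for i, c in enumerate(b): out[i] = (out[i] + c) % p
    return out

def pscale(a, k): return [(c * k) % p for c in a]

def peval(a, x):                  # Horner evaluation mod p
    acc = 0
    for c in reversed(a): acc = (acc * x + c) % p
    return acc

def pdivmod(a, b):                # polynomial division mod p
    a = a[:]; qt = [0] * max(1, len(a) - len(b) + 1)
    inv_lead = pow(b[-1], p - 2, p)
    for i in range(len(a) - len(b), -1, -1):
        qt[i] = (a[i + len(b) - 1] * inv_lead) % p
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - qt[i] * bj) % p
    return qt, a                  # quotient, remainder (padded with zeros)

def lagrange(R, i):               # i-th Lagrange basis polynomial of R
    num, den = [1], 1
    for j, r in enumerate(R):
        if j == i: continue
        num = pmul(num, [(-r) % p, 1])
        den = den * (R[i] - r) % p
    return pscale(num, pow(den, p - 2, p))

R, sk = [1, 2, 3], 77
Z = [5, 9, 13]
sig = {z: pow((sk - z) % p, p - 2, p) for z in Z}   # toy signatures 1/(sk - z)
x, k = [9, 13, 5], [1, 2, 0]                        # x_j = z_{k(j)}
delta_v, delta_y = 4321, 1234                       # blinding coefficients

t = [1]
for r in R: t = pmul(t, [(-r) % p, 1])

v, y = pscale(t, delta_v), pscale(t, delta_y)       # blinded v(X), y(X)
for j in range(len(R)):
    lj = lagrange(R, j)
    v = padd(v, pscale(lj, x[j]))
    y = padd(y, pscale(lj, sig[Z[k[j]]]))

num = padd(pmul(padd([sk], pscale(v, p - 1)), y), [p - 1])  # (v0 - v)*y - 1
h, rem = pdivmod(num, t)
assert all(c == 0 for c in rem)   # blinding preserves divisibility by t(X)

s = 3141                          # stand-in for the CRS trapdoor
print((peval(h, s) * peval(t, s) - ((sk - peval(v, s)) * peval(y, s) - 1)) % p == 0)
```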

The prover computes the following elements:

$$\begin{aligned}&\left[ H\right] _1=\left[ h(s)\right] _1,\qquad \left[ V\right] _1=\left[ v(s)\right] _1,\qquad \left[ Y\right] _2=\left[ y(s)\right] _2,\\&\left[ {\varvec{q}}_1\right] _1 = \left[ \sum _{i=1}^n x_i \varvec{\phi }_{i}+ \delta _v \varvec{\phi }_{n+1} + \mathbf {{Q}}_1{\varvec{r}}_{q.1}\right] _1,\qquad \left[ {\varvec{q}}_2\right] _2 = \left[ \sum _{i=1}^n \sigma _{k(i)} \hat{\varvec{\phi }}_{i}+ \delta _y \hat{\varvec{\phi }}_{n+1} + \mathbf {{Q}}_2{\varvec{r}}_{q.2}\right] _2. \end{aligned}$$

The prover also computes two \(\mathsf {LS}\) proofs

where \({\varvec{y}}=(y_{1,1},y_{1,2},\ldots ,y_{m,n})\) and \(y_{i,j}\) is equal to 1 if \(i=k(j)\) and 0 otherwise. Finally, it sends the proof \(\pi \) to the verifier, where

$$\begin{aligned} \pi := \left( \left[ H\right] _1, \left[ V\right] _1, \left[ Y\right] _2, \left[ {\varvec{q}}_1\right] _1,\left[ {\varvec{q}}_2\right] _2, \psi _1,\psi _2\right) . \end{aligned}$$

Verifier. The verifier \(\mathsf {V}(\text {CRS}, \pi )\) checks whether the equation

$$e(\left[ H\right] _1, [t(s)]_2)=e([v_0(s)]_1-[V]_1,\left[ Y\right] _2)-[1]_T\text { holds, and }$$
$$\varPi _1.\mathsf {LS.verify}\left( \mathsf {crs}_{\varPi _1},\left[ \begin{array}{c} {\varvec{c}}\\ V \\ {\varvec{q}}_1 \\ \end{array}\right] _1,\psi _1\right) =1,\quad \varPi _2.\mathsf {LS.verify}\left( \mathsf {crs}_{\varPi _2},\left[ \begin{array}{c} Y\\ {\varvec{q}}_2 \end{array}\right] _2,\psi _2\right) =1.$$

If all of these conditions hold, it returns 1, else 0.

Completeness. If \(x_1,\ldots ,x_n\in \mathcal {Z}\) then \((v_0(r_j)-v(r_j))y(r_j)-1=(\mathsf {sk}-x_j)\sigma _{k(j)}-1=0\) for all j, and thus \((v_0(X)-v(X))y(X) = 1 \mod t(X)\). This implies that h(X) is a well defined polynomial in \(\mathbb {Z}_p[X]\) such that \(e\left( \left[ h(s)\right] _1,[t(s)]_2\right) =e\left( \left[ v_0(s)-v(s)\right] _1,[y(s)]_2\right) -[1]_T\). It is easy to check that

where \({\varvec{y}} = (y_{1,1},\ldots ,y_{m,n})\), and therefore \(\psi _1,\psi _2\) are valid proofs.

Soundness

Theorem 5

Let \(\mathsf {Adv}_{\text {PS}}(\mathcal {A})\) be the advantage of a PPT adversary \(\mathcal {A}\) against the soundness of the scheme. There exist PPT adversaries \(\mathcal {B}_1, \mathcal {B}_2, \mathcal {B}_{3,1}, \mathcal {B}_{3,2},\mathcal {B}_4,\mathcal {B}_5\) such that

[Figure d: upper bound on \(\mathsf {Adv}_{\text {PS}}(\mathcal {A})\) in terms of the advantages of \(\mathcal {B}_1,\dots ,\mathcal {B}_5\)]

Proof. In order to prove soundness we will prove indistinguishability of the following games.

  • \(\mathsf {Real}\): This is the real soundness game. The output is 1 if the adversary produces a false accepting proof, i.e. if there is some \(x_i\notin \mathcal {Z}\) and the verifier accepts the proof.

  • \(\mathsf {Game}_0\): This game is identical to the previous one, except that the commitment key \({\varvec{u}}\) is chosen by the game in order to extract \([{\varvec{x}}]_1\) from \([{\varvec{c}}]_1\).

  • \(\mathsf {Game}_1\): This game is identical to the previous one, except that some \(j^* \leftarrow \{1,\dots ,n\}\) is chosen and the game aborts if the extracted value \([{\varvec{x}}]_1\) is such that \([x_{j^*}]_1\in [\mathcal {Z}]_1\).

  • \(\mathsf {Game}_2\): For \(i=1,\ldots , n\), let \(\alpha _i(X)\) and \(\beta _i\) be the quotient and the remainder, respectively, of dividing \(\ell _i(X)\) by \(X-r_{j^*}\). Let \(\alpha _{n+1}(X)\) and \(\beta _{n+1}\) be the quotient and the remainder of dividing t(X) by \(X-r_{j^*}\). This game is identical to the previous one, except that \(\mathbf {{Q}}_1\) is now a uniformly random matrix conditioned on having rank 1, and for \(i=1,\ldots , n+1\), \(\left[ \varvec{\phi }_{i}\right] _1\) is changed to

    $$\begin{aligned} \left[ \varvec{\phi }_{i}\right] _1= [\alpha _i(s)]_1{\varvec{e}}_2^3+ \beta _i[\varepsilon ]_1{\varvec{e}}_3^3+[\mathbf {{Q}}_1]_1 {\varvec{r}}_i, \end{aligned}$$

    where \({\varvec{e}}_j^3\) is the jth vector of the canonical basis of \(\mathbb {Z}_p^3\), \({\varvec{r}}_{i} \leftarrow \mathbb {Z}_p^3\), \(\varepsilon \leftarrow \mathbb {Z}_p\).

  • \(\mathsf {Game}_3\): Let \(\alpha _i(X)\) and \(\beta _i\) be defined as above. This game is identical to the previous one, except that \(\mathbf {{Q}}_2\) is now a uniformly random matrix conditioned on having rank 1, and each \(\left[ \hat{\varvec{\phi }}_{i}\right] _2\) is now defined as

    $$\begin{aligned} \left[ \hat{\varvec{\phi }}_{i}\right] _2= [\alpha _{i}(s)]_2{\varvec{e}}_2^4+[\beta _i]_2{\varvec{e}}_3^4+\beta _i[\varepsilon ]_2{\varvec{e}}_4^4+ [\mathbf {{Q}}_2]_2\widetilde{{\varvec{r}}}_{i}, \end{aligned}$$

    where \({\varvec{e}}_j^4\) is the jth vector of the canonical basis of \(\mathbb {Z}_p^4\), \(\widetilde{{\varvec{r}}}_{i} \leftarrow \mathbb {Z}_p^4\) and \(\varepsilon \leftarrow \mathbb {Z}_p\) is the same value used in the definition of \([\varvec{\phi }_i]_1\).
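The decomposition used in these games can be checked with a few lines of Python (toy prime and points, all illustrative). Synthetic division by \((X - r_{j^*})\) gives remainder \(\beta _i = \ell _i(r_{j^*})\), which is 1 for \(i = j^*\) and 0 otherwise, and remainder 0 for t(X); consequently the combination \(\sum _i x_i\beta _i\) collapses to \(x_{j^*}\), which is what the modified CRS lets the reduction isolate.

```python
# Toy sketch: remainders of ell_i(X) and t(X) modulo (X - r_{j*}); p, R, jstar
# and x are illustrative values.
p = 10007
R = [1, 2, 3, 4]
jstar = 2                    # the guessed coordinate j*

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def lagrange(i):             # i-th Lagrange basis polynomial (coeffs low-to-high)
    num, den = [1], 1
    for j, r in enumerate(R):
        if j == i: continue
        num = pmul(num, [(-r) % p, 1])
        den = den * (R[i] - r) % p
    return [c * pow(den, p - 2, p) % p for c in num]

def divide_by_linear(a, r):  # a(X) = q(X)*(X - r) + beta, via Horner division
    q, acc = [], 0
    for c in reversed(a):
        q.append(acc)
        acc = (acc * r + c) % p
    q = q[1:]; q.reverse()
    return q, acc            # quotient coefficients, remainder beta = a(r)

x = [10, 20, 30, 40]         # committed values
betas = []
for i in range(len(R)):
    alpha_i, beta_i = divide_by_linear(lagrange(i), R[jstar])
    betas.append(beta_i)
print(betas)                 # [0, 0, 1, 0]: only the j*-th remainder survives

t = [1]
for r in R: t = pmul(t, [(-r) % p, 1])
_, beta_t = divide_by_linear(t, R[jstar])
print(beta_t, sum(xi * bi for xi, bi in zip(x, betas)) % p == x[jstar])  # 0 True
```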

Obviously, the games \(\mathsf {Real}\) and \(\mathsf {Game}_0\) are indistinguishable. The proofs of indistinguishability of \(\mathsf {Game}_1,\mathsf {Game}_2\) and \(\mathsf {Game}_2,\mathsf {Game}_3\) are the same as their analogues in Sect. 3.1, which can be found in the full version. We proceed to prove that in \(\mathsf {Game}_3\) the adversary wins only with negligible probability.

Lemma 5

There exist adversaries \(\mathcal {B}_{3,i}\) against the soundness of \(\varPi _i.\mathsf {LS}\), an adversary \(\mathcal {B}_4\) against \(\mathcal {Z}\text {-}\mathsf {GSDH}\) in \({\mathbb {G}}_1\), and an adversary \(\mathcal {B}_5\) against \(n\text {-}\mathsf {QTSDH}\) such that

Proof

Let \(E_1\) be the event that \(({\varvec{c}},V,{\varvec{q}}_1)\) is not in the image of \(\mathbf {{M}}\), \(E_2\) the event that \((Y,{\varvec{q}}_2)\) is not in the image of \(\mathbf {{N}}\), and \(E_3 = \overline{E_1\cup E_2}\). Then

$$\begin{aligned} \Pr [\mathsf {Game}_3(\mathcal {A})=1]\le&\Pr [\mathsf {Game}_3(\mathcal {A})=1 | E_1] + \Pr [\mathsf {Game}_3(\mathcal {A})=1 | E_2]\,+\nonumber \\&+\,\Pr [\mathsf {Game}_3(\mathcal {A})=1 | E_3], \end{aligned}$$
(6)

and, clearly,

$$\begin{aligned} \Pr [\mathsf {Game}_3(\mathcal {A})=1 | E_1] + \Pr [\mathsf {Game}_3(\mathcal {A})=1 | E_2]\le \mathsf {Adv}_{\varPi _1.\mathsf {LS}}(\mathcal {B}_{3,1})+\mathsf {Adv}_{\varPi _2.\mathsf {LS}}(\mathcal {B}_{3,2}). \end{aligned}$$

We now proceed to bound \(\Pr [\mathsf {Game}_3(\mathcal {A})=1 | E_3]\). Conditioned on \(E_3\), there exist some \({\varvec{x}}^\dag ,{\varvec{w}},\delta _v,{\varvec{r}}_{q.1}\) and \({\varvec{y}}^\dag ,\delta _y,{\varvec{r}}_{q.2}\) such that \(({\varvec{c}},V,{\varvec{q}}_1)^\top =\mathbf {{M}}({\varvec{x}}^\dag ,{\varvec{w}},\delta _v,{\varvec{r}}_{q.1})^\top \) and \((Y,{\varvec{q}}_2)^\top =\mathbf {{N}}({\varvec{y}}^\dag ,\delta _y,{\varvec{r}}_{q.2})^\top \). Given that \({\varvec{c}}\) is perfectly binding, it must be that \({\varvec{x}}={\varvec{x}}^\dag \). It follows that \(V = \sum _{i=1}^n x_i \ell _i(s)+\delta _v t(s) = v(s)\) and \(Y=y^\dag (s)\) for some polynomial \(y^\dag (X) = \sum _{i=1}^m\sum _{j=1}^n y_{i,j}^\dag \sigma _i \ell _j(X)+\delta _y t(X)\). Further, except with probability \(1{\slash }p\), each \({\varvec{e}}^{i}_j\) is linearly independent of the columns of \([\mathbf {{Q}}_1]_1,[\mathbf {{Q}}_2]_2\), so one can extract from \([{\varvec{q}}_1]_1\) (resp. \([{\varvec{q}}_2]_2\)) the coefficients of these vectors in their expression in terms of \([\mathbf {{Q}}_1]_1, {\varvec{e}}^{3}_2,{\varvec{e}}^{3}_3\) (resp. \([\mathbf {{Q}}_2]_2, {\varvec{e}}^{4}_2,{\varvec{e}}^{4}_3,{\varvec{e}}^{4}_4\)), which are:

$$ \left[ \begin{array}{c} \sum \nolimits _{i=1}^{n+1} x_i\alpha _i(s)\\ \sum \nolimits _{i=1}^{n+1} x_i\beta _i\varepsilon \end{array}\right] _1 = \left[ \begin{array}{c} {\alpha (s)}\\ {\beta \varepsilon } \end{array} \right] _1 \text { and } \left[ \begin{array}{c} \sum \nolimits _{i,j=1}^{m,n} y_{i,j}^\dag \sigma _i \alpha _j(s)+\delta _y\alpha _{n+1}(s)\\ \sum \nolimits _{i,j=1}^{m,n} y_{i,j}^\dag \sigma _i{\beta }_j+\delta _y{\beta }_{n+1}\\ \sum \nolimits _{i,j=1}^{m,n} y_{i,j}^\dag \sigma _i{\beta }_j\varepsilon +\delta _y{\beta }_{n+1}\varepsilon \end{array}\right] _2 = \left[ \begin{array}{c} \tilde{\alpha }(s)\\ \tilde{\beta }\\ \tilde{\beta }\varepsilon \end{array}\right] _2 $$

where \(x_{n+1}=\delta _v\), and \(\alpha (X),\tilde{\alpha }(X)\) are the quotients and \(\beta ,\tilde{\beta }\) the remainders of dividing, respectively, v(X) and \(y^\dag (X)\) by \(X-r_{j^*}\).

If we divide both sides of the verification equation by \((s-r_{j^*})\), and write \(v_0(X)=\alpha _0(X)(X-r_{j^*})+\beta _0\) (so that \(\alpha _0(X)=0\) and \(\beta _0=\mathsf {sk}\), since \(v_0(X)\) is constant), we get that

$$\begin{aligned} e\left( [H]_1,\left[ \frac{t(s)}{s-r_{j^*}}\right] _2\right) =&\,\frac{1}{s-r_{j^*}}(e([v_0(s)]_1-[v(s)]_1,[y(s)]_2)-[1]_T)\\ =&\,\frac{1}{s-r_{j^*}}\left[ (v_0(s)-v(s))(\tilde{\alpha }(s)(s-r_{j^*}) + \tilde{\beta })-1\right] _T\\ =&\, [(v_0(s)-v(s))\tilde{\alpha }(s)-\alpha (s)\tilde{\beta }]_T+\left[ \frac{(v_0(s)-\beta )\tilde{\beta }-1}{s-r_{j^*}}\right] _T \end{aligned}$$

Note that \(\beta = v(r_{j^*}) = x_{j^*}\) and \(v_0(s) = \mathsf {sk}\), and thus if \((v_0(s)-\beta )\tilde{\beta }-1=0\), then \(\tilde{\beta }\) is a valid signature for \(x_{j^*}\).
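The division step can be sanity-checked numerically over a toy field (illustrative prime and random values; \(\alpha ,\beta \) and \(\tilde{\alpha },\tilde{\beta }\) denote quotient and remainder of \(v\) and \(y^\dag \) by \(X-r_{j^*}\)); note that since \(\alpha _0(X)=0\), the cross term enters with a minus sign, \(-\alpha (s)\tilde{\beta }\):

```python
# Toy numeric check of the divided verification equation; all values illustrative.
import random
random.seed(1)
p = 10007
inv = lambda a: pow(a % p, p - 2, p)
sk, s, r = 77, 123, 45                 # toy secret key, trapdoor s, point r_{j*}
alpha_s, beta = random.randrange(p), random.randrange(p)    # v(s) = alpha(s)(s-r)+beta
talpha_s, tbeta = random.randrange(p), random.randrange(p)  # y(s) = talpha(s)(s-r)+tbeta
v_s = (alpha_s * (s - r) + beta) % p
y_s = (talpha_s * (s - r) + tbeta) % p
lhs = ((sk - v_s) * y_s - 1) * inv(s - r) % p
rhs = ((sk - v_s) * talpha_s - alpha_s * tbeta
       + ((sk - beta) * tbeta - 1) * inv(s - r)) % p
print(lhs == rhs)   # True
```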

Let \(E_4\) be the event \((v_0(s)-\beta )\tilde{\beta }-1=0\), so that \(\Pr [\mathsf {Game}_3(\mathcal {A})=1 | E_3]\le \Pr [\mathsf {Game}_3(\mathcal {A})=1 | E_4\cap E_3]+\Pr [\mathsf {Game}_3(\mathcal {A})=1 |\overline{E}_4\cap E_3 ]\).

We build an adversary \(\mathcal {B}_4\) against Assumption 6 which receives \(gk,\{[\mathsf {sk}^i]_1,[\mathsf {sk}^i]_2\}_{i\in [m]},[\varepsilon ]_{1,2}\). Essentially, the adversary works as the one described in Sect. 4.1 for the (non-aggregated) set membership argument. It simulates \(\mathsf {Game}_3(\mathcal {A})\) computing all the discrete logarithms of the CRS itself, except for the Boneh-Boyen secret key and \([\varepsilon ]_{1,2}\); the signatures in the CRS are computed as in Sect. 4.1. When \(\mathcal {A}\) outputs \([{\varvec{q}}_1]_1,[{\varvec{q}}_2]_2\), \(\mathcal {B}_4\) extracts \([\beta \varepsilon ]_1,[\tilde{\beta }]_2\) and returns \(([x_{j^* }]_1, [\beta \varepsilon ]_1, [\tilde{\beta }]_2)\). In the case of \(E_4\), we have already argued that \(\tilde{\beta }\) is a valid signature for \(x_{j^*}\), and in this game \(x_{j^*} \notin \mathcal {Z}\). We conclude that \(\Pr [\mathsf {Game}_3(\mathcal {A})=1 | E_4\cap E_3]\le \mathsf {Adv}_{\mathcal {Z}\text {-}\mathsf {GSDH}}(\mathcal {B}_4)\).

We also construct an adversary \(\mathcal {B}_5\) against Assumption 8. It receives as input \([\varepsilon ]_1,[\varepsilon ]_2\), \([s]_1,[s]_2,\ldots ,[s^d]_1,[s^d]_2\) and starts a simulation of \(\mathsf {Game}_3(\mathcal {A})\), sampling honestly the rest of the elements of the CRS. Finally, \(\mathcal {A}\) outputs \([V]_1,[Y]_2,[{\varvec{q}}_1]_1,[{\varvec{q}}_2]_2\) as part of the purported proof for \([{\varvec{c}}]_1\). We show in the following how \(\mathcal {B}_5\) computes \([\nu ]_T := \left[ \frac{(v_0(s)-\beta )\tilde{\beta }-1}{s-r_{j^* }}\right] _T\) and returns \(\left( [v_0(s)-\beta ]_1\right. \), \(\left[ (v_0(s)-\beta )\varepsilon \right] _1\), \([\tilde{\beta }]_2\), \([\tilde{\beta }\varepsilon ]_2\), \(\left. [\nu ]_T\right) \), with \((v_0(s)-\beta )\tilde{\beta }-1\ne 0\), breaking Assumption 8.

The values \([\tilde{\alpha }(s)]_2,[\tilde{\beta }]_2\) and \([\tilde{\beta }\varepsilon ]_2\) are extracted from \([{\varvec{q}}_2]_2\), while \([\alpha (s)]_1,[\beta \varepsilon ]_1\) are extracted from \([{\varvec{q}}_1]_1\) and \([\beta ]_1=[x_{j^* }]_1\) from \([{\varvec{c}}]_1\); further, \(\beta _0 = \mathsf {sk}\), and \([v_0(s)\varepsilon ]_1 = \mathsf {sk}[\varepsilon ]_1\) can be computed by \(\mathcal {B}_5\) because it sampled \(\mathsf {sk}\) itself. The value \([\nu ]_T\) is computed as

$$ [\nu ]_T := e\left( [H]_1,\left[ \frac{t(s)}{s-r_{j^* }}\right] _2\right) - e([v_0(s)]_1-[V]_1,[\tilde{\alpha }(s)]_2) + e\left( [\alpha (s)]_1,[\tilde{\beta }]_2\right) . $$

Zero-Knowledge. The proof of perfect zero-knowledge is essentially the same as for Theorem 2. Note that \([V]_1,[Y]_2,[{\varvec{q}}_1]_1,[{\varvec{q}}_2]_2\) are independent of \({\varvec{x}}\), while \([H]_1\) is the unique solution to the verification equation. Perfect zero-knowledge of the argument of membership in linear spaces implies that the proofs \(\psi _1,\psi _2\) can be simulated with the same distribution as honest proofs.

5 Shuffle Arguments

From our results, we can construct two different shuffle arguments in the CRS model under falsifiable assumptions. They both follow the basic template of the shuffle argument of [15]. Let \([{\varvec{c}}_1]_2,[{\varvec{c}}_2]_2\) be two vectors of n ciphertexts which open, respectively, to the vectors of plaintexts \([{\varvec{m}}_1]_2,[{\varvec{m}}_2]_2\), and suppose we want to prove that \({\varvec{m}}_2\) is a permutation of \({\varvec{m}}_1\). The shuffle argument of [15] consists of the following steps. The CRS includes a vector of group elements \([{\varvec{z}}]_1=([z_1]_1,\dots ,[z_n]_1)\) sampled uniformly and independently. The prover chooses a permutation \([{\varvec{x}}]_1=([x_1]_1,\dots ,[x_n]_1)\) of \([{\varvec{z}}]_1\) and proves: (1) \(x_i\in \mathcal {Z}=\{z_1,\dots ,z_n\}\) for all \(i\in \{1,\dots ,n\}\), (2) \(\sum x_i =\sum z_i\) and (3) \(\sum z_im_{1,i}=\sum x_im_{2,i}\).

The first two steps force \({\varvec{x}}\) to be a permutation of \({\varvec{z}}\): if all \(x_i \in \mathcal {Z}\), their sum equals the sum of all the elements in \(\mathcal {Z}\), and yet \({\varvec{x}}\) is not a permutation, then the prover has found a non-trivial combination of elements of \(\mathcal {Z}\) which equals 0, which is a type of kernel problem. The last step links this fact with \({\varvec{m}}_2\) being a permutation of \({\varvec{m}}_1\).

In both our constructions and in the original argument of [15], Steps (2) and (3) are handled with the following Groth-Sahai equations, in which uppercase letters are variables for which the prover has provided commitments: (2) \(\sum [X_i]_1 =\sum [z_i]_1\) and (3) \(\sum e([z_i]_1,[M_{1,i}]_2)=\sum e([X_i]_1,[M_{2,i}]_2)\).
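In discrete-log "shadow" form (plain field arithmetic standing in for the commitments and pairing-product equations; the prime, dimension and sampled values are illustrative), the three checks pass for an honest shuffle:

```python
# Toy sketch of the three shuffle checks in the exponent; all values illustrative.
import random
random.seed(0)
p, n = 10007, 5
z = random.sample(range(1, p), n)          # CRS exponents z_1..z_n (distinct)
m1 = [random.randrange(p) for _ in range(n)]
pi = list(range(n)); random.shuffle(pi)    # the prover's secret permutation
x  = [z[pi[i]] for i in range(n)]          # permuted copy of z
m2 = [m1[pi[i]] for i in range(n)]         # shuffled plaintexts
ok1 = all(xi in z for xi in x)                          # step (1): membership
ok2 = sum(x) % p == sum(z) % p                          # step (2): equal sums
ok3 = sum(zi * a for zi, a in zip(z, m1)) % p == \
      sum(xi * b for xi, b in zip(x, m2)) % p           # step (3): linking equation
print(ok1 and ok2 and ok3)   # True
```

Step (3) holds because \(\sum x_i m_{2,i} = \sum z_{\pi (i)} m_{1,\pi (i)} = \sum z_i m_{1,i}\) when both vectors are permuted by the same \(\pi \).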

We next specify two different ways of proving Step 1, which results in two different constructions with different performance.

5.1 Unit Vector Argument

The first approach is the closest to the work of González et al. [15]. There, Step (1) is rewritten as proving that \({\varvec{x}}={\varvec{z}}^\top {\varvec{B}}\), for a matrix \({\varvec{B}}=({\varvec{b}}_1|\dots |{\varvec{b}}_n)\in \{0,1\}^{n^2}\), where the \({\varvec{b}}_i\) are unit vectors (not necessarily distinct, as this is handled by Step (2)). The approach of [15] is to adopt a commit-and-prove strategy using arguments for linear spaces and the bitstring argument of [14]. The ‘prove’ part is constant-size, but the ‘commit’ part is a priori quadratic, as we would need to commit to each entry of the matrix \({\varvec{B}}\).
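Concretely (with toy values), the matrix form of Step (1) looks as follows: each column of \({\varvec{B}}\) is a unit vector selecting one entry of \({\varvec{z}}\).

```python
# Toy sketch: x = z^T B with unit-vector columns; p, z and B are illustrative.
p = 10007
z = [5, 9, 13]
B = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]        # column j is a unit vector picking z_{pi(j)}
x = [sum(z[i] * B[i][j] for i in range(3)) % p for j in range(3)]
print(x)               # [13, 5, 9] -- a permutation of z
```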

To overcome this and obtain linear complexity, they switch to shrinking commitments to each row \({\varvec{b}}_i^*\) of \({\varvec{B}}\), which take only two elements each. Obviously these commitments cannot be perfectly binding, and this fact interferes with the extraction step in the soundness proof. However, a key step in their argument is to set up these commitments so that one single coordinate \(j^*\) (which remains unknown to the adversary) is perfectly binding. Thus the corresponding column is uniquely determined and can be extracted in the proof. From here, it is concluded that an adversary cannot cheat in the \(j^*\)-th ciphertext, and since \(j^*\) is unknown to the adversary, general soundness is reduced to this case with a tightness loss of \(1{\slash }n\). Note that this is on top of the factor \(1{\slash }n\) from the bitstring argument, resulting in a soundness loss of \(1/n^2\).

We observe that we can plug in our unit vector argument instead of the one from [14], modified to accept shrinking commitments to each of the rows of \({\varvec{B}}\) as those in [15]. We include an additional game at the beginning of the soundness proof of the unit vector argument, in which we choose a random coordinate and abort if the corresponding commitment is not in the language. From here on, the proof works as in the unit vector argument presented in the full version. This proof inherits the disadvantages of [15], namely the quadratic CRS and the tightness loss in the security reduction, but we improve the proof size from \((4n+17)|{\mathbb {G}}_1|+14|{\mathbb {G}}_2|\) to \((4n+11)|{\mathbb {G}}_1|+8|{\mathbb {G}}_2|\) and our proof still uses falsifiable and static assumptions.

5.2 Argument of Membership in a Set of Group Elements

Another approach to Step 1, instead of the aggregated unit vector proofs, is to prove directly membership in a subset \(\mathcal {Z}=\{[z_1]_1,\dots ,[z_n]_1\}\subset {\mathbb {G}}_1\). Note that the set is witness sampleable and in particular, the discrete logarithms might be known when generating the CRS. More precisely, we want to construct an argument for the language

and, for efficiency, the proof should be aggregated. This can be achieved by modifying the aggregated membership proof in a subset of \(\mathbb {Z}_p\) from Sect. 4.2. Note that there we had \(x\in \mathbb {Z}_p\), and this was necessary to produce the proof; to ensure completeness when the prover knows only \([x]_1\in \mathcal {Z}\subset {\mathbb {G}}_1\), we provide additional elements in the CRS. This is possible because the set is witness sampleable. More precisely, x was involved in the definition of the terms

$$\begin{aligned} \begin{aligned}&[V]_1=[v(s)]_1,\qquad \text { where }v(X)=\sum _{i=1}^n x_{i}\ell _i(X)+\delta _v t(X),\\&\left[ {\varvec{q}}_1\right] _1 = \left[ \sum _{i=1}^n x_i \varvec{\phi }_{i}+ \delta _v \varvec{\phi }_{n+1} + \mathbf {{Q}}_1{\varvec{r}}_{q.1}\right] _1, \end{aligned} \end{aligned}$$

so we include the elements \(\{[z_i\ell _j(s)]_1, [z_i\varvec{\phi }_j]_1\}_{i,j\in \{1,\dots ,n\}}\) in the CRS. The proof works exactly the same, as the reduction needs to open the commitments only in the group.
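To see why these CRS terms suffice, here is a sketch in a toy group (an order-1019 subgroup of \(\mathbb {Z}_{2039}^*\); the generator, trapdoor and set are all illustrative): the prover assembles \([V]_1\) from the precomputed group elements \(g^{z_i\ell _j(s)}\) and \(g^{t(s)}\), knowing only which set element each \(x_j\) equals, never the exponents themselves.

```python
# Toy sketch: computing g^{v(s)} from CRS cross terms; all parameters illustrative.
p, q, g = 1019, 2039, 4          # exponent field Z_p, group modulus q, generator g

def lagrange_at(R, i, s):        # ell_i(s) mod p
    num = den = 1
    for j, r in enumerate(R):
        if j == i: continue
        num = num * (s - r) % p
        den = den * (R[i] - r) % p
    return num * pow(den, p - 2, p) % p

R = [1, 2, 3]
Z = [5, 9, 13]
s = 222                          # trapdoor, known only to the CRS generator
# CRS: group-encoded cross terms [z_i * ell_j(s)]_1 and [t(s)]_1
CRS = {(i, j): pow(g, Z[i] * lagrange_at(R, j, s) % p, q)
       for i in range(len(Z)) for j in range(len(R))}
ts = 1
for r in R: ts = ts * (s - r) % p
G_t = pow(g, ts, q)

k = [2, 0, 1]                    # prover's knowledge: x_j = z_{k(j)}
delta_v = 321                    # blinding
V = pow(G_t, delta_v, q)         # g^{delta_v * t(s)}
for j in range(len(R)):
    V = V * CRS[(k[j], j)] % q   # multiply in g^{z_{k(j)} * ell_j(s)}

# Sanity check against a direct computation of g^{v(s)}:
v_s = (sum(Z[k[j]] * lagrange_at(R, j, s) for j in range(len(R))) + delta_v * ts) % p
print(V == pow(g, v_s, q))   # True
```

The same idea applies to \([{\varvec{q}}_1]_1\) via the terms \([z_i\varvec{\phi }_j]_1\).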

We can use this to prove Step 1 of the shuffle argument above. In this case, the CRS size is still quadratic in the number of ciphertexts, but we avoid losing the second factor \(1{\slash }n\) in the reduction, and the proof consists only of the commitments to \([x_i]_1\) and a constant number of elements. More precisely, the proof size is \((2n+11)|{\mathbb {G}}_1|+8|{\mathbb {G}}_2|\).