
1 Introduction

Bilinear pairing groups have enabled the construction of a plethora of rich cryptographic primitives in the last two decades, starting from the seminal works on three-party key exchange [30] and identity-based encryption (IBE) [11]. In particular, the Groth-Sahai non-interactive zero knowledge (NIZK) proof system [24] for proving algebraic statements over pairing groups has proven to be a powerful tool to construct more efficient advanced cryptographic protocols, such as group signatures [21], anonymous credentials [7], and UC-secure commitment [17] schemes.

Quasi-Adaptive NIZK for Linear Subspaces. There are many applications which require NIZK systems for proving membership in linear subspaces of group vectors. A couple of examples are CCA2-secure public-key encryption via the Naor-Yung paradigm [42], and publicly verifiable CCA2-secure IBE [29].

For proving linear subspace membership, the Groth-Sahai system has a proof size linear in the dimension of the language and the subspace, in terms of number of group elements. To achieve better efficiency, Jutla and Roy proposed a weaker notion [32] called quasi-adaptive NIZK arguments (QA-NIZK), where the common reference string (CRS) may depend on the linear subspace and the soundness is computationally adaptive. For computationally adaptive soundness, the adversary is allowed to submit a proof for its adaptively chosen invalid statement. Based on their work, further improvements [1, 33, 38] gave QA-NIZK systems with constant proof size. This directly led to KDM-CCA2-secure PKE and publicly verifiable CCA2-secure IBE with constant-size ciphertexts.

Structure-Preserving Signature. Structure-Preserving (SP) cryptography [3] has evolved as an important paradigm in designing modular protocols. In order to enable interoperability, it is required for SP primitives to support verification only by pairing product equations, which enable zero-knowledge proofs using Groth-Sahai NIZKs.

Structure-preserving signature (SPS) schemes are the most important building blocks in constructing anonymous credentials [7], voting systems and mix-nets [22], and privacy-preserving point collection [25]. In an SPS, all the public keys, messages, and signatures are group elements, and verification is done by checking pairing-product equations. Constructing SPS is a very challenging task, as traditional group-based signatures use hash functions, which are not structure-preserving.

Tight Security. The security of a cryptographic scheme is proven by constructing a reduction \(\mathcal {R}\) which uses a successful adversary \(\mathcal {A}\) against the security of the scheme to solve some hard problem. Concretely, this argument establishes the relation between the success probability of \(\mathcal {A}\) (denoted by \(\varepsilon _{\mathcal {A}}\)) and that of \(\mathcal {R}\) (denoted by \(\varepsilon _{\mathcal {R}}\)) as \(\varepsilon _{\mathcal {A}}\le \ell \cdot \varepsilon _{\mathcal {R}}+ \mathsf {negl}(\lambda )\), where \(\mathsf {negl}(\lambda )\) is negligible in the security parameter \(\lambda \). The reduction \(\mathcal {R}\) is called tight if \(\ell \) is a small constant and the running time of \(\mathcal {R}\) is approximately the same as that of \(\mathcal {A}\). Most of the recent works consider a variant notion of tight security, called almost tight security, where the only difference is that \(\ell \) may linearly (or, even better, logarithmically) depend on the security parameter \(\lambda \). It is worth mentioning that the security loss in all our schemes is \(O(\log Q)\), where Q is the number of \(\mathcal {A}\)’s queries. We note that \(Q \ll 2^\lambda \) and thus our security loss is much less than \(O(\lambda )\). In this paper, we do not distinguish tight security and almost tight security, but we do provide the concrete security bounds.

Tightly secure schemes are more desirable than their non-tight counterparts, since tightly secure schemes do not need to compensate much for their security loss and allow universal key-length recommendations independent of the envisioned size of an application. In recent years, there have been significant efforts in developing schemes with tight security, such as PKEs [18, 19, 26,27,28], IBEs [9, 13, 29], and signatures [4, 8, 20, 28].

As discussed above, QA-NIZK and SPS are important building blocks for advanced protocols which are embedded in larger scale settings. Designing efficient QA-NIZK and SPS with tight security is very important, since non-tight schemes can result in much larger security loss in the derived protocols.

QA-NIZK: Tight Security or Compact Proofs? Several of the aforementioned applications of QA-NIZK require a stronger security notion, called simulation soundness, where an adversary can adaptively query simulated proofs for vectors either inside or outside the linear subspace and in the end the adversary needs to forge a proof on a vector outside the subspace. We assume that the simulation oracle can be queried by the adversary up to Q times. If \(Q>1\), we call the QA-NIZK scheme unbounded simulation-sound and if \(Q=1\), we call it one-time simulation-sound. Many applications, such as multi-challenge (KDM-)CCA2-secure PKE and CCA2-secure IBE, require unbounded simulation soundness.

If we consider the tightness, CRS size, and proof size of previous works, we have three different flavors of unbounded simulation-sound QA-NIZK schemes: (1) schemes with non-tight security, but compact CRS-es (which only depend on the dimension of the subspace) and constant-size proofs [37]; (2) schemes with tight security and constant-size proofs, but linear-size CRS-es (growing linearly in \(\lambda \)) [18, 29]; and (3) schemes with tight security and compact CRS-es, but linear-size proofs (in the dimension of the language and the subspace) [5, 6].

A few remarks are in order regarding the tightly secure QA-NIZK scheme of Abe et al. [5, 6]. Its proceedings version has a bug, which the authors fixed in the ePrint version [6], but the proof size of the new scheme depends linearly on the dimension of the language and the subspace. More technically, the work of Abe et al. achieves tight simulation soundness via the (structure-preserving) adaptive partitioning technique of [4, 31]. Due to its use of OR proofs (cf. Fig. 1 in their full version [6]), the QA-NIZK proof size ends up being linear in the size of the language and the subspace (in particular, \(|\pi |=O(n_1+n_2)\)). Thus, it remained open and interesting to construct a tightly simulation-sound QA-NIZK with compact CRS-es and constant-size proofs.

SPS: Tightness with Shorter Signatures. In the past few years, substantial progress has been made in improving the efficiency of SPS. So far the schemes with the shortest signatures have 6 signature elements with a non-tight reduction [34] (improving [36]), 12 elements with security loss \(36 \log (Q)\) [6], or 14 elements with security loss \(6 \log (Q)\) [20], where Q is the number of signing queries. Our goal is to construct tightly secure SPS with shorter signatures and smaller security loss.

1.1 Our Contributions

To make progress on the aforementioned two questions, we construct a QA-NIZK scheme with 14 proof elements and an SPS scheme with 11 signature elements, based on the Symmetric eXternal Diffie-Hellman (SXDH) assumption. The security of both schemes is proven with a tight reduction to the Matrix Diffie-Hellman (MDDH) assumption [16], which is an algebraic generalization of Diffie-Hellman assumptions (including SXDH). The security proof gives us algebraic insight into our constructions, and furthermore our constructions can be instantiated under (possibly weaker) linear assumptions beyond SXDH.

Our QA-NIZK scheme is the first one that achieves tight simulation soundness, compact CRS-es and constant-size proofs at the same time. Even among the tightly simulation-sound schemes, our scheme has less security loss. Since it achieves better efficiency, using our scheme immediately improves the efficiency of the applications of QA-NIZK with unbounded simulation soundness, including publicly verifiable CCA2-secure PKE with multiple challenge ciphertexts.

In contrast to the Abe et al. framework [5], we use a simpler and elegant framework to achieve better efficiency. Technically, we make novel use of the recent core lemma from [20] to construct a designated-verifier QA-NIZK (DV-QA-NIZK) and then compile it to (publicly verifiable) QA-NIZK by using the bilinearity of pairings. As a by-product, we achieve a tightly secure DV-QA-NIZK, where the verifier holds a secret verification key.

Let \(\mathcal {L}_{[\mathbf {{M}}]_1}:= \{ [\mathbf {y} ]_1\in \mathbb {G}_1^{n_1} : \exists \mathbf {w}\in \mathbb {Z}_p^{n_2} \text { such that } \mathbf {y} = \mathbf {{M}} \mathbf {w} \}\) be a linear subspace, where \(\mathbf {{M}} \in \mathbb {Z}_p^{n_1\times n_2}\) and \(n_1> n_2\). We compare the efficiency and security loss of QA-NIZK schemes in Table 1. Here we instantiate our schemes (in both Tables 1 and 2) based on the SXDH assumption for a fair comparison.

Table 1. Comparison of unbounded simulation-sound QA-NIZK schemes for proving membership in \( \mathcal {L}_{[\mathbf {{M}}]_1}\). \(|\mathsf {crs}|\) and \(|\pi |\) denote the size of CRS-es and proofs in terms of numbers of group elements. For asymmetric pairings, notation (xy) means x elements in \(\mathbb {G}_1\) and y elements in \(\mathbb {G}_2\). Q denotes the number of simulated proofs and \(\lambda \) is the security parameter.

Our second contribution is a more efficient tightly secure SPS. It contains 11 signature elements and \(n_1+15\) public key elements, while the scheme from [5] contains 12 and \(3n_1+23\) elements, respectively, where \(n_1\) denotes the number of group elements in a message vector. We give a comparison between our scheme and previous ones in Table 2. Compared with GHKP18, our construction has shorter signatures and fewer pairing-product equations (PPEs) with the same level of security loss. Compared with AJOR18, our construction has shorter signatures and tighter security, but slightly more PPEs. We leave constructing an SPS with the same signature size and security loss but fewer PPEs as an interesting open problem. As an important building block of our SPS, we propose the notion of designated-prover OR proof systems for a unilateral language, where a prover holds a secret proving key and the language is defined over a single group. We believe that it is of independent interest.

Table 2. Comparison of structure-preserving signatures for message space \(\mathbb {G}^{n_1}\) (in their most efficient variants). “\(|\mathsf {m}|\)”, “\(|\sigma |\)”, and “\(|\mathsf {vk}|\)” denote the size of messages, signatures, and public keys in terms of numbers of group elements. Q denotes the number of signing queries. “\(\#\) PPEs” denotes the number of pairing-product equations. “NL” denotes the number of non-linear equations, which involve signature elements in both groups. “L1” denotes the number of linear equations in the group \(\mathbb {G}_1\). “L2” denotes the number of linear equations in the group \(\mathbb {G}_2\).

1.2 Our QA-NIZK: Technical Overview

The Kiltz-Wee Framework. In contrast to the work of Abe et al. [5], our construction is motivated by the simple Kiltz-Wee framework [37], where they implicitly constructed a simulation-sound DV-QA-NIZK and then compiled it to a simulation-sound QA-NIZK with pairings. However, their simulation-sound DV-QA-NIZK is not tight. In the following, we focus on constructing a tightly simulation-sound DV-QA-NIZK. By a similar “DV-QA-NIZK \( \rightarrow \) QA-NIZK” transformation as in [37], we derive our QA-NIZK with shorter proofs and tighter simulation soundness in the end.

The DV-QA-NIZK in [37] is essentially a simple hash proof system [14] for the linear language \(\mathcal {L}_{[\mathbf {{M}}]_1}\): to prove that \([\mathbf {y}]_1 = [\mathbf {{M}} \mathbf {x}]_1\) for some \(\mathbf {x} \in \mathbb {Z}_p^{n_2}\), the prover outputs a proof as \(\pi := [\mathbf {x}^\top \mathbf {{p}}]_1\), where the projection \([\mathbf {{p}}]_1:= [\mathbf {{M}}^\top \mathbf {k}]_1\) is published in the CRS. With the vector \(\mathbf {k}\) as the secret verification key, a designated verifier can check whether \(\pi = [\mathbf {y}^\top \mathbf {k}]_1\). By using \(\mathbf {k}\) as a simulation trapdoor, a zero-knowledge simulator can return the simulated proof as \(\pi :=[\mathbf {y}^\top \mathbf {k}]_1\), due to the following equation:

$$\begin{aligned} \mathbf {x}^\top \mathbf {{p}} = \mathbf {x}^\top (\mathbf {{M}}^\top \mathbf {k}) = \mathbf {y}^\top \mathbf {k} . \end{aligned}$$

Soundness is guaranteed by the fact that the value \(\mathbf {y}^{*\top } \mathbf {k}\) is uniformly random, given \(\mathbf {{M}}^\top \mathbf {k}\), if \(\mathbf {y}^*\) is outside the span of \(\mathbf {{M}}\).
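To make the algebra concrete, the following is a minimal Python sketch of this designated-verifier scheme; it computes directly with exponents over \(\mathbb {Z}_p\) rather than with group elements (so it only illustrates the equations above, not the hiding provided by the group encoding), and the prime and dimensions are illustrative placeholders.

```python
import secrets

p = 2**61 - 1   # a toy prime; real schemes use the order of a pairing group
n1, n2 = 4, 2   # dimensions with n1 > n2

def rand_vec(n):
    return [secrets.randbelow(p) for _ in range(n)]

def mat_vec(rows, v):
    # multiply a matrix (given as a list of rows) by a vector, modulo p
    return [sum(a * b for a, b in zip(row, v)) % p for row in rows]

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % p

# Language parameter M (n1 x n2), secret verification key k, public projection p = M^T k
M = [rand_vec(n2) for _ in range(n1)]
k = rand_vec(n1)
p_proj = mat_vec(list(zip(*M)), k)   # rows of M^T are the columns of M

def prove(x):        # witness x with y = M x
    return inner(x, p_proj)          # pi = x^T (M^T k)

def simulate(y):     # zero-knowledge simulator, uses the trapdoor k
    return inner(y, k)               # pi = y^T k

def verify(y, pi):   # designated verifier holds k
    return pi == inner(y, k)

x = rand_vec(n2)
y = mat_vec(M, x)
assert verify(y, prove(x)) and prove(x) == simulate(y)
```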

Affine MACs and Unbounded Simulation Soundness. To achieve unbounded simulation soundness, we need to hide the information about \(\mathbf {k}\) in all the \(Q_s\)-many simulation queries, in particular the information outside the span of \(\mathbf {{M}}^\top \). The Kiltz-Wee solution is to blind the term \(\mathbf {y}^\top \mathbf {k}\) with a 2-universal hash proof system. Via a non-tight reduction, this hash proof system can be proved to be a pseudorandom affine message authentication code (MAC), a notion proposed in [9]. Technically, unbounded simulation soundness requires the underlying affine MAC to be pseudorandom against multiple challenge queries. This notion was later formalized in [29], and it is stronger than the original security notion in [9]. Because of that, the affine MAC based on the Naor-Reingold PRF in [9] cannot be directly used to construct a tightly simulation-sound QA-NIZK.

Gay et al. [18] constructed a tightly secure unbounded simulation-sound QA-NIZK. Essentially, their tightly PCA-secure PKE against multiple challenge ciphertexts is a pseudorandom affine MAC against multiple challenge queries. They then use this MAC to blind the term \(\mathbf {y}^\top \mathbf {k}\). However, this tight solution has a large CRS: the number of group elements in the CRS is linear in the security parameter. That is because the number of \(\mathbb {Z}_p\) elements in the underlying affine MAC secret key is also linear in the security parameter, and these \(\mathbb {Z}_p\) elements are later converted into group elements in the CRS of the QA-NIZK. To the best of our knowledge, current pairing-based affine MACs enjoy either tight security with linear-size secret keys, or constant-size secret keys with non-tight security. Therefore, it may be more promising to develop a new method, other than affine MACs, to hide \(\mathbf {y}^\top \mathbf {k}\) with a compact CRS and tight security.

Our Solution. We solve the above dilemma by a novel use of the core lemma from [20]. To give more details, we fix some matrices \(\mathbf {{A}}_0, \mathbf {{A}}_1 \in \mathbb {Z}_p^{2k\times k}\), choose a random vector \(\mathbf {k}'\) and consider \(\mu :=([\mathbf {t}]_1, [u']_1,\pi ')\) that has the distribution:

$$\begin{aligned} \mathbf {t} := \mathbf {{A}}_0 \mathbf {s} ~\text {for}~ \mathbf {s} \xleftarrow {\$}\mathbb {Z}_p^{k}, \qquad u' := \mathbf {t}^\top \mathbf {k}', \qquad \pi ' \leftarrow \mathsf {Prove}_{\mathsf {or}}(\mathsf {crs}_{\mathsf {or}},[\mathbf {t}]_1,\mathbf {s}). \end{aligned}$$
(1)

In a nutshell, the NIZK proof \(\pi '\) guarantees that \(\mathbf {t}\) lies in the disjunction space \(\mathsf {Span}(\mathbf {{A}}_0)\cup \mathsf {Span}(\mathbf {{A}}_1)\) and, by introducing randomness in the “right” space, the core lemma shows that \([u']_1\) is pseudorandom with a tight reduction. The core lemma itself is not a MAC scheme, since it does not have message inputs, although it has been used to construct a tightly secure (non-affine) MAC in [20].

A “naive” attempt: Using the Core Lemma. To have unbounded simulation soundness, our first attempt is to use the pseudorandom value \([u']_1\) to directly blind the term \(\mathbf {y}^\top \mathbf {k}\) from the DV-QA-NIZK with only adaptive soundness in a straightforward way. Then the resulting DV-QA-NIZK outputs the proof \(([\mathbf {t}]_1, [u]_1,\pi ')\), which has the following distribution:

$$\begin{aligned} \mathbf {t} := \mathbf {{A}}_0 \mathbf {s} ~\text {for}~ \mathbf {s} \xleftarrow {\$}\mathbb {Z}_p^{k}, \qquad u := \mathbf {y}^\top \mathbf {k} + \mathbf {t}^\top \mathbf {k}', \qquad \pi ' \leftarrow \mathsf {Prove}_{\mathsf {or}}(\mathsf {crs}_{\mathsf {or}},[\mathbf {t}]_1,\mathbf {s}). \end{aligned}$$
(2)

In order to publicly generate a proof for a valid statement \([\mathbf {y}]_1= [\mathbf {{M}} \mathbf {x}]_1\) with witness \(\mathbf {x} \in \mathbb {Z}_p^{n_2}\), we publish \([\mathbf {{M}}^\top \mathbf {k}]_1,[\mathbf {{A}}_0^\top \mathbf {k}']_1\) and CRS for generating \(\pi '\) in the CRS of our DV-QA-NIZK. Verification is done with designated verification key \((\mathbf {k}, \mathbf {k}')\). Zero knowledge can be proven using \((\mathbf {k}, \mathbf {k}')\).

However, when we try to prove the unbounded simulation soundness, we run into a problem. The core lemma shows the following two distributions are tightly indistinguishable:

$$\begin{aligned} \textsf {REAL}:= \bigl ( [\mathbf {t}_i]_1,\ [\mathbf {t}_i^\top \mathbf {k}']_1,\ \pi '_i \bigr )_{i} \qquad \text {and}\qquad \textsf {RAND}:= \bigl ( [\mathbf {t}_i]_1,\ [\mathbf {t}_i^\top \mathbf {k}'_i]_1,\ \pi '_i \bigr )_{i}, \end{aligned}$$

where \(\mathbf {k}'_i\xleftarrow {\$}\mathbb {Z}_p^{2k}\) and \(i=1,\ldots ,Q\). In the proof of unbounded simulation soundness, we switch from \(\textsf {REAL}\) to \(\textsf {RAND}\), and then we can argue that all our simulated proofs are random, since \(\mathbf {y}^\top \mathbf {k}\) is blinded by the random value \(\mathbf {t}_i^\top \mathbf {k}'_i\). Unfortunately, here we cannot use an information-theoretic argument to show that an adversary cannot compute a forgery for an invalid statement: an adversary can reuse the key \(\mathbf {k}'_j\) of the j-th (\(1\le j \le Q\)) simulation query on some \([\mathbf {y}_j]_1 \in \mathsf {Span}([\mathbf {{M}}']_1)\) with \(\mathsf {Span}([\mathbf {{M}}']_1) \cap \mathsf {Span}([\mathbf {{M}}]_1)= \{ [\mathbf {0}]_1 \}\); given the additional information about \(\mathbf {{M}}'^\top \mathbf {k}\) leaked by this query, the adversary can compute a valid proof for another invalid statement \(\mathbf {y}^* \in \mathsf {Span}(\mathbf {{M}}')\).

Moreover, this straightforward scheme has an attack: An adversary can ask for a simulated proof \(\pi :=([\mathbf {t}]_1,[u]_1, \pi ')\) on an invalid \([\mathbf {y}]_1\). Then it computes \(([2 \mathbf {t}]_1,[2u]_1)\) and adapts the OR proof \(\pi '\) accordingly to \(\hat{\pi }\). The proof \(\pi ^*:=([2 \mathbf {t}]_1,[2u]_1,\hat{\pi })\) is a valid proof for an invalid statement \([ \mathbf {y}^*]_1:=[2 \mathbf {y}]_1\notin \mathsf {Span}([\mathbf {{M}}]_1)\).
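Continuing the toy \(\mathbb {Z}_p\) arithmetic from the sketch above, the following snippet checks this malleability numerically; the OR proof \(\pi '\) is omitted, since the argument above is that it can be adapted to the scaled statement.

```python
import secrets

p = 2**61 - 1
rand_vec = lambda n: [secrets.randbelow(p) for _ in range(n)]
inner = lambda a, b: sum(x * y for x, y in zip(a, b)) % p

n1, kk = 4, 1                          # SXDH-like setting: t has length 2k = 2
key, key_prime = rand_vec(n1), rand_vec(2 * kk)

def naive_verify(y, t, u):             # u should equal y^T k + t^T k'
    return u == (inner(y, key) + inner(t, key_prime)) % p

# A simulated proof (t, u) on some (possibly invalid) statement y
y, t = rand_vec(n1), rand_vec(2 * kk)
u = (inner(y, key) + inner(t, key_prime)) % p

# Malleability: scaling the whole proof by 2 gives a valid proof for 2*y.
# (The OR proof pi' is omitted; the text argues it can be adapted to 2*t.)
y2, t2, u2 = [2 * a % p for a in y], [2 * a % p for a in t], 2 * u % p
assert naive_verify(y, t, u) and naive_verify(y2, t2, u2)
```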

From Failure to Success via Pairwise Independence. The above problem arises from the malleability of the “naive” attempt. We introduce non-malleability by replacing \(\mathbf {k}\) with a pairwise-independent function of a tag. More precisely, let \(\tau \in \mathbb {Z}_p\) be a tag; our DV-QA-NIZK proof is still \(([\mathbf {t}]_1,[u]_1,\pi ')\) with \(([\mathbf {t}]_1, \pi ')\) as in Eq. (2), but

$$u:= \mathbf {y}^\top (\mathbf {k}_0 + \tau \mathbf {k}_1)+\mathbf {t}^\top \mathbf {k}'.$$

We assume that all the tags in the simulated proofs and the forgery are distinct, which can be achieved by using a collision-resistant hash as \(\tau := H([\mathbf {y}]_1,[\mathbf {t}]_1,\pi ') \in \mathbb {Z}_p\). Given \(\mathbf {k}'_j\), the adversary can only see \(\mathbf {y}_j^\top (\mathbf {k}_0 + \tau _j \mathbf {k}_1)\) from the j-th query, and for all the other queries the random values \(\mathbf {t}_i^\top \mathbf {k}'_i\) (\(i\ne j\)) hide the information about \(\mathbf {k}_0\) and \(\mathbf {k}_1\). Given \(\mathbf {k}_0 + \tau _j \mathbf {k}_1\) for a tag \(\tau _j\), pairwise independence guarantees that even for a computationally unbounded adversary it is hard to compute \(\mathbf {k}_0 + \tau ^* \mathbf {k}_1\) for any \(\tau ^* \ne \tau _j\). Thus, unbounded simulation soundness follows. Details are presented in Sect. 3.1. In a nutshell, we use the pseudorandom element \([u']_1\) from the core lemma to hide \([\mathbf {y}^\top (\mathbf {k}_0+\tau \mathbf {k}_1)]_1\) from a one-time simulation-sound DV-QA-NIZK.
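The pairwise independence used here can be made explicit: for distinct tags \(\tau _j \ne \tau ^*\), the pair \((\mathbf {k}_0 + \tau _j \mathbf {k}_1,\ \mathbf {k}_0 + \tau ^* \mathbf {k}_1)\) is an invertible linear image of \((\mathbf {k}_0, \mathbf {k}_1)\):

$$\begin{aligned} \begin{pmatrix} \mathbf {k}_0 + \tau _j \mathbf {k}_1 \\ \mathbf {k}_0 + \tau ^* \mathbf {k}_1 \end{pmatrix} = \begin{pmatrix} 1 & \tau _j \\ 1 & \tau ^* \end{pmatrix} \begin{pmatrix} \mathbf {k}_0 \\ \mathbf {k}_1 \end{pmatrix}, \qquad \det \begin{pmatrix} 1 & \tau _j \\ 1 & \tau ^* \end{pmatrix} = \tau ^* - \tau _j \ne 0. \end{aligned}$$

Hence, for uniformly random \((\mathbf {k}_0,\mathbf {k}_1)\), the value \(\mathbf {k}_0 + \tau ^* \mathbf {k}_1\) remains uniformly distributed even conditioned on \(\mathbf {k}_0 + \tau _j \mathbf {k}_1\); the same holds for the part of \((\mathbf {k}_0,\mathbf {k}_1)\) that is not determined by \(\mathbf {{M}}^\top \mathbf {k}_0\) and \(\mathbf {{M}}^\top \mathbf {k}_1\), which is exactly the part that determines \(\mathbf {y}^{*\top }(\mathbf {k}_0+\tau ^*\mathbf {k}_1)\) for \(\mathbf {y}^*\notin \mathsf {Span}(\mathbf {{M}})\).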

From Designated to Public Verification. What is left to do is to convert our DV-QA-NIZK scheme into a QA-NIZK. Intuitively, we first make u publicly verifiable via the (tuned) Groth-Sahai proof technique, and then modify the QA-NIZK so that we can embed the secret key of our DV-QA-NIZK into it without changing the view of the adversary. Then we can extract a forgery for the \(\mathsf {USS}\) experiment of the DV-QA-NIZK from the forgery by the adversary. Similar ideas have been used in many previous works [9, 12, 20, 33, 36, 37].

1.3 Our SPS: Technical Overview

The recent SPS schemes exploit the adaptive partitioning paradigm [4, 19, 27] to achieve tight security. In this paradigm, NIZK for OR languages [23, 43] plays an important role, but at the same time it also incurs a high cost. Our basic idea is to replace the full-fledged OR proof system proposed by Gay et al. [20] with one in the designated-prover setting, where a prover is allowed to use a secret proving key. Intuitively, it is easier to achieve an efficient scheme in such a setting, since it is subject to fewer restrictions. In fact, the previous SPS scheme in [5] already exploited the designated-prover setting to reduce the proof size. However, it only works for a bilateral OR language (i.e., one out of two words lies in the linear span of its corresponding space), while an OR proof for a unilateral language (i.e., a single word lies in the linear span of either one of two spaces) is required in the construction of [20]. Thus, some new technique is necessary for solving this problem.

For ease of exposition, we focus on the SXDH setting now, where the following OR language is under consideration:

$$ {\mathcal {L}}_1:= \{[\mathbf {y}]_1\in \mathbb {G}_1^{2}\mid \exists r\in \mathbb {Z}_p:[\mathbf {y}]_1= [\mathbf {{A}}_0]_1\cdot r \vee [\mathbf {y}]_1= [\mathbf {{A}}_1]_1\cdot r\}. $$

Let \(\mathbf {{A}}_1=(a,\ b)^\top \). We observe that \({\mathcal {L}}_1\) is equivalent to the following language.

$$ {\mathcal {L}}_2:= \{[y_0, y_1]_1^\top \in \mathbb {G}_1^{2}\mid \exists x,x'\in \mathbb {Z}_p:[y_1]_1-[y_0]_1\cdot \frac{b}{a}= [x]_1\wedge [\mathbf {y}]_1\cdot x= [\mathbf {{A}}_0]_1\cdot x' \}. $$

Specifically, when \(x=0\), we have \([y_1]_1-[y_0]_1\cdot \frac{b}{a}=[0]_1\), i.e., \([y_0, y_1]_1^\top \) is in the span of \(\mathbf {{A}}_1\). Otherwise, we have \([\mathbf {y}]_1= [\mathbf {{A}}_0]_1\cdot \frac{x'}{x}\), i.e., \([y_0, y_1]_1^\top \) is in the span of \(\mathbf {{A}}_0\). Note that this language is an “AND-language” now. More importantly, a witness consists only of 2 scalars and a statement consists only of 3 equations. Hence, when applying the Groth-Sahai proof [15, 24], the proof size will be only 7 (4 elements for committing to the witness and 3 elements for the equations), which is shorter than the well-known OR proof in [43] (10 elements). However, the statement now contains \(\frac{b}{a}\), which may leak information on a witness. To avoid this, we make \(\frac{b}{a}\) part of the witness and store its commitment (which consists of 2 group elements) in the common reference string. By doing this, we can ensure that the information on \(\frac{b}{a}\) will not be leaked and that \(\frac{b}{a}\) is always “fixed”, due to the hiding and binding properties of commitments, respectively. Also, notice that this does not increase the size of proofs at all. This scheme satisfies perfect soundness, and zero-knowledge can be tightly reduced to the SXDH assumption. Since the prover has to use \(\frac{b}{a}\) to generate a witness for \({\mathcal {L}}_2\) given a witness for \({\mathcal {L}}_1\), this scheme only works in the designated-prover setting. However, notice that when simulating the proof, \(\mathbf {{A}}_0\) and \(\mathbf {{A}}_1\) are not necessary, which is a crucial property when applying it to the partitioning paradigm.
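The following Python snippet sanity-checks this witness transformation over \(\mathbb {Z}_p\); group encodings and the Groth-Sahai commitments are omitted, and the degenerate case where \(\mathbf {{A}}_0\) and \(\mathbf {{A}}_1\) span the same line is ignored.

```python
import secrets

p = 2**61 - 1
rnd = lambda: secrets.randbelow(p - 1) + 1      # nonzero scalar mod p
inv = lambda z: pow(z, p - 2, p)                # modular inverse (p is prime)

# A_0 = (a0, b0)^T and A_1 = (a, b)^T span two random lines in Z_p^2
a0, b0, a, b = rnd(), rnd(), rnd(), rnd()
boa = b * inv(a) % p                            # the ratio b/a used in L_2

def l2_witness(y, branch, r):
    """Map an L1 witness (branch, r) with y = A_branch * r to an L2 witness (x, x')."""
    y0, y1 = y
    x = (y1 - y0 * boa) % p                     # x = 0 exactly when y is in Span(A_1)
    return x, (r * x % p if branch == 0 else 0)

def l2_check(y, x, xp):
    y0, y1 = y
    return (y1 - y0 * boa) % p == x and \
           (y0 * x % p, y1 * x % p) == (a0 * xp % p, b0 * xp % p)

r = rnd()
for branch, y in [(0, (a0 * r % p, b0 * r % p)), (1, (a * r % p, b * r % p))]:
    assert l2_check(y, *l2_witness(y, branch, r))
```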

We further generalize this scheme to one under the \(\mathcal {D}_k\)-MDDH assumption for a fixed k. The proof size becomes \(O(k^3)\), and the zero-knowledge property can be reduced to the \(\mathcal {D}_k\)-MDDH assumption with almost no security loss.

Replacing the OR proof system of [20] with our designated-prover one immediately yields the most efficient tightly secure SPS to date. We refer the reader to Table 2 for a comparison between our scheme and the previous ones.

Additionally, we give another designated-prover OR proof scheme where the proof size is \(O(k^2)\), which is smaller than that of the above scheme when \(k> 1\). As a trade-off, it incurs a security loss of k. When \(k=1\), its efficiency is the same as that of our original designated-prover OR proof scheme described above. In symmetric groups, we adapt the designated-prover OR proof to obtain the most efficient full NIZK (i.e., one with public prover and verifier algorithms) for OR languages based on the \(\mathcal {D}_k\)-MDDH assumptions to date.

2 Preliminaries

Notations. We denote an empty string as \(\epsilon \). We use \(x\xleftarrow {\$}\mathcal {S}\) to denote the process of sampling an element x from the set \(\mathcal {S}\) uniformly at random. For positive integers \(k>1, \eta \in \mathbb {Z}^+\) and a matrix \(\mathbf {{A}} \in \mathbb {Z}_p^{(k+\eta ) \times k}\), we denote the upper square matrix of \(\mathbf {{A}}\) by \(\overline{\mathbf {{A}}} \in \mathbb {Z}_p^{k \times k}\) and the lower \(\eta \) rows of \(\mathbf {{A}}\) by \(\underline{\mathbf {{A}}} \in \mathbb {Z}_p^{\eta \times k}\). Similarly, for a column vector \(\mathbf {v} \in \mathbb {Z}_p^{k+\eta }\), we denote the upper k elements by \(\overline{\mathbf {v}} \in \mathbb {Z}_p^{k}\) and the lower \(\eta \) elements of \(\mathbf {v}\) by \(\underline{\mathbf {v}} \in \mathbb {Z}_p^{\eta }\). For a bit string \(m\in \{0,1\}^{n}\), \(m_i\) denotes the ith bit of \(m\) (\(i\le n\)) and \(m_{|i}\) denotes the first i bits of \(m\).

All our algorithms are probabilistic polynomial time unless stated otherwise. If \(\mathcal {A}\) is a probabilistic polynomial time algorithm, then we write \(a\xleftarrow {\$}\mathcal {A}(b)\) to denote the random variable output by \(\mathcal {A}\) on input b.

Games. We follow [9] in using code-based games for defining and proving security. A game \(\mathsf {G}\) contains procedures \(\textsc {Init}\) and \(\textsc {Finalize}\), and some additional procedures \(\textsc {P}_1,\ldots , \textsc {P}_n\), which are defined in pseudo-code. All variables in a game are initialized to 0, and all sets are empty (denoted by \(\emptyset \)). An adversary \(\mathcal {A}\) is executed in game \(\mathsf {G}\) (denoted by \({\mathsf {G}}^{\mathcal {A}}\)) if it first calls \(\textsc {Init}\), obtaining its output. Next, it may make arbitrary queries to \(\textsc {P}_i\) (according to their specification) and obtain their outputs, where the total number of queries is denoted by Q. Finally, it makes one single call to \(\textsc {Finalize}(\cdot )\) and stops. We use \( \mathsf {G}^\mathcal {A}\Rightarrow d\) to denote that \(\mathsf {G}\) outputs d after interacting with \(\mathcal {A}\), where d is the output of \(\textsc {Finalize}\).

2.1 Collision Resistant Hash Functions

Let \(\mathcal {H}\) be a family of hash functions \(H:\{0,1\}^*\rightarrow \{0,1\}^{\lambda }\). We assume that it is efficient to sample a function from \(\mathcal {H}\), which is denoted by \(H\xleftarrow {\$}\mathcal {H}\).

Definition 1

(Collision resistance). We say a family of hash functions \({\mathcal H}\) is collision-resistant (\(\mathsf {CR}\)) if for all adversaries \(\mathcal {A}\)

$$\begin{aligned} \mathsf {Adv}_{\mathcal {H},\mathcal {A}}^{\mathsf {cr}}(\lambda ):=\Pr \left[ x\ne x' \wedge H(x)=H(x') \mid H\xleftarrow {\$}\mathcal {H},\ (x,x')\xleftarrow {\$}\mathcal {A}(1^\lambda ,H)\right] \end{aligned}$$

is negligible.

2.2 Pairing Groups and Matrix Diffie-Hellman Assumptions

Let \(\mathsf {GGen}\) be a probabilistic polynomial time (PPT) algorithm that on input \(1^\lambda \) returns a description \(\mathcal {G}:=(\mathbb {G}_1,\mathbb {G}_2,\mathbb {G}_T,p,{P}_1,{P}_2,e)\) of asymmetric pairing groups, where \(\mathbb {G}_1\), \(\mathbb {G}_2\), \(\mathbb {G}_T\) are cyclic groups of order p for a \(\lambda \)-bit prime p, \({P}_1\) and \({P}_2\) are generators of \(\mathbb {G}_1\) and \(\mathbb {G}_2\), respectively, and \(e: \mathbb {G}_1 \times \mathbb {G}_2\rightarrow \mathbb {G}_T\) is an efficiently computable (non-degenerate) bilinear map. Define \({P}_T:=e({P}_1, {P}_2)\), which is a generator of \(\mathbb {G}_T\). In this paper, we only consider Type III pairings, where \(\mathbb {G}_1 \ne \mathbb {G}_2 \) and there is no efficient homomorphism between them.

We use implicit representation of group elements as in [16]. For \(s \in \{1,2,T\}\) and \(a \in \mathbb {Z}_p\) define \([a]_s = a {P}_s\in \mathbb {G}_s\) as the implicit representation of a in \(\mathbb {G}_s\). Similarly, for a matrix \(\mathbf {{A}} = (a_{ij}) \in \mathbb {Z}_p^{n\times m}\) we define \([\mathbf {{A}}]_s\) as the implicit representation of \(\mathbf {{A}}\) in \(\mathbb {G}_s\). \(\mathsf {Span}(\mathbf {{A}}):=\{\mathbf {{A}}\mathbf {r}|\mathbf {r}\in \mathbb {Z}_p^m\}\subset \mathbb {Z}_p^{n}\) denotes the linear span of \(\mathbf {{A}}\), and similarly \(\mathsf {Span}([\mathbf {{A}}]_s):=\{[\mathbf {{A}}\mathbf {r}]_s |\mathbf {r}\in \mathbb {Z}_p^m\}\subset \mathbb {G}_s^{n}\). Note that it is efficient to compute \([\mathbf {{AB}}]_s\) given \(([\mathbf {{A}}]_s,\mathbf {{B}})\) or \((\mathbf {{A}},[\mathbf {{B}}]_s)\) with matching dimensions. We define \([\mathbf {{A}}]_1 \circ [\mathbf {{B}}]_2:= e([\mathbf {{A}}]_1,[\mathbf {{B}}]_2) = [\mathbf {{A}} \mathbf {{B}}]_T\), which can be efficiently computed given \([\mathbf {{A}}]_1\) and \([\mathbf {{B}}]_2\).
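As a toy illustration of this notation (useful only for symbolically checking pairing-product equations, not as a cryptographic group), one can model \([a]_s\) by its exponent and the pairing \(\circ \) by multiplication of exponents, e.g. in Python:

```python
from dataclasses import dataclass

p = 2**61 - 1   # toy group order; a real instantiation uses pairing-friendly curves

@dataclass(frozen=True)
class Elem:
    """[a]_s: implicit representation of a in group G_s, s in {1, 2, 'T'}."""
    a: int
    s: object

    def __add__(self, other):        # group operation: [a]_s + [b]_s = [a + b]_s
        assert self.s == other.s
        return Elem((self.a + other.a) % p, self.s)

    def scale(self, x):              # scalar multiplication: x * [a]_s = [x * a]_s
        return Elem(self.a * x % p, self.s)

def pair(g1, g2):                    # e([a]_1, [b]_2) = [a * b]_T
    assert g1.s == 1 and g2.s == 2
    return Elem(g1.a * g2.a % p, 'T')

# [A]_1 o [B]_2 = [A B]_T, checked here for 1x1 "matrices"
a, b = 5, 7
assert pair(Elem(a, 1), Elem(b, 2)) == Elem(a * b % p, 'T')
assert Elem(a, 1) + Elem(b, 1) == Elem((a + b) % p, 1)
```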

Next we recall the definition of the Matrix Decisional Diffie-Hellman (\(\mathsf {MDDH}\)) [16] and related assumptions [41].

Definition 2

(Matrix distribution). Let \(k,\ell \in \mathbb {N}\) with \(\ell >k\). We call \(\mathcal {D}_{\ell ,k}\) a matrix distribution if it outputs matrices in \(\mathbb {Z}_p^{\ell \times k}\) of full rank k in polynomial time. By \(\mathcal {D}_k\) we denote \(\mathcal {D}_{k+1,k}\).

Without loss of generality, we assume the first k rows of \(\mathbf {{A}}\xleftarrow {\$}\mathcal {D}_{\ell ,k}\) form an invertible matrix. For a matrix \(\mathbf {{A}}\xleftarrow {\$}\mathcal {D}_{\ell ,k}\), we define the set of kernel matrices of \(\mathbf {{A}}\) as

$$\begin{aligned} \mathsf {ker}(\mathbf {{A}}):= \{ \mathbf {a}^\perp \in \mathbb {Z}_p^{(\ell -k) \times \ell } \mid \mathbf {a}^\perp \cdot \mathbf {{A}} = \mathbf {0} \in \mathbb {Z}_p^{(\ell -k ) \times k} \text { and }\mathbf {a}^\perp \text { has rank } (\ell -k) \}. \end{aligned}$$

Given a matrix \(\mathbf {{A}}\) over \(\mathbb {Z}_p^{\ell \times k}\), it is efficient to sample an \(\mathbf {a}^\perp \) from \(\mathsf {ker}(\mathbf {{A}})\).

The \(\mathcal {D}_{\ell ,k}\)-Matrix Diffie-Hellman problem is to distinguish the two distributions \(([\mathbf {{A}}], [\mathbf {{A}} \mathbf {w}])\) and \(([\mathbf {{A}} ],[\mathbf {u}])\), where \(\mathbf {{A}}\xleftarrow {\$}\mathcal {D}_{\ell ,k}\), \(\mathbf {w}\xleftarrow {\$}\mathbb {Z}_p^{k}\), and \(\mathbf {u}\xleftarrow {\$}\mathbb {Z}_p^{\ell }\).

Definition 3

(\(\mathcal {D}_{\ell ,k}\)-matrix decisional Diffie-Hellman assumption). Let \(\mathcal {D}_{\ell ,k}\) be a matrix distribution and \(s \in \{1,2,T\}\). We say that the \(\mathcal {D}_{\ell ,k}\)-Matrix Diffie-Hellman (\(\mathcal {D}_{\ell ,k}\text{- }\mathsf {MDDH}\)) is hard relative to \(\mathsf {GGen}\) in group \(\mathbb {G}_s\) if for all PPT adversaries \(\mathcal {A}\), it holds that

$$\begin{aligned} \mathsf {Adv}_{\mathbb {G}_s,\mathcal {D}_{\ell ,k},\mathcal {A}}^{\mathsf {mddh}}(\lambda ):=\left| \Pr [\mathcal {A}(\mathcal {G},[\mathbf {{A}}]_s,[\mathbf {{A}}\mathbf {w}]_s)=1]-\Pr [\mathcal {A}(\mathcal {G},[\mathbf {{A}}]_s,[\mathbf {u}]_s)=1]\right| \end{aligned}$$

is negligible in the security parameter \(\lambda \), where the probability is taken over \(\mathcal {G}\xleftarrow {\$}\mathsf {GGen}(1^\lambda )\), \(\mathbf {{A}}\xleftarrow {\$}\mathcal {D}_{\ell ,k}\), \(\mathbf {w}\xleftarrow {\$}\mathbb {Z}_p^{k}\), and \(\mathbf {u}\xleftarrow {\$}\mathbb {Z}_p^{\ell }\).

We define the Kernel Diffie-Hellman assumption \(\mathcal {D}_{k}\text{- }\mathsf {KerMDH}\) [41] which is a natural search variant of the \(\mathcal {D}_{k}\text{- }\mathsf {MDDH}\) assumption.

Definition 4

(\(\mathcal {D}_{k}\)-kernel Diffie-Hellman assumption, \(\mathcal {D}_{k}\text{- }\mathsf {KerMDH}\)). Let \(\mathcal {D}_{k}\) be a matrix distribution and \(s \in \{1,2\}\). We say that the \(\mathcal {D}_{k}\)-kernel Matrix Diffie-Hellman (\(\mathcal {D}_{k}\text{- }\mathsf {KerMDH}\)) is hard relative to \(\mathsf {GGen}\) in group \(\mathbb {G}_s\) if for all PPT adversaries \(\mathcal {A}\), it holds that

$$\begin{aligned} \mathsf {Adv}_{\mathbb {G}_s,\mathcal {D}_{k},\mathcal {A}}^{\mathsf {kmdh}}(\lambda ):=\Pr \left[ \mathbf {c}^\top \mathbf {{A}}=\mathbf {0}\ \wedge \ \mathbf {c}\ne \mathbf {0}\ \Big |\ [\mathbf {c}]_{3-s}\xleftarrow {\$}\mathcal {A}(\mathcal {G},[\mathbf {{A}}]_s)\right] \end{aligned}$$

is negligible in the security parameter \(\lambda \), where the probability is taken over \(\mathcal {G}\xleftarrow {\$}\mathsf {GGen}(1^\lambda )\) and \(\mathbf {{A}}\xleftarrow {\$}\mathcal {D}_{k}\).

The following lemma shows that the \(\mathcal {D}_{k}\text{- }\mathsf {KerMDH}\) assumption is a relaxation of the \(\mathcal {D}_k\text{- }\mathsf {MDDH}\) assumption since one can use a non-zero vector in the kernel of \(\mathbf {{A}}\) to test membership in the column space of \(\mathbf {{A}}\).

Lemma 1

(\(\mathcal {D}_k\text{- }\mathsf {MDDH}\Rightarrow \mathcal {D}_{k}\text{- }\mathsf {KerMDH}\) [41]). For any matrix distribution \(\mathcal {D}_k\), if \(\mathcal {D}_{k}\text{- }\mathsf {MDDH}\) is hard relative to \(\mathsf {GGen}\) in group \(\mathbb {G}_s\), then \(\mathcal {D}_{k}\text{- }\mathsf {KerMDH}\) is hard relative to \(\mathsf {GGen}\) in group \(\mathbb {G}_s\).
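The following Python sketch illustrates this membership test over \(\mathbb {Z}_p\) for the \(\mathcal {D}_2\) case (\(\ell =3\), \(k=2\)): a kernel vector can be obtained as the cross product of the two columns of \(\mathbf {{A}}\), and since the kernel is one-dimensional the test is exact; in the group setting the same inner product would be evaluated in the exponent via the pairing.

```python
import secrets

p = 2**61 - 1
rand = lambda: secrets.randbelow(p)

# D_2 illustration: A is a random 3 x 2 matrix, full rank with overwhelming probability
A = [[rand(), rand()] for _ in range(3)]
c1, c2 = [row[0] for row in A], [row[1] for row in A]

# Over Z_p, the "cross product" of the two columns is orthogonal to both of them,
# and ker(A^T) has dimension 3 - 2 = 1, so the test below is exact.
a_perp = [(c1[1] * c2[2] - c1[2] * c2[1]) % p,
          (c1[2] * c2[0] - c1[0] * c2[2]) % p,
          (c1[0] * c2[1] - c1[1] * c2[0]) % p]
assert all(sum(a_perp[i] * col[i] for i in range(3)) % p == 0 for col in (c1, c2))

def in_column_space(y):
    """y in Span(A)  iff  a_perp . y == 0 (mod p)."""
    return sum(a_perp[i] * y[i] for i in range(3)) % p == 0

w = [rand(), rand()]
y_in = [(A[i][0] * w[0] + A[i][1] * w[1]) % p for i in range(3)]
y_out = [rand(), rand(), rand()]     # a uniform vector lies outside Span(A) w.h.p.
assert in_column_space(y_in) and not in_column_space(y_out)
```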

For \(Q > 1\), \(\mathbf {{W}}\xleftarrow {\$}\mathbb {Z}_p^{k\times Q}\), and \(\mathbf {{U}}\xleftarrow {\$}\mathbb {Z}_p^{\ell \times Q}\), consider the Q-fold \(\mathcal {D}_{\ell ,k}\text{- }\mathsf {MDDH}\) problem, which is to distinguish the distributions \(([\mathbf {{A}}], [\mathbf {{A}} \mathbf {{W}}])\) and \(([\mathbf {{A}}], [\mathbf {{U}}])\). That is, the Q-fold \(\mathcal {D}_{\ell ,k}\text{- }\mathsf {MDDH}\) problem contains Q independent instances of the \(\mathcal {D}_{\ell ,k}\text{- }\mathsf {MDDH}\) problem (with the same \(\mathbf {{A}}\) but different \(\mathbf {w}_i\)). The following lemma shows that the two problems are tightly equivalent and the reduction only loses a constant factor \(\ell -k\).

Lemma 2

(Random self-reducibility [16]). For \(\ell >k\) and any matrix distribution \(\mathcal {D}_{\ell ,k}\), \(\mathcal {D}_{\ell ,k}\text{- }\mathsf {MDDH}\) is random self-reducible. In particular, for any \(Q \ge 1\), if \(\mathcal {D}_{\ell ,k}\text{- }\mathsf {MDDH}\) is hard relative to \(\mathsf {GGen}\) in group \(\mathbb {G}_s\), then Q-fold \(\mathcal {D}_{\ell ,k}\text{- }\mathsf {MDDH}\) is hard relative to \(\mathsf {GGen}\) in group \(\mathbb {G}_s\), where \(\mathsf {T}(\mathcal {B})\approx \mathsf {T}(\mathcal {A})+Q\cdot \mathsf {poly}(\lambda )\) and

$$\mathsf {Adv}_{\mathbb {G}_s,\mathcal {D}_{\ell ,k},\mathcal {A}}^{Q\text{- }\mathsf {mddh}}(\lambda )\le (\ell -k)\mathsf {Adv}_{\mathbb {G}_s,\mathcal {D}_{\ell ,k},\mathcal {B}}^{\mathsf {mddh}}(\lambda )+\frac{1}{p-1}.$$

The boosting lemma in [35] shows that the \(\mathcal {D}_{2k,k}\)-\(\mathsf {MDDH}\) assumption reduces to the \(\mathcal {D}_{k} \text{- }\mathsf {MDDH}\) assumption with a security loss of a factor of k.

2.3 Non-interactive Zero-Knowledge Proof

In this section, we follow [24, 37] to recall the notion of a non-interactive zero-knowledge proof [10] and then an instantiation for an OR-language.

Let \(\mathsf {par}\) be the public parameter and \({\mathcal L}=\{{{\mathcal L}}_{\mathsf {par}}\}\) be a family of languages with an efficiently computable witness relation \({\mathcal R}_{{\mathcal L}}\). The definition is as follows.

Definition 5

(Non-interactive zero-knowledge proof [24]). A non-interactive zero-knowledge proof (\(\mathsf {NIZK}\)) for \({\mathcal L}\) consists of five PPT algorithms \(\varPi =({\mathsf {Gen}}, {\mathsf {TGen}},{\mathsf {Prove}}, {\mathsf {Ver}}, {\mathsf {Sim}})\) such that:

  • \({\mathsf {Gen}}(\mathsf {par})\) returns a common reference string \(\mathsf {crs}\).

  • \({\mathsf {TGen}}(\mathsf {par})\) returns \(\mathsf {crs}\) and a trapdoor \(\mathsf {td}\).

  • \({\mathsf {Prove}}(\mathsf {crs}, x, w)\) returns a proof \(\pi \).

  • \({\mathsf {Ver}}(\mathsf {crs},x,\pi )\) returns 1 (accept) or 0 (reject). Here, \({\mathsf {Ver}}\) is deterministic.

  • \({\mathsf {Sim}}(\mathsf {crs},\mathsf {td},x)\) returns a proof \(\pi \).

Perfect completeness is satisfied if for all \(\mathsf {crs}\in {\mathsf {Gen}}(1^\lambda ,\mathsf {par})\), all \(x\in {\mathcal L}\), all witnesses w such that \({\mathcal R}_{{\mathcal L}}(x,w)=1\), and all \(\pi \in {\mathsf {Prove}}(\mathsf {crs},x,w)\), we have

$$\begin{aligned} {\mathsf {Ver}}(\mathsf {crs},x,\pi )=1. \end{aligned}$$

Zero-knowledge is satisfied if for all PPT adversaries \(\mathcal {A}\) we have that

$$\begin{aligned} \mathsf {Adv}_{\varPi ,\mathcal {A}}^{\mathsf {zk}}(\lambda ):=\left| \Pr \left[ \mathcal {A}^{{\mathsf {Prove}}(\mathsf {crs},\cdot ,\cdot )}(\mathsf {crs})=1\mid \mathsf {crs}\xleftarrow {\$}{\mathsf {Gen}}(\mathsf {par})\right] -\Pr \left[ \mathcal {A}^{Sim(\mathsf {crs},\cdot ,\cdot )}(\mathsf {crs})=1\mid (\mathsf {crs},\mathsf {td})\xleftarrow {\$}{\mathsf {TGen}}(\mathsf {par})\right] \right| \end{aligned}$$

is negligible, where \(Sim(\mathsf {crs},x,w)\) returns \({\mathsf {Sim}}(\mathsf {crs},\mathsf {td},x)\) if \({\mathcal R}_{{\mathcal L}}(x,w)=1\) and aborts otherwise.

Perfect soundness is satisfied if for all \(\mathsf {crs}\in {\mathsf {Gen}}(\mathsf {par})\), all words \(x\notin {\mathcal L}\), and all proofs \(\pi \), it holds that \({\mathsf {Ver}}(\mathsf {crs},x,\pi )=0\).

Notice that Gay et al. [20] adopted a stronger notion of composable zero-knowledge. However, one can easily see that the standard notion we defined above is enough for their constructions, as well as for ours introduced later. Also, we can define perfect zero-knowledge, which requires \(\mathsf {Adv}_{\varPi ,\mathcal {A}}^{\mathsf {zk}}(\lambda )=0\), and computational soundness, which requires that for all PPT adversaries \(\mathcal {A}\) and all words \(x\notin {\mathcal L}\),

$$\begin{aligned} \mathsf {Adv}_{\varPi ,\mathcal {A}}^{\mathsf {snd}}(\lambda ):=\Pr \left[ {\mathsf {Ver}}(\mathsf {crs},x,\pi )=1\mid \mathsf {crs}\xleftarrow {\$}{\mathsf {Gen}}(\mathsf {par}),\ \pi \xleftarrow {\$}\mathcal {A}(\mathsf {crs},x)\right] \end{aligned}$$

is negligible.

\(\mathsf {NIZK}\) for an OR-Language. Let \(\mathcal {G}\leftarrow \mathsf {GGen}(1^\lambda )\), \(k\in \mathbb {N}\), \(\mathbf {{A}}_0,\mathbf {{A}}_1\xleftarrow {\$}\mathcal {D}_{2k,k}\), and \(\mathsf {par}:=(\mathcal {G},[\mathbf {{A}}_0]_1,[\mathbf {{A}}_1]_1)\). We refer the reader to the full paper for a \(\mathsf {NIZK}\) proof scheme, which was previously presented in [37] and also implicitly given in [23, 43], for the OR-language

$$ \mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}:= \{[\mathbf {x}]_1\in \mathbb {G}_1^{2k}\mid \exists \mathbf {r}\in \mathbb {Z}_p^k:[\mathbf {x}]_1= [\mathbf {{A}}_0]_1\cdot \mathbf {r} \vee [\mathbf {x}]_1= [\mathbf {{A}}_1]_1\cdot \mathbf {r}\}. $$

It will be used as a building block of our \(\mathsf {QANIZK}\) proof.

2.4 Quasi-Adaptive Zero-Knowledge Argument

The notion of Quasi-Adaptive Zero-Knowledge Argument (\(\mathsf {QANIZK}\)) was proposed by Jutla and Roy [32], where the common reference string \(\mathsf {CRS}\) depends on the specific language for which proofs are generated. In the following, we recall the definition of \(\mathsf {QANIZK}\) [18, 37]. For simplicity, we only consider arguments for linear subspaces.

Let \(\mathsf {par}\) be the public parameters for \(\mathsf {QANIZK}\) and \(\mathcal {D}_{\mathsf {par}}\) be a probability distribution over a collection of relations \(R=\{R_{[\mathbf {{M}}]_1}\}\) parametrized by a matrix \([\mathbf {{M}}]_1 \in \mathbb {G}_1^{n_1\times n_2}\) (\(n_1> n_2\)) with associated language \({\mathcal L}_{[\mathbf {{M}}]_1}=\{ [\mathbf {t}]_1:\exists \mathbf {w}\in \mathbb {Z}_p^{n_2} \text { s.t. } [\mathbf {t}]_1= [\mathbf {{M}} \mathbf {w}]_1\}\). We consider witness-sampleable distributions [32], where there is an efficiently sampleable distribution \(\mathcal {D}'_{\mathsf {par}}\) that outputs \(\mathbf {{M}}' \in \mathbb {Z}_p^{n_1\times n_2}\) such that \([\mathbf {{M}}']_1\) is distributed identically to \([\mathbf {{M}}]_1\). We note that the matrix distribution in Definition 2 is witness-sampleable.

We define the notions of \(\mathsf {QANIZK}\), designated-prover \(\mathsf {QANIZK}\) (\(\mathsf {DPQANIZK}\)), designated-verifier \(\mathsf {QANIZK}\) (\( \mathsf {DVQANIZK}\)), and designated-prover-verifier \(\mathsf {QANIZK}\) (\( \mathsf {DPVQANIZK}\)) as follows.

Definition 6

(\(\mathsf {QANIZK}\)). Let \(\mathsf {X}\in \{ \epsilon , \mathsf {DP},\mathsf {DV},\mathsf {DPV}\}\). An \(\mathsf {XQANIZK}\) for a language distribution \(\mathcal {D}_{\mathsf {par}}\) consists of four PPT algorithms \(\varPi =(\mathsf {Gen},\mathsf {Prove},\mathsf {Ver},\mathsf {Sim})\).

  • \(\mathsf {Gen}(\mathsf {par},[\mathbf {{M}}]_1)\) returns a common reference string \(\mathsf {crs}\), a prover key \(\mathsf {prk}\), a verifier key \(\mathsf {vrk}\) and a simulation trapdoor \(\mathsf {td}\):

    • \(\mathsf {X}= \epsilon \) iff \(\mathsf {prk}=\mathsf {vrk}=\epsilon \).

    • \(\mathsf {X}= \mathsf {DP}\) iff \(\mathsf {vrk}=\epsilon \).

    • \(\mathsf {X}=\mathsf {DV}\) iff \(\mathsf {prk}=\epsilon \).

    • \(\mathsf {X}=\mathsf {DPV}\) iff \(\mathsf {prk}\ne \epsilon \) and \(\mathsf {vrk}\ne \epsilon \).

  • \(\mathsf {Prove}(\mathsf {crs},\mathsf {prk},\)\( [\mathbf {y}]_1,\mathbf {w})\) returns a proof \(\pi \).

  • \(\mathsf {Ver}(\mathsf {crs},\mathsf {vrk},\)\([\mathbf {y}]_1,\pi )\) returns 1 (accept) or 0 (reject). Here, \(\mathsf {Ver}\) is a deterministic algorithm.

  • \(\mathsf {Sim}(\mathsf {crs},\mathsf {td},\)\([\mathbf {y}]_1)\) returns a simulated proof \(\pi \).

Perfect completeness is satisfied if for all \(\lambda \), all \([\mathbf {{M}}]_1\), all \(([\mathbf {y}]_1,\mathbf {w})\) with \([\mathbf {y}]_1=[\mathbf {{M}}\mathbf {w}]_1\), all \((\mathsf {crs},\mathsf {prk},\mathsf {vrk}, \mathsf {td}) \in \mathsf {Gen}(\mathsf {par},[\mathbf {{M}}]_1)\), and all \(\pi \in \mathsf {Prove}(\mathsf {crs},\mathsf {prk},\)\([\mathbf {y}]_1,\mathbf {w})\), we have

$$\begin{aligned} \mathsf {Ver}(\mathsf {crs},\mathsf {vrk},{} [\mathbf {y}]_1,\pi )=1. \end{aligned}$$

Perfect zero knowledge is satisfied if for all \(\lambda \), all \([\mathbf {{M}}]_1\), all \(([\mathbf {y}]_1,\mathbf {w})\) with \([\mathbf {y}]_1=[\mathbf {{M}}\mathbf {w}]_1\), and all \((\mathsf {crs},\mathsf {prk},\mathsf {vrk},\mathsf {td})\in \mathsf {Gen}(\mathsf {par},[\mathbf {{M}}]_1)\), the following two distributions are identical:

$$\begin{aligned} \mathsf {Prove}(\mathsf {crs},\mathsf {prk},[\mathbf {y}]_1,\mathbf {w})~~\text{ and }~~\mathsf {Sim}(\mathsf {crs},\mathsf {td},[\mathbf {y}]_1). \end{aligned}$$

We define the (unbounded) simulation soundness for all types of \(\mathsf {QANIZK}\).

Definition 7

(Unbounded simulation soundness). Let \(\mathsf {X}\in \{ \epsilon , \mathsf {DP},\mathsf {DV},\mathsf {DPV}\}\). An \(\mathsf {XQANIZK}\) \(\varPi :=(\mathsf {Gen},\mathsf {Prove},\mathsf {Ver},\mathsf {Sim})\) is unbounded simulation sound (\(\mathsf {USS}\)) if for any adversary \(\mathcal {A}\),

$$ \mathsf {Adv}_{\varPi ,\mathcal {A}}^{\mathsf {uss}}(\lambda ):=\Pr [\mathsf {USS}^{\mathcal {A}} \Rightarrow 1] $$

is negligible, where Game \(\mathsf {USS}\) is defined in Fig. 1.

Fig. 1. \(\mathsf {USS}\) security game for \(\mathsf {XQANIZK}\).

Weak \(\mathsf {USS}\). We can also consider a weak notion of simulation soundness, in the sense that it is only required that \([\mathbf {y}^*]_1\notin {\mathcal Q}_{\mathsf {sim}}\).

Witness-Sampleable Distributions. Here we define simulation soundness for witness-sampleable distributions, namely, \(\textsc {Init}\) gets \(\mathbf {{M}} \in \mathbb {Z}_p^{n_1\times n_2}\) as input; note that the proofs of our \(\mathsf {DVQANIZK}\) and \(\mathsf {QANIZK}\) schemes do not require the explicit \(\mathbf {{M}}\) over \(\mathbb {Z}_p\). In all the standard definitions of (simulation) soundness of \(\mathsf {QANIZK}\) for linear subspaces, the challenger needs information on \(\mathbf {{M}}\) in \(\mathbb {Z}_p\) (not necessarily the whole matrix) to check whether the target word \([\mathbf {y}^*]_1\) is inside the language \(\mathsf {Span}([{\mathbf {{M}}}]_1)\). This information can be a non-zero kernel vector of \(\mathbf {{M}}\) (either in \(\mathbb {Z}_p\) or in \(\mathbb {G}_2\)). We can also define \(\mathsf {USS}\) with respect to non-witness-sampleable distributions, and our security proofs introduced later still hold with straightforward modifications. In this case, we have to allow the challenger to use super-polynomial computational power to check whether \([\mathbf {y}^*]_1\in \mathsf {Span}(\mathbf {{M}})\), and then the \(\mathsf {USS}\) game becomes non-falsifiable; otherwise, we have to assume that the attacker always outputs \([\mathbf {y}^*]_1\notin \mathsf {Span}({\mathbf {{M}}})\) in \(\mathsf {USS}\). In fact, we note that many constructions and applications of simulation-sound \(\mathsf {QANIZK}\)s consider witness-sampleable distributions (cf. [18, 29, 32, 38]).

2.5 Structure-Preserving Signature

We now recall the notion of structure-preserving signature (\(\mathsf {SPS}\)) [3] and unforgeability against chosen message attacks (\(\mathsf {UF\text {-}CMA}\)).

Definition 8

(Signature). A signature scheme is a tuple of PPT algorithms \(\mathsf {SIG}:=(\mathsf {Gen},\mathsf {Sign},\mathsf {Ver})\) such that:

  • \(\mathsf {Gen}(\mathsf {par})\) returns a verification/signing key pair \((\mathsf {vk},\mathsf {sk})\).

  • \(\mathsf {Sign}(\mathsf {sk}, \mathsf {m})\) returns a signature \(\sigma \) for \(\mathsf {m}\in \mathcal {M}\).

  • \(\mathsf {Ver}(\mathsf {vk},\mathsf {m},\sigma )\) returns 1 (accept) or 0 (reject). Here \(\mathsf {Ver}\) is deterministic.

Correctness is satisfied if for all \(\lambda \in \mathbb {N}\), all \(\mathsf {m}\in \mathcal {M}\), and all \((\mathsf {vk},\mathsf {sk}) \in \mathsf {Gen}(\mathsf {par})\),

$$\begin{aligned} \mathsf {Ver}(\mathsf {vk},\mathsf {m},\mathsf {Sign}(\mathsf {sk},\mathsf {m}))=1. \end{aligned}$$

Definition 9

(Structure-preservation). A signature scheme is said to be structure-preserving if its verification keys, signing messages, and signatures consist only of group elements and verification proceeds via only a set of pairing product equations.

Definition 10

(\(\mathsf {UF\text {-}CMA}\) security). For a signature scheme \(\mathsf {SIG}:=(\mathsf {Gen}, \mathsf {Sign},\mathsf {Ver})\) and any adversary \(\mathcal {A}\), we define the following experiment:

Fig. 2. \(\mathsf {UF\text {-}CMA}\) security game for \(\mathsf {SIG}\).

A signature scheme \(\mathsf {SIG}\) is unforgeable against chosen message attacks (\(\mathsf {UF\text {-}CMA}\)) if for all PPT adversaries \(\mathcal {A}\),

$$ \mathsf {Adv}^{\mathsf {uf\text{- }cma}}_{\mathsf {SIG},\mathcal {A}}(\lambda ):= \Pr [\mathsf {UF\text {-}CMA}^{\mathcal {A}} \Rightarrow 1] $$

is negligible, where Game \(\mathsf {UF\text {-}CMA}\) is defined in Fig. 2.

3 Quasi-Adaptive NIZK

In this section, we construct a \(\mathsf {QANIZK}\) with tight simulation soundness. As a stepping stone, we develop a \(\mathsf {DVQANIZK}\) based on the Matrix Diffie-Hellman assumption. By using the Kernel Matrix Diffie-Hellman assumption and pairings, our \(\mathsf {DVQANIZK}\) gives us a more efficient \(\mathsf {QANIZK}\). All the security reductions in this section are tight.

The Core Lemma. We recall the useful core lemma from [20], which can computationally introduce randomness. More precisely, it shows that moving from experiment \(\mathsf {Core}_{0}\) to \(\mathsf {Core}_{1}\) can (up to negligible terms) only increase the winning chances of an adversary.

Fig. 3. Security games \(\mathsf {Core}_{0}\) and \(\mathsf {Core}_{1}\) for the core lemma. \(\mathbf {RF}: \mathbb {Z}_p\rightarrow \mathbb {Z}_p^{2k}\) is a random function. All the code is executed in both games, except the boxed code, which is only executed in \(\mathsf {Core}_{1}\).

Lemma 3

(Core lemma). If the \(\mathcal {D}_k\)-\(\mathsf {MDDH}\) assumption holds in the group \(\mathbb {G}_2\), and \(\varPi ^{\mathsf {or}}={({\mathsf {Gen}}_{\mathsf {or}},\mathsf {TGen}_{\mathsf {or}},\mathsf {Prove}_{\mathsf {or}},\mathsf {Ver}_{\mathsf {or}},\mathsf {Sim}_{\mathsf {or}})}\) is a \(\mathsf {NIZK}\) for \(\mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}\) with perfect completeness, perfect soundness, and zero-knowledge, then for any adversary \(\mathcal {A}\) against the core lemma, there exist adversaries \(\mathcal {B}\), \(\mathcal {B}^\prime \) with running time \(T(\mathcal {B}) \approx T(\mathcal {B}^\prime ) \approx T(\mathcal {A}) + Q\cdot \mathsf {poly}(\lambda )\) such that

$$\begin{aligned} \mathsf {Adv}_{\mathcal {A}}^{\mathsf {core}}(\lambda ) :=&\Pr [\mathsf {Core}_{0}^\mathcal {A}\Rightarrow 1] - \Pr [\mathsf {Core}_{1}^\mathcal {A}\Rightarrow 1] \\ \le&(4k \lceil \log Q \rceil +2)\cdot \mathsf {Adv}_{\mathbb {G}_2,\mathcal {D}_{2k,k},\mathcal {B}}^{\mathsf {mddh}}(\lambda )+(2\lceil \log Q \rceil +2)\cdot \mathsf {Adv}_{\mathsf {NIZK},\mathcal {B}^\prime }^{\mathsf {zk}}(\lambda ) \\ {}&+\lceil \log Q \rceil \cdot \varDelta _{\mathcal {D}_{2k,k}}+ \frac{4\lceil \log Q \rceil +2}{p-1} + \frac{\lceil \log Q \rceil \cdot Q}{p}, \end{aligned}$$

where \(\varDelta _{\mathcal {D}_{2k,k}}\) is a statistically small term for \(\mathcal {D}_{2k ,k}\).

In a slight departure from [20], we include the term \([\mathbf {{A}}_0^\top \mathbf {k}]_1\) in \(\mathsf {crs}\). We argue that the core lemma still holds for the following reasons (in terms of notation, our \(\mathbf {k}\) is their \(\mathbf {k}_0\)):

  • The main purpose of \(\mathbf {k}\) is to introduce the constant random function \(\mathbf {F}_0(\epsilon )\) in the transition from \(\mathsf {G}_2\) to \(\mathsf {G}_{3.0}\) in Lemma 4 in [20]. The same argument still holds, given \([\mathbf {{A}}_0^\top \mathbf {k}]_1\).

  • The randomization of Lemma 5 in [20] is done by switching \([\mathbf {t}]_1\) into the right span, and this can be done independent of \(\mathbf {k}\). Additionally, we note that, given \([\mathbf {{A}}_0^\top \mathbf {k}]_1\), one cannot efficiently compute \([\mathbf {t}^\top \mathbf {k}]_1\) without knowing \(\mathbf {s}\in \mathbb {Z}_p^k\) s.t. \(\mathbf {t}=\mathbf {{A}}_0 \mathbf {s}\).

We give some brief intuition about the proof of the lemma here. Similar to [20], we re-randomize \(\mathbf {k}\) via a sequence of hybrid games. In the i-th hybrid game, we set \(u=\mathbf {t}^\top (\mathbf {k} +\mathbf {RF}_i(\mathsf {c}_{|i}))\), where \(\mathbf {RF}_i\) is a random function and \(\mathsf {c}_{|i}\) denotes the first i-bit prefix of the counter \(\mathsf {c}\) for queries to \(\textsc {Eval}_{\mathsf {core}}\). To proceed from the i-th game to the \((i+1)\)-th, we choose \(\mathbf {t}\in \mathsf {Span}(\mathbf {{A}}_{\mathsf {c}_{i+1}})\) in \(\textsc {Eval}_{\mathsf {core}}\), depending on the \((i+1)\)-th bit of \(\mathsf {c}\). We note that the view of the adversary does not change, due to the \(\mathcal {D}_{2k,k}\)-\(\mathsf {MDDH}\) assumption. Then, as in [20], we can construct \(\mathbf {RF}_{i+1}\) from \(\mathbf {RF}_{i}\) in such a way that it satisfies \(\mathbf {t}^\top \mathbf {RF}_{i+1}(\mathsf {c}_{|i+1}) = \mathbf {t}^\top \mathbf {RF}_{i}(\mathsf {c}_{|i})\). The main difference is that our \(\mathbf {RF}_{i+1}\) additionally satisfies \(\mathbf {{A}}_0^\top (\mathbf {k}+\mathbf {RF}_{i+1}(0^{i+1}))= \mathbf {{A}}_0^\top (\mathbf {k}+\mathbf {RF}_i(0^i))\); namely, it not only re-randomizes \(\mathbf {k}\) but also ensures that the \(\mathbf {{A}}_0^\top \mathbf {k}\) part in \(\mathsf {crs}\) is always independent of all the \(u'\)-s generated by \(\textsc {Eval}_{\mathsf {core}}\). We furthermore make consistent changes to \(\textsc {Finalize}_{\mathsf {core}}\) as in [20]. We refer the reader to the full paper for the full proof.

3.1 Stepping Stone: Designated-Verifier QA-NIZK

Let \(\mathcal {G}\leftarrow \mathsf {GGen}(1^\lambda )\), \(\mathsf {par}:=\mathcal {G}\), \(k\in \mathbb {N}\), \(\mathcal {H}\) be a collision-resistant hash function family, and \(\varPi ^{\mathsf {or}}:=({\mathsf {Gen}}_{\mathsf {or}},\mathsf {Prove}_{\mathsf {or}},\mathsf {Ver}_{\mathsf {or}})\) be a \(\mathsf {NIZK}\) system for language \(\mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}\). Our \(\mathsf {DVQANIZK}\) \( \varPi ^{\mathsf {dv}}:=(\mathsf {Gen},\mathsf {Prove},\mathsf {Ver},\mathsf {Sim}) \) is defined as in Fig. 4. We note that our scheme can be easily extended to a tag-based scheme by putting the label \(\ell \) inside the hash function. Thus, our scheme can be used in all the applications that require tag-based \(\mathsf {DVQANIZK}\).

Fig. 4. Construction of \(\varPi ^{\mathsf {dv}}:=(\mathsf {Gen},\mathsf {Prove},\mathsf {Ver},\mathsf {Sim})\).
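Since the figure itself is not reproduced here, the following Python sketch reconstructs the algorithms of \(\varPi ^{\mathsf {dv}}\) from the completeness and verification equations used in the proof of Theorem 1 below; it works over \(\mathbb {Z}_p\) exponents, stubs out the OR proof \(\varPi ^{\mathsf {or}}\) and the group encoding, and is an illustration of the structure rather than the exact scheme of Fig. 4.

```python
import hashlib, secrets

p = 2**61 - 1
rand_vec = lambda n: [secrets.randbelow(p) for _ in range(n)]
inner = lambda a, b: sum(x * y for x, y in zip(a, b)) % p
mat_vec = lambda M, v: [inner(row, v) for row in M]                       # M v
mat_T_vec = lambda M, v: [sum(M[i][j] * v[i] for i in range(len(M))) % p
                          for j in range(len(M[0]))]                      # M^T v

def H(*parts):                  # stand-in for a collision-resistant hash into Z_p
    return int.from_bytes(hashlib.sha256(repr(parts).encode()).digest(), 'big') % p

n1, n2, kdim = 4, 2, 1          # SXDH-style parameters: A_0 is 2k x k = 2 x 1
M = [rand_vec(n2) for _ in range(n1)]
A0 = [rand_vec(kdim) for _ in range(2 * kdim)]

# Gen: designated verification key (k0, k1, kk); the CRS carries only the projections
k0, k1, kk = rand_vec(n1), rand_vec(n1), rand_vec(2 * kdim)
p0, p1, pA = mat_T_vec(M, k0), mat_T_vec(M, k1), mat_T_vec(A0, kk)

def prove(y, w):                # public prover, witness w with y = M w
    s = rand_vec(kdim)
    t = mat_vec(A0, s)
    pi_or = ('stub OR proof that t lies in Span(A0) or Span(A1)', tuple(t))
    tau = H(tuple(y), tuple(t), pi_or)
    u = (inner(w, [(p0[j] + tau * p1[j]) % p for j in range(n2)]) + inner(s, pA)) % p
    return t, u, pi_or

def verify(y, proof):           # designated verifier holds (k0, k1, kk)
    t, u, pi_or = proof         # a real verifier would also run Ver_or on pi_or
    tau = H(tuple(y), tuple(t), pi_or)
    return u == (inner(y, [(k0[i] + tau * k1[i]) % p for i in range(n1)]) + inner(t, kk)) % p

w = rand_vec(n2)
assert verify(mat_vec(M, w), prove(mat_vec(M, w), w))
```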

Theorem 1

(Security of \(\varPi ^{\mathsf {dv}}\)). \(\varPi ^{\mathsf {dv}}\) is a \(\mathsf {DVQANIZK}\) with perfect zero-knowledge and (tightly) unbounded simulation soundness. In particular, for any adversary \(\mathcal {A}\), there exist adversaries \(\mathcal {B}\) and \(\mathcal {B}'\) with \(\mathsf {T}(\mathcal {B}) \approx \mathsf {T}(\mathcal {A})\) and

$$\begin{aligned} \mathsf {Adv}_{\varPi ^{\mathsf {dv}},\mathcal {A}}^{\mathsf {uss}}(\lambda ) \le&\mathsf {Adv}_{\mathcal {H},\hat{\mathcal {B}}}^{\mathsf {cr}}(\lambda ) + (4k \lceil \log Q \rceil +2)\cdot \mathsf {Adv}_{\mathbb {G}_1,\mathcal {D}_{2k,k},\mathcal {B}}^{\mathsf {mddh}}(\lambda ) \\ +&(2\lceil \log Q \rceil +2)\cdot \mathsf {Adv}_{\varPi ^{\mathsf {or}},\mathcal {B}^\prime }^{\mathsf {zk}}(\lambda ) +\lceil \log Q \rceil \cdot \varDelta _{\mathcal {D}_{2k,k}} \\ +&\frac{4\lceil \log Q \rceil +2}{p-1} + \frac{(\lceil \log Q \rceil + 1)\cdot Q+1}{p}. \end{aligned}$$

Proof

(of Theorem 1). Perfect completeness follows directly from the correctness of the OR proof system and the fact that for all \(\mathbf {y}=\mathbf {{M}}\mathbf {w}\), \(\mathbf {{p}} := \mathbf {{A}}_0^\top \mathbf {k}\), \(\mathbf {{p}}_0 := \mathbf {{M}}^\top \mathbf {k}_0\), \(\mathbf {{p}}_1:=\mathbf {{M}}^\top \mathbf {k}_1\), and \(\mathbf {t}=\mathbf {{A}}_0 \mathbf {s}\), for any \(\tau \), we have

$$\begin{aligned} \mathbf {w}^\top (\mathbf {{p}}_0+\tau \mathbf {p}_1) + \mathbf {s}^\top \mathbf {{p}}&=\mathbf {w}^\top ( \mathbf {{M}}^\top \mathbf {k}_0+\tau \mathbf {{M}}^\top \mathbf {k}_1) + \mathbf {s}^\top \mathbf {{A}}_0^\top \mathbf {k}\\&= \mathbf {y}^\top ( \mathbf {k}_0 + \tau \mathbf {k}_1) + \mathbf {t}^\top \mathbf {k}. \end{aligned}$$

Moreover, since

$$\begin{aligned} \mathbf {w}^\top (\mathbf {{p}}_0+\tau \mathbf {p}_1) + \mathbf {s}^\top \mathbf {{p}}&=\mathbf {w}^\top ( \mathbf {{M}}^\top \mathbf {k}_0+\tau \mathbf {{M}}^\top \mathbf {k}_1) + \mathbf {s}^\top \mathbf {{p}} \\&= \mathbf {y}^\top ( \mathbf {k}_0 + \tau \mathbf {k}_1) + \mathbf {s}^\top \mathbf {{p}}, \end{aligned}$$

proofs generated by \(\mathsf {Prove}\) and \(\mathsf {Sim}\) for the same \(\mathbf {y}=\mathbf {{M}}\mathbf {w}\) are identical. Hence, perfect zero knowledge is also satisfied.

We now focus on the tight simulation soundness of \(\varPi ^{\mathsf {dv}}\). Let \(\mathcal {A}\) be an adversary against the unbounded simulation soundness of \(\varPi ^{\mathsf {dv}}\). We bound the advantage of \(\mathcal {A}\) via a sequence of games defined in Fig. 5.

Fig. 5. Games \(\mathsf {G}_0\), \(\mathsf {G}_1\) and \(\mathsf {G}_2\) for the proof of Theorem 1. \(\mathbf {RF}: \mathbb {Z}_p\rightarrow \mathbb {Z}_p^{2k}\) is a random function. Given \(\mathbf {{M}}\) over \(\mathbb {Z}_p\), it is efficient to check whether \([\mathbf {y}^*]_1 \in \mathcal {L}_{[\mathbf {{M}}]_1}\).

\(\mathsf {G}_0\) is the real \(\mathsf {USS}\) experiment for \(\mathsf {DVQANIZK}\) as defined in Definition 7.

Lemma 4

( \(\mathsf {G}_0\)). \( \Pr [\mathsf {USS}^\mathcal {A}\Rightarrow 1] = \Pr [\mathsf {G}_0^\mathcal {A}\Rightarrow 1]. \)

Lemma 5

(\(\mathsf {G}_{0}\text { to } \mathsf {G}_{1}\)). There is an adversary \(\mathcal {B}\) breaking the collision resistance of \(\mathcal {H}\) with \(\mathsf {T}(\mathcal {B}) \approx \mathsf {T}(\mathcal {A})\) and \(\mathsf {Adv}_{\mathcal {H},\mathcal {B}}^{\mathsf {cr}}(\lambda ) \ge |\Pr [\mathsf {G}_0^{\mathcal {A}}\Rightarrow 1]-\Pr [\mathsf {G}_1^{\mathcal {A}}\Rightarrow 1]|.\)

Proof

We note that in \(\mathsf {G}_0\) and \(\mathsf {G}_1\) the value u is uniquely defined by \(\mathbf {y}, \mathbf {t}\) and \(\pi _{\mathsf {or}}\). Thus, if \(\mathcal {A}\) calls \(\textsc {Finalize}\) with a tuple \(([\mathbf {y}^*]_1, [\mathbf {t}^*]_1,\pi _{\mathsf {or}}^*)\) that has appeared in one of the \(\textsc {Sim}\) queries, then \(\textsc {Finalize}\) will output 0, since \(([\mathbf {y}^*]_1,\pi ^*:= ([\mathbf {t}^*]_1,[u^*]_1,\pi _{\mathsf {or}}^*) ) \in \mathcal {Q}_{\mathsf {sim}}\). Now if \(([\mathbf {y}^*]_1, [\mathbf {t}^*]_1,\pi _{\mathsf {or}}^*)\) has never appeared in one of the \(\textsc {Sim}\) queries, but \(\tau ^*=H([\mathbf {y}^*]_1, [\mathbf {t}^*]_1,\pi _{\mathsf {or}}^*) \in \mathcal {Q}_{\text {tag}}\), then we can construct a straightforward reduction \(\mathcal {B}\) to break the \(\mathsf {CR}\) property of \(\mathcal {H}\).    \(\square \)

Lemma 6

(\(\mathsf {G}_{1}\text { to } \mathsf {G}_{2}\)). There is an adversary \(\mathcal {B}\) breaking the core lemma (cf. Lemma 3) with running time \(\mathsf {T}(\mathcal {B}) \approx \mathsf {T}(\mathcal {A})\) and \(\mathsf {Adv}_{\mathcal {B}}^{\mathsf {core}}(\lambda )= \Pr [\mathsf {G}_1^\mathcal {A}\Rightarrow 1] - \Pr [\mathsf {G}_2^\mathcal {A}\Rightarrow 1].\)

Proof

We construct the reduction \(\mathcal {B}\) defined in Fig. 6 to break the core lemma. Clearly, if \(\mathcal {B}\)’s oracle access is from \(\mathsf {Core}_{0}\), then \(\mathcal {B}\) simulates \(\mathsf {G}_1\); and if \(\mathcal {B}\)’s oracle access is from \(\mathsf {Core}_{1}\) (which uses a random function \(\mathbf {RF}\)), then \(\mathcal {B}\) simulates \(\mathsf {G}_2\). Thus, \( \Pr [\mathsf {G}_1^\mathcal {A}\Rightarrow 1] - \Pr [\mathsf {G}_2^\mathcal {A}\Rightarrow 1] = \Pr [\mathsf {Core}_{0}^\mathcal {B}\Rightarrow 1] - \Pr [\mathsf {Core}_{1}^\mathcal {B}\Rightarrow 1] = \mathsf {Adv}_{\mathcal {B}}^{\mathsf {core}}(\lambda ), \) which concludes the lemma.    \(\square \)

Fig. 6. Reduction \(\mathcal {B}\) for the proof of Lemma 6 with oracles \(\textsc {Init}_{\mathsf {core}}\), \(\textsc {Eval}_{\mathsf {core}}\), and \(\textsc {Finalize}_{\mathsf {core}}\) defined in Fig. 3. We highlight the oracle calls in grey.

Lemma 7

( \(\mathsf {G}_2\)). \( \Pr [\mathsf {G}_2^\mathcal {A}\Rightarrow 1] \le \frac{Q+1}{p}. \)

Proof

We apply the following information-theoretic argument to show that even a computationally unbounded adversary \(\mathcal {A}\) can win in \(\mathsf {G}_2\) only with negligible probability. If \(\mathcal {A}\) wants to win in \(\mathsf {G}_2\), then \(\mathcal {A}\) needs to output a fresh and valid \(\pi ^*:=([\mathbf {t}^*]_1,[u^*]_1,\pi _{\mathsf {or}}^*)\). According to the additional rejection rule introduced in \(\mathsf {G}_2\), \(u^* = \mathbf {y}^{*\top } (\mathbf {k}_0 + \tau ^*\mathbf {k}_1 )+ \mathbf {t}^{*\top } (\mathbf {k}+ \mathbf {RF}(j^*))\) must hold for some \(0\le j^*\le Q\). Fix such a \(j^*\); we show that \(\mathcal {A}\) can compute such a \(u^*\) with probability at most 1/p.

The argument is based on the information leaked about \(\mathbf {k}_0\) and \(\mathbf {k}_1\):

  • For the j-th \(\textsc {Sim}\) query (\(j \ne j^*\)), the term \(\mathbf {t}^\top \mathbf {RF}(j)\) completely blinds the information about \(\mathbf {k}_0\) and \(\mathbf {k}_1\) as long as \(\mathbf {t}\ne \mathbf {0}\).

  • For the \(j^*\)-th \(\textsc {Sim}\) query, we cannot use the entropy from the term \((\mathbf {k}+ \mathbf {RF}(j^*))\) to hide \(\mathbf {k}_0\) and \(\mathbf {k}_1\) anymore, but we make the following stronger argument.

    We assume that \(\mathcal {A}\) learns the term \(\mathbf {t}^\top (\mathbf {k}+ \mathbf {RF}(j^*))\), and thus \(\mathbf {y}^\top (\mathbf {k}_0 + \tau \mathbf {k}_1)\) is also leaked to \(\mathcal {A}\). However, since \(\tau ^*\ne \tau \), the terms \((\mathbf {k}_0 + \tau ^*\mathbf {k}_1)\) and \((\mathbf {k}_0 + \tau \mathbf {k}_1)\) are pairwise independent.

Now together with the information leaked from \(\mathbf {{M}}^\top \mathbf {k}_0\) and \(\mathbf {{M}}^\top \mathbf {k}_1\) in \(\mathsf {crs}\), from \(\mathcal {A}\)’s view, the term \(\mathbf {y}^{*\top }(\mathbf {k}_0 + \tau ^*\mathbf {k}_1)\) is distributed uniformly at random, given \(\mathbf {y}^\top (\mathbf {k}_0 + \tau \mathbf {k}_1)\) from the \(j^*\)-th \(\textsc {Sim}\) query (\([\mathbf {y}]_1 \) may not be in \(\mathcal {L}_{[\mathbf {{M}}]_1}\)). Thus, \(\mathcal {A}\) can compute the random term \(\mathbf {y}^{*\top }(\mathbf {k}_0 + \tau ^*\mathbf {k}_1)\) and make \(\textsc {Finalize}\) output 1 with probability at most 1/p. By the union bound, \(\mathcal {A}\) can win in \(\mathsf {G}_2\) with probability at most \((Q+1) / p\).    \(\square \)
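The pairwise-independence step can be checked numerically. The following toy sketch (a scalar analogue over a small prime, with illustrative tags; not part of the proof) samples \((\mathbf {k}_0,\mathbf {k}_1)\) conditioned on the leaked value \(\mathbf {k}_0+\tau \mathbf {k}_1\) and observes that \(\mathbf {k}_0+\tau ^*\mathbf {k}_1\) remains uniform:

```python
# Scalar toy model of pairwise independence: for tau != tau_star over Z_p,
# fixing k0 + tau*k1 leaves k0 + tau_star*k1 uniformly distributed.
import random
from collections import Counter

p, tau, tau_star, leaked = 101, 3, 7, 5   # illustrative values
hist = Counter()
for _ in range(101000):
    k1 = random.randrange(p)
    k0 = (leaked - tau * k1) % p          # sample from the conditional distribution
    hist[(k0 + tau_star * k1) % p] += 1   # the combination used for the fresh tag
print(min(hist.values()), max(hist.values()))  # roughly equal counts across Z_p
```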

From Lemmata 4 to 7, we have \( \mathsf {Adv}_{\varPi ^{\mathsf {dv}},\mathcal {A}}^{\mathsf {uss}}(\lambda ):= \Pr [\mathsf {USS}^\mathcal {A}\Rightarrow 1] \le \mathsf {Adv}_{\mathcal {H},\hat{\mathcal {B}}}^{\mathsf {cr}}(\lambda ) + \mathsf {Adv}_{\mathcal {B}'}^{\mathsf {core}}(\lambda ) + \frac{Q+1}{p}. \) By Lemma 3, we conclude Theorem 1 as

$$\begin{aligned} \mathsf {Adv}_{\varPi ^{\mathsf {dv}},\mathcal {A}}^{\mathsf {uss}}(\lambda ) \le&\mathsf {Adv}_{\mathcal {H},\hat{\mathcal {B}}}^{\mathsf {cr}}(\lambda ) + (4k \lceil \log Q \rceil +2)\cdot \mathsf {Adv}_{\mathbb {G}_1,\mathcal {D}_{2k,k},\mathcal {B}}^{\mathsf {mddh}}(\lambda ) \\ +&(2\lceil \log Q \rceil +2)\cdot \mathsf {Adv}_{\mathsf {NIZK},\mathcal {B}^\prime }^{\mathsf {zk}}(\lambda ) +\lceil \log Q \rceil \cdot \varDelta _{\mathcal {D}_{2k,k}} \\ +&\frac{4\lceil \log Q \rceil +2}{p-1} + \frac{(\lceil \log Q \rceil + 1)\cdot Q+1}{p}. \end{aligned}$$

   \(\square \)
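For intuition about the concrete security loss, one can evaluate the bound above for example parameters; the snippet below is a hypothetical back-of-the-envelope calculation (all numbers are illustrative assumptions, not values taken from the paper):

```python
# Evaluate the USS bound of Theorem 1 for assumed example parameters.
import math

def uss_bound(Q, k, p, adv_mddh, adv_zk, adv_cr, delta):
    logQ = math.ceil(math.log2(Q))
    return (adv_cr
            + (4 * k * logQ + 2) * adv_mddh          # multiplicative loss 4k*ceil(log Q)+2
            + (2 * logQ + 2) * adv_zk
            + logQ * delta
            + (4 * logQ + 2) / (p - 1)
            + ((logQ + 1) * Q + 1) / p)

# Q = 2^30 simulation queries, k = 1 (SXDH), a ~256-bit group order, assumed
# advantages of 2^-80 for the underlying primitives, and delta assumed 0.
print(uss_bound(Q=2**30, k=1, p=2**255, adv_mddh=2**-80, adv_zk=2**-80,
                adv_cr=2**-80, delta=0))
```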

3.2 QA-NIZK

Let \(\mathcal {G}\leftarrow \mathsf {GGen}(1^\lambda )\), \(\mathsf {par}:=\mathcal {G}\), \(k\in \mathbb {N}\), \(\mathcal {H}\) be a collision-resistant hash function family, and \(\varPi ^{\mathsf {or}}:=({\mathsf {Gen}}_{\mathsf {or}},\mathsf {Prove}_{\mathsf {or}},\mathsf {Ver}_{\mathsf {or}})\) be a \(\mathsf {NIZK}\) system for language \(\mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}\). Our (publicly verifiable) \(\mathsf {QANIZK}\) \( \varPi :=(\mathsf {Gen},\mathsf {Prove},\mathsf {Ver},\mathsf {Sim}) \) is defined as in Fig. 7. The main idea behind our construction is to tightly compile the \(\mathsf {DVQANIZK}\) \(\varPi ^{\mathsf {dv}}\) from Fig. 4 by using pairings. Again we note that our scheme can be easily extended to a tag-based scheme by putting the label \(\ell \) inside the hash function. Thus, our scheme can be used in all the applications that require tag-based \(\mathsf {QANIZK}\).

Fig. 7. Construction of \(\varPi \).

Theorem 2

(Security of \(\varPi \)). \(\varPi \) defined in Fig. 7 is a \(\mathsf {QANIZK}\) with perfect zero-knowledge and (tight) unbounded simulation soundness if the \(\mathcal {D}_{k}\)-\(\mathsf {KerMDH}\) assumption holds in \(\mathbb {G}_2\) and the \(\mathsf {DVQANIZK}\) \(\varPi ^{\mathsf {dv}}\) in Fig. 4 is unbounded simulation sound. In particular, for any adversary \(\mathcal {A}\), there exist adversaries \(\mathcal {B}\) and \(\mathcal {B}'\) with \(\mathsf {T}(\mathcal {B}) \approx \mathsf {T}(\mathcal {B}') \approx \mathsf {T}(\mathcal {A})+Q\cdot \mathsf {poly}(\lambda )\), where Q is the number of queries to \(\textsc {Sim}\), \(\mathsf {poly}\) is independent of Q and

$$ \mathsf {Adv}_{\varPi ,\mathcal {A}}^{\mathsf {uss}}(\lambda ) \le \mathsf {Adv}_{\mathbb {G}_2,\mathcal {D}_k,\mathcal {B}}^\mathsf {kmdh}(\lambda )+\mathsf {Adv}_{\varPi ^{\mathsf {dv}},\mathcal {B}'}^{\mathsf {uss}}(\lambda ). $$

Proof

(of Theorem 2). Perfect completeness follows directly from the completeness of the OR proof system and the fact that for all \(\mathbf {{P}} := \mathbf {{A}}_0^\top \mathbf {{K}}\), \(\mathbf {{P}}_0 := \mathbf {{M}}^\top \mathbf {{K}}_0\), \(\mathbf {{P}}_1 := \mathbf {{M}}^\top \mathbf {{K}}_1\), \(\mathbf {{C}}:= \mathbf {{K}} \mathbf {{A}} \), \(\mathbf {{C}}_0 := \mathbf {{K}}_0 \mathbf {{A}}\), \(\mathbf {{C}}_1 := \mathbf {{K}}_1 \mathbf {{A}}\), and any \(\tau \)

$$\begin{aligned}&[\mathbf {w}^\top (\mathbf {{P}}_0 + \tau \mathbf {{P}}_1) + \mathbf {s}^\top \mathbf {{P}}]_1 \circ [\mathbf {{A}}]_2 \\ =&[\mathbf {w}^\top (\mathbf {{M}}^\top \mathbf {{K}}_0 + \tau \mathbf {{M}}^\top \mathbf {{K}}_1) + \mathbf {s}^\top \mathbf {{A}}_0^\top \mathbf {{K}}]_1 \circ [\mathbf {{A}}]_2\\ =&[\mathbf {w}^\top \mathbf {{M}}^\top ]_1 \circ [\mathbf {{K}}_0\mathbf {{A}}+\tau \mathbf {{K}}_1\mathbf {{A}}]_2 + [\mathbf {s}^\top \mathbf {{A}}_0^\top ]_1 \circ [\mathbf {{K}}\mathbf {{A}}]_2\\ =&[\mathbf {y}^{\top }]_1 \circ [\mathbf {{C}}_0 + \tau \mathbf {{C}}_1]_2+[\mathbf {t}^{\top }]_1 \circ [\mathbf {{C}}]_2. \end{aligned}$$
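Since the pairing acts as multiplication in the exponent, the chain of equalities above can be verified as plain matrix arithmetic over \(\mathbb {Z}_p\). The sketch below does so with assumed toy dimensions \((n_1,n_2,k)\); it only illustrates the identity, not the scheme itself:

```python
# Check the completeness identity in the exponent: pairings become products mod p.
import random

p = 101
def rand(r, c): return [[random.randrange(p) for _ in range(c)] for _ in range(r)]
def mul(X, Y): return [[sum(X[i][l] * Y[l][j] for l in range(len(Y))) % p
                        for j in range(len(Y[0]))] for i in range(len(X))]
def add(X, Y): return [[(a + b) % p for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]
def scal(c, X): return [[(c * a) % p for a in row] for row in X]
def T(X): return [list(r) for r in zip(*X)]

n1, n2, k, tau = 4, 2, 1, 17
M, A0, A = rand(n1, n2), rand(2 * k, k), rand(k + 1, k)
K0, K1, K = rand(n1, k + 1), rand(n1, k + 1), rand(2 * k, k + 1)
w, s = rand(n2, 1), rand(k, 1)
y, t = mul(M, w), mul(A0, s)                    # y = M w,  t = A_0 s
P0, P1, P = mul(T(M), K0), mul(T(M), K1), mul(T(A0), K)

lhs = mul(add(add(mul(T(w), P0), scal(tau, mul(T(w), P1))), mul(T(s), P)), A)
rhs = add(mul(T(y), add(mul(K0, A), scal(tau, mul(K1, A)))), mul(T(t), mul(K, A)))
assert lhs == rhs                               # both sides agree identically over Z_p
```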

Moreover, since

$$\begin{aligned} \mathbf {w}^\top (\mathbf {{P}}_0 + \tau \mathbf {{P}}_1) + \mathbf {s}^\top \mathbf {{P}}&=\mathbf {w}^\top ( \mathbf {{M}}^\top \mathbf {{K}}_0+\tau \mathbf {{M}}^\top \mathbf {{K}}_1) + \mathbf {s}^\top \mathbf {{P}} \\&= \mathbf {y}^\top ( \mathbf {{K}}_0 + \tau \mathbf {{K}}_1) + \mathbf {s}^\top \mathbf {{P}}, \end{aligned}$$

the output of \(\mathsf {Prove}\) is identical to that of \(\mathsf {Sim}\) for the same \(\mathbf {y}=\mathbf {{M}}\mathbf {w}\). Hence, perfect zero knowledge is also satisfied.

We now focus on the tight simulation soundness of \(\varPi \). We prove it via a sequence of games: \(\mathsf {G}_0\) is defined as the real experiment \(\mathsf {USS}\) (we omit its description here), and \(\mathsf {G}_1\) and \(\mathsf {G}_2\) are defined as in Fig. 8.

Fig. 8. Games \(\mathsf {G}_1\) and \(\mathsf {G}_2\) for proving Theorem 2.

Lemma 8

( \(\mathsf {G}_0\)). \(\Pr [\mathsf {USS}^\mathcal {A}\Rightarrow 1]=\Pr [\mathsf {G}_0^\mathcal {A}\Rightarrow 1].\)

In \(\mathsf {G}_1\), \(\textsc {Finalize}\) additionally verifies the adversarial forgery with secret keys \(\mathbf {{K}}\), \(\mathbf {{K}}_0\), and \(\mathbf {{K}}_1\) as in Fig. 8.

Lemma 9

(\(\mathsf {G}_{0}\text { to } \mathsf {G}_{1}\)). There is an adversary \(\mathcal {B}\) breaking the \(\mathcal {D}_{k}\text{- }\mathsf {KerMDH}\) assumption over \(\mathbb {G}_2\) with \(\mathsf {T}(\mathcal {B}) \approx \mathsf {T}(\mathcal {A})+Q\cdot \mathsf {poly}(\lambda )\) and \(\mathsf {Adv}_{\mathbb {G}_2,\mathcal {D}_k,\mathcal {B}}^\mathsf {kmdh}(\lambda ) \ge |\Pr [\mathsf {G}_0^{\mathcal {A}}\Rightarrow 1]-\Pr [\mathsf {G}_1^{\mathcal {A}}\Rightarrow 1]|. \)

Proof

It is straightforward that a pair \(([{\mathbf {y}}^*]_1,\pi ^*)\) passing the \(\textsc {Finalize}\) in \(\mathsf {G}_1\) always passes the \(\textsc {Finalize}\) in \(\mathsf {G}_0\). We now bound the probability that \(\mathcal {A}\) produces \(([{\mathbf {y}}^*]_1,\pi ^*)\) that passes the verification in \(\mathsf {G}_0\) but not that in \(\mathsf {G}_1\). For \(\pi ^*=([\mathbf {t}^*]_1,[\mathbf {u}^*]_1,\pi _{\mathsf {or}}^*)\), the verification equation in \(\mathsf {G}_0\) is:

$$ \begin{aligned}{}[\mathbf {u}^*]_1 \circ [\mathbf {{A}} ]_2&=[{\mathbf {y}^*}^{\top }]_1 \circ [\mathbf {{K}}_0 \mathbf {{A}} + \tau ^*\mathbf {{K}}_1 \mathbf {{A}}]_2+[{\mathbf {t}^*}^{\top }]_1 \circ [\mathbf {{K}} \mathbf {{A}}]_2 \\ {}&\Leftrightarrow [\mathbf {u}^*-{\mathbf {y}^*}^\top (\mathbf {{K}}_0+\tau ^*\mathbf {{K}}_1)-{\mathbf {t}^*}^\top \mathbf {{K}}]_1 \circ [\mathbf {{A}}]_2= [\mathbf {0}]_T. \end{aligned} $$

One can see that for any \(([\mathbf {t}^*]_1,[\mathbf {u}^*]_1,\pi _{\mathsf {or}}^*)\) that passes the verification equation in \(\mathsf {G}_0\) but not that in \(\mathsf {G}_1\), the vector \(\mathbf {u}^*-{\mathbf {y}^*}^\top (\mathbf {{K}}_0+\tau ^*\mathbf {{K}}_1)-{\mathbf {t}^*}^\top \mathbf {{K}}\) is a non-zero vector in the kernel of \(\mathbf {{A}}\).

We now construct an adversary \(\mathcal {B}\) as follows. On receiving \((\mathcal {G},[\mathbf {{A}}]_2)\) from the \(\mathcal {D}_{k}\text{- }\mathsf {KerMDH}\) experiment, \(\mathcal {B}\) samples all other parameters by itself and simulates \(\mathsf {G}_0\) for \(\mathcal {A}\). When \(\mathcal {A}\) outputs a tuple \(([\mathbf {y}^*]_1,([\mathbf {t}^*]_1,[\mathbf {u}^*]_1,\pi _{\mathsf {or}}^*))\), \(\mathcal {B}\) outputs \([\mathbf {u}^*-{\mathbf {y}^*}^\top (\mathbf {{K}}_0+\tau ^*\mathbf {{K}}_1)-{\mathbf {t}^*}^\top \mathbf {{K}}]_1\). Since \(\mathcal {B}\) succeeds in its experiment when \(\mathcal {A}\) outputs a tuple such that \({\mathbf {u}^*-{\mathbf {y}^*}^\top (\mathbf {{K}}_0+\tau ^*\mathbf {{K}}_1)-{\mathbf {t}^*}^\top \mathbf {{K}}}\) is a non-zero vector in the kernel of \(\mathbf {{A}}\), we have \(\mathsf {Adv}_{\mathbb {G}_2,\mathcal {D}_k,\mathcal {B}}^\mathsf {kmdh}(\lambda ) \ge |\Pr [\mathsf {G}_0^{\mathcal {A}}\Rightarrow 1]-\Pr [\mathsf {G}_1^{\mathcal {A}}\Rightarrow 1]|\), completing the proof of this lemma.    \(\square \)
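The extraction step can be illustrated in the exponent for \(k=1\). In the toy sketch below (all values are stand-ins for the actual group elements and keys), a forgery that satisfies the public equation without matching the secret-key computation hands \(\mathcal {B}\) a non-zero vector in \(\mathsf {ker}(\mathbf {{A}})\):

```python
# Toy extraction for Lemma 9 (k = 1, exponent arithmetic, stand-in values).
import random

p = 101
A = [random.randrange(1, p), random.randrange(1, p)]    # column A in Z_p^{2x1}
a_perp = [A[1], (-A[0]) % p]                            # a_perp * A = 0 mod p
assert (a_perp[0] * A[0] + a_perp[1] * A[1]) % p == 0

honest = [random.randrange(p) for _ in range(2)]        # stand-in for y*^T(K0+tau* K1)+t*^T K
u_star = [(h + d) % p for h, d in zip(honest, a_perp)]  # cheating forgery: honest + a_perp

c = [(u - h) % p for u, h in zip(u_star, honest)]       # what B outputs (as [c]_1)
assert (c[0] * A[0] + c[1] * A[1]) % p == 0             # passes the public pairing check
assert c != [0, 0]                                      # ...but fails G_1's secret-key check
```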

Lemma 10

( \(\mathsf {G}_{1}\text { to } \mathsf {G}_{2}\)). \(\Pr [\mathsf {G}_1^\mathcal {A}\Rightarrow 1]=\Pr [\mathsf {G}_2^\mathcal {A}\Rightarrow 1].\)

Proof

At this point we have finished the reduction to the \(\mathsf {KerMDH}\) assumption, so we may now assume that \(\mathbf {{A}}\) is given over \(\mathbb {Z}_p\). In \(\mathsf {G}_2\), for \(i\in \{0,1\}\) we replace \(\mathbf {{K}}_i\) by \(\mathbf {{K}}_i'+ {\mathbf {k}}_i {\mathbf {{a}}^\bot }\) for \(\mathbf {a}^\bot \in \mathsf {ker}(\mathbf {{A}})\), where \(\mathbf {{K}}_i'\) and \(\mathbf {k}_i\) are chosen uniformly at random. Furthermore, we replace \(\mathbf {{K}}\) by \(\mathbf {{K}}'+ {\mathbf {k}} {\mathbf {{a}}^\bot }\) for uniformly random \(\mathbf {{K}}'\) and \(\mathbf {k}\). Since \(\mathbf {{K}}'\) and \(\mathbf {{K}}'_i\) are uniformly random, \(\mathbf {{K}}\) and \(\mathbf {{K}}_i\) in \(\mathsf {G}_2\) are distributed uniformly at random, exactly as in \(\mathsf {G}_1\). Thus, \(\mathsf {G}_2\) is distributed the same as \(\mathsf {G}_1\).    \(\square \)
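That this substitution is only a re-parametrisation can be seen with a quick numerical check; the following one-dimensional analogue (toy prime, assumed kernel entry) shows that adding \(\mathbf {k}\,\mathbf {a}^\bot \) to a uniformly random \(\mathbf {{K}}'\) leaves the result uniformly distributed:

```python
# K' uniform  ==>  K := K' + k * a_perp is uniform over Z_p, for any fixed a_perp.
import random
from collections import Counter

p, a_perp = 13, 5                      # toy prime and an assumed kernel entry
hist = Counter((random.randrange(p) + random.randrange(p) * a_perp) % p
               for _ in range(52000))
print(sorted(hist.items()))            # roughly flat histogram over Z_p
```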

Lemma 11

(\(\mathsf {G}_2\)). There is an adversary \(\mathcal {B}'\) breaking the \(\mathsf {USS}\) security of \(\varPi ^\mathsf {dv}\) defined in Fig. 4 with \(\mathsf {T}(\mathcal {B}') \approx \mathsf {T}(\mathcal {A})+Q\cdot \mathsf {poly}(\lambda )\) and \(\Pr [\mathsf {G}_2^\mathcal {A}\Rightarrow 1]\le \mathsf {Adv}_{\varPi ^{\mathsf {dv}},\mathcal {B}'}^{\mathsf {uss}}(\lambda ).\)

Proof

We construct a reduction \(\mathcal {B}'\) in Fig. 9 to break the \(\mathsf {USS}\) security of \(\varPi ^{\mathsf {dv}}\) defined in Fig. 4.

Fig. 9. Reduction \(\mathcal {B}'\) for the proof of Lemma 11 with oracle access to \(\textsc {Init}_{\mathsf {dv}}\), \(\textsc {Sim}_{\mathsf {dv}}\), and \(\textsc {Finalize}_{\mathsf {dv}}\) as defined in \(\mathsf {G}_0\) of Fig. 5. We highlight the oracle calls in grey.

We note that the \([\mathbf {p}]_1, [\mathbf {p}_i]_1\) \((i=0,1)\) from \(\textsc {Init}_{\mathsf {dv}}\) have the forms \(\mathbf {p}= \mathbf {{A}}_0^\top \mathbf {k}\) and \(\mathbf {p}_i = \mathbf {{M}}^\top \mathbf {k}_i\) for some random \(\mathbf {k}\in \mathbb {Z}_p^{2k} \) and \(\mathbf {k}_i \in \mathbb {Z}_p^{n_1}\), and furthermore the value \([u]_1\) from \(\textsc {Sim}_{\mathsf {dv}}\) has the form \(u=\mathbf {y}^\top (\mathbf {k}_0 + \tau \mathbf {k}_1) + \mathbf {t}^\top \mathbf {k}\). Hence, \(\mathcal {B}'\) essentially simulates the security game with \(\mathbf {{K}}\) and \(\mathbf {{K}}_i\) implicitly defined as \(\mathbf {{K}} := \mathbf {{K}}'+\mathbf {k} \cdot \mathbf {a}^\perp \) and \(\mathbf {{K}}_i := \mathbf {{K}}'_i + \mathbf {k}_i \cdot \mathbf {a}^\perp \). The simulated \(\textsc {Init}\) and \(\textsc {Sim}\) are identical to those in \(\mathsf {G}_2\).

In \(\mathsf {G}_2\), \(\textsc {Finalize}([\mathbf {y}^*]_1,\pi ^*:=([\mathbf {t}^*]_1,[\mathbf {u}^*]_1,\pi _{\mathsf {or}}^*))\) outputs 1 if

$$\begin{aligned} \mathbf {u}^* = \mathbf {y^*}^\top (\mathbf {{K}}'_0 + \tau ^*\mathbf {{K}}'_1) + {\mathbf {t}^*}^\top \mathbf {{K}}' + (\underbrace{{{\mathbf {y}^*}^{\top }}(\mathbf {k}_0+\tau ^* \mathbf {k}_1)+{{\mathbf {t}^*}^{\top }}\mathbf {k}}_{=:v})\cdot \mathbf {a}^\perp \end{aligned}$$

and \(([\mathbf {y}^*]_1,\pi ^*)\notin \mathcal {Q}_{\mathsf {sim}}\) and \([\mathbf {y}^*]_1 \notin \mathcal {L}_{[\mathbf {{M}}]_1}\) and \(\mathsf {Ver}(\mathsf {crs},[\mathbf {y}^*]_1,\pi ^*) =1\). Thus, if \(\mathcal {A}\) can make \(\textsc {Finalize}([\mathbf {y}^*]_1,\pi ^*)\) output 1 then \(\mathcal {B}'\) can extract the corresponding \([v]_1\) to break the \(\mathsf {USS}\) security. We conclude the lemma.    \(\square \)
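Concretely, recovering \([v]_1\) amounts to stripping off the part of \(\mathbf {u}^*\) that \(\mathcal {B}'\) knows and dividing (in the exponent) by a non-zero entry of \(\mathbf {a}^\perp \). The sketch below illustrates this for \(k=1\) with stand-in values:

```python
# Toy extraction of v for Lemma 11 (k = 1, exponent arithmetic, stand-in values).
p = 101
a_perp = [7, 31]                       # fixed vector with a non-zero entry
known = [12, 90]                       # stand-in for y*^T(K'_0 + tau* K'_1) + t*^T K'
v = 55                                 # value encoded by a successful forgery
u_star = [(kn + v * ap) % p for kn, ap in zip(known, a_perp)]

i = next(j for j, ap in enumerate(a_perp) if ap != 0)
v_rec = ((u_star[i] - known[i]) * pow(a_perp[i], -1, p)) % p   # "division" in the exponent
assert v_rec == v                      # B' forwards [v]_1 to Finalize_dv
```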

To sum up, we have \(\Pr [\mathsf {USS}^\mathcal {A}\Rightarrow 1]\le \mathsf {Adv}_{\mathbb {G}_2,\mathcal {D}_k,\mathcal {B}}^\mathsf {kmdh}(\lambda )+\mathsf {Adv}_{\varPi ^{\mathsf {dv}},\mathcal {B}'}^{\mathsf {uss}}(\lambda ) \) with \(\mathcal {B}\) and \(\mathcal {B}'\) as defined above.    \(\square \)

3.3 Application: Tightly \(\mathsf {IND\text{- }mCCA}\)-Secure \(\mathsf {PKE}\)

By instantiating the labeled (enhanced) USS-QA-NIZK in the generic construction of [5] with our construction from Sect. 3.2, we immediately obtain a more efficient publicly verifiable labeled public-key encryption (\(\mathsf {PKE}\)) scheme with tight IND-CCA2 security in the multi-user, multi-challenge setting (\(\mathsf {IND\text{- }mCCA}\)). The security loss is independent of the number of decryption-oracle queries made by the CCA2 adversary. We refer the reader to the full paper for the definition of labeled \(\mathsf {IND\text{- }mCCA}\)-secure \(\mathsf {PKE}\) and the construction.

4 Tightly Secure Structure-Preserving Signature

In this section, we present an \(\mathsf {SPS}\) via a designated-prover \(\mathsf {NIZK}\) for the OR-language, whose security can be tightly reduced to the \(\mathcal {D}_{2k,k}\)-\(\mathsf {MDDH}\) and \(\mathcal {D}_{k}\)-\(\mathsf {MDDH}\) assumptions.

4.1 Designated-Prover OR-Proof

In this section, we construct \(\mathsf {NIZK}\)s in the designated-prover setting. In contrast to [5], we focus on the language \(\mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}\) defined in Sect. 2.3, where a single word \(\mathbf {y}\) is required to be in the linear span of either one of two spaces given by matrices \(\mathbf {{A}}_0\) and \(\mathbf {{A}}_1\).

While previous techniques [23, 43] require ten group elements per proof, our novel solution gives a \(\mathsf {QANIZK}\) with proofs of only seven group elements under the SXDH assumption, by leveraging the privacy of the prover CRS.

Definition. For matrices \(\mathbf {{A}}_0, \mathbf {{A}}_1 \in \mathbb {Z}_p^{2k\times k}\), we define the notion of a designated-prover OR-proof for \(\mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}\).

Definition 11

(Designated-Prover OR-Proof). A designated-prover proof system for \(\mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}\) is the same as that of \(\mathsf {NIZK}\) for \(\mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}\) (see Sect. 2.3), except that

  • \(\mathsf {Gen}\) takes \((\mathsf {par},\mathbf {{A}}_0,\mathbf {{A}}_1)\) as input instead of \((\mathsf {par},[\mathbf {{A}}_0]_1,[\mathbf {{A}}_1]_1)\) and outputs an additional prover key \(\mathsf {prk}\).

  • \(\mathsf {Prove}\) takes \(\mathsf {prk}\) as additional input.

  • In the soundness definition, the adversary is given oracle access to \(\mathsf {Prove}\) with \(\mathsf {prk}\) instantiated by the one output by \(\mathsf {Gen}\).

Construction. Let \(\mathcal {G}\leftarrow \mathsf {GGen}(1^\lambda )\), \(\mathsf {par}:=\mathcal {G}\), and \(k\in \mathbb {N}\). In Fig. 10 we present a Designated-Prover OR-proof system for \(\mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}\).

Fig. 10. Designated-prover OR-proof for \(\mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}\).

Lemma 12

If the \(\mathcal {D}_k\)-\(\mathsf {MDDH}\) assumption holds in the group \(\mathbb {G}_2\), then the proof system \(\varPi ^{\mathsf {or}}={({\mathsf {Gen}}_{\mathsf {or}},\mathsf {TGen}_{\mathsf {or}},\mathsf {Prove}_{\mathsf {or}},\mathsf {Ver}_{\mathsf {or}},\mathsf {Sim}_{\mathsf {or}})}\) as defined in Fig. 10 is a designated-prover OR-proof system for \(\mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}\) with perfect completeness, perfect soundness, and zero-knowledge. More precisely, for all adversaries \(\mathcal {A}\) attacking the zero-knowledge property of \(\varPi ^{\mathsf {or}}\), we obtain an adversary \(\mathcal {B}\) with \(T(\mathcal {B})\approx T(\mathcal {A})+Q\cdot \mathsf {poly}(\lambda )\) and \( \mathsf {Adv}_{\varPi ^{\mathsf {or}},\mathcal {A}}^{\mathsf {zk}}(\lambda )\le \mathsf {Adv}_{\mathcal {G},\mathbb {G}_2,\mathcal {D}_{k},\mathcal {B}}^{\mathsf {mddh}}(\lambda ). \)

We refer the reader to the Introduction for the high-level idea of our construction and to the full paper for the full proof.

Extensions. For larger matrices \(\mathbf {{A}}_0, \mathbf {{A}}_1\), and under the \(\mathcal {D}_k\)-MDDH assumption for a fixed k, we can improve the proof size so that it asymptotically approaches a factor of two; as a trade-off, the reduction loses a factor of k.

Roughly, for some invertible matrix \(\mathbf {{U}}\), we exploit the following language instead:

$$ \mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}:= \{[\mathbf {y}]_1\in \mathbb {G}_1^{2k}\mid \exists \mathbf {x}\in \mathbb {Z}_p^{1\times k}, \mathbf {{X}}\in \mathbb {Z}_p^{k\times k}:\mathbf {{A}}_0 \mathbf {{X}} =\mathbf {y} \mathbf {x} \vee \mathbf {y}^\top \mathbf {{A}}_1^{\bot }{\mathbf {{U}}} =\mathbf {x}\}. $$

One can see that it is also equal to \(\mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}\), since \(\mathbf {y}\) is in the span of \(\mathbf {{A}}_0\) if \(\mathbf {x} \ne \mathbf {0}\) and in the span of \(\mathbf {{A}}_1\) otherwise. Instead of directly applying the Groth-Sahai proof to it as before, we carefully adjust the proof for \([\mathbf {y}]_1^\top \mathbf {{A}}_1^{\bot }{\mathbf {{U}}} =[\mathbf {x}]_1\) and the commitment to the information on \(\mathbf {{A}}_1^\bot \) in this case. We also extend it to an efficient OR-proof in the symmetric pairing setting, which might be of independent interest. We refer the reader to the full paper for the constructions and security proofs.
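For \(k=1\) the equivalence of the two formulations is easy to verify with exponent arithmetic; the toy sketch below (with assumed stand-in matrices) checks both branches:

```python
# Toy check of the reformulated OR-language (k = 1, exponent arithmetic).
import random

p = 101
A0 = [random.randrange(1, p), random.randrange(1, p)]
A1 = [random.randrange(1, p), random.randrange(1, p)]
A1_perp = [A1[1], (-A1[0]) % p]        # A1^T * A1_perp = 0 mod p
U = 7                                  # any non-zero (hence invertible) scalar

r = random.randrange(1, p)
y = [(a * r) % p for a in A0]          # y in the span of A_0
x = (y[0] * A1_perp[0] + y[1] * A1_perp[1]) * U % p
X = (r * x) % p                        # witness for the first clause: A_0 X = y x
assert all((a * X) % p == (yi * x) % p for a, yi in zip(A0, y))

y2 = [(a * r) % p for a in A1]         # y in the span of A_1
x2 = (y2[0] * A1_perp[0] + y2[1] * A1_perp[1]) * U % p
assert x2 == 0                         # second clause holds with x = 0
```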

4.2 Structure-Preserving Signature

By replacing the underlying OR-proof in the \(\mathsf {SPS}\) of [20] with our designated-prover one, we immediately obtain a more efficient \(\mathsf {SPS}\). A signature consists of only 11 group elements, which is the shortest known for tightly secure \(\mathsf {SPS}\) schemes.

Fig. 11. Tightly \(\mathsf {UF\text {-}CMA}\) secure structure-preserving signature scheme \(\varSigma \) with message space \(\mathbb {G}_1^n\), where \(k\in \mathbb {N}\) and the public parameter is \(\mathsf {par}=\mathcal {G}\) for \(\mathcal {G}\leftarrow \mathsf {GGen}(1^\lambda )\).

Theorem 3

(Security of \(\varSigma \)). If \(\varPi ^{\mathsf {or}}:=({\mathsf {Gen}}_{\mathsf {or}},\mathsf {TGen}_{\mathsf {or}},\mathsf {Prove}_{\mathsf {or}},\mathsf {Ver}_{\mathsf {or}},\mathsf {Sim}_{\mathsf {or}})\) is a non-interactive zero-knowledge proof system for \(\mathcal {L}^{\vee }_{\mathbf {{A}}_0, \mathbf {{A}}_1}\), then the signature scheme \(\varSigma \) described in Fig. 11 is \(\mathsf {UF\text {-}CMA}\) secure under the \(\mathcal {D}_{2k,k}\)-\(\mathsf {MDDH}\) and \(\mathcal {D}_{k}\)-\(\mathsf {MDDH}\) assumptions. Namely, for any adversary \(\mathcal {A}\), there exist adversaries \(\mathcal {B}, \mathcal {B}^\prime \) with running time \(T(\mathcal {B}) \approx T(\mathcal {B}^\prime ) \approx T(\mathcal {A}) + Q \cdot \mathsf {poly}(\lambda )\), where Q is the number of signing queries, \(\mathsf {poly}\) is independent of Q, and

$$\begin{aligned} \mathsf {Adv}_{\mathsf {SPS},\mathcal {A}}^{\mathsf {uf\text{- }cma}}\le&(4k \lceil \log Q \rceil +2)\cdot \mathsf {Adv}_{\mathbb {G}_1,\mathcal {D}_{2k,k},\mathcal {B}}^\mathsf {mddh}\\&+(2\lceil \log Q \rceil +3)\cdot \mathsf {Adv}_{\mathbb {G}_2,\mathcal {D}_{k},\mathcal {B}^\prime }^\mathsf {mddh}+\lceil \log Q \rceil \cdot \varDelta _{\mathcal {D}_{2k,k}}\\&+ \frac{4\lceil \log Q \rceil +2}{p-1} + \frac{(Q+ 1)\lceil \log Q \rceil +Q}{p} +\frac{Q}{p^k} .\end{aligned}$$

We omit the proof of the above theorem since it is exactly the same as the security proof of the \(\mathsf {SPS}\) in [20], except that we adopt the notion of standard zero knowledge instead of the composable one and the OR-proof system is now a designated-prover one; neither change affects the validity of the proof. We refer the reader to [20] for the details. Notice that in the \(\mathsf {MDDH}\) games of the security proof, the reduction algorithm is not allowed to see \(\mathbf {{A}}_0\) and \(\mathbf {{A}}_1\), so it cannot run the honest generation algorithm \({\mathsf {Gen}}_{\mathsf {or}}(\mathsf {par},\mathbf {{A}}_0,\mathbf {{A}}_1)\). However, it does not have to: in all the \(\mathsf {MDDH}\) games, the common reference strings are always switched to simulated ones, so the reduction algorithms only have to run \({\mathsf {TGen}}_{\mathsf {or}}(\mathsf {par},[\mathbf {{A}}_0]_1,[\mathbf {{A}}_1]_1)\).

4.3 \(\mathsf {DPQANIZK}\) and Black-Box Construction

We can also use our designated-prover OR-proof system to construct a structure-preserving \(\mathsf {DPQANIZK}\) with weak \(\mathsf {USS}\), which might be of independent interest. We refer the reader to the full paper for its construction and security proof.

On the other hand, as shown in [5, 6], there is an alternative approach for constructing SPS directly from \(\mathsf {DPQANIZK}\): map a message to an invalid instance outside the language and simulate a proof with a trapdoor behind a common reference string that is published as the public key. In the concrete construction in [5, 6], \(n_0+1\) extra elements are included in the public key to ensure that messages consisting of \(n_0\) elements are indeed mapped to invalid instances. We take the same approach but with an improved mapping that requires only one extra element, assuming the hardness of the computational Diffie-Hellman problem. The resulting signature size is exactly the same as the proof size of the \(\mathsf {DPQANIZK}\), and the public-key size is that of a common reference string plus one element.