1 Introduction

Verifiable delay functions (VDFs), recently formalized by Boneh et al. [BBB+18], have proven to be extremely useful in a wide array of exciting applications. These include, among others, verifiable randomness beacons (e.g., [LW15] and also [GS98, BCG15, BGZ16, PW18, HYL20, GLO+21]), resource-efficient blockchains [CP19], computational time-stamping (e.g., [CE12, LSS20] and the references therein) and time-based proofs of replication (e.g., [Ler14, ABB+16, Pro17, BDG17, Fis19]). Roughly speaking, a VDF is a function \(f: \mathcal {X}\rightarrow \mathcal {Y}\) which is defined with respect to a delay parameter T and offers the following sequentiality guarantee: It should not be possible to compute f on a randomly-chosen input in time less than T, even with preprocessing and polynomially-many parallel processors. However, the function should be computable in time polynomial in T. Moreover, the function should be efficiently verifiable: For an input \(x\in \mathcal {X}\), it should be possible to produce alongside the output \(y \in \mathcal {Y}\), a short proof \(\pi \) asserting that indeed \(y = f(x)\). Verifying the proof should be much quicker than computing the function anew.

Proofs of Correct Exponentiation. The main VDF candidates currently known are based on the “repeated squaring” function in groups of unknown order, such as RSA groups or class groups of imaginary quadratic fields. This function – first introduced for delay purposes by Rivest, Shamir and Wagner [RSW96] – is defined by \(x \mapsto x^{2^T}\), where T is the delay parameter and the exponentiation is with respect to the group operation. The recent and elegant works of Wesolowski [Wes19] and of Pietrzak [Pie19] have augmented the repeated squaring function with non-interactive proofs, yielding full-fledged candidate VDF constructions (see also the survey of Boneh et al. [BBF18] covering these constructions). Both of these proofs are based on applying the Fiat-Shamir heuristic [FS86] to succinct proofs of correct exponentiation. These are protocols in which a (possibly malicious) prover tries to convince a verifier that \(y = x^e\), for a joint input consisting of two group elements x and y and an arbitrary (and potentially very large) exponent e.Footnote 1 The succinctness of these protocols manifests itself both in their communication complexity and in the verifier’s running time, which is much less than the time it would take the verifier to compute \(x^e\) on her own. Very recently, in an independent and concurrent work, Block et al. [BHR+21] showed how to generalize Pietrzak’s protocol to obtain a proof of correct exponentiation that is information-theoretically secure in any group of unknown order.

Verifying Multiple VDF Outputs. In many of the applications of VDFs, one might be (and in some cases is even likely to be) interested in verifying not only one but many VDF outputs at once. Examples of such scenarios include (but are not restricted to) verifying that a storage service maintains multiple replicas of the same file via a VDF-based proof of replication; verifying the shared randomness produced by a VDF-based randomness beacon during the last several epochs; and verifying the time-stamps of multiple files stamped using a VDF-based time-stamping scheme. Unfortunately, verifying multiple VDF outputs naïvely, by verifying the proof for each of them separately and independently, comes at a premium: If one wishes to verify n individual VDF proofs, each of which is \(\ell \) bits long and takes time t to verify, then verifying all of them using the aforesaid naïve approach results in a total proof size (and hence communication overhead) of \(n\cdot \ell \) and verification time of \(n \cdot t\).

Existing Approaches. The related and fundamental problem of verifying many exponentiations in cryptographic groups traces back to the seminal work of Bellare, Garay and Rabin [BGR98], which presented elegant batch verification algorithms. However, their work, motivated by the task of batch verification of signatures, did not consider the setting of an external prover and the efficiency considerations attached to it (i.e., succinctness). Moreover, their more efficient approach and its analysis rely on cyclic groups of prime order, which seem somewhat unlikely to accommodate VDF constructions [RSS20] (see Sect. 1.2). In the context of VDF verification, WesolowskiFootnote 2 recently presented a “batch version” of his proof of correct exponentiation. Alas, the soundness of this batch proof is proven under the adaptive root assumption, which is a new and rather strong assumption in groups of unknown order. In particular, this assumption is stronger than the low order assumption which underlies Pietrzak’s protocol [BBF18, Pie19],Footnote 3 and making this assumption is of course undesirable when starting from the information-theoretically sound protocol of Block et al. [BHR+21]. This state of affairs urges the search for succinct and efficient batch proofs of correct exponentiation which rely on weaker assumptions than the adaptive root assumption.

1.1 Our Contributions

We present simple and efficient batch verification techniques for proofs of correct exponentiation, extending the basic techniques of Bellare, Garay and Rabin to the external-prover setting and to composite-order groups. In conjunction with current VDF candidates based on proofs of correct exponentiation for the repeated squaring function [RSW96, Wes19, Pie19, BHR+21], our techniques immediately give rise to VDFs with batch verification. Our compilers rely on weaker assumptions than currently-known batching techniques for verifiable delay functions, paving the way to a variety of new instantiations.

Batch Proofs of Correct Exponentiation. We define the notion of a batch proof of correct exponentiation. This is a protocol in which a prover and a verifier share as input n pairs \((x_1,y_1),\ldots , (x_n,y_n)\) of group elements and an exponent \(e\in \mathbb {N}\), and the prover attempts to convince the verifier that \(x_i^e = y_i\) for each \(i\in [n]\).Footnote 4 Loosely speaking, we say that a batch proof of correct exponentiation has soundness error \(\delta \) if, whenever \(x_i^e \ne y_i\) for some \(i \in [n]\), no efficient (malicious) prover can convince the verifier to accept with probability greater than \(\delta + \mathsf{negl}(\lambda )\), where \(\lambda \in \mathbb {N}\) is the security parameter (see Sect. 3 for the formal definition).

A General Compiler. As our first main contribution, we show how to compile any proof of correct exponentiation into a batch proof of correct exponentiation, offering significant savings in both communication and verification time, relative to the naïve transformation. The soundness of this compiler essentially relies on an information-theoretic argument, which is then derandomized using a pseudorandom function. This makes it a generic compiler which can be applied in any group, as long as the underlying proof of correct exponentiation is sound in this group, hence also making it compatible with the new proof of correct exponentiation of Block et al. [BHR+21].

Theorem 1.1

(informal). Let \(\mathbb {G}\) be a group and assume the existence of a one-way function and of a proof of correct exponentiation in \(\mathbb {G}\) with communication complexity \(c = c(\lambda , e)\), verification time \(t = t(\lambda ,e)\) and soundness error \(\delta = \delta (\lambda )\), where \(\lambda \in \mathbb {N}\) is the security parameter and \(e \in \mathbb {N}\) is the exponent. Then, for any \(n,m \in \mathbb {N}\), there exists a batch proof of correct exponentiation for n pairs of elements in the group \(\mathbb {G}\), with communication complexity \(c_\mathsf{batch} = c\cdot m + \lambda \), verification time \(t_\mathsf{batch} = m\cdot t + n\cdot m\cdot \mathsf{poly}(\lambda )\) and soundness error \(\delta _\mathsf{batch} = \delta + 2^{-m}\).

An Improved Compiler Based on the Low Order Assumption. Our second main contribution is an improved compiler, whose soundness is based on the low order assumption in groups of unknown order, recently introduced by Boneh et al. [BBF18]. Roughly speaking, for an integer \(\ell \), the \(\ell \)-low order assumption asserts that one cannot efficiently come up with a group element \(z \ne 1\) and an exponent \(\omega < \ell \) such that \(z^\omega = 1\). This compiler enjoys significant improvements over our general compiler: The communication complexity is now completely independent of the desired soundness guarantee (i.e., one can reduce the soundness error without increasing communication), and the running time of the verifier is also improved. Concretely, we prove the following theorem.

Theorem 1.2

(informal). Let \(\mathbb {G}\) be a group and assume the existence of a one-way function and of a proof of correct exponentiation in \(\mathbb {G}\) with communication complexity \(c = c(\lambda , e)\), verification time \(t = t(\lambda ,e)\) and soundness error \(\delta = \delta (\lambda )\), where \(\lambda \in \mathbb {N}\) is the security parameter and \(e \in \mathbb {N}\) is the exponent. Assume that the \(\ell \)-low order assumption holds in \(\mathbb {G}\) for an integer \(\ell = \ell (\lambda )\). Then, for any \(n \in \mathbb {N}\) and \(s \le \ell \), there exists a batch proof of correct exponentiation for n pairs of elements in the group \(\mathbb {G}\), with communication complexity \(c_\mathsf{batch} = c + \lambda \), verification time \(t_\mathsf{batch} = t + O(n \cdot \log (s) \cdot \mathsf{poly}(\lambda ))\), and soundness error \(\delta _\mathsf{batch} = \delta + 1/s\).

In Sect. 6, we also discuss why the low order assumption is necessary for our compiler to yield the soundness guarantees of Theorem 1.2.

Instantiating the Compiler. The compiler from Theorem 1.2 relies on the same techniques as those underlying Wesolowski’s [Wes20] batch proof (which, as mentioned above, can be traced back to Bellare, Garay and Rabin [BGR98]). However, our compiler is modular and our analysis of its soundness is based solely on the low order assumption (compared to Wesolowski’s reliance on the adaptive root assumption). This means that our compiler can be applied to both the protocols of Wesolowski and of Pietrzak, without making any further assumptions beyond those required by their single-instance protocols (and one-way functions for derandomization purposes). Concretely, there are currently three main candidates for families of groups in which the low order assumption is plausible:Footnote 5

  • The groups \(\boldsymbol{QR_N}\) and \(\boldsymbol{QR_N^+}\). The low order assumption holds information-theoretically in the group \(QR_N\) of quadratic residues modulo N when N is the product of two safe primes (as well as in the isomorphic group \(QR_N^+\) of signed quadratic residues modulo N, in which group membership is efficiently recognizable). For other choices of N, one may instead rely on the assumption that the low order problem is computationally hard in these groups. The reader is referred to Sect. 2 for further details regarding these groups.

  • RSA groups. The low order assumption cannot hold in the RSA group \(\mathbb {Z}_N^*\) since \(-1\in \mathbb {Z}_N^*\) is always of order two in this group. Boneh et al. [BBF18] suggested working over the quotient group \(\mathbb {Z}_N^*/\{\pm 1\}\) instead. We consider two additional possibilities. One is to settle for a slightly weaker soundness guarantee: If the verifier accepts, it must be the case that \(x_i^e \in \{y_i, -y_i \}\) for every \(i \in [n]\). Observe that this requirement indeed seems compatible with many of the applications of VDFs mentioned above.Footnote 6 When N is the product of two safe primes, we show (following Seres and Burcsi [SB20]) that this weaker notion of soundness for our compiler is actually implied by the hardness of factoring N. The second option is to compose our compiler with an additional protocol, specifically tailored in order to prove that \(y_i \ne -x_i^e\) for each i. We discuss this approach below.

  • Class groups of imaginary quadratic fields. The security of the low order assumption in these groups is still unclear [BBF18, BKS+20], but at least for now, there are possible parameters for which the low order assumption remains unbroken in these groups.

Strong Soundness in RSA Groups via Proofs of Order. As discussed above, when our compiler from Theorem 1.2 is used within RSA groups, we can obtain only a weaker form of soundness. The issue is that a malicious prover can still convince the verifier that \(x_i^e = y_i\) for every i, even though there exists an index j for which \(y_j/x_j^e = -1\). To remedy this situation we present a protocol that allows the prover to convince the verifier that \(\mathsf{order}(y_i/x_i^e) \ne 2\) for every i, and hence in particular \(y_i /x_i^e \ne -1\) for every i. The protocol builds on the work of Di Crescenzo et al. [CKK+17], but extends it in a non-trivial manner to save in communication (or proof size, when Fiat-Shamir is applied). It enjoys efficient verification and information-theoretic soundness when the modulus N of the RSA group is the product of two safe primes, and it can be made succinct (i.e., its communication complexity is independent of the number n of pairs of group elements) using a pseudorandom function. Hence, in such groups, executing this protocol in parallel to our compiler from Theorem 1.2 yields a full-fledged sound compiler in RSA groups, without settling for weaker soundness notions or making strong assumptions. In Theorem 1.3 below, by “safe RSA groups” we mean RSA groups whose modulus is the product of two \(\lambda \)-bit safe primes.

Theorem 1.3

(informal). Assume the existence of a one-way function and of a proof of correct exponentiation in safe RSA groups with communication complexity \(c = c(\lambda , e)\), verification time \(t = t(\lambda ,e)\) and soundness error \(\delta = \delta (\lambda )\), where \(\lambda \in \mathbb {N}\) is the security parameter and \(e \in \mathbb {N}\) is the exponent. Then, for any \(n, m \in \mathbb {N}\) and \(s < 2^{\lambda -1}\), there exists a batch proof of correct exponentiation for n pairs of elements in safe RSA groups, with communication complexity \(c_\mathsf{batch} = c + O(\lambda )\), verification time \(t_\mathsf{batch} = t + O(n \cdot m \cdot \log (s) \cdot \mathsf{poly}(\lambda ))\), and soundness error \(\delta _\mathsf{batch} = \delta + 1/s + 2^{-m}\).

A Statistically-Sound Proof of Correct Exponentiation in RSA Groups. To complete the picture, we present a proof of correct exponentiation in standard (safe) RSA groups.Footnote 7 The protocol is obtained by extending Pietrzak’s protocol [Pie19] with techniques similar to those used to prove Theorem 1.3. The protocol actually achieves statistical soundness in safe RSA groups, with very little overhead in terms of communication and verification time when compared to Pietrzak’s protocol.

Theorem 1.4

(informal). There exists a statistically-sound proof of correct exponentiation in safe RSA groups, whose communication complexity and verification time essentially match those of Pietrzak’s protocol.

Though the statistically-sound proof of correct exponentiation of Block et al. [BHR+21] can also be instantiated in safe RSA groups, their protocol incurs a factor \(\lambda \) overhead in communication complexity when compared to Pietrzak’s protocol. In contrast, our protocol only incurs an overhead of factor 2 in communication complexity.

Interpreting Our Results. We make two clarifications in order to help the reader interpret the above results. Firstly, we emphasize that the parameters m (from Theorems 1.1 and 1.3) and s (from Theorems 1.2 and 1.3) do not scale with the number n of pairs \((x_i,y_i)\) to be verified, and can be fine-tuned at will to achieve the desired tradeoff between the soundness error of the batch protocol on the one hand, and the communication complexity and verifier’s running time on the other hand. Secondly, we stress that in all of the above theorems, the polynomials referred to by \(\mathsf{poly}\) are fixed polynomials that depend only on \(\lambda \), and scale with neither n nor the exponent e. These two points have several important implications:

  • The communication overhead incurred by our compilers is completely independent of n.

  • The verification time depends linearly on n, but in all of our compilers, we manage to “decouple” the terms which depend on n from the terms which depend on the original verification time t in the underlying proof of correct exponentiation protocol (and hence we also decouple n from the exponent e). This should be contrasted with the naïve solution discussed above, in which the verification time is \(t \cdot n\). Moreover, observe that some linear dependency on n seems unavoidable, since merely reading the verifier’s input takes time at least n.

  • One can set m to be super-logarithmic in the security parameter \(\lambda \) (e.g., by setting \(m(\lambda ) = \log (\lambda ) \cdot \log ^*(\lambda )\)) in Theorems 1.1 and 1.3, and s to be super-polynomial in Theorems 1.2 and 1.3, to obtain protocols with negligible soundness error, with only slightly greater communication complexities and verification times than those of the underlying proof of correct exponentiation.

Applying the Fiat-Shamir Heuristic. All of our compilers add only a single public coin message from the verifier to the prover. Consequently, if the Fiat-Shamir heuristic [FS86] can be applied in the random oracle model to the underlying proof of correct exponentiation, then it can be applied to the resulting batch proof as well (as long as m and s are set such that the soundness error is negligible). In particular, the Fiat-Shamir heuristic may be applied to the compiled versions (via our compilers) of the protocols of Wesolowski [Wes19], of Pietrzak [Pie19], and of Block et al. [BHR+21] to obtain non-interactive batch proofs of correct exponentiation. See Sect. 5 and the related work of Lombardi and Vaikuntanathan [LV20] for a more exhaustive discussion on the matter.

Communication in the Interactive Setting. As illustrated below, we use pseudorandom functions in order to shrink the length of the public coin message from the verifier to the prover. This is necessary only in the interactive setting, since when the Fiat-Shamir heuristic is applied this message is computed locally by the prover (and hence does not affect the proof’s length). Indeed, verification of VDFs is typically considered to be non-interactive, but we nevertheless believe that exploring interactive verification of VDFs is interesting and well-justified by applications in which verification is done by a single verifier in an online manner, such as VDF-based proofs of storage. Using an interactive protocol in such settings eliminates the need to rely on the Fiat-Shamir transform.

1.2 Additional Related Work and Open Problems

Batch Verification for Group Exponentiation. The line of works that seems most related to ours was initiated by the seminal work of Bellare, Garay and Rabin [BGR98] (following up on the works of Fiat [Fia89], Naccache et al. [NMV+94] and of Yen and Laih [YL95]Footnote 8) and considers the following problem: Let \(\mathbb {G}\) be a cyclic group and let g be a generator of the group. The task, given n exponents \(x_1,\ldots , x_n\) and n group elements \(h_1,\ldots , h_n\), is to verify that \(g^{x_i} = h_i\) for each \(i \in [n]\). Bellare et al. presented several approaches for solving this problem, exhibiting different savings in terms of computational costs vis-à-vis the naïve solution of raising g to the power of each \(x_i\).

Our compilers are inspired by two elegant techniques of Bellare et al. – the “random subsets” technique and the “random exponents” technique – but there are some key differences between their work and ours. Firstly, we embed these techniques within the framework of succinct proofs of correct exponentiation (which we extend to the batch setting). This setting presents its own set of unique technical challenges, the main one being reducing the communication overhead (or proof size, in the non-interactive setting). This challenge does not arise in the setting considered in their work (and in follow-up works), which is motivated by batch verification of signatures. Secondly, the random exponents technique as proposed by Bellare et al. and its analysis explicitly and inherently assume that the group at hand is of prime order. Such groups do not seem to enable VDF constructions, as Rotem, Segev and Shahaf [RSS20] recently showed how to break the sequentiality of any such construction in the generic-group model. We extend the approach underlying the random exponents technique to composite-order groups.

In a concurrent and independent work, Block et al. [BHR+21] also extended the random subsets technique of Bellare et al. that we use to derive Theorem 1.1, in the context of hidden-order groups (though they did not explicitly observe the connection between the work of Bellare et al. and theirs). Their motivation was to extend Pietrzak’s protocol [Pie19] to obtain an information-theoretically sound protocol, while ours is proof batching, but the application of the random subsets technique is quite similar in both cases.

Di Crescenzo et al. [CKK+17] considered the related problem of batch delegation of exponentiation in RSA groups, while extending the random exponents technique of Bellare, Garay and Rabin. Our treatment of RSA groups is also inspired by the techniques of Di Crescenzo et al. but their protocol includes a communication overhead which is linear in the number n of exponentiations to be delegated (and verified) – which in our setting, is exactly what we are trying to avoid. We manage to get rid of this dependency altogether.

A long line of follow-up works succeeded that of Bellare et al., suggesting improvements to their techniques in various settings (see for example [BP00, CL06, CY07, CHP07, CL15] and the references therein). An interesting open question is whether some of these techniques can be used in order to improve our results.

Other VDF Candidates. This work focuses on VDF candidates which are based on the repeated squaring function in groups of unknown order. Other candidates have also been proposed over the last couple of years. Some of these are based on different assumptions, such as candidates based on super-singular isogenies [FMP+19, Sha19] and the VeeDo VDF candidate in prime fields [Sta20], while other constructions (e.g., [EFK+20, DGM+20]) achieve various desired properties. An interesting possible direction for future research is to enable batch verification for these candidate VDFs as well, either relying on our techniques or presenting new ones tailored specifically for these candidates.

Necessity of One-Way Functions in the Interactive Setting. Informally put, our protocols use some function f to derandomize a long public coin message from the verifier to the prover. In our proposed instantiations, f is implemented using a cryptographic pseudorandom generator or pseudorandom function (both are known to exist assuming one-way functions [GGM86, Nao91, HIL+99]), and we show that the soundness of this approach is “as good” as the soundness before the derandomization. However, in all of our protocols, we only need the output of f on a uniformly-random input to satisfy some specific statistical property. Hence, an interesting open question is whether our reliance on one-way functions is necessary, or whether f can be instantiated without them while still offering comparable security guarantees. One possibility is to use a Nisan-Wigderson type pseudorandom generator [NW94, IW97], relying on worst-case assumptions. Another is to use \(\epsilon \)-biased sets (see for example [NN93, AGH+92, Ta-17] and the references therein), though this approach seems to inherently yield a slightly worse communication to soundness tradeoff.

1.3 Technical Overview

In this section we provide a high-level overview of the techniques used throughout the paper. In this overview, we ignore various technical subtleties that arise in the full proofs.

The Basic Random Subset Compiler. We start by describing the basic idea which underlies the generic compiler guaranteed by Theorem 1.1. Let \(\varPi \) be any (single-instance) proof of correct exponentiation, and let \((x_1,y_1),\ldots , (x_n,y_n)\) and \(e\in \mathbb {N}\) be the n pairs of group elements and the exponent that are shared by the prover and the verifier as input. Recall that the prover wishes to convince the verifier that \(y_i = x_i^e\) for each \(i \in [n]\). The basic technical observation underlying the compiler is that if for some index \(i\in [n]\) it holds that \(y_i \ne x_i^e\), then with probability at least 1/2 over the choice of a uniformly random subset \(\mathcal {S}\) of [n], it holds that \(\prod _{j\in \mathcal {S}} y_j \ne \left( \prod _{j\in \mathcal {S}} x_j \right) ^e\). This observation then naturally lends itself to obtain a batch proof of correct exponentiation: First, the verifier simply chooses such a subset \(\mathcal {S}\) uniformly at random and sends it to the prover. Then, the verifier and the prover execute \(\varPi \) on shared input \((x' = \prod _{j\in \mathcal {S}} x_j, y' = \prod _{j\in \mathcal {S}} y_j, e)\); that is, the prover uses \(\varPi \) to convince the verifier that indeed \(y' = (x')^e\). By the above observation, if \(\varPi \) has soundness error \(\delta \), then the compiled batch protocol has soundness error at most \(\delta + 1/2\). The reader is referred to Sect. 4 for a formal description of the compiler and its analysis.
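
For concreteness, the aggregation step performed by the verifier can be sketched as follows (a toy instantiation in \(\mathbb {Z}_N^*\); the function and variable names are ours and are not part of the protocol):

```python
import secrets

def aggregate_random_subset(pairs, N):
    """Sample a uniformly random subset S of [n] and form the aggregated
    instance (x', y') = (prod_{j in S} x_j, prod_{j in S} y_j) modulo N."""
    subset = [j for j in range(len(pairs)) if secrets.randbits(1)]
    x_agg, y_agg = 1, 1
    for j in subset:
        x_agg = (x_agg * pairs[j][0]) % N
        y_agg = (y_agg * pairs[j][1]) % N
    return subset, x_agg, y_agg

# The prover then uses the underlying protocol to convince the verifier that
# y_agg = (x_agg)^e; the verifier never computes x_agg^e herself.
```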

Amplifying Soundness and Reducing Communication. The above compiler suffers from two main drawbacks: The soundness error of the resulting batch protocol is at least 1/2, and its communication complexity is necessarily linear in the number n of pairs of group elements, since a uniformly chosen subset of [n] has n bits of entropy. Fortunately, this situation is easy to remedy by introducing two simple modifications to the protocol. First, instead of choosing just one subset \(\mathcal {S}\) of [n], the verifier chooses m such subsets \(\mathcal {S}_1,\ldots , \mathcal {S}_m\) for some integer m which parameterizes the compiler. Then, the verifier and the prover run m parallel executions of the underlying protocol \(\varPi \), where in the ith execution, they run on shared input \((x'_i = \prod _{j\in \mathcal {S}_i} x_j, y'_i = \prod _{j\in \mathcal {S}_i} y_j, e)\). Suppose that \(y_j\ne x_j^e\) for some \(j\in [n]\). Using the observation from the previous paragraph, it is straightforward that if \(\mathcal {S}_1,\ldots , \mathcal {S}_m\) are chosen independently and uniformly at random from all subsets of [n], then the probability that \(y'_i = (x'_i)^e\) for each \(i \in [m]\) is at most \(2^{-m}\). Hence, if \(\varPi \) has soundness error \(\delta \), then the compiled batch protocol has soundness error at most \(\delta + 2^{-m}\) (regardless of whether the soundness of the underlying protocol \(\varPi \) is amplified via parallel repetition).

In order to attend to the large communication complexity of the compiled protocol (now the verifier has to send \(m\cdot n\) bits to the prover), we derandomize the choice of the sets \(\mathcal {S}_1,\ldots , \mathcal {S}_m\). Instead of sampling these sets explicitly and sending their description to the prover, the verifier now samples and sends a short key k to a pseudorandom function \(\mathsf {PRF}\). This key can now succinctly represent \(\mathcal {S}_1,\ldots , \mathcal {S}_m\); for example by letting \(j\in \mathcal {S}_i\) if and only if \(\mathsf {PRF}_k(i,j) = 1\), for every \(i \in [m]\) and \(j\in [n]\). Roughly speaking, the security of the pseudorandom function guarantees that if \(y_j\ne x_j^e\) for some \(j\in [n]\), then the probability that \(y'_i = (x'_i)^e\) for each \(i \in [m]\) is at most \(2^{-m} + \mathsf{negl}(\lambda )\), where \(\lambda \) is the security parameter. See Sect. 5 for a formal description and analysis of the strengthened protocol.
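
A minimal sketch of this derandomization, with HMAC-SHA256 standing in for the pseudorandom function (the key length, the encoding of (i, j) and all names below are our own illustrative choices):

```python
import hmac, hashlib, secrets

def prf_bit(key: bytes, i: int, j: int) -> int:
    """Stand-in PRF: the low-order bit of HMAC-SHA256(key, i || j)."""
    msg = i.to_bytes(8, "big") + j.to_bytes(8, "big")
    return hmac.new(key, msg, hashlib.sha256).digest()[-1] & 1

def derive_subsets(key: bytes, m: int, n: int):
    """Expand a short key into m pseudorandom subsets of [n]:
    j is placed in S_i if and only if PRF_key(i, j) = 1."""
    return [[j for j in range(n) if prf_bit(key, i, j)] for i in range(m)]

# The verifier samples and sends a short key instead of m*n random bits; both
# parties then run m parallel executions of the underlying protocol on the
# aggregated instances (prod_{j in S_i} x_j, prod_{j in S_i} y_j, e).
key = secrets.token_bytes(16)
subsets = derive_subsets(key, m=40, n=1000)
```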

The Random Exponents Compiler. We now describe the idea behind our improved compiler based on the low order assumption (Theorem 1.2). The observation is that the basic Random Subset Compiler as described above can be viewed in a more general manner: The verifier chooses \(\alpha _1,\ldots , \alpha _n \leftarrow \{0,1\}\), and then the two parties invoke the underlying protocol \(\varPi \) on joint input \(x' = \prod _{i\in [n]} x_i^{\alpha _i}\), \(y' = \prod _{i\in [n]} y_i^{\alpha _i}\) and e. The idea is to now let the verifier choose \(\alpha _1,\ldots , \alpha _n\) from a large domain; concretely, from the set [s] for some appropriately chosen parameter \(s \in \mathbb {N}\). As before, after the verifier chooses \(\alpha _1,\ldots , \alpha _n\) and sends them over to the prover, the two parties invoke the underlying protocol \(\varPi \) on shared input \((x',y',e)\) for asserting that \(y' = (x')^e\). It should be noted that as in the Random Subset Compiler, the choice of \(\alpha _1,\ldots ,\alpha _n\) can be derandomized using a pseudorandom function in order to save in communication, without significantly affecting the soundness of the compiler.
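
The aggregation step of this compiler can be sketched as follows (again a toy instantiation in \(\mathbb {Z}_N^*\) with names of our choosing); the derandomized variant would derive the exponents \(\alpha _1,\ldots ,\alpha _n\) from a short PRF key exactly as in the previous sketch:

```python
import secrets

def aggregate_random_exponents(pairs, s, N):
    """Sample alpha_1,...,alpha_n uniformly from [s] and form the instance
    x' = prod_i x_i^{alpha_i}, y' = prod_i y_i^{alpha_i} (mod N); the prover
    then proves y' = (x')^e using the underlying single-instance protocol."""
    alphas = [1 + secrets.randbelow(s) for _ in pairs]
    x_agg, y_agg = 1, 1
    for (x, y), alpha in zip(pairs, alphas):
        x_agg = (x_agg * pow(x, alpha, N)) % N
        y_agg = (y_agg * pow(y, alpha, N)) % N
    return alphas, x_agg, y_agg
```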

The proof of soundness of the compiled protocol now has to rely on the s-low order assumption, which, roughly speaking, says that it should be hard to find a group element x and a positive integer \(\omega < s\) such that \(x^\omega = 1\). We wish to argue that if the s-low order assumption holds in the group at hand and \(y_j\ne x_j^e\) for some \(j\in [n]\), then enlarging the domain from which \(\alpha _1,\ldots , \alpha _n\) are drawn (up to and including [s]) proportionally reduces the probability that \(y' = (x')^e\). This is done by a reduction, which we now informally describe, to the s-low order assumption. For the formal statement and reduction, we refer the reader to Sect. 6.

Let \((x_1,y_1),\ldots , (x_n,y_n)\) be n pairs of elements in a group \(\mathbb {G}\) such that at least one index i satisfies \(y_i \ne x_i^e\), and let \(i^*\) be the first such index. Consider the following algorithm A for finding a low order element in \(\mathbb {G}\). A first samples \(n+1\) integers \(\alpha _1,\ldots ,\alpha _{i^*-1},\alpha _{i^*+1},\ldots ,\alpha _n,\beta , \beta '\) uniformly at random from [s]. Then, it checks that \(\left( x_{i^*}^\beta \cdot \prod _{i\in [n] \setminus \{ i^*\}} x_i^{\alpha _i}\right) ^e = y_{i^*}^\beta \cdot \prod _{i\in [n] \setminus \{ i^*\}} y_i^{\alpha _i}\), that \(\left( x_{i^*}^{\beta '} \cdot \prod _{i\in [n] \setminus \{ i^*\}} x_i^{\alpha _i}\right) ^e = y_{i^*}^{\beta '} \cdot \prod _{i\in [n] \setminus \{ i^*\}} y_i^{\alpha _i}\), and that \(\beta \ne \beta '\). If any of these conditions does not hold, it aborts. Otherwise, if all of these conditions check out, A outputs the group element \(z = y_{i^*}/x_{i^*}^e\) together with the exponent \(\omega = |\beta - \beta '|\). It is easy to verify that if both of the equalities checked by A hold, then this implies that \(z^\omega = 1\), while the inequality checked by A implies that indeed \(\omega \ne 0\). Now assume towards contradiction that the probability that \(y' = (x')^e\) is at least \(1/s + \epsilon \) for some \(\epsilon > 0\). Then, a careful analysis shows that the probability that A does not abort is at least \(\epsilon ^2\). Informally, this implies that if the s-low order assumption holds in \(\mathbb {G}\) and the underlying protocol \(\varPi \) has soundness error \(\delta \), then the compiled batch protocol has soundness error at most \(\delta + 1/s +\mathsf{negl}(\lambda )\).
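
The following sketch mirrors the reduction described above. It is a toy version in \(\mathbb {Z}_N^*\) (names ours) that checks the two equalities directly and is therefore only meaningful for small exponents e; in the actual reduction these checks correspond to the cheating prover convincing the verifier on two related aggregated instances.

```python
import secrets

def low_order_from_bad_batch(pairs, e, s, N, i_star):
    """Given pairs with y_{i*} != x_{i*}^e (mod N), try to output (z, omega)
    with z != 1, 0 < omega < s and z^omega = 1 (mod N)."""
    alphas = [1 + secrets.randbelow(s) for _ in pairs]
    beta, beta_prime = 1 + secrets.randbelow(s), 1 + secrets.randbelow(s)

    def equality_holds(b):
        # Check (x_{i*}^b * prod_{i != i*} x_i^{alpha_i})^e
        #        == y_{i*}^b * prod_{i != i*} y_i^{alpha_i}   (mod N).
        x_agg, y_agg = pow(pairs[i_star][0], b, N), pow(pairs[i_star][1], b, N)
        for i, (x, y) in enumerate(pairs):
            if i != i_star:
                x_agg = (x_agg * pow(x, alphas[i], N)) % N
                y_agg = (y_agg * pow(y, alphas[i], N)) % N
        return pow(x_agg, e, N) == y_agg

    if beta != beta_prime and equality_holds(beta) and equality_holds(beta_prime):
        # z = y_{i*} / x_{i*}^e  (modular inverse via pow, Python 3.8+)
        z = (pairs[i_star][1] * pow(pairs[i_star][0], -e, N)) % N
        return z, abs(beta - beta_prime)
    return None  # abort
```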

Strong Soundness in Safe RSA Groups. Recall that, as mentioned in Sect. 1.1, the s-low order assumption cannot hold in the group \(\mathbb {Z}^*_N\) for any \(s > 2\), since \(N-1\) is always an element of order 2 in the group. Therefore, the Random Exponents Compiler obtains a weaker form of soundness when applied in \(\mathbb {Z}^*_N\), guaranteeing only that \(y_i = \pm x_i^e\). To counter this problem, we present a protocol for proving that \(\mathsf{order}(y_i/x_i^e) \ne 2\) for every i. Our basic approach follows a technique by Di Crescenzo et al. [CKK+17] for proving that \(\mathsf{order}(y/x^e) \ne 2\) for \(x,y \in \mathbb {Z}_N^*\) and an odd exponent e. In their protocol, the prover computes \(w = x^{(e+1)/2}\) and sends it to the verifier, who then accepts if and only if \(w^2 = x\cdot y\). The idea is that if N is the product of two safe primes and \(\mathsf{order}(y/x^e) = 2\), then \(x\cdot y\) must be a quadratic non-residue modulo N. This is true since, as we prove, all group elements of order 2 in \(\mathbb {Z}_N^*\), including \(y/x^e\), are quadratic non-residues modulo N. Now observe that \(x\cdot y = (y/x^e)\cdot x^{e+1}\). Since e is odd, \(x^{e+1}\) is a quadratic residue modulo N, and we conclude that \(x\cdot y\) is a quadratic non-residue modulo N. This means that \(w^2\), which is of course a quadratic residue modulo N, cannot be equal to \(x\cdot y\) and the verifier will inevitably reject.

Generalizing this approach to arbitrary exponents is fairly straightforward, by having the prover compute w as \(x^{\lceil (e+1)/2 \rceil }\) and then having the verifier check that \(w^2 = x^{1+ (e + 1 \mod 2)} \cdot y\). The more acute issue is that when moving to the batch setting, the naïve way for verifying that \(\mathsf{order}(y_i/x_i^e) \ne 2\) for every \(i\in [n]\) is by running the above protocol n times in parallel, which results in communication complexity which is linear in n. To avoid this overhead, we combine the ideas of Di Crescenzo et al. with techniques from our Random Subset Compiler. Concretely, in our final protocol, the verifier chooses a key k to a pseudorandom function, to succinctly represent m random subsets \(\mathcal {S}_1,\ldots , \mathcal {S}_m\) of [n], and sends k to the prover. The prover then computes \(w_j := \prod _{i \in \mathcal {S}_j} x_i^{\lceil (e+1)/2 \rceil }\) for every \(j\in [m]\) and sends \(w_1,\ldots , w_m\) to the verifier. Finally, the verifier computes \(t_j := \prod _{i \in \mathcal {S}_j} x_i^{1+ (e + 1 \mod 2)} \cdot y_i\) for every \(j\in [m]\) and accepts if and only if \(t_j= w_j^2\) for all \(j \in [m]\). A careful analysis shows that if \(\mathsf{order}(y_i/x_i^e) = 2\) for some \(i\in [n]\) and the subsets \(\mathcal{S}_1, \ldots , \mathcal{S}_m\) are chosen uniformly and independently at random, then each \(t_j\) is a quadratic non-residue with probability at least 1/2. This implies that the verifier will accept with probability at most \(2^{-m} + \mathsf{negl}(\lambda )\). We refer the reader to the full version for a detailed description of our protocol and a formal analysis of its soundness.
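
A sketch of this batch order check (HMAC-SHA256 again standing in for the pseudorandom function; the helper names are ours) might look as follows:

```python
import hmac, hashlib

def prf_bit(key: bytes, j: int, i: int) -> int:
    msg = j.to_bytes(8, "big") + i.to_bytes(8, "big")
    return hmac.new(key, msg, hashlib.sha256).digest()[-1] & 1

def prod_over_subset(pairs, key, j, term, N):
    """Product of term(x_i, y_i) over the pseudorandom subset S_j."""
    acc = 1
    for i, (x, y) in enumerate(pairs):
        if prf_bit(key, j, i):
            acc = (acc * term(x, y)) % N
    return acc

def prover_messages(pairs, e, N, key, m):
    """Prover: w_j = prod_{i in S_j} x_i^{ceil((e+1)/2)} (mod N), j in [m]."""
    exp = (e + 2) // 2  # ceil((e + 1) / 2)
    return [prod_over_subset(pairs, key, j, lambda x, y: pow(x, exp, N), N)
            for j in range(m)]

def verifier_accepts(pairs, e, N, key, m, w):
    """Verifier: accept iff w_j^2 = prod_{i in S_j} x_i^{1+((e+1) mod 2)} * y_i."""
    small = 1 + ((e + 1) % 2)
    for j in range(m):
        t_j = prod_over_subset(pairs, key, j,
                               lambda x, y: (pow(x, small, N) * y) % N, N)
        if t_j != pow(w[j], 2, N):
            return False
    return True
```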

A Statistically-Sound Protocol in Safe RSA Groups. Our statistically-sound proof of correct exponentiation in RSA groups is obtained by extending the protocol of Pietrzak [Pie19] using techniques similar to those detailed above. We start by recalling Pietrzak’s protocol. Suppose that the prover wishes to convince the verifier that \(y = x^{2^T}\) for a group \(\mathbb {G}\), elements \(x,y\in \mathbb {G}\) and an integer T, and assume for ease of presentation that \(T=2^t\) for some \(t\in \mathbb {N}\). At the beginning of the protocol, the prover computes \(z = x^{2^{T/2}}\) and sends z to the verifier. Now the prover wishes to prove to the verifier that indeed \(z = x^{2^{T/ 2}}\) and \(y = z^{2^{T/2}}\), since if \(y \ne x^{2^T}\) then it must be that \(z \ne x^{2^{T/ 2}}\) or \(y \ne z^{2^{T/2}}\). One possibility is to recurse on both claims, until the exponent is small enough for the verifier to verify the claims herself. However, in this manner the number of sub-claims will blow up very quickly, resulting in a lengthy proof and in a long verification time. So instead, Pietrzak’s idea is to merge both claims using (implicitly) the random exponents technique of Bellare, Garay and Rabin [BGR98]. The verifier samples a random integer \(r \leftarrow [2^\lambda ]\) and sends it to the prover, and then the two parties recurse on the (single) instance \((x' = x^r\cdot z, y' = z^r\cdot y, T' = T/2)\). That is, the prover now needs to convince the verifier of the claim \(y' = (x')^{2^{T'}}\), where \(T'\) is half the size of T. Suppose now that all elements in the group are of order at least \(2^\lambda \).Footnote 9 In this case, if \(y \ne x^{2^T}\) then there is at most one value of \(r \in [2^\lambda ]\) for which \(y' = (x')^{2^{T'}}\) and hence \(\Pr \left[ y' = (x')^{2^{T'}} \right] \le 2^{-\lambda }\) over the choice of r. This recursion continues for \(\log T = t\) rounds until \(T = 1\), in which case the verifier can simply check the relation \(y = x^2\) herself using a single squaring in the group.
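
The halving recursion can be summarized by the following interactive sketch (a toy instantiation modulo N with names of our choosing; for simplicity the prover recomputes each midpoint from scratch rather than reusing stored intermediate squarings):

```python
import secrets

def pietrzak_interactive(x, y, T, N, lam=128):
    """Verify the claim y = x^(2^T) (mod N), with T a power of two, by
    halving the instance log T times and checking a single squaring at the end."""
    while T > 1:
        # Prover: send the midpoint z = x^(2^(T/2)).
        z = x
        for _ in range(T // 2):
            z = (z * z) % N
        # Verifier: send a random challenge r; both parties halve the claim.
        r = 1 + secrets.randbelow(2 ** lam)
        x = (pow(x, r, N) * z) % N   # x' = x^r * z
        y = (pow(z, r, N) * y) % N   # y' = z^r * y
        T //= 2
    return (x * x) % N == y          # base case: check y = x^2 directly
```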

In order to extend this protocol to the group \(\mathbb {Z}_N^*\) for a modulus N that is the product of two safe primes, we use similar techniques to those used above for extending the Random Exponents Compiler. Concretely, consider a round in Pietrzak’s protocol in which the prover wants to prove that \(y = x^{2^T}\). In addition to \(z = x^{2^{T/2}}\), the prover now also computes \(w = x^{2^{T/2 - 1} + 1}\) and sends it to the verifier. The verifier then checks that \(x^2 \cdot z = w^2\), and if that is not the case then the verifier rejects immediately. This additional verification is made in each of the \(\log T\) rounds of the protocol, and if the verifier does not reject in any of the rounds and the check for \(T=1\) goes through, then the verifier accepts.
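
In code, the extra message and check added to each round might look as follows (continuing the toy sketch above; the assertion that T is even reflects the assumption \(T = 2^t\)):

```python
def prover_extra_message(x, T, N):
    """Compute z = x^(2^(T/2)) together with the extra element w = x^(2^(T/2 - 1) + 1)."""
    assert T % 2 == 0 and T >= 2
    half = x
    for _ in range(T // 2 - 1):
        half = (half * half) % N     # half = x^(2^(T/2 - 1))
    z = (half * half) % N            # z = x^(2^(T/2))
    w = (half * x) % N               # w = x^(2^(T/2 - 1) + 1)
    return z, w

def verifier_extra_check(x, z, w, N):
    """Reject the round unless x^2 * z = w^2 (mod N)."""
    return (pow(x, 2, N) * z) % N == pow(w, 2, N)
```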

To analyze the soundness of our protocol, suppose that all group elements (other than the identity) are either of order 2 or have order at least \(2^\lambda \); this is the case for \(\mathbb {Z}_N^*\) where N is the product of two large enough safe primes. Let x and y be group elements such that \(y \ne x^{2^T}\), and assume that z and w are the group elements which the prover sends to the verifier. If \(x^2\cdot z \ne w^2\), then the verifier will surely reject and we are done, so for the rest of the analysis we assume that \(x^2\cdot z = w^2\). Consider two cases:

  • If \(z = x^{2^{T/2}}\), then for any \(r \in [2^\lambda ]\) it holds that \((x^{r}\cdot z)^{2^{T/2}} = z^r \cdot x^{2^T} \ne z^r \cdot y\). In other words, \(\Pr \left[ y' = (x')^{2^{T'}} \right] = 0\), where \(x' = x^{r}\cdot z\), \(y' = z^r\cdot y\), \(T' = T/2\) and the probability is taken over \(r\leftarrow [2^\lambda ]\).

  • If \(z \ne x^{2^{T/2}}\), then we prove that there is at most one value \(r\in [2^\lambda ]\) for which \((x^r \cdot z)^{2^{T/2}} = z^r \cdot y\). Assume towards contradiction otherwise; that is, that there are two distinct integers \(r, r' \in [2^\lambda ]\) for which this equality holds. Rearranging, this means that \((x^{2^{T/2}}/z)^{d} = 1\) for \(d = r - r'\). Since \(x^{2^{T/2}} \ne z\) and \(0< |d| < 2^\lambda \), we obtain that the order of \(x^{2^{T/2}}/z\) is greater than 1 and less than \(2^\lambda \), and hence this order must be 2. On the one hand, this means that \(x^{2^{T/2}}/z\) is a quadratic non-residue modulo N (recall that all elements of order 2 in a safe RSA group are quadratic non-residues), and hence z is a quadratic non-residue modulo N. But on the other hand, our assumption that \(x^2\cdot z = w^2\) implies that z is a quadratic residue modulo N, arriving at a contradiction.

Overall, we obtain that in each round whose input \((x,y,T)\) satisfies \(y \ne x^{2^T}\), the probability that the input \((x',y',T')\) to the next round will satisfy \(y' = (x')^{2^{T'}}\) is at most \(2^{-\lambda }\) over the choice of \(r\leftarrow [2^\lambda ]\). The soundness of our protocol then follows by taking a union bound over all rounds. We refer the reader to the full version for a detailed description of our protocol and a formal analysis of its soundness.

1.4 Paper Organization

The remainder of this paper is organized as follows. First, in Sect. 2 we present the basic notation, mathematical background and standard cryptographic primitives that are used throughout the paper. In Sect. 3 we formally define proofs of correct exponentiation and their batch variant. In Sect. 4 we present a simplified version of our Random Subset Compiler for general groups; and then in Sect. 5 we present the amendments required in order to obtain the full-fledged compiler. In Sect. 6 we present our improved compiler and analyze its security based on the low order assumption.

Due to space limitations some of our contributions appear in the full version of this paper. In particular, in the full version we give tighter security analyses for our more efficient compiler in the specific cases of \(QR_N^+\) and RSA groups. In addition, we show how to obtain strong soundness for this compiler in safe RSA groups, and present our new proof of correct exponentiation in such groups.

2 Preliminaries

In this section we present the basic notions and standard cryptographic tools that are used in this work. For an integer \(n \in \mathbb {N}\) we denote by [n] the set \(\{1,\ldots , n\}\). For a set \(\mathcal {X}\), we denote by \(2^{\mathcal {X}}\) the power set of \(\mathcal {X}\); i.e., the set which contains all subsets of \(\mathcal {X}\) (including the empty set and \(\mathcal {X}\) itself). For a distribution X we denote by \(x \leftarrow X\) the process of sampling a value x from the distribution X. Similarly, for a set \(\mathcal {X}\) we denote by \(x \leftarrow \mathcal {X}\) the process of sampling a value x from the uniform distribution over \(\mathcal {X}\). A function \(\nu : \mathbb {N} \rightarrow \mathbb {R}^+\) is negligible if for any polynomial \(p(\cdot )\) there exists an integer N such that for all \(n > N\) it holds that \(\nu (n) \le 1/p(n)\).

Pseudorandom Functions. We use the following standard notion of a pseudorandom function. Let \(\mathsf {PRF}= (\mathsf {PRF}.\mathsf {Gen}, \mathsf {PRF}.\mathsf {Eval})\) be a function family over domain \(\{ \mathcal {X}_\lambda \}_{\lambda \in \mathbb {N}}\) with range \(\{ \mathcal {Y}_\lambda \}_{\lambda \in \mathbb {N}}\) and key space \(\{ \mathcal {K}_\lambda \}_{\lambda \in \mathbb {N}}\), such that:

  • \(\mathsf {PRF}.\mathsf {Gen}\) is a probabilistic polynomial-time algorithm, which takes as input the security parameter \(\lambda \in \mathbb {N}\) and outputs a key \(K \in \mathcal {K}_\lambda \).

  • \(\mathsf {PRF}.\mathsf {Eval}\) is a deterministic polynomial-time algorithm, which takes as input a key \(K\in \mathcal {K}_\lambda \) and a domain element \(x \in \mathcal {X}_\lambda \) and outputs a value \(y \in \mathcal {Y}_\lambda \).

For ease of notation, for a key \(K\in \mathcal {K}_\lambda \), we denote by \(\mathsf {PRF}_K(\cdot )\) the function \(\mathsf {PRF}.\mathsf {Eval}(K,\cdot )\). We also assume without loss of generality that for every \(\lambda \in \mathbb {N}\), it holds that \(\mathcal {K}_\lambda = \{0,1\}^\lambda \) and that \(\mathsf {PRF}.\mathsf {Gen}(1^\lambda )\) simply samples K from \(\{0,1\}^\lambda \) uniformly at random. Using these conventions, the following definition captures the standard notion of a pseudorandom function family.

Definition 2.1

A function family \(\mathsf {PRF}= (\mathsf {PRF}.\mathsf {Gen}, \mathsf {PRF}.\mathsf {Eval})\) is pseudorandom if for every probabilistic polynomial-time algorithm \(\mathsf{D}\), there exists a negligible function \(\nu (\cdot )\) such that

$$\begin{aligned} \mathsf {Adv}_{\mathsf {PRF}, \mathsf{D}}(\lambda ) {\mathop {=}\limits ^\mathsf{def}} \left| \Pr _{K \leftarrow \{0,1\}^\lambda } \left[ \mathsf{D}(1^\lambda )^{\mathsf {PRF}_K(\cdot )} = 1 \right] - \Pr _{f \leftarrow \mathcal {F}_\lambda } \left[ \mathsf{D}(1^\lambda )^{f(\cdot )} = 1 \right] \right| \le \nu (\lambda ), \end{aligned}$$

for all sufficiently large \(\lambda \in \mathbb {N}\), where \(\mathcal {F}_\lambda \) is the set of all functions mapping \(\mathcal {X}_\lambda \) into \(\mathcal {Y}_\lambda \).

RSA Groups and the Factoring Assumption. We will use the following formalization in order to reason about ensembles of RSA moduli and the hardness of finding their factorizations. Let \(\mathsf {ModGen}\) be a probabilistic polynomial-time algorithm, which takes as input the security parameter \(\lambda \in \mathbb {N}\), and outputs a bi-prime modulus \(N = p\cdot q\) and possibly additional parameters \(\mathsf {pp}\).

Definition 2.2

The factoring assumption holds with respect to modulus generation algorithm \(\mathsf {ModGen}\) if for every probabilistic polynomial time algorithm \(\mathsf{A}\), there exists a negligible function \(\nu (\cdot )\) such that

$$\begin{aligned} \mathsf {Adv}^\mathsf{Factor}_{\mathsf {ModGen}, \mathsf{A}}(\lambda ) {\mathop {=}\limits ^\mathsf{def}} \Pr \left[ \mathsf{A}(N, \mathsf {pp}) = (p,q) \; : \; (N = p\cdot q, \mathsf {pp}) \leftarrow \mathsf {ModGen}(1^\lambda ) \right] \le \nu (\lambda ) \end{aligned}$$

for all sufficiently large \(\lambda \in \mathbb {N}\).

The following simple lemma (see for example [Bon99]) states that it is easy to find a factorization of an RSA modulus N given a non-trivial square root of unity in the RSA group \(\mathbb {Z}^*_N\).

Lemma 2.3

There exists a deterministic algorithm \(\mathsf{A}\), such that for every pair (p, q) of primes and every group element \(x \in \mathbb {Z}^*_N\) for which \(x^2 = 1\) and \(x\not \in \{1,-1\}\), where \(N = p\cdot q\), it holds that \(\mathsf{A}(N, x)\) outputs p and q. Moreover, \(\mathsf{A}\) runs in time polynomial in \(\log (N)\).
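
One such algorithm is the standard gcd computation: since \(x^2 \equiv 1 \pmod {N}\) and \(x \not \equiv \pm 1 \pmod {N}\), N divides \((x-1)(x+1)\) but divides neither factor, so \(\gcd (x-1,N)\) is a non-trivial factor of N. A minimal sketch (names ours):

```python
from math import gcd

def factor_from_square_root_of_unity(N, x):
    """Given x with x^2 = 1 (mod N) and x not in {1, N-1}, recover the
    factorization of N = p*q via a single gcd computation."""
    assert pow(x, 2, N) == 1 and x % N not in (1, N - 1)
    p = gcd(x - 1, N)
    return p, N // p
```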

Using Safe Primes. We will sometimes focus on the case in which the RSA modulus N is the product of two safe primes. That is, \(N = p'\cdot q'\), such that \(p'\) and \(q'\) are primes and there exist primes p and q for which \(p' = 2p+1\) and \(q' = 2q + 1\). In this case, the order of the RSA group \(\mathbb {Z}^*_N\) is \(\varphi (N) = 4\cdot p\cdot q\), where \(\varphi (\cdot )\) is Euler’s totient function.
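
For illustration, a toy modulus generation algorithm along these lines might look as follows (a sketch only, assuming the sympy library for primality testing; any primality test would do, and a real ModGen would use cryptographically sized parameters):

```python
from sympy import isprime, randprime  # assumed available for this sketch

def gen_safe_rsa_modulus(bits):
    """Sample N = p' * q' with p' = 2p + 1 and q' = 2q + 1 all prime, so that
    the order of Z_N^* is phi(N) = 4 * p * q."""
    def safe_prime(b):
        while True:
            p = randprime(2 ** (b - 2), 2 ** (b - 1))
            if isprime(2 * p + 1):
                return 2 * p + 1
    return safe_prime(bits // 2) * safe_prime(bits // 2)
```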

The Group \(\boldsymbol{{QR}_N^+.}\) Another group of interest in this work is the group \({QR}_N\) of quadratic residues modulo N, where N is an RSA modulus generated by the modulus generation algorithm \(\mathsf {ModGen}\). This is the group defined by

$$\begin{aligned} {QR}_N{\mathop {=}\limits ^\mathsf{def}} \left\{ x^2 \mod N \; : \; x\in \mathbb {Z}^*_N \right\} . \end{aligned}$$

The order of the group \({QR}_N\) is \(\varphi (N)/4\). If N is the product of two safe primes \(p' = 2p+1\) and \(q' = 2q +1\), this means the order of \({QR}_N\) is \(p \cdot q\).

We will also consider the group \({QR}_N^+\) of signed quadratic residues modulo N, defined by

$$\begin{aligned} {QR}_N^+ {\mathop {=}\limits ^\mathsf{def}} \left\{ |x| \; : \; x \in {QR}_N\right\} , \end{aligned}$$

where the absolute value operator \(|\cdot |\) is with respect to the representation of \(\mathbb {Z}_N^*\) elements as elements in \(\{ -(N-1)/2,\ldots ,(N-1)/2 \}\). We consider this group because membership in \({QR}_N^+\) can be decided in polynomial time,Footnote 10 and we will implicitly use this fact when reasoning about these groups. The map \(|\cdot |\) acts as an isomorphism from \({QR}_N\) to \({QR}_N^+\), and hence \({QR}_N^+\) is also of order \(\varphi (N)/4\). For a more in-depth discussion on the use of \({QR}_N^+\) instead of \({QR}_N\) see [FS00, HK09, Pie19].
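
Concretely, when N is the product of two (sufficiently large) safe primes, an integer x lies in \({QR}_N^+\) if and only if \(0 < x \le (N-1)/2\) and the Jacobi symbol of x modulo N equals 1, which yields the following sketch of the map \(|\cdot |\) and of the membership test (the Jacobi symbol is computed with the standard binary algorithm; names are ours):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via the standard binary algorithm."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def absolute_value(x, N):
    """The map |.|: represent x in {-(N-1)/2, ..., (N-1)/2} and drop the sign."""
    return x if x <= (N - 1) // 2 else N - x

def in_qr_plus(x, N):
    """Membership test for QR_N^+ when N is a product of two large safe primes."""
    return 0 < x <= (N - 1) // 2 and jacobi(x, N) == 1
```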

Working Over General Groups. Some of the results in this paper are more general, and do not assume working over a specific group. In these cases, the algorithm \(\mathsf {ModGen}\) will be replaced by a group generation algorithm \(\mathsf {GGen}\). This is a probabilistic polynomial-time algorithm which takes as input the security parameter and outputs a description of a group \(\mathbb {G}\), and possibly additional public parameters \(\mathsf {pp}\). All groups in this paper are assumed to be abelian and we will not note this explicitly hereinafter. We will also implicitly assume that for all groups considered in this paper, their group operation is implementable in time polynomial in the security parameter \(\lambda \).

The Low Order Assumption. We will rely on the following formalization of the low order assumption, put forth by Boneh et al. [BBF18] as a prerequisite for instantiating Pietrzak’s protocol [Pie19] in general groups. For a group \(\mathbb {G}\), let \(1_\mathbb {G}\) denote the identity element of the group.

Definition 2.4

Let \(\mathsf {GGen}\) be a group generation algorithm, and let \(d = d(\lambda )\) be an integer function of the security parameter \(\lambda \in \mathbb {N}\). We say that the d-low order assumption holds with respect to \(\mathsf {GGen}\) if for every probabilistic polynomial-time algorithm \(\mathsf {A}\), there exists a negligible function \(\nu (\cdot )\) such that

$$\begin{aligned} \mathsf {Adv}^\mathsf{LowOrd}_{\mathsf {GGen}, d, \mathsf {A}}(\lambda ) {\mathop {=}\limits ^\mathsf{def}} \Pr \left[ \mathsf{LowOrd}^{\mathsf {GGen}}_{d,\mathsf{A}}(\lambda ) =1 \right] \le \nu (\lambda ) \end{aligned}$$

for all sufficiently large \(\lambda \in \mathbb {N}\), where the experiment \( \mathsf{LowOrd}^{\mathsf {GGen}}_{d,\mathsf{A}}(\lambda ) \) is defined as follows:

  1. \(\mathbb {G} \leftarrow \mathsf {GGen}(1^\lambda )\).

  2. \((x,\omega ) \leftarrow \mathsf {A}(\mathbb {G})\).

  3. Output 1 if \(x \ne 1_\mathbb {G}\); \(\omega < d\); and \(x^\omega = 1_{\mathbb {G}}\). Otherwise, output 0.

Pietrzak observed (although not in this terminology) that the d-low order assumption holds information-theoretically in the group \(QR^+_N\), whenever N is the product of two safe primes \(p' = 2p+1\) and \(q' = 2q +1\), and \(d \le \min \{p,q \}\).

In cases where \(\mathbb {G}\) is naturally embedded in some ring R and \(-1_{\mathbb {G}} \in \mathbb {G}\) (that is, the additive inverse of the multiplicative identity is an element of the group),Footnote 11 we can consider a weakening of Definition 2.4, requiring that the adversary is unable to come up with a low order element other than \(\pm 1_{\mathbb {G}}\).

Definition 2.5

Let \(\mathsf {GGen}\) be a group generation algorithm, and let \(d = d(\lambda )\) be an integer function of the security parameter \(\lambda \in \mathbb {N}\). We say that the weak d-low order assumption holds with respect to \(\mathsf {GGen}\) if for every probabilistic polynomial-time algorithm \(\mathsf {A}\), there exists a negligible function \(\nu (\cdot )\) such that

$$\begin{aligned} \mathsf {Adv}^\mathsf{WeakLO}_{\mathsf {GGen}, d, \mathsf {A}}(\lambda ) {\mathop {=}\limits ^\mathsf{def}} \Pr \left[ \mathsf{WeakLO}^{\mathsf {GGen}}_{d,\mathsf{A}}(\lambda ) =1 \right] \le \nu (\lambda ) \end{aligned}$$

for all sufficiently large \(\lambda \in \mathbb {N}\), where the experiment \( \mathsf{WeakLO}^{\mathsf {GGen}}_{d,\mathsf{A}}(\lambda )\) is defined as follows:

  1. \(\mathbb {G} \leftarrow \mathsf {GGen}(1^\lambda )\).

  2. \((x,\omega ) \leftarrow \mathsf {A}(\mathbb {G})\).

  3. Output 1 if \(x \not \in \{1_{\mathbb {G}}, - 1_{\mathbb {G}} \}\); \(\omega < d\); and \(x^\omega = 1_{\mathbb {G}}\). Otherwise, output 0.

Seres and Burcsi [SB20] recently proved (as a special case) that in RSA groups with a modulus N which is the product of two safe primes \(p' = 2p+1\) and \(q' = 2q +1\), the weak d-low order assumption for \(d \le \min \{p,q \}\) is equivalent to factoring N.

3 Succinct Proofs of Correct Exponentiation

In this section we review the notion of succinct proofs of correct exponentiation. First, in Sect. 3.1, we define proofs of correct exponentiation for a single instance and then, in Sect. 3.2, we extend the definition to account for the task of batch verification.

3.1 The Basic Definition

Loosely speaking, a proof of correct exponentiation is a protocol executed by two parties, a prover and a verifier, with a common input (xye), where x and y are elements in some group \(\mathbb {G}\) and e is an integer. The goal of the prover is to convince the verifier that \(y = x^e\). Of course, the verifier can just compute \(x^e\) and compare the result to y on her own, but we will be interested in protocols in which the verifier works much less than that. Concretely, we are typically interested in protocols in which the verifier runs in time \(\ll \mathsf{poly}(\log (e), \lambda )\), which is the time it will take the verifier to compute \(x^e\) on her own, assuming that the group operation is implementable in time polynomial in the security parameter \(\lambda \in \mathbb {N}\).

More formally, a proof of correct exponentiation (PoCE) is a triplet \(\pi =(\mathsf {GGen}, \mathsf {P}, \mathsf {V})\) of probabilistic polynomial-time algorithms, where \(\mathsf {GGen}\) is a group generation algorithm (recall Sect. 2), \(\mathsf {P}\) is the prover and \(\mathsf {V}\) is the verifier. We denote by \(\langle \mathsf {P}(\mathsf{aux}), \mathsf {V} \rangle (\mathsf{input})\) the random variable corresponding to the output of \(\mathsf {V}\) when the joint input to \(\mathsf {P}\) and to \(\mathsf {V}\) is \(\mathsf{input}\) and \(\mathsf {P}\) additionally receives the private auxiliary information \(\mathsf{aux}\). In case \(\mathsf {P}\) receives no auxiliary information, we write \(\langle \mathsf {P}, \mathsf {V} \rangle (\mathsf{input})\). The properties which should be satisfied by a PoCE are defined in the following definition.

Definition 3.1

Let \(\delta = \delta (\lambda )\) be a function of the security parameter \(\lambda \in \mathbb {N}\), and let \(t = t(\lambda ,e)\) and \(c = c(\lambda ,e)\) be functions of \(\lambda \in \mathbb {N}\) and of the exponent \(e\in \mathbb {N}\). A triplet \(\pi =(\mathsf {GGen}, \mathsf {P}, \mathsf {V})\) of probabilistic polynomial-time algorithms is said to be a \((\delta ,c, t)\)-proof of correct exponentiation (PoCE) if the following conditions hold:

  1.

    Completeness: For every \(\lambda \in \mathbb {N}\), for every \((\mathbb {G},\mathsf {pp})\) in the support of \(\mathsf {GGen}(1^\lambda )\) and for every input \((x,y, e) \in \mathbb {G}^2 \times \mathbb {N}\) such that \(x^e = y\), it holds that

    $$\begin{aligned} \Pr \left[ \langle \mathsf {P}, \mathsf {V} \rangle (\mathbb {G},\mathsf {pp},x,y,e) = 1 \right] = 1, \end{aligned}$$

    where the probability is over the randomness of \(\mathsf {P}\) and of \(\mathsf {V}\).

  2.

    \(\boldsymbol{\delta }\)-Soundness: For every pair \(\mathsf {P}^*= (\mathsf {P}^*_1, \mathsf {P}^*_2)\) of probabilistic polynomial-time algorithms, there exists a negligible function \(\nu (\cdot )\) such that

    $$\begin{aligned} \Pr \left[ x^e \ne y \; \wedge \; \langle \mathsf {P}^*_2(\mathsf{st}), \mathsf {V} \rangle (\mathbb {G},\mathsf {pp},x,y,e) = 1 \; : \; \begin{array}{c} (\mathbb {G},\mathsf {pp}) \leftarrow \mathsf {GGen}(1^\lambda ) \\ (x,y,e,\mathsf{st}) \leftarrow \mathsf {P}^*_1(\mathbb {G},\mathsf {pp}) \end{array} \right] \le \delta (\lambda ) + \nu (\lambda ) \end{aligned}$$

    for all sufficiently large \(\lambda \in \mathbb {N}\).

  3.

    Succinctness: For every \(\lambda \in \mathbb {N}\), for every \((\mathbb {G},\mathsf {pp})\) in the support of \(\mathsf {GGen}(1^\lambda )\) and for every input \((x,y, e) \in \mathbb {G}^2 \times \mathbb {N}\), it holds that: The total length of all messages exchanged between \(\mathsf {P}\) and \(\mathsf {V}\) in a random execution of the protocol on joint input \((\mathbb {G}, \mathsf {pp}, x,y,e)\) is at most \(c(\lambda ,e)\) with probability 1, where the probability is over the randomness of \(\mathsf {P}\) and of \(\mathsf {V}\).

  4.

    Efficient verification: For every \(\lambda \in \mathbb {N}\), for every \((\mathbb {G},\mathsf {pp})\) in the support of \(\mathsf {GGen}(1^\lambda )\) and for every input \((x,y, e) \in \mathbb {G}^2 \times \mathbb {N}\), it holds that: The running time of \(\mathsf {V}\) in a random execution of the protocol on joint input \((\mathbb {G}, \mathsf {pp}, x,y,e)\) is at most \(t(\lambda ,e)\) with probability 1, where the probability is over the randomness of \(\mathsf {P}\) and of \(\mathsf {V}\).

3.2 Batch Proofs of Correct Exponentiation

We now turn to define batch proofs of correct exponentiation. In such proofs, the joint input is composed of 2n group elements \(x_1,\ldots ,x_n,y_1,\ldots ,y_n\) and an exponent e, for some \(n\in \mathbb {N}\). The prover now wishes to convince the verifier that \(x_i^e = y_i\) for each of the \(i \in [n]\). The definition is a natural extension of Definition 3.1, except that now the communication complexity and the running time of the verifier may both scale with the integer n. It might also make sense to consider the case where the soundness error \(\delta \) is also a function of n, but this will not be the case in our protocols, and hence we do not account for this case in our definition. The formal definition below uses the same notation as did Definition 3.1.

Definition 3.2

Let \(\delta = \delta (\lambda )\) be a function of the security parameter \(\lambda \in \mathbb {N}\), and let \(t = t(\lambda ,e,n)\) and \(c = c(\lambda , e, n)\) be functions of \(\lambda \), of the exponent \(e \in \mathbb {N}\) and of \(n \in \mathbb {N}\). A triplet \(\pi =(\mathsf {GGen}, \mathsf {P}, \mathsf {V})\) of probabilistic polynomial-time algorithms is said to be a \((\delta ,c,t)\)-batch proof of correct exponentiation (BPoCE) if the following conditions hold:

  1.

    Completeness: For all integers \(\lambda , n \in \mathbb {N}\), every \((\mathbb {G},\mathsf {pp})\) in the support of \(\mathsf {GGen}(1^\lambda )\) and every input \((\vec {x}=(x_1,\ldots , x_n), \vec {y}=(y_1,\ldots , y_n), e) \in \mathbb {G}^n \times \mathbb {G}^n \times \mathbb {N}\) such that \(x_i^e = y_i\) for every \(i \in [n]\), it holds that

    $$\begin{aligned} \Pr \left[ \langle \mathsf {P}, \mathsf {V}\rangle (\mathbb {G}, \mathsf {pp}, \vec {x}, \vec {y}, e) = 1\right] = 1, \end{aligned}$$

    where the probability is over the randomness of \(\mathsf {P}\) and of \(\mathsf {V}\).

  2.

    \(\boldsymbol{\delta }\)-Soundness: For every pair \(\mathsf {P}^*= (\mathsf {P}^*_1, \mathsf {P}^*_2)\) of probabilistic polynomial-time algorithms, there exists a negligible function \(\nu (\cdot )\) such that

    $$\begin{aligned} \Pr _{\begin{array}{c} (\mathbb {G},\mathsf {pp}) \leftarrow \mathsf {GGen}(1^\lambda ) \\ (n, \vec {x}, \vec {y}, e, \mathsf {st}) \leftarrow \mathsf {P}^*_1(\mathbb {G},\mathsf {pp}) \end{array}}\left[ \langle \mathsf {P}^*_2(\mathsf {st}), \mathsf {V}\rangle (\mathbb {G}, \mathsf {pp}, \vec {x}, \vec {y}, e) = 1 \ \wedge \ \exists i \in [n] : x_i^e \ne y_i\right] \le \delta (\lambda ) + \nu (\lambda ) \end{aligned}$$

    for all sufficiently large \(\lambda \in \mathbb {N}\), where \(\vec {x}=(x_1,\ldots ,x_n)\) and \(\vec {y} = (y_1,\ldots ,y_n)\).

  3.

    Succinctness: For every \(\lambda , n \in \mathbb {N}\), for every \((\mathbb {G},\mathsf {pp})\) in the support of \(\mathsf {GGen}(1^\lambda )\) and for every input \((\vec {x},\vec {y}, e) \in \mathbb {G}^n \times \mathbb {G}^n \times \mathbb {N}\), the total length of all messages exchanged between \(\mathsf {P}\) and \(\mathsf {V}\) in a random execution of the protocol on joint input \((\mathbb {G}, \mathsf {pp}, \vec {x},\vec {y},e)\) is at most \(c(\lambda ,e, n)\) with probability 1, where the probability is over the randomness of \(\mathsf {P}\) and of \(\mathsf {V}\).

  4.

    Efficient verification: For every \(\lambda , n \in \mathbb {N}\), for every \((\mathbb {G},\mathsf {pp})\) in the support of \(\mathsf {GGen}(1^\lambda )\) and for every input \((\vec {x},\vec {y}, e) \in \mathbb {G}^n \times \mathbb {G}^n \times \mathbb {N}\), the running time of \(\mathsf {V}\) in a random execution of the protocol on joint input \((\mathbb {G}, \mathsf {pp}, \vec {x},\vec {y},e)\) is at most \(t(\lambda ,e, n)\) with probability 1, where the probability is over the randomness of \(\mathsf {P}\) and of \(\mathsf {V}\).

On Using a Single Exponent. The above definition considers the setting of a single exponent for all n pairs of group elements; that is, the joint input includes a single exponent \(e \in \mathbb {N}\) for which the prover contends that \(x_i^e = y_i\) for all \(i\in [n]\). This setting is in line with the motivation, described in Sect. 1, of batch verification of many VDF outputs based on the repeated squaring function, since in that scenario the exponent e is determined by the delay parameter T. In the examples mentioned in Sect. 1, it is reasonable to assume that all outputs were computed with respect to the same delay parameter. It might still be of interest, both theoretically and for specific applications (see for example [BBF19]), to construct batch proofs of correct exponentiation with different exponents, and we leave this as an interesting open question.

4 Warm-Up: The Random Subset Compiler

In this section we present a simplified version of our general compiler, which we call “The Random Subset Compiler” following Bellare, Garay and Rabin [BGR98]. This simplified version is based on a technique introduced by Bellare et al. for related, yet distinct, purposes (recall Sect. 1.2). In our context of proofs of correct exponentiation, this technique introduces quite a large communication overhead and a considerable additional soundness error. Nevertheless, we start off with this simplified version as it already captures the main ideas behind the full-fledged compiler. Then, in Sect. 5 we show how to simultaneously amplify the soundness guarantees of our compiler and considerably reduce the communication overhead.

Let \(\delta = \delta (\lambda )\) be a function of the security parameter \(\lambda \in \mathbb {N}\), and let \(c = c(\lambda , e)\) and \(t = t(\lambda , e)\) be functions of \(\lambda \) and of the exponent \(e\in \mathbb {N}\). Our compiler uses as a building block any \((\delta ,c,t)\)-PoCE (recall Definition 3.1) \(\pi = (\mathsf {GGen}, \mathsf {P}, \mathsf {V})\) and produces a protocol \(\mathsf{Batch}_1(\pi ) = (\mathsf {GGen}, \mathsf {P}_\mathsf{Batch}, \mathsf {V}_\mathsf{Batch})\), which is a \((\delta ',c', t')\)-BPoCE for \(\delta ' = \delta + 1/2\) and for related functions \(c' = c'(\lambda ,e, n)\) and \(t' = t'(\lambda ,e,n)\).

figure d
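For concreteness, the following Python sketch illustrates the combination step of \(\mathsf{Batch}_1(\pi )\) as it is used in the analysis below: the verifier samples a uniformly random subset \(\mathcal {I} \subseteq [n]\), both parties fold the selected pairs into a single instance, and the underlying PoCE \(\pi \) is then run on that instance. The group is modeled as \(\mathbb {Z}_N^*\) for an integer modulus N, and all function names are illustrative assumptions of this sketch rather than part of the protocol specification.

import secrets

def sample_subset(n):
    # Uniformly random subset of [n] = {1, ..., n}: include each index
    # independently with probability 1/2 (this samples I <- 2^[n]).
    return {i for i in range(1, n + 1) if secrets.randbits(1) == 1}

def combine(xs, ys, subset, N):
    # Fold the selected pairs into a single PoCE instance:
    # x_prod = prod_{i in I} x_i and y_prod = prod_{i in I} y_i (mod N).
    x_prod, y_prod = 1, 1
    for i in subset:
        x_prod = (x_prod * xs[i - 1]) % N
        y_prod = (y_prod * ys[i - 1]) % N
    return x_prod, y_prod

# The parties then run the underlying PoCE pi once on (x_prod, y_prod, e);
# the batch verifier accepts if and only if pi's verifier accepts.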

Theorem 4.1 below establishes the completeness, soundness, succinctness and verifier efficiency of \(\mathsf{Batch}_1(\pi )\). It uses the following notation: We let \(t_\mathsf{op}(\lambda )\) denote a bound on the time required to apply the binary group operation on two group elements, in a group \(\mathbb {G}\) generated by \(\mathsf {GGen}(1^\lambda )\).

Theorem 4.1

Assume that \(\pi \) is a \((\delta ,c,t)\)-PoCE. Then, \(\mathsf{Batch}_1(\pi )\) is a \((\delta ',c',t')\)-BPoCE, where for every \(\lambda , e, n\in \mathbb {N}\):

  • \(\delta '(\lambda ) = \delta (\lambda ) + 1/2\).

  • \(c'(\lambda ,n, e) = c(\lambda , e) + n\).

  • \(t'(\lambda , n, e) = t(\lambda , e) + O(n\cdot t_\mathsf{op}(\lambda ))\).

We start by presenting our main technical lemma, which we will use in the proof of Theorem 4.1 as well as in subsequent sections.

Lemma 4.2

Let \(\mathbb {G}\) be a group. For all integers \(n, e \in \mathbb {N}\) and all vectors \(\vec {x}, \vec {y} \in \mathbb {G}^n\), the following holds: If there exists an index \(i \in [n]\) such that \(x_i^e \ne y_i\), then

$$\begin{aligned} \Pr _{\mathcal {I} \leftarrow 2^{[n]}}\left[ \left( \prod _{i\in \mathcal {I}} x_i\right) ^e = \prod _{i\in \mathcal {I}} y_i\right] \le \frac{1}{2}. \end{aligned}$$

Proof of Lemma 4.2. For a subset \(\mathcal {I} \subseteq [n]\), we say that \(\mathcal {I}\) is biased if \(\left( \prod _{i\in \mathcal {I}} x_i\right) ^e \ne \prod _{i\in \mathcal {I}} y_i\), and otherwise we say that \(\mathcal {I}\) is balanced. Denote by \(\mathcal {S}_\mathsf{Balanced}\) and by \(\mathcal {S}_\mathsf{Biased}\) the set of all balanced subsets of [n] and the set of all biased subsets of [n], respectively.

Suppose that there exists an index \(i \in [n]\) such that \(x_i^e \ne y_i\), and let \(i^*\) be an arbitrary such index (e.g., the minimal index for which the inequality holds). We wish to show that \(\left| \mathcal {S}_\mathsf{Balanced} \right| \le \left| \mathcal {S}_\mathsf{Biased} \right| \), as this implies that at most half of all \(2^n\) subsets are balanced, which concludes the proof of the lemma. To this end, consider a partition \(\mathcal {P}\) of \(2^{[n]}\) into \(2^{n-1}\) pairs as follows:

$$\begin{aligned} \mathcal{P} = \left\{ (\mathcal {I}, \mathcal {I}\cup \{ i^*\}) \ : \ i^*\not \in \mathcal {I} \right\} . \end{aligned}$$

In each pair \( (\mathcal {I}, \mathcal {I}\cup \{ i^*\})\) in \(\mathcal {P}\), at most one of the two subsets \(\mathcal {I}\) and \(\mathcal {I}\cup \{ i^*\}\) can be balanced. This is the case since if \(\mathcal {I} \in \mathcal {S}_\mathsf{Balanced}\), then it must be that \(\mathcal {I}\cup \{ i^*\} \in \mathcal {S}_\mathsf{Biased}\) since

$$\begin{aligned} \prod _{i \in \mathcal {I} \cup \{ i^*\}} x_i^e&= x_{i^*}^e \cdot \prod _{i \in \mathcal {I}} x_i^e \nonumber \\&= x_{i^*}^e \cdot \prod _{i \in \mathcal {I}} y_i \end{aligned}$$
(1)
$$\begin{aligned}&\ne y_{i^*} \cdot \prod _{i \in \mathcal {I}} y_i \nonumber \\&= \prod _{i \in \mathcal {I} \cup \{ i^*\}} y_i, \end{aligned}$$
(2)

where Eq. (1) holds because \(\mathcal {I}\) is balanced, and (2) holds due to the assumption that \(x_{i^*}^e \ne y_{i^*}\). Since at most one subset in each pair of the \(2^{n-1}\) pairs in \(\mathcal{P}\) is balanced, it holds that \(\left| \mathcal {S}_\mathsf{Balanced} \right| \le \left| \mathcal {S}_\mathsf{Biased} \right| \) concluding the proof.    \(\blacksquare \)
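The counting argument can also be checked exhaustively for small parameters. The following Python sketch (an illustration only, using \(\mathbb {Z}_{35}^*\) as a toy group) counts the balanced subsets of an instance in which \(y_1 \ne x_1^e\) and verifies that they make up at most half of all \(2^n\) subsets, as Lemma 4.2 guarantees.

from itertools import combinations

def balanced_subsets(xs, ys, e, N):
    # Count the subsets I of [n] with (prod_{i in I} x_i)^e = prod_{i in I} y_i (mod N).
    n = len(xs)
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            px, py = 1, 1
            for i in subset:
                px = (px * xs[i]) % N
                py = (py * ys[i]) % N
            if pow(px, e, N) == py:
                count += 1
    return count

# Toy instance in Z_35^* with e = 5, where y_1 is deliberately incorrect.
N, e = 35, 5
xs = [2, 3, 4]
ys = [(2 * pow(2, e, N)) % N, pow(3, e, N), pow(4, e, N)]   # y_1 != x_1^e
assert balanced_subsets(xs, ys, e, N) <= 2 ** len(xs) // 2  # at most half are balanced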

We are now ready to prove Theorem 4.1, establishing the completeness, soundness, verifier efficiency and succinctness of our protocol \(\mathsf{Batch}_1(\pi )\).

Proof of Theorem 4.1. Completeness follows immediately from the completeness of \(\pi \) and the fact that if \(x_i^e = y_i\) for every \(i \in [n]\), then it also holds that \(\left( \prod _{i\in \mathcal {I}} x_i\right) ^e = \prod _{i\in \mathcal {I}} y_i\) for any choice of \(\mathcal {I} \in 2^{[n]}\).

We now turn to prove that \(\mathsf{Batch}_1(\pi )\) satisfies \(\delta '\)-soundness for \(\delta ' = \delta + 1/2\). Let \(\mathsf {P}_\mathsf{Batch}^*= (\mathsf {P}_{\mathsf{Batch},1}^*, \mathsf {P}_{\mathsf{Batch},2}^*)\) be a malicious prover attempting to break the soundness of \(\mathsf{Batch}_1(\pi )\). Consider the following pair \(\mathsf {P}^*= (\mathsf {P}_1^*, \mathsf {P}_2^*)\) attempting to break the soundness of \(\pi \). On input \((\mathbb {G},\mathsf {pp})\) generated by \(\mathsf {GGen}(1^\lambda )\), the algorithm \(\mathsf {P}^*_1\) is defined as follows:

  1.

    Invoke \((n,\vec {x},\vec {y},e , \mathsf {st}) \leftarrow \mathsf {P}_{\mathsf{Batch},1}^*(\mathbb {G},\mathsf {pp})\), where \(\vec {x} = (x_1, \ldots , x_n)\) and \(\vec {y} = (y_1,\ldots , y_n)\).

  2.

    Sample \(\mathcal {I} \leftarrow 2^{[n]}\).

  3.

    Compute \(x_\mathsf{prod} = \prod _{i \in \mathcal {I}} x_i\) and \(y_\mathsf{prod} = \prod _{i \in \mathcal {I}} y_i\).

  4.

    Set \(\mathsf {st}' = (\mathsf {st}, \vec {x},\vec {y},\mathcal {I})\).

  5.

    Output \((x_\mathsf{prod}, y_\mathsf{prod}, e, \mathsf {st}')\).

Then, the algorithm \(\mathsf {P}_2^*\), running on private input \(\mathsf {st}'\) and interacting with the verifier \(\mathsf {V}\) on joint input \((\mathbb {G},\mathsf {pp}, x_\mathsf{prod}, y_\mathsf{prod}, e)\), is defined as follows:

  1.

    Parse \(\mathsf {st}'\) as \((\mathsf {st}, \vec {x}, \vec {y}, \mathcal {I})\).

  2.

    Invoke \(\mathsf {P}_{\mathsf{Batch},2}^{*}\) on input st and simulate to \(\mathsf {P}_{\mathsf{Batch},2}^{*}\) an execution of \(\mathsf{Batch}_1(\pi )\) on joint input \((\mathbb {G},\mathsf {pp}, \vec {x}, \vec {y}, e)\):

    (a)

      Send \(\mathcal {I}\) to \(\mathsf {P}_{\mathsf{Batch},2}^*\) as the first message of the verifier in \(\mathsf{Batch}_1(\pi )\).

    (b)

      Let \(\mathsf {V}\) play the role of \(\mathsf {V}_\mathsf{Batch}\) in all subsequent rounds, by relaying all messages from \(\mathsf {P}_{\mathsf{Batch},2}^*\) to \(\mathsf {V}\) and vice versa.

We now turn to bound \(\mathsf {Adv}_{\pi , \mathsf {P}^*}^\mathsf{PoCE}\). To that end, we define the following events:

  • Let \(\mathsf{NotEqual}\) denote the event in which for some \(i\in [n]\), it holds that \(x_i^e \ne y_i\), where \(n,\vec {x},\vec {y}\) and e are outputted by \(\mathsf {P}_{\mathsf{Batch},1}^*\) in Step 1 of \(\mathsf {P}_1^*\), \(\vec {x} = (x_1,\ldots , x_n)\) and \(\vec {y} = (y_1,\ldots , y_n)\).

  • Let \(\mathsf{BiasedSet}\) be the event in which

    $$\begin{aligned} \left( \prod _{i\in \mathcal {I}} x_i\right) ^e \ne \prod _{i\in \mathcal {I}} y_i, \end{aligned}$$

    where \(n,\vec {x},\vec {y}\) and e are as before and \(\mathcal {I}\) is the random subset sampled by \(\mathsf {P}_1^*\) in Step 2.

  • Let \(\mathsf{PWin}\) be the event in which \(\mathsf {P}^*\) wins; that is, \(\mathsf {V}\) outputs 1 and \(\mathsf{BiasedSet}\) holds.

  • Let \(\mathsf{PBatchWin}\) be the event in which \(\mathsf {P}_\mathsf{Batch}^*\) wins in the simulation of \(\mathsf{Batch}_1(\pi )\) by \(\mathsf {P}^*\): The simulated \(\mathsf {V}_\mathsf{Batch}\) outputs 1 and \(\mathsf{NotEqual}\) holds.

Equipped with this notation, it holds that

figure f

where Eq. (3) holds since conditioned on \(\mathsf{PBatchWin}\), it holds that \(\mathsf {V}_\mathsf{Batch}\) (in the simulation of \(\mathsf{Batch}_1(\pi )\)) outputs 1. This implies that \(\mathsf {V}\) outputs 1, and hence that the event \(\mathsf{PBatchWin} \wedge \mathsf{BiasedSet}\) implies the event \(\mathsf{PWin}\). By total probability,

figure g

where Eq. (4) holds since \(\mathsf {P}^*\) perfectly simulates \(\mathsf{Batch}_1(\pi )\) to \(\mathsf {P}_\mathsf{Batch}^*\); and Eq. (5) is true since \(\mathsf{PBatchWin}\) is contained in \(\mathsf{NotEqual}\), and hence \(\mathsf{PBatchWin} \wedge \overline{\mathsf{BiasedSet}}\) is contained in \(\mathsf{NotEqual}\wedge \overline{\mathsf{BiasedSet}}\).

We are left with bounding \(\Pr \left[ \mathsf{NotEqual}\wedge \overline{\mathsf{BiasedSet}}\right] \). Indeed, Lemma 4.2 immediately implies that

$$\begin{aligned} \Pr \left[ \mathsf{NotEqual}\wedge \overline{\mathsf{BiasedSet}}\right] \le \frac{1}{2}. \end{aligned}$$
(7)

Taking Eq. (3), (6) and (7) together and rearranging, we get that

$$\begin{aligned} \mathsf {Adv}_{\mathsf{Batch}_1(\pi ), \mathsf {P}_\mathsf{Batch}^*}^\mathsf{BPoCE} \le \mathsf {Adv}_{\pi , \mathsf {P}^*}^\mathsf{PoCE} +\frac{1}{2}, \end{aligned}$$

which implies – since \(\pi \) satisfies \(\delta \)-soundness – that there exists a negligible function \(\nu (\cdot )\) such that

$$\begin{aligned} \mathsf {Adv}_{\mathsf{Batch}_1(\pi ), \mathsf {P}_\mathsf{Batch}^*}^\mathsf{BPoCE} \le \delta (\lambda ) +\frac{1}{2} + \nu (\lambda ), \end{aligned}$$

for all sufficiently large \(\lambda \in \mathbb {N}\).

We have proved that \(\mathsf{Batch}_1(\pi )\) satisfies \(\delta '\)-soundness for \(\delta ' = \delta + 1/2\). To conclude the proof, we are left with bounding the verifier’s running time \(t'\) and the communication complexity \(c'\) of \(\mathsf{Batch}_1(\pi )\). As for the running time of \(\mathsf {V}_\mathsf{Batch}\): The verifier samples a random subset \(\mathcal {I}\), computes \(\prod _{i\in \mathcal {I}} x_i\) and \(\prod _{i\in \mathcal {I}} y_i\) and participates in a single execution of \(\pi \). Since the products computed by \(\mathsf {V}_\mathsf{Batch}\) include at most n group elements each, it follows that her running time is \(t' = O(n \cdot t_\mathsf{op}(\lambda )) + t\). The communication in \(\mathsf{Batch}_1(\pi )\) includes the subset \(\mathcal {I}\), which can be encoded using n bits, and all messages exchanged in a single execution of \(\pi \). Therefore, the total communication is \(c' = c + n\). This concludes the proof of Theorem 4.1.    \(\blacksquare \)

5 Amplifying Soundness and Reducing Communication

In this section, we address the two main drawbacks of the compiler from Sect. 4; namely, its large soundness error, and the fact that its communication complexity depends linearly on the number n of pairs \((x_i,y_i)\). In order to do so, we introduce an improved compiler, which differs from the one in Sect. 4 in two respects. First, the verifier now chooses m random subsets \(\mathcal {I}_1, \ldots , \mathcal {I}_m \subseteq [n]\), where the integer m is a parameter of the protocol, and the parties invoke m parallel executions of the underlying protocol \(\pi \) on the m inputs induced by these subsets. Note that sending the representation of m random subsets of [n] would require the verifier to send an additional \(m\cdot n\) bits to the prover. Second, to avoid this communication overhead, we let the verifier succinctly represent these m subsets via a single key to a pseudorandom function, thus reducing the additive communication overhead to just \(\lambda \) bits, where \(\lambda \in \mathbb {N}\) is the security parameter. As we will show, this essentially does not harm the soundness guarantees of the protocol.

We now turn to formally present our compiler. Let \(\delta = \delta (\lambda )\) be a function of the security parameter \(\lambda \in \mathbb {N}\), and let \(c = c(\lambda , e)\) and \(t = t(\lambda , e)\) be functions of \(\lambda \) and of the exponent \(e\in \mathbb {N}\). Our compiler is parameterized by an integer \(m\in \mathbb {N}\), and uses the following building blocks:

  • A \((\delta ,c,t)\)-PoCE \(\pi = (\mathsf {GGen}, \mathsf {P}, \mathsf {V})\) (recall Definition 3.1).

  • A pseudorandom function family \(\mathsf {PRF}\), such that for every \(\lambda \in \mathbb {N}\) and for every key \(K \in \{0,1\}^\lambda \), the function \(\mathsf {PRF}_K\) maps inputs in \(\{0,1\}^{\lceil \log (m\cdot n) \rceil }\) to outputs in \(\{0,1\}\). For ease of notation, for integers \(j \in [m]\) and \(i \in [n]\), we will write \(\mathsf {PRF}_K(j \Vert i)\), with the intention (but without noting it explicitly) that the input to the function is a bit string representing the integers j and i.

The compiler produces a protocol \(\mathsf{Batch}_2^m(\pi ) = (\mathsf {GGen}, \mathsf {P}_\mathsf{Batch}, \mathsf {V}_\mathsf{Batch})\), which is a \((\delta ',c', t')\)-BPoCE for \(\delta ' = \delta + 2^{-m}\) and for related functions \(c' = c'(\lambda ,e, n)\) and \(t' = t'(\lambda ,e,n)\).

figure i
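For concreteness, the following Python sketch illustrates how the m subsets and the corresponding m PoCE instances of \(\mathsf{Batch}_2^m(\pi )\) might be derived from a single \(\lambda \)-bit key. HMAC-SHA256 stands in for \(\mathsf {PRF}\) and the group is modeled as \(\mathbb {Z}_N^*\); both choices, as well as all function names, are illustrative assumptions of this sketch rather than part of the protocol specification.

import hmac, hashlib

def prf_bit(key, j, i):
    # One pseudorandom bit per pair (j, i); HMAC-SHA256 is an illustrative PRF choice.
    digest = hmac.new(key, f"{j}|{i}".encode(), hashlib.sha256).digest()
    return digest[0] & 1

def derive_instances(key, xs, ys, m, N):
    # For each j in [m], let I_j = {i : PRF_K(j || i) = 1} and fold it into a
    # single PoCE instance (prod_{i in I_j} x_i, prod_{i in I_j} y_i) mod N.
    n = len(xs)
    instances = []
    for j in range(1, m + 1):
        x_prod, y_prod = 1, 1
        for i in range(1, n + 1):
            if prf_bit(key, j, i):
                x_prod = (x_prod * xs[i - 1]) % N
                y_prod = (y_prod * ys[i - 1]) % N
        instances.append((x_prod, y_prod))
    return instances

# The verifier samples key <- {0,1}^lambda (e.g., secrets.token_bytes(32)), sends
# it to the prover, and the parties run m parallel executions of pi, one per
# derived instance (x_prod, y_prod, e).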

Theorem 5.1 establishes the completeness, soundness, succinctness and verifier efficiency of \(\mathsf{Batch}_2^m(\pi )\). Recall that we denote by \(t_\mathsf{op} = t_\mathsf{op}(\lambda )\) the time required to apply the binary group operation on two group elements, in a group \(\mathbb {G}\) generated by \(\mathsf {GGen}(1^\lambda )\). We also denote by \(t_\mathsf{prf} = t_\mathsf{prf}(\lambda ,m,n)\) the time required to compute \(\mathsf {PRF}_K(z)\) for \(K\in \{0,1\}^\lambda \) and \(z \in \{0,1\}^{\lceil \log (m\cdot n) \rceil }\).

Theorem 5.1

Assume that \(\mathsf {PRF}\) is a pseudorandom function and that \(\pi \) is a \((\delta ,c,t)\)-PoCE. Then, \(\mathsf{Batch}_2^m(\pi )\) is a \((\delta ', c', t')\)-BPoCE, where:

  • \(\delta '(\lambda ) = \delta (\lambda ) + 2^{-m}\).

  • \(c'(\lambda ,n, e) = m\cdot c(\lambda , e) + \lambda \).

  • \(t'(\lambda , n, e) = m\cdot t(\lambda , e) + \lambda + O\left( m\cdot n\cdot (t_\mathsf{op} + t_\mathsf{prf})\right) \).

Before proving Theorem 5.1, a couple of remarks are in order.

Applying the Fiat-Shamir Heuristic. If the Fiat-Shamir heuristic [FS86] can be applied to \(\pi \) in the random oracle model, then it can be applied to \(\mathsf{Batch}_2^m(\pi )\) as well, as long as \(m = \omega (\log (\lambda ))\) (and hence \(2^{-m}\) is a negligible function of the security parameter \(\lambda \in \mathbb {N}\)). This is the case since our compiler only adds a single public-coin message from the verifier to the prover. In particular, the Fiat-Shamir heuristic may be applied whenever \(\pi \) is a constant-round public-coin protocol with negligible soundness error, which is indeed the case for the protocol of Wesolowski [Wes19]. It should be noted that even though the protocol of Pietrzak [Pie19] is not constant-round, the Fiat-Shamir heuristic may still be applied to it in the random oracle model, and so it can be applied to its compiled version as well.
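As a minimal illustration of the non-interactive variant, the single public-coin message added by our compiler (the key K) can be derived by hashing the joint input; the messages of \(\pi \) itself would be handled by \(\pi \)'s own Fiat-Shamir transformation. The sketch below uses SHA-256 as an illustrative random-oracle instantiation and a deliberately simplified serialization; a deployment would use a canonical, injective encoding.

import hashlib

def fiat_shamir_key(pp, xs, ys, e):
    # Derive the verifier's single public-coin message (the key K) from a hash
    # of the joint input; the string-based encoding below is a simplification.
    transcript = "|".join([str(pp)] + [str(x) for x in xs] + [str(y) for y in ys] + [str(e)])
    return hashlib.sha256(transcript.encode()).digest()  # 32 bytes, used as K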

Replacing \(\boldsymbol{\mathsf {PRF}}\) with a Pseudorandom Generator. Our use of \(\mathsf {PRF}\) enables us to handle cases in which n is not a-priori bounded and can be chosen by the malicious prover (this is in line with Definition 3.2). However, in many cases it makes sense to consider values of n which are a-priori bounded. In such cases, we can replace our use of the pseudorandom function with a pseudorandom generator \(\mathsf{PRG}\) mapping seeds of length \(\lambda \) to outputs of length \(m\cdot n\).Footnote 12 Instead of sampling a key K, the verifier will now sample a seed \(s \leftarrow \{0,1\}^\lambda \) to \(\mathsf{PRG}\) and send it over to the prover. Then, both the prover and the verifier can compute \(y = \mathsf{PRG}(s)\) and parse y as the natural encoding of m subsets \(\mathcal {I}_1,\ldots , \mathcal {I}_m\) of [n] (i.e., for each \(j \in [m]\), the vector \((y_{(j-1)\cdot n +1}, \ldots , y_{j\cdot n})\) is the characteristic vector of \(\mathcal {I}_j\)). In practice, \(\mathsf{PRG}\) can be efficiently implemented via a cryptographic hash function (e.g., SHA).
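The following Python sketch illustrates this pseudorandom generator variant, expanding a seed into \(m\cdot n\) bits and parsing them as the characteristic vectors of \(\mathcal {I}_1,\ldots ,\mathcal {I}_m\). SHAKE-256 is used here as an illustrative extendable-output instantiation of \(\mathsf{PRG}\); it is an assumption of this sketch, not a prescription of the protocol.

import hashlib

def prg_subsets(seed, m, n):
    # Expand the seed into m * n pseudorandom bits (SHAKE-256 as an illustrative
    # PRG) and parse them as m characteristic vectors of subsets of [n]: bit
    # number (j - 1) * n + (i - 1) decides whether index i belongs to I_j.
    stream = hashlib.shake_256(seed).digest((m * n + 7) // 8)

    def bit(k):
        return (stream[k // 8] >> (k % 8)) & 1

    return [{i for i in range(1, n + 1) if bit((j - 1) * n + (i - 1))}
            for j in range(1, m + 1)]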

Proof of Theorem 5.1. We start by analyzing the communication complexity and the verifier’s running time. As for the running time of the verifier: It samples a random key \(K\leftarrow \{0,1\}^\lambda \), taking time \(\lambda \); makes \(m\cdot n\) invocations of \(\mathsf {PRF}\), taking time \(m\cdot n\cdot t_\mathsf{prf}\); computes 2m products of at most n group elements each, taking time \(O(m\cdot n \cdot t_\mathsf{op})\); and participates in m executions of \(\pi \), which takes time \(m\cdot t\). It follows that her running time is \(t' = m\cdot t + \lambda + O(m\cdot n \cdot (t_\mathsf{op} + t_\mathsf{prf}))\). The communication in \(\mathsf{Batch}_2^m(\pi )\) includes the key K and all messages exchanged in m executions of \(\pi \), resulting in a total communication complexity of \(c' = m\cdot c + \lambda \). The \(\delta '\)-soundness of \(\mathsf{Batch}_2^m(\pi )\) follows immediately from the following lemma and the pseudorandomness of \(\mathsf {PRF}\).

Lemma 5.2

For every pair \(\mathsf {P}_\mathsf{Batch}^*= (\mathsf {P}_{\mathsf{Batch},1}^*, \mathsf {P}_{\mathsf{Batch},2}^*)\) of probabilistic polynomial-time algorithms, there exist a probabilistic polynomial-time algorithm \(\mathsf{D}\) and a negligible function \(\nu (\cdot )\) such that

$$\begin{aligned} \mathsf {Adv}_{\mathsf{Batch}_2^m(\pi ), \mathsf {P}^*_\mathsf{Batch}}^\mathsf{BPoCE} \le \delta + 2^{-m} + \mathsf {Adv}_{\mathsf {PRF}, \mathsf{D}}(\lambda ) + \nu (\lambda ) \end{aligned}$$

for all sufficiently large \(\lambda \in \mathbb {N}\).

Due to space limitations, the proof of Lemma 5.2 can be found in the full version of the paper.

6 An Improved Compiler from the Low Order Assumption

In this section we present an improved compiler, which enjoys significant communication improvements over our general compiler from Sect. 5. Concretely, the communication complexity of the resulting protocol is completely independent of the additional soundness error (the verification time, though also improved, still depends on it).Footnote 13 The cost is that this compiler, unlike the previous one, relies on an algebraically-structured computational assumption – the low order assumption (recall Definition 2.4). However, this caveat does not seem overly restrictive when the compiler is applied to either the protocol of Pietrzak or to that of Wesolowski [Pie19, Wes19], both of which rely either on this assumption or on stronger ones. Our compiler is inspired by an approach presented by Bellare, Garay and Rabin [BGR98] (which also implicitly underlies the batch proof of Wesolowski [Wes20]), while introducing some new ideas for the setting of succinct BPoCE (see Sects. 1.2 and 1.3 for details).

6.1 The Compiler

We now present the compiler. Let \(\mathsf {GGen}\) be a group generation algorithm (recall Sect. 2). Let \(\delta = \delta (\lambda )\) be a function of the security parameter \(\lambda \in \mathbb {N}\), and let \(c = c(\lambda , e)\) and \(t = t(\lambda , e)\) be functions of \(\lambda \) and of the exponent \(e\in \mathbb {N}\). Our compiler is parameterized by an integer s, and uses the following building blocks:

  • A \((\delta ,c,t)\)-PoCE \(\pi = (\mathsf {GGen}, \mathsf {P}, \mathsf {V})\).

  • A pseudorandom function family \(\mathsf {PRF}\), such that for every \(\lambda \in \mathbb {N}\) and for every key \(K \in \{0,1\}^\lambda \), the function \(\mathsf {PRF}_K\) maps inputs in \(\{0,1\}^{\lceil \log (n) \rceil }\) to outputs in [s].Footnote 14 For ease of notation, for an integer \(i \in [n]\), we will write \(\mathsf {PRF}_K(i)\), with the intention (but without noting it explicitly) that the input to the function is a bit string representing the integer i.

The compiler produces a protocol \(\mathsf{Batch}_3^s(\pi ) = (\mathsf {GGen}, \mathsf {P}', \mathsf {V}')\), which is a \((\delta ',c', t')\)-BPoCE for related functions \(\delta ' = \delta '(\lambda )\), \(c' = c'(\lambda ,e, n)\) and \(t' = t'(\lambda ,e,n)\).

figure j
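For concreteness, the following Python sketch illustrates the combined-instance step of \(\mathsf{Batch}_3^s(\pi )\) referred to in the analysis below: each index i is assigned a coefficient \(\alpha _i = \mathsf {PRF}_K(i) \in [s]\), and the parties fold the n pairs into a single instance \(x = \prod _{i=1}^{n} x_i^{\alpha _i}\) and \(y = \prod _{i=1}^{n} y_i^{\alpha _i}\), on which \(\pi \) is run once. HMAC-SHA256 stands in for \(\mathsf {PRF}\) and the group is modeled as \(\mathbb {Z}_N^*\); both are illustrative assumptions of this sketch.

import hmac, hashlib

def prf_coefficient(key, i, s):
    # alpha_i = PRF_K(i) in [s]; HMAC-SHA256 is an illustrative PRF, and the
    # reduction modulo s is a simplification of sampling uniformly from [s].
    digest = hmac.new(key, str(i).encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % s + 1

def combined_instance(key, xs, ys, s, N):
    # x = prod_i x_i^{alpha_i} and y = prod_i y_i^{alpha_i} (mod N); the parties
    # then run the underlying PoCE pi once on (x, y, e).
    x, y = 1, 1
    for i, (xi, yi) in enumerate(zip(xs, ys), start=1):
        alpha = prf_coefficient(key, i, s)
        x = (x * pow(xi, alpha, N)) % N
        y = (y * pow(yi, alpha, N)) % N
    return x, y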

Similarly to our discussion in Sect. 5, if the Fiat-Shamir heuristic [FS86] can be applied to \(\pi \) in the random oracle model, then it can also be applied to \(\mathsf{Batch}_3^s(\pi )\). Moreover, if the number n of pairs \((x_i,y_i)\) is a-priori bounded, then the use of \(\mathsf {PRF}\) can be replaced by a pseudorandom generator in a similar manner to what was done in Sect. 5.

Theorem 6.1 establishes the completeness, soundness, succinctness and verifier efficiency of \(\mathsf{Batch}_3^s(\pi )\), in cases where the low order assumption holds with respect to \(\mathsf {GGen}\). Recall that we denote by \(t_\mathsf{op} = t_\mathsf{op}(\lambda )\) the time required to apply the binary group operation on two group elements in \(\mathbb {G}\) that is generated by \(\mathsf {GGen}(1^\lambda )\). We also denote by \(t_\mathsf{prf} = t_\mathsf{prf}(\lambda ,s,n)\) the time required to compute \(\mathsf {PRF}_K(z)\) for \(K\in \{0,1\}^\lambda \) and \(z \in \{0,1\}^{\lceil \log (n) \rceil }\).

Theorem 6.1

Assume that \(\mathsf {PRF}\) is a pseudorandom function, that \(\pi \) is a \((\delta ,c,t)\)-PoCE, and that the s-low order assumption holds with respect to \(\mathsf {GGen}\). Then, \(\mathsf{Batch}_3^s(\pi )\) is a \((\delta ', c', t')\)-BPoCE, where:

  • \(\delta '(\lambda ) = \delta (\lambda ) + 1/s\).

  • \(c'(\lambda ,n, e) = c(\lambda , e) + \lambda \).

  • \(t'(\lambda , n, e) = t(\lambda , e) + \lambda + n \cdot t_\mathsf{prf} + O(n\cdot \log (s)\cdot t_\mathsf{op})\).

Instantiating the Compiler. Basing the compiler on the general low order assumption gives rise to several possible instantiations. In particular:

  • The groups \(\boldsymbol{QR_N}\) and \(\boldsymbol{QR_N^+}\). The low order assumption holds unconditionally in the group \(QR_N\) of quadratic residues modulo N when N is the product of two safe primes, as well as in the (isomorphic) group \(QR_N^+\) of signed quadratic residues modulo N (recall Sect. 2). Concretely, if \(N = (2p+1)\cdot (2q+1)\) for primes p and q, then \(QR_N\) and \(QR_N^+\) contain no non-identity elements of order less than \(\min \{p,q\}\). In the context of VDFs, this was observed by Pietrzak [Pie19] and by Boneh et al. [BBF18]. However, it is also plausible that the assumption holds computationally in the groups \(QR_N\) and \(QR_N^+\) when the factors of N are not safe primes.

  • RSA groups. It is tempting to instantiate our compiler within the RSA group \(\mathbb {Z}_N^*\) as perhaps the best-understood group of unknown order. Alas, the low order assumption cannot hold in \(\mathbb {Z}_N^*\), since \(-1\in \mathbb {Z}_N^*\) is always an element of order two in this group. One possible workaround, suggested by Boneh et al., is to work over the quotient group \(\mathbb {Z}_N^*/\{\pm 1\}\) (see the sketch following this list). Another possibility is to settle on a slightly weaker soundness guarantee for BPoCEs, which allows a malicious prover to convince the verifier that \(y_i = x_i^e\) for every i, even though for some indices \(y_i = -x_i^e\). This weakened soundness guarantee is defined in the full version and can be shown to follow from the weak low order assumption (Definition 2.5), using essentially the same proof as the proof of Theorem 6.1. Moreover, Seres and Burcsi [SB20] have shown that when N is the product of two safe primes, breaking the weak low order assumption in \(\mathbb {Z}^*_N\) is equivalent to factoring the modulus N.Footnote 15 A third option is to compose our compiler with an additional protocol, specifically dedicated to proving that \(y_i \ne -x_i^e\) for each i. We present and analyze such a protocol in the paper’s full version.

  • Class groups of imaginary quadratic fields. These groups were suggested in the context of VDFs by Wesolowski [Wes19] as candidate groups of unknown order. The security of the low order assumption in these groups is still unclear [BBF18, BKS+20]; but at least until proven otherwise, it is possible that our compiler can be instantiated in a sub-family of these groups as well. See the recent work of Belabas et al. [BKS+20] for further details on the possible choice of parameters for such groups.
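To illustrate the quotient-group option mentioned above, the following Python sketch represents each coset \(\{a, -a\}\) of \(\mathbb {Z}_N^*/\{\pm 1\}\) by its smaller member and implements the induced multiplication and exponentiation. This is only an illustration of the quotient construction under that representation choice, not a prescription of the paper.

def canonical(a, N):
    # Canonical representative of the coset {a, -a} in Z_N^* / {+-1}: the
    # smaller of a mod N and N - (a mod N).
    a %= N
    return min(a, N - a)

def quotient_mul(a, b, N):
    # Group operation in Z_N^* / {+-1}: multiply modulo N, then re-normalize.
    return canonical(a * b, N)

def quotient_pow(a, e, N):
    # Exponentiation in the quotient group, via modular exponentiation in Z_N^*.
    return canonical(pow(a, e, N), N)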

On the Tightness of the Reduction. In Sect. 6.2 we prove the soundness of our compiler based on the low order assumption. This general reduction, however, suffers from a cubic security loss: Given a prover which breaks the soundness of the resulting BPoCE with advantage \(\delta + 1/s + \epsilon \), we construct an adversary breaking the low order assumption with advantage \(O(\epsilon ^3)\). Coming up with a tight reduction to the general low order assumption seems to be beyond current techniques. Hence, instead, in the full version of the paper, we give specific proofs for the soundness of our compiler in the groups \(QR_N^+\) and \(\mathbb {Z}_N^*\). In the former, our proof is information-theoretic, while in the latter, it relies on a tight reduction to the factoring assumption.

Necessity of the Low Order Assumption. We note that our reliance on the s-low order assumption in Theorem 6.1 is necessary. To see why, suppose that we work in a group \(\mathbb {G}\) in which the assumption does not hold; that is, given the group description it is easy to find a group element \(z \ne 1_\mathbb {G}\) and an integer \(1< \omega < s\) such that \(z^\omega = 1_\mathbb {G}\). In this case, the attacker can output an instance \(((x_i,y_i)_{i\in [n]}, e)\) such that n and e are arbitrary integers, \(x_1,\ldots ,x_n\) are arbitrary group elements, \(y_i = x_i^e\) for every \(i \in \{2,\ldots , n\}\) and \(y_1 = z\cdot x_1^e\). The verifier \(\mathsf {V}'\) will incorrectly accept whenever the group elements x and y computed by \(\mathsf {P}'\) and \(\mathsf {V}'\) in the combined-instance step of the protocol satisfy \(y = x^e\). This occurs when the exponents \(\alpha _1,\ldots ,\alpha _n\) satisfy \(\left( \prod _{i=1}^{n} x_i^{\alpha _i} \right) ^e = \prod _{i=1}^{n} y_i^{\alpha _i}\). By the choices made by the attacker, this equality holds whenever \(z^{\alpha _1} = 1_{\mathbb {G}}\), which happens with probability at least 1/s.
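The attack described above can be demonstrated concretely in \(\mathbb {Z}_N^*\), where \(z = N-1\) (i.e., \(-1\)) has order two. The following Python experiment, run with small illustrative toy parameters, corrupts \(y_1\) as described and estimates the probability that the combined check passes; the estimate concentrates around 1/2, which is at least 1/s for any \(s > 2\).

import math, random

def attack_success_rate(N, e, s, n, trials=2000):
    # z = N - 1 satisfies z != 1 and z^2 = 1 in Z_N^*, so the s-low order
    # assumption fails in Z_N^* for any s > 2. Corrupt the first output,
    # y_1 = z * x_1^e, and estimate how often the combined check
    #   (prod_i x_i^{alpha_i})^e == prod_i y_i^{alpha_i}  (mod N)
    # still passes; it passes exactly when alpha_1 is even.
    z = N - 1
    xs = []
    while len(xs) < n:
        a = random.randrange(2, N)
        if math.gcd(a, N) == 1:
            xs.append(a)
    ys = [pow(x, e, N) for x in xs]
    ys[0] = (z * ys[0]) % N
    hits = 0
    for _ in range(trials):
        alphas = [random.randrange(1, s + 1) for _ in range(n)]
        lhs = pow(math.prod(pow(x, a, N) for x, a in zip(xs, alphas)) % N, e, N)
        rhs = math.prod(pow(y, a, N) for y, a in zip(ys, alphas)) % N
        hits += int(lhs == rhs)
    return hits / trials  # concentrates around 1/2 >= 1/s

# Example with toy parameters: attack_success_rate(N=3 * 7, e=5, s=8, n=4)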

Proof of Theorem 6.1. We first analyze the communication complexity and the running time of the verifier, and then, in Sect. 6.2, we base the soundness of \(\mathsf{Batch}_3^s(\pi )\) on the low order assumption. As for the running time of the verifier: It samples a random key \(K\leftarrow \{0,1\}^\lambda \), taking time \(\lambda \); makes n invocations of \(\mathsf {PRF}\), taking time \(n\cdot t_\mathsf{prf}\); raises the group elements \(x_1,\ldots ,x_n,y_1,\ldots ,y_n\) to exponents which are bounded by s, which takes time \(O(n\cdot \log (s) \cdot t_\mathsf{op})\); computes two products of n group elements each, taking time \(O(n \cdot t_\mathsf{op})\); and participates in a single execution of \(\pi \), which takes time t. It follows that her running time is \(t' = t + \lambda + n\cdot t_\mathsf{prf} + O(n \cdot (\log (s) + 1) \cdot t_\mathsf{op})\). The communication in \(\mathsf{Batch}_3^s(\pi )\) includes the key K and all messages exchanged in a single execution of \(\pi \), resulting in a total communication of \(c' = c + \lambda \).

6.2 Soundness Analysis Based on the Low Order Assumption

The proof of soundness for \(\mathsf{Batch}_3^s(\pi )\) follows the same outline as did the corresponding proof in Sect. 5, and is by reduction to the \(\delta \)-soundness of \(\pi \), to the pseudorandomness of \(\mathsf {PRF}\) and to the low order assumption with respect to \(\mathsf {GGen}\). Since the reduction and its analysis are extremely similar to those presented in Sect. 5, we forgo presenting them explicitly here, and instead concentrate on the main differences.

Concretely, the only major difference between the soundness analysis of \(\mathsf{Batch}_3^s(\pi )\) and the analysis of \(\mathsf{Batch}_2^m(\pi )\) in Sect. 5 is that instead of relying on Lemma 4.2 in order to lower bound the probability that \(x^e \ne y\) (where x and y are computed from \(\vec {x},\vec {y}\) in the combined-instance step of \(\mathsf{Batch}_3^{s}(\pi )\)), we rely on Lemma 6.2 and Corollary 6.3 below. Loosely speaking, relying on the low order assumption with respect to \(\mathsf {GGen}\), Lemma 6.2 and its corollary assert that if there is some \(i\in [n]\) for which \(x_i^e \ne y_i\), then with probability at most \(1/s + \mathsf{negl}(\lambda )\) over the choice of \(\alpha _1, \ldots , \alpha _n \leftarrow [s]\), it holds that \(x^e = y\).Footnote 16

Lemma 6.2

Let \(\mathbb {G}\) be a group. For all integers \(n, e \in \mathbb {N}\), any integer s, any \(\epsilon \in (0,1)\) and all vectors \(\vec {x}, \vec {y} \in \mathbb {G}^n\), the following holds: If there exists an index \(i \in [n]\) such that \(x_i^e \ne y_i\) and

$$\begin{aligned} \Pr _{\alpha _1,\ldots ,\alpha _n \leftarrow [s]}\left[ \left( \prod _{i=1}^{n} x_i^{\alpha _i}\right) ^e = \prod _{i=1}^{n} y_i^{\alpha _i}\right] \ge \frac{1}{s} + \epsilon , \end{aligned}$$

then there exists an algorithm \(\mathsf{A}\) which receives as input \(\vec {x},\vec {y}\) and e, runs in time \(\mathsf{poly}(\lambda , n, \log (e))\), and with probability at least \(\epsilon ^2\) outputs \((u,\omega ) \in \mathbb {G}\times \mathbb {N}\) such that \(u \ne 1_\mathbb {G}\), \(1< \omega < s\) and \(u^\omega = 1_{\mathbb {G}}\).

Corollary 6.3 below follows from Lemma 6.2 and from the definition of the low order assumption (Definition 2.4).

Corollary 6.3

Let \(\mathsf {GGen}\) be a group generation algorithm and let \(s = s(\lambda )\) be a function of the security parameter \(\lambda \). If the s-low order assumption holds with respect to \(\mathsf {GGen}\), then for any probabilistic polynomial-time algorithm \(\mathsf{P}_0^*\), there exists a negligible function \(\nu (\cdot )\) such that

$$\begin{aligned} \Pr _{\begin{array}{c} (\mathbb {G},\mathsf {pp}) \leftarrow \mathsf {GGen}(1^\lambda ) \\ ((x_i,y_i)_{i\in [n]}, e) \leftarrow \mathsf{P}_0^*(\mathbb {G},\mathsf {pp}) \\ \alpha _1,\ldots ,\alpha _n \leftarrow [s] \end{array}}\left[ \left( \prod _{i=1}^{n} x_i^{\alpha _i}\right) ^e = \prod _{i=1}^{n} y_i^{\alpha _i} \ \wedge \ \exists i \in [n] : x_i^e \ne y_i\right] \le \frac{1}{s} + \nu (\lambda ) \end{aligned}$$

for all sufficiently large \(\lambda \in \mathbb {N}\).

Due to space limitations, the proofs of Lemma 6.2 and Corollary 6.3 can be found in the paper’s full version.