Abstract
A Verifiable Delay Function (VDF) is a function that takes at least T sequential steps to evaluate and produces a unique output that can be verified efficiently, in time essentially independent of T. In this work we study tight VDFs, where the function can be evaluated in time not much more than the sequentiality bound T.
On the negative side, we show the impossibility of a black-box construction from random oracles of a VDF that can be evaluated in time \(T + O(T^\delta )\) for any constant \(\delta < 1\). On the positive side, we show that any VDF with an inefficient prover (running in time cT for some constant c) that has a natural self-composability property can be generically transformed into a VDF with a tight prover efficiency of \(T+O(1)\). Our compiler introduces only a logarithmic factor overhead in the proof size and in the number of parallel threads needed by the prover. As a corollary, we obtain a simple construction of a tight VDF from any succinct non-interactive argument combined with repeated hashing. This is in contrast with prior generic constructions (Boneh et al., CRYPTO 2018) that required the existence of incremental verifiable computation, which entails stronger assumptions and complex machinery.
1 Introduction
Verifiable Delay Functions (VDFs), introduced by Boneh et al. [5], are a recent cryptographic primitive that allows one to put protocol parties on hold for a set amount of time. VDFs are functions characterized by three properties. For a time parameter \(T\): (i) it should be possible to compute the function in sequential time \(T\). Furthermore, a VDF should be T-sequential in the sense that (ii) it should not be possible to compute the function in (possibly parallel) time significantly less than \(T\). Finally, (iii) the function should produce a proof \(\pi \) which convinces a verifier that the function output has been correctly computed. Such a proof \(\pi \) should be succinct, in the sense that the proof size and the verification complexity are (essentially) independent of \(T\). These properties enable a prover to convince a verifier that a certain amount of time has elapsed, say, by computing the function on an input provided by the verifier.
After the seminal work of Boneh et al. [5], VDFs have rapidly generated interest in the community and several follow-up constructions have recently been proposed [18, 22], and there is active work in implementing and optimizing them for near-term practical use [8]. This is partially motivated by the large range of applications of this primitive. As an example, VDFs can turn blockchains into randomness beacons – introducing a delay in the generation of the randomness prevents malicious miners from sampling blocks adaptively to bias the outcome of the beacon. VDFs are also useful as a computational time-stamp and have further applications in the context of proofs of replications [1] and resource-efficient blockchains [9].
One of the major efficiency metrics for a VDF is the prover’s computational complexity, in relation to the time parameter T. This determines the time taken to evaluate a VDF by an honest prover, and therefore the gap with respect to the best possible successful malicious machine (which we require to take time at least T). In the ideal case this gap is non-existent, i.e., the prover can compute the VDF in time exactly T without resorting to massive parallelization. This work asks the following question:
When do VDFs with tight prover complexity exist?
Our Negative Result: Motivated by concerns about concrete efficiency, we first investigate the possibility of black-box constructions of VDFs from other cryptographic primitives. In particular, given the prevalence of strong and efficient candidates for simple cryptographic primitives like one-way functions and collision-resistant hash functions (SHA256, for instance), we would ideally like to use these as black-boxes to get similarly strong and efficient VDFs. As a negative result, we show that it is impossible to construct a T-sequential VDF where the prover runtime is close to T (with any number of processors) in a black-box manner from random oracles (and thus one-way functions or collision-resistant hash functions).
Theorem 1 (Informal)
There is no black-box construction of a T-sequential VDF from a random oracle where the prover makes at most \(T + O(T^\delta )\) rounds of queries to the oracle for some constant \(\delta < 1\).
Our Positive Result: On the other hand, we find that the natural generic non-black-box approach to constructing a VDF can actually be made tight. All known constructions of VDFs proceed by iteratively applying some function f to the given input – that is, computing f(x), f(f(x)), and so on. A proof that this was done correctly is computed either afterwards or during this computation. We show that, assuming a modest amount of parallelism for the prover, we can bootstrap any such VDF, where the prover complexity may not be tight, into a VDF where the prover runtime matches the sequentiality bound \(T\). More specifically, we construct VDFs that can be computed in parallel time \(T+O(1)\) using \(O(\log (T))\) processors and space.
Our bootstrapping theorem consists of a compiler that transforms any VDF with a somewhat natural self-composability property into a VDF with a tight prover complexity. Roughly speaking, we require that the evaluation of a VDF on time parameter T can be decomposed into two (sequential) evaluations of the same VDF on parameters \(T_1\) and \(T_2\) such that \(T_1 + T_2 = T\). This is satisfied by all known VDF candidates. The resulting scheme is as secure as the underlying VDF. Furthermore, the transformation is practically efficient and simple to implement.
Theorem 2 (Informal)
If there exists a self-composable VDF with a prover runtime bounded by \(c\cdot T\) for some constant c, then there exists a VDF with prover runtime bounded by \(T+O(1)\).
As our transformation mentioned above is black-box, Theorems 1 and 2 together rule out black-box constructions from random oracles of a self-composable VDF where the prover makes at most cT rounds of queries for some constant c. We highlight a few other interesting corollaries of our theorem:
-
(1)
Assuming the existence of an inherently sequential function and (not necessarily incremental) succinct non-interactive arguments [14, 17], there exists a T-sequential VDF, for any T, where the prover runs in time T using a poly-logarithmic number of processors.
This improves over generic constructions from prior work [5], which required incremental verifiable computation [3, 21], a stronger primitive. Next, we turn our attention to specific number-theoretic constructions. In this context we improve the prover efficiency of the construction of Pietrzak [18], where the prover runs in time approximately \(T + \sqrt{T}\).
-
(2)
Assuming the security of Pietrzak’s VDF, there exists a \(T\)-sequential VDF with prover parallel runtime exactly \(T\) using \(\log \log (T)\) processors and space. The proof size is increased by a factor of \(\log \log (T)\) and the verifier parallel complexity is unchanged.
Our result generalizes prior work by Wesolowski [22], which obtains a specific construction of a tight VDF. We discuss the consequences of our bootstrapping theorem in greater detail in Appendix A. Regarding the additional parallelism used by our prover, we stress that \(\log (T)\) parallelism by the prover is also (implicitly) assumed by prior work: In the circuit model, even computing a simple collision-resistant hash with security parameter \(\lambda \) already requires at least \(\lambda \) (\(> \log (T)\) in our setting) parallel processors. Our compiler adds an extra logarithmic factor, which is well within reach of modern GPUs.
1.1 Our Techniques
In this section we give a brief overview of the main ideas behind this work. To simplify the exposition, we first discuss our bootstrapping theorem and then we give some intuition behind our impossibility result.
Self-composability and Weak Efficiency. In the standard syntax of VDFs, both the output value y and the proof \(\pi \) are computed by a single function \(\mathsf {Eval}\). For the purposes of this work, it is instructive to think of the algorithms which compute a VDF's function values y and proofs \(\pi \) as separate algorithms \(\mathsf {Comp}\) and \(\mathsf {Prove}\): \(\mathsf {Comp}\) takes as input the time parameter \(T\) and an input x and outputs a function value y and auxiliary information \(\alpha \). On the other hand, \(\mathsf {Prove}\) takes as input \(\alpha \) and outputs a proof \(\pi \). The first property that we assume for the underlying VDF is weak efficiency: On time parameter \(T\), the \(\mathsf {Comp}\) algorithm runs in time \(T\) whereas \(\mathsf {Prove}\) runs in time at most cT, for some constant c. While our argument is generic, for the purpose of this outline we always assume that \(c=1\), i.e., it takes the same number of steps to compute the function via \((y,\alpha ) \leftarrow \mathsf {Comp}(T,x)\) and to compute the proof \(\pi \leftarrow \mathsf {Prove}(\alpha )\).
We also assume that the function \(\mathsf {Comp}\) is self-composable. Namely, let \(T= T_1 + T_2\) for any \(T_1,T_2 > 0\). For any input x, we require that if \((y_1,\cdot ) \leftarrow \mathsf {Comp}(T_1,x)\), \((y_2,\cdot ) \leftarrow \mathsf {Comp}(T_2,y_1)\), and \((y,\cdot ) \leftarrow \mathsf {Comp}(T,x)\), then it holds that \(y_2 = y\). In other words, we require that the function \((y,\cdot ) \leftarrow \mathsf {Comp}(T,x)\) can be computed in two smaller steps, where we feed the result of the first step back into the function to obtain the actual output.
We argue that these are mild requirements for VDFs, in fact all of the recent constructions fit into this paradigm and therefore can be used as input for our compiler. This includes the more structured approach of repeated squaring over hidden order groups [18, 22] and even the straightforward combination of repeated hashing and succinct arguments [5].
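As a concrete illustration, repeated hashing satisfies this self-composability property. The following minimal Python sketch (with a hypothetical `comp` standing in for \(\mathsf {Comp}\); proofs are omitted entirely) checks that splitting the iteration budget and feeding the intermediate value back in yields the same output:

```python
import hashlib

def comp(T, x):
    # Iterated SHA-256 as the candidate sequential function (proofs omitted).
    for _ in range(T):
        x = hashlib.sha256(x).digest()
    return x

x, T1, T2 = b"input", 300, 700
# Self-composability: computing T1 steps, then T2 more steps on the
# intermediate value, equals a single computation of T1 + T2 steps.
assert comp(T2, comp(T1, x)) == comp(T1 + T2, x)
```

Any iteratively defined function yields such a `comp`, which is why all known candidates fit the paradigm.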
Bootstrapping VDFs. In favor of a simpler presentation, throughout the following informal discussion we assume that \(T= 2^t\), for some integer t. The more general case is handled in the main body of the paper and follows with minor modifications. Recall that, by the self-composability property, we can split the computation \((y,\alpha ) \leftarrow \mathsf {Comp}(2^t,x)\) into two separate blocks \((y_1,\alpha _1) \leftarrow \mathsf {Comp}(2^{t-1},x)\) and \((y,\alpha _2) \leftarrow \mathsf {Comp}(2^{t-1},y_1)\). Our main insight is that we do not need to compute a proof \(\pi \) for the full trace of the computation; instead, we can compute two separate proofs for the corresponding subroutines. The final proof then consists of the concatenation of the two proofs. More specifically, we set \(\pi = (\pi _1, \pi _2)\), where \(\pi _1 \leftarrow \mathsf {Prove}(\alpha _1)\) and \(\pi _2 \leftarrow \mathsf {Prove}(\alpha _2)\).
This modification allows us to leverage parallelism. To evaluate the function on input x, one first computes \((y_1,\alpha _1) \leftarrow \mathsf {Comp}(2^{t-1},x)\) in a single-threaded computation. Once this step is reached, the computation forks into two parallel threads: a thread \(S_1\) that computes \((y,\alpha _2) \leftarrow \mathsf {Comp}(2^{t-1},y_1)\) and a thread \(S_2\) that computes \(\pi _1 \leftarrow \mathsf {Prove}(\alpha _1)\). Note that by the weak efficiency of the VDF, the runtime of the algorithm \(\mathsf {Prove}\) is identical to that of \(\mathsf {Comp}(2^{t-1},x)\), i.e., \(2^{t-1}\) steps. It follows that \(S_1\) and \(S_2\) terminate simultaneously. In other words, both y and \(\pi _1\) will be available at the same time.
At this point only the computation of the proof \(\pi _2\) is missing. If we were to do it naively, we would need an extra \(2^{t-1}\) steps to compute \(\mathsf {Prove}(\alpha _2)\) after the computation of \((y,\alpha _2) \leftarrow \mathsf {Comp}(2^{t-1},y_1)\) terminates. This would yield a total computation time of \(T+ T/2\), which is still far from optimal. However, observe that our trick has cut the original computation overhead by a factor of 2. This suggests a natural strategy: We recursively apply the same algorithm to \(\mathsf {Comp}(2^{t-1},y_1)\) and further split the proof \(\pi _2\) into two sub-proofs for two equal-length chunks of computation. The recursion proceeds up to the point where the computation of the function consists of a single step, at which point no proof is needed. Note that this happens after \(t = \log (T)\) iterations. Since we spawn a new thread for each level of the recursion, this also bounds the total amount of parallelism of the prover.
Our actual proof \(\pi \) now consists of \(((y_1,\pi _1),\dots ,(y_t,\pi _t))\), i.e. the proof \(\pi \) consists of t components. We verify \(\pi \) in the canonical way, that is, setting \(y_0 = x\) we compute \(\mathsf {Vf}(2^{t-i},y_{i-1},y_i,\pi _i)\) for all \(1 \le i \le t\) and accept if all verify and \(y_t = y\).
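The recursion above can be sketched as follows. This is a Python toy, not the actual scheme: `comp` iterates SHA-256, the "proof" is a placeholder, `verify` recomputes each segment instead of calling a succinct \(\mathsf {Vf}\), and the parallel fork is simulated sequentially. For \(T = 2^t\), the evaluator produces t output-proof pairs:

```python
import hashlib

def comp(T, x):
    # Toy stand-in for Comp: T iterations of SHA-256; alpha is the output.
    y = x
    for _ in range(T):
        y = hashlib.sha256(y).digest()
    return y, y

def prove(alpha):
    return alpha  # placeholder proof

def eval_vdf(T, x):
    # Recursive evaluation for T = 2^t: compute the first half, fork off a
    # proof for it, and recurse on the second half. (The fork is simulated
    # sequentially here; the real prover runs prove() on a fresh thread.)
    if T == 1:
        y, _ = comp(1, x)
        return y, []
    y1, alpha1 = comp(T // 2, x)
    pi1 = prove(alpha1)
    y, rest = eval_vdf(T // 2, y1)
    return y, [(y1, pi1)] + rest

def verify(T, x, y, proof):
    # Canonical verification: check the segments of length T/2, T/4, ..., 1
    # (by recomputation in this toy; a real verifier calls Vf per segment).
    cur, seg = x, T // 2
    for yi, _pi in proof:
        if comp(seg, cur)[0] != yi:
            return False
        cur, seg = yi, max(seg // 2, 1)
    return comp(1, cur)[0] == y
```

For instance, `eval_vdf(16, b"seed")` yields a proof list of length 4 (one pair per level) and an output equal to 16 hash iterations on the input.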
Black-Box Impossibility. We show that there cannot be a black-box construction of a VDF from a random oracle if the overhead in computing the proof in the VDF is small. That is, if the number of sequential rounds of queries that the algorithm \(\mathsf {Eval}\) makes to the oracle is less than \(T + O(T^\delta )\) for some constant \(\delta < 1\). Note that our transformation sketched above is itself black-box, and thus our result also rules out black-box constructions of self-composable VDFs with a constant-factor overhead in generating the proof.
The central idea behind the impossibility is the observation that, since the verification algorithm \(\mathsf {Vf}\) makes only a small number of queries, it cannot tell whether \(\mathsf {Eval}\) actually made all the queries it was supposed to. For simplicity, suppose that \(\mathsf {Eval}\) makes exactly T sequential rounds of queries, and that the sequentiality of the VDF guarantees that its output cannot be computed in less than T rounds of queries. On input x, suppose \(\mathsf {Eval}(x) = (y,\pi )\). Efficiency and completeness of the VDF require that the verification algorithm \(\mathsf {Vf}\), making only \(poly(\log (T))\) queries to the oracle, accepts when given \((x,y,\pi )\). Soundness, on the other hand, requires that the same \(\mathsf {Vf}\) rejects when given \((x,y',\pi ')\) for any \(y' \ne y\) and any \(\pi '\). We show that all of these cannot hold at the same time while making only black-box use of the random oracle, if this oracle is the only source of computational hardness.
Consider an alternative evaluation algorithm \(\overline{\mathsf {Eval}}\) that behaves the same as \(\mathsf {Eval}\) except that, on one of the rounds of queries that \(\mathsf {Eval}\) makes, instead of making the queries to the oracle, it sets their answers on its own to uniformly random strings of the appropriate length. Otherwise it proceeds as \(\mathsf {Eval}\) does, and outputs whatever \(\mathsf {Eval}\) would output. Now, unless the algorithm \(\mathsf {Vf}\) makes one of the queries that \(\overline{\mathsf {Eval}}\) skipped, it cannot distinguish between the outputs of \(\mathsf {Eval}\) and \(\overline{\mathsf {Eval}}\). As these skipped queries are only a 1/T fraction of the queries that \(\overline{\mathsf {Eval}}\) made, \(\mathsf {Vf}\), which makes only \(poly(\log (T))\) queries, catches them with only this small probability. Thus, if \(\mathsf {Vf}\) accepts the output \((y,\pi )\) of \(\mathsf {Eval}(x)\), then it should also accept the output \((y',\pi ')\) of \(\overline{\mathsf {Eval}}(x)\) with high probability.
On the other hand, as \(\overline{\mathsf {Eval}}\) made less than T rounds of queries to the oracle, sequentiality implies that \(y'\) should be different from y (except perhaps with negligible probability). Thus, with high probability, \(\mathsf {Vf}\) accepts \((y',\pi ')\) where \(y' \ne y\), contradicting soundness.
1.2 Related Work
A related concept is that of Proofs of Sequential Work (PoSW), originally introduced by Mahmoody, Moran, and Vadhan [15]. A PoSW can be seen as a relaxed version of a VDF where the output of the function is not necessarily unique: PoSWs satisfy a loose notion of sequentiality where the prover is only required to perform at least \(\alpha T\) sequential steps, for some constant \(\alpha \in [0,1]\). In contrast with VDFs, PoSWs admit efficient instantiations that make only black-box calls to a random oracle [10, 11].
Time-lock puzzles [20] allow one to hide some information for a certain (polynomial) amount of time. This primitive is intimately related to sequential computation as it needs to withstand attacks from massively parallel algorithms. Time-lock puzzles have been instantiated in RSA groups [20] or assuming the existence of succinct randomized encodings and of a worst-case non-parallelizable language [4]. Time-lock puzzles can be seen as VDFs with an additional encryption functionality, except that there is no requirement for an efficient verification algorithm. In this sense the two primitives are incomparable.
Two constructions of tight VDFs were proposed in the seminal work of Boneh et al. [5], one assuming the existence of incremental verifiable computation and the other from permutation polynomials (shown secure under a new assumption). The latter scheme achieves only a weak form of prover efficiency since it requires T parallel processors to be evaluated tightly. Shortly after, two number-theoretic constructions were presented independently by Pietrzak [18] and Wesolowski [22], based on squaring in groups of unknown order. In their original presentation, neither of these schemes was tight, as the prover required an additional \(\sqrt{T}\) and \(T/\log (T)\) steps, respectively [6].
However, Wesolowski [22] later updated his paper with a paragraph that sketches a method to improve the prover complexity of his construction from \(T+O(T/\log (T)) \) to \(T + O(1)\), using techniques similar to the ones in our compiler. Our bootstrapping theorem can be seen as a generalization of these techniques and can be applied to a broader class of constructions. Finally, we mention a new VDF construction from supersingular isogenies [12], where the validity of the function output can be checked without computing any extra proof; however, the public parameters of such a scheme grow linearly with the time parameter T.
In a concurrent and independent work, Mahmoody et al. [16] also show an impossibility result. Compared to ours, their lower bound rules out the existence of black-box VDFs for a looser set of parameters (i.e., it is a more general result). In contrast with our work, their paper does not contain any results on new instantiations of VDFs.
2 Verifiable Delay Functions
We denote by \(\lambda \in \mathbb {N}\) the security parameter. We say that a function \( negl \) is negligible if it vanishes faster than the inverse of any polynomial. Given a set S, we denote by \(x \xleftarrow {\$ } S\) the uniform sampling of x from S. We say that an algorithm runs in parallel time T with P-many processors if it can be implemented by a PRAM machine with P parallel processors running in time T. We say that an algorithm is PPT if it can be implemented by a probabilistic machine running in time polynomial in \(\lambda \).
Here we recall the definition of verifiable delay functions (VDF) from [5].
Definition 1 (Verifiable Delay Function)
A VDF \(\mathfrak {V}= (\mathsf {Setup}, \mathsf {Gen}, \mathsf {Eval}, \mathsf {Vf})\) is defined as the following tuple of algorithms.
-
\(\mathsf {Setup}(1^\lambda ) \rightarrow pp:\) On input the security parameter \(1^\lambda \), the setup algorithm returns the public parameters \(pp\). By convention, the public parameters encode an input domain \(\mathcal {X}\) and an output domain \(\mathcal {Y}\).
-
\(\mathsf {Gen}(pp) \rightarrow x:\) On input the public parameters \(pp\), the instance generation algorithm samples a random input \(x \xleftarrow {\$ } \mathcal {X}\).
-
\(\mathsf {Eval}(pp, x, T) \rightarrow (y, \pi ):\) On input the public parameters \(pp\), an input \(x \in \mathcal {X}\), and a time parameter T, the evaluation algorithm returns an output \(y \in \mathcal {Y}\) together with a proof \(\pi \). The evaluation algorithm may use random coins to compute \(\pi \) but not for computing y.
-
\(\mathsf {Vf}(pp, x, y, \pi , T) \rightarrow \{0,1\}:\) On input the public parameters \(pp\), an input \(x \in \mathcal {X}\), an output \(y \in \mathcal {Y}\), a proof \(\pi \), and a time parameter T, the verification algorithm outputs a bit \(b \in \{0,1\}\).
Efficiency. We require that the setup and the instance generation algorithms run in time \( poly (\lambda )\), whereas the running time of the verification algorithm must be bounded by \( poly ( log (T), \lambda )\). For the evaluation algorithm, we require it to run in parallel time exactly T. We also consider less stringent notions of efficiency where its (parallel) running time is bounded by cT, for some constant c.
Completeness. The completeness property requires that correctly generated proofs always cause the verification algorithm to accept.
Definition 2 (Completeness)
A VDF \(\mathfrak {V}= (\mathsf {Setup}, \mathsf {Gen}, \mathsf {Eval}, \mathsf {Vf})\) is complete if for all \(\lambda \in \mathbb {N}\) and all \(T \in \mathbb {N}\) it holds that
\[ \Pr \left[ \mathsf {Vf}(pp, x, y, \pi , T) = 1 \;:\; \begin{array}{l} pp \leftarrow \mathsf {Setup}(1^\lambda ) \\ x \leftarrow \mathsf {Gen}(pp) \\ (y, \pi ) \leftarrow \mathsf {Eval}(pp, x, T) \end{array} \right] = 1. \]
Sequentiality. We require a VDF to be sequential in the sense that no machine should be able to gain a noticeable speed-up in terms of parallel running time, when compared with the honest evaluation algorithm.
Definition 3 (Sequentiality)
A VDF \(\mathfrak {V}= (\mathsf {Setup}, \mathsf {Gen}, \mathsf {Eval}, \mathsf {Vf})\) is sequential if for all \(\lambda \in \mathbb {N}\) and for all pairs of PPT machines \((\mathcal {A}_1, \mathcal {A}_2)\), such that the parallel running time of \(\mathcal {A}_2\) (with any polynomial amount of processors in T) is less than T, there exists a negligible function \( negl \) such that
\[ \Pr \left[ y = \tilde{y} \;:\; \begin{array}{l} pp \leftarrow \mathsf {Setup}(1^\lambda ) \\ \alpha \leftarrow \mathcal {A}_1(pp, T) \\ x \leftarrow \mathsf {Gen}(pp) \\ \tilde{y} \leftarrow \mathcal {A}_2(\alpha , x) \\ (y, \cdot ) \leftarrow \mathsf {Eval}(pp, x, T) \end{array} \right] \le negl (\lambda ). \]
Soundness. For soundness we require that it is computationally hard to find two valid outputs for a single instance of the VDF. Note that here we do not constrain the running time of the adversary, except for being polynomial in \(\lambda \) and T.
Definition 4 (Soundness)
A VDF \(\mathfrak {V}= (\mathsf {Setup}, \mathsf {Gen}, \mathsf {Eval}, \mathsf {Vf})\) is sound if for all \(\lambda \in \mathbb {N}\) and for all PPT machines \(\mathcal {A}\) there exists a negligible function \( negl \) such that
\[ \Pr \left[ \mathsf {Vf}(pp, x, \tilde{y}, \tilde{\pi }, T) = 1 \,\wedge \, y \ne \tilde{y} \;:\; \begin{array}{l} pp \leftarrow \mathsf {Setup}(1^\lambda ) \\ (x, \tilde{y}, \tilde{\pi }, T) \leftarrow \mathcal {A}(pp) \\ (y, \cdot ) \leftarrow \mathsf {Eval}(pp, x, T) \end{array} \right] \le negl (\lambda ). \]
3 A Bootstrapping Theorem for VDFs
In this section we propose a compiler that takes as input any weakly efficient VDF (that satisfies some natural composability properties) and turns it into a fully-fledged efficient scheme. We first characterize the exact requirements of the underlying VDF, then we describe our construction and we show that it preserves all of the properties of the underlying scheme.
3.1 Building Block
We require that the underlying VDF can be composed with itself (possibly using different time parameters) arbitrarily many times, without altering the function output, i.e., \(\mathsf {Eval}(pp, \cdot , T_1) \circ \mathsf {Eval}(pp, \cdot , T_2) = \mathsf {Eval}(pp, \cdot , T_1 + T_2)\). More concretely, we assume the existence of a VDF that satisfies the following.
Definition 5 (Self-Composability)
A VDF \(\mathfrak {V}= (\mathsf {Setup}, \mathsf {Gen}, \mathsf {Eval}, \mathsf {Vf})\) is self-composable if, for all \(pp\) in the support of \(\mathsf {Setup}(1^\lambda )\), for all \(x\in \mathcal {X}\), all \((T_1, T_2)\) bounded by a sub-exponential function in \(\lambda \), we have that \(\mathsf {Eval}(pp, x, T_1 + T_2) = \mathsf {Eval}(pp, y, T_2)\), where \((y, \cdot ) = \mathsf {Eval}(pp, x, T_1)\).
Note that this also implies that the domain and the range of the function are identical, i.e., \(\mathcal {X} = \mathcal {Y}\). We stress that this property is satisfied by all known candidate VDF constructions [5, 18, 22]. To characterize the second requirement, it is going to be useful to refine the syntax of our primitive. We assume that the evaluation algorithm \(\mathsf {Eval}(pp, x, T)\) is split in the following subroutines:
-
\(\mathsf {Eval}(pp, x, T):\) On input the public parameters \(pp\), an input \(x \in \mathcal {X}\), and a time parameter T, execute the subroutine \((y, \alpha ) \leftarrow \mathsf {Comp}(pp,x, T)\). Then compute \(\pi \leftarrow \mathsf {Prove}(\alpha )\) and return \((y, \pi )\).
This captures the compute-and-prove approach, where the prover evaluates some inherently sequential function and then computes a short proof of correctness, potentially using some information from the intermediate steps of the computation (\(\alpha \)). Note that this refinement is without loss of generality, since one can recover the original syntax by encoding the proof \(\pi \) in the advice \(\alpha \) and setting the prove algorithm to be the identity function. We are now in a position to state the efficiency requirements of the input VDF.
Definition 6 (Weak Efficiency)
A VDF \(\mathfrak {V}= (\mathsf {Setup}, \mathsf {Gen}, \mathsf {Eval}, \mathsf {Vf})\) is weakly efficient if there exists a function \(\varPsi : \mathbb {N}\rightarrow \mathbb {N}\) and a non-negative constant c such that for all \(T \in \mathbb {N}\) it holds that \(0 \le \varPsi (T) \le cT\) and \(\mathsf {Prove}\) runs in parallel time \(\varPsi (T)\), where T is the parallel running time of \(\mathsf {Eval}(\cdot , \cdot , T)\).
Note that the total running time of the evaluation algorithm is bounded by \((c+1) T\), for some constant c. This condition is again met by all known VDF instances [5, 18, 22], since they are all based on the compute-and-prove paradigm: Given a long sequential computation, the corresponding proof \(\pi \) can be computed in parallel time at most linear in T by using essentially any verifiable computation scheme.
3.2 Scheme Description
Let \(\mathfrak {V}= (\mathsf {Setup}, \mathsf {Gen}, \mathsf {Eval}, \mathsf {Vf})\) be a weakly efficient and self-composable VDF and let \(\varPsi \) be the corresponding function such that, on input T, the running time of the subroutine \(\mathsf {Prove}\) is bounded by \(\varPsi (T)\). Our construction \(\overline{\mathfrak {V}} = (\overline{\mathsf {Setup}}, \overline{\mathsf {Gen}}, \overline{\mathsf {Eval}}, \overline{\mathsf {Vf}})\) is shown below.
The setup and the instance generation algorithms are unchanged. The new evaluation algorithm is defined recursively: On input some time T, the algorithm defines S to be the smallest non-negative integer such that \(S + \varPsi (S) \ge T\). Recall that the correctness of the evaluation of the \(\mathsf {Comp}\) algorithm on time parameter S can be proven in time \(\varPsi (S)\), by the weak efficiency condition of the VDF. Thus, S approximates from above the maximum amount of computation that can be performed and proven within time T. The algorithm then branches, depending on the value of S:
-
(1)
\(S \le 1:\) In this case the algorithm simply computes \(\mathsf {Comp}(pp, x, T)\) and outputs the result of the computation, without computing a proof. Observe that since \(S \le 1\) it holds that \(T \le (1 + c)\), where c is the constant such that \(\varPsi (S) \le c S\). Thus the algorithm runs for at most \(c+1\) steps. This corresponds to the last step of the recursion.
-
(2)
\(S > 1:\) In this case the algorithm runs the underlying VDF on time parameter S, computing \((y, \alpha ) \leftarrow \mathsf {Comp}(pp, x, S)\). At this point the algorithm branches into two parallel threads:
-
(a)
The first thread computes the proof \(\pi \leftarrow \mathsf {Prove}(\alpha )\).
-
(b)
The second thread calls the evaluation algorithm recursively on input \((pp, y, T - S)\), which returns an output \(\tilde{y}\) and a list L.
The algorithm returns the function output \(\tilde{y}\) and the list L augmented (from the left) with the intermediate pair \((y, \pi )\), i.e., \((y, \pi ) \cup L\).
The output of the computation consists of n pairs \((y_1,\pi _1), \dots , (y_n, \pi _n)\), where n is the depth of the recursion, and a function output \(\tilde{y}\). Each output-proof pair can be verified independently and the correctness of the function output \(\tilde{y}\) can be checked by recomputing at most c steps of the VDF on input \(y_n\).
Note that the only parameter that affects the efficiency of the prover and the verifier and the size of the proof is the depth of the recursion n. Intuitively, each step of the recursion slices off a chunk of the computation without affecting the total runtime of the algorithm, until only a constant number of steps is left. If S and \(\varPsi (S)\) are not too far apart, then n can be shown to be bounded by a poly-logarithmic factor in T. We give a pictorial representation of the steps of the computation in Fig. 1.
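To make the choice of S concrete, the following Python sketch (with the illustrative name `split_schedule`, which is not part of the scheme's syntax) computes the sequence of chunk sizes selected by the recursion for a given \(\varPsi \):

```python
def split_schedule(T, psi):
    """Chunk sizes chosen by the compiled evaluator: each S_i is the
    smallest non-negative integer with S_i + psi(S_i) >= remaining budget;
    once S <= 1, the remaining budget (at most c + 1 steps) is computed
    directly, with no proof."""
    chunks = []
    while True:
        # Smallest S such that S + psi(S) >= T (linear scan for clarity).
        S = 0
        while S + psi(S) < T:
            S += 1
        if S <= 1:
            chunks.append(T)  # final block of the recursion
            return chunks
        chunks.append(S)
        T -= S
```

With \(\varPsi \) the identity function (c = 1) and T = 16, this yields the chunks [8, 4, 2, 2], which sum to T; the last chunk is the final block computed without a proof.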
3.3 Analysis
The completeness of our scheme follows from the self-composability of the underlying VDF. In the following we analyze the efficiency of our construction and we show that the properties of the underlying VDF are preserved under self-composition.
Efficiency. Recall that S is always a non-negative integer and therefore each step of the recursion is executed on input some integer \(\le T\). Thus we can bound the size of each proof \(\pi _i\) by the size of the proof of the underlying VDF on time parameter T. It follows that the proof size of our scheme is at most a factor n larger, where n is the depth of the recursion. In the following we bound the depth of the recursion, thus also establishing a bound on the proof size. We begin by stating and proving the following instrumental lemma.
Lemma 1
Let \(\varPsi : \mathbb {N}\rightarrow \mathbb {N}\) be a function such that there exists a constant c such that for all \(S \in \mathbb {N}\) it holds that \(\varPsi (S)\le c S\). Fix an \(S_0 \in \mathbb {N}\) and define \(S_i\) to be the smallest non-negative integer such that \(S_i + \varPsi (S_i) \ge S_0 - \sum ^{i-1}_{j=1} S_j\). Then for all \(S_0 \in \mathbb {N}\) and for all \(i \ge 0\) it holds that \(S_0 - \sum ^{i}_{j=1} S_j \le S_0\left( \frac{c}{c+1}\right) ^i\).
Proof
We prove the claimed bound by induction over i. For the base case \(i=0\) we have \(S_0 \le S_0\), which is trivial. For the induction step, recall that \(S_i\) is defined to be the smallest integer such that \(S_i + \varPsi (S_i) \ge S_0 - \sum ^{i-1}_{j=1} S_j\). Since \(\varPsi (S_i) \le c S_i\), for some non-negative c, we have that
\[ (c+1)\, S_i \ge S_i + \varPsi (S_i) \ge S_0 - \sum ^{i-1}_{j=1} S_j, \quad \text {and thus} \quad S_0 - \sum ^{i}_{j=1} S_j \le \left( S_0 - \sum ^{i-1}_{j=1} S_j\right) \cdot \frac{c}{c+1}. \]
By the induction hypothesis, it follows that
\[ S_0 - \sum ^{i}_{j=1} S_j \le S_0 \left( \frac{c}{c+1}\right) ^{i-1} \cdot \frac{c}{c+1} = S_0 \left( \frac{c}{c+1}\right) ^{i}, \]
which concludes the induction step.
The bound on the depth of the recursion is obtained by observing that the i-th copy of the evaluation algorithm is called on time parameter exactly \(T - \sum ^{i}_{j=1} S_j\), where \(S_j\) is defined as above. Note that if \(T - \sum ^{i}_{j=1} S_j \le 1\), then the recursion always stops, since the condition \(1 + \varPsi (1) \ge 1\) is satisfied for all non-negative \(\varPsi \). If we set \(S_0 = T\) and apply Lemma 1 we obtain the following relation
\[ T - \sum ^{i}_{j=1} S_j \le T \left( \frac{c}{c+1}\right) ^{i}. \]
Solving for i, we have that \(\log _{\frac{c+1}{c}}(T)\) iterations always suffice for the algorithm to terminate. This also implies a bound on the number of processors needed to evaluate the function. We now establish a bound on the parallel runtime of the evaluation algorithm.
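The depth bound can also be sanity-checked numerically. The sketch below (illustrative only; `depth` mirrors the recursion under the worst case \(\varPsi (S) = cS\)) counts the proof-producing recursion levels and confirms they never exceed \(\log _{\frac{c+1}{c}}(T)\):

```python
import math

def depth(T, c):
    # Number of proof-producing recursion levels when psi(S) = c*S:
    # each level removes the smallest S with S + c*S >= remaining budget.
    i = 0
    while True:
        S = 0
        while S + c * S < T:
            S += 1
        if S <= 1:
            return i
        T -= S
        i += 1

# The recursion depth never exceeds log base (c+1)/c of T.
for T in (10, 100, 1000, 10**5):
    for c in (1, 2, 5):
        assert depth(T, c) <= math.log(T) / math.log((c + 1) / c)
```

For c = 1 the chunks are halves, so the depth is essentially \(\log _2(T)\); e.g., `depth(1024, 1)` returns 9, just under \(\log _2(1024) = 10\).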
Lemma 2
Let \(\mathfrak {V}\) be a weakly efficient VDF. Then, for all \(pp\) in the support of \(\overline{\mathsf {Setup}}(1^\lambda )\), for all \(x \in \mathcal {X}\), and for all \(T \in \mathbb {N}\), the algorithm \(\overline{\mathsf {Eval}}(pp, x, T)\) terminates in at most \(T+c\) steps.
Proof
We first consider the main thread of the execution. Set \(S_0 = T\) and define \(S_i\) to be the smallest integer such that \(S_i + \varPsi (S_i) \ge S_0 - \sum ^{i-1}_{j=1} S_j\). Observe that the main thread consists of n applications of the algorithm \(\mathsf {Comp}\) on time parameter \(S_i\), where n is the depth of the recursion, and one application on input \(S_0 - \sum _{j=1}^n S_j\). Thus, by the weak efficiency of \(\mathfrak {V}\), the total running time of the main thread is exactly
\[ \sum _{i=1}^{n} S_i + \left( S_0 - \sum _{j=1}^{n} S_j\right) = S_0 = T. \]
To bound the runtime of the parallel threads, we bound the difference with respect to the number of steps needed for the main thread to terminate, starting from the moment the execution forks. We show a bound assuming \(\varPsi (S) = c S\), which implies a bound for the general case, since the proving algorithm can only get faster. Consider the i-th recursive instance of the algorithm: After computing \((\alpha _i,y_i) \leftarrow \mathsf {Comp}(pp, x_i, S_i)\), the main thread proceeds by calling \(\overline{\mathsf {Eval}}\) on input \(T-S_i\) and spawns a new thread to compute a proof on input \(\alpha _i\). As discussed above, we know that the main thread will terminate within \(T-S_i\) steps, thus all we need to do is bound the number of extra steps needed for the computation of the proof. Note that we have
$$ (S_i - 1) + \varPsi (S_i - 1) < T, $$
since we assumed that \(S_i\) was the smallest integer that satisfies \(S_i + \varPsi (S_i) \ge T\). Substituting \(\varPsi (S) = cS\), we obtain
$$ \varPsi (S_i) \le (T - S_i) + c, $$
where \(\varPsi (S_i)\) is exactly the number of steps needed to compute \(\pi _i\). This holds for all recursive instances of the algorithms and concludes our proof.
We remark that the extra c steps needed for our algorithm to terminate are due to the rounding that happens when S does not divide T. For the common case where T is a power of 2 and \(\varPsi \) is the identity function, we have \(T = 2S\) in each recursive call, and the process terminates in exactly T steps. The verifier complexity is bounded by that of n parallel calls to the verifier of the underlying VDF on input some time parameter \(\le T\), where n is poly-logarithmic in T (see discussion above), plus an extra parallel instance that recomputes at most \(c + 1\) steps of the \(\mathsf {Comp}\) algorithm.
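To make the recursive schedule concrete, here is a minimal Python sketch (the helper names are ours, and we assume the worst case \(\varPsi (S) = cS\)) that simulates the segment lengths \(S_1, \dots , S_n\) and checks the bounds established above: the recursion depth is at most \(\log _{\frac{c+1}{c}}(T)\) up to rounding, the main thread takes exactly T steps, and every proof thread finishes at most c steps after the main thread.

```python
import math

def schedule(T, c):
    """Segment lengths S_1, ..., S_n for time parameter T, assuming Psi(S) = c*S."""
    segments, remaining = [], T
    while remaining > 1:
        # S_i is the smallest integer with S_i + Psi(S_i) >= remaining time.
        s = math.ceil(remaining / (c + 1))
        segments.append(s)
        remaining -= s
    return segments, remaining

def check_bounds(T, c):
    segments, tail = schedule(T, c)
    # Depth bound: at most log_{(c+1)/c}(T) recursive calls (plus rounding slack).
    assert len(segments) <= math.log(T) / math.log((c + 1) / c) + 1
    # Main thread: n Comp calls of S_i steps each, plus a final call on the tail.
    assert sum(segments) + tail == T
    # Proof thread i forks at time S_1 + ... + S_i and runs for Psi(S_i) = c*S_i
    # further steps; it overruns the main thread's T steps by at most c.
    elapsed, finish = 0, T
    for s in segments:
        elapsed += s
        finish = max(finish, elapsed + c * s)
    assert finish <= T + c
    return len(segments), finish
```

For \(T = 16\) and \(c = 1\) the schedule is \(8, 4, 2, 1\) and everything finishes in exactly 16 steps, matching the power-of-two remark above.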
Sequentiality. In the following we show that our transformation preserves the sequentiality of the underlying VDF.
Theorem 3 (Sequentiality)
Let \(\mathfrak {V}\) be a self-composable sequential VDF, then \(\overline{\mathfrak {V}}\) is sequential.
Proof
Let \(\mathcal {A}\) be an adversary that, on input a random instance x, finds the corresponding image y in time less than T. By definition of our evaluation algorithm, y is computed as
$$ (y, \cdot ) = \mathsf {Comp}\left( pp, y_n, T - \sum ^{n}_{j=1} S_j\right) , $$
where \((y_i, \cdot ) \leftarrow \mathsf {Comp}(pp, y_{i-1}, S_i)\), for all \(1\le i \le n\), setting \(y_0 = x\). Invoking the self-composability of the underlying VDF, we have that
$$ (y, \cdot ) = \mathsf {Comp}(pp, x, T), $$
which implies that y is the correct image of the underlying VDF for the same time parameter T. Thus the existence of \(\mathcal {A}\) contradicts the sequentiality of \(\mathfrak {V}\).
Soundness. The following theorem establishes the soundness of our scheme. Note that the reduction is tight, which means that our construction is exactly as secure as the input VDF.
Theorem 4
Let \(\mathfrak {V}\) be a self-composable and sound VDF, then \(\overline{\mathfrak {V}}\) is sound.
Proof
Let \(\mathcal {A}\) be an adversary that, on input the honestly generated public parameters \(pp\), outputs some tuple \((T, x, y, \pi )\) such that \(\pi \) is a valid proof, but \((y, \cdot ) \ne \overline{\mathsf {Eval}}(pp, x, T)\). Let \(\pi = ((y_1, \pi _1), \dots , (y_n, \pi _n))\) and set \(y_0 = x\). Then we claim that one of the following conditions must be satisfied:
- (1) There exists some \(i \in \{1, \dots , n\}\) such that \((y_i, \cdot ) \ne \mathsf {Comp}(pp, y_{i-1}, S_i)\), where \(S_i\) is defined as in the verification algorithm, or
- (2) \((y, \cdot ) \ne \mathsf {Comp}(pp, y_{n}, T - \sum ^n_{j=1}S_j)\).
If none of the above conditions is met, then we have that
$$ (y, \cdot ) = \overline{\mathsf {Eval}}(pp, x, T) $$
by the self-composability of \(\mathfrak {V}\), which contradicts the initial hypothesis. It follows that a successful attack implies at least one of the above conditions. However, (1) implies that we can extract a tuple \((y_{i-1}, y_i, \pi _i, S_i)\), such that \(\pi _i\) is a valid proof but \((y_i, \cdot ) \ne \mathsf {Eval}(pp, y_{i-1}, S_i)\), which contradicts the soundness of \(\mathfrak {V}\). On the other hand, if (2) happens then the verifier always rejects. It follows that the existence of \(\mathcal {A}\) implies that the underlying VDF is not sound.
4 Black-Box Impossibility
In this section, we show that it is not possible to have a (tight) VDF whose only source of computational hardness is a random oracle. This implies that VDFs cannot be constructed by making just black-box use of several cryptographic primitives such as One-Way Functions and Collision-Resistant Hash Functions.
In this setting, we give all the algorithms in a VDF construction (that is, \(\mathsf {Setup}\), \(\mathsf {Gen}\), \(\mathsf {Eval}\), and \(\mathsf {Vf}\)), as well as any adversaries, access to a random oracle that we denote by H. Our measure of the complexity of an algorithm will be the number of queries that it makes to H, and sequentiality is also measured by the number of rounds of queries made to H adaptively.
Theorem 5
There is no black-box construction of a VDF \((\mathsf {Setup}, \mathsf {Gen}, \mathsf {Eval}, \mathsf {Vf})\) from a random oracle where \(\mathsf {Eval}(\cdot ,\cdot ,T)\) makes at most \(T + O(T^{\delta })\) rounds of queries for some \(\delta < 1\), and is also T-sequential.
Noting that our transformation from Sect. 3 (of any self-composable weakly efficient VDF into a VDF with constant proof overhead) is black-box, we can extend this impossibility as follows.
Corollary 1
There is no black-box construction of a self-composable VDF \((\mathsf {Setup}, \mathsf {Gen}, \mathsf {Eval}, \mathsf {Vf})\) from a random oracle where \(\mathsf {Eval}(\cdot ,\cdot ,T)\) makes at most cT rounds of queries (for some constant c) and is also T-sequential.
Our approach is as follows. Imagine replacing the answers to all but \((T-1)\) rounds of queries that \(\mathsf {Eval}\) makes to H with something completely random (independent of H). Since we only replaced a small fraction of the queries made, the output of \(\mathsf {Eval}\) in this case should look the same to \(\mathsf {Vf}\), which makes very few queries to H. On the other hand, if this replacement did not change the value of y output by \(\mathsf {Eval}\), then it means that we could have computed \(\mathsf {Eval}\) by answering these queries ourselves, and thus making only \((T-1)\) rounds of queries to H, contradicting sequentiality. Thus, if \(\mathsf {Eval}\) is indeed T-sequential, then this replacement allows us to break soundness.
Proof
(Theorem 5). Fix any alleged VDF \((\mathsf {Setup}, \mathsf {Gen}, \mathsf {Eval}, \mathsf {Vf})\) with access to a random oracle H. Suppose, for some \(\delta < 1\), there is a function \(p(\lambda ,T)\) that is \(O( poly (\lambda )\cdot T^{\delta })\) such that, for any \(\lambda \), T, and \(pp\leftarrow \mathsf {Setup}^H(1^\lambda )\), the evaluation algorithm \(\mathsf {Eval}(pp,\cdot ,T)\) makes at most \(T + p(\lambda ,T)\) rounds of queries to H. Without loss of generality (since the algorithms are not bounded in the memory they use), we assume that an algorithm does not make the same query twice to H.
We construct an adversarial evaluator \(\overline{\mathsf {Eval}}\) that works as follows on input (pp, x, T):
1. Pick a uniformly random set \(I \subseteq [T+p(\lambda ,T)]\) of size \(p(\lambda ,T)+1\).
2. Run \(\mathsf {Eval}(pp,x,T)\) as is whenever it is not making oracle queries.
3. When \(\mathsf {Eval}\) makes the \(i^{\text {th}}\) round of oracle queries \((q_1, \dots , q_m)\):
   - if \(i \notin I\), respond with \((H(q_1), \dots , H(q_m))\);
   - if \(i \in I\), respond with \((a_1, \dots , a_m)\), where the \(a_j\)'s are uniformly random strings of the appropriate length.
4. Output the \((y,\pi )\) that \(\mathsf {Eval}\) produces at the end.
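The adversarial evaluator can be sketched in Python against a toy hash-chain evaluator (everything here is a hypothetical stand-in: a hash function models the random oracle, and \(\mathsf {Eval}\) is modeled as a function that receives its oracle as a parameter):

```python
import hashlib, os, random

def H(q: bytes) -> bytes:
    # Stand-in for the random oracle.
    return hashlib.sha256(q).digest()

def toy_eval(oracle, x: bytes, T: int) -> bytes:
    # A toy sequential evaluator: T adaptive rounds of one query each.
    y = x
    for i in range(T):
        (y,) = oracle(i, [y])
    return y

def eval_bar(eval_alg, x: bytes, T: int, p: int) -> bytes:
    # Adversarial evaluator: answer a random set I of p + 1 rounds with
    # fresh randomness instead of H; forward all other rounds to H.
    I = set(random.sample(range(T + p), p + 1))

    def oracle(round_idx, queries):
        if round_idx in I:
            return [os.urandom(32) for _ in queries]
        return [H(q) for q in queries]

    return eval_alg(oracle, x, T)
```

Since eval_bar answers \(p+1\) rounds itself, it forwards at most \(T-1\) rounds to H; sequentiality then forces its output to diverge from the honest one, which is the content of the sensitivity claim.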
The following claim states that if \(\mathsf {Vf}\) makes only a small number of queries (which it has to for efficiency), it cannot distinguish between the outputs of \(\mathsf {Eval}\) and \(\overline{\mathsf {Eval}}\).
Claim (Indistinguishability)
Suppose \(pp\) and x are generated from \(\mathsf {Setup}^H(1^\lambda )\) and \(\mathsf {Gen}^H(pp)\), respectively. Let \((y,\pi ) \leftarrow \mathsf {Eval}^H(pp,x,T)\), and \((\overline{y},\overline{\pi }) \leftarrow \overline{\mathsf {Eval}}^H(pp,x,T)\). If the algorithm \(\mathsf {Vf}^H(pp,\cdot ,\cdot ,\cdot ,T)\) makes at most \(T/8p(\lambda ,T)\) queries to H, then, for all \(\lambda \in \mathbb {N}\) and all \(T \in \mathbb {N}\), it holds that:
$$ \left| \Pr \left[ \mathsf {Vf}^H(pp,x,y,\pi ,T)=1\right] - \Pr \left[ \mathsf {Vf}^H(pp,x,\overline{y},\overline{\pi },T)=1\right] \right| \le \frac{1}{4}. $$
We defer the proof of the above to later in this section. The next claim states that, if the given VDF is sequential, then the output y as computed by \(\overline{\mathsf {Eval}}\) has to differ with high probability from that computed by \(\mathsf {Eval}\). This follows immediately from the observation that \(\overline{\mathsf {Eval}}\) makes at most \((T-1)\) rounds of queries to H, and if it were to output the same y as \(\mathsf {Eval}\) with non-negligible probability, this would immediately contradict T-sequentiality.
Claim (Sensitivity)
Suppose the given VDF is T-sequential, and that \(pp\) and x are generated from \(\mathsf {Setup}^H(1^\lambda )\) and \(\mathsf {Gen}^H(pp)\), respectively. Let \((y,\pi ) \leftarrow \mathsf {Eval}^H(pp,x,T)\), and \((\overline{y},\overline{\pi }) \leftarrow \overline{\mathsf {Eval}}^H(pp,x,T)\). Then, there exists a negligible function \( negl \) such that for all \(\lambda \in \mathbb {N}\) and all \(T \in \mathbb {N}\) it holds that:
$$ \Pr \left[ \overline{y} = y \right] \le negl (\lambda ). $$
We now construct an adversary \(\mathcal {A}\) that breaks the soundness of the supposed VDF. \(\mathcal {A}\), on input the parameters \(pp\), works as follows:
1. Generate \(x\leftarrow \mathsf {Gen}^H(pp)\), and set \(T = 2^{\lambda }\).
2. Compute \((\overline{y},\overline{\pi }) \leftarrow \overline{\mathsf {Eval}}^H(pp,x,T)\).
3. Output \((T,x,\overline{y},\overline{\pi })\).
Our argument now is based on the following three points:
- By the efficiency of the VDF, \(\mathsf {Vf}\) makes far fewer than \(T/8p(\lambda ,T)\) (\(= 2^{\varOmega (\lambda )}\)) queries.
- So, by the correctness of the VDF and the indistinguishability claim, the probability that \(\mathsf {Vf}(pp, x, \overline{y}, \overline{\pi }, T)\) does not output 1 is at most 1/4.
- By the sensitivity claim, the probability that \(\overline{y}\) agrees with the output of \(\mathsf {Eval}\) is at most \( negl (\lambda )\).
Together, by the union bound, we have:
$$ \Pr \left[ \mathsf {Vf}^H(pp, x, \overline{y}, \overline{\pi }, T) = 1 \,\wedge \, \overline{y} \ne y \right] \ge \frac{3}{4} - negl (\lambda ). $$
Noting that \(\mathcal {A}\) runs in nearly the same time as \(\mathsf {Eval}\), this contradicts the claim that we started with a VDF that is both sequential and sound, proving the theorem.
Proof
(Indistinguishability Claim). Recall that \(\overline{\mathsf {Eval}}\) generates \((\overline{y},\overline{\pi })\) just by altering the oracle that \(\mathsf {Eval}\) has access to. Denoting this altered oracle by \(\overline{H}\), note that \(\overline{H}\) is also a random oracle, and that if \(\mathsf {Vf}\) also had access to \(\overline{H}\) instead of H, then the behavior of the whole system would not change. That is,
$$ \Pr \left[ \mathsf {Vf}^{\overline{H}}(pp,x,\overline{y},\overline{\pi },T)=1\right] = \Pr \left[ \mathsf {Vf}^{H}(pp,x,y,\pi ,T)=1\right] . \qquad (1) $$
Suppose, when given input \((pp,x,y,\pi ,T)\), the algorithm \(\mathsf {Vf}^{H}\) makes N queries \(q_1, \dots , q_N\) to the oracle. Its behavior when given access to \(\overline{H}\) instead of H is different only if the two oracles disagree on at least one of these queries. For any query \(q_i\), the algorithm \(\overline{\mathsf {Eval}}\) alters the oracle at this query if it happens to be made by \(\mathsf {Eval}\) in a round contained in I. This happens with probability less than \((p(\lambda ,T)+1)/(T+p(\lambda ,T))\). This implies that, for any \(i \in [N]\),
$$ \Pr \left[ H(q_i) \ne \overline{H}(q_i)\right] \le \frac{p(\lambda ,T)+1}{T+p(\lambda ,T)} \le \frac{2p(\lambda ,T)}{T}. $$
Thus, by the union bound, the probability that \(\mathsf {Vf}^{\overline{H}}\) behaves differently from \(\mathsf {Vf}^{H}\) on any input is at most \(2Np(\lambda ,T)/T\). If \(N \le T/8p(\lambda ,T)\), together with (1), this implies that:
$$ \left| \Pr \left[ \mathsf {Vf}^{H}(pp,x,\overline{y},\overline{\pi },T)=1\right] - \Pr \left[ \mathsf {Vf}^{H}(pp,x,y,\pi ,T)=1\right] \right| \le \frac{1}{4}. $$
Notes
1. Known constructions of incremental verifiable computation require succinct arguments with knowledge extraction.
References
Armknecht, F., Barman, L., Bohli, J.-M., Karame, G.O.: Mirror: enabling proofs of data replication and retrievability in the cloud. In: 25th USENIX Security Symposium (USENIX Security 16), pp. 1051–1068 (2016)
Ben-Sasson, E., Chiesa, A., Tromer, E., Virza, M.: Scalable zero knowledge via cycles of elliptic curves. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014. LNCS, vol. 8617, pp. 276–294. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44381-1_16
Bitansky, N., Canetti, R., Chiesa, A., Tromer, E.: Recursive composition and bootstrapping for SNARKS and proof-carrying data. In: Boneh, D., Roughgarden, T., Feigenbaum, J. (eds.) 45th Annual ACM Symposium on Theory of Computing, Palo Alto, CA, USA, 1–4 June, pp. 111–120. ACM Press (2013)
Bitansky, N., Goldwasser, S., Jain, A., Paneth, O., Vaikuntanathan, V., Waters, B.: Time-lock puzzles from randomized encodings. In: Sudan, M. (ed.) ITCS 2016: 7th Conference on Innovations in Theoretical Computer Science, Cambridge, MA, USA, 14–16 January, pp. 345–356. Association for Computing Machinery (2016)
Boneh, D., Bonneau, J., Bünz, B., Fisch, B.: Verifiable delay functions. In: Shacham, H., Boldyreva, A. (eds.) CRYPTO 2018. LNCS, vol. 10991, pp. 757–788. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-96884-1_25
Boneh, D., Bünz, B., Fisch, B.: A survey of two verifiable delay functions. Cryptology ePrint Archive, Report 2018/712 (2018). https://eprint.iacr.org/2018/712
Buchmann, J., Williams, H.C.: A key-exchange system based on imaginary quadratic fields. J. Cryptol. 1(2), 107–118 (1988). https://doi.org/10.1007/BF02351719
Chia network second VDF competition. https://www.chia.net/2019/04/04/chia-network-announces-second-vdf-competition-with-in-total-prize-money.en.html. Accessed 22 Apr 2019
Cohen, B.: Proofs of space and time. Blockchain protocol analysis and security engineering (2017). https://cyber.stanford.edu/sites/default/files/bramcohen.pdf
Cohen, B., Pietrzak, K.: Simple proofs of sequential work. In: Nielsen, J.B., Rijmen, V. (eds.) EUROCRYPT 2018. LNCS, vol. 10821, pp. 451–467. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-78375-8_15
Döttling, N., Lai, R.W.F., Malavolta, G.: Incremental proofs of sequential work. In: Ishai, Y., Rijmen, V. (eds.) EUROCRYPT 2019. LNCS, vol. 11477, pp. 292–323. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-17656-3_11
De Feo, L., Masson, S., Petit, C., Sanso, A.: Verifiable delay functions from supersingular isogenies and pairings. Cryptology ePrint Archive, Report 2019/166 (2019). https://eprint.iacr.org/2019/166
Fiat, A., Shamir, A.: How to prove yourself: practical solutions to identification and signature problems. In: Odlyzko, A.M. (ed.) CRYPTO 1986. LNCS, vol. 263, pp. 186–194. Springer, Heidelberg (1987). https://doi.org/10.1007/3-540-47721-7_12
Kilian, J.: A note on efficient zero-knowledge proofs and arguments (extended abstract). In: 24th Annual ACM Symposium on Theory of Computing, Victoria, BC, Canada, 4–6 May, pp. 723–732. ACM Press (1992)
Mahmoody, M., Moran, T., Vadhan, S.P.: Publicly verifiable proofs of sequential work. In: Kleinberg, R.D. (ed.) ITCS 2013: 4th Innovations in Theoretical Computer Science, Berkeley, CA, USA, 9–12 January, pp. 373–388. Association for Computing Machinery (2013)
Mahmoody, M., Smith, C., Wu, D.J.: A note on the (im)possibility of verifiable delay functions in the random oracle model. Cryptology ePrint Archive, Report 2019/663 (2019). https://eprint.iacr.org/2019/663
Micali, S.: CS proofs (extended abstracts). In: 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, USA, 20–22 November, pp. 436–453. IEEE Computer Society Press (1994)
Pietrzak, K.: Simple verifiable delay functions. In: Blum, A. (ed.) ITCS 2019: 10th Innovations in Theoretical Computer Science Conference, San Diego, CA, USA, 10–12 January, vol. 124, pp. 60:1–60:15. LIPIcs (2019)
Rivest, R.L., Shamir, A., Adleman, L.M.: A method for obtaining digital signatures and public-key cryptosystems. Commun. Assoc. Comput. Mach. 21(2), 120–126 (1978)
Rivest, R.L., Shamir, A., Wagner, D.A.: Time-lock puzzles and timed-release crypto. Technical Report MIT/LCS/TR-684 (1996)
Valiant, P.: Incrementally verifiable computation or proofs of knowledge imply time/space efficiency. In: Canetti, R. (ed.) TCC 2008. LNCS, vol. 4948, pp. 1–18. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78524-8_1
Wesolowski, B.: Efficient verifiable delay functions. Cryptology ePrint Archive, Report 2018/623 (2018). https://eprint.iacr.org/2018/623
Acknowledgments
S. Garg is supported in part from DARPA SIEVE Award, AFOSR Award FA9550-15-1-0274, AFOSR Award FA9550-19-1-0200, AFOSR YIP Award, NSF CNS Award 1936826, DARPA and SPAWAR under contract N66001-15-C-4065, a Hellman Award, a Sloan Research Fellowship and research grants by the Okawa Foundation, Visa Inc., and Center for Long-Term Cybersecurity (CLTC, UC Berkeley). The views expressed are those of the author and do not reflect the official policy or position of the funding agencies.
A Instantiations
In the following we survey the existing candidate VDF schemes and discuss the implications of our results.
A.1 Compute-and-Prove VDF
The original work of Boneh et al. [5] discusses an instantiation for VDF based on any (conjectured) inherently sequential function and a succinct non-interactive argument system (SNARG) [14, 17]. The prover simply evaluates the function on a randomly chosen input and computes a short proof that the computation is done correctly. However, such an approach is dismissed since the time to compute a SNARG is typically much longer than that needed for the corresponding relation. Therefore, to achieve meaningful sequentiality guarantees, the prover needs to resort to massive parallelization which requires a number of processors linear in the time parameter T.
For this reason they turned their attention to incremental verifiable computation schemes [21]. Such a primitive derives from the recursive composition of SNARGs and allows one to compute the proof incrementally as the computation proceeds. However, this feature comes at a cost: the recursion introduces an exponential factor in the running time of the extractor, and therefore the schemes can be shown sound only for a constant number of iterations. Other constructions [3] circumvent this issue by constructing computation trees of constant depth; however, the overhead of the recursive application of a SNARG is typically the bottleneck of the computation.
Our approach can be seen as a lightweight composition theorem for VDFs and rehabilitates the compute-and-prove paradigm using standard SNARGs in conjunction with iterated sequential functions: most of the existing SNARG schemes can be computed in time quasi-linear in T [2] and can be parallelized to meet our weak efficiency requirements using a poly-logarithmic number of processors (in the time parameter T). Our compiler shows that the combination of SNARGs and iterated sequential functions already gives a tightly sequential VDF, for any value of T.
A.2 Wesolowski's VDF
A recent work by Wesolowski [22] builds an efficient VDF exploiting the conjectured sequentiality of repeated squaring in groups of unknown order, such as RSA groups [19] or class groups of imaginary quadratic order [7]. Loosely speaking, given a random instance \(x\in \mathbb {G}\) and a time parameter T, the sequential function is defined as \(f(x) = x^{2^T}\). Wesolowski proposes a succinct argument for the corresponding language
$$ \mathcal {L} = \left\{ (\mathbb {G}, x, y, T) : y = x^{2^T}\right\} , $$
where the verification is much faster than recomputing the function from scratch. The argument goes as follows:
- (1) The verifier samples a random prime p from the set of the first \(2^\lambda \) primes.
- (2) The prover computes \(q, r \in \mathbb {Z}\) such that \(2^T = pq + r\) and outputs \(\pi = x^q\) as the proof.
- (3) The proof \(\pi \) can be verified by checking that \(\pi ^p x^r = y\), where r is computed as \(2^T \bmod p\).
The argument can be made non-interactive using the Fiat-Shamir transformation [13]. Note that the value of q cannot be computed by the prover explicitly since the order of the group is unknown, however it can be computed in the exponent of x in time close to T.
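For concreteness, the following Python sketch runs the interactive version of this argument over a toy RSA modulus (all parameters are insecure placeholders; moreover, the prover below computes q explicitly, which is feasible only because T is tiny — the real prover works in the exponent as just described):

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    # Miller-Rabin primality test, adequate for a sketch.
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def sample_prime(bits: int = 32) -> int:
    # Verifier's challenge: a random prime (stand-in for sampling one
    # of the first 2^lambda primes).
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate

def wesolowski_demo(N: int, x: int, T: int) -> bool:
    # Sequential evaluation: y = x^(2^T) mod N by T repeated squarings.
    y = x % N
    for _ in range(T):
        y = y * y % N
    p = sample_prime()                 # (1) verifier's challenge
    q, r = divmod(2**T, p)             # (2) prover: 2^T = p*q + r
    pi = pow(x, q, N)                  #     proof pi = x^q
    r_check = pow(2, T, p)             # (3) verifier recomputes r = 2^T mod p
    return (pow(pi, p, N) * pow(x, r_check, N)) % N == y
```

Honest runs always verify, since \(\pi ^p x^r = x^{pq+r} = x^{2^T}\); soundness rests on the hardness of computing roots in a group of unknown order.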
Wesolowski’s proof consists of a single group element and the verifier workload is essentially that of two exponentiations in \(\mathbb {G}\). The main shortcoming of the scheme is that the time to compute a valid proof is proportional to the time to compute the function. However, Wesolowski briefly explains how to reduce this overhead to a constant factor using parallel processors. The modification sketched in his paper is essentially an ad-hoc version of our compiler.
A.3 Pietrzak's VDF
Recently, Pietrzak [18] also showed an efficient succinct argument for the same language \(\mathcal {L}\), taking a slightly different route. In the following we briefly recall the backbone of the argument:
- (1) If \(T = 1\), the verifier simply checks that \(x^2 = y\).
- (2) Else the prover computes \(z = x^{2^{T/2}}\) and sends it to the verifier.
- (3) The verifier samples some \(r \in \{1, \dots , 2^\lambda \}\).
- (4) The prover and the verifier recurse on input \((\mathbb {G}, x^rz, z^ry, T/2)\).
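A minimal Python sketch of this recursion over a toy group \(\mathbb {Z}_N^*\), with the honest prover folded into the verifier loop (insecure placeholder parameters; the actual protocol runs over a group of unknown order and is made non-interactive):

```python
import random

def pietrzak_demo(N: int, x: int, T: int) -> bool:
    """Interactively check the claim y = x^(2^T) mod N; T must be a power of two."""
    # Honest sequential evaluation of the claimed image y.
    y = x % N
    for _ in range(T):
        y = y * y % N
    # Interactive halving protocol.
    while T > 1:
        half = T // 2
        # (2) Prover sends the midpoint z = x^(2^(T/2)).
        z = x
        for _ in range(half):
            z = z * z % N
        # (3) Verifier samples a random challenge r.
        r = random.randrange(1, 2**16)
        # (4) Both parties recurse on (x^r * z, z^r * y, T/2).
        x = pow(x, r, N) * z % N
        y = pow(z, r, N) * y % N
        T = half
    # (1) Base case: check x^2 = y.
    return x * x % N == y
```

Each round halves T, so the protocol runs for \(\log (T)\) rounds before reaching the base case; the folding preserves the invariant \(y = x^{2^T}\) for honest executions.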
The resulting argument is less efficient than Wesolowski's approach in terms of proof size and verifier complexity by a factor of \(\log (T)\). However, Pietrzak's argument can be computed in time approximately \(\sqrt{T}\) using roughly \(\sqrt{T}\) memory, by storing some intermediate values of the function evaluation.
It is clear that such a VDF fulfills the conditions to apply our compiler, which allows us to truncate the additional \(\sqrt{T}\) factor from the proof computation. Due to the increased proof size, it might appear that the resulting scheme is strictly worse than that obtained by combining our compiler with Wesolowski's approach. However, the significantly shorter proving time allows us to give a sharper bound on the number of recursions of our algorithm: in each iteration the new time parameter is computed as \(\frac{1}{2}\sqrt{4T+1} - 1\), and therefore approximately \(\log \log (T)\) iterations suffice to hit the bottom of the recursion. As a consequence, Pietrzak's argument needs less parallelism to achieve optimal prover runtime. We also point out that Pietrzak's argument rests on a weaker assumption, as discussed in [6].
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Döttling, N., Garg, S., Malavolta, G., Vasudevan, P.N. (2020). Tight Verifiable Delay Functions. In: Galdi, C., Kolesnikov, V. (eds) Security and Cryptography for Networks. SCN 2020. Lecture Notes in Computer Science(), vol 12238. Springer, Cham. https://doi.org/10.1007/978-3-030-57990-6_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-57989-0
Online ISBN: 978-3-030-57990-6