
1 Introduction

Obfuscation and Structured Assumptions. Indistinguishability obfuscation (iO) is a powerful cryptographic object: together with one-way functions, it implies almost every cryptographic primitive, from deniable encryption [42] to functional encryption [26] and fully-homomorphic encryption [18]. However, it is not currently known whether iO gives rise to structures in which algebraic assumptions (such as DDH, DCR, or LWE) hold. In this work, we are motivated by the following open problem:

Can structured objects (such as DDH groups) be bootstrapped from unstructured objects (like generic one-way functions and iO)?

We make progress in this direction by developing the first construction of dual-mode non-interactive zero-knowledge (NIZK) proof systems from unstructured assumptions (iO, one-way functions and lossy trapdoor functions). This dual-mode NIZK can be used in the constructions from [1, 2, 21], allowing us to answer this question in the affirmative.

Zero-Knowledge Proof Systems. Zero-knowledge (ZK) proof systems [28, 29] are (implicitly or explicitly) at the heart of countless cryptographic constructions. In a ZK proof system, a prover \(P\) tries to convince a verifier \(V\) of the validity of a statement \(x\). “Validity” usually means that \(x\in L\) for some language \(L\in \mathsf {NP} \). In this case, \(P\) obtains a witness \(w\) to \(x\in L\). For security, we require soundness, which means that no dishonest prover can convince \(V\) of a false statement \(x\notin L\). Additionally, we may want to protect \(P\) (and in particular the used witness \(w\)) in several ways. For instance, the protocol is zero-knowledge if it is possible to efficiently simulate (transcripts of) protocol runs even without \(w\). Alternatively, we can require the protocol to be witness-hiding or witness-indistinguishable [23].

ZK proof systems can be interactive or non-interactive (in the latter case, the prover sends only a single message to the verifier). In this work, we are interested in non-interactive ZK (NIZK) proof systems [10]. Various NIZK proof systems already exist, ranging from generic [22, 24, 42] to highly efficient constructions based on concrete number-theoretic assumptions [24, 32, 44]. Some of these systems only allow proving \(x\in L\) for specific languages \(L\), while others can be used to prove statements from arbitrary languages \(L\in \mathsf {NP} \).

Dual-Mode Proof Systems. Some NIZK systems enjoy statistical security, i.e., information-theoretic soundness or zero-knowledge guarantees. However, interestingly, no NIZK system can be statistically sound and statistically zero-knowledge simultaneously. Hence, a NIZK system can be secure against unbounded malicious provers or against unbounded malicious verifiers, but not against both.

Fortunately, there is a compromise that combines the best of both worlds: Groth-Sahai proofs [32] are statistically sound or statistically zero-knowledge depending on the choice of public parameters \(\mathsf {crs} \). Furthermore, both choices of parameters are computationally indistinguishable. This “dual-mode” property leads to comparatively simple proofs for complex protocols (e.g., for anonymous credentials [4] or payment systems [33]). In the case of [2, 21], a proof without using dual-mode properties in fact does not seem obvious at all.

Until recently, only Groth-Sahai proofs [32] (and their variants, e.g., [9, 20, 35]) were known to possess this dual-mode property. These proof systems all rely on a very specific and structured algebraic setting (pairing-friendly cyclic groups). In contrast, we rely on generic rather than algebraic techniques, resulting in a fundamentally new way of obtaining dual-mode proof systems.

Concurrent Work. Concurrently with and independently of this work, [19, 39] have put forward breakthrough approaches to obtaining dual-mode NIZKs from the LWE assumption. These constructions rely on rich algebraic structures and are non-black-box. In contrast, our techniques are generic, and our perspective is closer to computational complexity, in that we investigate whether the existence of a powerful non-algebraic object (iO) can lead to algebraic ones.

Our Contribution. In this paper, we give the first generic construction of dual-mode NIZK proofs from (the combination of) the following ingredients:

  • subexponentially secure indistinguishability obfuscation (iO, [3, 26]),

  • subexponentially secure one-way functions,

  • a (selectively) subexponentially secure functional encryption scheme,

  • lossy encryption [5, 40], and

  • lossy functions (LFs), a relaxation of lossy trapdoor functions [41] which we introduce in this paper.

We stress that some of our ingredients are implied by (a combination of) others: Functional encryption can be constructed from iO and one-way functions [26]. Conversely, subexponentially secure functional encryption implies subexponentially secure iO and one-way functions (e.g., [8] and the references therein). Furthermore, both LFs and lossy encryption are implied by lossy trapdoor functions [41].

As a side note, we remark that a subexponential variant of any of the DDH, \(k\)-LIN, QR, DCR, or LWE assumptions, together with subexponential iO, thus implies all of our ingredients.

Of course, since we assume iO, our construction is far from practical. Still, it has interesting theoretical applications. For instance, it allows instantiating dual-mode NIZK proofs in the recent works [1, 2, 21] without any additional assumptions, and in particular without pairing-friendly groups. (Incidentally, these works already assume what we need for our construction.)

In particular, combining our results with the scheme from [1] shows that it is possible to obtain a very structured object (namely, a cyclic group in which Diffie-Hellman and similar assumptions hold) solely from an unstructured and generic object (iO) and a mildly structured object (a lossy trapdoor function).

Similarly, implementing [2, 21] with our system (instead of with Groth-Sahai proofs) yields a pairing-friendly group (with non-unique representation) from iO and a DDH group (both subexponentially secure). Therefore, we also give an answer to the following open problem (Fig. 1):

Can bilinear groups be bootstrapped from DDH groups and iO?

Fig. 1. Some implications on previous results. “\(\mathsf {iO} \)”, “LTDF” and “SDDH” denote subexponential versions of indistinguishability obfuscation, lossy trapdoor functions, and the “Strong DDH” assumption (a \(q\)-type variant of the Diffie-Hellman assumption).

Open Problems. We note that the groups from [1, 2, 21] all enjoy non-unique representations of group elements. That is, equality of group elements can be tested, but there does not exist a canonical form. Removing this limitation remains an open problem.

1.1 Our Techniques

Existing Generic Approaches. Before explaining our main ideas, we first mention that generic constructions of NIZKs from iO already exist. Namely, [42] present a NIZK construction that assumes only iO and one-way functions. Their construction is (even perfectly) zero-knowledge. However, in their case, proofs are simply signatures of the corresponding statement \(x\). Thus, their construction is inherently limited to computational soundness, in the sense that it is not clear how to tweak it to obtain statistical soundness.

Secondly, it is possible to construct a notion of trapdoor permutations from iO that is in turn sufficient to construct statistically sound NIZK proofs [17] (cf. [6, 7, 22, 30]). However, it is not clear how to tweak this NIZK construction to obtain statistical zero-knowledge.

The Hidden Bits Model. Similarly to [17], our starting point is also the generic NIZK construction from [22]. This work presents a statistically sound and perfectly zero-knowledge NIZK protocol in an ideal model of computation called the “hidden bits model” (HBM). It will be helpful to first recall the HBM before going further. In a nutshell, the HBM gives the prover \(P\) access to an ideal random bitstring \(\mathsf {hrs} =(\mathsf {hrs} _1,\dots ,\mathsf {hrs} _t)\in \{0,1\} ^t\). Next, \(P\) selects a subset \(\mathcal {I} \subseteq [t]\) and a proof \(\pi \). Then, the verifier \(V\) is activated with \(\mathcal {I},\pi \), the subset \((\mathsf {hrs} _i)_{i\in \mathcal {I}}\) of \(\mathsf {hrs} \) that is selected by \(\mathcal {I} \), and of course the instance \(x\). Finally, \(V\) is supposed to output a verdict \(b\in \{0,1\} \).
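The interaction can be sketched schematically as follows (a minimal sketch with hypothetical names; the trivial prover and verifier below merely stand in for the actual HBM proof of [22]):

```python
import secrets

def hbm_run(t: int, prover, verifier) -> bool:
    # The ideal hidden random string: fixed before the prover acts and
    # not tamperable by the prover.
    hrs = [secrets.randbelow(2) for _ in range(t)]
    # P sees all of hrs and outputs an index set I and a proof pi.
    I, pi = prover(hrs)
    # V sees only the bits selected by I (together with I and pi).
    revealed = {i: hrs[i] for i in I}
    return verifier(I, revealed, pi)

def toy_prover(hrs):
    # Hypothetical stand-in: opens every even position; the real prover
    # of [22] derives I and pi from the statement x and its witness w.
    return [i for i in range(len(hrs)) if i % 2 == 0], "pi"

def toy_verifier(I, revealed, pi):
    # Hypothetical stand-in: accepts iff all revealed values are bits.
    return all(revealed[i] in (0, 1) for i in I)
```

The point is purely structural: \(V\) never sees the unopened bits \((\mathsf {hrs} _i)_{i\notin \mathcal {I}}\), which is exactly the property that any implementation of the HBM must emulate.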

Two Ways to Implement the HBM. Note that the power of the HBM stems from the fact that \(\mathsf {hrs} \) is ideally random (and cannot be tampered with by \(P\)), but only revealed in part to \(V\). When implementing the HBM, we will necessarily have to compromise on some of these properties. However, it will be interesting to see what the consequences of such compromises are. Specifically, when implementing the HBM in the HBM-based NIZK protocol of [22], we can observe the following:

  (a) if we implement the HBM such that \(\mathsf {hrs} \) is truly random (or selected from a small set of possible \(\mathsf {hrs} \) values, each of which is individually truly random), then the resulting NIZK protocol is statistically sound and computationally zero-knowledge;

  (b) if we implement the HBM such that the unopened bits \((\mathsf {hrs} _i)_{i\notin \mathcal {I}}\) are statistically hidden from \(V\), then the resulting NIZK protocol is statistically zero-knowledge and computationally sound.

Known implementations of the HBM (e.g., [22, 30, 31]) follow (a), and thus enjoy statistical soundness guarantees. Our main strategy will be to build a dual-mode NIZK proof system by implementing the HBM in a way that allows switching (via the public parameters) between (a) and (b).

A First Approach. Our first step will be to set up the hidden string \(\mathsf {hrs} \) as

$$ \mathsf {hrs} =\mathsf {H} (X)\oplus \mathsf {crs} $$

for a value \(X\) chosen freely by \(P\), a yet-to-be-defined function \(\mathsf {H} \), and a truly random “randomizing string” \(\mathsf {crs} \) fixed in the public parameters. If \(\mathsf {H} \) is a pseudorandom generator (that admits a suitable partial opening process, see [31] for an explicit formulation), this yields the core of existing HBM implementations. In particular, if \(\mathsf {H} \) has a small image, then we are in case (a) above, and the resulting NIZK is statistically sound.

However, suppose we can switch (in a computationally indistinguishable way) \(\mathsf {H} (X)\) to have a large image, such that in fact \(\mathsf {H} (X)\in \{0,1\} ^t\) is close to uniformly distributed for random \(X\). We call such a “switchable” object a lossy function (LF). An LF can be easily constructed, e.g., by universally hashing the output of a lossy trapdoor function \(F\). For suitable choices of parameters, \(\mathsf {H} (X):=h(F(X))\) is close to uniform if \(F\) is injective (and \(X\) random), and has a small range if \(F\) does.

With \(\mathsf {H} (X)\) close to uniform, we are in case (b) above, assuming that the process itself of revealing \(\mathsf {hrs} _{\mathcal {I}}\) does not reveal additional information about other bit positions. Hence, we obtain a statistically zero-knowledge NIZK protocol, and in summary even a dual-mode NIZK that can be switched between statistically sound and statistically zero-knowledge modes of operation.
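The soundness intuition behind case (a) can be made quantitative with a toy computation (assumed toy parameters; \(\mathsf {H} \) is modeled here by a simple truncation with image size \(2^k\)): for any fixed \(\mathsf {crs} \), a prover can reach only \(2^k\) of the \(2^t\) possible \(\mathsf {hrs} \) values.

```python
import secrets

t, k = 16, 4   # toy parameters: t-bit hrs, H with image size 2^k

def H_small(x: int) -> int:
    # Toy "lossy-mode" H: its image is {0, ..., 2^k - 1}.
    return x % (1 << k)

crs = secrets.randbits(t)
# hrs = H(X) XOR crs: XOR with a fixed crs is a bijection, so the set of
# reachable hrs values has exactly the size of H's image.
reachable = {H_small(x) ^ crs for x in range(1 << t)}
assert len(reachable) == 1 << k   # only 2^k of the 2^t strings are reachable
```

In dense mode, by contrast, \(\mathsf {H} (X)\) is statistically close to uniform on \(\{0,1\} ^t\), so \(\mathsf {hrs} \) itself is close to uniform.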

Managing the Opening Process. The main problem with our first approach is that it is not clear how to partially open a subset \(\mathsf {hrs} _{\mathcal {I}}\) of \(\mathsf {hrs} \) to a verifier \(V\). Previous HBM implementations (e.g., [22, 31]) devised elaborate ways to partially open suitably designed pseudorandom generators (in the role of \(\mathsf {H} \) above). We cannot use those techniques for two reasons. First, their opening process might reveal statistical information about the unopened parts of \(\mathsf {hrs} \). Second, these techniques require specific \(\mathsf {H} \) functions, and do not appear to work with “switchable” functions \(\mathsf {H} \) as we need. Hence, we use the strong ingredients mentioned above to design our own opening process.

We will use a functional encryption scheme \(\mathsf {FE} \). We will publicize a truly random \(\mathsf {crs} \), a statement \(Z\) from a language \(L'\) that is hard to decide, along with an \(\mathsf {FE} \) public key \(\mathsf {fmpk} \), and a corresponding secret key \(\mathsf {sk} _{\mathsf {f}}\) for the following function \(\mathsf {f} \):

$$\begin{aligned} \mathsf {f} (X,\mathcal {I},z,T) := {\left\{ \begin{array}{ll} (T,\mathcal {I}) &{}\text {if } z \text { is a witness to } Z\in L' \\ (\mathsf {H} (X)_{\mathcal {I}},\mathcal {I}) &{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

An opening consists of an encryption

$$\begin{aligned} C=\mathsf {FE}.\mathsf {Enc} (\mathsf {fmpk},(X,\mathcal {I},0,0)) \end{aligned}$$

that will decrypt to \(\mathsf {f} (X,\mathcal {I},0,0)=(\mathsf {H} (X)_{\mathcal {I}},\mathcal {I})\) under \(\mathsf {sk} _{\mathsf {f}}\). The verifier will receive this opening, retrieve \(\mathsf {H} (X)_{\mathcal {I}}\) with \(\mathsf {sk} _{\mathsf {f}}\), and compute \(\mathsf {hrs} _{\mathcal {I}}=\mathsf {H} (X)_{\mathcal {I}}\oplus \mathsf {crs} _{\mathcal {I}}\).

Observe that this process has the following properties:

  • If \(Z\notin L'\), then \(\mathsf {sk} _{\mathsf {f}}(C)=(\mathsf {H} (X)_{\mathcal {I}},\mathcal {I})\) always. Hence, if additionally \(\mathsf {H} \) has a small range, we are in case (a) above, and the corresponding NIZK protocol is statistically sound.

  • If \(Z\in L'\) with witness \(z\), then any prover who knows \(z\) can efficiently open \(\mathsf {hrs} _{\mathcal {I}}\) arbitrarily, by encrypting \((0,\mathcal {I},z,T)\) for \(T=\mathsf {crs} _{\mathcal {I}}\oplus \mathsf {hrs} _{\mathcal {I}}\) and the desired \(\mathsf {hrs} _{\mathcal {I}}\). Furthermore, such openings obviously do not contain any information about potential other positions of \(\mathsf {hrs} \). This means we are in case (b) above, and the corresponding NIZK protocol is statistically zero-knowledge.
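Stripping away the encryption layer, the case distinction of \(\mathsf {f} \) and the two opening types can be sketched as follows (toy stand-ins only: a hash-based language \(L'\) and a truncation in place of \(\mathsf {H} \); in the real scheme, \(\mathsf {f} \) is computed inside \(\mathsf {FE} \) decryption):

```python
import hashlib, secrets

t = 8
crs = secrets.randbits(t)

def H(x: int) -> int:
    # Toy stand-in for the lossy function H (small image).
    return x % 4

def select(bits: int, I: list) -> dict:
    # The selected subset of bit positions (H(X)_I or T_I).
    return {i: (bits >> i) & 1 for i in I}

# Toy "hard language" L': Z = SHA-256(z) for a secret witness z.
z_secret = secrets.token_bytes(16)
Z = hashlib.sha256(z_secret).hexdigest()

def f(X, I, z, T):
    # The function for which sk_f is issued; decryption computes exactly this.
    if z is not None and hashlib.sha256(z).hexdigest() == Z:
        return select(T, I), I       # trapdoor branch: open arbitrarily
    return select(H(X), I), I        # honest branch: bound to H(X)

# Honest opening (soundness mode): the ciphertext C encrypts (X, I, 0, 0).
X, I = secrets.randbits(t), [0, 2, 5]
opened, _ = f(X, I, None, 0)
hrs_I = {i: b ^ ((crs >> i) & 1) for i, b in opened.items()}

# Trapdoor opening (ZK mode): pick any desired hrs_I, set T = crs XOR hrs.
desired = {0: 1, 2: 0, 5: 1}
T = 0
for i, b in desired.items():
    T |= (b ^ ((crs >> i) & 1)) << i
opened2, _ = f(0, I, z_secret, T)
assert {i: b ^ ((crs >> i) & 1) for i, b in opened2.items()} == desired
```

The final assertion illustrates the second bullet above: with a witness for \(Z\in L'\), the prover can open \(\mathsf {hrs} _{\mathcal {I}}\) to any desired value.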

By using \(\mathsf {FE} \)’s security, it is possible to show that these two types of openings are indistinguishable to a verifier. However, as formulated, they are of course not indistinguishable to a prover yet. Hence, we will additionally publicize an obfuscated algorithm \(\mathsf {PC} \) that will get as input a statement \(x\) with witness \(w\), and random coins \(r\). Depending on the mode (sound or zero-knowledge), \(\mathsf {PC} (x,w,r)\) will then either encrypt \((X,\mathcal {I},0,0)\) or \((0,\mathcal {I},z,T)\), for pseudorandom \(X\) and \(T\) derived from \(r\).

A Taste of the Security Proof. For security, we will show that the public parameters in both modes are computationally indistinguishable. The security proof is somewhat technical, and we would like to highlight only one interesting theme here. Namely, observe that the prover algorithm \(\mathsf {PC} \) is inherently probabilistic. In the proof, we need to modify \(\mathsf {PC} \)’s behavior, and in particular decouple its output distribution from its input \(w\). Specifically, when aiming at statistical soundness, the output of \(\mathsf {PC} \) will encrypt \(w\), and thus depend on it. But when trying to achieve zero-knowledge, \(\mathsf {PC} \)’s output should not reveal (in a statistical sense) which witness \(w\) has been used.

This decoupling process is particularly cumbersome to go through because \(\mathsf {PC} \) itself is public and can be run on arbitrary inputs. Any change that essentially makes \(\mathsf {PC} \) ignore its \(w\) input will be easily detectable. Hence, we add an indirection that helps to remove dependencies on \(w\). Specifically, we let \(\mathsf {PC} \) first compute \(a=\mathsf {LE}.\mathsf {Enc} (\mathsf {lpk},(x,w);r)\) using a lossy encryption scheme \(\mathsf {LE} \). If the corresponding public key \(\mathsf {lpk} \) is injective (i.e., leads to decryptable ciphertexts), then \(a\) determines \(w\). Hence, any case distinction (or hybrid argument) we make for different values of \(w\) can alternatively be made for different values of \(a\). On the other hand, if \(\mathsf {lpk} \) is lossy, then \(a\) will be statistically independent of the plaintext \((x,w)\).

Hence, \(a\) can be used as a single value that (a) can serve as a “fingerprint” of (or in some sense even as a substitute for) \(w\) in the proof, but (b) can be easily made independent of \(w\) by switching \(\mathsf {lpk} \) into lossy mode. Equipped with this gadget, we will structure the proof as a large hybrid argument over all values of \(a\) (encrypted at this point with an injective \(\mathsf {lpk} \)). In each step, we modify \(\mathsf {PC} \)’s behavior for one particular value of \((x,w)\), and change the corresponding \(\mathsf {FE} \) ciphertext \(C\) from an encryption of \((X,\mathcal {I},0,0)\) to \((0,\mathcal {I},z,T)\) for a pseudorandom value \(T\) derived from \(a\).

Roadmap. After recalling some preliminaries in Sect. 2, we present our proof system in Sect. 3, followed by its analysis in Sect. 4. In the full version, we provide a schematic overview of our main proof, a proof of a technical lemma, a recap of the HBM-based NIZK from [22], and an analysis of the (statistical) extractability of our scheme.

2 Preliminaries

Notation. Throughout this paper, \(\lambda \) denotes the security parameter. For a natural number \(n\in \mathbb {N}\), [n] denotes the set \(\{1, \dots , n\}\). A probabilistic polynomial-time algorithm (PPT, also called an efficient algorithm) runs in time polynomial in the (implicit) security parameter \(\lambda \). A positive function f is negligible if for any polynomial p there exists a bound \(B>0\) such that, for any integer \(k\ge B\), \(f(k)\le 1/{\vert p(k)\vert }\). An event depending on \(\lambda \) occurs with overwhelming probability when its probability is at least \(1-{{\,\mathrm{\mathsf {negl}}\,}}(\lambda )\) for a negligible function \({{\,\mathrm{\mathsf {negl}}\,}}\). Given a finite set S, the notation \(x\leftarrow _{\textsc {r}}S\) means a uniformly random assignment of an element of S to the variable x. If A is a probabilistic algorithm, \(y \leftarrow _{\textsc {r}}A(\cdot )\) denotes the process of running A on some appropriate input and assigning its output to y. The notation \(\mathcal {A} ^{\mathcal {O}}\) indicates that the algorithm \(\mathcal {A} \) is given oracle access to \(\mathcal {O}\). We write \(a \leftarrow A; b \leftarrow B(a);\ldots \) for running the experiment in which a is chosen from A, after which b is chosen from B (which may depend on a), and so on. This determines a probability distribution over the outputs, and we write \(\Pr [a \leftarrow A;b \leftarrow B(a);\ldots : C(a,b,\ldots )]\) for the probability that the condition \(C(a,b,\ldots )\) is satisfied after running the experiment. For two distributions \(D_1,D_2\), we denote by \(\varDelta (D_1,D_2)\) their statistical distance. We write \(D_1\equiv D_2\) when the distributions are identical, \(D_1 \approx _{\textsc {c}}D_2\) when they are computationally indistinguishable, and \(D_1 \approx _{\epsilon } D_2\) when \(\varDelta (D_1,D_2)\le \epsilon \).

2.1 Puncturable Pseudorandom Function

A pseudorandom function (PRF), originally introduced in [27], is a tuple of PPT algorithms \(\mathsf {PRF} =(\mathsf {PRF}.\mathsf {KeyGen}, \mathsf {PRF}.\mathsf {Eval})\). Let \(\mathcal {K}\) denote the key space, \(\mathcal {X}\) the domain, and \(\mathcal {Y}\) the range. The key generation algorithm \(\mathsf {PRF}.\mathsf {KeyGen} \), on input \(1^{\lambda } \), outputs a random key from \(\mathcal {K}\), and the evaluation algorithm \(\mathsf {PRF}.\mathsf {Eval} \), on input a key K and \(x\in \mathcal {X}\), evaluates the function \(F:\mathcal {K}\times \mathcal {X}\rightarrow \mathcal {Y}\). The core property of PRFs is that, for a random choice of key K, no probabilistic polynomial-time adversary should be able to distinguish \(F(K,\cdot )\) from a truly random function when given black-box access to it. Puncturable PRFs (pPRFs) have the additional property that keys can be generated punctured at some point, so that they allow evaluating the PRF at all points except for the punctured one. As observed in [13, 14, 36], it is possible to construct such punctured keys for the original construction from [27], which can be based on any one-way function [34].

Definition 1

(Puncturable Pseudorandom Function [13, 14, 36]). A puncturable pseudorandom function (pPRF) with punctured key space \(\mathcal {K}_p\) is a triple of PPT algorithms \((\mathsf {PRF}.\mathsf {KeyGen}, \mathsf {PRF}.\mathsf {Puncture}, \mathsf {PRF}.\mathsf {Eval})\) such that:

  • \(\mathsf {PRF}.\mathsf {KeyGen} (1^{\lambda })\) outputs a random key \(K\in \mathcal {K}\),

  • \(\mathsf {PRF}.\mathsf {Puncture} (K,x)\), on input \(K\in \mathcal {K}\), \(x\in \mathcal {X}\), outputs a punctured key \(K\{x\} \in \mathcal {K}_p\),

  • \(\mathsf {PRF}.\mathsf {Eval} (K',x')\), on input a key \(K'\) (punctured or not), and a point \(x'\), outputs an evaluation of the PRF.

We require \(\mathsf {PRF} \) to meet the following conditions:

  • Functionality preserved under puncturing. For all \(\lambda \in \mathbb {N}\), for all \(x\in \mathcal {X}\),

    $$\begin{aligned} \Pr [&K\leftarrow _{\textsc {r}}\mathsf {PRF}.\mathsf {KeyGen} (1^{\lambda }), K\{x\}\leftarrow _{\textsc {r}}\mathsf {PRF}.\mathsf {Puncture} (K, x):\\&\forall x'\in \mathcal {X}\setminus \{x\}:\mathsf {PRF}.\mathsf {Eval} (K, x')=\mathsf {PRF}.\mathsf {Eval} (K\{x\}, x')]=1\text {.} \end{aligned}$$
  • Pseudorandom at punctured points. For every stateful PPT adversary \(\mathcal {A} \) and every security parameter \(\lambda \in \mathbb {N}\), the advantage \(\mathsf {Adv} _{\mathsf {s}\text {-}\mathsf {pPRF}}(\lambda ,{\mathcal {A}})\) of \(\mathcal {A} \) in \({\mathsf {Exp}\text {-}\mathsf {s}\text {-}\mathsf {pPRF}}\) (described in Fig. 2) is negligible.

Sub-exponential Security. We say that \(\mathsf {PRF} \) is sub-exponentially secure when it satisfies Definition 1 and, in addition, for every PPT adversary \(\mathcal {A}\), the advantage \(\mathsf {Adv} _{\mathsf {s}\text {-}\mathsf {pPRF}}(\lambda ,{\mathcal {A}})\le \frac{1}{2^{\lambda ^{\epsilon }}}\) for some constant \(0<\epsilon <1\).

Definition 1 corresponds to a selective security notion for puncturable pseudorandom functions; adaptive security could be considered, but will not be required in our work. For ease of notation we often write \(F(\cdot , \cdot )\) instead of \(\mathsf {PRF}.\mathsf {Eval} (\cdot , \cdot )\).
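To make the GGM-based puncturing observed in [13, 14, 36] concrete, here is a minimal sketch (SHA-256 standing in for a length-doubling PRG; illustrative only, not a security argument):

```python
import hashlib

def prg(seed: bytes) -> tuple:
    # Toy length-doubling PRG from SHA-256 (illustrative only).
    return (hashlib.sha256(seed + b"\x00").digest(),
            hashlib.sha256(seed + b"\x01").digest())

def prf_eval(key: bytes, x: int, n: int) -> bytes:
    # GGM evaluation: walk the binary tree along the n-bit path of x.
    node = key
    for i in reversed(range(n)):
        node = prg(node)[(x >> i) & 1]
    return node

def puncture(key: bytes, x: int, n: int) -> list:
    # Punctured key K{x}: the sibling subtree root at every level of x's path.
    pk, node = [], key
    for i in reversed(range(n)):
        bit = (x >> i) & 1
        children = prg(node)
        pk.append((i, 1 - bit, children[1 - bit]))
        node = children[bit]
    return pk

def punctured_eval(pk: list, x: int, xp: int, n: int) -> bytes:
    # Evaluate at any point xp != x using only the punctured key.
    for i, bit, node in pk:
        # The stored subtree at level i covers xp iff xp agrees with x on
        # all higher bits and differs from x exactly at bit i.
        if xp >> (i + 1) == x >> (i + 1) and ((xp >> i) & 1) == bit:
            for j in reversed(range(i)):
                node = prg(node)[(xp >> j) & 1]
            return node
    raise ValueError("cannot evaluate at the punctured point")
```

Functionality is preserved under puncturing: for every \(x'\ne x\), punctured_eval agrees with prf_eval, while the value at the punctured point remains pseudorandom given only the sibling nodes in K{x}.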

Fig. 2. Experiment \({\mathsf {Exp}\text {-}\mathsf {s}\text {-}\mathsf {pPRF}} _{\mathcal {A}}(\lambda )\) for the pseudorandomness at punctured points.

2.2 Lossy Functions

We generalize the notion of lossy trapdoor functions (LTDFs) due to [41] and introduce lossy functions. LTDFs can be sampled in two indistinguishable modes: an injective and a lossy mode. When sampling an injective function, the setup also provides a trapdoor which can be used to invert the function. For lossy functions, we still require two indistinguishable sampling modes, but now one mode is merely “more lossy” than the other. Thus, instead of an injective and a lossy mode, we have a “less lossy” and a “more lossy” mode, which we call the “dense” and “lossy” modes. Since we do not necessarily have injectivity in the dense mode, we also do not have trapdoors as in LTDFs.

Definition 2

(Lossy Functions). A tuple \(\mathsf {LF} =(\mathsf {Setup},\mathsf {Eval})\) of \( PPT \) algorithms is a family of \((n,k,m,i)\)-lossy functions if the following properties hold:

  • Sampling functions: Both \(\mathsf {Setup} (1^{\lambda },\text {dense})\) for dense functions and \(\mathsf {Setup} (1^{\lambda },\mathrm {lossy})\) for lossy functions output a function index s. We require that \(\mathsf {Eval} (s, \cdot )\) is a deterministic function from \(\{0,1\} ^n\) to \(\{0,1\} ^m\) in both cases. In the following, we use the shorthand notation \(s(\cdot ):=\mathsf {Eval} (s,\cdot )\).

  • Dense functions have images statistically close to uniformly random: for all \(s\leftarrow _{\textsc {r}}\mathsf {Setup} (1^{\lambda },\text {dense})\), we have that:

    $$\begin{aligned} \varDelta ((s(U_n),s),(U_m,s))\le \frac{1}{2^i}. \end{aligned}$$
  • Lossy functions have small image size: The image size of lossy functions is bounded by \(2^{k}\). In particular, for all \(s \leftarrow _{\textsc {r}}\mathsf {Setup} (1^{\lambda },\mathrm {lossy})\),

    $$\begin{aligned} |\{\mathsf {Eval} (s,x):x\in \{0,1\}^n\}|\le 2^{k}. \end{aligned}$$
  • Indistinguishability: The outputs of \(\mathsf {Setup} (1^{\lambda },\mathrm {lossy})\) and \(\mathsf {Setup} (1^{\lambda },\text {dense})\) are computationally indistinguishable, i.e. \(\{\mathsf {Setup} (1^{\lambda },\mathrm {lossy})\}\approx _{\textsc {c}}\{\mathsf {Setup} (1^{\lambda },\text {dense})\}\).

We can generalize Definition 2 even further. Instead of asking that, in dense mode, the evaluation of the function is statistically close to uniformly random, we may instead define the dense mode as having \(H_\infty (\mathsf {Eval} (s,U_n))\ge m+2\log \big (\frac{1}{\epsilon }\big )\). Then, by the leftover hash lemma, we can combine \(\mathsf {LF} \) with a 2-universal hash function to ensure that the output is statistically close to uniformly random, as in Definition 2. For clarity, we do not use this generalization in our proofs.

Concrete Instantiations: The lossy trapdoor functions from [41] are also lossy functions in the sense of Definition 2. Moreover, composed with 2-universal hash functions, they satisfy the necessary parameters in our construction (see Sect. 3). This would yield suitable lossy functions based on DDH and LWE.

2.3 Lossy Encryption

Definition 3

[5, 40]: A lossy public-key encryption scheme is a tuple \(\mathsf {LE} = (\mathsf {Gen}, \mathsf {Enc}, \mathsf {Dec})\) of polynomial-time algorithms such that

  • \(\mathsf {Gen} (1^\lambda , \mathrm {inj})\) outputs keys \((\mathsf {pk}, \mathsf {sk})\); keys generated by \(\mathsf {Gen} (1^\lambda , \mathrm {inj})\) are called injective keys.

  • \(\mathsf {Gen} (1^\lambda , \mathrm {lossy})\) outputs keys \((\mathsf {pk} _\mathrm {lossy}, \bot )\); keys generated by \(\mathsf {Gen} (1^\lambda , \mathrm {lossy})\) are called lossy keys.

  • \(\mathsf {Enc} (\mathsf {pk},\cdot ,\cdot ):M\times R\rightarrow C\).

Additionally, the algorithms must satisfy the following properties:

  1. Correctness on injective keys. For all plaintexts \(x \in M\),

    $$\begin{aligned} \Pr [(\mathsf {pk}, \mathsf {sk}) \leftarrow _{\textsc {r}}\mathsf {Gen} (1^{\lambda }, \mathrm {inj}); r \leftarrow R : \mathsf {Dec} (\mathsf {sk}, \mathsf {Enc} (\mathsf {pk}, x, r)) = x] = 1. \end{aligned}$$
  2. Indistinguishability of keys. Public keys \(\mathsf {pk} \) are computationally indistinguishable in lossy and injective modes. Specifically, if \(\mathrm {proj}: (\mathsf {pk}, \mathsf {sk}) \rightarrow \mathsf {pk} \) is the projection map, then:

    $$\begin{aligned} \{\mathrm {proj}(\mathsf {Gen} (1^{\lambda },\mathrm {inj}))\}\approx _{\textsc {c}}\{\mathrm {proj}(\mathsf {Gen} (1^{\lambda },\mathrm {lossy}))\}. \end{aligned}$$
  3. Lossiness of lossy keys. For all \((\mathsf {pk} _\mathrm {lossy}, \bot ) \leftarrow _{\textsc {r}}\mathsf {Gen} (1^{\lambda }, \mathrm {lossy})\), and all \(x_0, x_1 \in M\), the two distributions \(\{r \leftarrow _{\textsc {r}}R : (\mathsf {pk} _\mathrm {lossy}, \mathsf {Enc} (\mathsf {pk} _\mathrm {lossy}, x_0, r))\}\) and \(\{r \leftarrow _{\textsc {r}}R : (\mathsf {pk} _\mathrm {lossy}, \mathsf {Enc} (\mathsf {pk} _\mathrm {lossy}, x_1, r))\}\) are statistically close, i.e. the statistical distance is negligible in \(\lambda \).

We define a lossy encryption scheme \(\mathsf {LE}\) to be \(\mu \)-lossy if for all \((\mathsf {pk} _\mathrm {lossy}, \bot )\leftarrow _{\textsc {r}}\mathsf {Gen} (1^{\lambda }, \mathrm {lossy})\) and for all \(x_0,x_1\in M\), we have that:

$$\begin{aligned} \{r \leftarrow _{\textsc {r}}R : (\mathsf {pk} _\mathrm {lossy}, \mathsf {Enc} (\mathsf {pk} _\mathrm {lossy}, x_0, r))\}\approx _{\mu }\{r \leftarrow _{\textsc {r}}R : (\mathsf {pk} _\mathrm {lossy}, \mathsf {Enc} (\mathsf {pk} _\mathrm {lossy}, x_1, r))\}. \end{aligned}$$
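The two key modes can be illustrated by a toy scheme (hypothetical names; it ignores indistinguishability of keys entirely, which a real lossy encryption scheme must provide): in lossy mode the ciphertext simply never touches the plaintext, so the distributions for \(x_0\) and \(x_1\) coincide.

```python
import hashlib, secrets

def pad(k: bytes, r: bytes) -> int:
    # Toy one-time pad derived from (key, randomness); stands in for real PKE.
    return int.from_bytes(hashlib.sha256(k + r).digest()[:4], "big")

def Gen(mode: str):
    k = secrets.token_bytes(16)
    # Toy only: here the mode is visible in the public key, whereas
    # Definition 3 requires the two key distributions to be
    # computationally indistinguishable.
    return (mode, k), (k if mode == "inj" else None)

def Enc(pk, m: int, r: bytes):
    mode, k = pk
    if mode == "inj":
        return (r, m ^ pad(k, r))   # injective: sk = k recovers m
    return (r, pad(k, r))           # lossy: the ciphertext ignores m entirely

def Dec(sk, ct):
    r, c = ct
    return c ^ pad(sk, r)
```

Correctness on injective keys is immediate, and on lossy keys the ciphertext distribution is literally identical for every pair of plaintexts (0-lossy in the terminology above).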

2.4 Functional Encryption

Definition 4

[12, 38, 43] A functional encryption scheme for a class of functions \(\mathcal {F} = \mathcal {F} (1^{\lambda })\) over message space \(\mathcal {M}= \mathcal {M}_\lambda \) consists of four polynomial-time algorithms \(\mathsf {FE} = (\mathsf {Setup}, \mathsf {KeyGen},\mathsf {Enc}, \mathsf {Dec})\):

  1. \(\mathsf {Setup} (1^{\lambda })\) – on input the security parameter \(\lambda \), outputs a master public key \(\mathsf {mpk}\) and a master secret key \(\mathsf {msk} \).

  2. \(\mathsf {KeyGen} (\mathsf {msk},f)\) – on input the master secret key \(\mathsf {msk}\) and a description of a function \(f\in \mathcal {F} \), outputs a corresponding secret key \(\mathsf {sk} _f\).

  3. \(\mathsf {Enc} (\mathsf {mpk}, x)\) – on input the master public key \(\mathsf {mpk}\) and a string x, outputs a ciphertext \(\mathsf {ct} \).

  4. \(\mathsf {Dec} (\mathsf {sk} _f , \mathsf {ct})\) – on input the secret key \(\mathsf {sk} _f\) and a ciphertext encrypting a message \(m \in \mathcal {M}\), outputs f(m).

A functional encryption scheme is perfectly correct for \(\mathcal {F} \) if for all \(f \in \mathcal {F} \) and all messages \(m \in \mathcal {M}\):

$$\begin{aligned} \Pr [ (\mathsf {mpk},\mathsf {msk}) \leftarrow _{\textsc {r}}\mathsf {Setup} (1^{\lambda }): \mathsf {Dec} (\mathsf {KeyGen} (\mathsf {msk},f),\mathsf {Enc} (\mathsf {mpk},m)) = f(m) ] = 1 \end{aligned}$$

In addition, for the proof of Theorem 14, we need a stronger property from the functional encryption schemes we use in our construction, which we call special correctness of decryption keys. Special correctness requires that decrypting any (potentially maliciously generated) ciphertext under the decryption key \(\mathsf {sk} _f\) yields a result which lies in the range of the function f. The functional encryption scheme based on \(\mathsf {iO} \) and one-way functions from [26] satisfies this property.

Definition 5

(Special correctness of decryption keys). A functional encryption scheme satisfies special correctness if for all \(\lambda \in \mathbb {N}\), for all \(\mathsf {ct} \), for all \(f\in \mathcal {F} \), it holds that:

$$\begin{aligned} \Pr \Bigg [\begin{array}{l} (\mathsf {mpk},\mathsf {msk}) \leftarrow _{\textsc {r}}\mathsf {Setup} (1^{\lambda })\text {, }\\ \mathsf {sk} _f \leftarrow _{\textsc {r}}\mathsf {KeyGen} (\mathsf {msk},f)\\ \end{array}: \mathsf {Dec} (\mathsf {sk} _f,\mathsf {ct})\in \mathsf {Im}(f)\cup \{\bot \}\Bigg ]\ge 1-{{\,\mathrm{\mathsf {negl}}\,}}(\lambda )\text {,} \end{aligned}$$

where \(\mathsf {Im}(f)=\{f(m):m\in \mathcal {M}\}\) denotes the image of the function f.
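A degenerate, secrecy-free toy (hypothetical names, no security whatsoever) illustrates the shape of the guarantee: since decryption itself applies \(f\) to whatever plaintext the ciphertext carries, every output lies in \(\mathsf {Im}(f)\cup \{\bot \}\) by construction, even for malformed ciphertexts.

```python
def Setup():
    # Toy: no real key material (a real FE scheme has actual mpk/msk).
    return "mpk", "msk"

def KeyGen(msk, f):
    return ("sk", f)

def Enc(mpk, m):
    return ("ct", m)

def Dec(sk_f, ct):
    _, f = sk_f
    if not (isinstance(ct, tuple) and len(ct) == 2 and ct[0] == "ct"):
        return None               # bottom on malformed ciphertexts
    return f(ct[1])               # always a value of f, i.e., in Im(f)
```

Real schemes, such as the one from [26], must enforce this property despite ciphertexts being produced adversarially.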

Definition 6

(Selectively Indistinguishable Security). A functional encryption scheme \(\mathsf {FE} \) is selectively indistinguishable secure (SEL-IND-FE-CPA) if for all stateful \( PPT \) adversaries \(\mathcal {A} \), the advantage \(\mathsf {Adv} ^{\mathsf {FE}}_{{\mathsf {Exp}\text {-}s\text {-}\mathsf {IND}\text {-}\mathsf {FE}\text {-}\mathsf {CPA}}}(\lambda ,{\mathcal {A}})\) of \(\mathcal {A}\) in the experiment \({\mathsf {Exp}\text {-}s\text {-}\mathsf {IND}\text {-}\mathsf {FE}\text {-}\mathsf {CPA}} \) described in Fig. 3 is negligible.

Fig. 3. Experiment \({\mathsf {Exp}\text {-}s\text {-}\mathsf {IND}\text {-}\mathsf {FE}\text {-}\mathsf {CPA}}\) for the selective indistinguishability security of \(\mathsf {FE}\). The queries of \(\mathcal {A} \) to the oracle \(\mathsf {FE}.\mathsf {KeyGen} (\mathsf {msk},\cdot )\) are restricted to functions f such that \(f(m_0)=f(m_1)\).

Definition 7

(Sub-exponential Selective Indistinguishability Security). A functional encryption scheme \(\mathsf {FE} \) is sub-exponentially selectively indistinguishable secure if it satisfies Definition 6 and, in addition, for all PPT adversaries \(\mathcal {A}\):

$$\begin{aligned} \mathsf {Adv} ^{\mathsf {FE}}_{{\mathsf {Exp}\text {-}s\text {-}\mathsf {IND}\text {-}\mathsf {FE}\text {-}\mathsf {CPA}}}(\lambda ,{\mathcal {A}})\le \frac{1}{2^{\lambda ^{\epsilon }}}, \ \textit{for some positive constant} \ 0<\epsilon <1. \end{aligned}$$

2.5 Indistinguishability Obfuscation

Definition 8

(Indistinguishability Obfuscator [3, 26]). A uniform \( PPT \) machine \(\mathsf {iO} \) is called an indistinguishability obfuscator for a circuit class \({\mathcal {C} _\lambda }\) if the following conditions are satisfied:

  • For all security parameters \(\lambda \in \mathbb {N}\), for all \(C \in \mathcal {C} _\lambda \), for all inputs x, we have:

    $$\begin{aligned} \Pr [C'(x)=C(x):C' \leftarrow _{\textsc {r}}\mathsf {iO} (\lambda ,C)]=1 \end{aligned}$$
  • For any (not necessarily uniform) \( PPT \) distinguisher \({\mathcal {A}}\), for all security parameters \(\lambda \in \mathbb {N}\), for all pairs of circuits \(C_0,C_1 \in \mathcal {C} _\lambda \), we have that if \(C_0(x) = C_1(x)\) for all inputs x, then:

    $$\begin{aligned} \mathsf {Adv} ^{\mathsf {iO}}(\lambda ,{\mathcal {A}}):=\big |\Pr [\mathcal {A} (\mathsf {iO} (\lambda ,C_0))=1]-\Pr [\mathcal {A} (\mathsf {iO} (\lambda ,C_1))=1]\big |\le {{\,\mathrm{\mathsf {negl}}\,}}(\lambda ). \end{aligned}$$

Sub-exponential Security. We say that \(\mathsf {iO} \) is sub-exponentially secure if it satisfies Definition 8 and, in addition, for every (not necessarily uniform) PPT distinguisher \(\mathcal {A}\), the advantage \(\mathsf {Adv} ^{\mathsf {iO}}(\lambda ,{\mathcal {A}})\) is bounded by \(\frac{1}{2^{\lambda ^{\epsilon }}}\), for some constant \(0<\epsilon <1\).

2.6 Dual-Mode NIWI Proof Systems

A dual-mode non-interactive witness indistinguishable (DM-NIWI) proof system [32] is a special type of non-interactive witness indistinguishable (NIWI) proof system in which the common reference string (CRS) can be generated in one of two computationally indistinguishable modes: "binding" or "hiding". The system satisfies statistical soundness and extractability in binding mode, and statistical witness indistinguishability in hiding mode.

Definition 9

A binary relation R is polynomially bounded if it is decidable in polynomial time and there is a polynomial p such that \(|w| \le p(|x|)\), for all \((x, w) \in R\). For any such relation and any x we set \(L_R = \{x |\ \exists w\ s.t.\ (x,w) \in R\}\).

Fig. 4.

Experiment \({\mathsf {Exp}\text {-}\mathsf {CRS}\text {-}\mathsf {IND}} ^\mathsf {DM\text {-}NIWI} _b\) for CRS indistinguishability.

Definition 10

(Dual-mode non-interactive witness indistinguishable proof systems [32]). Let R be a polynomially bounded binary relation. A dual-mode non-interactive witness indistinguishable (DM-NIWI) proof system for language \(\mathcal {L}_{R}\in \mathsf {NP} \) is a tuple of PPT algorithms \(\mathsf {DM\text {-}NIWI} = ( \mathsf {Setup}, \mathsf {Prove}, \mathsf {Verify}, \mathsf {Extract})\).

  • \( \mathsf {Setup} (1^{\lambda },\mathrm {binding})\) on input the security parameter, outputs a common reference string \(\mathsf {crs} \) which we call binding. It also outputs the corresponding extraction trapdoor \(\mathsf {td}_{\mathsf {ext}} \).

  • \( \mathsf {Setup} (1^{\lambda },\mathrm {hiding})\) on input the security parameter, outputs a common reference string \(\mathsf {crs} \), which we call a hiding \(\mathsf {crs} \).

  • \( \mathsf {Prove} (\mathsf {crs}, x, w)\), on input \(\mathsf {crs} \), a statement x and a witness w, outputs a proof \(\pi \).

  • \( \mathsf {Verify} (\mathsf {crs}, x, \pi )\), on input \(\mathsf {crs} \), a statement x and a proof \(\pi \), outputs either 1 or 0.

  • \( \mathsf {Extract}(\mathsf {td}_{\mathsf {ext}},x,\pi )\) on input the extraction trapdoor \(\mathsf {td}_{\mathsf {ext}} \), a statement x and a proof \(\pi \), it outputs a witness w.

We require the \(\mathsf {DM\text {-}NIWI} \) to meet the following properties:

  • CRS indistinguishability. Common reference strings generated via \(\mathsf {Setup} (1^{\lambda }, \mathrm {binding})\) and \(\mathsf {Setup} (1^{\lambda }, \mathrm {hiding})\) are computationally indistinguishable. More formally, for all non-uniform PPT adversaries \(\mathcal {A} \), the advantage of \({\mathcal {A}}\) in the experiment \({\mathsf {Exp}\text {-}\mathsf {CRS}\text {-}\mathsf {IND}}\) described in Fig. 4 is negligible, namely:

    $$\begin{aligned} \big |\Pr [{\mathsf {Exp}\text {-}\mathsf {CRS}\text {-}\mathsf {IND}} ^\mathsf {DM\text {-}NIWI} _0(1^{\lambda },{\mathcal {A}})=1]-\Pr [{\mathsf {Exp}\text {-}\mathsf {CRS}\text {-}\mathsf {IND}} ^\mathsf {DM\text {-}NIWI} _1(1^{\lambda },{\mathcal {A}})=1]\big |\le {{\,\mathrm{\mathsf {negl}}\,}}(\lambda ). \end{aligned}$$

  • Perfect completeness in both modes. For every \((x, w)\in R\), we have that:

    $$\begin{aligned} \Pr \bigg [\begin{array}{l} \mathsf {crs} \leftarrow _{\textsc {r}}\mathsf {Setup} (1^{\lambda },\mathrm {binding})\text {, }\\ \pi \leftarrow _{\textsc {r}} \mathsf {Prove} (\mathsf {crs}, x, w) \end{array}: \mathsf {Verify} (\mathsf {crs}, x, \pi )=1\bigg ]=1\text {.} \end{aligned}$$

    The same holds when instead of \(\mathsf {crs} \leftarrow _{\textsc {r}}\mathsf {Setup} (1^{\lambda },\mathrm {binding})\), we have \(\mathsf {crs} \leftarrow _{\textsc {r}}\mathsf {Setup} (1^{\lambda },\mathrm {hiding})\).

  • Statistical soundness in binding mode. The system is statistically sound if for every (possibly unbounded) adversary \(\mathcal {A} \), we have that

    $$\begin{aligned} \Pr \bigg [\begin{array}{l} (\mathsf {crs},\mathsf {td}_{\mathsf {ext}})\leftarrow _{\textsc {r}}\mathsf {Setup} (1^{\lambda },\mathrm {binding}), \\ (x,\pi )\leftarrow _{\textsc {r}}\mathcal {A} (\mathsf {crs}) \end{array} : \mathsf {Verify} (\mathsf {crs}, x, \pi )=1 \wedge x\notin \mathcal {L}_{R}\bigg ]={{\,\mathrm{\mathsf {negl}}\,}}(\lambda )\text {.} \end{aligned}$$
  • Statistical extractability in binding mode. For any \((x,\pi )\), it holds that:

    $$\begin{aligned} \Pr \bigg [\begin{array}{l} (\mathsf {crs}, \mathsf {td}_{\mathsf {ext}}) \leftarrow _{\textsc {r}}\mathsf {Setup} (1^{\lambda }, \mathrm {binding}), \\ w \leftarrow _{\textsc {r}}\mathsf {Extract}(\mathsf {crs},\mathsf {td}_{\mathsf {ext}}, x, \pi ) \end{array}:\bigg (\begin{array}{l}\mathsf {Verify} (\mathsf {crs}, x, \pi ) = 1\\ \implies (x, w) \in R\end{array}\bigg )\bigg ]=1-{{\,\mathrm{\mathsf {negl}}\,}}(\lambda ). \end{aligned}$$

    Note: In binding mode, statistical extractability implies statistical soundness.

  • Statistical witness-indistinguishability in hiding mode. We say that the \(\mathsf {DM\text {-}NIWI} \) system is statistically witness-indistinguishable if for every x, \(w_0\), \(w_1\) with both \((x,w_0)\in R\) and \((x,w_1)\in R\), proofs of x with witness \(w_{0}\) are indistinguishable from proofs of x with witness \(w_{1}\). More formally, for every interactive (potentially unbounded) adversary \(\mathcal {A} \):

    $$\begin{aligned} \Big |\Pr \bigg [\begin{array}{l} \mathsf {crs} \leftarrow _{\textsc {r}}\mathsf {Setup} (1^{\lambda },\mathrm {hiding}),\ (x,w_0,w_1)\leftarrow _{\textsc {r}}\mathcal {A} (\mathsf {crs}), \\ \pi \leftarrow _{\textsc {r}}\mathsf {Prove} (\mathsf {crs},x,w_0) \end{array}:\mathcal {A} (\pi )=1\bigg ]-\Pr \bigg [\begin{array}{l} \mathsf {crs} \leftarrow _{\textsc {r}}\mathsf {Setup} (1^{\lambda },\mathrm {hiding}),\ (x,w_0,w_1)\leftarrow _{\textsc {r}}\mathcal {A} (\mathsf {crs}), \\ \pi \leftarrow _{\textsc {r}}\mathsf {Prove} (\mathsf {crs},x,w_1) \end{array}:\mathcal {A} (\pi )=1\bigg ]\Big |\le {{\,\mathrm{\mathsf {negl}}\,}}(\lambda ),\end{aligned}$$

where \(\mathcal {A} \) is restricted to choosing \((x,w_0,w_1)\) such that both \((x,w_0)\in R\) and \((x,w_1)\in R\).

Remark. As with the original presentation of Groth and Sahai [32], we focus our presentation on witness-indistinguishable (rather than zero-knowledge) proof systems. Unlike zero-knowledge, witness-indistinguishability has useful compositional properties (see [23]). If zero-knowledge is desired, however, a simple transformation is possible: instead of proving \(x\in L\), prove \(x\in L\vee \hat{x}\in \hat{L}\) with our system, where \(\hat{L}\) is any fixed hard-to-decide language, and \(\hat{x}\) is a fixed instance determined in \(\mathsf {crs} \). In binding mode, set up \(\hat{x}\notin \hat{L}\), so that \(x\in L\vee \hat{x}\in \hat{L}\) implies \(x\in L\). In hiding mode, set up \(\hat{x}\in \hat{L}\), in which case a witness to this fact can be used as a simulation trapdoor to efficiently simulate proofs, achieving statistical zero-knowledge.

2.7 Hidden Bits Non-interactive Zero-Knowledge

In our construction, we rely on a NIZK protocol in the hidden bits model. The hidden-bits model was introduced by [22] and is an idealized setting in which the bits of the common reference string are hidden from the verifier (but not from the prover). We call this the hidden reference string \(\mathsf {hrs} \).

When the prover computes a proof, it can choose which bits of \(\mathsf {hrs} \) to reveal to the verifier. We denote the set of revealed positions by \(\mathcal {I} \) and write \(\mathsf {hrs} _{\mathcal {I}}\) for the corresponding revealed bits of the \(\mathsf {hrs} \). Our construction can be based on the hidden-bits NIZK from [22], which proves graph Hamiltonicity and therefore covers any \(\mathsf {NP} \) statement. Nevertheless, our construction is generic enough to be based on any hidden-bits NIZK with statistical soundness and perfect zero-knowledge (with only statistical ZK, we would only obtain statistical completeness of \(\mathsf {DM\text {-}NIWI}\)). The hidden-bits NIZK from [22] satisfies both statistical soundness and perfect ZK.
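As a toy illustration of the reveal mechanics of the model (names hypothetical, unrelated to the actual NIZK of [22]): the prover sees the whole \(\mathsf {hrs} \), while the verifier only ever receives the bits at the revealed positions \(\mathcal {I} \).

```python
import secrets

def sample_hrs(t: int) -> list[int]:
    """Sample a hidden reference string of t uniformly random bits."""
    return [secrets.randbelow(2) for _ in range(t)]

def reveal(hrs: list[int], I: set[int]) -> dict[int, int]:
    """The prover opens only the positions in I; the verifier never
    sees the remaining bits of hrs."""
    return {i: hrs[i] for i in I}

hrs = sample_hrs(16)
I = {0, 3, 7}                      # positions the (toy) prover chooses to open
hrs_I = reveal(hrs, I)
assert set(hrs_I) == I
assert all(hrs_I[i] == hrs[i] for i in I)
```

The verifier's input \(\mathsf {hrs} _{\mathcal {I}}\) corresponds to the dictionary `hrs_I`; soundness of the hidden-bits NIZK is then stated over the unrevealed, uniformly random remainder of `hrs`.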

Definition 11

[22] A pair of \( PPT \) algorithms \(\mathsf {NIZK}_{H} =(\mathsf {P}_{H},\mathsf {V}_{H})\) is a NIZK proof system in the hidden-bits model if it satisfies the following properties:

  1. 1.

    Completeness: there exists a polynomial t, denoting the length of the hidden reference string, such that for every \((x, w) \in \mathcal {R} \) we have that:

    $$\begin{aligned} \underset{\mathsf {P}_{H},\mathsf {hrs} \leftarrow \{0,1\} ^{{t(|x|,\lambda )}}}{\Pr } [(\pi ,\mathcal {I}) \leftarrow \mathsf {P}_{H} (x,w,\mathsf {hrs}):\mathsf {V}_{H} (x,\mathsf {hrs} _\mathcal {I},\mathcal {I},\pi ) = 1] = 1 \end{aligned}$$

    where \(\mathcal {I} \subseteq [{t(|x|,\lambda )}]\) and \(\mathsf {hrs} _\mathcal {I} = \{ \mathsf {hrs} [i] : i \in \mathcal {I} \}.\)

  2. 2.

    Statistical Soundness: for every \(x \notin \mathcal {L}\) we have that:

    $$\begin{aligned} \underset{\mathsf {hrs} \leftarrow \{0,1\} ^{{t(|x|,\lambda )}}}{\Pr } [\exists \pi ,\mathcal {I}: \mathsf {V}_{H} (x,\mathsf {hrs} _\mathcal {I},\mathcal {I},\pi ) = 1] < \frac{1}{2^{\lambda +|x|}}. \end{aligned}$$
  3. 3.

    Perfect Zero-Knowledge: there exists a \( PPT \) algorithm \(\mathsf {S}_{H} \) such that for every \((x,w)\in \mathcal {R} \), the simulated distribution \(\{(\mathsf {hrs} _\mathcal {I},\mathcal {I},\pi ) \leftarrow \mathsf {S}_{H} (x)\}\) is identically distributed to the real distribution \(\{(\mathsf {hrs} _\mathcal {I},\mathcal {I},\pi ) : \mathsf {hrs} \leftarrow \{0,1\} ^{{t(|x|,\lambda )}},\ (\pi ,\mathcal {I}) \leftarrow \mathsf {P}_{H} (x,w,\mathsf {hrs})\}\).

    For ease of notation, we denote by \(D_0\approx _{\delta }D_1\) that the statistical distance between distributions \(D_0\) and \(D_1\) is at most \(\delta \). In the case of perfect ZK, \(\delta =0\).

3 Construction

In Fig. 5, we describe our \(\mathsf {DM\text {-}NIWI} \) candidate. Our scheme uses a hidden-bits NIZK proof system \(\mathsf {NIZK}_{H} =(\mathsf {P}_{H},\mathsf {V}_{H})\) as a building block. To distinguish common reference strings and proofs between the two proof systems, we denote by lowercase (\(\pi ,\mathsf {hrs} \)) the proofs and hidden reference strings for \(\mathsf {NIZK}_{H} \). In contrast, the common reference string and proofs of \(\mathsf {DM\text {-}NIWI} \) are denoted as \(\mathsf {CRS} \) and \(\varPi \), respectively.

The \(\mathsf {CRS} \) of \(\mathsf {DM\text {-}NIWI} \) contains the public key \(\mathsf {lpk} \) of a lossy encryption scheme \(\mathsf {LE} \), a lossy function \(\mathsf {H} \), uniformly random strings Z and \(\mathsf {crs} \), a functional decryption key \(\mathsf {sk} _f\) and an obfuscated program \(\mathsf {PC} \). The prover program \(\mathsf {PC} \), on input (x,w,r), first encrypts (x,w) using randomness r to obtain \(a=\mathsf {LE}.\mathsf {Enc} (\mathsf {lpk},(x,w);r)\). Then it computes either a \(\mathsf {Hiding Proof} \) or a \(\mathsf {Binding Proof} \) depending on the mode, and outputs as proof an \(\mathsf {FE} \) ciphertext C and a hidden-bits proof \(\pi \). The verifier decrypts C using \(\mathsf {sk} _f\) and then uses the hidden-bits verifier to check the proof \(\pi \).

Notation and Parameters. For security parameter \(\lambda \), we denote by \({p(|x|+\lambda )}\) the ciphertext size of \(\mathsf {LE} \). By \({p_2(|x|,\lambda )}\), we denote the size of the randomness needed to compute \(\mathsf {FE} \) ciphertexts, while \({p_3(|x|,\lambda )}\) denotes the size of the random tape needed by the hidden-bits simulator \(\mathsf {S}_{H} \). Recall that \({t(|x|,\lambda )}\) is the polynomial from Definition 11. Then \(\mathsf {LF} \) must be a \(\big ({p_1(|x|,\lambda )},\lambda ,{t(|x|,2\lambda +|x|)},{{p(|x|+\lambda )}+\lambda }\big )\)-lossy function. Assume the sub-exponential security level of \(\mathsf {iO},\mathsf {FE} \) and \(\mathsf {PRF} \) to be \(\frac{1}{2^{{\kappa }^\epsilon }}\), for some constant \(0<\epsilon <1\). Then \(\kappa \) must be chosen as \(({{p(|x|+\lambda )}+\lambda })^{1/\epsilon }\).
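The choice \(\kappa =({{p(|x|+\lambda )}+\lambda })^{1/\epsilon }\) can be sanity-checked numerically; the concrete sizes below are hypothetical toy values, not parameters prescribed by the scheme.

```python
# Illustrative parameter choice (numbers hypothetical): with security
# parameter lam, LE ciphertext size p_ct = p(|x| + lam), and sub-exponential
# security level 2^(-kappa^eps), choosing kappa = (p_ct + lam)^(1/eps)
# makes 2^(-kappa^eps) = 2^-(p_ct + lam), small enough to absorb the
# 2^(p_ct) loss from the hybrid over all ciphertexts a.
import math

def choose_kappa(p_ct: int, lam: int, eps: float) -> int:
    return math.ceil((p_ct + lam) ** (1.0 / eps))

lam, p_ct, eps = 128, 4096, 0.5
kappa = choose_kappa(p_ct, lam, eps)
# Here 1/eps = 2, so kappa is (up to rounding) (p_ct + lam)^2.
assert abs(kappa - (p_ct + lam) ** 2) <= 1
```

With these toy numbers, the sub-exponential term \(2^{-\kappa ^{\epsilon }}=2^{-(p_{ct}+\lambda )}\) still dominates the \(2^{p_{ct}}\) hybrid loss, leaving \(2^{-\lambda }\) overall.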

Fig. 5.

Dual-mode NIWI scheme \(\mathsf {DM\text {-}NIWI} =(\mathsf {Setup}, \mathsf {Prover}, \mathsf {Verifier})\). \(\mathsf {LF} \) is a class of lossy functions, \(\mathsf {PRG}.\mathsf {Setup} \) outputs pseudo-random generators from \(\{0,1\}^{\lambda }\) to \(\{0,1\}^{2\lambda +|x|}\), \(\mathsf {FE} \) is a functional encryption scheme, \(\mathsf {LE} \) is a lossy encryption scheme, \(\mathsf {iO} \) is an indistinguishability obfuscator and \((\mathsf {P}_{H},\mathsf {V}_{H})\) is the hidden-bits model NIZK from [22]. Parameter \({\kappa }\) is chosen so that the sub-exponential security level is sufficient.

4 Security Proof

Theorem 12

Let \(\mathsf {PRF} \) be a subexponentially-secure puncturable pseudo-random function, \(\mathsf {iO} \) a subexponentially-secure indistinguishability obfuscator, \(\mathsf {PRG} \) a secure pseudo-random generator, \(\mathsf {LE} \) a secure lossy encryption scheme and \(\mathsf {FE} \) a subexponentially-secure selectively-\(\mathsf {IND\text {-}CPA} \) functional encryption scheme. Then the scheme \(\mathsf {DM\text {-}NIWI} =(\mathsf {DM\text {-}NIWI}.\mathsf {Setup},\mathsf {DM\text {-}NIWI}.\mathsf {Prover}, \mathsf {DM\text {-}NIWI}.\mathsf {Verifier})\) described in Fig. 5 is a secure dual-mode non-interactive witness-indistinguishable proof system.

4.1 Completeness

Lemma 13

The \(\mathsf {DM\text {-}NIWI} \) system in Fig. 5 is perfectly complete.

Proof

Completeness follows from the completeness of the hidden-bits \(\mathsf {NIZK}_{H} \), the perfect ZK of \(\mathsf {NIZK}_{H} \), the perfect correctness of \(\mathsf {FE} \) and the functionality of \(\mathsf {iO} \) (the fact that for all programs C, we have that \(\mathsf {iO} (C)\) is functionally equivalent to C). Consider any \((x,w)\in R\) and \((C,\pi )=\mathsf {DM\text {-}NIWI}.\mathsf {Prover} (\mathsf {CRS},x,w,r)\). We want to show that \(\mathsf {DM\text {-}NIWI}.\mathsf {Verifier} (C,\pi ,\mathsf {CRS})=1\) with probability 1.

Case 1: \(\mathbf {\mathsf {CRS} \leftarrow _{\textsc {r}}\mathsf {DM\text {-}NIWI}.\mathsf {Setup} (1^{\lambda },\mathrm {binding})}\) Since \((C,\pi )\) is a proof computed by the honest prover, we know that \((\pi ,\mathcal {I}) \leftarrow \mathsf {P}_{H} (x,w,\mathsf {hrs})\), where \(\mathsf {hrs} \) is derived from a, the lossy encryption of (x,w). From the perfect correctness of \(\mathsf {FE} \), we have that indeed \((T\oplus \mathsf {crs})_\mathcal {I} =\mathsf {hrs} _{\mathcal {I}}\). Therefore, from the perfect completeness of \(\mathsf {NIZK}_{H} \), it follows that \(\mathsf {V}_{H} (\mathcal {I},(T\oplus \mathsf {crs})_\mathcal {I},x,\pi )\) accepts with probability 1.

Case 2: \(\mathbf {\mathsf {CRS} \leftarrow _{\textsc {r}}\mathsf {DM\text {-}NIWI}.\mathsf {Setup} (1^{\lambda },\mathrm {hiding})}\) Since \((C,\pi )\) is a proof computed by the honest prover, we know that \((\mathsf {hrs} _\mathcal {I},\pi ,\mathcal {I}) \leftarrow \mathsf {S}_{H} (x;r_3)\), where \(r_3\) is the random tape used by the hidden-bits simulator \(\mathsf {S}_{H} \). By the perfect correctness of \(\mathsf {FE} \), decrypting C indeed yields \(\mathsf {hrs} _\mathcal {I} \oplus \mathsf {crs} _\mathcal {I} \), so we can recover \(\mathsf {hrs} _\mathcal {I} \). Now, since \(\mathsf {NIZK}_{H} \) has perfect zero-knowledge, it follows that \(\mathsf {V}_{H} (\mathcal {I},(T\oplus \mathsf {crs})_\mathcal {I},x,\pi )\) accepts with probability 1 (otherwise simulated proofs would not be identically distributed to real ones).

4.2 Soundness

Theorem 14

When in binding mode, the \(\mathsf {DM\text {-}NIWI} \) system in Fig. 5 is statistically sound.

Proof

Here we use the soundness of the hidden-bits scheme, coupled with the lossiness of function \(\mathsf {H} \).

Since \(\mathsf {crs} \) is uniformly random, computing \(\mathsf {hrs}:=\mathsf {H} (\mathsf {PRF} (K_1,a))\oplus \mathsf {crs} \) yields another uniformly random string, which allows us to use the soundness of the hidden-bits system. Moreover, we leverage the lossiness of \(\mathsf {H} \) to ensure that an adversary cannot influence the \(\mathsf {hrs} \) enough to cheat. This is because the honest verifier applies \(\mathsf {H} \) automatically when it functionally decrypts the ciphertext C.

More formally, fix some \(x\in \{0,1\} ^{n}\setminus \mathcal {L}\). We prove that with overwhelming probability over the common reference string, there is no proof \(\varPi \) which will be accepted by the verifier. This is a selective notion which we later amplify to obtain the security notion from Definition 10.

We want to bound \(\Pr _{(\mathsf {CRS},\mathsf {td}_{\mathsf {ext}})\leftarrow _{\textsc {r}} \mathsf {Setup} (1^{\lambda },\mathrm {binding})}[\exists \varPi :\mathsf {Verifier} (\varPi ,\mathsf {CRS})=1]\). We can rewrite this probability as:

$$\begin{aligned} \mathop {\Pr }\limits _{\begin{array}{l}Z\leftarrow _{\textsc {r}}\{0,1\} ^{2\lambda +|x|}\\ \mathsf {crs} \leftarrow _{\textsc {r}}\{0,1\} ^{{t(|x|,2\lambda +|x|)}}\\ \mathsf {H},\mathsf {PC},\mathsf {fmpk},\mathsf {fmsk},\mathsf {sk} _{\mathsf {f}}\end{array}}[\exists (\pi ,C):\mathsf {Verifier} ((\pi ,C),(\mathsf {H},\mathsf {fmpk},\mathsf {lpk},\mathsf {sk} _{\mathsf {f}},\mathsf {crs},Z,\mathsf {PC}))=1] \end{aligned}$$

Now, we condition on the event E that Z does not have a \(\mathsf {PRG} \) preimage, which happens with probability at least \(1-\frac{1}{2^{\lambda +|x|}}\). From the functionality of \(\mathsf {iO} \) and the special correctness of the \(\mathsf {FE} \) scheme (see Definition 5), the adversary must produce a ciphertext which decrypts to a value in the range of the function f. If Z has no preimage, then being in the range of the function f is equivalent to being of the form \(\mathsf {H} (X)_{\mathcal {I}}\), for some X (recall that \(\mathsf {H} (X)_{\mathcal {I}}\) denotes the subset \(\mathcal {I} \) of the bits of \(\mathsf {H} (X)\)). Note that both the functional equivalence of \(\mathsf {iO} \) and the special correctness of the functional encryption scheme are statistical properties. Therefore, the probability above is at most:

$$\begin{aligned} \mathop {\Pr }\limits _{\begin{array}{l} \mathsf {crs} \leftarrow _{\textsc {r}}\{0,1\} ^{{t(|x|,2\lambda +|x|)}}\\ \end{array}}[\exists (\pi ,X,\mathcal {I}):\mathsf {V}_{H} (x,(\mathsf {crs} \oplus \mathsf {H} (X))_\mathcal {I},\mathcal {I},\pi )=1] \end{aligned}$$

The next step is to bound the number of possible values of \(\mathsf {hrs} \). Recall that \(\mathsf {hrs}:=\mathsf {H} (\mathsf {PRF} (K_1,a))\oplus \mathsf {crs} \). From the lossiness of \(\mathsf {H} \), we know that there are at most \(2^{k}\) images of \(\mathsf {H} \), where k is the second parameter of \(\mathsf {H} \) (see Definition 2). Thus, we can compute a union bound over all these images \(\mathsf {H} (X)\), bounding the above probability by:

$$\begin{aligned} 2^{k}\times \mathop {\Pr }\limits _{\begin{array}{l} \mathsf {crs} \leftarrow _{\textsc {r}}\{0,1\} ^{{t(|x|,2\lambda +|x|)}}\\ \end{array}}[\exists (\pi ,\mathcal {I}):\mathsf {V}_{H} (x,(\mathsf {crs} \oplus \mathsf {H} (X))_\mathcal {I},\mathcal {I},\pi )=1] \end{aligned}$$

Now, recall that we denote \(\mathsf {crs} \oplus H(X)\) as \(\mathsf {hrs} \). Since \(\mathsf {crs} \) is uniformly randomly distributed, so is \(\mathsf {hrs} \), and we can rewrite the probability above as:

$$\begin{aligned} 2^{k}\times \mathop {\Pr }\limits _{\begin{array}{l} \mathsf {hrs} \leftarrow _{\textsc {r}}\{0,1\} ^{{t(|x|,2\lambda +|x|)}}\\ \end{array}}[\exists (\pi ,\mathcal {I}):\mathsf {V}_{H} (x,\mathsf {hrs} _\mathcal {I},\mathcal {I},\pi )=1] \end{aligned}$$

Finally, by using the soundness of the hidden-bits \(\mathsf {NIZK} \), we know that:

$$\begin{aligned} \mathop {\Pr }\limits _{\begin{array}{l} \mathsf {hrs} \leftarrow _{\textsc {r}}\{0,1\} ^{{t(|x|,2\lambda +|x|)}}\\ \end{array}}[\exists (\pi ,\mathcal {I}):\mathsf {V}_{H} (x,\mathsf {hrs} _\mathcal {I},\mathcal {I},\pi )=1]\le \frac{1}{2^{2\lambda +|x|}} \end{aligned}$$

Therefore, we can conclude that:

$$\begin{aligned} \mathop {\Pr }\nolimits _{(\mathsf {CRS},\mathsf {td}_{\mathsf {ext}})\leftarrow _{\textsc {r}} \mathsf {Setup} (1^{\lambda },\mathrm {binding})}[\exists \varPi :\mathsf {Verifier} (\varPi ,\mathsf {CRS})=1]\le \frac{1}{2^{2\lambda +|x|-k}}. \end{aligned}$$

The only remaining step is to amplify the security from the selective variant we have just proven to the adaptive one from Definition 10. We eliminate the restriction that x is fixed by computing a union bound over all possible values of x. In particular, for \(\mathsf {H} \) parameter \(k=\lambda \), we conclude that for every unbounded adversary \(\mathcal {A} \):

$$\begin{aligned} \Pr \bigg [\begin{array}{l} (\mathsf {CRS},\mathsf {td}_{\mathsf {ext}})\leftarrow _{\textsc {r}}\mathsf {Setup} (1^{\lambda },\mathrm {binding}), \\ (x,\varPi )\leftarrow _{\textsc {r}}\mathcal {A} (\mathsf {CRS}) \end{array} : \mathsf {Verifier} (\mathsf {CRS}, x, \varPi )=1 \wedge x\notin \mathcal {L}_{R}\bigg ]\le \frac{1}{2^{\lambda }}\text {.} \end{aligned}$$

As a last check, we must ensure that event \(\lnot E\) still happens with negligible probability. If we compute the same union bound as above, the probability of \(\lnot E\) is now bounded by \(\frac{1}{2^\lambda }\). Therefore, the system is statistically sound.
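The chain of union bounds in this proof can be sanity-checked with exact arithmetic; the sizes below are illustrative toy parameters only.

```python
from fractions import Fraction

lam, n, k = 8, 16, 8   # toy sizes: security parameter, |x|, lossiness parameter (k = lam)

# Soundness of the hidden-bits NIZK for a fixed x: at most 2^-(2*lam + n).
per_hrs = Fraction(1, 2 ** (2 * lam + n))
# Union bound over the at most 2^k images of the lossy function H.
fixed_x = 2 ** k * per_hrs
assert fixed_x == Fraction(1, 2 ** (2 * lam + n - k))
# Union bound over all 2^n statements x, with k = lam.
adaptive = 2 ** n * fixed_x
assert adaptive == Fraction(1, 2 ** lam)
```

The final assertion mirrors the conclusion above: with \(k=\lambda \), the adaptive soundness error is \(2^{|x|}\cdot 2^{-(2\lambda +|x|-k)}=2^{-\lambda }\).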

4.3 Witness Indistinguishability

Theorem 15

In hiding mode, the \(\mathsf {DM\text {-}NIWI}\) system from Fig. 5 is statistically witness-indistinguishable.

Proof

By using the statistical lossiness of \(\mathsf {LE}\), we show that no (potentially unbounded) adversary \(\mathcal {A} \) can break the witness-indistinguishability of \(\mathsf {DM\text {-}NIWI}\). Recall that the lossiness of \(\mathsf {LE}\) implies that for all \((\mathsf {lpk}, \bot ) \leftarrow \mathsf {LE}.\mathsf {Gen} (1^{\lambda }, \mathrm {lossy})\), and for all \(x, w_0, w_1\), encryptions of \((x,w_0)\) are statistically indistinguishable from encryptions of \((x,w_1)\). More formally, \(D_0\approx _{\frac{1}{2^\lambda }}D_1\), where for \(b\in \{0,1\} \):

$$\begin{aligned} D_b:=\big \{(\mathsf {lpk},\mathsf {ct}) : (\mathsf {lpk},\bot )\leftarrow _{\textsc {r}}\mathsf {LE}.\mathsf {Gen} (1^{\lambda },\mathrm {lossy}),\ \mathsf {ct} \leftarrow _{\textsc {r}}\mathsf {LE}.\mathsf {Enc} (\mathsf {lpk},(x,w_b))\big \}. \end{aligned}$$

The goal is to show that for every hiding \(\mathsf {CRS} \) and for every \((x,w_0,w_1)\), with both \((x,w_0)\in R\) and \((x,w_1)\in R\), proofs for \((x,w_0)\) are statistically indistinguishable from proofs for \((x,w_1)\). Fix \((x,w_0,w_1)\) and let \(D'_b\) be the following distribution:

$$\begin{aligned} D'_b:=\big \{(\mathsf {CRS},C,\pi ) : (C,\pi )\leftarrow _{\textsc {r}}\mathsf {DM\text {-}NIWI}.\mathsf {Prove} (\mathsf {CRS},x,w_b)\big \}. \end{aligned}$$

(1)

We want to prove that \(D'_0 \approx _{\frac{1}{2^\lambda }} D'_1\). To achieve this, we exhibit a probabilistic function F which transforms samples of \(D_b\) into samples of \(D'_b\), i.e. \(F(D_b)=D'_b\), without needing to know the bit b. If such an F exists, then \(D_0\approx _{\frac{1}{2^\lambda }}D_1\) implies that \(F(D_0)\approx _{\frac{1}{2^\lambda }}F(D_1)\), since applying a (randomized) function cannot increase statistical distance. Function F works as follows:

  1. 1.

    F obtains the public key \(\mathsf {lpk} \) from \(D_b\). Then F essentially computes \(\mathsf {DM\text {-}NIWI}.\mathsf {Setup} (1^{\lambda })\) and chooses all the parameters itself, except for \(\mathsf {lpk} \), which comes from \(D_b\).

    In more detail, F chooses the \(\mathsf {PRG} \), a dense function \(\mathsf {H} \), keys \(K_1,K_2,K_3\), master keys \((\mathsf {fmpk},\mathsf {fmsk})\) and functional key \(\mathsf {sk} _{\mathsf {f}}\) just as in \(\mathsf {DM\text {-}NIWI}.\mathsf {Setup} (1^{\lambda })\). It also draws uniformly random strings z and \(\mathsf {crs} \). It then sets \(Z=\mathsf {PRG} (z)\) and uses all these parameters to construct program \(\mathsf {Prog Prov} _{\mathrm {hiding},\mathsf {crs}}\), which it obfuscates obtaining \(\mathsf {PC} \).

  2. 2.

    For hiding \(\mathsf {CRS}\), we have that \(\mathsf {PC} \) obfuscates \(\mathsf {Prog Prov} _{\mathrm {hiding},\mathsf {crs}}\). Therefore, F can compute the output of \(\mathsf {DM\text {-}NIWI}.\mathsf {Prove} (\mathsf {CRS},x,w_b)\) even without knowing bit b: F has access to ciphertext \(\mathsf {ct} \) from distribution \(D_b\). Ciphertext \(\mathsf {ct} \) can originate from either \((x,w_0)\) or \((x,w_1)\). F simply computes \((C,\pi )\leftarrow _{\textsc {r}}\mathsf {Hiding Proof} _\mathsf {crs} (x,\mathsf {ct})\) and uses \((C,\pi )\) to construct distribution \(D'_b\). Observe that this is only possible because \(\mathsf {Hiding Proof} _{\mathsf {crs}}(x,\mathsf {ct})\) crucially only has x and \(\mathsf {ct} \) as inputs and does not directly depend on witnesses \(w_0,w_1\) themselves.

We have shown that \(F(D_0)\approx _{\frac{1}{2^\lambda }}F(D_1)\), for every \((x,w_0,w_1)\) and for all hiding \(\mathsf {CRS} \leftarrow _{\textsc {r}}\mathsf {DM\text {-}NIWI}.\mathsf {Setup} (1^{\lambda },\mathrm {hiding})\). This concludes witness-indistinguishability as defined in Definition 10. (In Definition 10, the adversary can choose \((x,w_0,w_1)\) after seeing the \(\mathsf {CRS} \), but since \(F(D_0)\approx _{\frac{1}{2^\lambda }}F(D_1)\) for every \((x,w_0,w_1)\) and for every hiding \(\mathsf {CRS} \), the adversary cannot have advantage greater than \(\frac{1}{2^\lambda }\).)
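The step from \(D_0\approx D_1\) to \(F(D_0)\approx F(D_1)\) relies on the data-processing fact that a (randomized) function cannot increase statistical distance. This can be checked exactly on a toy example (the distributions and the map F below are chosen arbitrarily for illustration):

```python
from fractions import Fraction

def stat_dist(P, Q):
    """Exact statistical distance between two finite distributions."""
    keys = set(P) | set(Q)
    return sum(abs(P.get(x, Fraction(0)) - Q.get(x, Fraction(0))) for x in keys) / 2

def push_forward(P, F):
    """Apply a randomized function F (x -> dict of output probabilities) to P."""
    out = {}
    for x, px in P.items():
        for y, pyx in F(x).items():
            out[y] = out.get(y, Fraction(0)) + px * pyx
    return out

# Toy distributions D0, D1 and a randomized map F.
D0 = {0: Fraction(1, 2), 1: Fraction(1, 2)}
D1 = {0: Fraction(1, 4), 1: Fraction(3, 4)}
F = lambda x: {x: Fraction(2, 3), 2: Fraction(1, 3)}   # keep x w.p. 2/3, erase to 2 w.p. 1/3

before = stat_dist(D0, D1)
after = stat_dist(push_forward(D0, F), push_forward(D1, F))
assert after <= before   # data-processing: F cannot increase statistical distance
```

In the proof above, F is the reduction that embeds \((\mathsf {lpk},\mathsf {ct})\) into a full \((\mathsf {CRS},C,\pi )\) tuple; the inequality holds for any such randomized map.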

4.4 CRS Indistinguishability

Theorem 16

The \(\mathsf {DM\text {-}NIWI} \) system from Fig. 5 satisfies computational indistinguishability between common reference strings generated in binding mode and common reference strings generated in hiding mode.

Proof

The proof proceeds by a sequence of games, where \(\mathsf {G} _0\) is defined exactly as \({\mathsf {Exp}\text {-}\mathsf {CRS}\text {-}\mathsf {IND}} _{0}(1^{\lambda },{\mathcal {A}})\) (see Fig. 4). \(\mathsf {G} _0\) corresponds to the experiment in which the adversary \(\mathcal {A} \) against CRS indistinguishability receives common reference strings in binding mode. A high-level summary is provided in Fig. 6. For any game \(\mathsf {G} _i\), we denote by \(\mathsf {Adv} _i({\mathcal {A}})\) the advantage of \(\mathcal {A} \) in \(\mathsf {G} _i\), that is, \(\Pr [\mathsf {G} _i(1^{\lambda },\mathcal {A}) = 1]\), where the probability is taken over the random coins of \(\mathsf {G} _i\) and \(\mathcal {A} \). At a high level, we use four hybrid games \(\mathsf {G} _0, \mathsf {G} _1,\mathsf {G} _2\) and \(\mathsf {G} _3\). The proof is in three phases:

  1. 1.

    In the first phase, we transition from \(\mathsf {G} _{0}\) to \(\mathsf {G} _{1}\). Game \(\mathsf {G} _1\) is defined to be the same as \(\mathsf {G} _0\), except for the following two changes. First, we switch the mode of the lossy function \(\mathsf {H} \) from lossy to dense. This is done with the end goal of ensuring that the output of \(\mathsf {H} \) is uniformly distributed at specific values of a. Second, we use the security of the \(\mathsf {PRG}\) to change Z from being uniformly random to being in the image of the \(\mathsf {PRG} \), by setting \(Z=\mathsf {PRG} (z)\). To anticipate, this will provide us with a trapdoor for replacing functional ciphertexts encoding X with ciphertexts encoding \(\mathsf {hrs} _{\mathcal {I}}\). The fact that \(\mathsf {G} _0\approx _{\textsc {c}}\mathsf {G} _1\) is proven in Lemma 17.

  2. 2.

    In the second phase, we transition from \(\mathsf {G} _1\) to \(\mathsf {G} _2\). Game \(\mathsf {G} _2\) is defined to be precisely the same as \(\mathsf {G} _1\), except that \(\mathsf {DM\text {-}NIWI}.\mathsf {Setup} (1^{\lambda })\) computes \(\mathsf {PC} =\mathsf {iO} (\mathsf {Prog Prov} _{\mathrm {hiding},\mathsf {crs}})\). This transition only makes changes in the program \(\mathsf {Prog Prov} \). Iterating over all values of a, we replace, for each a, real proofs by simulated proofs from the hidden-bits simulator \(\mathsf {S}_{H} \).

    We carefully leverage \(\mathsf {PRF}\) security, the injective mode of \(\mathsf {LE}\) and the density of \(\mathsf {H}\) to ensure that for a specific \({a^*}\), its corresponding \(\mathsf {hrs} ^*\) is of the form \(\beta \oplus \mathsf {crs} \), for uniformly random \(\beta \). Then we use functional encryption security to replace the functional ciphertext corresponding to \({a^*}\) with one which only leaks \(\mathsf {hrs} _{\mathcal {I}}\). At this stage, since only \(\mathsf {hrs} _\mathcal {I} \) is encoded in the ciphertext, we can use the zero knowledge of the hidden-bits NIZK to replace real proofs by simulated ones. We formally prove that \(\mathsf {G} _1\approx _{\textsc {c}}\mathsf {G} _2\) in Theorem 19.

  3. 3.

    In the third phase, we define \(\mathsf {G} _{3}\) to be the same as \({\mathsf {Exp}\text {-}\mathsf {CRS}\text {-}\mathsf {IND}} _{1}(1^{\lambda },{\mathcal {A}})\). The only difference between \(\mathsf {G} _{2}\) and \(\mathsf {G} _{3}\) is that in the latter, the public key of the lossy encryption scheme \(\mathsf {LE} \) is switched from injective to lossy mode. We prove that \(\mathsf {G} _2\approx _{\textsc {c}}\mathsf {G} _3\) in Lemma 18.

Lemma 17

(From \(\mathsf {G} _0\) to \(\mathsf {G} _1\)). For every PPT adversary \(\mathcal {A}\), it holds that \(|\mathsf {Adv} _{0}({\mathcal {A}})-\mathsf {Adv} _{1}({\mathcal {A}})|\le {{\,\mathrm{\mathsf {negl}}\,}}(\lambda )\).

Proof

The only differences between \(\mathsf {G} _0\) and \(\mathsf {G} _1\) are the fact that Z is changed from \(Z\leftarrow _{\textsc {r}}\{0,1\} ^{{2\lambda +|x|}}\) to \(Z\leftarrow \mathsf {PRG} (z)\) and function \(\mathsf {H} \) is changed from \(\mathsf {H} \leftarrow \mathsf {LF}.\mathsf {Setup} (1^{\lambda },\mathrm {lossy})\) to \(\mathsf {H} \leftarrow \mathsf {LF}.\mathsf {Setup} (1^{\lambda },\text {dense})\). The lemma follows from the security of the \(\mathsf {PRG}\) and from the computational indistinguishability of the modes of the lossy function \(\mathsf {LF}\). Namely, if \({\mathcal {A}}\) can distinguish between \(\mathsf {G} _0\) and \(\mathsf {G} _1\), there exists either a PPT adversary \(\mathcal {B}_1\) that can break the security of the \(\mathsf {PRG} \) or a PPT adversary \(\mathcal {B}_2\) that can distinguish with non-negligible advantage between the lossy and dense modes of \(\mathsf {LF} \).

Fig. 6.

An overview of the games used in the proof of Theorem 16, changes between consecutive games are highlighted with gray boxes.

Lemma 18

(From \(\mathsf {G} _2\) to \(\mathsf {G} _3\)). For every PPT adversary \(\mathcal {A}\), it holds that \(|\mathsf {Adv} _{2}({\mathcal {A}})-\mathsf {Adv} _{3}({\mathcal {A}})|\le {{\,\mathrm{\mathsf {negl}}\,}}(\lambda )\).

Proof

The only change between \(\mathsf {G} _2\) and \(\mathsf {G} _3\) is that the \((\mathsf {lpk},\mathsf {lsk})\) keys of \(\mathsf {LE} \) are changed from injective to lossy. The lemma follows directly from the fact that \(\{\mathrm {proj}(\mathsf {LE}.\mathsf {Gen} (1^{\lambda },\mathrm {inj}))\}\approx _{\textsc {c}}\{\mathrm {proj}(\mathsf {LE}.\mathsf {Gen} (1^{\lambda },\mathrm {lossy}))\}\), where \(\mathrm {proj}: (\mathsf {lpk}, \mathsf {lsk}) \rightarrow \mathsf {lpk} \) and from the fact that \(\mathsf {lsk} \) is not used anywhere in the construction.

Theorem 19

(From \(\mathsf {G} _1\) to \(\mathsf {G} _2\)). For every PPT adversary \(\mathcal {A}\), there exist PPT adversaries \(\mathcal {B}_1,\mathcal {B}_2,\mathcal {B}_3\), such that:

$$\begin{aligned}&|\mathsf {Adv} _{1}({\mathcal {A}})-\mathsf {Adv} _{2}({\mathcal {A}})|\le 2^{p(|x|+\lambda )}\big (8\cdot \mathsf {Adv} ^{\mathsf {iO}}({\kappa },\mathcal {B}_1)+4\cdot \mathsf {Adv} _{\mathsf {s}\text {-}\mathsf {cPRF}}({\kappa },\mathcal {B}_2)+\\&\qquad \mathsf {Adv} ^{\mathsf {FE}}_{{\mathsf {Exp}\text {-}s\text {-}\mathsf {IND}\text {-}\mathsf {FE}\text {-}\mathsf {CPA}}}({\kappa },\mathcal {B}_3)+\varDelta _{\mathrm {Zero Knowledge}}^{\mathsf {NIZK}_{H}}(\lambda )+\frac{1}{2^{{{p(|x|+\lambda )}+\lambda }}}\big ). \end{aligned}$$
Fig. 7. Hybrid \(\mathsf {H} _{(1,{a^*})}\) for the proofs of Theorems 19 and 20. Note that the \(\mathsf {Prover}\), \(\mathsf {Verifier}\), \(\mathsf {Binding Proof}\), \(\mathsf {Hiding Proof}\) and function \(\mathsf {f}\) are the same as defined in Fig. 5 and are not represented again for succinctness. Changes between hybrid \(\mathsf {H} _{(1,{a^*})}\) and game \(\mathsf {G} _1\) are highlighted in light gray.

Proof

The proof strategy is to iterate over all values of \(a=\mathsf {LE}.\mathsf {Enc} (\mathsf {lpk},(x,w),r)\) and make changes to the obfuscation of the program \(\mathsf {Prog Prov} \). We define a series of hybrids \(\mathsf {H} _{(1,{a^*})}\), for all \({a^*}\in \{0,1\} ^{p(|x|+\lambda )}\), in Fig. 7. Briefly, hybrid \(\mathsf {H} _{(1,{a^*})}\) is defined in the same way as game \(\mathsf {G} _1\), except that:

  1.

    \(\mathsf {DM\text {-}NIWI}.\mathsf {Setup} \) is changed such that the computation of the public parameter \(\mathsf {PC} =\mathsf {iO} (\mathsf {Prog Prov} _{\mathrm {binding},\mathsf {crs}})\) is replaced by \(\mathsf {PC} =\mathsf {iO} (\mathsf {Prog Prov} _{1,{a^*}})\).

  2.

    Program \(\mathsf {Prog Prov} _{1,{a^*}}\), on inputs \((x,w,r)\), first computes \(a=\mathsf {LE}.\mathsf {Enc} (\mathsf {lpk},(x,w),r)\). It then compares a with the hardcoded value \({a^*}\): for \(a<{a^*}\), it computes \((C,\pi )=\mathsf {Hiding Proof} _{\mathsf {crs}}(x,a)\), while for \(a\ge {a^*}\) it computes \((C,\pi )=\mathsf {Binding Proof} _{\mathsf {crs}}(x,w,a)\). It then returns the proof \((C,\pi )\).
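The threshold behaviour of \(\mathsf {Prog Prov} _{1,{a^*}}\) can be sketched as follows. All concrete choices here are hypothetical simplifications: the 16-bit ciphertext space, the hash-based stand-in for \(\mathsf {LE}.\mathsf {Enc} \), and the stub proof subroutines.

```python
import hashlib

A_BITS = 16  # toy stand-in for p(|x| + lambda)


def le_enc(lpk: bytes, x: bytes, w: bytes, r: bytes) -> int:
    """Toy stand-in for LE.Enc: deterministically maps (x, w, r) to an A_BITS-bit value."""
    h = hashlib.sha256(b"|".join([lpk, x, w, r])).digest()
    return int.from_bytes(h[:2], "big")


def hiding_proof(x: bytes, a: int):
    return ("hiding", x, a)  # stub for HidingProof_crs: the simulated branch


def binding_proof(x: bytes, w: bytes, a: int):
    return ("binding", x, w, a)  # stub for BindingProof_crs: the real-proof branch


def prog_prov(a_star: int, lpk: bytes, x: bytes, w: bytes, r: bytes):
    """Sketch of ProgProv_{1,a*}: branch on the ciphertext a vs the hardcoded a*."""
    a = le_enc(lpk, x, w, r)
    if a < a_star:
        return hiding_proof(x, a)
    return binding_proof(x, w, a)
```

With \(a^*=0\) every input takes the binding branch, and with \(a^*=2^{A\_BITS}\) every input takes the hiding branch, matching the endpoints \(\mathsf {G} _1\) and \(\mathsf {G} _2\) of the hybrid chain.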

Note that hybrid \(\mathsf {H} _{(1,0^{p(|x|+\lambda )})}\) is the same as game \(\mathsf {G} _{1}\), while hybrid \(\mathsf {H} _{(1,1^{p(|x|+\lambda )})}\) is the same as game \(\mathsf {G} _{2}={\mathsf {Exp}\text {-}\mathsf {CRS}\text {-}\mathsf {IND}} _{1}(1^{\lambda },{\mathcal {A}})\). Just as before, for every hybrid \(\mathsf {H} _i\), we denote by \(\mathsf {Adv} _i({\mathcal {A}})\) the advantage of \(\mathcal {A} \) in \(\mathsf {H} _i\), that is, \(\Pr [\mathsf {H} _i(1^{\lambda },\mathcal {A}) = 1]\). In Theorem 20, we formally prove that every two consecutive hybrids \(\mathsf {H} _{(1,{a^*})}\) and \(\mathsf {H} _{(1,({a^*}+1))}\) are computationally indistinguishable, i.e. \(\mathsf {H} _{(1,{a^*})}\approx _{\textsc {c}}\mathsf {H} _{(1,({a^*}+1))}\), for every \({a^*}\in [2^{p(|x|+\lambda )}]\).
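The passage from Theorem 20 to Theorem 19 is then a standard telescoping bound: identifying each string \({a^*}\) with an integer in \([2^{p(|x|+\lambda )}]\), the triangle inequality gives

```latex
\begin{aligned}
|\mathsf{Adv}_{1}(\mathcal{A})-\mathsf{Adv}_{2}(\mathcal{A})|
&= |\mathsf{Adv}_{(1,0^{p(|x|+\lambda)})}(\mathcal{A})-\mathsf{Adv}_{(1,1^{p(|x|+\lambda)})}(\mathcal{A})| \\
&\le \sum_{{a^*}} |\mathsf{Adv}_{(1,{a^*})}(\mathcal{A})-\mathsf{Adv}_{(1,({a^*}+1))}(\mathcal{A})| \\
&\le 2^{p(|x|+\lambda)} \cdot \big(\text{per-step bound of Theorem 20}\big),
\end{aligned}
```

which is exactly the factor \(2^{p(|x|+\lambda )}\) appearing in the statement of Theorem 19.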

Theorem 20

(From \(\mathsf {H} _{(1,{a^*})}\) to \(\mathsf {H} _{(1,({a^*}+1))}\)). For every PPT adversary \(\mathcal {A}\), there exist PPT adversaries \(\mathcal {B}_1,\mathcal {B}_2,\mathcal {B}_3\), such that:

$$\begin{aligned}&|\mathsf {Adv} _{(1,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(1,({a^*}+1))}({\mathcal {A}})|\le 8\cdot \mathsf {Adv} ^{\mathsf {iO}}({\kappa },\mathcal {B}_1)+4\cdot \mathsf {Adv} _{\mathsf {s}\text {-}\mathsf {cPRF}}({\kappa },\mathcal {B}_2)+ \\&\qquad \mathsf {Adv} ^{\mathsf {FE}}_{{\mathsf {Exp}\text {-}s\text {-}\mathsf {IND}\text {-}\mathsf {FE}\text {-}\mathsf {CPA}}}({\kappa },\mathcal {B}_3)+\varDelta _{\mathrm {Zero Knowledge}}^{\mathsf {NIZK}_{H}}(\lambda )+\frac{1}{2^{{{p(|x|+\lambda )}+\lambda }}}. \end{aligned}$$
Fig. 8. Hybrids \(\mathsf {H} _{(i,{a^*})}\) for the proofs of Theorems 19 and 20. Note that the \(\mathsf {Prover}\), \(\mathsf {Verifier}\), \(\mathsf {Binding Proof}\), \(\mathsf {Hiding Proof}\) and function \(\mathsf {f}\) are the same as defined in Fig. 5 and are not represented again for succinctness. For \(i=1\), subprogram \(\mathsf {Hybrid Proof} _{1,{a^*},\mathsf {crs}}=\mathsf {Binding Proof} _{\mathsf {crs}}\) and for \(i=15\), \(\mathsf {Hybrid Proof} _{15,{a^*},\mathsf {crs}}=\mathsf {Hiding Proof} _{\mathsf {crs}}\). All \(\mathsf {Prog Prov} _{i,{a^*},\mathsf {crs}}(x,w,r)\) are padded so that they have equal sizes.

Proof

We prove this through a sequence of hybrids \(\mathsf {H} _{(1,{a^*})}\) up to \(\mathsf {H} _{(15,{a^*})}\), where hybrid \(\mathsf {H} _{(15,{a^*})}\) is identical to hybrid \(\mathsf {H} _{(1,({a^*}+1))}\). In terms of notation, hybrid \(\mathsf {H} _{(i,{a^*})}\) will have \(\mathsf {PC} =\mathsf {iO} (\mathsf {Prog Prov} _{i,{a^*},\mathsf {crs}})\). The proof strategy is to leverage the properties of \(\mathsf {iO},\mathsf {FE},\mathsf {PRF} s,\mathsf {LE} \) and \(\mathsf {H} \) in order to replace the actual proofs computed by the hidden-bits prover \(\mathsf {P}_{H}\) with simulated proofs computed by \(\mathsf {S}_{H}\). Notice that in \(\mathsf {H} _{(1,{a^*})}\), proofs corresponding to a are computed by subprogram \(\mathsf {Binding Proof} _{\mathsf {crs}}(x,w,a)\), while in \(\mathsf {H} _{(15,{a^*})}\) they are computed by subprogram \(\mathsf {Hiding Proof} _{\mathsf {crs}}(x,a)\). This is the only difference between the two hybrids. In order to replace subprogram \(\mathsf {Binding Proof} _{\mathsf {crs}}()\) with \(\mathsf {Hiding Proof} _{\mathsf {crs}}()\), we define a series of subprograms \(\mathsf {Hybrid Proof} _{i,{a^*},\mathsf {crs}}\), for \(i\in [15]\). As expected, every hybrid \(\mathsf {H} _{(i,{a^*})}\) will be defined to be identical to \(\mathsf {H} _{(1,{a^*})}\), except that for \(a={a^*}\), \((C,\pi )=\mathsf {Hybrid Proof} _{i,{a^*},\mathsf {crs}}(x,w,a)\). The hybrids are described in Fig. 8. For a detailed description of subprograms \(\mathsf {Hybrid Proof} _{i,{a^*},\mathsf {crs}}\), see Fig. 9; more figures are provided in the full version.
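Summing the per-step losses of Lemmas 21 through 34 across the chain \(\mathsf {H} _{(1,{a^*})},\ldots ,\mathsf {H} _{(15,{a^*})}\) recovers the bound of Theorem 20 term by term; the eighth \(\mathsf {iO} \) term and the \(\varDelta _{\mathrm {Zero Knowledge}}\) term together cover the two cases of Lemma 28:

```latex
\begin{aligned}
|\mathsf{Adv}_{(1,{a^*})}(\mathcal{A})-\mathsf{Adv}_{(15,{a^*})}(\mathcal{A})|
&\le \sum_{i=1}^{14} |\mathsf{Adv}_{(i,{a^*})}(\mathcal{A})-\mathsf{Adv}_{(i+1,{a^*})}(\mathcal{A})| \\
&\le 8\cdot \mathsf{Adv}^{\mathsf{iO}}(\kappa,\mathcal{B}_1)
 + 4\cdot \mathsf{Adv}_{\mathsf{s}\text{-}\mathsf{cPRF}}(\kappa,\mathcal{B}_2)
 + \mathsf{Adv}^{\mathsf{FE}}_{\mathsf{Exp}\text{-}s\text{-}\mathsf{IND}\text{-}\mathsf{FE}\text{-}\mathsf{CPA}}(\kappa,\mathcal{B}_3) \\
&\quad + \varDelta_{\mathrm{Zero Knowledge}}^{\mathsf{NIZK}_{H}}(\lambda)
 + \frac{1}{2^{p(|x|+\lambda)+\lambda}}.
\end{aligned}
```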

Hybrid \(\mathsf {H} _{(2,{a^*})}\). In this hybrid, the subprogram \(\mathsf {Hybrid Proof} _{2,{a^*},\mathsf {crs}}\) is changed so that key \(K_1\) is punctured at point \({a^*}\). This is a standard punctured-programming technique. Once we puncture the key, only \(K_1\{{a^*}\}\) is hardcoded in the program, along with the evaluation \(r_1^*\leftarrow \mathsf {PRF} (K_1,{a^*})\), but not \(K_1\) itself. Observe that key \(K_1\) is punctured in \(\mathsf {Prog Prov} _{2,{a^*},\mathsf {crs}}\) and in all its subprograms as well. In \(\mathsf {H} _{(i,{a^*})}\), for \(i\in [15]\), the subprograms \(\mathsf {Binding Proof} _{\mathsf {crs}}(x,w,a)\) and \(\mathsf {Hiding Proof} _{\mathsf {crs}}(x,a)\) are never called on input \(a={a^*}\), so they never need the evaluation \(\mathsf {PRF} (K_1,{a^*})\).

This puncturing can be done since \({a^*}\) is a parameter of the hybrid (we are enumerating over all values of a). Since the programs are functionally equivalent, this change is computationally indistinguishable by the security of \(\mathsf {iO}\). Observe that when we hardcode a value in a subprogram \(\mathsf {Hybrid Proof} _{i,{a^*},\mathsf {crs}}\), it is understood that this value is also hardcoded in \(\mathsf {Prog Prov} _{i,{a^*},\mathsf {crs}}\). A full description of \(\mathsf {Hybrid Proof} _{2,{a^*},\mathsf {crs}}\) can be found in Fig. 9. This shows the following lemma:
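The puncturing step can be illustrated with a GGM-style puncturable PRF. The SHA-256 tree, the 16-bit domain, and the key layout below are a hypothetical toy, not the paper's \(\mathsf {s}\text {-}\mathsf {cPRF}\); it only shows why a punctured key plus one hardcoded value reproduces the full functionality.

```python
import hashlib

DEPTH = 16  # toy input length in bits


def _g(node: bytes, bit: int) -> bytes:
    """Length-doubling step of the GGM tree: child = H(node || bit)."""
    return hashlib.sha256(node + bytes([bit])).digest()


def prf(key: bytes, x: int) -> bytes:
    """Full evaluation: walk the tree along the bits of x, MSB first."""
    node = key
    for i in reversed(range(DEPTH)):
        node = _g(node, (x >> i) & 1)
    return node


def puncture(key: bytes, a_star: int) -> dict:
    """Punctured key K{a*}: the siblings of every node on the path to a_star."""
    punctured = {}
    node = key
    for i in reversed(range(DEPTH)):
        bit = (a_star >> i) & 1
        punctured[(i, 1 - bit)] = _g(node, 1 - bit)
        node = _g(node, bit)
    return punctured


def prf_punctured(pkey: dict, a_star: int, x: int) -> bytes:
    """Evaluate with K{a*}: agrees with prf(key, x) on every x != a_star."""
    assert x != a_star, "the punctured key carries no information about a_star"
    # Highest bit where x diverges from a_star; enter the tree at that sibling.
    i = next(i for i in reversed(range(DEPTH)) if ((x >> i) & 1) != ((a_star >> i) & 1))
    node = pkey[(i, (x >> i) & 1)]
    for j in reversed(range(i)):
        node = _g(node, (x >> j) & 1)
    return node
```

In the hybrid, the program keeps \(K_1\{{a^*}\}\) (here `pkey`) together with the single hardcoded value \(r_1^*=\mathsf {PRF} (K_1,{a^*})\), which is exactly what makes the punctured and unpunctured programs functionally equivalent.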

Lemma 21

(From \(\mathsf {H} _{(1,{a^*})}\) to \(\mathsf {H} _{(2,{a^*})}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that: \(|\mathsf {Adv} _{(1,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(2,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} ^{\mathsf {iO}}({\kappa },\mathcal {B})\).

Hybrid \(\mathsf {H} _{(3,{a^*})}\). Here subprogram \(\mathsf {Hybrid Proof} _{3,{a^*},\mathsf {crs}}\) is changed so that \(r^*_1\) is now a uniformly random value hardcoded inside our program. This change is computationally indistinguishable by the pseudorandomness at punctured points of \(\mathsf {PRF}\) (we are replacing the hardcoded evaluation \(\mathsf {PRF} (K_1,{a^*})\) by a uniformly random value). A full description of subprogram \(\mathsf {Hybrid Proof} _{3,{a^*},\mathsf {crs}}\) can be found in Fig. 9. This shows the following lemma:

Lemma 22

(From \(\mathsf {H} _{(2,{a^*})}\) to \(\mathsf {H} _{(3,{a^*})}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that: \(|\mathsf {Adv} _{(2,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(3,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} _{\mathsf {s}\text {-}\mathsf {cPRF}}({\kappa },\mathcal {B})\).

Hybrid \(\mathsf {H} _{(4,{a^*})}\). Subprogram \(\mathsf {Hybrid Proof} _{4,{a^*},\mathsf {crs}}\) is changed so that key \(K_2\) is punctured at point \({a^*}\). This follows by the same argument as in Lemma 21 and uses the security of \(\mathsf {iO}\). Once we puncture the key, only \(K_2\{{a^*}\}\) is hardcoded in all subroutines of \(\mathsf {Prog Prov} _{4,{a^*},\mathsf {crs}}\), along with the evaluation \(r_2^*\leftarrow \mathsf {PRF} (K_2,{a^*})\), but not \(K_2\) itself. This shows the following lemma:

Lemma 23

(From \(\mathsf {H} _{(3,{a^*})}\) to \(\mathsf {H} _{(4,{a^*})}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that: \(|\mathsf {Adv} _{(3,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(4,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} ^{\mathsf {iO}}({\kappa },\mathcal {B})\).

Fig. 9. Descriptions of \(\mathsf {Hybrid Proof} _{i,{a^*},\mathsf {crs}}\), for \(i=1\ldots 4\). In each subprogram, the changes relative to the previous subprogram are highlighted in gray. When we hardcode a value in a subprogram \(\mathsf {Hybrid Proof} _{i,{a^*},\mathsf {crs}}\), it is understood that this value is also hardcoded in \(\mathsf {Prog Prov} _{i,{a^*},\mathsf {crs}}\). If a key K is punctured in \(\mathsf {Hybrid Proof} _{i,{a^*},\mathsf {crs}}\), we understand that it is punctured in \(\mathsf {Prog Prov} _{i,{a^*},\mathsf {crs}}\) and all its subprograms as well. Note that \(\mathsf {Hybrid Proof} _{1,{a^*},\mathsf {crs}}\) is the same as \(\mathsf {Binding Proof} _\mathsf {crs} \).

Hybrid \(\mathsf {H} _{(5,{a^*})}\). Here subprogram \(\mathsf {Hybrid Proof} _{5,{a^*},\mathsf {crs}}\) is changed so that \(r^*_2\) is now a uniformly random value hardcoded inside our program. This change is computationally indistinguishable by the pseudorandomness at punctured points of \(\mathsf {PRF}\) (we are replacing the hardcoded evaluation \(\mathsf {PRF} (K_2,{a^*})\) by a uniformly random value). The full description of \(\mathsf {Hybrid Proof} _{5,{a^*},\mathsf {crs}}\) can be found in the full version. This shows the following lemma:

Lemma 24

(From \(\mathsf {H} _{(4,{a^*})}\) to \(\mathsf {H} _{(5,{a^*})}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that: \(|\mathsf {Adv} _{(4,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(5,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} _{\mathsf {s}\text {-}\mathsf {cPRF}}({\kappa },\mathcal {B})\).

Hybrid \(\mathsf {H} _{(6,{a^*})}\). Subprogram \(\mathsf {Hybrid Proof} _{6,{a^*},\mathsf {crs}}\) precomputes and hardcodes the \((C^*,\pi ^*)\) corresponding to \(a^*\). For this we make the crucial observation that for every a, there exists only one corresponding \((x,w)\). This follows from the perfect correctness of the lossy encryption scheme \(\mathsf {LE}\), because \(\mathsf {LE} \) is in injective mode and because \(a=\mathsf {LE}.\mathsf {Enc} (\mathsf {lpk},(x,w);r)\). To compute this hybrid, we use \(\mathsf {lsk} \) to decrypt \({a^*}\) and obtain the corresponding \((x^*,w^*)\). Thus, if \({a^*}\) is known in advance, then \((x^*,w^*)\) is also known in advance. Since \(\mathsf {crs} \) is a parameter of the circuit and also known in advance, we can compute \(\mathsf {hrs} ^*\leftarrow \mathsf {H} (r^*_1)\oplus \mathsf {crs} \), \((\pi ^*,\mathcal {I} ^*) \leftarrow \mathsf {P}_{H} (x^*,w^*,\mathsf {hrs} ^*)\) and \(C^*=\mathsf {FE}.\mathsf {Enc} (\mathsf {fmpk},(r_1^*,\mathcal {I} ^*,0,\mathsf {0});r_2^*)\). We hardcode \((C^*,\pi ^*)\), and these are also the values returned when \(\mathsf {Hybrid Proof} _{6,{a^*},\mathsf {crs}}\) is invoked on \((x^*,w^*,{a^*})\). Since \(\mathsf {Prog Prov} _{6,{a^*},\mathsf {crs}}\) is functionally equivalent to \(\mathsf {Prog Prov} _{5,{a^*},\mathsf {crs}}\), this step is justified by \(\mathsf {iO} \) security. The full description of \(\mathsf {Hybrid Proof} _{6,{a^*},\mathsf {crs}}\) can be found in the full version. From all the above, we have the following lemma:
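The unique-decryption observation hinges on the two modes of \(\mathsf {LE} \). A toy two-mode scheme that captures only the injective/lossy distinction (and deliberately ignores secrecy) might look as follows; every name and parameter here is a hypothetical simplification, not the paper's \(\mathsf {LE} \).

```python
import hashlib
import secrets

R_LEN = 16  # length of the encryption randomness r, in bytes


def le_gen(mode: str):
    """Toy key generation: lpk carries the mode tag; lsk is the seed."""
    seed = secrets.token_bytes(16)
    return (mode, seed), seed  # (lpk, lsk)


def le_enc(lpk, m: bytes, r: bytes) -> bytes:
    mode, seed = lpk
    if mode == "inj":
        # Injective mode: (m XOR pad, r) determines m, so decryption is unique.
        pad = hashlib.sha256(b"pad" + seed + r).digest()[: len(m)]
        return bytes(x ^ y for x, y in zip(m, pad)) + r
    # Lossy mode: the output depends on r alone -- the plaintext is lost.
    return hashlib.sha256(b"lossy" + seed + r).digest()


def le_dec(lsk: bytes, c: bytes) -> bytes:
    """Injective-mode decryption, recovering the unique plaintext."""
    body, r = c[:-R_LEN], c[-R_LEN:]
    pad = hashlib.sha256(b"pad" + lsk + r).digest()[: len(body)]
    return bytes(x ^ y for x, y in zip(body, pad))
```

In injective mode every ciphertext \(a\) determines \((x,w)\), which is exactly what lets the hybrid decrypt \({a^*}\) with \(\mathsf {lsk} \) and precompute \((C^*,\pi ^*)\) in advance.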

Lemma 25

(From \(\mathsf {H} _{(5,{a^*})}\) to \(\mathsf {H} _{(6,{a^*})}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that: \(|\mathsf {Adv} _{(5,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(6,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} ^{\mathsf {iO}}({\kappa },\mathcal {B})\).

Hybrid \(\mathsf {H} _{(7,{a^*})}\). To obtain subprogram \(\mathsf {Hybrid Proof} _{7,{a^*},\mathsf {crs}}\), we use the selective security of the functional encryption scheme \(\mathsf {FE} \) to switch ciphertext \(C^*=\mathsf {FE}.\mathsf {Enc} (\mathsf {fmpk},(r_1^*,\mathcal {I} ^*,0,\mathsf {0});r_2^*)\) to ciphertext \(C^*=\mathsf {FE}.\mathsf {Enc} (\mathsf {fmpk},(\mathsf {0},\mathcal {I} ^*,z,T^*_{\mathcal {I} ^*});r_2^*)\). We argue that these two ciphertexts are indistinguishable. Consider the decryption key \(\mathsf {sk} _{\mathsf {f}}\) used by the verifier; this key is associated with the function \(\mathsf {f} \). But from the definition of \(\mathsf {f} \), it holds that:

$$\begin{aligned} \mathsf {f} (r_1^*,\mathcal {I} ^*,0,\mathsf {0})=\mathsf {f} (\mathsf {0},\mathcal {I} ^*,z,T^*_{\mathcal {I} ^*}). \end{aligned}$$

Since the \(r_2^*\) used for encryption has been previously switched to a uniformly random value, we can therefore reduce the gap between these two games to the \({\mathsf {Exp}\text {-}s\text {-}\mathsf {IND}\text {-}\mathsf {FE}\text {-}\mathsf {CPA}}\) game. Also note that we are only able to use the selective security of the \(\mathsf {FE} \) scheme because all the values above are known in advance and are derived from a. The full description of \(\mathsf {Hybrid Proof} _{7,{a^*},\mathsf {crs}}\) can be found in the full version. We have therefore proven the following lemma:

Lemma 26

(From \(\mathsf {H} _{(6,{a^*})}\) to \(\mathsf {H} _{(7,{a^*})}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that:

$$\begin{aligned} |\mathsf {Adv} _{(6,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(7,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} ^{\mathsf {FE}}_{{\mathsf {Exp}\text {-}s\text {-}\mathsf {IND}\text {-}\mathsf {FE}\text {-}\mathsf {CPA}}}({\kappa },\mathcal {B}). \end{aligned}$$

Hybrid \(\mathsf {H} _{(8,{a^*})}\). Subprogram \(\mathsf {Hybrid Proof} _{8,{a^*},\mathsf {crs}}\) is defined like \(\mathsf {Hybrid Proof} _{7,{a^*},\mathsf {crs}}\), except that the computation of \(\mathsf {hrs} ^*\) changes. Instead of computing \(\mathsf {hrs} ^*\leftarrow T^*\oplus \mathsf {crs} \), where \(T^*\leftarrow \mathsf {H} (r_1^*)\), we sample \(T^*\leftarrow _{\textsc {r}}\{0,1\} ^{p_1(|x|,\lambda )}\) and let \(\mathsf {hrs} ^*\leftarrow T^*\oplus \mathsf {crs} \). This step is justified by the dense mode of \(\mathsf {H} \). From Definition 2, we know that for a uniformly random \(r_1^*\), the value \(\mathsf {H} (r^*_1)\) is statistically indistinguishable from a uniformly random string. Moreover, by choosing the security parameter in \(\mathsf {LF}.\mathsf {Setup} (1^{\lambda },\mathrm {dense})\) to be large enough, we can offset the \(2^{p(|x|+\lambda )}\) factor coming from enumerating over all values of a. The full description of \(\mathsf {Hybrid Proof} _{8,{a^*},\mathsf {crs}}\) can be found in the full version. We have therefore proven the following lemma:
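Concretely, the per-hybrid statistical loss of \(2^{-({p(|x|+\lambda )}+\lambda )}\) is chosen so that, after multiplying by the \(2^{p(|x|+\lambda )}\) hybrids of Theorem 19, the total loss stays negligible:

```latex
2^{p(|x|+\lambda)} \cdot \frac{1}{2^{p(|x|+\lambda)+\lambda}} = \frac{1}{2^{\lambda}}.
```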

Lemma 27

(From \(\mathsf {H} _{(7,{a^*})}\) to \(\mathsf {H} _{(8,{a^*})}\)). For every (potentially unbounded) adversary \(\mathcal {A}\), it holds that:

$$\begin{aligned} |\mathsf {Adv} _{(7,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(8,{a^*})}({\mathcal {A}})|\le \frac{1}{2^{{{p(|x|+\lambda )}+\lambda }}}. \end{aligned}$$

Hybrid \(\mathsf {H} _{(9,{a^*})}\). In this hybrid, we use the zero-knowledge property of the hidden-bits NIZK system to replace real proofs with simulated ones. Subprogram \(\mathsf {Hybrid Proof} _{9,{a^*},\mathsf {crs}}\) is defined like \(\mathsf {Hybrid Proof} _{8,{a^*},\mathsf {crs}}\), but now the precomputation of the program involves choosing a uniformly random \(r^*_3\leftarrow _{\textsc {r}}\{0,1\} ^{p_3(|x|,\lambda )}\). Polynomial \({p_3(|x|,\lambda )}\) represents the size of the random tape needed by the hidden-bits simulator \(\mathsf {S}_{H} \). Proofs are now simulated, i.e. \((\mathsf {hrs} ^*_{\mathcal {I} ^*},\pi ^*,\mathcal {I} ^*)\leftarrow \mathsf {S}_{H} (x^*;r^*_3)\).

We now argue that this hybrid is statistically indistinguishable from the previous one. The reason this works is that we already used \(\mathsf {FE} \) security to ensure that only the revealed bits \(\mathsf {hrs} ^*_{\mathcal {I} ^*}\) are encoded in ciphertext \(C^*\), and also that \(\mathsf {hrs} ^*\) is uniformly random. This, coupled with the fact that in \(\mathsf {H} _{(8,{a^*})}\) only the value of the real proof \((C^*,\pi ^*)\) is hardcoded, means we can use the ZK property of \(\mathsf {NIZK}_{H} \). In \(\mathsf {Hybrid Proof} _{9,{a^*},\mathsf {crs}}\) we can hardcode only the simulated proof, and there is no need to include the simulator code in \(\mathsf {Prog Prov} _{9,{a^*},\mathsf {crs}}\).

The full description of \(\mathsf {Hybrid Proof} _{9,{a^*},\mathsf {crs}}\) can be found in the full version, along with the proof of the following lemma:

Lemma 28

(From \(\mathsf {H} _{(8,{a^*})}\) to \(\mathsf {H} _{(9,{a^*})}\)). Let \({a^*}=\mathsf {LE}.\mathsf {Enc} (\mathsf {lpk},(x^*,w^*);r)\). Then it holds that either:

  1.

    if \((x^*,w^*)\in R\), then \(\mathsf {H} _{(8,{a^*})}\) and \(\mathsf {H} _{(9,{a^*})}\) are statistically close. Namely, for every (potentially unbounded) adversary \({\mathcal {A}}\),

    $$\begin{aligned} |\mathsf {Adv} _{(8,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(9,{a^*})}({\mathcal {A}})|\le \varDelta _{\mathrm {Zero Knowledge}}^{\mathsf {NIZK}_{H}}(\lambda ). \end{aligned}$$
  2.

    if \((x^*,w^*)\notin R\), then \(\mathsf {H} _{(8,{a^*})}\) and \(\mathsf {H} _{(9,{a^*})}\) are computationally indistinguishable. Namely, for every \( PPT \) adversary \({\mathcal {A}}\), there exists PPT adversary \(\mathcal {B}\), such that:

    $$\begin{aligned} |\mathsf {Adv} _{(8,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(9,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} ^{\mathsf {iO}}({\kappa },\mathcal {B}). \end{aligned}$$

Hybrid \(\mathsf {H} _{(10,{a^*})}\). In subprogram \(\mathsf {Hybrid Proof} _{10,{a^*},\mathsf {crs}}\), the only change made is that \(r_2^*\) is changed from a uniformly random value (as in hybrid \(\mathsf {H} _{(9,{a^*})}\)) to \(r^*_2\leftarrow \mathsf {PRF} (K_2,{a^*})\). This change is justified by the pseudorandomness of \(\mathsf {PRF} (K_2,\cdot )\) at the punctured point \({a^*}\). The full description of \(\mathsf {Hybrid Proof} _{10,{a^*},\mathsf {crs}}\) can be found in the full version. We have the following lemma:

Lemma 29

(From \(\mathsf {H} _{(9,{a^*})}\) to \(\mathsf {H} _{(10,{a^*})}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that:

$$\begin{aligned} |\mathsf {Adv} _{(9,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(10,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} _{\mathsf {s}\text {-}\mathsf {cPRF}}({\kappa },\mathcal {B}). \end{aligned}$$

Hybrid \(\mathsf {H} _{(11,{a^*})}\). In subprogram \(\mathsf {Hybrid Proof} _{11,{a^*},\mathsf {crs}}\), the only change is that \(r_2\) is no longer precomputed (as it was in hybrid \(\mathsf {H} _{(10,{a^*})}\)); the value \(r_2\leftarrow \mathsf {PRF} (K_2,{a^*})\) is now computed on the fly. This means C must also be computed on the fly in this hybrid. These changes are justified by the fact that the two programs are functionally equivalent and thus their obfuscations are computationally indistinguishable. The full description of \(\mathsf {Hybrid Proof} _{11,{a^*},\mathsf {crs}}\) can be found in the full version. This shows the following lemma:

Lemma 30

(From \(\mathsf {H} _{(10,{a^*})}\) to \(\mathsf {H} _{(11,{a^*})}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that:

$$\begin{aligned} |\mathsf {Adv} _{(10,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(11,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} ^{\mathsf {iO}}({\kappa },\mathcal {B}). \end{aligned}$$

Hybrid \(\mathsf {H} _{(12,{a^*})}\). In subprogram \(\mathsf {Hybrid Proof} _{12,{a^*},\mathsf {crs}}\), we puncture key \(K_3\) at \(K_3\{{a^*}\}\) and only hardcode this punctured key in our programs. This change is justified by the fact that the two programs are functionally equivalent and thus their obfuscations are computationally indistinguishable. The full description of \(\mathsf {Hybrid Proof} _{12,{a^*},\mathsf {crs}}\) can be found in the full version. This shows the following:

Lemma 31

(From \(\mathsf {H} _{(11,{a^*})}\) to \(\mathsf {H} _{(12,{a^*})}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that:

$$\begin{aligned} |\mathsf {Adv} _{(11,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(12,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} ^{\mathsf {iO}}({\kappa },\mathcal {B}). \end{aligned}$$

Hybrid \(\mathsf {H} _{(13,{a^*})}\). Subprogram \(\mathsf {Hybrid Proof} _{13,{a^*},\mathsf {crs}}\) is changed so that \(r^*_3\) is not a hard-wired uniformly random value anymore, but is chosen as \(r^*_3\leftarrow \mathsf {PRF} (K_3,{a^*})\). This change is justified by the pseudo-randomness of \(\mathsf {PRF} (K_3,\cdot )\) at punctured point \({a^*}\). The full description of \(\mathsf {Hybrid Proof} _{13,{a^*},\mathsf {crs}}\) can be found in the full version. From the above, we have:

Lemma 32

(From \(\mathsf {H} _{(12,{a^*})}\) to \(\mathsf {H} _{(13,{a^*})}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that:

$$\begin{aligned} |\mathsf {Adv} _{(12,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(13,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} _{\mathsf {s}\text {-}\mathsf {cPRF}}({\kappa },\mathcal {B}). \end{aligned}$$

Hybrid \(\mathsf {H} _{(14,{a^*})}\). In subprogram \(\mathsf {Hybrid Proof} _{14,{a^*},\mathsf {crs}}\), the key \(K_3\) is no longer punctured at \({a^*}\). This means that \(r_3\leftarrow \mathsf {PRF} (K_3, a)\) is no longer hardwired. As a consequence, the simulated proofs are also no longer hardcoded. Since this program is functionally equivalent to \(\mathsf {Hybrid Proof} _{13,{a^*},\mathsf {crs}}\), we justify this change by the security of \(\mathsf {iO} \). The full description of \(\mathsf {Hybrid Proof} _{14,{a^*},\mathsf {crs}}\) can be found in the full version. From the above, we have:

Lemma 33

(From \(\mathsf {H} _{(13,{a^*})}\) to \(\mathsf {H} _{(14,{a^*})}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that:

$$\begin{aligned} |\mathsf {Adv} _{(13,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(14,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} ^{\mathsf {iO}}({\kappa },\mathcal {B}). \end{aligned}$$

Hybrid \(\mathsf {H} _{(15,{a^*})}\). In subprogram \(\mathsf {Hybrid Proof} _{15,{a^*},\mathsf {crs}}\), the key \(K_1\) is no longer punctured at \({a^*}\). Key \(K_1\) is not even used anymore in this subprogram; therefore, this program is functionally equivalent to \(\mathsf {Hybrid Proof} _{14,{a^*},\mathsf {crs}}\). We thus justify this change by the security of \(\mathsf {iO} \). The full description of \(\mathsf {Hybrid Proof} _{15,{a^*},\mathsf {crs}}\) can be found in the full version. From all the above, we have:

Lemma 34

(From \(\mathsf {H} _{(14,{a^*})}\) to \(\mathsf {H} _{(15,{a^*})}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that:

$$\begin{aligned} |\mathsf {Adv} _{(14,{a^*})}({\mathcal {A}})-\mathsf {Adv} _{(15,{a^*})}({\mathcal {A}})|\le \mathsf {Adv} ^{\mathsf {iO}}({\kappa },\mathcal {B}). \end{aligned}$$