Abstract
Fine-grained cryptography constructs cryptosystems in a setting where an adversary’s resources are a priori bounded and honest parties have fewer resources than the adversary. Currently, only simple forms of encryption schemes, such as secret-key and public-key encryption, have been constructed in this setting. In this paper, we enrich the available tools in fine-grained cryptography by proposing the first fine-grained secure attribute-based encryption (ABE) scheme. Our construction is adaptively secure under the widely accepted worst-case assumption, \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), and it is presented in a generic manner using the notion of predicate encodings (Wee, TCC’14). By properly instantiating the underlying encoding, we can obtain different types of ABE schemes, including identity-based encryption. Previously, none of these schemes was known in fine-grained cryptography. Our main technical contribution is constructing ABE schemes without using pairings or the Diffie-Hellman assumption. Hence, our results show that, even if one-way functions do not exist, we still have ABE schemes with meaningful security. As a further application of our techniques, we construct an efficient (quasi-adaptive) non-interactive zero-knowledge proof system.
1 Introduction
1.1 Motivation
Modern cryptography bases the security of schemes on assumptions, ranging from basic ones (such as the existence of one-way functions (OWFs)), through more advanced ones (such as the hardness of factoring, discrete logarithms, and some lattice problems), to much more exotic ones (such as the existence of generic groups [25, 30] or algebraic groups [16]). Although these assumptions have received some analysis, relying on them remains less than ideal. We are therefore interested in how to base cryptography on much milder assumptions, and in what form of security can still be achieved if all classical assumptions (such as the existence of OWFs) fail to hold.
Fine-grained cryptography is one direction for approaching the aforementioned problems. It aims at cryptography with weaker security in a setting where adversaries have only bounded resources and honest users have fewer resources than the adversaries. In this setting it is possible to make the underlying assumption extremely mild, for instance, assuming \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), a widely accepted worst-case assumption. As \(\mathsf{\oplus L/poly}\) is the class of languages with polynomial-sized branching programs and all languages in \(\mathsf {NC^1}\) have polynomial-sized branching programs of constant width [3], this assumption holds if there exists even one language that has only polynomial-sized branching programs of non-constant width. This is different from assuming the existence of OWFs, which is an average-case assumption: it requires that the OWF be hard to invert on a random input. Hence, \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) is more likely to be true.
The study of fine-grained cryptography was initiated by Merkle [26]. In recent years, interest has turned to which kinds of cryptosystems can be constructed in this setting. We highlight the recent constructions of OWFs [8], symmetric-key and (leveled fully homomorphic) public-key encryption [9, 13], verifiable computation [9], hash proof systems (HPS) [14], and non-interactive zero-knowledge (NIZK) proof systems [2]. However, due to the restrictions on running resources, many important primitives remain out of reach. Surprisingly, digital signature schemes are among them, even though they are implied by OWFs in the classical setting.
Our Goal: Fine-Grained Secure ABEs. We focus on constructing attribute-based encryption (ABE) schemes [18] with fine-grained security, since it has many applications and implies important primitives, including digital signatures. In an ABE scheme, messages are encrypted under descriptive values \(\textsf{x}\), secret keys are associated with values \(\textsf{y}\), and a secret key decrypts the ciphertext if and only if \(\textsf{p}(\textsf{x},\textsf{y})=1\) for some boolean predicate \(\textsf{p}\). Here the predicate \(\textsf{p}\) may express arbitrary access policy. This is in contrast to traditional public-key encryption (PKE) schemes without access control on data. Identity-based encryption [6, 12, 29] is a simplified version of ABE, where \(\textsf{p}\) is the equality predicate, and it implies signatures in a natural manner (even in the fine-grained setting).
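To make the decryption condition concrete, here is a minimal sketch (ours, not from the paper) of two predicates \(\textsf{p}\) of the kind discussed here: equality, which yields IBE, and an inner-product predicate over GF(2); the function names are illustrative.

```python
# Illustrative ABE predicates; the names are ours, not the paper's.

def p_eq(x, y):
    """Equality predicate: IBE is the special case p(x, y) = 1 iff x == y."""
    return int(x == y)

def p_ip(x, y):
    """Inner-product predicate over GF(2): p(x, y) = 1 iff <x, y> = 0 mod 2."""
    assert len(x) == len(y)
    return int(sum(a & b for a, b in zip(x, y)) % 2 == 0)

# A user key for identity "alice" decrypts only ciphertexts encrypted to "alice".
assert p_eq("alice", "alice") == 1 and p_eq("alice", "bob") == 0
```

A user secret key for \(\textsf{y}\) then decrypts a ciphertext for \(\textsf{x}\) exactly when the chosen predicate function returns 1.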
In general, it is challenging to construct ABEs. For instance, in the classical setting, it is shown that IBEs cannot be constructed using trapdoor permutations (TDP) or CCA-secure PKE schemes in a black-box manner [7]. Moreover, many pairing-based constructions of ABE and IBE (for instance, [5, 10]) heavily rely on the algebraic structures of pairing groups. These necessary structures are not available in fine-grained cryptography. Thus, in this paper, we develop new techniques to improve on the state of the art of fine-grained cryptography, which only provides primitives related to TDP and CCA-secure PKE.
1.2 Our Contributions
We construct the first fine-grained secure ABE scheme. In particular, our scheme is computable in \(\mathsf {AC^0[2]}\) and secure against adversaries in \(\mathsf {NC^1}\). Note that \(\mathsf {AC^0[2]}\subsetneq \mathsf {NC^1}\) [28, 31]. Similar to several existing \(\mathsf {NC^1}\) fine-grained primitives [9, 13, 14], the security of our scheme is based on the same worst-case assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\). This is a widely accepted, weak assumption. For simplicity, we consider fine-grained cryptography as schemes with \(\mathsf {NC^1}\) honest users and adversaries and security based on \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) in the rest of this paper.
Previously, fine-grained cryptography could only achieve symmetric-key encryption, public-key encryption, and HPS. Our work enriches its available tools and brings fine-grained cryptography closer to classical cryptography in terms of functionality.
In particular, our construction is presented in a generic manner using predicate encodings [10, 36]. Hence, by suitably instantiating the underlying encoding, we directly obtain a fine-grained IBE scheme (which in turn implies a fine-grained signature scheme), fine-grained ABEs for inner-product encryption, non-zero inner-product encryption, spatial encryption, doubly spatial encryption, boolean span programs, and arithmetic span programs, and also fine-grained broadcast encryption and fuzzy IBE schemes. Prior to this work, it was unknown whether these primitives could be constructed in \(\mathsf {NC^1}\) based on a worst-case complexity assumption.
Finally, we use our technique to construct an efficient quasi-adaptive NIZK [22] with fine-grained security. Here “quasi-adaptive” means that common reference strings (CRSs) may depend on the language of the NIZK system.
Applications of Security Against \(\mathsf {NC^1}\). Other than only relying on weak assumptions and running with low complexity, our results have the following applications.
Since security against \(\mathsf {NC^1}\) captures adversaries with limited parallel running-time, our constructions are well-suited for systems where attacks make sense only if they succeed in a short period of time. For example, our ABEs (and other fine-grained encryption primitives) can be used to protect messages that are only valuable for a short period of time, and that can be published or deleted later. As another example, the fine-grained signature (implied by our fine-grained IBE) and fine-grained QA-NIZK prevent adversaries from forging signatures and proofs within the running-time of an honest user, thereby ensuring security by letting the system reject users who have timed out when attempting to generate signatures or proofs. Moreover, as noted in [13], combining fine-grained primitives with standard ones immediately yields hybrids that are secure against \(\mathsf {NC^1}\) adversaries under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) and secure against polynomial-time adversaries under stronger assumptions.
1.3 Technique Overview
We borrow the frameworks of the pairing-based constructions of IBEs in [5] and ABEs in [10] to upgrade the available fine-grained techniques [1, 14, 21] in achieving our goal. In a nutshell, we transform a suitable symmetric-key primitive to an ABE in the fine-grained setting.
Previous frameworks in [5, 10] use pairings and the Diffie-Hellman assumptions. In contrast to them, our work develops new techniques to build ABEs without pairings or the Diffie-Hellman assumptions, but only under the mild assumption that \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\). For simplicity, we mostly focus on our techniques in the context of IBE here, and give some ideas about how they can be extended to construct ABEs. In this paper, we consider adaptive security where adversaries can adaptively request user secret keys and a challenge ciphertext.
The Approach of Blazy, Kiltz, and Pan, and Its Limitations in \(\mathsf {NC^1}\). The “MAC\(\rightarrow \)IBE” transformation of BKP [5] is an abstraction of the Chen-Wee (CW) IBE scheme [11], and it also implements the “PRF\(\rightarrow \)Signature” framework by Bellare and Goldwasser (BG) [4] in the IBE context. The BKP transformation requires an “affine MAC”, namely, a MAC whose verification is done by checking a particular system of affine equations. Variables in these affine equations are included in the MAC secret key, and the (public) coefficients are derived from the message to be signed (which will be the identity of the resulting IBE scheme). Such a MAC scheme can be constructed based on the Diffie-Hellman assumption, which is generalized by the MDDH assumption.
We give some ideas about how an affine MAC can be turned into an IBE scheme. The master public key of an IBE scheme, \(\textsf{pk}=\textsf{Com}(\textsf{sk}_{{\textsf{MAC}}})\), is a commitment of the MAC secret key, \(\textsf{sk}_{{\textsf{MAC}}}\). A user secret key \(\textsf{usk}[\textsf{id}]\) of an identity \(\textsf{id}\) consists of a BG signature, namely, a MAC tag \(\tau _\textsf{id}\) on the message \(\textsf{id}\) and a NIZK proof of the validity of \(\tau _\textsf{id}\) w.r.t. the secret key committed in \(\textsf{pk}\).
Since the MAC verification consists of only affine equations, after implementing the aforementioned commitments and NIZK proofs with a (tuned) Groth-Sahai (GS) proof system [19],Footnote 1 the BKP IBE ciphertext \(\textsf{ct}_{\textsf{id}}\) can be viewed as a randomized linear combination of \(\textsf{pk}\) w.r.t. \(\textsf{id}\). This is the key observation of BKP. The BKP framework can be further improved and extended to construct ABEs using predicate encodings [36] as in the CGW framework [10] by Chen, Gay, and Wee.
The MDDH assumption and the pairing-based GS proofs are two key ingredients for the BKP framework which are not available in fine-grained cryptography. One direction to resolve this is to develop a fine-grained GS proof system, but it is not clear what the counterpart of “pairing-product equations” will be. Instead, we achieve our goal with a simpler and more direct approach.
A Hard Subset Membership Problem for \(\mathsf {NC^1}\) circuits. We first need to find a counterpart of the MDDH assumption in \(\mathsf {NC^1}\), since the separation assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) does not directly give us tools in constructing cryptographic schemes. In the work of [1, 21], it is shown that, if \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) holds, then the following two distributions are indistinguishable for \(\mathsf {NC^1}\) circuits:
\(D_0=\{\mathbf {{M}}^\top \,|\,\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(n)\}\) and \(D_1=\{\mathbf {{M}}^\top \,|\,\mathbf {{M}}^\top \leftarrow {\textsf{OneSamp}}(n)\}\), where \(n=n(\lambda )\) is some polynomial in the security parameter \(\lambda \), and the randomized sampling algorithms \({\textsf{ZeroSamp}}\) and \({\textsf{OneSamp}}\) output matrices with rank \(n-1\) and full rank, respectively. Concrete definitions of these algorithms are given in Sect. 2.2, and they are not relevant in this section.
This indistinguishability implies a hard subset membership problem in \(\mathsf {NC^1}\), implicitly given by Egashira, Wang, and Tanaka [15] for their HPS: given a matrix \(\mathbf {{M}}^\top \) from \(D_0\) and a vector \(\textbf{t}\) sampled from one of two specific distributions determined by \(\mathbf {{M}}\), the task is to tell whether \(\textbf{t}\) is in the span of \(\mathbf {{M}}\).
Our IBE in \(\mathsf {NC^1}\). Our main technical contribution is a new approach of using the subset membership problem to transform an affine MAC to IBEs in the fine-grained setting. Our starting point is constructing a secure affine MAC in \(\mathsf {NC^1}\). We prove that, if the subset membership problem is hard in \(\mathsf {NC^1}\), then our MAC is secure for \(\mathsf {NC^1}\) adversaries.
Next, we propose a generic construction of IBE based on affine MACs, following the BKP framework. In stark contrast to the BKP, our construction does not require pairings. Essentially, we develop a Groth-Sahai-like proof system in \(\mathsf {NC^1}\) to prove the validity of our affine MAC. This proof system allows us to show that if our affine MAC is secure then our resulting IBE is secure in \(\mathsf {NC^1}\). At the core of our proof system is a new commitment scheme in \(\mathsf {NC^1}\), for which we achieve the hiding property by exploiting the concrete structure of matrices in \(D_0\).
We give more details about the security proof. Firstly, the zero-knowledge property allows us to generate user secret keys for adversaries without knowing the MAC secret key. Secondly, we show that if an adversary can break the adaptive security of our IBE, then we can construct a reduction to break the security of our affine MAC. This is a crucial step, and we require some extractability of the proof system to extract the MAC forgery from the IBE adversary. In the BKP framework, this extractability can be achieved by computing the inverse of some matrix \(\mathbf {{A}} \in \mathbb {Z}_{q}^{k \times k}\) for some positive integer k. However, in our setting, inverting a matrix in \(\{0,1\}^{n \times n}\) is impossible; otherwise, this would lead to a distinguisher for the subset membership problem in \(\mathsf {NC^1}\). Also, there is no known way to sample a matrix together with its inverse efficiently [14]. To solve this, our proof system develops a new method to achieve this extractability without inverting any matrix. Our core idea is to prove that, with a fresh random string \(\textbf{r}\), it is possible to extract the forgery from our \(\mathsf {NC^1}\)-commitments by switching the distribution of the public parameter \(\mathbf {{A}}\in D_0\) twice (from \(D_0\) to \(D_1\) and then back to \(D_0\)) and changing the distribution of \(\textbf{r}\) during the switching procedure.
Dual System Methodology in \(\mathsf {NC^1}\) and ABE. Our techniques for IBE can also be viewed as the dual system encryption methodology [35] in \(\mathsf {NC^1}\), which is an alternative interpretation of our approach. In our proof, there are two important technical steps, switching ciphertexts to invalid and randomizing MAC tags in the user secret keys. These correspond to switching ciphertexts and user secret keys from functional to semi-functional in the dual system encryption methodology [5, 10, 24, 35]. The dual system methodology is very useful in constructing predicate encryption, but it was previously only known with pairings. Our work implements the dual system methodology without pairings for the first time.
Similar to the extension from BKP-IBE [5] to CGW-ABE [10], we further extend our techniques to construct ABEs. We first use (part of) a predicate encoding scheme [10, 36] to generalize the notion of affine MAC and make it useful for constructing ABEs. After that, we upgrade our IBE techniques and transform the generalized affine MAC into an adaptively secure ABE in \(\mathsf {NC^1}\) via the remaining part of the predicate encoding scheme. Here, the predicate encoding scheme serves to construct ABEs in a modular and generic way; in particular, it captures the encodings of different ABEs (such as identity-based encryption and inner-product encryption).
More Extensions and Open Problems. We are optimistic that our approach can yield many more new public-key schemes in fine-grained cryptography. In particular, we show that our techniques can also be used to construct an efficient QA-NIZK in \(\mathsf {NC^1}\) with adaptive soundness in “Appendix B”. Roughly, we use the technique for proving the hiding property of the underlying commitment scheme in our IBE scheme to achieve adaptive soundness.
Also, we are optimistic that our approach can be used to construct hierarchical IBE [17, 20]. We leave a detailed treatment of it as an open problem.
1.4 Comparison with the Proceedings Version
This is the full version of the paper that appeared at Crypto 2021 [34]. In this full version, we give the full proof for the security of the fine-grained IBE scheme in Sect. 4.2. Additionally, we give the proof of Theorem 2.13, the definition, constructions, and security proofs of the fine-grained QA-NIZKs (with a comparison to previous and subsequent fine-grained NIZKs [2, 32, 33]), and the instantiations of predicate encodings, in Appendices A to C.
2 Preliminaries
Notations. We note that all arithmetic computations are over GF(2) in this work. Namely, all arithmetic computations are performed modulo 2. We write \(a\leftarrow \mathcal {A}(b)\) (respectively, \(a=\mathcal {A}(b)\)) to denote the random variable output by a probabilistic (respectively, deterministic) algorithm \(\mathcal {A}\) on input b. By \(x\leftarrow \mathcal {S}\) we denote the process of sampling an element x from a set or distribution \(\mathcal {S}\) uniformly at random. By \(\textbf{x}\in \{0, 1\}^n\) we denote a column vector of size n, and by, say, \(\textbf{x}\in \{1\}\times \{0, 1\}^{n-1}\) we mean that the first element of \(\textbf{x}\) is 1. By [n] we denote the set \(\{1, \cdots , n\}\). By \(x_i\) (respectively, \({\textsf{x}}_i\)) we denote the ith element of a vector \(\textbf{x}\) (respectively, \({\textsf{x}}\)). By \(\textsf{negl}\) we denote an unspecified negligible function.
For a matrix \(\mathbf {{A}}\in \{0, 1\}^{n\times t}\) with rank \(t'< n\), we denote the sets \(\{\textbf{y} \, | \, \exists \textbf{x} \;\; \text{ s.t. } \;\; \textbf{y} = \mathbf {{A}}\textbf{x} \}\) and \(\{\textbf{x} \, | \, \mathbf {{A}}\textbf{x} = \textbf{0} \}\) by \({{\,\textrm{Im}\,}}(\mathbf {{A}})\) (i.e., the span of \(\mathbf {{A}}\)) and \({{\,\textrm{Ker}\,}}(\mathbf {{A}})\) respectively. By \(\mathbf {{A}}^\bot \in \{0, 1\}^{n\times (n-t')}\) we denote a matrix consisting of \(n-t'\) linearly independent column vectors in the kernel of \(\mathbf {{A}}^\top \). Note that for any \(\textbf{y}\notin {{\,\textrm{Im}\,}}(\mathbf {{A}})\), we have \(\textbf{y}^\top \mathbf {{A}}^\bot \ne \textbf{0}\). By \((a_{ij})_{i\in [l],j\in [m]}\) we denote the matrix \(\begin{pmatrix}a_{11} & \cdots & a_{1m}\\ \vdots & \ddots & \vdots \\ a_{l1} & \cdots & a_{lm} \end{pmatrix}\). Let \(\mathbf {{A}}=(a_{ij})_{i\in [l],j\in [m]}\) be an \(l\times m\) matrix and \(\mathbf {{B}}=(\mathbf {{B}}_{ij})_{i\in [m],j\in [n]}\) be a large matrix consisting of \(m\times n\) matrices \(\mathbf {{B}}_{ij}\) for all \(i\in [m]\) and \(j\in [n]\). By \(h\odot \mathbf {{A}}\) we denote \((h\cdot a_{ij})_{i\in [l],j\in [m]}\) and by \(\mathbf {{A}}\odot \mathbf {{B}}\) we denote \(\left( \sum _{k=1}^{m} a_{ik}\cdot \mathbf {{B}}_{kj}\right) _{i\in [l],j\in [n]}\).
By \(\mathbf {{M}}^n_0\), \(\mathbf {{M}}^n_1\), and \(\mathbf {{N}}^n\), we denote the following \(n \times n\) matrices:
$$\mathbf {{M}}^n_0=\begin{pmatrix} 0 & & & 0\\ 1 & \ddots & & \\ & \ddots & \ddots & \\ 0 & & 1 & 0 \end{pmatrix},\quad \mathbf {{M}}^n_1=\begin{pmatrix} 0 & & & 1\\ 1 & \ddots & & \\ & \ddots & \ddots & \\ 0 & & 1 & 0 \end{pmatrix},\quad \mathbf {{N}}^n=\mathbf {{M}}^n_1-\mathbf {{M}}^n_0=\begin{pmatrix} 0 & \cdots & 0 & 1\\ & & & 0\\ & & & \vdots \\ 0 & \cdots & & 0 \end{pmatrix},$$
i.e., \(\mathbf {{M}}^n_0\) has ones exactly on its subdiagonal and \(\mathbf {{N}}^n\) has a single one in its top-right corner; and by \(\textbf{0}\) we denote a zero vector \((0,\cdots ,0)^\top \).
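As a sanity check on these linear-algebra conventions (our own sketch, not part of the paper), the following code computes a kernel basis over GF(2) by Gaussian elimination, builds \(\mathbf {{A}}^\bot \) from \({{\,\textrm{Ker}\,}}(\mathbf {{A}}^\top )\), and verifies that \(\textbf{y}\notin {{\,\textrm{Im}\,}}(\mathbf {{A}})\) implies \(\textbf{y}^\top \mathbf {{A}}^\bot \ne \textbf{0}\):

```python
import itertools
import numpy as np

def kernel_basis(A):
    """Basis of {x : A x = 0 over GF(2)} via Gaussian elimination (RREF)."""
    A = A.copy() % 2
    rows, cols = A.shape
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]        # move pivot row up
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]             # eliminate column c elsewhere
        pivots.append(c)
        r += 1
    basis = []
    for f in (c for c in range(cols) if c not in pivots):
        x = np.zeros(cols, dtype=int)
        x[f] = 1
        for row, p in zip(A, pivots):    # back-substitute pivot entries
            x[p] = row[f]
        basis.append(x)
    return basis

A = np.array([[1, 0], [0, 1], [1, 1]])           # 3x2, rank 2, over GF(2)
A_perp = np.array(kernel_basis(A.T)).T % 2       # columns span Ker(A^T)
image = {tuple(A @ x % 2) for x in itertools.product([0, 1], repeat=2)}
for y in itertools.product([0, 1], repeat=3):
    if tuple(y) not in image:                    # y outside the span of A
        assert np.any(np.array(y) @ A_perp % 2)  # then y^T A_perp != 0
```

The exhaustive loop over \(\{0,1\}^3\) is only feasible for toy sizes, but it checks the stated property for every vector outside the span.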
Games. We follow [5] in using code-based games for defining and proving security. A game \(\textsf{G}\) contains procedures \(\textsc {Init}\) and \(\textsc {Finalize}\), and some additional procedures \(\textsc {P}_1,\ldots , \textsc {P}_n\), which are defined in pseudo-code. All variables in a game are initialized to 0, and all sets are empty (denoted by \(\emptyset \)). An adversary \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\) is executed in game \(\textsf{G}\) w.r.t. the security parameter \(\lambda \) (denoted by \({\textsf{G}}^{a_\lambda }\)) if \(a_\lambda \) first calls \(\textsc {Init}\), obtaining its output. Next, it may make arbitrary queries to \(\textsc {P}_i\) (according to their specification) and obtain their output. Finally, it makes one single call to \(\textsc {Finalize}(\cdot )\) and stops. We use \( \textsf{G}^{a_\lambda } \Rightarrow d\) to denote that \(\textsf{G}\) outputs d after interacting with \({a_\lambda }\), where d is the output of \(\textsc {Finalize}\).
2.1 Function Families
In this section, we recall the definitions of function families, \(\mathsf {NC^1}\) circuits, \(\mathsf {AC^0[2]}\) circuits, and \(\mathsf{\oplus L/poly}\). Note that \(\mathsf {AC^0[2]}\subsetneq \mathsf {NC^1}\) [28, 31].
Definition 2.1
(Function Family) A function family is a family of (possibly randomized) functions \(\mathcal {F}=\{f_{\lambda }\}_{\lambda \in \mathbb {N}}\), where for each \(\lambda \), \(f_{\lambda }\) has a domain \(D^f_{\lambda }\) and a range \(R^f_{\lambda }\).
Definition 2.2
(\(\mathsf {NC^1}\)) The class of (non-uniform) \(\mathsf {NC^1}\) function families is the set of all function families \(\mathcal {F} = \{f_{\lambda }\}_{\lambda \in \mathbb {N}}\) for which there is a polynomial \(p(\cdot )\) and constant c such that for each \(\lambda \), \(f_{\lambda }\) can be computed by a (randomized) circuit of size \(p(\lambda )\), depth \(c\log (\lambda )\), and fan-in 2 using AND, OR, and NOT gates.
Definition 2.3
(\(\mathsf {AC^0[2]}\)) The class of (non-uniform) \(\mathsf {AC^0[2]}\) function families is the set of all function families \(\mathcal {F} = \{f_{\lambda }\}_{\lambda \in \mathbb {N}}\) for which there is a polynomial \(p(\cdot )\) and constant c such that for each \(\lambda \), \(f_{\lambda }\) can be computed by a (randomized) circuit of size \(p(\lambda )\), depth c, and unbounded fan-in using AND, OR, NOT, and PARITY gates.
One can see that multiplication of a constant number of matrices can be performed in \(\mathsf {AC^0[2]}\), since it can be done in constant depth with PARITY gates.
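To illustrate this in code rather than circuits: each output bit of a GF(2) matrix product is the parity of the ANDs of a row with a column, so one layer of unbounded fan-in AND gates followed by one PARITY gate per output bit suffices. A small sketch of ours:

```python
import numpy as np

def matmul_gf2(A, B):
    """GF(2) matrix product. Each entry is PARITY(AND(a_ik, b_kj) over k):
    constant depth with unbounded fan-in AND and PARITY gates, i.e. AC^0[2]."""
    l, m = A.shape
    m2, n = B.shape
    assert m == m2
    C = np.zeros((l, n), dtype=int)
    for i in range(l):
        for j in range(n):
            # one PARITY gate over m AND gates
            C[i, j] = sum(A[i, k] & B[k, j] for k in range(m)) % 2
    return C

A = np.array([[1, 1], [0, 1]])
assert (matmul_gf2(A, A) == (A @ A % 2)).all()
```

Chaining a constant number of such products keeps the depth constant, matching the claim above.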
Definition 2.4
(\(\mathsf{\oplus L/poly}\)) \(\mathsf{\oplus L/poly}\) is the set of all boolean function families \(\mathcal {F} = \{f_{\lambda }\}_{\lambda \in \mathbb {N}}\) for which there is a constant c such that for each \(\lambda \), there is a non-deterministic Turing machine \({\mathcal {M}}_{\lambda }\) such that for each input x with length \(\lambda \), \({\mathcal {M}}_{\lambda }(x)\) uses at most \(c\log (\lambda )\) space, and \(f_{\lambda }(x)\) is equal to the parity of the number of accepting paths of \({\mathcal {M}}_{\lambda }(x)\).
2.2 Sampling Procedure
We now recall the definitions of four sampling procedures \({\textsf{LSamp}}\), \({\textsf{RSamp}}\), \({\textsf{ZeroSamp}}\), and \({\textsf{OneSamp}}\) in Fig. 1. Note that the output of \({\textsf{ZeroSamp}}(n)\) is always a matrix of rank \(n-1\) and the output of \({\textsf{OneSamp}}(n)\) is always a matrix of full rank [13].
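Fig. 1 is not reproduced in this excerpt, so the following sketch is ours and only mimics the stated rank behavior: it multiplies a fixed rank-\((n-1)\) (respectively, full-rank) matrix by random invertible unitriangular factors, which preserves rank. The names `zero_samp`/`one_samp` and the specific factors are our assumptions, not the algorithms of [13].

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(r + 1, M.shape[0]):
            if M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

def unitriangular(n, upper):
    """Random invertible GF(2) matrix: unit diagonal, random off-diagonal."""
    R = rng.integers(0, 2, (n, n))
    R = np.triu(R, 1) if upper else np.tril(R, -1)
    return (R + np.eye(n, dtype=int)) % 2

def zero_samp(n):
    """Toy stand-in with rank n-1, like ZeroSamp's output."""
    M0 = np.eye(n, k=-1, dtype=int)      # ones on the subdiagonal only
    return unitriangular(n, True) @ M0 @ unitriangular(n, False) % 2

def one_samp(n):
    """Toy stand-in with full rank, like OneSamp's output."""
    M1 = np.eye(n, k=-1, dtype=int)
    M1[0, n - 1] = 1                     # extra one in the top-right corner
    return unitriangular(n, True) @ M1 @ unitriangular(n, False) % 2

assert rank_gf2(zero_samp(8)) == 7 and rank_gf2(one_samp(8)) == 8
```

Multiplying by invertible factors cannot change the rank, so the rank claim holds for any choice of invertible \(\mathbf {{R}}_1,\mathbf {{R}}_2\), not just the ones sketched here.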
We now recall several assumptions and lemmata on \({\textsf{ZeroSamp}}\) and \({\textsf{OneSamp}}\) given in [13].
Definition 2.5
(Fine-grained matrix linear assumption [13]) There exists a polynomial \(n=n(\lambda )\) in the security parameter \(\lambda \) such that for any family \(\mathcal {A}= \{a_{\lambda }\}_{\lambda \in \mathbb {N}}\) in \(\mathsf {NC^1}\), we have
$$|\Pr [a_{\lambda }(\mathbf {{M}}^\top )=1 \,|\, \mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(n)] - \Pr [a_{\lambda }(\mathbf {{M}}^\top )=1 \,|\, \mathbf {{M}}^\top \leftarrow {\textsf{OneSamp}}(n)]| \le \textsf{negl}(\lambda ).$$
Lemma 2.6
(Lemma 4.3 in [13]) If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then the fine-grained matrix linear assumption holds.
Remark. Notice that for any polynomial \(n=n(\lambda )\), we have \(\{f_{n}\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\) iff \(\{f_\lambda \}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), since \(O(\log (n(\lambda )))=O(\log (\lambda ))\). Hence, in the above lemma, we can also take \(n(\cdot )\) to be the identity function, i.e., \(n=\lambda \). For simplicity, in the rest of the paper, we always let \({\textsf{ZeroSamp}}(\cdot )\) and \({\textsf{OneSamp}}(\cdot )\) take \(\lambda \) as input.
The following lemma implies that for a matrix \(\mathbf {{M}}^\top \) sampled by \({\textsf{ZeroSamp}}(\lambda )\), there is a unique non-zero vector with the first (respectively, last) element being 1 in the kernel of \(\mathbf {{M}}\) (respectively, \(\mathbf {{M}}^\top \)).
Lemma 2.7
(Lemma 3 in [15]) For all \(\lambda \in \mathbb {N}\) and all \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), it holds that \({{\,\textrm{Ker}\,}}(\mathbf {{M}}^\top ) = \{\textbf{0}, \textbf{k} \}\) where \(\textbf{k}\) is a vector such that \(\textbf{k} \in \{0, 1\}^{\lambda -1}\times \{1\}\).
Lemma 2.8
(Lemma 4 in [15]) For all \(\lambda \in \mathbb {N}\) and all \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), it holds that \({{\,\textrm{Ker}\,}}(\mathbf {{M}}) = \{\textbf{0}, \textbf{k} \}\) where \(\textbf{k}\) is a vector such that \(\textbf{k} \in \{1\}\times \{0, 1\}^{\lambda -1}\).
The following lemma indicates a simple relation between the distributions of the outputs of \({\textsf{ZeroSamp}}(\lambda )\) and \({\textsf{OneSamp}}(\lambda )\).
Lemma 2.9
(Lemma 7 in [15]) For all \(\lambda \in \mathbb {N}\), the distributions of \(\mathbf {{M}} + \mathbf {{N}}^\lambda \) and \(\mathbf {{M}}'\) are identical, where \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) and \(\mathbf {{M}}'^\top \leftarrow {\textsf{OneSamp}}(\lambda )\).
We now give two lemmata showing that when sampling a random vector \(\textbf{w}\) from \(\{0, 1\}^\lambda \), the first element of \(\textbf{w}\) does not affect the distribution of \(\mathbf {{M}} \textbf{w}\) for \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\).
Lemma 2.10
(Lemma 5 in [15]) For all \(\lambda \in \mathbb {N}\) and all \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), it holds that
Lemma 2.11
For all \(\lambda \in \mathbb {N}\) and all \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), the distributions of \(\textbf{x}\) and \(\textbf{x}'\) are identical, where \(\textbf{w}\leftarrow \{0\}\times \{0, 1\}^{\lambda -1}\), \(\textbf{w}'\leftarrow \{1\}\times \{0, 1\}^{\lambda -1}\), \(\textbf{x} = \mathbf {{M}} \textbf{w}\), and \(\textbf{x}' = \mathbf {{M}} \textbf{w}'\).
Proof
According to Lemma 2.8, for any \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), there exists \(\textbf{k}\in {{\,\textrm{Ker}\,}}(\mathbf {{M}})\) such that \(\textbf{k} \in \{1\}\times \{0, 1\}^{\lambda -1}\). Therefore, the distributions of \(\textbf{w}+\textbf{k}\), where \(\textbf{w}\leftarrow \{0\}\times \{0, 1\}^{\lambda -1}\), and \(\textbf{w}'\), where \(\textbf{w}'\leftarrow \{1\}\times \{0, 1\}^{\lambda -1}\), are identical. Moreover, we have \(\mathbf {{M}}\textbf{w}=\mathbf {{M}}(\textbf{w}+\textbf{k})\). Hence, the distributions of \(\mathbf {{M}}\textbf{w}\) and \(\mathbf {{M}}\textbf{w}'\) are identical, completing the proof of Lemma 2.11. \(\square \)
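The identity \(\mathbf {{M}}\textbf{w}=\mathbf {{M}}(\textbf{w}+\textbf{k})\) at the heart of this proof can be checked on a toy rank-deficient matrix (our own example; \({\textsf{ZeroSamp}}\) is not invoked here):

```python
import itertools
import numpy as np

# Toy 3x3 matrix over GF(2) of rank 2 whose kernel is {0, k} with k
# starting with 1, mirroring the situation of Lemma 2.8.
M = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
k = np.array([1, 1, 1])
assert not np.any(M @ k % 2)        # k lies in Ker(M)

# Adding k flips the first bit of w while M w = M (w + k) over GF(2),
# so the two output distributions coincide as multisets.
lhs = sorted(tuple(M @ np.array((0,) + w) % 2)
             for w in itertools.product([0, 1], repeat=2))
rhs = sorted(tuple(M @ np.array((1,) + w) % 2)
             for w in itertools.product([0, 1], repeat=2))
assert lhs == rhs
```

Since \(\textbf{w}\mapsto \textbf{w}+\textbf{k}\) is a bijection between the two sampling sets, equality of the sorted multisets is exactly the claim of Lemma 2.11 for this toy matrix.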
Below we recall a theorem implicitly given in [15] as the subset membership problem for an HPS. Roughly, it shows that for \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\), a vector sampled from the span of \(\mathbf {{M}}\) is indistinguishable from one sampled outside the span of \(\mathbf {{M}}\) for any adversary in \(\mathsf {NC^1}\). The proof of this theorem is given in “Appendix A” for completeness.
Definition 2.12
(Fine-grained subset membership problem [15]) Let \({\textsf{SY}}=\{{\textsf{SampYes}}_\lambda \}_{\lambda \in \mathbb {N}}\) and \({\textsf{SN}}=\{{\textsf{SampNo}}_\lambda \}_{\lambda \in \mathbb {N}}\) be the function families described in Fig. 2. For all \(\lambda \in \mathbb {N}\), all \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), and all \(\textbf{x}\in {\textsf{SampNo}}_\lambda (\mathbf {{M}})\), we have \(\textbf{x}\in \{0,1\}^\lambda \setminus {{\,\textrm{Im}\,}}(\mathbf {{M}})\). Moreover, for \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) and any adversary \(\mathcal {A}= \{a_{\lambda }\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\), we have
$$|\Pr [a_{\lambda }(\mathbf {{M}},\textbf{x})=1 \,|\, \textbf{x}\leftarrow {\textsf{SampYes}}_\lambda (\mathbf {{M}})] - \Pr [a_{\lambda }(\mathbf {{M}},\textbf{x})=1 \,|\, \textbf{x}\leftarrow {\textsf{SampNo}}_\lambda (\mathbf {{M}})]| \le \textsf{negl}(\lambda ).$$
Theorem 2.13
([15]) If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then the fine-grained subset membership problem (see Definition 2.12) holds.
Remark. Note that the subset membership problem in [15] gives a stronger result additionally showing that the output distributions of \({\textsf{SampYes}}_\lambda (\mathbf {{M}})\) and \({\textsf{SampNo}}_\lambda (\mathbf {{M}})\) are identical to the uniform distributions over \({{\,\textrm{Im}\,}}(\mathbf {{M}})\) and \(\{0, 1\}^\lambda \setminus {{\,\textrm{Im}\,}}(\mathbf {{M}})\) respectively. We only need a weak form of it in this work.
2.3 Predicate Encodings
We now recall the definition of predicate encodings. As in [10], our resulting construction of ABE is generally based on a predicate encoding. By exploiting various types of encodings, we can achieve a broad class of ABEs.
Our definitions are slightly different from the original definition in [10], in that our definition is over GF(2) rather than GF(p), and we require that the encodings are performed in a circuit class \({\mathcal {C}}_1\).
Definition 2.14
(Predicate Encoding [10]) Let \({\textsf{P}}=\{{\textsf{p}}_\lambda \}_{\lambda \in \mathbb {N}}\) with \({\textsf{p}}_\lambda :{\mathcal {X}}\times {\mathcal {Y}}\rightarrow \{0, 1\}\) be a predicate, where \({\mathcal {X}}\) and \({\mathcal {Y}}\) are polynomial-sized spaces associated with \(\lambda \). A \({\mathcal {C}}_1\)-predicate encoding for \({\textsf{P}}\) is a function family \({\textsf{PE}}=\{{\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda },{\textsf{sE}}_{\lambda },{\textsf{sD}}_{\lambda },{\textsf{rD}}_{\lambda }\}_{\lambda \in \mathbb {N}}\in \mathcal C_1\) with
-
\({\textsf{rE}}_{\lambda }:{\mathcal {Y}}\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\eta \),
-
\({\textsf{kE}}_{\lambda }: {\mathcal {Y}}\times \{0, 1\}\rightarrow \{0, 1\}^\eta \),
-
\({\textsf{sE}}_{\lambda }:{\mathcal {X}}\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\zeta \),
-
\({\textsf{sD}}_{\lambda }:{\mathcal {X}}\times {\mathcal {Y}}\times \{0, 1\}^\zeta \rightarrow \{0, 1\}\),
-
\({\textsf{rD}}_{\lambda }:{\mathcal {X}}\times {\mathcal {Y}}\times \{0, 1\}^\eta \rightarrow \{0, 1\}\),
where \(\ell =\ell (\lambda )\), \(\eta =\eta (\lambda )\), and \(\zeta =\zeta (\lambda )\) are polynomials in \(\lambda \).
Linearity is satisfied if for all \(\lambda \in \mathbb {N}\) and all \((\textsf{x},\textsf{y})\in {\mathcal {X}}\times {\mathcal {Y}}\), the functions \({\textsf{rE}}_{\lambda }(\textsf{y},\cdot )\), \({\textsf{kE}}_{\lambda }(\textsf{y},\cdot )\), \({\textsf{sE}}_{\lambda }(\textsf{x},\cdot )\), \({\textsf{sD}}_{\lambda }(\textsf{x},\textsf{y},\cdot )\), and \({\textsf{rD}}_{\lambda }(\textsf{x},\textsf{y},\cdot )\) are \(\{0, 1\}\)-linear. Namely, for any \(\textsf{y}\in \mathcal Y\), any \(\textbf{w}_0,\textbf{w}_1\in \{0, 1\}^\ell \), and any \(c\in \{0, 1\}\), we have \({\textsf{rE}}_{\lambda }(\textsf{y},\textbf{w}_0+\textbf{w}_1\cdot c)={\textsf{rE}}_{\lambda }(\textsf{y},\textbf{w}_0)+{\textsf{rE}}_{\lambda }(\textsf{y},\textbf{w}_1)\cdot c\), and the analogous condition holds for \({\textsf{kE}}_{\lambda }\), \({\textsf{sE}}_{\lambda }\), \({\textsf{sD}}_{\lambda }\), and \({\textsf{rD}}_{\lambda }\).
Restricted \(\alpha \)-reconstruction is satisfied if for all \(\lambda \in \mathbb {N}\), all \((\textsf{x},\textsf{y})\in {\mathcal {X}}\times {\mathcal {Y}}\) such that \({\textsf{p}}_\lambda (\textsf{x},\textsf{y})=1\), all \(\textbf{w}\in \{0, 1\}^\ell \), and all \(\alpha \in \{0, 1\}\), we have
$${\textsf{sD}}_{\lambda }(\textsf{x},\textsf{y},{\textsf{sE}}_{\lambda }(\textsf{x},\textbf{w}))={\textsf{rD}}_{\lambda }(\textsf{x},\textsf{y},{\textsf{rE}}_{\lambda }(\textsf{y},\textbf{w})) \quad \text {and}\quad {\textsf{rD}}_{\lambda }(\textsf{x},\textsf{y},{\textsf{kE}}_{\lambda }(\textsf{y},\alpha ))=\alpha .$$
\(\alpha \)-privacy is satisfied if for all \(\lambda \in \mathbb {N}\), all \((\textsf{x},\textsf{y})\in {\mathcal {X}}\times {\mathcal {Y}}\) such that \({\textsf{p}}_\lambda (\textsf{x},\textsf{y})=0\), and all \(\alpha \in \{0, 1\}\), the following distributions are identical:
$$(\textsf{x},\textsf{y},{\textsf{sE}}_{\lambda }(\textsf{x},\textbf{w}),{\textsf{kE}}_{\lambda }(\textsf{y},\alpha )+{\textsf{rE}}_{\lambda }(\textsf{y},\textbf{w})) \quad \text {and}\quad (\textsf{x},\textsf{y},{\textsf{sE}}_{\lambda }(\textsf{x},\textbf{w}),{\textsf{kE}}_{\lambda }(\textsf{y},0)+{\textsf{rE}}_{\lambda }(\textsf{y},\textbf{w})),$$
where \(\textbf{w}\leftarrow \{0, 1\}^\ell \).
Intuitively, in a modularly designed attribute-based encryption (ABE) scheme, the attribute value in the user’s key is encoded by \({\textsf{rE}}_{\lambda }\) and \({\textsf{kE}}_{\lambda }\), while that in the ciphertext is encoded by \({\textsf{sE}}_{\lambda }\). The decryption algorithm uses the associated decoding algorithms \({\textsf{rD}}_{\lambda }\) and \({\textsf{sD}}_{\lambda }\) to decode \(({\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda })\) and \({\textsf{sE}}_{\lambda }\) respectively. The difference between the decoding results of \({\textsf{rD}}_{\lambda }\) and \({\textsf{sD}}_{\lambda }\) for \(({\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda })\) and \({\textsf{sE}}_{\lambda }\) is the decoding result of \({\textsf{rD}}_{\lambda }\) for \({\textsf{kE}}_{\lambda }\) only, which is used to yield the session key. These encoding algorithms can be instantiated according to the predicates of different types of ABEs, thus allowing for a modular and generic approach to ABE construction. Namely, different ABE schemes can be constructed by plugging in different encoding algorithms, based on the desired access structure for the scheme.
Remark on Notions for Predicate Encodings. Similar to [10], we abuse the notation
for all i, j to denote the matrix
The same argument is made for \(({\textsf{kE}}_{\lambda },{\textsf{sE}}_{\lambda },{\textsf{sD}}_{\lambda },{\textsf{rD}}_{\lambda })\).
Encoding for Equality. We now give an example of predicate encoding \({\textsf{PE}}_\textsf{eq}\) for equality \({\textsf{P}}_\textsf{eq}\) in Fig. 3. By instantiating our ABKEM given later in Sect. 5 with this encoding, we immediately achieve an IBKEM. Linearity is straightforward. Restricted \(\alpha \)-reconstruction follows from the fact that \(u+\textbf{x}^\top \textbf{w}=u+\textbf{y}^\top \textbf{w}\) when \(\textbf{x}=\textbf{y}\), and \(\alpha \)-privacy follows from the fact that \(u+\textbf{x}^\top \textbf{w}\) and \(u+\textbf{y}^\top \textbf{w}\) are pairwise independent if \(\textbf{x}\ne \textbf{y}\).
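As a sanity check, both stated facts can be verified mechanically over GF(2). The sketch below is a toy model, not the scheme itself: it assumes the equality encoding has the form \(\textsf{sE}(\textsf{x},(u,\textbf{w}))=u+\textbf{x}^\top \textbf{w}\), \(\textsf{rE}(\textsf{y},(u,\textbf{w}))=u+\textbf{y}^\top \textbf{w}\), and \(\textsf{kE}(\textsf{y},\alpha )=\alpha \), matching the two facts quoted above; all function names are illustrative.

```python
from itertools import product
import secrets

def inner(a, b):  # inner product over GF(2)
    return sum(x & y for x, y in zip(a, b)) & 1

# Hypothetical shape of the equality encoding (witness (u, w)):
def sE(x, u, w):   return (u + inner(x, w)) & 1  # sender encoding
def rE(y, u, w):   return (u + inner(y, w)) & 1  # receiver encoding
def kE(y, alpha):  return alpha & 1              # key encoding

ell = 8
x = [secrets.randbits(1) for _ in range(ell)]
y = list(x)  # x == y, so the predicate holds
u = secrets.randbits(1)
w = [secrets.randbits(1) for _ in range(ell)]
alpha = 1

# Restricted alpha-reconstruction: for x == y the masks cancel (over GF(2),
# subtraction is addition) and alpha is recovered.
assert ((rE(y, u, w) + kE(y, alpha)) + sE(x, u, w)) % 2 == alpha

# alpha-privacy: for x2 != y2, the joint distribution of (sE, rE + kE) over
# random (u, w) is the same for alpha = 0 and alpha = 1.
x2, y2 = [0, 1], [1, 0]

def dist(alpha2):
    return sorted((sE(x2, u2, w2), (rE(y2, u2, w2) + kE(y2, alpha2)) & 1)
                  for u2 in (0, 1) for w2 in product((0, 1), repeat=2))

assert dist(0) == dist(1)
```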
2.4 Attribute-Based Key Encapsulation
We now give the definition of fine-grained ABKEM, instantiations of which can easily be converted into ABE schemes by using a one-time symmetric cipher.
Definition 2.15
(Attribute-based key encapsulation) A \({\mathcal {C}}_1\)-attribute-based key encapsulation (ABKEM) scheme for a predicate \({\textsf{P}}=\{{\textsf{p}}_\lambda \}_\lambda \) is a function family \({\textsf{ABKEM}}=\{{\textsf{Gen}}_\lambda ,{\textsf{USKGen}}_\lambda ,{\textsf{Enc}}_\lambda ,{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\in \mathcal C_1\) with the following properties.
-
\({\textsf{Gen}}_\lambda \) returns the (master) public/secret key \((\textsf{pk},\textsf{sk})\). We assume that \(\textsf{pk}\) implicitly defines value spaces \({\mathcal {X}}\) and \({\mathcal {Y}}\), a key space \(\mathcal {K}\), and a ciphertext space \(\mathcal {C}\).
-
\({\textsf{USKGen}}_\lambda (\textsf{sk},\textsf{y})\) returns a user secret-key \(\textsf{usk}[\textsf{y}]\) for a value \(\textsf{y}\in {\mathcal {Y}}\).
-
\({\textsf{Enc}}_\lambda (\textsf{pk},\textsf{x})\) returns a symmetric key \(\textsf{K}\in \mathcal {K}\) together with a ciphertext \(\textsf{ct}\in \mathcal {C}\) w.r.t. \(\textsf{x}\in {\mathcal {X}}\).
-
\({\textsf{Dec}}_\lambda (\textsf{usk}[\textsf{y}],\textsf{y},\textsf{x},\textsf{ct})\) deterministically returns a decapsulated key \(\textsf{K}\in {{\mathcal {K}}}\) or the reject symbol \(\bot \).
Perfect correctness is satisfied if for all \(\lambda \in \mathbb {N}\), all \((\textsf{pk},\textsf{sk})\in {\textsf{Gen}}_\lambda \), all \(\textsf{y}\in {\mathcal {Y}}\), all \(\textsf{x}\in {\mathcal {X}}\), all \(\textsf{usk}[\textsf{y}]\in {\textsf{USKGen}}_\lambda (\textsf{sk},\textsf{y})\), and all \((\textsf{K},\textsf{ct})\in {\textsf{Enc}}_\lambda (\textsf{pk},\textsf{x})\), if \({\textsf{p}}_\lambda (\textsf{x},\textsf{y})=1\), we have
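The conversion from an ABKEM to a full ABE scheme mentioned at the start of this subsection can be sketched as follows: the one-bit session key \(\textsf{K}\) acts as a one-time pad. The `encap`/`decap` stubs below merely stand in for \({\textsf{Enc}}_\lambda \) and \({\textsf{Dec}}_\lambda \); they are purely illustrative.

```python
import secrets

# Stub ABKEM: `encap` returns a random one-bit session key K and an opaque
# ciphertext; `decap` stands in for Dec, which recovers K whenever p(x, y) = 1.
def encap(pk, x):
    K = secrets.randbits(1)
    return K, ("stub-ct", x, K)

def decap(usk_y, y, x, ct):
    return ct[2]  # placeholder: the real Dec recomputes K from usk[y]

def abe_encrypt(pk, x, m):  # m is a single bit
    K, ct = encap(pk, x)
    return ct, m ^ K        # one-time symmetric cipher = XOR with K

def abe_decrypt(usk_y, y, x, ct, c):
    return c ^ decap(usk_y, y, x, ct)

m = 1
ct, c = abe_encrypt(None, "attr", m)
assert abe_decrypt(None, "attr", "attr", ct, c) == m
```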
The security requirement we consider is pseudorandomness against chosen plaintext and attribute attacks (\({\mathsf {PR\text{- }AT\text{- }CPA}}\)), defined as follows.
Definition 2.16
(\({\mathsf {PR\text{- }AT\text{- }CPA}}\) Security for ABKEM) Let \(k(\cdot )\) and \(l(\cdot )\) be functions in \(\lambda \). \({\textsf{ABKEM}}\) is \({\mathcal {C}}_2\)-(k, l)-\({\mathsf {PR\text{- }AT\text{- }CPA}}\) secure if for any \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in {\mathcal {C}}_2\), where \(a_\lambda \) is allowed to make k rounds of adaptive queries to \(\textsc {USKGen}(\cdot )\) and in each round it queries l inputs, we have
where the experiments are defined in Fig. 4.
3 Generalized Affine MAC
In this section, we give the definition of generalized affine MAC, which generalizes the notion of standard affine MAC [5] by using predicate encodings, and show how to construct it in the fine-grained setting under the assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\).
3.1 Definitions
The definition of generalized affine MAC is as follows.
Definition 3.1
(Generalized Affine MAC) Let \({\textsf{PE}}=\{{\textsf{sE}}_{\lambda },{\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda },{\textsf{sD}}_{\lambda },{\textsf{rD}}_{\lambda }\}_{\lambda \in \mathbb {N}}\in \mathcal C_1\) be a predicate encoding for \({\textsf{P}}=\{{\textsf{p}}_\lambda \}_{\lambda \in \mathbb {N}}\), where \({\textsf{rE}}_{\lambda }: {\mathcal {Y}}\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\eta \), \({\textsf{kE}}_{\lambda }: {\mathcal {Y}}\times \{0, 1\}\rightarrow \{0, 1\}^\eta \), and \({\textsf{sE}}_{\lambda }:\mathcal X\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\zeta \).
A \({\mathcal {C}}_1\)-generalized affine message authentication code for \({\textsf{PE}}\) is a function family \({\textsf{MAC}}_\textsf{GA}=\{{{\textsf{Gen}}_{\textsf{MAC}}}_\lambda ,{\textsf{Tag}}_\lambda ,{{\textsf{Ver}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}}\in {\mathcal {C}}_1\).
-
1.
\({{\textsf{Gen}}_{\textsf{MAC}}}_\lambda \) returns \(\textsf{sk}_{\textsf{MAC}}\) containing \((\mathbf {{B}}, \mathbf {{X}},x')\), where \(\mathbf {{B}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), \(\mathbf {{X}} \in \{0, 1\}^{\lambda \times \ell }\), and \(x' \in \{0, 1\}\).
-
2.
\({\textsf{Tag}}_\lambda (\textsf{sk}_{\textsf{MAC}},{\textsf{m}}\in {\mathcal {Y}})\) returns a tag \(\tau = (\textbf{t}, \textbf{u}) \in \{0, 1\}^\lambda \times \{0, 1\}^\eta \), computed as
(1)

$$\begin{aligned} \textbf{u} = {\textsf{rE}}_{\lambda }({\textsf{m}}, \mathbf {{X}}^\top \textbf{t}) + {\textsf{kE}}_{\lambda }({\textsf{m}},x') \in \{0, 1\}^\eta . \end{aligned}$$
(2)
-
3.
\({{\textsf{Ver}}_{\textsf{MAC}}}_\lambda (\textsf{sk}_{\textsf{MAC}},{\textsf{m}},\tau =(\textbf{t}, \textbf{u} ))\) verifies if Eq. (2) holds.
Correctness is satisfied if for any \(\textsf{sk}_{\textsf{MAC}}\in {{\textsf{Gen}}_{\textsf{MAC}}}_\lambda \), \({\textsf{m}}\in {\mathcal {Y}}\), and \(\tau \in {\textsf{Tag}}_\lambda (\textsf{sk}_{\textsf{MAC}}, {\textsf{m}})\), we have \(1={{\textsf{Ver}}_{\textsf{MAC}}}_\lambda (\textsf{sk}_{\textsf{MAC}},{\textsf{m}},\tau )\).
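To make Eq. (2) concrete, the following toy sketch computes and verifies a tag over GF(2), with a hypothetical one-bit (\(\eta =1\)) linear encoding standing in for \({\textsf{rE}}_{\lambda }\) and \({\textsf{kE}}_{\lambda }\); the sampling of \(\textbf{t}\) is simplified to uniform bits, whereas the real construction derives \(\textbf{t}\) from \(\mathbf {{B}}\).

```python
import secrets

LAM, ELL = 8, 8  # toy dimensions

def inner(a, b):
    return sum(x & y for x, y in zip(a, b)) & 1

def matT_vec(X, t):  # X^T t over GF(2); X has LAM rows and ELL columns
    return [sum(X[i][j] & t[i] for i in range(LAM)) & 1 for j in range(ELL)]

# Hypothetical one-bit linear encodings standing in for rE/kE:
def rE(m, w):   return inner(m, w)
def kE(m, xp):  return xp & 1

def gen_mac():  # sk_MAC = (X, x'); B is omitted in this sketch
    X = [[secrets.randbits(1) for _ in range(ELL)] for _ in range(LAM)]
    return X, secrets.randbits(1)

def tag(sk, m):
    X, xp = sk
    t = [secrets.randbits(1) for _ in range(LAM)]  # simplified sampling of t
    u = (rE(m, matT_vec(X, t)) + kE(m, xp)) & 1    # Eq. (2)
    return t, u

def verify(sk, m, tau):  # recompute Eq. (2) and compare
    X, xp = sk
    t, u = tau
    return u == ((rE(m, matT_vec(X, t)) + kE(m, xp)) & 1)

sk = gen_mac()
m = [secrets.randbits(1) for _ in range(ELL)]
assert verify(sk, m, tag(sk, m))
```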
The security requirement we consider is pseudorandomness against chosen message attacks (\({\mathsf {PR\text{- }CMA}}\)), defined as follows.
Definition 3.2
(\({\mathsf {PR\text{- }CMA}}\) Security) Let \(k=k(\lambda )\) and \(l=l(\lambda )\) be polynomials in \(\lambda \). \({\textsf{MAC}}_\textsf{GA}\) is \({\mathcal {C}}_2\text{- }(k,l)\text{- }{\mathsf {PR\text{- }CMA}}\) secure if for any \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in \mathcal C_2\), where \(a_\lambda \) is allowed to make k rounds of adaptive queries to \(\textsc {Eval}(\cdot )\) and in each round it queries l inputs, we have
where the experiments are defined in Fig. 5.
Roughly, the \({\mathsf {PR\text{- }CMA}}\) security says that, in the presence of many tags and a challenge token \((\textbf{h}_0,h_1)\), an adversary cannot tell whether \(h_1\) is honestly generated or uniformly random.
Standard Affine MAC. Let . When \({\textsf{p}}_\lambda (\cdot )\) is an identity function, \(\textbf{u}\) is computed as
in Eq. (2), and \(\textbf{h}_0\) is computed as
in Fig. 5, i.e., the predicate encoding is the one for equality (see Fig. 3), then the above definition becomes exactly the same as that of the affine MAC given in [5] for the HPS-based IBKEM, except that we only consider computations over GF(2) and that \(\textbf{t}\) is sampled by \({\textsf{SampYes}}_\lambda \). We give the definition below.
Definition 3.3
(Affine MAC [5]) A generalized affine MAC for the predicate \({\textsf{P}}_\textsf{eq}\) and encoding \({\textsf{PE}}_\textsf{eq}\) defined as in Fig. 3 is called an affine MAC.
3.2 Construction
In this section, we give our construction of an \(\mathsf {AC^0[2]}\)-generalized affine MAC based on \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\). It is a natural extension of the HPS-based standard affine MAC in [5].
Theorem 3.4
If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) and \({\textsf{PE}}=\{{\textsf{sE}}_{\lambda },{\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda },{\textsf{sD}}_{\lambda },{\textsf{rD}}_{\lambda }\}_{\lambda \in \mathbb {N}}\in \mathsf {AC^0[2]}\) is a predicate encoding, where \({\textsf{rE}}_{\lambda }: \mathcal Y\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\eta \), \({\textsf{kE}}_{\lambda }: {\mathcal {Y}}\times \{0, 1\}\rightarrow \{0, 1\}^\eta \), and \({\textsf{sE}}_{\lambda }:{\mathcal {X}}\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\zeta \), then \({\textsf{MAC}}_\textsf{GA}\) is an \(\mathsf {AC^0[2]}\)-generalized affine MAC that is \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }CMA}}\) secure, where k is any constant and \(l=l(\lambda )\) is any polynomial in \(\lambda \).
Proof
First, we note that \((\{{{\textsf{Gen}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}},\{{\textsf{Tag}}_\lambda \}_{\lambda \in \mathbb {N}},\{{{\textsf{Ver}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}})\) are computable in \(\mathsf {AC^0[2]}\), since they only involve operations including sampling random bits and multiplication of a constant number of matrices, which can be done in constant depth with PARITY gates. Also, it is straightforward that \({\textsf{MAC}}_\textsf{GA}\) satisfies correctness.
We now prove that \({\textsf{MAC}}_\textsf{GA}\) is \(\mathsf {NC^1}\)-\((k,l)\text{- }{\mathsf {PR\text{- }CMA}}\) secure by defining a sequence of intermediate games as in Fig. 7.
Let \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) be any adversary against the \({\mathsf {PR\text{- }CMA}}\)-security of \({\textsf{MAC}}_\textsf{GA}\). Game \(\textsf{G}_0\) is the real attack game. In games \(\textsf{G}_{1,i}\), the first \(i-1\) queries to the \(\textsc {Eval}\) oracle are answered with \((\textbf{t},\textbf{u})\), where and \(\textbf{u}\) contains no information on \({\textsf{kE}}_{\lambda }({\textsf{m}},x')\), and the remaining queries are answered as in the real scheme. To interpolate between \(\textsf{G}_{1,i}\) and \(\textsf{G}_{1,i+1}\), we also define \(\textsf{G}_{1,i}'\), which answers the i-th query to \(\textsc {Eval}\) by picking . By definition, we have \(\textsf{G}_0=\textsf{G}_{1,1}\). \(\square \)
Lemma 3.5
\(\Pr [{\mathsf {PR\text{- }CMA}}_{\textsf{real}}^{a_\lambda } \Rightarrow 1] =\Pr [\textsf{G}^{a_\lambda }_0 \Rightarrow 1]= \Pr [\textsf{G}^{a_\lambda }_{1,1}\Rightarrow 1]\).
Lemma 3.6
There exists an adversary \(\mathcal {B}_{1,i}=\{b_\lambda ^{1,i}\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda ^{1,i}\) breaks the fine-grained subset membership problem (see Definition 2.12), which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Theorem 2.13, with probability
Proof
Games \(\textsf{G}_{1,i}\) and \(\textsf{G}'_{1,i}\) only differ in the distribution of \(\textbf{t}\) returned by the \(\textsc {Eval}\) oracle for its i-th query. We build \(b_\lambda ^{1,i}\) as follows.
The distinguisher \(b_\lambda ^{1,i}\) runs in exactly the same way as the challenger in \(\textsf{G}_{1,i}\) except that for its i-th query, it obtains \(\textbf{t}\) which is sampled as or . When \(a_\lambda \) outputs \(\beta \in \{0, 1\}\), \(b_\lambda ^{1,i}\) outputs \(\beta \) if no \({\textsf{m}}\) such that \({\textsf{p}}_\lambda ({\textsf{m}}^*,{\textsf{m}})=1\) was queried to \(\textsc {Eval}\). Otherwise, \(b_\lambda ^{1,i}\) outputs 0.
Since \(a_\lambda \) only makes constant rounds of queries, all the operations in \(b_\lambda \) are performed in \(\mathsf {NC^1}\). Hence, we have \(\mathcal {B}_{1,i}\in \mathsf {NC^1}\).
When \(\textbf{t}\) is sampled as (respectively, ), the view of \(a_\lambda \) is exactly the same as its view in \(\textsf{G}_{1,i}\) (respectively, \(\textsf{G}_{1,i}'\)). Thus the advantage of \(b_\lambda ^{1,i}\) in breaking the subset membership problem is \(| \Pr [\textsf{G}'^{a_\lambda }_{1,i}\Rightarrow 1] - \Pr [\textsf{G}^{a_\lambda }_{1,i} \Rightarrow 1]|\), completing this part of the proof. \(\square \)
Lemma 3.7
\(\Pr [\textsf{G}_{1,i+1}^{a_\lambda }\Rightarrow 1] =\Pr [\textsf{G}_{1,i}'^{a_\lambda }\Rightarrow 1]\).
Proof
Let \({\textsf{m}}\) be the i-th query to \(\textsc {Eval}\) such that \({\textsf{p}}_\lambda ({\textsf{m}}^*,{\textsf{m}})\ne 1\) and let \((\textbf{t},\textbf{u})\) be its tag. We have \(\textbf{t}\notin {{\,\textrm{Im}\,}}(\mathbf {{B}})\) due to Theorem 2.13. We use an information-theoretic argument to show that in \(\textsf{G}'_{1,i}\), \(\textbf{u}\) does not reveal any information on \(x'\). Information-theoretically, \(a_\lambda \) may learn \(\mathbf {{B}}^\top \mathbf {{X}}\) from the c-th query for each \(c>i\). Thus, for and , \(a_\lambda \) obtains the distribution of
This distribution is identical to the distribution of
since the distribution of
and
are identical due to the \(\alpha \)-privacy of \({\textsf{PE}}\), completing this part of the proof. \(\square \)
Lemma 3.8
\(\Pr [\textsf{G}^{a_\lambda }_{2} \Rightarrow 1] = \Pr [\textsf{G}^{a_\lambda }_{1,k\cdot l+1} \Rightarrow 1]\).
Proof
Note that \(a_\lambda \) can ask at most \(k\cdot l\)-many \(\textsc {Eval}\) queries. In both \(\textsf{G}_{1,k\cdot l+1}\) and \(\textsf{G}_2\), all the answers of \(\textsc {Eval}\) are independent of \(x'\). Hence, \(h_1\) from \(\textsf{G}_{1,k\cdot l+1}\) is uniform in the view of \(a_\lambda \). \(\square \)
We now undo all the previous steps in reverse order, as in Fig. 8. Then, by applying the above arguments in reverse order, we obtain the following lemma.
Lemma 3.9
There exists an adversary \(\mathcal {B}_2=\{b^2_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda ^2\) breaks the fine-grained subset membership problem with probability at least
Putting all above together, Theorem 3.4 immediately follows. \(\square \)
An Affine MAC. By instantiating the underlying predicate encoding in Fig. 6 with the encoding for equality (see Fig. 3), we immediately obtain an affine MAC \({\textsf{MAC}}=\{{{\textsf{Gen}}_{\textsf{MAC}}}_\lambda ,{\textsf{Tag}}_\lambda ,{{\textsf{Ver}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}}\) as in Fig. 9 for message space \(\{0,1\}^\ell \), which will be used to construct an IBE scheme in \(\mathsf {NC^1}\) later. Formally, we have the following corollary derived from Theorem 3.4.
Corollary 3.10
If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then \({\textsf{MAC}}\) is an \(\mathsf {AC^0[2]}\)-affine MAC that is \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }CMA}}\) secure, where k is any constant and \(l=l(\lambda )\) is any polynomial in \(\lambda \).
4 Fine-Grained Secure Identity-Based Encryption
In this section, we present our fine-grained IBE scheme, which captures the core techniques of our ABE scheme given later in Sect. 5.
4.1 Definition
We now give the definition of fine-grained IBKEM, which is a special case of fine-grained ABKEM (see Definition 2.15) where the boolean predicate is restricted to be the equality predicate.
Definition 4.1
(Identity-based key encapsulation) A \({\mathcal {C}}_1\)-identity-based key encapsulation (IBKEM) scheme is a function family \({\textsf{IBKEM}}=\{{\textsf{Gen}}_\lambda ,{\textsf{USKGen}}_\lambda ,{\textsf{Enc}}_\lambda ,{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\in \mathcal C_1\) with the following properties.
-
\({\textsf{Gen}}_\lambda \) returns the (master) public/secret key \((\textsf{pk},\textsf{sk})\). We assume that \(\textsf{pk}\) implicitly defines an identity space \(\mathcal{I}\mathcal{D}\), a key space \(\mathcal {K}\), and a ciphertext space \(\mathcal {C}\).
-
\({\textsf{USKGen}}_\lambda (\textsf{sk},\textsf{id})\) returns a user secret-key \(\textsf{usk}[\textsf{id}]\) for an identity \(\textsf{id}\in \mathcal{I}\mathcal{D}\).
-
\({\textsf{Enc}}_\lambda (\textsf{pk},\textsf{id})\) returns a symmetric key \(\textsf{K}\in \mathcal {K}\) together with a ciphertext \(\textsf{ct}\in \mathcal {C}\) w.r.t. \(\textsf{id}\in \mathcal{I}\mathcal{D}\).
-
\({\textsf{Dec}}_\lambda (\textsf{usk}[\textsf{id}],\textsf{id},\textsf{ct})\) deterministically returns a decapsulated key \(\textsf{K}\in {{\mathcal {K}}}\) or the reject symbol \(\bot \).
Perfect correctness is satisfied if for all \(\lambda \in \mathbb {N}\), all \((\textsf{pk},\textsf{sk})\in {\textsf{Gen}}_\lambda \), all \(\textsf{id}\in \mathcal{I}\mathcal{D}\), all \(\textsf{usk}[\textsf{id}]\in {\textsf{USKGen}}_\lambda (\textsf{sk},\textsf{id})\), and all \((\textsf{K},\textsf{ct})\in {\textsf{Enc}}_\lambda (\textsf{pk},\textsf{id})\), we have
The security requirement we consider is pseudorandomness against chosen plaintext and identity attacks (\({\mathsf {PR\text{- }ID\text{- }CPA}}\)), defined as follows.
Definition 4.2
(\({\mathsf {PR\text{- }ID\text{- }CPA}}\) Security for IBKEM) Let \(k(\cdot )\) and \(l(\cdot )\) be functions in \(\lambda \). \({\textsf{IBKEM}}\) is \({\mathcal {C}}_2\)-(k, l)-\({\mathsf {PR\text{- }ID\text{- }CPA}}\) secure if for any \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in {\mathcal {C}}_2\), where \(a_\lambda \) is allowed to make k rounds of adaptive queries to \(\textsc {USKGen}(\cdot )\) and in each round it queries l inputs, we have
where the experiments are defined in Fig. 10.
4.2 Construction
Let \({\textsf{MAC}}=\{{{\textsf{Gen}}_{\textsf{MAC}}}_\lambda ,{\textsf{Tag}}_\lambda ,{{\textsf{Ver}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) be an affine MAC over \(\{0,1\}^\lambda \) with message space \(\mathcal{I}\mathcal{D}\) in Fig. 9. Our IBKEM \({\textsf{IBKEM}}=\{{\textsf{Gen}}_\lambda ,{\textsf{USKGen}}_\lambda ,{\textsf{Enc}}_\lambda ,{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\) for key-space \({{\mathcal {K}}}=\{0,1\}\) and identity space \(\{0, 1\}^\ell \) is defined as in Fig. 11.Footnote 2
Theorem 4.3
Under the assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) and the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }CMA}}\) security of \({\textsf{MAC}}\), where k is any constant and \(l=l(\lambda )\) is any polynomial in \(\lambda \), \({\textsf{IBKEM}}\) is an \(\mathsf {AC^0[2]}\)-IBKEM that is \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }ID\text{- }CPA}}\) secure against \(\mathsf {NC^1}\).
Proof
First, we note that \(\{{\textsf{Gen}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{\textsf{USKGen}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{\textsf{Enc}}_\lambda \}_{\lambda \in \mathbb {N}}\), and \(\{{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\) are computable in \(\mathsf {AC^0[2]}\), since they only involve operations including multiplication of a constant number of matrices, sampling random bits, and running \({\textsf{MAC}}\in \mathsf {AC^0[2]}\).
Correctness follows from the fact that by Eq. (3) in Sect. 3.1, we have
and the difference of the two elements yields \(\textsf{K}= (\textbf{y}'^\top ||x')\mathbf {{A}}\textbf{r}= \textbf{z}' \cdot \textbf{r}\).
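The equality \(\textsf{K}= (\textbf{y}'^\top ||x')\mathbf {{A}}\textbf{r}= \textbf{z}' \cdot \textbf{r}\) is just associativity of matrix-vector products over GF(2). A minimal numerical check, with dimensions simplified and \(\mathbf {{A}}\) sampled uniformly instead of via the scheme's sampling procedures:

```python
import secrets

LAM = 6  # toy dimension; the real scheme fixes A's shape via its sampler

def inner(a, b):
    return sum(x & y for x, y in zip(a, b)) & 1

def mat_vec(A, r):   # A r over GF(2)
    return [inner(row, r) for row in A]

def vec_mat(v, A):   # v^T A over GF(2)
    return [sum(v[i] & A[i][j] for i in range(len(v))) & 1
            for j in range(len(A[0]))]

yx = [secrets.randbits(1) for _ in range(LAM)]                 # (y'^T || x')
A = [[secrets.randbits(1) for _ in range(LAM)] for _ in range(LAM)]
r = [secrets.randbits(1) for _ in range(LAM)]

z = vec_mat(yx, A)               # the published commitment z' = (y'^T || x') A
K_enc = inner(z, r)              # encapsulation side: z' . r
K_dec = inner(yx, mat_vec(A, r)) # decapsulation side: (y'^T || x')(A r)
assert K_enc == K_dec            # associativity over GF(2)
```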
Let \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\) be any adversary against the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }ID\text{- }CPA}}\) security of \({\textsf{IBKEM}}\). We now prove the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }ID\text{- }CPA}}\) security by defining a sequence of games \(\textsf{G}_0\text{- }\textsf{G}_6\) as in Fig. 12. Roughly, in the first four games, we show how to extract a challenge token for \({\textsf{MAC}}\) from the challenge session key and ciphertext by switching the distribution of \(\mathbf {{A}}\) twice and changing the distribution of the randomness \(\textbf{r}\) during the switching procedure. In the last two games, we show that the commitments \(\mathbf {{Z}}_i\) and \(\textbf{z}'\) perfectly hide the secrets, and the answers of queries made by \(a_\lambda \) reveal no useful information other than the tags and token for \({\textsf{MAC}}\). \(\square \)
Lemma 4.4
\(\Pr [{\mathsf {PR\text{- }ID\text{- }CPA}}_{\textsf{real}}^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}^{a_\lambda }_1 \Rightarrow 1]=\Pr [\textsf{G}^{a_\lambda }_0 \Rightarrow 1]\).
Proof
\(\textsf{G}_0\) is the real attack game. In game \(\textsf{G}_1\), we change the simulation of \(\textbf{c}^*_0\), \(\textbf{c}^*_1\) and \(\textsf{K}^*\) in \(\textsc {Enc}(\textsf{id}^*)\) by substituting \(\mathbf {{Z}}_i\) and \(\textbf{z}'\) with their respective definitions and substituting \(\mathbf {{A}}\) with \(\mathbf {{A}}+\mathbf {{N}}^\lambda \). Since we have
the view of \(a_\lambda \) in \(\textsf{G}_1\) is identical to its view in \(\textsf{G}_0\), completing this part of the proof. \(\square \)
Lemma 4.5
There exists an adversary \(\mathcal {B}_1=\{b^1_\lambda \}_{\lambda \in \mathbb {N}}\) such that \(b^1_\lambda \) breaks the fine-grained matrix linear assumption (see Definition 2.5), which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Lemma 2.6, with advantage
Proof
\(\textsf{G}_1\) and \(\textsf{G}_2\) only differ in the distribution of \(\mathbf {{A}}\), namely, or , and we build the distinguisher \(b_\lambda ^1\) as follows.
\(b_\lambda ^1\) runs in exactly the same way as the challenger of \(\textsf{G}_1\) except that in \(\textsc {Init}\), instead of generating \(\mathbf {{A}}\) by itself, it takes as input \(\mathbf {{A}}^\top \) generated as or from its own challenger. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^1\) outputs \(\beta \) as well if \(\textsf{id}^*\) was not queried to \(\textsc {USKGen}\). Otherwise, \(b_\lambda ^1\) outputs 0.
If \(\mathbf {{A}}\) is generated as (respectively, ), the view of \(a_\lambda \) is the same as its view in \(\textsf{G}_1\) (respectively, \(\textsf{G}_2\)). Hence, the probability that \(b_\lambda ^1\) breaks the fine-grained matrix linear assumption is
Moreover, since \(a_\lambda \) only makes constant rounds of queries, all operations in \(b_\lambda ^1\) are performed in \(\mathsf {NC^1}\). Hence, we have \(\mathcal {B}_1 = \{b_{\lambda }^1\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof. \(\square \)
Lemma 4.6
\(\Pr [\textsf{G}_3^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_2^{a_\lambda }\Rightarrow 1]\).
Proof
In this game, we sample \(\textbf{r}\) in \(\textsc {Enc}(\textsf{id}^*)\) as instead of . According to Lemma 2.9, the distribution of \(\mathbf {{A}}+\mathbf {{N}}^\lambda \) in \(\textsf{G}_2\) and \(\textsf{G}_3\) is identical to that of a matrix sampled from \({\textsf{ZeroSamp}}\). Then this lemma follows from Lemma 2.11 immediately. \(\square \)
Lemma 4.7
There exists an adversary \(\mathcal {B}_2=\{b_\lambda ^2\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda ^2\) breaks the fine-grained matrix linear assumption with advantage
Proof
\(\textsf{G}_3\) and \(\textsf{G}_4\) only differ in the distribution of \(\mathbf {{A}}\), namely, or , and we build the distinguisher \(b_\lambda ^2\) as follows.
\(b_\lambda ^2\) runs in exactly the same way as the challenger of \(\textsf{G}_3\) except that in \(\textsc {Init}\), instead of generating \(\mathbf {{A}}\) by itself, it takes as input \(\mathbf {{A}}^\top \) generated as or from its own challenger. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^2\) outputs \(\beta \) as well if \(\textsf{id}^*\) was not queried to \(\textsc {USKGen}\). Otherwise, \(b_\lambda ^2\) outputs 0.
If \(\mathbf {{A}}\) is generated as (respectively, ), the view of \(a_\lambda \) is the same as its view in \(\textsf{G}_3\) (respectively, \(\textsf{G}_4\)). Hence, the probability that \(b_\lambda ^2\) breaks the fine-grained matrix linear assumption is
Moreover, since \(a_\lambda \) only makes constant rounds of queries, all operations in \(b_\lambda ^2\) are performed in \(\mathsf {NC^1}\). Hence, we have \(\mathcal {B}_2 = \{b_{\lambda }^2\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof. \(\square \)
Lemma 4.8
\(\Pr [\textsf{G}_5^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_4^{a_\lambda }\Rightarrow 1]\).
Proof
In \(\textsf{G}_5\), we do not use \((\mathbf {{Y}}_i)_{i=0}^\ell \) and \(\textbf{y}'\) in \(\textsc {USKGen}(\textsf{id})\) or \(\textsc {Enc}(\textsf{id}^*)\) any more. We give the sampling procedure for \(\mathbf {{A}}\) in an explicit way and change the simulation of \(\mathbf {{Z}}_i\), \(\textbf{z}'\), \(\textbf{v}\), \(\textbf{c}_1^*\), and \(\textsf{K}^*\) as in Fig. 12. We now show that all the changes are purely conceptual.
In \(\textsf{G}_5\), we generate \(\mathbf {{A}}\) by sampling and , and setting \(\mathbf {{A}}^\top =\mathbf {{R}}_0 \mathbf {{M}}_0^\lambda \mathbf {{R}}_1\). This is exactly the “zero-sampling” procedure, in which case, we have
and
Hence, the distributions of \(\mathbf {{Z}}_i\) and \(\textbf{c}_1^*\) in \(\textsf{G}_5\) remain the same, and the distributions of \(\textbf{z}'\) and \(\textsf{K}^*\) can be analyzed in the same way. The distribution of \(\textbf{v}\) does not change either, since
Putting all above together, Lemma 4.8 immediately follows. \(\square \)
Lemma 4.9
There exists an adversary \(\mathcal {B}_3=\{b_\lambda ^3\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda ^3\) breaks the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }CMA}}\)-security of \({\textsf{MAC}}\) with advantage
Proof
The challenger of \(\textsf{G}_6\) answers the \(\textsc {Enc}(\textsf{id}^*)\) query by choosing random \(\textsf{K}^*\). We build \(b_\lambda ^3\) as in Fig. 13 to show that the differences between \(\textsf{G}_6\) and \(\textsf{G}_5\) can be bounded by the advantage of breaking the \({\mathsf {PR\text{- }CMA}}\) security of \({\textsf{MAC}}\).
\(b_\lambda ^3\) runs in the same way as the challenger of \(\textsf{G}_5\) except that it samples \(\mathbf {{D}}_i\) and \(\textbf{d}'\) uniformly at random from \(\{0,1\}^{\lambda \times (\lambda -1)}\) and \(\{0,1\}^{1\times (\lambda -1)}\), respectively. This does not change the view of \(a_\lambda \), since \(\mathbf {{Y}}_i\) and \(\textbf{y}'\) were uniformly sampled in \(\textsf{G}_5\). Moreover, each time it receives a query \(\textsf{id}\) to \(\textsc {USKGen}\), \(b_\lambda ^3\) forwards \(\textsf{id}\) to its evaluation oracle \(\textsc {Eval}\) to obtain the answer \((\textbf{t},u)\), and on receiving the query \(\textsf{id}^*\) to \(\textsc {Enc}\), \(b_\lambda ^3\) forwards \(\textsf{id}^*\) to its challenge oracle \(\textsc {Chal}\) and uses the answer \((\textbf{h}_0,h_1)\) to simulate \(\textbf{r}\), \(\textbf{c}_1^*\), and \(\textsf{K}^*\) as in Fig. 13. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^3\) outputs \(\beta \) as well if \(\textsf{id}^*\) was not queried to \(\textsc {USKGen}\). Otherwise, \(b_\lambda ^3\) outputs 0.
If \(h_1\) is uniform (i.e., \(b^3_\lambda \) is in Game \({\mathsf {PR\text{- }CMA}}_{\textsf{rand}}\)) then the view of \(a_\lambda \) is identical to its view in \(\textsf{G}_6\). If \(h_1\) is real (i.e., \(b^3_\lambda \) is in Game \({\mathsf {PR\text{- }CMA}}_{\textsf{real}}\)), then the view of \(a_\lambda \) is identical to its view in \(\textsf{G}_5\). Thus the advantage of \(b_\lambda ^3\) is exactly
Moreover, since all operations in \(b^3_{\lambda }\) are performed in \(\mathsf {NC^1}\), we have \(\mathcal {B}_3 = \{b^3_{\lambda }\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof. \(\square \)
We now undo all the previous steps in reverse order, as in Fig. 14. Note that the view of the adversary in \(\textsf{H}_0\) (respectively, \(\textsf{H}_4\)) is identical to its view in \(\textsf{G}_6\) (respectively, \({\mathsf {PR\text{- }ID\text{- }CPA}}_{\textsf{rand}}\)). By applying the above arguments in reverse order, we obtain the following lemma.
Lemma 4.10
There exists an adversary \(\mathcal {B}_4=\{b_\lambda ^4\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda ^4\) breaks the fine-grained matrix linear assumption with advantage
Putting all above together, Theorem 4.3 immediately follows. \(\square \)
Extension to IBKEM with Large Key Space. The key space of the above IBKEM is \(\{0, 1\}\), but by running it in parallel, we can easily extend it to an IBKEM with a large key space. The resulting scheme can still be computed in \(\mathsf {AC^0[2]}\), since parallel repetition does not increase the circuit depth. The same extension can also be made for our fine-grained secure ABKEM given later in Sect. 5.
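The parallel-repetition extension can be sketched as follows; `encap_bit`/`decap_bit` are stubs for one instance of the single-bit IBKEM above, and all names are illustrative.

```python
import secrets

# Stub single-bit KEM standing in for one instance of the IBKEM above.
def encap_bit(pk, ident):
    K = secrets.randbits(1)
    return K, ("ct", ident, K)   # the stub ciphertext carries K for decap

def decap_bit(usk, ident, ct):
    return ct[2]

# Parallel repetition: n independent instances yield an n-bit session key.
# The instances run side by side, so the circuit depth does not grow.
def encap_n(pks, ident):
    pairs = [encap_bit(pk, ident) for pk in pks]
    return [K for K, _ in pairs], [ct for _, ct in pairs]

def decap_n(usks, ident, cts):
    return [decap_bit(usk, ident, ct) for usk, ct in zip(usks, cts)]

n = 16
K, cts = encap_n([None] * n, "alice")
assert len(K) == n and decap_n([None] * n, "alice", cts) == K
```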
Extension to QA-NIZK. Our techniques for proving the hiding property of the underlying commitment scheme in our IBKEM can also be used to construct an efficient fine-grained QA-NIZK in \(\mathsf {NC^1}\) with adaptive soundness. We refer the reader to “Appendix C” for details.
5 Fine-Grained Secure Attribute-Based Encryption
In this section, we generalize our IBE scheme to a fine-grained ABE scheme by using predicate encodings [10, 36]. By instantiating the underlying encodings in different ways, we can achieve ABEs for inner product, non-zero inner product, spatial encryption, doubly spatial encryption, boolean span programs, and arithmetic span programs, and also broadcast encryption and fuzzy IBE schemes, all of which are computable in \(\mathsf {AC^0[2]}\) and secure against \(\mathsf {NC^1}\) under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\). We refer the reader to “Appendix C” for several instances of the encodings and also to [10] for more instances. We note that the encodings in [10] are defined over GF(p), while ours are over GF(2). However, the proofs for the encodings in [10] can be adapted to our case, since the linearity and \(\alpha \)-reconstruction properties that hold over GF(p) also hold over GF(2), and by standard linear-independence arguments over GF(2), \(\alpha \)-privacy also holds in our case.
Let \({\textsf{PE}}=\{{\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda },{\textsf{sE}}_{\lambda },{\textsf{sD}}_{\lambda },{\textsf{rD}}_{\lambda }\}_{\lambda \in \mathbb {N}}\in \mathsf {AC^0[2]}\) be a predicate encoding for \({\textsf{P}}=\{{\textsf{p}}_\lambda \}_{\lambda \in \mathbb {N}}\) with \({\textsf{rE}}_{\lambda }:{\mathcal {Y}}\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\eta \), \({\textsf{kE}}_{\lambda }: \mathcal Y\times \{0, 1\}\rightarrow \{0, 1\}^\eta \), \({\textsf{sE}}_{\lambda }:{\mathcal {X}}\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\zeta \), \({\textsf{sD}}_{\lambda }:{\mathcal {X}}\times \mathcal Y\times \{0, 1\}^\zeta \rightarrow \{0, 1\}\), and \({\textsf{rD}}_{\lambda }:{\mathcal {X}}\times {\mathcal {Y}}\times \{0, 1\}^\eta \rightarrow \{0, 1\}\). Let \({\textsf{MAC}}_\textsf{GA}=\{{{\textsf{Gen}}_{\textsf{MAC}}}_\lambda ,{\textsf{Tag}}_\lambda ,{{\textsf{Ver}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {AC^0[2]}\) be a \({\textsf{PE}}\)-generalized affine MAC over \(\{0, 1\}^\lambda \) with message space \({\mathcal {Y}}\). Our ABKEM \({\textsf{ABKEM}}=\{{\textsf{Gen}}_\lambda ,{\textsf{USKGen}}_\lambda ,{\textsf{Enc}}_\lambda ,{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\) is defined as in Fig. 15.
Theorem 5.1
Under the assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) and the \(\mathsf {NC^1}\)-(k, l)-\({\textsf{sE}}_{\lambda }\)-\({\mathsf {PR\text{- }CMA}}\)-security of \({\textsf{MAC}}_\textsf{GA}\), where k is any constant and \(l=l(\lambda )\) is any polynomial in \(\lambda \), \({\textsf{ABKEM}}\) is an \(\mathsf {AC^0[2]}\)-ABKEM that is \(\mathsf {NC^1}\)-(k, l)-\({\mathsf {PR\text{- }AT\text{- }CPA}}\) secure against \(\mathsf {NC^1}\).
Proof
First, we note that \(\{{\textsf{Gen}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{\textsf{USKGen}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{\textsf{Enc}}_\lambda \}_{\lambda \in \mathbb {N}}\), and \(\{{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\) are computable in \(\mathsf {AC^0[2]}\), since they only involve operations including multiplication of a constant number of matrices, sampling random bits, and running \({\textsf{MAC}}_\textsf{GA}\in \mathsf {AC^0[2]}\).
By Eq. (2) in Sect. 3.1, we have
and
Then, due to restricted \(\alpha \)-reconstruction (see Definition 2.14), the difference of the above equations yields \(\textsf{K}= ({\textbf{y}'}^\top ||x')\mathbf {{A}}\textbf{r}= \textbf{z}' \cdot \textbf{r}\), i.e., correctness is satisfied.
Let \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\) be any adversary against the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }AT\text{- }CPA}}\) security of \({\textsf{ABKEM}}\). We now prove the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }AT\text{- }CPA}}\) security by defining a sequence of games \(\textsf{G}_0\text{- }\textsf{G}_6\) as in Fig. 16. Roughly, in the first four games, we show how to extract a challenge token for \({\textsf{MAC}}_\textsf{GA}\) from the challenge session key and ciphertext by switching the distribution of \(\mathbf {{A}}\) twice and changing the distribution of the randomness \(\textbf{r}\) during the switching procedure. In the last two games, we show that the commitments \(\mathbf {{Z}}_i\) and \(\textbf{z}'\) perfectly hide the secrets, and that the answers to the queries made by \(a_\lambda \) reveal no useful information other than the tags and the token for \({\textsf{MAC}}_\textsf{GA}\). \(\square \)
Lemma 5.2
\(\Pr [{\mathsf {PR\text{- }AT\text{- }CPA}}_{\textsf{real}}^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}^{a_\lambda }_1 \Rightarrow 1]=\Pr [\textsf{G}^{a_\lambda }_0 \Rightarrow 1]\).
Proof
\(\textsf{G}_0\) is the real attack game. In game \(\textsf{G}_1\), we change the simulation of \(\textbf{c}^*_0\), \(\mathbf {{C}}^*_1\) and \(\textsf{K}^*\) in \(\textsc {Enc}(\textsf{x})\) by substituting \(\mathbf {{Z}}_i\) and \(\textbf{z}'\) with their respective definitions and substituting \(\mathbf {{A}}\) with \(\mathbf {{A}}+\mathbf {{N}}^\lambda \). Since we have
the view of \(a_\lambda \) in \(\textsf{G}_1\) is identical to its view in \(\textsf{G}_0\), completing this part of the proof. \(\square \)
Lemma 5.3
There exists an adversary \(\mathcal {B}_1=\{b^1_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b^1_\lambda \) breaks the fine-grained matrix linear assumption (see Definition 2.5), which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Lemma 2.6, with advantage
Proof
\(\textsf{G}_1\) and \(\textsf{G}_2\) only differ in the distribution of \(\mathbf {{A}}\), namely, \(\mathbf {{A}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) or \(\mathbf {{A}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\), and we build the distinguisher \(b^1_\lambda \) as follows.
\(b_\lambda ^1\) runs in exactly the same way as the challenger of \(\textsf{G}_1\) except that in \(\textsc {Init}\), instead of generating \(\mathbf {{A}}\) by itself, it takes as input \(\mathbf {{A}}^\top \), generated as \(\mathbf {{A}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) or \(\mathbf {{A}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\), from its own challenger. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^1\) outputs \(\beta \) as well if no \(\textsf{y}\) such that \({\textsf{p}}_\lambda (\textsf{x},\textsf{y})=1\) was queried to \(\textsc {USKGen}\). Otherwise, \(b_\lambda ^1\) outputs 0.
If \(\mathbf {{A}}^\top \) is generated as \(\mathbf {{A}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) (respectively, \(\mathbf {{A}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\)), the view of \(a_\lambda \) is the same as its view in \(\textsf{G}_1\) (respectively, \(\textsf{G}_2\)). Hence, the probability that \(b_\lambda ^1\) breaks the fine-grained matrix linear assumption is
Moreover, since \(a_\lambda \) only makes a constant number of rounds of queries, all operations in \(b_\lambda ^1\) are performed in \(\mathsf {NC^1}\). Hence, we have \(\mathcal {B}_1 = \{b_{\lambda }^1\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof.
\(\square \)
Lemma 5.4
\(\Pr [\textsf{G}_3^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_2^{a_\lambda }\Rightarrow 1]\).
Proof
In this game, we change the distribution from which \(\textbf{r}\) is sampled in \(\textsc {Enc}(\textsf{x})\). According to Lemma 2.9, the distributions of \(\mathbf {{A}}+\mathbf {{N}}^\lambda \) in both \(\textsf{G}_2\) and \(\textsf{G}_3\) are identical to that of a matrix sampled from \({\textsf{ZeroSamp}}\). Then this lemma follows immediately from Lemma 2.11. \(\square \)
Lemma 5.5
There exists an adversary \(\mathcal {B}_2=\{b^2_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b^2_\lambda \) breaks the fine-grained matrix linear assumption with advantage
Proof
\(\textsf{G}_3\) and \(\textsf{G}_4\) only differ in the distribution of \(\mathbf {{A}}\), namely, \(\mathbf {{A}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\) or \(\mathbf {{A}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\), and we build the distinguisher \(b^2_\lambda \) against Lemma 2.6 as follows.
\(b_\lambda ^2\) runs in exactly the same way as the challenger of \(\textsf{G}_3\) except that in \(\textsc {Init}\), instead of generating \(\mathbf {{A}}\) by itself, it takes as input \(\mathbf {{A}}^\top \), generated as \(\mathbf {{A}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\) or \(\mathbf {{A}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\), from its own challenger. When \(a_\lambda \) outputs \(\beta \), \(b^2_\lambda \) outputs \(\beta \) as well if no \(\textsf{y}\) such that \({\textsf{p}}_\lambda (\textsf{x},\textsf{y})=1\) was queried to \(\textsc {USKGen}\). Otherwise, \(b_\lambda ^2\) outputs 0.
If \(\mathbf {{A}}^\top \) is generated as \(\mathbf {{A}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\) (respectively, \(\mathbf {{A}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\)), the view of \(a_\lambda \) is the same as its view in \(\textsf{G}_3\) (respectively, \(\textsf{G}_4\)). Hence, the probability that \(b_\lambda ^2\) breaks the fine-grained matrix linear assumption is
Moreover, since \(a_\lambda \) only makes a constant number of rounds of queries, all operations in \(b^2_\lambda \) are performed in \(\mathsf {NC^1}\). Hence, we have \(\mathcal {B}_2 = \{b^2_{\lambda }\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof.
\(\square \)
Lemma 5.6
\(\Pr [\textsf{G}_5^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_4^{a_\lambda }\Rightarrow 1]\).
Proof
In \(\textsf{G}_5\), we no longer use \((\mathbf {{Y}}_i)_{i=1}^\ell \) and \(\textbf{y}'\) in \(\textsc {USKGen}(\textsf{y})\) or \(\textsc {Enc}(\textsf{x})\). We make the sampling procedure for \(\mathbf {{A}}\) explicit and change the simulation of \(\mathbf {{Z}}_i\), \(\textbf{z}'\), \(\textbf{v}\), \(\mathbf {{C}}_1^*\), and \(\textsf{K}^*\) as in Fig. 16. We now show that all the changes are purely conceptual.
In \(\textsf{G}_5\), we generate \(\mathbf {{A}}\) by sampling \(\mathbf {{R}}_0\leftarrow {\textsf{LSamp}}(\lambda )\) and \(\mathbf {{R}}_1\leftarrow {\textsf{RSamp}}(\lambda )\), and setting \(\mathbf {{A}}^\top =\mathbf {{R}}_0 \mathbf {{M}}_0^\lambda \mathbf {{R}}_1\). This is exactly the “zero-sampling” procedure, in which case we have
and
Hence, the distributions of \(\mathbf {{Z}}_i\) in \(\textsf{G}_5\) remain the same, and the distributions of \(\textbf{z}'\) and \(\textsf{K}^*\) can be analyzed in the same way. The distribution of \(\textbf{v}\) does not change either, since
Putting all of the above together, Lemma 5.6 immediately follows. \(\square \)
Lemma 5.7
There exists an adversary \(\mathcal {B}_3=\{b^3_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b^3_\lambda \) breaks the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }CMA}}\) security of \({\textsf{MAC}}_\textsf{GA}\) with advantage
Proof
The challenger of \(\textsf{G}_6\) answers the \(\textsc {Enc}(\textsf{x})\) query by choosing a random \(\textsf{K}^*\). We build \(b^3_\lambda \) as in Fig. 17 to show that the difference between \(\textsf{G}_6\) and \(\textsf{G}_5\) can be bounded by its advantage in breaking the \({\mathsf {PR\text{- }CMA}}\) security of \({\textsf{MAC}}_\textsf{GA}\).
\(b_\lambda ^3\) runs in the same way as the challenger of \(\textsf{G}_5\) except that it samples \(\mathbf {{D}}_i\) and \(\textbf{d}'\) uniformly at random from \(\{0,1\}^{\lambda \times (\lambda -1)}\) and \(\{0,1\}^{1\times (\lambda -1)}\), respectively. This does not change the view of \(a_\lambda \) since \(\mathbf {{Y}}_i\) and \(\textbf{y}'\) were uniformly sampled in \(\textsf{G}_5\). Moreover, on receiving a query \(\textsf{y}\) to \(\textsc {USKGen}\), \(b_\lambda ^3\) forwards \(\textsf{y}\) to its evaluation oracle \(\textsc {Eval}\) to obtain the answer \((\textbf{t},\textbf{u})\), and on receiving the query \(\textsf{x}\) to \(\textsc {Enc}\), \(b_\lambda ^3\) forwards \(\textsf{x}\) to its challenge oracle \(\textsc {Chal}\) and uses the answer \((h,\textbf{h}_0,h_1)\) to simulate \(\textbf{r}\), \(\mathbf {{C}}_1^*\), and \(\textsf{K}^*\) as in Fig. 17. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^3\) outputs \(\beta \) as well if no \(\textsf{y}\) such that \({\textsf{p}}_\lambda (\textsf{x},\textsf{y})=1\) was queried to \(\textsc {USKGen}\). Otherwise, \(b_\lambda ^3\) outputs 0.
If \(h_1\) is uniform (i.e., \(b_\lambda ^3\) is in Game \({\mathsf {PR\text{- }CMA}}_{\textsf{rand}}\)), then the view of \(a_\lambda \) is identical to its view in \(\textsf{G}_6\). If \(h_1\) is real (i.e., \(b_\lambda ^3\) is in Game \({\mathsf {PR\text{- }CMA}}_{\textsf{real}}\)), then the view of \(a_\lambda \) is identical to its view in \(\textsf{G}_5\). Hence, the advantage of \(b_\lambda ^3\) in breaking the \({\mathsf {PR\text{- }CMA}}\) security is
Moreover, since \(a_\lambda \) only makes a constant number of rounds of queries, all operations in \(b_{\lambda }^3\) are performed in \(\mathsf {NC^1}\). Hence, we have \(\mathcal {B}_3 = \{b_{\lambda }^3\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof.
We now undo all the previous steps in reverse order, as in Fig. 18. Note that the view of the adversary in \(\textsf{H}_0\) (respectively, \(\textsf{H}_4\)) is identical to its view in \(\textsf{G}_6\) (respectively, \({\mathsf {PR\text{- }AT\text{- }CPA}}_{\textsf{rand}}\)). By applying the above arguments in reverse order, we obtain the following lemma. \(\square \)
Lemma 5.8
There exists an adversary \(\mathcal {B}_4=\{b_\lambda ^4\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda ^4\) breaks the fine-grained matrix linear assumption with advantage
Putting all of the above together, Theorem 5.1 immediately follows. \(\square \)
Notes
Essentially, the BKP framework used the GS proof for linear equations and replaced the GS commitment with the Pedersen commitment.
The IBKEM can be straightforwardly extended to one with large key space as we will discuss later in this section.
See Sect. 2 for the notion of \(\mathbf {{N}}^\lambda \).
One-time (respectively, unbounded) simulation soundness prevents the adversary from proving a false statement after seeing a single simulated proof for a statement (respectively, multiple simulated proofs for statements) of its choice. We refer the reader to [23] for the formal definitions.
We do not exploit the sampleability of the distribution for this construction.
References
B. Applebaum, Y. Ishai, E. Kushilevitz, Cryptography in NC\(^0\), in 45th FOCS. (IEEE Computer Society Press, 2004), pp. 166–175
M. Ball, D. Dachman-Soled, M. Kulkarni, New techniques for zero-knowledge: Leveraging inefficient provers to reduce assumptions, interaction, and trust, in Micciancio, D., Ristenpart, T. (eds.) CRYPTO 2020, Part III. LNCS, vol. 12172 (Springer, Heidelberg, 2020), pp. 674–703
D.A.M. Barrington, Bounded-width polynomial-size branching programs recognize exactly those languages in \(\text{NC}^1\), in 18th ACM STOC (ACM Press, 1986), pp. 1–5
M. Bellare, S. Goldwasser, New paradigms for digital signatures and message authentication based on non-interactive zero knowledge proofs, in Brassard, G. (ed.) CRYPTO’89. LNCS, vol. 435 (Springer, Heidelberg, 1990), pp. 194–211
O. Blazy, E. Kiltz, J. Pan, (Hierarchical) identity-based encryption from affine message authentication, in Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part I. LNCS, vol. 8616 (Springer, Heidelberg, 2014), pp. 408–425
D. Boneh, M.K. Franklin, Identity-based encryption from the Weil pairing, in Kilian, J. (ed.) CRYPTO 2001. LNCS, vol. 2139 (Springer, Heidelberg, 2001), pp. 213–229
D. Boneh, P.A. Papakonstantinou, C. Rackoff, Y. Vahlis, B. Waters, On the impossibility of basing identity based encryption on trapdoor permutations, in 49th FOCS (IEEE Computer Society Press, 2008), pp. 283–292
C. Brzuska, G. Couteau, Towards fine-grained one-way functions from strong average-case hardness. IACR Cryptol. ePrint Arch. 2020, 1326 (2020)
M. Campanelli, R. Gennaro, Fine-grained secure computation, in Beimel, A., Dziembowski, S. (eds.) TCC 2018, Part II. LNCS, vol. 11240 (Springer, Heidelberg, 2018), pp. 66–97
J. Chen, R. Gay, H. Wee, Improved dual system ABE in prime-order groups via predicate encodings, in Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015, Part II. LNCS, vol. 9057 (Springer, Heidelberg, 2015), pp. 595–624
J. Chen, H. Wee, Fully, (almost) tightly secure IBE and dual system groups, in Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043 (Springer, Heidelberg, 2013), pp. 435–460
C. Cocks, An identity based encryption scheme based on quadratic residues, in Honary, B. (ed.) 8th IMA International Conference on Cryptography and Coding. LNCS, vol. 2260 (Springer, Heidelberg, 2001), pp. 360–363
A. Degwekar, V. Vaikuntanathan, P.N. Vasudevan, Fine-grained cryptography, in Robshaw, M., Katz, J. (eds.) CRYPTO 2016, Part III. LNCS, vol. 9816 (Springer, Heidelberg, 2016), pp. 533–562
S. Egashira, Y. Wang, K. Tanaka, Fine-grained cryptography revisited, in Galbraith, S.D., Moriai, S. (eds.) ASIACRYPT 2019, Part III. LNCS, vol. 11923 (Springer, Heidelberg, 2019), pp. 637–666
S. Egashira, Y. Wang, K. Tanaka, Fine-grained cryptography revisited. J. Cryptol. 34(3), 23 (2021)
G. Fuchsbauer, E. Kiltz, J. Loss, The algebraic group model and its applications, in Shacham, H., Boldyreva, A. (eds.) CRYPTO 2018, Part II. LNCS, vol. 10992 (Springer, Heidelberg, 2018), pp. 33–62
C. Gentry, A. Silverberg, Hierarchical ID-based cryptography, in Zheng, Y. (ed.) ASIACRYPT 2002. LNCS, vol. 2501 (Springer, Heidelberg, 2002), pp. 548–566
V. Goyal, O. Pandey, A. Sahai, B. Waters, Attribute-based encryption for fine-grained access control of encrypted data, in: Juels, A., Wright, R.N., De Capitani di Vimercati, S. (eds.) ACM CCS 2006 (ACM Press, 2006), Available as Cryptology ePrint Archive Report 2006/309, pp. 89–98
J. Groth, A. Sahai, Efficient non-interactive proof systems for bilinear groups, in Smart, N.P. (ed.) EUROCRYPT 2008. LNCS, vol. 4965 (Springer, Heidelberg, 2008), pp. 415–432
J. Horwitz, B. Lynn, Toward hierarchical identity-based encryption, in Knudsen, L.R. (ed.) EUROCRYPT 2002. LNCS, vol. 2332 (Springer, Heidelberg, 2002), pp. 466–481
Y. Ishai, E. Kushilevitz, Randomizing polynomials: A new representation with applications to round-efficient secure computation, in 41st FOCS (IEEE Computer Society Press, 2000), pp. 294–304
C.S. Jutla, A. Roy, Shorter quasi-adaptive NIZK proofs for linear subspaces, in Sako, K., Sarkar, P. (eds.) ASIACRYPT 2013, Part I. LNCS, vol. 8269 (Springer, Heidelberg, 2013), pp. 1–20
E. Kiltz, H. Wee, Quasi-adaptive NIZK for linear subspaces revisited, in Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015, Part II. LNCS, vol. 9057 (Springer, Heidelberg, 2015), pp. 101–128
A.B. Lewko, B. Waters, New techniques for dual system encryption and fully secure HIBE with short ciphertexts, in: Micciancio, D. (ed.) TCC 2010. LNCS, vol. 5978 (Springer, Heidelberg, 2010), pp. 455–479
U.M. Maurer, Abstract models of computation in cryptography (invited paper), in: Smart, N.P. (ed.) 10th IMA International Conference on Cryptography and Coding. LNCS, vol. 3796 (Springer, Heidelberg, 2005), pp. 1–12
R.C. Merkle, Secure communications over insecure channels. Commun. ACM 21(4), 294–299 (1978)
P. Morillo, C. Ràfols, J.L. Villar, The kernel matrix Diffie–Hellman assumption, in Cheon, J.H., Takagi, T. (eds.) ASIACRYPT 2016, Part I. LNCS, vol. 10031 (Springer, Heidelberg, 2016), pp. 729–758
A.A. Razborov, Lower bounds on the size of bounded depth circuits over a complete basis with logical addition. Mathematical Notes of the Academy of Sciences of the USSR 41(4) (1987)
A. Shamir, Identity-based cryptosystems and signature schemes, in Blakley, G.R., Chaum, D. (eds.) CRYPTO’84. LNCS, vol. 196 (Springer, Heidelberg, 1984), pp. 47–53
V. Shoup, Lower bounds for discrete logarithms and related problems, in Fumy, W. (ed.) EUROCRYPT’97. LNCS, vol. 1233 (Springer, Heidelberg, 1997), pp. 256–266
R. Smolensky, Algebraic methods in the theory of lower bounds for Boolean circuit complexity, in: Aho, A. (ed.) 19th ACM STOC (ACM Press, 1987), pp. 77–82
Y. Wang, J. Pan, Non-interactive zero-knowledge proofs with fine-grained security, in Dunkelman, O., Dziembowski, S. (eds.) EUROCRYPT 2022, Part II. LNCS, vol. 13276 (Springer, Heidelberg, 2022), pp. 305–335
Y. Wang, J. Pan, Unconditionally secure NIZK in the fine-grained setting, in Agrawal, S., Lin, D. (eds.) ASIACRYPT 2022, Part II. LNCS, vol. 13792 (Springer, Heidelberg, 2022), pp. 437–465
Y. Wang, J. Pan, Y. Chen, Fine-grained secure attribute-based encryption, in Malkin, T., Peikert, C. (eds.) CRYPTO 2021, Part IV. LNCS, vol. 12828 (Springer, Heidelberg, 2021), pp. 179–207
B. Waters, Dual system encryption: Realizing fully secure IBE and HIBE under simple assumptions, in Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677 (Springer, Heidelberg, 2009), pp. 619–636
H. Wee, Dual system encryption via predicate encodings, in Lindell, Y. (ed.) TCC 2014. LNCS, vol. 8349 (Springer, Heidelberg, 2014), pp. 616–637
Acknowledgements
We would like to thank the anonymous reviewers for their valuable comments on a previous version of this paper. Parts of Yuyu Wang’s work were supported by the National Natural Science Foundation for Young Scientists of China under Grant Number 62002049, the Natural Science Foundation of Sichuan under Grant Number 2023NSFSC0472, the Sichuan Science and Technology Program under Grant Number 2022YFG0037, and the National Key Research and Development Program of China under Grant Number 2022YFB3104600. Parts of Jiaxin Pan’s work were supported by the Research Council of Norway under Project No. 324235. Parts of Yu Chen’s work were supported by the National Key Research and Development Program of China under Grant Number 2021YFA1000600, the National Natural Science Foundation of China under Grant Number 62272269, and the Taishan Scholar Program of Shandong Province.
Author information
Authors and Affiliations
Corresponding author
Additional information
Communicated by Alon Rosen
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A preliminary version of this paper appeared at CRYPTO 2021 [34]; this is the full version.
Appendices
The Proof of Theorem 2.13
Proof
We prove Theorem 2.13 by the following two propositions.
Proposition A.1
For all \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\) and \(\textbf{x} \in {\textsf{SampNo}}_\lambda (\mathbf {{M}})\), we have \(\textbf{x} \in \{0,1\}^\lambda {\setminus }{{\,\textrm{Im}\,}}(\mathbf {{M}})\).
Proof of Proposition A.1
According to Lemma 2.10, we have \({{\,\textrm{Im}\,}}(\mathbf {{M}}) = \{\textbf{x} \, | \, \textbf{w} \in \{0\}\times \{0, 1\}^{\lambda -1}, \, \textbf{x} = \mathbf {{M}}\textbf{w} \}\). Since \(\mathbf {{N}}^\lambda \textbf{w}=\textbf{0}\) for any \(\textbf{w} \in \{0\}\times \{0, 1\}^{\lambda -1}\), we have \({{\,\textrm{Im}\,}}(\mathbf {{M}})=\{\textbf{x} \, | \, \textbf{w} \in \{0\}\times \{0, 1\}^{\lambda -1}, \, \textbf{x} = (\mathbf {{M}}+\mathbf {{N}}^\lambda )\textbf{w} \}\).Footnote 3 Moreover, \((\mathbf {{M}}+\mathbf {{N}}^\lambda )\) is of full rank according to Lemma 2.9. Hence, for any \(\textbf{w} \in \{1\}\times \{0, 1\}^{\lambda -1}\) and any \(\textbf{x}\in {{\,\textrm{Im}\,}}(\mathbf {{M}})\), we have \((\mathbf {{M}}+\mathbf {{N}}^\lambda )\textbf{w} \ne \textbf{x}\). Namely, for any \(\textbf{w} \in \{1\}\times \{0, 1\}^{\lambda -1}\), we have \((\mathbf {{M}}+\mathbf {{N}}^\lambda )\textbf{w} \in \{0, 1\}^{\lambda } {\setminus } {{\,\textrm{Im}\,}}(\mathbf {{M}})\), completing the proof of Proposition A.1. \(\square \)
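The following toy Python check (ours, not from the paper) illustrates the argument. We assume, as a hypothesized stand-in for the definitions in Sect. 2, that \(\mathbf {{M}}_0^\lambda \) has ones exactly on the superdiagonal and \(\mathbf {{N}}^\lambda \) is all-zero except for a one in the lower-left corner, and we take \(\mathbf {{M}}_0^\lambda \) itself in place of a \({\textsf{ZeroSamp}}\) output (multiplying by invertible \(\mathbf {{R}}_0,\mathbf {{R}}_1\) does not affect the rank facts used here):

```python
import itertools
import numpy as np

lam = 4
# Assumed shapes: M0 has ones on the superdiagonal (rank lam-1),
# N has a single one in the lower-left corner. Both are hypothesized
# stand-ins for the matrices M_0^lambda and N^lambda of Sect. 2.
M0 = np.zeros((lam, lam), dtype=np.uint8)
for i in range(lam - 1):
    M0[i, i + 1] = 1
N = np.zeros((lam, lam), dtype=np.uint8)
N[lam - 1, 0] = 1

def mul(mat, w):
    return (mat.astype(int) @ np.array(w)) % 2

vecs0 = [(0,) + t for t in itertools.product((0, 1), repeat=lam - 1)]  # {0} x {0,1}^{lam-1}
vecs1 = [(1,) + t for t in itertools.product((0, 1), repeat=lam - 1)]  # {1} x {0,1}^{lam-1}

# N annihilates every w whose first coordinate is 0 ...
assert all(not mul(N, w).any() for w in vecs0)
# ... so Im(M0) = {M0 w : w in {0} x {0,1}^{lam-1}} = {(M0+N) w : same w}.
image = {tuple(mul(M0, w)) for w in vecs0}
assert image == {tuple(mul((M0 + N) % 2, w)) for w in vecs0}
# M0 + N is full rank (here: a cyclic shift), so the "no" instances
# (M0+N) w with w in {1} x {0,1}^{lam-1} all fall outside Im(M0).
assert all(tuple(mul((M0 + N) % 2, w)) not in image for w in vecs1)
```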
Proposition A.2
For any \(\mathcal {A} = \{a_{\lambda }\} \in \mathsf {NC^1}\),
where \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\), \(\textbf{x}_0\leftarrow {\textsf{SampYes}}_\lambda (\mathbf {{M}})\), and \(\textbf{x}_1\leftarrow {\textsf{SampNo}}_\lambda (\mathbf {{M}})\).
Proof of Proposition A.2
Let \(\mathcal {A}= \{a_{\lambda }\}\) be any adversary in \(\mathsf {NC^1}\). We give the intermediate games in Fig. 19 to show that the advantage of \(\mathcal {A}\) in distinguishing the two distributions in Proposition A.2 is negligible.
Lemma A.3
\(\Pr [\textsf{G}_1^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_0^{a_\lambda }\Rightarrow 1].\)
Proof
In \(\textsf{G}_1\), we sample \(\textbf{w}\leftarrow \{0\}\times \{0, 1\}^{\lambda -1}\) instead of \(\textbf{w}\leftarrow \{0, 1\}^{\lambda }\). Then Lemma A.3 follows from the fact that, according to Lemma 2.11, the distributions of \(\textbf{x}= \mathbf {{M}} \textbf{w}\) and \(\textbf{x}' = \mathbf {{M}} \textbf{w}'\) are identical, where \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\), \(\textbf{w}\leftarrow \{0, 1\}^{\lambda }\), and \(\textbf{w}'\leftarrow \{0\}\times \{0, 1\}^{\lambda -1}\). \(\square \)
Lemma A.4
\(\Pr [\textsf{G}_2^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_1^{a_\lambda }\Rightarrow 1].\)
Proof
In \(\textsf{G}_2\), we compute \(\textbf{x}\) as \(\textbf{x} = (\mathbf {{M+N^\lambda }}) \textbf{w}\) instead. Then Lemma A.4 follows from the fact that for any \(\textbf{w} \in \{0\}\times \{0, 1\}^{\lambda -1}\), we have \(\mathbf {{N}}^\lambda \textbf{w}=\textbf{0}\). \(\square \)
Lemma A.5
There exists an adversary \(\mathcal {B}_1 = \{b_{\lambda }^1\} \in \mathsf {NC^1}\) such that \(b_{\lambda }^1\) breaks the fine-grained matrix linear assumption (see Definition 2.5), which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Lemma 2.6, with advantage
Proof
\(\textsf{G}_2\) and \(\textsf{G}_3\) only differ in the distribution of \(\mathbf {{M}}\), namely, \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) or \(\mathbf {{M}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\), and we build the distinguisher \(b_\lambda ^1\) as follows.
\(b_\lambda ^1\) runs in exactly the same way as the challenger of \(\textsf{G}_2\) except that in \(\textsc {Init}\), instead of generating \(\mathbf {{M}}\) by itself, it takes as input \(\mathbf {{M}}^\top \), generated as \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) or \(\mathbf {{M}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\), from its own challenger. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^1\) outputs \(\beta \) as well. If \(\mathbf {{M}}^\top \) is generated as \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) (respectively, \(\mathbf {{M}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\)), the view of \(a_\lambda \) is the same as its view in \(\textsf{G}_2\) (respectively, \(\textsf{G}_3\)). Hence, the probability that \(b_\lambda ^1\) breaks the fine-grained matrix linear assumption is
Moreover, since all operations in \(b_\lambda ^1\) are performed in \(\mathsf {NC^1}\), we have \(\mathcal {B}_1 = \{b_{\lambda }^1\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof. \(\square \)
Lemma A.6
\(\Pr [\textsf{G}_4^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_3^{a_\lambda }\Rightarrow 1].\)
Proof
In \(\textsf{G}_4\), we sample \(\textbf{w}\leftarrow \{1\}\times \{0, 1\}^{\lambda -1}\) instead of \(\textbf{w}\leftarrow \{0\}\times \{0, 1\}^{\lambda -1}\).
Let \(\mathbf {{M}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\). The distribution of \(\mathbf {{M}} + {\mathbf {{N}}^\lambda }\) is identical to the output distribution of \({\textsf{ZeroSamp}}(\lambda )\) according to Lemma 2.9. Therefore, according to Lemma 2.11, the distributions of \(\textbf{x} = (\mathbf {{M}} + {\mathbf {{N}}^\lambda }) \textbf{w}\) and \(\textbf{x}' = (\mathbf {{M}} + {\mathbf {{N}}^\lambda })\textbf{w}'\) are identical for \(\textbf{w}\leftarrow \{0\}\times \{0, 1\}^{\lambda -1}\) and \(\textbf{w}'\leftarrow \{1\}\times \{0, 1\}^{\lambda -1}\), completing this part of the proof. \(\square \)
Lemma A.7
There exists an adversary \(\mathcal {B}_2=\{b^2_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b^2_\lambda \) breaks the fine-grained matrix linear assumption with advantage
Proof
\(\textsf{G}_4\) and \(\textsf{G}_5\) only differ in the distribution of \(\mathbf {{M}}\), namely, \(\mathbf {{M}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\) or \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\), and we build the distinguisher \(b_\lambda ^2\) as follows.
\(b_\lambda ^2\) runs in exactly the same way as the challenger of \(\textsf{G}_4\) except that in \(\textsc {Init}\), instead of generating \(\mathbf {{M}}\) by itself, it takes as input \(\mathbf {{M}}^\top \), generated as \(\mathbf {{M}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\) or \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\), from its own challenger. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^2\) outputs \(\beta \) as well. If \(\mathbf {{M}}^\top \) is generated as \(\mathbf {{M}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\) (respectively, \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\)), the view of \(a_\lambda \) is the same as its view in \(\textsf{G}_4\) (respectively, \(\textsf{G}_5\)). Hence, the probability that \(b_\lambda ^2\) breaks the fine-grained matrix linear assumption is
Moreover, since all operations in \(b_\lambda ^2\) are performed in \(\mathsf {NC^1}\), we have \(\mathcal {B}_2 = \{b_{\lambda }^2\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof. \(\square \)
Then Proposition A.2 follows from the fact that \(\textsf{G}_0\) and \(\textsf{G}_5\) are the real games of Proposition A.2, where the values \(\textbf{x}\) are sampled from \({\textsf{SampYes}}_\lambda \) and \({\textsf{SampNo}}_\lambda \) respectively.
\(\square \)
Putting all of the above together, Theorem 2.13 immediately follows. \(\square \)
Fine-Grained Secure Quasi-Adaptive NIZK
In this section, we construct fine-grained QA-NIZK with adaptive soundness. We first give the definition of \(\mathsf {NC^1}\)-QA-NIZK with adaptive soundness. Then we prove an \(\mathsf {NC^1}\) version of the Kernel Matrix Diffie-Hellman assumption [27], based on which we give a warm-up QA-NIZK in \(\mathsf {NC^1}\) with relatively low efficiency. Finally, we show how to achieve a more efficient construction.
1.1 Definitions
We now recall the definition of fine-grained QA-NIZK. Let \(\mathcal {D}_{\lambda }\) be a probability distribution over a collection of relations \({\textsf{R}}=\{{\textsf{R}}_{\mathbf {{M}}}\}_{\mathbf {{M}}\in \mathcal {D}_{\lambda }}\) parametrized by a matrix \(\mathbf {{M}}\in \{0, 1\}^{n\times t}\) of rank \(t'< n\) generated by \(\mathcal {D}_{\lambda }\), with the associated language
Witness Sampleability. Notice that, similar to witness-sampleable distributions in the classical world [22], we require that \(\mathcal {D}_\lambda \) additionally output a non-zero matrix \(\mathbf {{M}}^\bot \in \{0, 1\}^{n\times (n-t')}\) in the kernel of \(\mathbf {{M}}^\top \). An example of a sampleable distribution is \({\textsf{ZeroSamp}}(n)\), which can additionally sample a non-zero vector in the kernel of its output.Footnote 4
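As an illustration of witness sampleability (ours, using a generic algorithm rather than the structured sampling the paper relies on): over \(GF(2)\), a non-zero kernel element of \(\mathbf {{M}}^\top \) can always be computed by Gaussian elimination in polynomial time, which is exactly the extra output the sampler hands out alongside \(\mathbf {{M}}\):

```python
import numpy as np

def gf2_left_kernel_vector(m):
    """Return a non-zero x with m.T @ x = 0 (mod 2), or None if m.T has
    full column rank. Plain Gaussian elimination over GF(2); polynomial
    time, used here only to illustrate what a sampleable distribution
    outputs alongside the matrix itself."""
    a = (np.array(m, dtype=np.uint8).T % 2).copy()
    rows, cols = a.shape
    pivot_of_col = {}
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if a[i, c]), None)
        if pivot is None:
            # Free column: build a kernel vector with a 1 in this position.
            x = np.zeros(cols, dtype=np.uint8)
            x[c] = 1
            for cc, rr in pivot_of_col.items():
                x[cc] = a[rr, c]
            return x
        a[[r, pivot]] = a[[pivot, r]]
        for i in range(rows):
            if i != r and a[i, c]:
                a[i] ^= a[r]
        pivot_of_col[c] = r
        r += 1
    return None

# Rank-(n-1) example: ones on the superdiagonal; the kernel of M^T is
# spanned by the last standard basis vector.
n = 4
M = np.zeros((n, n), dtype=np.uint8)
for i in range(n - 1):
    M[i, i + 1] = 1
x = gf2_left_kernel_vector(M)
assert x is not None and x.any() and not ((M.T.astype(int) @ x) % 2).any()
```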
Definition B.1
(Quasi-adaptive non-interactive zero-knowledge proof) A \({\mathcal {C}}_1\)-quasi-adaptive non-interactive zero-knowledge proof (QA-NIZK) for a set of language distributions \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\) is a function family \({\textsf{QANIZK}}=\{{{\textsf{Gen}}}_\lambda ,{{\textsf{Prove}}}_\lambda ,{{\textsf{Ver}}}_\lambda ,{{\textsf{Sim}}}_\lambda \}_{\lambda \in \mathbb {N}}\in \mathcal C_1\) with the following properties.
-
\({{\textsf{Gen}}}_\lambda (\mathbf {{M}})\) returns a CRS \(\textsf{crs}\) and a simulation trapdoor \(\textsf{td}\).
-
\({{\textsf{Prove}}}_\lambda (\textsf{crs},{} \textbf{y},\textbf{w})\) returns a proof \(\pi \).
-
\({{\textsf{Ver}}}_\lambda (\textsf{crs},{}\textbf{y},\pi )\) deterministically returns 1 (accept) or 0 (reject).
-
\({{\textsf{Sim}}}_\lambda (\textsf{crs},\textsf{td},{}\textbf{y})\) returns a simulated proof \(\pi \).
Perfect completeness is satisfied if for all \((\mathbf {{M}}^\top ,\mathbf {{M}}^\bot )\in {\mathcal {D}}_\lambda \), all vectors \((\textbf{y},\textbf{w})\) such that \(\textbf{y}={\mathbf {{M}}\textbf{w}}\), all \((\textsf{crs},\textsf{td}) \in {{\textsf{Gen}}}_\lambda (\mathbf {{M}})\), and all \(\pi \in {{\textsf{Prove}}}_\lambda (\textsf{crs},\textbf{y},\textbf{w})\), we have
Perfect zero knowledge is satisfied if for all \(\lambda \), all \((\mathbf {{M}}^\top ,\mathbf {{M}}^\bot )\in \mathcal {D}_\lambda \), all \((\textbf{y},\textbf{w})\) with \(\textbf{y}={\mathbf {{M}}\textbf{w}}\), and all \((\textsf{crs},\textsf{td})\in {{\textsf{Gen}}}_\lambda (\mathbf {{M}})\), the following two distributions are identical:
Definition B.2
(Adaptive soundness for QANIZK) \({\textsf{QANIZK}}\) is said to satisfy \({\mathcal {C}}_2\)-adaptive soundness if for any adversary \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in {\mathcal {C}}_2\),
where Game \({\textsf{AS}}\) is defined in Fig. 20.
We note that in the above definition, the term “quasi-adaptive” means that the construction of the CRS depends on the statement \(\mathbf {{M}}\). On the other hand, “adaptive” in the context of adaptive soundness means that in the soundness experiment, the adversary can choose the statement adaptively after seeing the CRS.
1.2 A Warm-Up Construction
A New Lemma. We now prove the following lemma under the assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), based on which we can achieve adaptively sound QA-NIZKs in \(\mathsf {NC^1}\). It can be thought of as the counterpart of the Kernel Matrix Diffie-Hellman assumption [27] in \(\mathsf {NC^1}\).
Definition B.3
(Fine-grained kernel matrix assumption) If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then for all \(\lambda \in \mathbb {N}\) and any adversary \(\mathcal {A}= \{a_{\lambda }\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\), we have
where \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\).
Lemma B.4
If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then the fine-grained kernel matrix assumption (see Definition B.3) holds.
Proof
Let \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) be an adversary such that \(a_\lambda \) breaks the fine-grained kernel matrix assumption with probability \(\epsilon \). We construct another adversary \(\mathcal {B}=\{b_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda \) breaks the fine-grained subset membership problem (see Definition 2.12), which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Theorem 2.13, with the same probability, as follows.
On input \((\mathbf {{M}},\textbf{u})\), where \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) and \(\textbf{u}\) is sampled from either \({\textsf{SampYes}}_\lambda (\mathbf {{M}})\) or \({\textsf{SampNo}}_\lambda (\mathbf {{M}})\), \(b_\lambda \) forwards \(\mathbf {{M}}\) to \(a_\lambda \). When \(a_\lambda \) outputs \(\textbf{c}\), \(b_\lambda \) outputs 1 iff the last element of \(\textbf{c}\) is 1, \(\textbf{c}^\top \mathbf {{M}}=\textbf{0}\), and \(\textbf{c}^\top \textbf{u}=0\).
When \(\textbf{u}\leftarrow {\textsf{SampYes}}_\lambda (\mathbf {{M}})\), the probability that \(b_\lambda \) outputs 1 is \(\epsilon \). The reason is that when \(a_\lambda \) succeeds, \(\textbf{c}^\top \mathbf {{M}}=\textbf{0}\) implies \(\textbf{c}^\top \textbf{u}=0\) since \(\textbf{u}\in {{\,\textrm{Im}\,}}(\mathbf {{M}})\), and the last element of \(\textbf{c}\) must be 1 according to Lemma 2.7. Moreover, when \(\textbf{u}\leftarrow {\textsf{SampNo}}_\lambda (\mathbf {{M}})\), we have \(\textbf{u}=(\mathbf {{M}}+\mathbf {{N}}^\lambda )\textbf{w}\) for some \(\textbf{w}\in \{1\}\times \{0, 1\}^{\lambda -1}\). If \(\textbf{c}^\top \mathbf {{M}}=\textbf{0}\), then \(\textbf{c}^\top \textbf{u} = \textbf{c}^\top \mathbf {{N}}^\lambda \textbf{w}=\textbf{c}^\top (0,\cdots ,0,1)^\top \), i.e., either \(\textbf{c}^\top \textbf{u}= 1\) or the last element of \(\textbf{c}\) is 0. Hence, \(b_\lambda \) always outputs 0 when \(\textbf{u}\leftarrow {\textsf{SampNo}}_\lambda (\mathbf {{M}})\). Therefore, we have
Moreover, since all operations in \(b_{\lambda }\) are performed in \(\mathsf {NC^1}\), we have \(\mathcal {B}= \{b_{\lambda }\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing the proof of Lemma B.4. \(\square \)
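A toy sanity check of the case analysis in the proof above (ours; the matrix shapes are hypothesized stand-ins for those in Sect. 2: ones on the superdiagonal for the \({\textsf{ZeroSamp}}\) output and a single one in the lower-left corner for \(\mathbf {{N}}^\lambda \)). The non-zero left-kernel vector \(\textbf{c}\) has last element 1, the decision rule of \(b_\lambda \) accepts every yes-instance, and it rejects every no-instance:

```python
import itertools
import numpy as np

lam = 4
M = np.zeros((lam, lam), dtype=np.uint8)   # stand-in for a ZeroSamp output
for i in range(lam - 1):
    M[i, i + 1] = 1
N = np.zeros((lam, lam), dtype=np.uint8)   # stand-in for N^lambda
N[lam - 1, 0] = 1
c = np.zeros(lam, dtype=np.uint8)          # spans the left kernel of M
c[lam - 1] = 1
assert not ((c.astype(int) @ M) % 2).any() and c[lam - 1] == 1  # last element is 1

def b_accepts(c, M, u):
    """Decision rule of the reduction b_lambda from the proof of Lemma B.4."""
    return bool(c[-1] == 1
                and not ((c.astype(int) @ M) % 2).any()
                and (c.astype(int) @ u) % 2 == 0)

for w in itertools.product((0, 1), repeat=lam):
    u_yes = (M.astype(int) @ np.array(w)) % 2            # u in Im(M)
    assert b_accepts(c, M, u_yes)
    if w[0] == 1:
        u_no = ((M + N).astype(int) @ np.array(w)) % 2   # u = (M+N)w, w in {1}x{0,1}^{lam-1}
        assert not b_accepts(c, M, u_no)                 # c^T u = c^T N w = w_1 = 1
```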
Constructing QA-NIZK Based on Lemma B.4. Based on the above lemma, we can easily achieve \(\mathsf {NC^1}\)-QA-NIZKs with adaptive soundness, one-time simulation soundness, and unbounded simulation soundness against \(\mathsf {NC^1}\) by adopting the techniques in [23].Footnote 5 Specifically, we only have to move the algorithms in [23] from GF(p) for a large prime p to GF(2), change the matrix Diffie-Hellman distributions to \({\textsf{SampYes}}_\lambda (\lambda )\), and generate a large number of proofs in parallel to bound the advantage of the adversary. We now give an adaptively sound QA-NIZK \({\mathsf {QANIZK_0}}\) w.r.t. a set of (sampleable) distributions \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\) in Fig. 21 as an instance.Footnote 6
Theorem B.5
If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then \({\mathsf {QANIZK_0}}\) is an \(\mathsf {AC^0[2]}\)-QA-NIZK that is \(\mathsf {NC^1}\)-adaptively sound for all \(\mathbf {{M}}\) in the distributions \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\) (see Appendix B.1 for the definition of \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\)).
Proof
First, we note that \(\{{{\textsf{Gen}}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{{\textsf{Prove}}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{{\textsf{Sim}}}_\lambda \}_{\lambda \in \mathbb {N}}\), and \(\{{{\textsf{Ver}}}_\lambda \}_{\lambda \in \mathbb {N}}\) are computable in \(\mathsf {AC^0[2]}\), since they only involve operations including multiplication of a constant number of matrices and sampling random bits.
Perfect correctness and perfect zero knowledge follow from the fact that for all \(\textbf{y}=\mathbf {{M}}\textbf{w}\) and \(\mathbf {{P}}_i=\mathbf {{M}}^\top \mathbf {{K}}_i\), we have
Let \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\) be an adversary breaking the adaptive soundness of \({\mathsf {QANIZK_0}}\) with advantage \(\epsilon \). We then have the following lemma. \(\square \)
Lemma B.6
There exists an adversary \(\mathcal {B}=\{b_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda \) breaks the fine-grained kernel matrix assumption (see Definition B.3), which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Lemma B.4, with probability \(\epsilon -1/2^\ell \).
Proof
We construct \(b_\lambda \) as follows.
\(b_\lambda \) on input \(\mathbf {{A}}\) samples \(\mathbf {{M}}\) from \(\mathcal {D}_\lambda \) and uniformly random matrices \(\mathbf {{K}}_1,\dots ,\mathbf {{K}}_\ell \), and sets \(\mathbf {{P}}_i=\mathbf {{M}}^\top \mathbf {{K}}_i\) and \(\mathbf {{C}}_i=\mathbf {{K}}_i\mathbf {{A}}\) for all \(i\in [\ell ]\). Then it sends \(\textsf{crs}=(\mathbf {{A}},(\mathbf {{P}}_i,\mathbf {{C}}_i)_{i=1}^\ell )\) to \(a_\lambda \). When \(a_\lambda \) outputs \((\pi =(\pi _i)_{i=1}^\ell ,\textbf{y})\), \(b_\lambda \) searches for an index j such that
$$\pi _j\mathbf {{A}}=\textbf{y}^\top \mathbf {{C}}_j \quad \text {and}\quad \pi _j\ne \textbf{y}^\top \mathbf {{K}}_j.$$
If the search fails, \(b_\lambda \) aborts. Otherwise, \(b_\lambda \) outputs \(\pi _j-\textbf{y}^\top \mathbf {{K}}_j\), which is non-zero and satisfies \((\pi _j-\textbf{y}^\top \mathbf {{K}}_j)\mathbf {{A}}=\pi _j\mathbf {{A}}-\textbf{y}^\top \mathbf {{K}}_j\mathbf {{A}}=\textbf{y}^\top \mathbf {{C}}_j-\textbf{y}^\top \mathbf {{C}}_j=\textbf{0}\).
When \(a_\lambda \) succeeds, we have \(\pi _j\mathbf {{A}}=\textbf{y}^\top \mathbf {{C}}_j\) for all j and \(\textbf{y}\notin {{\,\textrm{Im}\,}}(\mathbf {{M}})\). Let \(\hat{\textbf{a}}\) be a fixed non-zero vector such that \(\hat{\textbf{a}}\notin {{\,\textrm{Im}\,}}(\mathbf {{A}})\). For each i, since \(a_\lambda \) learns no information on \(\mathbf {{K}}_i\) other than \(\mathbf {{M}}^\top \mathbf {{K}}_i\) and \(\mathbf {{K}}_i\mathbf {{A}}\), the bit \(\textbf{y}^\top \mathbf {{K}}_i\hat{\textbf{a}}\) is information-theoretically hidden in the view of \(a_\lambda \). Hence, the probability that there exists j such that \(\pi _j\hat{\textbf{a}}-\textbf{y}^\top \mathbf {{K}}_j\hat{\textbf{a}}\ne 0\) is at least \(1-1/2^\ell \). Since \(\pi _j\hat{\textbf{a}}-\textbf{y}^\top \mathbf {{K}}_j\hat{\textbf{a}}\ne 0\) implies \(\pi _j-\textbf{y}^\top \mathbf {{K}}_j\ne \textbf{0}\), the probability that \(b_\lambda \) breaks the fine-grained kernel matrix assumption is at least \(\epsilon -1/2^\ell \), completing this part of the proof.
\(\square \)
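The algebraic core of the extraction step can be checked mechanically. The following Python sketch uses a contrived rank-deficient \(\mathbf {{A}}\) (not one sampled from \({\textsf{SampYes}}\); all parameters are toy assumptions) to show that any proof that verifies although \(\pi _j\ne \textbf{y}^\top \mathbf {{K}}_j\) yields a non-zero vector in the left kernel of \(\mathbf {{A}}\), exactly as \(b_\lambda \) outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n = 8, 6  # toy parameters (hypothetical sizes)

# Contrived A of size (lam+1) x lam with GF(2)-dependent rows: the last row is
# the sum of the others mod 2, so the all-ones vector w lies in A's left kernel.
top = rng.integers(0, 2, size=(lam, lam))
A = np.vstack([top, top.sum(axis=0) % 2])
w = np.ones(lam + 1, dtype=int)
assert not ((w @ A) % 2).any()

K = rng.integers(0, 2, size=(n, lam + 1))
C = (K @ A) % 2
y = rng.integers(0, 2, size=n)

# A "cheating" proof: it verifies (pi A = y^T C mod 2) although pi != y^T K.
pi = ((y @ K) + w) % 2
assert np.array_equal((pi @ A) % 2, (y @ C) % 2)

# The reduction outputs pi - y^T K: a non-zero vector v with v A = 0 mod 2.
v = (pi - (y @ K)) % 2
assert v.any() and not ((v @ A) % 2).any()
```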
Since the fine-grained kernel matrix assumption holds if \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Lemma B.4, putting all of the above together, Theorem B.5 immediately follows. \(\square \)
B.3 A More Efficient Construction
A disadvantage of the scheme in Appendix B.2 is that we have to generate a large number of proofs in parallel. In this section, we give a more efficient \(\mathsf {NC^1}\)-adaptively sound QA-NIZK \({\mathsf {QANIZK_1}}=\{{{\textsf{Gen}}}_\lambda ,{{\textsf{Prove}}}_\lambda ,{{\textsf{Ver}}}_\lambda ,{{\textsf{Sim}}}_\lambda \}_{\lambda \in \mathbb {N}}\) w.r.t. a set of distributions \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\) in Fig. 22. As in Definition B.1, we require that \(\mathcal {D}_\lambda \) be witness sampleable, i.e., it outputs a matrix \(\mathbf {{M}}\in \{0, 1\}^{n\times t}\) of rank \(t'< n\), together with a matrix (or vector) \(\mathbf {{M}}^\bot \in \{0, 1\}^{n\times (n-t')}\) of rank \(n-t'\) in the kernel of \(\mathbf {{M}}^\top \).
The proof size of this construction is \((\lambda -1)\cdot (n-t')\). Since \(\mathbf {{M}}\) (or \(\mathbf {{M}}^\top \)) is usually a combination of matrices sampled from \({\textsf{ZeroSamp}}(\lambda )\) in \(\mathsf {NC^1}\), \(n-t'\) is typically a constant. For instance, when proving that two ciphertexts of the PKE scheme in [13] encrypt the same message, or proving the validity of a public key of the PKE scheme in [15], the proof size is only \(\lambda -1\), in contrast to \(\lambda \cdot \ell \) for a large \(\ell \) in the warm-up construction.
Theorem B.7
If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then \({\mathsf {QANIZK_1}}\) is an \(\mathsf {AC^0[2]}\)-QA-NIZK that is \(\mathsf {NC^1}\)-adaptively sound for all \(\mathbf {{M}}\) in the distributions \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\) (see Appendix B.1 for the definition of \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\)).
Proof
First, we note that \(\{{{\textsf{Gen}}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{{\textsf{Prove}}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{{\textsf{Sim}}}_\lambda \}_{\lambda \in \mathbb {N}}\), and \(\{{{\textsf{Ver}}}_\lambda \}_{\lambda \in \mathbb {N}}\) are computable in \(\mathsf {AC^0[2]}\), since they only involve operations including multiplications of a constant number of matrices and sampling random bits.
Perfect correctness and perfect zero-knowledge follow from the fact that for all \(\textbf{y}=\mathbf {{M}}\textbf{x}\) and \(\mathbf {{P}}_i=\mathbf {{M}}^\top \mathbf {{K}}_i\), we have
$$\pi _i=\textbf{x}^\top \mathbf {{P}}_i=\textbf{y}^\top \mathbf {{K}}_i \quad \text {and}\quad (\pi _i||0)\mathbf {{A}}=\textbf{y}^\top (\mathbf {{K}}_i||\textbf{0})\mathbf {{A}}=\textbf{y}^\top \mathbf {{C}}_i.$$
Let \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) be any adversary against the \(\mathsf {NC^1}\)-adaptive soundness of \({\mathsf {QANIZK_1}}\). We now show that \({\mathsf {QANIZK_1}}\) is adaptively sound against \(\mathsf {NC^1}\) via a sequence of hybrid games as in Fig. 23. The crucial step is to use the technique exploited by our IBKEM to switch \((\mathbf {{K}}_i||\textbf{0})\mathbf {{A}}\) to \((\textbf{0}||\mathbf {{K}}_i)\mathbf {{R}}_0^\top \), and then switch it back to \((\mathbf {{K}}_i'||\mathbf {{M}}^\bot \textbf{e}_i)\mathbf {{A}}\) for \(\mathbf {{K}}_i=\mathbf {{K}}_i'+\mathbf {{M}}^\bot \textbf{e}_i\cdot \widetilde{\textbf{r}}^\top \), where \(\mathbf {{R}}_0\) and \(\widetilde{\textbf{r}}\) are intermediate values generated during the sampling procedure for \(\mathbf {{A}}\). \(\square \)
Lemma B.8
\(\Pr [{\textsf{AS}}^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_1^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_0^{a_\lambda }\Rightarrow 1]\).
Proof
In \(\textsf{G}_1\), we generate \(\mathbf {{A}}\) by sampling \(\mathbf {{R}}_0\) and \(\mathbf {{R}}_1\) as in the zero-sampling procedure and setting \(\mathbf {{A}}^\top =\mathbf {{R}}_0 \mathbf {{M}}_0^\lambda \mathbf {{R}}_1\). Moreover, for all i, we replace \(\mathbf {{C}}_i=(\mathbf {{K}}_i||\textbf{0})\mathbf {{A}}\) by \(\mathbf {{C}}_i=(\textbf{0}||\mathbf {{K}}_i)\mathbf {{R}}_0^\top \).
The view of \(\mathcal {A}\) in this game is identical to its view in \(\textsf{G}_0\): the way we generate \(\mathbf {{A}}\) is exactly the “zero-sampling” procedure, and the structure of \(\mathbf {{A}}\) yields \((\mathbf {{K}}_i||\textbf{0})\mathbf {{A}}=(\textbf{0}||\mathbf {{K}}_i)\mathbf {{R}}_0^\top \). \(\square \)
Lemma B.9
\(\Pr [\textsf{G}_2^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_1^{a_\lambda }\Rightarrow 1]\).
Proof
In \(\textsf{G}_2\), for all i, instead of generating \(\mathbf {{K}}_i\) as a uniformly random matrix, we generate \(\mathbf {{K}}_i\) by sampling a uniformly random matrix \(\mathbf {{K}}_i'\) and setting \(\mathbf {{K}}_i=\mathbf {{K}}_i'+\mathbf {{M}}^\bot \textbf{e}_i\cdot \widetilde{\textbf{r}}^\top \), where \(\textbf{e}_i\in \{0, 1\}^{n-t'}\) denotes the vector whose ith entry is 1 and all other entries are 0. Since the distribution of \(\mathbf {{K}}_i\) is still uniform, the view of \(\mathcal {A}\) remains the same. \(\square \)
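The reason this rewriting is invisible in the CRS can be checked on a toy example: the \(\mathbf {{M}}^\bot \)-component of \(\mathbf {{K}}_i\) is annihilated by \(\mathbf {{M}}^\top \). The following Python sketch uses hypothetical dimensions with \(n-t'=1\) and a contrived \(\mathbf {{M}}\) whose rows sum to zero mod 2, so that the all-ones vector plays the role of \(\mathbf {{M}}^\bot \); the real \(\mathbf {{M}}\) comes from \(\mathcal {D}_\lambda \).

```python
import numpy as np

rng = np.random.default_rng(2)
n, t, lam = 6, 3, 8  # toy parameters (hypothetical sizes)

# Contrived M whose rows sum to zero mod 2, so the all-ones vector Mperp
# spans the kernel of M^T (i.e., n - t' = 1 here).
top = rng.integers(0, 2, size=(n - 1, t))
M = np.vstack([top, top.sum(axis=0) % 2])
Mperp = np.ones((n, 1), dtype=int)
assert not ((M.T @ Mperp) % 2).any()

# G_2 rewriting: K_i = K_i' + Mperp e_i r~^T (here e_1 = [1] since n - t' = 1).
Kprime = rng.integers(0, 2, size=(n, lam))
r = rng.integers(0, 2, size=(lam, 1))
K = (Kprime + Mperp @ r.T) % 2

# The CRS component P_i = M^T K_i does not depend on the r~ part at all.
assert np.array_equal((M.T @ K) % 2, (M.T @ Kprime) % 2)
```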
Lemma B.10
\(\Pr [\textsf{G}_3^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_2^{a_\lambda }\Rightarrow 1]\).
Proof
This lemma follows from the fact that for all i, we have
$$\mathbf {{M}}^\top \mathbf {{K}}_i=\mathbf {{M}}^\top \mathbf {{K}}_i'+\mathbf {{M}}^\top \mathbf {{M}}^\bot \textbf{e}_i\cdot \widetilde{\textbf{r}}^\top =\mathbf {{M}}^\top \mathbf {{K}}_i',$$
since \(\mathbf {{M}}^\top \mathbf {{M}}^\bot =\textbf{0}\), and
$$(\textbf{0}||\mathbf {{K}}_i)\mathbf {{R}}_0^\top =(\mathbf {{K}}_i'||\mathbf {{M}}^\bot \textbf{e}_i)\mathbf {{A}},$$
which follows from the structure of \(\mathbf {{A}}^\top =\mathbf {{R}}_0\mathbf {{M}}_0^\lambda \mathbf {{R}}_1\) and the definition of \(\widetilde{\textbf{r}}\). \(\square \)
Lemma B.11
There exists an adversary \(\mathcal {B}=\{b_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda \) breaks the fine-grained kernel matrix assumption (see Definition B.3), which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Lemma B.4, with probability \(\Pr [\textsf{G}_3^{a_\lambda }\Rightarrow 1]\).
Proof
We construct \(b_\lambda \) as follows.
\(b_\lambda \) on input \(\mathbf {{A}}\) samples \((\mathbf {{M}},\mathbf {{M}}^\bot )\) from \(\mathcal {D}_\lambda \) and uniformly random matrices \(\mathbf {{K}}_1',\dots ,\mathbf {{K}}_{n-t'}'\), and sets \(\mathbf {{P}}_i=\mathbf {{M}}^\top (\mathbf {{K}}_i'||\textbf{0})\) and \(\mathbf {{C}}_i=(\mathbf {{K}}_i'||\mathbf {{M}}^\bot \textbf{e}_i)\mathbf {{A}}\) for all i. Then it sends \(\textsf{crs}=(\mathbf {{A}},(\mathbf {{P}}_i,\mathbf {{C}}_i)_{i=1}^{n-t'})\) to \(a_\lambda \). When \(a_\lambda \) outputs \((\pi =(\pi _i)_{i=1}^{n-t'},\textbf{y})\), \(b_\lambda \) searches for an index j such that
$$(\pi _j||0)\mathbf {{A}}=\textbf{y}^\top \mathbf {{C}}_j \quad \text {and}\quad (\pi _j||0)\ne \textbf{y}^\top (\mathbf {{K}}_j'||\mathbf {{M}}^\bot \textbf{e}_j).$$
If the search fails, \(b_\lambda \) aborts. Otherwise, \(b_\lambda \) outputs \((\pi _j||0)-\textbf{y}^\top (\mathbf {{K}}_j'||\mathbf {{M}}^\bot \textbf{e}_j)\).
Since all the operations performed by \(b_\lambda \) are in \(\mathsf {NC^1}\), we have \(\mathcal {B}\in \mathsf {NC^1}\).
When \(a_\lambda \) succeeds, we have \((\pi _j||0)\mathbf {{A}}=\textbf{y}^\top \mathbf {{C}}_j\) for all j and \(\textbf{y}\notin \textsf{Span}(\mathbf {{M}})\). In this case, \(\textbf{y}^\top \mathbf {{M}}^\bot \ne \textbf{0}\), i.e., there must exist j such that \(\textbf{y}^\top \mathbf {{M}}^\bot \textbf{e}_j=1\). For this j, the last entry of \(\textbf{y}^\top (\mathbf {{K}}_j'||\mathbf {{M}}^\bot \textbf{e}_j)\) is 1 while that of \((\pi _j||0)\) is 0, so the output of \(b_\lambda \) is non-zero and satisfies \(\big ((\pi _j||0)-\textbf{y}^\top (\mathbf {{K}}_j'||\mathbf {{M}}^\bot \textbf{e}_j)\big )\mathbf {{A}}=\textbf{0}\). Hence the probability that \(b_\lambda \) breaks the fine-grained kernel matrix assumption is exactly \(\Pr [\textsf{G}_3^{a_\lambda }\Rightarrow 1]\), completing this part of the proof. \(\square \)
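The guarantee that the output of \(b_\lambda \) is non-zero can likewise be checked on a toy example (hypothetical matrices with \(n-t'=1\); a contrived \(\mathbf {{M}}\) whose rows sum to zero mod 2, so the all-ones vector plays the role of \(\mathbf {{M}}^\bot \)): any \(\textbf{y}\) of odd Hamming weight lies outside \(\textsf{Span}(\mathbf {{M}})\) and forces the last entry of the output to be 1.

```python
import numpy as np

rng = np.random.default_rng(3)
n, t, lam = 6, 3, 8  # toy parameters (hypothetical sizes)

# Contrived M whose rows sum to zero mod 2; Mperp = all-ones spans ker(M^T).
# Every y in Span(M) then has even Hamming weight, so any odd-weight y is
# outside Span(M) and satisfies y^T Mperp = 1.
top = rng.integers(0, 2, size=(n - 1, t))
M = np.vstack([top, top.sum(axis=0) % 2])
Mperp = np.ones((n, 1), dtype=int)

y = np.zeros(n, dtype=int)
y[0] = 1  # odd weight, hence y is not in Span(M)
assert (y @ Mperp) % 2 == 1

# The last column of (K'||Mperp e_j) contributes y^T Mperp e_j = 1, while the
# padded proof (pi||0) ends in 0, so the reduction's output is non-zero.
Kprime = rng.integers(0, 2, size=(n, lam))
pi = rng.integers(0, 2, size=lam)  # an arbitrary candidate proof
out = (np.concatenate([pi, [0]]) - (y @ np.hstack([Kprime, Mperp]))) % 2
assert out[-1] == 1
```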
Putting all of the above together, Theorem B.7 immediately follows. \(\square \)
Concurrent Fine-Grained NIZKs. Assuming \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), our work presents an efficient QA-NIZK that achieves perfect zero-knowledge and can handle languages expressible as linear subspaces. Below we compare our QA-NIZK to other existing fine-grained NIZKs [2, 32, 33].
Ball, Dachman-Soled, and Kulkarni [2] previously constructed a NIZK for circuit satisfiability against \(\mathsf {NC^1}\) adversaries in the uniform random string (URS) model, where the setup only samples public coins. Their scheme achieves offline zero-knowledge, meaning that the distribution of honest URSs and proofs is computationally indistinguishable from that of the output of a simulator drawn from a specific distribution. However, their construction is not in the fully fine-grained setting, since their prover requires more computational resources than \(\mathsf {NC^1}\) (even for statements represented as \(\mathsf {NC^1}\) circuits). This requirement is inherent in their construction, since the underlying NIZK for \(\mathsf{\oplus L/poly}\) in their construction requires computing the determinant of a matrix, which cannot be done in \(\mathsf {NC^1}\).
More recently, Wang and Pan [32] proposed a fully fine-grained NIZK protocol for circuit satisfiability in \(\mathsf {NC^1}\), where all algorithms (including the CRS generator, prover, verifier, and simulator) are in \(\mathsf {NC^1}\). Their scheme can achieve either perfect soundness or perfect zero-knowledge and can be converted into a NIZK in the URS model and a non-interactive zap. Notably, their underlying NIZK for linear languages supports the same class of languages as our QA-NIZK. However, their construction has larger proving/verification costs and proof sizes than ours. In particular, their proof size depends on the statement size, while ours does not.
Another fine-grained NIZK was recently proposed by Wang and Pan [33] in a different fine-grained setting, under no assumption. Specifically, it considers adversaries in \(\mathsf {AC^0}\) and requires that all algorithms run in \(\mathsf {AC^0}\).
Instantiations of Encodings
In this section, in addition to the one in Fig. 3, we give several examples of predicate encodings in Figs. 24, 25, and 26. By instantiating our resulting ABE in Sect. 5 with these encodings, we immediately obtain ABEs for inner products, non-zero inner products, and boolean span programs. All the encodings can be performed in \(\mathsf {AC^0[2]}\), since they only involve multiplication of a constant number of matrices.
Wang, Y., Pan, J. & Chen, Y. Fine-Grained Secure Attribute-Based Encryption. J Cryptol 36, 33 (2023). https://doi.org/10.1007/s00145-023-09479-x