1 Introduction

1.1 Motivation

Modern cryptography bases the security of its schemes on assumptions, ranging from basic ones (such as the existence of one-way functions (OWFs)), through more advanced ones (such as the hardness of factoring, discrete logarithms, and some lattice problems), to much more exotic ones (such as the existence of generic groups [25, 30] or algebraic groups [16]). Although these assumptions have received some analysis, the situation is less than desirable. We are interested in how to construct cryptography based on much milder assumptions, and in which forms of security can be achieved if all classical assumptions (such as the existence of OWFs) fail to hold.

Fine-grained cryptography is one direction for approaching the aforementioned problems. It aims at cryptography with weaker security in a setting where adversaries have only bounded resources and honest users have fewer resources than the adversaries. In this setting it is possible to make the underlying assumption extremely mild, for instance, assuming \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\). This is a widely accepted worst-case assumption. As \(\mathsf{\oplus L/poly}\) is the class of languages with polynomial-sized branching programs and all languages in \(\mathsf {NC^1}\) have polynomial-sized branching programs of constant width [3], this assumption holds if there exists even one language that has polynomial-sized branching programs only of non-constant width. This is different from assuming the existence of OWFs, which is an average-case assumption: it requires that the OWF be hard to invert on a random input. Hence, \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) is more likely to be true.

The study of fine-grained cryptography was initiated by Merkle [26]. In recent years, interest has turned to which kinds of cryptosystems can be constructed in this setting. We highlight the recent constructions of OWFs [8], symmetric-key and (leveled fully homomorphic) public-key encryption [9, 13], verifiable computation [9], hash proof systems (HPS) [14], and non-interactive zero-knowledge (NIZK) proof systems [2]. However, due to the restriction on computational resources, many important primitives remain unknown. Surprisingly, digital signature schemes are among them, even though they are implied by OWFs in the classical setting.

Our Goal: Fine-Grained Secure ABEs. We focus on constructing attribute-based encryption (ABE) schemes [18] with fine-grained security, since ABE has many applications and implies important primitives, including digital signatures. In an ABE scheme, messages are encrypted under descriptive values \(\textsf{x}\), secret keys are associated with values \(\textsf{y}\), and a secret key decrypts the ciphertext if and only if \(\textsf{p}(\textsf{x},\textsf{y})=1\) for some boolean predicate \(\textsf{p}\). Here the predicate \(\textsf{p}\) may express an arbitrary access policy. This is in contrast to traditional public-key encryption (PKE) schemes, which offer no access control on data. Identity-based encryption (IBE) [6, 12, 29] is a simplified version of ABE, where \(\textsf{p}\) is the equality predicate, and it implies signatures in a natural manner (even in the fine-grained setting).

In general, it is challenging to construct ABEs. For instance, in the classical setting, it is known that IBE cannot be constructed from trapdoor permutations (TDPs) or CCA-secure PKE schemes in a black-box manner [7]. Moreover, many pairing-based constructions of ABE and IBE (for instance, [5, 10]) rely heavily on the algebraic structures of pairing groups. These necessary structures are not available in fine-grained cryptography. Thus, in this paper, we develop new techniques to improve on the state of the art in fine-grained cryptography, which so far provides only primitives related to TDPs and CCA-secure PKE.

1.2 Our Contributions

We construct the first fine-grained secure ABE scheme. In particular, our scheme is computable in \(\mathsf {AC^0[2]}\) and secure against adversaries in \(\mathsf {NC^1}\). Note that \(\mathsf {AC^0[2]}\subsetneq \mathsf {NC^1}\) [28, 31]. Similar to several existing \(\mathsf {NC^1}\) fine-grained primitives [9, 13, 14], the security of our scheme is based on the same worst-case assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), which is widely accepted and weak. For simplicity, in the rest of this paper we use the term fine-grained cryptography for schemes whose honest users and adversaries are in \(\mathsf {NC^1}\) and whose security is based on \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\).

Previously, fine-grained cryptography could only achieve symmetric-key and public-key encryption and HPS. Our work enriches its toolbox and brings fine-grained cryptography closer to classical cryptography in terms of functionality.

In particular, our construction is presented in a generic manner using predicate encodings [10, 36]. Hence, by suitably instantiating the underlying encoding, we directly obtain a fine-grained IBE scheme (which in turn implies a fine-grained signature scheme); fine-grained ABEs for inner-product encryption, non-zero inner-product encryption, spatial encryption, doubly spatial encryption, boolean span programs, and arithmetic span programs; and fine-grained broadcast encryption and fuzzy IBE schemes. Prior to this work, it was unknown whether these primitives could be constructed in \(\mathsf {NC^1}\) based on a worst-case complexity assumption.

Finally, we use our technique to construct an efficient quasi-adaptive NIZK [22] with fine-grained security. Here “quasi-adaptive” means that common reference strings (CRSs) may depend on the language of the NIZK system.

Applications of Security Against \(\mathsf {NC^1}\). Besides relying only on weak assumptions and running with low complexity, our results have the following applications.

Since security against \(\mathsf {NC^1}\) captures adversaries with limited parallel running-time, our constructions are well-suited for systems where attacks make sense only if they succeed within a short period of time. For example, our ABEs (and other fine-grained encryption primitives) can be used to protect messages that are valuable only for a short period of time and can be published or deleted afterwards. As another example, the fine-grained signature (implied by our fine-grained IBE) and the fine-grained QA-NIZK prevent adversaries from forging signatures and proofs within the running-time of an honest user, thereby ensuring security if the system rejects users that time out when generating signatures or proofs. Moreover, as noted in [13], combining fine-grained primitives with standard ones immediately yields hybrids that are secure against \(\mathsf {NC^1}\) adversaries under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) and secure against polynomial-time adversaries under stronger assumptions.

1.3 Technique Overview

We borrow the frameworks of the pairing-based constructions of IBEs in [5] and ABEs in [10], and upgrade the available fine-grained techniques [1, 14, 21], to achieve our goal. In a nutshell, we transform a suitable symmetric-key primitive into an ABE in the fine-grained setting.

Previous frameworks in [5, 10] use pairings and the Diffie-Hellman assumptions. In contrast to them, our work develops new techniques to build ABEs without pairings or the Diffie-Hellman assumptions, but only under the mild assumption that \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\). For simplicity, we mostly focus on our techniques in the context of IBE here, and give some ideas about how they can be extended to construct ABEs. In this paper, we consider adaptive security where adversaries can adaptively request user secret keys and a challenge ciphertext.

The Approach of Blazy, Kiltz, and Pan, and Its Limitations in \(\mathsf {NC^1}\). The “MAC\(\rightarrow \)IBE” transformation of BKP [5] is an abstraction of the Chen-Wee (CW) IBE scheme [11], and it also implements the “PRF\(\rightarrow \)Signature” framework of Bellare and Goldwasser (BG) [4] in the IBE context. The BKP transformation requires an “affine MAC”, namely, a MAC whose verification consists of checking a particular system of affine equations. The variables in these affine equations are contained in the MAC secret key, and the (public) coefficients are derived from the message to be signed (which becomes the identity of the resulting IBE scheme). Such a MAC scheme can be constructed based on the Diffie-Hellman assumption and its generalization, the MDDH assumption.

We give some ideas on how an affine MAC can be turned into an IBE scheme. The master public key of the IBE scheme, \(\textsf{pk}=\textsf{Com}(\textsf{sk}_{{\textsf{MAC}}})\), is a commitment to the MAC secret key \(\textsf{sk}_{{\textsf{MAC}}}\). A user secret key \(\textsf{usk}[\textsf{id}]\) for an identity \(\textsf{id}\) consists of a BG signature, namely, a MAC tag \(\tau _\textsf{id}\) on the message \(\textsf{id}\) and a NIZK proof of the validity of \(\tau _\textsf{id}\) w.r.t. the secret key committed to in \(\textsf{pk}\).

Since the MAC verification consists of only affine equations, after implementing the aforementioned commitments and NIZK proofs with a (tuned) Groth-Sahai (GS) proof system [19], the BKP IBE ciphertext \(\textsf{ct}_{\textsf{id}}\) can be viewed as a randomized linear combination of \(\textsf{pk}\) w.r.t. \(\textsf{id}\). This is the key observation of BKP. The BKP framework can be further improved and extended to construct ABEs using predicate encodings [36], as in the CGW framework [10] by Chen, Gay, and Wee.

The MDDH assumption and the pairing-based GS proofs are two key ingredients of the BKP framework that are not available in fine-grained cryptography. One way to resolve this would be to develop a fine-grained GS proof system, but it is not clear what the counterpart of “pairing-product equations” would be. Instead, we achieve our goal with a simpler and more direct approach.

A Hard Subset Membership Problem for \(\mathsf {NC^1}\) Circuits. We first need to find a counterpart of the MDDH assumption in \(\mathsf {NC^1}\), since the separation assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) does not directly give us tools for constructing cryptographic schemes. The works [1, 21] show that, if \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) holds, then the following two distributions are indistinguishable for \(\mathsf {NC^1}\) circuits:

$$\begin{aligned} D_0=\left\{ \mathbf {{M}}^\top \;\big |\; \mathbf {{M}}^\top \xleftarrow {\$}{\textsf{ZeroSamp}}(n)\right\} \quad \text{ and }\quad D_1=\left\{ \mathbf {{M}}^\top \;\big |\; \mathbf {{M}}^\top \xleftarrow {\$}{\textsf{OneSamp}}(n)\right\} , \end{aligned}$$

where \(n=n(\lambda )\) is some polynomial in the security parameter \(\lambda \), and the randomized sampling algorithms \({\textsf{ZeroSamp}}\) and \({\textsf{OneSamp}}\) output matrices of rank \(n-1\) and of full rank, respectively. Concrete definitions of these algorithms are given in Sect. 2.2; they are not needed for this section.

This indistinguishability implies a hard subset membership problem in \(\mathsf {NC^1}\), implicitly given by Egashira, Wang, and Tanaka [15] for their HPS: given a matrix \(\mathbf {{M}}^\top \) from \(D_0\) and a vector \(\textbf{t}\) sampled from one of two specific distributions determined by \(\mathbf {{M}}\), the task is to tell whether \(\textbf{t}\) is in the span of \(\mathbf {{M}}\).

Our IBE in \(\mathsf {NC^1}\). Our main technical contribution is a new approach that uses the subset membership problem to transform an affine MAC into an IBE in the fine-grained setting. Our starting point is the construction of a secure affine MAC in \(\mathsf {NC^1}\). We prove that, if the subset membership problem is hard in \(\mathsf {NC^1}\), then our MAC is secure against \(\mathsf {NC^1}\) adversaries.

Next, we propose a generic construction of IBE from affine MACs, following the BKP framework. In stark contrast to BKP, our construction does not require pairings. Essentially, we develop a Groth-Sahai-like proof system in \(\mathsf {NC^1}\) to prove the validity of our affine MAC. This proof system allows us to show that if our affine MAC is secure, then so is our resulting IBE in \(\mathsf {NC^1}\). At the core of our proof system is a new commitment scheme in \(\mathsf {NC^1}\), whose hiding property we achieve by exploiting the concrete structure of matrices in \(D_0\).

We give more details about the security proof. Firstly, the zero-knowledge property allows us to generate user secret keys for adversaries without knowing the MAC secret key. Secondly, we show that if an adversary can break the adaptive security of our IBE, then we can construct a reduction breaking the security of our affine MAC. This is a crucial step, and we require some extractability of the proof system to extract the MAC forgery from the IBE adversary. In the BKP framework, this extractability can be achieved by computing the inverse of some matrix \(\mathbf {{A}} \in \mathbb {Z}_{q}^{k \times k}\) for some positive integer k. However, in our setting, inverting a matrix in \(\{0,1\}^{n \times n}\) is impossible; otherwise, this would lead to a distinguisher for the subset membership problem in \(\mathsf {NC^1}\). Also, there is no known way to efficiently sample a matrix together with its inverse [14]. To solve this, our proof system develops a new method to achieve extractability without inverting any matrix. Our core idea is to prove that, with a fresh random string \(\textbf{r}\), it is possible to extract the forgery from our \(\mathsf {NC^1}\)-commitments by switching the distribution of the public parameter \(\mathbf {{A}}\in D_0\) twice (from \(D_0\) to \(D_1\) and then back to \(D_0\)) and changing the distribution of \(\textbf{r}\) during the switching procedure.

Dual System Methodology in \(\mathsf {NC^1}\) and ABE. Our techniques for IBE can also be viewed as the dual system encryption methodology [35] in \(\mathsf {NC^1}\), which gives an alternative interpretation of our approach. Our proof contains two important technical steps: switching ciphertexts to invalid ones and randomizing the MAC tags in the user secret keys. These correspond to switching ciphertexts and user secret keys from functional to semi-functional in the dual system encryption methodology [5, 10, 24, 35]. The dual system methodology is very useful for constructing predicate encryption, but it was previously only known with pairings. Our work implements the dual system methodology without pairings for the first time.

Similar to the extension from BKP-IBE [5] to CGW-ABE [10], we further extend our techniques to construct ABEs. We first use (part of) a predicate encoding scheme [10, 36] to generalize the notion of affine MAC and make it useful for constructing ABEs. After that, we upgrade our IBE techniques and transform the generalized affine MAC into an adaptively secure ABE in \(\mathsf {NC^1}\) via the remaining part of the predicate encoding scheme. Here, the predicate encoding scheme serves to construct ABEs in a modular and generic way; in particular, it captures the encodings of different ABEs (such as identity-based encryption and inner-product encryption).

More Extensions and Open Problems. We are optimistic that our approach can yield many more new public-key schemes in fine-grained cryptography. In particular, we show that our techniques can also be used to construct an efficient QA-NIZK in \(\mathsf {NC^1}\) with adaptive soundness in “Appendix B”. Roughly, we use the technique for proving the hiding property of the underlying commitment scheme in our IBE scheme to achieve adaptive soundness.

Also, we are optimistic that our approach can be used to construct hierarchical IBE [17, 20]. We leave a detailed treatment of it as an open problem.

1.4 Comparison with the Proceedings Version

This is the full version of the paper that appeared at CRYPTO 2021 [34]. In this full version, we give the full proof of the security of the fine-grained IBE scheme in Sect. 4.2. Additionally, we give the proof of Theorem 2.13; the definition, constructions, and security proofs of the fine-grained QA-NIZKs (with a comparison to previous and subsequent fine-grained NIZKs [2, 32, 33]); and the instantiations of predicate encodings, in Appendices A to C.

2 Preliminaries

Notations. We note that all arithmetic computations are over GF(2) in this work; namely, all arithmetic is performed modulo 2. We write \(a\xleftarrow {\$}\mathcal {A}(b)\) (respectively, \(a=\mathcal {A}(b)\)) to denote the random variable output by a probabilistic (respectively, deterministic) algorithm \(\mathcal {A}\) on input b. By \(x\xleftarrow {\$}\mathcal {S}\) we denote the process of sampling an element x from a set or distribution \(\mathcal {S}\) uniformly at random. By \(\textbf{x}\in \{0, 1\}^n\) we denote a column vector of size n, and by, say, \(\textbf{x}\in \{1\}\times \{0, 1\}^{n-1}\) we mean that the first element of \(\textbf{x}\) is 1. By [n] we denote the set \(\{1, \cdots , n\}\). By \(x_i\) (respectively, \({\textsf{x}}_i\)) we denote the ith element of a vector \(\textbf{x}\) (respectively, \({\textsf{x}}\)). By \(\textsf{negl}\) we denote an unspecified negligible function.

For a matrix \(\mathbf {{A}}\in \{0, 1\}^{n\times t}\) with rank \(t'< n\), we denote the sets \(\{\textbf{y} \, | \, \exists \textbf{x} \;\; \text{ s.t. } \;\; \textbf{y} = \mathbf {{A}}\textbf{x} \}\) and \(\{\textbf{x} \, | \, \mathbf {{A}}\textbf{x} = \textbf{0} \}\) by \({{\,\textrm{Im}\,}}(\mathbf {{A}})\) (i.e., the span of \(\mathbf {{A}}\)) and \({{\,\textrm{Ker}\,}}(\mathbf {{A}})\), respectively. By \(\mathbf {{A}}^\bot \in \{0, 1\}^{n\times (n-t')}\) we denote a matrix consisting of \(n-t'\) linearly independent column vectors in the kernel of \(\mathbf {{A}}^\top \). Note that for any \(\textbf{y}\notin {{\,\textrm{Im}\,}}(\mathbf {{A}})\), we have \(\textbf{y}^\top \mathbf {{A}}^\bot \ne \textbf{0}\). By \((a_{ij})_{i\in [l],j\in [m]}\) we denote the matrix \(\begin{pmatrix}a_{11} & \cdots & a_{1m}\\ \vdots & \ddots & \vdots \\ a_{l1} & \cdots & a_{lm} \end{pmatrix}\). Let \(\mathbf {{A}}=(a_{ij})_{i\in [l],j\in [m]}\) be an \(l\times m\) matrix and \(\mathbf {{B}}=(\mathbf {{B}}_{ij})_{i\in [m],j\in [n]}\) be a large matrix consisting of \(m\times n\) matrices \(\mathbf {{B}}_{ij}\) for all \(i\in [m]\) and \(j\in [n]\). By \(h\odot \mathbf {{A}}\) we denote \((h\cdot a_{ij})_{i\in [l],j\in [m]}\) and by \(\mathbf {{A}}\odot \mathbf {{B}}\) we denote

$$\begin{aligned} \left( \sum _{k=1}^m a_{ik}\odot \mathbf {{B}}_{kj}\right) _{i\in [l],j\in [n]}. \end{aligned}$$
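To make the \(\odot \) notation concrete, the following Python sketch (our illustration; the function names are hypothetical) computes \(h\odot \mathbf {{A}}\) and \(\mathbf {{A}}\odot \mathbf {{B}}\) for bit matrices over GF(2):

```python
def scalar_dot(h, A):
    """h ⊙ A: multiply every entry of the bit matrix A by the bit h, over GF(2)."""
    return [[(h * a) % 2 for a in row] for row in A]

def mat_add(A, B):
    """Entry-wise addition of two bit matrices over GF(2)."""
    return [[(a + b) % 2 for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def block_dot(A, B):
    """A ⊙ B: entry (i, j) of the result is sum_k a_ik ⊙ B_kj over GF(2),
    i.e. the XOR of the blocks B_kj selected by the nonzero entries of row i of A."""
    l, m, n = len(A), len(A[0]), len(B[0])
    out = []
    for i in range(l):
        row = []
        for j in range(n):
            acc = scalar_dot(A[i][0], B[0][j])
            for k in range(1, m):
                acc = mat_add(acc, scalar_dot(A[i][k], B[k][j]))
            row.append(acc)
        out.append(row)
    return out
```

For instance, with \(\mathbf {{A}}=\begin{pmatrix}1 & 0\\ 1 & 1\end{pmatrix}\) and all blocks \(\mathbf {{B}}_{ij}\) equal to the identity, entry (2, 1) of \(\mathbf {{A}}\odot \mathbf {{B}}\) is \(\mathbf {{I}}+\mathbf {{I}}=\textbf{0}\) over GF(2).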

By \(\mathbf {{M}}^n_0\), \(\mathbf {{M}}^n_1\), and \(\mathbf {{N}}^n\), we denote the following \(n \times n\) matrices:

$$\begin{aligned} \mathbf {{M}}^n_0 = \begin{pmatrix} 0 & & \cdots & 0 & 0\\ 1 & 0 & & & 0 \\ 0 & 1 & \ddots & & \vdots \\ \vdots & \vdots & \ddots & 0 & \\ 0 & \cdots & 0 & 1 & 0 \end{pmatrix}, \qquad \mathbf {{M}}^n_1 = \begin{pmatrix} 0 & & \cdots & 0 & 1\\ 1 & 0 & & & 0 \\ 0 & 1 & \ddots & & \vdots \\ \vdots & \vdots & \ddots & 0 & \\ 0 & \cdots & 0 & 1 & 0 \end{pmatrix}, \qquad \mathbf {{N}}^n = \begin{pmatrix} 0 & \cdots & & 0\\ \vdots & 0 & \cdots & 0 \\ 0 & & \ddots & \vdots \\ 1 & 0 & \cdots & 0 \end{pmatrix}, \end{aligned}$$

and by \(\textbf{0}\) we denote a zero vector \((0,\cdots ,0)^\top \).
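These fixed matrices are easy to write down programmatically. The short sketch below (ours, for illustration) constructs them and checks the plain arithmetic fact that, over GF(2), \(\mathbf {{M}}^n_1\) equals \(\mathbf {{M}}^n_0\) plus the transpose of \(\mathbf {{N}}^n\):

```python
def M0(n):
    """M^n_0: ones on the subdiagonal, zeros elsewhere (rank n - 1)."""
    return [[int(i == j + 1) for j in range(n)] for i in range(n)]

def M1(n):
    """M^n_1: M^n_0 plus a single 1 in the top-right corner (a cyclic
    permutation matrix, hence full rank)."""
    M = M0(n)
    M[0][n - 1] = 1
    return M

def N(n):
    """N^n: a single 1 in the bottom-left corner, zeros elsewhere."""
    return [[int((i, j) == (n - 1, 0)) for j in range(n)] for i in range(n)]
```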

Games. Following [5], we use code-based games for defining and proving security. A game \(\textsf{G}\) contains procedures \(\textsc {Init}\) and \(\textsc {Finalize}\), and some additional procedures \(\textsc {P}_1,\ldots , \textsc {P}_n\), which are defined in pseudo-code. All variables in a game are initialized to 0, and all sets are initially empty (denoted by \(\emptyset \)). An adversary \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\) is executed in game \(\textsf{G}\) w.r.t. the security parameter \(\lambda \) (denoted by \({\textsf{G}}^{a_\lambda }\)) as follows: \(a_\lambda \) first calls \(\textsc {Init}\), obtaining its output. Next, it may make arbitrary queries to \(\textsc {P}_i\) (according to their specification) and obtain their output. Finally, it makes one single call to \(\textsc {Finalize}(\cdot )\) and stops. We use \( \textsf{G}^{a_\lambda } \Rightarrow d\) to denote that \(\textsf{G}\) outputs d after interacting with \({a_\lambda }\), where d is the output of \(\textsc {Finalize}\).

2.1 Function Families

In this section, we recall the definitions of function families, \(\mathsf {NC^1}\) circuits, \(\mathsf {AC^0[2]}\) circuits, and \(\mathsf{\oplus L/poly}\). Note that \(\mathsf {AC^0[2]}\subsetneq \mathsf {NC^1}\) [28, 31].

Definition 2.1

(Function Family) A function family is a family of (possibly randomized) functions \(\mathcal {F}=\{f_{\lambda }\}_{\lambda \in \mathbb {N}}\), where for each \(\lambda \), \(f_{\lambda }\) has a domain \(D^f_{\lambda }\) and a range \(R^f_{\lambda }\).

Definition 2.2

(\(\mathsf {NC^1}\)) The class of (non-uniform) \(\mathsf {NC^1}\) function families is the set of all function families \(\mathcal {F} = \{f_{\lambda }\}_{\lambda \in \mathbb {N}}\) for which there is a polynomial \(p(\cdot )\) and constant c such that for each \(\lambda \), \(f_{\lambda }\) can be computed by a (randomized) circuit of size \(p(\lambda )\), depth \(c\log (\lambda )\), and fan-in 2 using AND, OR, and NOT gates.

Definition 2.3

(\(\mathsf {AC^0[2]}\)) The class of (non-uniform) \(\mathsf {AC^0[2]}\) function families is the set of all function families \(\mathcal {F} = \{f_{\lambda }\}_{\lambda \in \mathbb {N}}\) for which there is a polynomial \(p(\cdot )\) and constant c such that for each \(\lambda \), \(f_{\lambda }\) can be computed by a (randomized) circuit of size \(p(\lambda )\), depth c, and unbounded fan-in using AND, OR, NOT, and PARITY gates.

One can see that multiplication of a constant number of matrices can be performed in \(\mathsf {AC^0[2]}\), since it can be done in constant depth with PARITY gates.
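This observation can be made concrete: in the product of two bit matrices, each output bit is the parity of an unbounded fan-in collection of ANDs. A small Python sketch (ours, for illustration):

```python
from functools import reduce

def gf2_matmul(A, B):
    """Multiply two bit matrices over GF(2). Each output bit c_ij is the
    PARITY of the products a_ik AND b_kj, so one product -- and hence a
    product of constantly many matrices -- fits in constant depth with
    unbounded fan-in AND and PARITY gates, i.e. in AC^0[2]."""
    return [[reduce(lambda x, y: x ^ y, (a & b for a, b in zip(row, col)))
             for col in zip(*B)]
            for row in A]
```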

Definition 2.4

(\(\mathsf{\oplus L/poly}\)) \(\mathsf{\oplus L/poly}\) is the set of all boolean function families \(\mathcal {F} = \{f_{\lambda }\}_{\lambda \in \mathbb {N}}\) for which there is a constant c such that for each \(\lambda \), there is a non-deterministic Turing machine \({\mathcal {M}}_{\lambda }\) such that for each input x with length \(\lambda \), \({\mathcal {M}}_{\lambda }(x)\) uses at most \(c\log (\lambda )\) space, and \(f_{\lambda }(x)\) is equal to the parity of the number of accepting paths of \({\mathcal {M}}_{\lambda }(x)\).

2.2 Sampling Procedure

We now recall the definitions of four sampling procedures \({\textsf{LSamp}}\), \({\textsf{RSamp}}\), \({\textsf{ZeroSamp}}\), and \({\textsf{OneSamp}}\) in Fig. 1. Note that the output of \({\textsf{ZeroSamp}}(n)\) is always a matrix of rank \(n-1\) and the output of \({\textsf{OneSamp}}(n)\) is always a matrix of full rank [13].

Fig. 1

Definitions of \({\textsf{LSamp}}\), \({\textsf{RSamp}}\), \({\textsf{ZeroSamp}}\), and \({\textsf{OneSamp}}\). \(n=n(\lambda )\) is a polynomial in the security parameter \(\lambda \)
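Since the figure itself is not reproduced here, the following Python sketch illustrates the flavor of \({\textsf{ZeroSamp}}\) and \({\textsf{OneSamp}}\). The concrete forms of \({\textsf{LSamp}}\) and \({\textsf{RSamp}}\) below are our assumptions about the construction in [13]; the stated rank properties hold for any invertible factors, since \(\mathbf {{M}}^n_0\) has rank \(n-1\) and \(\mathbf {{M}}^n_1\) is a permutation matrix:

```python
import random

def gf2_rank(M):
    """Rank of a bit matrix over GF(2), by Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0])):
        pivot = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def mul(A, B):
    """Matrix product over GF(2)."""
    return [[sum(A[i][k] & B[k][j] for k in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def lsamp(n):
    """LSamp (assumed form): upper unitriangular matrix with uniform bits
    above the diagonal -- always invertible."""
    return [[1 if i == j else (random.randint(0, 1) if j > i else 0)
             for j in range(n)] for i in range(n)]

def rsamp(n):
    """RSamp (assumed form): identity matrix with a uniform last column
    above the bottom-right entry -- always invertible."""
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for i in range(n - 1):
        R[i][n - 1] = random.randint(0, 1)
    return R

def zero_samp(n):
    """Output R1 * M^n_0 * R0: rank n - 1, because M^n_0 has rank n - 1
    and both factors are invertible."""
    M0 = [[int(i == j + 1) for j in range(n)] for i in range(n)]
    return mul(mul(lsamp(n), M0), rsamp(n))

def one_samp(n):
    """Output R1 * M^n_1 * R0: full rank, because M^n_1 is a permutation matrix."""
    M1 = [[int(i == j + 1 or (i, j) == (0, n - 1)) for j in range(n)]
          for i in range(n)]
    return mul(mul(lsamp(n), M1), rsamp(n))
```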

We now recall several assumptions and lemmata on \({\textsf{ZeroSamp}}\) and \({\textsf{OneSamp}}\) given in [13].

Definition 2.5

(Fine-grained matrix linear assumption [13]) There exists a polynomial \(n=n(\lambda )\) in the security parameter \(\lambda \) such that for any family \(\mathcal {A}= \{a_{\lambda }\}_{\lambda \in \mathbb {N}}\) in \(\mathsf {NC^1}\), we have

$$\begin{aligned} \left| \Pr \left[ a_\lambda (\mathbf {{M}})=1 \;\big |\; \mathbf {{M}}^\top \xleftarrow {\$}{\textsf{ZeroSamp}}(n)\right] - \Pr \left[ a_\lambda (\mathbf {{M}})=1 \;\big |\; \mathbf {{M}}^\top \xleftarrow {\$}{\textsf{OneSamp}}(n)\right] \right| \le \textsf{negl}(\lambda ). \end{aligned}$$

Lemma 2.6

(Lemma 4.3 in [13]) If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then the fine-grained matrix linear assumption holds.

Remark. Notice that for any polynomial \(n=n(\lambda )\), we have \(\{f_{n(\lambda )}\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\) iff \(\{f_\lambda \}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), since \(O(\log (n(\lambda )))=O(\log (\lambda ))\). Hence, in the above lemma, we can also take \(n(\cdot )\) to be the identity function, i.e., \(n=\lambda \). For simplicity, in the rest of the paper, we always let \({\textsf{ZeroSamp}}(\cdot )\) and \({\textsf{OneSamp}}(\cdot )\) take \(\lambda \) as input.

The following lemma implies that for a matrix \(\mathbf {{M}}^\top \) sampled by \({\textsf{ZeroSamp}}(\lambda )\), there is a unique non-zero vector with the first (respectively, last) element being 1 in the kernel of \(\mathbf {{M}}\) (respectively, \(\mathbf {{M}}^\top \)).

Lemma 2.7

(Lemma 3 in [15]) For all \(\lambda \in \mathbb {N}\) and all \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), it holds that \({{\,\textrm{Ker}\,}}(\mathbf {{M}}^\top ) = \{\textbf{0}, \textbf{k} \}\) where \(\textbf{k}\) is a vector such that \(\textbf{k} \in \{0, 1\}^{\lambda -1}\times \{1\}\).

Lemma 2.8

(Lemma 4 in [15]) For all \(\lambda \in \mathbb {N}\) and all \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), it holds that \({{\,\textrm{Ker}\,}}(\mathbf {{M}}) = \{\textbf{0}, \textbf{k} \}\) where \(\textbf{k}\) is a vector such that \(\textbf{k} \in \{1\}\times \{0, 1\}^{\lambda -1}\).

The following lemma indicates a simple relation between the distributions of the outputs of \({\textsf{ZeroSamp}}(\lambda )\) and \({\textsf{OneSamp}}(\lambda )\).

Lemma 2.9

(Lemma 7 in [15]) For all \(\lambda \in \mathbb {N}\), the distributions of \(\mathbf {{M}} + \mathbf {{N}}^\lambda \) and \(\mathbf {{M}}'\) are identical, where \(\mathbf {{M}}^\top \xleftarrow {\$}{\textsf{ZeroSamp}}(\lambda )\) and \(\mathbf {{M}}'^\top \xleftarrow {\$}{\textsf{OneSamp}}(\lambda )\).

We now give two lemmata showing that when sampling a random vector \(\textbf{w}\) from \(\{0, 1\}^\lambda \), the first element of \(\textbf{w}\) does not affect the distribution of \(\mathbf {{M}} \textbf{w}\) for \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\).

Lemma 2.10

(Lemma 5 in [15]) For all \(\lambda \in \mathbb {N}\) and all \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), it holds that

$$\begin{aligned} {{\,\textrm{Im}\,}}(\mathbf {{M}}) = \left\{ \textbf{x} | \textbf{w} \in \{0\}\times \{0, 1\}^{\lambda -1}, \textbf{x} = \mathbf {{M}}\textbf{w} \right\} = \left\{ \textbf{x} | \textbf{w} \in \{1\}\times \{0, 1\}^{\lambda -1}, \textbf{x} = \mathbf {{M}}\textbf{w} \right\} . \end{aligned}$$

Lemma 2.11

For all \(\lambda \in \mathbb {N}\) and all \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), the distributions of \(\textbf{x}\) and \(\textbf{x}'\) are identical, where \(\textbf{w}\xleftarrow {\$}\{0\}\times \{0, 1\}^{\lambda -1}\), \(\textbf{w}'\xleftarrow {\$}\{1\}\times \{0, 1\}^{\lambda -1}\), \(\textbf{x} = \mathbf {{M}} \textbf{w}\), and \(\textbf{x}' = \mathbf {{M}} \textbf{w}'\).

Proof

According to Lemma 2.8, for any \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), there exists \(\textbf{k}\in {{\,\textrm{Ker}\,}}(\mathbf {{M}})\) such that \(\textbf{k} \in \{1\}\times \{0, 1\}^{\lambda -1}\). Therefore, the distributions of \(\textbf{w}+\textbf{k}\), where \(\textbf{w}\xleftarrow {\$}\{0\}\times \{0, 1\}^{\lambda -1}\), and \(\textbf{w}'\), where \(\textbf{w}'\xleftarrow {\$}\{1\}\times \{0, 1\}^{\lambda -1}\), are identical. Moreover, we have \(\mathbf {{M}}\textbf{w}=\mathbf {{M}}(\textbf{w}+\textbf{k})\). Hence, the distributions of \(\mathbf {{M}}\textbf{w}\) and \(\mathbf {{M}}\textbf{w}'\) are identical, completing the proof of Lemma 2.11. \(\square \)

Below we recall a theorem implicitly given in [15] as the subset membership problem for an HPS. Roughly, it shows that for \(\mathbf {{M}}^\top \xleftarrow {\$}{\textsf{ZeroSamp}}(\lambda )\), a vector sampled from the span of \(\mathbf {{M}}\) is indistinguishable from one sampled outside the span of \(\mathbf {{M}}\) for any adversary in \(\mathsf {NC^1}\). The proof of this theorem is given in “Appendix A” for completeness.

Definition 2.12

(Fine-grained subset membership problem [15]) Let \({\textsf{SY}}=\{{\textsf{SampYes}}_\lambda \}_{\lambda \in \mathbb {N}}\) and \({\textsf{SN}}=\{{\textsf{SampNo}}_\lambda \}_{\lambda \in \mathbb {N}}\) be the function families described in Fig. 2. For all \(\lambda \in \mathbb {N}\), all \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), and all \(\textbf{x}\in {\textsf{SampNo}}_\lambda (\mathbf {{M}})\), we have \(\textbf{x}\in \{0,1\}^\lambda \setminus {{\,\textrm{Im}\,}}(\mathbf {{M}})\). Then, for \(\mathbf {{M}}^\top \xleftarrow {\$}{\textsf{ZeroSamp}}(\lambda )\), \(\textbf{x}_0\xleftarrow {\$}{\textsf{SampYes}}_\lambda (\mathbf {{M}})\), \(\textbf{x}_1\xleftarrow {\$}{\textsf{SampNo}}_\lambda (\mathbf {{M}})\), and any adversary \(\mathcal {A}= \{a_{\lambda }\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\), we have

$$\begin{aligned} \left| \Pr \left[ a_\lambda (\mathbf {{M}},\textbf{x}_0)=1\right] - \Pr \left[ a_\lambda (\mathbf {{M}},\textbf{x}_1)=1\right] \right| \le \textsf{negl}(\lambda ). \end{aligned}$$

Fig. 2

Definitions of \({\textsf{SY}}\) and \({\textsf{SN}}\). Note that \({\textsf{SY}},{\textsf{SN}}\in \mathsf {AC^0[2]}\), since they only involve operations including sampling random bits and multiplication of a matrix and a vector

Theorem 2.13

([15]) If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then the fine-grained subset membership problem (see Definition 2.12) holds.

Remark. Note that the subset membership problem in [15] gives a stronger result additionally showing that the output distributions of \({\textsf{SampYes}}_\lambda (\mathbf {{M}})\) and \({\textsf{SampNo}}_\lambda (\mathbf {{M}})\) are identical to the uniform distributions over \({{\,\textrm{Im}\,}}(\mathbf {{M}})\) and \(\{0, 1\}^\lambda \setminus {{\,\textrm{Im}\,}}(\mathbf {{M}})\) respectively. We only need a weak form of it in this work.

2.3 Predicate Encodings

We now recall the definition of predicate encodings. As in [10], our resulting construction of ABE is generally based on a predicate encoding. By exploiting various types of encodings, we can achieve a broad class of ABEs.

Our definition is slightly different from the original one in [10], in that it is over GF(2) rather than GF(p), and we require that the encodings be computable in a circuit class \({\mathcal {C}}_1\).

Definition 2.14

(Predicate Encoding [10]) Let \({\textsf{P}}=\{{\textsf{p}}_\lambda \}_{\lambda \in \mathbb {N}}\) with \({\textsf{p}}_\lambda :{\mathcal {X}}\times {\mathcal {Y}}\rightarrow \{0, 1\}\) be a predicate, where \({\mathcal {X}}\) and \({\mathcal {Y}}\) are polynomial-sized spaces associated with \(\lambda \). A \({\mathcal {C}}_1\)-predicate encoding for \({\textsf{P}}\) is a function family \({\textsf{PE}}=\{{\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda },{\textsf{sE}}_{\lambda },{\textsf{sD}}_{\lambda },{\textsf{rD}}_{\lambda }\}_{\lambda \in \mathbb {N}}\in \mathcal C_1\) with

  • \({\textsf{rE}}_{\lambda }:{\mathcal {Y}}\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\eta \),

  • \({\textsf{kE}}_{\lambda }: {\mathcal {Y}}\times \{0, 1\}\rightarrow \{0, 1\}^\eta \),

  • \({\textsf{sE}}_{\lambda }:{\mathcal {X}}\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\zeta \),

  • \({\textsf{sD}}_{\lambda }:{\mathcal {X}}\times {\mathcal {Y}}\times \{0, 1\}^\zeta \rightarrow \{0, 1\}\),

  • \({\textsf{rD}}_{\lambda }:{\mathcal {X}}\times {\mathcal {Y}}\times \{0, 1\}^\eta \rightarrow \{0, 1\}\),

where \(\ell =\ell (\lambda )\), \(\eta =\eta (\lambda )\), and \(\zeta =\zeta (\lambda )\) are polynomials in \(\lambda \).

Linearity is satisfied if for all \(\lambda \in \mathbb {N}\) and all \((\textsf{x},\textsf{y})\in {\mathcal {X}}\times {\mathcal {Y}}\), the functions \({\textsf{rE}}_{\lambda }(\textsf{y},\cdot )\), \({\textsf{kE}}_{\lambda }(\textsf{y},\cdot )\), \({\textsf{sE}}_{\lambda }(\textsf{x},\cdot )\), \({\textsf{sD}}_{\lambda }(\textsf{x},\textsf{y},\cdot )\), and \({\textsf{rD}}_{\lambda }(\textsf{x},\textsf{y},\cdot )\) are \(\{0, 1\}\)-linear. Namely, for any \(\textsf{y}\in \mathcal Y\), any \(\textbf{w}_0,\textbf{w}_1\in \{0, 1\}^\ell \), and any \(c\in \{0, 1\}\), we have \({\textsf{rE}}_{\lambda }(\textsf{y},\textbf{w}_0+\textbf{w}_1\cdot c)={\textsf{rE}}_{\lambda }(\textsf{y},\textbf{w}_0)+{\textsf{rE}}_{\lambda }(\textsf{y},\textbf{w}_1)\cdot c\), and the analogous condition holds for \({\textsf{kE}}_{\lambda }\), \({\textsf{sE}}_{\lambda }\), \({\textsf{sD}}_{\lambda }\), and \({\textsf{rD}}_{\lambda }\).

Restricted  \(\alpha \)-reconstruction is satisfied if for all \(\lambda \in \mathbb {N}\), all \((\textsf{x},\textsf{y})\in {\mathcal {X}}\times {\mathcal {Y}}\) such that \({\textsf{p}}_\lambda (\textsf{x},\textsf{y})=1\), all \(\mathsf w\in \{0, 1\}^\ell \), and all \(\alpha \in \{0, 1\}\), we have

$$\begin{aligned} {\textsf{rD}}_{\lambda }(\textsf{x},\textsf{y},{\textsf{rE}}_{\lambda }(\textsf{y},\mathsf w))={\textsf{sD}}_{\lambda }(\textsf{x},\textsf{y},{\textsf{sE}}_{\lambda }(\textsf{x},\mathsf w))\ \text{ and }\ {\textsf{rD}}_{\lambda }(\textsf{x},\textsf{y},{\textsf{kE}}_{\lambda }(\textsf{y},\alpha ))=\alpha . \end{aligned}$$

\(\alpha \)-privacy is satisfied if for all \(\lambda \in \mathbb {N}\), all \((\textsf{x},\textsf{y})\in {\mathcal {X}}\times {\mathcal {Y}}\) such that \({\textsf{p}}_\lambda (\textsf{x},\textsf{y})=0\), and all \(\alpha \in \{0, 1\}\), the following distributions are identical:

$$\begin{aligned} (\textsf{x},\textsf{y},\alpha ,{\textsf{sE}}_{\lambda }(\textsf{x},\mathsf w),{\textsf{rE}}_{\lambda }(\textsf{y},\mathsf w)+{\textsf{kE}}_{\lambda }(\textsf{y},\alpha ))\ \text{ and }\ (\textsf{x},\textsf{y},\alpha ,{\textsf{sE}}_{\lambda }(\textsf{x},\mathsf w),{\textsf{rE}}_{\lambda }(\textsf{y},\mathsf w)), \end{aligned}$$

where \(\mathsf w\) is chosen uniformly at random from \(\{0, 1\}^\ell \).

Intuitively, in a modularly designed attribute-based encryption (ABE) scheme, the attribute value in the user’s key is encoded by \({\textsf{rE}}_{\lambda }\) and \({\textsf{kE}}_{\lambda }\), while that in the ciphertext is encoded by \({\textsf{sE}}_{\lambda }\). The decryption algorithm uses the associated decoding algorithms \({\textsf{rD}}_{\lambda }\) and \({\textsf{sD}}_{\lambda }\) to decode \(({\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda })\) and \({\textsf{sE}}_{\lambda }\) respectively. The difference between the decoding results of \({\textsf{rD}}_{\lambda }\) and \({\textsf{sD}}_{\lambda }\) for \(({\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda })\) and \({\textsf{sE}}_{\lambda }\) is the decoding result of \({\textsf{rD}}_{\lambda }\) for \({\textsf{kE}}_{\lambda }\) only, which is used to yield the session key. These encoding algorithms can be instantiated according to the predicates of different types of ABEs, thus allowing for a modular and generic approach to ABE construction. Namely, different ABE schemes can be constructed by plugging in different encoding algorithms, based on the desired access structure for the scheme.

Remark on Notation for Predicate Encodings. Similar to [10], we abuse the notation

$$\begin{aligned} {\textsf{rE}}_{\lambda }(\textsf{x},\mathbf {{W}})\ \text{ where } \ \mathbf {{W}}=(\textbf{w}_{ij})_{i\in [l],j\in [m]}\ \text{ and }\ \textbf{w}_{ij}\in \{0, 1\}^\ell \end{aligned}$$

for all ij to denote the matrix

$$\begin{aligned} ({\textsf{rE}}_{\lambda }(\textsf{x},\textbf{w}_{ij}))_{i\in [l],j\in [m]}. \end{aligned}$$

The same argument is made for \(({\textsf{kE}}_{\lambda },{\textsf{sE}}_{\lambda },{\textsf{sD}}_{\lambda },{\textsf{rD}}_{\lambda })\).

Encoding for Equality. We now give an example of predicate encoding \({\textsf{PE}}_\textsf{eq}\) for equality \({\textsf{P}}_\textsf{eq}\) in Fig. 3. By instantiating our ABKEM given later in Sect. 5 with this encoding, we immediately achieve an IBKEM. Linearity is straightforward. Restricted \(\alpha \)-reconstruction follows from the fact that \(u+\textbf{x}^\top \textbf{w}=u+\textbf{y}^\top \textbf{w}\) when \(\textbf{x}=\textbf{y}\), and \(\alpha \)-privacy follows from the fact that \(u+\textbf{x}^\top \textbf{w}\) and \(u+\textbf{y}^\top \textbf{w}\) are pairwise independent if \(\textbf{x}\ne \textbf{y}\).
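To make the three properties concrete, the following is a minimal Python sketch of an equality encoding consistent with the facts stated above, with \(\textsf{sE}(\textsf{x},(u,\textbf{w}))=u+\textbf{x}^\top \textbf{w}\), \(\textsf{rE}(\textsf{y},(u,\textbf{w}))=u+\textbf{y}^\top \textbf{w}\), \(\textsf{kE}(\textsf{y},\alpha )=\alpha \), and \(\textsf{sD},\textsf{rD}\) the identity; the exact witness layout is an assumption, not read off Fig. 3. It exhaustively checks \(\{0,1\}\)-linearity, restricted \(\alpha \)-reconstruction, and \(\alpha \)-privacy for a toy length \(n=3\).

```python
import itertools

# Toy GF(2) sketch of an equality encoding consistent with the text:
# witness (u, w), sE(x,(u,w)) = u + x^T w, rE(y,(u,w)) = u + y^T w,
# kE(y, alpha) = alpha, and sD/rD the identity.  The witness layout is
# an illustrative assumption, not taken from Fig. 3.

n = 3  # hypothetical attribute length

def dot(a, b):
    return sum(ai & bi for ai, bi in zip(a, b)) & 1

def sE(x, uw): return (uw[0] + dot(x, uw[1:])) & 1
def rE(y, uw): return (uw[0] + dot(y, uw[1:])) & 1
def kE(y, alpha): return alpha & 1

def add(a, b):  # componentwise GF(2) addition of witnesses
    return tuple((ai + bi) & 1 for ai, bi in zip(a, b))

x, y = (1, 0, 1), (0, 1, 1)  # x != y
W = list(itertools.product([0, 1], repeat=n + 1))

# {0,1}-linearity: sE(x, w0 + w1*c) = sE(x, w0) + sE(x, w1)*c
assert all(sE(x, add(w0, tuple(c * b for b in w1))) ==
           (sE(x, w0) + c * sE(x, w1)) & 1
           for w0 in W for w1 in W for c in (0, 1))

# Restricted alpha-reconstruction for x = y (sD, rD are identities here)
assert all(rE(x, w) == sE(x, w) for w in W)
assert all(kE(x, a) == a for a in (0, 1))

# alpha-privacy for x != y: (sE, rE + alpha) and (sE, rE) are identically
# distributed over a uniform witness -- compare exact multisets.
real = sorted((sE(x, w), rE(y, w)) for w in W)
masked = sorted((sE(x, w), (rE(y, w) + 1) & 1) for w in W)
assert real == masked
```

The last check realizes the pairwise-independence argument above: over a uniform witness, \(u+\textbf{x}^\top \textbf{w}\) and \(u+\textbf{y}^\top \textbf{w}\) are jointly uniform whenever \(\textbf{x}\ne \textbf{y}\), so adding \(\alpha \) to the second component does not change the distribution.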

Fig. 3

Definitions of \({\textsf{P}}_\textsf{eq}=\{{\textsf{p}}_\lambda \}_{\lambda \in \mathbb {N}}\) and \({\textsf{PE}}_\textsf{eq}=\{{\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda },{\textsf{sE}}_{\lambda },{\textsf{sD}}_{\lambda },{\textsf{rD}}_{\lambda }\}\)

2.4 Attribute-Based Key Encapsulation

We now give the definition of fine-grained ABKEM; its instantiations can easily be converted into ABE schemes by using a one-time symmetric cipher.

Definition 2.15

(Attribute-based key encapsulation) A \({\mathcal {C}}_1\)-attribute-based key encapsulation (ABKEM) scheme for a predicate \({\textsf{P}}=\{{\textsf{p}}_\lambda \}_\lambda \) is a function family \({\textsf{ABKEM}}=\{{\textsf{Gen}}_\lambda ,{\textsf{USKGen}}_\lambda ,{\textsf{Enc}}_\lambda ,{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\in \mathcal C_1\) with the following properties.

  • \({\textsf{Gen}}_\lambda \) returns the (master) public/secret key \((\textsf{pk},\textsf{sk})\). We assume that \(\textsf{pk}\) implicitly defines value spaces \({\mathcal {X}}\) and \({\mathcal {Y}}\), a key space \(\mathcal {K}\), and a ciphertext space \(\mathcal {C}\).

  • \({\textsf{USKGen}}_\lambda (\textsf{sk},\textsf{y})\) returns a user secret-key \(\textsf{usk}[\textsf{y}]\) for a value \(\textsf{y}\in {\mathcal {Y}}\).

  • \({\textsf{Enc}}_\lambda (\textsf{pk},\textsf{x})\) returns a symmetric key \(\textsf{K}\in \mathcal {K}\) together with a ciphertext \(\textsf{ct}\in \mathcal {C}\) w.r.t. \(\textsf{x}\in {\mathcal {X}}\).

  • \({\textsf{Dec}}_\lambda (\textsf{usk}[\textsf{y}],\textsf{y},\textsf{x},\textsf{ct})\) deterministically returns a decapsulated key \(\textsf{K}\in {{\mathcal {K}}}\) or the reject symbol \(\bot \).

Perfect correctness is satisfied if for all \(\lambda \in \mathbb {N}\), all \((\textsf{pk},\textsf{sk})\in {\textsf{Gen}}_\lambda \), all \(\textsf{y}\in {\mathcal {Y}}\), all \(\textsf{x}\in {\mathcal {X}}\), all \(\textsf{usk}[\textsf{y}]\in {\textsf{USKGen}}_\lambda (\textsf{sk},\textsf{y})\), and all \((\textsf{K},\textsf{ct})\in {\textsf{Enc}}_\lambda (\textsf{pk},\textsf{x})\), if \({\textsf{p}}_\lambda (\textsf{x},\textsf{y})=1\), we have

$$\begin{aligned} \Pr [{\textsf{Dec}}_\lambda (\textsf{usk}[\textsf{y}],\textsf{y},\textsf{x},\textsf{ct})=\textsf{K}]=1. \end{aligned}$$

The security requirement we consider is pseudorandomness against chosen plaintext and attribute attacks (\({\mathsf {PR\text{- }AT\text{- }CPA}}\)), defined as follows.

Definition 2.16

(\({\mathsf {PR\text{- }AT\text{- }CPA}}\) Security for ABKEM) Let \(k(\cdot )\) and \(l(\cdot )\) be functions in \(\lambda \). \({\textsf{ABKEM}}\) is \({\mathcal {C}}_2\)-\((k,l)\)-\({\mathsf {PR\text{- }AT\text{- }CPA}}\) secure if for any \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in {\mathcal {C}}_2\), where \(a_\lambda \) is allowed to make k rounds of adaptive queries to \(\textsc {USKGen}(\cdot )\) and in each round it queries l inputs, we have

$$\begin{aligned} \left| \Pr \left[ {\mathsf {PR\text{- }AT\text{- }CPA}}_{\textsf{real}}^{a_\lambda }\Rightarrow 1\right] -\Pr \left[ {\mathsf {PR\text{- }AT\text{- }CPA}}_{\textsf{rand}}^{a_\lambda } \Rightarrow 1\right] \right| \le \textsf{negl}(\lambda ), \end{aligned}$$

where the experiments are defined in Fig. 4.

Fig. 4

Security Games \({\mathsf {PR\text{- }AT\text{- }CPA}}_{\textsf{real}}\) and \({\mathsf {PR\text{- }AT\text{- }CPA}}_{\textsf{rand}}\) for defining \({\mathsf {PR\text{- }AT\text{- }CPA}}\) security for ABKEM. The boxed statement redefining \(\textsf{K}^*\) is only executed in game \({\mathsf {PR\text{- }AT\text{- }CPA}}_{\textsf{rand}}\)

3 Generalized Affine MAC

In this section, we give the definition of generalized affine MAC, which generalizes the notion of standard affine MAC [5] by using predicate encodings, and show how to construct it in the fine-grained setting under the assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\).

3.1 Definitions

The definition of generalized affine MAC is as follows.

Definition 3.1

(Generalized Affine MAC) Let \({\textsf{PE}}=\{{\textsf{sE}}_{\lambda },{\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda },{\textsf{sD}}_{\lambda },{\textsf{rD}}_{\lambda }\}_{\lambda \in \mathbb {N}}\in \mathcal C_1\) be a predicate encoding for \({\textsf{P}}=\{{\textsf{p}}_\lambda \}_{\lambda \in \mathbb {N}}\), where \({\textsf{rE}}_{\lambda }: {\mathcal {Y}}\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\eta \), \({\textsf{kE}}_{\lambda }: {\mathcal {Y}}\times \{0, 1\}\rightarrow \{0, 1\}^\eta \), and \({\textsf{sE}}_{\lambda }:\mathcal X\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\zeta \).

A \({\mathcal {C}}_1\)-generalized affine message authentication code for \({\textsf{PE}}\) is a function family \({\textsf{MAC}}_\textsf{GA}=\{{{\textsf{Gen}}_{\textsf{MAC}}}_\lambda ,{\textsf{Tag}}_\lambda ,{{\textsf{Ver}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}}\in {\mathcal {C}}_1\).

  1. \({{\textsf{Gen}}_{\textsf{MAC}}}_\lambda \) returns \(\textsf{sk}_{\textsf{MAC}}\) containing \((\mathbf {{B}}, \mathbf {{X}},x')\), where \(\mathbf {{B}}^\top \in {\textsf{ZeroSamp}}(\lambda )\), \(\mathbf {{X}} \in \{0, 1\}^{\lambda \times \ell }\), and \(x' \in \{0, 1\}\).

  2. \({\textsf{Tag}}_\lambda (\textsf{sk}_{\textsf{MAC}},{\textsf{m}}\in {\mathcal {Y}})\) returns a tag \(\tau = (\textbf{t}, \textbf{u}) \in \{0, 1\}^\lambda \times \{0, 1\}^\eta \), computed as

    $$\begin{aligned} \textbf{t}\leftarrow {\textsf{SampYes}}_\lambda \in \{0, 1\}^\lambda , \end{aligned}$$
    (1)
    $$\begin{aligned} \textbf{u}={\textsf{rE}}_{\lambda }({\textsf{m}}, \mathbf {{X}}^\top \textbf{t}) + {\textsf{kE}}_{\lambda }({\textsf{m}},x') \in \{0, 1\}^\eta . \end{aligned}$$
    (2)
  3. \({{\textsf{Ver}}_{\textsf{MAC}}}_\lambda (\textsf{sk}_{\textsf{MAC}},{\textsf{m}},\tau =(\textbf{t}, \textbf{u} ))\) verifies whether Eq. (2) holds.

Correctness is satisfied if for any \(\textsf{sk}_{\textsf{MAC}}\in {{\textsf{Gen}}_{\textsf{MAC}}}_\lambda \), \({\textsf{m}}\in {\mathcal {Y}}\), and \(\tau \in {\textsf{Tag}}_\lambda (\textsf{sk}_{\textsf{MAC}}, {\textsf{m}})\), we have \(1={{\textsf{Ver}}_{\textsf{MAC}}}_\lambda (\textsf{sk}_{\textsf{MAC}},{\textsf{m}},\tau )\).
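The tag algebra above can be exercised with a toy Python sketch. Here the predicate encoding is instantiated in the equality style (\({\textsf{rE}}({\textsf{m}},\textbf{v})=v_0+\sum _i {\textsf{m}}_i v_i\), \({\textsf{kE}}({\textsf{m}},x')=x'\)), and \({\textsf{SampYes}}\)/\({\textsf{ZeroSamp}}\) are replaced by plain uniform sampling, so this illustrates only the computation of Eq. (2) and correctness, not security; all parameters are illustrative assumptions.

```python
import random

# Toy GF(2) version of the tag computation in Definition 3.1, with an
# equality-style encoding.  SampYes/ZeroSamp and B are replaced by plain
# uniform sampling of t, so only the algebra/correctness is illustrated.

lam, n = 8, 4                 # illustrative parameters (not from the paper)
rng = random.Random(1)

def dot(a, b): return sum(x & y for x, y in zip(a, b)) & 1

def rE(m, v): return (v[0] + dot(m, v[1:])) & 1   # rE(m, v) = v_0 + sum m_i v_i
def kE(m, xp): return xp & 1                      # kE(m, x') = x'

def gen():
    X = [[rng.randrange(2) for _ in range(n + 1)] for _ in range(lam)]
    return X, rng.randrange(2)                    # sk_MAC = (X, x')

def tag(sk, m):
    X, xp = sk
    t = [rng.randrange(2) for _ in range(lam)]    # stand-in for SampYes
    Xt = [dot(col, t) for col in zip(*X)]         # X^T t
    return t, (rE(m, Xt) + kE(m, xp)) & 1         # u as in Eq. (2)

def verify(sk, m, t, u):
    X, xp = sk
    Xt = [dot(col, t) for col in zip(*X)]
    return u == ((rE(m, Xt) + kE(m, xp)) & 1)     # recompute Eq. (2)

sk = gen()
m = [rng.randrange(2) for _ in range(n)]
t, u = tag(sk, m)
assert verify(sk, m, t, u)                        # correctness
assert not verify(sk, m, t, (u + 1) & 1)          # a flipped tag is rejected
```

Verification simply recomputes the affine expression from the secret key, which is why correctness holds with probability 1.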

The security requirement we consider is pseudorandomness against chosen message attacks (\({\mathsf {PR\text{- }CMA}}\)), defined as follows.

Definition 3.2

(\({\mathsf {PR\text{- }CMA}}\) Security) Let \(k=k(\lambda )\) and \(l=l(\lambda )\) be polynomials in \(\lambda \). \({\textsf{MAC}}_\textsf{GA}\) is \({\mathcal {C}}_2\text{- }(k,l)\text{- }{\mathsf {PR\text{- }CMA}}\) secure if for any \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in \mathcal C_2\), where \(a_\lambda \) is allowed to make k rounds of adaptive queries to \(\textsc {Eval}(\cdot )\) and in each round it queries l inputs, we have

$$\begin{aligned} \left| \Pr \left[ {\mathsf {PR\text{- }CMA}}_{\textsf{real}}^{a_\lambda } \Rightarrow 1\right] - \Pr \left[ {\mathsf {PR\text{- }CMA}}_{\textsf{rand}}^{a_\lambda } \Rightarrow 1\right] \right| \le \textsf{negl}(\lambda ), \end{aligned}$$

where the experiments are defined in Fig. 5.

Fig. 5

Games \({\mathsf {PR\text{- }CMA}}_{\textsf{real}}\) and \({\mathsf {PR\text{- }CMA}}_{\textsf{rand}}\) for defining \({\mathsf {PR\text{- }CMA}}\) security. The boxed statement redefining \(h_1\) is only executed in game \({\mathsf {PR\text{- }CMA}}_{\textsf{rand}}\)

Roughly, \({\mathsf {PR\text{- }CMA}}\) security says that, in the presence of many tags and a challenge token \((\textbf{h}_0,h_1)\), an adversary cannot tell whether \(h_1\) is honestly generated or uniformly random.

Standard Affine MAC. Let \({\mathcal {Y}}=\{0, 1\}^n\) and \(\mathbf {{X}}=(\textbf{x}_0,\ldots ,\textbf{x}_n)\), i.e., \(\ell =n+1\). When the predicate encoding is the one for equality (see Fig. 3), \(u\) is computed as

$$\begin{aligned} u=\textbf{x}_0^\top \textbf{t}+\sum _{i=1}^n {\textsf{m}}_i\textbf{x}_i^\top \textbf{t}+x'\in \{0, 1\}\end{aligned}$$
(3)

in Eq. (2), and \(\textbf{h}_0\) is computed as

$$\begin{aligned} \textbf{h}_0=h\cdot \left( \textbf{x}_0^\top +\sum _{i=1}^n {\textsf{m}}_i^*\textbf{x}_i^\top \right) \in \{0, 1\}^{1\times \lambda } \end{aligned}$$
(4)

in Fig. 5. In this case, the above definition becomes exactly the same as that of the affine MAC given in [5] for the HPS-based IBKEM, except that we only consider computations over GF(2) and \(\textbf{t}\) is sampled by \({\textsf{SampYes}}_\lambda \). We give the definition below.

Definition 3.3

(Affine MAC [5]) A generalized affine MAC for the predicate \({\textsf{P}}_\textsf{eq}\) and encoding \({\textsf{PE}}_\textsf{eq}\) defined as in Fig. 3 is called an affine MAC.

3.2 Construction

In this section, we give our construction of \(\mathsf {AC^0[2]}\)-generalized affine MAC based on \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\). It is a natural extension of the standard affine MAC from an HPS in [5].

Fig. 6

Definition of \({\textsf{MAC}}_\textsf{GA}=\{{{\textsf{Gen}}_{\textsf{MAC}}}_\lambda ,{\textsf{Tag}}_\lambda ,{{\textsf{Ver}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}}\)

Theorem 3.4

If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) and \({\textsf{PE}}=\{{\textsf{sE}}_{\lambda },{\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda },{\textsf{sD}}_{\lambda },{\textsf{rD}}_{\lambda }\}_{\lambda \in \mathbb {N}}\in \mathsf {AC^0[2]}\) is a predicate encoding, where \({\textsf{rE}}_{\lambda }: \mathcal Y\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\eta \), \({\textsf{kE}}_{\lambda }: {\mathcal {Y}}\times \{0, 1\}\rightarrow \{0, 1\}^\eta \), and \({\textsf{sE}}_{\lambda }:{\mathcal {X}}\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\zeta \), then \({\textsf{MAC}}_\textsf{GA}\) is an \(\mathsf {AC^0[2]}\)-generalized affine MAC that is \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }CMA}}\) secure, where k is any constant and \(l=l(\lambda )\) is any polynomial in \(\lambda \).

Fig. 7

Games \(\textsf{G}_0,(\textsf{G}_{1,i},\textsf{G}_{1,i}')_{1 \le i \le k\cdot l},\textsf{G}_{1,k\cdot l+1},\textsf{G}_2\) for the proof of Theorem 3.4

Proof

First, we note that \((\{{{\textsf{Gen}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}},\{{\textsf{Tag}}_\lambda \}_{\lambda \in \mathbb {N}},\{{{\textsf{Ver}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}})\) are computable in \(\mathsf {AC^0[2]}\), since they only involve operations including sampling random bits and multiplication of a constant number of matrices, which can be done in constant depth with PARITY gates. Also, it is straightforward that \({\textsf{MAC}}_\textsf{GA}\) satisfies correctness.

We now prove that \({\textsf{MAC}}_\textsf{GA}\) is \(\mathsf {NC^1}\)-\((k,l)\text{- }{\mathsf {PR\text{- }CMA}}\) secure by defining a sequence of intermediate games as in Fig. 7.

Let \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) be any adversary against the \({\mathsf {PR\text{- }CMA}}\) security of \({\textsf{MAC}}_\textsf{GA}\). Game \(\textsf{G}_0\) is the real attack game. In games \(\textsf{G}_{1,i}\), the first \(i-1\) queries to the \(\textsc {Eval}\) oracle are answered with \((\textbf{t},\textbf{u})\), where \(\textbf{t}\leftarrow {\textsf{SampNo}}_\lambda \) and \(\textbf{u}\) contains no information on \({\textsf{kE}}_{\lambda }({\textsf{m}},x')\), and the remaining queries are answered as in the real scheme. To interpolate between \(\textsf{G}_{1,i}\) and \(\textsf{G}_{1,i+1}\), we also define \(\textsf{G}_{1,i}'\), which answers the i-th query to \(\textsc {Eval}\) by picking \(\textbf{t}\leftarrow {\textsf{SampNo}}_\lambda \). By definition, we have \(\textsf{G}_0=\textsf{G}_{1,1}\). \(\square \)

Lemma 3.5

\(\Pr [{\mathsf {PR\text{- }CMA}}_{\textsf{real}}^{a_\lambda } \Rightarrow 1] =\Pr [\textsf{G}^{a_\lambda }_0 \Rightarrow 1]= \Pr [\textsf{G}^{a_\lambda }_{1,1}\Rightarrow 1]\).

Lemma 3.6

There exists an adversary \(\mathcal {B}_{1,i}=\{b_\lambda ^{1,i}\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda ^{1,i}\) breaks the fine-grained subset membership problem (see Definition 2.12), which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Theorem 2.13, with probability

$$\begin{aligned} \left| \Pr \left[ \textsf{G}'^{a_\lambda }_{1,i}\Rightarrow 1\right] - \Pr \left[ \textsf{G}^{a_\lambda }_{1,i} \Rightarrow 1\right] \right| . \end{aligned}$$

Proof

Games \(\textsf{G}_{1,i}\) and \(\textsf{G}'_{1,i}\) only differ in the distribution of \(\textbf{t}\) returned by the \(\textsc {Eval}\) oracle for its i-th query. We build \(b_\lambda ^{1,i}\) as follows.

The distinguisher \(b_\lambda ^{1,i}\) runs in exactly the same way as the challenger in \(\textsf{G}_{1,i}\) except that for the i-th query, it obtains \(\textbf{t}\), sampled as \(\textbf{t}\leftarrow {\textsf{SampYes}}_\lambda \) or \(\textbf{t}\leftarrow {\textsf{SampNo}}_\lambda \), from its own challenger. When \(a_\lambda \) outputs \(\beta \in \{0, 1\}\), \(b_\lambda ^{1,i}\) outputs \(\beta \) if no \({\textsf{m}}\) such that \({\textsf{p}}_\lambda ({\textsf{m}}^*,{\textsf{m}})=1\) was queried to \(\textsc {Eval}\). Otherwise, \(b_\lambda ^{1,i}\) outputs 0.

Since \(a_\lambda \) only makes constant rounds of queries, all the operations in \(b_\lambda \) are performed in \(\mathsf {NC^1}\). Hence, we have \(\mathcal {B}_{1,i}\in \mathsf {NC^1}\).

When \(\textbf{t}\) is sampled as \(\textbf{t}\leftarrow {\textsf{SampYes}}_\lambda \) (respectively, \(\textbf{t}\leftarrow {\textsf{SampNo}}_\lambda \)), the view of \(a_\lambda \) is exactly the same as its view in \(\textsf{G}_{1,i}\) (respectively, \(\textsf{G}_{1,i}'\)). Thus, the advantage of \(b_\lambda ^{1,i}\) in breaking the subset membership problem is \(| \Pr [\textsf{G}'^{a_\lambda }_{1,i}\Rightarrow 1] - \Pr [\textsf{G}^{a_\lambda }_{1,i} \Rightarrow 1]|\), completing this part of the proof. \(\square \)

Lemma 3.7

\(\Pr [\textsf{G}_{1,i+1}^{a_\lambda }\Rightarrow 1] =\Pr [\textsf{G}_{1,i}'^{a_\lambda }\Rightarrow 1]\).

Proof

Let \({\textsf{m}}\) be the i-th query to \(\textsc {Eval}\) such that \({\textsf{p}}_\lambda ({\textsf{m}}^*,{\textsf{m}})\ne 1\) and let \((\textbf{t},\textbf{u})\) be its tag. We have \(\textbf{t}\notin {{\,\textrm{Im}\,}}(\mathbf {{B}})\) due to Theorem 2.13. We use an information-theoretic argument to show that in \(\textsf{G}'_{1,i}\), \(\textbf{u}\) does not reveal any information on \(x'\). Information-theoretically, \(a_\lambda \) may learn \(\mathbf {{B}}^\top \mathbf {{X}}\) from each c-th query with \(c>i\). Thus, for \(\textbf{w}\leftarrow \{0, 1\}^\ell \) and \(\mathbf {{B}}^\bot \in \{0, 1\}^\lambda \) such that \({\mathbf {{B}}^\bot }^\top \mathbf {{B}}=\textbf{0}\) and \({\mathbf {{B}}^\bot }^\top \textbf{t}=1\), \(a_\lambda \) information-theoretically obtains the distribution of

$$\begin{aligned} \begin{aligned}&\begin{pmatrix} \mathbf {{X}}^\top \mathbf {{B}}\\ \textbf{h}_0=h\odot {\textsf{sE}}_{\lambda }({\textsf{m}}^*,\mathbf {{X}}^\top )\\ \textbf{u}={\textsf{rE}}_{\lambda }({\textsf{m}},\mathbf {{X}}^\top \textbf{t})+{\textsf{kE}}_{\lambda }({\textsf{m}},x') \end{pmatrix}\\&\quad = \begin{pmatrix} (\mathbf {{X}}^\top +\textbf{w} {\mathbf {{B}}^\bot }^\top ) \mathbf {{B}}\\ \textbf{h}_0=h\odot {\textsf{sE}}_{\lambda }({\textsf{m}}^*,\mathbf {{X}}^\top +\textbf{w} {\mathbf {{B}}^\bot }^\top )\\ \textbf{u}={\textsf{rE}}_{\lambda }({\textsf{m}},(\mathbf {{X}}^\top +\textbf{w} {\mathbf {{B}}^\bot }^\top )\textbf{t})+{\textsf{kE}}_{\lambda }({\textsf{m}},x') \end{pmatrix}\\&\quad = \begin{pmatrix} \mathbf {{X}}^\top \mathbf {{B}}\\ \textbf{h}_0=h\odot ({\textsf{sE}}_{\lambda }({\textsf{m}}^*,\mathbf {{X}}^\top )+{\textsf{sE}}_{\lambda }({\textsf{m}}^*,\textbf{w} {\mathbf {{B}}^\bot }^\top ))\\ \textbf{u}={\textsf{rE}}_{\lambda }({\textsf{m}},\mathbf {{X}}^\top \textbf{t})+{\textsf{rE}}_{\lambda }({\textsf{m}},\textbf{w})+{\textsf{kE}}_{\lambda }({\textsf{m}},x') \end{pmatrix} (\because \textbf{t}\notin {{\,\textrm{Im}\,}}(\mathbf {{B}})). \end{aligned} \end{aligned}$$

This distribution is identical to the distribution of

$$\begin{aligned} \begin{pmatrix} \mathbf {{X}}^\top \mathbf {{B}}\\ \textbf{h}_0=h\odot ({\textsf{sE}}_{\lambda }({\textsf{m}}^*,\mathbf {{X}}^\top )+{\textsf{sE}}_{\lambda }({\textsf{m}}^*,\textbf{w} {\mathbf {{B}}^\bot }^\top ))\\ \textbf{u}={\textsf{rE}}_{\lambda }({\textsf{m}},\mathbf {{X}}^\top \textbf{t})+{\textsf{rE}}_{\lambda }({\textsf{m}},\textbf{w}) \end{pmatrix}, \end{aligned}$$

since the distribution of

$$\begin{aligned} ({\textsf{m}}^*,{\textsf{m}},x',{\textsf{sE}}_{\lambda }({\textsf{m}}^*,\textbf{w}),{\textsf{rE}}_{\lambda }({\textsf{m}},\textbf{w})+{\textsf{kE}}_{\lambda }({\textsf{m}},x')) \end{aligned}$$

and

$$\begin{aligned} ({\textsf{m}}^*,{\textsf{m}},x',{\textsf{sE}}_{\lambda }({\textsf{m}}^*,\textbf{w}),{\textsf{rE}}_{\lambda }({\textsf{m}},\textbf{w})), \end{aligned}$$

are identical due to the \(\alpha \)-privacy of \({\textsf{PE}}\), completing this part of the proof. \(\square \)
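The masking step above can be checked concretely: whenever \(\textbf{t}\notin {{\,\textrm{Im}\,}}(\mathbf {{B}})\) there is a vector \(\mathbf {{B}}^\bot \) with \({\mathbf {{B}}^\bot }^\top \mathbf {{B}}=\textbf{0}\) and \({\mathbf {{B}}^\bot }^\top \textbf{t}=1\), so replacing \(\mathbf {{X}}^\top \) by \(\mathbf {{X}}^\top +\textbf{w}{\mathbf {{B}}^\bot }^\top \) leaves \(\mathbf {{X}}^\top \mathbf {{B}}\) unchanged while shifting \(\mathbf {{X}}^\top \textbf{t}\) by \(\textbf{w}\). The concrete \(\mathbf {{B}}\), \(\textbf{t}\), and dimensions in this GF(2) sketch are illustrative choices, not taken from the paper.

```python
import itertools
import random

# GF(2) demo of the masking step in Lemma 3.7: with t outside Im(B),
# a vector bp = B^perp with bp^T B = 0 and bp^T t = 1 exists, and
# X^T -> X^T + w bp^T preserves X^T B while shifting X^T t by w.

lam, ell = 4, 3
rng = random.Random(7)

B = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]  # Im(B) = {v : v_4 = 0}
t = [0, 0, 0, 1]                                  # t outside Im(B)

def dot(a, b):
    return sum(x & y for x, y in zip(a, b)) & 1

def vecmat(v, M):  # row vector times matrix over GF(2)
    return [sum(v[i] & M[i][j] for i in range(len(v))) & 1
            for j in range(len(M[0]))]

# brute-force B^perp with bp^T B = 0 and bp^T t = 1
bp = next(c for c in itertools.product([0, 1], repeat=lam)
          if vecmat(list(c), B) == [0] * ell and dot(c, t) == 1)

X = [[rng.randrange(2) for _ in range(ell)] for _ in range(lam)]
w = [rng.randrange(2) for _ in range(ell)]
# X' with X'^T = X^T + w bp^T, i.e. X'[i][j] = X[i][j] + bp[i] w[j]
Xp = [[(X[i][j] + bp[i] * w[j]) & 1 for j in range(ell)] for i in range(lam)]

XT, XpT = [list(c) for c in zip(*X)], [list(c) for c in zip(*Xp)]
assert all(vecmat(r0, B) == vecmat(r1, B)          # X'^T B = X^T B
           for r0, r1 in zip(XT, XpT))
assert [dot(r1, t) for r1 in XpT] == [(dot(r0, t) + wj) & 1   # shifted by w
                                      for r0, wj in zip(XT, w)]
```

Since \(\textbf{w}\) is uniform and never revealed except through these two quantities, the shift is exactly what lets \(\alpha \)-privacy erase \({\textsf{kE}}_{\lambda }({\textsf{m}},x')\) from \(\textbf{u}\).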

Lemma 3.8

\(\Pr [\textsf{G}^{a_\lambda }_{2} \Rightarrow 1] = \Pr [\textsf{G}^{a_\lambda }_{1,k\cdot l+1} \Rightarrow 1]\).

Proof

Note that \(a_\lambda \) can ask at most \(k\cdot l\)-many \(\textsc {Eval}\) queries. In both \(\textsf{G}_{1,k\cdot l+1}\) and \(\textsf{G}_2\), all the answers of \(\textsc {Eval}\) are independent of \(x'\). Hence, \(h_1\) from \(\textsf{G}_{1,k\cdot l+1}\) is uniform in the view of \(a_\lambda \). \(\square \)

We now undo all the previous steps in reverse order, as shown in Fig. 8. By applying the above arguments in reverse, we obtain the following lemma.

Lemma 3.9

There exists an adversary \(\mathcal {B}_2=\{b^2_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda ^2\) breaks the fine-grained subset membership problem with probability at least

$$\begin{aligned} \left( \left| \Pr \left[ {\mathsf {PR\text{- }CMA}}_{\textsf{rand}}^{a_\lambda } \Rightarrow 1\right] - \Pr \left[ \textsf{G}^{a_\lambda }_{2}\Rightarrow 1\right] \right| \right) /(k\cdot l). \end{aligned}$$
Fig. 8

Games \(\textsf{H}_0,(\textsf{H}_{1,i},\textsf{H}_{1,i}')_{1 \le i \le k\cdot l},\textsf{H}_{1,k\cdot l+1},\textsf{H}_2\) for the proof of Lemma 3.9

Putting all above together, Theorem 3.4 immediately follows. \(\square \)

An Affine MAC. By instantiating the underlying predicate encoding in Fig. 6 with the encoding for equality (see Fig. 3), we immediately obtain an affine MAC \({\textsf{MAC}}=\{{{\textsf{Gen}}_{\textsf{MAC}}}_\lambda ,{\textsf{Tag}}_\lambda ,{{\textsf{Ver}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}}\) as in Fig. 9 for message space \(\{0,1\}^\ell \), which will be used to construct an IBE scheme in \(\mathsf {NC^1}\) later. Formally, we have the following corollary derived from Theorem 3.4.

Fig. 9

Definition of \({\textsf{MAC}}=\{{{\textsf{Gen}}_{\textsf{MAC}}}_\lambda ,{\textsf{Tag}}_\lambda ,{{\textsf{Ver}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}}\)

Corollary 3.10

If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then \({\textsf{MAC}}\) is an \(\mathsf {AC^0[2]}\)-affine MAC that is \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }CMA}}\) secure, where k is any constant and \(l=l(\lambda )\) is any polynomial in \(\lambda \).

4 Fine-Grained Secure Identity-Based Encryption

In this section, we present our fine-grained IBE scheme, which captures the core techniques of our ABE scheme given later in Sect. 5.

4.1 Definition

We now give the definition of fine-grained IBKEM, which is a special case of fine-grained ABKEM (see Definition 2.15) where the boolean predicate is restricted to be the equality predicate.

Definition 4.1

(Identity-based key encapsulation) A \({\mathcal {C}}_1\)-identity-based key encapsulation (IBKEM) scheme is a function family \({\textsf{IBKEM}}=\{{\textsf{Gen}}_\lambda ,{\textsf{USKGen}}_\lambda ,{\textsf{Enc}}_\lambda ,{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\in \mathcal C_1\) with the following properties.

  • \({\textsf{Gen}}_\lambda \) returns the (master) public/secret key \((\textsf{pk},\textsf{sk})\). We assume that \(\textsf{pk}\) implicitly defines an identity space \(\mathcal{I}\mathcal{D}\), a key space \(\mathcal {K}\), and a ciphertext space \(\mathcal {C}\).

  • \({\textsf{USKGen}}_\lambda (\textsf{sk},\textsf{id})\) returns a user secret-key \(\textsf{usk}[\textsf{id}]\) for an identity \(\textsf{id}\in \mathcal{I}\mathcal{D}\).

  • \({\textsf{Enc}}_\lambda (\textsf{pk},\textsf{id})\) returns a symmetric key \(\textsf{K}\in \mathcal {K}\) together with a ciphertext \(\textsf{ct}\in \mathcal {C}\) w.r.t. \(\textsf{id}\in \mathcal{I}\mathcal{D}\).

  • \({\textsf{Dec}}_\lambda (\textsf{usk}[\textsf{id}],\textsf{id},\textsf{ct})\) deterministically returns a decapsulated key \(\textsf{K}\in {{\mathcal {K}}}\) or the reject symbol \(\bot \).

Perfect correctness is satisfied if for all \(\lambda \in \mathbb {N}\), all \((\textsf{pk},\textsf{sk})\in {\textsf{Gen}}_\lambda \), all \(\textsf{id}\in \mathcal{I}\mathcal{D}\), all \(\textsf{usk}[\textsf{id}]\in {\textsf{USKGen}}_\lambda (\textsf{sk},\textsf{id})\), and all \((\textsf{K},\textsf{ct})\in {\textsf{Enc}}_\lambda (\textsf{pk},\textsf{id})\), we have

$$\begin{aligned} \Pr [{\textsf{Dec}}_\lambda (\textsf{usk}[\textsf{id}],\textsf{id},\textsf{ct})=\textsf{K}]=1. \end{aligned}$$

The security requirement we consider is pseudorandomness against chosen plaintext and identity attacks (\({\mathsf {PR\text{- }ID\text{- }CPA}}\)), defined as follows.

Definition 4.2

(\({\mathsf {PR\text{- }ID\text{- }CPA}}\) Security for IBKEM) Let \(k(\cdot )\) and \(l(\cdot )\) be functions in \(\lambda \). \({\textsf{IBKEM}}\) is \({\mathcal {C}}_2\)-\((k,l)\)-\({\mathsf {PR\text{- }ID\text{- }CPA}}\) secure if for any \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in {\mathcal {C}}_2\), where \(a_\lambda \) is allowed to make k rounds of adaptive queries to \(\textsc {USKGen}(\cdot )\) and in each round it queries l inputs, we have

$$\begin{aligned} \left| \Pr \left[ {\mathsf {PR\text{- }ID\text{- }CPA}}_{\textsf{real}}^{a_\lambda }\Rightarrow 1\right] -\Pr \left[ {\mathsf {PR\text{- }ID\text{- }CPA}}_{\textsf{rand}}^{a_\lambda } \Rightarrow 1\right] \right| \le \textsf{negl}(\lambda ), \end{aligned}$$

where the experiments are defined in Fig. 10.

Fig. 10

Security Games \({\mathsf {PR\text{- }ID\text{- }CPA}}_{\textsf{real}}\) and \({\mathsf {PR\text{- }ID\text{- }CPA}}_{\textsf{rand}}\) for defining \({\mathsf {PR\text{- }ID\text{- }CPA}}\) security for IBKEM. The boxed statement redefining \(\textsf{K}^*\) is only executed in game \({\mathsf {PR\text{- }ID\text{- }CPA}}_{\textsf{rand}}\)

4.2 Construction

Let \({\textsf{MAC}}=\{{{\textsf{Gen}}_{\textsf{MAC}}}_\lambda ,{\textsf{Tag}}_\lambda ,{{\textsf{Ver}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) be an affine MAC over \(\{0,1\}^\lambda \) with message space \(\mathcal{I}\mathcal{D}\) in Fig. 9. Our IBKEM \({\textsf{IBKEM}}=\{{\textsf{Gen}}_\lambda ,{\textsf{USKGen}}_\lambda ,{\textsf{Enc}}_\lambda ,{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\) for key-space \({{\mathcal {K}}}=\{0,1\}\) and identity space \(\{0, 1\}^\ell \) is defined as in Fig. 11.

Fig. 11

Definition of our \({\textsf{IBKEM}}=\{{\textsf{Gen}}_\lambda ,{\textsf{USKGen}}_\lambda ,{\textsf{Enc}}_\lambda ,{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\) with identity space \(\{0, 1\}^\ell \) and key space \(\{0, 1\}\). \(\textsf{id}_i\) denotes the ith bit of \(\textsf{id}\) for all \(i\in [\ell ]\)

Theorem 4.3

Under the assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) and the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }CMA}}\) security of \({\textsf{MAC}}\), where k is any constant and \(l=l(\lambda )\) is any polynomial in \(\lambda \), \({\textsf{IBKEM}}\) is an \(\mathsf {AC^0[2]}\)-IBKEM that is \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }ID\text{- }CPA}}\) secure against \(\mathsf {NC^1}\).

Proof

First, we note that \(\{{\textsf{Gen}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{\textsf{USKGen}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{\textsf{Enc}}_\lambda \}_{\lambda \in \mathbb {N}}\), and \(\{{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\) are computable in \(\mathsf {AC^0[2]}\), since they only involve operations including multiplication of a constant number of matrices, sampling random bits, and running \({\textsf{MAC}}\in \mathsf {AC^0[2]}\).

Correctness follows from the fact that by Eq. (3) in Sect. 3.1, we have

$$\begin{aligned} (\textbf{v}^\top ||u)\textbf{c}_0&=\left( \textbf{t}^\top \left( \mathbf {{Y}}_0^\top +\sum _{i=1}^{\ell } \textsf{id}_i \odot \mathbf {{Y}}_i^\top \right) + \textbf{y}'^\top || \textbf{t}^\top \left( \textbf{x}_0 +\sum _{i=1}^\ell \textsf{id}_i\odot \textbf{x}_i\right) + x'\right) \mathbf {{A}} \textbf{r} \\ \textbf{t}^\top \textbf{c}_1&= \textbf{t}^\top \left( \mathbf {{Y}}_0^\top ||\textbf{x}_0+\sum _{i=1}^\ell \textsf{id}_i\odot (\mathbf {{Y}}_i^\top ||\textbf{x}_i)\right) \mathbf {{A}} \textbf{r} \end{aligned}$$

and the difference of the two elements yields \(\textsf{K}= (\textbf{y}'^\top ||x')\mathbf {{A}}\textbf{r}= \textbf{z}' \cdot \textbf{r}\).
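The telescoping in this correctness computation can be verified with a small GF(2) sketch: since the identity holds for every vector in place of \(\mathbf {{A}}\textbf{r}\), we test it with a random stand-in \(\textbf{a}\); the dimensions below are illustrative assumptions, not taken from Fig. 11.

```python
import random

# GF(2) check of the correctness computation above: the difference
# (v^T||u) c_0 - t^T c_1 equals (y'^T||x') A r = K, for any vector a = A r.

lam, ell = 4, 3
rng = random.Random(3)

def rbits(k): return [rng.randrange(2) for _ in range(k)]
def dot(a, b): return sum(x & y for x, y in zip(a, b)) & 1
def vecmat(v, M):
    return [sum(v[i] & M[i][j] for i in range(len(v))) & 1
            for j in range(len(M[0]))]
def madd(A, B): return [[(x + y) & 1 for x, y in zip(ra, rb)]
                        for ra, rb in zip(A, B)]
def cat(Mt, x):  # (Y^T || x): append x as an extra column
    return [row + [xi] for row, xi in zip(Mt, x)]

YT = [[rbits(lam) for _ in range(lam)] for _ in range(ell + 1)]  # Y_i^T
xs = [rbits(lam) for _ in range(ell + 1)]                        # x_i
yp, xp = rbits(lam), rng.randrange(2)                            # y', x'
t, ident, a = rbits(lam), rbits(ell), rbits(lam + 1)             # a = A r

M = cat(YT[0], xs[0])            # Y_0^T||x_0 + sum_i id_i (Y_i^T||x_i)
for i in range(1, ell + 1):
    if ident[i - 1]:
        M = madd(M, cat(YT[i], xs[i]))

tM = vecmat(t, M)                # t^T (Y_0^T||x_0 + sum id_i (Y_i^T||x_i))
vu = [(p + q) & 1 for p, q in zip(tM, yp + [xp])]   # (v^T || u)

# (v^T||u) c_0 + t^T c_1 = (y'^T||x') A r = K over GF(2)
assert (dot(vu, a) + dot(tM, a)) & 1 == dot(yp + [xp], a)
```

Over GF(2) the difference of the two decryption components is their sum, and all the \(\textsf{id}\)-dependent terms cancel, leaving only \((\textbf{y}'^\top ||x')\mathbf {{A}}\textbf{r}\).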

Let \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\) be any adversary against the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }ID\text{- }CPA}}\) security of \({\textsf{IBKEM}}\). We now prove the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }ID\text{- }CPA}}\) security by defining a sequence of games \(\textsf{G}_0\text{- }\textsf{G}_6\) as in Fig. 12. Roughly, in the first four games, we show how to extract a challenge token for \({\textsf{MAC}}\) from the challenge session key and ciphertext by switching the distribution of \(\mathbf {{A}}\) twice and changing the distribution of the randomness \(\textbf{r}\) during the switching procedure. In the last two games, we show that the commitments \(\mathbf {{Z}}_i\) and \(\textbf{z}'\) perfectly hide the secrets, and the answers of queries made by \(a_\lambda \) reveal no useful information other than the tags and token for \({\textsf{MAC}}\). \(\square \)

Fig. 12

Games \(\textsf{G}_0\text{- }\textsf{G}_6\) for the proof of Theorem 4.3

Lemma 4.4

\(\Pr [{\mathsf {PR\text{- }ID\text{- }CPA}}_{\textsf{real}}^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}^{a_\lambda }_1 \Rightarrow 1]=\Pr [\textsf{G}^{a_\lambda }_0 \Rightarrow 1]\).

Proof

\(\textsf{G}_0\) is the real attack game. In game \(\textsf{G}_1\), we change the simulation of \(\textbf{c}^*_0\), \(\textbf{c}^*_1\) and \(\textsf{K}^*\) in \(\textsc {Enc}(\textsf{id}^*)\) by substituting \(\mathbf {{Z}}_i\) and \(\textbf{z}'\) with their respective definitions and substituting \(\mathbf {{A}}\) with \(\mathbf {{A}}+\mathbf {{N}}^\lambda \). Since we have

$$\begin{aligned} \mathbf {{N}}^\lambda \textbf{r}= \begin{pmatrix} 0 &{}\quad \cdots &{}\quad &{}\quad 0\\ \vdots &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad &{}\quad \ddots &{}\quad \vdots \\ 1 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \end{pmatrix} \begin{pmatrix} 0 \\ r_2 \\ \vdots \\ r_{\lambda } \end{pmatrix}=\textbf{0}, \end{aligned}$$

the view of \(a_\lambda \) in \(\textsf{G}_1\) is identical to its view in \(\textsf{G}_0\), completing this part of the proof. \(\square \)
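A quick GF(2) sanity check of the displayed identity, assuming (as the displayed matrix suggests) that \(\mathbf {{N}}^\lambda \) has its single nonzero entry in the bottom-left corner: \(\mathbf {{N}}^\lambda \textbf{r}\) vanishes whenever the first entry of \(\textbf{r}\) is zero, since every nonzero column of \(\mathbf {{N}}^\lambda \) is the first one.

```python
import random

# Sanity check: N^lam (with a single 1 in the bottom-left corner, as in
# the displayed matrix) annihilates every r whose first entry is zero.

lam = 5
N = [[0] * lam for _ in range(lam)]
N[lam - 1][0] = 1                    # the displayed 1 in the first column

rng = random.Random(0)
r = [0] + [rng.randrange(2) for _ in range(lam - 1)]   # r_1 = 0
Nr = [sum(row[i] & r[i] for i in range(lam)) & 1 for row in N]
assert Nr == [0] * lam
```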

Lemma 4.5

There exists an adversary \(\mathcal {B}_1=\{b^1_\lambda \}_{\lambda \in \mathbb {N}}\) such that \(b^1_\lambda \) breaks the fine-grained matrix linear assumption (see Definition 2.5), which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Lemma 2.6, with advantage

$$\begin{aligned} \left| \Pr \left[ \textsf{G}^{a_\lambda }_2\Rightarrow 1\right] - \Pr \left[ \textsf{G}^{a_\lambda }_1 \Rightarrow 1\right] \right| . \end{aligned}$$

Proof

\(\textsf{G}_1\) and \(\textsf{G}_2\) only differ in the distribution of \(\mathbf {{A}}\), namely, \(\mathbf {{A}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) or \(\mathbf {{A}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\), and we build the distinguisher \(b_\lambda ^1\) as follows.

\(b_\lambda ^1\) runs in exactly the same way as the challenger of \(\textsf{G}_1\) except that in \(\textsc {Init}\), instead of generating \(\mathbf {{A}}\) by itself, it takes as input \(\mathbf {{A}}^\top \), generated as \(\mathbf {{A}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) or \(\mathbf {{A}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\), from its own challenger. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^1\) outputs \(\beta \) as well if \(\textsf{id}^*\) was not queried to \(\textsc {USKGen}\). Otherwise, \(b_\lambda ^1\) outputs 0.

If \(\mathbf {{A}}\) is generated as \(\mathbf {{A}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) (respectively, \(\mathbf {{A}}^\top \leftarrow {\textsf{OneSamp}}(\lambda )\)), the view of \(a_\lambda \) is the same as its view in \(\textsf{G}_1\) (respectively, \(\textsf{G}_2\)). Hence, the probability that \(b_\lambda ^1\) breaks the fine-grained matrix linear assumption is

$$\begin{aligned} \left| \Pr \left[ \textsf{G}^{a_\lambda }_2\Rightarrow 1\right] - \Pr \left[ \textsf{G}^{a_\lambda }_1 \Rightarrow 1\right] \right| . \end{aligned}$$

Moreover, since \(a_\lambda \) only makes constant rounds of queries, all operations in \(b_\lambda ^1\) are performed in \(\mathsf {NC^1}\). Hence, we have \(\mathcal {B}_1 = \{b_{\lambda }^1\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof. \(\square \)

Lemma 4.6

\(\Pr [\textsf{G}_3^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_2^{a_\lambda }\Rightarrow 1]\).

Proof

In this game, we sample \(\textbf{r}\) in \(\textsc {Enc}(\textsf{id}^*)\) as \(\textbf{r}\leftarrow \{0, 1\}^\lambda \) instead of \(\textbf{r}\leftarrow \{0\}\times \{0, 1\}^{\lambda -1}\). According to Lemma 2.9, the distribution of \(\mathbf {{A}}+\mathbf {{N}}^\lambda \) in \(\textsf{G}_2\) and \(\textsf{G}_3\) is identical to that of a matrix sampled from \({\textsf{ZeroSamp}}\). This lemma then follows immediately from Lemma 2.11. \(\square \)

Lemma 4.7

There exists an adversary \(\mathcal {B}_2=\{b_\lambda ^2\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda ^2\) breaks the fine-grained matrix linear assumption with advantage

$$\begin{aligned} \left| \Pr \left[ \textsf{G}^{a_\lambda }_4\Rightarrow 1\right] - \Pr \left[ \textsf{G}^{a_\lambda }_3 \Rightarrow 1\right] \right| . \end{aligned}$$

Proof

\(\textsf{G}_3\) and \(\textsf{G}_4\) only differ in the distribution of \(\mathbf {{A}}\), namely, whether \(\mathbf {{A}}^\top \) is sampled from \({\textsf{OneSamp}}(\lambda )\) or from \({\textsf{ZeroSamp}}(\lambda )\), and we build the distinguisher \(b_\lambda ^2\) as follows.

\(b_\lambda ^2\) runs in exactly the same way as the challenger of \(\textsf{G}_3\) except that in \(\textsc {Init}\), instead of generating \(\mathbf {{A}}\) by itself, it takes as input \(\mathbf {{A}}^\top \), generated by either \({\textsf{OneSamp}}(\lambda )\) or \({\textsf{ZeroSamp}}(\lambda )\), from its own challenger. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^2\) outputs \(\beta \) as well if \(\textsf{id}^*\) was not queried to \(\textsc {USKGen}\). Otherwise, \(b_\lambda ^2\) outputs 0.

If \(\mathbf {{A}}^\top \) is generated by \({\textsf{OneSamp}}(\lambda )\) (respectively, \({\textsf{ZeroSamp}}(\lambda )\)), the view of \(a_\lambda \) is the same as its view in \(\textsf{G}_3\) (respectively, \(\textsf{G}_4\)). Hence, the probability that \(b_\lambda ^2\) breaks the fine-grained matrix linear assumption is

$$\begin{aligned} \left| \Pr \left[ \textsf{G}^{a_\lambda }_4\Rightarrow 1\right] - \Pr \left[ \textsf{G}^{a_\lambda }_3 \Rightarrow 1\right] \right| . \end{aligned}$$

Moreover, since \(a_\lambda \) only makes a constant number of rounds of queries, all operations in \(b_\lambda ^2\) are performed in \(\mathsf {NC^1}\). Hence, we have \(\mathcal {B}_2 = \{b_{\lambda }^2\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof.

\(\square \)

Lemma 4.8

\(\Pr [\textsf{G}_5^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_4^{a_\lambda }\Rightarrow 1]\).

Proof

In \(\textsf{G}_5\), we no longer use \((\mathbf {{Y}}_i)_{i=0}^\ell \) and \(\textbf{y}'\) in \(\textsc {USKGen}(\textsf{id})\) or \(\textsc {Enc}(\textsf{id}^*)\). We make the sampling procedure for \(\mathbf {{A}}\) explicit and change the simulation of \(\mathbf {{Z}}_i\), \(\textbf{z}'\), \(\textbf{v}\), \(\textbf{c}_1^*\), and \(\textsf{K}^*\) as in Fig. 12. We now show that all these changes are purely conceptual.

In \(\textsf{G}_5\), we generate \(\mathbf {{A}}\) by sampling \(\mathbf {{R}}_0\leftarrow {\textsf{LSamp}}(\lambda )\) and \(\mathbf {{R}}_1\leftarrow {\textsf{RSamp}}(\lambda )\), and setting \(\mathbf {{A}}^\top =\mathbf {{R}}_0 \mathbf {{M}}_0^\lambda \mathbf {{R}}_1\). This is exactly the “zero-sampling” procedure, in which case we have

$$\begin{aligned} \begin{aligned} \mathbf {{Z}}_i&=(\mathbf {{Y}}_i^\top ||\textbf{x}_i)\mathbf {{A}} = (\mathbf {{Y}}_i^\top ||\textbf{x}_i)\mathbf {{R}}_1^\top {\mathbf {{M}}_0^\lambda }^\top \mathbf {{R}}_0^\top \\&=(\mathbf {{Y}}_i^\top ||\textbf{x}_i)\left( \begin{array}{ccc}\mathbf {{I}}_{\lambda -1} &{} \textbf{0}\\ \widetilde{\textbf{r}}^\top &{} 1\end{array}\right) \begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad \ddots &{}\quad \vdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \\ 0 &{}\quad \cdots &{}\quad 0 &{}\quad &{}\quad 1\\ 0 &{}\quad &{}\quad \cdots &{}\quad &{}\quad 0\\ \end{pmatrix} \mathbf {{R}}_0^\top \\&=(\mathbf {{Y}}_i^\top +\textbf{x}_i\cdot \widetilde{\textbf{r}}^\top ||\textbf{x}_i)\begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad \ddots &{}\quad \vdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \\ 0 &{}\quad \cdots &{}\quad 0 &{}\quad &{}\quad 1\\ 0 &{}\quad &{}\quad \cdots &{}\quad &{}\quad 0\\ \end{pmatrix}\mathbf {{R}}_0^\top \\&=(\textbf{0}||\mathbf {{Y}}_i^\top +\textbf{x}_i\cdot \widetilde{\textbf{r}}^\top )\mathbf {{R}}_0^\top =(\textbf{0}||\mathbf {{D}}_i)\mathbf {{R}}_0^\top \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \textbf{c}^*_1=\sum _{i=1}^\ell \textsf{id}^*_i\odot (\mathbf {{Y}}^\top _i\mid \textbf{x}_i) (\mathbf {{A}}+\mathbf {{N}}^\lambda )\textbf{r}=\sum _{i=1}^\ell \textsf{id}^*_i \odot (\mathbf {{Z}}_{i}{\textbf{r}}+\textbf{x}_i). \end{aligned} \end{aligned}$$

Hence, the distributions of \(\mathbf {{Z}}_i\) and \(\textbf{c}_1^*\) in \(\textsf{G}_5\) remain the same, and the distributions of \(\textbf{z}'\) and \(\textsf{K}^*\) can be analyzed in the same way. The distribution of \(\textbf{v}\) does not change either, since

$$\begin{aligned} \begin{aligned} \textbf{v}&=\textbf{t}^\top \left( \mathbf {{Y}}_0^\top +\sum _{i=1}^{\ell } \textsf{id}_i\odot \mathbf {{Y}}_i^\top \right) + \textbf{y}'^\top \\&=\textbf{t}^\top \left( \mathbf {{Y}}_0^\top +\textbf{x}_0\cdot \widetilde{\textbf{r}}^\top +\sum _{i=1}^{\ell } \textsf{id}_i\odot \left( \mathbf {{Y}}_i^\top +\textbf{x}_i\cdot \widetilde{\textbf{r}}^\top \right) \right) +\left( \textbf{y}'^\top +x'\cdot \widetilde{\textbf{r}}^\top \right) \\&\qquad - \left( \textbf{t}^\top \left( \textbf{x}_0+ \sum _{i=1}^{\ell } \textsf{id}_i \odot \textbf{x}_i\right) +x'\right) \cdot \widetilde{\textbf{r}}^\top \\&=\textbf{t}^\top \left( \mathbf {{D}}_0+\sum _{i=1}^{\ell } \textsf{id}_i\odot \mathbf {{D}}_i\right) +\textbf{d}' - u\cdot \widetilde{\textbf{r}}^\top . \end{aligned} \end{aligned}$$

Putting all of the above together, Lemma 4.8 follows immediately. \(\square \)
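The chain of equalities for \(\mathbf {{Z}}_i\) above can be verified numerically. The sketch below instantiates the displayed matrices over GF(2) in a toy dimension and checks \((\mathbf {{Y}}_i^\top ||\textbf{x}_i)\mathbf {{R}}_1^\top {\mathbf {{M}}_0^\lambda }^\top \mathbf {{R}}_0^\top =(\textbf{0}||\mathbf {{D}}_i)\mathbf {{R}}_0^\top \) with \(\mathbf {{D}}_i=\mathbf {{Y}}_i^\top +\textbf{x}_i\cdot \widetilde{\textbf{r}}^\top \); the unit upper-triangular shape of \(\mathbf {{R}}_0^\top \) is an assumption for illustration only (any invertible matrix works for this identity).

```python
import numpy as np

lam = 6
rng = np.random.default_rng(1)

YT = rng.integers(0, 2, (lam, lam - 1), dtype=np.uint8)   # Y_i^T
x  = rng.integers(0, 2, (lam, 1), dtype=np.uint8)         # x_i as a column
rt = rng.integers(0, 2, (1, lam - 1), dtype=np.uint8)     # r~^T

# R1^T = [[I, 0], [r~^T, 1]] and the superdiagonal matrix from the derivation.
R1T = np.eye(lam, dtype=np.uint8); R1T[lam - 1, :lam - 1] = rt
P = np.eye(lam, k=1, dtype=np.uint8)
R0T = (np.triu(rng.integers(0, 2, (lam, lam)), 1) + np.eye(lam, dtype=int)).astype(np.uint8)

lhs = np.hstack([YT, x]) @ R1T @ P @ R0T % 2              # (Y^T || x) A
D = (YT + x @ rt) % 2                                     # D_i = Y_i^T + x_i * r~^T
rhs = np.hstack([np.zeros((lam, 1), dtype=np.uint8), D]) @ R0T % 2
print(np.array_equal(lhs, rhs))  # True
```

The intermediate steps match the derivation: multiplying by \(\mathbf {{R}}_1^\top \) adds \(\textbf{x}_i\cdot \widetilde{\textbf{r}}^\top \) to \(\mathbf {{Y}}_i^\top \), and the superdiagonal matrix shifts the columns right, dropping \(\textbf{x}_i\) and introducing the zero column.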

Lemma 4.9

There exists an adversary \(\mathcal {B}_3=\{b_\lambda ^3\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda ^3\) breaks the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }CMA}}\)-security of \({\textsf{MAC}}\) with advantage

$$\begin{aligned} \left| \Pr \left[ \textsf{G}_6^{a_\lambda }\Rightarrow 1\right] - \Pr \left[ \textsf{G}_5^{a_\lambda }\Rightarrow 1\right] \right| . \end{aligned}$$

Proof

The challenger of \(\textsf{G}_6\) answers the \(\textsc {Enc}(\textsf{id}^*)\) query by choosing a random \(\textsf{K}^*\). We build \(b_\lambda ^3\) as in Fig. 13 to show that the difference between \(\textsf{G}_6\) and \(\textsf{G}_5\) can be bounded by the advantage of \(b_\lambda ^3\) in breaking the \({\mathsf {PR\text{- }CMA}}\) security of \({\textsf{MAC}}\).

Fig. 13
figure 13

Description of \(\mathcal {B}_3=\{b^3_\lambda \}_{\lambda \in \mathbb {N}}\) (having access to the oracles \({\textsc {Init}_{{\textsf{MAC}}},\textsc {Eval},\textsc {Chal},\textsc {Finalize}_{\textsf{MAC}}}\) of the \({\mathsf {PR\text{- }CMA}}_{\textsf{real}}\)/\({\mathsf {PR\text{- }CMA}}_{\textsf{rand}}\) games of Fig. 5 (instantiated with the encoding for equality predicate)) for the proof of Lemma 4.9

\(b_\lambda ^3\) runs in the same way as the challenger of \(\textsf{G}_5\) except that it samples \(\mathbf {{D}}_i\) and \(\textbf{d}'\) uniformly at random from \(\{0,1\}^{\lambda \times (\lambda -1)}\) and \(\{0,1\}^{1\times (\lambda -1)}\), respectively. This does not change the view of \(a_\lambda \), since \(\mathbf {{Y}}_i\) and \(\textbf{y}'\) were uniformly sampled in \(\textsf{G}_5\). Moreover, each time it receives a query \(\textsf{id}\) to \(\textsc {USKGen}\), \(b_\lambda ^3\) forwards \(\textsf{id}\) to its evaluation oracle \(\textsc {Eval}\) to obtain the answer \((\textbf{t},u)\), and on receiving the query \(\textsf{id}^*\) to \(\textsc {Enc}\), \(b_\lambda ^3\) forwards \(\textsf{id}^*\) to its challenge oracle \(\textsc {Chal}\) and uses the answer \((\textbf{h}_0,h_1)\) to simulate \(\textbf{r}\), \(\textbf{c}_1^*\), and \(\textsf{K}^*\) as in Fig. 13. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^3\) outputs \(\beta \) as well if \(\textsf{id}^*\) was not queried to \(\textsc {USKGen}\). Otherwise, \(b_\lambda ^3\) outputs 0.

If \(h_1\) is uniform (i.e., \(b^3_\lambda \) is in Game \({\mathsf {PR\text{- }CMA}}_{\textsf{rand}}\)) then the view of \(a_\lambda \) is identical to its view in \(\textsf{G}_6\). If \(h_1\) is real (i.e., \(b^3_\lambda \) is in Game \({\mathsf {PR\text{- }CMA}}_{\textsf{real}}\)), then the view of \(a_\lambda \) is identical to its view in \(\textsf{G}_5\). Thus the advantage of \(b_\lambda ^3\) is exactly

$$\begin{aligned} \left| \Pr \left[ \textsf{G}_6^{a_\lambda }\Rightarrow 1\right] - \Pr \left[ \textsf{G}_5^{a_\lambda }\Rightarrow 1\right] \right| . \end{aligned}$$

Moreover, since all operations in \(b^3_{\lambda }\) are performed in \(\mathsf {NC^1}\), we have \(\mathcal {B}_3 = \{b^3_{\lambda }\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof. \(\square \)

We now undo all the previous steps in reverse order, as in Fig. 14. Note that the view of the adversary in \(\textsf{H}_0\) (respectively, \(\textsf{H}_4\)) is identical to its view in \(\textsf{G}_6\) (respectively, \({\mathsf {PR\text{- }ID\text{- }CPA}}_{\textsf{rand}}\)). By applying the above arguments in reverse order, we obtain the following lemma.

Lemma 4.10

There exists an adversary \(\mathcal {B}_4=\{b_\lambda ^4\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda ^4\) breaks the fine-grained matrix linear assumption with advantage

$$\begin{aligned} \left( \left| \Pr \left[ \textsf{H}^{a_\lambda }_4\Rightarrow 1\right] - \Pr \left[ \textsf{H}^{a_\lambda }_0 \Rightarrow 1\right] \right| \right) /2. \end{aligned}$$
Fig. 14
figure 14

Games \(\textsf{H}_0\text{- }\textsf{H}_4\) for the proof of Theorem 4.3

Putting all of the above together, Theorem 4.3 follows immediately. \(\square \)

Extension to IBKEM with Large Key Space. The key space of the above IBKEM is \(\{0, 1\}\). By running the scheme in parallel, we can easily extend it to an IBKEM with a large key space. The resulting scheme can still be computed in \(\mathsf {AC^0[2]}\), since parallel repetition does not increase the circuit depth. The same extension can also be made for our fine-grained secure ABKEM given later in Sect. 5.
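The parallel extension can be sketched schematically. In the snippet below, `encap_bit` is a hypothetical stand-in for the one-bit scheme's \({\textsf{Enc}}\) (it is not the paper's construction): \(n\) independent instances are run side by side on the same identity, and their one-bit session keys are concatenated, which increases circuit width but not depth.

```python
import secrets

def encap_bit(pk, identity):
    """Hypothetical one-bit encapsulation standing in for Enc of one IBKEM instance."""
    k = secrets.randbits(1)              # one-bit session key
    return k, ("ct", pk, identity)       # placeholder ciphertext object

def encap_parallel(pks, identity):
    """Run n independent one-bit instances in parallel and concatenate the key bits.
    The instances do not interact, so the depth equals that of a single instance."""
    pairs = [encap_bit(pk, identity) for pk in pks]
    key_bits = [k for k, _ in pairs]
    cts = [ct for _, ct in pairs]
    return key_bits, cts

keys, cts = encap_parallel(list(range(8)), "id*")
print(len(keys), len(cts))  # 8 8
```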

Extension to QA-NIZK. Our techniques for proving the hiding property of the underlying commitment scheme in our IBKEM can also be used to construct an efficient fine-grained QA-NIZK in \(\mathsf {NC^1}\) with adaptive soundness. We refer the reader to “Appendix C” for details.

5 Fine-Grained Secure Attribute-Based Encryption

In this section, we generalize our IBE scheme to a fine-grained ABE scheme by using predicate encodings [10, 36]. By instantiating the underlying encodings in different ways, we can achieve ABEs for inner product, non-zero inner product, spatial encryption, doubly spatial encryption, boolean span programs, and arithmetic span programs, as well as broadcast encryption and fuzzy IBE schemes, all computable in \(\mathsf {AC^0[2]}\) and secure against \(\mathsf {NC^1}\) under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\). We refer the reader to “Appendix C” for several instances of the encodings and to [10] for more instances. We note that the encodings in [10] are defined over GF(p), while ours are over GF(2). However, the proofs for the encodings in [10] can be adapted to our case, since the linearity and \(\alpha \)-reconstruction properties that hold over GF(p) also hold over GF(2), and by standard linear-independence arguments over GF(2), \(\alpha \)-privacy holds in our case as well.

Let \({\textsf{PE}}=\{{\textsf{rE}}_{\lambda },{\textsf{kE}}_{\lambda },{\textsf{sE}}_{\lambda },{\textsf{sD}}_{\lambda },{\textsf{rD}}_{\lambda }\}_{\lambda \in \mathbb {N}}\in \mathsf {AC^0[2]}\) be a predicate encoding for \({\textsf{P}}=\{{\textsf{p}}_\lambda \}_{\lambda \in \mathbb {N}}\) with \({\textsf{rE}}_{\lambda }:{\mathcal {Y}}\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\eta \), \({\textsf{kE}}_{\lambda }: \mathcal Y\times \{0, 1\}\rightarrow \{0, 1\}^\eta \), \({\textsf{sE}}_{\lambda }:{\mathcal {X}}\times \{0, 1\}^\ell \rightarrow \{0, 1\}^\zeta \), \({\textsf{sD}}_{\lambda }:{\mathcal {X}}\times \mathcal Y\times \{0, 1\}^\zeta \rightarrow \{0, 1\}\), and \({\textsf{rD}}_{\lambda }:{\mathcal {X}}\times {\mathcal {Y}}\times \{0, 1\}^\eta \rightarrow \{0, 1\}\). Let \({\textsf{MAC}}_\textsf{GA}=\{{{\textsf{Gen}}_{\textsf{MAC}}}_\lambda ,{\textsf{Tag}}_\lambda ,{{\textsf{Ver}}_{\textsf{MAC}}}_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {AC^0[2]}\) be a \({\textsf{PE}}\)-generalized affine MAC over \(\{0, 1\}^\lambda \) with message space \({\mathcal {Y}}\). Our ABKEM \({\textsf{ABKEM}}=\{{\textsf{Gen}}_\lambda ,{\textsf{USKGen}}_\lambda ,{\textsf{Enc}}_\lambda ,{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\) is defined as in Fig. 15.

Fig. 15
figure 15

Construction of \({\textsf{ABKEM}}=\{{\textsf{Gen}}_\lambda ,{\textsf{USKGen}}_\lambda ,{\textsf{Enc}}_\lambda ,{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\)

Theorem 5.1

Under the assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) and the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\textsf{sE}}_{\lambda }\text{- }{\mathsf {PR\text{- }CMA}}\)-security of \({\textsf{MAC}}_\textsf{GA}\), where k is any constant and \(l=l(\lambda )\) is any polynomial in \(\lambda \), \({\textsf{ABKEM}}\) is an \(\mathsf {AC^0[2]}\)-ABKEM that is \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }AT\text{- }CPA}}\) secure against \(\mathsf {NC^1}\).

Proof

First, we note that \(\{{\textsf{Gen}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{\textsf{USKGen}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{\textsf{Enc}}_\lambda \}_{\lambda \in \mathbb {N}}\), and \(\{{\textsf{Dec}}_\lambda \}_{\lambda \in \mathbb {N}}\) are computable in \(\mathsf {AC^0[2]}\), since they involve only the multiplication of a constant number of matrices, the sampling of random bits, and the execution of \({\textsf{MAC}}_\textsf{GA}\in \mathsf {AC^0[2]}\).

By Eq. (2) in Sect. 3.1, we have

$$\begin{aligned}&{\textsf{rD}}_{\lambda }(\textsf{x},\textsf{y},\textbf{v}||\textbf{u})\, \textbf{c}_0\\&\quad = {\textsf{rD}}_{\lambda }\left( \textsf{x},\textsf{y},{\textsf{rE}}_{\lambda }\left( \textsf{y}, \begin{pmatrix} \textbf{t}^\top \mathbf {{Y}}_1^\top \\ \vdots \\ \textbf{t}^\top \mathbf {{Y}}_\ell ^\top \end{pmatrix}\right) +{\textsf{kE}}_{\lambda }(\textsf{y},\textbf{y}'^\top ) \,\Big |\Big |\, \begin{pmatrix} \textbf{t}^\top \textbf{x}_1\\ \vdots \\ \textbf{t}^\top \textbf{x}_\ell \end{pmatrix} + {\textsf{kE}}_{\lambda }(\textsf{y},x')\right) \mathbf {{A}}\textbf{r} \end{aligned}$$

and

$$\begin{aligned} {\textsf{sD}}_{\lambda }(\textsf{x},\textsf{y},\mathbf {{C}}_1\textbf{t}) ={\textsf{sD}}_{\lambda }(\textsf{x},\textsf{y},{\textsf{sE}}_{\lambda }\left( \textsf{x}, \begin{pmatrix} \textbf{t}^\top (\mathbf {{Y}}_1^\top ||\textbf{x}_1)\\ \vdots \\ \textbf{t}^\top (\mathbf {{Y}}_\ell ^\top ||\textbf{x}_\ell ) \end{pmatrix}\right) )\mathbf {{A}} \textbf{r}. \end{aligned}$$

Then, due to restricted \(\alpha \)-reconstruction (see Definition 2.14), subtracting the second equation from the first yields \(\textsf{K}= ({\textbf{y}'}^\top ||x')\mathbf {{A}}\textbf{r}= \textbf{z}' \cdot \textbf{r}\), i.e., correctness is satisfied.

Let \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\) be any adversary against the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }AT\text{- }CPA}}\) security of \({\textsf{ABKEM}}\). We now prove the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }AT\text{- }CPA}}\) security by defining a sequence of games \(\textsf{G}_0\text{- }\textsf{G}_6\) as in Fig. 16. Roughly, in the first four games, we show how to extract a challenge token for \({\textsf{MAC}}_\textsf{GA}\) from the challenge session key and ciphertext by switching the distribution of \(\mathbf {{A}}\) twice and changing the distribution of the randomness \(\textbf{r}\) during the switching procedure. In the last two games, we show that the commitments \(\mathbf {{Z}}_i\) and \(\textbf{z}'\) perfectly hide the secrets, and that the answers to the queries made by \(a_\lambda \) reveal no useful information other than the tags and token for \({\textsf{MAC}}_\textsf{GA}\). \(\square \)

Fig. 16
figure 16

Games \(\textsf{G}_0\text{- }\textsf{G}_6\) for the proof of Theorem 5.1

Lemma 5.2

\(\Pr [{\mathsf {PR\text{- }AT\text{- }CPA}}_{\textsf{real}}^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}^{a_\lambda }_1 \Rightarrow 1]=\Pr [\textsf{G}^{a_\lambda }_0 \Rightarrow 1]\).

Proof

\(\textsf{G}_0\) is the real attack game. In game \(\textsf{G}_1\), we change the simulation of \(\textbf{c}^*_0\), \(\mathbf {{C}}^*_1\) and \(\textsf{K}^*\) in \(\textsc {Enc}(\textsf{x})\) by substituting \(\mathbf {{Z}}_i\) and \(\textbf{z}'\) with their respective definitions and substituting \(\mathbf {{A}}\) with \(\mathbf {{A}}+\mathbf {{N}}^\lambda \). Since we have

$$\begin{aligned} \mathbf {{N}}^\lambda \textbf{r}= \begin{pmatrix} 0 &{}\quad \cdots &{}\quad &{}\quad 0\\ \vdots &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad &{}\quad \ddots &{}\quad \vdots \\ 1 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \end{pmatrix} \begin{pmatrix} 0 \\ r_2 \\ \vdots \\ r_{\lambda } \end{pmatrix}=\textbf{0}, \end{aligned}$$

the view of \(a_\lambda \) in \(\textsf{G}_1\) is identical to its view in \(\textsf{G}_0\), completing this part of proof. \(\square \)
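The displayed identity \(\mathbf {{N}}^\lambda \textbf{r}=\textbf{0}\) is easy to confirm directly; a minimal check over GF(2):

```python
import numpy as np

lam = 6
rng = np.random.default_rng(2)

# N^lam: a single 1 in the bottom-left corner, as in the displayed equation.
N = np.zeros((lam, lam), dtype=np.uint8); N[lam - 1, 0] = 1

# r with its first coordinate fixed to 0, as sampled in Enc before G3.
r = np.concatenate([[0], rng.integers(0, 2, lam - 1)]).reshape(lam, 1)

# N^lam annihilates any such r, so (A + N^lam) r = A r and the change is conceptual.
print((N @ r % 2).any())  # False
```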

Lemma 5.3

There exists an adversary \(\mathcal {B}_1=\{b^1_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b^1_\lambda \) breaks the fine-grained matrix linear assumption (see Definition 2.5), which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Theorem 2.13, with advantage

$$\begin{aligned} \left| \Pr \left[ \textsf{G}^{a_\lambda }_2\Rightarrow 1\right] - \Pr \left[ \textsf{G}^{a_\lambda }_1 \Rightarrow 1\right] \right| . \end{aligned}$$

Proof

\(\textsf{G}_1\) and \(\textsf{G}_2\) only differ in the distribution of \(\mathbf {{A}}\), namely, whether \(\mathbf {{A}}^\top \) is sampled from \({\textsf{ZeroSamp}}(\lambda )\) or from \({\textsf{OneSamp}}(\lambda )\), and we build the distinguisher \(b^1_\lambda \) as follows.

\(b_\lambda ^1\) runs in exactly the same way as the challenger of \(\textsf{G}_1\) except that in \(\textsc {Init}\), instead of generating \(\mathbf {{A}}\) by itself, it takes as input \(\mathbf {{A}}^\top \), generated by either \({\textsf{ZeroSamp}}(\lambda )\) or \({\textsf{OneSamp}}(\lambda )\), from its own challenger. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^1\) outputs \(\beta \) as well if no \(\textsf{y}\) such that \({\textsf{p}}_\lambda (\textsf{x},\textsf{y})=1\) was queried to \(\textsc {USKGen}\). Otherwise, \(b_\lambda ^1\) outputs 0.

If \(\mathbf {{A}}^\top \) is generated by \({\textsf{ZeroSamp}}(\lambda )\) (respectively, \({\textsf{OneSamp}}(\lambda )\)), the view of \(a_\lambda \) is the same as its view in \(\textsf{G}_1\) (respectively, \(\textsf{G}_2\)). Hence, the probability that \(b_\lambda ^1\) breaks the fine-grained matrix linear assumption is

$$\begin{aligned} \left| \Pr \left[ \textsf{G}^{a_\lambda }_2\Rightarrow 1\right] - \Pr \left[ \textsf{G}^{a_\lambda }_1 \Rightarrow 1\right] \right| . \end{aligned}$$

Moreover, since \(a_\lambda \) only makes a constant number of rounds of queries, all operations in \(b_\lambda ^1\) are performed in \(\mathsf {NC^1}\). Hence, we have \(\mathcal {B}_1 = \{b_{\lambda }^1\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof.

\(\square \)

Lemma 5.4

\(\Pr [\textsf{G}_3^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_2^{a_\lambda }\Rightarrow 1]\).

Proof

In this game, we sample \(\textbf{r}\) in \(\textsc {Enc}(\textsf{x})\) uniformly at random from \(\{0,1\}^\lambda \) instead of with its first entry fixed to 0. According to Lemma 2.9, the distributions of \(\mathbf {{A}}+\mathbf {{N}}^\lambda \) in both \(\textsf{G}_2\) and \(\textsf{G}_3\) are identical to that of a matrix sampled from \({\textsf{ZeroSamp}}\). Then this lemma follows immediately from Lemma 2.11. \(\square \)

Lemma 5.5

There exists an adversary \(\mathcal {B}_2=\{b^2_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b^2_\lambda \) breaks the fine-grained matrix linear assumption with advantage

$$\begin{aligned} \left| \Pr \left[ \textsf{G}^{a_\lambda }_4\Rightarrow 1\right] - \Pr \left[ \textsf{G}^{a_\lambda }_3 \Rightarrow 1\right] \right| . \end{aligned}$$

Proof

\(\textsf{G}_3\) and \(\textsf{G}_4\) only differ in the distribution of \(\mathbf {{A}}\), namely, whether \(\mathbf {{A}}^\top \) is sampled from \({\textsf{OneSamp}}(\lambda )\) or from \({\textsf{ZeroSamp}}(\lambda )\), and we build the distinguisher \(b^2_\lambda \) against the fine-grained matrix linear assumption (Lemma 2.6) as follows.

\(b_\lambda ^2\) runs in exactly the same way as the challenger of \(\textsf{G}_3\) except that in \(\textsc {Init}\), instead of generating \(\mathbf {{A}}\) by itself, it takes as input \(\mathbf {{A}}^\top \), generated by either \({\textsf{OneSamp}}(\lambda )\) or \({\textsf{ZeroSamp}}(\lambda )\), from its own challenger. When \(a_\lambda \) outputs \(\beta \), \(b^2_\lambda \) outputs \(\beta \) as well if no \(\textsf{y}\) such that \({\textsf{p}}_\lambda (\textsf{x},\textsf{y})=1\) was queried to \(\textsc {USKGen}\). Otherwise, \(b_\lambda ^2\) outputs 0.

If \(\mathbf {{A}}^\top \) is generated by \({\textsf{OneSamp}}(\lambda )\) (respectively, \({\textsf{ZeroSamp}}(\lambda )\)), the view of \(a_\lambda \) is the same as its view in \(\textsf{G}_3\) (respectively, \(\textsf{G}_4\)). Hence, the probability that \(b_\lambda ^2\) breaks the fine-grained matrix linear assumption is

$$\begin{aligned} |\Pr [\textsf{G}^{a_\lambda }_4\Rightarrow 1] - \Pr [\textsf{G}^{a_\lambda }_3 \Rightarrow 1]|. \end{aligned}$$

Moreover, since \(a_\lambda \) only makes a constant number of rounds of queries, all operations in \(b^2_\lambda \) are performed in \(\mathsf {NC^1}\). Hence, we have \(\mathcal {B}_2 = \{b^2_{\lambda }\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof.

\(\square \)

Lemma 5.6

\(\Pr [\textsf{G}_5^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_4^{a_\lambda }\Rightarrow 1]\).

Proof

In \(\textsf{G}_5\), we no longer use \((\mathbf {{Y}}_i)_{i=1}^\ell \) and \(\textbf{y}'\) in \(\textsc {USKGen}(\textsf{y})\) or \(\textsc {Enc}(\textsf{x})\). We make the sampling procedure for \(\mathbf {{A}}\) explicit and change the simulation of \(\mathbf {{Z}}_i\), \(\textbf{z}'\), \(\textbf{v}\), \(\mathbf {{C}}_1^*\), and \(\textsf{K}^*\) as in Fig. 16. We now show that all these changes are purely conceptual.

In \(\textsf{G}_5\), we generate \(\mathbf {{A}}\) by sampling \(\mathbf {{R}}_0\leftarrow {\textsf{LSamp}}(\lambda )\) and \(\mathbf {{R}}_1\leftarrow {\textsf{RSamp}}(\lambda )\), and setting \(\mathbf {{A}}^\top =\mathbf {{R}}_0 \mathbf {{M}}_0^\lambda \mathbf {{R}}_1\). This is exactly the “zero-sampling” procedure, in which case we have

$$\begin{aligned}\begin{aligned} \mathbf {{Z}}_i&=(\mathbf {{Y}}_i^\top ||\textbf{x}_i)\mathbf {{A}} = (\mathbf {{Y}}_i^\top ||\textbf{x}_i)\mathbf {{R}}_1^\top {\mathbf {{M}}_0^\lambda }^\top \mathbf {{R}}_0^\top \\&=(\mathbf {{Y}}_i^\top ||\textbf{x}_i)\left( \begin{array}{ccc}\mathbf {{I}}_{\lambda -1} &{} \textbf{0}\\ \widetilde{\textbf{r}}^\top &{} 1\end{array}\right) \begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad \ddots &{}\quad \vdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \\ 0 &{}\quad \cdots &{}\quad 0 &{}\quad &{}\quad 1\\ 0 &{}\quad &{}\quad \cdots &{}\quad &{}\quad 0\\ \end{pmatrix} \mathbf {{R}}_0^\top \\&=(\mathbf {{Y}}_i^\top +\textbf{x}_i\cdot \widetilde{\textbf{r}}^\top ||\textbf{x}_i)\begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad \ddots &{}\quad \vdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \\ 0 &{}\quad \cdots &{}\quad 0 &{}\quad &{}\quad 1\\ 0 &{}\quad &{}\quad \cdots &{}\quad &{}\quad 0\\ \end{pmatrix}\mathbf {{R}}_0^\top \\&=(\textbf{0}||\mathbf {{Y}}_i^\top +\textbf{x}_i\cdot \widetilde{\textbf{r}}^\top )\mathbf {{R}}_0^\top =(\textbf{0}||\mathbf {{D}}_i)\mathbf {{R}}_0^\top \end{aligned}\end{aligned}$$

and

$$\begin{aligned}\begin{aligned} {\mathbf {{C}}^*_1}=\,&{\textsf{sE}}_{\lambda }(\textsf{x},((\mathbf {{Y}}^\top _1\mid \textbf{x}_1) (\mathbf {{A}}+\mathbf {{N}}^\lambda )\textbf{r},\cdots ,(\mathbf {{Y}}^\top _\ell \mid \textbf{x}_\ell ) (\mathbf {{A}}+\mathbf {{N}}^\lambda )\textbf{r})^\top )\\ =\,&{\textsf{sE}}_{\lambda }(\textsf{x},(\mathbf {{Z}}_1{\textbf{r}}+\textbf{x}_1,\cdots ,\mathbf {{Z}}_\ell {\textbf{r}}+\textbf{x}_\ell )^\top )\\ =\,&{\textsf{sE}}_{\lambda }(\textsf{x},(\mathbf {{Z}}_1{\textbf{r}},\cdots ,\mathbf {{Z}}_\ell {\textbf{r}})^\top )+ {\textsf{sE}}_{\lambda }(\textsf{x},(\textbf{x}_1,\cdots ,\textbf{x}_\ell )^\top ). \end{aligned}\end{aligned}$$

Hence, the distributions of \(\mathbf {{Z}}_i\) in \(\textsf{G}_5\) remain the same, and the distributions of \(\textbf{z}'\) and \(\textsf{K}^*\) can be analyzed in the same way. The distribution of \(\textbf{v}\) does not change either, since

$$\begin{aligned}\begin{aligned} \textbf{v}&={\textsf{rE}}_{\lambda }\left( \textsf{y},\left( \mathbf {{Y}}_1\textbf{t},\cdots ,\mathbf {{Y}}_\ell \textbf{t}\right) ^\top \right) + {\textsf{kE}}_{\lambda }\left( \textsf{y},\textbf{y}'^\top \right) \\&={\textsf{rE}}_{\lambda }\left( \textsf{y},\left( \left( \mathbf {{Y}}_1+\widetilde{\textbf{r}}\cdot \textbf{x}_1^\top \right) \textbf{t},\cdots ,\left( \mathbf {{Y}}_\ell +\widetilde{\textbf{r}}\cdot \textbf{x}_\ell ^\top \right) \textbf{t}\right) ^\top \right) +{\textsf{kE}}_{\lambda }\left( \textsf{y},\textbf{y}'^\top +x'\cdot \widetilde{\textbf{r}}^\top \right) \\&\qquad - \left( {\textsf{rE}}_{\lambda }\left( \textsf{y},\left( \widetilde{\textbf{r}}\cdot \textbf{x}_1^\top \cdot \textbf{t},\cdots ,\widetilde{\textbf{r}}\cdot \textbf{x}_\ell ^\top \cdot \textbf{t}\right) ^\top \right) + {\textsf{kE}}_{\lambda }\left( \textsf{y},x'\cdot \widetilde{\textbf{r}}^\top \right) \right) \\&={\textsf{rE}}_{\lambda }\left( \textsf{y},\left( \mathbf {{D}}_1^\top \textbf{t},\cdots ,\mathbf {{D}}_\ell ^\top \textbf{t}\right) ^\top \right) +{\textsf{kE}}_{\lambda }\left( \textsf{y},{\textbf{d}'}\right) - \textbf{u}\cdot \widetilde{\textbf{r}}^\top . \end{aligned}\end{aligned}$$

Putting all of the above together, Lemma 5.6 follows immediately. \(\square \)

Lemma 5.7

There exists an adversary \(\mathcal {B}_3=\{b^3_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b^3_\lambda \) breaks the \(\mathsf {NC^1}\text{- }(k,l)\text{- }{\mathsf {PR\text{- }CMA}}\) security of \({\textsf{MAC}}_\textsf{GA}\) with advantage

$$\begin{aligned} \left| \Pr \left[ \textsf{G}_6^{a_\lambda }\Rightarrow 1\right] - \Pr \left[ \textsf{G}_5^{a_\lambda }\Rightarrow 1\right] \right| . \end{aligned}$$

Proof

The challenger of \(\textsf{G}_6\) answers the \(\textsc {Enc}(\textsf{x})\) query by choosing a random \(\textsf{K}^*\). We build \(b^3_\lambda \) as in Fig. 17 to show that the difference between \(\textsf{G}_6\) and \(\textsf{G}_5\) can be bounded by the advantage of \(b^3_\lambda \) in breaking the \({\mathsf {PR\text{- }CMA}}\) security of \({\textsf{MAC}}_\textsf{GA}\).

\(b_\lambda ^3\) runs in the same way as the challenger of \(\textsf{G}_5\) except that it samples \(\mathbf {{D}}_i\) and \(\textbf{d}'\) uniformly at random from \(\{0,1\}^{\lambda \times (\lambda -1)}\) and \(\{0,1\}^{1\times (\lambda -1)}\), respectively. This does not change the view of \(a_\lambda \), since \(\mathbf {{Y}}_i\) and \(\textbf{y}'\) were uniformly sampled in \(\textsf{G}_5\). Moreover, each time it receives a query \(\textsf{y}\) to \(\textsc {USKGen}\), \(b_\lambda ^3\) forwards \(\textsf{y}\) to its evaluation oracle \(\textsc {Eval}\) to obtain the answer \((\textbf{t},\textbf{u})\), and on receiving the query \(\textsf{x}\) to \(\textsc {Enc}\), \(b_\lambda ^3\) forwards \(\textsf{x}\) to its challenge oracle \(\textsc {Chal}\) and uses the answer \((h,\textbf{h}_0,h_1)\) to simulate \(\textbf{r}\), \(\mathbf {{C}}_1^*\), and \(\textsf{K}^*\) as in Fig. 17. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^3\) outputs \(\beta \) as well if no \(\textsf{y}\) such that \({\textsf{p}}_\lambda (\textsf{x},\textsf{y})=1\) was queried to \(\textsc {USKGen}\). Otherwise, \(b_\lambda ^3\) outputs 0.

Fig. 17
figure 17

Description of \(\mathcal {B}_3=\{b_\lambda ^3\}_{\lambda \in \mathbb {N}}\) (having access to the oracles \({\textsc {Init}_{{\textsf{MAC}}},\textsc {Eval},\textsc {Chal},\textsc {Finalize}_{\textsf{MAC}}}\) of the \({\mathsf {PR\text{- }CMA}}_{\textsf{real}}\)/\({\mathsf {PR\text{- }CMA}}_{\textsf{rand}}\) games of Fig. 5) for the proof of Lemma 5.7

Fig. 18
figure 18

Games \(\textsf{H}_0\text{- }\textsf{H}_4\) for the proof of Theorem 5.1

Fig. 19
figure 19

Games \(\textsf{G}_0\text{- }\textsf{G}_5\) for the proof of Proposition A.2

If \(h_1\) is uniform (i.e., \(b_\lambda ^3\) is in Game \({\mathsf {PR\text{- }CMA}}_{\textsf{rand}}\)), then the view of \(a_\lambda \) is identical to its view in \(\textsf{G}_6\). If \(h_1\) is real (i.e., \(b_\lambda ^3\) is in Game \({\mathsf {PR\text{- }CMA}}_{\textsf{real}}\)), then the view of \(a_\lambda \) is identical to its view in \(\textsf{G}_5\). Hence, the advantage of \(b_\lambda ^3\) in breaking the \({\mathsf {PR\text{- }CMA}}\) security is

$$\begin{aligned} \left| \Pr \left[ \textsf{G}_6^{a_\lambda }\Rightarrow 1\right] - \Pr \left[ \textsf{G}_5^{a_\lambda }\Rightarrow 1\right] \right| . \end{aligned}$$

Moreover, since \(a_\lambda \) only makes a constant number of rounds of queries, all operations in \(b_{\lambda }^3\) are performed in \(\mathsf {NC^1}\). Hence, we have \(\mathcal {B}_3 = \{b_{\lambda }^3\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof. \(\square \)

We now undo all the previous steps in reverse order, as in Fig. 18. Note that the view of the adversary in \(\textsf{H}_0\) (respectively, \(\textsf{H}_4\)) is identical to its view in \(\textsf{G}_6\) (respectively, \({\mathsf {PR\text{- }AT\text{- }CPA}}_{\textsf{rand}}\)). By applying the above arguments in reverse order, we obtain the following lemma.

Lemma 5.8

There exists an adversary \(\mathcal {B}_4=\{b_\lambda ^4\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda ^4\) breaks the fine-grained matrix linear assumption with advantage

$$\begin{aligned} \left( \left| \Pr \left[ \textsf{H}^{a_\lambda }_4\Rightarrow 1\right] - \Pr \left[ \textsf{H}^{a_\lambda }_0 \Rightarrow 1\right] \right| \right) /2. \end{aligned}$$

Putting all of the above together, Theorem 5.1 follows immediately. \(\square \)