1 Introduction

In 1984, Shamir introduced the notion of identity-based encryption (IBE) [Sha84]. The entire system is maintained by an authority called the Key Generation Center (KGC), which publishes a master public key \(\textsc {mpk}\) and keeps the master secret key \(\textsc {msk}\). Each user receives from the KGC a secret key \(\textsc {sk}\) for decryption, which is produced using \(\textsc {msk}\). To encrypt a message to a user in the system, one only needs \(\textsc {mpk}\) and the user’s identity \(\textsc {id}\), which can be a descriptive tag such as an email address.

Boneh and Franklin, in their seminal work [BF01] in 2001, formulated the security notion of IBE and proposed a pairing-based IBE in the random oracle model. Their security model has been accepted as the standard notion for IBE; it ensures that a ciphertext for a target identity \({\textsc {id}}^{*}\) reveals nothing about the plaintext even when an adversary \(\mathcal {A}\) holding \(\textsc {mpk}\) can obtain secret keys for any identity other than \({\textsc {id}}^{*}\). We call this adaptive security in this paper. Afterwards, a series of works was devoted to constructing IBE schemes in the standard model (i.e., without random oracles), including Boneh and Boyen’s IBE [BB04a] in the selective model, Boneh and Boyen’s IBE [BB04b] with huge security loss, Waters’ IBE [Wat05] with large \(\textsc {mpk}\), and Gentry’s IBE [Gen06] based on a q-type assumption. The dual system methodology was proposed in 2009 by Waters [Wat09]. With this novel and powerful proof technique, Waters proposed an IBE scheme with constant-size \(\textsc {mpk}\) in the standard model. Its adaptive security is proven under standard and static complexity assumptions, and the security loss is proportional to the number of secret keys held by the adversary. This is the first IBE scheme achieving all these features simultaneously.

Since Waters’ proof deals with only one secret key at a time, a security loss of this order of magnitude seems to be inherent. Fortunately, Chen and Wee [CW13] combined the proof idea underlying the Naor-Reingold PRF [NR04] with the dual system methodology and showed an almost-tightly secure IBE scheme. Here almost tight means the security loss can be bounded by a polynomial in the security parameter \(\lambda \) instead of the number of revealed secret keys. Soon afterwards, Blazy et al. [BKP14] described a generic transformation from affine MACs to IBE and constructed an affine MAC with an almost-tight reduction. Their method essentially follows Chen and Wee’s [CW13] but leads to a more efficient IBE. Recently, the study of almost-tightly secure IBE has been extended to the multi-instance, multi-ciphertext setting [HKS15, GCD+16, AHY15, GDCC16]. However, the following two problems left by Chen and Wee [CW13] still remain open.

[Boxed statement of the two open problems posed by Chen and Wee [CW13]: (1) an almost-tightly secure IBE with constant-size master public key, and (2) further reducing the security loss.]

It is worth noting that Attrapadung et al. [AHY15] provided a technique achieving a trade-off between the size of the master public key and the sizes of secret keys and ciphertexts. As a special case, they can indeed reach a constant-size master public key, but at the cost of larger secret keys and ciphertexts (and vice versa). We do not consider this a satisfactory solution to Chen and Wee’s first open problem, since one must preserve the advantages of Chen and Wee’s IBE such as constant-size secret keys and ciphertexts.

1.1 Our Contribution

In this paper, we present an IBE scheme in a composite-order bilinear group [BGN05] with constant-size master public key, ciphertexts, and secret keys. Its adaptive security in the multi-instance, multi-ciphertext setting relies on several concrete decisional subgroup assumptions [BWY11] and a subgroup variant of the decisional bilinear Diffie-Hellman (DBDH) assumption. The security reduction incurs a loss of \({\mathcal {O}}(\log q)\), where q is an upper bound on the total number of secret keys and challenge ciphertexts per instance.

We make a comparison in Table 1. On one hand, our IBE has the shortest master public key, ciphertexts, and secret keys, and the fastest decryption algorithm \({{\mathsf {Dec}}}\). In fact, its performance nearly matches that of Wee’s petit IBE [Wee16]. On the other hand, we achieve a tighter reduction in a concrete sense. Under a typical setting where \(q = 2^{30}\) and \(n = 128\), the security loss of our IBE scheme is just a quarter of that of all previous ones [CW13, HKS15, AHY15]. Therefore our result (partially) answers Chen and Wee’s first open problem and makes significant progress on the second one. We emphasize that the multi-instance, multi-ciphertext setting [HKS15] is more realistic and complex than Boneh and Franklin’s standard security notion [BF01]. This means that we are actually working on Chen and Wee’s open problems in a more complex setting.

Table 1. Comparing existing tightly secure IBE in the composite-order bilinear group.

Our Strategy. Chen and Wee [CW13] have pointed out that solving these two open problems may require some kind of progress on the underlying PRF, which is another long-standing problem. As our high-level strategy, we reverse the problem in order to circumvent this technical difficulty. In particular, instead of reducing the master public key of a tightly secure IBE to constant size, we try to improve the tightness of an IBE scheme that already has a constant-size master public key. Technically, we propose a variant of Wee’s petit IBE [Wee16] which is tightly secure and inherits all advantages of Wee’s petit IBE. Our work is inspired by Chen and Wee’s tight reduction technique at a very high level and brings Chase and Meiklejohn’s idea [CM14] back to Wee’s petit IBE [Wee16] in order to fulfill this intuition.

Our Method. Assume a composite-order bilinear group \((N = p_{1}p_{2}p_{3},\mathbb {G},\mathbb {G}_T,e)\). Let us review Wee’s petit IBE [Wee16]. At a high level, Wee followed the dual system methodology [Wat09] and employed the Déjà Q technique [CM14] with an extension. The IBE scheme is quite elegant, as described below.

$$\begin{aligned} \textsc {mpk} :&\, g_{1},\ g_{1}^\alpha ,\ e(g_{1},u),\ \mathsf {H}\\ \textsc {sk}_{\textsc {id}} :&\, u^{\frac{1}{\alpha + \textsc {id}}} \cdot R_3\\ \textsc {ct}_{\textsc {id}} :&\, g_{1}^{(\alpha + \textsc {id}) s},\ \mathsf {H}(e(g_{1},u)^s) \cdot \textsc {m} \end{aligned}$$

where \(g_{1},u \leftarrow {\mathbb {G}}_{p_{1}}\), \(\alpha ,s \leftarrow {{\mathbb {Z}}}_{N}\), \({R}_{3} \leftarrow {\mathbb {G}}_{{p}_{3}}\), and \(\mathsf {H}\) is selected from a pairwise-independent hash family. Here we consider \({\mathbb {G}}_{p_{1}}\) as the normal space and \({\mathbb {G}}_{p_{2}}\) as the semi-functional space. Subgroup \({\mathbb {G}}_{p_{3}}\) is used to randomize secret keys.

To prove the adaptive security, he first transformed the challenge ciphertext into the form

$$ {\textsc {ct}}_{{\textsc {id}}^{*}} : S,\ \mathsf {H}(e(S,{\textsc {sk}}_{{\textsc {id}}^{*}})) \cdot \textsc {m} $$

where \(S \leftarrow {\mathbb {G}}_{p_{1}}{\mathbb {G}}_{p_{2}}\) and \({\textsc {sk}}_{{\textsc {id}}^{*}}\) is a secret key for the target identity \({\textsc {id}}^{*}\). The core step is to inject enough entropy into the semi-functional space of \(\textsc {sk}_{\textsc {id}}\) for all \(\textsc {id}\) “touched” by the adversary (including the target identity \(\textsc {id}^*\)). More formally, define

$$ f_i (x) = \sum _{j = 1}^i \frac{r_j}{\alpha _j + x} \in {\mathbb {Z}}_{p_2} $$

where \(r_1,\ldots ,r_i,\alpha _1,\ldots ,\alpha _i \leftarrow {\mathbb {Z}}_{p_2}\). It has been proved that \(f_q\) behaves like a truly random function given only q input-output pairs [Wee16, CM14], where q depends on the total number of identities involved (in secret keys revealed to the adversary and in the challenge ciphertext). The remaining task is to transform all involved secret keys (including that used in \(\textsc {ct}_{\textsc {id}^*}\))

$$ \text {from}\qquad u^{\frac{1}{\alpha + \textsc {id}}} \cdot \boxed { g_2^{f_0(\textsc {id})} } \cdot R_3 \qquad \text {into} \qquad u^{\frac{1}{\alpha + \textsc {id}}} \cdot \boxed { g_2^{f_q(\textsc {id})} } \cdot R_3 $$

where \(f_0(\textsc {id}) = 0\) for all \(\textsc {id}\). Wee reached \(f_q\) in q steps following the roadmap

$$ f_0 \rightarrow f_1 \rightarrow f_2 \rightarrow \cdots \rightarrow f_q. $$

In the kth step, he extracted one unit of entropy \(r_k\) and \(\alpha _k\) from the normal space (i.e., from u and \(\alpha \)) and injected it into the semi-functional space (i.e., into \(f_{k-1}\)).

Chen and Wee’s success [CW13] teaches us that one must reach \(f_{q}\) much more quickly in order to obtain a tighter reduction. In other words, we should try to extract and inject more entropy each time. Our idea is to extract entropy from \(f_k\) (\(1 \le k \le q\)) itself rather than from u and \(\alpha \), and then inject it back into \(f_k\). A key observation is that \(f_k\) already has k units of entropy (i.e., \(\alpha _1,r_1,\ldots ,\alpha _k,r_k\)) and the structure of \(f_k\) allows us to reach \(f_{2k}\) directly, which will include 2k units of entropy. This significantly accelerates the process towards \(f_q\). In particular, the roadmap now becomes

$$ f_{2^0} \xrightarrow {\ \widehat{f}_{2^0}\ } f_{2^1} \xrightarrow {\ \widehat{f}_{2^1}\ } f_{2^2} \xrightarrow {\ \widehat{f}_{2^2}\ } \cdots \xrightarrow {\ \widehat{f}_{2^{n-1}}\ } f_{2^n} $$

where \(\widehat{f}_k\) indicates the entropy extracted from \(f_k\); both have the same structure, but \(\widehat{f}_k\) is defined by independent randomness over \({\mathbb {Z}}_{p_3}\). It is not hard to see that we only need \(n = \lceil \log q \rceil + 1\) steps to reach \(f_q\).

To fulfill the above intuition, we introduce another semi-functional space, which we call the shadow semi-functional space, to temporarily store the entropy extracted from \(f_k\) (i.e., \(\widehat{f}_{k}\) above), since we obviously cannot put it into the normal space. Furthermore, the new semi-functional space should allow us to flip all entropy back to the old semi-functional space, as Chase and Meiklejohn [CM14] did. We sketch our method below, where the IBE is now put into a bilinear group of order \(N = p_1p_2p_3p_4\). Subgroup \(\mathbb {G}_{p_3}\) acts as the shadow semi-functional space and \(\mathbb {G}_{p_4}\) is used to randomize secret keys.

We first extract one unit of entropy from u and \(\alpha \) and put it into the semi-functional space as in [Wee16], which forms \(f_{2^0} = f_1\). In the kth step, we first

$$ \text {extract} \qquad g_3^{\widehat{f}_{2^{k-1}}(\textsc {id})} \qquad \text { from } \qquad g_2^{f_{2^{k-1}}(\textsc {id})} $$

and then

$$ \text {flip} \qquad g_3^{\widehat{f}_{2^{k-1}}(\textsc {id})} \qquad \text { back as } \qquad g_2^{\widehat{f}_{2^{k-1}}(\textsc {id})} $$

which forms \(g_2^{f_{2^k}(\textsc {id})}\) together with \(g_2^{f_{2^{k-1}}(\textsc {id})}\). All these technical steps can be realized under several concrete instantiations of the decisional subgroup assumption.
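To make the doubling concrete, here is a toy sketch of the counting argument only (our own illustration over a single toy prime standing in for \(p_2\); it models neither the group elements nor the reductions): \(f_{2^k}\) is represented by its list of pairs \((\alpha _j, r_j)\), and one step appends an equally long, freshly sampled “hat” list, doubling the number of entropy units.

```python
from random import randrange
from math import ceil, log2

p = 2**61 - 1                             # large toy prime standing in for p2

def f_eval(pairs, x):                     # f(x) = sum_j r_j / (alpha_j + x) mod p
    return sum(r * pow(a + x, -1, p) for a, r in pairs) % p

def doubling_step(pairs):
    """One step of the roadmap f_{2^{k-1}} -> f_{2^k}: sample a shadow copy
    hat-f with fresh randomness and fold it back into the same exponent."""
    hat = [(randrange(p), randrange(p)) for _ in pairs]
    return pairs + hat                    # f_{2^k} = f_{2^{k-1}} + hat-f_{2^{k-1}}

q = 1000                                  # number of "touched" identities
pairs = [(randrange(p), randrange(p))]    # step 1 gives f_{2^0} = f_1
steps = 1
while len(pairs) < q:
    pairs = doubling_step(pairs)
    steps += 1

print(steps == ceil(log2(q)) + 1)         # True: n = ceil(log q) + 1 steps suffice
print(f_eval(pairs, 42))                  # f_{2^n} evaluated at some identity
```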

On the Multi-ciphertext Setting. We find that Wee’s proof idea [Wee16] and our extension (see above) can be directly extended to the (single-instance) multi-ciphertext setting, but with the restriction that only one challenge ciphertext is allowed for each target identity. This is the weak version of adaptive security in the multi-ciphertext setting [HKS15]. The first observation is that each challenge ciphertext has its own randomness s, which is sufficient for hiding \(\alpha \) on the ciphertext side. That is, we can always argue

$$ \{ g_1^{(\alpha + \textsc {id}) s},\ e(g_1,u)^s \} = \{ g_1^{(\alpha + \textsc {id}) s},\ e(g_1^{(\alpha + \textsc {id}) s},u^{\frac{1}{\alpha + \textsc {id}}}) \} = \{ g_1^s,\ e(g_1^s,u^{\frac{1}{\alpha + \textsc {id}}}) \} $$

even when there is more than one challenge ciphertext; the second observation is that it is adequate to cope with more than one target identity by setting \(n = \lceil \log q_\sigma \rceil \) where \(q_\sigma \) is the total number of revealed keys and challenge ciphertexts. The restriction is imposed to avoid the following situation: after reaching \(f_{2^n}\), all l challenge ciphertexts for target identity \(\textsc {id}^*\) will be of the form

$$ \boxed {S_i},\ \mathsf {H}\big (e(S_i,u^{\frac{1}{\alpha + \textsc {id}^*}}) \cdot \boxed { e(g_2,g_2)^{s_i \cdot f_{2^n}(\textsc {id}^*)} }\big ) \cdot \textsc {m}_i \qquad \text {for } i \in [l] $$

where the boxed terms have their own randomness \(S_1,\ldots ,S_l\) but share the same \(f_{2^n}(\textsc {id}^*)\).

To remove this restriction and achieve the full adaptive security of [HKS15], we employ a subgroup variant of the decisional bilinear Diffie-Hellman (DBDH) assumption (in subgroup \(\mathbb {G}_{p_2}\)). This allows us to utilize the randomness \(S_1,\ldots ,S_l\) and argue that the joint distribution of all boxed terms sharing \(f_{2^n}(\textsc {id}^*)\) is pseudorandom. Our proof idea is almost the same as that of [HKS15], but our assumption is slightly simpler.

On the Multi-instance Setting. Hofheinz et al. [HKS15] also investigated the so-called multi-instance setting where adversary \(\mathcal {A}\) is allowed to attack multiple IBE instances at the same time. Fortunately, our technique and result in the single-instance setting (see above) can be extended to the multi-instance setting with a tiny adjustment. The high-level idea is to apply our proof technique (for the single-instance setting) to each instance in an independent but concurrent manner.

Assume there are \(\tau \) instances. For the \(\iota \)-th (\(1 \le \iota \le \tau \)) instance, we define a series of functions \(f^{(\iota )}_{2^0},\ldots ,f^{(\iota )}_{2^n}\) as in the single-instance setting, which are independent of those for other instances. Here we let \(n = \lceil \log \hat{q}_\sigma \rceil \), where \(\hat{q}_\sigma \) is an upper bound on the total number of revealed secret keys and challenge ciphertexts per instance. We depict the process in the graph below. In the ith step, we create \(\tau \) functions \(f^{(1)}_{2^i},\ldots ,f^{(\tau )}_{2^i}\) at a time using the random self-reducibility of the decisional subgroup assumption.

$$ \begin{array}{rccccccccc} \text {1st instance:} &{}\quad f^{(1)}_{2^0} &{} &{} f^{(1)}_{2^1} &{} &{} f^{(1)}_{2^2} &{} &{} &{} &{} f^{(1)}_{2^n} \\ \text {2nd instance:} &{}\quad f^{(2)}_{2^0} &{} \longrightarrow &{} f^{(2)}_{2^1} &{} \longrightarrow &{} f^{(2)}_{2^2} &{} \longrightarrow &{} \cdots &{} \longrightarrow &{} f^{(2)}_{2^n} \\ \vdots \\ \tau \text {th instance:} &{}\quad f^{(\tau )}_{2^0} &{} &{} f^{(\tau )}_{2^1} &{} &{} f^{(\tau )}_{2^2} &{} &{} &{} &{} f^{(\tau )}_{2^n} \\ &{} &{} &{} \text {1st step} &{} &{} \text {2nd step} &{} &{} &{} &{} n\text {th step} \end{array} $$

Then, utilizing the random self-reducibility of the subgroup variant of DBDH assumption, we can prove the full adaptive security in the multi-instance setting.

1.2 Related Work

The dual system methodology has been applied to the broader area of functional encryption [OT10, LOS+10]. In 2014, Wee [Wee14] and Attrapadung [Att14] independently gave generic constructions of a large class of functional encryptions with adaptive security, including attribute-based encryption, inner-product encryption, and even functional encryption for regular languages. They introduced the notion of predicate/pair encodings and employed the dual system methodology in the composite-order bilinear group. Their work has recently been extended to the prime-order setting in [AC16, Att16, CGW15].

Tight reductions with short public parameters have also been studied in the field of digital signatures. Very recently, Hofheinz developed the algebraic partitioning technique [Hof16b] and the adaptive partitioning technique [Hof16a] based on Chen and Wee’s result [CW13], which led to tightly secure signatures with constant-size verification keys and to public-key encryption secure against chosen-ciphertext attacks with similar features. However, it is not straightforward to apply their techniques to IBE.

The Déjà Q technique was proposed by Chase and Meiklejohn [CM14]. They showed that one can avoid the use of (a class of) q-type assumptions with the help of a composite-order bilinear group equipped with the decisional subgroup assumption, using the dual system methodology. Recently, Wee gave a petit IBE scheme and a broadcast encryption scheme [Wee16] with an extended Déjà Q technique. These results have been used to build non-zero inner-product encryptions [CLR16] and functional commitments for linear functions [LRY16] (which imply many other important primitives such as accumulators).

A recent work by Boyen and Li [BL16] established a generic framework from PRFs to signatures and IBE utilizing powerful tools from the lattice world. The reduction is tight up to a constant factor, and the security loss of the resulting scheme depends solely on that of the underlying PRF. We remark that all tightly secure IBE schemes they showed still require a non-constant-size master public key.

Independent Work. An independent work by Chase, Maller and Meiklejohn [CMM16] developed the basic Déjà Q technique [CM14] in a way similar to ours. We focus on solving or making progress on the two open problems left by Chen and Wee [CW13] in a specific area (i.e., tightly secure IBE), while Chase et al. focus on a more general goal, i.e., tightly translating a broader class of q-type assumptions into static ones. Although they described four functional encryptions including an IBE scheme, its master public key consists of \({\mathcal {O}}(n)\) group elements for identity space \(\{0,1\}^n\). As a matter of fact, neither Wee’s IBE nor ours can be derived from an IBE under a q-type assumption using Chase et al.’s new framework [CMM16]. Therefore we believe it is still necessary to propose and analyze our IBE directly.

Open Problem. Our proposed IBE scheme works in a composite-order bilinear group, which can be a drawback. We leave it as an open problem to find a prime-order IBE with a tight(er) reduction, constant-size master public key, secret keys, and ciphertexts.

Organization. The paper is organized as follows. Section 2 reviews several basic notions, the decisional subgroup assumption, and a core lemma given by Wee [Wee16]. Section 3 describes our IBE scheme and proves its weak adaptive security in the single-instance, multi-ciphertext setting. We then extend this basic result to full adaptive security and to the multi-instance setting in Sects. 4 and 5, respectively.

2 Preliminaries

Notation. Let S be a finite set. The notation \(s \leftarrow S\) means that we pick s from S at random. “p.p.t.” is the abbreviation of “probabilistic polynomial time”.

2.1 Composite-Order Bilinear Groups

Our IBE scheme is constructed in composite-order bilinear groups [BGN05]. We assume a group generator \({\mathsf {GrpGen}}\) which takes as input the security parameter \(1^\lambda \) and outputs a group description \(\mathcal {G}= (N,\mathbb {G},\mathbb {G}_T,e)\), where the order N is a product of 4 distinct \({{\Theta }}(\lambda )\)-bit primes, \(\mathbb {G}\) and \(\mathbb {G}_T\) are finite cyclic groups of order N, and e is an efficient, non-degenerate bilinear map from \(\mathbb {G}\times \mathbb {G}\) to \(\mathbb {G}_T\). With \(N = p_1 p_2 p_3 p_4\) for primes \(p_1, p_2, p_3, p_4\), we let \(\mathbb {G}_{p_i}\) be the subgroup of order \(p_i\) in \(\mathbb {G}\) and use \(\mathbb {G}^*_{p_i}\) to refer to the set of all generators of \(\mathbb {G}_{p_i}\), i.e., \(\mathbb {G}_{p_i}{\setminus }\{1\}\).

We review several concrete instantiations of the decisional subgroup assumption [BWY11]. Since we can uniquely decompose \(\mathbb {G}= \mathbb {G}_{p_1} \times \mathbb {G}_{p_2} \times \mathbb {G}_{p_3} \times \mathbb {G}_{p_4}\), we employ a special notation for sampling random elements from a composite-order subgroup of \(\mathbb {G}\). For any two prime factors \(p_i, p_j\) of N with \(1 \le i < j \le 4\), we use \(X_i X_j \leftarrow \mathbb {G}_{p_i}\mathbb {G}_{p_j}\) to indicate that we uniformly sample an element from the subgroup of order \(p_ip_j\), whose respective components in \(\mathbb {G}_{p_i}\), \(\mathbb {G}_{p_j}\) are \(X_i\), \(X_j\). The notation extends to more general cases in the obvious way.
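The decomposition and the sampling notation can be pictured in a toy exponent model (our own illustration; it is not a secure bilinear group): identify the cyclic group of order N with \({\mathbb {Z}}_N\) via discrete logarithms, so that \(\mathbb {G}_{p_i}\) corresponds to the multiples of \(N/p_i\) and the pairing acts as multiplication of exponents modulo N.

```python
from random import randrange

# Toy exponent model of a group of order N = p1*p2*p3*p4: elements are written
# as discrete logs w.r.t. a fixed generator, so the order-p_i subgroup is the
# set of multiples of N/p_i and a symmetric pairing acts as e(x, y) = x*y mod N.
# Illustrative only; the toy primes and helper names below are ours.
p1, p2, p3, p4 = 1009, 1013, 1019, 1021
N = p1 * p2 * p3 * p4

def sample(*primes):
    """X <- uniform over the subgroup whose order is the product of `primes`."""
    return sum((N // pr) * randrange(1, pr) for pr in primes) % N

def e(x, y):                           # pairing in the exponent representation
    return (x * y) % N                 # exponent 0 plays the role of 1 in G_T

X1X2X3 = sample(p1, p2, p3)            # the term X_1 X_2 X_3 appearing in SD2-SD4
g4 = sample(p4)
print(e(X1X2X3, g4) == 0)              # True: distinct-prime subgroups pair to the identity
print(e(sample(p2), sample(p2)) != 0)  # True: G_{p2} pairs non-trivially with itself
```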

Assumption 1

(SD1). For any p.p.t. adversary \(\mathcal {A}\) the following advantage function is negligible in \(\lambda \).

$$ {\mathsf {Adv}}^{\text {SD1}}_{\mathcal {A}}(\lambda ) = |\Pr [\mathcal {A}(\mathcal {G},g_1,g_4,T_0) = 1] - \Pr [\mathcal {A}(\mathcal {G},g_1,g_4,T_1) = 1]|, $$

where \(\mathcal {G}\leftarrow {\mathsf {GrpGen}}(1^\lambda )\), \(g_1 \leftarrow \mathbb {G}^*_{p_1}\), \(g_4 \leftarrow \mathbb {G}^*_{p_4}\),

$$ T_0 \leftarrow \mathbb {G}_{p_1}\quad \text { and } \quad T_1 \leftarrow \mathbb {G}_{p_1}\mathbb {G}_{p_2}\mathbb {G}_{p_3}. $$

Assumption 2

(SD2). For any p.p.t. adversary \(\mathcal {A}\) the following advantage function is negligible in \(\lambda \).

$$ {\mathsf {Adv}}^{\text {SD2}}_{\mathcal {A}}(\lambda ) = |\Pr [\mathcal {A}(\mathcal {G},g_1,g_4,X_1X_2X_3,T_0) = 1] - \Pr [\mathcal {A}(\mathcal {G},g_1,g_4,X_1X_2X_3,T_1) = 1]|, $$

where \(\mathcal {G}\leftarrow {\mathsf {GrpGen}}(1^\lambda )\), \(g_1 \leftarrow \mathbb {G}^*_{p_1}\), \(g_4 \leftarrow \mathbb {G}^*_{p_4}\), \(X_1X_2X_3 \leftarrow \mathbb {G}_{p_1}\mathbb {G}_{p_2}\mathbb {G}_{p_3}\),

$$ T_0 \leftarrow \mathbb {G}_{p_1} \quad \text { and } \quad T_1 \leftarrow \mathbb {G}_{p_1} \mathbb {G}_{p_2}. $$

Assumption 3

(SD3). For any p.p.t. adversary \(\mathcal {A}\) the following advantage function is negligible in \(\lambda \).

$$ {\mathsf {Adv}}^{\text {SD3}}_{\mathcal {A}}(\lambda ) = |\Pr [\mathcal {A}(\mathcal {G},g_1,g_4,X_1X_2X_3,T_0) = 1] - \Pr [\mathcal {A}(\mathcal {G},g_1,g_4,X_1X_2X_3,T_1) = 1]|, $$

where \(\mathcal {G}\leftarrow {\mathsf {GrpGen}}(1^\lambda )\), \(g_1 \leftarrow \mathbb {G}^*_{p_1}\), \(g_4 \leftarrow \mathbb {G}^*_{p_4}\), \(X_1X_2X_3 \leftarrow \mathbb {G}_{p_1}\mathbb {G}_{p_2}\mathbb {G}_{p_3}\),

$$ T_0 \leftarrow \mathbb {G}_{p_2} \quad \text { and } \quad T_1 \leftarrow \mathbb {G}_{p_2} \mathbb {G}_{p_3}. $$

Assumption 4

(SD4). For any p.p.t. adversary \(\mathcal {A}\) the following advantage function is negligible in \(\lambda \).

$$ \begin{array}{rcl} {\mathsf {Adv}}^{\text {SD4}}_{\mathcal {A}}(\lambda ) &{} = &{} |\Pr [\mathcal {A}(\mathcal {G}, g_1, g_4, X_1X_2X_3, Y_2Y_4, T_0) = 1] \\ &{} &{} -\ \Pr [\mathcal {A}(\mathcal {G}, g_1, g_4, X_1X_2X_3, Y_2Y_4, T_1) = 1]|,\\ \end{array} $$

where \(\mathcal {G}\leftarrow {\mathsf {GrpGen}}(1^\lambda )\), \(g_1 \leftarrow \mathbb {G}^*_{p_1}\), \(g_4 \leftarrow \mathbb {G}^*_{p_4}\), \(X_1X_2X_3 \leftarrow \mathbb {G}_{p_1}\mathbb {G}_{p_2}\mathbb {G}_{p_3}\), \(Y_2Y_4 \leftarrow \mathbb {G}_{p_2} \mathbb {G}_{p_4}\),

$$ T_0 \leftarrow \mathbb {G}_{p_2} \mathbb {G}_{p_4} \quad \text { and } \quad T_1 \leftarrow \mathbb {G}_{p_3} \mathbb {G}_{p_4}. $$

2.2 Identity Based Encryptions

In this paper we define the notion of identity-based encryption (IBE) in the framework of a key encapsulation mechanism (KEM).

Algorithms. An IBE (in the single-instance setting) is composed of the following four p.p.t. algorithms:

  • \({{\mathsf {Setup}}}(1^\lambda ) \rightarrow (\textsc {mpk},\textsc {msk})\). The setup algorithm \({{\mathsf {Setup}}}\) takes as input the security parameter \(1^\lambda \) and outputs master public/secret key pair \((\textsc {mpk},\textsc {msk})\). We assume that \(\textsc {mpk}\) includes ciphertext space \(\mathcal {C}\) and key space \(\mathcal {K}\).

  • \({\mathsf {KeyGen}}(\textsc {mpk},\textsc {msk},\textsc {id}) \rightarrow \textsc {sk}\). The key generation algorithm \({\mathsf {KeyGen}}\) takes as input the master public key \(\textsc {mpk}\), the master secret key \(\textsc {msk}\) and an identity \(\textsc {id}\) and outputs its secret key \(\textsc {sk}\).

  • \({\mathsf {Enc}}(\textsc {mpk},\textsc {id}) \rightarrow (\textsc {ct},{\textsc {key}})\). The encryption algorithm \({\mathsf {Enc}}\) takes as input the master public key \(\textsc {mpk}\) and an identity \(\textsc {id}\) and outputs a ciphertext \(\textsc {ct} \in \mathcal {C}\) along with key \({\textsc {key}}\in \mathcal {K}\).

  • \({{\mathsf {Dec}}}(\textsc {mpk},\textsc {ct},\textsc {sk}) \rightarrow {\textsc {key}}\). The decryption algorithm \({{\mathsf {Dec}}}\) takes as input the master public key \(\textsc {mpk}\), a ciphertext \(\textsc {ct}\) and a secret key \(\textsc {sk}\) and outputs key \({\textsc {key}}\) or \(\bot \).

Correctness. For any \(\lambda \in {\mathbb {N}}\), \((\textsc {mpk},\textsc {msk}) \in [{{\mathsf {Setup}}}(1^\lambda )]\), identity \(\textsc {id}\), we require

$$ \Pr \left[ {{\mathsf {Dec}}}(\textsc {mpk},\textsc {ct},\textsc {sk}) = {\textsc {key}}\left| \begin{array}{c} \textsc {sk} \leftarrow {\mathsf {KeyGen}}(\textsc {mpk},\textsc {msk},\textsc {id}) \\ (\textsc {ct},{\textsc {key}}) \leftarrow {\mathsf {Enc}}(\textsc {mpk},\textsc {id}) \end{array} \right. \right] \ge 1 - 2^{-{{\Omega }}(\lambda )}. $$

The probability space is defined by random coins of \({\mathsf {KeyGen}}\) and \({\mathsf {Enc}}\).

Security notion. For any adversary \(\mathcal {A}\), we define the advantage function as

$$ {\mathsf {Adv}}^{\text {IBE}}_{\mathcal {A}}(\lambda ) = \left| \Pr \left[ \beta = \beta ' \left| \begin{array}{c} (\textsc {mpk},\textsc {msk}) \leftarrow {{\mathsf {Setup}}}(1^\lambda ),\ \beta \leftarrow \{0,1\} \\ \beta ' \leftarrow \mathcal {A}^{{\mathsf {O}}^{{\mathsf {KeyGen}}}(\cdot ),\mathsf {O}^{{\mathsf {Enc}}}_\beta (\cdot )}(1^\lambda ,\textsc {mpk}) \end{array} \right. \right] - \frac{1}{2} \right| $$

where oracles are defined as

  • \({\mathsf {O}}^{{\mathsf {KeyGen}}}\): On input \((\textsc {id})\), the oracle returns \({\mathsf {KeyGen}}(\textsc {mpk},\textsc {msk},\textsc {id})\) and sets \(Q_K = Q_K \cup \{\textsc {id}\}\).

  • \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \): On input \((\textsc {id}^*)\), the oracle samples \((\textsc {ct}^*_1,{\textsc {key}}^*_1) \leftarrow {\mathsf {Enc}}(\textsc {mpk},\textsc {id}^*)\), \((\textsc {ct}^*_0,{\textsc {key}}^*_0) \leftarrow \mathcal {C} \times \mathcal {K}\) and returns \((\textsc {ct}^*_\beta ,{\textsc {key}}^*_\beta )\). It then sets \(Q_C = Q_C \cup \{\textsc {id}^*\}\).

The probability is defined over the random coins used by \({{\mathsf {Setup}}}\), the oracles \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) and \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), and the adversary \(\mathcal {A}\), as well as the random bit \(\beta \). We say an IBE scheme is adaptively secure and anonymous if and only if the above advantage function is negligible in \(\lambda \) for any p.p.t. adversary such that \(Q_C \cap Q_K = \emptyset \).
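The experiment behind this advantage function can be sketched as follows (a minimal sketch in our own notation; the ibe, adversary, and sample_random_ct_key interfaces are hypothetical placeholders for the algorithms of Sect. 2.2, not part of the paper):

```python
import secrets

def ind_experiment(ibe, adversary, sample_random_ct_key):
    """One run of the adaptive-security game defining Adv^IBE."""
    mpk, msk = ibe.setup()
    beta = secrets.randbits(1)
    Q_K, Q_C = set(), set()

    def o_keygen(identity):                 # O^KeyGen
        Q_K.add(identity)
        return ibe.keygen(mpk, msk, identity)

    def o_enc(identity):                    # O^Enc_beta
        Q_C.add(identity)
        real = ibe.enc(mpk, identity)       # (ct*_1, key*_1)
        rand = sample_random_ct_key()       # (ct*_0, key*_0) <- C x K
        return real if beta == 1 else rand

    beta_guess = adversary(mpk, o_keygen, o_enc)
    assert not (Q_C & Q_K), "a target identity must never be queried to O^KeyGen"
    return beta_guess == beta               # Adv = |Pr[return True] - 1/2|
```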

2.3 A Core Lemma

We review the lemma by Wee [Wee16] as follows.

Lemma 1

Fix a prime p. For any adversary \(\mathcal {A}\) making at most q queries, we have

$$ \left| \Pr \left[ \mathcal {A}^{{\mathsf {O}}^f(\cdot )}(1^q) = 1 \right] - \Pr \left[ \mathcal {A}^{{\mathsf {O}}^{\mathsf {RF}}(\cdot )}(1^q) = 1 \right] \right| \le \frac{q^2}{p} $$

where oracles are defined as

  • \({\mathsf {O}}^f\): The oracle is initialized by picking \(r_1,\ldots ,r_q,\alpha _1,\ldots ,\alpha _q \leftarrow {\mathbb {Z}}_p\). On input \(x \in {\mathbb {Z}}_p\), it outputs

    $$ \sum _{i = 1}^q \frac{r_i}{\alpha _i + x} \in {\mathbb {Z}}_p. $$

    All queries are answered using the same \(r_1,\ldots ,r_q,\alpha _1,\ldots ,\alpha _q\) picked at the very beginning.

  • \({\mathsf {O}}^{\mathsf {RF}}\): This oracle behaves as a truly random function \({\mathsf {RF}}: {\mathbb {Z}}_p \rightarrow {\mathbb {Z}}_p\). On input \(x \in {\mathbb {Z}}_p\), it returns \({\mathsf {RF}}(x)\) if it has been defined, otherwise it returns \(y \leftarrow {\mathbb {Z}}_p\) and defines \({\mathsf {RF}}(x) = y\).
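For concreteness, the two oracles can be sketched as follows (our own illustrative code with a toy prime; it restates the lemma’s setup and is not a proof):

```python
from random import randrange

p = 2**61 - 1                                    # large toy prime

class OracleF:
    """O^f: answers every query with sum_i r_i / (alpha_i + x) mod p,
    using the same r_1..r_q, alpha_1..alpha_q picked at initialization."""
    def __init__(self, q):
        self.pairs = [(randrange(p), randrange(p)) for _ in range(q)]
    def query(self, x):
        return sum(r * pow(alpha + x, -1, p) for alpha, r in self.pairs) % p

class OracleRF:
    """O^RF: a lazily sampled truly random function Z_p -> Z_p."""
    def __init__(self):
        self.table = {}
    def query(self, x):
        return self.table.setdefault(x, randrange(p))

q = 8
f, rf = OracleF(q), OracleRF()
# Lemma 1: any adversary making at most q queries distinguishes the two
# oracles with advantage at most q^2 / p.
print([f.query(x) for x in range(1, q + 1)])
print([rf.query(x) for x in range(1, q + 1)])
```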

3 Our IBE Scheme

This section describes our IBE scheme. At this stage, we prove its weak adaptive security and anonymity in the single-instance, multi-ciphertext setting, i.e., the adversary can access only one IBE instance and only one challenge ciphertext is allowed for each target identity.

3.1 Construction

Our IBE scheme is described as follows.

  • \({{\mathsf {Setup}}}(1^\lambda )\). Run \(\mathcal {G}= (N, \mathbb {G}, \mathbb {G}_T, e) \leftarrow {\mathsf {GrpGen}}(1^\lambda )\). Sample

    $$ \alpha \leftarrow {\mathbb {Z}}_N, \quad g_1 \leftarrow \mathbb {G}^*_{p_1},\quad u \leftarrow \mathbb {G}_{p_1}, \quad g_4 \leftarrow \mathbb {G}_{p_4}^*. $$

    Pick \(\mathsf {H}: \mathbb {G}_T\rightarrow \{0,1\}^\lambda \) from a pairwise independent hash family. Output

    $$ \textsc {mpk} = (g_1,\ g_1^\alpha ,\ e(g_1,u),\ \mathsf {H}) \quad \text { and } \quad \textsc {msk} = (\alpha ,u,g_4). $$
  • \({\mathsf {KeyGen}}(\textsc {mpk},\textsc {msk},\textsc {id})\). Sample \(R_4 \leftarrow \mathbb {G}_{p_4}\) and output

    $$ \textsc {sk} = u^{\frac{1}{\alpha + \textsc {id}}} \cdot R_4. $$
  • \({\mathsf {Enc}}(\textsc {mpk},\textsc {id})\). Sample \(s \leftarrow {\mathbb {Z}}_N\) and output

    $$ \textsc {ct} = g_1^{(\alpha + \textsc {id}) s} \quad \text { and } \quad {\textsc {key}}= \mathsf {H}(e(g_1,u)^s). $$
  • \({{\mathsf {Dec}}}(\textsc {mpk},\textsc {ct},\textsc {sk})\). Return

    $$ {\textsc {key}}= \mathsf {H}(e(\textsc {ct},\textsc {sk})). $$

Correctness. We have

$$ e(\textsc {ct},\textsc {sk}) = e(g_1^{(\alpha + \textsc {id}) s}, u^{\frac{1}{\alpha + \textsc {id}}} \cdot R_4) = e(g_1,u)^{(\alpha + \textsc {id}) s \cdot \frac{1}{\alpha + \textsc {id}}} = e(g_1,u)^s. $$

This immediately proves the correctness.
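The correctness computation can be checked in a toy exponent model (our own sketch; group elements are represented by their discrete logarithms, the toy primes are tiny, and SHA-256 stands in for the pairwise-independent hash, so this is in no way a secure instantiation):

```python
from hashlib import sha256
from random import randrange

# Toy exponent model: elements of G are their discrete logs w.r.t. a fixed
# generator of order N, so e(g^x, g^y) = e(g,g)^{xy} becomes x*y mod N.
p1, p2, p3, p4 = 1009, 1013, 1019, 1021        # toy primes; real ones are Theta(lambda)-bit
N = p1 * p2 * p3 * p4

def sample_subgroup(p):                        # exponent of a random element of G_p
    return (N // p) * randrange(p) % N

def H(x):                                      # stand-in for the pairwise-independent hash
    return sha256(str(x % N).encode()).hexdigest()

def setup():
    alpha, g1, u, g4 = randrange(N), sample_subgroup(p1), sample_subgroup(p1), sample_subgroup(p4)
    mpk = (g1, g1 * alpha % N, g1 * u % N)     # (g1, g1^alpha, e(g1,u))
    return mpk, (alpha, u, g4)

def keygen(msk, ID):
    alpha, u, g4 = msk
    R4 = sample_subgroup(p4)
    return (u * pow(alpha + ID, -1, N) + R4) % N     # sk = u^{1/(alpha+id)} * R4 (inverse exists w.h.p.)

def enc(mpk, ID):
    g1, g1a, e_g1_u = mpk
    s = randrange(N)
    return (g1a + g1 * ID) * s % N, H(e_g1_u * s)    # ct = g1^{(alpha+id)s}, key = H(e(g1,u)^s)

def dec(ct, sk):
    return H(ct * sk)                                # H(e(ct, sk))

mpk, msk = setup()
ct, key = enc(mpk, 42)
assert dec(ct, keygen(msk, 42)) == key               # the R4 component cancels in the pairing
```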

3.2 Security Analysis: An Overview

We prove the following theorem.

Theorem 1

For any p.p.t. adversary \(\mathcal {A}\) sending at most \(q_\sigma \) queries to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) and \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), there exist \(\mathcal {B}_1\), \(\mathcal {B}_2\), \(\mathcal {B}_3\), \(\mathcal {B}_4\) such that

$$\begin{aligned} {\mathsf {Adv}}^{\text {IBE}}_{\mathcal {A}}(\lambda )\le & {} \frac{5}{2} \cdot {\mathsf {Adv}}^{\text {SD1}}_{\mathcal {B}_1}(\lambda ) + 2 \cdot {\mathsf {Adv}}^{\text {SD2}}_{\mathcal {B}_2}(\lambda ) + 2 \cdot \lceil \log q_\sigma \rceil \cdot {\mathsf {Adv}}^{\text {SD3}}_{\mathcal {B}_3}(\lambda ) \\&\qquad \qquad \qquad \qquad \qquad +\ \Big ( 2 \cdot \lceil \log q_\sigma \rceil + \frac{1}{2} \Big ) \cdot {\mathsf {Adv}}^{\text {SD4}}_{\mathcal {B}_4}(\lambda ) + 2^{-{{\Omega }}(\lambda )} \end{aligned}$$

and \(\max \{\mathsf {T}(\mathcal {B}_1),\mathsf {T}(\mathcal {B}_2),\mathsf {T}(\mathcal {B}_3),\mathsf {T}(\mathcal {B}_4)\} \approx \mathsf {T}(\mathcal {A}) + q_\sigma ^2 \cdot {\mathsf {poly}}(\lambda )\).
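To see the \({\mathcal {O}}(\log q_\sigma )\) bound concretely, the following snippet (an illustrative calculation of our own; the helper name is hypothetical) simply sums the coefficients of the four advantage terms in Theorem 1:

```python
from math import ceil, log2

def security_loss(q_sigma: int) -> float:
    """Sum of the coefficients in Theorem 1: 5/2 + 2 + 2n + (2n + 1/2), with n = ceil(log q_sigma)."""
    n = ceil(log2(q_sigma))
    return 2.5 + 2 + 2 * n + (2 * n + 0.5)

print(security_loss(2**30))   # 125.0: the loss grows with log(q_sigma), not with q_sigma
```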

We prove the theorem using a hybrid argument. We define the advantage function of any p.p.t. adversary \(\mathcal {A}\) in \({{\mathsf {Game}}}_{xxx}\) as

$$ {\mathsf {Adv}}^{{{\mathsf {Game}}}_{xxx}}_{\mathcal {A}}(\lambda ) = |\Pr [\beta = \beta '] - 1/2| $$

Let \(n = \lceil \log q_\sigma \rceil \). Our proof employs the following game sequence.

  • \({{\mathsf {Game}}_{{\mathsf {real}}}}\) is the real game.

  • \({{{\mathsf {Game}}}_0}\) is the real game with the following assumptions:

  • \(\mathcal {A}\) cannot find \(\textsc {id},\textsc {id}' \in {\mathbb {Z}}_N\) such that \(\textsc {id} \ne \textsc {id}'\) but \(\textsc {id} = \textsc {id}' \mod p_2\);

  • \(\mathcal {A}\) cannot find \(\textsc {id} \in {\mathbb {Z}}_N\) such that \(\alpha + \textsc {id} = 0 \mod p_1\), even given \(\alpha \).

One may notice that \(\mathcal {A}\) can efficiently factorize the order N and break the general decisional subgroup assumption when it violates one of the above two assumptions. Technically, \({{{\mathsf {Game}}}_0}\) aborts immediately when \(\mathcal {A}\) submits \(\textsc {id} \in {\mathbb {Z}}_N\) (through \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) or \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \)) such that

  • \(\gcd (\textsc {id}-\textsc {id}',N) \notin \{1,N\}\) for some previous identity \(\textsc {id}' \in {\mathbb {Z}}_N\);

  • \(\gcd (\alpha + \textsc {id},N) \notin \{1,N\}\).
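This abort rule amounts to a gcd computation; the following toy sketch (our own, with small illustrative primes) shows how a violating identity immediately yields a non-trivial factor of N.

```python
from math import gcd

p1, p2, p3, p4 = 1009, 1013, 1019, 1021           # toy primes
N = p1 * p2 * p3 * p4

def abort_factor(new_id, seen_ids, alpha):
    """Return a non-trivial factor of N if Game_0 must abort on new_id, else None."""
    for c in [alpha + new_id] + [new_id - i for i in seen_ids]:
        d = gcd(c % N, N)
        if d not in (1, N):
            return d                               # non-trivial factorization of N
    return None

# Two distinct identities that collide modulo p2 trigger the first abort rule:
print(abort_factor(7 + p2, seen_ids=[7], alpha=3))  # prints 1013, i.e. p2
```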

Note that both \(N \in {\mathbb {Z}}\) and \(\alpha \in {\mathbb {Z}}_N\) are always available throughout our proof. We prove the following lemma.

Lemma 2

(from \({{{\mathsf {Game}}_{{\mathsf {real}}}}}\) to \({{{\mathsf {Game}}}_0}\) ). For any p.p.t. adversary \(\mathcal {A}\) sending at most \(q_\sigma \) queries to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) and \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), there exist \(\mathcal {B}_1\), \(\mathcal {B}_2\) such that \(\max \{\mathsf {T}(\mathcal {B}_1),\mathsf {T}(\mathcal {B}_2)\} \approx \mathsf {T}(\mathcal {A}) + q_\sigma \cdot {\mathsf {poly}}(\lambda )\) and

$$|{\mathsf {Adv}}^{{{\mathsf {Game}}}_0}_{\mathcal {A}}(\lambda ) - {\mathsf {Adv}}^{{{\mathsf {Game}}_{{\mathsf {real}}}}}_{\mathcal {A}}(\lambda ) | \le \frac{1}{2} \cdot {\mathsf {Adv}}^{\text {SD1}}_{\mathcal {B}_1}(\lambda ) + \frac{1}{2} \cdot {\mathsf {Adv}}^{\text {SD4}}_{\mathcal {B}_2}(\lambda ) + 2^{-{{\Omega }}(\lambda )}.$$
  • \({{{\mathsf {Game}}}'_0}\) is identical to \({{{\mathsf {Game}}}_0}\) except that, for each query \((\textsc {id}^*)\) to \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), we compute \({\textsc {key}}^*_1\) as

    $$ {\textsc {key}}^*_1 = \mathsf {H}(e(\textsc {ct}^*_1,\textsc {sk}_{\textsc {id}^*})) $$

    where \(\textsc {ct}^*_1\) is produced as before and \(\textsc {sk}_{\textsc {id}^*}\) is obtained via an \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) query \((\textsc {id}^*)\). From the correctness, we have that

    $${\mathsf {Adv}}^{{{\mathsf {Game}}}'_0}_{\mathcal {A}}(\lambda ) = {\mathsf {Adv}}^{{{\mathsf {Game}}}_0}_{\mathcal {A}}(\lambda )$$

    for any p.p.t. adversary \(\mathcal {A}\).

  • \({{{\mathsf {Game}}}''_0}\) is identical to \({{{\mathsf {Game}}}'_0}\) except that, for each query \((\textsc {id}^*)\) to \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), we compute \(\textsc {ct}^*_1\) as

    $$ g_1^s \qquad \text { instead of} \qquad g_1^{(\alpha + \textsc {id}^*) s} $$

    where \(s \leftarrow {\mathbb {Z}}_N\). We have

    $${\mathsf {Adv}}^{{{\mathsf {Game}}}''_0}_{\mathcal {A}}(\lambda ) = {\mathsf {Adv}}^{{{\mathsf {Game}}}'_0}_{\mathcal {A}}(\lambda )$$

    for any p.p.t. adversary \(\mathcal {A}\) since the two games are exactly the same unless \(\alpha + \textsc {id}^* = 0 \bmod p_1\) for some query \((\textsc {id}^*)\). We emphasize that it holds even for the multiple challenge setting since s is freshly picked for each query.

  • \({{{\mathsf {Game}}}_1}\) is identical to \({{{\mathsf {Game}}}''_0}\) except that, for each query \((\textsc {id}^*)\) to \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), we compute \(\textsc {ct}^*_1\) as

    $$ (g_1 g_2 g_3)^s \qquad \text { instead of} \qquad g_1^s $$

    where \(s \leftarrow {\mathbb {Z}}_N\), \(g_2 \leftarrow \mathbb {G}^*_{p_2}\) and \(g_3 \leftarrow \mathbb {G}^*_{p_3}\). We prove the following lemma.

Lemma 3

(from \({{{\mathsf {Game}}}''_0}\) to \({{{\mathsf {Game}}}_1}\) ). For any p.p.t. adversary \(\mathcal {A}\) sending at most \(q_\sigma \) queries to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) and \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), there exists \(\mathcal {B}\) with \(\mathsf {T}(\mathcal {B}) \approx \mathsf {T}(\mathcal {A}) + q_\sigma \cdot {\mathsf {poly}}(\lambda )\) and

$$|{\mathsf {Adv}}^{{{\mathsf {Game}}}_1}_{\mathcal {A}}(\lambda ) - {\mathsf {Adv}}^{{{\mathsf {Game}}}''_0}_{\mathcal {A}}(\lambda ) | \le {\mathsf {Adv}}^{\text {SD1}}_{\mathcal {B}}(\lambda ) + 2^{-{{\Omega }}(\lambda )}.$$
  • \({{{\mathsf {Game}}}_{2.i}}\) (\(0 \le i \le n\), \(n = \lceil \log q_\sigma \rceil \)) is identical to \({{{\mathsf {Game}}}_1}\) except that, for each query \((\textsc {id})\) to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) (including those involved in \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \)), we return

    $$ u^{\frac{1}{\alpha + \textsc {id}}} \cdot \boxed { g_2^{\sum _{j=1}^{2^i}\frac{r_j}{\alpha _j + \textsc {id}}} } \cdot R_4 $$

    where \(g_2 \leftarrow \mathbb {G}^*_{p_2}\) and \(\alpha _j,r_j \leftarrow {\mathbb {Z}}_N\) for all \(j \in [2^i]\). We must prove the following lemma first.

Lemma 4

(from \({{{\mathsf {Game}}}_1}\) to \({{{\mathsf {Game}}}_{2.0}}\) ). For any p.p.t. adversary \(\mathcal {A}\) sending at most \(q_\sigma \) queries to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) and \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), there exists \(\mathcal {B}\) with \(\mathsf {T}(\mathcal {B}) \approx \mathsf {T}(\mathcal {A}) + q_\sigma \cdot {\mathsf {poly}}(\lambda )\) and

$$|{\mathsf {Adv}}^{{{\mathsf {Game}}}_{2.0}}_{\mathcal {A}}(\lambda ) - {\mathsf {Adv}}^{{{\mathsf {Game}}}_1}_{\mathcal {A}}(\lambda ) | \le {\mathsf {Adv}}^{\text {SD2}}_{\mathcal {B}}(\lambda ) + 2^{-{{\Omega }}(\lambda )}.$$

To move from \({{{\mathsf {Game}}}_{2.i}}\) to \({{{\mathsf {Game}}}_{2.(i+1)}}\), we need two additional games:

  • \({{\mathsf {Game}}}_{2.i.1}\) is identical to \({{\mathsf {Game}}}_{2.i}\) except that, for each query \((\textsc {id})\) to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\), we return

    $$ u^{\frac{1}{\alpha + \textsc {id}}} \cdot g_2^{\sum _{j=1}^{2^i}\frac{r_j}{\alpha _j + \textsc {id}}} \cdot \boxed { g_3^{\sum _{j=1}^{2^i}\frac{\widehat{r}_j}{\widehat{\alpha }_j + \textsc {id}}}} \cdot R_4 $$

    where \(g_3 \leftarrow \mathbb {G}^*_{p_3}\) and \(\alpha _j,r_j,\widehat{\alpha }_j, \widehat{r}_j \leftarrow {\mathbb {Z}}_N\) for all \(j \in [2^i]\).

  • \({{\mathsf {Game}}}_{2.i.2}\) is identical to \({{{\mathsf {Game}}}_{2.i}}\) except that, for each query \((\textsc {id})\) to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\), we return

    $$ u^{\frac{1}{\alpha + \textsc {id}}} \cdot \boxed { g_2^{\sum _{j=1}^{2^i}\frac{r_j}{\alpha _j + \textsc {id}}} \cdot g_2^{\sum _{j=1}^{2^i}\frac{\widehat{r}_j}{\widehat{\alpha }_j + \textsc {id}}} } \cdot R_4 $$

    where \(\alpha _j,r_j,\widehat{\alpha }_j, \widehat{r}_j \leftarrow {\mathbb {Z}}_N\) for all \(j \in [2^i]\).

We prove the following two lemmas.

Lemma 5

(from \({{\mathsf {Game}}}_{2.i}\) to \({{\mathsf {Game}}}_{2.i.1}\) ). For any p.p.t. adversary \(\mathcal {A}\) sending at most \(q_\sigma \) queries to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) and \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), there exists \(\mathcal {B}\) with \(\mathsf {T}(\mathcal {B}) \approx \mathsf {T}(\mathcal {A})\, +\, q_\sigma ^2 \cdot {\mathsf {poly}}(\lambda )\) and

$$ |{\mathsf {Adv}}^{{{\mathsf {Game}}}_{2.i.1}}_{\mathcal {A}}(\lambda ) - {\mathsf {Adv}}^{{{\mathsf {Game}}}_{2.i}}_{\mathcal {A}}(\lambda ) | \le {\mathsf {Adv}}^{\text {SD3}}_{\mathcal {B}}(\lambda ) + 2^{-{{\Omega }}(\lambda )}. $$

Lemma 6

(from \({{\mathsf {Game}}}_{2.i.1}\) to \({{\mathsf {Game}}}_{2.i.2}\) ). For any p.p.t. adversary \(\mathcal {A}\) sending at most \(q_\sigma \) queries to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) and \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), there exists \(\mathcal {B}\) with \(\mathsf {T}(\mathcal {B}) \approx \mathsf {T}(\mathcal {A}) + q_\sigma ^2 \cdot {\mathsf {poly}}(\lambda )\) and

$$ |{\mathsf {Adv}}^{{{\mathsf {Game}}}_{2.i.2}}_{\mathcal {A}}(\lambda ) - {\mathsf {Adv}}^{{{\mathsf {Game}}}_{2.i.1}}_{\mathcal {A}}(\lambda ) | \le {\mathsf {Adv}}^{\text {SD4}}_{\mathcal {B}}(\lambda ) + 2^{-{{\Omega }}(\lambda )}. $$

Observe that all \(r_j\) and all \(\widehat{r}_j\) are i.i.d. variables in \({{\mathsf {Game}}}_{2.i.2}\). By setting \(\alpha _{2^i + k} = \widehat{\alpha }_k\) and \(r_{2^i + k} = \widehat{r}_k\) for all \(k \in [2^i]\), one can claim that

$$ {\mathsf {Adv}}^{{{\mathsf {Game}}}_{2.i.2}}_{\mathcal {A}}(\lambda ) = {\mathsf {Adv}}^{{{\mathsf {Game}}}_{2.(i+1)}}_{\mathcal {A}}(\lambda ) $$

for any adversary \(\mathcal {A}\).

  • \({{\mathsf {Game}}}_3\) is identical to \({{\mathsf {Game}}}_{2.n}\) except that, for each query \((\textsc {id})\) to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\), we return

    $$ u^{\frac{1}{\alpha + \textsc {id}}} \cdot g_2^{{\mathsf {RF}}(\textsc {id})} \cdot R_4 $$

    where \(g_2 \leftarrow \mathbb {G}^*_{p_2}\) and \({\mathsf {RF}}\) is a truly random function. By the core lemma shown in Sect. 2.3, we have

    $$ |{\mathsf {Adv}}^{{{\mathsf {Game}}}_{2.n}}_{\mathcal {A}}(\lambda ) - {\mathsf {Adv}}^{{{\mathsf {Game}}}_3}_{\mathcal {A}}(\lambda )| \le 2^{-{{\Omega }}(\lambda )} $$

    for any adversary \(\mathcal {A}\).

  • \({{\mathsf {Game}}}_4\) is identical to \({{\mathsf {Game}}}_3\) except that, for each query \((\textsc {id}^*)\) to \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), we directly sample \({\textsc {key}}^*_1 \leftarrow \{0,1\}^\lambda \). In \({{\mathsf {Game}}}_3\), we compute a challenge for \(\textsc {id}^*\) as follows:

    $$ \textsc {ct}^*_1 = (g_1g_2g_3)^s \quad \text { and } \quad {\textsc {key}}^*_1 = \mathsf {H}(e(g_1^s,u^{\frac{1}{\alpha + \textsc {id}^*}}) \cdot \boxed {e(g_2,g_2)^{s \cdot {\mathsf {RF}}(\textsc {id}^*)}}). $$

    Due to the restrictions in the security game, \({\mathsf {RF}}(\textsc {id}^*)\) is evaluated only in this place, and the boxed term has min-entropy \(\log p_2 = {{\Theta }}(\lambda )\), which means we can instead sample \({\textsc {key}}^*_1 \leftarrow \{0,1\}^\lambda \) with only a small error. This follows from the leftover hash lemma and the fact that a pairwise-independent hash family is a strong extractor. Formally we have

    $$ |{\mathsf {Adv}}^{{{\mathsf {Game}}}_{4}}_{\mathcal {A}}(\lambda ) - {\mathsf {Adv}}^{{{\mathsf {Game}}}_3}_{\mathcal {A}}(\lambda )| \le 2^{-{{\Omega }}(\lambda )} $$

    for any adversary \(\mathcal {A}\).

Utilizing, in a reversed manner, a game sequence which is identical to the above except that we always sample \({\textsc {key}}^*_1 \leftarrow \{0,1\}^\lambda \) when answering queries to \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), we may reach a game where we create

$$ \boxed { \textsc {ct}^*_1 \leftarrow \mathbb {G}_{p_1} } \quad \text { and }\quad {\textsc {key}}^*_1 \leftarrow \{0,1\}^\lambda \quad \text { for all } \textsc {id}^*. $$

This means we can answer all queries to \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \) without \(\beta \) and this readily proves the main theorem.

3.3 Security Analysis: Proving All Lemmas

This subsection provides all omitted proofs.

Proof

(a sketch). Let \({\mathsf {Abort}}_\mathcal {A}\) be the event that \({{\mathsf {Game}}}_0\) aborts with adversary \(\mathcal {A}\). We have

$$ |{\mathsf {Adv}}^{{{\mathsf {Game}}}_0}_{\mathcal {A}}(\lambda ) - {\mathsf {Adv}}^{{{\mathsf {Game}}_{{\mathsf {real}}}}}_{\mathcal {A}}(\lambda ) | \le \Pr [{\mathsf {Abort}}_\mathcal {A}]. $$

As we have discussed, when \({\mathsf {Abort}}_\mathcal {A}\) occurs, one can obtain a non-trivial factorization of N. That is, we can efficiently compute \(N_1,N_2 \in {\mathbb {Z}}\) such that \(N = N_1 N_2\) and \(1< N_1,N_2 <N\). Let us consider the following three cases:

  1. If \(p_4 | N_1\) and \(p_2 \not \mid N_1\), given \((\mathcal {G}, g_1, g_4, X_1X_2X_3, Y_2Y_4, T)\) where either \(T \leftarrow \mathbb {G}_{p_2} \mathbb {G}_{p_4}\) or \(T \leftarrow \mathbb {G}_{p_3} \mathbb {G}_{p_4}\), we observe that \((Y_2Y_4)^{N_1} \in \mathbb {G}_{p_2}\). This allows us to break the SD4 assumption by checking whether \(e((Y_2Y_4)^{N_1},T) = 1\).

  2. If \(p_2p_4 | N_1\) and \(p_3 \not \mid N_1\), given \((\mathcal {G}, g_1, g_4, X_1X_2X_3, Y_2Y_4, T)\) where either \(T \leftarrow \mathbb {G}_{p_2} \mathbb {G}_{p_4}\) or \(T \leftarrow \mathbb {G}_{p_3} \mathbb {G}_{p_4}\), we can break the SD4 assumption by checking whether \(T^{N_1} = 1\).

  3. If \(p_2p_3p_4 | N_1\), it must be the case that \(N_2 = p_1\). Given \((\mathcal {G}, g_1, g_4, T)\) where either \(T \leftarrow \mathbb {G}_{p_1}\) or \(T \leftarrow \mathbb {G}_{p_1} \mathbb {G}_{p_2} \mathbb {G}_{p_3}\), we can break the SD1 assumption by checking whether \(T^{N_2} = 1\).

In all three cases, we have access to \((\mathcal {G},g_1,g_4)\) which is sufficient for simulating \({{\mathsf {Game}}}_0\) for \(\mathcal {A}\). Therefore we can claim that there exist \(\mathcal {B}_1\), \(\mathcal {B}_2\) such that \(\max \{\mathsf {T}(\mathcal {B}_1),\mathsf {T}(\mathcal {B}_2)\} \approx \mathsf {T}(\mathcal {A}) + q_\sigma \cdot {\mathsf {poly}}(\lambda )\) and

$$ \Pr [{\mathsf {Abort}}_\mathcal {A}] \le \frac{1}{2} \cdot {\mathsf {Adv}}^{\text {SD1}}_{\mathcal {B}_1}(\lambda ) + \frac{1}{2} \cdot {\mathsf {Adv}}^{\text {SD4}}_{\mathcal {B}_2}(\lambda ) + 2^{-{{\Omega }}(\lambda )}. $$

This proves the lemma.    \(\square \)

Proof

Given \((\mathcal {G}, g_1, g_4, T)\) where either \(T \leftarrow \mathbb {G}_{p_1}\) or \(T \leftarrow \mathbb {G}_{p_1} \mathbb {G}_{p_2} \mathbb {G}_{p_3}\), algorithm \(\mathcal {B}\) works as follows:

  • Initialization. Pick \(\alpha \leftarrow {\mathbb {Z}}_N\) and \(u \leftarrow \mathbb {G}_{p_1}\). Select hash function \(\mathsf {H}\). Output

    $$ \textsc {mpk} = (g_1,\ g_1^\alpha ,\ e(g_1,u),\ \mathsf {H}) $$

    and store \(\textsc {msk} = (\alpha ,\ u,\ g_4)\).

  • Answering \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) . On input \((\textsc {id})\), return \({\mathsf {KeyGen}}(\textsc {mpk},\textsc {msk},\textsc {id})\) directly.

  • Answering \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \) . On input \((\textsc {id}^*)\), obtain \(\textsc {sk}_{\textsc {id}^*}\) via a query \((\textsc {id}^*)\) to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\). Sample \(s' \leftarrow {\mathbb {Z}}_N\) and compute

    $$ \textsc {ct}^*_1 = T^{s'} \quad \text { and } \quad {\textsc {key}}^*_1 = \mathsf {H}(e(T^{s'},\textsc {sk}_{\textsc {id}^*})). $$

    \(\mathcal {B}\) then picks \((\textsc {ct}^*_0,{\textsc {key}}^*_0) \leftarrow \mathbb {G}_{p_1} \times \{0,1\}^\lambda \) and returns \((\textsc {ct}^*_\beta ,{\textsc {key}}^*_\beta )\).

  • Finalize. \(\mathcal {B}\) returns 1 if \(\beta = \beta '\) and returns 0 in the other case.

When \(T \leftarrow \mathbb {G}_{p_1}\), the simulation is identical to \({{\mathsf {Game}}}''_0\); when \(T \leftarrow \mathbb {G}_{p_1} \mathbb {G}_{p_2} \mathbb {G}_{p_3}\), the simulation is identical to \({{\mathsf {Game}}}_1\). The additive probability error \(2^{-{{\Omega }}(\lambda )}\) is caused by trivial subgroup components in T: because we implicitly use T as a generator, our simulation deviates from one or both games if any subgroup component of T is trivial.   \(\square \)

Proof

Given \((\mathcal {G}, g_1, g_4, X_1X_2X_3, T)\) where either \(T = u \leftarrow \mathbb {G}_{p_1}\) or \(T = u g_2^r \leftarrow \mathbb {G}_{p_1} \mathbb {G}_{p_2}\) for \(g_2 \leftarrow \mathbb {G}^*_{p_2}\) and \(r \leftarrow {\mathbb {Z}}_N\), algorithm \(\mathcal {B}\) works as follows:

  • Initialization. Pick \(\alpha \leftarrow {\mathbb {Z}}_N\) and select hash function \(\mathsf {H}\). Output

    $$ \textsc {mpk} = (g_1,\ g_1^\alpha ,\ e(g_1,T),\ \mathsf {H}). $$

    Observe that \(e(g_1,T) = e(g_1,u)\) in both cases.

  • Answering \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) . On input \((\textsc {id})\), sample \(R_4 \leftarrow \mathbb {G}_{p_4}\) and return

    $$ T^{\frac{1}{\alpha + \textsc {id}}} \cdot R_4. $$
  • Answering \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \) . On input \((\textsc {id}^*)\), sample \(s' \leftarrow {\mathbb {Z}}_N\) and compute

    $$ \textsc {ct}^*_1 = (X_1X_2X_3)^{s'} \quad \text { and } \quad {\textsc {key}}^*_1 = \mathsf {H}(e((X_1X_2X_3)^{s'},\textsc {sk}_{\textsc {id}^*})) $$

    where \(\textsc {sk}_{\textsc {id}^*}\) is obtained via oracle \({\mathsf {O}}^{{\mathsf {KeyGen}}}\). \(\mathcal {B}\) then picks \((\textsc {ct}^*_0,{\textsc {key}}^*_0) \leftarrow \mathbb {G}_{p_1} \times \{0,1\}^\lambda \) and returns \((\textsc {ct}^*_\beta ,{\textsc {key}}^*_\beta )\).

  • Finalize. \(\mathcal {B}\) returns 1 if \(\beta = \beta '\) and returns 0 in the other case.

When \(T = u\), the simulation is identical to \({{\mathsf {Game}}}_1\); when \(T = u g_2^r\), the simulation is identical to \({{\mathsf {Game}}}_{2.0}\) where \(\alpha _1 = \alpha \bmod p_2\) and \(r_1 = r \bmod p_2\). The additive probability error \(2^{-{{\Omega }}(\lambda )}\) is caused by trivial subgroup components in \(X_1X_2X_3\).    \(\square \)

Proof

Given \((\mathcal {G}, g_1, g_4, X_1X_2X_3, T)\) where either \(T = g_2 \leftarrow \mathbb {G}_{p_2}\) or \(T = g_2 g_3 \leftarrow \mathbb {G}_{p_2} \mathbb {G}_{p_3}\), algorithm \(\mathcal {B}\) works as follows:

  • Initialization. Pick \(\alpha \leftarrow {\mathbb {Z}}_N\) and \(u \leftarrow \mathbb {G}_{p_1}\). Select hash function \(\mathsf {H}\). Output

    $$ \textsc {mpk} = (g_1,\ g_1^\alpha ,\ e(g_1,u),\ \mathsf {H}). $$

    Sample \(\alpha '_1,\ldots ,\alpha '_{2^i},r'_1,\ldots ,r'_{2^i} \leftarrow {\mathbb {Z}}_N\).

  • Answering \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) . On input \((\textsc {id})\), sample \(R_4 \leftarrow \mathbb {G}_{p_4}\) and return

    $$ u^{\frac{1}{\alpha + \textsc {id}}} \cdot T^{\sum _{j=1}^{2^i}\frac{r'_j}{\alpha '_j + \textsc {id}}} \cdot R_4. $$
  • Answering \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \) . On input \((\textsc {id}^*)\), sample \(s' \leftarrow {\mathbb {Z}}_N\) and compute

    $$ \textsc {ct}^*_1 = (X_1X_2X_3)^{s'} \quad \text { and } \quad {\textsc {key}}^*_1 = \mathsf {H}(e((X_1X_2X_3)^{s'},\textsc {sk}_{\textsc {id}^*})) $$

    where \(\textsc {sk}_{\textsc {id}^*}\) is obtained via oracle \({\mathsf {O}}^{{\mathsf {KeyGen}}}\). \(\mathcal {B}\) then picks \((\textsc {ct}^*_0,{\textsc {key}}^*_0) \leftarrow \mathbb {G}_{p_1} \times \{0,1\}^\lambda \) and returns \((\textsc {ct}^*_\beta ,{\textsc {key}}^*_\beta )\).

  • Finalize. \(\mathcal {B}\) returns 1 if \(\beta = \beta '\) and returns 0 in the other case.

When \(T = g_2\), the simulation is identical to \({{\mathsf {Game}}}_{2.i}\); when \(T = g_2 g_3\), the simulation is identical to \({{\mathsf {Game}}}_{2.i.1}\). We set

$$ \alpha _j = \alpha '_j \bmod p_2,\ r_j = r'_j \bmod p_2, \quad \text {for all } j \in [2^i] $$

for both cases and set

$$ \widehat{\alpha }_j = \alpha '_j \bmod p_3,\ \widehat{r}_j = r'_j \bmod p_3, \quad \text {for all } j \in [2^i] $$

in the case of \(T = g_2 g_3\). The additive probability error \(2^{-{{\Omega }}(\lambda )}\) is caused by trivial subgroup components in \(X_1X_2X_3\) and T.    \(\square \)

Proof

Given \((\mathcal {G}, g_1, g_4, X_1X_2X_3, Y_2Y_4, T)\) where either \(T = g_2 R_4 \leftarrow \mathbb {G}_{p_2} \mathbb {G}_{p_4}\) or \(T = g_3 R_4 \leftarrow \mathbb {G}_{p_3} \mathbb {G}_{p_4}\), algorithm \(\mathcal {B}\) works as follows:

  • Initialization. Pick \(\alpha \leftarrow {\mathbb {Z}}_N\) and \(u \leftarrow \mathbb {G}_{p_1}\). Select hash function \(\mathsf {H}\). Output

    $$ \textsc {mpk} = (g_1,\ g_1^\alpha ,\ e(g_1,u),\ \mathsf {H}). $$

    Sample \(\alpha '_1,\ldots ,\alpha '_{2^i},r'_1,\ldots ,r'_{2^i},\widehat{\alpha }_1,\ldots ,\widehat{\alpha }_{2^i}, \widehat{r}_1,\ldots , \widehat{r}_{2^i} \leftarrow {\mathbb {Z}}_N\).

  • Answering \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) . On input \((\textsc {id})\), sample \(R'_4 \leftarrow \mathbb {G}_{p_4}\) and return

    $$ u^{\frac{1}{\alpha + \textsc {id}}} \cdot (Y_2Y_4)^{\sum _{j=1}^{2^i}\frac{r'_j}{\alpha '_j + \textsc {id}}} \cdot T^{\sum _{j=1}^{2^i}\frac{\widehat{r}_j}{\widehat{\alpha }_j + \textsc {id}}} \cdot R'_4. $$
  • Answering \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \) . On input \((\textsc {id}^*)\), sample \(s' \leftarrow {\mathbb {Z}}_N\) and compute

    $$ \textsc {ct}^*_1 = (X_1X_2X_3)^{s'} \quad \text { and } \quad {\textsc {key}}^*_1 = \mathsf {H}(e((X_1X_2X_3)^{s'},\textsc {sk}_{\textsc {id}^*})) $$

    where \(\textsc {sk}_{\textsc {id}^*}\) is obtained via oracle \({\mathsf {O}}^{{\mathsf {KeyGen}}}\). \(\mathcal {B}\) then picks \((\textsc {ct}^*_0,{\textsc {key}}^*_0) \leftarrow \mathbb {G}_{p_1} \times \{0,1\}^\lambda \) and returns \((\textsc {ct}^*_\beta ,{\textsc {key}}^*_\beta )\).

  • Finalize. \(\mathcal {B}\) returns 1 if \(\beta = \beta '\) and returns 0 in the other case.

Writing \(Y_2Y_4 = g_2^{y_2} g_4^{y_4}\), we implicitly set

$$ \alpha _j = \alpha '_j \bmod p_2 \quad \text { and } \quad r_j = r'_j \cdot y_2 \bmod p_2 \quad \text { for all } j \in [2^i]. $$

When \(T = g_3 R_4\), the simulation is identical to \({{\mathsf {Game}}}_{2.i.1}\); when \(T = g_2 R_4\), the simulation is identical to \({{\mathsf {Game}}}_{2.i.2}\). The additive probability error \(2^{-{{\Omega }}(\lambda )}\) is caused by trivial subgroup components in \(X_1X_2X_3\), \(Y_2Y_4\) and T.    \(\square \)

4 Towards Full Adaptive Security

To prove the full adaptive security of our IBE scheme (in the single-instance setting), we still employ the game sequence described in the previous section. In fact, nearly all lemmas and results we have established still hold in the full adaptive security model, where each target identity may have more than one challenge ciphertext. The only exception is that we cannot prove the indistinguishability of \({{\mathsf {Game}}}_3\) and \({{\mathsf {Game}}}_4\) just from the properties of the random function as before.

Following the work of Hofheinz et al. [HKS15], we find that we can prove the indistinguishability between them under a subgroup variant of the DBDH assumption (see Assumption 5). This assumption is motivated by the Dual System Bilinear DDH assumption from [HKS15] but is simpler.

Assumption 5

(DBDH in \(\mathbb {G}_{p_2}\) ). For any p.p.t. adversary \(\mathcal {A}\) the following advantage function is negligible in \(\lambda \).

$$ {\mathsf {Adv}}^{\text {DBDH}}_{\mathcal {A}}(\lambda ) = |\Pr [\mathcal {A}(\mathcal {G},D,T_0) = 1] - \Pr [\mathcal {A}(\mathcal {G},D,T_1) = 1]|, $$

where \(\mathcal {G}\leftarrow {\mathsf {GrpGen}}(1^\lambda )\), \(g_1 \leftarrow \mathbb {G}^*_{p_1}\), \(g_2 \leftarrow \mathbb {G}^*_{p_2}\), \(g_3 \leftarrow \mathbb {G}^*_{p_3}\), \(g_4 \leftarrow \mathbb {G}^*_{p_4}\), \(a,b,c,r \leftarrow {\mathbb {Z}}_N\),

$$\begin{aligned}&\qquad \,\, D = (\mathcal {G},g_1,g_3,g_4,g_2,g_2^a,g_2^b,g_2^c); \\&T_0 = e(g_2,g_2)^{abc} \quad \text { and } \quad T_1 \leftarrow e(g_2,g_2)^r. \end{aligned}$$

We can define two efficient algorithms that re-randomize DBDH problem instances, following Hofheinz et al. [HKS15]. Given a DBDH instance, algorithm \(\mathsf {ReRand}\) produces an entirely fresh instance, while algorithm \(\mathsf {ReRand_{a}}\) creates a fresh instance sharing b and c with its input. Their formal definitions are given below, followed by a toy sketch.

  • \(\mathsf {ReRand_{a}}(g_2,g_2^a,g_2^b,g_2^c,T) \rightarrow (g_2^{a'},T')\) where \(a' \leftarrow {\mathbb {Z}}_N\) and

    $$ T' = \left\{ \begin{array}{ll} e(g_2,g_2)^{a'bc} &{} \quad \text { when }\ T = e(g_2,g_2)^{abc} \\ e(g_2,g_2)^{r'} \ \text { for } r' \leftarrow {\mathbb {Z}}_N &{} \quad \text { when }\ T = e(g_2,g_2)^r \\ \end{array} \right. $$
  • \(\mathsf {ReRand}(g_2,g_2^a,g_2^b,g_2^c,T) \rightarrow (g_2^{a'},g_2^{b'},g_2^{c'},T')\) where \(a',b',c' \leftarrow {\mathbb {Z}}_N\) and

    $$ T' = \left\{ \begin{array}{ll} e(g_2,g_2)^{a'b'c'} &{} \quad \text { when }\ T = e(g_2,g_2)^{abc}\\ e(g_2,g_2)^{r'} \ \text { for } r' \leftarrow {\mathbb {Z}}_N &{} \quad \text { when }\ T = e(g_2,g_2)^r \\ \end{array} \right. $$
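In the exponent representation (writing \(g_2^x\) simply as x and \(e(g_2,g_2)^t\) as t), one standard way to realize \(\mathsf {ReRand_{a}}\) is the affine re-randomization \(a' = a\delta + \gamma \), \(T' = T^\delta \cdot e(g_2^b,g_2^c)^\gamma \); the sketch below is our own illustration over a toy prime and is not necessarily the exact algorithm of [HKS15].

```python
from random import randrange

p = 2**61 - 1            # toy stand-in for p2; an exponent x represents g2^x or e(g2,g2)^x

def dbdh_instance(real: bool):
    a, b, c = randrange(p), randrange(p), randrange(p)
    T = a * b * c % p if real else randrange(p)
    return a, b, c, T

def rerand_a(a, b, c, T):
    """a' = a*delta + gamma and T' = T^delta * e(g2^b, g2^c)^gamma, computed only
    from the group elements g2^a, g2^b, g2^c and T (never from a, b, c themselves)."""
    delta, gamma = randrange(1, p), randrange(p)
    a_new = (a * delta + gamma) % p          # (g2^a)^delta * g2^gamma
    T_new = (T * delta + b * c * gamma) % p  # T^delta * e(g2^b, g2^c)^gamma
    return a_new, T_new

a, b, c, T = dbdh_instance(real=True)
a2, T2 = rerand_a(a, b, c, T)
print(T2 == a2 * b * c % p)   # True: a DBDH tuple is mapped to a fresh DBDH tuple
```

When T is uniform instead, \((a',T')\) is, with overwhelming probability, a fresh uniform pair; \(\mathsf {ReRand}\) can be realized analogously by additionally rescaling b and c.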

We now prove that \({{\mathsf {Game}}}_3\) and \({{\mathsf {Game}}}_4\) are computationally indistinguishable in the full adaptive security model. This will immediately derive the full adaptive security of our IBE scheme in the single-instance setting.

Lemma 7

(from \({{\mathsf {Game}}}_3\) to \({{\mathsf {Game}}}_4\) ). For any p.p.t. adversary \(\mathcal {A}\) sending at most \(q_\sigma \) queries to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) and \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \), there exists \(\mathcal {B}\) with \(\mathsf {T}(\mathcal {B}) \approx \mathsf {T}(\mathcal {A}) + q_\sigma \cdot {\mathsf {poly}}(\lambda )\) and

$$ |{\mathsf {Adv}}^{{{\mathsf {Game}}}_3}_{\mathcal {A}}(\lambda ) - {\mathsf {Adv}}^{{{\mathsf {Game}}}_4}_{\mathcal {A}}(\lambda ) | \le {\mathsf {Adv}}^{\text {DBDH}}_{\mathcal {B}}(\lambda ) + 2^{-{\Omega }(\lambda )}. $$

Proof

Given \((\mathcal {G}, g_1,g_3,g_4,g_2,g_2^a,g_2^b,g_2^c,T)\) where either \(T = e(g_2,g_2)^{abc}\) or \(T = e(g_2,g_2)^r\) for some \(r \leftarrow {\mathbb {Z}}_N\), algorithm \(\mathcal {B}\) works as follows:

  • Initialization. Pick \(\alpha \leftarrow {\mathbb {Z}}_N\) and \(u \leftarrow \mathbb {G}_{p_1}\). Select hash function \(\mathsf {H}\). Output

    $$ \textsc {mpk} = (g_1,\ g_1^\alpha ,\ e(g_1,u),\ \mathsf {H}). $$
  • Answering \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) . On input \((\textsc {id})\), return

    $$ u^{\frac{1}{\alpha + \textsc {id}}} \cdot g_2^{{\mathsf {RF}}(\textsc {id})} \cdot R_4 $$

    where \(R_4 \leftarrow \mathbb {G}_{p_4}\) and \({\mathsf {RF}}\) is a truly random function.

  • Answering \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \) . \(\mathcal {B}\) maintains a list \(\mathcal {L}\). On input \((\textsc {id}^*)\), sample \(s' \leftarrow {\mathbb {Z}}_N\). If one can find an entry \((\textsc {id}^*,g_2^{a'},g_2^{b'},g_2^{c'},T') \in \mathcal {L}\), get

    $$ (g_2^{a^*},T^*) \leftarrow \mathsf {ReRand_{a}}(g_2,g_2^{a'},g_2^{b'},g_2^{c'},T'); $$

    otherwise get

    $$ (g_2^{a^*},g_2^{b^*},g_2^{c^*},T^*) \leftarrow \mathsf {ReRand}(g_2,g_2^{a},g_2^{b},g_2^{c},T) $$

    and update the list as \(\mathcal {L}= \mathcal {L}\cup \{(\textsc {id}^*,g_2^{a^*},g_2^{b^*},g_2^{c^*},T^*)\}\). \(\mathcal {B}\) then computes

    $$ \textsc {ct}^*_1 = (g_1g_3)^{s'} \cdot g_2^{a^*} \quad \text { and } \quad {\textsc {key}}^*_1 = \mathsf {H}(e(g_1^{s'},u^{\frac{1}{\alpha + \textsc {id}^*}}) \cdot T^*). $$

    Finally \(\mathcal {B}\) picks \((\textsc {ct}^*_0,{\textsc {key}}^*_0) \leftarrow \mathbb {G}_{p_1} \times \{0,1\}^\lambda \) and returns \((\textsc {ct}^*_\beta ,{\textsc {key}}^*_\beta )\).

  • Finalize. \(\mathcal {B}\) returns 1 if \(\beta = \beta '\), where \(\beta '\) denotes the bit output by \(\mathcal {A}\), and returns 0 otherwise.

We implicitly define \({\mathsf {RF}}\) on all challenge identities as

$$ {\mathsf {RF}}(\textsc {id}^*) = b^*c^* \bmod p_2 \quad \text { for every entry }\ (\textsc {id}^*,g_2^{a^*},g_2^{b^*},g_2^{c^*},T^*) \in \mathcal {L}. $$

For all \((\textsc {id}^*,\star ,\star ,\star ,\star ) \in \mathcal {L}\) (or \(\textsc {id}^* \in Q_C\)), we have \(\textsc {id}^* \notin Q_K\). Therefore our simulation of \({\mathsf {RF}}\) is consistent. When \(T = e(g_2,g_2)^{abc}\), the simulation is identical to \({{\mathsf {Game}}}_3\) where

$$ T^* = e(g_2^{a^*},g_2^{{\mathsf {RF}}(\textsc {id}^*)}); $$

when \(T = e(g_2,g_2)^r\) for some \(r \leftarrow {\mathbb {Z}}_N\), the simulation is identical to \({{\mathsf {Game}}}_4\) since all inputs of \(\mathsf {H}\) have min-entropy \({\Theta }(\lambda )\) and thus distributions of all \({\textsc {key}}^*_1\) are statistically close to the uniform distribution over \(\{0,1\}^\lambda \).    \(\square \)
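
To see the first case in more detail, recall that in \({{\mathsf {Game}}}_3\) the challenge key is computed as \(\mathsf {H}(e(\textsc {ct}^*_1,\textsc {sk}_{\textsc {id}^*}))\). Since distinct prime-order subgroups are orthogonal under the pairing, the simulated ciphertext and secret key satisfy

$$ e\big ((g_1g_3)^{s'} \cdot g_2^{a^*},\ u^{\frac{1}{\alpha + \textsc {id}^*}} \cdot g_2^{{\mathsf {RF}}(\textsc {id}^*)} \cdot R_4\big ) = e\big (g_1^{s'},u^{\frac{1}{\alpha + \textsc {id}^*}}\big ) \cdot e\big (g_2^{a^*},g_2^{{\mathsf {RF}}(\textsc {id}^*)}\big ), $$

and with \({\mathsf {RF}}(\textsc {id}^*) = b^*c^* \bmod p_2\) the second factor equals \(e(g_2,g_2)^{a^*b^*c^*}\), which is exactly the value \(T^*\) produced by the re-randomization algorithms when \(T = e(g_2,g_2)^{abc}\); hence \({\textsc {key}}^*_1 = \mathsf {H}(e(g_1^{s'},u^{\frac{1}{\alpha + \textsc {id}^*}}) \cdot T^*)\) is distributed exactly as in \({{\mathsf {Game}}}_3\).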

5 Towards Multi-instance Setting

Having obtained full adaptive security of our IBE scheme in the basic single-instance setting, we now extend the result to the multi-instance setting [HKS15]. Typically, all instances in question will share some parameters. Formally, we define two additional algorithms following [HKS15]:

  • \({{\mathsf {Param}}}(1^\lambda ) \rightarrow {\textsc {gp}}\). The parameter generation algorithm \({{\mathsf {Param}}}\) takes as input the security parameter \(1^\lambda \) and outputs global parameter \({\textsc {gp}}\).

  • \({\mathsf {Setup}}_{{\mathsf {m}}}({\textsc {gp}}) \rightarrow (\textsc {mpk},\textsc {msk})\). The setup algorithm \({\mathsf {Setup}}_{{\mathsf {m}}}\) takes as input the global parameter \({\textsc {gp}}\) and outputs master public/secret key pair \((\textsc {mpk},\textsc {msk})\).

Each instance is established by running algorithm \({\mathsf {Setup}}_{{\mathsf {m}}}\) with the global parameter \({\textsc {gp}}\) (shared among all instances) and a fresh random coin. For simplicity, we assume that all instances have common ciphertext space \(\mathcal {C}\) and key space \(\mathcal {K}\). With master public/secret key pair \((\textsc {mpk},\textsc {msk})\) generated by algorithm \({\mathsf {Setup}}_{{\mathsf {m}}}\), one can invoke algorithms \({\mathsf {KeyGen}}\), \({\mathsf {Enc}}\), \({{\mathsf {Dec}}}\) as in the single-instance setting. Therefore the correctness can be defined in a natural way.

The full adaptive security and anonymity in the multi-instance setting can be formulated by defining the advantage function as

$$ {\mathsf {Adv}}^{\text {mIBE}}_{\mathcal {A}}(\lambda ) = \left| \Pr \left[ \beta = \beta ' \left| \begin{array}{c} {\textsc {gp}}\leftarrow {{\mathsf {Param}}}(1^\lambda ),\ \beta \leftarrow \{0,1\}\\ (\textsc {mpk}^{(\iota )},\textsc {msk}^{(\iota )}) \leftarrow {\mathsf {Setup}}_{{\mathsf {m}}}({\textsc {gp}}),\quad \forall \iota \in [\tau ] \\ \beta ' \leftarrow \mathcal {A}^{{\mathsf {O}}^{{\mathsf {KeyGen}}}(\cdot ,\cdot ),\mathsf {O}^{{\mathsf {Enc}}}_\beta (\cdot ,\cdot )}(1^\lambda ,\textsc {mpk}^{(1)},\ldots ,\textsc {mpk}^{(\tau )}) \end{array} \right. \right] - \frac{1}{2} \right| $$

where \(\tau \) is the number of instances and the oracles work as follows (a schematic of this experiment is sketched after the list):

  • \({\mathsf {O}}^{{\mathsf {KeyGen}}}\): On input \((\iota ,\textsc {id})\), the oracle returns \({\mathsf {KeyGen}}(\textsc {mpk}^{(\iota )},\textsc {msk}^{(\iota )},\textsc {id})\) and sets \(Q_K = Q_K \cup \{(\iota ,\textsc {id})\}\).

  • \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \): On input \((\iota ^*,\textsc {id}^*)\), the oracle samples \((\textsc {ct}^*_1,{\textsc {key}}^*_1) \leftarrow {\mathsf {Enc}}(\textsc {mpk}^{(\iota ^*)},\textsc {id}^*)\) and \((\textsc {ct}^*_0,{\textsc {key}}^*_0) \leftarrow \mathcal {C} \times \mathcal {K}\), returns \((\textsc {ct}^*_\beta ,{\textsc {key}}^*_\beta )\), and sets \(Q_C = Q_C \cup \{(\iota ^*,\textsc {id}^*)\}\).
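
The following Python sketch summarizes the structure of this experiment. It is purely schematic: \({{\mathsf {Param}}}\), \({\mathsf {Setup}}_{{\mathsf {m}}}\), \({\mathsf {KeyGen}}\), \({\mathsf {Enc}}\), the sampler for \(\mathcal {C} \times \mathcal {K}\) and the adversary are passed in as abstract callables, not concrete implementations.

```python
import secrets

# Schematic of the multi-instance security experiment defined above.
# Param, Setup_m, KeyGen, Enc, sample_random_pair and adversary are abstract
# callables standing in for the scheme's algorithms and the attacker A.
def multi_instance_experiment(Param, Setup_m, KeyGen, Enc,
                              sample_random_pair, adversary, tau, lam):
    gp = Param(lam)                                    # shared global parameter
    instances = [Setup_m(gp) for _ in range(tau)]      # (mpk, msk) per instance
    beta = secrets.randbelow(2)                        # challenge bit
    Q_K, Q_C = set(), set()

    def O_KeyGen(iota, identity):                      # O^KeyGen(iota, id)
        Q_K.add((iota, identity))
        mpk, msk = instances[iota]
        return KeyGen(mpk, msk, identity)

    def O_Enc(iota, identity):                         # O^Enc_beta(iota*, id*)
        Q_C.add((iota, identity))
        mpk, _ = instances[iota]
        real = Enc(mpk, identity)                      # (ct*_1, key*_1)
        random_pair = sample_random_pair()             # (ct*_0, key*_0) <- C x K
        return real if beta == 1 else random_pair

    mpks = [mpk for mpk, _ in instances]
    beta_guess = adversary(mpks, O_KeyGen, O_Enc)      # A's guess beta'
    return beta_guess == beta                          # Adv = |Pr[True] - 1/2|
```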

5.1 Construction

We describe a multi-instance variant of our basic IBE scheme (shown in Sect. 3.1) as follows.

  • \({{\mathsf {Param}}}(1^\lambda )\). Run \(\mathcal {G}= (N, \mathbb {G}, \mathbb {G}_T, e) \leftarrow {\mathsf {GrpGen}}(1^\lambda )\). Sample

    $$ g_1 \leftarrow \mathbb {G}^*_{p_1},\quad g_4 \leftarrow \mathbb {G}_{p_4}^*. $$

    Pick \(\mathsf {H}: \mathbb {G}_T\rightarrow \{0,1\}^\lambda \) from a pairwise independent hash family. Output

    $$ {\textsc {gp}}= (\mathcal {G},\ g_1,\ g_4,\ \mathsf {H}). $$
  • \({\mathsf {Setup}}_{{\mathsf {m}}}({\textsc {gp}})\). Sample \(\alpha \leftarrow {\mathbb {Z}}_N\) and \(u \leftarrow \mathbb {G}_{p_1}\). Output

    $$ \textsc {mpk} = (g_1,\ g_1^\alpha ,\ e(g_1,u),\ \mathsf {H}) \quad \text { and } \quad \textsc {msk} = (\alpha ,u,g_4). $$

The remaining algorithms \({\mathsf {KeyGen}}\), \({\mathsf {Enc}}\), \({{\mathsf {Dec}}}\) are defined as in Sect. 3.1.
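
To make the sharing of \({\textsc {gp}}\) across instances concrete, here is a toy Python sketch in which a group element \(g^x\) of the composite-order group is represented by its exponent \(x \bmod N\) and the symmetric pairing becomes multiplication of exponents mod \(N\); the tiny primes, the SHA-256 stand-in for \(\mathsf {H}\), and all identifiers are illustrative assumptions and not part of the actual scheme.

```python
import secrets, hashlib

# Toy exponent-arithmetic model of the composite-order group: an element g^x
# is stored as x mod N, and e(g^x, g^y) = e(g,g)^{xy} is just x*y mod N.
# The primes below are tiny placeholders -- this is an illustration, not crypto.
p1, p2, p3, p4 = 1009, 1013, 1019, 1021
N = p1 * p2 * p3 * p4

def random_generator_of_subgroup(p):
    """Exponent of a uniform nonzero element of the order-p subgroup G_p^*."""
    cofactor = N // p
    return (cofactor * (1 + secrets.randbelow(p - 1))) % N

def Param(lam=128):
    g1 = random_generator_of_subgroup(p1)              # g1 <- G_{p1}^*
    g4 = random_generator_of_subgroup(p4)              # g4 <- G_{p4}^*
    H = lambda x: hashlib.sha256(str(x).encode()).digest()[:lam // 8]
    return {"g1": g1, "g4": g4, "H": H}

def Setup_m(gp):
    alpha = secrets.randbelow(N)                       # alpha <- Z_N
    u = (gp["g1"] * secrets.randbelow(p1)) % N         # u <- G_{p1}
    mpk = {"g1": gp["g1"],
           "g1^alpha": (gp["g1"] * alpha) % N,         # exponent of g1^alpha
           "e(g1,u)": (gp["g1"] * u) % N,              # pairing = exponent product
           "H": gp["H"]}
    msk = {"alpha": alpha, "u": u, "g4": gp["g4"]}
    return mpk, msk

gp = Param()
instances = [Setup_m(gp) for _ in range(3)]            # three instances share gp
```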

5.2 Security

We prove the following theorem.

Theorem 2

For any p.p.t. adversary \(\mathcal {A}\) sending at most \(\hat{q}_\sigma \) queries to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) and \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \) for each of \(\tau \) instances, there exist \(\mathcal {B}_1\), \(\mathcal {B}_2\), \(\mathcal {B}_3\), \(\mathcal {B}_4\) such that

$$\begin{aligned} {\mathsf {Adv}}^{\text {mIBE}}_{\mathcal {A}}(\lambda ) \le {}&\ \frac{5}{2} \cdot {\mathsf {Adv}}^{\text {SD1}}_{\mathcal {B}_1}(\lambda ) + 2 \cdot {\mathsf {Adv}}^{\text {SD2}}_{\mathcal {B}_2}(\lambda ) + 2 \cdot \lceil \log \hat{q}_\sigma \rceil \cdot {\mathsf {Adv}}^{\text {SD3}}_{\mathcal {B}_3}(\lambda ) \\ &+ \Big ( 2 \cdot \lceil \log \hat{q}_\sigma \rceil + \frac{1}{2} \Big ) \cdot {\mathsf {Adv}}^{\text {SD4}}_{\mathcal {B}_4}(\lambda ) + 2^{-{\Omega }(\lambda )} \end{aligned}$$

and \(\max \{\mathsf {T}(\mathcal {B}_1),\mathsf {T}(\mathcal {B}_2),\mathsf {T}(\mathcal {B}_3),\mathsf {T}(\mathcal {B}_4)\} \approx \mathsf {T}(\mathcal {A}) + \tau ^2 \cdot \hat{q}_\sigma ^2 \cdot {\mathsf {poly}}(\lambda )\).
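
To illustrate the bound, suppose each instance answers at most \(\hat{q}_\sigma = 2^{30}\) oracle queries (an arbitrary illustrative figure), so \(\lceil \log \hat{q}_\sigma \rceil = 30\). If each of the four subgroup-decision advantages is at most \(\varepsilon \), the bound becomes

$$ {\mathsf {Adv}}^{\text {mIBE}}_{\mathcal {A}}(\lambda ) \ \le \ \Big ( \tfrac{5}{2} + 2 + 2 \cdot 30 + 2 \cdot 30 + \tfrac{1}{2} \Big ) \cdot \varepsilon + 2^{-{\Omega }(\lambda )} \ = \ 125 \cdot \varepsilon + 2^{-{\Omega }(\lambda )}, $$

a security loss that is independent of the number of instances \(\tau \) and grows only logarithmically in the number of queries, rather than in proportion to the total number of revealed secret keys and challenge ciphertexts as in a query-by-query hybrid argument.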

The above theorem is almost identical to Theorem 1 and can be proved in a similar way. As discussed earlier, our main idea in this setting is to build an independent random function for each instance in a concurrent manner. The remainder of this subsection shows how to upgrade the proof of Theorem 1 (cf. Sect. 3.2 for the game sequence and Sect. 3.3 for proof details) to a proof of Theorem 2.

Game Sequence. It is straightforward to extend \({{\mathsf {Game}}_{{\mathsf {real}}}}\), \({{\mathsf {Game}}}_0\), \({{\mathsf {Game}}}'_0\), \({{\mathsf {Game}}}''_0\), \({{\mathsf {Game}}}_1\) and \({{\mathsf {Game}}}_4\) to the multi-instance setting. The remaining games \({{\mathsf {Game}}}_{2.i}\), \({{\mathsf {Game}}}_{2.i.1}\), \({{\mathsf {Game}}}_{2.i.2}\), \({{\mathsf {Game}}}_3\) can be described as follows (a brief accounting of how the boxed sums grow is given after the list). Let \(\mathcal {G}= (N, \mathbb {G}, \mathbb {G}_T, e) \leftarrow {\mathsf {GrpGen}}(1^\lambda )\). In all these games, the master public keys given to adversary \(\mathcal {A}\) are

$$ \textsc {mpk}^{(1)} = (g_1,\ g_1^{\alpha ^{(1)}},\ e(g_1,u^{(1)}),\ \mathsf {H}),\ \ldots \ ,\ \textsc {mpk}^{(\tau )} = (g_1,\ g_1^{\alpha ^{(\tau )}},\ e(g_1,u^{(\tau )}),\ \mathsf {H}) $$

where \(g_1 \leftarrow \mathbb {G}_{p_1}^*\), \(\alpha ^{(1)},\ldots ,\alpha ^{(\tau )} \leftarrow {\mathbb {Z}}_N\), \(u^{(1)},\ldots ,u^{(\tau )} \leftarrow \mathbb {G}_{p_1}\) and \(\mathsf {H}\) is picked from a pairwise-independent hash family; oracle \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \) works as follows:

  • On input \((\iota ^*,\textsc {id}^*)\), sample \(\textsc {ct}^*_1 \leftarrow \mathbb {G}_{p_1}\mathbb {G}_{p_2}\mathbb {G}_{p_3}\) and compute

    $$ {\textsc {key}}^*_1 = \mathsf {H}(e(\textsc {ct}^*_1,\textsc {sk}^{(\iota ^*)}_{\textsc {id}^*})) $$

    where \(\textsc {sk}^{(\iota ^*)}_{\textsc {id}^*}\) is obtained via an \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) query on \((\iota ^*,\textsc {id}^*)\). Sample \((\textsc {ct}^*_0,{\textsc {key}}^*_0) \leftarrow \mathbb {G}_{p_1} \times \{0,1\}^\lambda \) and return \((\textsc {ct}^*_\beta ,{\textsc {key}}^*_\beta )\).

However, on input \((\iota ,\textsc {id})\), oracle \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) behaves differently in those games:

  • In \({{\mathsf {Game}}}_{2.i}\), it returns

    $$ (u^{(\iota )})^{\frac{1}{\alpha ^{(\iota )} + \textsc {id}}} \cdot \boxed { g_2^{\sum _{j=1}^{2^i}\frac{r^{(\iota )}_j}{\alpha ^{(\iota )}_j + \textsc {id}}} } \cdot R_4 $$

    where \(g_2 \leftarrow \mathbb {G}^*_{p_2}\) and \(\alpha ^{(1)}_j,r^{(1)}_j,\ldots ,\alpha ^{(\tau )}_j,r^{(\tau )}_j \leftarrow {\mathbb {Z}}_N\) for all \(j \in [2^i]\).

  • In \({{\mathsf {Game}}}_{2.i.1}\), it returns

    $$ (u^{(\iota )})^{\frac{1}{\alpha ^{(\iota )} + \textsc {id}}} \cdot g_2^{\sum _{j=1}^{2^i}\frac{r^{(\iota )}_j}{\alpha ^{(\iota )}_j + \textsc {id}}} \cdot \boxed { g_3^{\sum _{j=1}^{2^i}\frac{\widehat{r}^{(\iota )}_j}{\widehat{\alpha }^{(\iota )}_j + \textsc {id}}}} \cdot R_4, $$

    where \(g_3 \leftarrow \mathbb {G}^*_{p_3}\) and \(\alpha ^{(1)}_j,r^{(1)}_j,\widehat{\alpha }^{(1)}_j,\widehat{r}^{(1)}_j,\ldots ,\alpha ^{(\tau )}_j,r^{(\tau )}_j,\widehat{\alpha }^{(\tau )}_j,\widehat{r}^{(\tau )}_j \leftarrow {\mathbb {Z}}_N\) for all \(j \in [2^i]\).

  • In \({{\mathsf {Game}}}_{2.i.2}\), it returns

    $$ (u^{(\iota )})^{\frac{1}{\alpha ^{(\iota )} + \textsc {id}}} \cdot \boxed { g_2^{\sum _{j=1}^{2^i}\frac{r^{(\iota )}_j}{\alpha ^{(\iota )}_j + \textsc {id}} \,+\, \sum _{j=1}^{2^i}\frac{\widehat{r}^{(\iota )}_j}{\widehat{\alpha }^{(\iota )}_j + \textsc {id}}} } \cdot R_4 $$

    where \(g_2 \leftarrow \mathbb {G}^*_{p_2}\) and \(\alpha ^{(1)}_j,r^{(1)}_j,\widehat{\alpha }^{(1)}_j,\widehat{r}^{(1)}_j,\ldots ,\alpha ^{(\tau )}_j,r^{(\tau )}_j,\widehat{\alpha }^{(\tau )}_j,\widehat{r}^{(\tau )}_j \leftarrow {\mathbb {Z}}_N\) for all \(j \in [2^i]\).

  • In \({{\mathsf {Game}}}_3\), it returns

    $$ (u^{(\iota )})^{\frac{1}{\alpha ^{(\iota )} + \textsc {id}}} \cdot g_2^{{\mathsf {RF}}^{(\iota )}(\textsc {id})} \cdot R_4 $$

    where \(g_2 \leftarrow \mathbb {G}^*_{p_2}\) and \({\mathsf {RF}}^{(1)},\ldots ,{\mathsf {RF}}^{(\tau )}\) are \(\tau \) independent random functions.
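
As promised above, the \(\lceil \log \hat{q}_\sigma \rceil \) factors in Theorem 2 can be read off the growth of the boxed sums: the \(\mathbb {G}_{p_2}\)-exponent in \({{\mathsf {Game}}}_{2.i}\) contains \(2^i\) terms, so each step from \({{\mathsf {Game}}}_{2.i}\) to \({{\mathsf {Game}}}_{2.(i+1)}\) (via \({{\mathsf {Game}}}_{2.i.1}\) and \({{\mathsf {Game}}}_{2.i.2}\)) doubles the term count,

$$ 1 = 2^0 \ \rightarrow \ 2^1 \ \rightarrow \ \cdots \ \rightarrow \ 2^{\lceil \log \hat{q}_\sigma \rceil } \ \ge \ \hat{q}_\sigma . $$

After \(\lceil \log \hat{q}_\sigma \rceil \) steps the sum thus contains at least as many terms as the number of identities queried per instance, which is the intuition behind the final transition to the independent random functions \({\mathsf {RF}}^{(\iota )}\) of \({{\mathsf {Game}}}_3\).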

Lemmas and Proofs. Most lemmas and proofs (including their arguments) in Sects. 3.2, 3.3 and 4 extend directly to multiple instances. In particular, to prove \({{\mathsf {Game}}}_{2.i} \approx {{\mathsf {Game}}}_{2.i.1}\), \({{\mathsf {Game}}}_{2.i.1} \approx {{\mathsf {Game}}}_{2.i.2}\), and \({{\mathsf {Game}}}_{3} \approx {{\mathsf {Game}}}_4\) (where “\({{{\mathsf {Game}}}_{xxx} \approx {{\mathsf {Game}}}_{yyy}}\)” means the two games are computationally indistinguishable) in the multi-instance setting, one simply invokes the simulators described in the proofs of Lemmas 5, 6, and 7 for each instance with independent random coins. It remains to prove the following lemma, showing \({{{\mathsf {Game}}}_1 \approx {{\mathsf {Game}}}_{2.0}}\).

Lemma 8

(from \({{{\mathsf {Game}}}_1}\) to \({{{\mathsf {Game}}}_{2.0}}\) , multi-instance case). For any p.p.t. adversary \(\mathcal {A}\) sending at most \(\hat{q}_\sigma \) queries to \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) and \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \) for each of \(\tau \) instances, there exists \(\mathcal {B}\) with \(\mathsf {T}(\mathcal {B}) \approx \mathsf {T}(\mathcal {A}) + \tau \cdot \hat{q}_\sigma \cdot {\mathsf {poly}}(\lambda )\) and

$$|{\mathsf {Adv}}^{{{\mathsf {Game}}}_{2.0}}_{\mathcal {A}}(\lambda ) - {\mathsf {Adv}}^{{{\mathsf {Game}}}_1}_{\mathcal {A}}(\lambda ) | \le {\mathsf {Adv}}^{\text {SD2}}_{\mathcal {B}}(\lambda ) + 2^{-{{\Omega }}(\lambda )}.$$

Proof

Given \((\mathcal {G}, g_1, g_4, X_1X_2X_3, T)\) where either \(T = g_1^\mu \leftarrow \mathbb {G}_{p_1}\) or \(T = g_1^\mu g_2^r \leftarrow \mathbb {G}_{p_1} \mathbb {G}_{p_2}\) for \(g_2 \leftarrow \mathbb {G}^*_{p_2}\) and \(\mu ,r \leftarrow {\mathbb {Z}}_N\), algorithm \(\mathcal {B}\) works as follows:

  • Initialization. Pick \(\alpha ^{(1)},\ldots ,\alpha ^{(\tau )},\mu ^{(1)},\ldots ,\mu ^{(\tau )} \leftarrow {\mathbb {Z}}_N\) and select hash function \(\mathsf {H}\). Compute

    $$ T^{(1)} = T^{\mu ^{(1)}},\ \ldots \ ,\ T^{(\tau )} = T^{\mu ^{(\tau )}} $$

    and output

    $$ \textsc {mpk}^{(1)} = (g_1,\ g_1^{\alpha ^{(1)}},\ e(g_1,T^{(1)}),\ \mathsf {H}),\ \ldots ,\ \textsc {mpk}^{(\tau )} = (g_1,\ g_1^{\alpha ^{(\tau )}},\ e(g_1,T^{(\tau )}),\ \mathsf {H}). $$

    Here we implicitly set

    $$ u^{(1)} = g_1^{\mu \mu ^{(1)}},\ \ldots \ ,\ u^{(\tau )} = g_1^{\mu \mu ^{(\tau )}}. $$
  • Answering \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) . On input \((\iota ,\textsc {id})\), sample \(R_4 \leftarrow \mathbb {G}_{p_4}\) and return

    $$ (T^{(\iota )})^{\frac{1}{\alpha ^{(\iota )} + \textsc {id}}} \cdot R_4. $$
  • Answering \(\mathsf {O}^{{\mathsf {Enc}}}_\beta \) . On input \((\iota ^*,\textsc {id}^*)\), sample \(s' \leftarrow {\mathbb {Z}}_N\) and compute

    $$ \textsc {ct}^*_1 = (X_1X_2X_3)^{s'} \quad \text { and } \quad {\textsc {key}}^*_1 = \mathsf {H}(e((X_1X_2X_3)^{s'},\textsc {sk}_{\textsc {id}^*})) $$

    where \(\textsc {sk}_{\textsc {id}^*}\) is obtained via an \({\mathsf {O}}^{{\mathsf {KeyGen}}}\) query on \((\iota ^*,\textsc {id}^*)\). \(\mathcal {B}\) then picks \((\textsc {ct}^*_0,{\textsc {key}}^*_0) \leftarrow \mathbb {G}_{p_1} \times \{0,1\}^\lambda \) and returns \((\textsc {ct}^*_\beta ,{\textsc {key}}^*_\beta )\).

  • Finalize. \(\mathcal {B}\) returns 1 if \(\beta = \beta '\) and returns 0 otherwise.

When \(T = g_1^\mu \), the simulation is identical to \({{\mathsf {Game}}}_1\); when \(T = g_1^\mu g_2^r\), the simulation is identical to \({{\mathsf {Game}}}_{2.0}\) where we implicitly set

$$ \begin{array}{l} \alpha ^{(1)}_1 = \alpha ^{(1)} \bmod p_2\\ r^{(1)}_1 = r \mu ^{(1)} \bmod p_2\\ \end{array}, \ldots \ ,\ \begin{array}{l} \alpha ^{(\tau )}_1 = \alpha ^{(\tau )} \bmod p_2\\ r^{(\tau )}_1 = r \mu ^{(\tau )} \bmod p_2\\ \end{array}. $$
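
Indeed, for a key query \((\iota ,\textsc {id})\) in the second case the returned key expands as

$$ (T^{(\iota )})^{\frac{1}{\alpha ^{(\iota )} + \textsc {id}}} \cdot R_4 = \big (g_1^{\mu \mu ^{(\iota )}} g_2^{r\mu ^{(\iota )}}\big )^{\frac{1}{\alpha ^{(\iota )} + \textsc {id}}} \cdot R_4 = (u^{(\iota )})^{\frac{1}{\alpha ^{(\iota )} + \textsc {id}}} \cdot g_2^{\frac{r^{(\iota )}_1}{\alpha ^{(\iota )}_1 + \textsc {id}}} \cdot R_4, $$

which is exactly a key of \({{\mathsf {Game}}}_{2.0}\) (a single term, \(2^0 = 1\), in the \(\mathbb {G}_{p_2}\)-exponent); in the first case \(T^{(\iota )} = g_1^{\mu \mu ^{(\iota )}} = u^{(\iota )}\), so the returned key is the normal key \((u^{(\iota )})^{\frac{1}{\alpha ^{(\iota )} + \textsc {id}}} \cdot R_4\) of \({{\mathsf {Game}}}_1\).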

This proves the lemma.    \(\square \)