1 Introduction

Authenticated key exchange (AKE) protocols, which are among the most widely used cryptographic primitives, form a central component of many network standards, such as IPSec, SSL/TLS, and SSH. An AKE protocol enables a secure channel to be established among a set of communicating parties by allowing them to agree on a common secret key. Practical AKE protocols such as the ISO protocol (a.k.a. SIG-DH) [10, 18], the Internet key exchange protocol (a.k.a. SIGMA) [29] and their variants have been proposed and deployed in the aforementioned network standards.

To formally define the security of an AKE protocol, Bellare and Rogaway (BR) [5] proposed the first complexity-theoretic security notion for AKE. The BR model and its variants are nowadays the de facto standard for AKE security analysis. In particular, the Canetti–Krawczyk (CK) model [10], which can be viewed as an extension and combination of the BR model and the Bellare–Canetti–Krawczyk (BCK) model [6], has been used to prove the security of many widely used AKE protocols such as SIG-DH and SIGMA. Noting that the CK model does not capture several attacks, such as key compromise impersonation (KCI) and ephemeral key compromise attacks, LaMacchia et al. [31] introduced an extension of the CK model, named the eCK model, to consider adversaries that are stronger in some respects. We should note that the CK model and the eCK model are incomparable, and we refer the reader to Choo et al. [11] for a detailed summary of the differences among the aforementioned AKE models.

Leakage attacks in reality Although the security notion of AKE has been extensively studied by the research community over the last two decades, all the aforementioned AKE security models belong to the traditional “black-box” setting, where honest parties are viewed as interactive Turing machines and each party has its own private memory and randomness source. In the real world, however, attackers do not always obey such a somewhat idealized setting. Specifically, the physical implementation of a cryptographic system can leak sensitive information through physical channels, such as computation time, power consumption, and radiation/noise/heat emission. Through these “side channels”, an attacker is able to learn imperfect information about the user secret [7, 22, 36]. Moreover, the randomness used by a system can be weakened for different reasons, such as the side-channel leakages mentioned above or poor implementations of pseudo-random number generators (PRNGs). Security vulnerabilities involving flawed or weakened PRNGs have already been reported in the literature [32, 37, 42]. Consequently, an AKE protocol proven secure in the traditional model could be completely insecure in the presence of leakage attacks.

Modelling leakage attacks The seminal work of Micali and Reyzin [33] was the first to defend against leakage attacks by treating them in an abstract way. Since then, significant progress has been made within the cryptographic community to incorporate leakage attacks into the traditional black-box security definitions. Informally, in the leakage setting the adversary is allowed to learn partial information about a user secret via an abstract leakage function f in addition to the normal black-box interaction with an honest party. More precisely, the adversary is provided with access to a leakage oracle: the adversary can query the oracle with a polynomial-time computable function f and then receive \(f(\mathsf {sk})\), where \(\mathsf {sk}\) is the user secret key. Obviously, if the adversary may choose an arbitrary leakage function f, the key leaks in its entirety and the security of the system is surely compromised. Hence certain restrictions on the class of admissible leakage functions are necessary. For example, Micali and Reyzin [33] restrict the leakage function so that only computation leaks information and the amount of leakage per occurrence is bounded by a leakage parameter. The challenge is to model the necessary restrictions in a meaningful and reasonable manner, so that real leakage attacks are captured while the security notion remains achievable.

To address the above problem, several leakage models have been proposed in the literature. Inspired by the cold boot attack presented by Halderman et al. [24], Akavia et al. [1] formalized a general framework, named the bounded leakage model, which explicitly assumes that a leakage attack reveals only a fraction of the secret key to the adversary. Specifically, the output length of the (adversarially chosen) leakage function is bounded to be strictly less than the length of the secret key. In a subsequent work, Naor and Segev [35] relaxed the restriction on the leakage function and only required that the secret key retain a “sufficient” amount of min-entropy after the leakage. Such a leakage model, named the noisy leakage model, captures a wider class of real-world leakage attacks since the leakage can now be arbitrarily long.

The auxiliary input model Instead of imposing a min-entropy requirement on the secret key under leakage, a more general model was put forward by Dodis et al. [16]. Their notion, named the auxiliary input model, requires that it be computationally hard for the adversary to recover the secret key given the leakage. In other words, this model allows any one-way leakage function f. Such hard-to-invert leakage is a generalization of the bounded/noisy leakage model and hence captures a larger class of leakage functions. For example, a one-way permutation is allowed as a leakage function in the auxiliary input model, while it is disallowed in the bounded/noisy leakage model since it information-theoretically reveals the entire secret key. Moreover, as shown by Dodis et al. [16], the auxiliary input model offers additional advantages in terms of capturing leakage attacks in the real world. For some cryptographic systems where the secret key (e.g., a biometric key) is used for multiple tasks, the attacker may, after a sufficiently long time, eventually obtain leakage exceeding the upper bound permitted in the bounded/noisy leakage model. However, if the system is secure w.r.t. auxiliary inputs, one can safely use the secret key as long as the total leakage does not (computationally) reveal the entire secret key. Readers are referred to the work by Dodis et al. [16] for more detailed discussions.

1.1 Gaps between existing works and the reality for leakage-resilient AKE

Leakage-resilient cryptographic primitives such as pseudorandom generators [40], signature schemes [9], and encryption schemes [1, 12, 16, 35] have been extensively studied under the different leakage models outlined above. However, only a few works have addressed the modelling and construction of leakage-resilient AKE protocols. This seems surprising given the importance of AKE, but on the other hand it is also “understandable”, since it is believed that a leakage-resilient PKE or signature scheme can be straightforwardly adopted for the construction of AKE protocols resilient to leakage attacks, as demonstrated by some existing leakage-resilient AKE constructions. In particular, Alwen et al. [4] showed that a leakage-resilient AKE protocol, namely \(\mathsf {eSIG\hbox {-}DH}\), can be constructed from an entropically-unforgeable digital signature scheme secure under chosen-message attacks. Dodis et al. [15] proposed two new constructions of AKE protocols that are leakage-resilient in the CK security model. Their first construction follows the work of Alwen et al. [4], i.e., authenticating Diffie–Hellman (DH) key exchange using a leakage-resilient signature scheme. The second construction, named \(\mathsf {Enc\hbox {-}DH}\), is based on a leakage-resilient CCA-secure PKE scheme and follows the idea that both parties authenticate each other by requiring the peer entity to correctly decrypt the DH ephemeral public key encrypted under the long-term public key.

However, we notice that the above straightforward approach to constructing leakage-resilient AKE protocols has some limitations and fails to fully capture general leakage attacks in reality, for the following reasons.

  • Limitation on the assumption Although the AKE models in [4, 15] consider leakage attacks, they assume that no leakage occurs during the challenge AKE session. More precisely, to guarantee the security of the session key of the challenge (or target) session, the adversary is disallowed from accessing the leakage oracle during that session. The reason behind this assumption is that for PKE- and signature-based Diffie–Hellman AKE protocols, the adversary could use the leakage function to break the PKE- or signature-based authentication mechanism, launch an active impersonation attack, and obtain the session key. The assumption rules out such a trivial attack, but on the other hand it greatly limits the strength of the adversary, since in reality leakage attacks may happen at any time. This definitional difficulty was also recognized by Naor and Segev [35] and later resolved in works on leakage-resilient encryption [26, 41]. However, only a few formal treatments have been proposed for AKE to address this problem. It is worth noting that Alawatugoda et al. [2, 3] addressed this problem by constructing AKE protocols that are resilient to after-the-fact leakage. However, their approach relies on the split-state assumption [26]. Moreover, Alwen et al. [4] suggested removing this somewhat unnatural assumption by instantiating \(\mathsf {eSIG\hbox {-}DH}\) with a leakage-resilient and existentially-unforgeable (instead of entropically-unforgeable) signature scheme. Nevertheless, this option introduces a new assumption that the amount of leakage must be smaller than the length of a signature.

  • Limitation on the modelling of randomness leakage Another limitation of the previous work on leakage-resilient AKE is that it does not fully capture general leakage attacks, which may involve the randomness (a.k.a. the ephemeral secret) used by an AKE protocol. As mentioned before, randomness leakage may occur in reality, e.g., due to poor implementations of PRNGs [32, 37, 42]. Also, leakage attacks in practice (e.g., timing or power-consumption analysis) can be closely related to the randomness. Since the randomness determines the value of a session key and hence plays a crucial role in an AKE protocol, it is necessary to capture randomness leakage in the design and analysis of leakage-resilient AKE. It is worth noting that although the eCK model [31] captures the compromise of randomness, this is different from a leakage attack. Specifically, in the eCK model, either the long-term secret or the ephemeral secret (but not both) of a party can be completely revealed, while partial leakage of both secrets is not allowed. In particular, an attacker may reveal the long-term secret and simultaneously obtain partial leakage of the ephemeral secret; this adversarial capability has never been captured by any existing model.

  • Limitation on the leakage setting The leakage model for AKE introduced in [4, 15] is an extension of the CK model [10] in the bounded leakage setting [1]. To be specific, the adversary is given access to a leakage oracle \(\mathcal {O}_{\mathsf {sk}}^{l}(\cdot )\), which can be invoked adaptively subject to the constraint that the adversary can learn at most l bits of \(\mathsf {sk}\) in total. As outlined above, such a leakage setting is weaker than the auxiliary input setting [16]. To obtain AKE protocols secure under the auxiliary input model, one could likely build such a protocol either from a digital signature scheme that is random-message unforgeable under auxiliary input attacks, e.g., that of Faust et al. [19], or from an auxiliary-input-secure encryption scheme, e.g., that of Dodis et al. [14]. In fact, Yang et al. [39] showed that an AKE protocol secure under the CK model with auxiliary input attacks can be built from a digital signature scheme that is random-message unforgeable under random message and auxiliary input attacks. However, as illustrated above, such a straightforward solution overcomes neither the unnatural assumption nor the deficient leakage coverage (i.e., the lack of randomness leakage). Moreover, it is still unknown how to design an eCK-based model in the auxiliary input setting, and no existing work has considered this issue.

Other related works To remove the unnatural assumption, the notion of after-the-fact leakage proposed by Halevi and Lin [26] appears to be a potential solution, and it has in fact been adopted by Alawatugoda et al. [2, 3]. Their proposed model allows the adversary to access the leakage oracle during the challenge session execution, but it implicitly requires the long-term secret to be in split-state, since otherwise their definition is unachievable in the eCK model. Moreover, the central idea of their AKE construction is to utilize a split-state encryption scheme with a special property (i.e., PG-IND), which is a strong assumption. We also note that the split-state approach does not seem natural for dealing with randomness leakage, which is not captured by their model but is one of our goals in this work.

Another potential solution is the second approach proposed by Faust et al. [19], who defined an auxiliary input model that allows only exponentially (rather than polynomially) hard-to-invert functions of the signing key. Since the signing algorithm can itself be viewed as a polynomially hard-to-invert function, it is excluded from the allowed set of leakage functions, and hence the adversary can issue leakage queries during the session. However, the exponentially hard-to-invert restriction greatly reduces the class of allowed leakage functions and cannot fully capture general leakage attacks.

Noting that some previous work on encryption and signature schemes [8, 27] has considered leakage from the randomness, one may try to incorporate these techniques into AKE constructions to capture leakage of the randomness. Unfortunately, the randomness (ephemeral secret) used in an AKE protocol is usually separate from that used in the authentication mechanism, i.e., the encryption or signature scheme, and the ephemeral key plays a crucial role in determining the session key. As mentioned before, no prior work addresses the leakage of the ephemeral secret key in AKE protocols. In particular, we must ensure that the session key is pseudo-random even if the randomness (i.e., the secret keying material) is leaked, which is a difficult task under a strong leakage setting (e.g., auxiliary input).

We should also note that there are some other research works on leakage-resilient AKE. While these works did not follow the approach of [4, 15], they still suffer from the same limitations mentioned above. Moriyama and Okamoto [34] introduced an eCK-based leakage-resilient model for AKE protocols. However, it only considers bounded leakage resilience [1] of the long-term secret key, not of the ephemeral secret key. Moreover, their construction requires the notion of \(\pi \)PRFs (pseudo-random functions), for which a construction in the standard model remains unknown.

Goal of this work Based on the aforementioned observations, we conclude that designing a strong, meaningful, yet achievable leakage model for AKE is of both practical and theoretical importance. In this paper, we give a formal study of AKE protocols in the auxiliary input model with the goal of closing the research gaps outlined above.

Table 1 Comparison with existing leakage-resilient AKE security models

1.2 Our contributions

In this work, we investigate both the modelling and the construction of AKE in the auxiliary input model, with the goal of developing a strong, meaningful, yet achievable framework that overcomes the limitations of the previous solutions. In particular, we:

  • Formulate a new security model named leakage-resilient eCK model w.r.t. auxiliary inputs (\(\mathsf {AI\hbox {-}LR\text{- }eCK}\)) for AKE protocols, which is the first eCK-based AKE security notion that captures computationally hard-to-invert leakage attacks on both the long-term secret and the randomness (i.e., the ephemeral secret) under some reasonable and necessary restrictions;

  • Propose a general framework for the construction of 2-round \(\mathsf {AI\hbox {-}LR\text{- }eCK}\)-secure AKE protocols to illustrate the feasibility of the \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model; we also provide a formal security analysis of the generic construction through a rigorous reduction proof;

  • Show an instantiation of the proposed general framework by instantiating all the building blocks in the standard model.

1.3 Overview of our techniques

In this section we present an overview of our techniques for modelling and constructing AKE with auxiliary inputs. The overview consists of three parts: the \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model, the generic construction, and the instantiations. We start by describing more precisely the notion of \(\mathsf {AI\hbox {-}LR\text{- }eCK}\).

Leakage-resilient eCK model w.r.t. auxiliary inputs As shown in Table 1, our proposed \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model resolves the limitations of the previous leakage models for AKE.

– Modelling general leakage attacks We incorporate the auxiliary input model into the eCK security notion in order to capture the computationally hard-to-invert leakage of both the long-term secret and ephemeral secret.

We define the set of admissible leakage functions \(\mathcal {H}\) by following the work of Dodis et al. [14]; that is, we require that the secret key be hard to compute even given the public key in addition to the leakage. The reason for considering this kind of auxiliary input function class is that, in terms of the authentication part, AKE protocols (in some sense) belong to the public-key setting (especially for the long-term key). Since the public key (e.g., \(\mathsf {lpk}\)) itself leaks some information about the secret key (\(\mathsf {lsk}\)), as pointed out by Dodis et al. [14], it would be impossible to prove that the scheme remains secure w.r.t. the class of admissible functions that are merely hard to invert without the public key being given. More precisely, for modelling long-term secret key (denoted by \(\mathsf {lsk}\)) leakage, we define \(\mathcal {H}_{\mathsf {lpk\hbox {-}ow}}(\epsilon _{\mathsf {lsk}})\) as the class of all polynomial-time computable functions \(h:\{0,1\}^{|\mathsf {lsk}|+|\mathsf {lpk}|}\rightarrow \{0,1\}^*\) such that given \((\mathsf {lpk},h(\mathsf {lsk},\mathsf {lpk}))\) (\(\mathsf {lpk}\) is the long-term public key), no probabilistic polynomial-time (PPT) adversary can find \(\mathsf {lsk}\) with probability greater than \(\epsilon _{\mathsf {lsk}}\). Similarly, to model the leakage of the randomness (the ephemeral secret, denoted by \(\mathsf {esk}\)), we define \(\mathcal {H}_{\mathsf {epk\hbox {-}ow}}(\epsilon _{\mathsf {esk}})\) as the class of all polynomial-time computable functions \(h:\{0,1\}^{|\mathsf {esk}|+|\mathsf {epk}|}\rightarrow \{0,1\}^*\) such that given \((\mathsf {epk},h(\mathsf {esk},\mathsf {epk}))\) (\(\mathsf {epk}\) denotes the ephemeral public key), no PPT adversary can find \(\mathsf {esk}\) with probability greater than \(\epsilon _{\mathsf {esk}}\).

Our proposed \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model deserves further interpretation. To model leakage queries, we define the query \(\mathsf {LongTermKeyLeakage}(f_{\mathsf {lsk}},\mathsf {pid})\), with which the adversary queries the leakage oracle and learns \(f_{\mathsf {lsk}}(lsk,lpk)\), where \(f_{\mathsf {lsk}}\in \mathcal {H}_{\mathsf {lpk\hbox {-}ow}}(\epsilon _{\mathsf {lsk}})\) and \((lsk,lpk)\) denotes the long-term key pair of party \(\mathsf {pid}\). As for ephemeral secret leakage, we also define the query \(\mathsf {EphemeralKeyLeakage}(f_{\mathsf {esk}},\mathsf {sid})\), with which the adversary accesses the leakage oracle and learns \(f_{\mathsf {esk}}(esk,epk)\), where \(f_{\mathsf {esk}}\in \mathcal {H}_{\mathsf {epk\hbox {-}ow}}(\epsilon _{\mathsf {esk}})\) and \((esk,epk)\) denotes the ephemeral key pair used by the owner of session \(\mathsf {sid}\). One should note that separating the leakage queries for the long-term key and the ephemeral key is reasonable in practice, as these keys are usually not stored in the same place (e.g., the long-term key can be stored in ROM, while ephemeral keys are stored in RAM). To be more precise, in the \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model, the adversary is allowed to reveal the long-term secrets of both parties and to obtain, through leakage queries, partial information about all the ephemeral secrets generated in the AKE protocol. We refer to Table 1 for more details. Therefore, compared to existing models, our proposed \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model is the strongest security model for AKE with auxiliary inputs.

– A reasonable assumption A somewhat unnatural assumption, namely disallowing leakage queries during the challenge session, has been introduced in previous models. We should note that it is impossible to remove this assumption without additional treatment (e.g., the split-state assumption of Alawatugoda et al. [2]). The reason is that, unlike for PKE or signatures, in AKE the adversary can actively participate in the challenge session. As a result, it can utilize the leakage function to break the authentication mechanism or to directly learn the session key.

In this paper, we address this definitional difficulty using a different and more reasonable approach. We observe that in reality, most leakage attacks are constrained by the hardware device and the measurement methods available in their physical implementations. Therefore, as pointed out in [20, 40], a fully adaptive choice of the leakage function may be an overly powerful way to capture leakage attacks. Inspired by this observation, we modify the fully adaptive definition used in previous models by asking the adversary to submit two leakage function sets \(\mathcal {F}_{\mathsf {lsk}}\subseteq \mathcal {H}_{\mathsf {lpk\hbox {-}ow}}(\epsilon _{\mathsf {lsk}})\) and \(\mathcal {F}_{\mathsf {esk}}\subseteq \mathcal {H}_{\mathsf {epk\hbox {-}ow}}(\epsilon _{\mathsf {esk}})\) prior to the game setup. During the security game, the adversary is only allowed to adaptively query leakage functions belonging to the committed sets. As the submitted leakage function sets are chosen by the adversary independently of the system parameters, the adversary can no longer trivially break the challenge session, and hence the aforementioned unnatural assumption can be abolished.

One may object that artificially requiring the adversary to fix the leakage function sets is somewhat counter-intuitive, as it seems that we bypass the definitional difficulty by simply reducing the leakage coverage. This is actually not the case. As illustrated above, most practical leakage attacks depend on the physical implementation rather than on the underlying cryptosystem. In fact, a similar treatment has been adopted for other leakage-resilient cryptographic schemes [17, 20, 38, 40]. Therefore, compared to the unnatural assumption that disallows the adversary from accessing the leakage oracle during the challenge session, our assumption is meaningful in capturing most of the leakage that occurs in practice, and hence is more reasonable. Moreover, unlike the notion of after-the-fact leakage, which implicitly requires the split-state assumption, our approach is more general and also more natural for tackling randomness leakage.

Our generic construction We present a generic framework for constructing \(\mathsf {AI\hbox {-}LR\text{- }eCK}\)-secure AKE protocols to illustrate the feasibility of our proposed security model. Our generic construction utilizes several building blocks, including a strong extractor with hard-to-invert auxiliary inputs (denoted by \(\mathsf {AI\hbox {-}SE}\) in the rest of the paper), an enhanced twisted pseudo-random function (\(\mathsf {tPRF}\)), a smooth projective hash function (\(\mathsf {SPHF}\)), and a signature scheme that is existentially unforgeable under chosen message and auxiliary input attacks (EU-CMAA).

– Underlying primitives The notion of \(\mathsf {AI\hbox {-}SE}\) was introduced by Yuen et al. [41]. A randomness extractor \(\mathsf {Ext}\) is said to be a strong extractor with hard-to-invert auxiliary inputs if no PPT adversary can distinguish \((r,f(x),\mathsf {Ext}(x,r))\) from \((r,f(x),u)\), where r and u are chosen uniformly at random and f is any computationally hard-to-invert function. It has been shown by Yuen et al. [41] that such a strong extractor can be constructed based on the modified Goldreich–Levin theorem of Dodis et al. [14].

The notion of a twisted pseudo-random function (\(\mathsf {tPRF}\)) was introduced and used in [21, 30] for constructing AKE protocols in the traditional setting. A \(\mathsf {tPRF}\) \(\widetilde{F}\) is pseudo-random in two different senses. First, for any polynomial q, no PPT adversary can distinguish \((x_1,\ldots ,x_q,\widetilde{F}(K,x_1),\ldots ,\widetilde{F}(K,x_q))\) from \((x_1,\ldots ,x_q,R_1,\ldots ,R_q)\), where K and \(\{x_i, R_i\}_{i=1}^q\) are randomly chosen. Second, no PPT adversary can distinguish \((K,\widetilde{F}(K,x))\) from \((K,R)\) for randomly chosen K, x, R. In this work, we consider an enhanced variant of this notion: we require that the above two properties of a \(\mathsf {tPRF}\) still hold when \(\{x_i\}_{i=1}^q\) or K, respectively, are adaptively chosen by the PPT adversary. An instantiation of a \(\mathsf {tPRF}\) from a normal pseudo-random function was given in [30], which shows that pseudo-random functions do imply \(\mathsf {tPRF}\)s. One can easily verify that this instantiation is also an enhanced \(\mathsf {tPRF}\).

The definition of a smooth projective hash function (\(\mathsf {SPHF}\)) [13] requires the existence of a domain \(\mathcal X\) and an underlying \(\mathcal {NP}\) language \(\mathcal L\), where the elements of \(\mathcal {L}\) form a subset of \(\mathcal X\), i.e., \( \mathcal {L} \subset \mathcal X\). The key property of an \(\mathsf {SPHF}\) is that the hash value of any word \(W \in \mathcal L\) can be computed using either a secret hashing key, or a public projection key together with a witness to the fact that \(W \in \mathcal L\). However, the projection key gives almost no information about the hash value of any point in \(\mathcal {X}\setminus \mathcal {L}\). In this paper, we adopt an extension of the \(\mathsf {SPHF}\) notion that introduces a new algorithm \(\mathsf {WordG}\) for word generation, and we require the \(\mathsf {SPHF}\) to be based on a hard-on-the-average \(\mathcal {NP}\) language and hence to be pseudo-random [23]. That is, given a word \(W \in \mathcal L\), without the corresponding witness or the secret hashing key, the distribution of its hash value is computationally indistinguishable from the uniform distribution.

– Roadmap of our framework Our framework is based on the eCK-secure AKE protocols proposed in [21, 30], which are in the traditional (i.e., non-leakage) setting. Our 2-round \(\mathsf {AI\hbox {-}LR\text{- }eCK}\)-secure AKE protocol works as follows. First, for both the initiator and the responder, we apply the \(\mathsf {AI\hbox {-}SE}\) to the long-term secret key lsk and the ephemeral secret key esk to extract two new secrets, \(\widetilde{lsk}\) and \(\widetilde{esk}\). By the definition of \(\mathsf {AI\hbox {-}SE}\), \(\widetilde{lsk}\) and \(\widetilde{esk}\) are indistinguishable from random values in the view of the adversary, even in the presence of computationally hard-to-invert leakage on lsk and esk.

We then use \(\widetilde{lsk}\) and \(\widetilde{esk}\) to generate a session-unique secret hashing key and a witness for the \(\mathsf {WordG}\) algorithm of the \(\mathsf {SPHF}\). However, since in the \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model the adversary can reveal either the long-term secret key or the ephemeral secret key, directly applying the \(\mathsf {SPHF}\), e.g., by setting one of \(\widetilde{lsk}\) and \(\widetilde{esk}\) as the secret hashing key, would allow the adversary to reveal lsk or esk and thereby obtain the secret hashing key (as \(\mathsf {AI\hbox {-}SE}\) guarantees nothing in this situation!). For a similar reason, we cannot directly use \(\widetilde{lsk}\) or \(\widetilde{esk}\) as the witness in word generation. To tackle this problem, our construction generates a session-specific secret hashing key and a witness for the \(\mathsf {WordG}\) algorithm by applying a \(\mathsf {tPRF}\) \(\widetilde{F}(\widetilde{lsk}, \widetilde{esk})\). More precisely, the initiator derives two random values \((r_1,r_2)\) from the \(\mathsf {tPRF}\). It first applies \(r_1\) to generate the hashing key and the projection key \(\mathsf {hp}\). It then signs \(\mathsf {hp}\) using \(r_2\) and sends \(\mathsf {hp}\) with the corresponding signature to the responder. Upon receiving the message from the initiator, the responder verifies the signature and then executes the same procedure to generate two random values \((w,r_3)\) by applying the \(\mathsf {tPRF}\). It then uses w as the witness to generate the word W and signs W together with the other communicated messages using \(r_3\). Finally, it returns W and the corresponding signature to the initiator. The initiator and the responder can then both compute the hash value of W, which is the shared session key, since the initiator knows the secret hashing key while the responder has the projection key and the witness.
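To make this flow concrete, the following toy, runnable sketch (Python) traces only the session-key derivation of one session: signatures and certificates are omitted, the \(\mathsf {tPRF}\) is replaced by an HMAC-based stand-in, and the \(\mathsf {SPHF}\) is the Diffie–Hellman instantiation \(\mathcal {SPHF}_{\mathsf {DH}}\) sketched in Sect. 2.4, over a deliberately tiny group. All concrete values are illustrative assumptions, not the formal construction.

```python
import hmac, hashlib

# Toy group for illustration: g1, g2 generate the subgroup of prime order
# q = 11 in Z_23^*. The byte strings below stand in for the extracted
# secrets lsk~ and esk~ of each party.
p, q, g1, g2 = 23, 11, 2, 8

def tprf(key: bytes, x: bytes):
    """Stand-in for the enhanced tPRF: derives two Z_q values per session."""
    d = hmac.new(key, x, hashlib.sha256).digest()
    return d[0] % q, d[1] % q

# Initiator: (alpha, beta) <- tPRF(lsk~_A, esk~_A) is the hashing key hk;
# the projection key hp is sent to the responder (signed, using r2).
alpha, beta = tprf(b'lsk~_A', b'esk~_A')
hp = (pow(g1, alpha, p) * pow(g2, beta, p)) % p

# Responder: (w, r3) <- tPRF(lsk~_B, esk~_B); w is the witness for the
# word W = WordG(w), which is sent back (signed, using r3).
w, _r3 = tprf(b'lsk~_B', b'esk~_B')
W = (pow(g1, w, p), pow(g2, w, p))

# Both sides compute the same hash value, i.e., the shared session key.
SK_initiator = (pow(W[0], alpha, p) * pow(W[1], beta, p)) % p   # Hash(hk, W)
SK_responder = pow(hp, w, p)                                    # ProjHash(hp, W, w)
assert SK_initiator == SK_responder
```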

It is worth noting that in our framework, we require the underlying signature scheme to be existentially unforgeable under chosen message and auxiliary input attacks (EU-CMAA) [19]. However, as illustrated by Faust et al. [19], no EU-CMAA-secure signature scheme w.r.t. polynomially hard-to-invert leakage exists, the reason being that the signing algorithm itself can be regarded as a polynomially hard-to-invert leakage function. To address this problem, Faust et al. [19] considered a restricted class of leakage functions that are exponentially hard to invert. Fortunately, we do not need to enforce such a restriction on the leakage functions in this work. This is due to the fact that in our general framework, for authentication, the initiator (\(\mathcal {A}\)) generates the signature on the transcript \((\mathsf {hp},\widehat{A},lpk_{\mathcal {B}},\widehat{B})\), where \(lpk_{\mathcal {B}}\) is the long-term public key of the responder (\(\mathcal {B}\)), and likewise the responder signs the transcript \((W_{\mathcal {B}},\widehat{B},\mathsf {hp},\widehat{A})\), which contains the ephemeral public key \(\mathsf {hp}\) of the initiator. Therefore, the adversary cannot forge a signature for the challenge session through leakage queries: since it must specify the allowed leakage function sets prior to the game setup in the \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model, neither \(lpk_{\mathcal {B}}\) nor \(\mathsf {hp}\) can be embedded into a leakage function by the adversary.

An instantiation of our framework We show that the building blocks of our framework can be instantiated in the standard model. To be precise, we first describe the construction of an \(\mathsf {AI\hbox {-}SE}\) introduced in [41], using the inner product from the Goldreich–Levin theorem [14] over any field GF(q) for a prime q. A simple instantiation of a \(\mathsf {tPRF}\) from \(\mathsf {PRF}\)s is then presented. We introduce the Diffie–Hellman language \(\mathcal {L}_{\mathsf {DH}}=\{(u_1,u_2)\,|\,\exists r\in \mathbb Z_p\ \text {s.t.}\ u_1=g_1^r,u_2=g_2^r\}\), where \(\mathbb G\) is a group of prime order p and \(g_1,g_2 \in \mathbb {G}\) are generators. An \(\mathsf {SPHF}\), denoted by \(\mathcal {SPHF}_{\mathsf {DH}}\), is then constructed based on \(\mathcal {L}_{\mathsf {DH}}\). We show that the EU-CMAA-secure signature scheme of Faust et al. [19] (henceforth called the FHNNZ signature scheme) can be used as the underlying signature scheme for instantiating our framework.

2 Preliminaries

In this section, we introduce some notations and definitions used in our construction.

2.1 Notations

In this paper, for a finite set \(\varOmega \), \(\omega \mathop {\leftarrow }\limits ^{\$}\varOmega \) denotes that \(\omega \) is selected uniformly at random from \(\varOmega \).

Computational indistinguishability Let \(\mathcal {V}_1\) and \(\mathcal {V}_2\) be two probability distributions over a finite set \(\varOmega \), where \(|\varOmega | \ge 2^k\) and k is a security parameter. We define a distinguisher \(\widetilde{\mathcal {D}}\) as follows. In the game, \(\widetilde{\mathcal {D}}\) takes as input \(\mathcal {V}_1\) and \(\mathcal {V}_2\), and the challenger flips a coin \(\gamma \mathop {\leftarrow }\limits ^{\$}\{0,1\}\). \(\widetilde{\mathcal {D}}\) is then given an element \(v_1 \mathop {\leftarrow }\limits ^{\$}\mathcal {V}_1\) if \(\gamma =1\), and otherwise an element \(v_2 \mathop {\leftarrow }\limits ^{\$}\mathcal {V}_2\). Finally, \(\widetilde{\mathcal {D}}\) outputs a bit \(\gamma '\in \{0,1\}\) as its guess of \(\gamma \). We define the advantage of \(\widetilde{\mathcal {D}}\) in this game as \(\mathsf {Adv}_{\widetilde{\mathcal {D}}}^{\mathcal {V}_1,\mathcal {V}_2}(k)=\Pr [\gamma '=\gamma ]-1/2\). We say that \(\mathcal {V}_1\) and \(\mathcal {V}_2\) are computationally indistinguishable, denoted by \(\mathcal {V}_1\mathop {\equiv }\limits ^{c}\mathcal {V}_2\), if for any probabilistic polynomial-time (PPT) distinguisher \(\widetilde{\mathcal {D}}\), \(\mathsf {Adv}_{\widetilde{\mathcal {D}}}^{\mathcal {V}_1,\mathcal {V}_2}(k)\) is negligible.
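As a minimal illustration, one round of this game can be coded as follows (`sample_V1`, `sample_V2`, and `distinguisher` are assumed callables supplied by the experiment):

```python
import secrets

def ind_game_round(sample_V1, sample_V2, distinguisher) -> bool:
    """One round of the distinguishing game above. Over many independent
    rounds, the advantage is (fraction of True results) - 1/2."""
    gamma = secrets.randbits(1)                     # challenger's coin flip
    v = sample_V1() if gamma == 1 else sample_V2()  # sample from V_1 or V_2
    return distinguisher(v) == gamma                # correct guess of gamma?
```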

2.2 Strong extractor with hard-to-invert auxiliary inputs

A central component of our construction is a variant of a strong randomness extractor proposed in [41]. This notion, named strong extractor with \(\epsilon \)-hard-to-invert auxiliary inputs, is defined as follows.

Definition 1

(Strong extractor with \(\epsilon \)-hard-to-invert auxiliary inputs [41]). Let \(\mathsf {Ext}:\{0,1\}^{l_1}\times \{0,1\}^{l_2}\rightarrow \{0,1\}^{m'}\), where \(l_1,l_2\) and \(m'\) are polynomial in the security parameter k. \(\mathsf {Ext}\) is said to be a strong extractor with \(\epsilon \)-hard-to-invert auxiliary inputs if for all pairs (x, f) such that \(x\in \{0,1\}^{l_2}\) and \(f\in \mathcal {H}_{\mathsf {ow}}(\epsilon )\), we have:

$$\begin{aligned} \{(r,f(x),\mathsf {Ext}(x,r))\}\mathop {\equiv }\limits ^{c}\{(r,f(x),u)\} \end{aligned}$$

where \(r\in \{0,1\}^{l_1},u\in \{0,1\}^{m'}\) are chosen uniformly at random.
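For intuition, the instantiation of \(\mathsf {Ext}\) described in Sect. 1.3 is an inner product over GF(q); a minimal sketch, with an illustrative modulus and vector length:

```python
import secrets

Q = 2**127 - 1   # an illustrative prime modulus; any suitable prime q works
N = 8            # illustrative vector length

def ext_inner_product(x, r, q=Q):
    """Ext(x, r) = <x, r> mod q for secret vector x and uniform seed r."""
    assert len(x) == len(r)
    return sum(xi * ri for xi, ri in zip(x, r)) % q

# Usage: extract one GF(q) element from the secret x using a fresh seed r.
x = [secrets.randbelow(Q) for _ in range(N)]
r = [secrets.randbelow(Q) for _ in range(N)]
u = ext_inner_product(x, r)
```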

It is worth noting that the above leakage function \(f\in \mathcal {H}_{\mathsf {ow}}(\epsilon )\) can be viewed as a composition of q functions \(f_1,f_2,\ldots ,f_q\), where \(q\in \mathbb N^+\) and \(f_i\in \mathcal {H}_{\mathsf {ow}}(\epsilon )\). Details are provided in Sect. 3.1. The following lemma was obtained in [41].

Lemma 1

([41]). Let \(r\in \{0,1\}^{l_1}\) be chosen uniformly at random. For any pair (x, f) where \(x\in \{0,1\}^{l_2}\) and \(f\in \mathcal {H}_{\mathsf {ow}}(\epsilon )\), no PPT adversary can recover x with probability \(\ge \epsilon \) given \((r,f(x),\mathsf {Ext}(x,r))\), provided that \(\mathsf {Ext}\) is a strong extractor with \(\epsilon \)-hard-to-invert auxiliary inputs.

2.3 Twisted pseudo-random function

In this paper, we apply a specific type of pseudo-random function, called a twisted pseudo-random function (\(\mathsf {tPRF}\)), in our construction. The twisted \(\mathsf {PRF}\) was introduced in [21] and later improved in [30]. Here we briefly recall the improved twisted \(\mathsf {PRF}\) and introduce the enhanced version used in our work.

Let \(k\in \mathbb {N}\) be a security parameter. A function family \(\mathsf {F}\) is associated with \(\{\mathsf {Seed}_k\}_{k\in \mathbb {N}}\), \(\{\mathsf {Dom}_k\}_{k\in \mathbb {N}}\) and \(\{\mathsf {Rng}_k\}_{k\in \mathbb {N}}\). Formally, for any \(\varSigma \mathop {\leftarrow }\limits ^{\$} \mathsf {Seed}_k\), \(\sigma \mathop {\leftarrow }\limits ^{\$} \varSigma \), \(\mathcal {D} \mathop {\leftarrow }\limits ^{\$} \mathsf {Dom}_k\) and \(\mathcal {R} \mathop {\leftarrow }\limits ^{\$} \mathsf {Rng}_k\), \(\mathsf {F}_{\sigma }^{k,\varSigma ,\mathcal {D},\mathcal {R}}\) defines a function which maps an element of \(\mathcal {D}\) to an element of \(\mathcal {R}\). That is, \(\mathsf {F}_{\sigma }^{k,\varSigma ,\mathcal {D},\mathcal {R}}(\rho )\in \mathcal {R}\) for any \(\rho \in \mathcal {D}\).

Definition 2

(\(\mathsf {PRF}\)) We say that \(\mathsf {F}\) is a pseudo-random function \((\mathsf {PRF})\) family if

$$\begin{aligned} \{\mathsf {F}_{\sigma }^{k,\varSigma ,\mathcal {D},\mathcal {R}}(\rho _i)\}\mathop {\equiv }\limits ^{c} \{\mathsf {RF}(\rho _i)\} \end{aligned}$$

for any \(\rho _i \in \mathcal D\) adaptively chosen by any polynomial time distinguisher, where \(\mathsf {RF}\) is a truly random function. That is, for any \(\rho \in \mathcal {D}, \mathsf {RF}(\rho )\mathop {\leftarrow }\limits ^{\$}\mathcal {R}\).

Definition 3

(Twisted \(\mathsf {PRF}\)) We say that a function \(\widetilde{F}:\{0,1\}^{t_1}\times \{0,1\}^{t_2}\rightarrow \{0,1\}^{t_2}\) is a twisted pseudo-random function \((\mathsf {tPRF})\) if

  • For any polynomial q(k),

    $$\begin{aligned} \{(x_1,\ldots ,x_{q(k)},\widetilde{F}(K,x_1),\ldots ,\widetilde{F}(K,x_{q(k)}))\}\mathop {\equiv }\limits ^{c}\{(x_1,\ldots ,x_{q(k)},R_1,\ldots ,R_{q(k)})\} \end{aligned}$$

    where \(K\in \{0,1\}^{t_1}\), \(x_i\in \{0,1\}^{t_2}\), \(R_i\in \{0,1\}^{t_2}\) are randomly chosen for all \(i=1,\ldots ,q(k)\), and

  • For any randomly chosen K, x, R,

    $$\begin{aligned} \{(K,\widetilde{F}(K,x))\}\mathop {\equiv }\limits ^{c}\{(K,R)\} \end{aligned}$$

An enhanced variant of the twisted PRF In this work, we consider an enhanced variant of the \(\mathsf {tPRF}\). That is, similarly to the distinguisher defined in the security game of a \(\mathsf {PRF}\), we allow the distinguisher to adaptively choose the input in the security games of the \(\mathsf {tPRF}\). Precisely, the two properties of the \(\mathsf {tPRF}\) defined above should still hold when \((x_1,\ldots ,x_{q(k)})\) or K, respectively, are chosen adaptively by a polynomial-time distinguisher in the two security games.

An instantiation of (enhanced) \(\mathsf {tPRF}\) from \(\mathsf {PRF}\) In [30], the authors showed that a twisted pseudo-random function \(\widetilde{F}:\{0,1\}^{t_1}\times \{0,1\}^{t_2}\rightarrow \{0,1\}^{t_2}\) can be instantiated from a pseudo-random function \(F:\{0,1\}^{t_1'}\times \{0,1\}^{t_1'}\rightarrow \{0,1\}^{t_2}\) as follows,

$$\begin{aligned} \widetilde{F}\big (\big (k,k'\big ),\big (a,a'\big )\big )=F_{k}(a)\oplus F_{a'}\big (k'\big ) \end{aligned}$$

where \(t_1=t_2=2t_1'\) and \(k,k',a,a'\) are randomly chosen from \(\{0,1\}^{t_1'}\). A formal proof that \(\widetilde{F}\) is a \(\mathsf {tPRF}\) whenever F is a \(\mathsf {PRF}\) was given in [30]; hence \(\mathsf {PRF}\)s do imply \(\mathsf {tPRF}\)s. One can easily verify that this instantiation is also an enhanced \(\mathsf {tPRF}\).
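A direct transcription of this instantiation in Python, using HMAC-SHA256 purely as an assumed stand-in for the underlying \(\mathsf {PRF}\) F:

```python
import hmac, hashlib

def F(key: bytes, msg: bytes) -> bytes:
    """The underlying PRF F; HMAC-SHA256 is an illustrative stand-in."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def twisted_F(K, x):
    """tPRF of [30]: F~((k, k'), (a, a')) = F_k(a) XOR F_{a'}(k')."""
    (k, k_prime), (a, a_prime) = K, x
    return bytes(u ^ v for u, v in zip(F(k, a), F(a_prime, k_prime)))

# Usage: K = (k, k') is the key and x = (a, a') the input, each half
# playing the roles of key/input in one of the two inner PRF calls.
out = twisted_F((b'k' * 16, b'K' * 16), (b'a' * 16, b'A' * 16))
```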

2.4 Smooth projective hash function

Smooth projective hash functions (\(\mathsf {SPHF}\)s) were originally introduced by Cramer and Shoup [13] and have been extended for the construction of many cryptographic primitives [23, 25, 28]. We start with the definition of the slight variant required in this paper.

Syntax (extension) Roughly speaking, the definition of an \(\mathsf {SPHF}\) requires the existence of domains \(\mathcal X,\mathcal Y\) and an underlying \(\mathcal {NP}\) language \(\mathcal L\), where the elements of \(\mathcal {L}\) form a subset of \(\mathcal X\), i.e., \( \mathcal {L} \subset \mathcal X\). In order to make the \(\mathsf {SPHF}\) notion well suited to our work, we need an extension of the \(\mathsf {SPHF}\) in this paper. Formally, our \(\mathsf {SPHF}\) is defined by the following algorithms.

  • \(\mathsf {SPHFSetup}\big (1^{k}\big )\): outputs the global parameters \(\mathsf {param} \) and the language \(\mathcal L\);

  • \(\mathsf {HashKG}(\mathcal L, \mathsf {param})\): generates a hashing key \(\mathsf {hk}\) for the language \(\mathcal L\);

  • \(\mathsf {ProjKG}(\mathsf {hk},(\mathcal L, \mathsf {param}))\): derives projection key \(\mathsf {hp}\) from the hashing key \(\mathsf {hk}\);

  • \(\mathsf {WordG}(\mathcal L, \mathsf {param},w)\): generates a word \(W\in \mathcal {L}\) with witness w;

  • \(\mathsf {Hash}(\mathsf {hk},(\mathcal L, \mathsf {param}),W)\): outputs the hash value \(\mathsf {hv}\in \mathcal Y\) of the word W from the secret hashing key \(\mathsf {hk}\);

  • \(\mathsf {ProjHash}(\mathsf {hp},(\mathcal L, \mathsf {param}),W,w)\): outputs the hash value \(\mathsf {hv'} \in \mathcal Y\) of the word W from the projection key \(\mathsf {hp}\) and the witness w for the fact that \(W \in \mathcal L\).

Correctness A key property of an SPHF is that, for any word \(W\in \mathcal L\), the hash value of W can be computed using either the secret hashing key, which also works for any point in \(\mathcal X \setminus \mathcal L\), or the public projection key, which works only for points \(W \in \mathcal L \) and requires knowledge of the witness w for the fact that \(W \in \mathcal L\). Formally, let \(W=\mathsf {WordG}(\mathcal L, \mathsf {param},w)\); then for every hashing key \(\mathsf {hk}\) and corresponding projection key \(\mathsf {hp}\), we have

$$\begin{aligned} \mathsf {Hash}(\mathsf {hk},(\mathcal L, \mathsf {param}),W)=\mathsf {ProjHash}(\mathsf {hp},(\mathcal L, \mathsf {param}),W,w). \end{aligned}$$
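As a concrete illustration of these algorithms and of the correctness equation, the following toy sketch implements the Diffie–Hellman instantiation \(\mathcal {SPHF}_{\mathsf {DH}}\) from Sect. 1.3 over a deliberately small group (parameters chosen for readability, not security):

```python
import secrets

# Toy parameters: g1, g2 generate the subgroup of prime order q = 11 in Z_23^*.
p, q, g1, g2 = 23, 11, 2, 8

def HashKG():                 # hk = (alpha, beta) <-$- Z_q x Z_q
    return (secrets.randbelow(q), secrets.randbelow(q))

def ProjKG(hk):               # hp = g1^alpha * g2^beta
    return (pow(g1, hk[0], p) * pow(g2, hk[1], p)) % p

def WordG(w):                 # W = (g1^w, g2^w), a word of L_DH with witness w
    return (pow(g1, w, p), pow(g2, w, p))

def Hash(hk, W):              # hv = u1^alpha * u2^beta
    return (pow(W[0], hk[0], p) * pow(W[1], hk[1], p)) % p

def ProjHash(hp, w):          # hv' = hp^w
    return pow(hp, w, p)

hk = HashKG(); hp = ProjKG(hk)
w = secrets.randbelow(q); W = WordG(w)
assert Hash(hk, W) == ProjHash(hp, w)   # the correctness equation above
```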

Pseudo-randomness Another property of an SPHF is that the projection key gives almost no information about the hash value of any point in \(\mathcal {X}\setminus \mathcal {L}\) (smoothness). In this paper, instead of requiring the SPHF to be smooth, we require it to be pseudo-random, a property first introduced in [23]. Roughly speaking, if a word \(W \in \mathcal L\), then without the corresponding witness w the distribution of the hash value is computationally indistinguishable from the uniform distribution over the hash value space \(\mathcal {Y}\). Formally, we have the following definition.

Definition 4

(Pseudo-random SPHF) A smooth projective hash function \(\mathcal {SPHF}=(\mathsf {SPHFSetup}\), \(\mathsf {HashKG}, \mathsf {ProjKG}, \mathsf {WordG}, \mathsf {Hash}, \mathsf {ProjHash})\) is pseudo-random if for any PPT adversary \(\mathcal {M}\), the advantage

$$\begin{aligned} \mathsf {Adv}_{\mathcal {SPHF},\mathcal {M}}^{\mathsf {PR}}(k)=\Pr \left[ \mathcal {M}\left( \mathcal L,\mathsf {param},\mathsf {hp},W,\mathsf {Hash}(\mathsf {hk},(\mathcal L, \mathsf {param}),W)\right) =1\right] -\Pr \left[ \mathcal {M}\left( \mathcal L,\mathsf {param},\mathsf {hp},W,\mathsf {hv}\right) =1\right] \end{aligned}$$

is negligible in k, where \((\mathcal L,\mathsf {param})\leftarrow \mathsf {SPHFSetup}(1^k)\), \(\mathsf {hk}\leftarrow \mathsf {HashKG}(\mathcal L, \mathsf {param})\), \(\mathsf {hp}\leftarrow \mathsf {ProjKG}(\mathsf {hk},(\mathcal L, \mathsf {param}))\), \(W\leftarrow \mathsf {WordG}(\mathcal L, \mathsf {param},w)\) for a randomly chosen witness w, and \(\mathsf {hv}\mathop {\leftarrow }\limits ^{\$}\mathcal {Y}\).

2.5 Signature schemes secure against hard-to-invert leakage

A signature scheme consists of a tuple of PPT algorithms (\(\mathsf {KeyGen},\mathsf {Sign}, \mathsf {Verify}\)) defined as follows.

  • \(\mathsf {KeyGen}(1^{k})\): generates the signing and verification key \((\mathsf {sk,vk})\);

  • \(\mathsf {Sign}(\mathsf {sk},m)\): outputs a signature \(\sigma \) on a message m;

  • \(\mathsf {Verify}(\mathsf {vk},m,\sigma )\): outputs 1 if the signature is accepted and 0 otherwise.

A signature scheme should satisfy the following correctness property: for any message m and keys \((\mathsf {sk,vk})\leftarrow \mathsf {KeyGen}(1^k)\), \(\Pr [\mathsf {Verify}(\mathsf {vk},m,\mathsf {Sign}(\mathsf {sk},m))=1]=1\). The standard security notion for a signature scheme is existential unforgeability under chosen message attacks (EU-CMA). In this paper, we are interested in the following notion proposed in [19], namely existential unforgeability under chosen message and auxiliary input attacks (EU-CMAA).

Definition 5

(EU-CMAA) [19]. A signature scheme \(\mathcal {SIG}=(\mathsf {KeyGen},\mathsf {Sign}, \mathsf {Verify})\) is EU-CMAA-secure w.r.t. a class \(\mathcal {H}\) of admissible leakage functions if for any PPT adversary \(\mathcal {M}\) and any function \(h\in \mathcal {H}\),

$$\begin{aligned} \mathsf {Adv}_{\mathcal {SIG},\mathcal {H},\mathcal {M}}^{\mathsf {EU\hbox {-}CMAA}}(k)=\Pr \left[ \begin{array}{c} (sk,vk)\leftarrow \mathsf {KeyGen}(1^{k});\\ (m^*,\sigma ^*)\leftarrow \mathcal {M}^{\mathcal {O}(sk,\cdot )}(vk,h(vk,sk)):\\ \mathsf {Verify}(vk,m^*,\sigma ^*)=1. \end{array} \right] \end{aligned}$$

is negligible in k, where the oracle \(\mathcal {O}(sk,\cdot )\) outputs \(\mathsf {Sign}(sk,m)\) on any input message m, and \(\mathcal {M}\) is required not to have queried \(m^*\) to \(\mathcal {O}(sk,\cdot )\).

2.6 The extended Canetti–Krawczyk (eCK) model for AKE

In this subsection, we review the eCK security model for AKE protocols.

AKE protocols An AKE protocol is run among parties \((\mathcal {A},\mathcal {B},\mathcal {C},\ldots )\), which are modelled as probabilistic polynomial-time Turing machines. Each party has a long-term secret key (\(\mathsf {lsk}\)) together with a certificate that binds the corresponding long-term public key (\(\mathsf {lpk}\)) to the party. We denote by \(\widehat{A}\) (resp. \(\widehat{B}\)) the long-term public key of party \(\mathcal {A}\) (resp. \(\mathcal {B}\)) together with the certificate issued by the certificate authority \(\mathcal {CA}\).

Any two parties, say \(\mathcal {A}\) and \(\mathcal {B}\), can be activated to run an instance of the AKE protocol, called a session, to obtain a shared session key. During the session execution, the party that sends the first message is called the initiator and the party that first receives a message is called the responder. Suppose that \(\mathcal {A}\) is the session initiator. When the session is activated, \(\mathcal {A}\) generates an ephemeral public/secret key pair \((epk_{\mathcal {A}},esk_{\mathcal {A}})\) and sends \((\widehat{B},\widehat{A},epk_{\mathcal {A}})\) to the session responder \(\mathcal {B}\). \(\mathcal {B}\) then creates an ephemeral public/secret key pair \((epk_{\mathcal {B}},esk_{\mathcal {B}})\) and sends \((\widehat{A},\widehat{B},epk_{\mathcal {A}},epk_{\mathcal {B}})\) to \(\mathcal {A}\). At the end of the session execution, each party derives the shared session key by taking as input its own long-term secret key and ephemeral secret key, along with the long-term public key and ephemeral public key received from the other party.

The session of initiator \(\mathcal {A}\) with responder \(\mathcal {B}\) is identified by the session identifier \((\widehat{A},\widehat{B},epk_{\mathcal {A}},epk_{\mathcal {B}})\), where \(\mathcal {A}\) is the owner of the session and \(\mathcal {B}\) the peer of the session. The session of responder \(\mathcal {B}\) with initiator \(\mathcal {A}\) is identified by \((\widehat{B},\widehat{A},epk_{\mathcal {B}},epk_{\mathcal {A}})\), where \(\mathcal {B}\) is the owner of the session and \(\mathcal {A}\) the peer of the session. Session \((\widehat{B},\widehat{A},epk_{\mathcal {B}},epk_{\mathcal {A}})\) is said to be the matching session of \((\widehat{A},\widehat{B},epk_{\mathcal {A}},epk_{\mathcal {B}})\). If a party outputs a session key at the end of the session, we say the session is completed.
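In code, the matching-session relation is just a mirror test on the session identifier (a small sketch; the tuple format follows the identifiers above):

```python
def is_matching(sid, sid_bar) -> bool:
    """sid = (owner_cert, peer_cert, epk_owner, epk_peer). sid_bar matches
    sid when the roles and the two ephemeral public keys are swapped."""
    A_hat, B_hat, epk_A, epk_B = sid
    return sid_bar == (B_hat, A_hat, epk_B, epk_A)
```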

eCK security model The extended Canetti–Krawczyk (eCK) model was proposed by LaMacchia, Lauter and Mityagin [31] based on the CK model for AKE protocols formulated by Canetti and Krawczyk [10]. Roughly speaking, in the eCK definition, the adversary \(\mathcal {M}\) is modelled as a probabilistic polynomial-time Turing machine that controls all communications between the honest parties. In the eCK model, the adversary \(\mathcal {M}\) is allowed to issue the following oracle queries.

  • \(\mathsf {Send}(\mathcal {A},\mathcal {B},message)\). Sends message to party \(\mathcal {A}\) on behalf of party \(\mathcal {B}\) and returns \(\mathcal {A}\)’s response. This query allows \(\mathcal {M}\) to activate \(\mathcal {A}\) to start a session and to impersonate party \(\mathcal {B}\) in its communication with \(\mathcal {A}\).

  • \(\mathsf {EstablishParty(pid)}\). This query allows the adversary to register a long-term public key on behalf of party \(\mathsf {pid}\), which is then said to be dishonest. A party that is not dishonest is called honest.

  • \(\mathsf {LongTermKeyReveal(pid)}\). This query allows the adversary to learn the long-term secret key of party \(\mathsf {pid}\).

  • \(\mathsf {SessionKeyReveal(sid)}\). This query allows the adversary to obtain the session key of the completed session \(\mathsf {sid}\).

  • \(\mathsf {EphemeralKeyReveal(sid)}\). This query allows the adversary to obtain the ephemeral secret key of session \(\mathsf {sid}\).

Eventually, adversary \(\mathcal {M}\) selects a completed session \(\mathsf {sid}^*\) as the test session and makes a query \(\mathsf {Test}(\mathsf {sid}^*)\) as follows.

  • \(\mathsf {Test(sid^*)}\). To answer this query, the challenger picks \(b\mathop {\leftarrow }\limits ^{\$} \{0,1\}\). If \(b=1\), the challenger returns \(SK^*\leftarrow \mathsf {SessionKeyReveal(sid^*)}\). Otherwise, the challenger sends \(\mathcal {M}\) a random key \(R^* \mathop {\leftarrow }\limits ^{\$} \{0,1\}^{|SK^*|}\).
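A minimal sketch of the challenger's behaviour on this query:

```python
import secrets

def test_query(real_session_key: bytes) -> bytes:
    """Returns the real session key if b = 1 and a uniform key of the same
    length otherwise; the adversary must guess which case occurred."""
    b = secrets.randbits(1)
    if b == 1:
        return real_session_key
    return secrets.token_bytes(len(real_session_key))
```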

Note that the \(\mathsf {Test}\) query can be issued only once and the game terminates as soon as \(\mathcal {M}\) outputs its guess \(b'\) on b. Here, we require the test session to be a fresh session which is defined as follows.

Definition 6

(Fresh session in the eCK model) Let \(\mathsf {sid}\) be a completed session owned by an honest party \(\mathcal {A}\) with peer \(\mathcal {B}\), who is also honest. If a matching session of \(\mathsf {sid}\) exists, we denote it by \(\overline{\mathsf {sid}}\). Session \(\mathsf {sid}\) is said to be fresh if none of the following conditions holds (a code sketch of this check is given after the list):

  • \(\mathcal {M}\) issues a \(\mathsf {SessionKeyReveal(sid)}\) query or a \(\mathsf {SessionKeyReveal(\overline{sid})}\) query (if \(\overline{\mathsf {sid}}\) exists).

  • \(\overline{\mathsf {sid}}\) exists and \(\mathcal {M}\) issues either

    • \(\mathsf {LongTermKeyReveal}(\mathcal {A})\wedge \mathsf {EphemeralKeyReveal(sid)} \), or

    • \(\mathsf {LongTermKeyReveal}(\mathcal {B})\wedge \mathsf {EphemeralKeyReveal(\overline{sid})} \).

  • \(\overline{\mathsf {sid}}\) does not exist and \(\mathcal {M}\) issues either

    • \(\mathsf {LongTermKeyReveal}(\mathcal {A})\wedge \mathsf {EphemeralKeyReveal(sid)} \), or

    • \(\mathsf {LongTermKeyReveal}(\mathcal {B})\).
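The conditions above can be checked mechanically; a small sketch (the query-log format is an illustrative assumption):

```python
def is_fresh(log, sid, sid_bar, A, B) -> bool:
    """Freshness of session sid (owner A, peer B) per Definition 6.
    log is the set of reveal queries issued so far, e.g. {('SKR', sid),
    ('LTK', A), ('EPH', sid)}; sid_bar is the matching session or None."""
    if ('SKR', sid) in log:
        return False
    if sid_bar is not None:
        if (('SKR', sid_bar) in log
                or (('LTK', A) in log and ('EPH', sid) in log)
                or (('LTK', B) in log and ('EPH', sid_bar) in log)):
            return False
    else:
        if ((('LTK', A) in log and ('EPH', sid) in log)
                or ('LTK', B) in log):
            return False
    return True
```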

We remark that the freshness of the test session can be determined only after the game is completed, as \(\mathcal {M}\) can continue issuing the other queries after the \(\mathsf {Test}\) query. That is, \(\mathcal {M}\) wins the game if it correctly guesses the challenge for a test session that remains fresh until the end of the game. Formally, we have the following notion of eCK security.

Definition 7

(eCK security) Let \(\mathsf {sid}^*\) be a fresh test session on which adversary \(\mathcal {M}\) issues the \(\mathsf {Test}(\mathsf {sid}^*)\) query. We define the advantage of \(\mathcal {M}\) in the eCK game by

$$\begin{aligned} \mathsf {Adv}_{\mathcal {M}}^{\mathsf {eCK}}(k)=\Pr [b'=b]-1/2, \end{aligned}$$

where k is the security parameter of the AKE protocol. We say the AKE protocol is eCK-secure if matching sessions compute the same session key and, for any probabilistic polynomial-time adversary \(\mathcal {M}\), \(\mathsf {Adv}_{\mathcal {M}}^{\mathsf {eCK}}(k)\) is negligible.

3 Formal security against auxiliary inputs for AKE

In this section, we introduce a new security model, namely the leakage-resilient eCK model w.r.t. auxiliary inputs (\(\mathsf {AI\hbox {-}LR\text{- }eCK}\)), for AKE protocols.

3.1 Admissible auxiliary input functions

To model leakage of both the long-term secret key and the ephemeral secret key with respect to auxiliary inputs, we first specify the set of admissible functions \(\mathcal {H}\). Following the work of Dodis et al. [14], we define two classes of auxiliary input leakage functions.

  • Let \(\mathcal {H}_{\mathsf {ow}}(\epsilon _{\mathsf {lsk}})\) be the class of all polynomial-time computable functions \(h:\{0,1\}^{|\mathsf {lsk}|+|\mathsf {lpk}|}\rightarrow \{0,1\}^*\) such that given \(h(\mathsf {lsk},\mathsf {lpk})\) (for a randomly generated long-term key pair \((\mathsf {lsk},\mathsf {lpk})\)), no PPT adversary can find \(\mathsf {lsk}\) with probability greater than \(\epsilon _{\mathsf {lsk}}\). The function \(h(\mathsf {lsk},\mathsf {lpk})\) can be viewed as a composition of \(q_{\mathsf {lsk}}\in \mathbb {N}^{+}\) functions, i.e., \(h(\mathsf {lsk},\mathsf {lpk})=(h_1(\mathsf {lsk},\mathsf {lpk}),\ldots ,h_{q_{\mathsf {lsk}}}(\mathsf {lsk},\mathsf {lpk}))\), where for all \(i\in \{1,\ldots ,q_{\mathsf {lsk}}\}\), \(h_i\in \mathcal {H}_{\mathsf {ow}}(\epsilon _{\mathsf {lsk}})\).

  • Let \(\mathcal {H}_{\mathsf {lpk\hbox {-}ow}}(\epsilon _{\mathsf {lsk}})\) be the class of all polynomial-time computable functions \(h:\{0,1\}^{|\mathsf {lsk}|+|\mathsf {lpk}|}\rightarrow \{0,1\}^*\) such that given \((\mathsf {lpk},h(\mathsf {lsk},\mathsf {lpk}))\) (for a randomly generated long-term key pair \((\mathsf {lsk},\mathsf {lpk})\)), no PPT adversary can find \(\mathsf {lsk}\) with probability greater than \(\epsilon _{\mathsf {lsk}}\). The function \(h(\mathsf {lsk},\mathsf {lpk})\) can be viewed as a composition of \(q_{\mathsf {lsk}}\in \mathbb {N}^{+}\) functions, i.e., \(h(\mathsf {lsk},\mathsf {lpk})=(h_1(\mathsf {lsk},\mathsf {lpk}),\ldots ,h_{q_{\mathsf {lsk}}}(\mathsf {lsk},\mathsf {lpk}))\), where for all \(i\in \{1,\ldots ,q_{\mathsf {lsk}}\}\), \(h_i\in \mathcal {H}_{\mathsf {lpk\hbox {-}ow}}(\epsilon _{\mathsf {lsk}})\).

For the auxiliary input functions with respect to ephemeral secret key leakage, we also define two classes.

  • Let \(\mathcal {H}_{\mathsf {ow}}(\epsilon _{\mathsf {esk}})\) be the class of all polynomial-time computable functions \(h:\{0,1\}^{|\mathsf {esk}|+|\mathsf {epk}|}\rightarrow \{0,1\}^*\) such that given \(h(\mathsf {esk},\mathsf {epk})\) (for a randomly generated ephemeral key pair \((\mathsf {esk},\mathsf {epk})\)), no PPT adversary can find \(\mathsf {esk}\) with probability greater than \(\epsilon _{\mathsf {esk}}\). The function \(h(\mathsf {esk},\mathsf {epk})\) can be viewed as a composition of \(q_{\mathsf {esk}}\in \mathbb {N}^{+}\) functions, i.e., \(h(\mathsf {esk},\mathsf {epk})=(h_1(\mathsf {esk},\mathsf {epk}),\ldots ,h_{q_{\mathsf {esk}}}(\mathsf {esk},\mathsf {epk}))\), where for all \(i\in \{1,\ldots ,q_{\mathsf {esk}}\}\), \(h_i\in \mathcal {H}_{\mathsf {ow}}(\epsilon _{\mathsf {esk}})\).

  • Let \(\mathcal {H}_{\mathsf {epk\hbox {-}ow}}(\epsilon _{\mathsf {esk}})\) be the class of all polynomial-time computable functions \(h:\{0,1\}^{|\mathsf {esk}|+|\mathsf {epk}|}\rightarrow \{0,1\}^*\) such that given \((\mathsf {epk},h(\mathsf {esk},\mathsf {epk}))\) (for a randomly generated ephemeral key pair \((\mathsf {esk},\mathsf {epk})\)), no PPT adversary can find \(\mathsf {esk}\) with probability greater than \(\epsilon _{\mathsf {esk}}\). The function \(h(\mathsf {esk},\mathsf {epk})\) can be viewed as a composition of \(q_{\mathsf {esk}}\in \mathbb {N}^{+}\) functions, i.e., \(h(\mathsf {esk},\mathsf {epk})=(h_1(\mathsf {esk},\mathsf {epk}),\ldots ,h_{q_{\mathsf {esk}}}(\mathsf {esk},\mathsf {epk}))\), where for all \(i\in \{1,\ldots ,q_{\mathsf {esk}}\}\), \(h_i\in \mathcal {H}_{\mathsf {epk\hbox {-}ow}}(\epsilon _{\mathsf {esk}})\).

One can note that if either \(\epsilon _{\mathsf {lsk}}\le 2^{-|\mathsf {lsk}|}\) or \(\epsilon _{\mathsf {esk}}\le 2^{-|\mathsf {esk}|}\), our definitions would be trivialized, since then no leakage is admissible. In the definition of \(\mathcal {H}_{\mathsf {ow}}\), we require that it be hard to compute the secret key given only the leakage. In contrast, when defining \(\mathcal {H}_{\mathsf {lpk\hbox {-}ow}}\) and \(\mathcal {H}_{\mathsf {epk\hbox {-}ow}}\), we require that the secret key be hard to compute given the public key in addition to the leakage, which means the allowed leakage functions depend on the information of the public key, while the leakage class \(\mathcal {H}_{\mathsf {ow}}\) is defined independently of the concrete AKE scheme.

In this work, we define the auxiliary input model with respect to the classes \(\mathcal {H}_{\mathsf {lpk\hbox {-}ow}}(\epsilon _{\mathsf {lsk}})\) and \(\mathcal {H}_{\mathsf {epk\hbox {-}ow}}(\epsilon _{\mathsf {esk}})\). This is because AKE protocols normally belong to the public-key setting in terms of the authentication mechanism. As stated in [14], it would be impossible to prove that the scheme remains secure w.r.t. the class of admissible auxiliary input functions \(\mathcal {H}_{\mathsf {ow}}\), since the public key (e.g., \(\mathsf {lpk}\)) itself leaks some information about the secret key (\(\mathsf {lsk}\)). The stronger security notion for the class of admissible auxiliary input functions \(\mathcal {H}_{\mathsf {ow}}\) can then be achieved according to the following lemma, which was first proven by Dodis et al. [14] and later formalized in [19].

Lemma 2

([14, 19]) For any \(\epsilon _{\mathsf {lsk}}\) and \(\epsilon _{\mathsf {esk}}\), we have that

  1. \(\mathcal {H}_{\mathsf {lpk\hbox {-}ow}}(\epsilon _{\mathsf {lsk}})\subseteq \mathcal {H}_{\mathsf {ow}}(\epsilon _{\mathsf {lsk}})\), \(\mathcal {H}_{\mathsf {epk\hbox {-}ow}}(\epsilon _{\mathsf {esk}})\subseteq \mathcal {H}_{\mathsf {ow}}(\epsilon _{\mathsf {esk}})\).

  2. \(\mathcal {H}_{\mathsf {ow}}(2^{-|\mathsf {lpk}|}\cdot \epsilon _{\mathsf {lsk}})\subseteq \mathcal {H}_{\mathsf {lpk\hbox {-}ow}}(\epsilon _{\mathsf {lsk}})\), \(\mathcal {H}_{\mathsf {ow}}(2^{-|\mathsf {epk}|}\cdot \epsilon _{\mathsf {esk}})\subseteq \mathcal {H}_{\mathsf {epk\hbox {-}ow}}(\epsilon _{\mathsf {esk}})\).

It is worth noting that the public-key setting does not always apply to secrets that are not involved in the authentication mechanism (e.g., the ephemeral secret). A counterexample is our generic construction introduced below. Nevertheless, we consider such a class of auxiliary input functions to make our model general. In fact, for an ephemeral secret that has no public key (i.e., \(|\mathsf {epk}|=0\)), we have that \(\mathcal {H}_{\mathsf {epk\hbox {-}ow}}(\epsilon _{\mathsf {esk}})= \mathcal {H}_{\mathsf {ow}}(\epsilon _{\mathsf {esk}})\) according to Lemma 2.
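For the case \(|\mathsf {epk}|=0\), this equality follows by chaining both parts of Lemma 2:

$$\begin{aligned} \mathcal {H}_{\mathsf {ow}}(\epsilon _{\mathsf {esk}})=\mathcal {H}_{\mathsf {ow}}(2^{-0}\cdot \epsilon _{\mathsf {esk}})\subseteq \mathcal {H}_{\mathsf {epk\hbox {-}ow}}(\epsilon _{\mathsf {esk}})\subseteq \mathcal {H}_{\mathsf {ow}}(\epsilon _{\mathsf {esk}}), \end{aligned}$$

so the two classes coincide.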

3.2 Auxiliary input model for AKE

We are now ready to present the new security model, i.e., leakage-resilient eCK model w.r.t. auxiliary input (\(\mathsf {AI\hbox {-}LR\text{- }eCK}\)) for AKE protocols. Roughly speaking, we allow the adversary to access the leakage oracle in addition to the queries specified by the original eCK model. Formally, we define the following two leakage queries for the adversary in the \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model.

  • \(\mathsf {LongTermKeyLeakage}(f_{\mathsf {lsk}},\mathsf {pid})\). This allows the adversary to query the leakage oracle and learn \(f_{\mathsf {lsk}}(lsk,lpk)\), where \(f_{\mathsf {lsk}}\in \mathcal {H}_{\mathsf {lpk\hbox {-}ow}}(\epsilon _{\mathsf {lsk}})\) and \((lsk,lpk)\) denotes the long-term key pair of party \(\mathsf {pid}\).

  • \(\mathsf {EphemeralKeyLeakage}(f_{\mathsf {esk}},\mathsf {sid})\). This allows the adversary to query the leakage oracle and learn \(f_{\mathsf {esk}}(esk,epk)\), where \(f_{\mathsf {esk}}\in \mathcal {H}_{\mathsf {epk\hbox {-}ow}}(\epsilon _{\mathsf {esk}})\) and \((esk,epk)\) denotes the ephemeral key pair used by the owner of the session \(\mathsf {sid}\).

One should note that separating the leakage queries for the long-term key and the ephemeral key is reasonable in practice, as these keys are usually not stored in the same place (e.g., the long-term key can be stored in ROM, while ephemeral keys are stored in RAM).

Note that in our new security model, we aim to remove the restrictions imposed by previous models, such as the split-state assumption used in the after-the-fact-leakage AKE model proposed by Alawatugoda et al. [2, 3]. That is, the adversary can access the leakage oracle before, during or after the test session. However, as shown below, if there is no other restriction on the leakage query, a trivial attack can be launched by the adversary as follows.

A trivial attack Consider a test session \(\mathsf {sid}^*\) which is owned by party \(\mathcal {A}\) with peer \(\mathcal {B}\). Note that for a 2-pass AKE protocol, the session key of \(\mathsf {sid}^*\) is determined by \((\widehat{A}, \widehat{B}, lsk_{\mathcal {A}},esk_{\mathcal {A}}^*,lpk_{\mathcal {B}},epk_{\mathcal {B}}^*)\), which contains only two secret keys (i.e., \(lsk_{\mathcal {A}},esk_{\mathcal {A}}^*\)). Since adversary \(\mathcal {M}\) is allowed to reveal \(esk_{\mathcal {A}}^*\) (resp. \(lsk_{\mathcal {A}}\)) in the eCK model, \(\mathcal {M}\) can launch a trivial attack by encoding the session key derivation function into the leakage function of \(lsk_{\mathcal {A}}\) (resp. \(esk_{\mathcal {A}}^*\)) and hence win the security game. Therefore, some special treatment must be adopted, as otherwise security cannot be achieved at all.

Our treatment As discussed before, in reality, leakage attacks are often determined by the hardware devices and measurement methods of physical implementations. On the other hand, the trivial attack above assumes that the adversary can adaptively choose any abstract leakage function, which is overly strong for capturing leakage attacks in reality [20, 40]. Therefore, it is reasonable to relax the fully adaptive definition, which gives a more meaningful way to address the trivial attack. Inspired by this observation, in our proposed model we ask the adversary to submit two leakage function sets \(\mathcal {F}_{\mathsf {lsk}}\subseteq \mathcal {H}_{\mathsf {lpk\hbox {-}ow}}(\epsilon _{\mathsf {lsk}})\) and \(\mathcal {F}_{\mathsf {esk}}\subseteq \mathcal {H}_{\mathsf {epk\hbox {-}ow}}(\epsilon _{\mathsf {esk}})\), where both \(|\mathcal {F}_{\mathsf {lsk}}|\) and \(|\mathcal {F}_{\mathsf {esk}}|\) are polynomial in the security parameter, prior to the game setup. During the security game, the adversary is only allowed to adaptively ask for leakage functions belonging to the committed sets; see the sketch below. More precisely, for all the queries \(\mathsf {LongTermKeyLeakage}(f_{\mathsf {lsk}},\mathsf {pid})\) and \(\mathsf {EphemeralKeyLeakage}(f_{\mathsf {esk}},\mathsf {sid})\) issued by the adversary, we require that \(f_{\mathsf {lsk}}\in \mathcal {F}_{\mathsf {lsk}}, f_{\mathsf {esk}}\in \mathcal {F}_{\mathsf {esk}}\). It is worth noting that this natural relaxation of leakage resilience has also been considered in, e.g., [17, 20, 38, 40], and is generally known as non-adaptive leakage resilience, where the adversary has to fix the leakage functions in advance before seeing the system parameters.
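To make the committed-set mechanism concrete, the following minimal Python sketch (ours; the class and function names are illustrative, and checking that committed functions are actually admissible, i.e., belong to \(\mathcal {H}_{\mathsf {lpk\hbox {-}ow}}\) or \(\mathcal {H}_{\mathsf {epk\hbox {-}ow}}\), is outside its scope) shows how a challenger could enforce non-adaptive leakage:

```python
class LeakageOracle:
    """Non-adaptive leakage: the adversary commits to finite sets of
    leakage functions before the security game is set up, and only
    functions from those sets are ever answered."""

    def __init__(self, committed_lsk_funcs, committed_esk_funcs):
        # F_lsk and F_esk, both of polynomial size, fixed prior to setup.
        self.f_lsk = set(committed_lsk_funcs)
        self.f_esk = set(committed_esk_funcs)

    def long_term_key_leakage(self, f, lsk, lpk):
        # Reject any function outside the committed set F_lsk.
        if f not in self.f_lsk:
            raise ValueError("leakage function was not committed before setup")
        return f(lsk, lpk)

    def ephemeral_key_leakage(self, f, esk, epk):
        # Reject any function outside the committed set F_esk.
        if f not in self.f_esk:
            raise ValueError("leakage function was not committed before setup")
        return f(esk, epk)


# Usage (illustrative): only committed functions are answered.
f1 = lambda lsk, lpk: lsk[:4]   # stand-in for some admissible leakage function
oracle = LeakageOracle({f1}, set())
leak = oracle.long_term_key_leakage(f1, b"long-term-secret", b"public-key")
```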

Readers may consider a slightly stronger model where the adversary is only required to submit \(\mathcal {F}_{\mathsf {lsk}},\mathcal {F}_{\mathsf {esk}}\) before the challenge phase (i.e., the test session). However, if we adopt this approach, then we cannot avoid the following trivial attack. Consider a test session \(\mathsf {sid}^*\) where the session key is determined by \((\widehat{A}, \widehat{B}, lsk_{\mathcal {A}},esk_{\mathcal {A}}^*,lpk_{\mathcal {B}},epk_{\mathcal {B}}^*)\). Suppose that the matching session \(\overline{\mathsf {sid}^*}\) does not exist, i.e., adversary \(\mathcal {M}\) controls the generation of \(epk_{\mathcal {B}}^*\). To specify the function set \(\mathcal {F}_{\mathsf {esk}}\), \(\mathcal {M}\) can first obtain \(lsk_{\mathcal {A}}\) via \(\mathsf {LongTermKeyReveal}(\mathcal {A})\) and then encode the session key computation function and \((\widehat{A}, \widehat{B}, lsk_{\mathcal {A}},lpk_{\mathcal {B}},epk_{\mathcal {B}}^*)\) into a leakage function \(f_{\mathsf {esk}}\) that outputs the session key. Note that this can be done before the activation of the test session, since adversary \(\mathcal {M}\) can choose \(epk_{\mathcal {B}}^*\) by itself. The reason behind this trivial attack is that the adversary can be actively involved in the challenge session in AKE.

We define the notion of a fresh session in the \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model as follows.

Definition 8

(\((\epsilon _1,\epsilon _2)\)-Leakage fresh session in \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model). Let \(\mathsf {sid}\) be a completed session owned by an honest party \(\mathcal {A}\) with peer \(\mathcal {B}\), who is also honest. Let \(\overline{\mathsf {sid}}\) denote the matching session of \(\mathsf {sid}\), if it exists. Session \(\mathsf {sid}\) is said to be fresh in the \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model if the following conditions hold:

  • \(\mathsf {sid}\) is a fresh session in the sense of eCK model.

  • \(\mathcal {M}\) only issues the following leakage queries

    • \(\mathsf {LongTermKeyLeakage}(f_{\mathsf {lsk}},\mathcal {A})\),

    • \(\mathsf {LongTermKeyLeakage}(f_{\mathsf {lsk}},\mathcal {B})\),

    • \(\mathsf {EphemeralKeyLeakage}(f_{\mathsf {esk}},\mathsf {sid})\),

    • \(\mathsf {EphemeralKeyLeakage}(f_{\mathsf {esk}},\overline{\mathsf {sid}})\) (if \(\overline{\mathsf {sid}}\) exists),

    such that \(f_{\mathsf {lsk}}\in \mathcal {F}_{\mathsf {lsk}}\subseteq \mathcal {H}_{\mathsf {lpk}\hbox {-}\mathsf {ow}}(\epsilon _1), f_{\mathsf {esk}}\in \mathcal {F}_{\mathsf {esk}} \subseteq \mathcal {H}_{\mathsf {epk}\hbox {-}\mathsf {ow}}(\epsilon _2)\) where both \(\mathcal {F}_{\mathsf {lsk}}\) and \(\mathcal {F}_{\mathsf {esk}}\) are submitted by \(\mathcal {M}\) at the very beginning of the security game.

It remains to formally define the notion of \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) security.

Definition 9

(\(\mathsf {AI\hbox {-}LR\text{- }eCK}\) security) Let the test session \(\mathsf {sid}^*\) be \((\epsilon _1,\epsilon _2)\)-leakage fresh, on which adversary \(\mathcal {M}\) issues the \(\mathsf {Test}(\mathsf {sid}^*)\) query. We define the advantage of \(\mathcal {M}\) in the \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) game by

$$\begin{aligned} \mathsf {Adv}_{\mathcal {M}}^{\mathsf {AI\hbox {-}LR\text{- }eCK}}(k)=\Pr [b'=b]-1/2, \end{aligned}$$

where k is the security parameter of the AKE protocol. We say the AKE protocol is \((\epsilon _1,\epsilon _2)\)-leakage-resilient eCK secure w.r.t. auxiliary inputs (\((\epsilon _1,\epsilon _2)\hbox {-}\mathsf {AI\hbox {-}LR\text{- }eCK}\)-secure) if any two matching sessions compute the same session key and, for any probabilistic polynomial-time adversary \(\mathcal {M}\), \(\mathsf {Adv}_{\mathcal {M}}^{\mathsf {AI\hbox {-}LR\text{- }eCK}}(k)\) is negligible.

Classification of valid attacks in \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) Here we discuss the relationship between the reveal oracles (e.g., \(\mathsf {LongTermKeyReveal}\)) and the leakage oracles (e.g., \(\mathsf {LongTermKeyLeakage}\)). One can see that it is meaningless for \(\mathcal {M}\) to issue a leakage query on the long-term secret key (resp. ephemeral secret key) if it has already obtained the whole key through the corresponding reveal oracle. Indeed, adversary \(\mathcal {M}\) can compute the leakage function \(f_{\mathsf {lsk}}(lsk_{\mathcal {A}},lpk_{\mathcal {A}})\) by itself if \(lsk_{\mathcal {A}}\) is known to it. Therefore, the valid attacks on the test session can be classified according to Table 2. Without loss of generality, we assume that the test session \(\mathsf {sid}^*\) is a completed session owned by an honest party \(\mathcal {A}\) with peer \(\mathcal {B}\). Let \(\overline{\mathsf {sid}}^*\) denote the matching session of \(\mathsf {sid}^*\), if it exists. We denote by \(esk_{\mathcal {A}}^*,esk_{\mathcal {B}}^*\) the ephemeral secret keys used by \(\mathcal {A}\) and \(\mathcal {B}\) in \(\mathsf {sid}^*\) and \(\overline{\mathsf {sid}}^*\) (if it exists), respectively.

4 A general framework for \(\mathsf {AI\hbox {-}LR\text{- }eCK}\)-secure AKE

In this section, we propose a general framework for constructing \(\mathsf {AI\hbox {-}LR\text{- }eCK}\)-secure AKE protocols.

4.1 Our generic construction

Figure 1 describes the generic construction for the secure AKE protocol with auxiliary inputs. We describe the framework as follows.

Table 2 Classification of valid attacks by adversary \(\mathcal {M}\)

Fig. 1 Framework for \(\mathsf {AI\hbox {-}LR\text{- }eCK}\)-secure AKE

System setup Suppose that k is the system security parameter. Let \(\mathcal {SPHF}=(\mathsf {SPHFSetup}, \mathsf {HashKG}, \mathsf {ProjKG}, \mathsf {WordG}, \mathsf {Hash}, \mathsf {ProjHash})\) denote an \(\mathsf {SPHF}\) over \(\mathcal {L}\subset \mathcal {X}\) and onto the set \(\mathcal {Y}\). Let \(\mathcal {SIG}=(\mathsf {KeyGen},\mathsf {Sign}, \mathsf {Verify})\) be an EU-CMAA-secure signature scheme with signing key space \(\mathcal {SK}\). Let \(\mathsf {Ext}_1: \mathcal {SK}\times \{0,1\}^{l_2(k)}\rightarrow \{0,1\}^{t_1(k)}\) and \(\mathsf {Ext}_2:\{0,1\}^{l_2(k)}\times \{0,1\}^{l_2'(k)}\rightarrow \{0,1\}^{t_2(k)}\) be two strong extractors with hard-to-invert auxiliary inputs. Let \(\widetilde{F}:\{0,1\}^{t_1(k)}\times \{0,1\}^{t_2(k)}\rightarrow \{0,1\}^{t_3(k)}\) be an enhanced \(\mathsf {tPRF}\). The system parameter is \((\mathsf {param}, \mathsf {Ext}_1,\mathsf {Ext}_2, \widetilde{F})\) where \(\mathsf {param} \leftarrow \mathsf {SPHFSetup}(1^k)\).

Key generation At the long-term key generation stage, \(\mathcal {A}\) picks \(r_{\mathcal {A}} \mathop {\leftarrow }\limits ^{\$} \{0,1\}^{l_2'(k)}\) and runs the algorithm \(\mathsf {KeyGen}\) to obtain a signing/verification key pair \((sk_{\mathcal {A}},vk_{\mathcal {A}})\); \(\mathcal {A}\) then sets its long-term key pair as \(lsk_{\mathcal {A}}=sk_{\mathcal {A}},lpk_{\mathcal {A}}=(vk_{\mathcal {A}},r_{\mathcal {A}})\). Similarly, \(\mathcal {B}\) picks \(r_{\mathcal {B}} \mathop {\leftarrow }\limits ^{\$} \{0,1\}^{l_2'(k)}\) and runs \(\mathsf {KeyGen}\) to obtain a signing/verification key pair \((sk_{\mathcal {B}},vk_{\mathcal {B}})\); \(\mathcal {B}\) then sets its long-term key pair as \(lsk_{\mathcal {B}}=sk_{\mathcal {B}},lpk_{\mathcal {B}}=(vk_{\mathcal {B}},r_{\mathcal {B}})\).

Session execution \((\mathcal {A}\rightleftharpoons \mathcal {B})\) The key exchange protocol between \(\mathcal {A}\) and \(\mathcal {B}\) executes as follows.

  • (\(\mathcal {A}\rightharpoonup \mathcal {B}\)). \(\mathcal {A}\) performs the following steps.

    1. Selects the ephemeral secret key \(esk_{\mathcal {A}} \mathop {\leftarrow }\limits ^{\$} \{0,1\}^{l_2(k)}\).

    2. Sets \(\widetilde{lsk}_{\mathcal {A}}=\mathsf {Ext}_1(sk_{\mathcal {A}},esk_{\mathcal {A}})\), \(\widetilde{esk}_{\mathcal {A}}=\mathsf {Ext}_2(esk_{\mathcal {A}},r_{\mathcal {A}})\).

    3. Computes \((r_1,r_2)= \widetilde{F}(\widetilde{lsk}_{\mathcal {A}},\widetilde{esk}_{\mathcal {A}})\).

    4. Runs \(\mathsf {HashKG}(\mathsf {param}, \mathcal {L},r_1)\) to obtain the hashing key \(\mathsf {hk}\).

    5. Runs \(\mathsf {ProjKG}(\mathsf {param}, \mathcal {L},\mathsf {hk})\) to obtain the projection key \(\mathsf {hp}\).

    6. Signs \(\mathsf {hp}\) by running \(\mathsf {Sign}(sk_{\mathcal {A}},(\mathsf {hp},\widehat{A},lpk_{\mathcal {B}},\widehat{B});r_2)\) to obtain \(\sigma _{\mathcal {A}}\).

    7. Erases all state except \((\mathsf {hp},esk_{\mathcal {A}})\) and sends \((\widehat{A},\mathsf {hp},\sigma _{\mathcal {A}})\) to \(\mathcal {B}\).

  • (\(\mathcal {B}\rightharpoonup \mathcal {A}\)). Upon receiving the message from \(\mathcal {A}\), \(\mathcal {B}\) executes the following steps.

    1. Verifies \(\sigma _{\mathcal {A}}\) and aborts if the verification fails; otherwise,

    2. Selects the ephemeral secret key \(esk_{\mathcal {B}} \mathop {\leftarrow }\limits ^{\$} \{0,1\}^{l_2(k)}\).

    3. Sets \(\widetilde{lsk}_{\mathcal {B}}=\mathsf {Ext}_1(sk_{\mathcal {B}},esk_{\mathcal {B}})\), \(\widetilde{esk}_{\mathcal {B}}=\mathsf {Ext}_2(esk_{\mathcal {B}},r_{\mathcal {B}})\).

    4. Computes \((w_{\mathcal {B}},r_3)= \widetilde{F}(\widetilde{lsk}_{\mathcal {B}},\widetilde{esk}_{\mathcal {B}})\).

    5. Runs the algorithm \(\mathsf {WordG}(\mathsf {param}, \mathcal {L},w_{\mathcal {B}})\) to obtain a word \(W_{\mathcal {B}}\).

    6. Signs \(W_{\mathcal {B}}\) by running \(\mathsf {Sign}(sk_{\mathcal {B}},(W_{\mathcal {B}},\widehat{B},\mathsf {hp},\widehat{A});r_3)\) to obtain \(\sigma _{\mathcal {B}}\).

    7. Sends \((\widehat{B},W_{\mathcal {B}},\widehat{A},\mathsf {hp},\sigma _{\mathcal {B}})\) to \(\mathcal {A}\).

Session key derivation When \(\mathcal {A}\) receives \((\widehat{B},W_{\mathcal {B}},\widehat{A},\mathsf {hp},\sigma _{\mathcal {B}})\), \(\mathcal {A}\) first verifies the signature \(\sigma _{\mathcal {B}}\) by running \(\mathsf {Verify}(vk_{\mathcal {B}},(W_{\mathcal {B}},\widehat{B},\mathsf {hp},\widehat{A}),\sigma _{\mathcal {B}})\). If the verification fails, \(\mathcal {A}\) aborts the protocol; otherwise, \(\mathcal {A}\) reconstructs \(\mathsf {hk}\) using \((sk_{\mathcal {A}},esk_{\mathcal {A}},r_{\mathcal {A}})\). Finally, \(\mathcal {A}\) computes the session key as \(SK_{\mathcal {A}}\leftarrow \mathsf {Hash}(\mathsf {param}, \mathcal {L},\mathsf {hk},W_{\mathcal {B}})\). On the other side, party \(\mathcal {B}\) computes the session key as \(SK_{\mathcal {B}}\leftarrow \mathsf {ProjHash}(\mathsf {param}, \mathcal {L},\mathsf {hp},W_{\mathcal {B}},w_{\mathcal {B}})\).

Correctness Due to the correctness of the projective hash function, we can easily obtain that

$$\begin{aligned} \mathsf {Hash}(\mathsf {param}, \mathcal {L},\mathsf {hk},W_{\mathcal {B}})=\mathsf {ProjHash}(\mathsf {param}, \mathcal {L},\mathsf {hp},W_{\mathcal {B}},w_{\mathcal {B}}) \end{aligned}$$

which guarantees that \(SK_{\mathcal {A}}=SK_{\mathcal {B}}\).
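To summarize the message flow, here is a minimal Python sketch of one protocol run. The \(\mathsf {SPHF}\), signature, extractor and \(\mathsf {tPRF}\) objects are abstract stand-ins with illustrative method names (our assumptions, not fixed APIs), and session bookkeeping is omitted:

```python
import os

L2 = 32  # byte length of ephemeral secrets (illustrative)

def initiator_msg(params, A, B, sk_A, r_A, lpk_B, sphf, sig, ext1, ext2, tprf):
    """A -> B: extract from (sk_A, esk_A, r_A), expand with the tPRF,
    and send a signed projection key hp."""
    esk_A = os.urandom(L2)
    r1, r2 = tprf(ext1(sk_A, esk_A), ext2(esk_A, r_A))
    hk = sphf.hashkg(params, r1)               # hashing key, kept by A
    hp = sphf.projkg(params, hk)               # projection key, sent to B
    sigma_A = sig.sign(sk_A, (hp, A, lpk_B, B), rand=r2)
    return (A, hp, sigma_A), esk_A             # A erases all state but (hp, esk_A)

def responder(params, msg1, A, B, sk_B, r_B, lpk_B, vk_A,
              sphf, sig, ext1, ext2, tprf):
    """B -> A: verify sigma_A, generate a word W_B with witness w_B,
    and derive SK_B via ProjHash."""
    _, hp, sigma_A = msg1
    if not sig.verify(vk_A, (hp, A, lpk_B, B), sigma_A):
        raise ValueError("initiator signature invalid")
    esk_B = os.urandom(L2)
    w_B, r3 = tprf(ext1(sk_B, esk_B), ext2(esk_B, r_B))
    W_B = sphf.wordg(params, w_B)
    sigma_B = sig.sign(sk_B, (W_B, B, hp, A), rand=r3)
    SK_B = sphf.projhash(params, hp, W_B, w_B)
    return (B, W_B, A, hp, sigma_B), SK_B

def initiator_key(params, msg2, A, esk_A, sk_A, r_A, vk_B,
                  sphf, sig, ext1, ext2, tprf):
    """A: verify sigma_B, re-derive hk deterministically from
    (sk_A, esk_A, r_A), and derive SK_A via Hash."""
    B, W_B, _, hp, sigma_B = msg2
    if not sig.verify(vk_B, (W_B, B, hp, A), sigma_B):
        raise ValueError("responder signature invalid")
    r1, _ = tprf(ext1(sk_A, esk_A), ext2(esk_A, r_A))
    hk = sphf.hashkg(params, r1)
    return sphf.hash(params, hk, W_B)          # equals SK_B by correctness
```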

4.2 Security analysis

Theorem 1

The above construction is \((\epsilon _1,\epsilon _2)\hbox {-}\mathsf {AI\hbox {-}LR\text{- }eCK}\)-secure if the underlying \(\mathcal {SPHF}\) is pseudo-random, \(\mathcal {SIG}\) is secure against EU-CMAA w.r.t. \(\mathcal {H}_{\mathsf {lpk\hbox {-}ow}}(\epsilon _1)\), \(\mathsf {Ext}_1\) is a strong extractor with \(\epsilon _1\)-hard-to-invert auxiliary inputs, and \(\mathsf {Ext}_2\) is a strong extractor with \(\epsilon _2\)-hard-to-invert auxiliary inputs, where both \(\epsilon _1\) and \(\epsilon _2\) are negligible.

Proof

Denote the advantage of adversary \(\mathcal {M}\) against our construction in the security model as \(\mathsf {Adv}_{\mathcal {M}}(k)\).

Let the test session \(\mathsf {sid}^*\) be as follows.

$$\begin{aligned} \mathsf {sid}^*=\big (\big (\widehat{A},\mathsf {hp}^*,\sigma _{\mathcal {A}}^*\big ),\big (\widehat{B},W_{\mathcal {B}}^*,\widehat{A},\mathsf {hp}^*,\sigma _{\mathcal {B}}^*\big )\big ) \hbox { or } \big (\big (\widehat{B},W_{\mathcal {B}}^*,\widehat{A},\mathsf {hp}^*,\sigma _{\mathcal {B}}^*\big ),\big (\widehat{A},\mathsf {hp}^*,\sigma _{\mathcal {A}}^*\big )\big ). \end{aligned}$$

We adopt the game-hopping technique for the security proof of our construction. We define a sequence of modified attack games, \(\mathsf {Game}_0,\mathsf {Game}_1,\ldots \), between the simulator \(\mathcal {S}\) and adversary \(\mathcal {M}\). In each game, a random bit b is chosen by the simulator \(\mathcal {S}\) for the test session, and \(\mathcal {M}\) outputs a bit \(b'\) at the end of the game. We denote by \(\mathsf {Succ}_i\) the event that \(b=b'\) in \(\mathsf {Game}_i\). Therefore, the advantage of \(\mathcal {M}\) in \(\mathsf {Game}_i\) is \(\mathsf {Adv}_{\mathcal {M},i}=\Pr [\mathsf {Succ}_i]-1/2\).

It remains to show the details of each game.

Game 0 \(\mathsf {Game}_0\) is the original attack game. Suppose that adversary \(\mathcal {M}\) activates at most n(k) honest parties \(\{P_1,\ldots ,P_{n(k)}\}\) and activates party \(\mathcal {A}\in \{P_1,\ldots ,P_{n(k)}\}\) as an initiator at most N(k) times. In the i-th activation, \(\mathcal {S}\) chooses \(esk_{\mathcal {A},i} \mathop {\leftarrow }\limits ^{\$} \{0,1\}^{l_2(k)}\), computes \(\widetilde{lsk}_{\mathcal {A},i}=\mathsf {Ext}_1(sk_{\mathcal {A}},esk_{\mathcal {A},i})\), \(\widetilde{esk}_{\mathcal {A},i}=\mathsf {Ext}_2(esk_{\mathcal {A},i},r_{\mathcal {A}})\), and \((r_{1,i},r_{2,i})= \widetilde{F}(\widetilde{lsk}_{\mathcal {A},i},\widetilde{esk}_{\mathcal {A},i})\). Suppose \(\mathcal {M}\) activates party \(\mathcal {B}\in \{P_1,\ldots ,P_{n(k)}\}\) as a responder at most N(k) times. In the j-th activation, \(\mathcal {S}\) chooses \(esk_{\mathcal {B},j} \mathop {\leftarrow }\limits ^{\$} \{0,1\}^{l_2(k)}\), computes \(\widetilde{lsk}_{\mathcal {B},j}=\mathsf {Ext}_1(sk_{\mathcal {B}},esk_{\mathcal {B},j})\), \(\widetilde{esk}_{\mathcal {B},j}=\mathsf {Ext}_2(esk_{\mathcal {B},j},r_{\mathcal {B}})\), and \((w_{\mathcal {B},j},r_{3,j})= \widetilde{F}(\widetilde{lsk}_{\mathcal {B},j},\widetilde{esk}_{\mathcal {B},j})\). Similarly, for any activation of the other \(n(k)-2\) parties, \(\mathcal {S}\) simulates the session correctly. One can easily see that,

$$\begin{aligned} \mathsf {Adv}_{\mathcal {M}}^{\mathsf {AI\hbox {-}LR\text{- }eCK}}(k)=\mathsf {Adv}_{\mathcal {M},0} \end{aligned}$$
(1)

Game 1

\(\mathsf {Game}_1\) is the same game as \(\mathsf {Game}_0\), except that at the beginning \(\mathcal {S}\) chooses two parties randomly from \(\{P_1,\ldots ,P_{n(k)}\}\) and aborts the game if they are not \(\mathcal {A}\) and \(\mathcal {B}\), respectively. We then have that,

$$\begin{aligned} \mathsf {Adv}_{\mathcal {M},1}=\big (1/n(k)^2\big )\cdot \mathsf {Adv}_{\mathcal {M},0} \end{aligned}$$
(2)

Game 2

\(\mathsf {Game}_2\) is the same as \(\mathsf {Game}_1\) except that \(\mathcal {S}\) aborts the game if \(\mathcal {M}\) generates \(\sigma _{\mathcal {A}}^*\) or \(\sigma _{\mathcal {B}}^*\). We now analyze the probability that \(\mathcal {M}\) generates \(\sigma _{\mathcal {A}}^*\) or \(\sigma _{\mathcal {B}}^*\). On one hand, if adversary \(\mathcal {M}\) obtains \(lsk_{\mathcal {A}}\) (resp. \(lsk_{\mathcal {B}}\)) through the reveal query, the case that \(\mathcal {M}\) generates \(\sigma _{\mathcal {A}}^*\) (resp. \(\sigma _{\mathcal {B}}^*\)) cannot happen, as this would imply that \(\mathcal {M}\) has corrupted \(\mathcal {A}\) (resp. \(\mathcal {B}\)), which is not allowed due to the freshness requirement of \(\mathsf {sid}^*\). On the other hand, the probability that \(\mathcal {M}\) forges either of the above signatures without obtaining the corresponding long-term secret key is bounded by the unforgeability of the underlying signature scheme \(\mathcal {SIG}\). One may note that in our construction, the randomness \(r_2\) used for signing in the test or other sessions is derived from keys that may be revealed or leaked, while the randomness used by the signing oracle in the EU-CMAA model is uniformly random. We argue that they are indistinguishable from the view of the adversary, due to the security properties of the underlying randomness extractor and \(\mathsf {tPRF}\). Therefore, during the simulation, when the pseudo-random \(r_2\) is replaced by a uniformly random string by the signing oracle defined in the EU-CMAA model, the returned signature is indistinguishable, from the view of a computational adversary, from the one in the real \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) game. Suppose that the advantage of any adversary against \(\mathcal {SIG}\) in the EU-CMAA model is at most \(\mathsf {Adv}_{\mathcal {SIG},\mathcal {H}_{\mathsf {lpk\hbox {-}ow}}}^{\mathsf {EU\hbox {-}CMAA}}(k)\); then we have that,

$$\begin{aligned} \mathsf {Adv}_{\mathcal {M},2}= \left( 1-\mathsf {Adv}_{\mathcal {SIG},\mathcal {H}_{\mathsf {lpk\hbox {-}ow}}}^{\mathsf {EU\hbox {-}CMAA}}(k)\right) \cdot \mathsf {Adv}_{\mathcal {M},1} \end{aligned}$$
(3)

Therefore, \((\widehat{A},\mathsf {hp}^*,\sigma _{\mathcal {A}}^*)\) must be correctly computed by \(\mathcal {S}\) in the \(i^*\)-th activation for some \(i^*\in \{1,\ldots ,N(k)\}\), and \((\widehat{B},W_{\mathcal {B}}^*,\widehat{A},\mathsf {hp}^*,\sigma _{\mathcal {B}}^*)\) must be correctly computed by \(\mathcal {S}\) in the \(j^*\)-th activation for some \(j^*\in \{1,\ldots ,N(k)\}\).

Game 3

\(\mathsf {Game}_3\) is the same as \(\mathsf {Game}_2\) except for the following. \(\mathcal {S}\) chooses \(i,j\in \{1,\ldots ,N(k)\}\) randomly and aborts the game if it does not hold that \(i=i^*,j=j^*\). It is easy to see that,

$$\begin{aligned} \mathsf {Adv}_{\mathcal {M},3}= \big (1/N(k)^2\big )\cdot \mathsf {Adv}_{\mathcal {M},2} \end{aligned}$$
(4)

Game 4

\(\mathsf {Game}_4\) is the same as \(\mathsf {Game}_3\) except for the following. Suppose that the behaviour of \(\mathcal {M}\) on the test session (and the matching session, if it exists) belongs to the event \(\mathsf {E}^*\in \{\mathsf {E}_1,\ldots ,\mathsf {E}_8\}\) (Table 1). \(\mathcal {S}\) then chooses an event \(\mathsf {E}'\mathop {\leftarrow }\limits ^{\$} \{\mathsf {E}_1,\ldots ,\mathsf {E}_8\}\) and aborts the game if \(\mathsf {E}'\ne \mathsf {E}^*\). Therefore, we can see that,

$$\begin{aligned} \mathsf {Adv}_{\mathcal {M},4}= 1/8\cdot \mathsf {Adv}_{\mathcal {M},3} \end{aligned}$$
(5)

Game 5

\(\mathsf {Game}_5\) is the same as \(\mathsf {Game}_4\) except for the following. For any activation of \(\mathcal {A}\) as an initiator, \(\mathcal {S}\) simulates it as follows.

Case A.5.1 \(\mathsf {E}^*\in \{\mathsf {E}_1,\mathsf {E}_3,\mathsf {E}_5\}\).

Instead of computing \(\widetilde{esk}_{\mathcal {A},i^*}=\mathsf {Ext}_2(esk_{\mathcal {A},i^*},r_{\mathcal {A}})\), \(\mathcal {S}\) chooses \(\widetilde{esk}_{\mathcal {A},i^*}\mathop {\leftarrow }\limits ^{\$}\{0,1\}^{t_2(k)}\). In this sub-case, we have the following claim.

Claim 1 For any adversary \(\mathcal {M}\), we have that

$$\begin{aligned} |\mathsf {Adv}_{\mathcal {M},5}-\mathsf {Adv}_{\mathcal {M},4}|\le 2\mathsf {AdvExt}_2(k), \end{aligned}$$

where \(\mathsf {AdvExt}_2(k)\) is the maximum advantage of any adversary against \(\mathsf {Ext}_2:\{0,1\}^{l_2(k)}\times \{0,1\}^{l_2'(k)}\rightarrow \{0,1\}^{t_2(k)}\), which is a strong extractor with \(\epsilon _2\)-hard-to-invert auxiliary inputs.

Proof

We use simulator \(\mathcal {S}\) as the adversary against the strong extractor \(\mathsf {Ext}_2\) with \(\epsilon _2\)-hard-to-invert auxiliary inputs. More precisely, we assume a security test environment for \(\mathsf {Ext}_2\), where \(\mathcal {S}\) is given \((r_\mathcal {A}, f_1(esk_{\mathcal {A},i^*}),\ldots ,f_{q_e}(esk_{\mathcal {A},i^*}),T^*)\), of which \(T^*\) is either \(T_0=\mathsf {Ext}_2(esk_{\mathcal {A},i^*},r_{\mathcal {A}})\) or \(T_1=r\mathop {\leftarrow }\limits ^{\$}\{0,1\}^{t_2(k)}\). During the simulation, \(\mathcal {S}\) returns \((f_1(esk_{\mathcal {A},i^*}),\ldots ,f_{q_e}(esk_{\mathcal {A},i^*}))\) as the leakage query outputs for \(\mathcal {M}\). As for the test session, i.e., the \(i^*\)-th session, \(\mathcal {S}\) sets \(\widetilde{esk}_{\mathcal {A},i^*}\) as \(T^*\). One can note that when \(T^*=T_0\), the simulation is equivalent to \(\mathsf {Game}_4\). Otherwise the simulation is equivalent to \(\mathsf {Game}_5\).

Finally, when \(\mathcal {M}\) outputs its guess \(b'\), \(\mathcal {S}\) outputs 1 if \(b'=b\), otherwise outputs 0. Let \(\mathsf {AdvExt}_2(k)\) be the advantage of \(\mathcal {S}\) against \(\mathsf {Ext}_2\), then we have that,

$$\begin{aligned} \mathsf {AdvExt}_2(k)= & {} \frac{1}{2}\big |\Pr [\mathcal {S} ~\text {outputs}~1|T^*=T_0]-\Pr [\mathcal {S} ~\text {outputs}~1|T^*=T_1]\big |\\= & {} \frac{1}{2}\big |\Pr [b'=b|T^*=T_0]-\Pr [b'=b|T^*=T_1]\big |\\= & {} \frac{1}{2}\big |\mathsf {Adv}_{\mathcal {M},4}-\mathsf {Adv}_{\mathcal {M},5}\big |, \end{aligned}$$

and hence \(|\mathsf {Adv}_{\mathcal {M},5}-\mathsf {Adv}_{\mathcal {M},4}|\le 2\mathsf {AdvExt}_2(k)\). This completes the proof of Claim 1. \(\square \)

Case A.5.2 \(\mathsf {E}^*\in \{\mathsf {E}_2,\mathsf {E}_4,\mathsf {E}_6\}\).

Instead of computing \(\widetilde{lsk}_{\mathcal {A},i}=\mathsf {Ext}_1(sk_{\mathcal {A}}, esk_{\mathcal {A},i})\), \(\mathcal {S}\) chooses \(\widetilde{lsk}_{\mathcal {A},i}\mathop {\leftarrow }\limits ^{\$}\{0,1\}^{t_1(k)}\) for all \(i\in \{1,\cdots ,N(k)\}\). In this sub-case, we have the following claim.

Claim 2 For any adversary \(\mathcal {M}\), we have that

$$\begin{aligned} |\mathsf {Adv}_{\mathcal {M},5}-\mathsf {Adv}_{\mathcal {M},4}|\le 2\cdot N(k)\cdot \mathsf {AdvExt}_1(k), \end{aligned}$$

where \(\mathsf {AdvExt}_1(k)\) is the maximum advantage of any adversary against \(\mathsf {Ext}_1: \mathcal {SK}\times \{0,1\}^{l_2(k)}\rightarrow \{0,1\}^{t_1(k)}\), which is a strong extractor with \(\epsilon _1\)-hard-to-invert auxiliary inputs.

Proof

We use simulator \(\mathcal {S}\) as the adversary against the strong extractor \(\mathsf {Ext}_1\) with \(\epsilon _1\)-hard-to-invert auxiliary inputs. More precisely, we assume N(k) security games for \(\mathsf {Ext}_1\), where for each \(i\in \{1,\ldots ,N(k)\}\), \(\mathcal {S}\) is given \((esk_{\mathcal {A},i}, f_1(sk_{\mathcal {A}}),\ldots ,f_{q_l}(sk_{\mathcal {A}}),T_i^*)\), of which \(T_i^*\) is either \(T_{0,i}=\mathsf {Ext}_1(sk_{\mathcal {A}}, esk_{\mathcal {A},i})\) or \(T_{1,i}=r_i\mathop {\leftarrow }\limits ^{\$}\{0,1\}^{t_1(k)}\). During the simulation, \(\mathcal {S}\) returns \((f_1(sk_{\mathcal {A}}),\ldots ,f_{q_l}(sk_{\mathcal {A}}))\) as the leakage query outputs for \(\mathcal {M}\). As for the i-th session, \(\mathcal {S}\) sets \(\widetilde{lsk}_{\mathcal {A},i}\) as \(T_i^*\). One can note that if \(T_i^*=T_{0,i}\) for all \(i\in \{1,\ldots ,N(k)\}\), the simulation is equivalent to \(\mathsf {Game}_4\); if \(T_i^*=T_{1,i}\) for all i, the simulation is equivalent to \(\mathsf {Game}_5\). Therefore, applying the hybrid argument and following the analysis in Claim 1, we have that,

$$\begin{aligned} |\mathsf {Adv}_{\mathcal {M},5}-\mathsf {Adv}_{\mathcal {M},4}|\le 2\cdot N(k)\cdot \mathsf {AdvExt}_1(k). \end{aligned}$$

This completes the proof of Claim 2. \(\square \)

For any activation of \(\mathcal {B}\) as a responder, similarly, \(\mathcal {S}\) simulates as follows.

Case B.5.1 \(\mathsf {E}^*\in \{\mathsf {E}_1,\mathsf {E}_4,\mathsf {E}_7\}\).

\(\mathcal {S}\) chooses \(\widetilde{esk}_{\mathcal {B},j^*}\mathop {\leftarrow }\limits ^{\$}\{0,1\}^{t_2(k)}\) instead of computing \(\widetilde{esk}_{\mathcal {B},j^*}=\mathsf {Ext}_2(esk_{\mathcal {B},j^*},r_{\mathcal {B}})\).

Case B.5.2 \(\mathsf {E}^*\in \{\mathsf {E}_2,\mathsf {E}_3,\mathsf {E}_8\}\).

Instead of computing \(\widetilde{lsk}_{\mathcal {B},j}=\mathsf {Ext}_1(sk_{\mathcal {B}}, esk_{\mathcal {B},j})\), \(\mathcal {S}\) chooses \(\widetilde{lsk}_{\mathcal {B},j}\mathop {\leftarrow }\limits ^{\$}\{0,1\}^{t_1(k)}\) for all \(j\in \{1,\ldots ,N(k)\}\).

One can note that Case B.5.1 and Case B.5.2 are similar to Case A.5.1 and Case A.5.2 respectively.

Therefore, for \(\mathsf {Game}_5\), we always have that,

$$\begin{aligned} |\mathsf {Adv}_{\mathcal {M},5}-\mathsf {Adv}_{\mathcal {M},4}|\le 2\cdot \max \{N(k)\cdot \mathsf {AdvExt}_1(k),\mathsf {AdvExt}_2(k)\} \end{aligned}$$
(6)

Game 6

\(\mathsf {Game}_6\) is the same as \(\mathsf {Game}_5\) except for the following.

For any activation of \(\mathcal {A}\) as an initiator, \(\mathcal {S}\) simulates as follows.

Case A.6.1 \(\mathsf {E}^*\in \{\mathsf {E}_1,\mathsf {E}_3,\mathsf {E}_5\}\).

Instead of computing \((r_{1,i^*},r_{2,i^*})= \widetilde{F}(\widetilde{lsk}_{\mathcal {A},i^*},\widetilde{esk}_{\mathcal {A},i^*})\), \(\mathcal {S}\) chooses \((r_{1,i^*},r_{2,i^*})\mathop {\leftarrow }\limits ^{\$}\{0,1\}^{t_3(k)}\). In this sub-case, we have the following claim.

Claim 3 For any adversary \(\mathcal {M}\), we have that

$$\begin{aligned} |\mathsf {Adv}_{\mathcal {M},6}-\mathsf {Adv}_{\mathcal {M},5}|\le 2\mathsf {AdvtPRF}(k), \end{aligned}$$

where \(\mathsf {AdvtPRF}(k)\) is the maximum advantage of any adversary against the enhanced twisted \(\mathsf {PRF}\) \(\widetilde{F}\).

Proof

We use simulator \(\mathcal {S}\) as the adversary against the second property of the enhanced twisted pseudo-random function. Precisely, we assume a security test environment for \(\widetilde{F}\), where \(\mathcal {S}\) specifies the input \(\widetilde{lsk}_{\mathcal {A},i^*}\) and is given \(T^*\). Here \(T^*\) is either \(T_0=\widetilde{F}(\widetilde{lsk}_{\mathcal {A},i^*},\widetilde{esk}_{\mathcal {A},i^*})\) or \(T_1=r\mathop {\leftarrow }\limits ^{\$}\{0,1\}^{t_3(k)}\). During the simulation, as for the test session, i.e., the \(i^*\)-th session, \(\mathcal {S}\) sets \((r_{1,i^*},r_{2,i^*})\) as \(T^*\). One can note that when \(T^*=T_0=\widetilde{F}(\widetilde{lsk}_{\mathcal {A},i^*},\widetilde{esk}_{\mathcal {A},i^*})\), although \(\widetilde{esk}_{\mathcal {A},i^*}\) here is unknown to \(\mathcal {S}\) and may be different from the one picked randomly by \(\mathcal {S}\) in \(\mathsf {Game}_5\), they are statistically indistinguishable from the view of adversary \(\mathcal {M}\). Therefore, when \(T^*=T_0\), the simulation is equivalent to \(\mathsf {Game}_5\). Otherwise the simulation is equivalent to \(\mathsf {Game}_6\).

Finally, when \(\mathcal {M}\) outputs its guess \(b'\), \(\mathcal {S}\) outputs 1 if \(b'=b\), otherwise outputs 0. Let \(\mathsf {AdvtPRF}(k)\) be the advantage of \(\mathcal {S}\) against \(\widetilde{F}\), then we have that,

$$\begin{aligned} \mathsf {AdvtPRF}(k)= & {} \frac{1}{2}\big |\Pr [\mathcal {S} ~\text {outputs}~1|T^*=T_0]-\Pr [\mathcal {S} ~\text {outputs}~1|T^*=T_1]\big |\\= & {} \frac{1}{2}\big |\Pr [b'=b|T^*=T_0]-\Pr [b'=b|T^*=T_1]\big |\\= & {} \frac{1}{2}\big |\mathsf {Adv}_{\mathcal {M},5}-\mathsf {Adv}_{\mathcal {M},6}\big |, \end{aligned}$$

and hence \(|\mathsf {Adv}_{\mathcal {M},6}-\mathsf {Adv}_{\mathcal {M},5}|\le 2\mathsf {AdvtPRF}(k)\). This completes the proof of Claim 3. \(\square \)

Case A.6.2 \(\mathsf {E}^*\in \{\mathsf {E}_2,\mathsf {E}_4,\mathsf {E}_6\}\).

Instead of computing \((r_{1,i},r_{2,i})= \widetilde{F}(\widetilde{lsk}_{\mathcal {A},i},\widetilde{esk}_{\mathcal {A},i})\), \(\mathcal {S}\) chooses \((r_{1,i},r_{2,i})\mathop {\leftarrow }\limits ^{\$}\{0,1\}^{t_3(k)}\) for all \(i\in \{1,\ldots ,N(k)\}\). In this sub-case, we have the following claim.

Claim 4 For any adversary \(\mathcal {M}\), we have that

$$\begin{aligned} |\mathsf {Adv}_{\mathcal {M},6}-\mathsf {Adv}_{\mathcal {M},5}|\le 2\mathsf {AdvtPRF}(k), \end{aligned}$$

where \(\mathsf {AdvtPRF}(k)\) is the maximum advantage of any adversary against the enhanced twisted \(\mathsf {PRF}\) \(\widetilde{F}\).

Proof

We use simulator \(\mathcal {S}\) as the adversary against the first property of the enhanced twisted pseudo-random function. Precisely, we assume a security test environment for \(\widetilde{F}\), where \(\mathcal {S}\) specifies the inputs \((\widetilde{esk}_{\mathcal {A},1},\ldots ,\widetilde{esk}_{\mathcal {A},N(k)})\) and is given \((T_1^*,\ldots ,T_{N(k)}^*)\). For all \(i\in \{1,\ldots ,N(k)\}\), \(T_i^*\) is either \(T_{0,i}=\widetilde{F}(\widetilde{lsk}_{\mathcal {A},i},\widetilde{esk}_{\mathcal {A},i})\) or \(T_{1,i}=r_i\mathop {\leftarrow }\limits ^{\$}\{0,1\}^{t_3(k)}\). During the simulation, as for the i-th session, \(\mathcal {S}\) sets \((r_{1,i},r_{2,i})\) as \(T_i^*\). One can note that when \(T_i^*=T_{0,i}=\widetilde{F}(\widetilde{lsk}_{\mathcal {A},i},\widetilde{esk}_{\mathcal {A},i})\) for all \(i\in \{1,\ldots ,N(k)\}\), although \(\widetilde{lsk}_{\mathcal {A},i}\) here is unknown to \(\mathcal {S}\) and may be different from the one picked randomly by \(\mathcal {S}\) in \(\mathsf {Game}_5\), they are statistically indistinguishable from the view of adversary \(\mathcal {M}\). Therefore, when \(T_i^*=T_{0,i}\) for all i, the simulation is equivalent to \(\mathsf {Game}_5\). Otherwise the simulation is equivalent to \(\mathsf {Game}_6\).

Finally, when \(\mathcal {M}\) outputs its guess \(b'\), \(\mathcal {S}\) outputs 1 if \(b'=b\), otherwise outputs 0. Let \(\mathsf {AdvtPRF}(k)\) be the advantage of \(\mathcal {S}\) against \(\widetilde{F}\), then we have that,

$$\begin{aligned} \mathsf {AdvtPRF}(k)= & {} \frac{1}{2}\big |\Pr [\mathcal {S} ~\text {outputs}~1|T_i^*=T_{0,i}]-\Pr [\mathcal {S} ~\text {outputs}~1|T_i^*=T_{1,i}]\big |\\= & {} \frac{1}{2}\big |\Pr [b'=b|T_i^*=T_{0,i}]-\Pr [b'=b|T_i^*=T_{1,i}]\big |\\= & {} \frac{1}{2}\big |\mathsf {Adv}_{\mathcal {M},5}-\mathsf {Adv}_{\mathcal {M},6}\big |, \end{aligned}$$

and hence \(|\mathsf {Adv}_{\mathcal {M},6}-\mathsf {Adv}_{\mathcal {M},5}|\le 2\mathsf {AdvtPRF}(k)\). This completes the proof of Claim 4. \(\square \)

For any activation of \(\mathcal {B}\) as a responder, similarly, \(\mathcal {S}\) simulates as follows.

Case B.6.1 \(\mathsf {E}^*\in \{\mathsf {E}_1,\mathsf {E}_4,\mathsf {E}_7\}\).

Instead of computing \((w_{\mathcal {B},j^*},r_{3,j^*})= \widetilde{F}(\widetilde{lsk}_{\mathcal {B},j^*},\widetilde{esk}_{\mathcal {B},j^*})\), \(\mathcal {S}\) chooses \((w_{\mathcal {B},j^*},r_{3,j^*})\mathop {\leftarrow }\limits ^{\$}\{0,1\}^{t_3(k)}\).

Case B.6.2 \(\mathsf {E}^*\in \{\mathsf {E}_2,\mathsf {E}_3,\mathsf {E}_8\}\).

Instead of computing \((w_{\mathcal {B},j},r_{3,j})= \widetilde{F}(\widetilde{lsk}_{\mathcal {B},j},\widetilde{esk}_{\mathcal {B},j})\), \(\mathcal {S}\) chooses \((w_{\mathcal {B},j},r_{3,j})\mathop {\leftarrow }\limits ^{\$}\{0,1\}^{t_3(k)}\) for all \(j\in \{1,\ldots ,N(k)\}\).

One can note that Case B.6.1 and Case B.6.2 are similar to Case A.6.1 and Case A.6.2 respectively.

Therefore, for \(\mathsf {Game}_6\), we always have that,

$$\begin{aligned} |\mathsf {Adv}_{\mathcal {M},6}-\mathsf {Adv}_{\mathcal {M},5}|\le 2\cdot \mathsf {AdvtPRF}(k) \end{aligned}$$
(7)

Game 7

\(\mathsf {Game}_7\) is the same as \(\mathsf {Game}_6\) except for the following. For the test session \(\mathsf {sid}^*\), \(\mathcal {S}\) sets \(SK_{\mathcal {A}}^*\) as a random string instead of computing \(SK_{\mathcal {A}}^*\leftarrow \mathsf {Hash}(\mathsf {param}, \mathcal {L},\mathsf {hk}^*,W_{\mathcal {B}}^*)\). We then have the following result.

Claim 5 For any adversary \(\mathcal {M}\), we have that

$$\begin{aligned} |\mathsf {Adv}_{\mathcal {M},7}-\mathsf {Adv}_{\mathcal {M},6}|\le 2\mathsf {AdvSPHF}(k), \end{aligned}$$

where \(\mathsf {AdvSPHF}(k)\) is the maximum advantage of any adversary against the pseudo-randomness of the underlying \(\mathcal {SPHF}\).

Proof

We use simulator \(\mathcal {S}\) as the adversary against the pseudo-randomness of the underlying \(\mathsf {SPHF}\), i.e., \(\mathcal {SPHF}\). More precisely, we assume a security test environment for \(\mathcal {SPHF}\), where \(\mathcal {S}\) is given \((\mathsf {hp}^*,W^*,\mathsf {hv}^*)\), of which \(\mathsf {hv}^*\) is either the hash value of \(W^*\) or \(v \mathop {\leftarrow }\limits ^{\$} \mathcal {Y}\). During the simulation, as for the test session, i.e., the \(i^*\)-th session, \(\mathcal {S}\) sets \(\mathsf {hp}_{i^*}=\mathsf {hp}^*\) by implicitly assuming that \(\mathsf {hp}^*=\mathsf {ProjKG}(\mathsf {param}, \mathcal {L},\mathsf {hk}_{i^*})\) where \(\mathsf {hk}_{i^*}=\mathsf {HashKG}(\mathsf {param}, \mathcal {L},r_{1,i^*})\). Note that although \(r_{1,i^*}\) here may be different from the one picked by \(\mathcal {S}\) in \(\mathsf {Game}_6\), they are statistically indistinguishable from the view of adversary \(\mathcal {M}\). Moreover, \(\mathcal {S}\) also sets \(W_{\mathcal {B}}\) in the test session as \(W^*\) by implicitly assuming that \(W^*=\mathsf {WordG}(\mathsf {param},\mathcal {L},w_{\mathcal {B},j^*})\). Again, although \(w_{\mathcal {B},j^*}\) here may be different from the one picked by \(\mathcal {S}\) in \(\mathsf {Game}_6\), they are also statistically indistinguishable from the view of adversary \(\mathcal {M}\). Therefore, when \(\mathsf {hv}^*\) is the hash value of \(W^*\), the simulation is equivalent to \(\mathsf {Game}_6\). Otherwise the simulation is equivalent to \(\mathsf {Game}_7\).

Finally, when \(\mathcal {M}\) outputs its guess \(b'\), \(\mathcal {S}\) outputs 1 if \(b'=b\), otherwise outputs 0. Let \(\mathsf {AdvSPHF}(k)\) be the advantage of \(\mathcal {S}\) against the pseudo-randomness of \(\mathcal {SPHF}\); then we have that,

$$\begin{aligned} |\mathsf {Adv}_{\mathcal {M},7}-\mathsf {Adv}_{\mathcal {M},6}|\le 2\mathsf {AdvSPHF}(k) \end{aligned}$$
(8)

This completes the proof of Claim 5. \(\square \)

One can easily see that \(\Pr [\mathsf {Succ}_7]= \frac{1}{2}\) as \(SK_{\mathcal {A}}^*\) is randomly chosen by \(\mathcal {S}\). That is, \(\mathsf {Adv}_{\mathcal {M},7} = 0\).

Combining (1)–(8), we have the conclusion that \(\mathsf {Adv}_{\mathcal {M}}^{\mathsf {AI\hbox {-}LR\text{- }eCK}}(k)\) is negligible, which proves Theorem 1.

5 Instantiating the building blocks

In this section, we show an instantiation of our framework based on the strong extractor with hard-to-invert auxiliary inputs [41], a \(\mathsf {tPRF}\) from \(\mathsf {PRF}\)s, the decision Diffie–Hellman (DDH) assumption and the FHNNZ signature scheme [19].

5.1 Strong extractor with hard-to-invert auxiliary inputs

The work in [41] showed that a strong extractor with \(\epsilon \)-hard-to-invert auxiliary inputs can be constructed from the modified Goldreich–Levin theorem [14]. We first describe the following Goldreich–Levin theorem [14] over any field GF(q) for a prime q. Denote by \(\langle x,r\rangle =\sum _{i=1}^{l}r_ix_i\) the inner product of \(x=(x_1,\ldots ,x_l)\) and \(r=(r_1,\ldots ,r_l)\).

Theorem 2

Let q be a prime, and let H be an arbitrary subset of GF(q). Let \(f:H^n\rightarrow \{0,1\}^*\) be any (possibly randomized) function. If there is a distinguisher \(\mathcal {D}\) that runs in time t such that \(|\Pr [\mathcal {D}(r,y,\langle s,r\rangle )=1]-\Pr [\mathcal {D}(r,y,u)=1]|=\delta \), where \(s\mathop {\leftarrow }\limits ^{\$}H^n,y\leftarrow f(s), r\mathop {\leftarrow }\limits ^{\$} GF(q)^n,u\mathop {\leftarrow }\limits ^{\$}GF(q)\), then there is an inverter \(\mathcal {I}\) that runs in time \(t'=t\cdot \mathsf {poly}(n,|H|,1/\delta )\) such that \(\Pr [\mathcal {I}(y)=s]\ge \frac{\delta ^3}{512\cdot n \cdot q^2}\).

Below we show the instantiation described in [41]. Letting \(\mathsf {Ext}(x,r)=\langle x,r\rangle \), we have the following theorem.

Theorem 3

Let k be the security parameter, and let x be chosen uniformly at random from \(\{0,1\}^{l(k)}\) where \(l(k)=\mathsf {poly}(k)\). Then given \(f\in \mathcal {H}_{\mathsf {ow}}(\epsilon )\), no PPT distinguisher \(\mathcal {D}\) can distinguish \((r,f(x),\mathsf {Ext}(x,r))\) from \((r,f(x),u)\) with probability \(\epsilon '\ge (512\cdot l(k)\cdot q^2\cdot \epsilon )^{1/3}\), where \( r\mathop {\leftarrow }\limits ^{\$} GF(q)^{l(k)},u\mathop {\leftarrow }\limits ^{\$}GF(q)\).

The proof of the above theorem can be found in [41].
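As a toy illustration of this instantiation, the following Python sketch computes \(\mathsf {Ext}(x,r)=\langle x,r\rangle \) over GF(q); the concrete prime and lengths are placeholders of ours:

```python
import secrets

def inner_product_extract(x, r, q):
    """Ext(x, r) = <x, r> over GF(q): x is the secret in {0,1}^l,
    r is the public random seed in GF(q)^l (cf. Theorem 3)."""
    assert len(x) == len(r)
    return sum(xi * ri for xi, ri in zip(x, r)) % q

# Illustrative parameters (not a recommendation): q = 2^127 - 1 is prime.
q = (1 << 127) - 1
l = 256
x = [secrets.randbits(1) for _ in range(l)]    # secret, uniform over {0,1}^l
r = [secrets.randbelow(q) for _ in range(l)]   # public seed over GF(q)^l
u = inner_product_extract(x, r, q)  # close to uniform given (r, f(x)), f in H_ow
```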

5.2 A simple instantiation of \(\mathsf {tPRF}\)

Following the work in [30], we show that the enhanced \(\mathsf {tPRF}\), i.e., \(\widetilde{F}:\{0,1\}^{t_1(k)}\times \{0,1\}^{t_2(k)}\) \(\rightarrow \{0,1\}^{t_3(k)}\) can be simply instantiated as follows. Without loss of generality, we take the construction of \(\widetilde{F}(\widetilde{lsk}_{\mathcal {A}},\widetilde{esk}_{\mathcal {A}})\) as an example.

Let \(\widetilde{lsk}_{\mathcal {A}}=\widetilde{lsk}_{\mathcal {A}}'||\widetilde{lsk}_{\mathcal {A}}'',\widetilde{esk}_{\mathcal {A}}=\widetilde{esk}_{\mathcal {A}}'||\widetilde{esk}_{\mathcal {A}}''\), where \(|\widetilde{lsk}_{\mathcal {A}}'|=k_1,|\widetilde{lsk}_{\mathcal {A}}''|=k_2,|\widetilde{esk}_{\mathcal {A}}'|=k_3,|\widetilde{esk}_{\mathcal {A}}''|=k_4\) such that \(k_1+k_2=t_1(k),k_3+k_4=t_2(k)\). Pick two \(\mathsf {PRF}\)s \(F':\{0,1\}^{k_1}\times \{0,1\}^{k_3}\rightarrow \{0,1\}^{t_3(k)}, F'':\{0,1\}^{k_4}\times \{0,1\}^{k_2}\rightarrow \{0,1\}^{t_3(k)}\). Then we construct \(\widetilde{F}\) as follows.

$$\begin{aligned} \widetilde{F}(\widetilde{lsk}_{\mathcal {A}},\widetilde{esk}_{\mathcal {A}})=F'_{\widetilde{lsk}_{\mathcal {A}}'}(\widetilde{esk}_{\mathcal {A}}')\oplus F''_{\widetilde{esk}_{\mathcal {A}}''}(\widetilde{lsk}_{\mathcal {A}}''). \end{aligned}$$

Based on Theorem 1 in [30], one can easily obtain the following theorem.

Theorem 4

The above \(\widetilde{F}\) is an enhanced \(\mathsf {tPRF}\) as long as both \(F'\) and \(F''\) are \(\mathsf {PRF}\)s.
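As a quick sanity check of this construction, here is a minimal Python sketch that instantiates \(F'\) and \(F''\) with HMAC-SHA256 (treating HMAC as a PRF is our assumption for illustration; here \(t_3(k)=256\) bits):

```python
import hmac, hashlib

def prf(key: bytes, msg: bytes) -> bytes:
    """PRF stand-in: HMAC-SHA256 keyed by `key`, evaluated on `msg`."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def twisted_prf(lsk_t: bytes, esk_t: bytes) -> bytes:
    """tPRF(lsk~, esk~) = F'_{lsk'}(esk') XOR F''_{esk''}(lsk''),
    where each input is split into two halves, per the construction above."""
    half_l, half_e = len(lsk_t) // 2, len(esk_t) // 2
    lsk1, lsk2 = lsk_t[:half_l], lsk_t[half_l:]   # lsk' || lsk''
    esk1, esk2 = esk_t[:half_e], esk_t[half_e:]   # esk' || esk''
    out1 = prf(lsk1, esk1)    # F'_{lsk'}(esk')
    out2 = prf(esk2, lsk2)    # F''_{esk''}(lsk'')
    return bytes(a ^ b for a, b in zip(out1, out2))
```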

5.3 An \(\mathsf {SPHF}\) from the DDH assumption

In the following, we present the language we use in the instantiation. Specifically, we introduce the Diffie–Hellman language \(\mathcal {L}_{DH}\) and show how to construct an \(\mathsf {SPHF}\) based on it. Let \(\mathbb G\) be a group of prime order p and let \(g_1,g_2 \in \mathbb {G}\) be generators of \(\mathbb G\).

Decision Diffie–Hellman assumption The DDH assumption says that for \(a,b,r\) randomly chosen from \(\mathbb {Z}_p\), \(\{(g_1,g_1^a,g_1^b,g_1^{ab})\}\mathop {\equiv }\limits ^{c}\{(g_1,g_1^a,g_1^b,g_1^{r})\}\).

Diffie–Hellman language The Diffie–Hellman Language is defined as follows.

$$\begin{aligned} \mathcal {L}_{DH}=\{(u_1,u_2)|\exists r\in \mathbb Z_p, s.t., u_1=g_1^r,u_2=g_2^r\}. \end{aligned}$$

One can see that the witness space of \(\mathcal {L}_{DH}\) is \( \mathcal {W}=\mathbb {Z}_p\) and \( \mathcal {L}_{DH}\subset \mathbb {G}^2\).

\(\mathsf {SPHF}\) on \(\mathcal {L}_{DH}\). Here we show how to construct an \(\mathsf {SPHF}\) (denoted by \(\mathcal {SPHF}_{DH}\)) over the language \(\mathcal {L}_{DH}\subset \mathcal X=\mathbb {G}^2\) onto the group \(\mathcal {Y}=\mathbb {G}\). The concrete construction is as follows.

  • \(\mathsf {SPHFSetup}(1^{\lambda })\): \(\mathsf {param}=(\mathbb {G},p,g_1,g_2)\);

  • \(\mathsf {HashKG}(\mathcal {L}_{DH}, \mathsf {param})\): \(\mathsf {hk}=(\alpha _1,\alpha _2)\mathop {\leftarrow }\limits ^{\$} \mathbb {Z}_p^2 \);

  • \(\mathsf {ProjKG}(\mathsf {hk},(\mathcal {L}_{DH}, \mathsf {param}))\): \(\mathsf {hp}=g_1^{\alpha _1}g_2^{\alpha _2}\in \mathbb {G}\);

  • \(\mathsf {WordG}(\mathcal {L}_{DH}, \mathsf {param},w=r)\): \(W=(g_1^r,g_2^r)\);

  • \(\mathsf {Hash}(\mathsf {hk},(\mathcal {L}_{DH}, \mathsf {param}),W=(g_1^r,g_2^r))\): \(\mathsf {hv}=g_1^{r\alpha _1}g_2^{r\alpha _2}\in \mathbb {G}\);

  • \(\mathsf {ProjHash}(\mathsf {hp},(\mathcal {L}_{DH}, \mathsf {param}),W=(g_1^r,g_2^r),w=r)\): \(\mathsf {hv}'=\mathsf {hp}^r \in \mathbb {G}\).

It is easy to check the correctness of \(\mathcal {SPHF}_{DH}\) as follows,

$$\begin{aligned} \mathsf {Hash}(\mathsf {hk},(\mathcal {L}_{DH}, \mathsf {param}),W)=g_1^{r\alpha _1}g_2^{r\alpha _2}=\mathsf {hp}^r=\mathsf {ProjHash}(\mathsf {hp},(\mathcal {L}_{DH}, \mathsf {param}),W,w). \end{aligned}$$
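As a sanity check of the construction and its correctness equation, here is a toy Python implementation of \(\mathcal {SPHF}_{DH}\) over a deliberately tiny (insecure, illustration-only) prime-order subgroup; the parameters and generators are our placeholders:

```python
import secrets

# Toy parameters: the squares modulo P = 2q + 1 form a subgroup of prime
# order q (both q = 1019 and P = 2039 are prime; far too small for security).
q = 1019
P = 2 * q + 1
g1 = pow(3, 2, P)   # squaring lands in the order-q subgroup
g2 = pow(5, 2, P)

def hashkg():
    return (secrets.randbelow(q), secrets.randbelow(q))      # hk = (a1, a2)

def projkg(hk):
    a1, a2 = hk
    return (pow(g1, a1, P) * pow(g2, a2, P)) % P             # hp = g1^a1 * g2^a2

def wordg(r):
    return (pow(g1, r, P), pow(g2, r, P))                    # W = (g1^r, g2^r)

def hash_(hk, W):
    a1, a2 = hk
    u1, u2 = W
    return (pow(u1, a1, P) * pow(u2, a2, P)) % P             # hv = u1^a1 * u2^a2

def projhash(hp, r):
    return pow(hp, r, P)                                     # hv' = hp^r

hk = hashkg()
hp = projkg(hk)
r = secrets.randbelow(q)
W = wordg(r)
assert hash_(hk, W) == projhash(hp, r)   # Hash == ProjHash on words in L_DH
```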

As for the pseudo-randomness, we prove that if there exists an adversary that breaks the pseudo-randomness of \(\mathcal {SPHF}_{DH}\) with non-negligible probability, then we can use it to break the DDH assumption. Formally, suppose the adversary \(\mathcal {M}_1\) has the advantage \(\mathsf {Adv}_{\mathcal {SPHF}_{DH},\mathcal {M}_1}^{\mathsf {PR}}\); then we have an algorithm \(\mathcal {M}_2\) that solves the DDH problem with advantage \(\mathsf {Adv}_{\mathcal {M}_2}^{\mathsf {DDH}}= 2\cdot \mathsf {Adv}_{\mathcal {SPHF}_{DH},\mathcal {M}_1}^{\mathsf {PR}}\). Let the DDH instance be \((g,g^a,g^b,Z)\); \(\mathcal {M}_2\) then works as follows to decide whether Z equals \(g^{ab}\) (outputs 1) or not (outputs 0). Firstly, \(\mathcal {M}_2\) sets \(g_1=g,g_2=g^a\), picks \(\mathsf {hk}=(\alpha _1,\alpha _2)\mathop {\leftarrow }\limits ^{\$} \mathbb {Z}_p^2\), sets \(\mathsf {param}=(g_1,g_2,\mathbb {G},p)\) and computes \(\mathsf {hp}=g_1^{\alpha _1}g_2^{\alpha _2}\). \(\mathcal {M}_2\) sends \((\mathsf {param},\mathsf {hp})\) to \(\mathcal {M}_1\). \(\mathcal {M}_2\) then computes \(\mathsf {hv}=g_1^{b\alpha _1}Z^{\alpha _2}\) and sends \((g_1^b,Z,\mathsf {hv})\) to \(\mathcal {M}_1\). Finally, \(\mathcal {M}_1\) outputs its guess \(b'\), which \(\mathcal {M}_2\) outputs as its solution to the given DDH problem.

Let \(W'=(g_1^b,Z)\). If \(Z=g^{ab}\), then \(W' \in \mathcal {L}_{DH}\) as \(Z=g^{ab}=g_2^{b}\), which corresponds to the case \(b=1\) in the \(\mathsf {PR}\)-game. If \(Z\ne g^{ab}\), then \(W'\notin \mathcal {L}_{DH}\), and hence \(\mathsf {hv}=g_1^{b\alpha _1}Z^{\alpha _2}=\mathsf {Hash}(\mathsf {hk},(\mathcal {L}_{DH},\mathsf {param}),W')\) is statistically indistinguishable from a random element \(v \mathop {\leftarrow }\limits ^{\$} \mathcal {Y}\), which corresponds to the case \(b=0\). Therefore, we have that

$$\begin{aligned} \mathsf {Adv}_{\mathcal {M}_2}^{\mathsf {DDH}}= & {} \Pr [b'=1|Z=g^{ab}]-\Pr [b'=1|Z\ne g^{ab}]\\= & {} \Pr [b'=1|Z=g^{ab}]-(1-\Pr [b'=0|Z\ne g^{ab}])\\= & {} \mathsf {Adv}_{\mathcal {SPHF}_{DH},\mathcal {M}_1}^{\mathsf {PR}}+1/2 -(1- (\mathsf {Adv}_{\mathcal {SPHF}_{DH},\mathcal {M}_1}^{\mathsf {PR}}+1/2))\\= & {} 2 \cdot \mathsf {Adv}_{\mathcal {SPHF}_{DH},\mathcal {M}_1}^{\mathsf {PR}}. \end{aligned}$$

This completes the proof.

5.4 The FHNNZ signature scheme [19]

The FHNNZ signature scheme was introduced by Faust et al. and shown to be secure in the sense of existential unforgeability under chosen message and auxiliary input attacks (EU-CMAA) with respect to exponentially hard-to-invert leakage. Below we give a quick review of the scheme.

The scheme construction Let H denote a family of second preimage resistant hash functions with key sampling algorithm \(\mathsf {Gen}_H\) and input domain \(\{0,1\}^{l_{\mathsf {in}}}\) where \(l_{\mathsf {in}}=\text{ poly }(k)\). Let \(\varGamma =(\mathsf {KeyGen}_E,\mathsf {Enc,Dec})\) be an IND-WLCCA secure labelled public-key encryption scheme, and \(\mathsf {\Pi }=(\mathsf {CRSGen,P,V})\) a reusable-CRS NIZK system. Readers are referred to [19] for the details of these underlying tools.

  • \(\mathsf {KeyGen}(1^{k})\): Sample \(s\leftarrow \mathsf {Gen}_{H}(1^k)\) and \((\mathsf {ek,dk})\leftarrow \mathsf {KeyGen}_{E}(1^k)\). Furthermore, sample \((\mathsf {crs,td_s})\leftarrow S_1(1^k)\) and \(x\leftarrow \{0,1\}^{l_{\mathsf {in}}}\), where \(S=(S_1,S_2)\) is the simulator for \(\mathsf {\Pi }\). Compute \(y=H_s(x)\). Set \((\mathsf {sk,vk})=(x,(y,s,\mathsf {ek,crs}))\).

  • \(\mathsf {Sign}(m,\mathsf {sk})\): Compute \(C=\mathsf {Enc}^m(\mathsf {ek},x)\). Using \(\mathsf {crs}\) and \(\mathsf {\Pi }\), generate a NIZK proof \(\pi \) proving that \(\exists x\) such that \((C=\mathsf {Enc}^m(\mathsf {ek},x)\wedge y=H_s(x))\). Output \(\sigma =(C,\pi )\).

  • \(\mathsf {Verify}(\mathsf {vk},m,\sigma )\): Parse \(\sigma \) as \(C,\pi \). Use \(\mathsf {crs}\) and \(\mathsf {V}\) to verify the NIZK proof \(\pi \). Output 1 if the proof verifies correctly and 0 otherwise.

On the leakage functions As shown in [19], the above construction is EU-CMAA-secure with respect to only exponentially hard-to-invert leakage. The central idea of their construction is to add an encryption \(C=\mathsf {Enc}^m(\mathsf {ek},x)\) of the signing key x to each signature. The encryption key \(\mathsf {ek}\) is part of the verification key of the signature scheme, but the corresponding decryption key \(\mathsf {dk}\) is not part of the signing key. They then set up the signature scheme such that \(\mathsf {dk}\) can be guessed with probability p, where \(\mathsf {negl}(k)\ge p \gg 2^{-k^c}\) for some negligible function \(\mathsf {negl}(\cdot )\) and a constant \(1> c > 0\). Since any PPT algorithm taking as input a signature and the public key (excluding \(\mathsf {dk}\)) can output the signing key x with probability p, the signing algorithm is excluded from the admissible leakage function set \(\mathcal {H}\), which has hardness at least \(2^{-k^c}\) (i.e., is exponentially hard-to-invert).

Our adoption of the FHNNZ signature At first glance, it seems that our construction cannot achieve security under the \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model if we directly adopt the FHNNZ signature, as the admissible leakage function sets defined in \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) are only computationally hard-to-invert and hence do not exclude the signing algorithm. However, this is actually not the case. That is, our construction is \(\mathsf {AI\hbox {-}LR\text{- }eCK}\)-secure even if we do not put such a restriction on the leakage functions when applying the FHNNZ signature in our AKE protocol. This is due to the fact that in our general framework of AKE construction, for authentication, the initiator (\(\mathcal {A}\)) generates the signature on the transcript including the long-term public key (i.e., \(lpk_{\mathcal {B}}\)) of the responder, and likewise the responder (\(\mathcal {B}\)) signs the transcript containing the ephemeral public key (i.e., \(\mathsf {hp}\)) of the initiator. Therefore, the adversary cannot forge a signature on the challenge session through the leakage query: it is asked to specify the allowed leakage function sets prior to the game setup in the \(\mathsf {AI\hbox {-}LR\text{- }eCK}\) model, and hence neither the aforementioned long-term public key nor the ephemeral public key can be embedded into the leakage function by the adversary.

6 Conclusion

In this work, we filled the research gap between existing works and real-world leakage attacks for AKE with leakage resilience. Our proposed model, \(\mathsf {AI\hbox {-}LR\text{- }eCK}\), is a strong yet meaningful AKE security notion that captures computationally hard-to-invert leakage attacks on both the long-term secret and the randomness (i.e., the ephemeral secret) under reasonable restrictions. We also presented a general framework for \(\mathsf {AI\hbox {-}LR\text{- }eCK}\)-secure AKE protocols, together with an instantiation showing that our new model is indeed achievable.