1 Introduction

An identification scheme enables one party - a prover - to prove its identity to another party - a verifier. In many public key \({\mathsf {IS}}\) constructions the prover holds a long-term secret key and proves its knowledge in such a way that the verifier, provided with the corresponding public key of the prover, is convinced of that fact but gains no information about the prover’s secret. Typically the proving protocol consists of three rounds: a commitment, a challenge, and a response. In the commitment the prover sends the verifier a commitment to some random ephemeral value (the so-called ephemeral secret). In the challenge the verifier sends back to the prover some random unpredictable value. In the response the prover sends the verifier the result of a computation involving the received challenge and its long-term secret key masked by the ephemeral value committed in the first message. The prover is accepted if the response “agrees” with the computation on the verifier side involving the commitment, the challenge, the response, and the public key of the prover.

Problem Statement. The problem with this construction arises in systems (protocol implementations) where the ephemeral secrets may be leaked. Usually the masking method in the response is such that the security of the long-term key relies on the secrecy of the ephemeral key - e.g. the response value is a linear combination of the challenge, the committed ephemeral secret, and the long-term secret. Thus, once the ephemeral secret is leaked, the long-term secret is also compromised.

The leakage of the ephemeral secret can be achieved by a malicious implementation of the device used to perform computations on the prover side. Such a device usually has a High Secure Memory module (HSM), where long-term secrets are kept securely and accessed (indirectly) only via predefined interfaces. A less secure area is used for the scheme’s program computations, including random number sampling. Implementations of pseudo-random number generators are especially vulnerable to attacks. If the adversary can somehow learn their state, it can also learn the random values (and ephemerals) produced by those generators. Sometimes even a subtle subliminal adversarial interference, such as a reset of the internal state and/or randomization source of the prover’s device, can influence the produced values.

Therefore in this paper we address this issue and strengthen the security model for the ephemeral secrets even further. We require that the scheme stays secure even if the adversary injects malicious ephemeral values of its choice into the device of the prover. Using some subliminal channel, this could happen even without the prover’s knowledge and against its will. In our security model, such ephemerals used by the prover during its interaction with the malicious verifier should not help the adversary to impersonate the prover subsequently. In this paper we concentrate on the Schnorr \({\mathsf {IS}}\) [1]. This particular scheme is one of the fundamental cryptographic building blocks, whose security relies on the hardness of the discrete logarithm problem (DLP). As such it can be used as a compatible part of more complex constructions based on similar computational assumptions, e.g. authenticated key establishment protocols based on the Diffie-Hellman key exchange. The regular Schnorr \({\mathsf {IS}}\) is vulnerable to the ephemeral injection attack and to ephemeral leakage. Thus we propose a modification of that scheme which is secure in our proposed model.

Contribution. The contribution of the paper is the following:

  • We introduce a new strong security model for identification schemes in which we allow the adversarial verifier to set the random values used on the prover side in the commitment phase of the protocol. We define the \({\mathsf {IS}}\) scheme to be secure if such a setting in the query stage of the security experiment does not enable the adversary to impersonate the prover later in the impersonation stage.

  • We propose a modification of the Schnorr authentication protocol [1] which is immune to malicious setting of the ephemeral key by the adversary. Such a setting neither leads to subsequent leakage of the long-term secret key of the prover, nor helps the adversary to impersonate the prover later on. Subsequently we prove the security of the modified Schnorr \({\mathsf {IS}}\) in our strong model.

We argue that our proposition is especially applicable in systems where the regular Schnorr \({\mathsf {IS}}\) is used, but where scenarios of ephemeral leakage must be taken into consideration. The PACEAA protocol from [2] is such an example, where the regular Schnorr \({\mathsf {IS}}\) is a part of a deniable authentication process, and where it can be replaced by our modified version.

Previous Work. Many fundamental identification schemes have been proposed so far, e.g. RSA based: [3] of Fiat and Shamir, [4] of Feige, Fiat and Shamir, or [5] of Guillou and Quisquater. In [1] Schnorr introduced a DLP-based construction, followed by [6] of Okamoto. There are also specialized identity-based \({\mathsf {IS}}\), e.g. [7], provably secure in the standard model, or [8], secure against concurrent man-in-the-middle attacks without random oracles by using a variant of the BB signature scheme. The problem of the leakage of secret bits of the long-term key of the prover, in the Bounded Retrieval Model, was analyzed in [9]. The problem of the security of \({\mathsf {IS}}\) schemes under reset attacks on ephemeral secrets was raised in [10] in the context of zero-knowledge proofs. Later, in [11], constructions for making IS protocols immune against reset attacks were shown: reset-secure identification protocols based on a deterministic, stateless digital signature scheme (as such, that proposition is not deniable); reset-secure identification protocols based on a CCA-secure asymmetric encryption scheme (not naturally compatible with Diffie-Hellman key exchange protocols initiated with the prover’s ephemeral public key); and reset-secure identification schemes based on pseudorandom functions and trapdoor commitments (with more than 3 rounds). Compared with [11], our solution preserves the characteristics of the original Schnorr \({\mathsf {IS}}\): (1) it is defined in groups suitable for Diffie-Hellman key exchange; (2) it has three rounds - the first one initiated with the prover’s commitment; (3) it is deniable for the prover - i.e. it is simulatable by the verifier without the secret key of the prover.

The paper is organized in the following way. In Sect. 2 we recall the Schnorr identification protocol. In Sect. 3 we introduce our stronger security model, which addresses the problem of ephemeral setting by an active malicious adversary. In Sect. 4 we propose the modified version of the Schnorr \({\mathsf {IS}}\) and prove its security in our model.

2 Schnorr Identification Scheme

2.1 Preliminaries and Notation

We loosely follow the notation from [9]. Let \(x_1,\ldots , x_n\leftarrow _R X\) denote that each \(x_i\) is sampled uniformly at random from the set X. Let \(\mathcal {G}(1^\lambda )\) be a group generation algorithm that takes as input \(1^\lambda \) and outputs a tuple \(\mathbb {G}\) = (p, q, g, G), where \(p,q \in PRIMES\) s.t. \(q|p-1\), \(\mathbb {Z}_p^*\) is the multiplicative group modulo p, and \(\left\langle g \right\rangle = G\) is the subgroup of \(\mathbb {Z}_p^*\) of order q. Let \(\mathcal {H}: \{0,1\}^*\rightarrow G\) be a hash function. We will use it to compute another element of G, denoted by \(\hat{g}\). We assume the following:

Bilinear Map: Let \(G_T\) be another group of prime order q. We assume that \(\hat{e}: G\times G \rightarrow G_T\) is a bilinear map s.t. the following conditions hold:

  1. Bilinearity: \(\forall {a,b\in \mathbb {Z}^{*}_q},\forall {g\in G}\): \(\hat{e}(g^a,g^b)\) = \(\hat{e}(g,g)^{ab}\).

  2. Non-degeneracy: \(\hat{e}(g,g)\ne 1\).

  3. Computability: \(\hat{e}\) is efficiently computable.

The Discrete Logarithm (DL) Assumption: For any probabilistic polynomial time (PPT) algorithm \({\mathcal {A}}_\mathsf {DL}\) it holds that:

$$ \Pr [{\mathcal {A}}_\mathsf {DL}(\mathbb {G},g^x)=x \; | \; \mathbb {G}\leftarrow _R\mathcal {G}(1^\lambda ),x\leftarrow _R \mathbb {Z}^{*}_q]\le \epsilon _{\mathsf {DL}}(\lambda ), $$

where \(\epsilon _{\mathsf {DL}}(\lambda )\) is negligible.
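For intuition, the sketch below instantiates the group setup of \(\mathcal {G}(1^\lambda )\) with tiny hard-coded primes and then solves a DL instance by exhaustive search. This is purely illustrative: the concrete values \(p=23\), \(q=11\) are our own toy choice, and at cryptographic parameter sizes the search below is infeasible, which is exactly what the assumption states.

```python
# Toy instantiation of G(1^lambda): p, q prime with q | p-1, and
# g = h^((p-1)/q) generating the order-q subgroup of Z_p^*.
# Tiny illustrative primes only -- real deployments need large, vetted groups.
p, q = 23, 11                          # 22 = 2 * 11, so q | p - 1
g = pow(2, (p - 1) // q, p)            # push an arbitrary element into the subgroup
assert g != 1 and pow(g, q, p) == 1    # g really has order q

# The DL problem: given (G, g^x), find x. Here brute force succeeds instantly.
def brute_force_dlog(target):
    for x in range(q):
        if pow(g, x, p) == target:
            return x

x = 9
assert brute_force_dlog(pow(g, x, p)) == x
```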

The Computational Diffie-Hellman (CDH) Assumption: For any probabilistic polynomial time (PPT) algorithm \({\mathcal {A}}_\mathsf {CDH}\) it holds that:

$$ \Pr [{\mathcal {A}}_\mathsf {CDH}(\mathbb {G},g^x,g^y)=g^{xy} \; | \; \mathbb {G}\leftarrow _R\mathcal {G}(1^\lambda ),x\leftarrow _R \mathbb {Z}^{*}_q, y\leftarrow _R \mathbb {Z}^{*}_q]\le \epsilon _{\mathsf {CDH}}(\lambda ), $$

where \(\epsilon _{\mathsf {CDH}}(\lambda )\) is negligible.

The Decisional Diffie-Hellman Oracle (\(\mathcal {O}_\mathsf {DDH}\)) denotes the (PPT) algorithm which, for \(\mathbb {G}\leftarrow _R\mathcal {G}(1^\lambda ), x\in \mathbb {Z}^{*}_q, y\in \mathbb {Z}^{*}_q, z\in \mathbb {Z}^{*}_q\), satisfies

$$\mathcal {O}_\mathsf {DDH}(\mathbb {G},g^x,g^y,g^z)= 1~\mathrm{iff}~z=xy \mod q.$$

The Gap Computational Diffie-Hellman (GDH) Assumption: For any probabilistic polynomial time (PPT) algorithm \({\mathcal {A}}_\mathsf {GDH}^{\mathcal {O}_\mathsf {DDH}}\) that has access to decisional Diffie-Hellman oracle \(\mathcal {O}_\mathsf {DDH}\) it holds that:

$$ \Pr [{\mathcal {A}}_\mathsf {GDH}^{\mathcal {O}_\mathsf {DDH}}(\mathbb {G},g^x,g^y)=g^{xy} \; | \; \mathbb {G}\leftarrow _R\mathcal {G}(1^\lambda ),x\leftarrow _R \mathbb {Z}^{*}_q, y\leftarrow _R \mathbb {Z}^{*}_q]\le \epsilon _{\mathsf {GDH}}(\lambda ), $$

where \(\epsilon _{\mathsf {GDH}}(\lambda )\) is negligible.

2.2 Identification Schemes

An identification scheme is a system in which a prover proves its identity to a verifier. More formally we define the following:

Definition 1

(Identification Scheme). An identification scheme \({\mathsf {IS}}\) is a system which consists of four algorithms \((\mathsf {ParGen}\), \(\mathsf {KeyGen}\), \(\mathcal {P}\), \(\mathcal {V})\) and a protocol \(\pi \):

  • \(\mathsf {params}\leftarrow \mathsf {ParGen}(1^\lambda )\) : inputs the security parameter \(\lambda \), and outputs public parameters available to all users of the system (we omit them from the rest of the description).

  • \((\mathsf {sk},\mathsf {pk})\leftarrow \mathsf {KeyGen}()\) : outputs the secret key \(\mathsf {sk}\) and corresponding public key \(\mathsf {pk}\).

  • \(\mathcal {P}(\mathsf {pk},\mathsf {sk})\) : denotes the prover – an ITM which interacts with the verifier \(\mathcal {V}\) in the protocol \(\pi \).

  • \(\mathcal {V}(\mathsf {pk})\) : denotes the verifier – an ITM which interacts with the prover \(\mathcal {P}\) in the protocol \(\pi \).

  • \(\pi (\mathcal {P},\mathcal {V})\) : denotes the protocol between the prover and the verifier.

We distinguish two stages of the scheme:

  • Initialization: In this stage parameters are generated: \(\mathsf {params}\leftarrow \mathsf {ParGen}(1^\lambda )\), and users are registered, e.g. on behalf of the user of identity \(\hat{A}\) the procedure \((a,A)\leftarrow \mathsf {KeyGen}()\) generates the pair of the secret key and the corresponding public key, denoted by a and A respectively.

  • Operation: In this stage any user, e.g. \(\hat{A}\), demonstrates its identity to a verifier by performing the protocol \(\pi (\hat{A}(a,A),\mathcal {V}(A))\) related to the keys a, A. Finally the verifier outputs 1 for “accept” or 0 for “reject”. For simplicity we denote \(\pi (\mathcal {P},\mathcal {V})\rightarrow 1\) if \(\mathcal {P}\) was accepted by \(\mathcal {V}\) in \(\pi \).

We require that the scheme is complete i.e. protocol \(\pi (\mathcal {P}(\mathsf {sk},\mathsf {pk}),\mathcal {V}(\mathsf {pk}))\rightarrow 1\) for any pair \((\mathsf {sk},\mathsf {pk})\leftarrow \mathsf {KeyGen}()\).

There are many security models for identification schemes. Intuitively the scheme is regarded as secure if it is impossible for any adversarial prover algorithm \({\mathcal {A}}\), without the input of the appropriate secret key \(\mathsf {sk}=a\), to be accepted, e.g. as identity \(\hat{A}\), by the verifier given the public key A. That is, we require that the probability \(\Pr [\pi ({\mathcal {A}}(\mathsf {pk}),\mathcal {V}(\mathsf {pk}))\rightarrow 1]\) is negligible. Now we formally describe the passive adversary model that is used for the regular Schnorr identification. In this model the adversary passively listens to a polynomial number \(\ell \) of protocol executions between the prover and the verifier, \(\pi (\mathcal {P}(\mathsf {sk},\mathsf {pk}),\mathcal {V}(\mathsf {pk}))\), hoping that these observations will later help it to impersonate the prover (without the prover’s secret key) to the verifier. We denote by \(\mathsf {v}^{\mathcal {P},\mathcal {V},\ell }=\{T_1,\ldots ,T_\ell \}\) the view, i.e. the total knowledge \({\mathcal {A}}\) can gain after the \(\ell \) runs of \(\pi (\mathcal {P}(\mathsf {sk},\mathsf {pk}),\mathcal {V}(\mathsf {pk}))\), where \(T_i\) is the transcript of the protocol messages in the ith execution.

Definition 2

(Passive Adversary (\(\mathsf {PA}\))). Let \({\mathsf {IS}}=(\mathsf {ParGen}\), \(\mathsf {KeyGen}\), \(\mathcal {P}\), \(\mathcal {V}\), \(\pi )\) be an identification scheme. We define the security experiment \({\texttt {Exp}}^{{\mathsf {PA}},\lambda ,\ell }_{\mathsf {IS}}\):

  • \(\mathtt {Init \ stage:}\) Let \(\mathsf {params}\leftarrow \mathsf {ParGen}(1^\lambda )\), \((\mathsf {sk},\mathsf {pk})\leftarrow \mathsf {KeyGen}()\). Let the adversary \({\mathcal {A}}\) be a malicious algorithm given the public key \(\mathsf {pk}\).

  • \(\mathtt {Query \ stage:}\) \({\mathcal {A}}\) passively observes a polynomial number \(\ell \) of executions of the protocol \(\pi (\mathcal {P}(\mathsf {sk},\mathsf {pk}),\mathcal {V}(\mathsf {pk}))\). Let \(\mathsf {v}^{\mathcal {P},\mathcal {V},\ell }=\{T_1,\ldots ,T_\ell \}\) be the view \({\mathcal {A}}\) gains after the \(\ell \) runs of \(\pi (\mathcal {P}(\mathsf {sk},\mathsf {pk}),\mathcal {V}(\mathsf {pk}))\), where \(T_i\) is the transcript of the ith execution.

  • \(\mathtt {Impersonation \ stage:}\) \({\mathcal {A}}\) runs the protocol \(\pi ({\mathcal {A}}(\mathsf {pk},\mathsf {v}^{\mathcal {P},\mathcal {V},\ell }),\mathcal {V}(\mathsf {pk}))\) with the honest verifier.

We define the advantage of \({\mathcal {A}}\) in the experiment \({\texttt {Exp}}^{{\mathsf {PA}},\lambda ,\ell }_{\mathsf {IS}}\) as the probability of acceptance in the last stage:

$$\begin{aligned} {\mathbf {Adv}}({\mathcal {A}},{\texttt {Exp}}^{{\mathsf {PA}},\lambda ,\ell }_{\mathsf {IS}})=\Pr [\pi ({\mathcal {A}}(\mathsf {pk},\mathsf {v}^{\mathcal {P},\mathcal {V},\ell }),\mathcal {V}(\mathsf {pk}))\rightarrow 1]. \end{aligned}$$

We say that the identification scheme \({\mathsf {IS}}\) is secure if \({\mathbf {Adv}}({\mathcal {A}},{\texttt {Exp}}^{{\mathsf {PA}},\lambda ,\ell }_{\mathsf {IS}})\) is negligible in \(\lambda \).

2.3 Regular Schnorr Identification Scheme

Let us recall the Schnorr identification scheme from [1].

Fig. 1. The Schnorr identification scheme.

Protocol Simulation: The eavesdropping passive adversary learns the transcript tuple \(T=(X,c,s)\). The random variables X, c, s are uniformly distributed on their domains. In the protocol the variables \(x=\log _gX\) and c are mutually independent, and together determine \(s=x+ac\) for the fixed a. On the other hand the protocol transcript can be efficiently simulated by choosing \(\tilde{s},\tilde{c}\) first and subsequently computing \(\tilde{X}=(g^{\tilde{s}}/A^{\tilde{c}})\). Then the simulator algorithm, denoted by \(\mathcal {S}_{\mathsf {IS}}^\pi (\tilde{c}\leftarrow _R \mathbb {Z}^{*}_q,\tilde{s}\leftarrow _R \mathbb {Z}^{*}_q)\), can replay the precomputed transcript \(\tilde{T}=(\tilde{X},\tilde{c},\tilde{s})\) in the correct order, thus simulating the interaction between the prover and the verifier. The tuples \(T=(X,c,s)\) and \(\tilde{T}=(\tilde{X},\tilde{c},\tilde{s})\) are identically distributed. As an immediate consequence of the simulatability, a security requirement for the above protocol is that the challenge c is not known to the prover before it sends the commitment X to the verifier. This is especially crucial in setups where possible leakage on the verifier side is considered. In a real implementation it must be ensured that the challenge value c is coined only after the verifier obtains the value X, but not earlier. Below we recall the security of the Schnorr \({\mathsf {IS}}\) in the \(\mathsf {PA}\) model.
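The simulation strategy above can be sketched in a few lines. The tiny group \((p, q, g) = (23, 11, 4)\) and the key \(a = 7\) are illustrative choices of ours; the point is that the simulator derives \(\tilde{X}\) from \(\tilde{s},\tilde{c}\) without ever touching the secret key, yet the transcript passes the verifier's check.

```python
# Toy sketch of the simulator S_IS^pi for a Schnorr transcript.
# (p, q, g) = (23, 11, 4) is a tiny illustrative group; A = g^a is public.
import random

p, q, g = 23, 11, 4
a = 7                                  # prover's secret key (never used below)
A = pow(g, a, p)                       # public key

def simulate():
    """Pick s~, c~ first, then derive X~ = g^s~ / A^c~ (no secret key needed)."""
    s = random.randrange(1, q)
    c = random.randrange(1, q)
    # A^{-c} = A^{q-c} since A lies in the order-q subgroup
    X = (pow(g, s, p) * pow(A, q - c, p)) % p
    return X, c, s

def verify(X, c, s):
    """The verifier's check in the real protocol: g^s == X * A^c (mod p)."""
    return pow(g, s, p) == (X * pow(A, c, p)) % p

X, c, s = simulate()
assert verify(X, c, s)                 # simulated transcripts always verify
```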

Rewinding Technique: The idea behind the proof is the following: if we have an efficient adversary algorithm \({\mathcal {A}}\) for which \({\mathbf {Adv}}({\mathcal {A}},{\texttt {Exp}}^{{\mathsf {PA}},\lambda ,\ell }_{\mathsf {IS}})=\epsilon \) is non-negligible, then, with the also non-negligible probability \(\epsilon (\epsilon -1/q)\), it can be run twice and accepted, for the same fixed ephemeral x, but with different challenges \(c_1, c_2\), resulting in different responses \(s_1, s_2\). The \(\epsilon \) factor denotes the probability of acceptance in the first run, while \(\epsilon -1/q\) is the probability of acceptance in the second run with \(c_1\ne c_2\). The two tuples \((x,c_1,s_1)\), \((x,c_2,s_2)\) will help us to break the underlying \(\mathsf {DL}\) assumption.

Theorem 1

Let \({\mathsf {IS}}\) denote the Schnorr identification scheme (as of Fig. 1). \({\mathsf {IS}}\) is secure (in the sense of Definition 2), i.e. the advantage \({\mathbf {Adv}}({\mathcal {A}},{\texttt {Exp}}^{{\mathsf {PA}},\lambda ,\ell }_{\mathsf {IS}})\) is negligible in \(\lambda \), for any PPT algorithm \({\mathcal {A}}\).

Proof

(Sketch). The proof is by contradiction. Suppose there is an adversary \({\mathcal {A}}\) for which the advantage \({\mathbf {Adv}}({\mathcal {A}},{\texttt {Exp}}^{{\mathsf {PA}},\lambda ,\ell }_{\mathsf {IS}})\) is non-negligible. Then it can be used as a subprocedure by an efficient algorithm \({\mathcal {A}}_{\mathsf {DL}}\) that breaks the \(\mathsf {DL}\) assumption, computing e.g. \(\log _g(A)\) for the given instance \((\mathbb {G},A)\) of the \(\mathsf {DL}\) problem, also with non-negligible probability, in the following way:

  • \(\mathtt {Init \ stage:}\) Let \(\mathsf {params}\leftarrow \mathbb {G}=(p, q, g, G)\), \(\mathsf {pk}=A\). The adversary \({\mathcal {A}}\) is given the public key A.

  • \(\mathtt {Query \ stage:}\) We simulate the view \(\tilde{\mathsf {v}}^{\mathcal {P},\mathcal {V},\ell }=\{\tilde{T_1},\ldots ,\tilde{T_\ell }\}\) of \({\mathcal {A}}\) in the \(\ell \) executions of the protocol \(\pi \), where each \(\tilde{T_i}=(\tilde{X}_i,\tilde{c}_i,\tilde{s}_i)\), with \(\tilde{X}_i=(g^{\tilde{s}_i}/A^{\tilde{c}_i})\), is produced by the simulator \(\mathcal {S}_{\mathsf {IS}}^\pi ()\).

  • \(\mathtt {Impersonation \ stage:}\) We run the protocol \(\pi ({\mathcal {A}}(\mathsf {pk},\tilde{\mathsf {v}}^{\mathcal {P},\mathcal {V},\ell }),\mathcal {V}(\mathsf {pk}))\) serving the role of the honest verifier. Then we use the rewinding technique: we fix the random value x used in \(X=g^x\) by the algorithm \({\mathcal {A}}\), and let \({\mathcal {A}}\) interact twice with the verifier, choosing each time a different random challenge, say \(c_1\) and \(c_2\). These runs result in \(x, c_1, s_1\) and \(x, c_2, s_2\) accordingly. If the verifier accepts in both cases, we have \(s_1=x+ac_1\) and \(s_2=x+ac_2\). Thus \(s_1-s_2\) = \(a(c_1-c_2)\), so we can compute \(a=(s_1-s_2)/(c_1-c_2)\).   \(\square \)
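The extraction step at the end of the proof is a small piece of modular arithmetic. The sketch below works it through in a tiny toy group \((q = 11)\) with secret key and challenges of our own choosing; the inverse of \(c_1-c_2\) is computed via Fermat's little theorem since q is prime.

```python
# Extraction from two accepting transcripts with the same commitment X = g^x
# but different challenges: a = (s1 - s2) / (c1 - c2) mod q.
# Toy values: q = 11, secret a = 7, fixed ephemeral x = 5.
q = 11
a = 7
x = 5                                     # same ephemeral in both runs
c1, c2 = 3, 9                             # two distinct challenges
s1 = (x + a * c1) % q                     # response of the first run
s2 = (x + a * c2) % q                     # response of the second run

# modular inverse via Fermat: t^(q-2) = t^(-1) mod q for prime q
inv = pow((c1 - c2) % q, q - 2, q)
recovered = ((s1 - s2) * inv) % q
assert recovered == a                     # the long-term key is extracted
```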

3 New Stronger Security Model

In this section we propose the new strong security model for \({\mathsf {IS}}\). In this model we assume that in the learning phase the adversary can influence the choices of the ephemeral secrets of the prover in an adaptive manner. In the worst-case scenario, the malicious verifier, denoted by \(\mathcal {\tilde{\mathcal {V}}}\), can choose ephemerals on behalf of \(\mathcal {P}\) and inject them into \(\mathcal {P}\), even against its will and without its knowledge, over some subliminal channel, before the computation involving x on the \(\mathcal {P}\) side starts. Let \(\bar{x}\) denote the ephemeral secret chosen by \(\mathcal {\tilde{\mathcal {V}}}\), and \(\mathcal {P}^{\bar{x}}\) denote the honest prover \(\mathcal {P}\) with injected \(\bar{x}\), which uses this value as the random ephemeral during the protocol execution. Furthermore, we assume the subsequent choices of \(\mathcal {\tilde{\mathcal {V}}}\) can be adjusted according to responses from \(\mathcal {P}^{\bar{x}}\) during the subsequent protocol executions. We denote the protocol execution in which the ephemeral secret \(\bar{x}\) was chosen by \(\mathcal {\tilde{\mathcal {V}}}\) and \(\mathcal {P}\) was forced to use it as \(\pi (\mathcal {P}^{\bar{x}}(\mathsf {sk},\mathsf {pk}),\mathcal {\tilde{\mathcal {V}}}(\mathsf {pk},\bar{x}))\). We denote by \(\mathsf {v}^{\mathcal {P},\mathcal {\tilde{\mathcal {V}}},\mathbf {\bar{x}}(\ell )}\) the total knowledge \(\mathcal {\tilde{\mathcal {V}}}\) can gain after the polynomial number \(\ell \) of executions of \(\pi (\mathcal {P}^{\bar{x}}(\mathsf {sk},\mathsf {pk}),\mathcal {\tilde{\mathcal {V}}}(\mathsf {pk},\bar{x}))\), where \(\mathbf {\bar{x}}(\ell )=\{{\bar{x}}_1,\ldots ,{\bar{x}}_{\ell }\}\) are the adaptive choices of \(\mathcal {\tilde{\mathcal {V}}}\).

Definition 3

(Chosen Prover Ephemeral – ( \({\mathsf {CPE}}\) )). Let \({\mathsf {IS}}=(\mathsf {ParGen}\), \(\mathsf {KeyGen}\), \(\mathcal {P}\), \(\mathcal {V}\), \(\pi )\) be an identification scheme. We define the security experiment \({\texttt {Exp}}^{\mathsf {CPE},\lambda ,\ell }_{\mathsf {IS}}\):

  • \(\mathtt {Init \ stage:}\) Let \(\mathsf {params}\leftarrow \mathsf {ParGen}(1^\lambda )\), \((\mathsf {sk},\mathsf {pk})\leftarrow \mathsf {KeyGen}()\). Let the adversary \({\mathcal {A}}\) be the coalition of malicious algorithms \((\tilde{\mathcal {P}},\tilde{\mathcal {V}})\), given the public key \(\mathsf {pk}\).

  • \(\mathtt {Query \ stage:}\) \({\mathcal {A}}\) runs a polynomial number \(\ell \) of executions of the protocol \(\pi (\mathcal {P}^{{\bar{x}}_i}(\mathsf {sk},\mathsf {pk}),\mathcal {\tilde{\mathcal {V}}}(\mathsf {pk},{\bar{x}}_i))\) with the honest prover \(\mathcal {P}^{{\bar{x}}_i}\), collecting \(\mathsf {v}^{\mathcal {P},\mathcal {\tilde{\mathcal {V}}},\mathbf {\bar{x}}(\ell )}\), where \({\bar{x}}_i \in \{{\bar{x}}_1,\ldots ,{\bar{x}}_{\ell }\}\) denotes the adaptive choice of \(\mathcal {\tilde{\mathcal {V}}}\) injected as the ephemeral into the prover \(\mathcal {P}^{\bar{x}_i}\) in the ith execution.

  • \(\mathtt {Impersonation \ stage:}\) \({\mathcal {A}}\) runs the protocol \(\pi (\mathcal {\tilde{\mathcal {P}}}(\mathsf {pk},\mathsf {v}^{\mathcal {P},\mathcal {\tilde{\mathcal {V}}},\mathbf {\bar{x}}(\ell )}),\mathcal {V}(\mathsf {pk}))\) with the honest verifier.

We define the advantage of \({\mathcal {A}}\) in the experiment \({\texttt {Exp}}^{\mathsf {CPE},\lambda ,\ell }_{\mathsf {IS}}\) as the probability of acceptance in the last stage:

$$\begin{aligned} {\mathbf {Adv}}({\mathcal {A}},{\texttt {Exp}}^{\mathsf {CPE},\lambda ,\ell }_{\mathsf {IS}})=\Pr [\pi (\mathcal {\tilde{\mathcal {P}}}(\mathsf {pk},\mathsf {v}^{\mathcal {P},\mathcal {\tilde{\mathcal {V}}},\mathbf {\bar{x}}(\ell )}),\mathcal {V}(\mathsf {pk}))\rightarrow 1]. \end{aligned}$$

We say that the identification scheme is secure if \({\mathbf {Adv}}({\mathcal {A}},{\texttt {Exp}}^{\mathsf {CPE},\lambda ,\ell }_{\mathsf {IS}})\) is negligible in \(\lambda \).

It is easy to check that the regular Schnorr \({\mathsf {IS}}\) is not secure in the proposed \({\mathsf {CPE}}\) model. An adversary with the injected \(\bar{x}\) can easily compute the secret key \(a=(s-\bar{x})/c\) from a single transcript, and impersonate the prover \(\hat{A}\) later on.
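In a toy group (parameters of our own choosing, \(q = 11\)) this attack is one line of arithmetic: the adversary knows \(\bar{x}\) because it injected it, picks the challenge c itself, and solves the linear response equation for a.

```python
# Why plain Schnorr fails in the CPE model: the adversary who injected
# the ephemeral x_bar reads the key a = (s - x_bar) / c off one transcript.
q = 11
a = 7                                  # prover's long-term secret, to be stolen
x_bar = 4                              # ephemeral injected by the adversary
c = 6                                  # challenge chosen by the adversary
s = (x_bar + a * c) % q                # prover's honest response

# modular inverse of c via Fermat's little theorem (q prime)
stolen = ((s - x_bar) * pow(c, q - 2, q)) % q
assert stolen == a                     # full key recovery from a single run
```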

4 Modified Schnorr Identification Scheme

The idea behind the modification is the following. We want to address the threat that the adversary, with the knowledge of c, \(s=x+ac\) and the leaked x, computes the static secret a. Therefore instead of sending s in plaintext, the prover sends s hidden in the exponent \(S=\hat{g}^s\), where the new generator \(\hat{g}=\mathcal {H}(X|c)\) is obtained with the hash function \(\mathcal {H}\). Now even if the ephemeral value x is leaked, the adversary has to face the DLP to obtain the value a from S. On the other hand we use the bilinear map \(\hat{e}\) on the verifier side to check the linear equation \(s=x+ac\) in the exponent. Indeed it holds that \(\hat{e}(S,g)\) = \(\hat{e}(\mathcal {H}(X|c),XA^{c})\) due to the fact that \(\hat{e}(\mathcal {H}(X|c),XA^{c})\) = \(\hat{e}(\mathcal {H}(X|c)^{x+ac},g)\).
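The pairing check can be modeled in a toy group without a real pairing library: in a tiny order-q subgroup discrete logs are computable by table lookup, so by bilinearity we may represent \(\hat{e}(g^u,g^v)=\hat{e}(g,g)^{uv}\) simply by the exponent \(uv \bmod q\). Both `toy_pairing` and `toy_hash` below are our own illustrative stand-ins, not real cryptographic primitives.

```python
# Toy model of the modified verification e(S, g) = e(H(X|c), X * A^c).
# In the tiny group (p, q, g) = (23, 11, 4) we can tabulate discrete logs,
# so a pairing value e(g^u, g^v) is represented by u*v mod q.
p, q, g = 23, 11, 4
dlog = {pow(g, i, p): i for i in range(q)}      # brute-force dlog table

def toy_pairing(u, v):
    """Stand-in for e(u, v): exponent dlog(u)*dlog(v) of e(g,g)."""
    return (dlog[u] * dlog[v]) % q

def toy_hash(data):
    """Hypothetical hash into <g> (illustrative only, not H from the paper)."""
    return pow(g, 1 + hash(data) % (q - 1), p)

a, x, c = 7, 5, 3                               # secret key, ephemeral, challenge
A, X = pow(g, a, p), pow(g, x, p)
g_hat = toy_hash((X, c))                        # g_hat = H(X|c)
S = pow(g_hat, (x + a * c) % q, p)              # response S = g_hat^{x+ac}

# verifier's check: e(S, g) == e(g_hat, X * A^c)
assert toy_pairing(S, g) == toy_pairing(g_hat, (X * pow(A, c, p)) % p)
```

The check passes because both sides reduce to the exponent \(r\cdot (x+ac)\), where r is the discrete log of \(\hat{g}\), which is exactly the equation \(s=x+ac\) "in the exponent".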

The proposed modified Schnorr \({\mathsf {IS}}\) is depicted in Fig. 2.

Fig. 2. The modified Schnorr identification scheme.

In Fig. 3 we depict side-by-side the differences between the original Schnorr and our modified version. Our proposition has one more exponentiation and one more hashing on the prover’s side. On the other hand the verifier does not need an extra exponentiation during verification. Indeed it has even one explicit exponentiation less - it does not have to compute \(g^s\). Instead, it additionally has to compute the hash and compare the results of two bilinear map evaluations.

Fig. 3. Schnorr identification comparison.

4.1 Simulation in the Passive Adversary Mode

The modified Schnorr \({\mathsf {IS}}\) preserves the simulatability property of its original version. In the weaker passive adversary model, the eavesdropping adversary learns the transcript tuple \(T=(X,c,S)\), similarly as in the original Schnorr \({\mathsf {IS}}\). The random variables X, c, S are uniformly distributed on their domains. In \(\pi \) the variables \(x=\log _gX\) and c are mutually independent, and together determine \(S=\hat{g}^{x+ac}\) for the fixed a. On the other hand the protocol transcript can be efficiently simulated by choosing \(\tilde{s},\tilde{c}\) first, then subsequently computing \(\tilde{X}=(g^{\tilde{s}}/A^{\tilde{c}})\), and only then \(\hat{g}=\mathcal {H}(\tilde{X}|\tilde{c})\) and \(\tilde{S}=\hat{g}^{\tilde{s}}\). Observe that for this transcript the verification holds: \(\hat{e}(\tilde{S},g)\)=\(\hat{e}(\mathcal {H}(\tilde{X}|\tilde{c}),\tilde{X}A^{\tilde{c}})\). Then the simulator algorithm, denoted by \(\mathcal {S}_{\mathsf {IS}}^\pi (\tilde{c}\leftarrow _R \mathbb {Z}^{*}_q,\tilde{s}\leftarrow _R \mathbb {Z}^{*}_q)\), can replay the precomputed transcript \(\tilde{T}=(\tilde{X},\tilde{c},\tilde{S})\) in the correct order, thus simulating the interaction between the prover and the verifier. The tuples \(T=(X,c,S)\) and \(\tilde{T}=(\tilde{X},\tilde{c},\tilde{S})\) are identically distributed.

4.2 Simulation in the Chosen Prover Ephemeral Mode

The modified Schnorr \({\mathsf {IS}}\) is also simulatable in the proposed stronger Chosen Prover Ephemeral (CPE) model. Assuming a programmable ROM (Random Oracle Model), we can simulate the protocol \(\pi (\mathcal {P}^{{\bar{x}}}(\mathsf {pk})\), \(\mathcal {\tilde{\mathcal {V}}}^{\mathcal {O}_\mathcal {H}}(\mathsf {pk},{\bar{x}}))\rightarrow 1\) on behalf of the prover \(\mathcal {P}^{{\bar{x}}}(\mathsf {pk})\) without the secret key \(\mathsf {sk}\), using the injected ephemerals \(\bar{x}\) and interacting with the active adversary \(\mathcal {\tilde{\mathcal {V}}}^{\mathcal {O}_\mathcal {H}}(\mathsf {pk},{\bar{x}})\), which injects the ephemerals \(\bar{x}\) into the prover and performs adaptive choices of challenges. Note that the adversary calls the oracle \(\mathcal {O}_\mathcal {H}\) to compute the hash value for the queried input.

Theorem 2

The modified Schnorr protocol (depicted in Fig. 2) is simulatable in the CPE model (of Definition 3).

Proof

The simulator \(\mathcal {S}_{\mathsf {IS}}^{{\mathsf {CPE}},\pi }()\) is defined in the following way:

  1. Hash queries \(\mathcal {O}_\mathcal {H}\): We set up a ROM table for hash queries \(\mathcal {O}_\mathcal {H}\). The table has three columns I, H, r: the input, the output, and the masked exponent, respectively. On each query \(\mathcal {O}_\mathcal {H}(I_i)\) we check if it is already defined - if so, we return the corresponding output \(H_i\). Otherwise we choose \(r_i\leftarrow _R \mathbb {Z}^{*}_q\), compute \(H_i=g^{r_i}\), place a new row \((I_i, H_i, r_i)\) in the ROM table, and return \(H_i\).

  2. Commitment X: Given the injected ephemeral \(\bar{x}\), we use it to compute \(\bar{X}=g^{\bar{x}}\). We send \(\bar{X}\) to the verifier \(\mathcal {\tilde{\mathcal {V}}}^{\mathcal {O}_\mathcal {H}}(\mathsf {pk},\bar{x})\) in the first message.

  3. Proof S: On receiving \(\tilde{c}\) from the verifier, we call \(\mathcal {O}_\mathcal {H}(\bar{X}|\tilde{c})\). We check the \(\mathcal {O}_\mathcal {H}\) table for the input \(\bar{X}|\tilde{c}\), locate and retrieve the corresponding \(g^r\) and r. We set \(\hat{g}=g^r\). We compute \(\tilde{S} = \bar{X}^{r}A^{r\tilde{c}} = \hat{g}^{\bar{x}+a\tilde{c}}\) for \(\hat{g}\leftarrow \mathcal {O}_\mathcal {H}(\bar{X}|\tilde{c})\). Now the verification on the verifier side holds: \(\hat{e}(\tilde{S},g)\)=\(\hat{e}(\hat{g},\bar{X}A^{\tilde{c}})\), and the real transcript tuple \(T=(X,c,S)\) and the simulated \(\tilde{T}=(\bar{X},\tilde{c},\tilde{S})\) are identically distributed.   \(\square \)
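The lazily programmed ROM table of step (1) is essentially a dictionary keyed by the query input. A toy sketch (the group values and helper names are our own illustrative choices): the simulator answers hash queries with \(H=g^r\) for a fresh random r, so it always knows the discrete log of every hash value it hands out, which is precisely the trapdoor used to build \(\tilde{S}\) in step (3).

```python
# Sketch of the programmable random oracle O_H: a lazily filled table
# (I, H, r) with H = g^r, so the simulator knows the exponent r ("trapdoor")
# behind every answer. Tiny toy group (p, q, g) = (23, 11, 4).
import random

p, q, g = 23, 11, 4
rom = {}                                   # ROM table: input -> (H, r)

def oracle_H(inp):
    if inp not in rom:                     # first query on this input:
        r = random.randrange(1, q)         #   fresh masked exponent r
        rom[inp] = (pow(g, r, p), r)       #   program H = g^r, remember r
    return rom[inp][0]                     # repeated queries stay consistent

# consistency, as required of any hash function
assert oracle_H("X|c") == oracle_H("X|c")
# the simulator can recover the exponent r behind any answer it gave
H, r = rom["X|c"]
assert pow(g, r, p) == H
```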

4.3 Security Analysis

We follow the same proof methodology as in the case of the original Schnorr \({\mathsf {IS}}\). First we allow the adversary to gain some knowledge: we simulate the proofs on behalf of the prover (but without its secret key), interacting with the malicious verifier, which injects the ephemerals for our usage. We are able to do this in ROM. Then, assuming that the advantage of the adversary is non-negligible, in the impersonation stage we use the rewinding technique to obtain two tuples \((X,c_1,S_1)\), \((X,c_2,S_2)\), which subsequently help us to break the underlying hard problem - \(\mathsf {GDH}\) in this case - also with non-negligible probability.

Theorem 3

Let \({\mathsf {IS}}\) denote the modified Schnorr identification scheme (as of Fig. 2). \({\mathsf {IS}}\) is secure (in the sense of Definition 3), i.e. the advantage \({\mathbf {Adv}}({\mathcal {A}}\), \({\texttt {Exp}}^{\mathsf {CPE},\lambda ,\ell }_{\mathsf {IS}})\) is negligible in \(\lambda \), for any PPT algorithm \({\mathcal {A}}\).

Proof

(Sketch). We use ROM for hash queries. The proof is by contradiction. Suppose there is an adversary \({\mathcal {A}}= (\mathcal {\tilde{\mathcal {P}}}, \mathcal {\tilde{\mathcal {V}}})\) for which \({\mathbf {Adv}}({\mathcal {A}},{\texttt {Exp}}^{\mathsf {CPE},\lambda ,\ell }_{\mathsf {IS}})\) is non-negligible. Then it can be used as a subprocedure by an efficient algorithm \({\mathcal {A}}_{\mathsf {GDH}}\) that breaks \(\mathsf {GDH}\) for the given instance \((g^\alpha , g^\beta )\), computing \(g^{\alpha \beta }\) also with non-negligible probability, in the following way:

  • \(\mathtt {Init \ stage:}\) Let \(\mathsf {params}\leftarrow \mathbb {G}=(p, q, g, G)\) s.t. \(\mathsf {CDH}\) holds and \((g^\alpha , g^\beta )\) is a \(\mathsf {GDH}\) instance in \(\mathbb {G}\). We set \(\mathsf {pk}=g^\alpha \). The adversary \({\mathcal {A}}\) is given the public key \(\mathsf {pk}=g^\alpha \). We set up a ROM table for hash queries \(\mathcal {O}_\mathcal {H}\). The table has three columns I, H, r for the input, the output and the masked exponent, respectively. We will serve hash queries in the following way: in the \(\mathtt {Query \ stage}\) we will use the simulator \(\mathcal {S}_{\mathsf {IS}}^{{\mathsf {CPE}},\pi }\) (as in the proof of Theorem 2). In the \(\mathtt {Impersonation \; stage}\) we will provide to the adversary the value \((g^\beta )^r\), where r is a random mask.

  • \(\mathtt {Query \ stage:}\) We simulate in ROM a polynomial number \(\ell \) of executions of the protocol \(\pi (\mathcal {P}^{{\bar{x}}_i}(\mathsf {pk}),\mathcal {\tilde{\mathcal {V}}}^{\mathcal {O}_\mathcal {H}}(\mathsf {pk},{\bar{x}}_i))\) without the secret key, interacting with the active adversary verifier \(\mathcal {\tilde{\mathcal {V}}}^{\mathcal {O}_\mathcal {H}}(\mathsf {pk},{\bar{x}}_i)\), which injects the ephemerals \({\bar{x}}_i\), by running the simulator \(\mathcal {S}_{\mathsf {IS}}^{{\mathsf {CPE}},\pi }\):

    (1) Serving hash queries \(\mathcal {O}_\mathcal {H}\) : On each query \(\mathcal {O}_\mathcal {H}(I_i)\) we check whether it is already defined - if so, we return the corresponding output \(H_i\). Otherwise we choose \(r_i\leftarrow _R \mathbb {Z}^{*}_q\), compute \(H_i=g^{r_i}\), place a new row \((I_i, H_i, r_i)\) in the ROM table, and return \(H_i\).

    (2) Commitment X : When the ephemeral \(\bar{x}\) is injected, we use it to compute \(\tilde{X}=g^{\bar{x}}\). We send \(\tilde{X}\) to the verifier \(\mathcal {\tilde{\mathcal {V}}}^{\mathcal {O}_\mathcal {H}}(\mathsf {pk},\bar{x})\) in the first message.

    (3) Proof S : On receiving \(\tilde{c}\) from the verifier, we call \(\mathcal {O}_\mathcal {H}(\tilde{X}|\tilde{c})\). We check the \(\mathcal {O}_\mathcal {H}\) table for the input \(\tilde{X}|\tilde{c}\), locate and retrieve the corresponding \(g^r\) and r, and set \(\hat{g}=g^r\). We compute \(\tilde{S} = \tilde{X}^{r}A^{r\tilde{c}} = \hat{g}^{\bar{x}+a\tilde{c}}\). Now the verification on the verifier side holds: \(\hat{e}(\tilde{S},g)=\hat{e}(\hat{g},\tilde{X}A^{\tilde{c}})\) for \(\hat{g}\leftarrow \mathcal {O}_\mathcal {H}(\tilde{X}|\tilde{c})\). Observe that the simulated transcript tuple \(\tilde{T}=(\tilde{X},\tilde{c},\tilde{S})\) and the tuple \(T=(X,c,S)\) from the real protocol are identically distributed. Let \(\mathsf {v}^{\mathcal {P},\mathcal {\tilde{\mathcal {V}}},\mathbf {\bar{x}}(\ell )}\) be the view of the adversary collected in this stage, where \({\bar{x}}_i \in \{{\bar{x}}_1,\ldots ,{\bar{x}}_{\ell }\}=\mathbf {\bar{x}}(\ell )\) denotes the adaptive choices of \(\mathcal {\tilde{\mathcal {V}}}\) injected as ephemerals to the prover \(\mathcal {P}^{\bar{x}_i}\) in the ith execution.
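As a numeric sanity check of steps (1)-(3), the simulation identity \(\tilde{X}^{r}A^{r\tilde{c}} = \hat{g}^{\bar{x}+a\tilde{c}}\) can be reproduced in a toy order-q subgroup of \(\mathbb {Z}^{*}_p\). The sketch below is ours, not the paper's implementation: all concrete values (a, \(\bar{x}\), c, r) are hypothetical, and the pairing check is replaced by a direct comparison with the honest prover's response, which is only possible in this toy setting where we also know the secret key.

```python
# Toy group: q | p - 1 and g generates the order-q subgroup of Z_p^*.
p, q = 607, 101
g = pow(3, (p - 1) // q, p)      # 3^((p-1)/q) mod p has order q

# Random-oracle row for the input X|c: output H = g^r, masked exponent r stored.
r = 53
hat_g = pow(g, r, p)             # hat_g = O_H(X|c) = g^r

a = 17                           # prover's long-term secret (the simulator does NOT use it)
A = pow(g, a, p)                 # public key A = g^a
x_bar = 29                       # ephemeral injected by the malicious verifier
X = pow(g, x_bar, p)             # commitment X = g^x_bar
c = 41                           # challenge

# Simulator: S = X^r * A^(r*c), computed from public values and the trapdoor r only.
S_sim = (pow(X, r, p) * pow(A, (r * c) % q, p)) % p

# Honest prover with the secret key: S = hat_g^(x_bar + a*c).
S_real = pow(hat_g, (x_bar + a * c) % q, p)

assert S_sim == S_real           # identical transcripts, no secret key needed
```

Both sides equal \(g^{r(\bar{x}+ac)}\), which is why the simulated and real transcripts are identically distributed.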

  • \(\mathtt {Impersonation \ stage:}\) In ROM we run \(\pi (\mathcal {\tilde{\mathcal {P}}}^{\mathcal {O}_\mathcal {H}}(\mathsf {pk},\mathsf {v}^{\mathcal {P},\mathcal {\tilde{\mathcal {V}}},\mathbf {\bar{x}}(\ell )}),\mathcal {V}(\mathsf {pk}))\), serving the role of the honest verifier. We use the rewinding technique: we fix the random value x used in \(X=g^x\) by the algorithm \(\mathcal {\tilde{\mathcal {P}}}\), and let \(\mathcal {\tilde{\mathcal {P}}}\) interact twice with the verifier, choosing each time a different random challenge, \(c_1\) and \(c_2\), such that neither \(X|c_1\) nor \(X|c_2\) was an input to \(\mathcal {O}_\mathcal {H}\) in the \(\mathtt {Query \ stage}\), and programming \(H_1 = \mathcal {O}_\mathcal {H}(X|c_1)\leftarrow (g^\beta )^{r_1}\), \(H_2 = \mathcal {O}_\mathcal {H}(X|c_2)\leftarrow (g^\beta )^{r_2}\) for \(r_1, r_2\leftarrow _R \mathbb {Z}^{*}_q\). These result in the tuples (\(X, c_1, S_1, \hat{g}_1, r_1\)) and (\(X, c_2, S_2, \hat{g}_2, r_2\)), respectively. If we accept the adversary prover both times, i.e. \(\hat{e}({S_1},g) = \hat{e}(\hat{g}_1,XA^{c_1})\) and \(\hat{e}({S_2},g) = \hat{e}(\hat{g}_2,XA^{c_2})\), we conclude: \(S_1 = (g^{\beta r_1})^x(g^{\beta r_1})^{\alpha c_1}\) and \(S_2=(g^{\beta r_2})^x(g^{\beta r_2})^{\alpha c_2}\). Thus we have \(S_1^{{r_1}^{-1}}/{S_2}^{{r_2}^{-1}} = {(g^\beta )}^{\alpha c_1 - \alpha c_2}\), so we can compute \(g^{\alpha \beta } = (S_1^{{r_1}^{-1}}/{S_2}^{{r_2}^{-1}})^{{(c_1-c_2)}^{-1}}\).   \(\square \)
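The final extraction is plain exponent arithmetic, so it can also be checked numerically. The sketch below (ours; same toy subgroup as before, all concrete exponents hypothetical) builds the two accepting responses \(S_1, S_2\) exactly in the form derived above and recovers \(g^{\alpha \beta }\); the inverses \(r_i^{-1}\) and \((c_1-c_2)^{-1}\) are taken modulo the group order q.

```python
# Toy group: q | p - 1 and g generates the order-q subgroup of Z_p^*.
p, q = 607, 101
g = pow(3, (p - 1) // q, p)

alpha, beta = 19, 23            # GDH instance exponents (unknown to the reduction)
x = 31                          # fixed by rewinding: X = g^x in both runs
c1, c2 = 7, 11                  # two distinct challenges
r1, r2 = 13, 37                 # random masks: O_H(X|c_i) programmed as (g^beta)^{r_i}

inv = lambda z: pow(z, -1, q)   # inverse modulo the group order (Python 3.8+)

# Accepting responses: S_i = (g^{beta r_i})^x * (g^{beta r_i})^{alpha c_i}.
S1 = pow(g, (beta * r1 * (x + alpha * c1)) % q, p)
S2 = pow(g, (beta * r2 * (x + alpha * c2)) % q, p)

# Unmask: S_i^{1/r_i} = g^{beta x} * g^{alpha beta c_i}; the common g^{beta x}
# factor cancels in the quotient, leaving g^{alpha beta (c1 - c2)}.
T1 = pow(S1, inv(r1), p)
T2 = pow(S2, inv(r2), p)
ratio = (T1 * pow(T2, q - 1, p)) % p         # T1 / T2 (T2^{q-1} = T2^{-1} in the subgroup)
gdh = pow(ratio, inv((c1 - c2) % q), p)      # (...)^{(c1 - c2)^{-1}}

assert gdh == pow(g, (alpha * beta) % q, p)  # recovered g^{alpha beta}
```

The assertion holds for any choice of nonzero masks and distinct challenges, which is exactly why the reduction succeeds whenever both rewound runs are accepting.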

5 Conclusion

In this paper we modify the Schnorr \({\mathsf {IS}}\) from [1] in such a way that it becomes immune to ephemeral key setting. Such setting can be performed e.g. by a malicious verifier who exploits knowledge of an erroneous pseudo-random number generator implemented in the device of the prover. We observe that the secret key of the prover, masked by the ephemeral values in the response message of the protocol, is no longer secure in such setups. Therefore the prover, in the response message, sends the fragile values hidden in the exponent. The verifier uses a bilinear map to check, in the exponent, that the response is consistent with the commitment and the public key of the prover. We introduce a new, stronger security model to cover that scenario, and we prove the security of the proposed scheme in our model.