1 Introduction

Identification schemes (\(\mathsf {IS}\)) based on Public Key Infrastructure (PKI) allow a Prover, holding a secret key, to prove its possession via a zero-knowledge protocol executed with a Verifier holding a corresponding public key. There are two common requirements that an \(\mathsf {IS}\) should satisfy: (1) security - a malicious Prover should not be able to successfully complete the protocol without the corresponding secret key; (2) privacy - in some scenarios, the protocol should be deniable, meaning that its transcript must not be a strong proof of the Prover’s participation. Alternatively, there are cases in which the protocol should not be deniable and must provide a strong proof of the Prover’s participation. Typically, \({\mathsf {IS}}\)es require complex computations over large numbers, and are deployed on the users’ electronic devices, which store sensitive secret keys. There are several common threats concerning this aspect, emerging from the fact that the end users see the devices as black boxes and have to trust that the scheme implementation has not been tampered with. Very often, such devices are produced by vendors beyond the end users’ control, and as such are subject to malicious modification, which can bring about the following vulnerabilities:

  • Prover’s Ephemeral Leakage: Especially important for three round identification schemes, with three messages exchanged between a Prover and a Verifier:

    1. the Prover sends a commitment to a random value to the Verifier;

    2. the Verifier sends to the Prover another random value called a challenge;

    3. the response message sent by the Prover is a function of the challenge and the secret key, masked by the committed ephemeral.

    At the Verifier’s side, this response is checked by means of the public key, the commitment and the challenge. If a malicious manufacturer implements a covert channel within a Prover’s device, it can learn (or set) ephemeral values coined in the commitment phase, and unmask the secret key from the response. Thus, ephemeral leakage enables subsequent impersonation attacks using the Prover’s identity. Note that the Schnorr [2] and Okamoto [3] \({\mathsf {IS}}\)es are vulnerable to this attack. Recently, a remedy for this problem has been proposed in [1]. The solution is quite flexible and works for many similar three-round constructions.

  • Verifier’s Ephemeral Leakage: Alternatively, if there is a back-door channel in a Verifier’s device, it can be exploited by a malicious Prover to read ephemeral values coined by the Verifier before the challenge phase. There are \({\mathsf {IS}}\)es which rely on the secrecy of such values, e.g. [4,5,6]. In all these schemes, an Adversary knowing the Verifier’s ephemeral value can impersonate the Prover without the secret key. It is worth noticing that typical three-round identification schemes are immune by design to attacks based on the Verifier’s ephemeral leakage, since the only random value of the Verifier is the challenge, revealed to the Prover in the second message. This statement, however, requires the assumption that the challenge value is coined strictly after the commitment phase, as otherwise impersonation would be trivial, due to the simulatability property of the \(\mathsf {IS}\).

  • Losing Deniability: Although typical three-round \({\mathsf {IS}}\)es resist Verifier’s ephemeral leakage attacks, they suffer from deniability attacks mounted by an active malicious Verifier. Indeed, instead of coining the challenge at random, the Adversary can use the Fiat-Shamir transformation [7] and compute the challenge as a hash value over the commitment, this way turning the scheme into an undeniable signature.
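The Prover’s ephemeral leakage attack described above can be made concrete for the Schnorr scheme. The following Python sketch runs one protocol round over a toy group and then unmasks the secret key from the public transcript once the ephemeral x leaks; the group parameters are illustrative only (a real deployment uses a group with an approximately 256-bit order):

```python
import random

# Toy Schnorr parameters: p prime, q | p-1 prime, g generates the order-q subgroup
p, q, g = 607, 101, 64

sk = random.randrange(1, q)          # Prover's secret key
pk = pow(g, sk, p)                   # Prover's public key

# --- one honest protocol run ---
x = random.randrange(1, q)           # Prover's ephemeral value
X = pow(g, x, p)                     # commitment (message 1)
c = random.randrange(1, q)           # Verifier's challenge (message 2)
s = (x + c * sk) % q                 # response (message 3)

# Verifier's check: g^s == X * pk^c (mod p)
assert pow(g, s, p) == (X * pow(pk, c, p)) % p

# --- ephemeral leakage: a covert channel reveals x ---
# The secret key is unmasked from the public transcript (X, c, s):
recovered_sk = ((s - x) * pow(c, -1, q)) % q
assert recovered_sk == sk
```

The last two lines are exactly the unmasking step: since \(s = x + c \cdot sk \bmod q\), knowledge of x turns the response into a linear equation for sk.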

Problem Statement: In this paper we address the following issue: How to design and implement a deniable \({\mathsf {IS}}\):

  1. secure against ephemeral leakage on both the Prover’s and the Verifier’s side;

  2. withstanding attacks based on the Fiat-Shamir transformation.

1.1 Contribution of the Paper

The contribution of the paper is the following:

  • We introduce a new strong security model for deniable identification schemes in which we allow Adversaries:

    • to set ephemerals on Provers’ side in the Query Stage of the security experiment,

    • to read ephemerals used on Verifiers’ side in the final (Impersonation) Stage of the security experiment.

      We define the \(\mathsf {IS}\) to be secure if no Adversary, even given such power and knowledge, is able to impersonate a Prover without their secret key.

  • We propose a general extension to three-round identification protocols, e.g. [1,2,3], hardening them against attacks on deniability via the Fiat-Shamir transformation, and making them secure in our stronger model.

  • We show an example of our extension based on a modified Schnorr scheme, and prove its security in our model.

Our proposition is useful for systems based on three-round \(\mathsf {IS}\), where randomness leakage is possible. There is a growing demand for schemes secure in such scenarios, due to recent revelations regarding undermining cryptographic standards and implementations.

Remark: Note that typical 4-round Malicious Verifier Zero Knowledge schemes, which are based on commitments to the challenge, are not secure in the Verifier Leakage model. Coining the challenge before the Prover’s commitment is sent may lead to straightforward impersonation: the challenge leakage allows for a textbook simulation.

Previous Work. Identification schemes have been in use since the dawn of modern public-key cryptography [2, 7,8,9]. Schnorr introduced a DLP-based construction [2], followed by that of Okamoto [3]. Several \(\mathsf {IS} \)es are specialized in terms of models or attack schemes, e.g. [10, 11]. [12] introduced a notion of vulnerability to ephemeral leakage and proposed \(\mathsf {IS}\) protocols invulnerable to such attacks. [13] showed an \(\mathsf {IS}\) secure against Reset Attacks, based on stateless deterministic signature schemes, CCA-secure asymmetric encryption schemes, and pseudorandom functions with trapdoor commitments. Subversion resilience is a concept regarding the security of various schemes in settings where a malicious manufacturer may replace the original scheme with a modified one that behaves identically, but may leak additional information through hidden trapdoors in regular outputs [14,15,16].

The paper is organized in the following way. In Sect. 2 we review our strong security model, based on the models from [1]. In Sect. 3 we propose the extensions of generic three-round \({\mathsf {IS}}\)es following the commit-challenge-response schema, which protect against Fiat-Shamir transformation-based attacks on deniability. In Sect. 4 we modify the protocol from [1], and prove its security in our model.

2 System Model

Let us first recall the definition of \(\mathsf {IS}\) from [1] loosely based on Okamoto’s definition [3].

Definition 1

(Identification Scheme). An identification scheme \(\mathsf {IS}\) is a tuple of procedures \((\mathsf {PG}, \mathsf {KG}_\mathcal {P}, \mathsf {KG}_\mathcal {V}, \mathcal {P}, \mathcal {V}, \pi )\):

 

\(\mathsf {par}\leftarrow \mathsf {PG}(1^\lambda )\):

takes the security parameter \(\lambda \), and outputs public parameters.

\(({\mathsf {sk}},{\mathsf {pk}})\leftarrow \mathsf {KG}_\mathcal {P} (\mathsf {par})\):

outputs the secret and public keys of the Prover.

\(({\mathsf {se}},{\mathsf {pe}})\leftarrow \mathsf {KG}_\mathcal {V} (\mathsf {par})\):

(optional) outputs the secret and public keys of the Verifier.

\(\mathcal {P} ({\mathsf {sk}},{\mathsf {pe}})\):

denotes the Prover algorithm which interacts with the Verifier \(\mathcal {V} \).

\(\mathcal {V} ({\mathsf {pk}},{\mathsf {se}})\):

denotes the Verifier algorithm which interacts with the Prover \(\mathcal {P} \).

\(\pi (\mathcal {P},\mathcal {V})\):

denotes the protocol of interactions between \(\mathcal {P} \) and \(\mathcal {V} \).

 

\(\mathsf {IS}\) has Initialization and Operation Stages. In the Initialization Stage, parameters and keys for users are generated. In the Operation Stage, a user interactively proves its identity to the Verifier: \(\pi (\mathcal {P} ({\mathsf {sk}},{\mathsf {pe}}),\mathcal {V} ({\mathsf {pk}},{\mathsf {se}}))\). We write \(\pi (\mathcal {P},\mathcal {V})\rightarrow 1\) if \(\mathcal {P} \) and \(\mathcal {V} \) have mutually accepted each other in \(\pi \). The scheme is complete iff

$$ \Pr [({\mathsf {sk}},{\mathsf {pk}})\leftarrow \mathsf {KG}_\mathcal {P} (),({\mathsf {se}},{\mathsf {pe}})\leftarrow \mathsf {KG}_\mathcal {V} (), \pi (\mathcal {P} ({\mathsf {sk}},{\mathsf {pe}}),\mathcal {V} ({\mathsf {pk}},{\mathsf {se}}))\rightarrow 1]=1. $$

The optional Verifier key pair \(({\mathsf {se}}, {\mathsf {pe}})\) exists in several \(\mathsf {IS}\) schemes. If the \(\mathsf {IS}\) does not rely on it, or even explicitly denies its existence, we may assume that \(\mathsf {KG}_\mathcal {V} \) always returns \((\bot ,\bot )\) on any input.

2.1 Impersonation Resilience

The fundamental security requirement for \(\mathsf {IS}\) is that no malicious Prover algorithm \(\mathcal {A}\), without the secret key \({\mathsf {sk}}\) corresponding to the public key \({\mathsf {pk}}\) used by the Verifier, should be accepted in protocol \(\pi \). In other words, we require that probability \(\Pr [\pi (\mathcal {A}({\mathsf {pk}},{\mathsf {pe}}),\mathcal {V} ({\mathsf {pk}},{\mathsf {se}}))\rightarrow 1]\le \epsilon _\lambda \) where \(\epsilon _\lambda \) is a negligible function. We formally define our security model in Sect. 2.3.

2.2 Adversary Model

The process in which an Adversary gains knowledge about the attacked protocol is modeled by a Query Stage of the security experiment. This means that the Adversary runs a polynomial number \(\ell \) of the protocol executions between the Prover and the Verifier: \(\pi (\mathcal {P} ({\mathsf {sk}},{\mathsf {pe}}),\mathcal {V} ({\mathsf {pk}},{\mathsf {se}}))\). We consider the Active Adversary which actively participates in the stage, usually as a Verifier \(\widetilde{\mathcal {V}}\), i.e. it actively chooses messages sent to the Prover. Based on [1], we assume the Adversary additionally adaptively sets the ephemeral values for the Prover in each protocol run in the Query Stage. Finally, extending the model from [1], we consider the Adversary that can read ephemeral values of the Verifier in the Impersonation Stage, immediately after those values are produced.

2.3 Security Experiments

Let \({\bar{x}}_i\) be adaptive ephemerals from a malicious Verifier \(\widetilde{\mathcal {V}}\) injected to the Prover \(\mathcal {P} ^{\bar{x}_i}\) in the ith execution of the Query Stage. Let the view \(v_i=\{T_1,\ldots ,T_{i}\}\cup \{{\bar{x}}_1,\ldots ,{\bar{x}}_{i}\}\) be the total knowledge \(\mathcal {A}\) can gain after i runs of \(\pi \), where \(T_i\) is the transcript of the protocol messages in the ith execution. The \(\mathsf {IS}\) is \({\mathsf {CPLVE}}\)-secure if such a cumulated knowledge after \(\ell \) executions does not help the Adversary to be accepted by the Verifier except with a negligible probability.

Definition 2

(Chosen Prover-Leaked Verifier Ephemeral (\({\mathsf {CPLVE}}\))).

Let \(\mathsf {IS} =(\mathsf {PG}\), \(\mathsf {KG}_\mathcal {P} \), \(\mathsf {KG}_\mathcal {V} \), \(\mathcal {P} \), \(\mathcal {V} \), \(\pi )\). We define security experiment \({\mathtt {Exp}}^{{\mathsf {CPLVE}},\lambda ,\ell }_{\mathsf {IS}}\):  

\(\mathtt {Init \ Stage{:}}\) :

\(\mathsf {par}\leftarrow \mathsf {PG}(1^\lambda )\), \(({\mathsf {sk}},{\mathsf {pk}})\leftarrow \mathsf {KG}_\mathcal {P} (\mathsf {par})\), \(({\mathsf {se}},{\mathsf {pe}})\leftarrow \mathsf {KG}_\mathcal {V} (\mathsf {par})\). \(\mathcal {A}{:}(\widetilde{\mathcal {P}}({\mathsf {pk}},{\mathsf {pe}}),\widetilde{\mathcal {V}}({\mathsf {pk}},{\mathsf {pe}}))\).

\(\mathtt {Query \ Stage{:}}\) :

For \(i=1\) to \(\ell \) run \(\pi (\mathcal {P} ^{{\bar{x}}_i}({\mathsf {sk}},{\mathsf {pe}}),\widetilde{\mathcal {V}}({\mathsf {pk}},{\mathsf {pe}},{\bar{x}}_i,v_{i-1}))\), where \({\bar{x}}_i \in \{{\bar{x}}_1,\ldots ,{\bar{x}}_{\ell }\}\) are the adaptive ephemerals from \(\widetilde{\mathcal {V}}\) injected to the Prover \(\mathcal {P} ^{\bar{x}_i}\) in the ith execution, and \(v_{i-1}\) is the total view of \(\mathcal {A}\) until the ith execution.

\(\mathtt {Impersonation \ Stage{:}}\) :

\(\mathcal {A}\) executes the protocol \(\pi (\widetilde{\mathcal {P}}({\mathsf {pk}},{\mathsf {pe}},v_\ell , \mathbf {\bar{e}}),\mathcal {V} ({\mathsf {pk}},{\mathsf {se}}))\), where \({\mathbf {\bar{e}}}\) are the ephemerals of the Verifier leaked to the malicious Prover \(\widetilde{\mathcal {P}}\).

 

The advantage of \(\mathcal {A}\) in the experiment \({\mathtt {Exp}}^{{\mathsf {CPLVE}},\lambda ,\ell }_{\mathsf {IS}}\) is the probability of acceptance in the last stage:

$$ {{\mathbf {Adv}}}(\mathcal {A}, {\mathtt {Exp}}^{{\mathsf {CPLVE}},\lambda ,\ell }_{\mathsf {IS}}) = \Pr [\pi (\widetilde{\mathcal {P}}({\mathsf {pk}},{\mathsf {pe}},v_\ell , \mathbf {\bar{e}}),\mathcal {V} ({\mathsf {pk}},{\mathsf {se}}))\rightarrow 1]. $$

We say that the \(\mathsf {IS}\) is \((\lambda ,\ell )\)-\({\mathsf {CPLVE}}\)-secure if \({{\mathbf {Adv}}}(\mathcal {A}, {\mathtt {Exp}}^{{\mathsf {CPLVE}},\lambda ,\ell }_{\mathsf {IS}})\le \epsilon _\lambda \) and \(\epsilon _\lambda \) is negligible in \(\lambda \).

We utilize the definition of deniability from [17], which itself generalizes the idea from [18]. Let \(\pi \) be a protocol in \(\mathsf {IS}\). We assume an adversary \(\mathcal {M}\) which inputs an arbitrary number of public keys \(\pmb {{\mathsf {pk}}} = ({\mathsf {pk}}_1, \ldots , {\mathsf {pk}}_\ell )\), randomly coined with an appropriate key generation algorithm, and an auxiliary input a. The adversary initiates an arbitrary number of protocols with the honest parties, some in the role of the prover, others in the role of the verifier. The view of \(\mathcal {M}\) consists of its internal randomness and the transcripts of all the protocols in which \(\mathcal {M}\) participated. We denote this view as \(\mathsf {View}_\mathcal {M}(\pmb {\mathsf {pk}},a)\).

Definition 3

We say that \(\pi \) is a strongly deniable protocol of \(\mathsf {IS} \) with respect to the class A of auxiliary inputs if for any adversary \(\mathcal {M}\), for any input of public keys \(\pmb {{\mathsf {pk}}} = ({\mathsf {pk}}_1, \ldots , {\mathsf {pk}}_\ell )\) and any auxiliary input \(a \in A\), there exists a simulator \(SIM_\mathcal {M}\) that, running on the same inputs as \(\mathcal {M}\), produces a simulated view which is indistinguishable from the real view of \(\mathcal {M}\). That is, consider the following two probability distributions, where \(\pmb {{\mathsf {pk}}}\) = \(({\mathsf {pk}}_1, \ldots , {\mathsf {pk}}_\ell )\) is the set of public keys of the honest parties:

$$\begin{aligned}&\mathcal {R}eal(\lambda , a) = [({\mathsf {sk}}_i, {\mathsf {pk}}_i)\leftarrow \mathsf {KG}(1^\lambda ); (a, \pmb {{\mathsf {pk}}}, \mathsf {View}_\mathcal {M}(\pmb {\mathsf {pk}},a))] \\&\mathcal {S}im(\lambda , a) = [({\mathsf {sk}}_i, {\mathsf {pk}}_i)\leftarrow \mathsf {KG}(1^\lambda ); (a, \pmb {{\mathsf {pk}}}, SIM_\mathcal {M}(\pmb {\mathsf {pk}}, a))] \end{aligned}$$

then for all probabilistic poly-time machines Dist and all \(a \in A\), there exists a function \(\epsilon _\lambda \) negligible in \(\lambda \) s.t.:

$$ |\Pr {}_{x\in \mathcal {R}eal(\lambda , a)} [{ \textsf {Dist}}(x)=1] - \Pr {}_{x\in \mathcal {S}im(\lambda , a) } [{ \textsf {Dist}}(x)=1]|\le \epsilon _\lambda . $$

The idea behind this definition is that no adversary can follow a strategy that is not simulatable, i.e. a strategy for which there exists a distinguisher differentiating between the real adversary and a simulator. In other words, all adversarial strategies are simulatable.

2.4 Deniability Attack in Active Mode

Let \(T=(X,c,S)\) denote the transcript of a 3-round \(\mathsf {IS} \). In Fig. 1 we recall how an active Verifier can use the Fiat-Shamir transformation to generate an undeniable transcript of the protocol, effectively transforming the 3-round interactive \(\mathsf {IS} \) into a non-interactive signature scheme. The value r is a randomizing factor. In real signature schemes, the value r is replaced by a message m. The hash input \(i=(X,r)\) is an undeniable proof that the party \(\mathcal {P} \) has participated in the protocol.

Fig. 1. The attack on deniability of a typical 3-round \(\mathsf {IS} \).
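The transformation recalled in Fig. 1 can be sketched as follows, with a toy Schnorr group and SHA-256 as the hash; both instantiation choices are ours, purely for illustration:

```python
import hashlib
import random

# Toy Schnorr group (illustrative parameters only)
p, q, g = 607, 101, 64
sk = random.randrange(1, q)
pk = pow(g, sk, p)

def H(*parts):
    """Hash arbitrary values into the challenge space Z_q (SHA-256, our choice)."""
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

# The Prover commits honestly
x = random.randrange(1, q)
X = pow(g, x, p)

# The malicious Verifier derives the challenge via Fiat-Shamir
# instead of coining it at random
r = random.randrange(q)      # randomizing factor (a message m in a signature)
c = H(X, r)
S = (x + c * sk) % q         # the Prover responds as usual

# The pair i = (X, r) is now a transferable proof: any third party can check
assert c == H(X, r)                              # the challenge was not free
assert pow(g, S, p) == (X * pow(pk, c, p)) % p   # the transcript verifies
# i.e. (X, r, S) acts as a signature under pk, so deniability is lost
```

The final assertions are exactly what a third party would recompute: since c is bound to X through the hash, the transcript can no longer be claimed to be a simulation.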

3 Extended Identification Schemes

3.1 General Idea – Commitment to an Unknown Value

The general idea behind the proposed extensions is that, in order to achieve the strong deniability property in the Verifier Ephemeral Leakage scenario, the Verifier has to prove that the challenge has not been produced via a transformation of the Prover’s commitment X. Therefore, at the beginning of the protocol, the Verifier itself randomly chooses a commitment to a challenge unknown even to the Verifier, which can be opened only after the first message from the Prover has been obtained. We propose two different methods for this purpose: (a) the Deterministic Encryption Method; (b) the Proof of Computation Method; they can be used separately or together.

3.2 Deterministic Encryption Method

This extension is based on the assumption that the scheme in question can be used in conjunction with deterministic asymmetric encryption, for which, w.l.o.g., we use the following definition.

Definition 4

(Asymmetric Encryption Scheme). Let \(\mathsf {E} = (\mathsf {KG}_{\mathsf {E}}, \mathcal {E}, \mathcal {D})\) denote a secure deterministic encryption scheme, s.t. \(({\mathsf {se}},{\mathsf {pe}}) \leftarrow \mathsf {KG}_{\mathsf {E}}()\):

  1. \(\forall _{(m\in M)}: \mathcal {E}({\mathsf {pe}},m)\rightarrow c \in C\), s.t. \(\mathcal {D}({\mathsf {se}},c)\rightarrow m\),

  2. \(\forall _{(c\in C)}: \mathcal {D}({\mathsf {se}},c)\rightarrow m \in M\), s.t. \(\mathcal {E}({\mathsf {pe}},m)\rightarrow c\),

where \(({\mathsf {se}},{\mathsf {pe}})\) is a secret/public key pair; \(M, C\) are the plaintext and ciphertext spaces; \((\mathsf {KG}_{\mathsf {E}}, \mathcal {E}, \mathcal {D})\) are the key generation, encryption and decryption algorithms.

The only security property of \(\mathsf {E}\) that is required in the proposed scheme is its secrecy or one-wayness, that is:

Definition 5

(Encryption One-Wayness). An Asymmetric Encryption Scheme \(\mathsf {E}\) has encryption one-wayness property, if for any PPT algorithm \(\mathcal {A}\), for \(({\mathsf {se}},{\mathsf {pe}}) \leftarrow \mathsf {KG}_{\mathsf {E}}(1^\lambda )\) and for a \(c \in C\) selected uniformly at random:

$$\begin{aligned} \Pr [\mathcal {A}({\mathsf {pe}}, c) = \mathcal {D}({\mathsf {se}}, c)] \le \epsilon _\lambda \end{aligned}$$

for a negligible function \(\epsilon _\lambda \).

Note that the equation is actually equivalent to \( \Pr [\mathcal {E}({\mathsf {pe}}, \mathcal {A}({\mathsf {pe}}, c)) = c] \le \epsilon _\lambda \) and to \( \Pr [\mathcal {A}({\mathsf {pe}}, \mathcal {E}({\mathsf {pe}}, m)) = m] \le \epsilon _\lambda \) for uniformly selected message \(m \in M\).

An example of such a scheme is textbook RSA encryption [19]. With the \(\mathsf {E}\) scheme as of Definition 4, the extension is the following: at the beginning the Verifier chooses a ciphertext \(\hat{c}\) at random and immediately sends it to the Prover. This is a commitment to a yet unknown challenge c, and corresponds to the Verifier’s ephemeral value, known to the malicious Prover in the Verifier Ephemeral Leakage model. Then, the Verifier waits until it gets a commitment from the Prover, and only then opens the commitment \(m=\mathcal {D}({\mathsf {se}}, \hat{c})\), chooses a random bit \(b\leftarrow _R \{0,1\}\), and sends (m, b) to the Prover. The bit b allows for randomization of c, but the information content of b is insufficient to indicate the Prover’s identity, as both options are equally simulatable. Both parties compute the challenge with a secure one-way hash function: \(c=\mathcal {H}(m,b)\). This reflects the situation in which both the Prover and the Verifier learn the challenge c only after X has been received by the Verifier. On the other hand, the Prover checks whether the value m agrees with the commitment, \(\mathcal {E}({\mathsf {pe}}, m){\mathop {=}\limits ^{?}}\hat{c}\), and only then is it convinced that the challenge has not been produced by a Fiat-Shamir-like transformation over its own commitment X. If \(\mathcal {E}({\mathsf {pe}}, m)\ne \hat{c}\), the Prover stops the protocol.
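The message flow above can be sketched with textbook RSA playing the role of \(\mathsf {E}\); the toy key size and the SHA-256 challenge derivation are our illustrative assumptions, and the underlying \(\mathsf {IS}\) commitment X is left abstract:

```python
import hashlib
import random

# Textbook (deterministic) RSA key for the Verifier -- toy size, illustrative only
P_, Q_ = 1009, 1013
n = P_ * Q_
e = 17
d = pow(e, -1, (P_ - 1) * (Q_ - 1))

def H(*parts):
    """Challenge derivation c = H(m, b); SHA-256 is our assumption."""
    return int.from_bytes(
        hashlib.sha256("|".join(map(str, parts)).encode()).digest(), "big")

# 0. Verifier commits to a yet-unknown challenge: a random ciphertext c_hat
c_hat = random.randrange(2, n)

# 1. Prover sends its commitment X (details of the underlying IS elided)
X = "prover-commitment"

# 2. Only now does the Verifier open the commitment and coin the bit b
m = pow(c_hat, d, n)         # m = D(se, c_hat)
b = random.randrange(2)      # ... and sends (m, b) to the Prover

# 3. Prover checks the opening E(pe, m) =? c_hat, aborting on mismatch
assert pow(m, e, n) == c_hat

# 4. Both sides derive the same challenge
c = H(m, b)
```

Note that the honest Verifier cannot choose \(\hat{c}\) as a function of X, since \(\hat{c}\) is fixed before X exists; this is exactly what blocks the Fiat-Shamir-style derivation of the challenge.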

The proposed extension is depicted in Fig. 2. Note that the \(\mathsf {IS} \) has a slightly different interface, as \(\mathcal {P} \) and \(\mathcal {V} \) take each other’s public keys and their own secret keys on input (in contrast to Definition 1, where only the Prover’s keys were considered). The single random bit b has a very small influence on the protocol itself, but is crucial in proving the security of the underlying \(\mathsf {IS} \), when the proof uses rewinding techniques, in order to produce two distinct challenges for the same initial commitment.

Fig. 2. Extension based on the encryption scheme.

Lemma 1

The extension proposed in Fig. 2 protects against deniability attacks on 3-round \(\mathsf {IS} \) via the Fiat-Shamir transformation, as in Fig. 1.

Proof

The proof is by contradiction. Assume that a malicious Verifier successfully, with non-negligible probability, mounts the attack, resulting in the transcript \(T= (\hat{c}, X, m,b, S)\) and the proof \(i=(X,r)\), s.t. \(m=\mathcal {D}({\mathsf {se}}, \hat{c})\), \(c=\mathcal {H}(m,b)\) and \(c=\mathcal {H}'(X,r)\) for any hash function \(\mathcal {H}'\). Then we successfully find a collision for the hash function \(\mathcal {H}\) on the inputs (m, b) and (X, r) (if \(\mathcal {H}= \mathcal {H}'\)), or break the preimage resistance of either \(\mathcal {H}\) (with the image being \(c=\mathcal {H}'(X,r)\)) or \(\mathcal {H}'\) (with the image being \(c=\mathcal {H}(m,b)\)).    \(\square \)

Lemma 2

The extension proposed in Fig. 2 retains zero-knowledge properties of the underlying \(\mathsf {IS}\).

Proof

(Sketch). Completeness. Straightforward verification shows that if the original \(\mathsf {IS}\) is complete, the modified scheme is complete as well. The addition of \(\hat{c}\) and the way c is computed do not influence the protocol, provided that \(\mathcal {H}\) is a secure hash function indistinguishable from a Random Oracle into the challenge space.

Soundness. The method of proving soundness of the modified scheme is closely related to the method used to prove the soundness of \(\mathsf {IS} \). In principle, \(\mathcal {P} \) cannot derive any knowledge from the commitment scheme except with a negligible probability. If \(\mathcal {P} \) could derive any information about the challenge message before the commitment phase, they would be able to break the encryption one-wayness of \(\mathsf {E}\) (cf. Definition 5).

Zero-knowledge. The protocol is simulatable provided that \(\mathsf {IS} \) is simulatable. Let us choose \(m \in M\) and \(b \in \{0,1\}\) at random. Compute \(c = \mathcal {H}(m,b)\) and simulate the transcript (X, c, S) of \(\mathsf {IS} \) for the given challenge c. Compute the commitment \(\hat{c}= \mathcal {E}({\mathsf {pe}},m)\). Return \((\hat{c}, X, (m, b), S)\) as the simulated transcript.    \(\square \)
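The simulator sketched above can be made concrete with plain Schnorr as the underlying \(\mathsf {IS}\) and textbook RSA as \(\mathsf {E}\); both instantiations, and all parameters, are our illustrative choices:

```python
import hashlib
import random

# Toy parameters: Schnorr group for the underlying IS, textbook RSA for E
p, q, g = 607, 101, 64
sk = random.randrange(1, q)
pk = pow(g, sk, p)
P_, Q_ = 1009, 1013
n = P_ * Q_
e = 17

def H(*parts):
    """Hash (m, b) into the challenge space Z_q."""
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

def simulate():
    """Produce (c_hat, X, (m, b), S) without knowing sk (Lemma 2 sketch)."""
    m = random.randrange(2, n)
    b = random.randrange(2)
    c = H(m, b)
    S = random.randrange(q)                    # choose the response first...
    X = (pow(g, S, p) * pow(pk, -c, p)) % p    # ...and solve for the commitment
    c_hat = pow(m, e, n)                       # commitment to m: E(pe, m)
    return c_hat, X, (m, b), S

c_hat, X, (m, b), S = simulate()
c = H(m, b)
assert pow(g, S, p) == (X * pow(pk, c, p)) % p   # the transcript verifies
assert pow(m, e, n) == c_hat                     # the opening check passes
```

As the asserts confirm, the simulated tuple passes both the \(\mathsf {IS}\) verification and the Prover’s opening check, so the extension does not damage simulatability.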

3.3 Proof of Computation Method

This extension is based on the assumption that the Verifier’s computing device \(\mathsf {D}_\mathcal {V} \) is faster than the Prover’s computing device \(\mathsf {D}_\mathcal {P} \). Let \({\mathsf {RT}}_\mathsf {D}(A)\) denote the running time of the device \(\mathsf {D}\) executing an algorithm A. Let \((\mathsf {P},X)\) denote a computational problem in domain X, and \(\varsigma \) denote its solution. Let \(\mathsf {Ver}(\mathsf {P},X,\varsigma )\) denote a fast verification algorithm which returns 1 if \(\varsigma \) is a solution for \((\mathsf {P},X)\), or returns 0 otherwise. Let \(\mathcal {S}(\mathsf {P},X)\) denote the algorithm solving \((\mathsf {P},X)\). We assume that \(\mathcal {S}(\mathsf {P},X)\) is “quite” complex, that is, on any device \(\mathsf {D}\) it holds that: \({\mathsf {RT}}_\mathsf {D}(\varsigma =\mathcal {S}(\mathsf {P},X)) \gg {\mathsf {RT}}_\mathsf {D}(\mathsf {Ver}(\mathsf {P},X,\varsigma ))\). To capture that the Verifier’s computing device \(\mathsf {D}_\mathcal {V} \) is faster than the Prover’s \(\mathsf {D}_\mathcal {P} \), we assume that \({\mathsf {RT}}_{\mathsf {D}_\mathcal {V}}(\mathcal {S}(\mathsf {P},X)) < {\mathsf {RT}}_{\mathsf {D}_\mathcal {P}}(\mathcal {S}(\mathsf {P},X))\) for any \((\mathsf {P},X,\mathcal {S})\).

Let \(\mathcal {G}({\mathsf {P}}, w)\) be a domain generation algorithm for problem \({\mathsf {P}}\) that takes a seed \(w\in Seed\) as an input, and outputs a domain X for \({\mathsf {P}}\). Let \(\mathcal {H}: \{0,1\}^*\rightarrow Seed\) be a one-way function used to compute a seed w for \(\mathcal {G}({\mathsf {P}}, w)\). Assume the following process of generating a sequence of problems \(({\mathsf {P}}, X_i)\) and their solutions \(\varsigma _i\) from a random seed \(w\in _R Seed\).

figure a

Assume the verification process:

figure b

The Proof of Computation System \(\mathsf {PCS}\) is a tuple of the above-defined algorithms: \((\mathcal {G},\mathsf {P},\mathcal {S},\mathsf {Ver},\mathsf {Gen},\mathsf {Check}, \mathcal {H})\). The proposed extension is depicted in Fig. 3.
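One plausible instantiation of the \(\mathsf {Gen}\)/\(\mathsf {Check}\) chain uses SHA-256 as \(\mathcal {H}\) and a hash-preimage puzzle as the problem \(\mathsf {P}\); the difficulty value and the exact chaining of seeds below are our assumptions, not fixed by the scheme:

```python
import hashlib

K = 12  # difficulty: required number of leading zero bits (toy value)

def Hs(*parts):
    """One-way function H, here SHA-256 over the concatenated inputs."""
    return hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()

def G(w):
    """Domain generation G(P, w): a target string derived from the seed."""
    return Hs("domain", w)

def solve(X):
    """S(P, X): find a nonce whose hash has K leading zero bits -- slow."""
    nonce = 0
    while int(Hs(X, nonce), 16) >> (256 - K) != 0:
        nonce += 1
    return nonce

def Ver(X, sigma):
    """Fast verification: one hash evaluation."""
    return int(Hs(X, sigma), 16) >> (256 - K) == 0

def Gen(w, n_probs):
    """Chain of problems and solutions grown from the seed w."""
    sols, wi = [], Hs("seed", w)
    for _ in range(n_probs):
        s = solve(G(wi))
        sols.append(s)
        wi = Hs(wi, s)   # the next seed depends on the previous solution
    return sols

def Check(w, sols):
    """Replay the chain using only the fast Ver algorithm."""
    wi = Hs("seed", w)
    for s in sols:
        if not Ver(G(wi), s):
            return False
        wi = Hs(wi, s)
    return True

sols = Gen("w0", 3)
assert Check("w0", sols)
```

In this reading, the seed w fixes the entire sequence of solutions, and hence the future challenge, before the Prover’s commitment is sent, while \(\mathsf {Check}\) lets the Prover confirm this at negligible cost.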

Fig. 3. Extension based on the Proof of Computation System.

Lemma 3

The extension proposed in Fig. 3 protects against deniability attacks on 3-round \(\mathsf {IS} \) via the Fiat-Shamir transformation, as in Fig. 1.

Proof

The proof is by contradiction. Similarly as in the proof of Lemma 1, assume that a malicious Verifier successfully, with non-negligible probability, attacks the scheme, obtaining the transcript \(T= (w, X, \left\langle \varsigma _i\right\rangle _i^n, S)\) and the Fiat-Shamir undeniability proof \(i=(X,r)\), s.t. \(\left\langle \varsigma _i\right\rangle _i^n = \mathsf {Gen}(\mathsf {P},w)\), \(c=\mathcal {H}(\left\langle \varsigma _i\right\rangle _i^n)\), and \(c=\mathcal {H}'(X,r)\) for any hash function \(\mathcal {H}'\). Then we successfully find a collision for the hash function \(\mathcal {H}\) on the inputs \(\left\langle \varsigma _i\right\rangle _i^n\) and (X, r) (if \(\mathcal {H}= \mathcal {H}'\)), or break the preimage resistance of either \(\mathcal {H}\) (with the image being \(c=\mathcal {H}'(X,r)\)) or \(\mathcal {H}'\) (with the image being \(c=\mathcal {H}(\left\langle \varsigma _i\right\rangle _i^n)\)).    \(\square \)

4 Specific Scheme Proposition

To show the applicability of our propositions, we introduce a modification of the scheme from [1], augmented with our first extension using textbook RSA encryption. The proposed scheme is depicted in Fig. 4.

Fig. 4. The proposed modified \(\mathsf {IS}\).

4.1 Simulation in the Passive Adversary Mode

The modified Schnorr \(\mathsf {IS} \) preserves the simulatability property of its original version. The protocol transcript can be efficiently simulated by the following algorithm (for any public keys \(({\mathsf {pk}}, {\mathsf {pe}})\) and challenge message (m, b)):

figure c

Observe that for this transcript the verification holds: \(\hat{e}(S,g) = \hat{e}(\mathcal {H}_G(X, c), X{\mathsf {pk}}^{c})\). The simulator can play the simulated transcript \(T=(\hat{c}, X, (m,b), S)\) in the correct order, thus mimicking the real interaction between the parties. The real transcript and the simulated tuple are identically distributed.

4.2 Security Analysis

In our analysis we assume that there is an effective Adversary that breaks our scheme from Fig. 4. In the Query Stage, we interact with the Adversary, simulating the proofs without the secret key, but using the injected ephemerals. In the Impersonation Stage, there are two mutually exclusive possibilities: either the Adversary knows the challenge \(c=\mathcal {H}_q(m,b)\) before sending X, or it does not. Therefore, in our reduction proof, we guess which alternative applies. If the Adversary knows the value \(c=\mathcal {H}_q(m,b)\), we use it to break the underlying security of RSA. If the Adversary attacks without the knowledge of the challenge \(c=\mathcal {H}_q(m,b)\), we proceed as in the original proof from [1]. In the latter case, we follow the methodology from [2, 3], using the rewinding technique. Namely, we fix the randomness \(\hat{c}, X\), but change the bit b by setting it to 0 for the first run, and to 1 for the second run. This results in two tuples \((\hat{c}, X, m, 0, S_1)\), \((\hat{c}, X, m, 1, S_2)\), letting us solve the underlying hard problem – in this case \(\mathsf {CDH}\).
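For plain Schnorr as the underlying \(\mathsf {IS}\), the rewinding step can be sketched as follows: the two accepting runs share \(\hat{c}, X\) but differ in the bit b, and the two responses reveal the secret. For the pairing-based scheme of Fig. 4, the analogous step yields a \(\mathsf {CDH}\) solution instead; all parameters below are illustrative:

```python
import hashlib
import random

# Toy Schnorr group (illustrative parameters only)
p, q, g = 607, 101, 64
sk = random.randrange(1, q)
pk = pow(g, sk, p)

def H(*parts):
    """Challenge derivation H_q(m, b) into Z_q."""
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

m = random.randrange(1000)         # opened commitment value, fixed across runs
x = random.randrange(1, q)
X = pow(g, x, p)                   # same ephemeral after rewinding

c0, c1 = H(m, 0), H(m, 1)          # the flipped bit yields two challenges
S1 = (x + c0 * sk) % q             # response in the first run
S2 = (x + c1 * sk) % q             # response in the second (rewound) run

if c0 != c1:                       # holds except with negligible probability
    # Subtracting the responses cancels x and exposes sk
    extracted = ((S1 - S2) * pow(c0 - c1, -1, q)) % q
    assert extracted == sk
```

The cancellation is the standard special-soundness argument: the single bit b is enough to obtain two distinct challenges for the same initial commitment.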

Theorem 4

Let \(\mathsf {IS} \) denote the modified identification scheme (as in Fig. 4). \(\mathsf {IS} \) is secure (in the sense of Definition 2), i.e. the advantage \({{\mathbf {Adv}}}(\mathcal {A}, {\mathtt {Exp}}^{{\mathsf {CPLVE}},\lambda ,\ell }_{\mathsf {IS}})\) is negligible in \(\lambda \) for any PPT algorithm \(\mathcal {A}\).

We postpone the proof to the Appendix A.

5 Conclusion

In this paper, we have shown how to modify a wide class of three-move identification schemes secure against Prover Ephemeral Injection into identification schemes secure against Verifier Ephemeral Leakage and the Deniability Attack. We have shown an example based on the modified Schnorr \(\mathsf {IS}\) from [1]. We have formalized a security model and proved the security of our constructions.