
1 Introduction

In \(\mathtt {DB}\) protocols, there are two types of entities: provers and verifiers. In concurrent execution of \(\mathtt {DB}\) protocols, multiple protocol instances are run at the same time. Provers and verifiers are usually connected to a back-end server, which only takes care of the registration phase and is silent otherwise.

Secure \(\mathtt {DB}\) protocols provide two functionalities: (1) authentication of a registered prover to a verifier, and (2) bounding the prover’s distance to the verifier. The bounding is commonly done by measuring round-trip times in a fast-exchange phase between the prover and the verifier, during which the verifier presents challenges to the prover and checks whether the prover’s responses are correct. The round-trip times of correctly answered challenges are used to estimate the distance between prover and verifier, which is then compared against a specified distance bound \(\mathbb {D}\).
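To make the distance estimate concrete, here is a minimal sketch of the verifier-side check; the function name, the processing-delay parameter and the acceptance rule (every round must pass) are illustrative assumptions rather than part of any specific \(\mathtt {DB}\) protocol.

```python
# Minimal sketch: estimating the prover's distance from round-trip times.
# The names and the processing-delay parameter are illustrative assumptions.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def within_bound(round_trip_times_s, distance_bound_m, processing_delay_s=0.0):
    """Accept only if every challenge/response round places the prover
    within the distance bound D."""
    for rtt in round_trip_times_s:
        # One-way distance: half of the RTT (minus any allowed processing delay).
        estimated_distance = SPEED_OF_LIGHT * max(rtt - processing_delay_s, 0.0) / 2
        if estimated_distance > distance_bound_m:
            return False
    return True

# Example: a 1 microsecond RTT corresponds to roughly 150 m one way.
print(within_bound([1e-6, 0.9e-6], distance_bound_m=200))  # True
print(within_bound([2e-6], distance_bound_m=200))          # False
```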

Privacy is a necessary property in many location-based services. One approach to location privacy is to modify the geometric data, for example by reducing the accuracy of the location data [15, 21]. In distance-bounding, however, location accuracy is a requirement. To achieve both privacy and location accuracy, one can unlink the location data from the provers’ authentication information. This problem has been well studied in the context of RFID systems [13, 22]. In these systems the locations of provers are known to the verifiers, but provers hide their identity in their interactions. In RFID systems, however, verifiers are assumed to be trusted, and privacy is only considered against Man-In-the-Middle (MiM) adversaries. The assumption of a trusted verifier is not acceptable in many distance-based location services in which verifiers can be compromised, so it is important to consider stronger privacy models, in particular ones with untrusted verifiers.

In a symmetric-key \(\mathtt {DB}\) protocol, there is a shared key between the prover and the registration authority, so privacy is not achievable against an adversary who can access the internal state of the authority. One can always achieve prover privacy by using the same key for all provers; this, however, is unacceptable because of the threat it poses to the whole system. There are three well-known public-key \(\mathtt {DB}\) protocols in the literature: (a) the seminal Brands-Chaum protocol [4], which uses commitments and signatures but is not secure against the Terrorist-Fraud Attack (TFA); (b) the Bussard-Bagga protocol [5] (\(\mathtt {DBPK}\)-\(\mathtt {Log}\)), which uses bit commitments and was designed to provide TFA resistance, but was recently broken [2]; and (c) the recent Hermans et al. [14] protocol, which uses elliptic curve cryptography, does not provide TFA resistance, and does not allow the privacy adversary to access the internal state of the registration authority. Therefore, there is no \(\mathtt {DB}\) protocol that is both public-key based and secure against terrorist-fraud adversaries.

The following problems have been open in the literature: (i) design a public-key \(\mathtt {DB}\) protocol that is secure against terrorist-fraud adversaries; and (ii) design a secure privacy-preserving \(\mathtt {DB}\) protocol that provides privacy for provers against a privacy adversary who controls the verifier and has access to the internal state of the registration authority.

In this paper we address these open problems. Our contribution is threefold: first, we propose a privacy model, which we refer to as extensive-privacy, and show that it is stronger than the wide-privacy notion of [22] and the privacy model of Gambs et al. [11] with respect to the adversary’s access to the state of the non-prover entities. Second, we fix the security flaw of the \(\mathtt {DBPK}\)-\(\mathtt {Log}\) protocol and obtain the first public-key distance-bounding protocol that is secure against TFA adversaries (called the \(\mathtt {DBPoK}\)-\(\mathtt {log^{+}}\) protocol). Finally, we propose the protocol \(\mathtt {PDB}\), an extension of \(\mathtt {DBPoK}\)-\(\mathtt {log^{+}}\), which is secure in the proposed privacy model.

2 Background

In \(\mathtt {DB}\) systems, since the seminal paper [4], the implicit assumption in all secure \(\mathtt {DB}\) protocols is the presence of a secure function that generates and distributes the private keys of the entities. In some cases this function can be executed by a verifier. A distance-bounding protocol allows a registered prover to prove that she is within a distance bound of the verifier and in possession of the secret key that is used for authentication.

In order to have a privacy-preserving distance-bounding system, we need to consider three properties: Authentication, Distance-Bounding and Privacy. Authentication lets the verifiers determine whether an approaching entity is indeed a legitimate prover. The Distance-Bounding property guarantees that the prover is located within a pre-defined distance. The Privacy property assures provers that their interactions with the system are not traceable.

Authentication is obtained by a protocol between prover and verifier. The prover must prove that she knows a secret value which is registered in the system. The protocol must satisfy two properties: correctness and soundness. Correctness holds if an honest verifier accepts whenever an honest prover who knows the secret value is involved. Soundness holds if no adversary without access to the secret value of a registered prover can convince an honest verifier to accept the authentication. Gambs et al. [11] consider these two properties, but make two assumptions: (1) the server revokes all corrupted provers upon corruption, and (2) dishonest provers cannot yield their private keys to the adversary unless they become unregistered. Implementing these assumptions is a major challenge, so one of the goals of our work is to weaken them.

Distance-Bounding protocols run a fast-exchange phase, which guarantees the presence of the owner of the secret key within the distance bound \(\mathbb {D}\). Assuming that no prover is willing to disclose her secret key to others, five attack scenarios have been studied in DB protocols:

  • Distance-Fraud Attack (DFA) [4]: a dishonest prover \(\mathcal {P}^*\), who is not located within distance \(\mathbb {D}\) of a verifier \(\mathcal {V}\), tries to convince \(\mathcal {V}\) that she is located within distance \(\mathbb {D}\) of \(\mathcal {V}\).

  • Mafia-Fraud Attack (MFA) [8]: an adversary \(\mathcal {A}\), who is located within distance \(\mathbb {D}\) of \(\mathcal {V}\) (between \(\mathcal {V}\) and a far-away honest prover \(\mathcal {P}\)), convinces \(\mathcal {V}\) that \(\mathcal {P}\) is close by.

  • Terrorist-Fraud Attack (TFA) [8]: an adversary \(\mathcal {A}\) (located within distance \(\mathbb {D}\) of \(\mathcal {V}\)) co-operates with a far-away dishonest prover \(\mathcal {P}^*\) to convince \(\mathcal {V}\) that \(\mathcal {P}^*\) is located within distance \(\mathbb {D}\) of \(\mathcal {V}\).

  • Distance-Hijacking [7]: a dishonest prover \(\mathcal {P}^*\), who is not located within distance \(\mathbb {D}\) of a verifier \(\mathcal {V}\), exploits some honest provers \(\mathcal {P}_1, \ldots , \mathcal {P}_n\) to mislead \(\mathcal {V}\) about the actual distance between \(\mathcal {P}^*\) and \(\mathcal {V}\).

  • Impersonation-Attack [1]: a dishonest prover \(\mathcal {P}^*\) purports to be another prover in her interaction with \(\mathcal {V}\).

Vaudenay et al. [23] proposed a general attack model, which captures all of these attacks:

  • Distance-Fraud is defined to capture the classic DFA and Distance-Hijacking.

  • MiM attack captures MFA and Impersonation-Attack.

  • Collusion-Fraud is a game-based formulation of TFA. Based on this definition, if there is any PPT adversary who can win the TFA game with probability \(\gamma \), then there exists a weaker MiM adversary who can succeed in a specific MiM game with probability \(\gamma '\).

Authentication and distance-bounding have been studied in \(\mathtt {DB}\) protocols. Recently, considerable attention has been given to formal definitions and provably secure symmetric \(\mathtt {DB}\) protocols [3, 9, 10]. Dürholz et al. [9] defined a strong simulation-based terrorist-fraud notion for the “single prover, single verifier” setting. Fischlin-Onete [10] extended [9], defined an even stronger simulation-based model, and proposed a protocol with a security proof. On the other hand, Boureanu et al. [3] showed that the definition of Dürholz et al. is too strong, and proposed a general and practical game-based terrorist-fraud model for the “multiple prover, multiple verifier” setting (the same model as [23]). Boureanu et al. proposed the SKI protocol, which is claimed to be secure against the defined distance-fraud, MiM and collusion-fraud attacks. However, this protocol is not proven secure against the defined collusion-fraud adversary, despite their claim: the provided proof covers only deterministic PPT adversaries in the TFA game, rather than arbitrary PPT adversaries.

Privacy is understood as untraceability of the different sessions of a single prover. This notion of privacy has been well studied in the RFID framework against MiM adversaries, mostly in the symmetric setting and assuming a trusted verifier and registration authority [13, 22]. In the Hermans et al. [13] model, privacy is defined as a game between an adversary and a challenger. The adversary has oracle access to the following functionalities: create honest provers (\(\mathtt {CreateProver}\)), launch a session of a pre-defined protocol (\(\mathtt {Launch}\)), ask the challenger to choose one of two given provers and return an anonymous handle (\(\mathtt {DrawProver}\)), send a message to an anonymous prover (\(\mathtt {SendProver}\)), free the anonymous handle of a prover (\(\mathtt {Free}\)), send a message to the verifier in a protocol session (\(\mathtt {SendVerifier}\)), see the output of the verifier in a protocol session (\(\mathtt {Result}\)), and finally get the non-volatile internal state of an honest prover (\(\mathtt {Corrupt}\)). The adversary wins the privacy game if she can find out the bit chosen by the challenger in the \(\mathtt {DrawProver}\) oracle. Peeters-Hermans [19] added a new oracle to the above list, by which the adversary can create an insider prover and control it (\(\mathtt {CreateInsider}\)).

Vaudenay [22] classified adversaries based on their access to the above oracles. A wide adversary has access to the \(\mathtt {Result}\) oracle; otherwise it is a narrow adversary. In parallel, an adversary can be weak (no access to the \(\mathtt {Corrupt}\) oracle), forward (\(\mathtt {Corrupt}\) queries can only be followed by other \(\mathtt {Corrupt}\) queries), destructive (\(\mathtt {Corrupt}\) queries destroy access to the corrupted prover), or strong (unlimited access to the \(\mathtt {Corrupt}\) oracle). Paise-Vaudenay [17] showed that destructive and strong privacy are not achievable in symmetric-key systems.

Gambs et al. [11] built on the work of Hermans et al. [14] to define privacy-preserving public-key \(\mathtt {DB}\) protocols and constructed the first protocol that is secure against three separate adversaries: (1) a MiM adversary, (2) a MiM adversary who has access to the internal state of the verifier, and (3) an honest-but-curious adversary who knows the internal state of the verifier and the registration server. We extend this model and introduce a stronger adversary who has access to the internal states of the verifiers and the registration server (extensive adversary). Therefore, the following order holds in the new classification: \( NARROW \subseteq WIDE \subseteq \) Gambs et al. \( \subseteq EXTENSIVE \), based on access to the state of verifiers and registration authority, as well as \( WEAK \subseteq FORWARD \subseteq DESTRUCTIVE \subseteq STRONG \), based on access to the state of provers.

In this paper we propose a new protocol which provides authentication, distance-bounding (distance-fraud, MiM and terrorist-fraud resistance) and privacy (against an extensive-weak adversary).

2.1 Distance-Bounding Proof-of-Knowledge (\(\mathtt {DBPK}\)-\(\mathtt {Log}\))

Bussard-Bagga [5] proposed the only public-key DB protocol designed to be secure against TFA adversaries. This protocol combines a fast-exchange \(\mathtt {DB}\) protocol, the Pedersen commitment scheme [18] and zero-knowledge proof-of-knowledge [20]. In this protocol, a prover chooses a key pair and registers the public key with a trusted server. The verifiers are trusted and have access to the public keys of provers. The system parameters are set by a trusted authority and use a cyclic group whose order is a strong prime. These parameters are shared and used by all participants. The protocol was designed to be secure against DFA, MFA and TFA adversaries, while leaking minimal information about the provers’ private keys.

The protocol combines the bitwise operations used in the fast-exchange phase with the modular operations required by the commitment schemes. This results in some security loss for the secret keys, while maintaining their indistinguishability.

Bay et al. [2] showed TFA and DFA attacks on this protocol, which take advantage of the poor auditing of unused elements in the \(\mathtt {Commitment~Opening}\) phase (i.e., half of the bit commitments are never opened).

2.2 \(\mathtt {BBS^{+}}\) Signature Scheme [6]

This signature scheme uses bilinear mappings and supports signing of a committed message block \(M=\{m_1,\ldots ,m_L\}\), without knowledge of the actual message. Two entities are involved in this scheme: a trusted signer (S) who holds the signing key, and a client (C) who knows a message block.

This signature scheme follows the standard interface [12] of signature schemes, \(\mathtt {BBS^{+}}\) = (\(\mathtt {KeyGen}\), \(\mathtt {Sign}\), \(\mathtt {Verify}\)):

  • \((sk_{S}, pk_{S} = h_0^{sk_{S}}) \leftarrow \mathtt {KeyGen} (1^{\lambda })\) ; S creates a key pair and publishes the public-key.

  • \(\mathtt {Sign}\): s.t. \(M=\{m_1,\ldots ,m_L\}\), \(\sigma = (A, e, s)\), \(partial(\sigma ) = (A, e)\), \(cmt(M)=(\{C_i = g_1^{m_i}g_2^{r_i}\},\) \(C_M= g_1^{s'} g_2^{m_1} \ldots g_{L+1}^{m_L})\), and \(A=(g_0 g_1^s g_2^{m_1} \ldots g_{L+1}^{m_L}) ^{\frac{1}{e+sk_{S}}}\) for random values of \(\{r_i\}, s, s', e\); C and S get involved in a protocol for signing a committed message block (M). In this protocol, C first calculates the commitment \(cmt(M)=(\{C_i\}, C_M)\) as above and sends it to S; then they run \(PoK\{(\{m_i, r_i\}, s'):C_M \bigwedge \{C_i\}\}\) to verify possession and integrity of the values in the commitment. S then creates the signature as \(A=(g_0 g_1^{s''} C_M)^{\frac{1}{e+sk_{S}}}\) for random e and \(s''\) and sends \(\{A, e, s''\}\) to C. Finally, C calculates \(s = s' + s''\), checks the validity of \(\sigma =\{A, e, s\}\) and keeps \(\sigma \) as the signature of S on M.

  • \(\mathtt {Verify}\): s.t. \(C'_i = g_1^{m_i}g_2^{r'_i}\) for random \(r'_i\); C and any entity (E) with access to the public parameters of the system get involved in a protocol for proving possession of a signature on a committed message block. First, C creates new commitments \(\{C'_i\}\) on each element of the message block and sends them to E. Then they run a signature proof-of-knowledge \(SPK\{(A, e, s, m_1, \ldots , m_L): A=(g_0 g_1^s g_2^{m_1} \ldots g_{L+1}^{m_L}) ^{\frac{1}{e+sk_{S}}}\}\) to prove possession of a valid signature on M, which is committed by \(\{C'_i\}\). E returns \(Out_P=1\) if no error occurs.
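Although the full \(\mathtt {Verify}\) protocol is interactive, the algebraic check behind it follows directly from bilinearity and the form of A above; the following equation is derived from the definitions in this section rather than quoted from [6]:

\(\hat{e}\big(A,\ pk_{S}\cdot h_0^{e}\big) \;=\; \hat{e}\big(A,\ h_0\big)^{sk_{S}+e} \;=\; \hat{e}\big(A^{sk_{S}+e},\ h_0\big) \;=\; \hat{e}\big(g_0\, g_1^{s}\, g_2^{m_1} \cdots g_{L+1}^{m_L},\ h_0\big),\)

so anyone holding \(pk_{S}\) and the (committed) message block can check a candidate signature \(\sigma =(A, e, s)\) without knowing \(sk_{S}\).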

This signature scheme provides two extra properties, besides the standard properties of signature schemes (authenticity, integrity and non-repudiation), as follows:

  • blind-sign: the \(\mathtt {Sign}\) protocol allows a client to obtain the signature of the signer on a message which is committed as \(cmt = Commit(M)\) using a Pedersen commitment. The client uses a ZKPoK to prove that the committed values are correctly calculated from the message. The protocol perfectly hides the message from the signer. The signature is secure under the LRSW assumption [16].

  • blind-verify: in the \(\mathtt {Verify}\) protocol, the client computes a non-interactive honest-verifier zero-knowledge proof-of-knowledge (SPK) in order to prove to any verifier that she knows a message block (M) and a signature on it, given a commitment to the message. The proof does not leak any information about the message or the signature.

3 Model

There are three types of entities: a set of untrusted provers \(\mathcal {P}= \{\mathcal {P}_1, \ldots , \mathcal {P}_n\}\), a set of untrusted verifiers \(\mathcal {V}= \{\mathcal {V}_1, \ldots , \mathcal {V}_m\}\), and an honest-but-curious registration authority (\(\mathcal {RA}\)) with a key pair (\(pk_{\mathcal {RA}}/sk_{\mathcal {RA}}\)). We assume \(\mathcal {RA}\) generates her own key pair. Each registered prover \(\mathcal {P}_i\) has a secret key \(sk_i\) and a certificate of \(\mathcal {RA}\) on it (\(\sigma _i = Sign_{\mathcal {RA}}(sk_i)\)). The communication channels are public. We assume there exists a public and secure board which keeps the public parameters of the system, including the public key of the registration authority, and every entity has secure read access to the board.

There are three operational phases in this model: (1) in the “\(\mathtt {KeyGen}\)” phase, \(\mathcal {RA}\) creates a key pair and puts the public key on the public board. (2) In the “\(\mathtt {Registration}\)” phase, a new prover (\(\mathcal {P}_i\)) with a chosen secret key (\(sk_i\)) interacts with \(\mathcal {RA}\), resulting in \(\mathcal {P}_i\) obtaining a registration certificate (\(sk_i, \sigma _i\)). (3) In the “\(\mathtt {DB}\)” phase, a registered prover \(\mathcal {P}_i\) interacts with a verifier \(\mathcal {V}_j\) who has read access to the public board. At the end of this phase, \(\mathcal {P}_i\) has proven that she knows a secret key (\(sk_i\)), that she has a signature of \(\mathcal {RA}\) on \(sk_i\), and that she is located within distance bound \(\mathbb {D}\) of \(\mathcal {V}_j\). The verifier returns a single bit \(Out_{\mathcal {V}}\) as the output of the protocol (\(Out_{\mathcal {V}}=1\) for accept and \(Out_{\mathcal {V}}=0\) for reject).

Definition 1

Correctness: for any pair of an honest prover and an honest verifier with mutual distance at most \(\mathbb {D}\), the “\(\mathtt {DB}\)” protocol should always return \(Out_{\mathcal {V}}=1\).

The protocol’s soundness is defined against two types of adversaries: a distance-bounding adversary (\(\mathcal {A}_{DB}\)) and a privacy adversary (\(\mathcal {A}_{P}\)). The distance-bounding adversary \(\mathcal {A}_{DB}\) can (1) read the public board, and (2) control the corrupted provers. The aim of \(\mathcal {A}_{DB}\) is to convince an honest verifier to return \(Out_{\mathcal {V}}=1\) as the output of the \(\mathtt {DB}\) operation. The privacy adversary \(\mathcal {A}_{P}\) can (1) read the public board, (2) read the internal state of \(\mathcal {RA}\), and (3) control all verifiers and the corrupted provers. The aim of \(\mathcal {A}_{P}\) is to distinguish between two honest provers based on their interactions with the system.

We define three general DB attacks. The definitions of the distance-fraud and MiM attacks are in line with Vaudenay et al.’s [23] approach, but the proposed terrorist-fraud attack follows the classic definition. First, a simple two-party attack is considered, in which a dishonest prover is far away from the verifier but wants to convince the verifier that she is within the distance bound.

Definition 2

\(\alpha \) -resistance Distance-Fraud: For any PPT adversary \(\mathcal {A}\) who is not located within distance \(\mathbb {D}\) of an honest verifier \(\mathcal {V}_j\) but is able to run a \(\mathtt {Registration}\) session with \(\mathcal {RA}\), the probability of returning \(Out_{\mathcal {V}}=1\) in an interaction with \(\mathcal {V}_j\) is not more than \(\alpha \).

This definition captures Distance-Hijacking, in which a dishonest far-away prover \(\mathcal {P}^*\) may misuse some honest provers to successfully authenticate to \(\mathcal {V}_j\). In this attack, the adversary is allowed to misbehave in the \(\mathtt {Registration}\) session, which results in a more powerful adversary than the distance-fraud adversary of Vaudenay et al. [23]. The second \(\mathtt {DB}\) attack is a three-party attack in which an honest prover \(\mathcal {P}\) is far away from an honest verifier (\(\mathcal {V}\)), but a malicious adversary located within distance \(\mathbb {D}\) wants to convince \(\mathcal {V}\) that \(\mathcal {P}\) is within the distance bound.

Definition 3

\(\beta \) -resistance MiM: For any PPT adversary \(\mathcal {A}\) who can

  (i) initiate a \(\mathtt {Registration}\) or \(\mathtt {DB}\) session of an honest prover (\(\mathcal {P}_i\)),

  (ii) listen to/block/change the communications of a \(\mathtt {Registration}\) session between an honest prover (\(\mathcal {P}_i\)) and \(\mathcal {RA}\),

  (iii) listen to the communications of polynomially many \(\mathtt {DB}\) sessions between any honest verifier \(\mathcal {V}_j\) and \(\mathcal {P}_i\), when she is located within distance \(\mathbb {D}\) of \(\mathcal {V}_j\) (learning phase),

  (iv) listen to/block/change the communications of polynomially many \(\mathtt {DB}\) sessions between any honest verifier \(\mathcal {V}_k\) and \(\mathcal {P}_i\), when she is not located within distance \(\mathbb {D}\) of \(\mathcal {V}_k\), and

  (v) run polynomially many instances of algorithms with inputs independent of the above,

the probability of returning \(Out_{\mathcal {V}}=1\) in any of the \(\mathtt {DB}\) sessions with \(\mathcal {V}_k\) is not more than \(\beta \).

The Impersonation-Attack is a special case of this attack, in which there is no learning phase and no honest prover. If a protocol is secure against the MiM adversary, then it is secure against the impersonation attack with at least the same bound. With the same argument as before (i.e., more freedom in the \(\mathtt {Registration}\) phase), this definition is stronger than the MiM adversary of Vaudenay et al. [23].

In the third attack, three parties are involved: a dishonest prover (\(\mathcal {P}^*\)) who is far away from the honest verifier (\(\mathcal {V}_j\)), and an adversary who is located within distance \(\mathbb {D}\) of \(\mathcal {V}_j\) and helps \(\mathcal {P}^*\) convince \(\mathcal {V}_j\) that \(\mathcal {P}^*\) is within the distance bound.

Definition 4

\(\gamma \) -resistance Terrorist-Fraud: For any pair of a PPT adversary \(\mathcal {A}^{TF}\) and a dishonest prover (\(\mathcal {P}_i^*\)) who is not located within distance \(\mathbb {D}\) of an honest verifier (\(\mathcal {V}_j\)), with the following abilities:

  (i) \(\mathcal {P}_i^*\) is able to run a \(\mathtt {Registration}\) session with \(\mathcal {RA}\),

  (ii) \(\mathcal {P}_i^*\) can communicate with \(\mathcal {A}^{TF}\) outside the \(\mathtt {DB}\) protocol, but is not willing to leak any information about her own secret key, and

  (iii) \(\mathcal {A}^{TF}\) can listen to/block/change the communications of a \(\mathtt {DB}\) session between any honest verifier \(\mathcal {V}_j\) and \(\mathcal {P}_i^*\),

the probability of convincing \(\mathcal {V}_j\) to return \(Out_{\mathcal {V}}=1\) in the \(\mathtt {DB}\) session is not more than \(\gamma \).

We define privacy in terms of the distinguishing advantage of an adversary in a game with a challenger. The adversary chooses two provers \(\mathcal {P}_0\) and \(\mathcal {P}_1\) and gives them to the challenger. The challenger chooses a random bit \(b \in _R \{0, 1\}\) and returns the anonymous handle of \(\mathcal {P}_b\) to the adversary. The adversary can access the oracles listed below any polynomial number of times, and then outputs a bit \(b'\). The success probability of the adversary is measured in terms of \(\Pr (b=b')\). The formal definition is as follows:

Definition 5

\(\rho -\) Extensive Privacy: Consider the following game between a challenger and an adversary \(\mathcal {A}_P\) who can make queries to the following oracles:

  • \((\mathcal {P}_i) \leftarrow \mathtt {CreateProver} ()\) ; creates a prover with a unique identifier \(\mathcal {P}_i\). This oracle creates the internal keys and certificates of a new prover and returns \(\mathcal {P}_i\).

  • \((\mathcal {P}_i, stt_{\mathcal {P}}) \leftarrow \mathtt {CreateInsider} ()\) ; this oracle is the same as \(\mathtt {CreateProver}\), but it also returns the internal state of the prover.

  • \((\pi , m) \leftarrow \mathtt {Launch} ()\) ; this oracle runs a pre-defined protocol on the verifier and returns the session identifier \(\pi \) and the message sent by the verifier.

  • \((vtag) \leftarrow \mathtt {DrawProver} (\mathcal {P}_i, \mathcal {P}_j)\) ; on input of two provers, this oracle returns a fresh unique virtual identifier for either \(\mathcal {P}_i\) (if \(b=0\)) or \(\mathcal {P}_j\) (if \(b=1\)). It first checks whether either of them is an insider or already drawn, and terminates if so. It then asks the challenger to choose a bit \(b \in _R \{0, 1\}\), and based on this bit it creates an anonymous handle vtag for the chosen prover and returns it. Note that in this step there is a private table \(\mathcal {T}\) which stores the tuple \((vtag, \mathcal {P}_i, \mathcal {P}_j, b)\).

  • \((m') \leftarrow \mathtt {SendProver} (vtag, m)\) ; on input of anonymous handle of a prover and a message m, this oracle sends the message m to the prover, and returns prover’s reply message (\(m'\)).

  • \(() \leftarrow \mathtt {Free} (vtag)\) ; on input of the anonymous handle of a prover, this oracle removes the handle, which eliminates any access to the prover through it. Using the recorded tuple in the table, the corresponding prover is reset (i.e., its volatile memory is erased).

  • \((m') \leftarrow \mathtt {SendVerifier} (\pi , m)\) ; on input of a protocol session (\(\pi \)) and a message m, this oracle sends the message m to the verifier in the session \(\pi \), and returns verifier’s reply message (\(m'\)).

  • \((stt_{\mathcal {V}}) \leftarrow \mathtt {StateVerifier} ()\) ; this oracle returns the internal state of the verifier. This oracle includes the functionality of the \(\mathtt {Result}\) oracle in [22], which just returns the final output of session \(\pi \).

  • \((stt_{\mathcal {RA}}) \leftarrow \mathtt {StateRA} ()\) ; this oracle returns the internal state of \(\mathcal {RA}\).

\(\mathcal {A}_P\) wins if she can find the choice of the challenger in the \(\mathtt {DrawProver}\) oracle. A protocol is \(\rho \)-extensive private if and only if there is no PPT adversary \(\mathcal {A}_P\) who can win the game with advantage more than \(\rho \).
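To make the game structure concrete, the following is a minimal sketch of the challenger side of \(\mathtt {DrawProver}\) and of how the adversary’s advantage is estimated; the class and function names, and the simplified bookkeeping (no insider or already-drawn checks), are illustrative assumptions rather than the oracles’ actual implementation.

```python
import secrets

class ExtensivePrivacyGame:
    """Toy skeleton of the extensive-privacy game (illustrative only)."""

    def __init__(self):
        self.b = secrets.randbits(1)   # challenger's hidden bit
        self.table = {}                # private table T: vtag -> (P_i, P_j, b)
        self.next_vtag = 0

    def draw_prover(self, prover_i, prover_j):
        # Return an anonymous handle for prover_i if b = 0, prover_j if b = 1.
        vtag = self.next_vtag
        self.next_vtag += 1
        self.table[vtag] = (prover_i, prover_j, self.b)
        return vtag

    def chosen(self, vtag):
        p_i, p_j, b = self.table[vtag]
        return p_i if b == 0 else p_j

def advantage(adversary, runs=1000):
    """Estimate |Pr[b' = b] - 1/2| over independent games."""
    wins = 0
    for _ in range(runs):
        game = ExtensivePrivacyGame()
        guess = adversary(game)        # adversary interacts via the oracles, outputs b'
        wins += int(guess == game.b)
    return abs(wins / runs - 0.5)

# A blind adversary that ignores the oracles has advantage close to 0.
print(advantage(lambda game: secrets.randbits(1)))
```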

Note 1. Extensive-privacy is stronger than wide-privacy [22] and the privacy model of Gambs et al. [11] with regard to the adversary’s view of the non-prover entities, because the adversary has simultaneous access to the two additional oracles \(\mathtt {StateVerifier}\) and \(\mathtt {StateRA}\). The adversary’s view of the provers is considered independently, based on her access to the \(\mathtt {Corrupt}\) oracle, which is outside the scope of this privacy model.

Finally, we define the security of \(\mathtt {PDB}\):

Definition 6

\((\alpha ,\! \beta ,\! \gamma ,\! \rho )-\) secure Privacy-Preserving Distance-Bounding: A \(\mathtt {PDB}\) protocol is defined by a tuple (\(KeyGen,reg_{\mathcal {P}},reg_{\mathcal {RA}},db_{\mathcal {P}},db_{\mathcal {V}},\mathbb {D}\)), as follows:

  1. \((pk_{\mathcal {RA}},sk_{\mathcal {RA}}) \leftarrow \mathtt {KeyGen} (1^{\lambda })\): a randomized algorithm that, on input of the security parameter \(\lambda \), returns a key pair to \(\mathcal {RA}\).

  2. \((\mathtt {reg_{\mathcal {P}}}, \mathtt {reg_{\mathcal {RA}}})\): an interactive protocol between two PPT ITMs; \(\mathtt {reg_{\mathcal {P}}}\) takes \(pk_{\mathcal {RA}}\) as input and returns a secret key together with a signature of \(\mathcal {RA}\) on it (\(sk_i,\sigma _i=Sign_{\mathcal {RA}}(sk_i)\)), while \(\mathtt {reg_{\mathcal {RA}}}\) takes \(sk_{\mathcal {RA}}\) as input.

  3. \((\mathtt {db_{\mathcal {P}}}, \mathtt {db_{\mathcal {V}}})\): an interactive protocol between two PPT ITMs; \(\mathtt {db_{\mathcal {V}}}\) takes \(pk_{\mathcal {RA}}\) as input and returns a single bit \(Out_{\mathcal {V}}\), while \(\mathtt {db_{\mathcal {P}}}\) takes the prover’s secret key and \(\mathcal {RA}\)’s signature (\(sk_i,\sigma _i= Sign_{\mathcal {RA}}(sk_i)\)) as input. \(\mathtt {db_{\mathcal {P}}}\) provides a commitment on \(sk_i\), and then gives three proofs about the commitment: (i) that she knows the committed value (\(sk_i\)), (ii) that she knows a valid signature of \(\mathcal {RA}\) on the committed value (\(\sigma _i\)), and (iii) that she (the owner of \(sk_i\)) is located within distance \(\mathbb {D}\). \(\mathtt {db_{\mathcal {V}}}\) returns \(Out_{\mathcal {V}} = 1\) if the three proofs are correct, and \(Out_{\mathcal {V}} = 0\) otherwise.

  4. \(\mathbb {D}\) is an integer indicating the distance bound.

The protocol is secure if the following properties hold: Correctness (Definition 1), \(\alpha \) -distance-fraud (Definition 2), \(\beta \) -MiM (Definition 3), \(\gamma \) -terrorist-fraud (Definition 4), and \(\rho \) -extensive-privacy (Definition 5).

4 PDB Construction

In this section we introduce our protocol, as an extension of \(\mathtt {DBPK}\)-\(\mathtt {Log}\) [5], by using Pedersen commitment [18], zero-knowledge proof-of-knowledge protocols  [20] and signature proof-of-knowledge protocols [12]. The overview of the protocol is as follows:

  1. \(\mathtt {Setup\!:}\) \(\mathcal {RA}\) creates a key pair for signing the secret keys of new provers. This operation is an instance of the “\(\mathtt {BBS^{+}.KeyGen}\)” function.

  2. \(\mathtt {Registration\!:}\) A new prover (\(\mathcal {P}_i\)) gets registered by \(\mathcal {RA}\). \(\mathcal {P}_i\) chooses a random secret key (\(sk_i\)) and obtains the signature of \(\mathcal {RA}\) on it (\(\sigma _i = Sign_{\mathcal {RA}}(sk_i)\)) in blind form. This operation is an instance of the “\(\mathtt {BBS^{+}.Sign}\)” interactive protocol.

  3. \(\mathtt {Distance}\)-\(\mathtt {Bounding\!:}\) A registered prover (\(\mathcal {P}_i\)) provides a distance-bounding proof to a verifier (\(\mathcal {V}_j\)). \(\mathcal {P}_i\) sends a Pedersen commitment \(C=commit(sk_i)\) to \(\mathcal {V}_j\) (see the sketch after this list), and then gives three proofs about the commitment: (i) that she knows the committed value (\(sk_i\)), (ii) that she knows a valid signature of \(\mathcal {RA}\) on the committed value, using the “\(\mathtt {BBS^{+}.Verify}\)” non-interactive protocol, and (iii) that she is located within the distance bound \(\mathbb {D}\). The last proof is based on the \(\mathtt {fast}\)-\(\mathtt {exchange}\) bitwise operation over every single bit of \(sk_i\).
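As a concrete illustration of the commitment \(C = commit(sk_i)\) used in step 3, here is a minimal Pedersen-commitment sketch over a toy group \(\mathbb {Z}_p^*\); the tiny parameters and helper names are assumptions for readability, while the real protocol uses the \(\lambda \)-bit groups defined in the Global Common Parameters paragraph below.

```python
import secrets

# Toy group: p = 2q + 1 is a small safe prime; exponents live modulo p - 1.
q = 1019
p = 2 * q + 1  # 2039, prime

def find_generator(start):
    """Return the first generator of Z_p^* at or after `start`
    (for a safe prime, g generates Z_p^* iff g^2 != 1 and g^q != 1)."""
    g = start
    while pow(g, 2, p) == 1 or pow(g, q, p) == 1:
        g += 1
    return g

g1 = find_generator(2)
g2 = find_generator(g1 + 1)  # in practice, log_{g1}(g2) must be unknown

def pedersen_commit(value, randomness):
    """C = g1^value * g2^randomness mod p (perfectly hiding, binding under DL)."""
    return (pow(g1, value, p) * pow(g2, randomness, p)) % p

def pedersen_open(commitment, value, randomness):
    return commitment == pedersen_commit(value, randomness)

sk_i = 2 * secrets.randbelow(q) + 1               # odd secret key, as in Registration
r = secrets.randbelow(p - 1)                      # blinding randomness
C = pedersen_commit(sk_i, r)
print(pedersen_open(C, sk_i, r))                  # True: correct opening
print(pedersen_open(C, (sk_i + 2) % (p - 1), r))  # False: wrong value
```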

Global Common Parameters. With \(\lambda \) as the security parameter, define \((\mathbb {G}_1, \mathbb {G}_2)\) as a bilinear group pair with a computable isomorphism \(\psi \) such that \(|\mathbb {G}_1| = |\mathbb {G}_2| = p\) for a \(\lambda \)-bit strong prime \(p=2q+1\), where q is a large prime. \(\mathbb {G}_p\) is a group of order p, and the bilinear mapping is \(\hat{e}:\mathbb {G}_1 \times \mathbb {G}_2 \rightarrow \mathbb {G}_p\). \(H:\{0,1\}^* \rightarrow \mathbb {Z}_p\) and \(H_{evt}:\{0,1\}^* \rightarrow \mathbb {G}_p\) are hash functions. Let \(g_0, g_1, g_2\) be generators of \(\mathbb {G}_1\), \(h_0, h_1, h_2\) be generators of \(\mathbb {G}_2\) such that \(\psi (h_i) = g_i\), and \(u_0, u_1, u_2\) be generators of \(\mathbb {G}_p\), such that the discrete logarithms of the generators with respect to each other are unknown. The generation of these parameters can be done by a trusted general manager, which is present only once at the beginning. We use the \(\mathtt {BBS^{+}}\) signature scheme [6] with a message block of size one (\(M=\{sk_i\}\)), so the parameters of \(\mathtt {BBS^{+}}\) are included in these parameters. These values are public and considered as input to all operations (omitted for simplicity).

Fig. 1. \(\mathtt {PDB}\): \(\mathtt {Bit\ Commitment}\) step

The steps of the protocol are as follows:

1. Setup: \((sk_{\mathcal {RA}}, pk_{\mathcal {RA}}= h_0^{sk_{\mathcal {RA}}}) \leftarrow \mathtt {BBS^{+}.KeyGen} (1^{\lambda })\)

2. Registration: \(\mathcal {P}_i\) and \(\mathcal {RA}\) do the following steps:

  • \(\mathcal {P}_i\) randomly chooses an odd number \(sk_i \in _R \mathbb {Z}_p \setminus \{q\}\).

  • They execute the \(\mathtt {BBS^{+}.Sign}\) interactive protocol, at the end of which \(\mathcal {P}_i\) holds the signature \(\sigma _i = Sign_{\mathcal {RA}}(sk_i)\).

3. Distance-Bounding: There are five steps in this phase: (i) \(\mathtt {BBS^{+}.Verify}\), (ii) \(\mathtt {Bit\ Commitments}\), (iii) \(\mathtt {Fast}\)-\(\mathtt {Exchange}\), (iv) \(\mathtt {Commitment\ Opening}\) and (v) \(\mathtt {Proof}\)-\(\mathtt {of}\)-\(\mathtt {Knowledge}\). At any step of this phase, \(\mathcal {V}_j\) terminates with \(Out_{\mathcal {V}} = 0\) if a failure happens; otherwise, if it reaches the end, it returns \(Out_{\mathcal {V}} = 1\) as the output.

(i) In the \(\mathtt {BBS^{+}.Verify}\) step, \(\mathcal {P}_i\) proves possession of a valid signature of \(\mathcal {RA}\) on her committed secret key, where C denotes the commitment on \(sk_i\). If \(Out = 1\), then \(\mathcal {V}_j\) continues to the next step and keeps the value of C.

(ii) In the \(\mathtt {Bit~Commitment}\) step, the process of Fig. 1 is executed. At the end of this step, \(\mathcal {V}_j\) is able to compute:

\(z=\prod _{l=0}^{\lambda -1}(C_{k,l}C_{e,l})^{2^l} = g_1^{\sum _{l=0}^{\lambda -1} (2^l.k[l] + 2^l.e[l])}.h^{\sum _{l=0}^{\lambda -1} (2^l.(v_{k,l} + v_{e,l}))} = g_1^{k+e}.h^{v} = g_1^{u.sk_i}.h^{v}\) mod p

such that \(k=\sum _{l=0}^{\lambda -1} (2^l.k[l])\ mod\ (p-1)\), \(e=\sum _{l=0}^{\lambda -1} (2^l.e[l])\ mod\ (p-1)\), and \(v=\sum _{l=0}^{\lambda -1} (2^l.(v_{k,l} + v_{e,l}))\ mod\ (p-1)\).
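The recombination of z above is just the homomorphic property of the bit commitments. The following self-contained toy check, with illustrative small parameters standing in for the \(\lambda \)-bit group, verifies that the product of the bit commitments, weighted by powers of two, commits to \(k+e\).

```python
import secrets

# Toy check of z = prod_l (C_{k,l} * C_{e,l})^{2^l} = g1^(k+e) * h^v mod p,
# with exponents reduced mod p-1.  Small parameters are illustrative only.
p = 2039                               # safe prime, p = 2*1019 + 1
g1, h = 7, 11                          # generators of Z_p^* (log relation assumed unknown)
lam = 16                               # toy security parameter (bit length of k and e)

k = secrets.randbelow(1 << lam)
e = secrets.randbelow(1 << lam)
v_k = [secrets.randbelow(p - 1) for _ in range(lam)]
v_e = [secrets.randbelow(p - 1) for _ in range(lam)]

def bit(x, l):
    return (x >> l) & 1

# Bit commitments C_{k,l} = g1^{k[l]} h^{v_{k,l}} and C_{e,l} = g1^{e[l]} h^{v_{e,l}}.
C_k = [(pow(g1, bit(k, l), p) * pow(h, v_k[l], p)) % p for l in range(lam)]
C_e = [(pow(g1, bit(e, l), p) * pow(h, v_e[l], p)) % p for l in range(lam)]

# Verifier-side recombination.
z = 1
for l in range(lam):
    z = (z * pow(C_k[l] * C_e[l], 1 << l, p)) % p

v = sum((1 << l) * (v_k[l] + v_e[l]) for l in range(lam)) % (p - 1)
expected = (pow(g1, (k + e) % (p - 1), p) * pow(h, v, p)) % p
print(z == expected)   # True: z commits to k + e (= u * sk_i in the protocol)
```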

(iii) In the \(\mathtt {Fast}\)-\(\mathtt {Exchange}\) step, the process of Fig. 2 runs for every \(l \in \{0, \ldots , \lambda -1\}\).

Fig. 2. \(\mathtt {PDB}\): \(\mathtt {Fast}\)-\(\mathtt {Exchange}\) step
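For illustration, here is a minimal sketch of one fast-exchange round in the style of \(\mathtt {DBPK}\)-\(\mathtt {Log}\) [5], under the assumption that the verifier sends a random challenge bit and the prover answers with the corresponding bit of k or e; the exact round structure of Fig. 2 may differ, and the timing here is only simulated locally.

```python
import secrets, time

# Assumed round structure: challenge bit c[l] selects the l-th bit of k (c[l]=0)
# or of e (c[l]=1), while the verifier measures the round-trip time.

def prover_response(k, e, l, challenge_bit):
    secret = e if challenge_bit else k
    return (secret >> l) & 1

def fast_exchange_round(k, e, l, max_rtt_s):
    challenge_bit = secrets.randbits(1)
    start = time.perf_counter()
    response = prover_response(k, e, l, challenge_bit)   # over the radio channel in reality
    rtt = time.perf_counter() - start
    # The verifier stores (challenge_bit, response) and accepts the timing here;
    # correctness of the response bits is only checkable later, in Commitment Opening.
    return challenge_bit, response, rtt <= max_rtt_s

k, e = 0b1011, 0b0110
print(fast_exchange_round(k, e, l=2, max_rtt_s=1e-3))
```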

Fig. 3. \(\mathtt {PDB}\): \(\mathtt {Commitment~Opening}\) step

(iv) In the \(\mathtt {Commitment~Opening}\) step, the process of Fig. 3 runs for every \(l \in \{0,~\ldots , \lambda -1\}\).

(v) Finally, in the \(\mathtt {Proof}\)-\(\mathtt {of}\)-\(\mathtt {Knowledge}\) step, an interactive instance of \(PoK[(sk_i, v, r):z=g_1^{u.sk_i}.h^{v} \wedge C=g_1^{sk_i}.g_2^{r}]\) takes place to ensure that the sum of the secret values k and e is equal to the randomized form of the committed secret key (\(u.sk_i\)). One possible way of performing the \(\mathtt {PoK}\) is to repeat the process of Fig. 4 t times. The process continues unless a failure occurs.

Fig. 4. \(\mathtt {PDB}\): \(\mathtt {Proof}\)-\(\mathtt {of}\)-\(\mathtt {Knowledge}\) step
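One standard way to realize such a \(\mathtt {PoK}\) with binary challenges, repeated t times, is a Schnorr-style round for the conjunction of the two relations; the following sketch is an illustrative instantiation under that assumption (toy parameters, not necessarily the exact process of Fig. 4).

```python
import secrets

# One round of a Schnorr-style PoK for z = g1^(u*sk) * h^v and C = g1^sk * g2^r
# (mod p), with a binary challenge; repeating the round t times reduces the
# soundness error.  Parameters and names are illustrative.
p = 2039                          # toy safe prime (2*1019 + 1)
g1, g2, h = 7, 11, 13             # assumed independent generators of Z_p^*
n = p - 1                         # exponent modulus

def commit_phase(u):
    a, b, c = (secrets.randbelow(n) for _ in range(3))
    T1 = (pow(g1, (u * a) % n, p) * pow(h, b, p)) % p
    T2 = (pow(g1, a, p) * pow(g2, c, p)) % p
    return (a, b, c), (T1, T2)

def response_phase(state, ch, sk, v, r):
    a, b, c = state
    return ((a + ch * sk) % n, (b + ch * v) % n, (c + ch * r) % n)

def verify_round(u, z, C, T1, T2, ch, s1, s2, s3):
    ok1 = (pow(g1, (u * s1) % n, p) * pow(h, s2, p)) % p == (T1 * pow(z, ch, p)) % p
    ok2 = (pow(g1, s1, p) * pow(g2, s3, p)) % p == (T2 * pow(C, ch, p)) % p
    return ok1 and ok2

# Demo: an honest prover passes every round.
sk, v, r, u = (secrets.randbelow(n) for _ in range(4))
z = (pow(g1, (u * sk) % n, p) * pow(h, v, p)) % p
C = (pow(g1, sk, p) * pow(g2, r, p)) % p
for _ in range(10):                       # t rounds
    state, (T1, T2) = commit_phase(u)
    ch = secrets.randbits(1)
    s1, s2, s3 = response_phase(state, ch, sk, v, r)
    assert verify_round(u, z, C, T1, T2, ch, s1, s2, s3)
print("all rounds verified")
```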

If all t verifications succeed, then \(\mathcal {V}_j\) returns \(Out_{\mathcal {V}}=1\). Note that if we replace the secret-key commitment (\(C=g_1^{sk_i}.g_2^{r}\)) with the prover’s public key (\(pk_i=g_1^{sk_i}\)) and remove the \(\mathtt {BBS^{+}}\) signature scheme, we obtain a secure public-key distance-bounding protocol (\(\mathtt {DBPoK}\)-\(\mathtt {log^{+}}\)) that fixes the vulnerabilities of \(\mathtt {DBPK}\)-\(\mathtt {Log}\) [5]. We now state our claim about the security of the \(\mathtt {PDB}\) protocol.

Theorem 1

\(\mathtt {PDB}\) protocol is \((negl(\lambda ), negl(\lambda ), negl(\lambda ), negl(\lambda ))\)-secure, under Definition 6.

The security proof of this theorem will be provided in the full version of the paper.

5 Conclusion

In this paper we solved the open problem of designing a public-key \(\mathtt {DB}\) protocol that is secure against all \(\mathtt {DB}\) adversaries, by proposing a new protocol (\(\mathtt {DBPoK}\)-\(\mathtt {log^{+}}\)). This protocol is based on the \(\mathtt {DBPK}\)-\(\mathtt {Log}\) protocol, which has been shown to be vulnerable to DFA and TFA. We achieved security by adding some \(\mathtt {PoK}\) operations. The computational cost of this fix is equivalent to about \(4.\lambda \) exponentiations in a prime-order cyclic group per \(\mathtt {DB}\) instance.

Moreover, we proposed a new privacy model for provers against dishonest verifiers and an honest-but-curious registration authority, and finally extended the \(\mathtt {DBPoK}\)-\(\mathtt {log^{+}}\) protocol to build a privacy-preserving \(\mathtt {DB}\) protocol (\(\mathtt {PDB}\)) in the new privacy model. This protocol inherits its distance-bounding properties from the \(\mathtt {DBPoK}\)-\(\mathtt {log^{+}}\) protocol. We replaced the public-key setting of provers with Pedersen commitments and adopted the \(\mathtt {BBS^{+}}\) signature scheme to provide privacy and authentication at the same time. As a result, \(\mathtt {PDB}\) provides all three properties together: distance-bounding, privacy and authentication. The computational cost of this extension is about 25 extra exponentiations per \(\mathtt {DB}\) instance, in comparison with \(\mathtt {DBPoK}\)-\(\mathtt {log^{+}}\).

There are still two open problems in this area: (1) a \(\mathtt {DB}\) protocol secure against all \(\mathtt {DB}\) adversaries that supports extensive privacy against an adversary with access to the \(\mathtt {Corrupt}\) oracle; and (2) the same kind of \(\mathtt {DB}\) protocol in the presence of a dishonest registration authority.