
1 Introduction

Biometric authentication has become increasingly popular as a fast and convenient method of authentication that does not require users to remember and manage long and cumbersome passwords. However, the main advantage of biometrics, i.e., their direct and inherent link with the identity of individuals, also raises serious security and privacy concerns. Since biometric characteristics cannot be changed or revoked, unauthorised leakage of this information leads to irreparable security and privacy breaches such as identity fraud and individual profiling or tracking [18]. Thus, there is an urgent need for efficient and reliable privacy-preserving biometric authentication protocols (\(\mathsf {BAP}\)s).

The design of privacy-preserving \(\mathsf {BAP}\)s is by itself a very delicate procedure. It becomes even more challenging when one considers the distributed setting in which a resource-constrained client outsources the computationally heavy authentication process to more powerful external entities. In this paper, we focus on Yasuda et al.’s protocol for privacy-preserving \(\mathsf {BAP}\)s in the distributed setting [23] and show how to mitigate the privacy attacks presented by Abidin et al. [1] by employing Backes et al.’s verifiable computation scheme [4].

1.1 Background and Related Work

Distributed privacy-preserving \(\mathsf {BAP}\)s usually involve the following entities: (i) a client/user \(\mathcal {C}\), (ii) a database \(\mathcal {DB}\), (iii) a computational server \(\mathcal {CS}\), and (iv) an authentication server \(\mathcal {AS}\). The granularity of roles and entities in the biometric authentication process facilitates preserving the privacy of the sensitive information. Indeed, this distributed setting guarantees that no single entity has access to both the biometric templates (fresh and stored ones) and the identity of the querying user.

Several existing proposals of privacy-preserving \(\mathsf {BAP}\)s use the distributed setting, e.g., [5, 21,22,23], and leverage advanced cryptographic techniques such as homomorphic encryption [7, 23], oblivious transfer [8] and garbled circuits [14]. In particular, Yasuda et al.’s protocol [23] was claimed to be privacy-preserving since it is based on the distributed setting and relies on a novel somewhat homomorphic encryption scheme based on ideal lattices. Abidin et al. [1] showed that Yasuda et al.’s \(\mathsf {BAP}\) is privacy-preserving only in the honest-but-curious model and described an algorithm that enables a malicious \(\mathcal {CS}\) to recover a user’s biometric template. Intuitively, Abidin et al.’s attack succeeds because \(\mathcal {AS}\) does not detect that the malicious \(\mathcal {CS}\) returns a value different from the output of the (honest) outsourced computation, leaving space for hill-climbing strategies [20] that may lead to the disclosure of the stored reference biometric template.

Verifiable delegation of computation (\(\mathsf {VC}\)) is a cryptographic primitive that enables a client to securely and efficiently offload computations to an untrusted server [11]. Verification of arbitrarily complex computations was initially achieved via interactive proofs [2, 13] and later moved towards more flexible and efficient schemes such as [3, 9, 10, 19]. The setting of \(\mathsf {VC}\) schemes is distributed by nature and thus perfectly fits the basic requirement of privacy-preserving \(\mathsf {BAP}\)s. For this reason, Bringer et al. [6] suggested using \(\mathsf {VC}\) techniques to detect malicious behaviours in \(\mathsf {BAP}\)s.

In this paper, we provide the first explicit instantiation of a distributed privacy-preserving \(\mathsf {BAP}\) which achieves security against malicious \(\mathcal {CS}\) thanks to the verifiability of the delegated computation.

1.2 Our Contributions

In this paper, we mitigate Abidin et al.’s attack [1] against Yasuda et al.’s privacy-preserving biometric authentication protocol [23] by means of the verifiable computation scheme by Backes et al. [4]. We combine the two schemes in an efficient and secure way, and obtain a modification of Yasuda et al.’s protocol with strong privacy guarantees. As a result, we obtain a new \(\mathsf {BAP}\) which builds on top of Yasuda et al.’s and is truly privacy-preserving in the distributed setting.

From a general point of view, this paper offers a strategy to transform privacy-preserving \(\mathsf {BAP}\)s that are secure in the honest-but-curious model into schemes that can tolerate a malicious \(\mathcal {CS}\), by addressing the most significant challenges in privacy-preserving \(\mathsf {BAP}\)s: guaranteeing integrity and privacy of both the data and the computation. Although the idea of combining \(\mathsf {VC}\) and \(\mathsf {BAP}\)s is quite natural and intuitive [6], the actual combination needs to be done carefully in order to avoid flawed approaches.

Organisation. The paper is organised as follows. Section 2 describes the background notions used in the rest of the paper. Section 3 contains our modification of Backes et al.’s \(\mathsf {VC}\) scheme to combine it with the somewhat homomorphic encryption scheme used in [23]. Section 4 presents an improved version of Yasuda et al.’s \(\mathsf {BAP}\), together with a security and efficiency analysis; the proposed privacy-preserving \(\mathsf {BAP}\) incorporates the new construction of \(\mathsf {VC}\) on encrypted data of Sect. 3. Section 5 is an important side-note to our contributions, as it demonstrates how naïve and straightforward compositions of \(\mathsf {VC}\) and homomorphic encryption may lead to leakage of private information. Section 6 concludes the paper.

2 Preliminaries

Notations. We denote by \(\mathbb {Z}\) and \(\mathbb {Z}_p = \mathbb {Z}/p\mathbb {Z}\) the ring of integers and the integers modulo p, respectively. For two integers \(x,d\in \mathbb {Z}\), \([x]_d\) denotes the reduction of x modulo d in the range of \([-d/2,d/2]\). We write vectors with capital letters, e.g., A, and refer to the i-th component of A as \(A_i\). The symbol \(x\leftarrow ^{\!\!\!\! \$}\mathcal {X}\) denotes selecting x uniformly at random from the set \(\mathcal {X}\).

We denote the Hadamard product for binary vectors as \(\diamond : \mathbb {Z}_2^n \times \mathbb {Z}_2^n \rightarrow \mathbb {Z}_2^n\), with \(A\diamond B = C\), where \(C_i = A_i \cdot B_i \in \mathbb {Z}_2\) for \(i=1,2, \ldots ,n\). The Hadamard product is similar to the inner product of vectors except that the output is a vector rather than an integer.
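A minimal sketch (with arbitrary toy vectors, for illustration only) contrasting the two operations:

```python
# Hadamard product vs. inner product for binary vectors (toy example).
def hadamard(a, b):
    return [x * y for x, y in zip(a, b)]      # component-wise product: a vector

def inner(a, b):
    return sum(x * y for x, y in zip(a, b))   # inner product: a single integer

A = [1, 0, 1, 1]
B = [1, 1, 0, 1]
print(hadamard(A, B))   # [1, 0, 0, 1]
print(inner(A, B))      # 2
```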

Bilinear Maps. A symmetric bilinear group is a tuple \((p,\mathbb {G}_{},\mathbb {G}_{T},g,g_T,e)\), where \(\mathbb {G}_{}\) and \(\mathbb {G}_{T}\) are groups of prime order p. The elements \(g\in \mathbb {G}_{}\) and \( g_T\in \mathbb {G}_{T}\) are generators of the groups they belong to, and \(e:\mathbb {G}_{ }\times \mathbb {G}_{ }\longrightarrow \mathbb {G}_{T}\) is a bilinear map, i.e., \(\forall A,B\in \mathbb {G}_{ }\) and \(x,y\in \mathbb {Z}_{p}\) it holds that \(e(A^x,B^y)=e(A,B)^{xy}\) and \(e(g,g)\ne 1_{\mathbb {G}_{T}}\). In the setting of \(\mathsf {VC}\), the map e is cryptographically secure, i.e., it should be defined over groups where the discrete logarithm problem is assumed to be hard or where it is hard to invert \(\phi _g\). In bilinear groups there exists a natural isomorphism between \((\mathbb {Z}_p, +)\) and \(\mathbb {G}_{}\) given by \(\phi _g (x) = g^x\); similarly for \(\mathbb {G}_{T}\). Since \(\phi _g \) and \(\phi _{g_T}\) are isomorphisms, there exist inverses \(\phi _g^{-1}: \mathbb {G}\rightarrow \mathbb {Z}_p\) and \(\phi _{g_T}^{-1}: \mathbb {G}_T \rightarrow \mathbb {Z}_p\), which can be used to homomorphically evaluate any arithmetic circuit \(f: \mathbb {Z}_p^n \rightarrow \mathbb {Z}_p\), from \(\mathbb {G}\) to \(\mathbb {G}_T\). More precisely, there exists a map \({\mathbf {GroupEval}}\) (as defined in [4]):

$$\begin{aligned} {\mathbf {GroupEval}}(f, X_1,\ldots ,X_n) = \phi _{g_T}(f(\phi _g^{-1}(X_1),\ldots ,\phi _g^{-1}(X_n))). \end{aligned}$$

For security, we require that the inverse maps \(\phi _{g}^{-1}\) and \(\phi _{g_T}^{-1}\) are not efficiently computable, since evaluating them amounts to solving the discrete logarithm problem.
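Even though the inverse maps are hard to compute, \({\mathbf {GroupEval}}\) itself is efficiently computable for the quadratic circuits considered in this paper, because the pairing evaluates degree-2 terms "in the exponent". As an illustrative worked example (assuming \(g_T = e(g,g)\)), for \(f(x_1,x_2) = c_1 x_1 + c_2 x_2 + c_3 x_1 x_2\) with public constants \(c_1, c_2, c_3 \in \mathbb {Z}_p\), given only \(X_1 = g^{x_1}\) and \(X_2 = g^{x_2}\) one can compute

$$\begin{aligned} {\mathbf {GroupEval}}(f, X_1, X_2) = e(X_1,g)^{c_1}\cdot e(X_2,g)^{c_2}\cdot e(X_1,X_2)^{c_3} = g_T^{\,f(x_1,x_2)}, \end{aligned}$$

without ever computing the discrete logarithms \(x_1\) and \(x_2\).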

Homomorphic MAC Authenticators. In this paper, we make use of Backes, Fiore and Reischuk’s verifiable computation scheme based on homomorphic MAC authenticators [4], which we refer to as \(\mathsf {BFR} \). The \(\mathsf {BFR} \) scheme targets functions f that are quadratic polynomials over a large number of variables. Figure 1 contains a succinct description of the \(\mathsf {BFR} \) scheme.

Fig. 1. The \(\mathsf {BFR} \) verifiable delegation of computation scheme.

For further details we refer the reader to the main paper [4].

Homomorphic Encryption. Let \(\mathcal {M}\) denote the space of plaintexts, equipped with an operation \(\boxdot \), and let \(\mathcal {C}\) be the space of ciphertexts with operation \(\odot \). An encryption scheme is said to be homomorphic if, for any key, the encryption function \({\mathbf {Enc}}\) satisfies \({\mathbf {Enc}}(m_1\boxdot m_2) \leftarrow {\mathbf {Enc}}(m_1) \odot {\mathbf {Enc}}(m_2) \) for all \(m_1, m_2 \in \mathcal {M}\), where \(\leftarrow \) means that the left-hand side can be computed from the right-hand side without decryption. In this paper, we only use Somewhat Homomorphic Encryption schemes (\(\mathsf {SHE}\)). As the name suggests, these schemes support only a limited number of homomorphic operations, e.g., an unbounded number of homomorphic additions but only a bounded number of multiplications. The choice of \(\mathsf {SHE}\) instead of Fully Homomorphic Encryption [12] is due to efficiency: \(\mathsf {SHE}\), if used appropriately, can be much faster and more compact [15].
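To make the homomorphic property concrete, the following toy example uses textbook Paillier encryption, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This is not the \(\mathsf {SHE}\) scheme of Yasuda et al. [23]; it is only a familiar stand-in, with deliberately tiny (insecure) parameters, to illustrate the definition above.

```python
# Textbook Paillier: Enc(m1) * Enc(m2) mod n^2 decrypts to (m1 + m2) mod n.
# Toy, insecure parameters; for illustration of the homomorphic property only.
import math, random

p, q = 10007, 10009                      # tiny primes (insecure on purpose)
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1                                # standard generator choice

def enc(m):
    r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    L = (pow(c, lam, n2) - 1) // n       # L(x) = (x - 1) / n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return L * mu % n

m1, m2 = 123, 456
assert dec(enc(m1) * enc(m2) % n2) == (m1 + m2) % n   # homomorphic addition
print("Enc(m1) * Enc(m2) decrypts to m1 + m2")
```

A somewhat homomorphic scheme such as the one in [23] additionally supports a limited number of ciphertext multiplications, which is exactly what the Hamming-distance circuit of Sect. 3 requires.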

The Yasuda et al. Protocol. Yasuda et al. [23] proposed a privacy-preserving biometric authentication protocol that targets one-to-one authentication and relies on somewhat homomorphic encryption based on ideal lattices. Two packing methods facilitate efficient calculation of the secure Hamming distance, a common metric for comparing biometric templates. The protocol uses a distributed setting with three parties: a client \(\mathcal {C}\), a computation server \(\mathcal {CS}\) (which contains the database \(\mathcal {DB}\)) and an authentication server \(\mathcal {AS}\). The protocol is divided into three phases.

  • Setup Phase: \(\mathcal {AS}\) generates the public key \({pk}\) and the secret key \({sk}\) of the \(\mathsf {SHE}\) scheme in [23]. \(\mathcal {AS}\) gives \({pk}\) to \(\mathcal {C}\) and \(\mathcal {CS}\) and keeps \({sk}\).

  • Enrollment phase: \(\mathcal {C}\) provides a feature vector A from the client’s biometric data (e.g., fingerprints), runs the type-1 packing method and outputs the encrypted feature vector \(\mathsf {vEnc}_{1}(A)\). The computation server stores \((\mathsf{ID}, \mathsf {vEnc}_{1}(A))\) in \(\mathcal {DB}\) as the reference template for the client ID.

  • Authentication phase: upon an authentication request, \(\mathcal {C}\) provides a fresh biometric feature vector B encrypted with the type-2 packing method and sends \((\mathsf{ID}, \mathsf {vEnc}_{2}(B))\) to the computational server. \(\mathcal {CS}\) extracts from the database the tuple \((\mathsf{ID}, \mathsf {vEnc}_{1}(A))\) using ID as the search key. \(\mathcal {CS}\) calculates the encrypted Hamming distance \(ct_{\mathsf {HD}}\) and sends it to the authentication server. \(\mathcal {AS}\) decrypts \(ct_{\mathsf {HD}}\) and retrieves the actual Hamming distance \({\mathsf {HD}}(A, B) = {\mathsf {Dec}}({sk}, ct_{\mathsf {HD}})\). \(\mathcal {AS}\) returns \({\mathsf {yes}}\) if \({\mathsf {HD}}(A, B) \le \kappa \) or \({\mathsf {no}}\) if \({\mathsf {HD}}(A,B) > \kappa \), where \(\kappa \) is the predefined accuracy threshold of the authentication system (the plaintext identity underlying the Hamming distance computation is sketched right after this list).
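At the plaintext level, the secure Hamming distance computed by \(\mathcal {CS}\) rests on the standard identity \({\mathsf {HD}}(A,B)=\sum _i A_i + \sum _i B_i - 2\sum _i A_i B_i\) for binary vectors, which the packing-based ciphertext formula of Sect. 3.2 evaluates homomorphically. A minimal plaintext-only sanity check (toy random templates):

```python
# Plaintext sanity check of the Hamming-distance identity
#   HD(A, B) = sum(A) + sum(B) - 2 * <A, B>   for binary vectors.
import random

n = 2048                                        # template length used in the paper
A = [random.randint(0, 1) for _ in range(n)]    # stand-in for the stored template
B = [random.randint(0, 1) for _ in range(n)]    # stand-in for the fresh template

hd_direct   = sum(a ^ b for a, b in zip(A, B))
hd_identity = sum(A) + sum(B) - 2 * sum(a * b for a, b in zip(A, B))
assert hd_direct == hd_identity
print("HD(A, B) =", hd_direct)
```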

Figure 2 depicts the authentication phase of Yasuda et al.’s \(\mathsf {BAP}\).

For additional details on biometric authentication protocols and systems we refer the reader to [16].

Fig. 2. Authentication phase in Yasuda et al.’s \(\mathsf {BAP}\) [23].

3 Combining the \(\mathsf {BFR} \) and the \(\mathsf {SHE}\) Schemes

In this section, we describe how to efficiently combine the verifiable computation scheme \(\mathsf {BFR} \) by Backes et al. [4] with the somewhat homomorphic scheme \(\mathsf {SHE}\) by Yasuda et al. [23]. We call the resulting scheme \(\mathsf {BFR} \)+\(\mathsf {SHE}\). Our motivation for defining this new scheme is to build a tailored version of \(\mathsf {BFR} \) that we insert in Yasuda et al.’s biometric authentication protocol to mitigate the template recovery attack of [1].

As a preliminary step, we explain the most challenging point in the combination of the two schemes: the ring range problem. This problem arises because the elements and operations in \(\mathsf {BFR} \) and \(\mathsf {SHE}\) are defined over two different rings. This passage is quite mathematical, but it is necessary to guarantee the correctness of our composition \(\mathsf {BFR} \)+\(\mathsf {SHE}\), presented later in this section.

3.1 The Ring Range Problem

The most significant challenge in combining the Backes et al. \(\mathsf {VC}\) scheme with the Yasuda et al. \(\mathsf {SHE}\) scheme is the different range of the base rings. While \(\mathsf {BFR} \) handles all operations in \(\mathbb {Z}_p\), where p is a prime, the operations in the \(\mathsf {SHE}\) scheme [23] are performed in \(\mathbb {Z}_d\), where d is the resultant of two polynomials. Therefore, in our \(\mathsf {BFR} \)+\(\mathsf {SHE}\) scheme, we need to tweak the input data whenever there is a mismatch between the ranges. In the calculation of the secure Hamming distance, there is a constant term equal to \(-2\), which lives in \([-d/2,d/2)\) but not in [0, p). In order to verify and generate proper tags, we write \(-2\) as \(D = (d-2) \mod p\). Furthermore, we need to check the impact of the range difference on the verification. The first equation in the \({\mathbf {Ver}}\) algorithm of \(\mathsf {BFR} \) is:

$$\begin{aligned} ct_{\mathsf {HD}}= y_{0}^{({{\mathsf {HD}}})}, \end{aligned}$$
(1)

where \(ct_{\mathsf {HD}}\in \mathbb {Z}_d\) and \(y_0^{({\mathsf {HD}})} \in \mathbb {Z}_p\). In our instantiation, the term \(ct_{\mathsf {HD}}\) corresponds to the encrypted Hamming distance between the fresh and the reference templates, while \(y_0^{({\mathsf {HD}})}\) is a component of the final authentication tag. As long as \(d \ne p\), Eq. (1) is in general not satisfied, even when the computation is carried out correctly.

We present a general solution to this problem. For simplicity, we assume \(p<d\) (ideally, the tag size should be small), although the reasoning also applies when \(p>d\) by swapping the roles of p and d. Our solution relies on keeping track of the quotient of the division by p. Given a stored template \(\alpha \in \mathbb {Z}_d\) and a fresh template \(\beta \in \mathbb {Z}_d\), both encrypted, we have that \(\alpha = \alpha ' + mp\) with \(\alpha ' = \alpha \mod p \in \mathbb {Z}_p\), and \(\beta = \beta ' + kp\) with \(\beta ' = \beta \mod p \in \mathbb {Z}_p\).

Let \(\mathsf{SWHD}(x,y)\) be the arithmetic circuit that calculates the encrypted Hamming distance without the final reduction modulo d. Let \(c = \mathsf{SWHD}(\alpha , \beta )\) and let \(c' = \mathsf{SWHD}(\alpha ', \beta ')\) be the same circuit evaluated in \(\mathbb {Z}_p\); then \(\mathsf{SWHD}(\alpha , \beta ) \equiv \mathsf{SWHD}(\alpha ', \beta ') \pmod p\), i.e., \(c = \ell \cdot p + c'\) for some integer \(\ell \), the quotient. In our setting, we want to compare the Hamming distance of the biometric templates with the threshold \(\kappa \), which determines whether the templates match, i.e., whether the client is authenticated. To this end, we track \(\ell \mod d\) instead of \(\ell \) directly, since \(\ell \) contains more information and could lead to a privacy leak. Relating back to Eq. (1), we have \(ct_{\mathsf {HD}}= c \mod d \in \mathbb {Z}_d\) and \(y_{0}^{({\mathsf {HD}})} = c' \in \mathbb {Z}_p\). Given \(c = \ell \cdot p + c'\), it holds that:

$$\begin{aligned} ct_{\mathsf {HD}}&= c\mod d = (\ell \cdot p+c') \mod d \\&\equiv (\ell \mod d)\cdot (p \mod d) + (c'\mod d) \\&\equiv (\ell \mod d)\cdot (p \mod d) + y_{0}^{({\mathsf {HD}})} \pmod {d} \end{aligned}$$
(2)

Thus, if we define \(\ell _d := (\ell \mod d)\cdot (p \mod d)\), the verification equation in (1) becomes \(ct_{\mathsf {HD}}\equiv y_{0}^{({\mathsf {HD}})} + \ell _d \pmod d\), which is satisfied whenever \(ct_{\mathsf {HD}}\) is computed correctly (as we show in Sect. 3.3).
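The bookkeeping above can be checked numerically. The sketch below uses small toy moduli (p prime for the \(\mathsf {BFR} \) side, d for the \(\mathsf {SHE}\) side) and a random stand-in for the integer output of the circuit; all names and values are illustrative only.

```python
# Numeric check of the ring-range bookkeeping of Sect. 3.1 (toy parameters).
import random

p, d = 101, 1000003               # toy moduli: p prime (BFR side), d (SHE side)

c = random.randrange(0, 10**12)   # stand-in for SWHD(alpha, beta) over the integers
c_prime = c % p                   # y_0^{(HD)}: the same circuit evaluated in Z_p
ell = (c - c_prime) // p          # the quotient, so that c = ell * p + c'
ell_d = (ell % d) * (p % d)

ct_hd = c % d                     # what CS returns as the encrypted Hamming distance
assert ct_hd == (c_prime + ell_d) % d   # adjusted verification equation, cf. Eq. (2)
print("ring-range check passed")
```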

3.2 Our \(\mathsf {BFR} \)+\(\mathsf {SHE}\) Scheme

To convey the intuition of how we incorporate \(\mathsf {BFR} \)+\(\mathsf {SHE}\) into Yasuda et al.’s \(\mathsf {BAP}\), we describe the algorithms of \(\mathsf {BFR} \)+\(\mathsf {SHE}\) directly for the case in which the encrypted vectors are biometric templates:

  • \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\(\mathbf{{KeyGen}}\) \((\lambda )\): The key generation algorithm of \(\mathsf {BFR} \)+\(\mathsf {SHE}\) runs \(\mathsf {SHE}.\mathbf{{KeyGen}}(\lambda )\rightarrow ({pk}, {sk})\) and \(\mathsf {BFR} .\mathbf{{KeyGen}}(\lambda )\rightarrow ({ek}, {vk})\). The output is the four-tuple \(({ek}, {pk}, {sk}, {vk})\).

  • \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\({\mathbf {Enc}}\) \(({pk}, A, phase)\): The encryption algorithm takes as input the (encryption) public key \({pk}\), a plaintext biometric template \(A\in \{0,1\}^{2048}\) and a \(phase\in \{1,2\}\) to select the appropriate packing method. It outputs the ciphertext \(ct\) computed as \(ct=\mathsf{vEnc}_{phase}(A)\), using the type-phase packing method of the \(\mathsf {SHE}\) scheme.

  • \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\({\mathbf {Auth}}\) \(({vk}, L, ct)\): on input the verification key \({vk}\), a ciphertext \(ct\) and a multi-label \(L=(\varDelta , \tau )\), with \(\varDelta \) the data set identifier (e.g., the client’s ID) and \(\tau \) the input identifier (e.g., “stored biometric template” or “fresh biometric template”), this algorithm outputs \(\sigma \leftarrow \mathsf {BFR} .{\mathbf {Auth}}({vk}, L, ct) \), with \(\sigma = (y_{0}, Y_{1}, \hat{Y}_{2})= (ct,\ (F_{K}(\varDelta , \tau )\cdot g^{-ct})^{1/\theta },\ 1)\), where the value \(\theta \) and the function \(F_K\) are defined in \({vk}\).

  • \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\({\mathbf {Comp}}\) \(({pk}, ct_1, ct_2)\): The compute algorithm takes as input the encryption public key \({pk}\) and two ciphertexts \(ct_1, ct_2\), which intuitively correspond to the encryptions \(\mathsf {vEnc}_{1}(A)\) and \(\mathsf {vEnc}_{2}(B)\), respectively. It computes the encrypted Hamming distance as \(ct_{\mathsf {HD}}= C_{2}\cdot \mathsf {vEnc}_{1}(A)+ C_{1}\cdot \mathsf {vEnc}_{2}(B)+ (-2\cdot \mathsf {vEnc}_{1}(A)\cdot \mathsf {vEnc}_{2}(B)) \in \mathbb {Z}_d\), where \(C_{1}:=\left[ \sum \limits _{i=0}^{n-1}r^{i}\right] _{d}\), \(C_{2}:=[-C_{1}+2]_{d}\), and r, d are extracted from \({pk}\). To solve the ring range problem described in Sect. 3.1, the algorithm also computes \(\ell _d\) as follows. Let c be the result of the (encrypted) Hamming distance computation without the final reduction modulo d. Then \(c'=c \mod p\) and \(c = \ell p+c'\), where \(c'\) is a component of the authentication tag and \(\ell = (c-[c \mod p])/p\) is the quotient. The algorithm sets \(\ell _d = (\ell \mod d)\cdot (p \mod d)\). The output is \((ct_{\mathsf {HD}}, \ell _d)\).

  • \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\({\mathbf {Eval}}\) \(({ek}, {pk}, \sigma _{1}, \sigma _{2})\): The evaluation algorithm takes as input the evaluation key \({ek}\), the encryption public key \({pk}\), and two tags, which intuitively correspond to the authenticators of the two biometric templates A and B. In our setting, the function to be evaluated is fixed to \(f={\mathsf {HD}}\), the Hamming distance. This algorithm outputs \(\sigma _{\mathsf {HD}}= (y_{0}, Y_{1}, \hat{Y}_{2})\leftarrow \mathsf {BFR} .{\mathbf {Eval}}({ek}, {\mathsf {HD}}, (\sigma _1, \sigma _{2}))\).

    In detail, every gate accepts either two tags \(\sigma _{A}, \sigma _{B}\), i.e., a pair in \((\mathbb {Z}_{p} \times \mathbb {G} \times \mathbb {G}_T)^2\), or one tag and a constant, \((\sigma , c)\in (\mathbb {Z}_{p} \times \mathbb {G} \times \mathbb {G}_T) \times \mathbb {Z}_p \). The output of a gate is a new tag \(\sigma ' \in \mathbb {Z}_{p} \times \mathbb {G} \times \mathbb {G}_T\), which is fed into the next gate in the circuit as one of its two inputs. The evaluation stops when the final gate of f is reached and the resulting tag \(\sigma _{{\mathsf {HD}}}\) is returned. A tag has the format \(\sigma ^{(i)}=(y_{0}^{(i)}, Y_{1}^{(i)}, \hat{Y}_{2}^{(i)}) \in \mathbb {Z}_{p} \times \mathbb {G} \times \mathbb {G}_T\) for \(i=1, 2\) (indicating the two input tags), whose components correspond, respectively, to the coefficients of \((x^{0},x^{1},x^{2})\) in a polynomial. If \(\hat{Y}_{2}^{(i)}\) is not defined, it is assumed to have value \(1 \in \mathbb {G}_{T}\). Next we define the specific operations for the different types of gates:

    • Addition. The output tag \(\sigma '=(y_{0},Y_{1},\hat{Y}_{2})\) is calculated as:

      $$\begin{aligned} y_{0} = y_{0}^{(1)}+y_{0}^{(2)},\quad Y_{1} = Y_{1}^{(1)}\cdot Y_{1}^{(2)}, \quad \hat{Y}_{2} =\hat{Y}_{2}^{(1)}\cdot \hat{Y}_{2}^{(2)}. \end{aligned}$$
    • Multiplication. The output tag \(\sigma '=(y_{0},Y_{1},\hat{Y}_{2})\) is calculated as:

      $$\begin{aligned} y_{0} = y_{0}^{(1)}\cdot y_{0}^{(2)}, \quad Y_{1} = (Y_{1}^{(1)})^{y_{0}^{(2)}}\cdot (Y_{1}^{(2)})^{y_{0}^{(1)}}, \quad \hat{Y}_{2} =e(Y_{1}^{(1)},Y_{1}^{(2)}). \end{aligned}$$

      Since the circuit f has maximum degree 2, the input tags to a multiplication gate can only have maximum degree 1 each.

    • Multiplication with constant. The two inputs are one tag \(\sigma \) and one constant \(c \in \mathbb {Z}_p\). The output tag \(\sigma '=(y_{0},Y_{1},\hat{Y}_{2})\) is calculated as: \(y_{0} = c\cdot y_{0}^{(1)}\), \(Y_{1} = (Y_{1}^{(1)})^{c}\), \(\hat{Y}_{2} = (\hat{Y}_{2}^{(1)})^{c}\).

  • \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\({\mathbf {Ver}}\) \(({vk}, {sk}, \mathcal {P}_{\varDelta }, ct_{\mathsf {HD}}, \sigma _{\mathsf {HD}}, \ell _d, \kappa )\): The verification algorithm first computes \(b\leftarrow \mathsf {BFR} .{\mathbf {Ver}}({vk}, \mathcal {P}_\varDelta , ct_{\mathsf {HD}}, \sigma _{\mathsf {HD}}, \ell _d)\) to verify the correctness of the outsourced computation. In our setting, \(\mathcal {P}_{\varDelta }\) is a multi-labeled program [4] for the arithmetic circuit calculating the encrypted \({\mathsf {HD}}\). The verification essentially performs two integrity checks (the resulting decision logic is sketched right after this list):

    $$\begin{aligned} ct_{\mathsf {HD}}\equiv & {} y_{0} + \ell _d \pmod {d} \end{aligned}$$
    (3)
    $$\begin{aligned} W= & {} e(g,g)^{y_{0}}\cdot e(Y_{1},g)^{\theta }\cdot (\hat{Y}_{2})^{\theta ^{2}} \end{aligned}$$
    (4)

    If the verification output is \(b=0\) the algorithm returns

    $$(acc_{\mathsf {VC}}, acc_{{\mathsf {HD}}}) = (0, 0).$$

    Otherwise, if \(b=1\), it proceeds with the biometric authentication check: it computes \(w\leftarrow \mathsf {SHE}.{\mathsf {Dec}}({sk}, ct_{\mathsf {HD}})\) to retrieve the actual Hamming distance \(w={{\mathsf {HD}}}(A, B)\). If \(\mathsf{HD}(A,B)\le \kappa \), where \(\kappa \) is the accuracy threshold of the \(\mathsf {BAP}\), the algorithm returns

    $$(acc_{\mathsf {VC}}, acc_{{\mathsf {HD}}}) = (1, 1).$$

    If \(\mathsf{HD}(A,B)> \kappa \), the output is

    $$(acc_{\mathsf {VC}}, acc_{{\mathsf {HD}}}) = (1, 0).$$
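For clarity, the decision logic of \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\({\mathbf {Ver}}\) described above can be summarised by the following sketch; the boolean b abstracts the two integrity checks of Eqs. (3) and (4), and she_dec is a hypothetical placeholder for \(\mathsf {SHE}\) decryption.

```python
# Decision logic of BFR+SHE.Ver (sketch only; `b` abstracts the checks of
# Eqs. (3)-(4), `she_dec` is a placeholder for SHE decryption under sk).
def bfr_she_ver_decision(b, sk, ct_hd, kappa, she_dec):
    if not b:                       # outsourced computation failed verification
        return (0, 0)               # (acc_VC, acc_HD)
    hd = she_dec(sk, ct_hd)         # actual Hamming distance HD(A, B)
    return (1, 1) if hd <= kappa else (1, 0)
```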

3.3 Correctness Analysis

In our \(\mathsf {BFR} \)+\(\mathsf {SHE}\) scheme the outsourced function is the Hamming distance \({\mathsf {HD}}\), which can be represented by a deterministic bivariate quadratic polynomial. Thus, we can avoid the gate-by-gate induction proofs used in [4] and demonstrate correctness directly. In what follows, we adopt the notation of [4] and prove the correctness of \(\mathsf {BFR} \)+\(\mathsf {SHE}\) by walking through the arithmetic circuit of \({\mathsf {HD}}\) step by step.

Figure 3 depicts the arithmetic circuit for calculating the encrypted Hamming distance. A and B denote the encrypted stored and fresh biometric templates, respectively. \(C_{1}\) and \(C_{2}\) are the constants defined in the \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\({\mathbf {Comp}}\) algorithm. The letter D denotes the constant \(-2\) in the function; since \(-2\) is not in the valid range \(\mathbb {Z}_{p}\) required by the original \(\mathsf {BFR} \) scheme, we use the intermediate transformation \(D = (d - 2) \mod p\). All of \(A, B, C_{1}\) and \(C_{2}\) are in \(\mathbb {Z}_d\). Finally, the \(\sigma \)s are the tags of the form \(\sigma ^{(i)}=(y_{0}^{(i)}, Y_{1}^{(i)}, \hat{Y}_{2}^{(i)}) \in \mathbb {Z}_{p} \times \mathbb {G} \times \mathbb {G}_T\) output after each gate operation, and the Rs are values in either \(\mathbb {G}\) or \(\mathbb {G}_{T}\), which are used for the homomorphic evaluation over bilinear groups (i.e., \({\mathbf {GroupEval}}\) in [4]).

Fig. 3. The arithmetic circuit for calculating the encrypted Hamming distance.

We let \(\alpha \) and \(\beta \) denote \(\mathsf {vEnc}_{1}(A)\) and \(\mathsf {vEnc}_{2}(B)\), respectively; each of them has a tag: \(\sigma _{\alpha } = (y_{0}^{(A)}, Y_{1}^{(A)}, 1)\) and \(\sigma _{\beta } = (y_{0}^{(B)}, Y_{1}^{(B)}, 1)\). These two tags are generated by the \(\mathsf {BFR} \).\({\mathbf {Auth}}\) algorithm, which specifies that \(y_{0}^{(A)}= \alpha \) and \(Y_{1}^{(A)}= (R_{\alpha }\cdot g^{-\alpha })^{1/\theta }\). Similarly, \(y_{0}^{(B)}= \beta \) and \(Y_{1}^{(B)}= (R_{\beta }\cdot g^{-\beta })^{1/\theta }\). To verify the correctness of our \(\mathsf {BFR} \)+\(\mathsf {SHE}\) scheme, we need to check that the two equations specified in the \(\mathsf {BFR} \).\({\mathbf {Ver}}\) algorithm are satisfied whenever the computation is performed correctly. To this end, let \(\sigma _{{\mathsf {HD}}} = (y_0^{({\mathsf {HD}})}, Y_1^{({\mathsf {HD}})}, \hat{Y}_{2}^{({\mathsf {HD}})})\) be the final tag (which corresponds to \(\sigma _{6} \) in the arithmetic circuit depicted in Fig. 3).

The first step is to derive the tags for the intermediate computations and, eventually, the final tag. Running the \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\({\mathbf {Eval}}\) algorithm homomorphically through the circuit yields the tags \(\sigma _{1}, \ldots , \sigma _{6}\) (for details see Appendix A). We thus derive \(\sigma _{{\mathsf {HD}}}\) (equal to \(\sigma _6\)):

$$\begin{aligned} \sigma _{{\mathsf {HD}}} = (y_0^{({\mathsf {HD}})}, Y_1^{({\mathsf {HD}})}, \hat{Y}_{2}^{({\mathsf {HD}})}) = \bigl (&\; C_2 \cdot y_{0}^{(A)}+ C_1 \cdot y_{0}^{(B)}+ D \cdot y_{0}^{(A)}\cdot y_{0}^{(B)}, \\ &\; (Y_{1}^{(A)})^{y_{0}^{(B)}\cdot D + C_{2}} \cdot (Y_{1}^{(B)})^{y_{0}^{(A)}\cdot D + C_{1}}, \ e(Y_{1}^{(A)},Y_{1}^{(B)})^{D}\bigr ). \end{aligned}$$

Now we show that the two verification equations hold. First, we prove Eq. (3), i.e., \(ct_{\mathsf {HD}}\equiv y_{0}^{({\mathsf {HD}})} + \ell _d \pmod d\). This follows directly from Eq. (2): \(ct_{\mathsf {HD}}\equiv (\ell \mod d)\cdot (p \mod d) + y_{0}^{({\mathsf {HD}})} \pmod d\), and since we defined \(\ell _d = (\ell \mod d)\cdot (p \mod d)\), Eq. (3) holds. Secondly, we need to prove that Eq. (4) holds, i.e., that \(W= e(R_{\alpha }^{C_2} \cdot R_{\beta }^{C_1}, g) \cdot e(R_{\alpha }, R_{\beta })^{D}\) equals \(e(g,g)^{y_{0}^{({\mathsf {HD}})}}\cdot e(Y_{1}^{({\mathsf {HD}})},g)^{\theta }\cdot (\hat{Y}_{2}^{({\mathsf {HD}})})^{\theta ^{2}}\). To this end, we run \({\mathbf {GroupEval}}(f,R_{\alpha }, R_{\beta })\) and execute the bilinear gate operations. Recall that \(R_{\alpha }\) and \(R_{\beta }\) correspond to \(R_{A}\) and \(R_B\) in the notation used in the construction, and are the outputs of the pseudo-random function \(F_K\) used in the \(\mathsf {BFR} \).\({\mathbf {Ver}}\) algorithm. Denote by \(R_6\) the final result of running \({\mathbf {GroupEval}}\) over the circuit of \({\mathsf {HD}}\). It holds that:

$$\begin{aligned} R_6 = {\mathbf {GroupEval}}(f,R_{\alpha }, R_{\beta })= & {} e(R_{\alpha }^{C_2} \cdot R_{\beta }^{C_1}, g) \cdot e(R_{\alpha }, R_{\beta })^{D} \\= & {} e(g, g)^{y_{0}^{({\mathsf {HD}})}}\cdot e(Y_{1}^{({\mathsf {HD}})}, g)^{\theta }\cdot (\hat{Y}_{2}^{({\mathsf {HD}})})^{\theta ^{2}} \end{aligned}$$

Expanding the last expression yields the desired result (see Appendix A for details). Thus, we have proved the correctness of the \(\mathsf {BFR} \)+\(\mathsf {SHE}\) scheme.
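As an additional sanity check of the walk-through above, the following toy simulation replays the tag algebra of \(\mathsf {BFR} \) on the \({\mathsf {HD}}\) circuit. Group elements of \(\mathbb {G}\) and \(\mathbb {G}_T\) are represented by their discrete logarithms, something only a toy can do (computing \(\phi _g^{-1}\) is assumed hard), so the pairing becomes a product of exponents. All parameters (the prime, \(\theta \), the PRF outputs and the constants) are illustrative stand-ins, not values of the actual scheme.

```python
# Transparent (discrete-log) toy check of the BFR tag algebra for the circuit
# f(a, b) = C2*a + C1*b + D*a*b over Z_p.  Elements of G and G_T are stored as
# their exponents, so the pairing e(g^x, g^y) becomes x*y mod p.
import random

p = 2**61 - 1                          # a prime standing in for the group order

def inv(x):
    return pow(x, p - 2, p)            # modular inverse in Z_p

def auth(x, r, theta):
    """Tag for value x with PRF output R = g^r: (y0, log_g Y1, log_gT Y2hat)."""
    return (x % p, (r - x) * inv(theta) % p, 0)   # Y1 = (R * g^{-x})^{1/theta}

def gate_add(s, t):
    return tuple((a + b) % p for a, b in zip(s, t))

def gate_cmul(c, s):
    return tuple(c * a % p for a in s)

def gate_mul(s, t):                    # both inputs of degree <= 1
    y0 = s[0] * t[0] % p
    y1 = (s[1] * t[0] + t[1] * s[0]) % p   # (Y1^(1))^{y0^(2)} * (Y1^(2))^{y0^(1)}
    y2 = s[1] * t[1] % p                   # e(Y1^(1), Y1^(2))
    return (y0, y1, y2)

theta, r_A, r_B = (random.randrange(1, p) for _ in range(3))
C1, C2, D = 5, p - 3, p - 2            # illustrative constants in Z_p
a, b = 1234567, 7654321                # stand-ins for the encrypted templates mod p

sA, sB = auth(a, r_A, theta), auth(b, r_B, theta)
sHD = gate_add(gate_add(gate_cmul(C2, sA), gate_cmul(C1, sB)),
               gate_cmul(D, gate_mul(sA, sB)))

# Verification, Eq. (4): GroupEval(f, R_A, R_B) in exponent form is f(r_A, r_B).
W = (C2 * r_A + C1 * r_B + D * r_A * r_B) % p
lhs = (sHD[0] + sHD[1] * theta + sHD[2] * theta * theta) % p
assert sHD[0] == (C2 * a + C1 * b + D * a * b) % p    # y0 component is f(a, b)
assert lhs == W
print("toy BFR tag algebra is consistent")
```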

4 Improving the Yasuda et al. Protocol

In this section, we describe a modified version of the Yasuda et al. [23] protocol that is secure against the recently identified hill-climbing attack performed by a malicious computation server \(\mathcal {CS}\). The protocol involves four distributed parties: a client \(\mathcal {C}\) (holding the keys \({pk}\), \({ek}\) and \({vk}\)), a computation server \(\mathcal {CS}\) (holding the keys \({pk}\) and \({ek}\)), a database \(\mathcal {DB}\), and an authentication server \(\mathcal {AS}\) (holding the keys \({pk}\), \({sk}\) and \({vk}\)). In the proposed protocol, we preserve the assumption that \(\mathcal {AS}\) is a trusted party and furthermore assume that the client \(\mathcal {C}\) and the database \(\mathcal {DB}\) are also trusted. \(\mathcal {C}\) is responsible for managing the secret verification key \({vk}\) of the verifiable computation scheme, and \(\mathcal {DB}\) stores the encrypted reference biometric templates together with the identities of the corresponding clients. The computation server \(\mathcal {CS}\), however, can be malicious and cheat with flawed computations. We now describe the three main phases of our proposed improvement of Yasuda et al.’s privacy-preserving biometric authentication protocol:

  • Setup Phase: In this phase the authentication server \(\mathcal {AS}\) runs \(\mathsf {SHE}\).\(\mathbf{{KeyGen}}\) \((\lambda )\) to generate the public key \({pk}\) and the secret key \({sk}\) of the somewhat homomorphic encryption (\(\mathsf {SHE}\)) scheme. \(\mathcal {AS}\) keeps \({sk}\) and distributes \({pk}\) to both the client \(\mathcal {C}\) and the computation server \(\mathcal {CS}\).

  • Enrollment Phase: Upon client registration, the client \(\mathcal {C}\) runs \(\mathsf {BFR} \).\(\mathbf{{KeyGen}}\) \((\lambda )\) to generate the public evaluation key \({ek}\) and the secret verification key \({vk}\). \(\mathcal {C}\) distributes \({ek}\) to \(\mathcal {CS}\) and \({vk}\) to \(\mathcal {AS}\). The client \(\mathcal {C}\) generates a 2048-bit feature vector A from the client’s biometric data and runs \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\({\mathbf {Enc}}\) \(({pk}, A, 1)\) to obtain the ciphertext \(ct_A\). \(\mathcal {C}\) authenticates \(ct_A\) by running \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\({\mathbf {Auth}}\) \(({vk}, L_A, ct_A)\), which outputs a tag \(\sigma _{A}\). Then \(\mathcal {C}\) sends the three-tuple \((\mathsf{ID}, ct_A, \sigma _{A})\) to the database. This three-tuple serves as the reference biometric template for the specific client with identity ID.

  • Authentication Phase: The client provides fresh biometric data as a feature vector \(B\in \{0,1\}^{2048}\). \(\mathcal {C}\) runs \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\({\mathbf {Enc}}\) \(({pk}, B, 2)\) to obtain the ciphertext \(ct_B\) and authenticates it by running \(\sigma _{B} \leftarrow {\mathsf {BFR} }+{\mathsf {SHE}}.{\mathbf {Auth}}({vk},L_B, ct_B)\). \(\mathcal {C}\) sends \((\mathsf{ID}, ct_B, \sigma _{B})\) to \(\mathcal {CS}\), which extracts the tuple \((\mathsf{ID}, ct_A, \sigma _{A})\) corresponding to the client to be authenticated (using the \({\mathsf{ID}}\) as the search key). \(\mathcal {CS}\) calculates the encrypted Hamming distance \((ct_{{\mathsf {HD}}}, \ell _d)\leftarrow {\mathsf {BFR} }+{\mathsf {SHE}}.{\mathbf {Comp}}({pk}, ct_A, ct_B)\) and generates the corresponding tag \(\sigma _{{\mathsf {HD}}} \leftarrow {\mathsf {BFR} }+{\mathsf {SHE}}.{\mathbf {Eval}}({ek}, {pk}, \sigma _{A}, \sigma _{B})\). Then, \(\mathcal {CS}\) sends \(({\mathsf{ID}}, ct_{\mathsf {HD}}, \ell _d, \sigma _{{\mathsf {HD}}})\) to the authentication server. \(\mathcal {AS}\) runs \((acc_{\mathsf {VC}}, acc_{{\mathsf {HD}}})\leftarrow {\mathsf {BFR} }+{\mathsf {SHE}}.{\mathbf {Ver}}({vk},{sk}, \mathcal {P}_{\varDelta }, ct_{\mathsf {HD}}, \sigma _{{\mathsf {HD}}}, \ell _d, \kappa )\), where \(\kappa \) is the desired accuracy level of the \(\mathsf {BAP}\). If either \(acc_{\mathsf {VC}}\) or \(acc_{{\mathsf {HD}}}\) is 0, \(\mathcal {AS}\) outputs \({\mathsf {no}}\) and authentication is rejected. Otherwise, \((acc_{\mathsf {VC}}, acc_{{\mathsf {HD}}}) = (1, 1)\) and \(\mathcal {AS}\) outputs \({\mathsf {yes}}\), i.e., authentication succeeds.

4.1 Security Analysis of the Proposed \(\mathsf {BAP}\)

Our primary aim is to demonstrate that our privacy-preserving biometric authentication protocol is not vulnerable to Abidin et al.’s template recovery attack [1]. To this end, we sketch the attack setting in Fig. 4.

Fig. 4. Setting for Abidin et al.’s template recovery attack in [1].

We recall that for this attack, the adversary is a malicious computational server who tries to recover the stored reference biometric template of a client with identity ID. All the other parties of the \(\mathsf {BAP}\) are trusted and behave honestly.

In what follows, we show that the malicious \(\mathcal {CS}\) cannot forge a tag \(\sigma _{{\mathsf {HD}}'}\) that passes the verification checks performed in \(\mathsf {BFR} \). The adversary may be able to satisfy the first check (Eq. (3)), since it only tests that the returned computation result is consistent with the arithmetic circuit used to generate the corresponding tag (\(\sigma _{{\mathsf {HD}}}\) or the forged \(\sigma _{{\mathsf {HD}}'}\)). In [1], \(\mathcal {CS}\) succeeds by computing the arithmetic circuit for the inner product instead of \({\mathsf {HD}}\). In this case, however, it is not possible for the malicious computational server to fool the second integrity check (Eq. (4)). In detail, \(\mathcal {AS}\) calculates \(W = {\mathbf {GroupEval}}(f, R_\alpha , R_\beta )\), and since \(\mathcal {AS}\) is honest, \(f={\mathsf {HD}}\) is the arithmetic circuit for the Hamming distance. If \(\mathcal {CS}\) returns incorrect results, the second verification equation fails to hold with overwhelming probability; thus the attack is mitigated.

Other Threats. In what follows, we consider attack scenarios in which one of the participating entities in the \(\mathsf {BAP}\) is malicious.

Malicious Client. \(\mathcal {C}\) is responsible for capturing the reference and the fresh templates, as well as for performing the encryption. If the client is malicious, the ability to encrypt chosen templates under \({pk}\), together with the knowledge of the identity ID, enables \(\mathcal {C}\) to mount a center search attack and recover the stored template A, as explained in [17]. Unfortunately, Pagnin et al. [17] show that this class of attacks cannot be detected using verifiable computation techniques, since the attacker does not cheat with the computation.

A further concern with the modified Yasuda et al. protocol is the key generation for \(\mathsf {BFR} \). In the protocol, we let the client \(\mathcal {C}\) generate the secret verification key \({vk}\), the evaluation key \({ek}\) and the authentication tags, because we assume \(\mathcal {C}\) is a trusted party. If \(\mathcal {C}\) turns malicious, it could give a fake \({vk}\) to the authentication server \(\mathcal {AS}\) and mount the inner-product template recovery attack by simulating \(\mathcal {CS}\). Since the adversary (\(\mathcal {C}\)) controls \({vk}\), the computation verification step becomes meaningless.

Malicious Computation Server. The main motivation for integrating \(\mathsf {VC}\) into \(\mathsf {BAP}\)s is precisely to prevent \(\mathcal {CS}\) from behaving dishonestly. Unlike the client \(\mathcal {C}\), \(\mathcal {CS}\) only has access to the encrypted templates \(\mathsf {vEnc}_{1}(A)\) and \(\mathsf {vEnc}_{2}(B)\) and to the user pseudonyms, and it has no access to the secret verification key \({vk}\) of the \(\mathsf {BFR} \) scheme. Since we have already analysed how the template recovery attack conducted by \(\mathcal {CS}\) is countered, we keep the discussion short here.

In contrast to the original protocol, \(\mathcal {CS}\) needs to calculate the extra value \(\ell _d\) to solve the range issue introduced by integrating \(\mathsf {BFR} \). However, \(\ell _d\) is computed at the ciphertext level and is not involved in the second verification equation. Thus, learning \(\ell _d\) does not provide any significant advantage in recovering the templates.

Malicious Authentication Server. A malicious \(\mathcal {AS}\) completely breaks the privacy of the \(\mathsf {BAP}\), since it controls the secret key \({sk}\) of the \(\mathsf {SHE}\) scheme. If \(\mathcal {AS}\) successfully eavesdrops and obtains the ciphertext \(\mathsf {vEnc}_{1}(A)\) or \(\mathsf {vEnc}_{2}(B)\), it can recover the plaintext biometric templates.

4.2 Efficiency Analysis

The original \(\mathsf {BFR} \) scheme in [4] provides alternative algorithms that improve the efficiency of the verifier. Although our instantiation does not use these algorithms, the current definition of the multi-labels in \(\mathsf {BFR} \)+\(\mathsf {SHE}\) is extensible. Given that the function to be computed is \(f={\mathsf {HD}}\), which has a very simple description as an arithmetic circuit, running the \(\mathsf {BFR} \)+\(\mathsf {SHE}\).\({\mathbf {Ver}}\) algorithm requires O(|f|) computational time. In addition, if the amortized closed-form efficiency functionality is adopted, the verification runs in time O(1). Nonetheless, the arithmetic circuit of \({\mathsf {HD}}\) has only 6 gates, so the saved computation overhead would be relatively small.

5 A Flawed Approach

Privacy and integrity are the two main properties desired in a privacy-preserving \(\mathsf {BAP}\). There are two possible ways to combine \(\mathsf {VC}\) and homomorphic encryption (\(\mathsf {HE}\)): running \(\mathsf {VC}\) on top of \(\mathsf {HE}\), and, vice versa, running \(\mathsf {HE}\) on top of \(\mathsf {VC}\).

In the first case, the data (the biometric template) is first encrypted and then encoded to generate an authentication proof. Our construction of \(\mathsf {BFR} \)+\(\mathsf {SHE}\) follows this principle. In this approach, \(\mathcal {AS}\) can judge whether the output of \(\mathcal {CS}\) comes from a correct computation of \({\mathsf {HD}}\) before decrypting the ciphertext.

In the second case, the data is first encoded for verifiable computation and then the encoded data is encrypted. This combination is not straightforward and is prone to security breaches.

In this section, we demonstrate an attack strategy that may lead to information leakage when the homomorphic encryption scheme (henceforth \(\mathsf {FHE}\)) is applied on top of a \(\mathsf {VC}\) scheme. For the sake of generality, we define \({\mathsf {FHE}}=(KeyGen_{FHE}, Enc, Dec, Eval)\). For the verifiable computation scheme, we adopt the notation of Gennaro et al. [11] and define \({\mathsf {VC}} =(KeyGen_{VC}, ProbGen, Compute, Ver)\), where \(KeyGen_{VC}\) outputs the private key \(sk_{vc}\) and the public key \(pk_{vc}\); ProbGen takes \(sk_{vc}\) and the plaintext x as input and outputs the encoded value \(\sigma _x\); Compute takes the circuit f and the encrypted encoded input, and outputs the encoded version of the output; Ver verifies the correctness of the computation given the secret key \(sk_{vc}\) and the encoded output \(\sigma _y\). The main idea of the flawed approach is to first encode the data in plaintext and then encrypt the encoded data. It can be represented by \(\hat{x} = Enc (ProbGen(x))\), where \(\hat{x}\) is what the malicious server gets access to.

5.1 The Attack

We now describe a successful attack strategy that breaks the privacy-preservation property of a \(\mathsf {BAP}\) built with the second composition method: \(\mathsf {HE}\) on top of \(\mathsf {VC}\) (or \(\mathsf {HE}\) after \(\mathsf {VC}\)). The adversary’s goal is to recover \(\sigma _y\), i.e., the encoded value of the computation result. The attack proceeds in phases. We show that the privacy-preservation property is broken if \(q \ge n\), where q is the number of queries in the learning phase and n is the length of the encoded result \(\sigma _y\). For simplicity, we collect the two entities \(\mathcal {C}\) and \(\mathcal {AS}\) into a single trusted party \(\mathcal {V}\) that we refer to as the Verifier.

The attack is depicted in Fig. 5, a more detailed description follows.

Fig. 5. An attack strategy against the naïve integration of \(\mathsf {FHE}\) on top of \(\mathsf {VC}\).

  • Setup phase: \(\mathcal {V}\) generates the keys of the protocol and gives \(pk_{vc}, pk_{FHE}\) to \(\mathcal {A}\).

  • Challenge phase: \(\mathcal {V}\) generates the encoded version \(\sigma _x\) of the input x. \(\mathcal {V}\) encrypts the encoded input and sends \(Enc(\sigma _x)\) to \(\mathcal {A}\).

  • Learning phase: \(\mathcal {A}\) uses \(\mathcal {V}\) as a decryption oracle by sending verification queries, which can be further divided into the following steps:

    1. \(\mathcal {A}\) performs honest computations and derives \(Enc(\sigma _y)\).

    2. \(\mathcal {A}\) constructs a vector \(A'\in \mathbb {Z}_2^n\) of the same length as \(\sigma _{y}\). For the \(i\)-th trial, \(A'\) is set to \((1_1, \ldots ,1_{i-1},0_i,1_{i+1}, \ldots ,1_{n})\), i.e., the \(i\)-th bit is set to 0 and all the other bits to 1.

    3. \(\mathcal {A}\) encrypts the tailored vector \(A'\) and reuses the honest result \(Enc(\sigma _{y})\) from step 1. Then she computes \(Enc(\sigma _{y'}) = Enc(A') \diamond Enc(\sigma _{y})\), where \(\diamond \) denotes the (homomorphically evaluated) Hadamard product for binary vectors, and sends the result to \(\mathcal {V}\) for verification.

    4. \(\mathcal {V}\) decrypts \(Enc(\sigma _{y'})\). Thanks to the homomorphic property of \(\mathsf {FHE}\), \(\mathcal {V}\) obtains \(\sigma _{y'} = A' \diamond \sigma _{y}\). \(\mathcal {V}\) checks the computation based on the encoded result \(\sigma _{y'}\) and returns accept if \(Ver(sk_{vc}, \sigma _{y'}) = 1\) and reject otherwise.

    5. The vector \(A'\) acts as a "mask": it copies all the bit values of \(\sigma _{y}\) into \(\sigma _{y'}\) except for the \(i\)-th bit, which is always set to zero. Consequently, if the output of the verification is accept, \(\mathcal {A}\) learns that \(\sigma _{y'} = \sigma _{y}\), which reveals that the \(i\)-th bit of \(\sigma _{y}\) equals 0. Similarly, if the output of the verification is reject, \(\mathcal {A}\) learns that the \(i\)-th bit of \(\sigma _y\) is 1. In both cases, one bit of \(\sigma _{y}\) is leaked.

  • Output phase: After \(q \ge n\) verification queries, where n equals the length of \(\sigma _y\), \(\mathcal {A}\) outputs \(\sigma _{y'} \).

It is trivial to check that \(\sigma _{y'} = \sigma _{y}\), and thus \(Ver(sk_{vc},\sigma _{y'} ) = Ver(sk_{vc},\sigma _{y} ) = 1\): the attacker’s goal is achieved.
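The learning phase can be illustrated with a toy simulation in which encryption is omitted and the verifier is reduced to an oracle that accepts exactly the honest encoded output; this captures precisely the single accept/reject bit that \(\mathcal {A}\) observes per query. All names and the length n are illustrative.

```python
# Toy simulation of the bit-recovery attack of Sect. 5.1.  Ver is modelled as
# an oracle that accepts iff the submitted encoded output equals sigma_y.
import random

n = 32
sigma_y = [random.randint(0, 1) for _ in range(n)]      # honest encoded output

def ver_oracle(sigma_prime):                            # the verifier V
    return sigma_prime == sigma_y

recovered = []
for i in range(n):
    a_mask = [1] * n
    a_mask[i] = 0                                       # zero out the i-th bit
    sigma_trial = [x * y for x, y in zip(a_mask, sigma_y)]   # Hadamard product
    # accept => masking bit i changed nothing => bit i of sigma_y is 0
    # reject => masking bit i changed sigma_y => bit i of sigma_y is 1
    recovered.append(0 if ver_oracle(sigma_trial) else 1)

assert recovered == sigma_y
print("recovered sigma_y with", n, "verification queries")
```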

The attack demonstrates that the order in which a \(\mathsf {VC}\) scheme and a (F)HE scheme are combined is crucial: in the flawed composition, the verifier must decrypt the ciphertext before it can determine whether it is the result of the correct outsourced computation. Adopting such a composition in a \(\mathsf {BAP}\) would make \(\mathcal {AS}\) a decryption oracle. Leaked information on the Hamming distance may be exploited in further attacks that can lead to the full recovery of the biometric templates, as recently shown in [17]. Formally speaking, the \(\mathsf {HE}\)-on-top-of-\(\mathsf {VC}\) composition is not secure against chosen-ciphertext attacks (CCA).

6 Conclusions

Biometric authentication protocols have gained considerable popularity for access control services. Preserving the privacy of the biometric templates is highly critical due to their irrevocable nature. Yasuda et al. proposed a biometric authentication protocol [23] based on a \(\mathsf {SHE}\) scheme. However, a hill-climbing attack [1] has been presented against this protocol, in which a malicious internal computation server \(\mathcal {CS}\) performs erroneous computations that lead to the disclosure of the reference biometric template. We counter this attack by constructing a new scheme, named \(\mathsf {BFR} \)+\(\mathsf {SHE}\), which adds a verifiable computation layer to the \(\mathsf {SHE}\) scheme. We then describe a modified version of the Yasuda et al. protocol that utilises our \(\mathsf {BFR} \)+\(\mathsf {SHE}\) scheme, and demonstrate that the improved \(\mathsf {BAP}\) provides stronger privacy guarantees. Although employing \(\mathsf {VC}\) to mitigate hill-climbing attacks seems a rather straightforward step, we demonstrate that not all combinations of a \(\mathsf {VC}\) scheme with a \(\mathsf {HE}\) one are secure, and show how a naïve combination leads to a drastic leakage of private information in \(\mathsf {BAP}\)s.