
1 Introduction

Cloud computing is a cost-effective computing paradigm for convenient, on-demand data access to a shared pool of configurable computing resources such as networks, servers, storage, applications and services [2]. Broadly, there are three types of consumers in cloud computing – the cloud server as a consumer, the merchant (data owner) as a consumer, and the customer as a consumer. The cloud server facilitates storage and services: the merchant stores the application data, and all eligible customers of the merchant get on-demand services from the cloud infrastructure. The data owner hires the cloud infrastructure for storing application data in the cloud storage. While resource outsourcing provides significant advantages to data owners as well as to service consumers, there are some important concerns such as security, privacy, ownership and trust that have been discussed substantially over the past decade [3,4,5,6]. For example, a company can delegate its health monitoring systems to the cloud, where a patient can directly communicate with the cloud. However, upon receiving the patient’s request, the cloud could generate a fabricated report with malicious intent. Therefore, there is a possibility that the cloud server can manipulate the data without the data owner’s knowledge. To avoid such scenarios, the data owner may prefer to store data in the cloud server in a controlled manner so that the cloud server cannot manipulate the data while consumers get services from it. In recent times, several mHealth services have been proposed [4, 7,8,9,10]. MediNet [7] discussed a mobile healthcare system that can personalize the self-care process for patients with both diabetes and cardiovascular disease. MediNet uses a reasoning engine to make recommendations to a patient based on current and previous readings from monitoring devices connected to the patient and on information that is known about the patient.
HealthKiosk [8] proposed a family-based healthcare system that considers contextual information and alerting mechanisms for continuous monitoring of health conditions. An important entity in the HealthKiosk design is the sensor proxy, which acts as a bridge between the raw data sensed from the sensing device and the kiosk controller, and also serves as a data processing unit. In [9], a taxonomy of the strategies and types of health interventions implemented with mobile phones has been discussed. Lin et al. [4] proposed a cloud-assisted privacy-preserving mobile health monitoring system to protect the privacy of users. Their scheme uses the key-private proxy re-encryption technique, by which most of the users’ computation is offloaded to the cloud server. A basic model for a mobile healthcare system is depicted in Fig. 1.

Fig. 1. A basic model for mobile healthcare system

In 2015, Guo et al. [1] proposed a scheme for verifiable privacy-preserving monitoring for cloud-assisted health systems. In this paper, we show that the scheme [1] suffers from major security weaknesses; in particular, it does not provide privacy-preserving services, which is the main claim of the scheme. We provide a mitigation for the weaknesses by modifying the scheme. The improved scheme retains the security and privacy claims of [1] without increasing any overhead.

The remainder of the paper is organized as follows. Section 2 reviews the Guo et al.’s scheme. Section 3 shows the security weaknesses of the scheme. Section 4 provides the proposed improvements. We conclude the paper in Sect. 5.

2 Guo et al.’s Scheme

Guo et al. [1] proposed a scheme, which appeared in INFOCOM 2015, that claims verifiable privacy-preserving service in healthcare systems. The scheme has two main objectives: (i) privacy-preserving identity verification, and (ii) verifiable PHR computation. The former provides secure identity verification on the cloud without revealing the identity of the user, while the latter guarantees the correctness of the generated PHR. The scheme consists of the following four entities.

  • Trust Authority (TA): TA issues and distributes secret and public parameters to the other entities of the scheme.

  • Cloud Service Provider (CSP): CSP verifies user identities and performs health record computation using the monitoring program \(f({\varvec{x}})\) provided by the company.

  • Company: Company provides health record computation to users with the help of CSP.

  • Users: Users are the consumers of the health services/records.

The scheme works as follows. A user receives a private certificate \(\sigma \) from TA. After receiving \(\sigma \), the user asks for a blind signature \(\psi \) on \(\sigma \) from the company. Thereafter, the user is a registered entity for the monitoring program \(f({\varvec{x}})\) and the blind signature \(\psi \) is issued for the user. Here, \(f({\varvec{x}})\) is a confidential polynomial function and \({\varvec{x}}\) is the user’s data generated by the user as \({\varvec{x}}\) = (\(x_{1},x_{2},x_{3},\cdots ,x_{N}\)), \(x_{i}\in Z_{n}^*\). To access the health records, the user encrypts the vector and then sends the encrypted vector with \(\psi \) to the CSP. The user computes \({\varvec{c}}\) = E(\({\varvec{m}}\)), where \({\varvec{m}}\) is the monitored raw data and E(\(\cdot \)) is a secure encryption scheme. The user then generates proofs on \(\sigma \) which are used for authentication. Once the CSP publicly verifies the given \(\psi \), it computes \(f({\varvec{x}})\) on the given \({\varvec{c}}\). The CSP then computes the monitoring function and returns the result \(f(E({\varvec{m}}))\) and a signature \(\delta \) to the user. The user now decrypts using his secret key and checks the correctness of \(f(E({\varvec{m}}))\) and \(\delta \) based on the monitored data \({\varvec{m}}\). The detailed construction of the scheme works with the following phases.

2.1 System Setup

TA sets up the system by choosing the security parameters and the corresponding public parameters.

  1. General Setup: TA chooses a security parameter \(\xi \) and generates public parameters param = (\(n, G, G_1, e\)), where \(n = pq\) is the order of group G, p and q are large primes, and e is a bilinear pairing map.

  2. Partially Blind Signature Setup: TA issues the domain public parameter (g, \(g^{s}\)) \(\in G^{2}\), where s is a master secret key. TA selects two hash functions \(H : \{0,1\}^* \rightarrow G\) and \( H_{0} : \{0,1\}^* \rightarrow Z_{n}^*\). TA generates a signing key pair (pk, sk) for the company, where \(pk = H(id_{c}) \in G\) and \(sk = H( id_{c})^s\).

  3. Monitoring System Setup: TA chooses \(g_{0} \in G\) and publishes h, where \(h = g_{0}^p \in _{R} G_{q}\). TA issues \(\sigma \) after assigning the ID \(id_A\) to the user, where \(\sigma = g^{\frac{1}{s+id_A}}\). TA gives the private key \(sk = q\) to the user.
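The certificate issuance above can be sanity-checked numerically. The sketch below instantiates the setup in a small multiplicative subgroup of \(Z_P^*\) as a stand-in for the pairing group G; all concrete values (p, q, s, \(id_A\), the prime P) are toy parameters of our choosing, not the scheme's real ones.

```python
# Toy instantiation of the Monitoring System Setup in a subgroup of Z_P^*
# (a stand-in for the pairing group G); every value here is illustrative.
p, q = 23, 11                          # the scheme's "large" primes, tiny here
n, P = p * q, 4 * p * q + 1            # group order n = pq = 253; P = 1013 is prime
g = next(c for c in range(2, P)        # element of order exactly n in Z_P^*
         if pow(c, n, P) == 1 and pow(c, q, P) != 1 and pow(c, p, P) != 1)
h = pow(g, p, P)                       # h = g0^p lands in the order-q subgroup G_q

s, id_A = 7, 10                        # TA's master secret and a user identity
# certificate sigma = g^{1/(s + id_A)}: the exponent is inverted mod the group order
sigma = pow(g, pow(s + id_A, -1, n), P)
# sanity check: sigma^{s + id_A} recovers g
print(pow(sigma, s + id_A, P) == g)    # True
```

Raising `sigma` to `s + id_A` cancels the inverted exponent because g has order n, which is the design that later lets the pairing-based verification equation check the certificate without revealing it.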

2.2 Privacy-Preserving Identity Verification

This phase is composed of the following four sub-protocols.

  1. Signature Request

    \((\theta ,\phi ) \leftarrow \) Request(\(g, pk, id_{A}, w\)): The user requests parameters from the company for the partially blind signature \(\psi \) on \(\sigma \). Before the request is sent, the user and the company agree on a string \(l \in \lbrace 0,1\rbrace ^n\). Then, the company selects \(t \in _{R} Z_{n}^*\), calculates \(\theta = g^{t}\), \(\phi = H(id_{c})^t\) and sends (\(\theta , \phi \)) to the user.

  2. Partially Blind Signature Generation Process

    \(\epsilon ' \leftarrow \) BlindSign(\(\theta ,g^{s},\phi ,l,\sigma )\): User randomly chooses \(\alpha ,\beta ,\gamma \in _{R} Z_{n}^\star \) and calculates \(\theta ' = \theta ^{\alpha } \cdot \big (g^s\big )^\gamma = g^{\alpha t + \gamma s}\), \(\phi ' = H(id_{c})^{\alpha (\beta + t)} H(l)^{-\gamma }\) and \(u=\alpha ^{-1} H_{0}(\sigma \parallel \phi ')+\beta \), and sends these to the company. Then, the company calculates

    $$\begin{aligned} \epsilon = H(id_{c})^{s(t+u)} H(l)^{t} \end{aligned}$$

    and sends it back to the user, who unblinds \(\epsilon \) by calculating \(\epsilon ' = \epsilon ^{\alpha }\).

  3. Commitment and Proof Generation Process

    \((com_{i},\pi ) \leftarrow \) ProveGen(\( \theta ',\phi ',\epsilon ',\sigma ,l\)). CSP verifies user’s identity by using the blind signature \(\psi = (\theta ',\phi ',\epsilon ',\sigma ,w)\) as follows.

    $$ e(\epsilon ',g)e(X,\sigma ) e(Y, g^{-s})e(H(l)^{-1},\theta ') \mathop {=}\limits ^{?} e(g,g) $$

    where \(X = g^{id_{A}} g^s\) and \(Y = \phi ' \cdot H(id_{c})^{H_{0}(\sigma \parallel \phi ')}\).

    Note that the verification of the above equation requires the identity \(id_A\) of the user along with the blind signature \(\psi \). Therefore, if the user directly sends the blind signature \(\psi \) to the CSP, then it reveals the correlation of \(id_A\) and the partially blind signature \(\epsilon '\).

    Now the user generates proofs for the signature and the certificate. To generate the commitments, the user chooses \(\mu _{i},\nu _{i} \in _{R} Z_{n}, i=1,2,3,4\).

    \(\texttt {com}_{1} = \epsilon ' h^{\mu _{1}} = H(id_{c})^{\alpha s(t+u)} H(l)^{\alpha t} h^{\mu _{1}}, \texttt {com}_{1}' = gh^{\nu _{1}}\)

    \(\texttt {com}_{2} = g^{id_{A}+s}h^{\mu _{2}}, \texttt {com}_{2}' = \sigma h^{\nu _{2}} = g^{\frac{1}{s+id_{A}}} h^{\nu _{2}}\)

    \(\texttt {com}_{3} =\phi '\cdot H(id_{c})^{H_{0}(\sigma \parallel \phi ')} h^{\mu _{3}}, \texttt {com}_{3}' = g^{-s} h^{\nu _{3}}\)

    \(\texttt {com}_{4} = H(l)^{-1} h^{\mu _4}, \texttt {com}_{4}' = \theta ' h^{\nu _{4}} = g^{\alpha t + \gamma s } h^{\nu _{4}}\)

    After calculating the commitment set, the user builds the proof

    $$\pi = \prod ^{4}_{i=1} (\texttt {com}_i h^{-\mu _i})^{\nu _i} (\texttt {com}_{i}')^{\mu _i}$$

    and then sends \((\lbrace \texttt {com}_{i},\texttt {com}_{i}'\rbrace _{i=1}^{4},\pi )\) to the CSP for verification.

  4. Identity Verification Process

    (0,1) \(\leftarrow \) Verify\((\lbrace \texttt {com}_{i},\texttt {com}_{i}'\rbrace _{i=1}^{4},\pi ,h,e(g,g))\). CSP checks the following equality and returns 1 for successful verification, 0 for unsuccessful verification.

    $$\begin{aligned} \prod \limits _{i=1}^4 e(\texttt {com}_i,\texttt {com}_{i}') = e(g,g)e(\pi ,h) \end{aligned}$$

2.3 Verifiable PHR Computation

After identity verification, user uploads PHR by the following steps.

  1. Monitoring Program Delegation: The company delegates the monitoring program to the cloud, and the user’s PHR is then computed by the cloud. The company sends the coefficient vector \({\varvec{a}}=(a_{0}, a_{1}, \cdots , a_{k})\) and the string l to the cloud, where l is used to identify the corresponding program.

  2. PHR Encryption: Let the PHR m be an entry from the data vector \({\varvec{m}} = (m_{1},m_{2},\cdots ,\) \(m_{N}),m_{i} \in Z_{n}\). The user chooses a set of random numbers \({\varvec{r}} = (r_{0},r_{1},\cdots ,r_{k}),r_{i} \in Z_{n}\). Then, the user sends \({\varvec{r}}\) to the company. After getting \({\varvec{r}}\), the company calculates \(\varvec{r'} = {\varvec{r}}\cdot {\varvec{a}} = (a_{0}r_{0},a_{1}r_{1},\cdots ,a_{k}r_{k})\). Then, the company sends \(h^{\bar{r}}=h^{\sum _{i=0}^{k}r'_{i}}\) and \(g^{\bar{r}}\) to the user, and \(\bar{r}\) to the CSP, where \(\bar{r}=\sum _{i=0}^{k}a_ir_i\). The user picks \(d \in _{R} Z_{n}\) and generates the ciphertext of the PHR as

    $$ c = \left( gh^{d\cdot r_{0}},g^m h^{d\cdot r_{1}},g^{m^{2}}h^{d\cdot r_{2}},\cdots ,g^{m^{k}}h^{d\cdot r_{k}}\right) $$

    where each entry is computed as \(c_{i} = g^{m^{i}}\cdot \left( {h^{r_{i}}}\right) ^{d}\). Now, user sends \(\lbrace c,\lambda ,H(l) \rbrace \) to the CSP, where \(\lambda = \frac{1}{(x-m)\cdot d }\) mod n. User also requests the company to compute a public parameter \(g^{f(x)}\), which later the company sends to the CSP.

  3. Verifiable PHR Computation: The PHR is computed as follows. \(\upsilon = \prod \limits _{i=0}^k \Big ( g^{m^{i}}\cdot \left( {h^{r_{i}}}\right) ^{d} \Big )^{-a_{i}} = \prod \limits _{i=0}^k g^{-a_{i}\cdot m^{i}} \cdot h^{-a_{i}r_{i}d} = g^{\sum _{i=0}^{k}-a_{i}\cdot m^{i}} \cdot h^{\sum _{i=0}^{k}-a_{i}r_{i}d}\) \( = g^{-f(m)} \cdot h^{-d\sum _{i=0}^{k}r'_{i}}\)

    CSP computes \(\lambda ' = \frac{\lambda }{\bar{r}} = \frac{1}{(x-m)\cdot d \cdot \bar{r}} \) and signature \(\delta \) using \(g^{f(x)}\) as, \(\delta = \big ( g^{f(x)} \cdot \upsilon \big )^{\lambda '} = \big ( g^{f(x)-f(m)} \cdot h^{-d\sum _{i=0}^{k}r'_{i}} \big )^{\frac{1}{(x-m)\cdot d \cdot \bar{r}}}\) \( = g^{\frac{f(x)-f(m)}{(x-m)} \cdot \frac{1}{d\bar{r}}} \cdot h^{-{\frac{1}{(x-m)}}} = \Big ( g^{w(x)} \cdot h^{-\frac{d\bar{r}}{(x-m)}} \Big )^{\frac{1}{d\bar{r}}}\)

    where w(x) is a \((k-1)\)-degree polynomial function. The condition \(w(x)\equiv {\frac{f(x)-f(m)}{(x-m)}}\) holds only if f(m) is the correct value for m. Then, the CSP sends \(\lbrace \upsilon ,\delta \rbrace \) to the user.

  4. PHR Result Decryption and Verification: Using the private key \(sk = q\), the user decrypts \(\upsilon \) as

    $$\begin{aligned} \bigg ( \frac{1}{\upsilon } \bigg )^q = \big ( g^{f(m)} h^{d\bar{r}} \big )^q = \big ( g^q \big )^{f(m)} h^{d\bar{r}q} = \big (g^q\big )^{f(m)} \in G_{p}. \end{aligned}$$

    User can recover f(m) by computing the discrete log of \(\bigg ( \frac{1}{\upsilon } \bigg )^q\) with base \(g^q\). Here, f(m) is bounded by M where M is very small compared to p,q and therefore, it is feasible to compute the discrete log of \(\bigg ( \frac{1}{\upsilon } \bigg )^q\).

    For getting the proof on f(m), the user sends the encrypted pair (x, f(m)) to the company. Then, the company constructs the coefficient vector of w(x) as \({\varvec{w}}\) = \((w_{0},w_{1},\cdots ,w_{k-1})\), computes \(W = g^Z\), where \(Z={\sum _{i=0}^{k-1} w_{i}x^{i}}\), and responds to the user. Now, the user calculates \((g^{\bar{r}})^d=g^{d\bar{r}}\) and \(\eta = (h^{\bar{r}})^{-d/(x-m)}\). Finally, the user verifies the following equation to see whether the CSP has computed the correct result or not.

    $$\begin{aligned} e(W\cdot \eta ,g) \mathop {=}\limits ^{?} e(\theta , g^{d\bar{r}}). \end{aligned}$$
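The encryption, cloud-side computation of \(\upsilon \), and decryption steps of this phase can be traced end-to-end numerically. The following sketch uses a toy subgroup of \(Z_P^*\) in place of the pairing group (all parameter values are our own illustrative choices); it checks that \(\upsilon = g^{-f(m)}h^{-d\bar{r}}\) and that \((1/\upsilon )^q\) reveals f(m) via the small discrete-log search described above.

```python
# Toy run of Sect. 2.3 (encrypt, compute, decrypt) in a subgroup of Z_P^*
# standing in for the pairing group G; all parameters are illustrative.
p, q = 23, 11                          # the scheme's "large" primes, tiny here
n, P = p * q, 4 * p * q + 1            # group order n = pq; P = 1013 is prime
g = next(c for c in range(2, P)        # element of order exactly n
         if pow(c, n, P) == 1 and pow(c, q, P) != 1 and pow(c, p, P) != 1)
h = pow(g, p, P)                       # h in the order-q subgroup G_q

a = [2, 1, 1]                          # company's coefficients: f(x) = 2 + x + x^2
m, d, r = 3, 4, [3, 5, 2]              # PHR m, user's randomness d, vector r
f_m = sum(ai * m**i for i, ai in enumerate(a))   # = 14, must stay below p
rbar = sum(ai * ri for ai, ri in zip(a, r))

# user's ciphertext entries: c_i = g^{m^i} * (h^{r_i})^d
c = [pow(g, m**i, P) * pow(pow(h, ri, P), d, P) % P for i, ri in enumerate(r)]

# cloud's computation: upsilon = prod c_i^{-a_i} = g^{-f(m)} * h^{-d*rbar}
ups = 1
for ci, ai in zip(c, a):
    ups = ups * pow(ci, -ai, P) % P
assert ups == pow(g, -f_m, P) * pow(h, -d * rbar, P) % P

# decryption with sk = q: h^q = 1, so (1/upsilon)^q = (g^q)^{f(m)}
lhs = pow(pow(ups, -1, P), q, P)
gq = pow(g, q, P)
recovered = next(j for j in range(p) if pow(gq, j, P) == lhs)
print(recovered)                       # 14
```

The final search works because \(g^q\) generates the order-p subgroup and f(m) is assumed much smaller than p, exactly the feasibility condition stated above for the discrete log.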

3 Security Weaknesses in Guo et al.’s scheme

We show two security flaws in Guo et al.’s scheme [1]. The company’s goal is the confidentiality of the monitoring program f(x). If a malicious user obtains f(x), then he can use it for free and can even sell it to someone else. We note that the company delegates the monitoring program f(x) to the CSP, with the assumption that the computation of f(x) on patients’ PHRs can be done by the CSP without losing the confidentiality of the monitoring program f(x). In other words, the monitoring program f(x) should not be known to any party other than the company and the CSP. Furthermore, anyone can pass the identity verification process without even communicating with the TA or the company; therefore, if a malicious user leaks H(l) to a non-user (attacker), then the attacker can use the system with all credentials.

3.1 Insider Attack

The monitoring program is a polynomial of degree k and hence, it can be represented as a \(k+1\) length vector, \({\varvec{a}} = (a_0,a_1,a_2,\dots ,a_k)\), where \(a_i\) is the coefficient of \(x^i\) in the polynomial.

$$\begin{aligned} f(x) = \sum _{i=0}^{k}a_ix^i=a_0 + a_1x + a_2x^2+\dots +a_kx^k. \end{aligned}$$

The company wants to keep this vector \({\varvec{a}}\) secret from everyone except the cloud. Therefore, there are \(k+1\) unknowns in total, and it is easy to find values for these unknowns if we have \(k+1\) linearly independent equations involving the coefficients \(\{a_i\}_{i=0}^k\). An authenticated user (insider) can use the service \(k+1\) times and get the PHR reports \(f(m_i)\), where \(m_i\) is the PHR sent by the user on the \(i^{th}\) use of the service. Using the set \(\{(m_i,f(m_i))\}_{i=0}^k\), the user can create a system of equations in \(k+1\) variables and solve it for the vector \({\varvec{a}}\). More concretely, assume that the user has the set \(\{(m_i,f(m_i))\}_{i=0}^k\). Then for each \(i\in \{0,1,2,\dots ,k\}\), we have

$$ a_0 + a_1m_i + a_2m_i^2+\dots +a_km_i^k = f(m_i) $$

Without loss of generality, we assume that these \((k+1)\) equations are linearly independent (if not, then the user can always use the service until this is true). We can represent this system of equations in terms of matrices as follows.

\(A =\left[ \begin{array}{cccc} 1 &{} m_1 &{} \dots &{} m_1^k \\ 1 &{} m_2 &{} \dots &{} m_2^k \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 1 &{} m_{k+1} &{} \dots &{} m_{k+1}^k \\ \end{array}\right] \) \(X = \left[ \begin{array}{c} a_0\\ a_1\\ \vdots \\ a_k \\ \end{array} \right] \) \(B = \left[ \begin{array}{c} f(m_1)\\ f(m_2)\\ \vdots \\ f(m_{k+1})\\ \end{array} \right] \)

$$\begin{aligned} AX=B \end{aligned}$$

Solution of the above system of equations is given by

$$ X = A^{-1}B. $$

Now, the user can easily solve the above system of equations for the vector \(X=(a_0,a_1,\dots ,a_k)\) and use it to compute \(f(m) = \sum _{i=0}^ka_im^i\) for any PHR m. In the scheme [1], it is assumed that the degree of the polynomial is around 10, which makes this attack even easier. Although this attack does not violate the privacy of other users, it reveals the confidential monitoring program f(x) pertaining to the company that owns it. In this attack, the user obtains f(x) and can thereby compute the result of f(x) without contacting the CSP or the company, which breaks the confidentiality of the monitoring program f(x).
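The attack amounts to polynomial interpolation. The sketch below assumes a hypothetical secret coefficient vector of the company's choosing; the insider queries the service at \(k+1\) distinct points and solves the resulting Vandermonde system with exact rational arithmetic.

```python
from fractions import Fraction

# The company's secret monitoring program f(x); this coefficient vector is a
# hypothetical example, unknown to the insider a priori.
secret_a = [3, 1, 4, 1, 5]            # k = 4, so k+1 = 5 unknown coefficients
def f(x):
    return sum(c * x**i for i, c in enumerate(secret_a))

# Insider queries the service k+1 times with chosen PHRs m_i, recording f(m_i).
k = len(secret_a) - 1
ms = list(range(1, k + 2))            # m_1..m_{k+1}, pairwise distinct
bs = [f(m) for m in ms]               # reports returned by the service

# Solve the Vandermonde system A X = B by Gauss-Jordan elimination over Q.
A = [[Fraction(m) ** j for j in range(k + 1)] + [Fraction(b)]
     for m, b in zip(ms, bs)]
for col in range(k + 1):
    piv = next(r for r in range(col, k + 1) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    A[col] = [v / A[col][col] for v in A[col]]
    for r in range(k + 1):
        if r != col:
            A[r] = [v - A[r][col] * w for v, w in zip(A[r], A[col])]

recovered = [int(row[-1]) for row in A]  # last column now holds the solution
print(recovered)                      # [3, 1, 4, 1, 5]
```

Distinct query points guarantee the Vandermonde matrix is invertible, so for a degree around 10 the insider needs only about a dozen service calls.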

3.2 Outsider Attack

We note that the cloud does not use any extra information other than the commitments sent by the user and the public parameters published by TA. This makes the process vulnerable to unauthenticated identity verification. The attacker can choose commitments as follows.

$$ \texttt {com}_{1} = g,\ \texttt {com}_{1}' = g, \quad \texttt {com}_{2} = \texttt {com}_{2}' = \texttt {com}_{3} = \texttt {com}_{3}' = 1, \quad \texttt {com}_{4} = \pi ,\ \texttt {com}_{4}' = h $$

Since g and h are public parameters, the attacker does not have any trouble in choosing these commitments and \(\pi \) can be any random element of the group G. The attacker sends \(\pi \) and \((\{\texttt {com}_i,\texttt {com}'_i\}_{i=1}^4)\) to the cloud for verification. Upon receiving the commitments from the user, the cloud verifies the equality of the following equation.

$$ \prod _{i=1}^4e(\texttt {com}_i,\texttt {com}'_i) = e(g,g)e(\pi ,h) $$

Proof. We prove the equality of the above equation.

$$ \prod _{i=1}^4 e(\texttt {com}_i,\texttt {com}_{i}') = e(g,g)\cdot e(1,1)\cdot e(1,1)\cdot e(\pi ,h) = e(g,g)e(\pi ,h). $$

Therefore, anyone can pass through the identity verification process. Once the verification is successful, the cloud allows the attacker to use the service. Here, we assume that the attacker already has H(l) and k. The attacker follows the rest of the process in the same way as an authenticated user described in the previous section and gets \((\upsilon ,\delta )\) in response from the cloud, where

$$\begin{aligned} \upsilon = g^{-f(m)}h^{-d\overline{r}} \end{aligned}$$

The \(\upsilon \) contains information about f(m), and the attacker’s aim is to get the PHR report f(m) for the PHR m. Note that the attacker is not an authenticated user; he does not have the secret key q and hence cannot decrypt \(\upsilon \). However, the attacker can find f(m) by brute force because the size of f(m) is small. Since the attacker follows the rest of the process after identity verification, the attacker will have d and \(h^{\overline{r}}\). The attacker computes

$$\begin{aligned} \upsilon ' = \upsilon h^{d\overline{r}} = g^{-f(m)}. \end{aligned}$$

In [1], the authors have considered that the values of m and f(m) are not more than 1000. Therefore, the attacker can simply check whether \(\upsilon '\) is equal to \(g^{-j}\) for every \(j\in \{0,1,2,\dots ,1000\}\). Using at most 1001 iterations, the attacker successfully obtains f(m).
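The brute-force step can be sketched as follows, again using a toy subgroup of \(Z_P^*\) in place of the pairing group (all parameter values are illustrative): the attacker strips the blinding factor with the known d and \(h^{\bar{r}}\), then enumerates the small message space.

```python
# Toy demo of the outsider's brute force (Sect. 3.2) in a subgroup of Z_P^*,
# standing in for the pairing group; all parameters are illustrative only.
p, q = 23, 11
n, P = p * q, 4 * p * q + 1           # P = 1013 is prime with n | P - 1
g = next(c for c in range(2, P)       # element of order exactly n
         if pow(c, n, P) == 1 and pow(c, q, P) != 1 and pow(c, p, P) != 1)
h = pow(g, p, P)                      # h in the order-q subgroup G_q

f_m, d, rbar = 9, 4, 6                # f(m) unknown to attacker; d, h^rbar known
upsilon = pow(g, -f_m, P) * pow(h, -d * rbar, P) % P   # cloud's response
h_rbar = pow(h, rbar, P)

# attacker strips the blinding factor without knowing the secret key q ...
upsilon_prime = upsilon * pow(h_rbar, d, P) % P        # = g^{-f(m)}
# ... then tries every candidate value in the small message space
found = next(j for j in range(1001) if pow(g, -j, P) == upsilon_prime)
print(found)                          # 9
```

No secret material is used at any point: only the public g and h, and the values d and \(h^{\bar{r}}\) that the protocol itself hands to whoever passed identity verification.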

4 Proposed Improvements

4.1 Prevention of Insider Attack

The insider attack is possible because the attacker knows the degree k of the polynomial. We provide a way to keep the polynomial f(x) secure by keeping the degree of the polynomial secret.

Let m be the PHR value for which the user wants to get a report f(m). The user chooses two random numbers \(r_0, d \in Z_n\) and a random prime \(p_1\). The user computes \(m' = m + p_1\) and sends \((r_0,d,m')\) to the company. Fig. 2 reflects the changes suggested for preventing the observed insider attack in Guo et al.’s scheme.

Fig. 2. Prevention of insider attack: changes between Original and Modified schemes

After receiving \((r_0,d,m')\), the company generates k random integers \(r_1,r_2,\dots ,r_k\in Z_n\) using \(r_0\). Company calculates

$$ {\varvec{r'}} ={\varvec{r}}\cdot {\varvec{a}} = (a_0r_0,a_1r_1,\dots ,a_kr_k) $$

and

$$ c =(gh^{dr_0},g^{m'}h^{dr_1},g^{m'^2}h^{dr_2},\dots ,g^{m'^k}h^{dr_k}). $$

Company sends \((h^{\bar{r}},g^{\bar{r}})\) to the user and \((\bar{r},c)\) to the cloud, where \(\bar{r} = \sum _{i=0}^{k}a_ir_i\). The user selects a random point \(x \in Z_n\) and computes \(\lambda = \frac{1}{(x-m')\cdot d}\). The user sends x to the company and \((\lambda ,H(l))\) to the cloud. The company computes \(g^{f(x)}\) and sends it to the cloud. Upon receiving \((\lambda ,H(l))\) from the user and \((\bar{r},c,g^{f(x)})\) from the company, the cloud computes \(\upsilon \) and \(\delta \). Everything remains the same except that c is an encryption of \(m'\) instead of m. After decrypting \(\upsilon \), the user gets \(f(m') = f(m+p_1)\). Since every term of \(f(m+p_1)-f(m)\) is a multiple of \(p_1\), we have \(f(m+p_1)\) mod \(p_1 = f(m)\) whenever \(p_1 > f(m)\). The verification process remains the same. Since the user does not know the degree k, the user cannot retrieve the coefficients of the polynomial f(x).
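The masking identity \(f(m+p_1)\) mod \(p_1 = f(m)\) is easy to check directly; the coefficients, PHR value and prime below are illustrative choices, with the prime taken larger than f(m).

```python
# Sketch of the masking idea in the improved scheme: the user submits
# m' = m + p1 and reduces the returned f(m') modulo p1. Illustrative values.
a = [2, 0, 1, 3]                       # company's secret coefficients, degree k = 3
def f(x):
    return sum(c * x**i for i, c in enumerate(a))

m = 57                                 # the real PHR value
p1 = 1_000_003                         # random prime, chosen larger than f(m)
m_masked = m + p1                      # what the company and the cloud actually see

# f(m + p1) = f(m) + p1*(...) by the binomial theorem, so reducing mod p1
# recovers f(m) exactly, because f(m) < p1
assert f(m_masked) % p1 == f(m)
print(f(m_masked) % p1)                # 558830, i.e. f(57)
```

Note the condition \(p_1 > f(m)\) is what makes the reduction exact; with a bounded message space (around 1000 in [1]) even a modest prime suffices.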

4.2 Prevention of Outsider Attack

We modify the scheme in such a way that only a registered user can use the service to get the PHR report f(m) for a given PHR m. Note that the cloud computes f(m) only after a successful identity verification process. After generation of the blind signature, the company and the cloud agree on a random number \(z \in _{R} Z_{p}^{*}\) and a timestamp \(t_{m}\). Then, the company computes \(g_{1} = g^{H(t_{m}\parallel z)}\) and sends \(g_{1}\) with \(\epsilon \) to the user. Figs. 3 and 4 reflect the changes suggested for preventing the observed outsider attack in Guo et al.’s scheme.

Fig. 3. Prevention of outsider attack: changes shown in Blind signature

After receiving \(\lbrace g_{1},\epsilon \rbrace \), the user computes the commitments. Except for \(\texttt {com}_2\), all other commitments remain the same. We modify \(\texttt {com}_{2}\) as follows:

$$ \texttt {com}_{2} = g_{1}^{id_{A}+s} h^{\mu _{2}} $$

Now, based on this modification, user computes the proof

$$\pi = \prod ^{4}_{i=1} (\texttt {com}_i h^{-\mu _i})^{\nu _i} (\texttt {com}_{i}')^{\mu _i} $$

and sends \((\lbrace \texttt {com}_{i},\texttt {com}_{i}'\rbrace _{i=1}^{4},\pi )\) to the cloud for verification. During the identity verification process, the cloud verifies the equality of following equation and returns 1 for successful verification and 0 for unsuccessful verification.

$$ \prod \limits _{i=1}^4 e(\texttt {com}_i,\texttt {com}_{i}') = e(g_{1},g)e(\pi ,h). $$

Correctness: the equality follows exactly as in the identity verification of the original scheme, with \(e(g,g)\) replaced by \(e(g_{1},g)\), since \(e(\texttt {com}_2,\texttt {com}_{2}')\) now contributes \(e(g_{1}^{id_{A}+s}, g^{\frac{1}{s+id_{A}}}) = e(g_{1},g)\) (up to the h-terms absorbed by \(\pi \)).

Here, the attacker does not have \(g_{1}\), so he cannot pass the identity verification process. Without passing the verification process, the attacker cannot compute f(m) for any PHR m. We note that after the identity verification there is also a need for message authentication (to avoid a user impersonation attack) between the company and the user in the PHR computation phase of the scheme.

Fig. 4. Prevention of outsider attack: changes shown for Identity Verification

Table 1. Comparison of Guo et al.’s scheme and the improved scheme

4.3 Performance Analysis

We compare Guo et al.’s scheme and the proposed improved scheme with respect to their computational, storage and communication cost requirements. In Table 1, k is the degree of the monitoring program f(x), n is a public parameter, and M is the size of the message space. Table 1 provides the computational complexity of both schemes in terms of the number of group multiplications (G), the number of integer multiplications (M) and the number of bilinear pairing computations (E). For exponentiation of a group element, we use the square-and-multiply algorithm to count the number of group multiplications. The improved scheme is comparable with Guo et al.’s scheme in terms of computation and storage costs and provides better efficiency in terms of communication cost.

5 Conclusion

We have discussed a recent work on verifiable privacy-preserving service in healthcare systems [1], which appeared in INFOCOM 2015. We have shown that the scheme does not provide privacy-preserving services and suffers from insider and outsider attacks. We have suggested a mitigation by modifying the scheme. The improved scheme retains the security and privacy claims without increasing any overhead.