1 Introduction

In recent years, demand for safe, comfortable, and convenient modern housing has been increasing. The research and design of smart homes aim to satisfy these aspirations [1, 2]. With the development of cloud computing and the Internet of Things (IoT), smart home applications are continually improving. The IoT-Cloud paradigm enables a new breed of services and has profoundly changed the concept of smart environments, including smart homes, smart buildings, and smart grids.

These new services create meaningful information through cloud-based analysis of IoT data, owing to the virtually unlimited resources of the cloud. On an IoT-Cloud platform, the big data collected from many sensors can be processed to generate useful, and often sensitive, information for end users. In the new business models of the smart home paradigm, security breaches can have a devastating economic and social impact, and cybercrime is increasing in this context [3].

Even when users are under criminal investigation, evidence can be removed from cloud storage by cloud service providers (CSPs). Although the infrastructure provided by a CSP is generally more robust and reliable than that of personal computing devices, users still face internal and external threats to their security and privacy, including hardware failures, software errors, hackers, and malicious insiders. Moreover, a CSP may discard data that are rarely accessed and reuse the same storage space for other customers.

A considerable amount of data is now stored in the cloud. In addition to providing a secure channel within which to exchange data and download and upload files, determining how cloud systems can further improve security warrants investigation. From this perspective, CSPs must design a mechanism for civil and criminal investigations. However, in cloud systems, user data is generally under the physical control of CSPs. When user data in the cloud is deleted, the remaining small fractions of data are difficult to use in court. Therefore, identifying users who maliciously modify or delete data is essential in cloud data auditing.

To achieve such security, several cloud-storage auditing protocols that verify the integrity of data stored in the cloud have been proposed [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]. Ateniese et al. formally defined protocols for provable data possession (PDP) and presented two PDP schemes based on the RSA algorithm in [4]: the server responds with the queried blocks and tags, and the client checks whether the server possesses the file. Shacham and Waters [5] improved the security of the original PDP using the data fragmentation concept. Erway et al. improved the dynamicity of the original PDP schemes in [6]. Ateniese et al. then implemented a model for building public-key homomorphic linear authenticators (HLAs), thereby reducing the communication and computation overhead, in [7,8,9]. Yang and Jia improved security in [10] and implemented an efficient data auditing scheme [11].

In [12, 13], Zhu et al. proposed schemes that support batch auditing for multiple clouds, and in [14] they developed a scheme that minimizes computation and communication costs. Sookhak et al. designed a new data structure to decrease the computation overhead of remote data auditing in [15]. In [16, 17], Yu et al. and Liu et al. proposed key management systems for cloud storage that reduce the key management cost and the risk of private key exposure. In [18], Yang et al. proposed an identity tracing protocol that enables the group manager to trace the identity of members when a dispute occurs. In [19, 20], the authors provide privacy-preserving public auditing mechanisms for shared data. The common properties of these protocols are (1) integrity (auditing of data in the cloud computing framework) and (2) efficiency (verification of data at minimum computational cost).

As described herein, our proposed method provides a mechanism for auditing the following illegal behavior by malicious users or CSP insiders: CSPs and users cannot covertly modify the electronically stored information, even through collusion. Our approach provides more efficient dynamic auditing than the schemes in [11, 15] and addresses some of the security holes in [12]. Specifically, we propose a public data auditing method based on the bilinear arithmetic of elliptic curves. This method checks data possession in cloud storage at a computational cost lower than that of homomorphic cryptosystems. The two key features of our approach are that it verifies the integrity of data even under collusion and that it outperforms other state-of-the-art data auditing methods.

The rest of this paper is organized as follows. Section II describes the system model and defines the proposed protocol. Section III describes the properties of the framework, including dynamic operation. The security characteristics are described in Section IV, and Section V presents the results of performance tests of the proposed protocol. Finally, Section VI concludes the paper.

2 Preliminaries and definitions

This section describes the system model and defines the proposed protocol. Cloud-storage architecture includes three roles: that of the CSP, users, and third-party auditor (TPA). The TPA provides public audit services that enable users to review the integrity of their outsourced data.

2.1 Preliminaries

Bilinear map

Let G1, G2, and GT be multiplicative cyclic groups of prime order p. Let g1 and g2 be generators of G1 and G2, respectively. A bilinear map is a map e : G1 × G2 → GT such that for all u ∈ G1, v ∈ G2, and a, b ∈ Zp, e(u^a, v^b) = e(u, v)^ab.

This bilinearity implies that for any u1, u2 ∈ G1 and v ∈ G2, e(u1 · u2, v) = e(u1, v) · e(u2, v) [19]. Using bilinearity, homomorphic linear file block signatures can be combined into a single value; the homomorphic property of our scheme rests on this mathematical homomorphism.
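For intuition, the bilinear map can be modeled "in the exponent": representing g1^u by the integer u, the pairing sends (u, v) to the GT-exponent u·v mod p, and products of group elements become sums of exponents. The toy sketch below (illustrative parameters, not cryptographically secure) checks both bilinearity and the homomorphic combination property quoted above:

```python
# Toy exponent model of a bilinear map (illustrative only, NOT secure):
# a group element g^u is represented by its exponent u, the group
# operation becomes addition mod p, and e(g1^u, g2^v) = gt^(u*v).
p = 1000003  # toy prime group order

def pair(u: int, v: int) -> int:
    """GT-exponent of e(g1^u, g2^v)."""
    return (u * v) % p

u, v, a, b = 12, 34, 5, 7
# bilinearity: e(u^a, v^b) = e(u, v)^(a*b)
assert pair(a * u % p, b * v % p) == pair(u, v) * a * b % p

u1, u2 = 111, 222
# homomorphic combination: e(u1*u2, v) = e(u1, v)*e(u2, v)
# (products of group elements become sums of exponents)
assert pair((u1 + u2) % p, v) == (pair(u1, v) + pair(u2, v)) % p
```

This exponent model is reused in later sketches to illustrate the verification equations without a real pairing library.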

2.2 Definition of the system model

After owners create data in smart home systems, the CSP stores the data and maintains access for users. The TPA then performs data storage auditing for both the owners and CSP. The proposed protocol is shown in Fig. 1.

Fig. 1. Architecture of the proposed protocol

3 Proposed protocol

We consider users to belong to the same group in a smart home system if they share data with one another. Notably, a group must contain at least two users. The proposed protocol is detailed in the following subsections.

3.1 Construction

The proposed protocol contains six algorithms, namely, Key.Gen, Setup, Acknowledge, Challenge, Proof, and Verify. For cloud user A, Key.Gen generates a random key pair (α, β), where α is the secret key and β is the public key. For a file F, Setup establishes public parameters that are declared to the public through the TPA; the file F is then processed into file block signatures \( T={\left\{{\sigma}_i^{\prime}\right\}}_{1\le i\le n} \) and transferred to the CSP. The setup of the proposed scheme is shown in Table 1.

Table 1 Proposed protocol for the setup phase

The CSP runs Acknowledge to acknowledge receipt of the client data and sends an acknowledgment message to the TPA. The TPA uses Challenge to select some random data blocks Q = {(i, v_i) : i ∈ [1, n]}, where the values of i are distinct, and sends the challenge set Q to the prover. Proof receives the blocks of file F, the set of block signatures T, and the challenge set Q from the auditor as input and returns a proof of possession P for the challenged blocks of F. Verify then checks the auditing results (i.e., whether possession has been proved). Table 2 lists the notations used in the proposed scheme, and the main procedure is detailed in Table 3.
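The Challenge step can be sketched as follows; the function name and the parameter c (number of challenged blocks) are our own illustrative choices, not the paper's API:

```python
import secrets

def gen_challenge(n: int, c: int, p: int) -> dict[int, int]:
    """TPA side (sketch): pick c distinct block indices from [1, n],
    each paired with a fresh random coefficient v_i in Z_p*."""
    indices: set[int] = set()
    while len(indices) < c:
        indices.add(1 + secrets.randbelow(n))       # distinct indices i
    return {i: 1 + secrets.randbelow(p - 1) for i in sorted(indices)}

# e.g., challenge 460 of 125,000 blocks with 160-bit coefficients
Q = gen_challenge(n=125_000, c=460, p=2**159)
assert len(Q) == 460 and all(1 <= i <= 125_000 for i in Q)
```

Fresh randomness in both the index set and the coefficients is what later prevents replay of an old proof.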

Table 2 Notations
Table 3 Proposed protocol for the main procedures
Table 4 Signature table
Table 5 Group with two users
Table 6 Comparison of the dynamic data update operations of remote data auditing protocols

3.2 Dynamic protocol support

This section describes the support for dynamic data operations, including modification, insertion, and deletion. Dynamic data updates are supported through a data structure called the list table (LT), which is maintained by the TPA. The LT prevents the CSP from substituting a previous version of the stored data for the updated one after the verification phase. An LT entry contains two fields: the original index O_i and the version number V_i. The index of each block in the LT is the actual position of the outsourced data block. We introduce this new data structure to decrease the computational overhead. First, we group all n data blocks into k^x groups. Then, when a block is inserted after position i or a specific block i is deleted, the verifier moves fewer than \( \frac{n}{k^x} \) blocks and incurs a computational overhead of \( \frac{n}{k^x} \). For example, if x = 2, the n data blocks are grouped into k^2 groups, as shown in Fig. 2. The procedures for the modification, insertion, and deletion of data are explained as follows:

Fig. 2. Division of n data blocks into k^2 groups
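A minimal sketch of such a grouped list table, under our own naming assumptions (bucket layout, entries as (O_i, V_i) tuples), could look like this; an insert or delete shifts entries only inside one bucket, so at most about n/k^x entries move:

```python
class ListTable:
    """Sketch of an LT grouped into k**x buckets (names are ours).
    Each entry is (O_i, V_i); an insert or delete only shifts entries
    inside one bucket, i.e., fewer than n / k**x moves."""

    def __init__(self, n: int, k: int, x: int = 2):
        size = max(1, n // (k ** x))                  # entries per bucket
        entries = [(i, 1) for i in range(1, n + 1)]   # (O_i, V_i)
        self.buckets = [entries[j:j + size] for j in range(0, n, size)]

    def _locate(self, pos: int):
        """Map a global 0-based position to (bucket, offset)."""
        for b in self.buckets:
            if pos < len(b):
                return b, pos
            pos -= len(b)
        raise IndexError(pos)

    def insert_after(self, pos: int, entry: tuple) -> None:
        b, off = self._locate(pos)
        b.insert(off + 1, entry)      # shifts only within this bucket

    def delete(self, pos: int) -> None:
        b, off = self._locate(pos)
        del b[off]

lt = ListTable(10_000, 10)            # 100 buckets of 100 entries each
lt.insert_after(4_999, (10_001, 1))   # moves < n/k**2 = 100 entries
lt.delete(0)
```

The real LT would also track the updated identifiers W_i; this sketch only demonstrates why the per-update movement cost drops to n/k^x.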

3.2.1 Data modification

The user executes the modification algorithm as follows:

1)

The user modifies the block of the file from m i to \( {m}_i^{\ast } \) and updates \( {V}_i^{\ast }={V}_i+1 \).

2)

    The user generates a new block signature \( {\left({\sigma}_i^{\prime}\right)}^{\ast } \) for the modified data block \( {m}_i^{\ast } \) as follows:

$$ {\left({\sigma}_i^{\prime}\right)}^{\ast }=H\left({W}_i^{\ast}\right)\bullet {\left(\prod \limits_{j=1}^s{u_j}^{m_{i,j}^{\ast }}\right)}^{\alpha^{(l)}\kern0.5em }, $$
(5)

where \( {W}_i^{\ast }=\left( FID\,\Vert\, i\,\Vert\, {O}_i\,\Vert\, {V}_i^{\ast }\,\Vert\, {t}_i^{\ast}\right) \).

3)

The user sends a modification request message to the CSP, which includes \( \left({\left({\sigma}_i^{\prime}\right)}^{\ast },{m}_i^{\ast}\right) \).

4)

Upon receiving the modification request message, the CSP replaces block m i with \( {m}_i^{\ast } \) and updates the block signature to \( {\left({\sigma}_i^{\prime}\right)}^{\ast } \).

5)

    The CSP generates the acknowledgment message \( {S}_i^{\ast } \) using the secret key SK CSP as follows:

$$ {S}_i^{\ast }={\left({W}_i^{\ast}\right)}^{SK_{CSP}}. $$
(6)

Then, the CSP transfers \( {S}_i^{\ast } \) to the TPA. Upon receiving \( {S}_i^{\ast } \), the TPA saves it in the signature table.

6)

The user sends the modification request message \( {W}_i^{\ast }=\left( FID\,\Vert\, i\,\Vert\, {O}_i\,\Vert\, {V}_i^{\ast }\,\Vert\, {t}_i^{\ast}\right) \), together with \( H\left({W}_i^{\ast}\right) \), to the TPA, and the TPA updates the LT. Then, the CSP sends the modification request message \( {W}_i^{\ast }=\left( FID\,\Vert\, i\,\Vert\, {O}_i^{\ast }\,\Vert\, {V}_i^{\ast }\,\Vert\, {t}_i^{\ast}\right) \), together with \( H\left({W}_i^{\ast}\right) \), to the TPA, and the TPA further updates the LT.
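In the toy exponent model introduced earlier (a sketch, not the paper's implementation; the field layout of W and the helper names are our assumptions), steps 1) and 2) of the modification algorithm amount to bumping the version number, rebuilding the block identifier W_i*, and re-signing:

```python
import hashlib

p = (1 << 61) - 1   # toy prime group order (exponent model, not secure)

def H(w: str) -> int:
    """Hash the block identifier W into a group exponent (illustrative)."""
    return int.from_bytes(hashlib.sha256(w.encode()).digest(), "big") % p

def sign_block(alpha: int, u: list, m: list, W: str) -> int:
    """sigma'_i = H(W_i) * (prod_j u_j^{m_ij})^alpha, written additively
    in exponents: H(W) + alpha * sum_j u_j * m_ij (mod p)."""
    return (H(W) + alpha * sum(uj * mj for uj, mj in zip(u, m))) % p

alpha, u = 987654321, [3, 5, 7]              # user key, public u_j (toy)
FID, i, O_i, V_i, t = "file42", 4, 4, 1, "t1"

# step 1): modify the block and bump the version number
m_star, V_star = [11, 22, 33], V_i + 1
# step 2): rebuild W_i* and generate the new block signature
W_star = f"{FID}|{i}|{O_i}|{V_star}|{t}"
sigma_star = sign_block(alpha, u, m_star, W_star)

# signing the same content under the old version yields a different tag,
# so the CSP cannot silently serve the stale block
W_old = f"{FID}|{i}|{O_i}|{V_i}|{t}"
assert sigma_star != sign_block(alpha, u, m_star, W_old)
```

Steps 3) to 6) then exchange this signature and the CSP's acknowledgment, which the TPA records in the LT and signature table.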

3.2.2 Data insertion

The user runs the data insertion algorithm through the following steps:

1)

The user inserts a new data block \( {m}_i^{\ast } \), sets the initial value of the version number \( {V}_i^{\ast } \), and sets the original index \( {O}_i^{\ast }=n+1 \) of the data block \( {m}_i^{\ast } \).

2)

    The user generates a block signature \( {\left({\sigma}_i^{\prime}\right)}^{\ast } \) for the new data block \( {m}_i^{\ast } \); then by (5),

$$ {\left({\sigma}_i^{\prime}\right)}^{\ast }=H\left({W}_i^{\ast}\right)\bullet {\left(\prod \limits_{j=1}^s{u_j}^{m_{i,j}^{\ast }}\right)}^{\alpha^{(l)}\kern0.5em }, $$

where \( {W}_i^{\ast }=\left( FID\,\Vert\, i\,\Vert\, {O}_i^{\ast }\,\Vert\, {V}_i^{\ast }\,\Vert\, {t}_i^{\ast}\right) \).

3)

    The user sends an insert request message to the CSP, which includes (\( {\sigma}_i^{\ast },{m}_i^{\ast },{W}_i^{\ast } \)). Upon receiving the insert request message, the CSP inserts \( {m}_i^{\ast } \) and the block signature \( {\sigma}_i^{\ast } \).

4)

    The CSP generates the acknowledgment message \( {S}_i^{\ast } \) using its secret key SK CSP as follows:

$$ {S}_i^{\ast }={\left({W}_i^{\ast}\right)}^{SK_{CSP}}. $$

The CSP then transfers \( {S}_i^{\ast } \) to the TPA. Upon receiving the acknowledgment message \( {S}_i^{\ast } \), the TPA saves it in the signature table.

5)

The user sends the insert request message \( {W}_i^{\ast }=\left( FID\,\Vert\, i\,\Vert\, {O}_i^{\ast }\,\Vert\, {V}_i^{\ast }\,\Vert\, {t}_i^{\ast}\right) \), \( H\left({W}_i^{\ast}\right) \) to the TPA; the TPA inserts a new row in the LT after the ith block and shifts the subsequent blocks down one position.

6)

The TPA receives the insert request message \( {W}_i^{\ast }=\left( FID\,\Vert\, i\,\Vert\, {O}_i^{\ast }\,\Vert\, {V}_i^{\ast }\,\Vert\, {t}_i^{\ast}\right) \) and constructs a new row in the LT.

3.2.3 Data deletion

The user runs the data deletion algorithm by performing the following steps:

1)

The user deletes a data block m i and keeps the version number \( {V}_i^{\ast }={V}_i \) and the original index \( {O}_i^{\ast }={O}_i \) of the data block unchanged.

2)

The user generates a data block identifier \( {W}_i^{\ast }=\left( FID\,\Vert\, i\,\Vert\, {O}_i^{\ast }\,\Vert\, {V}_i^{\ast }\,\Vert\, {t}_i^{\ast}\right) \) for the data block m i .

3)

    The user sends the delete request message to the CSP, which includes (σ i , m i ). Upon receiving the delete request message, the CSP deletes m i and the block signature σ i .

4)

    The CSP generates the acknowledgment message \( {S}_i^{\ast } \) by using its secret key SK CSP as follows:

$$ {S}_i^{\ast }={\left({W}_i^{\ast}\right)}^{SK_{CSP}}. $$

Then, the CSP transfers \( {S}_i^{\ast } \) to the TPA. Upon receiving the acknowledgment message \( {S}_i^{\ast } \), the TPA saves it in the signature table.

5)

The user sends the delete request message \( {W}_i^{\ast }=\left( FID\,\Vert\, i\,\Vert\, {O}_i^{\ast }\,\Vert\, {V}_i^{\ast }\,\Vert\, {t}_i^{\ast}\right) \), \( H\left({W}_i^{\ast}\right) \) to the TPA, and the TPA saves all of the request messages in the LT.

4 Security analysis

In this section, we first validate the correctness of the proposed protocol. To assess the security of the proposed scheme, we then identify where data may be tampered with or lost and describe how such collusion is prevented.

4.1 Correctness

The correctness of our protocol is illustrated in the following theorem:

Theorem 1

In our proposed protocol, the TPA can determine whether chosen data blocks are correctly stored.

Proof. The verification of (4) in Section III can be rewritten in detail as follows:

$$ e\left(\prod \limits_{\left(i,{v}_i\right)\in Q}{\sigma_i}^{v_i},{g}_2\right)=\psi \bullet e\left(\prod \limits_{\left(i,{v}_i\right)\in Q}H{\left({W}_i\right)}^{v_i},{g}_2\right)\bullet \gamma \bullet \tau . $$

Because σ can be expressed as

$$ \sigma =\left(\prod \limits_{\left(i,{v}_i\right)\in Q}{\rho}^{v_i}\right)\bullet \prod \limits_{\left(i,{v}_i\right)\in Q}{\left\{H\left({W}_i\right)\bullet {\left(\prod \limits_{j=1}^s{u_j}^{m_{i,j}}\right)}^{\alpha^{(l)}}\right\}}^{v_i}=\prod \limits_{\left(i,{v}_i\right)\in Q}\left\{{\rho}^{v_i}\bullet H{\left({W}_i\right)}^{v_i}\bullet {\left(\prod \limits_{j=1}^s{u_j}^{\mu_j^{\prime }}\right)}^{\alpha^{(l)}}\right\}.\kern4.25em $$

Thus, the left-hand side of (4) can be written as

$$ \mathrm{LEFT}=e\left(\prod \limits_{\left(i,{v}_i\right)\in Q}{\sigma_i}^{v_i},{g}_2\right)=e\left(\prod \limits_{\left(i,{v}_i\right)\in Q}\left\{{\rho}^{v_i}\bullet H{\left({W}_i\right)}^{v_i}\bullet {\left(\prod \limits_{j=1}^s{u_j}^{\mu_j^{\prime }}\right)}^{\alpha^{(l)}}\right\},{g}_2\right). $$

The right-hand side of (4) is computed from the prover's responses τ, γ, and ψ and the challenge set Q; using (3), it can be expanded as follows:

$$ {\displaystyle \begin{array}{c}\mathrm{RIGHT}=e\left(\prod \limits_{\left(i,{v}_i\right)\in Q}{\rho}^{v_i},{g}_2\right)\bullet e\left(\prod \limits_{\left(i,{v}_i\right)\in Q}H{\left({W}_i\right)}^{v_i},{g}_2\right)\bullet \prod \limits_{j=1}^se\left({u_j}^{-{r}_j},{\beta}^{(l)}\right)\bullet \prod \limits_{j=1}^se\left({u_j}^{\mu_j},{\beta}^{(l)}\right)\\ {}=e\left(\prod \limits_{\left(i,{v}_i\right)\in Q}{\rho}^{v_i}\bullet H{\left({W}_i\right)}^{v_i},{g}_2\right)\bullet \prod \limits_{j=1}^se\left({u_j}^{\mu_j-{r}_j},{\beta}^{(l)}\right)\kern9.5em \\ {}=e\left(\prod \limits_{\left(i,{v}_i\right)\in Q}{\rho}^{v_i}\bullet H{\left({W}_i\right)}^{v_i},{g}_2\right)\bullet \prod \limits_{j=1}^se\left({u_j}^{\mu_j^{\prime }},{\beta}^{(l)}\right).\kern2.75em \end{array}} $$

Then, we can show that the right- and left-hand sides are equal:

$$ \mathrm{RIGHT}=e\left(\prod \limits_{i\in Q}{\rho}^{v_i}\bullet H{\left({W}_i\right)}^{v_i},{g}_2\right)\bullet \prod \limits_{j=1}^se\left({u_j}^{\mu_j^{\prime }},{g}_2^{\alpha^{(l)}}\right)\kern3.5em =e\left(\prod \limits_{\left(i,{v}_i\right)\in Q}{\rho}^{v_i}\bullet H{\left({W}_i\right)}^{v_i}\bullet {\left\{\prod \limits_{j=1}^s{u_j}^{\mu_j^{\prime }}\right\}}^{\alpha^{(l)}},{g}_2\right). $$

This matches LEFT, which completes the proof.
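The derivation above can be checked numerically in a toy exponent model (pair(a, b) = a·b mod p; products of group elements become sums of exponents; all concrete values are arbitrary stand-ins for the paper's group elements):

```python
import secrets

p = (1 << 61) - 1                      # toy prime; NOT a secure pairing
pair = lambda a, b: (a * b) % p        # e(g1^a, g2^b) -> a*b mod p

s = 3
alpha = secrets.randbelow(p)           # secret key; beta = g2^alpha ~ alpha
u = [secrets.randbelow(p) for _ in range(s)]
Q = {1: secrets.randbelow(p), 4: secrets.randbelow(p)}   # i -> v_i
m = {i: [secrets.randbelow(p) for _ in range(s)] for i in Q}
h = {i: secrets.randbelow(p) for i in Q}                 # stands for H(W_i)
rho = secrets.randbelow(p)
r = [secrets.randbelow(p) for _ in range(s)]

# blinded signatures: sigma_i = rho * H(W_i) * (prod_j u_j^{m_ij})^alpha
sigma = {i: (rho + h[i] + alpha * sum(u[j] * m[i][j] for j in range(s))) % p
         for i in Q}
# prover's blinded combination mu_j = r_j + sum_i v_i * m_ij
mu = [(r[j] + sum(v * m[i][j] for i, v in Q.items())) % p for j in range(s)]

LEFT = pair(sum(v * sigma[i] for i, v in Q.items()) % p, 1)

psi   = pair(sum(Q.values()) * rho % p, 1)       # e(prod rho^{v_i}, g2)
hpart = pair(sum(v * h[i] for i, v in Q.items()) % p, 1)
gamma = sum(pair(-r[j] * u[j] % p, alpha) for j in range(s)) % p
tau   = sum(pair(mu[j] * u[j] % p, alpha) for j in range(s)) % p

assert LEFT == (psi + hpart + gamma + tau) % p   # equation (4) holds
```

The randomness in r_j and ρ cancels exactly as in the algebraic proof, which is why the check passes for any chosen values.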

4.2 Security model

In this section, we discuss how the proposed protocol provides the security properties introduced in Section III. Without protection, the proof of deletion can be bypassed through collusion between a user and the CSP, so that no trace of a deleted file remains. For example, Alice sends Bob some files to satisfy a contract, but the files might be deleted by the CSP. In this scenario, Bob is honest, but Alice is malicious and colludes with the CSP to deceive Bob for financial gain. Later, Alice can present proof of the file's deletion to accuse Bob of evidence tampering.

Therefore, addressing collusion between the user and the CSP is a crucial problem in cloud-storage auditing. In the proposed protocol, the TPA saves a corresponding proof that is signed by the CSP and can thus prove whether a message was modified or deleted. This is similar to monitoring a car in a public parking lot: if the car is damaged, the owner can identify the perpetrator.

Theorem 2

If a user is the original owner of a file F in the proposed smart home system, then that user's block signatures cannot be forged by other users.

Proof. In the proposed protocol, the request messages and data block identifiers are created by the user using the secret key α(l) and the public key β(l). The shared file is divided into a number of blocks, each of which is signed by the user; when a user modifies the shared file, they sign the new block with their secret key. From (1) and Theorem 1, we show that the block signature cannot be forged by other users:

$$ {\sigma}_i^{\prime }=H\left({W}_i\right)\bullet {\left(\prod \limits_{j=1}^s{u_j}^{m_{i,\kern0.5em j}}\right)}^{\alpha^{(l)}\kern0.5em }. $$

Meanwhile, for the CSP, the secret key is SK CSP and the public key is PK CSP ; the acknowledgment message of the CSP, \( {S}_i={\left({W}_i\right)}^{SK_{CSP}\kern0.5em } \), is proved as follows:

$$ {\displaystyle \begin{array}{c}e\left({S}_i,{g}_2\right)=e\left({\left({W}_i\right)}^{SK_{CSP}},{g}_2\right)\\ {}=e\left({W}_i,{g}_2^{SK_{CSP}}\right)\\ {}=e\left({W}_i,{PK}_{CSP}\right).\end{array}} $$

By utilizing the Boneh–Lynn–Shacham (BLS) signature technique [21, 22], other users cannot compute a valid signature without the secret key of the CSP. Thus, users cannot deny possession of files that they created.
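The acknowledgment and its pairing check can be illustrated in the toy exponent model (the concrete values and the hashing of W_i into the group are our assumptions; a real deployment would use a BLS implementation over an elliptic curve):

```python
import hashlib

p = 1000003                          # toy prime group order (NOT secure)
pair = lambda a, b: (a * b) % p      # exponent model of e(., .)

def H(w: str) -> int:
    """Hash the identifier W_i into a group exponent (illustrative)."""
    return int.from_bytes(hashlib.sha256(w.encode()).digest(), "big") % p

sk_csp = 123457                      # SK_CSP (toy value)
pk_csp = sk_csp                      # PK_CSP = g2^{SK_CSP} ~ its exponent

W_i = "file42|4|4|2|t1"
S_i = H(W_i) * sk_csp % p            # S_i = H(W_i)^{SK_CSP}

# TPA check: e(S_i, g2) == e(H(W_i), PK_CSP); g2 has exponent 1
assert pair(S_i, 1) == pair(H(W_i), pk_csp)
# a forger without sk_csp cannot produce a valid acknowledgment
assert pair(H(W_i) * 999 % p, 1) != pair(H(W_i), pk_csp)
```

Because only the CSP knows sk_csp, a stored S_i later proves that the CSP really acknowledged W_i.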

Theorem 3

If a user performs modifications, insertions, or deletions on a file in the proposed smart home system, then the block signatures of these operations cannot be forged.

Proof. In the proposed protocol, the request message and data block identifiers are created by the user by using the secret key α(l) and the public key β(l). From (5) and Theorem 1, we prove that the block signature cannot be forged.

$$ {\left({\sigma}_i^{\prime}\right)}^{\ast }=H\left({W}_i^{\ast}\right)\bullet {\left(\prod \limits_{j=1}^s{u_j}^{m_{i,j}^{\ast }}\right)}^{\alpha^{(l)}}. $$

For the CSP, the secret key is SK CSP and the public key is PK CSP ; the acknowledgment message of the CSP, \( {S}_i^{\ast }={\left({W}_i^{\ast}\right)}^{SK_{CSP}} \), is proved as follows:

$$ {\displaystyle \begin{array}{c}e\left({S}_i^{\ast },{g}_2\right)=e\left({\left({W}_i^{\ast}\right)}^{SK_{CSP}},{g}_2\right)\\ {}=e\left({W}_i^{\ast },{g}_2^{SK_{CSP}}\right)\\ {}=e\left({W}_i^{\ast },{PK}_{CSP}\right).\end{array}} $$

Similar to the proof of Theorem 2, by utilizing the BLS signature technique [21, 22], other users cannot compute a valid signature without the secret key of the CSP. Thus, the CSP and these users cannot deny modifications, deletions, or insertions that they make to files.

However, in the scheme in [11], the CSP and users can deny modifications to files. We take a group with two users as an example (Table 5). In this scenario, Bob is honest, but Alice is malicious and colludes with the CSP to modify a shared file in order to deceive Bob for financial gain. Without proof of data modification held by a TPA, the CSP can collude with Alice and delete file blocks that were originally created or modified by Bob; Alice can then accuse Bob of tampering with evidence. By contrast, with proof of data modification held by a TPA, the CSP cannot repudiate a proof that it has signed. Thus, the proposed method provides a mechanism to audit such illegal behavior by malicious users or CSP insiders.

Theorem 4

Our protocol does not reveal data content to the TPA under the attack described in [12].

Proof. The parameters are the same as those presented in Section III. Let \( {\mu}_j^{\prime }=\sum \limits_{\left(i,{v}_i\right)\in \mathrm{Q}\ }{v}_i\bullet {m}_{i,j} \) and \( {\sigma}^{\prime }=\prod \limits_{i\in Q}{\left({\sigma}_i\right)}^{v_i} \). The CSP chooses random r j and ρ; then,

$$ {\displaystyle \begin{array}{c}{\mu}_j={r}_j+{\mu}_j^{\hbox{'}}={r}_j+\sum \limits_{\left(\mathrm{i},\upnu \mathrm{i}\right)\in \mathrm{Q}}{v}_i\cdot {m}_{i,j}\\ {}{\sigma}_i=\rho \cdot {\sigma}_i^{\hbox{'}}.\end{array}} $$

To prevent the data leakage attack, the CSP must blind both μ j and σ i . The CSP computes \( \mu ={\left\{{\mu}_j\right\}}_{1\le j\le s} \), γ, and τ using R and ρ, and sends them to the TPA as the proof of storage correctness. Therefore, the verifier cannot obtain \( {\left\{{m}_{i,j}\right\}}_{1\le i\le n,1\le j\le s} \) from \( \mu ={\left\{{\mu}_j\right\}}_{1\le j\le s} \); similarly, the verifier cannot obtain the block signature \( {\sigma}_i^{\prime } \). Because of the random masking with r j and ρ, the verifier learns nothing about \( {\mu}_j^{\prime } \) and \( {\sigma}_i^{\prime } \).

To protect the privacy of the audited data, the leakage of personal information in the proof-of-data-possession protocol must be considered. Without background information, the proposed scheme provides security superior to that of the data auditing scheme proposed by Yang and Jia [11].
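The masking step can be sketched as follows (toy values; modular arithmetic stands in for the group exponents, and the concrete block contents are arbitrary):

```python
import secrets

p = (1 << 61) - 1                    # toy prime modulus
Q = {3: 17, 8: 5}                    # challenged blocks: i -> v_i (toy)
m = {3: [9, 2], 8: [4, 7]}           # s = 2 sectors per block (toy)
s = 2

# unblinded combination mu'_j = sum_i v_i * m_ij (would leak information)
mu_prime = [sum(v * m[i][j] for i, v in Q.items()) % p for j in range(s)]

# CSP blinds with fresh random r_j before responding to the TPA
r = [secrets.randbelow(p) for _ in range(s)]
mu = [(r[j] + mu_prime[j]) % p for j in range(s)]

# without r_j, mu is a one-time-pad-style masking of mu'_j; the
# verification equation accounts for r_j via gamma, so correctness
# still holds while the TPA learns nothing about the block contents
assert [(mu[j] - r[j]) % p for j in range(s)] == mu_prime
```

Each uniformly random r_j makes the corresponding μ_j uniformly distributed, so the TPA gains no information about μ'_j.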

Theorem 5

The proposed protocol can resist the replay attack.

Proof. In the proposed protocol, the TPA picks some random data blocks, i.e., a random subset of the index set [1, n], to construct the challenge set Q, and generates a random element \( {v}_i\in {Z}_p^{\ast } \) for each chosen i. Thus, the challenge set Q differs in each challenge-response phase, and the CSP cannot reuse a previous proof to pass verification without retrieving the challenged data blocks and data tags.

Moreover, the value \( H{\left({W}_i\right)}^{v_i} \) is unknown to the CSP in advance, so the adversary cannot forge it. As a result, a malicious CSP cannot launch a replay attack based on previously used values.

5 Performance analysis

We compare the computational cost of the proposed scheme with those of [11, 15]. The experiments were performed on a Linux system with an Intel Core i5-2435M CPU at 2.40 GHz and 2 GB of RAM. We used the pairing-based cryptography (PBC) library version 0.5.12 [23] to simulate our auditing scheme, and the computational costs of dynamic data updates were simulated with MySQL version 5.5.52. We used an MNT d159 elliptic curve [23] with a base field size of 159 bits and an embedding degree of 6. The MNT d159 curve has a 160-bit group order, which means that p is a 160-bit prime.

The file used in the simulations of the proposed and existing auditing schemes had 125,000 data blocks; the total size was less than 1 GB, and the size of each block was 8 kB (Fig. 3). The computational cost of finding the position of each block is negligible because this computation is much faster than the data update itself. Moreover, the positions can easily be precomputed before the data update operations; thus, the computational overhead of finding the positions of data blocks is negligible.

Fig. 3. Comparison of the computational costs of update operations for file sizes from 1 to 5 GB

The mathematical model of the proposed scheme is supported by empirical results for file sizes from 1 to 5 GB, where the number of update requests for insertion or deletion is 50 and the number of groups k is equal to 10 (Fig. 3). To insert a block after position i or delete a specific block i in the method proposed by Yang et al., the computational overhead is n. The computational overhead of the method proposed by Sookhak et al. is \( \frac{n}{k} \). In our method, the maximum computational overhead of the verifier (when x = 2) is only \( \frac{n}{k^2} \).

We compare the computational complexity of our method with that of the method proposed by Sookhak et al. for dynamic data update operations (Table 6). The complexity of our method is O(\( \frac{n}{k^x} \)); when x = 2 and the CSP inserts a block after position i or deletes block i, the verifier moves fewer than \( \frac{n}{k^2} \) blocks and incurs a computational overhead of \( \frac{n}{k^2} \).

In the method proposed by Yang et al., to insert a block after position i or delete block i, the verifier must move n − i blocks in the data structure; the computational overhead of the inserting and deleting operations is therefore O(n). Similarly, in the method proposed by Sookhak et al., the verifier must move \( \frac{n}{k}-i \) blocks for these operations, which incurs a computational overhead of \( \frac{n}{k} \); the complexity of their method is O(\( \frac{n}{k} \)). Thus, our method is faster than that of Sookhak et al.
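The three worst-case overheads can be compared directly with a simple counting sketch (function names are ours, based on the costs stated in the text):

```python
# Worst-case number of moved entries per insert/delete, per the text.
def moves_yang(n: int) -> int:             # flat index table: O(n)
    return n

def moves_sookhak(n: int, k: int) -> int:  # k groups: O(n/k)
    return n // k

def moves_ours(n: int, k: int, x: int = 2) -> int:  # k**x groups: O(n/k**x)
    return n // k**x

n, k = 125_000, 10   # the simulation's block count and group count
assert (moves_ours(n, k), moves_sookhak(n, k), moves_yang(n)) \
       == (1250, 12500, 125000)
```

For the simulated file, each update in our scheme touches 1,250 entries versus 12,500 and 125,000 in the compared schemes.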

6 Conclusions

In this study, we developed an efficient and secure collusion-resistant protocol in smart home systems. The contributions of this paper are summarized as follows:

1)

First, we designed a framework suitable for collusion resistance and formalized the definition of the public auditing scheme for the TPA, which supports the public auditability of remote data.

2)

    Additionally, our scheme is more efficient than other state-of-the-art data auditing methods. In particular, we introduced a new data structure that decreases the computational overhead in dynamic data update operations.

Overall, we proposed an efficient protocol for proving data possession in cloud services applied to smart home systems. The proposed method minimizes the computational cost incurred by the application of bilinear pairings.