
1 Introduction

1.1 Background

Privacy-preserving Verifiable (outsourced) Computation (PVC), characterized by four properties [8], i.e., correctness, security, privacy and efficiency, has attracted many researchers from the cryptography and information security community. Various protocols [2,3,4,5,6, 8, 9, 15, 16] have been proposed for outsourcing computations of both general and specific functions. For these protocols, privacy is a significant property guaranteeing that the information hidden in the data structures related to the input and output of the outsourced computation cannot be revealed to any unauthorised entity who has access to the data. The analysis of privacy is based on the notion of indistinguishability (see [7] for this notion), which implies that the input and output data are semantically hidden from the unauthorised entity.

To construct PVC protocols for outsourcing expensive linear algebraic computations, at ACM-ASIACCS 2010, Atallah and Frikken [2] introduced a new hardness assumption called the Secret Hiding (SH) assumption (see Sect. 2.1). Specifically, the authors presented two concrete versions, i.e., the Weak SH assumption (WSH) and the Strong SH assumption (SSH). The WSH assumption, informally, states that it is hard to distinguish (even knowing the prime \(\textit{p}\)) between the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\) and the distribution \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) that outputs the matrix consisting of \(\lambda +1\) rows of the form \([\varSigma _{\textit{j}=1}^{\lambda }{} \textit{a}_{1, \textit{j}}\cdot \textit{k}_{\textit{r}}^{\textit{j}}\ldots \varSigma _{\textit{j}=1}^{\lambda }{} \textit{a}_{\textit{m}, \textit{j}}\cdot \textit{k}_{\textit{r}}^{\textit{j}}]\) and \(\lambda \) rows uniformly distributed over \(\mathbb {Z}_{\textit{p}}^{\textit{m}}\), where \(\textit{n}=2\cdot \lambda +1\), \(\lambda \in \mathbb {N}_{+}\), \(\textit{m}\in \mathbb {N}_{+}\) (e.g., \(\textit{m}=2\cdot \lambda +1\)), \(\forall \textit{r}\in \{1, \ldots , \lambda +1\}\), \(\textit{k}_{\textit{r}}\) is chosen from \(\mathbb {Z}_{\textit{p}}^{*}\) uniformly at random, and \(\forall \textit{i}\in \{1,\ldots , \textit{m}\}, \textit{j}\in \{1, \ldots , \lambda \}\), \(\textit{a}_{\textit{i}, \textit{j}}\) is chosen from \(\mathbb {Z}_{\textit{p}}\) uniformly at random. 
The SSH assumption is similar to the WSH assumption: it states that the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\) is computationally indistinguishable from the distribution \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) that outputs the matrix consisting of \(\lambda +\textit{e}+1\) rows of the form \([\varSigma _{\textit{j}=1}^{\lambda }{} \textit{a}_{1, \textit{j}}\cdot \textit{k}_{\textit{r}}^{\textit{j}} \ldots \varSigma _{\textit{j}=1}^{\lambda }{} \textit{a}_{\textit{m}, \textit{j}} \cdot \textit{k}_{\textit{r}}^{\textit{j}}]\) and \(\lambda +\textit{e}+1\) rows uniformly distributed over \(\mathbb {Z}_{\textit{p}}^{\textit{m}}\), where \(\textit{n}=2\cdot \lambda +2\cdot \textit{e}+2\) and \(\textit{e}\in \mathbb {N}_{+}\) (e.g., \(\textit{e}=\lambda \)). To validate the plausible hardness of the above assumptions, Atallah and Frikken provided a proof showing that the SH assumption (i.e., the WSH assumption) implies the existence of one-way functions. This means that proving the SH assumption is at least as hard as proving \(\textsc {P}\ne \textsc {NP}\) [2].

Based on the SH assumption, Atallah and Frikken [2] proposed two concrete PVC protocols for efficiently outsourcing the multiplication of large-scale matrices (see Sect. 2.3). These two provably private protocols can be seen as ingenious extensions of Shamir's secret sharing [14], and they are widely regarded as representative work in the PVC community. Atallah and Frikken first introduced a protocol based on the WSH assumption under the two non-colluding servers model (denoted by ). In this warm-up protocol, a client needs to generate \(\lambda \) and \(2\cdot \lambda +1\) pairs of matrices for the two servers, respectively, and the servers perform \(O (\lambda )\) matrix multiplications. Then, the authors developed a protocol based on the SSH assumption under the single server model (denoted by ). In this main protocol, a client must create \(4\cdot \lambda +2\) pairs of matrices for the single server, and this server also performs \(O (\lambda )\) matrix multiplications. Furthermore, the authors provided a method to make the protocol under the single server model satisfy security (i.e., the property related to integrity verification). Other researchers have also taken an interest in the SH assumption; for example, Laud and Pankova [12] tried to construct a PVC protocol for outsourcing solutions of linear programming problems based on the SSH assumption.

1.2 Our Contributions

Atallah and Frikken have given some theoretical consequences related to the SH assumption, but whether the SH problem corresponding to the assumption is actually hard remains a worthwhile research question, particularly since the assumption was proposed for applications in concrete real-world scenarios. In this paper, we present some rigorous analyses targeting the SH problem and the Atallah-Frikken PVC protocols for matrix multiplication, as follows:

  • We present the decisional and search variants of the SH problem in Sect. 2, which are more standard formulations than the originals.

  • We propose an analysis, discussed in Sect. 3, to break the decisional variant of the SH assumption (including the WSH assumption and the SSH assumption) for a wide range of parameters. Our analysis, which focuses on evaluating the rank of a matrix, shows that the decisional-SH problem (including the decisional-WSH problem and the decisional-SSH problem) is not a hard problem, and that the given SH distribution \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) can be distinguished from the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\) with overwhelming probability.

  • We invoke the idea of the analysis for solving the decisional-SH problem to undermine the privacy of and in Sect. 3. Our polynomial-time analyses exploit the distinctions between the rank distributions of two types of given ciphertext matrices (see Theorems 7 and 8 for the two types of ciphertext matrices). The success probabilities of the analyses are close to 1, which shows that neither of those protocols is private against passive eavesdropping (i.e., a ciphertext-only attack \((\textsf {COA})\), see Definition 3) or against a chosen-plaintext attack \((\textsf {CPA}\), see Definition 4).

  • We implement simulation experiments for our theoretical analyses on solving the decisional-SH problem and breaking the privacy of and . The experimental results, presented in Sect. 4, confirm our analyses, demonstrating that the decisional-SH problem is not a hard problem for a wide range of parameters and that and are not semantically private PVC protocols.

1.3 Organization of the Rest of the Paper

The remainder of the paper is organized as follows. Section 2 introduces the decisional and search versions of the SH assumption (including the WSH assumption and the SSH assumption), Atallah and Frikken's theoretical exploration of the SH assumption, the PVC protocols and , and the formal definition of privacy. Section 3 describes the adversary's strategy and the detailed theoretical analyses for solving the decisional-SH problem and breaking the privacy of and . Section 4 gives detailed experimental verification of our theoretical analyses in Sect. 3. The paper is concluded in Sect. 5 with a direction for future research.

Notation: Throughout the paper, we generally do arithmetic modulo \(\textit{p}\) for some prime \(\textit{p}\). We denote by bold lower-case letters vectors over \(\mathbb {Z}_{p }^{n }\) for \(n \ge 2\), and by bold upper-case letters matrices over \(\mathbb {Z}_{p }^{n \times m }\) for \(n \), \(m \ge 2\), where \(\mathbb {Z}_{p }\) is a finite field of size \(p \). We refer to a set of elements from a row or a column of a matrix as a vector. We denote by \(\textit{x}_{\textit{i}, \textit{j}}\) the element in the \(\textit{i}^{th}\) row and \(\textit{j}^{th}\) column of a matrix X. For any integer \(\textit{n}\), we denote the set \(\{1, \ldots , \textit{n}\}\) by \([\textit{n}]\). We denote a security parameter by \(\lambda \in \mathbb {N}_{+}\). We denote the transpose of \(\textit{x}\) by \(\textit{x}^{\textit{T}}\), the rank of a matrix X by rank(X), and the minimum of two values by \(\textsf {min}(\cdot , \cdot )\). We denote the class of polynomial functions in \(\lambda \) by \(\textsf {poly}(\lambda )\), and some unspecified negligible function in \(\lambda \) by \(\textsf {negl}(\lambda )\). We use \(\textit{x}\xleftarrow {\$}\varPsi \) to denote the operation of uniformly sampling an element \(\textit{x}\) from a finite set \(\varPsi \). For a probability distribution \(\chi \), \(\textit{x}\leftarrow \chi \) denotes sampling \(\textit{x}\) according to \(\chi \).

2 Preliminaries

In this section, we recall the SH assumption, and proposed by Atallah and Frikken at ACM-ASIACCS 2010 [2]. We also present the formal definition of privacy for the PVC protocol.

2.1 The SH Assumption

We first describe the probability distribution \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) that results from the following steps, where \(\textit{n}\in \{2\cdot \lambda +1, 2\cdot \lambda +2\cdot \textit{e}+2\}\) and \(\textit{m}=\textsf {poly}(\lambda )\ge 2\), where \(\textit{e}\in \mathbb {N}_{+}\).

  1.

    Choose a uniformly random matrix \(\mathbf A \xleftarrow {\$}\mathbb {Z}_{p }^{m \times \lambda }\), where each element \(\textit{a}_{\textit{i}, \textit{j}}\xleftarrow {\$}\mathbb {Z}_{p }\) for \(\textit{i}\in [\textit{m}]\) and \(\textit{j}\in [\lambda ]\). Choose \(\ell =\lambda +1\) (resp. \(\ell =\lambda +\textit{e}+1)\) distinct values \(\textit{k}_{1}, \ldots , \textit{k}_{\ell }\xleftarrow {\$}\mathbb {Z}_{p }^{*}\).

  2.

    For \(\textit{r} \in [\lambda +1]\) (resp. \(\textit{r}\in [\lambda +\textit{e}+1])\), compute \(\mathbf d _{\textit{r}}=(\mathbf A \cdot \mathbf k _{\textit{r}})^{\textit{T}}\), where \(\mathbf k _{\textit{r}}=[\textit{k}_{\textit{r}} \textit{k}_{\textit{r}}^{2} \ldots \textit{k}_{\textit{r}}^{\lambda }]^{\textit{T}}\). Obtain \(\ell =\lambda +1\) (resp. \(\ell =\lambda +\textit{e}+1)\) row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\ell }\), where \(\mathbf d _{\textit{r}} = [\varSigma _{\textit{j}=1}^{\lambda }{} \textit{a}_{1, \textit{j}}\cdot \textit{k}_{\textit{r}}^{\textit{j}} \ldots \varSigma _{\textit{j}=1}^{\lambda }{} \textit{a}_{\textit{m}, \textit{j}} \cdot \textit{k}_{\textit{r}}^{\textit{j}}]\) for \(\textit{r} \in [\lambda +1]\) (resp. \(\textit{r} \in [\lambda +\textit{e}+1])\).

  3.

    For \(\textit{r}\in [\lambda ]\) (resp. \(\textit{r}\in [\lambda +\textit{e}+1])\), choose \(\mathbf u _{\textit{r}} \xleftarrow {\$}\mathbb {Z}_{p }^{m }\). Obtain \(\tau =\lambda \) (resp. \(\tau =\lambda +\textit{e}+1)\) row vectors \(\mathbf u _{1}, \ldots , \mathbf u _{\tau }\).

  4.

    Combine the \(\ell =\lambda +1\) (resp. \(\ell =\lambda +\textit{e}+1)\) row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\ell }\) with the \(\tau =\lambda \) (resp. \({\tau =\lambda +\textit{e}+1)}\) row vectors \(\mathbf u _{1}, \ldots , \mathbf u _{\tau }\) to generate an \({\textit{n}\times \textit{m}}\) matrix \(\mathbf R \). Choose a random permutation of the set \([\textit{n}]\) to permute the rows of \(\mathbf R \). The permuted matrix is the final matrix.
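The four sampling steps above can be sketched in code. This is our own illustrative Python (not code from [2]); `e=None` selects the WSH case and an integer `e` the SSH case, matching the "(resp. ...)" parameter choices in the text.

```python
import random

def sample_sh_matrix(p, lam, m, e=None):
    """Sample a matrix from the SH distribution chi(p)^{n x m}.

    e=None gives the WSH case (n = 2*lam + 1); an integer e gives
    the SSH case (n = 2*lam + 2*e + 2).
    """
    ell = lam + 1 if e is None else lam + e + 1   # number of special rows d_r
    tau = lam if e is None else lam + e + 1       # number of uniform rows u_r
    # Step 1: uniform A in Z_p^{m x lam}; ell distinct values k_r in Z_p^*.
    A = [[random.randrange(p) for _ in range(lam)] for _ in range(m)]
    ks = random.sample(range(1, p), ell)
    rows = []
    # Step 2: d_r[i] = sum_{j=1..lam} a_{i,j} * k_r^j, i.e. (A . k_r)^T.
    for k in ks:
        rows.append([sum(A[i][j] * pow(k, j + 1, p) for j in range(lam)) % p
                     for i in range(m)])
    # Step 3: tau uniform row vectors over Z_p^m.
    for _ in range(tau):
        rows.append([random.randrange(p) for _ in range(m)])
    # Step 4: randomly permute the n = ell + tau rows.
    random.shuffle(rows)
    return rows
```

For example, `sample_sh_matrix(101, 3, 7)` returns a \(7\times 7\) WSH sample, while `sample_sh_matrix(101, 3, 7, e=2)` returns a \(12\times 7\) SSH sample.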

Then, we present the WSH problem and SSH problem as follows:

Definition 1

(WSH Problem). Let \(\textit{n} = 2\cdot \lambda +1\) and \(\textit{m} = \textsf {poly}(\lambda )\ge 2\). The WSH distribution \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) for a given prime \(\textit{p}\) is the set of the permuted matrices, where each matrix includes \(\lambda +1\) row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\lambda +1}\) and \(\lambda \) row vectors \(\mathbf u _{1}, \ldots , \mathbf u _{\lambda }\).

  • The decisional-WSH problem is: For some fixed prime \(\textit{p}\) and given arbitrarily many samples (i.e.,  a polynomial number of samples) from \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\), to computationally distinguish whether these samples are distributed uniformly or whether they are distributed as \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\).

  • The search-WSH problem is: For some fixed prime \(\textit{p}\) and given \(\textit{n}\) samples from the distribution \(\chi (\textit{p})^{\textit{m}}\) (i.e., a sample from \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\)), to find \(\textit{k}_{1}, \ldots , \textit{k}_{\lambda +1}\) (or \(\mathbf A \)).

Definition 2

(SSH Problem). Let \(\textit{n} = 2\cdot \lambda +2\cdot \textit{e}+2\) and \(\textit{m} = \textsf {poly}(\lambda )\ge 2\), where \(\textit{e} \in \mathbb {N}_{+}\). The SSH distribution \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) for a given prime \(\textit{p}\) is the set of the permuted matrices, where each matrix includes \(\lambda +\textit{e}+1\) row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\lambda +\textit{e}+1}\) and \(\lambda +\textit{e}+1\) row vectors \(\mathbf u _{1}, \ldots , \mathbf u _{\lambda +\textit{e}+1}\).

  • The decisional-SSH problem is: The description of this problem is the same as that of the decisional version in Definition 1.

  • The search-SSH problem is: The description of this problem is the same as that of the search version in Definition 1. The aim is to find \(\textit{k}_{1}, \ldots , \textit{k}_{\lambda +\textit{e}+1}\) (or \(\mathbf A \)).

According to Atallah and Frikken, the WSH assumption means that no polynomial-time adversary can solve the decisional and search WSH problems, and the SSH assumption means that no polynomial-time adversary can solve the decisional and search SSH problems. Specifically, the decisional-WSH assumption and the decisional-SSH assumption state that the distribution \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) is computationally indistinguishable from the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\) for \(\textit{n} \in \{2\cdot \lambda +1, 2\cdot \lambda +2\cdot \textit{e}+2\}\). This implies that, for any polynomial-time adversary \(\mathcal {A}\), we have

$$\begin{aligned} \begin{array}{l} {\textsf {Adv}_{\mathcal {A}, \text {SH}}(\textit{p}, \textit{n}, \textit{m})} \overset{\text {def}}{=} \left| {\textsf {Suc}_{\mathcal {A}, \text {SH}}(\textit{p}, \textit{n}, \textit{m}) - \frac{1}{2}} \right| \le \textsf {negl}(\lambda ) \end{array}, \end{aligned}$$
(1)

where \(\text {SH}\) is either the \(\text {WSH}\) distribution or the \(\text {SSH}\) distribution, \(\textsf {Suc}_{\mathcal {A}, \text {SH}}(\textit{p}, \textit{n}, \textit{m})\) denotes the probability of \(\mathcal {A}\)’s successful guess employing some adversary’s strategy for the distribution of the sample \(\mathbf X \) from \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\), and \(\textsf {Adv}_{\mathcal {A}, \text {SH}}(\textit{p}, \textit{n}, \textit{m})\) denotes the advantage of \(\mathcal {A}\)’s guess for the distribution. Note that, \(\mathcal {A}\)’s guess for the sample \(\mathbf X \in \mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\) can be based on the following experiment:

  1.

    \(\textit{b} \xleftarrow {\$}\{0, 1\}\).

  2.

    If \(\textit{b}=1\) then \(\mathbf X \leftarrow \chi (\textit{p})^{\textit{n}\times \textit{m}}\) else \(\mathbf X \xleftarrow {\$} \mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\).

  3.

    If \(\mathcal {A}(\mathbf X )=\textit{b}\) then \(\mathcal {A}\) wins else \(\mathcal {A}\) loses.
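The three-step experiment above can be run empirically as a sanity check. The following sketch is our own illustration (not part of [2]); it estimates \(\textsf {Suc}_{\mathcal {A}, \text {SH}}\) for any adversary and any sampler passed in. A trivial adversary that ignores \(\mathbf X \) succeeds with probability about 1/2, i.e., has essentially zero advantage.

```python
import random

def estimate_success(adversary, chi_sampler, p, n, m, trials=2000):
    """Empirical Suc_{A,SH}(p, n, m): run the three-step experiment
    `trials` times and return the fraction of correct guesses."""
    wins = 0
    for _ in range(trials):
        b = random.randrange(2)                       # step 1: b <-$ {0, 1}
        if b == 1:
            X = chi_sampler()                         # step 2: X <- chi(p)^{n x m}
        else:                                         #         or X <-$ Z_p^{n x m}
            X = [[random.randrange(p) for _ in range(m)] for _ in range(n)]
        wins += int(adversary(X) == b)                # step 3: A wins iff A(X) = b
    return wins / trials
```

For instance, `estimate_success(lambda X: 1, some_chi_sampler, p, n, m)` returns a value close to 0.5, since an adversary that always outputs 1 wins exactly when \(b=1\).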

2.2 Atallah-Frikken Theorems Related to the SH Assumption

For the above WSH assumption and SSH assumption, Atallah and Frikken established the following associated consequences.

Lemma 1

([2], Lemma 1). Given a set of \(\ell \) special row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\ell }\), where \(\ell < \lambda +1\), this set of row vectors is distributed identically to a set of \(\ell \) uniformly random row vectors from \(\mathbb {Z}_{\textit{p}}^{\textit{m}}\).

Theorem 1

([2], Corollary 2). Consider a \((\lambda +1) \times \textit{m}\) matrix that includes \(\lambda +1\) randomly permuted rows consisting of \(\ell \) special row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\ell }\) and \(\lambda +1-\ell \) uniformly random row vectors from \(\mathbb {Z}_{\textit{p}}^{\textit{m}}\), where \(\ell < \lambda +1\). This type of matrix is distributed identically to the uniformly sampled matrix from \(\mathbb {Z}_{\textit{p}}^{(\lambda +1)\times \textit{m}}\).

Theorem 2

([2], Theorem 6). Consider an \(\textit{n} \times \textit{m}\) matrix sampled from \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\), where \(\textit{n}=2\cdot \lambda +2\cdot \textit{e}+2\). Choose a set of \(\lambda +1\) row vectors from this matrix uniformly at random. The probability that all the \(\lambda +1\) row vectors come from the \(\lambda +\textit{e}+1\) special row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\lambda +\textit{e}+1}\) is negligible in \(\lambda \).
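The probability in Theorem 2 is hypergeometric: choosing \(\lambda +1\) of the \(\textit{n}=2\cdot \lambda +2\cdot \textit{e}+2\) rows uniformly, all of them are special with probability \(\binom{\lambda +\textit{e}+1}{\lambda +1}/\binom{2\cdot \lambda +2\cdot \textit{e}+2}{\lambda +1}\). The following small sketch (our own illustration, not code from [2]) computes this value:

```python
from math import comb

def all_special_probability(lam, e):
    """Pr[all lam+1 uniformly chosen rows of an SSH-distributed matrix
    are special rows] = C(lam+e+1, lam+1) / C(2*lam+2*e+2, lam+1)."""
    n = 2 * lam + 2 * e + 2
    return comb(lam + e + 1, lam + 1) / comb(n, lam + 1)
```

Since each of the \(\lambda +1\) sequential picks succeeds with probability at most 1/2 when \(\textit{e}=\lambda \), the value is bounded by \(2^{-(\lambda +1)}\) and decays exponentially in \(\lambda \), matching the theorem.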

In [2], Atallah and Frikken did not prove the WSH assumption and SSH assumption from first principles, but they gave evidence for the hardness of the WSH problem by proposing the following theorem.

Theorem 3

([2], Theorem 9). Assume that the decisional-WSH assumption holds. Then the function that outputs an \(\textit{n}\times \textit{m}\) matrix by invoking the generation steps of the distribution \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) is a one-way function, where \(\textit{n}=2 \cdot \lambda +1\).

Actually, from Theorem 3, the difficulty of distinguishing between the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\) and the distribution \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) gives a lower bound on the difficulty of finding \(\textit{k}_{1}, \ldots , \textit{k}_{\lambda +1}\) (or \(\mathbf A \)). That is, the hardness of the decisional-WSH problem implies the hardness of the search-WSH problem. We refer to [2] for more details.

2.3 Atallah-Frikken PVC Protocols for Matrix Multiplication

Since the decisional version is more convenient for applications, Atallah and Frikken proposed two PVC protocols and based on the plausible hardness of the decisional-WSH problem and the decisional-SSH problem, respectively. Specifically, these protocols consist of a tuple of Probabilistic Polynomial-Time (PPT) algorithms \(\textsf {PVC}=(\textsf {KeyGen}, \textsf {ProbGen}, \textsf {Compute}, \textsf {ResuGen})\), where \(\textsf {KeyGen}\) is a private-key generation algorithm, \(\textsf {ProbGen}\) a problem generation algorithm that produces ciphertext inputs for an outsourced function, \(\textsf {Compute}\) a function computation algorithm run by the server to produce ciphertext outputs of the outsourced function, and \(\textsf {ResuGen}\) a result generation algorithm that produces the real output. The details of these two protocols are as follows:

The Two-Server Case: Given a security parameter \(\lambda \), a matrix size \(\textit{v}=\textsf {poly}(\lambda )\), a message-space size \(\textit{p}=\textsf {poly}(\lambda )\) and a polynomial degree \(\textit{h}=\lambda \), for two \(\textit{v}\times \textit{v}\) matrices \(\mathbf M _{1}, \mathbf M _{2} \in \mathbb {Z}_{\textit{p}}^{\textit{v}\times \textit{v}}\), a quadruple of PPT algorithms is defined by

  1.

    \(\textsf {AFT.KeyGen}(1^{\lambda })\): Choose a uniformly random matrix \(\mathbf A \xleftarrow {\$} \mathbb {Z}_{p }^{2\cdot v ^{2}\times \textit{h}}\), \(2 \cdot \lambda +1\) distinct values \(\textit{k}_{1}, \ldots , \textit{k}_{2\cdot \lambda +1} \xleftarrow {\$}\mathbb {Z}_{p }^{*}\) and a random permutation \(\theta \) of the set \([2\cdot \lambda +1]\). Output a fresh key \(\textsf {sk}=(\mathbf A , \{\textit{k}_{1}, \ldots , \textit{k}_{2\cdot \lambda +1}\}, \theta )\).

  2.

    \(\textsf {AFT.ProbGen}(\textsf {sk}, \mathbf M _{1}, \mathbf M _{2})\): Run \(\mathbf A \cdot \mathbf k \) to obtain a vector \(\mathbf d \) that consists of \(2\cdot \textit{v}^{2}\) polynomials of degree \(\textit{h}\), where \(\mathbf k =[\textit{k}~\textit{k}^{2} \ldots \textit{k}^{\textit{h}}]^{\textit{T}}\) and \(\textit{k}\) is an indeterminate. Use these degree-\(\textit{h}\) polynomials to mask each element of \(\mathbf M _{1}\) and \(\mathbf M _{2}\), generating two ciphertexts \(\mathbf C _{1}\) and \(\mathbf C _{2}\). Specifically, \(\forall \textit{i}\), \(\textit{j} \in [\textit{v}]\), \(\textit{i}' \in [2\cdot \textit{v}^{2}]\), \(\textit{c}_{\textit{i}, \textit{j}}=\varSigma _{\textit{s}=1}^{\textit{h}} \textit{a}_{\textit{i}', \textit{s}} \cdot \textit{k}^{\textit{s}}+\textit{m}_{\textit{i}, \textit{j}}\). For \(\textit{r} \in [2 \cdot \lambda +1]\), let \(\textit{k}=\textit{k}_{\textit{r}}\) and compute \(\mathbf C _{1}(\textit{k}_{\textit{r}})\) and \(\mathbf C _{2}(\textit{k}_{\textit{r}})\), so that \(\textit{c}_{\textit{i}, \textit{j}} (\textit{k}_{\textit{r}}) = \varSigma _{\textit{s}=1}^{\textit{h}} \textit{a}_{\textit{i}', \textit{s}} \cdot \textit{k}_{\textit{r}}^{\textit{s}} + \textit{m}_{\textit{i}, \textit{j}}\). Choose \(2\cdot \lambda \) uniformly random matrices \(\mathbf B _{1}, \ldots , \mathbf B _{2\cdot \lambda } \xleftarrow {\$} \mathbb {Z}_{p }^{\textit{v}\times \textit{v}}\) and create \(\lambda \) pairs \((\mathbf B _{1}, \mathbf B _{2}), \ldots , (\mathbf B _{2\cdot \lambda -1}, \mathbf B _{2\cdot \lambda })\). The client sends the set of matrix pairs \(\textit{U}^{(1)} = \{(\mathbf C _{1}(\textit{k}_{1}), \mathbf C _{2}(\textit{k}_{1})), \ldots , (\mathbf C _{1}(\textit{k}_{\lambda }), \mathbf C _{2}(\textit{k}_{\lambda }))\}\) to the first server. 
Moreover, the client permutes the \(2\cdot \lambda +1\) matrix pairs of the set \(\textit{U}^{(2)} = \{(\mathbf C _{1}(\textit{k}_{\lambda +1}), \mathbf C _{2}(\textit{k}_{\lambda +1})), \ldots , (\mathbf C _{1}(\textit{k}_{2\cdot \lambda +1}), \,\mathbf C _{2}(\textit{k}_{2\cdot \lambda +1})), (\mathbf B _{1},\, \mathbf B _{2}), \ldots , \,(\mathbf B _{2\cdot \lambda -1}, \mathbf B _{2\cdot \lambda })\}\) using \(\theta \), and sends the permuted set \(\textit{U}^{(2)}\) to the second server.

  3.

    \(\textsf {AFT.Compute}(\textit{U}^{(1)}, \textit{U}^{(2)})\): The products of all matrix pairs in \(\textit{U}^{(1)}\) and \(\textit{U}^{(2)}\) are computed by those two servers and put in two sets \(\textit{Q}^{(1)}\) and \(\textit{Q}^{(2)}\), respectively. These two sets \(\textit{Q}^{(1)}\) and \(\textit{Q}^{(2)}\) are sent back to the client.

  4.

    \(\textsf {AFT.ResuGen}(\textsf {sk}, \textit{Q}^{(1)}, \textit{Q}^{(2)})\): Based on \(\theta \), choose some matrices from \(\textit{Q}^{(1)}\) and \(\textit{Q}^{(2)}\), which correspond to \(\mathbf M _{1}\) and \(\mathbf M _{2}\). Interpolate these matrices to find the real result of \(\mathbf M _{1}\cdot \mathbf M _{2}\).
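To illustrate why interpolation in \(\textsf {AFT.ResuGen}\) recovers \(\mathbf M _{1}\cdot \mathbf M _{2}\), consider a toy sketch with \(\textit{v}=1\) (scalar "matrices"). Each mask \(\varSigma _{\textit{s}=1}^{\textit{h}}{} \textit{a}_{\textit{s}}\cdot \textit{k}^{\textit{s}}\) has no constant term, so the product \(\textit{c}_{1}(\textit{k})\cdot \textit{c}_{2}(\textit{k})\) is a degree-\(2\cdot \lambda \) polynomial whose value at \(\textit{k}=0\) is \(\textit{m}_{1}\cdot \textit{m}_{2}\); \(2\cdot \lambda +1\) evaluations determine it. This is our own simplified illustration of the mechanism (not Atallah and Frikken's code), and it omits the decoy \(\mathbf B \) pairs and the permutation \(\theta \).

```python
import random

def lagrange_at_zero(points, p):
    """Evaluate at x = 0 the unique polynomial of degree len(points)-1
    through the (x, y) pairs, with all arithmetic mod p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % p          # factor (0 - xj)
                den = den * (xi - xj) % p      # factor (xi - xj)
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

p, lam = 10007, 4
m1, m2 = 1234, 5678                               # the two scalar "matrices"
a1 = [random.randrange(p) for _ in range(lam)]    # mask coefficients, s = 1..lam
a2 = [random.randrange(p) for _ in range(lam)]
ks = random.sample(range(1, p), 2 * lam + 1)      # 2*lam+1 distinct points

def mask(a, msg, k):
    # c(k) = sum_{s=1..lam} a_s * k^s + msg, so c(0) = msg
    return (sum(a_s * pow(k, s + 1, p) for s, a_s in enumerate(a)) + msg) % p

# The server only ever sees masked values; it returns their products.
products = [(k, mask(a1, m1, k) * mask(a2, m2, k) % p) for k in ks]
# q(k) = c1(k) * c2(k) has degree 2*lam and q(0) = m1 * m2 (mod p).
recovered = lagrange_at_zero(products, p)
assert recovered == m1 * m2 % p
```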

The Single-Server Case: Given a security parameter \(\lambda \), a matrix size \(\textit{v}=\textsf {poly}(\lambda )\), a message-space size \(\textit{p}=\textsf {poly}(\lambda )\) and a polynomial degree \(\textit{h}=\lambda \), for two \(\textit{v}\times \textit{v}\) matrices \(\mathbf M _{1}, \mathbf M _{2}\in \mathbb {Z}_{\textit{p}}^{\textit{v}\times \textit{v}}\), a quadruple of PPT algorithms is defined as

  1.

    \(\textsf {AFS.KeyGen}(1^{\lambda })\): Choose a uniformly random matrix \(\mathbf A \xleftarrow {\$}\mathbb {Z}_{p }^{2\cdot v ^{2}\times \textit{h}}\), \(2\cdot \lambda +1\) distinct values \(\textit{k}_{1},\ldots , \textit{k}_{2\cdot \lambda +1}\xleftarrow {\$}\mathbb {Z}_{p }^{*}\) and a random permutation \(\theta \) of the set \([4\cdot \lambda +2]\). Output a fresh key \(\textsf {sk}=(\mathbf A , \{\textit{k}_{1}, \ldots , \textit{k}_{2\cdot \lambda +1}\}, \theta )\).

  2.

    \(\textsf {AFS.ProbGen}(\textsf {sk}, \mathbf M _{1}, \mathbf M _{2})\): Run \(\mathbf A \cdot \mathbf k \) to obtain a vector \(\mathbf d \) that consists of \(2\cdot \textit{v}^{2}\) polynomials of degree \(\textit{h}\), where \(\mathbf k =[\textit{k}~\textit{k}^{2}~\ldots ~\textit{k}^{\textit{h}}]^{\textit{T}}\) and \(\textit{k}\) is an indeterminate. Use these degree-\(\textit{h}\) polynomials to mask each element of \(\mathbf M _{1}\) and \(\mathbf M _{2}\), generating two ciphertexts \(\mathbf C _{1}\) and \(\mathbf C _{2}\). Specifically, \(\forall \textit{i}\), \(\textit{j}\in [\textit{v}]\), \(\textit{i}'\in [2\cdot \textit{v}^{2}]\), \(\textit{c}_{\textit{i}, \textit{j}} = \varSigma _{\textit{s}=1}^{\textit{h}}{} \textit{a}_{\textit{i}', \textit{s}}\cdot \textit{k}^{\textit{s}}+\textit{m}_{\textit{i}, \textit{j}}\). For \(\textit{r}\in [2\cdot \lambda +1]\), let \(\textit{k}=\textit{k}_{\textit{r}}\) and compute \(\mathbf C _{1}(\textit{k}_{\textit{r}})\) and \(\mathbf C _{2}(\textit{k}_{\textit{r}})\), where \(\forall \textit{i},\textit{j}\in [\textit{v}]\), \(\textit{i}'\in [2\cdot \textit{v}^{2}]\), \(\textit{c}_{\textit{i}, \textit{j}}(\textit{k}_{\textit{r}}) = \varSigma _{\textit{s}=1}^{\textit{h}}{} \textit{a}_{\textit{i}', \textit{s}}\cdot \textit{k}_{\textit{r}}^{\textit{s}}+\textit{m}_{\textit{i}, \textit{j}}\). Choose \(4\cdot \lambda +2\) uniformly random matrices \(\mathbf B _{1}, \ldots , \mathbf B _{4\cdot \lambda +2}~\xleftarrow {\$}~\mathbb {Z}_{p }^{\textit{v}\times \textit{v}}\) and create \(2\cdot \lambda +1\) pairs \((\mathbf B _{1}, \mathbf B _{2}), \ldots , (\mathbf B _{4\cdot \lambda +1}, \mathbf B _{4\cdot \lambda +2})\). 
The client permutes the \(4\cdot \lambda +2\) matrix pairs of the set \(\textit{U} = \{(\mathbf C _{1}(\textit{k}_{1}), \mathbf C _{2}(\textit{k}_{1})), \ldots , (\mathbf C _{1}(\textit{k}_{2\cdot \lambda +1})\), \(\mathbf C _{2}(\textit{k}_{2\cdot \lambda +1}))\), \((\mathbf B _{1}, \mathbf B _{2}), \ldots , (\mathbf B _{4\cdot \lambda +1}, \mathbf B _{4\cdot \lambda +2})\}\) using \(\theta \), and sends the permuted set \(\textit{U}\) to the server.

  3.

    \(\textsf {AFS.Compute}(\textit{U})\): The products of all matrix pairs in \(\textit{U}\) are computed by the server and put in a set \(\textit{Q}\). The set \(\textit{Q}\) is sent back to the client.

  4.

    \(\textsf {AFS.ResuGen}(\textsf {sk}, \textit{Q})\): Based on \(\theta \), choose some matrices from \(\textit{Q}\), which correspond to \(\mathbf M _{1}\) and \(\mathbf M _{2}\). Interpolate these matrices to find the real result of \(\mathbf M _{1}\cdot \mathbf M _{2}\).

For , Atallah and Frikken introduced a method to verify the result returned from a server that is lazy or malicious. This verification is a probabilistic process that detects a cheating server with non-negligible probability. Since our work focuses on the privacy property of the PVC protocol, we refer to [2] for more details about the verification process.

2.4 Privacy Definition

According to [2], a property of and , stated informally as ciphertext indistinguishability, is that it is infeasible for any passive PPT adversary \(\mathcal {A}\) to computationally distinguish the ciphertexts of two distinct inputs. Here, a ciphertext is a set of matrix pairs (i.e., \(\textit{U}^{(1)}, \textit{U}^{(2)}, \textit{U}\)). This computational problem is linked to the notion of privacy against a passive adversary. Based on different attack models, two formal definitions are given below.

Definition 3

(Privacy Against Passive Eavesdropping). For a PVC protocol \(\textsf {PVC} = (\textsf {KeyGen}, \textsf {ProbGen}, \textsf {Compute}, \textsf {ResuGen})\), the following experiment associated with a \(\textsc {PPT}\) eavesdropping adversary \(\mathcal {A}\) is considered:

Experiment \(\textsf {Exp}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}[\textsf {PVC}, \lambda ]\):

  • \(((\mathbf M _{1(0)}, \mathbf M _{2(0)}), (\mathbf M _{1(1)}, \mathbf M _{2(1)}))\leftarrow \mathcal {A}(1^{\lambda })\);

  • \(\textsf {sk}\leftarrow \textsf {KeyGen}(1^{\lambda })\);

  • \(\textit{b}\xleftarrow {\$}\{0, 1\}\);

  • \(\textit{U}_{\textit{b}}\leftarrow \textsf {ProbGen}(\textsf {sk}, \mathbf M _{1(\textit{b})}, \mathbf M _{2(\textit{b})})\);

  • \(\textit{b}'\leftarrow \mathcal {A}((\mathbf M _{1(0)}, \mathbf M _{2(0)}), (\mathbf M _{1(1)}, \mathbf M _{2(1)}), \textit{U}_{\textit{b}})\);

  • If \(\textit{b}'=\textit{b}\), output 1; else, output 0,

where \(\textit{U}_{\textit{b}}\) is called a challenge ciphertext. The computation of \(\textit{U}_{\textit{b}}\) is done by the performer of the experiment. Then, we define the advantage of \(\mathcal {A}\) in the experiment above as follows:

$$ \begin{array}{l} {\textsf {Adv}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}(\textsf {PVC}, \lambda )} = \left| {\Pr [\textsf {Exp}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}[\textsf {PVC}, \lambda ] = 1] - \frac{1}{2}} \right| \end{array}. $$

\(\textsf {PVC}\) is private if, for any \(\mathcal {A}\), there exists a negligible function \(\textsf {negl}\) such that

$$ \begin{array}{l} {\textsf {Adv}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}(\textsf {PVC}, \lambda )} \le \textsf {negl}(\lambda ) \end{array}. $$

Definition 4

(Privacy Against A Chosen-Plaintext Attack). For a PVC protocol \(\textsf {PVC} = (\textsf {KeyGen}, \textsf {ProbGen}, \textsf {Compute}, \textsf {ResuGen})\), the following experiment associated with a \(\textsc {PPT}\) adversary \(\mathcal {A}\) is considered:

Experiment \(\textsf {Exp}_{\mathcal {A}}^{\text {ind-priv}}[\textsf {PVC}, \lambda ]\):

  • \(((\mathbf M _{1(0)}, \mathbf M _{2(0)}), (\mathbf M _{1(1)}, \mathbf M _{2(1)}))\leftarrow \mathcal {A}^{\textsf {PrivProbGen}(\textsf {KeyGen}(1^{\lambda }), \cdot , \cdot )}(1^{\lambda })\);

  • \(\textsf {sk}\leftarrow \textsf {KeyGen}(1^{\lambda })\);

  • \(\textit{b}\xleftarrow {\$}\{0, 1\}\);

  • \(\textit{U}_{\textit{b}}\leftarrow \textsf {ProbGen}(\textsf {sk}, \mathbf M _{1(\textit{b})}, \mathbf M _{2(\textit{b})})\);

  • \(\textit{b}'\leftarrow \mathcal {A}^{\textsf {PrivProbGen}(\textsf {KeyGen}(1^{\lambda }), \cdot , \cdot )}((\mathbf M _{1(0)}, \mathbf M _{2(0)}), (\mathbf M _{1(1)}, \mathbf M _{2(1)}), \textit{U}_{\textit{b}})\);

  • If \(\textit{b}'=\textit{b}\), output 1; else, output 0,

where the oracle \(\textsf {PrivProbGen}(\textsf {KeyGen}(1^{\lambda }), \mathbf M _{1}, \mathbf M _{2})\) invokes \(\textsf {ProbGen}(\textsf {KeyGen}(1^{\lambda }), \mathbf M _{1}, \mathbf M _{2})\) to obtain a set of matrix pairs \(\textit{U}\) and returns it. The output of \(\textsf {PrivProbGen} (\textsf {KeyGen}(1^{\lambda }), \mathbf M _{1}, \mathbf M _{2})\) is probabilistic. Then, we can define the advantage of \(\mathcal {A}\) in the experiment above as follows:

$$ \begin{array}{l} {\textsf {Adv}_{\mathcal {A}}^{\text {ind-priv}}(\textsf {PVC}, \lambda )} = \left| {\Pr [\textsf {Exp}_{\mathcal {A}}^{\text {ind-priv}}[\textsf {PVC}, \lambda ] = 1] - \frac{1}{2}} \right| \end{array}. $$

\(\textsf {PVC}\) is private if, for any \(\mathcal {A}\), there exists a negligible function \(\textsf {negl}\) such that

$$ \begin{array}{l} {\textsf {Adv}_{\mathcal {A}}^{\text {ind-priv}}(\textsf {PVC}, \lambda )} \le \textsf {negl}(\lambda ) \end{array}. $$

Remark 1

From Katti et al.'s work [11], privacy against passive eavesdropping is related to privacy against a chosen-plaintext attack as follows: if a PVC protocol satisfies privacy based on Definition 4, it must also satisfy privacy based on Definition 3. Conversely, if a PVC protocol does not satisfy privacy based on Definition 3, it also does not satisfy privacy based on Definition 4.

In [2], Atallah and Frikken gave detailed proofs of the privacy of AF-PVC\(_{two}\) and AF-PVC\(_{single}\), captured by the following theorems.

Theorem 4

([2], Theorem 5). Assume that the two servers do not collude and the decisional-WSH assumption holds. Then, AF-PVC\(_{two}\) is private.

Theorem 5

([2], Sect. 4.5.3). Assume that the decisional-SSH assumption holds. Then, AF-PVC\(_{single}\) is private.

3 Breaking the Decisional-SH Assumption

In this section, we first present a rigorous analysis for breaking the decisional-WSH assumption and decisional-SSH assumption. Then, we show how the analysis for solving the decisional-SH problem extends naturally to AF-PVC\(_{two}\) and AF-PVC\(_{single}\), thus demonstrating that neither of them is private.

3.1 Adversary’s Strategy

For the decisional-WSH problem (resp. decisional-SSH problem) in Definition 1 (resp. Definition 2), if a polynomial-time adversary \(\mathcal {A}\) wants to solve the problem with non-negligible advantage, she must employ some unexpected strategy. In general, the adversary’s direct strategy is to efficiently find a set of \(\ell \) special row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\ell }\) and evaluate the distinction between this set and a set of \(\ell \) uniformly random vectors over \(\mathbb {Z}_{\textit{p}}^{\textit{m}}\). As stated in Theorem 1, any set of \(\lambda +1\) row vectors that includes at least one uniformly random vector over \(\mathbb {Z}_{\textit{p}}^{\textit{m}}\) is distributed identically to a set of \(\lambda +1\) uniformly random vectors over \(\mathbb {Z}_{\textit{p}}^{\textit{m}}\). This implies that \(\mathcal {A}\) needs to find at least \(\ell =\lambda +1\) special row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\lambda +1}\). However, Atallah and Frikken argued that \(\mathcal {A}\) is unlikely to find \(\mathbf d _{1}, \ldots , \mathbf d _{\lambda +1}\) with significant probability (e.g., Theorem 2)Footnote 1.

Then, we take a step back and consider the following question: if we sample a matrix from a distribution that is either the WSH distribution (resp. the SSH distribution) or uniformly random, what property of this matrix should we analyze and evaluate? We believe that one important property is the rank of the matrix. This means that the adversary’s strategy can be based on an analysis of the rank of a matrix. From this point of view, we propose an adversary’s strategy that proceeds in two steps.

Strategy Overview: Let \(\mathbf X \) be an \(\textit{n}\times \textit{m}\) matrix that is sampled from a distribution which is either the WSH distribution (resp. the SSH distribution) \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) or the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\).

  1. Compute the rank of \(\mathbf X \), denoted by \(\textsf {rank}(\mathbf X )\).

  2. Check whether \(\textsf {rank}(\mathbf X )\) is below some threshold value \(\varepsilon \le \textsf {min}(\textit{n}, \textit{m})\). If \(\textsf {rank}(\mathbf X )\) is below \(\varepsilon \), conclude that \(\mathbf X \) is sampled from \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\); otherwise, conclude that \(\mathbf X \) is sampled from the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\).

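The two-step strategy above can be sketched as follows; the Gaussian-elimination rank helper and the hard-coded threshold \(\varepsilon = \textsf{min}(n, m)\) are our own illustrative choices.

```python
# Sketch of the rank-based strategy; `matrix_rank_mod_p` is our own helper,
# and the threshold eps = min(n, m) is one illustrative choice.

def matrix_rank_mod_p(X, p):
    """Rank of a matrix over Z_p via Gaussian elimination (p prime)."""
    M = [row[:] for row in X]
    n, m = len(M), len(M[0])
    rank = 0
    for col in range(m):
        # find a pivot in the current column at or below row `rank`
        pivot = next((r for r in range(rank, n) if M[r][col] % p != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][col], p - 2, p)           # Fermat inverse, p prime
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(n):
            if r != rank and M[r][col] % p:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
        if rank == n:
            break
    return rank

def guess_distribution(X, p):
    """Steps 1-2 of the strategy: full rank => 'uniform', else => 'SH'."""
    n, m = len(X), len(X[0])
    return "uniform" if matrix_rank_mod_p(X, p) == min(n, m) else "SH"
```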
Why Does the Rank-Based Analysis Work? The idea of the proposed strategy is remarkably simple. It reduces the task to a distinguishing problem on the distributions of the ranks of matrices drawn from those two distributions. Specifically, the value \(\varepsilon \) can be seen as a threshold rank, which is the critical factor of the proposed strategy. To motivate why computing the rank of a matrix is useful for solving the decisional-WSH problem and decisional-SSH problem, we list the following two facts:

  • Fact 1: Consider an \(\textit{n}\times \textit{m}\) matrix \(\mathbf X \) over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\). W.l.o.g. assume that \(\textit{n}\le \textit{m}\). If some \(\ell <\textit{n}\) of the row vectors of \(\mathbf X \) are linearly dependent, then the full set of \(\textit{n}\) row vectors of \(\mathbf X \) is linearly dependent. This implies that the rank of \(\mathbf X \) must be below \(\textit{n}\) (i.e., \(\textsf {rank}(\mathbf X )<\textit{n})\).

  • Fact 2: Consider an \(\textit{n}\times \textit{m}\) matrix \(\mathbf X \) sampled from the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\). W.l.o.g. assume that \(\textit{n}\le \textit{m}\). With high probability, the \(\textit{n}\) row vectors of \(\mathbf X \) are linearly independent, and the rank of \(\mathbf X \) is \(\textit{n}\) (i.e., \(\textsf {rank}(\mathbf X )=\textit{n})\).

Specifically, based on Linial and Weitz’s work [13] (see Eq. (2)), we verify Fact 2 concretely. To implement this verification, we choose parameters \(\textit{p}>4\cdot \lambda +2\), \(\textit{n}=2\cdot \lambda +1\) and \(\textit{m}\ge \textit{n}\), and compute the probabilities of full-row-rank matrices for different parameters. The verification results show that the probability of a uniformly random matrix having rank \(\textit{n}\) is nearly 1, i.e., \(\Pr [\textsf {rank}(\mathbf X )=\textit{n}]\approx 1\), which characterizes the rank distribution of uniformly random matrices over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\). For more details about the verification results, we refer the reader to the full version of our paper.

$$\begin{aligned} \Pr [\textsf {rank}(\mathbf X ) = \textit{z}] = \frac{1}{\textit{p}^{(\textit{n} - \textit{z}) \cdot (\textit{m} - \textit{z})}} \cdot \prod \limits _{\textit{i}=0}^{\textit{z}-1}\frac{(1-\textit{p}^{\textit{i}-\textit{n}}) \cdot (1-\textit{p}^{\textit{i}-\textit{m}})}{1-\textit{p}^{\textit{i}-\textit{z}}} \end{aligned}$$
(2)
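Eq. (2) can be evaluated exactly with rational arithmetic; the following sketch is our own, with illustrative parameters.

```python
# Exact evaluation of Eq. (2) with rational arithmetic; parameters in the
# examples below are illustrative only.
from fractions import Fraction

def pr_rank(p, n, m, z):
    """Pr[rank(X) = z] for X uniform over Z_p^{n x m} (Linial-Weitz, Eq. (2))."""
    pr = Fraction(1, p ** ((n - z) * (m - z)))
    for i in range(z):
        # term (1 - p^{i-n}) * (1 - p^{i-m}) / (1 - p^{i-z})
        pr *= (Fraction(1) - Fraction(1, p ** (n - i))) * \
              (Fraction(1) - Fraction(1, p ** (m - i)))
        pr /= Fraction(1) - Fraction(1, p ** (z - i))
    return pr
```

For instance, the probabilities sum to 1 over all ranks, and for \(p = 101\) a uniformly random \(5 \times 7\) matrix has full row rank with probability above 0.99, matching the verification results described above.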

According to Fact 1 and Fact 2, if the matrices sampled from some distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\) always have some linearly dependent row vectors, the ranks of these matrices always stay below \(\textsf {min}(\textit{n}, \textit{m})\), and their rank distribution can be distinguished from the rank distribution of uniformly random matrices over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\) with non-negligible advantage.

Then, based on the above analysis, if \(\mathcal {A}\) employs the proposed strategy to solve the decisional-WSH problem and decisional-SSH problem, the crux is whether the \(\lambda +1\) special row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\lambda +1}\) are linearly dependent, and with what probability. If \(\mathbf d _{1}, \ldots , \mathbf d _{\lambda +1}\) are indeed linearly dependent, then the rank of a matrix sampled from \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) leaks information about the matrix structure. In what follows, we explore the linear relation among the \(\lambda +1\) special row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\lambda +1}\) and give the answer.

3.2 Analysis for the Decisional-SH Assumption

To show the linear relation among the \(\lambda +1\) special row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\lambda +1}\), we consider the set of the transposes of these \(\lambda +1\) vectors, \([(\mathbf d _{1})^{\textit{T}}~\ldots (\mathbf d _{\lambda +1})^{\textit{T}}]\), as the product of two matrices \(\mathbf A \cdot \mathbf K \), where \(\mathbf A \) is an \(\textit{m}\times \lambda \) uniformly random matrix whose \(\textit{i}^{\textit{th}}\) column is \(\mathbf a _{\textit{i}}\) for \(\textit{i}\in [\lambda ]\), and \(\mathbf K \) is a \(\lambda \times (\lambda +1)\) matrix whose \(\textit{r}^{\textit{th}}\) column is \(\mathbf k _{\textit{r}}=[\textit{k}_{\textit{r}}~\textit{k}_{\textit{r}}^{2}~\ldots ~\textit{k}_{\textit{r}}^{\lambda }]^{\textit{T}}\) for \(\textit{r}\in [\lambda +1]\). Specifically, according to Fact 2, our following analysis focuses on the case, which occurs with high probability, that the vectors \(\mathbf a _{1}, \ldots , \mathbf a _{\lambda }\) are linearly independentFootnote 2.

Lemma 2

Consider an \(\textit{m}\times (\lambda +1)\) matrix \((\mathbf A \cdot \mathbf K )\). Assume that \(\textit{m}=\textsf {poly}(\lambda )>\lambda \), and the column vectors \(\mathbf a _{1}, \ldots , \mathbf a _{\lambda }\) are linearly independent. Then, \(\textsf {rank}(\mathbf A \cdot \mathbf K ) <\textsf {min}(\textit{m}, \lambda +1)\), which implies that the special row vectors \(\mathbf d _{1}, \ldots ,\mathbf d _{\lambda +1}\) are linearly dependent.

Proof

The result in this lemma is actually immediate: since \(\textsf {rank}(\mathbf A \cdot \mathbf K )\le \textsf {rank}(\mathbf K )\le \lambda <\textsf {min}(\textit{m}, \lambda +1)\), the \(\lambda +1\) columns of \(\mathbf A \cdot \mathbf K \) are linearly dependent. For the formal proof, we refer the reader to the full version of our paper.
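Lemma 2 can also be checked numerically. The sketch below uses toy parameters (\(\lambda = 3\), \(m = 7\), \(p = 101\), our own illustrative choices) and our own Gaussian-elimination rank helper, with the \(k_r\) forced distinct for simplicity.

```python
# Numerical check of Lemma 2 with toy parameters (lam=3, m=7, p=101 are
# illustrative, not from the paper); rank helper is our own sketch.
import random

def rank_mod_p(M, p):
    """Rank over Z_p via Gaussian elimination (p prime)."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

random.seed(0)
lam, m, p = 3, 7, 101                      # toy parameters, m > lam
A = [[random.randrange(p) for _ in range(lam)] for _ in range(m)]  # m x lam
ks = random.sample(range(1, p), lam + 1)   # distinct nonzero k_r
K = [[pow(k, j + 1, p) for k in ks] for j in range(lam)]  # lam x (lam+1)
AK = [[sum(A[i][t] * K[t][r] for t in range(lam)) % p
       for r in range(lam + 1)] for i in range(m)]        # m x (lam+1)
# rank(A*K) <= rank(K) <= lam < min(m, lam+1): the columns (d_r)^T, i.e. the
# special rows d_1, ..., d_{lam+1}, are linearly dependent, as Lemma 2 states.
deficient = rank_mod_p(AK, p) < min(m, lam + 1)
```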

According to Eq. (2), the probability that the column vectors \(\mathbf a _{1}, \ldots , \mathbf a _{\lambda }\) are linearly independent (i.e., \(\textsf {rank}(\mathbf A )=\lambda \)) is \(\varPi _{\textit{i}=0}^{\lambda -1}(1-\textit{p}^{\textit{i}-\textit{m}})\). Then, the probability that the row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\lambda +1}\) are linearly dependent is also \(\varPi _{\textit{i}=0}^{\lambda -1}(1-\textit{p}^{\textit{i}-\textit{m}})\). Specifically, if \(\textit{p}\) is a large prime (e.g., \(\textit{p}>4\cdot \lambda +2\)), the row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\lambda +1}\) are likely to be linearly dependent.

Then, based on the proposed adversary’s strategy and Lemma 2, we show our main analysis results for solving the decisional-WSH problem and decisional-SSH problem.

Lemma 3

Consider a sample \(\mathbf X \) from either the WSH distribution (resp. the SSH distribution) \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) or the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\), where \({\textit{n}~\in \{2\cdot \lambda +1, 2\cdot \lambda +2\cdot \textit{e}+2\}}\). Assume that \(\textit{m}>2\cdot \lambda \) \((resp.~{\textit{m}>2\cdot \lambda +\textit{e}+1)}\), and \(\textit{p}\) is a large prime, e.g., \(\textit{p}>4\cdot \lambda +2\). Let \(\varphi = \varPi _{\textit{i}=0}^{\lambda -1}(1-\textit{p}^{\textit{i}-\textit{m}})\). Let \(\eta = Pr[\textsf {rank}(\mathbf X )=\textit{z}]\), where the probability is taken over uniformly random \(\mathbf X \), and \(\textit{z}=\textsf {min}(\textit{n}, \textit{m})\). If \(\textsf {rank}(\mathbf X )<\textsf {min}(\textit{n}, \textit{m})\), the probability that \(\mathbf X \) is sampled from \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) satisfies \(Pr[\mathbf X ~\leftarrow ~\chi (\textit{p})^{\textit{n}\times \textit{m}}|\textsf {rank}(\mathbf X )<\textsf {min}(\textit{n}, \textit{m})] \ge ~\frac{1}{1+\frac{1-\eta }{\varphi }}\), and if \(\textsf {rank}(\mathbf X )=\textsf {min}(\textit{n}, \textit{m})\), the probability that \(\mathbf X \) is uniformly random satisfies \(Pr[\mathbf X \xleftarrow {\$}\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}~|~\textsf {rank}(\mathbf X ) = \textsf {min}(\textit{n}, \textit{m})]\ge \frac{1}{1+\frac{1-\varphi }{\eta }}\).

Proof

For the detailed proof, we refer the reader to the full version of our paper.

In Lemma 3, since \(\textit{p}\) is a large prime, we can obtain \(\Pr [\mathbf X ~\leftarrow ~\chi (\textit{p})^{\textit{n}\times \textit{m}}|\textsf {rank}(\mathbf X ) <\textsf {min}(\textit{n}, \textit{m})] \approx 1\) and \(\Pr [\mathbf X ~\xleftarrow {\$}~\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}|\textsf {rank}(\mathbf X ) = \textsf {min}(\textit{n}, \textit{m})] \approx 1\).
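For concreteness, the two lower bounds in Lemma 3 can be evaluated exactly with rational arithmetic; \(\lambda = 80\), \(p = 331\) (a prime larger than \(4\cdot \lambda + 2 = 322\)) and \(n = m = 2\cdot \lambda + 1\) are our own illustrative choices.

```python
# Exact evaluation of the posterior bounds in Lemma 3; the parameter choice
# (lam=80, p=331, n=m=161) is illustrative only.
from fractions import Fraction

def phi(p, lam, m):
    """phi = prod_{i=0}^{lam-1} (1 - p^{i-m})."""
    out = Fraction(1)
    for i in range(lam):
        out *= Fraction(1) - Fraction(1, p ** (m - i))
    return out

def eta(p, n, m):
    """eta = Pr[rank(X) = min(n, m)] for uniform X, from Eq. (2)."""
    z = min(n, m)
    out = Fraction(1, p ** ((n - z) * (m - z)))
    for i in range(z):
        out *= (Fraction(1) - Fraction(1, p ** (n - i))) * \
               (Fraction(1) - Fraction(1, p ** (m - i)))
        out /= Fraction(1) - Fraction(1, p ** (z - i))
    return out

lam, p = 80, 331          # p > 4*lam + 2 = 322
n = m = 2 * lam + 1
f, e = phi(p, lam, m), eta(p, n, m)
lb_sh   = 1 / (1 + (1 - e) / f)   # bound on Pr[X <- chi | rank < min(n, m)]
lb_unif = 1 / (1 + (1 - f) / e)   # bound on Pr[X uniform | rank = min(n, m)]
```

Both bounds come out very close to 1 for such parameters, in line with the observation above that a large prime \(p\) makes the two posterior probabilities approach 1.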

Theorem 6

Let \(\varphi =\varPi _{\textit{i}=0}^{\lambda -1}(1-\textit{p}^{\textit{i}-\textit{m}})\). Let \(\eta =Pr[\textsf {rank}(\mathbf X ) = \textit{z}]\) denote the probability that the rank of any \(\textit{n}\times \textit{m}\) uniformly random matrix \(\mathbf X \) is \(\textit{z}\), where \(\textit{z} = \textsf {min}(\textit{n}, \textit{m})\), where \(\textit{n}\in \{2\cdot \lambda +1, 2\cdot \lambda +2\cdot \textit{e}+2\}\). Assume that \(\textit{m}>2\cdot \lambda \) \((resp.~\textit{m}>2\cdot \lambda +\textit{e} +1)\), and \(\textit{p}\) is a large prime, e.g., \(\textit{p}>4\cdot \lambda +2\). Then there exists an adversary \(\mathcal {A}\) running in polynomial-time \(\textit{t}\) for solving the decisional-WSH problem \((resp.~decisional{\text {-}}SSH~problem)\) with

$$ \begin{array}{l} {\textsf {Adv}_{\mathcal {A}, \text {SH}}(\textit{p}, \textit{n}, \textit{m})} \ge \frac{1}{2} \cdot (\varphi + \eta ) - \frac{1}{2} \end{array}, $$

where \(\textit{t}\) is the time used to compute the rank of a matrix. Specifically, since \(\textit{p}\) is a large prime, \(\mathcal {A}\) has advantage \(\textsf {Adv}_{\mathcal {A}, \text {SH}} (\textit{p}, \textit{n}, \textit{m})\approx \frac{1}{2}\) in solving the decisional-WSH problem \((resp.~decisional{\text {-}}SSH~problem)\).

Proof

Let \(\textsf {Unif}(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}})\) denote the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\). The adversary \(\mathcal {A}\) has access to an oracle that is either \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) or \(\textsf {Unif}(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}})\). She calls the oracle arbitrarily many times (i.e., a polynomial number of times) to obtain samples of the form \(\mathbf X _{\textit{i}}\) and uses the rank-based adversary’s strategy to evaluate each sample. If \(\textsf {rank}(\mathbf X _{\textit{i}})<\textsf {min}(\textit{n}, \textit{m})\), \(\mathcal {A}\) outputs \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\); if \(\textsf {rank}(\mathbf X _{\textit{i}})=\textsf {min}(\textit{n}, \textit{m})\), \(\mathcal {A}\) outputs \(\textsf {Unif}(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}})\).

We first look at the probability distribution of the rank of \(\mathbf X _{\textit{i}}\) when the oracle that \(\mathcal {A}\) has access to is \(\textsf {Unif}(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}})\). In this case it’s easy to see that \(\Pr [\textsf {rank}(\mathbf X _{\textit{i}})=\textsf {min}(\textit{n}, \textit{m})]=\eta \) and \(\Pr [\textsf {rank}(\mathbf X _{\textit{i}})<\textsf {min}(\textit{n}, \textit{m})]=1-\eta \).

If the oracle is \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\), as discussed earlier, we have \(\Pr [\textsf {rank}(\mathbf X _{\textit{i}})=\textsf {min}(\textit{n}, \textit{m})] \le 1-\varphi \) and \(\Pr [\textsf {rank}(\mathbf X _{\textit{i}})<\textsf {min}(\textit{n}, \textit{m})] \ge ~\varphi \).

Thus, based on Lemma 3, we obtain the success probability (see Sect. 2.1)

$$ \begin{array}{l} \Pr [\text {success}] \ge \frac{1}{2}\cdot \varphi + \frac{1}{2}\cdot \eta \end{array}. $$

This means that \(\textsf {Adv}_{\mathcal {A}, \text {SH}}(\textit{p}, \textit{n}, \textit{m}) \ge ~\frac{1}{2}\cdot \varphi +\frac{1}{2}\cdot \eta -\frac{1}{2}\). Specifically, when \(\textit{p}\) is a large prime, the value of \(\varphi \) is close to 1. Moreover, as discussed in Sect. 3.1, \(\eta \) is also close to 1 if \(\textit{p}\) is not a small prime. Then, \(\textsf {Adv}_{\mathcal {A}, \text {SH}}(\textit{p}, \textit{n}, \textit{m})\) is close to \(\frac{1}{2}\), which confirms our theorem.
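The proof above can be illustrated by a small Monte-Carlo simulation; the toy parameters (\(\lambda = 3\), \(p = 101\), 200 trials), the sampling routine, and the rank helper are all our own sketch, not the paper's implementation.

```python
# Monte-Carlo estimate of the adversary's success rate in Theorem 6, under
# toy parameters (lam=3, p=101, 200 trials); all names here are ours.
import random

def rank_mod_p(M, p):
    """Rank over Z_p via Gaussian elimination (p prime)."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

def wsh_sample(lam, m, p):
    """A WSH-style sample: lam+1 special rows d_r plus lam uniform rows."""
    a = [[random.randrange(p) for _ in range(lam)] for _ in range(m)]
    rows = []
    for _ in range(lam + 1):
        k = random.randrange(1, p)
        rows.append([sum(a[i][j] * pow(k, j + 1, p) for j in range(lam)) % p
                     for i in range(m)])
    rows += [[random.randrange(p) for _ in range(m)] for _ in range(lam)]
    random.shuffle(rows)
    return rows

random.seed(1)
lam, p = 3, 101                  # toy parameters; p > 4*lam + 2
n = m = 2 * lam + 1
correct = 0
for _ in range(200):
    b = random.randrange(2)      # b = 0: WSH sample, b = 1: uniform sample
    X = (wsh_sample(lam, m, p) if b == 0 else
         [[random.randrange(p) for _ in range(m)] for _ in range(n)])
    guess = 0 if rank_mod_p(X, p) < min(n, m) else 1
    correct += (guess == b)
success_rate = correct / 200     # close to 1, so the advantage is close to 1/2
```

The special rows all lie in the \(\lambda\)-dimensional span of the columns of \(\mathbf A\), so a WSH sample is always rank-deficient, while a uniform sample has full rank with probability near 1; the measured success rate reflects this.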

Theorem 6 demonstrates that we can break the decisional-WSH assumption and decisional-SSH assumption efficiently for a wide range of parameters. The final result contradicts Atallah and Frikken’s result in Eq. (1). However, this does not imply that we can solve the search-WSH problem and search-SSH problem efficiently, which shows the inaccuracy of Theorem 3.

3.3 Analysis for AF-PVC\(_{two}\) and AF-PVC\(_{single}\)

Now we present the formal analysis of the privacy of AF-PVC\(_{two}\) and AF-PVC\(_{single}\). Specifically, it is straightforward to use the idea of the analysis of the decisional-WSH assumption and decisional-SSH assumption to undermine the privacy of AF-PVC\(_{two}\) and AF-PVC\(_{single}\). This means that an adversary \(\mathcal {A}\) employs the rank-based strategy to evaluate a given ciphertext matrix. Note that our analysis is based on the experiment \(\textsf {Exp}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}\) (see Definition 3), where an eavesdropping adversary \(\mathcal {A}\) running in polynomial time has non-negligible advantage, which shows that both protocols are not IND-COA private (and thus not private).

Lemma 4

Given a uniformly random matrix \(\mathbf A \in \mathbb {Z}_{\textit{p}}^{2\cdot \textit{v}^{2}\times \lambda }\) whose \(\textit{i}^{\textit{th}}\) column is \(\mathbf a _{\textit{i}}\) for \(\textit{i}\in [\lambda ]\), a \(\lambda \times \textit{n}\) matrix \(\mathbf K \) whose \(\textit{r}^{\textit{th}}\) column is \(\mathbf k _{\textit{r}}=[\textit{k}_{\textit{r}}~\textit{k}_{\textit{r}}^{2}~\ldots ~\textit{k}_{\textit{r}}^{\lambda }]^{\textit{T}}\) for \(\textit{r}\in [\textit{n}]\), and a \(2\cdot \textit{v}^{2}\times \textit{n}\) matrix \(\mathbf S \) whose columns are all identical, i.e., \(\mathbf s _{\textit{i}}=\mathbf s _{\textit{j}}\) for all \(\textit{i}\), \(\textit{j}\in [\textit{n}]\), where \(\mathbf s _{\textit{i}}\xleftarrow {\$}\mathbb {Z}_{\textit{p}}^{2\cdot \textit{v}^{2}}\)Footnote 3. Let \(\textit{p}\) be a large prime \((e.g.,~\textit{p}>4\cdot \lambda +2)\), \(\textit{v}=\textsf {poly}(\lambda )>\sqrt{\frac{\lambda }{2}}\) and \(\textit{n}\in \{\lambda +1, 2\cdot \lambda +1\}\). Assume that the column vectors \(\mathbf a _{1}, \ldots , \mathbf a _{\lambda }, \mathbf s _{\textit{i}}\) are linearly independent. Then, for the \(2\cdot \textit{v}^{2}\times \textit{n}\) matrix \((\mathbf A \cdot \mathbf K +\mathbf S )\), we have \(\textsf {rank}(\mathbf A \cdot \mathbf K +\mathbf S )=\lambda +1\).

Proof

For the formal proof, we refer the reader to the full version of our paper.

Corollary 1

Consider two \(2\cdot \textit{v}^{2}\times \textit{n}\) matrices \((\mathbf A \cdot \mathbf K +\mathbf S )\) and \((\mathbf A \cdot \mathbf K +\mathbf Z )\), where the definitions of \(\mathbf A , \mathbf K \) and \(\mathbf S \) are in Lemma 4, and \(\mathbf Z \) is a \(2\cdot \textit{v}^{2}\times \textit{n}\) zero matrix. Let \(\textit{v} = \textsf {poly}(\lambda )>\sqrt{\frac{\lambda }{2}}\) and \(\textit{n}~\in \{\lambda +1, 2\cdot \lambda +1\}\). Then the probability \(Pr[\textsf {rank}(\mathbf A \cdot \mathbf K +\mathbf S )= \lambda +1] = \varPi _{\textit{i}=0}^{\lambda }(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}})\), and the probability \(Pr[\textsf {rank}(\mathbf A \cdot \mathbf K +\mathbf Z ) <\lambda +1] = \varPi _{\textit{i}=0}^{\lambda -1}(1 -\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}})\) for the case that the vectors \(\mathbf a _{1}, \ldots , \mathbf a _{\lambda }, \mathbf s _{\textit{i}}\) are linearly independent.

Proof

We again refer the reader to the full version of our paper for the detailed proof.
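Lemma 4 and Corollary 1 can be checked numerically. The toy parameters (\(\lambda = 3\), \(p = 101\), \(v = 3\)) and the rank helper below are our own choices, and we force the \(k_r\) to be distinct for simplicity.

```python
# Numerical check of Lemma 4 / Corollary 1; parameters are illustrative only.
import random

def rank_mod_p(M, p):
    """Rank over Z_p via Gaussian elimination (p prime)."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

random.seed(2)
lam, p, v = 3, 101, 3                     # v > sqrt(lam / 2)
rows, n = 2 * v * v, lam + 1              # S is rows x n with identical columns
A = [[random.randrange(p) for _ in range(lam)] for _ in range(rows)]
ks = random.sample(range(1, p), n)        # distinct nonzero k_r
s = [random.randrange(p) for _ in range(rows)]
AK = [[sum(A[i][j] * pow(ks[r], j + 1, p) for j in range(lam)) % p
       for r in range(n)] for i in range(rows)]
AKS = [[(AK[i][r] + s[i]) % p for r in range(n)] for i in range(rows)]
# For generic A and s: rank(A*K) stays at most lam (cf. Lemma 2), while
# adding the identical-column matrix S lifts the rank to exactly lam + 1.
r_ak, r_aks = rank_mod_p(AK, p), rank_mod_p(AKS, p)
```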

Based on Lemma 4 and Corollary 1, we present the following theorems breaking the privacy of AF-PVC\(_{two}\) and AF-PVC\(_{single}\).

Theorem 7

The protocol AF-PVC\(_{two}\) does not satisfy IND-COA privacy based on Definition 3 under the condition that the size of the message space \(\textit{p}\) is a large prime \((e.g., \ \textit{p}>4\cdot \lambda +2)\) and the matrix size \(\textit{v}>\sqrt{\lambda }\). Specifically, the advantage of a PPT adversary \(\mathcal {A}\) in breaking the privacy of this protocol is close to \(\frac{1}{2}\).

Proof

According to Definition 3, for the experiment \(\textsf {Exp}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}\)[AF-PVC\(_{\textit{two}}, \lambda \)], a PPT adversary \(\mathcal {A}\) chooses two pairs of \(\textit{v}\times \textit{v}\) matrices \((\mathbf M _{1(0)}, \mathbf M _{2(0)}), (\mathbf M _{1(1)}, \mathbf M _{2(1)})\). Specifically, \((\mathbf M _{1(0)}, \mathbf M _{2(0)})\xleftarrow {\$}\mathbb {Z}_{\textit{p}}^{\textit{v}\times \textit{v}}\times \mathbb {Z}_{\textit{p}}^{\textit{v}\times \textit{v}}\), and \((\mathbf M _{1(1)}, \mathbf M _{2(1)})\) are two zero matrices. The challenge ciphertext is the matrix set \(\textit{U}_{\textit{b}}\) that comes from the second server (i.e., \(\textit{U}_{\textit{b}}^{(2)}\)). \(\mathcal {A}\) flattens out each pair of matrices of \(\textit{U}_{\textit{b}}\) into a list of \(2\cdot \textit{v}^{2}\) values to generate a \((2\cdot \lambda +1)\times 2\cdot \textit{v}^{2}\) matrix \(\mathbf E _{\textit{b}}\). \(\mathbf E _{\textit{b}}\) involves either all rows of the matrix \((\mathbf A \cdot \mathbf K +\mathbf S )^{\textit{T}}\) or all rows of the matrix \((\mathbf A \cdot \mathbf K +\mathbf Z )^{\textit{T}}\), where the descriptions of the transposes of these two matrices are in Corollary 1. To win the experiment in Definition 3, \(\mathcal {A}\) employs a PPT distinguisher \(\mathcal {D}\) based on the proposed adversary’s strategy as follows:

Distinguisher \(\mathcal {D}\):

  • For the case \(\textsf {rank}(\mathbf E _{\textit{b}})=2\cdot \lambda +1, \mathcal {A}\) outputs \(\textit{b}'=0\).

  • For the case \(\textsf {rank}(\mathbf E _{\textit{b}})<2\cdot \lambda +1, \mathcal {A}\) outputs \(\textit{b}'=1\).

The positive integer \(2\cdot \lambda +1\) is regarded as the threshold rank. If \(\textsf {Adv}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}\)(AF-PVC\(_{\textit{two}}, \lambda \)) is non-negligible, then AF-PVC\(_{two}\) is not IND-COA private. In what follows, we show this result by considering a large prime \(\textit{p}\) (e.g., \(\textit{p}>4\cdot \lambda +2)\) and a matrix size \(\textit{v}>\sqrt{\lambda }\).
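As a sanity check on \(\mathcal {D}\), the sketch below models the challenge matrix \(\mathbf E _{\textit{b}}\) in a deliberately simplified way: the \(\lambda +1\) rows of \((\mathbf A \cdot \mathbf K +\mathbf S )^{\textit{T}}\) (for \(b=0\)) or \((\mathbf A \cdot \mathbf K +\mathbf Z )^{\textit{T}}\) (for \(b=1\)), padded with \(\lambda \) uniformly random rows up to \(2\cdot \lambda +1\) rows. The padding, the toy parameters, and the rank helper are our own modeling assumptions, not the protocol's exact structure.

```python
# Simplified model of the distinguisher D from the proof of Theorem 7;
# parameters and the padding of E_b are our own illustrative assumptions.
import random

def rank_mod_p(M, p):
    """Rank over Z_p via Gaussian elimination (p prime)."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

def distinguisher_D(E_b, p, lam):
    """Output b' = 0 iff rank(E_b) reaches the threshold rank 2*lam + 1."""
    return 0 if rank_mod_p(E_b, p) == 2 * lam + 1 else 1

random.seed(3)
lam, p, v = 3, 101, 3                     # toy parameters; v > sqrt(lam)
rows, n = 2 * v * v, lam + 1              # rows = 2*v^2 = 18
A = [[random.randrange(p) for _ in range(lam)] for _ in range(rows)]
ks = random.sample(range(1, p), n)        # distinct nonzero k_r
s = [random.randrange(p) for _ in range(rows)]
AK = [[sum(A[i][j] * pow(ks[r], j + 1, p) for j in range(lam)) % p
       for r in range(n)] for i in range(rows)]
pad = [[random.randrange(p) for _ in range(rows)] for _ in range(lam)]
# E_0 holds the rows of (A*K + S)^T, E_1 the rows of (A*K + Z)^T (Z = 0),
# each padded with lam uniform rows up to 2*lam + 1 rows (our simplification).
E0 = [[(AK[i][r] + s[i]) % p for i in range(rows)] for r in range(n)] + pad
E1 = [[AK[i][r] for i in range(rows)] for r in range(n)] + pad
b0, b1 = distinguisher_D(E0, p, lam), distinguisher_D(E1, p, lam)
```

In this model \(\mathbf E _{1}\) is always rank-deficient (its first \(\lambda +1\) rows span at most \(\lambda \) dimensions), while \(\mathbf E _{0}\) reaches the threshold rank with probability near 1, so \(\mathcal {D}\) answers correctly in both cases.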

Specifically, from Corollary 1, we obtain

$$ \left\{ \begin{aligned} \Pr [\textsf {rank}(\mathbf E _{\textit{b}}) = 2\cdot \lambda +1 | \mathbf E _{\textit{b}} = \mathbf E _{0}]&= \prod \nolimits _{\textit{i}=0}^{\lambda }(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) \cdot \prod \nolimits _{\textit{i}=0}^{2\cdot \lambda }(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) \\ \Pr [\textsf {rank}(\mathbf E _{\textit{b}}) < 2\cdot \lambda +1 | \mathbf E _{\textit{b}} = \mathbf E _{1}]&\ge \prod \nolimits _{\textit{i}=0}^{\lambda -1}(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) \\ \end{aligned}. \right. $$

Thus, we have

$$ \begin{array}{l} {\Pr [\textsf {Exp}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}} [\textsf {AF-PVC}_{\textit{two}}, \lambda ] = 1 ]} \\ \ge \frac{1}{2} \cdot \prod \limits _{\textit{i}=0}^{\lambda }(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}})\cdot \prod \limits _{\textit{i}=0}^{2\cdot \lambda }(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) + \frac{1}{2} \cdot \prod \limits _{\textit{i}=0}^{\lambda -1}(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) \\ \quad = \frac{1}{2} \cdot \prod \limits _{\textit{i}=0}^{\lambda -1}(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) \cdot ((1-\textit{p}^{\lambda -2\cdot \textit{v}^{2}})\cdot \prod \limits _{\textit{i}=0}^{2\cdot \lambda }(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) + 1) \end{array}. $$

Since \(\textit{p}\) is a large prime, as discussed earlier, we can obtain \(\Pr [\textsf {Exp}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}\)[AF-PVC\(_{\textit{two}}, \lambda ] = 1] \approx 1\). This means that \(\textsf {Adv}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}\)(AF-PVC\(_{\textit{two}}, \lambda ) \approx \frac{1}{2} \nleqslant \textsf {negl}(\lambda \)).

Theorem 8

The protocol AF-PVC\(_{single}\) does not satisfy IND-COA privacy based on Definition 3 under the condition that the size of the message space \(\textit{p}\) is a large prime \((e.g.,\ \textit{p}>4\cdot \lambda +2)\) and the matrix size \(\textit{v}>\sqrt{\frac{3\cdot \lambda +1}{2}}\). Specifically, the advantage of a PPT adversary \(\mathcal {A}\) in breaking the privacy of this protocol is close to \(\frac{1}{2}\).

Proof

The proof follows a similar procedure to that for Theorem 7. For the experiment \(\textsf {Exp}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}\)[AF-PVC\(_{\textit{single}}, \lambda \)] in Definition 3, a PPT adversary \(\mathcal {A}\) also chooses two pairs of \(\textit{v}\times \textit{v}\) matrices \((\mathbf M _{1(0)}, \mathbf M _{2(0)})\) and \((\mathbf M _{1(1)}, \mathbf M _{2(1)})\), where \((\mathbf M _{1(0)}, \mathbf M _{2(0)})\xleftarrow {\$}\mathbb {Z}_{\textit{p}}^{\textit{v}\times \textit{v}}\times \mathbb {Z}_{\textit{p}}^{\textit{v}\times \textit{v}}\), and \((\mathbf M _{1(1)}, \mathbf M _{2(1)})\) are two zero matrices. The challenge ciphertext is the matrix set \(\textit{U}_{\textit{b}}\). \(\mathcal {A}\) flattens out each pair of matrices of \(\textit{U}_{\textit{b}}\) into a list of \(2\cdot \textit{v}^{2}\) values to generate a \((4\cdot \lambda +2)\times 2\cdot \textit{v}^{2}\) matrix \(\mathbf E _{\textit{b}}\). \(\mathbf E _{\textit{b}}\) includes either all rows of the matrix \((\mathbf A \cdot \mathbf K +\mathbf S )^{\textit{T}}\) or all rows of the matrix \((\mathbf A \cdot \mathbf K +\mathbf Z )^{\textit{T}}\). To win the experiment in Definition 3, \(\mathcal {A}\) employs a PPT distinguisher \(\widehat{\mathcal {D}}\) based on the proposed adversary’s strategy as follows:

Distinguisher \(\widehat{\mathcal {D}}\):

  • For the case \(\textsf {rank}(\mathbf E _{\textit{b}})=3\cdot \lambda +2, \mathcal {A}\) outputs \(\textit{b}'=0\).

  • For the case \(\textsf {rank}(\mathbf E _{\textit{b}})<3\cdot \lambda +2, \mathcal {A}\) outputs \(\textit{b}'=1\).

The positive integer \(3\cdot \lambda +2\) is regarded as the threshold rank. If \(\textsf {Adv}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}\) (AF-PVC\(_{single}, \lambda \)) is non-negligible, then AF-PVC\(_{single}\) is not IND-COA private. In what follows, we show this result by considering a large prime p (e.g., \(\textit{p}>4\cdot \lambda +2)\) and a matrix size \(\textit{v}>\sqrt{\frac{3\cdot \lambda +1}{2}}\).

$$ \begin{array}{l} \Pr [\textsf {Exp}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}} [\textsf {AF-PVC}_{\textit{single}}, \lambda ] = 1 ] \\ = \Pr [\mathbf E _{\textit{b}} = \mathbf E _{0} | \textsf {rank}(\mathbf E _{\textit{b}}) = 3 \cdot \lambda + 2] \cdot \Pr [\textsf {rank}(\mathbf E _{\textit{b}}) = 3 \cdot \lambda + 2] \\ \quad +\,\Pr [\mathbf E _{\textit{b}} = \mathbf E _{1} | \textsf {rank}(\mathbf E _{\textit{b}})< 3 \cdot \lambda + 2] \cdot \Pr [\textsf {rank}(\mathbf E _{\textit{b}}) < 3 \cdot \lambda + 2] \end{array}. $$

Specifically, from Corollary 1, we have

$$ \left\{ \begin{aligned} \Pr [\textsf {rank}(\mathbf E _{\textit{b}}) = 3\cdot \lambda +2 | \mathbf E _{\textit{b}} = \mathbf E _{0}]&= \prod \nolimits _{\textit{i}=0}^{\lambda }(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) \cdot \prod \nolimits _{\textit{i}=0}^{3\cdot \lambda +1}(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) \\ \Pr [\textsf {rank}(\mathbf E _{\textit{b}}) < 3\cdot \lambda +2 | \mathbf E _{\textit{b}} = \mathbf E _{1}]&\ge \prod \nolimits _{\textit{i}=0}^{\lambda -1}(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) \\ \end{aligned}. \right. $$

Then, we can obtain

$$ \begin{array}{l} {\Pr [\textsf {Exp}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}} [\textsf {AF-PVC}_{\textit{single}}, \lambda ] = 1 ]} \\ \ge \frac{1}{2} \cdot \prod \limits _{\textit{i}=0}^{\lambda }(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}})\cdot \prod \limits _{\textit{i}=0}^{3\cdot \lambda +1}(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) + \frac{1}{2} \cdot \prod \limits _{\textit{i}=0}^{\lambda -1}(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) \\ \quad = \frac{1}{2} \cdot \prod \limits _{\textit{i}=0}^{\lambda -1}(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) \cdot ((1-\textit{p}^{\lambda -2\cdot \textit{v}^{2}})\cdot \prod \limits _{\textit{i}=0}^{3\cdot \lambda +1}(1-\textit{p}^{\textit{i}-2\cdot \textit{v}^{2}}) + 1) \end{array}. $$

Since p is a large prime, as discussed earlier, the above success probability \(\Pr [\textsf {Exp}_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}\)[AF-PVC\(_{single}, \lambda ] = 1] \approx 1\). This implies that the adversary’s advantage Adv\(_{\mathcal {A}}^{\text {ind-priv}^{\text {coa}}}\) (AF-PVC\(_{single}, \lambda ) \approx \frac{1}{2} \nleqslant \textsf {negl}(\lambda \)).

3.4 Discussion

To help readers fully understand our rank-based analyses, we present some concrete discussion below.

Parameters: The parameter choice is significant for our analyses of solving the decisional-SH problem and breaking the privacy of AF-PVC\(_{two}\) and AF-PVC\(_{single}\).

First, the size of the message space \(\textit{p}\) should be set as a large prime, at least larger than \(\lambda \), e.g., \(\textit{p}>4\cdot \lambda +2\). On the one hand, as shown in Sect. 3.1, if \(\textit{p}\) is large enough, then with high probability the vectors of a uniformly random matrix are linearly independent. This is necessary for our analyses. On the other hand, if a client wants to outsource the multiplication of some matrix pair \((\mathbf M _{1}, \mathbf M _{2})\in \mathbb {Z}_{\textit{p}}^{\textit{v}\times \textit{v}}\times \mathbb {Z}_{\textit{p}}^{\textit{v}\times \textit{v}}\) to a powerful server, the message space of each element in these two matrices should be large, which makes the computation expensive for the client to run locally. Otherwise, there is no need for outsourcing, and the client can carry out the computation locally. This means that the decisional-SH problem and the feasible protocols AF-PVC\(_{two}\) and AF-PVC\(_{single}\) with a suitably large parameter \(\textit{p}\) are the targets of our analyses.

Second, the matrix size \(\textit{v}\) (resp. the matrix size \(\textit{m}\) involved in the decisional-SH problem) should satisfy \(\textit{v}>\sqrt{\lambda }\) (see Theorem 7) or \(\textit{v}>\sqrt{\frac{3\cdot \lambda +1}{2}}\) (see Theorem 8) (resp. \(\textit{m}>2\cdot \lambda \) or \({\textit{m}>2\cdot \lambda +\textit{e}+1}\) for \({\textit{e}\in \mathbb {N}_{+}}\) (see Theorem 6)Footnote 4). On the one hand, for an \(\textit{n}\times 2\cdot \textit{v}^{2}\) (resp. \(\textit{n}\times \textit{m}\)) matrix, where \(\textit{n}\in \{2\cdot \lambda +1, 4\cdot \lambda +2\}\) (resp. \(\textit{n}\in \{2\cdot \lambda +1, 2\cdot \lambda +2\cdot \textit{e}+2\}\)), the rank of the matrix depends on \(\textsf {min}(\textit{n}, 2\cdot \textit{v}^{2})\) (resp. \(\textsf {min}(\textit{n}, \textit{m})\)), so if \(\textit{v}\) (resp. \(\textit{m}\)) does not satisfy the above condition, the rank-based adversary’s strategy no longer has any effect. This means that our analyses cannot solve the decisional-SH problem or break the privacy of AF-PVC\(_{two}\) and AF-PVC\(_{single}\). On the other hand, if a client wants to outsource the multiplication of some matrix pair to a powerful server, a key requirement is that these two matrices should be large-scale, which makes the outsourcing practical. If the matrix size does not satisfy the above condition, e.g., \(\textit{v}\le \sqrt{\lambda }\), the amount of work performed by the client for the outsourcing may not be substantially cheaper than performing the computation on its own. This implies that the outsourcing may be impractical. Therefore, our analyses focus on the meaningful decisional-SH problem and the protocols AF-PVC\(_{two}\) and AF-PVC\(_{single}\).

Adversary’s Cost: The cost of our analyses comes from computing the rank of a matrix. We can employ any existing algorithm for obtaining the rank of a matrix. In general, for a matrix from \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\) with rank \(\textit{z}\le \textsf {min}(\textit{n}, \textit{m})\), using Gaussian elimination, we can compute the rank in \(\textit{O}(\textit{z}\cdot \textit{n}\cdot \textit{m})\) field operations with storage of \(\textit{n}\cdot \textit{m}\) field elements [17]. Of course, the rank of a matrix can also be computed probabilistically by invoking black-box approaches, e.g., the Wiedemann method [10, 18]. More concretely, if our analyses employ the black-box method, for a matrix from \(\mathbb {Z}_{\textit{p}}^{(2\cdot \lambda +1)\times (2\cdot \lambda +1)}\) with rank \(2\cdot \lambda \), we need \(\tilde{\textit{O}}(2\cdot \lambda \cdot (2\cdot \lambda +1)^{2})\) time and \(\tilde{\textit{O}}(2\cdot \lambda +1)\) storage to obtain the rank, where the “soft-Oh” notation \(\tilde{\textit{O}}\) suppresses logarithmic factors.

4 Experimental Verifications

In order to give the reader a glance at the practical results of our analyses for solving the decisional-SH problem and breaking the privacy of AF-PVC\(_{two}\) and AF-PVC\(_{single}\), we implemented our analyses in Sect. 3 and report the adversary’s advantages and costs.

4.1 Setup

Hardware and Software: We conducted the experiments on a Lenovo ThinkStation (Intel(R) Xeon(R) E5-2620, 24 hyperthreaded cores at 2.00 GHz, 8 GB RAM at 2.00 GHz) running Windows 7 (x86_64). Our implementations are single-threaded. We used the NTL library [1], version 10.5.0, for the field operations over \(\mathbb {Z}_{\textit{p}}\) and the matrix operations.

Parameters Choice: Our implementations cover the privacy levels \(\lambda =80, 128, 192\) and 256. These selections lead to the parameters in Table 1, where \(\textit{e}=\lambda \), \(\textit{n}\in \{2\cdot \lambda +1, 2\cdot \lambda +2\cdot \textit{e}+2\}\), \(\textit{m}\in \{2\cdot \lambda +1, 3\cdot \lambda +1\}\) for \(\textit{n}=2\cdot \lambda +1\) and \({\textit{m}\in \{3\cdot \lambda +1, 4\cdot \lambda +2\}}\) for \({\textit{n}=2\cdot \lambda +2\cdot \textit{e}+2}\), \(\textit{p}>4\cdot \lambda +2\), \(\textit{v}=\lceil \sqrt{\lambda }+1\rceil \) for \(\textit{n}=2\cdot \lambda +1\) and \(\textit{v}=\lceil \sqrt{\frac{3\cdot \lambda +1}{2}}+1\rceil \) for \(\textit{n}=2\cdot \lambda +2\cdot \textit{e}+2\)Footnote 5, and \(\textit{h}=\lambda \).

Table 1. The used parameters for our analyses

4.2 Results and Timings

The experimental results are presented in Tables 2, 3, 4 and 5. Specifically, the adversary’s advantages and timings for solving the decisional-WSH problem and the decisional-SSH problem are shown in Tables 2 and 3, and those for breaking the privacy of AF-PVC\(_{two}\) and AF-PVC\(_{single}\) are reported in Tables 4 and 5. To obtain these results, we compute each adversary’s advantage from the fraction of correct answers over the whole experiment process (i.e., 200 experiments). Note that, for each experiment of solving the decisional-WSH problem (resp. the decisional-SSH problem), a fresh sample from either the distribution \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) or the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\) is used for the guess. For each experiment of breaking the privacy of AF-PVC\(_{two}\) (resp. AF-PVC\(_{single}\)), a fresh key sk used by AF-PVC\(_{two}\) (resp. AF-PVC\(_{single}\)) is generated to complete the matrix masking. Moreover, each timing in the tables is the average over 200 experiments.
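The experiment loop for the decisional-WSH problem can be sketched as follows (a simplified Python illustration with toy parameters of our own choosing; the actual experiments use NTL and the parameters of Table 1). The guess is "WSH" whenever the rank falls below \(\textit{n}\):

```python
import random

random.seed(1)
p = 1_000_003            # a prime; toy stand-in for the modulus
lam = 4                  # toy security parameter lambda
n = 2 * lam + 1
m = 2 * lam + 1          # m > 2*lam, so the rank strategy applies

def rank_mod_p(M, p):
    """Rank of M over Z_p (p prime) via Gaussian elimination."""
    M = [[x % p for x in row] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], p - 2, p)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                f = M[r][c]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def sample_wsh(p, lam, m):
    """One sample from chi(p)^{(2*lam+1) x m}: lam+1 structured rows
    (k_r-power combinations of the lam columns of A) plus lam uniform rows."""
    a = [[random.randrange(p) for _ in range(lam)] for _ in range(m)]
    ks = [random.randrange(1, p) for _ in range(lam + 1)]
    rows = [[sum(a[i][j] * pow(k, j + 1, p) for j in range(lam)) % p
             for i in range(m)] for k in ks]
    rows += [[random.randrange(p) for _ in range(m)] for _ in range(lam)]
    return rows

def guess_is_wsh(M):
    # A WSH sample has rank <= 2*lam < n, while a uniform n x m matrix
    # has full rank n with overwhelming probability.
    return rank_mod_p(M, p) < n

trials, correct = 50, 0
for _ in range(trials):
    from_wsh = bool(random.randrange(2))          # the challenger's secret bit
    M = (sample_wsh(p, lam, m) if from_wsh
         else [[random.randrange(p) for _ in range(m)] for _ in range(n)])
    correct += guess_is_wsh(M) == from_wsh
advantage = correct / trials - 0.5                # close to the maximal 0.5
```

Each trial draws a fresh sample, mirroring the experiment procedure described above; the advantage is then estimated from the fraction of correct guesses.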

Table 2. Results on the decisional-WSH problem
Table 3. Results on the decisional-SSH problem
Table 4. Results on AF-PVC\(_{two}\)
Table 5. Results on AF-PVC\(_{single}\)

As reported in Tables 2, 4 and 5, all the experimental results on the adversary’s advantage accord with the analyses of Theorems 6, 7 and 8, and demonstrate that there exists a PPT adversary algorithm that (almost) always succeeds in guessing the distribution of a given sample or the bit \(\textit{b}\) in the eavesdropping indistinguishability experiment (see Definition 3). For the results in Table 3, when \(\textit{n}=\textit{m}\), the adversary’s advantages validate the analysis in Theorem 6, which relies on the fact that \(\textit{m}>3\cdot \lambda +1\). However, when \(\textit{n}>\textit{m}\), the adversary’s advantages are close to 0. This is because the rank of a given matrix from either of the two distributions depends only on \(\textit{m}\) when \(\textit{m}\le 3\cdot \lambda +1\). In this case, the proposed rank-based strategy cannot distinguish the SSH distribution from the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\), and the adversary can only guess randomly.

Moreover, the timings of all the experiments in Tables 2, 3, 4 and 5 show that our rank-based analyses for solving the decisional-SH problem and breaking the privacy of AF-PVC\(_{two}\) and AF-PVC\(_{single}\) are highly efficient. Specifically, for the smaller matrix sizes (e.g., \((n, m)=(161, 161)\) in Table 2), our analyses take less than a second.

5 Conclusions

In this paper, we propose an efficient analysis method for solving the decisional-WSH problem and the decisional-SSH problem introduced by Atallah and Frikken [2]. Specifically, our analysis exploits the rank distribution of the matrix to distinguish samples from the WSH distribution (resp. the SSH distribution) \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) from samples from the uniform distribution over \(\mathbb {Z}_{\textit{p}}^{\textit{n}\times \textit{m}}\). Over a wide range of parameters, the adversary’s advantage of our analysis is close to 0.5. Moreover, we employ a similar approach to break the privacy of AF-PVC\(_{two}\) and AF-PVC\(_{single}\). The analysis results show that neither protocol is IND-COA private.

Solving the Search Variant of the SH Problem? Our rank-based analysis can break the decisional-WSH assumption and the decisional-SSH assumption, but this does not imply that we can also break the search versions efficiently. In fact, for breaking the search-WSH assumption and the search-SSH assumption, our rank-based analysis may be regarded as a preprocessing step. To check whether a row of a matrix sampled from \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\) is a row vector from \(\mathbf d _{1}, \ldots , \mathbf d _{\ell }\) or from \(\mathbf u _{1}, \ldots , \mathbf u _{\tau }\), the adversary first replaces the row to be tested by a row vector chosen from \(\mathbb {Z}_{\textit{p}}^{\textit{m}}\) uniformly at random, and then computes the rank of the resulting matrix. According to our analysis in Sect. 3.2, if the obtained rank increases (compared with the rank of the matrix sampled from \(\chi (\textit{p})^{\textit{n}\times \textit{m}}\)), then the tested row is from the row vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\ell }\). This procedure needs to be run at most \(\textit{n}-1\) times to reveal (with overwhelming probability) all the vectors \(\mathbf d _{1}, \ldots , \mathbf d _{\ell }\). However, this result is not equivalent to finding \(\textit{k}_{1}, \ldots , \textit{k}_{\ell }\) (or \(\mathbf A \)). Therefore, how to break the search variant of the SH assumption efficiently remains an interesting open problem.
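The row-testing preprocessing just described can be sketched as follows (a toy Python illustration with parameters of our own choosing; here the structured rows of the WSH sample play the role of \(\mathbf d _{1}, \ldots , \mathbf d _{\ell }\) and the uniform rows that of \(\mathbf u _{1}, \ldots , \mathbf u _{\tau }\)):

```python
import random

random.seed(2)
p = 1_000_003            # a prime; toy stand-in for the modulus
lam = 4                  # toy security parameter lambda
n = 2 * lam + 1
m = 2 * lam + 1

def rank_mod_p(M, p):
    """Rank of M over Z_p (p prime) via Gaussian elimination."""
    M = [[x % p for x in row] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], p - 2, p)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                f = M[r][c]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def sample_wsh(p, lam, m):
    """Rows 0..lam are structured ('d' rows); the remaining lam rows
    are uniform ('u' rows)."""
    a = [[random.randrange(p) for _ in range(lam)] for _ in range(m)]
    ks = [random.randrange(1, p) for _ in range(lam + 1)]
    rows = [[sum(a[i][j] * pow(k, j + 1, p) for j in range(lam)) % p
             for i in range(m)] for k in ks]
    rows += [[random.randrange(p) for _ in range(m)] for _ in range(lam)]
    return rows

def row_increases_rank(M, idx, base_rank):
    """Replace row idx by a fresh uniform row; the rank increases iff the
    original row was one of the linearly dependent structured rows."""
    probe = [row[:] for row in M]
    probe[idx] = [random.randrange(p) for _ in range(m)]
    return rank_mod_p(probe, p) > base_rank

M = sample_wsh(p, lam, m)
base = rank_mod_p(M, p)              # 2*lam with overwhelming probability
flags = [row_increases_rank(M, i, base) for i in range(n)]
# flags marks exactly the lam+1 structured rows as 'd' rows (whp).
```

As noted above, recovering the \(\mathbf d \) rows this way is only a preprocessing step: it does not by itself yield \(\textit{k}_{1}, \ldots , \textit{k}_{\ell }\) or \(\mathbf A \).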