
1 Introduction

Since Ajtai’s seminal work [1] in 1996, lattice-based cryptosystems have become increasingly popular, due both to their potential to resist quantum attacks and to their successful application in constructing important cryptographic primitives, such as hash functions [1, 19, 20, 23], digital signature schemes [4, 9, 13], encryption schemes [3, 11, 14, 25], and fully homomorphic encryption schemes [7, 10].

Another attractive feature of lattice-based cryptosystems is that their average-case security can be based on the worst-case hardness of certain lattice problems, typically approximation variants of the shortest vector problem (SVP) and the shortest independent vectors problem (SIVP).

SVP refers to the problem of finding a shortest non-zero vector in a given lattice, and its hardness has been studied widely [2, 6, 8, 12]. Interestingly, there are three variants of SVP: search-SVP, optimization-SVP, and decisional-SVP, which ask to find a shortest nonzero vector, to find the length of a shortest vector, and to decide whether the shortest vector is shorter than some given number, respectively. It is well known that the three variants of SVP are equivalent to each other (see [22]). In fact, it is obvious that if we can solve search-SVP, then we can solve the other two problems, and it is easy to show the equivalence between decisional-SVP and optimization-SVP. However, reducing search-SVP to optimization-SVP is not an easy task.

The first efficient reduction from search-SVP to optimization-SVP was presented by Kannan [18] in 1987. However, the reduction is not rank-preserving, since it needs the optimization-SVP oracle to handle some lattices of lower rank, besides lattices with the same rank as the original one. Moreover, the reduction invokes the optimization-SVP oracle polynomially many times. In 2013, Hu and Pan [16] revisited the reduction and presented a rank-preserving reduction that solves search-SVP with only one call to the optimization-SVP oracle.

When considering the relation between search-SIVP and optimization-SIVP, the situation becomes a bit more complicated. Search-SIVP refers to the problem of finding n linearly independent vectors in a given n-rank lattice \(\mathcal {L}\) such that the maximal length of the vectors is as small as possible. More precisely, denote by \(\lambda _i(\mathcal {L})\) \((1\le i\le n)\) the successive minima, that is, the minimum length of a set of i linearly independent vectors in \(\mathcal {L}\), where the length of a set is defined as the length of its longest vector. Then the target of search-SIVP is to find n linearly independent vectors with length at most \(\lambda _n(\mathcal {L})\), whereas optimization-SIVP is the problem of finding \(\lambda _n\). It is obvious that optimization-SIVP can be reduced to its search version. However, it seems hard to give a reduction from the search version to the optimization version, since \(\lambda _n\) is only an upper bound on the length of these independent vectors.

In this paper, we consider a lattice problem called the successive minima problem (SMP), which was introduced in [5]. The original SMP in [5], which asks for n linearly independent vectors achieving the successive minima respectively, is the search version of this problem. We naturally consider its optimization version, namely, finding all the values of the successive minima, and study the relation between these two variants. Obviously, a reduction from optimization-SMP to its search version is trivial, but the reverse reduction seems difficult.

By perturbing the original lattice basis carefully as in [16], we transform it into a basis of a new lattice, and we consider the relation between this pair of lattice bases. We find that the coefficient vectors of all successively minimal vectors do not change. Moreover, the successive minima of the new lattice are pairwise distinct, which leads to an algorithm that recovers the coefficient vectors of the successively minimal vectors of the original lattice. As in [16], the reduction from search-SMP to optimization-SMP is rank-preserving. Furthermore, using some results from matrix analysis, we show that our reduction holds for every lattice, no matter whether it is full-rank or not.

Roadmap. The remainder of the paper is organized as follows. In Sect. 2, we give the necessary preliminaries. In Sect. 3, we describe the reduction from search-SMP to its optimization version. Finally, we give a short conclusion in Sect. 4.

2 Preliminaries

We denote by \(\mathbb {Z}, \mathbb {R}, \mathbb {C}, \mathbb {Z}^+\), and \(\mathbb {R}^+\) the integer ring, the real field, the complex field, the set of positive integers, and the set of positive real numbers respectively.

For any \(m\in \mathbb {Z}^+\) and any vector \(v=(v_1,v_2,\cdots ,v_m)^T\in \mathbb {R}^m\), we denote by \(\Vert v\Vert =\sqrt{\sum _{i=1}^mv_i^2}\) its length.

2.1 Lattice and the Successive Minima Problem

Given a matrix \(B=(b_{ij})\in \mathbb {R}^{m\times n}\) with rank n, the lattice \(\mathcal {L}(B)\) spanned by the columns of B is

$$\mathcal {L}(B)=\{Bx=\sum _{i=1}^n x_ib_i|x_i\in \mathbb {Z}\},$$

where \(b_i\) is the ith column of B. We call m and n the dimension and the rank of \(\mathcal {L}(B)\) respectively. The determinant of \(\mathcal {L}(B)\), denoted by \(\det (\mathcal {L}(B))\), is defined as \(\sqrt{\det (B^{T}B)}\). It is easy to see that when B is full-rank (\(n=m\)), the determinant becomes \(|\det (B)|\).
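
As a small illustration (a sketch with a toy basis of our choosing, not from the paper), the determinant of a non-full-rank lattice can be computed via its Gram matrix:

```python
# A minimal sketch (toy basis): det(L(B)) = sqrt(det(B^T B)) for a rank-2
# lattice embedded in R^3, where B^T B is the 2 x 2 Gram matrix.
import numpy as np

B = np.array([[1, 0],
              [1, 2],
              [0, 1]], dtype=float)   # m = 3, n = 2, rank 2

gram = B.T @ B                        # the n x n Gram matrix B^T B
print(np.sqrt(np.linalg.det(gram)))   # det(L(B)) = sqrt(6) ~ 2.449
```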

Definition 1

(Successive Minima). For an n-rank lattice \(\mathcal {L}(B)\) with \(B\in \mathbb {R}^{m\times n}\), and \(i\in \{1,2,\cdots ,n\}\), we define the ith successive minimum as

$$ \lambda _i(\mathcal {L}(B)) = \inf \Big \{r\in \mathbb {R}^+|\dim (\text {span}(\overline{B(0,r)}\bigcap \mathcal {L}(B)))\ge i\Big \}, $$

where \(\overline{B(0,r)}\) is the closed ball centered at 0 with radius \(r\in \mathbb {R}^+\), i.e., \(\overline{B(0,r)}= \{x\in \mathbb {R}^m|\Vert x\Vert \le r\}\).

Simply speaking, \(\lambda _i(\mathcal {L}(B))\) is the infimum of the maximal length of i linearly independent vectors in \(\mathcal {L}(B)\).
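
For intuition, the following brute-force sketch (a toy example of ours; it runs in exponential time and uses an ad hoc enumeration radius R that happens to suffice for this basis) computes the successive minima of a small lattice by sorting short lattice vectors and greedily keeping those that increase the rank of the span:

```python
# Brute-force successive minima of a toy lattice (illustrative only).
import itertools
import numpy as np

B = np.array([[2, 1],
              [0, 2]], dtype=float)
n, R = B.shape[1], 4                  # R: ad hoc enumeration radius

vecs = [B @ np.array(x, dtype=float)
        for x in itertools.product(range(-R, R + 1), repeat=n) if any(x)]
vecs.sort(key=np.linalg.norm)         # shortest lattice vectors first

chosen, minima = [], []
for v in vecs:                        # greedily keep rank-increasing vectors
    if np.linalg.matrix_rank(np.column_stack(chosen + [v])) > len(chosen):
        chosen.append(v)
        minima.append(np.linalg.norm(v))
    if len(chosen) == n:
        break
print(minima)                         # [lambda_1, lambda_2] = [2.0, sqrt(5)]
```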

It is well-known that the successive minima can be achieved, that is, there exist n linearly independent lattice vectors \(v_1, v_2, \cdots , v_n\in \mathcal {L}(B)\) such that \(\Vert v_i\Vert =\lambda _i(\mathcal {L}(B))\). Therefore, we can define

Definition 2

(Successively Minimal Vectors). Given a lattice basis \(B\in \mathbb {R}^{m\times n}\) with rank n, any n linearly independent lattice vectors \(v_1, v_2, \cdots , v_n\in \mathcal {L}(B)\) satisfying \(\Vert v_i\Vert =\lambda _i(\mathcal {L}(B))\) are called the successively minimal vectors of \(\mathcal {L}(B)\).

In [5, 21], the Successive Minima Problem is defined as below:

Definition 3

(SMP\(_\gamma \)). Given a lattice \(\mathcal {L}(B)\) and a constant \(\gamma \ge 1\), output n linearly independent vectors \(v_1,v_2,\dots ,v_n\in \mathcal {L}(B)\) such that \(\Vert v_i\Vert \le \gamma \lambda _i(\mathcal {L}(B))\) for \(1\le i\le n\).

When \(\gamma = 1\), it becomes Search-SMP:

Definition 4

(Search-SMP). Given a lattice \(\mathcal {L}(B)\), find a set of the successively minimal vectors in \(\mathcal {L}(B)\).

It is proved in [21] that SMP is equivalent to SIVP and to the closest vector problem (CVP). We can define its optimization version, similarly to SVP, as follows:

Definition 5

(Optimization-SMP). Given a lattice \(\mathcal {L}(B)\), find the successive minima of \(\mathcal {L}(B)\).

2.2 Linear Algebra

For a matrix \(A\in \mathbb {C}^{m\times n}\), we denote by \(A^*\) its conjugate transpose and by \(A^T\) its transpose. The singular values of A are defined to be the nonnegative square roots of the eigenvalues of \(A^*A\).

Using the singular values or the eigenvalues of matrices, we have the following lemma, stated in [15]:

Lemma 1

(Rayleigh quotient). Let \(A\in \mathbb {C}^{m\times n}\) and let \(0\le \mu _1\le \mu _2\le \cdots \le \mu _n\) be the eigenvalues of \(A^*A\). Then for any \(x = (x_i)\in \mathbb {C}^n\backslash \{0\}\), we have

$$\mu _1\le \frac{x^*A^*Ax}{x^*x}\le \mu _n.$$
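
As a quick numeric illustration (a sketch of ours with a random real matrix, not part of the paper), Lemma 1 can be checked directly:

```python
# Numeric check of the Rayleigh quotient bounds of Lemma 1 (illustrative).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
mu = np.linalg.eigvalsh(A.T @ A)       # eigenvalues mu_1 <= ... <= mu_n of A^*A
x = rng.standard_normal(3)             # a nonzero test vector

rayleigh = (x @ A.T @ A @ x) / (x @ x)
assert mu[0] <= rayleigh <= mu[-1]     # the bounds of Lemma 1
```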

We present a lower bound for the smallest singular value of a matrix in the following lemma, whose proof can be found in [24].

Lemma 2

Given a matrix \(A=(a_{ij})\in \mathbb {R}^{n\times n}\) with \(\det (A)\ne 0\), let \(0\le \sigma _1\le \sigma _2\le \dots \le \sigma _n\) be the singular values of A. Then the smallest singular value \(\sigma _1\) satisfies the inequality:

$$\sigma _1 \ge (\frac{n-1}{\Vert A\Vert _F^2})^{\frac{n-1}{2}}|\det (A)|,$$

where \(\Vert A\Vert _F = (\sum _{i,j=1}^n|a_{ij}|^2)^\frac{1}{2}\) is the Frobenius norm of A.
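
The bound can likewise be checked numerically (a sketch of ours; the random integer matrix is resampled until it is nonsingular):

```python
# Numeric sanity check of the lower bound of Lemma 2 (illustrative).
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.integers(-5, 6, size=(n, n)).astype(float)
while abs(np.linalg.det(A)) < 0.5:     # resample until det(A) != 0
    A = rng.integers(-5, 6, size=(n, n)).astype(float)

sigma_min = np.linalg.svd(A, compute_uv=False).min()
fro2 = np.linalg.norm(A, 'fro') ** 2
bound = ((n - 1) / fro2) ** ((n - 1) / 2) * abs(np.linalg.det(A))
assert sigma_min >= bound              # the lower bound of Lemma 2
```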

Finally, we give a lemma to illustrate the perturbation bound for the determinant of a matrix [17]:

Lemma 3

Let \(B,C\in \mathbb {C}^{n\times n}\), then

$$ |\det (B+C)-\det (B)|\le n\Vert C\Vert _F\max \{\Vert B\Vert _F,\Vert B+C\Vert _F\}^{n-1}. $$
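
Again, the bound is easy to check numerically (a sketch of ours with a random matrix and a small perturbation):

```python
# Numeric sanity check of the determinant perturbation bound of Lemma 3.
import numpy as np

rng = np.random.default_rng(2)
n = 4
B = rng.standard_normal((n, n))
C = 1e-3 * rng.standard_normal((n, n))  # a small perturbation

lhs = abs(np.linalg.det(B + C) - np.linalg.det(B))
rhs = n * np.linalg.norm(C, 'fro') * max(np.linalg.norm(B, 'fro'),
                                         np.linalg.norm(B + C, 'fro')) ** (n - 1)
assert lhs <= rhs                       # the perturbation bound of Lemma 3
```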

3 The Search-SMP Is Equivalent to Optimization-SMP

It is obvious that if we can solve search-SMP, then we can solve optimization-SMP easily. To show the equivalence between the two problems, what we really need is a reduction from search-SMP to its optimization version.

In this section, we give such a reduction, which consists of three main steps. Suppose we want to find the successively minimal vectors of a given lattice \(\mathcal {L}(B)\). We first construct a new lattice basis \(B_\epsilon \) by perturbing the original basis B. Using the optimization-SMP oracle, we then obtain the successive minima of \(\mathcal {L}(B_\epsilon )\). Using these successive minima, we can efficiently recover the coefficients of the successively minimal vectors under the basis \(B_\epsilon \). With the recovered coefficients, we finally obtain the successively minimal vectors of \(\mathcal {L}(B)\).
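
The whole reduction can be summarized by the following skeleton (a structural sketch of ours; `perturb_basis`, `optimization_smp_oracle`, and `recover_coefficients` are hypothetical placeholders for Step (1) of the proof below, the oracle \(\mathcal {O}\), and the base-p recovery of Step (4) respectively):

```python
# Structural outline of the reduction; the three callables are placeholders.
import numpy as np

def search_smp(B, perturb_basis, optimization_smp_oracle, recover_coefficients):
    """Find successively minimal vectors of L(B) with one oracle call."""
    B_eps = perturb_basis(B)                 # Step (1): the perturbed basis
    minima = optimization_smp_oracle(B_eps)  # lambda_i(L(B_eps)), 1 <= i <= n
    xs = [recover_coefficients(B_eps, lam)   # Step (4): read x_i off the
          for lam in minima]                 # base-p expansion of lam^2
    return [np.asarray(B) @ np.asarray(x) for x in xs]  # the vectors B x_i
```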

First we present a lemma to show that the coefficients of the successively minimal vectors under the basis can be well bounded.

Lemma 4

Given a lattice basis \(B=(b_{ij})\in \mathbb {Z}^{m\times n}\) with rank n, let \(M= \max \{|b_{ij}|\}\). For any \(x=(x_i)\in \mathbb {Z}^n\) such that \(\Vert Bx\Vert \le \lambda _n(\mathcal {L}(B))\), we have

$$x_i^2\le 2^{\frac{n+1}{2}}n^{\frac{n-1}{2}}(mM^2)^n. $$

Proof

Note that when \(n = 1\), the result is trivial, so we assume \(n\ge 2\).

It is easy to check that \(\lambda _i(\mathcal {L}(B))^2\le \max \{\Vert b_i\Vert ^2\}\le mM^2,\) so for any \(x=(x_i)\in \mathbb {Z}^n\) such that \(\Vert Bx\Vert \le \lambda _n(\mathcal {L}(B))\), we have

$$\Vert Bx\Vert ^2\le mM^2.$$

Considering the Gram matrix \(A=B^TB\), that is,

$$A = \begin{pmatrix} b_{11}^2+b_{21}^2+\cdots +b_{m1}^2 &{} \cdots &{} b_{11}b_{1n}+b_{21}b_{2n}+\cdots +b_{m1}b_{mn}\\ \vdots &{} \ddots &{} \vdots \\ b_{1n}b_{11}+b_{2n}b_{21}+\cdots +b_{mn}b_{m1} &{} \cdots &{} b_{1n}^2+b_{2n}^2+\cdots +b_{mn}^2 \end{pmatrix}, $$

by Lemma 1, we know that

$$0<\mu _1(A)=\mu _1(B^TB)\le \frac{x^TB^TBx}{x^Tx}=\frac{\Vert Bx\Vert ^2}{\Vert x\Vert ^2}.$$

Together with \(\Vert Bx\Vert ^2\le mM^2,\) we have

$$\Vert x\Vert ^2\le \frac{mM^2}{\mu _1(A)}.$$

So for each \(i(1\le i\le n)\), we have

$$\begin{aligned} |x_i|\le \frac{\sqrt{m}M}{\sqrt{\mu _1(A)}}. \end{aligned}$$
(1)

Note that the singular values of \(A=B^TB\) are in fact its eigenvalues. By the lower bound on the smallest singular value in Lemma 2, we know

$$\begin{aligned} \mu _1(A)\ge (\frac{n-1}{\Vert A\Vert _F^2})^{\frac{n-1}{2}}|\det (A)|. \end{aligned}$$
(2)

Since the entries of B are integers and B has rank n, \(\det (A)=\det (B^TB)\) is a nonzero integer, so

$$ |\det (A)|\ge 1 > \frac{1}{2}.$$

Notice that the absolute values of the entries of B are bounded by M, so the absolute values of the entries of A are bounded by \(mM^2\), which implies that

$$\Vert A\Vert _F^2\le n^2(mM^2)^2=(nmM^2)^2.$$

Since \(n\ge 2\), we have \(n-1\ge \frac{n}{2}\). Hence, we have:

$$ \mu _1 \ge \frac{1}{2}(\frac{1}{2nm^2M^4})^{\frac{n-1}{2}} $$

By (1),

$$ x_i^2\le 2mM^2(2nm^2M^4)^{\frac{n-1}{2}}=2^{\frac{n+1}{2}}n^{\frac{n-1}{2}}(mM^2)^n. $$
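
As a sanity check (a sketch of ours with a toy basis), one can enumerate all short vectors of a small integer lattice and verify the bound of Lemma 4; since the proof shows \(\lambda _n(\mathcal {L}(B))^2\le mM^2\), it suffices to test the vectors with \(\Vert Bx\Vert ^2\le mM^2\):

```python
# Numeric sanity check of Lemma 4 on a toy integer basis (illustrative).
import itertools
import numpy as np

B = np.array([[3, 1],
              [0, 2],
              [1, 1]], dtype=float)
m, n = B.shape
M = int(np.abs(B).max())
bound = 2 ** ((n + 1) / 2) * n ** ((n - 1) / 2) * (m * M ** 2) ** n

for x in itertools.product(range(-5, 6), repeat=n):   # ad hoc search box
    x = np.array(x, dtype=float)
    if x.any() and np.linalg.norm(B @ x) ** 2 <= m * M ** 2:
        assert (x ** 2 <= bound).all()                # the bound of Lemma 4
print("Lemma 4 bound:", bound)
```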

Remark 1

We can also bound each component of x by considering \(B^TBx\) and applying Cramer’s rule and the Hadamard inequality; the resulting upper bound is \(n^{n/2}m^{n+1/2}M^{2n}\), which is not as tight as the bound in Lemma 4.

In the following, we describe our reduction in detail.

Theorem 1

Given an oracle \(\mathcal {O}\) that can solve optimization-SMP for any lattice \(\mathcal {L}(B')\) with basis \(B'\in \mathbb {Z}^{m\times n}\), there is a deterministic polynomial-time algorithm that solves search-SMP for \(\mathcal {L}(B)\) on input a basis \(B\in \mathbb {Z}^{m\times n}\).

Proof

We will complete the proof in the following four steps:

  (1) First we construct a matrix \(B_{\epsilon }\in \mathbb {Z}^{m\times n}\):

    $$ B_{\epsilon } = \epsilon _{n+1}B+ \begin{pmatrix} \epsilon _1 &{} \epsilon _2 &{} \dots &{} \epsilon _n\\ 0 &{} 0 &{} \dots &{} 0\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \dots &{} 0 \end{pmatrix}$$

    where the \(\epsilon _i\)’s are determined as below (a code sketch of this construction is given after the proof).

    • Let \(M_1 =2^{\frac{n+1}{4}}n^{\frac{n-1}{4}}m^{\frac{n}{2}}(M+1)^n\) and \(M_2 = \sqrt{m}(M+1)\), where \(M=\max \{|b_{ij}|\}\). Then we choose

      $$p = 2\max \{M_2^2,2M_1M_2,2M_1^2\}+1.$$

    Note that \(\log p=\text {poly}(n,m,\log M)\), where \(\text {poly}(n,m,\log M)\) stands for a polynomial in n, m, and \(\log M\).

    • Then we choose \(n+1\) positive integers \(a_1<a_2<\dots<a_n<a_{n+1}\) such that all \(a_i+a_j(1\le i\le j\le n+1)\)’s are distinct and \(a_{n+1}\) is bounded by \(\text {poly}(n)\). As in Lemma 1 of [16], we can first choose

    $$a_i=i^2+(2(n+1)^2)i+4(n+1)^4,$$

    for \(i=1, 2, \cdots ,n\). Then we let

    $$a_{n+1}=3a_n.$$

    By Lemma 1 in [16], all \(a_i+a_j(1\le i\le j\le n)\)’s are distinct. Together with the fact that \(a_{n+1}>2a_{n}\), it is easy to see that \(a_i+a_j(1\le i\le j\le n+1)\)’s are distinct and \(a_{n+1}\) is bounded by \(\text {poly}(n)\). Finally we let

    $$\epsilon _i = p^{a_i}.$$
    • Notice that for every entry \(b_{\epsilon ij}\) in \(B_\epsilon \), \(\log |b_{\epsilon ij}|=\text {poly}(n,m,\log M)\). Hence \(B_\epsilon \) can be constructed efficiently.

  (2) Next we claim that the columns of \(B_\epsilon \) are linearly independent, so \(B_\epsilon \) is a basis of the lattice \(\mathcal {L}(B_\epsilon )\). We prove this claim by showing that \(\det (B_\epsilon ^TB_\epsilon )\ne 0\). In the following, we prove that

    $$|\det ((\frac{1}{\epsilon _{n+1}}B_{\epsilon })^T(\frac{1}{\epsilon _{n+1}}B_{\epsilon }))|=|\det (\frac{1}{\epsilon _{n+1}^2}B_\epsilon ^TB_\epsilon )|>\frac{1}{2}.$$

    Notice that the absolute values of entries of \(\frac{1}{\epsilon _{n+1}}B_{\epsilon }\) can be bounded by \(M+1\), then the absolute values of entries of \((\frac{1}{\epsilon _{n+1}}B_{\epsilon })^T(\frac{1}{\epsilon _{n+1}}B_{\epsilon })\) are bounded by \(m(M+1)^2\). Note that

    $$ \begin{aligned} (\frac{1}{\epsilon _{n+1}}B_{\epsilon })^T(\frac{1}{\epsilon _{n+1}}B_{\epsilon }) =B^TB+ \frac{1}{\epsilon _{n+1}}B^T \begin{pmatrix} \epsilon _1 &{} \epsilon _2 &{} \cdots &{} \epsilon _n\\ 0 &{} 0 &{} \cdots &{} 0\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} 0 \end{pmatrix}+\\ \frac{1}{\epsilon _{n+1}} \begin{pmatrix} \epsilon _1 &{} 0 &{}\cdots &{} 0\\ \epsilon _2 &{} 0 &{}\cdots &{} 0\\ \vdots &{} \vdots &{}\ddots &{}\vdots \\ \epsilon _n &{} 0 &{}\cdots &{} 0 \end{pmatrix}B +\frac{1}{\epsilon _{n+1}^2}\begin{pmatrix} \epsilon _1^2 &{} \epsilon _1\epsilon _2 &{} \cdots &{} \epsilon _1\epsilon _n\\ \epsilon _2\epsilon _1 &{} \epsilon _2^2 &{} \cdots &{} \epsilon _2\epsilon _n\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \epsilon _n\epsilon _1 &{} \epsilon _n\epsilon _2 &{} \cdots &{} \epsilon _n^2 \end{pmatrix} \end{aligned} $$

    Let \(A = B^TB\) and \(C = (\frac{1}{\epsilon _{n+1}}B_{\epsilon })^T(\frac{1}{\epsilon _{n+1}}B_{\epsilon }) - A\). Then each entry of C is bounded by \(2M\frac{\epsilon _n}{\epsilon _{n+1}} + \frac{\epsilon _n^2}{\epsilon _{n+1}^2}\le (2M+1)\frac{\epsilon _n}{\epsilon _{n+1}}\le 2(M+1)\frac{\epsilon _n}{\epsilon _{n+1}}\), which implies

    $$\Vert C\Vert _F\le 2n(M+1)\frac{\epsilon _n}{\epsilon _{n+1}}.$$

    By Lemma 3, we have

    $$\begin{aligned} |\det (A+C)-\det (A)|&\le n\Vert C\Vert _F\max \{\Vert A\Vert _F,\Vert A+C\Vert _F\}^{n-1}\\&\le 2n^2(M+1)\frac{\epsilon _n}{\epsilon _{n+1}}(nm(M+1)^2)^{n-1}\\&=2n^{n+1}m^{n-1}(M+1)^{2n-1}\frac{\epsilon _n}{\epsilon _{n+1}}. \end{aligned}$$

    Since \(a_{n+1}-a_n=2a_n\ge 2n\) and \(p> M_2^2=m(M+1)^2\), we have

    $$ \begin{aligned} p^{a_{n+1}-a_n}&\ge p^{2n}\\&>(m(M+1)^2)^{2n}\\&\ge 4m^{2n}(M+1)^{2n}\\&> 4m^{n-1}n^{n+1}(M+1)^{2n-1}. \end{aligned} $$

    That is

    $$\frac{\epsilon _n}{\epsilon _{n+1}}<\frac{1}{4n^{n+1}m^{n-1}(M+1)^{2n-1}}.$$

    Then we immediately have \(|\det (A+C)-\det (A)|<\frac{1}{2}\), which is in fact

    $$\Big |\det \Big ((\frac{1}{\epsilon _{n+1}}B_\epsilon )^T(\frac{1}{\epsilon _{n+1}}B_\epsilon )\Big )-\det (B^TB)\Big |<\frac{1}{2}.$$

    Note that \(\det (B^TB)\) is a nonzero integer, so we finally have

    $$\Big |\det \Big ((\frac{1}{\epsilon _{n+1}}B_\epsilon )^T(\frac{1}{\epsilon _{n+1}}B_\epsilon )\Big )\Big |>\frac{1}{2}.$$

    Consequently, for any vector x satisfying \(\Vert B_{\epsilon }x\Vert \le \lambda _n(\mathcal {L}(B_\epsilon ))\), since the entries of \(\frac{1}{\epsilon _{n+1}}B_{\epsilon }\) are bounded by \(M+1\) and \(|\det ((\frac{1}{\epsilon _{n+1}}B_{\epsilon })^T(\frac{1}{\epsilon _{n+1}}B_{\epsilon }))|>\frac{1}{2}\), the argument in the proof of Lemma 4 shows that all the components \(|x_i|\) of x are bounded by \(M_1\).

  (3) Moreover, we claim that if n linearly independent vectors \(B_\epsilon x_1, B_\epsilon x_2, \cdots , B_\epsilon x_n\in \mathcal {L}(B_\epsilon )\) form a set of the successively minimal vectors in \(\mathcal {L}(B_\epsilon )\), where \(x_1\), \(x_2\), \(\cdots \), \(x_n\in \mathbb {Z}^n\), that is, \(\Vert B_{\epsilon }x_i\Vert = \lambda _i(\mathcal {L}(B_{\epsilon }))\) for \(1\le i\le n\), then \(Bx_1, Bx_2, \cdots , Bx_n\in \mathcal {L}(B)\) also form a set of the successively minimal vectors in \(\mathcal {L}(B)\).

    • First note that since \(B_\epsilon x_1, B_\epsilon x_2, \cdots , B_\epsilon x_n\in \mathcal {L}(B_\epsilon )\) are linearly independent and \(B_\epsilon \) is a basis, the coefficient vectors \(x_1, x_2, \cdots , x_n\) are linearly independent, which (since B has rank n) implies that \(Bx_1, Bx_2, \cdots , Bx_n\in \mathcal {L}(B)\) are linearly independent.

    • Second we will prove that \(\Vert Bx_i\Vert = \lambda _i(\mathcal {L}(B))\) for \(1\le i\le n\). For contradiction, let l be the smallest index such that \(\Vert Bx_i\Vert = \lambda _i(\mathcal {L}(B))\) for \(1\le i < l\), whereas \(\Vert Bx_l\Vert > \lambda _l(\mathcal {L}(B))\). Since B is an integer matrix, both \(\Vert Bx_l\Vert ^2\) and \(\lambda _l(\mathcal {L}(B))^2\) are integers, so we have

      $$\begin{aligned} \Vert Bx_l\Vert ^2\ge \lambda _l(\mathcal {L}(B))^2+1. \end{aligned}$$
      (3)

    By the definition of successively minimal vectors, there must exist vectors \(y_i=(y_{i1},y_{i2},\cdots ,y_{in})^T\in \mathbb {Z}^n, 1\le i\le l\) such that \(\Vert By_i\Vert =\lambda _i(\mathcal {L}(B))\) and \(By_1, By_2, \cdots , By_l\) are linearly independent.

    Considering \(B_\epsilon y_i\), note that

    $$\begin{aligned} \Vert B_\epsilon y_i\Vert ^2=\epsilon _{n+1}^2\Vert By_i\Vert ^2+\sum _{j=1}^{n}y_{ij}^2\epsilon _j^2+\sum _{j=1}^{n}2c(y_i)y_{ij}\epsilon _j\epsilon _{n+1}+\sum _{1\le j<k\le n}2y_{ij}y_{ik}\epsilon _j\epsilon _k, \end{aligned}$$
    (4)

    where \(c(y_i) = \sum _{j=1}^nb_{1j}y_{ij}\) for any \(y_i\in \mathbb {Z}^n\). Since \(\Vert By_i\Vert =\lambda _i(\mathcal {L}(B))\), we know that

    $$ \Vert By_i\Vert \le M_2.$$

    By Lemma 4, we have for \(1\le j\le n\)

    $$|y_{ij}| \le M_1.$$

    Noting that \(|c(y_i)|\le \Vert By_i\Vert \), we also have

    $$ |c(y_i)|\le M_2.$$

    By the choice of p, we know that all the coefficients \(\Vert By_i\Vert ^2, y_{ij}^2, 2c(y_i)y_{ij}, 2y_{ij}y_{ik}\) of the \(\epsilon _j\epsilon _k\)'s in Eq. (4) are in the interval \((-\lfloor \frac{p}{2}\rfloor ,\lfloor \frac{p}{2}\rfloor )\). Since the \(\epsilon _j\epsilon _k\)'s are distinct powers of p, viewing \(\Vert B_\epsilon y_i\Vert ^2\) as a number written in base p, it is easy to check that

    $$ \begin{aligned} \Vert B_\epsilon y_i\Vert ^2&<\epsilon _{n+1}^2\Vert By_i\Vert ^2+\frac{1}{2}\epsilon _{n+1}^2\\&\le \epsilon _{n+1}^2(\lambda _l(\mathcal {L}(B))^2+\frac{1}{2}). \end{aligned} $$

    However, by Eq. (3), we know that

    $$ \begin{aligned} \Vert B_\epsilon x_l\Vert ^2&>\epsilon _{n+1}^2\Vert Bx_l\Vert ^2-\frac{1}{2}\epsilon _{n+1}^2\\&\ge \epsilon _{n+1}^2(\lambda _l(\mathcal {L}(B))^2+1-\frac{1}{2})\\&=\epsilon _{n+1}^2(\lambda _l(\mathcal {L}(B))^2+\frac{1}{2}). \end{aligned} $$

    Note that \(B_\epsilon y_1, B_\epsilon y_2, \cdots , B_\epsilon y_l\) are linearly independent and \(\lambda _l(\mathcal {L}(B_{\epsilon }))=\Vert B_\epsilon x_l\Vert > \Vert B_\epsilon y_i\Vert \) for \(1\le i\le l\), which contradicts the definition of the successive minima. Hence, for \(1\le i\le n\), we have

    $$\Vert Bx_i\Vert = \lambda _i(\mathcal {L}(B)).$$
  (4) Finally, we recover all the successively minimal vectors as follows. Querying the oracle \(\mathcal {O}\) with \(B_\epsilon \), we obtain \(\lambda _i(\mathcal {L}(B_\epsilon ))\), \(1\le i\le n\). We next show that we can efficiently find \(x_i\in \mathbb {Z}^n\) such that \(\Vert B_\epsilon x_i\Vert =\lambda _i(\mathcal {L}(B_\epsilon ))\) from the values of \(\lambda _i(\mathcal {L}(B_\epsilon ))\), \(1\le i\le n\).

    • Let \(x_i=(x_{i1},x_{i2},\dots ,x_{in})^T\in \mathbb {Z}^n\) satisfy

      $$ \lambda _i(\mathcal {L}(B_\epsilon ))^2=\Vert B_\epsilon x_i\Vert ^2. $$
    • First note that \(\log (\lambda _n(\mathcal {L}(B_\epsilon )))\) is bounded by \(\text {poly}(m,n,\log M)\), and by Lemma 4, we know that \(\log |x_{ij}|\) can also be bounded by \(\text {poly}(m,n,\log M)\).

    • Second we expand \(\Vert B_\epsilon x_i\Vert ^2\) as follows:

      $$\begin{aligned} \Vert B_\epsilon x_i\Vert ^2=\epsilon _{n+1}^2\Vert Bx_i\Vert ^2+\sum _{j=1}^{n}x_{ij}^2\epsilon _j^2+\sum _{j=1}^{n}2c(x_i)x_{ij}\epsilon _j\epsilon _{n+1}+\sum _{1\le j<k\le n}2x_{ij}x_{ik}\epsilon _j\epsilon _k. \end{aligned}$$
      (5)

    Similarly, since \(\Vert B_\epsilon x_i\Vert =\lambda _i(\mathcal {L}(B_\epsilon ))\), we know that \(\Vert Bx_i\Vert = \lambda _i(\mathcal {L}(B))\). As discussed in Step (3), all the coefficients \(\Vert Bx_i\Vert ^2\), \(x_{ij}^2\), \(2c(x_i)x_{ij}\), \(2x_{ij}x_{ik}\) of \(\epsilon _j\epsilon _k\) in Eq. (5) are in the interval \((-\lfloor \frac{p}{2}\rfloor ,\lfloor \frac{p}{2}\rfloor )\). Since the \(\epsilon _j\epsilon _k\)'s are distinct powers of p, all these coefficients can be recovered in \(\text {poly}(m,n,\log M)\) time by Lemma 2 in [16]. More precisely, we can recover all \(x_{ij}^2\) and \(x_{ij}x_{il}\), \(j\ne l\), for each \(x_i\). For each \(x_i\), let \(k_i=\min \{j\mid x_{ij}\ne 0\}\); we can fix \(x_{ik_i}\) to be positive, that is, \(x_{ik_i}=\sqrt{x_{ik_i}^2}\) (this loses no generality, since \(-x_i\) gives a lattice vector of the same length). For the remaining \(x_{ij}\), we recover their absolute values from \(x_{ij}^2\) and their signs from the signs of \(x_{ik_i}x_{ij}\). This can be done in \(\text {poly}(m,n,\log M)\) time (a code sketch of this recovery is given after Remark 2).

    • After recovering \(x_1, x_2, \cdots , x_n\in \mathbb {Z}^n\) such that \(\Vert B_{\epsilon }x_i\Vert = \lambda _i(\mathcal {L}(B_{\epsilon }))\), \(1\le i\le n\), we compute \(Bx_1, Bx_2, \cdots , Bx_n\in \mathcal {L}(B)\). By Step (3), they form a set of the successively minimal vectors in \(\mathcal {L}(B)\).

The whole reduction above runs in \(\text {poly}(m,n,\log M)\) time. The proof is completed.
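
For concreteness, here is a minimal Python sketch of the construction in Step (1) (our illustration; we round p up to the next integer, which slightly enlarges the paper's choice but keeps all \(\epsilon _i=p^{a_i}\) integral, and Python's arbitrary-precision integers keep the arithmetic exact):

```python
# A sketch of Step (1): build B_epsilon = eps_{n+1} * B + E.
# Assumption: B is given as a list of m rows of n integers.
import math

def perturb_basis(B):
    m, n = len(B), len(B[0])
    M = max(abs(b) for row in B for b in row)
    M1 = 2 ** ((n + 1) / 4) * n ** ((n - 1) / 4) * m ** (n / 2) * (M + 1) ** n
    M2 = math.sqrt(m) * (M + 1)
    p = 2 * math.ceil(max(M2 ** 2, 2 * M1 * M2, 2 * M1 ** 2)) + 1
    a = [i ** 2 + 2 * (n + 1) ** 2 * i + 4 * (n + 1) ** 4
         for i in range(1, n + 1)]
    a.append(3 * a[-1])                          # a_{n+1} = 3 * a_n
    eps = [p ** ai for ai in a]                  # eps_i = p^{a_i}, exact ints
    B_eps = [[eps[n] * b for b in row] for row in B]
    for j in range(n):                           # add eps_{j+1} to the first row
        B_eps[0][j] += eps[j]
    return B_eps, p, a
```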

By Theorem 1, we finally have:

Corollary 1

Search-SMP is equivalent to optimization-SMP.

Remark 2

In our proof of the main theorem, we use the base-p expansion to recover all the successively minimal vectors. An obvious observation is that all the values \(\Vert B_\epsilon x_i\Vert \) must be different, since equal values would have the same base-p expansion with digits in the interval \((-\lfloor \frac{p}{2}\rfloor ,\lfloor \frac{p}{2}\rfloor )\). That is, when we add these perturbations to a given lattice basis B to transform it into \(B_\epsilon \), the successive minima \(\lambda _i(\mathcal {L}(B_\epsilon ))\) become pairwise distinct.
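
To make the digit-recovery step concrete, the following sketch (our illustration; it takes the exact integer \(\lambda _i(\mathcal {L}(B_\epsilon ))^2\) together with p and the exponents \(a_1,\dots ,a_{n+1}\) from Step (1), e.g. as returned by the `perturb_basis` sketch above) reads the coefficient vector \(x_i\) off the balanced base-p expansion of Eq. (5):

```python
# A sketch of the base-p recovery of Step (4).
import math

def balanced_digits(N, p):
    """Balanced base-p expansion: N = sum_e d[e] * p**e with |d[e]| < p/2."""
    d, e = {}, 0
    while N:
        r = N % p
        if r > p // 2:
            r -= p                     # shift the digit into (-p/2, p/2)
        if r:
            d[e] = r
        N = (N - r) // p
        e += 1
    return d

def recover_coefficients(lam_sq, p, a):
    """Recover x from lam_sq = ||B_eps x||^2, first nonzero entry positive."""
    n = len(a) - 1
    d = balanced_digits(lam_sq, p)
    absx = [math.isqrt(d.get(2 * a[j], 0)) for j in range(n)]  # digit at 2a_j is x_j^2
    k = next(j for j in range(n) if absx[j])                   # the index k_i
    x = [0] * n
    x[k] = absx[k]                     # fix x_k positive
    for j in range(k + 1, n):
        if absx[j]:
            cross = d[a[k] + a[j]]     # digit at a_k + a_j equals 2 * x_k * x_j
            x[j] = absx[j] if cross > 0 else -absx[j]
    return x
```

Applying `recover_coefficients` to each \(\lambda _i(\mathcal {L}(B_\epsilon ))^2\) and multiplying by B then yields a set of successively minimal vectors of \(\mathcal {L}(B)\).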

4 Conclusions

In this paper, we revisit the successive minima problem (SMP) in lattices and propose a rank-preserving polynomial-time reduction from search-SMP to optimization-SMP that makes only one call to the optimization-SMP oracle, which establishes the equivalence between search-SMP and its optimization version.