
1 Introduction

Coppersmith’s methods for solving univariate modular polynomials [5] and bivariate integer polynomials [4] enjoy prevalent cryptographic applications, such as breaking the RSA cryptosystem and many of its variant schemes [1, 12, 14, 16, 18–20], attacking the multi-prime \(\varPhi \)-hiding assumption [9, 21], recovering secret information from various kinds of pseudorandom generators [2, 6, 10], and analyzing the security of some homomorphic encryption schemes [22]. The essence of this famed algorithm is to find integer linear combinations of polynomials which share a common root modulo a certain integer. The derived polynomials possess small coefficients and can be transformed into ones holding true over the integers. One can then extract the desired roots using standard root-finding algorithms.

A noted theorem of Fermat addresses those integers which can be expressed as the sum of two squares. This property relies on the factorization of the integer, from which a sum-of-two-squares decomposition (if it exists) can be efficiently computed [8]. Recently, Gutierrez et al. [7] gave an algorithm to recover a decomposition of an integer \(N=rA^2+sB^2\), where r, s are known integers and A, B are the wanted unknowns. When approximations \(A_0,B_0\) to A, B are given, their first algorithm can recover the two addends under the condition that the approximation errors \(|A-A_0|,|B-B_0|\) are at most \(N^{\frac{1}{6}}\).

In this paper, we first illustrate a method, based on Coppersmith’s technique, to solve a certain bivariate modular polynomial \(f_N(x,y)=a_1x^2+a_2x+a_3y^2+a_4y+a_0\). The trick for solving this kind of polynomial can be directly used to recover the two addends A, B of \(N=rA^2+sB^2\) from their approximations with an error tolerance of \(N^{\frac{1}{4}}\). Least-significant-bits exposure attacks on A and B can also be quickly executed by applying the method for this type of polynomial. Next, we present a better method for recovering A, B from their approximations \(A_0,B_0\). This improved approach transforms the problem into seeking the coordinates of a certain vector in a lattice we build. The problem of finding these coordinates can in turn be reduced to extracting the small roots of a different bivariate modular polynomial \(f'_N(x,y)=b_1x^2+b_2x+b_3y^2+b_4y+b_5xy+b_0\). The error bound derived in this way is \(N^{\frac{1}{3}}\).

The rest of this paper is organized as follows. In Sect. 2, we recall some preliminaries. In Sect. 3, we first describe the method to solve \(f_N(x,y)=a_1x^2+a_2x+a_3y^2+a_4y+a_0\) and then give our derivation of the error bound \(N^{\frac{1}{4}}\) as well as the least-significant-bits exposure attacks on A, B, both of which are based on finding the small roots of \(f_N(x,y)\). In Sect. 4, we elaborate a better method for recovering the addends of a sum of two squares; the theoretical error bound derived by this approach is \(N^{\frac{1}{3}}\). Finally, we give some conclusions in Sect. 5.

2 Preliminaries

2.1 Lattices

Let \(\mathbf {{b}_1},\dots , \mathbf {{b}_{\omega }}\) be linearly independent row vectors in \(\mathbb {R}^n\). The lattice \(\mathcal {L}\) spanned by them is

$$ \mathcal {L}=\{\sum ^{\omega }_{i=1} k_i\mathbf {b_i} \mid k_i \in \mathbb {Z}\}, $$

where \(\{\mathbf {{b}_1},\dots , \mathbf {{b}_{\omega }}\}\) is a basis of \(\mathcal {L}\) and \(B=[\mathbf {{b}_1}^T,\dots , \mathbf {{b}_{\omega }}^T]^T\) is the corresponding basis matrix. The dimension and determinant of \(\mathcal {L}\) are respectively

$$ \dim (\mathcal {L})=\omega , \det (\mathcal {L})=\sqrt{\det (BB^T)}. $$
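For instance, the following small script (our own illustration, using SymPy; the basis is a made-up example) evaluates this determinant formula for a rank-2 basis in \(\mathbb {R}^3\):

    # determinant of a lattice with non-square basis matrix B: sqrt(det(B*B^T))
    from sympy import Matrix, sqrt

    B = Matrix([[1, 0, 3],
                [0, 2, 4]])      # two basis vectors in R^3
    gram = B * B.T               # 2 x 2 Gram matrix B*B^T
    print(sqrt(gram.det()))      # det(L) = sqrt(10*20 - 12**2) = 2*sqrt(14)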

For any two-dimensional lattice \(\mathcal {L}\), the Gauss algorithm can find reduced basis vectors \(\mathbf {v_1}\) and \(\mathbf {v_2}\) satisfying

$$ \Vert \mathbf {v_1}\Vert \le \Vert \mathbf {v_2}\Vert \le \Vert \mathbf {v_1\pm v_2}\Vert $$

in polynomial time. One can deduce that \(\mathbf {v_1}\) is the shortest nonzero vector in \(\mathcal {L}\) and \(\mathbf {v_2}\) is the shortest vector in \(\mathcal {L}\setminus \{k\mathbf {v_1} \mid k \in \mathbb {Z}\} \). Moreover, we have the following results, which will be used in Sect. 4.
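The reduction itself takes only a few lines. The following Python function is our own illustrative sketch (not from the paper), using exact integer arithmetic for the nearest-integer quotient so that it also works on cryptographically sized entries:

    def iround(num, den):
        """Nearest integer to num/den, computed in exact integer arithmetic."""
        if den < 0:
            num, den = -num, -den
        return (2 * num + den) // (2 * den)

    def gauss_reduce(v1, v2):
        """Reduce a rank-2 basis so that ||v1|| <= ||v2|| <= ||v1 +/- v2||."""
        def dot(a, b):
            return sum(p * q for p, q in zip(a, b))
        if dot(v1, v1) > dot(v2, v2):
            v1, v2 = v2, v1
        while True:
            # subtract the nearest-integer multiple of v1 from v2
            mu = iround(dot(v1, v2), dot(v1, v1))
            v2 = tuple(p - mu * q for p, q in zip(v2, v1))
            if dot(v1, v1) <= dot(v2, v2):
                return v1, v2
            v1, v2 = v2, v1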

Lemma 1

(See Gómez et al., 2006 [6], Lemma 3). Let \(\mathbf {v_1}\) and \(\mathbf {v_2}\) be the reduced basis vectors of \(\mathcal {L}\) by the Gauss algorithm and \(\mathbf {x} \in \mathcal {L}\). For the unique pair of integers \((\alpha , \beta )\) that satisfies \(\mathbf {x}=\alpha \mathbf {v_1}+\beta \mathbf {v_2}\), we have

$$ \Vert \alpha \mathbf {v_1}\Vert \le \frac{2}{\sqrt{3}}\Vert \mathbf {x}\Vert , \,\, \Vert \beta \mathbf {v_2}\Vert \le \frac{2}{\sqrt{3}}\Vert \mathbf {x}\Vert . $$

Lemma 2

(See Gómez et al., 2006 [6], Lemma 5). Let \(\{\mathbf {u},\mathbf {v}\}\) be a reduced basis of a 2-rank lattice \(\mathcal {L}\) in \(\mathbb {R}^r\). Then we have

$$ \det (\mathcal {L})\le \Vert \mathbf {u}\Vert \, \Vert \mathbf {v}\Vert \le \frac{2}{\sqrt{3}}\det (\mathcal {L}). $$

While such a reduced basis can be computed efficiently for rank-two lattices, no comparable algorithm is known for general lattices. The subsequently proposed reduction notions all have to trade off computational efficiency against reduction quality. The distinguished LLL algorithm strikes a good balance, outputting in polynomial time a basis that is reduced enough for many applications.

Lemma 3

[17]. Let \(\mathcal {L}\) be a lattice. In polynomial time, the LLL algorithm outputs reduced basis vectors \(\mathbf {{v}_1},\dots ,\mathbf {{v}_{\omega }}\) that satisfy

$$ \Vert \mathbf {{v}_1}\Vert \le \Vert \mathbf {{v}_2}\Vert \le \dots \le \Vert \mathbf {{v}_i}\Vert \le 2^{\frac{\omega (\omega -1)}{4(\omega +1-i)}}\det (\mathcal {L})^{\frac{1}{\omega +1-i}}, 1\le i \le \omega . $$
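In practice one rarely implements LLL from scratch. As a minimal sketch, assuming the fpylll Python bindings are available (our assumption; any LLL implementation serves equally well), a reduction looks as follows:

    from fpylll import IntegerMatrix, LLL

    B = IntegerMatrix.from_matrix([
        [1, 0, 0, 31415926],
        [0, 1, 0, 27182818],
        [0, 0, 1, 16180339],
    ])
    LLL.reduction(B)   # reduces the basis in place
    print(B[0])        # the first row is now a short lattice vector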

2.2 Finding Small Roots

Coppersmith gave rigorous methods for extracting small roots of modular univariate polynomials and bivariate integer polynomials. These methods can be heuristically extended to multivariate cases. Howgrave-Graham’s reformulation [11] of Coppersmith’s method is widely adopted by researchers for cryptanalysis.

Lemma 4

[11]. Let \(g(x_1,x_2) \in \mathbb {Z}[x_1,x_2]\) be an integer polynomial that consists of at most \(\omega \) nonzero monomials. Define the norm of \(g(x_1,x_2)=\sum {b_{i_1,i_2}}x_1^{i_1} x_2^{i_2}\) as the Euclidean norm of its coefficient vector, namely,

$$\Vert g(x_1,x_2)\Vert =\sqrt{\sum {b_{i_1,i_2}}^2}.$$

Suppose that

  1. \(g(x^{(0)}_1, x^{(0)}_2)=0\pmod {N}\), for \(|x^{(0)}_1|<X_1\), \(|x^{(0)}_2|<X_2\);

  2. \(\Vert g(X_1x_1,X_2x_2)\Vert <\frac{N}{\sqrt{\omega }}\).

Then \(g(x^{(0)}_1,x^{(0)}_2)=0\) holds over integers.
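As a small illustration (our own SymPy helper, not from the paper), the second condition of Lemma 4 can be verified exactly by comparing squared quantities, avoiding floating-point square roots:

    from sympy import Poly, expand, symbols

    x1, x2 = symbols('x1 x2')

    def condition_two_holds(g, X1, X2, N):
        """Check ||g(X1*x1, X2*x2)|| < N / sqrt(omega) (condition 2 of Lemma 4)."""
        p = Poly(expand(g.subs({x1: X1 * x1, x2: X2 * x2})), x1, x2)
        coeffs = p.coeffs()               # nonzero coefficients only
        omega = len(coeffs)
        norm_sq = sum(c**2 for c in coeffs)
        # squared form of the inequality: omega * ||g||^2 < N^2
        return omega * norm_sq < N**2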

Combining Howgrave-Graham’s lemma with the LLL algorithm, one can deduce that if

$$\begin{aligned} 2^{\frac{\omega (\omega -1)}{4(\omega +1-i)}}\det (\mathcal {L})^{\frac{1}{\omega +1-i}}<\frac{N}{\sqrt{\omega }}, \end{aligned}$$

then the polynomials corresponding to the i shortest reduced basis vectors hold over the integers. Neglecting the low-order terms that are independent of N, the above condition can be simplified to

$$\begin{aligned} \det (\mathcal {L})<{N}^{\omega +1-i}. \end{aligned}$$
(1)

After obtaining enough equations over the integers, one can extract the shared roots by either resultant computation or the Gröbner basis technique.
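For example, with SymPy one can eliminate one variable by a resultant and then read off the shared roots; the two polynomials below are toy examples of ours, not polynomials from the paper:

    from sympy import symbols, resultant, solve

    x, y = symbols('x y')
    p1 = (x - 3)**2 + (y - 5)       # p1 and p2 share integer roots,
    p2 = (x - 3) + (y - 5)**2       # e.g. (x, y) = (3, 5)
    r = resultant(p1, p2, y)        # univariate in x after eliminating y
    for x0 in [s for s in solve(r, x) if s.is_integer]:
        for y0 in solve(p1.subs(x, x0), y):
            if p2.subs({x: x0, y: y0}) == 0:
                print(x0, y0)       # prints each shared integer root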

We need the following assumption throughout our analyses; it is widely adopted in previous works.

Assumption 1

The Gröbner basis computations for the polynomials corresponding to the first few LLL-reduced basis vectors produce non-zero polynomials.

3 Recovering the Addends from \(N=rA^2+sB^2\)

In this section, we first describe the trick for finding the small roots of the polynomial \(f_N(x,y)=a_1x^2+a_2x+a_3y^2+a_4y+a_0\). Next, we address the problem of recovering the decomposition of a given number \(N=rA^2+sB^2\) from approximations to its addends A, B, where N, r, s are public positive integers. Then, we discuss how to recover A and B when their least significant bits are revealed. Both of these attacks can be transformed into solving the studied polynomial \(f_N(x,y).\)

3.1 Solving Polynomial \(f_N(x,y)\)

Without loss of generality, we assume \(a_1=1\), since we can always achieve this by multiplying \(f_N\) by \(a_1^{-1}~mod~N\). If this inverse does not exist, one can factorize N. Set

$$ f(x,y)=a_1^{-1}f_N(x,y)~mod~N. $$

Next, we find the small roots of f(x, y) by Coppersmith’s method. Build shifting polynomials

$$ g_{k,i,j}(x,y)=x^iy^jf^k(x,y)N^{m-k}, $$

where \(i=0,1;k=0,...,m-i;j=0,...,2(m-k-i)\). Obviously, at any root \((x_0,y_0)\) of \(f(x,y)\) modulo N,

$$g_{k,i,j}(x_0,y_0) \equiv 0~mod~N^m.$$

Construct a lattice \(\mathcal {L}\) using the coefficient vectors of \(g_{k,i,j}(xX,yY)\) as basis vectors. We sort the polynomials \(g_{k,i,j}(xX,yY)\) and \(g_{k',i',j'}(xX,yY)\) according to the lexicographical order of the index vectors (k, i, j) and \((k',i',j')\). In this way, we can ensure that each of our shifting polynomials introduces one and only one new monomial, which gives a lower triangular structure for \(\mathcal {L}\). We give an example for \(m=2\) in Table 1; a short script illustrating the construction is given below.

Table 1. Example of the lattice formed by the vectors \(g_{k,i,j}(xX,yY)\) when \(m = 2\). The part above the diagonal is all zero and is omitted here; the non-zero entries below the diagonal are marked by \(*\).
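The following SymPy sketch (our own; the toy polynomial and parameters are not from the paper) builds this basis matrix, with rows and columns ordered as described above so that, per the triangularity claim, the diagonal carries the new monomials:

    from sympy import Poly, expand, symbols

    x, y = symbols('x y')

    def coppersmith_basis(f, N, m, X, Y):
        """Coefficient rows of g_{k,i,j}(xX, yY), ordered lexicographically by (k, i, j)."""
        index_set = sorted(
            (k, i, j)
            for i in (0, 1)
            for k in range(m - i + 1)
            for j in range(2 * (m - k - i) + 1)
        )
        # the new monomial introduced by g_{k,i,j} is x^(2k+i) * y^j
        monoms = [(2 * k + i, j) for (k, i, j) in index_set]
        rows = []
        for (k, i, j) in index_set:
            g = expand((x**i * y**j * f**k * N**(m - k)).subs({x: X * x, y: Y * y}))
            d = Poly(g, x, y).as_dict()      # maps exponent tuples to coefficients
            rows.append([d.get(mon, 0) for mon in monoms])
        return rows

    # toy example: m = 2 yields the 13 x 13 matrix of Table 1
    f = x**2 + 3*x + 5*y**2 + 7*y + 11
    M = coppersmith_basis(f, 10007, 2, 10, 10)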

The determinant of \(\mathcal {L}\) can then be easily calculated as the product of the diagonal entries, \(\det (\mathcal {L})=X^{S_X}Y^{S_Y}N^{S_N}\), and its dimension is \(\omega \), where

$$\begin{aligned} \begin{aligned}&\omega =\sum _{i=0}^{1}\sum _{k=0}^{m-i}\sum _{j=0}^{2(m-k-i)}1=2m^2+2m+1=2m^2+o(m^2).\\&S_X=\sum _{i=0}^{1}\sum _{k=0}^{m-i}\sum _{j=0}^{2(m-k-i)}(2k+i)=\frac{1}{3}m(4m^2+3m+2)=\frac{4}{3}m^3+o(m^3).\\&S_Y=\sum _{i=0}^{1}\sum _{k=0}^{m-i}\sum _{j=0}^{2(m-k-i)}j=\frac{1}{3}m(4m^2+3m+2)=\frac{4}{3}m^3+o(m^3).\\&S_N=\sum _{i=0}^{1}\sum _{k=0}^{m-i}\sum _{j=0}^{2(m-k-i)}(m-k)=\frac{2}{3}m(2m^2+3m+1)=\frac{4}{3}m^3+o(m^3). \end{aligned} \end{aligned}$$
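These closed forms are easy to double-check numerically; the short script below (our own) brute-forces the index set and compares:

    def index_sums(m):
        w = Sx = Sy = Sn = 0
        for i in (0, 1):
            for k in range(m - i + 1):
                for j in range(2 * (m - k - i) + 1):
                    w, Sx, Sy, Sn = w + 1, Sx + 2*k + i, Sy + j, Sn + m - k
        return w, Sx, Sy, Sn

    for m in range(1, 50):
        assert index_sums(m) == (
            2*m*m + 2*m + 1,
            m * (4*m*m + 3*m + 2) // 3,
            m * (4*m*m + 3*m + 2) // 3,
            2 * m * (2*m*m + 3*m + 1) // 3,
        )
    print("closed forms verified for m = 1..49")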

Substituting these values into the inequality \(\det (\mathcal {L})<{N}^{m\omega }\) and performing some basic calculations, we obtain the bound

$$XY<N^{\frac{1}{2}}.$$

When \(X=Y\), i.e., when the two unknowns are balanced, the above result becomes

$$X=Y<N^{\frac{1}{4}}.$$

We summarize our result in the following theorem.

Theorem 1

Let N be a sufficiently large composite integer of unknown factorization, and let \(f_N(x,y)=a_1x^2+a_2x+a_3y^2+a_4y+a_0~mod~N\) be a bivariate polynomial with \(|x|\le X\), \(|y|\le Y\). Under Assumption 1, if

$$ XY<N^{\frac{1}{2}}, $$

one can extract all the solutions (x, y) of the equation \(f_N(x,y)\equiv ~0~(mod~N)\) in polynomial time.

3.2 Recovering a Decomposition from Approximations

In this subsection, we describe the method to recover the addends A, B of \(N=rA^2+sB^2\) from their approximations.

Suppose that positive integers r and s are given. Set \(N=rA^2+sB^2\), where A, B are balanced addends, and let \(A_0,B_0\) be approximations to A, B, that is, \(A=A_0+x\) and \(B=B_0+y\), where x, y are bounded by \(\varDelta \). Then one can recover A and B according to Theorem 1 when

$$\varDelta <N^{\frac{1}{4}}.$$

The concrete analysis is as follows. Note that

$$\begin{aligned} N=r(A_0+x)^2+s(B_0+y)^2, \end{aligned}$$
(2)

which gives rise to a bivariate modular polynomial

$$f_1(x,y)=rx^2+sy^2+2A_0rx+2B_0sy+rA_0^2+sB_0^2 \equiv 0~mod~N,$$

which is exactly the type of polynomial we discussed in Sect. 3.1. We thus obtain the result \(\varDelta <N^{\frac{1}{4}}\) simply by substituting \(\varDelta \) for both X and Y in Theorem 1.
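A toy instance (our own parameters, not those of the paper’s experiments) makes the construction concrete; by design, the true error terms are a root of \(f_1\) modulo N:

    import random

    r, s = 3, 7
    A = random.getrandbits(256) | 1
    B = random.getrandbits(256) | 1
    N = r * A * A + s * B * B                 # about 514 bits

    delta = 1 << 120                          # error bound, below N^(1/4) ~ 2^128
    A0 = A - random.randrange(delta)          # approximations: A = A0 + x
    B0 = B - random.randrange(delta)          #                 B = B0 + y

    # f_1(x, y) = r*x^2 + s*y^2 + 2*A0*r*x + 2*B0*s*y + (r*A0^2 + s*B0^2)
    xt, yt = A - A0, B - B0                   # the true (unknown) errors
    f1 = (r * xt * xt + s * yt * yt
          + 2 * A0 * r * xt + 2 * B0 * s * yt
          + r * A0 * A0 + s * B0 * B0)
    assert f1 % N == 0                        # (xt, yt) is a root of f_1 mod N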

The experimental results supporting the above analysis are displayed in Table 2; they match the derived theoretical bound well.

Table 2. Experimental results for the error bound \(\varDelta =N^{\frac{1}{4}}\) with 512-bit N
Table 3. Experimental results for Remark 1 with 512-bit N
Table 4. Experimental results for different moduli with 1024-bit N

Remark 1

Gutierrez et al. discussed the same problem in [7]. They rearranged Eq. (2) into a bivariate integer polynomial as follows:

$$\begin{aligned} f'_1(x,y)=rx^2+sy^2+2A_0rx+2B_0sy+rA_0^2+sB_0^2 -N. \end{aligned}$$
(3)

By directly applying Coppersmith’s theorem [3], their derived error bound is only \(N^{1/6}\). We ran experiments with their method; part of the results are displayed in Table 3. The experimental results show that our method works much better.

Coppersmith’s original method [3] for solving bivariate integer polynomials is difficult to understand. Coron [13] first reformulated Coppersmith’s work; the key idea can be described as follows: choose a proper integer R and transform the situation into finding a small root modulo R. Then, by applying the LLL algorithm, a polynomial with small coefficients can be found, which is proved to be algebraically independent of the original equation.

Our approach described above also transforms the integer equation into a modular polynomial. The difference between our method and Coppersmith’s theorem [3] lies in the construction of the shifting polynomials: we make use of powers of the original polynomial. Although we did not prove that the obtained polynomial with small coefficients is algebraically independent of the original polynomial, this was true in most cases during our experiments.

Remark 2

We studied different ways to transform Eq. (3) into a modular polynomial as the modulus varies, for instance \(q(x,y)=f_1(x,y)+M\equiv 0~mod~(N+M)\). The experimental results for different M are shown in Table 4.

Specifically, we also consider the non-constant modular polynomial

$$\begin{aligned} f_2(x,y)=rx^2+sy^2+2A_0rx+2B_0sy \equiv 0~mod~(N-rA_0^2-sB_0^2). \end{aligned}$$
(4)

In this way, the corresponding theoretical error bound for recovering the addends from their approximations is \(N^{1/6}\) (please refer to Appendix A for the detailed analysis). However, the experimental results, displayed in Table 5, show a much better performance.

3.3 Recovering a Decomposition from Non-approximations

Actually, the most-significant-bits exposure attack on A and B can be viewed as a special case of the above problem (recovering a sum of two squares from approximations). In this subsection, we consider the case when the least significant bits of A, B are leaked.

Given positive integers r and s, set \(N=rA^2+sB^2\), where A, B are balanced addends. When half of the bits of A and B on the least-significant side are intercepted, one can recover A, B according to Theorem 1.

Suppose \(A=xM+A_0\), \(B=yM+B_0\), where \(M,A_0\) and \(B_0\) are the known integers and x, y are the unknown parts. Then we have the relation

$$N=r(xM+A_0)^2+s(yM+B_0)^2,$$

which can be expanded to a bivariate modular polynomial

$$f_3(x,y)=rM^2x^2+sM^2y^2+2rA_0Mx+2sB_0My+rA_0^2+sB_0^2\equiv ~0~mod~N.$$

Setting the upper bound for x and y as \(\varDelta _1\) and substituting it into Theorem 1, we get \(\varDelta _1<N^{\frac{1}{4}}\). Since

$$M=\frac{A-A_0}{x}>\frac{A-A_0}{N^{\frac{1}{4}}}\approx \frac{A}{N^{\frac{1}{4}}}\approx \frac{N^{\frac{1}{2}}}{N^{\frac{1}{4}}}=N^{\frac{1}{4}},$$

the known part M must carry at least half of the bits of A (and likewise of B). From these analyses, we conclude that half of the bits of A and B reveal the whole knowledge of both addends, no matter whether the leaked bits are LSBs or MSBs.
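The LSB setting is just as easy to instantiate; in the toy script below (our own parameters), the unknown upper halves are a root of \(f_3\) modulo N:

    import random

    r, s = 3, 7
    A = random.getrandbits(256) | 1
    B = random.getrandbits(256) | 1
    N = r * A * A + s * B * B

    M = 1 << 128                     # the lower 128 bits of A and B are leaked
    A0, xt = A % M, A // M           # A = xt*M + A0, with A0 known
    B0, yt = B % M, B // M           # B = yt*M + B0, with B0 known

    f3 = (r * M * M * xt * xt + s * M * M * yt * yt
          + 2 * r * A0 * M * xt + 2 * s * B0 * M * yt
          + r * A0 * A0 + s * B0 * B0)
    assert f3 % N == 0               # (xt, yt) is a root of f_3 mod N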

Table 5. Experimental results for Remark 2 with 512-bit N

4 A Better Method for Recovering the Addends

In this section, we reduce the problem of recovering a sum-of-two-squares decomposition to seeking the coordinates of a desired vector in a certain lattice. We then find these coordinates by applying Coppersmith’s method to solve a type of modular polynomial whose monomials are \(x^2,y^2,xy,x,y\) and 1. Handled this way, the theoretical error tolerance can be improved to \(N^{1/3}\), and the lattices involved in this approach have much smaller dimensions than the ones in Sect. 3.

4.1 The Reduction of Recovering the Addends

From the initial key relation \(N=r(A_0+x)^2+s(B_0+y)^2\) we have

$$\begin{aligned} 2rA_0x+2sB_0y+rx^2+sy^2=N-rA_0^2-sB_0^2. \end{aligned}$$
(5)

Hence, the recovery of vector

$$\mathbf {e}:=(X_1,X_2,X_3)=((r+s)\varDelta x, (r+s)\varDelta y, rx^2+sy^2)$$

solves the problem. Here \(\varDelta \) represents the upper bound for x and y. It is not hard to see that the vector \(\mathbf {e}\) lies in a shifted lattice \(\mathbf {c}+\mathcal {S}\), where \(\mathbf {c}=(c_1,c_2,c_3) \in \mathbb {Z}^3\) is such that \((\frac{c_1}{(r+s)\varDelta },\frac{c_2}{(r+s)\varDelta },c_3)\) is a particular solution of (5), and \(\mathcal {S}\) is the two-dimensional lattice spanned by the rows of

$$\begin{aligned} \begin{pmatrix} (r+s)\varDelta &{} 0 &{} -2A_0r \\ 0 &{} (r+s)\varDelta &{} -2B_0s \end{pmatrix}. \end{aligned}$$

According to Minkowski’s theorem [15], when \(|| \mathbf {e} || <\sqrt{2}\sqrt{det({\mathcal {S}})}\), one can recover \(\mathbf {e}\) by solving the closest vector problem. Further, the norm of \(\mathbf {e}\) satisfies \(|| \mathbf {e} || \le \sqrt{3}(r+s)\varDelta ^2\), and \(det(\mathcal {S}) \ge 2(r+s)\varDelta \sqrt{\frac{min(r,s)N}{2}}\) under the condition \(min(r,s)N\ge 4\sqrt{N}\varDelta (r^{3/2}+s^{3/2})\). These constraints give rise to the error bound \(\varDelta <N^{1/6}\), as discussed in [7]. A toy illustration of this regime follows below.
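The following self-contained sketch (ours; toy parameters, with Babai round-off standing in for the CVP oracle) walks through this \(\varDelta <N^{1/6}\) regime and recovers \(\mathbf {e}\) exactly:

    import random

    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))

    def iround(num, den):
        if den < 0:
            num, den = -num, -den
        return (2 * num + den) // (2 * den)   # nearest integer to num/den

    def gauss_reduce(u, v):
        if dot(u, u) > dot(v, v):
            u, v = v, u
        while True:
            mu = iround(dot(u, v), dot(u, u))
            v = tuple(p - mu * q for p, q in zip(v, u))
            if dot(u, u) <= dot(v, v):
                return u, v
            u, v = v, u

    def babai_round_off(u, v, t):
        """Lattice point a*u + b*v near t, via Cramer's rule on the Gram matrix."""
        g11, g12, g22 = dot(u, u), dot(u, v), dot(v, v)
        d = g11 * g22 - g12 * g12
        a = iround(dot(t, u) * g22 - dot(t, v) * g12, d)
        b = iround(dot(t, v) * g11 - dot(t, u) * g12, d)
        return tuple(a * p + b * q for p, q in zip(u, v))

    r, s = 3, 7
    A, B = random.getrandbits(256) | 1, random.getrandbits(256) | 1
    N = r * A * A + s * B * B
    Delta = 1 << 40                               # far below N^(1/6) ~ 2^85
    xe, ye = random.randrange(1, Delta), random.randrange(1, Delta)
    A0, B0 = A - xe, B - ye                       # approximations

    e = ((r + s) * Delta * xe, (r + s) * Delta * ye, r * xe * xe + s * ye * ye)
    c = (0, 0, N - r * A0 * A0 - s * B0 * B0)     # a particular solution of Eq. (5)
    u = ((r + s) * Delta, 0, -2 * A0 * r)         # row basis of the lattice S
    v = (0, (r + s) * Delta, -2 * B0 * s)

    u, v = gauss_reduce(u, v)
    t = tuple(-ci for ci in c)                    # point of S closest to -c
    f = tuple(ci + pi for ci, pi in zip(c, babai_round_off(u, v, t)))
    print(f == e)                                 # True: e (hence x, y) recovered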

Next, we present our analysis for the case \(\varDelta >N^{1/6}\). Here, we write \(\mathbf {f}=((r+s)\varDelta f_1, (r+s)\varDelta f_2, f_3)\) for the output of the CVP algorithm on \(\mathcal {S}\), and use \(\{\mathbf {u}=((r+s)\varDelta u_1, (r+s)\varDelta u_2, u_3), \mathbf {v}=((r+s)\varDelta v_1, (r+s)\varDelta v_2, v_3)\}\) to denote the Gauss-reduced basis of \(\mathcal {S}\). Then \(\mathbf {e}=\mathbf {f}+\alpha \mathbf {u}+\beta \mathbf {v}\), where \(\alpha , \beta \) are the coordinates of the vector \(\mathbf {e}-\mathbf {f}\) in the lattice \(\mathcal {S}\). Thus, the problem is converted to finding the parameters \(\alpha \) and \(\beta \), which satisfy the equation

$$\begin{aligned} \begin{aligned}&2A_0r(f_1+\alpha u_1 +\beta v_1)+2B_0s(f_2+\alpha u_2 +\beta v_2)\\&+r(f_1+\alpha u_1 +\beta v_1)^2+s(f_2+\alpha u_2 +\beta v_2)^2+rA_0^2+sB_0^2 -N=0. \end{aligned} \end{aligned}$$
(6)

We first derive upper bounds for the unknowns \(\alpha , \beta \). Since \(\mathbf {e}-\mathbf {f}=\alpha \mathbf {u}+\beta \mathbf {v}\) and \(\Vert \mathbf {e}-\mathbf {f}\Vert \le 2\Vert \mathbf {e}\Vert \le 2\sqrt{3}(r+s)\varDelta ^2\), from Lemma 1 we get

$$\begin{aligned} \Vert \alpha \mathbf{u}\Vert \le \frac{2}{\sqrt{3}}\Vert \mathbf{e} -\mathbf{f}\Vert \le 4(r+s)\varDelta ^2, \qquad \Vert \beta \mathbf{v}\Vert \le \frac{2}{\sqrt{3}}\Vert \mathbf{e} -\mathbf{f}\Vert \le 4(r+s)\varDelta ^2. \end{aligned}$$

Thus, \(|\alpha |\le \frac{4(r+s)\varDelta ^2}{|| \mathbf{u}||}\), \(|\beta |\le \frac{4(r+s)\varDelta ^2}{|| \mathbf{v}||}\). Further, according to Lemma 2, \(det(\mathcal {S})\le || \mathbf {u}|| ||\mathbf {v} || \le \frac{2}{\sqrt{3}}det(\mathcal {S})\). Then we have

$$ |\alpha ||\beta |\le \frac{16(r+s)^2\varDelta ^4}{det(\mathcal {S})}\le c_1\varDelta ^{3}N^{-1/2}, $$

where \(c_1=2^{7/2}(r+s)\,min(r,s)^{-1/2}\) is a constant.

Notice that Eq. (6) can be rearranged into

$$\begin{aligned} \begin{aligned}&(ru_1^2+su_2^2)\alpha ^2+(rv_1^2+sv_2^2)\beta ^2+2(ru_1v_1+su_2v_2)\alpha \beta +2(A_0ru_1\\&+B_0su_2+rf_1u_1+sf_2u_2)\alpha +2(A_0rv_1+B_0sv_2+rf_1v_1+sf_2v_2)\beta \\&+2A_0rf_1+2B_0sf_2+rf_1^2+sf_2^2+rA_0^2+sB_0^2 \equiv ~0~mod~N, \end{aligned} \end{aligned}$$
(7)

which is exactly a modular polynomial of the type consisting of the monomials \(x^2,y^2,xy,x,y\) and 1. Next, we describe our analysis for solving such polynomials.

4.2 Solving a Certain Type of Modular Polynomials

Let \(f'_N(x,y)=b_1x^2+b_2y^2+b_3xy+b_4x+b_5y+b_0~mod~N.\) Assume \(b_1=1\); otherwise, set

$$f'(x,y)=b_1^{-1}f'_N(x,y)~mod~N.$$

If the inverse \(b_1^{-1}~mod~N\) does not exist, one can factorize N. Next, we use Coppersmith’s method to find the small roots of this polynomial. Build shifting polynomials \(h_{k,i,j}(x,y)\), which vanish modulo \(N^m\) at the roots of \(f'(x,y)\equiv 0~mod~N\), as follows:

$$ h_{k,i,j}(x,y)=x^iy^jf'^k(x,y)N^{m-k}, $$

where \(i=0,1;k=0,...,m-i;j=0,...,2(m-k)-i.\)

Construct a lattice \(\mathcal {L'}\) using the coefficient vectors of \(h_{k,i,j}(xX,yY)\) as basis vectors. We sort the polynomials \(h_{k,i,j}(xX,yY)\) and \(h_{k',i',j'}(xX,yY)\) according to the lexicographical order of the index vectors (k, i, j) and \((k',i',j')\). In this way, we can ensure that each of our shifting polynomials introduces one and only one new monomial, which gives a triangular structure for \(\mathcal {L'}\).

The determinant of \(\mathcal {L'}\) can then be easily calculated as the product of the diagonal entries, \(det(\mathcal {L'})=X^{S_X}Y^{S_Y}N^{S_N}\), and its dimension is \(\omega \), where

$$\begin{aligned} \begin{aligned}&\omega =\sum _{i=0}^{1}\sum _{k=0}^{m-i}\sum _{j=0}^{2(m-k)-i}1=2m^2+o(m^2),\\&S_X=\sum _{i=0}^{1}\sum _{k=0}^{m-i}\sum _{j=0}^{2(m-k)-i}(2k+i)=\frac{4}{3}m^3+o(m^3),\\&S_Y=\sum _{i=0}^{1}\sum _{k=0}^{m-i}\sum _{j=0}^{2(m-k)-i}j=\frac{4}{3}m^3+o(m^3),\\&S_N=\sum _{i=0}^{1}\sum _{k=0}^{m-i}\sum _{j=0}^{2(m-k)-i}(m-k)=\frac{4}{3}m^3+o(m^3). \end{aligned} \end{aligned}$$

Substituting these values into the inequality \(\det (\mathcal {L'})<{N}^{m\omega }\) and performing some basic calculations, we obtain the bound

$$XY<N^{\frac{1}{2}}.$$

We summarize our result in the following theorem.

Theorem 2

Let N be a sufficiently large composite integer of unknown factorization and \(f'_N(x,y)=b_1x^2+b_2x+b_3y^2+b_4y+b_5xy+b_0~mod~N\) be a bivariate modular polynomial with \(|x|\le X\), \(|y|\le Y\). Under Assumption 1, if

$$XY<N^{\frac{1}{2}},$$

one can extract all the solutions (x, y) of the equation \(f'_N(x,y)\equiv ~0~(mod~N)\) in polynomial time.

Next, we use the above method to solve Eq. (7), and then recover the unknown addends.

4.3 Recovering the Addends

Notice that Eq. (7) is exactly the type of polynomial discussed in Sect. 4.2. Substituting the upper bound on \(|\alpha ||\beta |\) derived in Sect. 4.1 into Theorem 2 yields the requirement

$$ |\alpha ||\beta |\le c_1\varDelta ^{3}N^{-1/2}\le N^{1/2}. $$

Solving this inequality and omitting the constant terms, we obtain the optimized bound for the approximation error terms

$$\begin{aligned} \varDelta <N^{\frac{1}{3}}. \end{aligned}$$
(8)
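Spelling out the arithmetic behind (8) (a one-line computation; the constant \(c_1\) from Sect. 4.1 is dropped in the last step):

$$ c_1\varDelta ^{3}N^{-1/2}\le N^{1/2} \;\Longleftrightarrow \; \varDelta ^{3}\le c_1^{-1}N \;\Longrightarrow \; \varDelta \lesssim N^{1/3}. $$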

Compared to Sect. 3, this method performs much better in practice, since the dimensions of the involved lattices are much smaller for the same error bounds. We present the comparison in Table 6, where one can see a remarkable improvement in efficiency.

Table 6. A comparison between Sect. 4 (left part data) and Sect. 3 (right part data)
Table 7. Experimental results for Remark 3 with 512-bit N

Remark 3

As in Sect. 3, we also analyzed the case of transforming Eq. (6) into a non-constant modular polynomial. The corresponding error bound is then \(N^{1/4}\). Table 7 shows the experimental results for this situation; please refer to Appendix B for the detailed analysis.

5 Conclusions and Discussions

We revisit the problem of recovering the two addends of a sum of two squares. Our first algorithm improves Gutierrez et al.’s first result \(N^{1/6}\) to \(N^{1/4}\) by transforming the derived polynomial into a modular one. We then improve this bound to \(N^{1/3}\) in theory by reducing the problem of recovering a sum-of-two-squares decomposition to seeking the coordinates of a desired vector in a certain lattice; Gutierrez et al. proceeded similarly in [7], and their optimized bound is \(N^{1/4}\). Our second approach performs much better than the first one, since the dimension of the required lattice is much smaller when the same error bounds are considered. The tricks for solving the derived polynomials in Sects. 3 and 4 are similar: both transform integer relations into modular polynomials. We study four kinds of modular polynomials in this work (two of them are discussed in Remarks 2 and 3). The tricks for solving these polynomials may find other applications in cryptanalysis.

We conducted experiments to verify the deduced results. The tests were done in Magma on a PC with an Intel(R) Core(TM) Quad CPU (3.20 GHz, 4.00 GB RAM, Windows 7). The data support our analyses well; however, as the error terms grow larger, the dimensions of the required lattices become huge, and the time and memory costs increase greatly, which stopped our experiments short of the theoretical bounds. We hope that readers interested in this problem can provide further experimental support.