
1 Introduction

In cryptography one often encounters problems which are easy to solve using a secret private basis of a lattice \({\varLambda }\subset {{\mathbb {R}}}^n\), but are expected to be difficult to solve using suitably-chosen public bases. Famous examples include the Shortest Vector Problem (SVP) and Closest Vector Problem (CVP).

In [17] Lenstra and Silverberg posed the challenge of whether highly-symmetric lattices have hard bases, and proved several interesting results along these lines (related to earlier work of Gentry-Szydlo [10]; see also [16, 18]). One particularly beautiful question they posed is:

$$\begin{aligned} \text {can one efficiently recognize rotations of the standard}\; {\mathbb {Z}}^{n}\; \text {lattice?} \end{aligned}$$
(1.1)

To be more precise, this problem can be stated in two different group-theoretic ways (the second being the formulation in [17, §2]). Let \(\{b_1,\ldots ,b_n\}\) denote a basis for \(\varLambda \) and let B denote the \(n\times n\) matrix whose i-th row is \(b_i\):

figure a [statement of Problem 1]

Alternatively, following [9] and [17, §2] we may suppose one is given a positive-definite symmetric matrix \(G\in SL(n,{\mathbb {Z}})\) (which we think of as the Gram matrix \(G=BB^t\) of \(\varLambda \)):

figure b [statement of Problem 2]

Clearly, Problem 1 reduces to Problem 2 with \(G=BB^t\). Conversely, one can orthogonally diagonalize the matrix G in Problem 2 as \(G=PDP^t\) for some \(P\in O(n)\) and diagonal matrix D with positive diagonal entries. Then \(B=PD^{1/2}\) solves the equation \(G=BB^t\), and Problem 2 therefore reduces to Problem 1 (modulo technicalities we will not delve into, such as that the entries of P, D, and B may in general be irrational).
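For concreteness, this reduction can be sketched numerically. The following Python fragment (an illustrative sketch with a sample matrix of our choosing, not part of the algorithms below) diagonalizes a Gram matrix and verifies that \(B=PD^{1/2}\) satisfies \(G=BB^t\):

```python
import numpy as np

M = np.array([[2, 1], [1, 1]])      # a sample element of GL(2, Z), our choice
G = M @ M.T                         # the corresponding Gram matrix

# Orthogonally diagonalize G = P D P^t and set B = P D^{1/2}.
eigvals, P = np.linalg.eigh(G)
B = P @ np.diag(np.sqrt(eigvals))

assert np.allclose(B @ B.T, G)      # B solves G = B B^t, but need not be integral
```

Note that B produced this way is real and in general irrational, which is precisely the technicality mentioned above.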

In particular, by orthogonal diagonalization it is trivial to find a non-integral solution \(M\in GL(n,{\mathbb {R}})\) to Problem 2. However, imposing the constraint that \(M\in GL(n,{\mathbb {Z}})\) adds an intricate dose of number theory, since Problem 2a then becomes a class number problem: indeed, in large dimensions n there is a combinatorial explosion of possible \(GL(n,{\mathbb {Z}})\)-equivalence classes.Footnote 1

Both Problems 1 and 2 have inefficient solutions using sufficiently strong lattice basis reduction. For example, the given information is sufficient to determine whether or not all lattice vector norms are square-roots of integers, and an SVP solver can determine the shortest nonzero norm \(\lambda _1(\varLambda )\). If \(\lambda _1(\varLambda )\ne 1\), the lattice \(\varLambda \) is definitely not a rotation of \({\mathbb {Z}}^n\) and Problems 1a and 2a have negative solutions. However, if one finds a vector of norm 1 and all lattice norms are square-roots of integers, it is then easy to see (by subtracting multiples of this vector to obtain an orthogonal complement) that the dimension in Problems 1b and 2b reduces from n to \(n-1\). It was recently shown in [13] that Problem 2a is in the class NP\(\cap \)co-NP, using results of Elkies [8] on characteristic vectors of lattices (see also [11, §9.6]).

This paper primarily concerns Problem 2b, i.e., one is handed a matrix of the form \(MM^t\) and wishes to efficiently recover M. Of course permuting the columns of M does not change \(MM^t\), nor does multiplying any subset of columns by \(-1\); thus we look for solutions up to such signed permutations of the columns. (For this reason it is equivalent to insist that \(M\in SL(n,{\mathbb {Z}})\).) We find that the choice of procedure to randomly generate instances of M has a drastic impact on the difficulty of the problem. We state this in terms of a probability density function \(p:GL(n,{\mathbb {Z}})\rightarrow {\mathbb {R}}_{\ge 0}\) (i.e., \(\sum _{M\in GL(n,{\mathbb {Z}})}p(M)=1\)):
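The invariance of \(MM^t\) under signed permutations of the columns can be checked directly; in the following Python sketch the matrix and seed are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(-3, 4, size=(4, 4))         # an arbitrary integer matrix

# A signed permutation of the columns: permute them, then negate a subset.
perm = rng.permutation(4)
signs = np.diag([1, -1, -1, 1])
M2 = M[:, perm] @ signs

# The Gram matrix is unchanged, so M is recoverable at best up to this action.
assert np.array_equal(M @ M.T, M2 @ M2.T)
```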

figure c [statement of Problem 3]

In Sect. 2 we compare various methods of generating random bases of a lattice, corresponding to different probability densities p (generalizing [3, §5.1.2]; see also Sect. 4). Here one seeks distributions for which Problem 3 is hard on average, much like SIS and LWE are average-case hard instances of variants of SVP and CVP, respectively. We then perform experiments on them in Sect. 3. Some of the methods we describe, such as the long-known Algorithm 4 (see, for example, [5]), give relatively hard instances of Problem 3. However, our main finding is that a certain well-known existing method, namely generating matrices by multiplying unipotents (e.g., Magma’s RandomSLnZ command), is cryptographically weak: we were able to recover M in instances in dimensions nearly 1500 (in some measurable ways these instances are comparable to NTRU lattices having purported 256-bit quantum cryptographic strength). That gives an example of an average-case easy distribution. In Sect. 4 we similarly find that the random basis generation method used in the DRS NIST Post-Quantum Cryptography submission [21] also gives weak instances of Problem 3: in 708 hours we could recover M generated using DRS’s 256-bit security settings.

2 Choosing Random Elements of \(GL(n,{\mathbb {Z}})\)

We consider the problem of uniformly sampling matrices in a large boxFootnote 2

$$\begin{aligned} \varGamma _T \ \ := \ \ \left\{ M=(m_{ij}) \in GL(n,{\mathbb {Z}}) \ : \ |m_{ij}|\le T \right\} \,, \ \ T > 0\,, \qquad {(2.1)} \end{aligned}$$

inside \(GL(n,{\mathbb {Z}})\). For large T one has \(\#\varGamma _T \sim c_n T^{n^2-n}\), for some positive constant \(c_n\).Footnote 3 We now consider a series of algorithms to sample matrices in \(GL(n,{\mathbb {Z}})\). The most naive way to uniformly sample \(\varGamma _T\) is prohibitively slow:

figure d [Algorithm 1]

Though we do not analyze it here, the determinant of such a randomly chosen matrix M is a very large integer, and it is highly improbable that it equals \(\pm 1\), as required for membership in \(GL(n,{\mathbb {Z}})\). One minor improvement is to first check that the entries of each row (and likewise of each column) do not share a common factor, which is a necessary condition for the determinant to be \(\pm 1\). Nevertheless, this does not seriously improve the extreme unlikelihood of randomly producing an integral matrix of determinant \(\pm 1\).
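A minimal Python sketch of this naive rejection-sampling approach (the function name and interface are ours) makes the obstruction concrete:

```python
import numpy as np

def naive_sample(n, T, max_tries, rng=None):
    # Our sketch of the naive approach: draw entries uniformly in [-T, T]
    # and keep the matrix only if det = +-1. The acceptance probability
    # decays so fast in n that this is hopeless beyond tiny dimensions.
    rng = rng or np.random.default_rng()
    for _ in range(max_tries):
        M = rng.integers(-T, T + 1, size=(n, n))
        if round(np.linalg.det(M)) in (1, -1):
            return M
    return None              # gave up: no unimodular matrix found
```

Already for moderate n the condition \(\det M=\pm 1\) essentially never holds, so almost every call returns `None` unless `max_tries` is astronomically large.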

figure e

We note that some computer algebra packages include commands for generating random elements of \(GL(n,{\mathbb {Z}})\). In addition to its RandomSLnZ command, which we shall shortly come to in Algorithm 2, Magma’s documentation includes the command RandomUnimodularMatrix for fairly rapidly generating matrices in \(GL(n,{\mathbb {Z}})\) (not \(SL(n,{\mathbb {Z}})\), despite what the name suggests) having “most entries” inside a prescribed interval, but provides no further explanation. Even after accounting for a typo which switches the roles of the command’s arguments, we found that in fact most of the entries were outside the prescribed interval (the documentation’s claims notwithstanding). Furthermore, the lattices constructed using this command appear to be much easier to attack than those generated by the closest analog considered here (Algorithm 4). SageMath’s random_matrix command has a unimodular constructor (designed for teaching purposes) which does produce matrices in \(GL(n,{\mathbb {Z}})\) whose entries are bounded by a given size, but it is not as fast as the other alternatives and its outputs must satisfy further constraints. For these reasons we did not seriously examine RandomUnimodularMatrix and random_matrix.

Because Algorithm 1 is so slow, the rest of this section considers faster algorithms which do not uniformly sample \(\varGamma _T\), some coming closer than others.Footnote 4 For \(1\le i\ne j \le n\) let \(E_{i,j}\) denote the elementary \(n\times n\) matrix whose entries are all 0 aside from a 1 in the (ij)-th position. Here as elsewhere the abbreviation “i.i.d.” stands for “independently identically distributed”.

figure f [Algorithm 2]

As we shall later see, the matrices produced by Algorithm 2 have a very special form, creating a cryptographic weakness.
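A hypothetical re-implementation of the unipotent-product method may clarify its structure. The function below is our sketch (Magma's exact RandomSLnZ semantics may differ): it multiplies i.i.d. factors \(I_n+xE_{i,j}\) with \(i\ne j\) and \(0<|x|\le 2^b-1\).

```python
import numpy as np

def random_unipotent_product(n, length, b=1, rng=None):
    # Our sketch of the unipotent-product method (cf. Algorithm 2 and
    # Magma's RandomSLnZ; exact semantics there may differ): multiply
    # `length` i.i.d. factors I_n + x*E_{i,j}, i != j, 0 < |x| < 2**b.
    rng = rng or np.random.default_rng()
    M = np.eye(n, dtype=object)              # exact big-integer arithmetic
    for _ in range(length):
        i, j = rng.choice(n, size=2, replace=False)
        x = int(rng.integers(1, 2**b)) * int(rng.choice([-1, 1]))
        M[i, :] += x * M[j, :]               # left-multiply by I_n + x*E_{i,j}
    return M
```

Since every factor has determinant 1, so does the product; and when `length` is small relative to n, many rows are never touched, illustrating the special form discussed below.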

Algorithm 2 can be thought of as a counterpart to the LLL algorithm [15], which applies successive unipotent matrices and vector swaps to reduce lattices. Although Algorithm 2 does not literally contain vector swaps, they are nevertheless present in the background because conjugates of \(\gamma _j\) by permutation matrices have the same form \(I_n+xE_{i,j}\) as \(\gamma _k\). In that light, the following algorithm can then be thought of as an analog of BKZ reduction [23], since it utilizes block matrices of size much smaller than n. Its statement involves the embedding maps \(\varPhi _{k_1,\ldots ,k_d}:GL(d,{\mathbb {R}})\hookrightarrow GL(n,{\mathbb {R}})\) for size-d subsets \(\{k_1,\ldots ,k_d\}\subset \{1,\ldots ,n\}\),

$$\begin{aligned} (\varPhi _{k_1,\ldots ,k_d}(h))_{i'j'} \ \ = \ \ \left\{ \begin{array}{ll} h_{ij}, & \text {if } i'=k_i \text { and } j'=k_j \text { for some } i,j\le d; \\ \delta _{i'=j'}, & \text {otherwise,} \end{array} \right. \end{aligned}$$
(2.3)

where \(h=(h_{ij})\in GL(d,{\mathbb {R}})\).Footnote 5 The image of \(\varPhi _{k_1,\ldots ,k_d}\) is a subgroup of \(GL(n,{\mathbb {R}})\) isomorphic to \(GL(d,{\mathbb {R}})\). (Of course we will only apply the map \(\varPhi _{k_1,\ldots ,k_d}\) to elements of \(GL(d,{\mathbb {Z}})\).)

figure g [Algorithm 3]

We expect Algorithm 3 produces more-uniformly distributed matrices as d increases. The role of the parameter d is essentially to interpolate between Algorithm 1 (which is the case \(d=n\)) and Algorithm 2 (which is close to the case \(d=2\), but not exactly: \(\gamma ^{(2)}\) need not be unipotent).
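The embedding (2.3) and the resulting block-product construction can be sketched as follows. The helper for sampling \(GL(d,{\mathbb {Z}})\) blocks uses Algorithm-1-style rejection, feasible only because d is small; all function names and parameter conventions here are ours.

```python
import numpy as np

def embed(h, ks, n):
    # The map Phi_{k_1,...,k_d} of (2.3): copy the d x d block h into
    # rows/columns k_1,...,k_d of the n x n identity matrix.
    M = np.eye(n, dtype=int)
    for a, ka in enumerate(ks):
        for c, kc in enumerate(ks):
            M[ka, kc] = h[a, c]
    return M

def random_gldz(d, T, rng):
    # Rejection sampling for a random GL(d, Z) block with entries in
    # [-T, T]; feasible only because d is small.
    while True:
        h = rng.integers(-T, T + 1, size=(d, d))
        if round(np.linalg.det(h)) in (1, -1):
            return h

def algorithm3_sketch(n, d, ell, T=1, rng=None):
    # Our sketch of the block construction: multiply ell embedded random
    # GL(d, Z) blocks placed at uniformly chosen size-d index subsets.
    rng = rng or np.random.default_rng()
    M = np.eye(n, dtype=object)
    for _ in range(ell):
        ks = rng.choice(n, size=d, replace=False)
        M = embed(random_gldz(d, T, rng), ks, n).astype(object).dot(M)
    return M
```

As in the text, if the product length `ell` is too short, some rows and columns of the output are never touched by any embedded block.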

Next we turn to the following method, which among the algorithms we considered seems the best at rapidly creating uniformly-distributed entries of matrices in \(GL(n,{\mathbb {Z}})\). This algorithm was originally suggested to us by Joseph Silverman in a slightly different form, in which more coprimality conditions needed to be checked. It relies on the fact that the bottom \(n-1\) rows of an integral \(n\times n\) matrix \(M =(m_{ij})\) extend to an element of \(GL(n,{\mathbb {Z}})\) if and only if the n determinants of \((n-1)\times (n-1)\) minors

$$\begin{aligned} \det \begin{pmatrix} m_{22} & \cdots & m_{2n} \\ \vdots & \ddots & \vdots \\ m_{n2} & \cdots & m_{nn} \end{pmatrix}, \ \det \begin{pmatrix} m_{21} & m_{23} & \cdots & m_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n1} & m_{n3} & \cdots & m_{nn} \end{pmatrix}, \ \ldots , \ \det \begin{pmatrix} m_{21} & \cdots & m_{2,n-1} \\ \vdots & \ddots & \vdots \\ m_{n1} & \cdots & m_{n,n-1} \end{pmatrix} \end{aligned}$$
(2.4)

share no common factors.

figure h [Algorithm 4]

Remarks on Algorithm 4: The n large integers in (2.4) are unlikely to share a common factor: for example, the most probable common factor is 2, which occurs only with probability \({\approx }2^{-n}\). Obviously the top row of M is chosen differently than the others, and its size is different as well, since it typically has entries larger than T; this is because the Euclidean algorithm can produce large coefficients (the minors in (2.4) being themselves enormous). Also, it is likely that the first two or three minors will already be coprime, in which case the extended Euclidean algorithm sets most of the entries in \([m_{11}\,m_{12}\,\cdots \,m_{1n}]\) to zero. The use of rounding and least-squares cuts down this size and further randomizes the top row, while keeping the determinant equal to one.

One could instead try a different method to find an integral combination of the bottom \(n-1\) rows closer to the initial guess for the top row. One extreme possibility involves appealing to the Closest Vector Problem (CVP) itself, which is thought to be very difficult. We found Algorithm 4 gave good randomness properties in that nearly all of the matrix is equidistributed, and it is fairly fast to execute. In comparison, we will see that using Algorithm 2 requires many matrix multiplications to achieve random entries of a similar size, which are not as well distributed anyhow.
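The coprimality criterion underlying Algorithm 4 is easy to test in code. The sketch below (names are ours) computes the n minors of (2.4) with floating-point determinants, so it is only reliable for small entries; a real implementation would use exact integer arithmetic.

```python
import numpy as np
from math import gcd
from functools import reduce

def minor_dets(A):
    # A holds the bottom n-1 rows as an (n-1) x n integer matrix; return
    # the absolute values of the n minor determinants of (2.4).
    # (Floating-point determinants: reliable only for small entries.)
    n = A.shape[1]
    return [abs(round(np.linalg.det(np.delete(A, k, axis=1)))) for k in range(n)]

def extends_to_glnz(A):
    # The bottom n-1 rows extend to an element of GL(n, Z) exactly when
    # these minors share no common factor.
    return reduce(gcd, minor_dets(A)) == 1
```

For instance, the rows \((1,0,0)\) and \((0,1,0)\) pass the test, whereas \((2,0,0)\) and \((0,2,0)\) fail it (their minors are all divisible by 2 or vanish).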

The following algorithm is folklore and has appeared in various guises in many references (for example [5], which uses Gaussian sampling and has provable hardness guarantees,Footnote 6 though not necessarily for Problem 3). As we shall see just below, it shares some similarities with Algorithm 4.

figure i [Algorithm 5]

A surprising connection between Algorithms 4 and 5: Even though Algorithms 4 and 5 appear to be very different, they are actually extremely similar (in fact, arguably nearly identical) in practice. Algorithms for Hermite Normal Form (such as HermiteDecomposition in Mathematica) proceed by building the matrix M directly out of the rows of B whenever possible. For example, it is frequently the case that the first \(n-1\) rows of U agree with those of the identity matrix \(I_n\), or at least differ only very slightly; in other words, the first \(n-1\) rows of B and M are expected to coincide or nearly coincide.Footnote 7 Also, the last row of M is an integral combination of the first n rows of B. In contrast with Algorithm 4, however, this last combination is mainly determined by arithmetic considerations, and in particular depends on the n-th row of B; thus more random information is used than in Algorithm 4, which uses only \(n^2-n\) random integers instead of the \(n^2\) here.Footnote 8

To summarize, in fairly typical cases both Algorithms 4 and 5 populate the matrix M by first generating all but one row uniformly at random, and then using integral combinations to create a final row having relatively small entries. The practical distinction is essentially how this final row is created, which utilizes further random information in Algorithm 5 but not in Algorithm 4. The final row also appears to be typically smaller (that is, closer to fitting in the box defined in (2.1)) when using Algorithm 4 than when using Algorithm 5; consequently, we did not perform any experiments with Algorithm 5.

Note that the Hermite decomposition as stated above is not unique, since \(GL(n,{\mathbb {Z}})\) contains nontrivial lower triangular matrices. Thus there can be no immediate guarantee on the entry sizes of M unless this ambiguity is resolved. Algorithm 5 can be thought of as a p-adic analog of the following method of producing random rotations in O(n): apply the Gram-Schmidt orthogonalization process to a matrix chosen according to a probability density function (e.g., Gaussian) which is invariant under multiplication by O(n).
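The archimedean analogue just described is standard and compact. Assuming NumPy, a Haar-distributed element of O(n) can be produced by QR-factoring (i.e., Gram-Schmidt orthogonalizing) a Gaussian matrix and correcting the signs of R's diagonal:

```python
import numpy as np

def haar_orthogonal(n, rng=None):
    # Random rotation in O(n): QR-factor a Gaussian matrix, whose density
    # is invariant under left multiplication by O(n). Rescaling Q's
    # columns by the signs of R's diagonal removes the sign ambiguity in
    # the QR factorization, making the output exactly Haar-distributed.
    rng = rng or np.random.default_rng()
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))
```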

Remarks on an Algorithm in [22]: Igor Rivin makes the proposal in [22, §6.1] to generate matrices in \(GL(n,{\mathbb {Z}})\) by applying complete lattice basis reduction to a basis of \({\mathbb {R}}^n\) chosen inside a large ball. Let \(B\in GL(n,{\mathbb {R}})\) denote the \(n\times n\) matrix whose rows consist of this basis. Complete lattice reduction produces a random element \(\gamma \in GL(n,{\mathbb {Z}})\) of constrained size for which \(\gamma B\) lies in a fixed fundamental domain for \(GL(n,{\mathbb {Z}})\backslash GL(n,{\mathbb {R}})\).

This procedure is extremely slow, since complete lattice reduction is impractical in large dimensions. Rivin thus considers instead using weaker lattice basis reduction methods (such as LLL [15]) to speed this up, but at the cost of less-uniform distributions. For example, the results of LLL are thought to be skewed towards certain favored outputs avoiding “dark bases” [14]. Since our interest in generating random bases is to see how long incomplete lattice reduction takes on them, using lattice reduction to produce the bases in the first place is too slow for our purposes (hence we did not consider this algorithm in our experiments).

3 Experiments on Recognizing \({\mathbb {Z}}^n\)

In this section we report on attempts to solve Problem 2b on instances of matrices M generated using some of the algorithms from Sect. 2 for sampling \(GL(n,{\mathbb {Z}})\). We first note that Geissler and Smart [9] reported on attempts to solve Problem 2b on NTRU lattices using LLL [15] (as well as their own modification, for which they report up to a factor of four speedup), and concluded from lattice reduction heuristics that LLL itself is insufficient for NTRU instances with dimensions and matrix entry size far smaller than those considered in (3.2) below (see Appendix C). Nevertheless LLL performs fairly well on rotations of the \({\mathbb {Z}}^n\) lattice as compared to on a random lattice, which is not unexpected since the latter has shortest vector on the order of \(\sqrt{n}\) (as opposed to 1 for rotations of the \({\mathbb {Z}}^n\) lattice). Given that LLL typically outperforms its provable guarantees, it is not surprising it is fairly effective on Problem 2b.

Our main emphasis is that LLL and BKZ perform better on certain distributions with respect to Problem 2b than on others. Instead of LLL alone, we try the following:

figure j [testing procedure (3.1)]

We chose to use Magma’s built-in lattice basis reduction routines, partly because of slow running times with other implementations (such as fplll in SageMath) on matrices with very large integer entries. In step 2 one can of course continue further with block sizes larger than 5, but we fixed this as a stopping point in order to be systematic.

Our main finding is that Algorithm 2 in Sect. 2 (as implemented in Magma’s RandomSLnZ) is insecure for generating hard instances of Problem 2b. Algorithms 3, 4, and 5 fare much better. It is not surprising that Algorithm 5 (and the nearly-equivalent Algorithm 4) give harder instances, since there are provable guarantees attached to Algorithm 5 in a different context [5]; the serious difference between these and Algorithm 2 is described below and in Appendices A and B.

3.1 Experiments with Algorithm 2 (Magma’s RandomSLnZ Command)

We begin with some comments on entropy and generating random products with a constrained number of bits. To mimic random elements of \(GL(n,{\mathbb {Z}})\), one may desire that the product matrix has as many nonzero entries as possible per random bit. For this reason, our experiments set the parameter \(b=1\) in Algorithm 2 in order to take longer products (thereby further increasing the number of nonzero entries of the matrix), while keeping the number of random bits constant. When the product length is less than n, one expects to have rows or columns of the product matrix which are unchanged by the successive matrix multiplications. (This is much less likely to be the case for the Gram matrices, however.)

Thus each random factor has at most a single nonzero off-diagonal entry, which is \(\pm 1 \). It is prohibitive to pack in as many random bits as the total number of entries this way, since multiplication of large matrices is slow. As an extreme example, as part of a comparison with the last row of (C.3) we generated a random matrix in \(GL(1486,{\mathbb {Z}})\) using products of length 55,000, again with \(b=1\). Generating the product alone took about half a day. Its row lengths were between \(2^{14}\) and \(2^{20}\) in size. For comparison, an NTRU matrix with similar row lengths (as in Table C.3) uses 8,173 random bits. The comparison with NTRU is made here simply because concrete bit-strengths have been asserted for NTRU lattices; this is why we took the particular values of n in (3.2) (see Appendix C for more details). One might hypothesize that having more random bits in the matrix makes solving Problem 2b more difficult, but as we shall see this turns out not always to be the case: the structure of the matrix plays a very important role, and the product structure from Algorithm 2 seems to be a contributing weakness. In particular, the larger the value of the parameter b, the more unusual properties the product matrix possesses.

figure k

From the success of our trials one immediately sees that the Lenstra-Silverberg Problem 2b is fairly easy for matrices M generated by Magma’s RandomSLnZ command. (By contrast, it is known to be infeasible to solve Problem 2b using LLL or BKZ with small block sizes on NTRU matrices of the comparable sizes listed in (3.2) and (C.3), or even on much smaller ones.)

3.2 Experiments with Algorithm 3 (Random \(GL(d,{\mathbb {Z}})\) Matrices)

Next we consider matrices generated by Algorithm 3 (random \(GL(d,{\mathbb {Z}})\)’s), and find that for small d they are also cryptographically weak for the Lenstra-Silverberg problem, but stronger than those generated by Algorithm 2. Furthermore, we see their strength increases with increasing d.

The tables in Appendix A list the outcomes of several experiments attacking instances of Problem 2b for matrices M generated by Algorithm 3. One sees the dramatic effect of the product length \(\ell \). For example, if \(\ell \) is too short there may be rows and columns of the matrix not touched by the individual multiplications by the embedded random \(d\times d\) matrices; if \(\ell \) is too long, the matrix entries become large and lattice basis reduction becomes difficult.

3.3 Experiments with Algorithm 4

Finally, we turn to the opposite extreme of random elements of \(GL(n,{\mathbb {Z}})\) generated by Algorithm 4, in which the bottom \(n-1\) rows are uniformly distributed among entries in the range \([-T,T]\). Here we were able to solve Problem 2b with instances having \(n=100\), even with entry sizes up to \(T=50\) (again, using the testing procedure in (3.1)). However, none of our experiments with \(n\ge 110\) were successful at all, even with \(T=1\) (i.e., all entries below the top row are \(-1\), 0, or 1). See the tables in Appendix B for more details.

4 Random Basis Generation in the DRS NIST Post-Quantum Cryptography Competition Submission

In [3, §5.1.2] some examples of methods for generating random lattice bases are described, which are closely related to Algorithms 2, 3, and 5. The authors reported that their experiments on those methods resulted in similar outcomes in practice. Our experiments, however, do show a difference (as was explained in Sect. 3).

In this section we wish to make further comments about one method highlighted in [3], which is from the DRS NIST Post-Quantum competition submission [21, §2.2]. Random elements of \(GL(n,{\mathbb {Z}})\) there are constructed as products of length \(2R+1\) of the form

$$\begin{aligned} P_1 \gamma _1 P_2 \gamma _2 P_3 \gamma _3\cdots P_R \gamma _R P_{R+1}\,, \qquad \qquad \qquad \qquad \qquad {(4.1)} \end{aligned}$$

where \(P_1,\ldots ,P_{R+1}\) are chosen uniformly at random among permutation matrices in \(GL(n,{\mathbb {Z}})\) and \(\gamma _1,\ldots ,\gamma _R\) are elements in \(SL(n,{\mathbb {Z}})\) produced by the following random process. Let \(A_{+}=\left( \begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix} \right) \) and \(A_{-}=\left( \begin{smallmatrix} 1 & -1 \\ 0 & 1 \end{smallmatrix} \right) \). Then each \(\gamma _i\) is a block diagonal matrix with \(\frac{n}{2}\) \(2\times 2\) blocks chosen uniformly at random from \(\{A_+,A_{-}\}\). This construction has some similarities with Algorithm 3 for \(d=2\), but note that here many of the SL(2) matrices commute (being diagonal blocks of the same matrix). In fact, since \(A_+\) is conjugate to \(A_{-}\) by \(\left( \begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix} \right) \), one may replace each \(\gamma _j\) with the block diagonal matrix

$$D \ \ = \ \ {\text {diag}}(A_+,A_+,\ldots ,A_+)\,,$$

at the cost of allowing the \(P_i\)’s to be signed permutation matrices. Alternatively, by rearranging the permutation matrices and applying an extra rotation on the right, Problem 2b on matrices of the form (4.1) is equivalent to it on products of the form

$$\begin{aligned} M \ \ = \ \ M_1M_2\cdots M_R\,, \qquad \qquad \qquad \qquad \quad \qquad {(4.2)} \end{aligned}$$

in which each \(M_i\) is a conjugate of D by a random signed permutation matrix.
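To make the construction (4.1) concrete, here is our sketch in Python, assuming \(A_{\pm }\) denote the upper unipotent matrices with off-diagonal entry \(\pm 1\) (an assumption consistent with the conjugation remark above); the interface is hypothetical and n must be even.

```python
import numpy as np

def drs_like_basis(n, R, rng=None):
    # Sketch of the product (4.1). The blocks A_+ and A_- below are our
    # assumption (upper unipotents with entries +-1); n must be even.
    rng = rng or np.random.default_rng()
    A = {1: np.array([[1, 1], [0, 1]]), -1: np.array([[1, -1], [0, 1]])}

    def random_perm():
        # A uniformly random n x n permutation matrix.
        return np.eye(n, dtype=int)[rng.permutation(n)]

    def random_gamma():
        # Block diagonal with n/2 blocks drawn uniformly from {A_+, A_-}.
        G = np.zeros((n, n), dtype=int)
        for k in range(n // 2):
            G[2*k:2*k+2, 2*k:2*k+2] = A[int(rng.choice([1, -1]))]
        return G

    # The product P_1 g_1 P_2 g_2 ... P_R g_R P_{R+1} of (4.1).
    M = random_perm()
    for _ in range(R):
        M = M @ random_gamma() @ random_perm()
    return M
```

Each factor has determinant \(\pm 1\), so the output lies in \(GL(n,{\mathbb {Z}})\) by construction.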

Since Algorithm 3 with \(d=2\) performed relatively weakly in the experiments of Sect. 3, we suspect Problem 2b is relatively easy to solve on matrices generated using (4.1) (as compared to those, say, generated using Algorithm 4). The experiments described below bear this out. (All of our remaining comments in this section pertain solely to (4.1) in the context of Problem 2b, and not to any other aspect of [21].)

The parameters listed in [21, §3.2] assert 128-bit security for their scheme when \((n,R)=(912,24)\), 192-bit security when \((n,R)=(1160,24)\), and 256-bit security when \((n,R)=(1518,24)\). Our main finding is that the testing procedure (3.1) was able to recover M chosen with the 256-bit security parameters in 708 hours of running time. We could also recover M chosen with the 192-bit security parameters in 222 hours of running time but (as we describe below) could not fully recover M with the 128-bit security parameters.

Fig. 1.
figure 1

We experimentally tried to solve Problem 2b on instances generated by the random basis construction from the DRS NIST submission [21, §2.2], using its suggested parameters \((n,R)=(912,24)\) for 128-bit security. This failed with \(n=912\) and \(R=24\) itself (the gray bar on the right), but was successful for \(n=912\) and \(1\le R \le 23\). We were able to solve all cases for \(R\le 22\) in less than 60 hours using LLL alone, and the \(R=23\) case in slightly more time using the procedure in (3.1). We conclude that the method of random basis generation in the DRS digital signature scheme is insecure with the recommended parameter setting \((n,R)=(912,24)\), at least for Problem 2b. Times are shown for runs on a Dell PowerEdge R740xd server equipped with two Intel Xeon Silver 4114 2.2 GHz processors and 256 GB RAM.

The testing procedure (3.1) also easily solves Problem 2b when n or R are smaller yet still relatively large. For example, it took roughly an hour to recover M from \(MM^t\) when \((n,R)=(180,24)\) using BKZ with block sizes up to 26. In Fig. 1 we show the results of several experiments for the parameter choice of \(n=912\) and increasing values of R up to the recommended choice of \(R=24\) for 128-bit security. The results were strikingly successful, in that each trial for \(R\le 22\) successfully recovered M from \(MM^t\) using only LLL (without requiring BKZ). We additionally tried \(R=23\) and nearly recovered M using LLL this way: the longest vector in the LLL output had length \(\sqrt{7}\), and subsequently applying BKZ reduction with block size 3 for less than five minutes then fully recovered M. However, we were unsuccessful in the \(R=24\) case suggested in [21].

Again, these results are only for Problem 2b applied to the random basis construction used in the DRS digital signature scheme [21]; nevertheless, this may indicate a weakness in the digital signature scheme as well. Somewhat counterintuitively, our experiments for fixed values of the product length parameter R sometimes fared better for larger values of n. For example, we were successful with \((n,R)=(912,22)\) despite not being successful for \((n,R)=(200,22)\), and we were successful with \((n,R)=(1160,24)\) and (1518, 24) despite not being successful for \((n,R)=(912,24)\). Our explanation is that as n grows it becomes harder to randomly fill out the full matrix M, creating a weakness (a similar phenomenon occurs in Algorithms 2 and 3 for small \(\ell \)). Indeed, matrices of the form (4.1) seem to have a very special form: Fig. 2 shows the entry sizes in \(MM^t\) have a banded structure.

Fig. 2.
figure 2

Mathematica’s MatrixPlot command displays the nonzero entries of the Gram matrix \(MM^t\) as darkened pixels, where M was generated according to (4.1) with the recommended parameters \(n=912\) and \(R=24\) from [21]. Similarly banded plots arise when M is generated using Algorithm 3 with \(d=2\). In contrast, Gram matrices generated by Algorithm 4 have a (provably) far more uniform structure.

5 Conclusions

We have considered the role of generating random elements in \(GL(n,{\mathbb {Z}})\) in the difficulty of lattice problems, and have found that it can have a profound influence. Concretely, Magma’s RandomSLnZ command (Algorithm 2) gives easy instances of Lenstra-Silverberg’s “Recognizing \({\mathbb {Z}}^n\) Decision” Problem 2b from (1.2). We were able to successfully attack lattices of dimension up to 1,486, which are in some measurable ways comparable to NTRU lattices having claimed 256-bit quantum security. On the other hand, using the apparently stronger methods of Algorithms 3 and 4 makes Problem 2b much more difficult to solve (as expected).

We would thus recommend not using Algorithm 2 for generating random bases in cryptographic applications. We also recommend not using the random basis algorithm from the NIST Post-Quantum Competition submission DRS [21], because we were similarly able to solve Problem 2b on instances of its random basis generation method with its recommended parameters for 256-bit security.

We have not fully understood the weaknesses of these algorithms. It seems plausible that the failure to quickly fill out the matrix entries in a uniform way is at least partly to blame, since many entries do not get sufficiently randomized. The construction of Algorithm 2 in some sense reverses the steps of an LLL basis reduction, which might explain why LLL is particularly effective against it. More generally one might expect the block sizes in Algorithm 3 to be related to the block sizes in the BKZ algorithm. It is natural from this point of view to expect Algorithms 4 and 5 to be the strongest lattice basis generation algorithms considered in this paper, consistent with the results of our experiments.