1 Introduction

On repeated occasions [8, 22, 24, 33, 45], and over more than 30 years, attempts have been made to adapt the public-key encryption scheme of McEliece [25] from codes to lattices. More specifically, these works attempted to construct particularly good lattices with efficient decoding algorithms, to use them as secret keys, and to give a bad description of a similar lattice as the corresponding public key. For example, it was shown in [11] that the Chor-Rivest cryptosystem [8] implicitly relies on a family of lattices for which it is possible to efficiently decode errors up to a radius within a factor of \(O(\log n)\) from optimal (Minkowski's bound). For comparison, the decoding algorithms underlying schemes based on the Learning with Errors problem [39] (LWE) fall short of Minkowski's bound by polynomial factors; they essentially reduce decoding to the case of the trivial lattice \(\mathbb {Z}^n\).

This McEliece-like approach has unfortunately not been very popular lately. Perhaps it has suffered from the failure of the Merkle-Hellman knapsack-based cryptosystem [26, 43] more than it should have. Indeed, of the “knapsack era”, only the Merkle-Hellman cryptosystem and its variants were completely devastated by a polynomial-time attack [32]. In contrast, the best known attacks against the scheme of Chor and Rivest [8, 24] remain sub-exponential in the dimension n; what may be concerning is that those attacks are not pure lattice-reduction attacks. For both versions of this scheme, the best attack partially brute-forces the canonical coordinates. Lapiha [20] found that an Information Set Decoding attack is possible against the variant of Li et al. [24]. Brickell’s attack against the original scheme also relies on guessing a few canonical coordinates, inside an arithmetic attack [8, Sec. VII.5].

However, we note that these attacks are enabled by the fact that these schemes re-randomize the lattice only by applying a permutation of the coordinates. Such permutations are isometries, i.e. lattice isomorphisms, but they are not the only ones: the isometry group \({\mathcal {O}_n(\mathbb R)}\) acting on lattices is much larger than the one acting on codes, and applying a random isometry from this larger group should convincingly thwart those code-style attacks, since the canonical coordinate system becomes irrelevant.

All these remarks point toward the Lattice Isomorphism Problem (LIP) as a potential theoretical platform for finally getting this natural approach properly formalized, and hopefully, truly “lattice-based” in the cryptanalytic sense: the best known attack should be based on generic lattice reduction algorithms such as LLL [21] and BKZ [40]. The current state of the art on LIP supports this hypothesis: all known algorithms [17, 37, 38, 44] rely on finding short vectors. This is the case even for algorithms specialized to the trivial lattice \(\mathbb {Z}^n\) [46]. However, experimental studies [6] show that the basis randomization step requires care.

While instantiating LIP with \(\mathbb {Z}^n\) may already give rise to secure cryptosystems, the end goal of this work is to enable lattice-based cryptosystems that could be even more secure than those based on LWE and SIS, by instantiating the constructed schemes with remarkably decodable lattices.

1.1 Contributions

We propose a formal and convenient framework for LIP-based cryptography, from which we build an identification scheme based on search-LIP (\(\mathrm {sLIP}\)), a (passively secure) Key Encapsulation Mechanism (KEM) based on distinguish-LIP (\(\mathrm{{\Delta LIP}}\)), as well as a signature scheme, also based on \(\mathrm{{\Delta LIP}}\). In more detail:

  • We first discuss the LIP problem, recall the quadratic form formalism (Sect. 2.2), and rephrase the LIP problem in terms of quadratic forms to conveniently avoid real numbers. Then, thanks to Gaussian Sampling [12, 34], we define an average-case distribution for LIP and establish a worst-case to average-case reduction within an isomorphism class (Sect. 3). This addresses the concerns raised by Blanks and Miller [6], and formalizes their heuristic countermeasure.

  • The above cryptographic foundations are directly inspired by the Zero-Knowledge proof of lattice non-isomorphism of Haviv and Regev [16]. We further extend their techniques by proposing a Zero-Knowledge proof of knowledge (ZKPoK) of a lattice isomorphism (Sect. 4). This directly implies an identification scheme based on \(\mathrm {sLIP}\).

  • We propose a KEM scheme (Sect. 5) and a hash-then-sign signature scheme (Sect. 6), both based on \(\mathrm{{\Delta LIP}}\). Perhaps surprisingly, and unlike the original scheme of McEliece for codes, we circumvent the additional assumption that decoding a certain class of random lattices would be hard. This is done via a lossiness argument [36] for the KEM, and a dual argument for the signature scheme.

  • We review the state of the art for solving LIP (Sect. 7). In particular we note that all known algorithms go through lattice reduction, and we quantify the required approximation factor.

  • We discuss natural instantiations for each scheme (Sect. 8) from any remarkable lattice. This section handles the construction of the auxiliary lattice appearing in \(\mathrm{{\Delta LIP}}\), needed for the lossiness arguments to go through.

1.2 Potential Advantages

The KEM. To instantiate our KEM, consider a lattice L (w.l.o.g. of volume 1) such that:

  • the minimal distance is within a factor f from Minkowski’s bound: \(\lambda _1(L) \geqslant \varOmega (\sqrt{n}/f)\),

  • there exists an efficient algorithm that can decode errors in L up to a radius \(\rho \) within a factor \(f'\) of Minkowski’s bound: \(\rho \geqslant \varOmega (\sqrt{n}/f')\),

  • the dual minimal distance is within a factor \(f^*\) from Minkowski’s bound: \(\lambda _1(L^*) \geqslant \varOmega (\sqrt{n}/f^*)\).

Then, our instantiated KEM appears to resist lattice attacks down to an approximation factor \(O(\max (f, f^*) \cdot f^* \cdot f')\). More specifically, its security is based on \(\mathrm{{\Delta LIP}}\) for two lattices whose primals and duals are all within a factor \(O(\max (f, f^*) \cdot f^* \cdot f')\) from Minkowski’s bound.

The trivial lattice \(\mathbb {Z}^n\) gives all three factors \(f, f', f^*\) of the order \(\varTheta (\sqrt{n})\). The Barnes-Wall lattice [28] improves all three factors down to \(\varTheta (\sqrt[4]{n})\).

The endgame would be to instantiate with lattices for which all three factors would be very small. In particular, one would naturally turn to recent work on decoding the Chor-Rivest lattices [8, 11, 20, 24] and the Barnes-Sloane lattices [31], giving \(f = \mathrm {polylog}(n)\) and \(f' = \mathrm {polylog}(n)\), but unfortunately their duals are not as good: \(f^* \geqslant \varTheta (\sqrt{n})\). Indeed, all these constructions are integer lattices \(L \subset \mathbb {Z}^n\) with singly exponential volume \(\det (L) = c^n\): their duals \(L^*\) have a Minkowski bound of \(\varTheta (\sqrt{n} / \det (L)^{1/n}) = \varTheta (\sqrt{n}) \), but contain all of \(\mathbb {Z}^n \subset L^*\), including vectors of norm 1.

Note nevertheless that there is no geometric obstruction to the existence of the desired remarkably decodable lattices: random lattices have \(f = O(1)\) and \(f^* = O(1)\), so decoding is possible down to \(f'=O(1)\), but the best known algorithms are conjectured to take exponential time.

The Signature Scheme. The same principle also applies to our signature scheme, but this time with respect to Gaussian sampling rather than decoding: lattices with tight sampling (and large dual minimal distance) would lead to a scheme resisting attacks down to very small approximation factors. Alas, even ignoring the constraint on the dual lattice, we do not know of any lattice much better than \(\mathbb {Z}^n\) for efficient Gaussian sampling. Yet, instantiated with \(\mathbb {Z}^n\), our scheme still has an interesting feature: it does not have to deal with any Gram-Schmidt or Cholesky matrices over the reals. This may be a worthy practical advantage over current hash-then-sign signature schemes [12].

The Identification Scheme. Because \(\mathrm {sLIP}\) seems super-exponentially hard in the dimension for well chosen lattices (large kissing number), it might be secure to instantiate our ZKPoK with a rather small lattice dimension, maybe down to about a hundred. Yet, this is more a theoretical curiosity than a practical advantage—the protocol still needs soundness amplification, and each round requires exchanging \(\tilde{O}(n^2)\) bits.

1.3 Open Questions

A KEM with \(\mathrm {polylog}\)-Approximation Factor Security. Is there any family of lattices, such as those of [8, 11, 20, 24, 31], that can be efficiently decoded within a \(\mathrm {polylog}\) factor from Minkowski’s bound, but whose duals also have an equally large minimal distance?

Tight Gaussian Sampling for Signatures. Is there any family of lattices L (of volume 1) in which one can efficiently sample Gaussians with a small parameter \(\sigma = o(\sqrt{n})\), if not \(\sigma = \mathrm {polylog}(n)\) (with exponential smoothing \(\sigma > \eta _{2^{-n}}(L)\))? And if so, do they and their duals have a large minimal distance? Note that quantumly, this question is related to the previous one via the reduction of Regev [39]: decoding in the primal for a large radius gives Gaussian sampling in the dual for a small width. But a classical algorithm would be much preferable.

Concrete Instantiation with Simple Lattices. Instantiated with \(\mathbb {Z}^n\), our signature scheme has the advantage of not requiring any Gram-Schmidt or Cholesky decomposition, contrary to existing hash-then-sign signature schemes, and may therefore be of practical interest. It could also be reasonable to instantiate our KEM with the lattice of Barnes and Wall, thanks to the decoder of Micciancio and Nicolosi [28].

Module-LIP. Finally, it also seems natural to explore structured variants of LIP, where both the lattice and the isometry should be structured. We note that for any ideal lattice in complex-multiplication number fields, a classical polynomial-time algorithm is known [13, 23]. Could the module variant be secure? Can our constructions gain a linear factor on key sizes from this variant? And are there remarkably decodable lattices that are also ideals in certain number fields? The repeated-difference lattices (a.k.a. Craig’s lattices [9]) are indeed ideal lattices in cyclotomic number fields with large minimal distances, but a polynomial-time decoding algorithm for them remains to be discovered.

2 Preliminaries

2.1 Notation

Vectors \({\boldsymbol{x}}\) are denoted in bold and should be interpreted as column vectors. For a matrix B with columns \({\boldsymbol{b}}_1, \ldots , {\boldsymbol{b}}_n\) we denote its Gram-Schmidt orthogonalisation by \(B^*\) with columns \({\boldsymbol{b}}_1^*, \ldots , {\boldsymbol{b}}_n^*\), and we denote the matrix norm by \(\left\Vert B\right\Vert := \max _i \left\Vert {\boldsymbol{b}}_i\right\Vert _2\). We denote by \(\mathbb T_q\) the discretized torus \(\mathbb T_q := (\frac{1}{q} \mathbb Z) / \mathbb Z\) and identify it with its set of reduced representatives \(\{ 0, \frac{1}{q}, \ldots , \frac{q-1}{q} \}\). The statistical distance between two random variables X and Y will be denoted \(\varDelta (X, Y)\).

2.2 Lattice Isomorphism and Quadratic Forms

Abstractly, the set of (full-rank, n-dimensional) lattices can be thought of as the homogeneous space \({\mathcal {GL}_n(\mathbb R)}/ {\mathcal {GL}_n(\mathbb Z)}\): a lattice \(L = \mathcal L(B) := B \cdot \mathbb {Z}^n\) is generated by the columns of a basis \(B \in {\mathcal {GL}_n(\mathbb R)}\), and two bases \(B, B' \in {\mathcal {GL}_n(\mathbb R)}\) generate the same lattice if and only if there exists a unimodular matrix \(U \in {\mathcal {GL}_n(\mathbb Z)}\) such that \(B' = BU\).

Two lattices are isomorphic if there exists an orthonormal transformation \(O \in {\mathcal {O}_n(\mathbb R)}\) sending one to the other. Finding this transformation, if it exists, is known as the Lattice Isomorphism Problem (\(\mathrm {LIP}\)).

Definition 2.1

(\(\mathrm {LIP}\), lattice version). Given two isomorphic lattices \(\mathcal L, \mathcal L' \subset \mathbb {R}^n\) find an orthonormal transformation \(O \in \mathcal O_n(\mathbb {R})\) such that \(\mathcal L' = O \cdot \mathcal L\).

Algorithmically, lattices \(\mathcal L= \mathcal L(B), \mathcal L'=\mathcal L(B')\) are represented by bases \(B, B' \in \mathcal {GL}_n(\mathbb {R})\), and if \(\mathcal L' = O \cdot \mathcal L\), then OB is a basis of \(\mathcal L'\). If \(OB=B'\), then we can easily compute \(O := B'B^{-1}\); in general, however, OB will only equal \(B'\) up to some unimodular transformation. More specifically, when \(\mathcal L= \mathcal L(B)\) and \(\mathcal L' = \mathcal L(B')\) for lattice bases \(B, B' \in \mathcal {GL}_n(\mathbb {R})\), the Lattice Isomorphism Problem asks to find an orthonormal \(O \in \mathcal O_n(\mathbb {R})\) and a unimodular \(U \in \mathcal {GL}_n(\mathbb {Z})\) such that \(B' = OBU\). The presence of both the orthonormal and the unimodular transformation is what makes \(\mathrm {LIP}\) a hard problem. In other words, reconstructing (or even testing) equivalence in either quotient \({\mathcal {GL}_n(\mathbb R)}/{\mathcal {GL}_n(\mathbb Z)}\) or \({\mathcal {O}_n(\mathbb R)}\backslash {\mathcal {GL}_n(\mathbb R)}\) is easy, while doing so in the double quotient \({\mathcal {O}_n(\mathbb R)}\backslash {\mathcal {GL}_n(\mathbb R)}/ {\mathcal {GL}_n(\mathbb Z)}\) appears to be hard.

The real-valued coordinates of the basis and orthonormal transformation can be inconvenient and inefficient to work with. We can alleviate some of these concerns by moving to the (equivalent) quadratic form setting, where instead of a basis B we focus on the Gram matrix \(Q = B^tB\).

Quadratic Forms and Integral Equivalence. The idea of the quadratic form point of view on LIP is to take the quotients in the opposite order to the lattice point of view: first on the left by \({\mathcal {O}_n(\mathbb R)}\), and only then on the right by \({\mathcal {GL}_n(\mathbb Z)}\).

We define a quadratic form as a positive definite real symmetric matrix. A quadratic form can be thought of as a basis modulo rotation; quadratic forms realize the quotient \({\mathcal {O}_n(\mathbb R)}\backslash {\mathcal {GL}_n(\mathbb R)}\). More precisely, consider the surjective Gram map \(\gamma : {\mathcal {GL}_n(\mathbb R)}\rightarrow {\mathcal {S}^{>0}_n(\mathbb R)}\) sending a lattice basis B to the quadratic form \(Q = B^tB\). Note that the preimages of \(\gamma (B)\) are precisely the OB for \(O \in {\mathcal {O}_n(\mathbb R)}\).

For a lattice basis B the Gram matrix \(Q=B^tB\) naturally gives a quadratic form. Additionally every quadratic form Q induces a unique upper-triangular lattice basis \(B_Q\) such that \(Q=B_Q^tB_Q\) (Cholesky decomposition). In the quadratic form setting lattice vectors \(B{\boldsymbol{x}}\in \mathbb {R}^n\) are represented by their integral basis coefficients \({\boldsymbol{x}}\in \mathbb {Z}^n\). The inner product with respect to a quadratic form is naturally given by \(\langle {\boldsymbol{x}}, {\boldsymbol{y}}\rangle _Q := {\boldsymbol{x}}^tQ{\boldsymbol{y}}\), and the norm by \(\left\Vert {\boldsymbol{x}}\right\Vert ^2_Q :={\boldsymbol{x}}^tQ{\boldsymbol{x}}\). Note that this perfectly coincides with the geometry between the original lattice vectors. We denote the ball of radius r by \(\mathcal B_Q(r) := \{ {\boldsymbol{x}}\in \mathbb {R}^n : \left\Vert {\boldsymbol{x}}\right\Vert _Q \leqslant r \}\). Translating the lattice definition, one gets the first minimum \(\lambda _1(Q)\) defined by

$$\begin{aligned} \lambda _1(Q) := \min \limits _{{\boldsymbol{x}}\in \mathbb {Z}^n \setminus \{ 0\}} \left\Vert {\boldsymbol{x}}\right\Vert _Q, \end{aligned}$$

and more generally the i-th minimal distance \(\lambda _i(Q)\) defined as the smallest \(r>0\) such that \(\{{\boldsymbol{x}}\in \mathbb {Z}^n \mid \left\Vert {\boldsymbol{x}}\right\Vert _Q \leqslant r\}\) spans a space of dimension at least i.
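For intuition, the first minimum can be brute-forced in tiny dimensions by enumerating \({\boldsymbol{x}}\in \mathbb {Z}^n\) in a small box. The following illustrative sketch (the function name and the box size are ours; a fixed box is only exhaustive for small, well-reduced forms, and real algorithms use enumeration or sieving) computes \(\lambda _1(Q)^2\):

```python
from itertools import product

def lam1_sq(Q, box=3):
    """Brute-force the squared first minimum of an integer quadratic form Q
    by enumerating all nonzero x in {-box..box}^n (toy dimensions only)."""
    n = len(Q)
    best = None
    for x in product(range(-box, box + 1), repeat=n):
        if all(c == 0 for c in x):
            continue
        # ||x||_Q^2 = x^t Q x
        norm_sq = sum(x[i] * Q[i][j] * x[j] for i in range(n) for j in range(n))
        best = norm_sq if best is None else min(best, norm_sq)
    return best

# The hexagonal-type form Q = [[2, 1], [1, 2]] has lambda_1(Q)^2 = 2,
# attained e.g. at x = (1, 0) and x = (1, -1).
print(lam1_sq([[2, 1], [1, 2]]))  # 2
```

Note how the computation never leaves the integers: all the geometry of the lattice is carried by Q.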

In this realization \({\mathcal {S}^{>0}_n(\mathbb R)}\) of the quotient \({\mathcal {O}_n(\mathbb R)}\backslash {\mathcal {GL}_n(\mathbb R)}\), the action of \(U \in {\mathcal {GL}_n(\mathbb Z)}\) is given by \(Q \mapsto U^t Q U\). We may now rephrase \(\mathrm {LIP}\) for two lattice bases B and \(B'\). Note that if \(B' = OBU\), then for \(Q := B^tB\) we have:

$$Q' := (B')^tB' = U^tB^tO^tOBU = U^tB^tBU = U^tQU,$$

and we call Q and \(Q'\) equivalent if such a unimodular \(U \in \mathcal {GL}_n(\mathbb {Z})\) exists; we denote the equivalence class by \([Q ]\). This moves the real-valued orthonormal transform \(O \in {\mathcal {O}_n(\mathbb R)}\) out of the picture. Additionally, many remarkable lattices admit a rational-valued Gram matrix Q, removing the need for real-valued or approximate arithmetic. Later in this work we will restrict ourselves to integer-valued quadratic forms.
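The equivalence \(Q' = U^t Q U\) is easy to exercise in code. The sketch below (helper names are ours; generating U as a product of random shears is purely illustrative and is *not* a safe key-generation procedure, which is precisely the concern of Sect. 3) produces an equivalent form and checks two invariants of the class, symmetry and the determinant:

```python
import random

def mul(A, B):
    """Exact integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def random_unimodular(n, steps=30, rng=random):
    """Element of GL_n(Z) built as a product of elementary shears
    (column_i += c * column_j), so det(U) = +1 by construction."""
    U = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        c = rng.randint(-2, 2)
        for k in range(n):
            U[k][i] += c * U[k][j]
    return U

# Q' = U^t Q U is an equivalent public form; U plays the role of a secret.
rng = random.Random(42)
Q = [[2, 1], [1, 2]]
U = random_unimodular(2, rng=rng)
Ut = [list(r) for r in zip(*U)]
Qp = mul(Ut, mul(Q, U))
```

Recovering U from the pair \((Q, Q')\) is exactly \(\mathrm {wc\hbox {-}sLIP}^Q\).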

Weaker Equivalence (Genus). The study of integral equivalence of quadratic forms is classically approached via weaker notions, namely, equivalence over larger rings [9, Chapter 15, Sec 4]. In particular, we shall consider the rational equivalence class \([Q]_\mathbb {Q}\) of all \(U^t Q U\) for \(U \in \mathcal {GL}_n(\mathbb {Q})\), as well as the p-adic integer equivalence class \([Q]_{\mathbb {Z}_p}\) of all \(U^t Q U\) for \(U \in \mathcal {GL}_n(\mathbb {Z}_p)\). These equivalences are coarser than integral equivalence: \([Q] = [Q'] \Rightarrow [Q]_\mathbb {Q}= [Q']_\mathbb {Q}\) and \([Q]_{\mathbb {Z}_p} = [Q']_{\mathbb {Z}_p}\). These data \(([Q]_\mathbb {Q}, ([Q]_{\mathbb {Z}_p})_p)\) about a quadratic form are called the genus of the quadratic form.

One could also consider equivalence over the reals \(\mathbb {R}\), or over the p-adic rationals \(\mathbb {Q}_p\). By a local-global principle (Minkowski-Hasse Theorem [42, Thm. 9, pp. 44]) these data are redundant with the rational class \([Q]_\mathbb {Q}\).

The Lattice Isomorphism Problem, Quadratic Form Formulation. The Lattice Isomorphism Problem can now be restated. We start by properly defining the worst-case problems, in both a search and distinguishing variant.

Definition 2.2

(\(\mathrm {wc\hbox {-}sLIP}^Q\)). For a quadratic form \(Q \in {\mathcal {S}^{>0}_n}\) the problem \(\mathrm {wc\hbox {-}sLIP}^Q\) is, given any quadratic form \(Q' \in [ Q ]\), to find a unimodular \(U \in {\mathcal {GL}_n(\mathbb Z)}\) such that \(Q' = U^t Q U\).

Note that the problem is equivalent to the original \(\mathrm {LIP}\) problem as we can still extract an orthonormal transformation by computing \(O = B'(BU)^{-1}\). Moreover, the automorphism group \(\mathrm {Aut}(Q) := \{ V \in {\mathcal {GL}_n(\mathbb Z)}: V^tQV=Q \}\) is finite, and for any solution \(U \in {\mathcal {GL}_n(\mathbb Z)}\) to \(\mathrm {wc\hbox {-}sLIP}^Q\) such that \(Q' = U^t Q U\), the full set of solutions is given by \(\{ VU : V \in \mathrm {Aut}(Q)\}\).
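The finiteness of \(\mathrm {Aut}(Q)\) can be made concrete in tiny dimensions: since \(V^tQV = Q\) forces \(\det (V) = \pm 1\), and automorphisms of a well-reduced form have small entries, a bounded exhaustive search finds them all. A toy sketch (function name and entry bound are ours):

```python
from itertools import product

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def automorphisms(Q, bound=1):
    """Toy enumeration of Aut(Q) = { V : V^t Q V = Q } over integer
    matrices with entries in [-bound, bound]. Any hit automatically has
    det(V) = +/-1, hence lies in GL_n(Z). Exhaustive only for tiny,
    well-reduced forms whose automorphisms have small entries."""
    n = len(Q)
    auts = []
    for flat in product(range(-bound, bound + 1), repeat=n * n):
        V = [list(flat[i * n:(i + 1) * n]) for i in range(n)]
        Vt = [list(r) for r in zip(*V)]
        if mul(Vt, mul(Q, V)) == Q:
            auts.append(V)
    return auts

# Aut of the square lattice Z^2 (Q = I) is the 8 signed permutation
# matrices; the hexagonal-type form has the dihedral group of order 12.
print(len(automorphisms([[1, 0], [0, 1]])))  # 8
```

For a solution U of \(\mathrm {wc\hbox {-}sLIP}^Q\), composing with each of these V enumerates the full solution set \(\{VU\}\).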

We also consider a distinguishing variant of LIP, denoted \(\mathrm {wc\hbox {-}\Delta LIP}\). It is not to be confused with the decisional version of LIP (which we will refer to as d\(\mathrm {LIP}\)).

Definition 2.3

(\(\mathrm {wc\hbox {-}\Delta LIP}^{Q_0, Q_1}\)). For two quadratic forms \(Q_0, Q_1 \in {\mathcal {S}^{>0}_n}\) the problem \(\mathrm {wc\hbox {-}\Delta LIP}^{Q_0, Q_1}\) is, given any quadratic form \(Q' \in [Q_b]\) where \(b \in \{0, 1\}\) is a uniform random bit, to find b.

Because (part of) the genus is efficiently computable (see Sect. 7), we will make sure that \([Q_0]_R = [Q_1]_R\) for all relevant ring extensions R.

Hardness Statements. When we discuss the hardness of LIP problems, we will implicitly assume that we are not talking of a single quadratic form Q (or of a single pair \((Q_0, Q_1)\) for \(\mathrm{{\Delta LIP}}\)), but of a family \((Q_n)_n\) (or a family of pairs \((Q_{0,n}, Q_{1,n})_n\) for \(\mathrm{{\Delta LIP}}\)) where n ranges over an infinite set of positive integers.

2.3 Discrete Gaussians and Sampling

Discrete Gaussian sampling has been fundamental to the development of lattice-based cryptography, as it allows returning short or nearby lattice vectors without leaking information about the secret key [12]. We rephrase the relevant definitions and propositions in the quadratic form language.

Distribution. For any quadratic form \(Q \in {\mathcal {S}^{>0}_n}\) we define the Gaussian function on \(\mathbb {R}^n\) with parameter \(s>0\) and center \({\boldsymbol{c}}\in \mathbb {R}^n\) by

$$\begin{aligned} \forall {\boldsymbol{x}}\in \mathbb {R}^n, \rho _{Q,s,{\boldsymbol{c}}}({\boldsymbol{x}}) := \exp (-\pi \left\Vert {\boldsymbol{x}}-{\boldsymbol{c}}\right\Vert _Q^2/s^2). \end{aligned}$$

The discrete Gaussian distribution is obtained by restricting the continuous Gaussian distribution to a discrete lattice. In the quadratic form setting the discrete lattice will always be \(\mathbb {Z}^n\), but with the geometry induced by the quadratic form. For any quadratic form \(Q \in {\mathcal {S}^{>0}_n}\) we define the discrete Gaussian distribution \(\mathcal D_{Q,s,{\boldsymbol{c}}}\) with center \({\boldsymbol{c}}\in \mathbb {R}^n\) and parameter \(s>0\) by

$$\begin{aligned} \mathop {\Pr }\limits _{X \sim \mathcal D_{Q,s,{\boldsymbol{c}}}} \left[ X = {\boldsymbol{x}}\right]&:= \frac{\rho _{Q,s,{\boldsymbol{c}}}({\boldsymbol{x}})}{\rho _{Q,s, {\boldsymbol{c}}}(\mathbb {Z}^n)} \text { if }{\boldsymbol{x}}\in \mathbb {Z}^n, \text {and}\; 0\; \text {otherwise.} \end{aligned}$$

If the center \({\boldsymbol{c}}\) is omitted, then \({\boldsymbol{c}}=\boldsymbol{0}\). An important property of the discrete Gaussian distribution is the smoothing parameter, i.e. how much Gaussian noise \(s > 0\) is needed to ‘smooth out’ the discrete structure.
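In one dimension the distribution just defined can be sampled naively; an illustrative sketch (the function name, the truncation at ten widths, and the product-form remark are ours):

```python
import math
import random

def discrete_gaussian_1d(s, c=0.0, rng=random):
    """Toy sampler for the one-dimensional discrete Gaussian D_{Z,s,c}:
    enumerate a truncated support (mass beyond ~10 widths is negligible)
    and sample by normalized weights. For a diagonal form
    Q = diag(q_1, ..., q_n), D_{Q,s} is a product of one-dimensional
    Gaussians with parameters s / sqrt(q_i). Production samplers avoid
    both the truncation and the floating-point normalization."""
    tail = int(math.ceil(10 * s)) + 1
    support = list(range(int(math.floor(c)) - tail,
                         int(math.ceil(c)) + tail + 1))
    weights = [math.exp(-math.pi * (k - c) ** 2 / s ** 2) for k in support]
    return rng.choices(support, weights=weights)[0]
```

For a general (non-diagonal) Q this naive approach breaks down, which is exactly why the samplers of Sect. 2.3 (Klein/GPV-style) are needed.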

Definition 2.4

(Smoothing Parameter). For a quadratic form \(Q \in {\mathcal {S}^{>0}_n}\) and \(\epsilon >0\) we define the smoothing parameter \(\eta _\epsilon (Q)\) as the minimal \(s > 0\) such that \(\rho _{Q^{-1},1/s}(\mathbb {Z}^n) \leqslant 1+\epsilon \).

The smoothing parameter is a central quantity for Gaussians over lattices; for example, it permits controlling the variation of \(\rho _{Q,s,{\boldsymbol{c}}}(\mathbb {Z}^n)\) over all centers \({\boldsymbol{c}}\).

Lemma 2.5

([29]). For any quadratic form \(Q \in {\mathcal {S}^{>0}_n}\), \(\epsilon > 0\), center \({\boldsymbol{c}}\in \mathbb {R}^n\) and parameter \(s > \eta _\epsilon (Q)\) we have:

$$\begin{aligned} (1-\epsilon ) \frac{s^n}{\sqrt{\det (Q)}} \leqslant \rho _{Q,s,{\boldsymbol{c}}}(\mathbb {Z}^n) \leqslant (1+\epsilon ) \frac{s^n}{\sqrt{\det (Q)}}. \end{aligned}$$
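Lemma 2.5 is easy to check numerically in one dimension. The sketch below (function name and truncation are ours) takes \(Q = (1)\), i.e. the lattice \(\mathbb {Z}\) with \(\det (Q) = 1\), and a parameter \(s = 2\) comfortably above smoothing, so the mass should be \(s^n/\sqrt{\det (Q)} = s\) up to a tiny relative error, whatever the center:

```python
import math

def rho_sum_1d(s, c, tail=60):
    """Truncated Gaussian mass rho_{s,c}(Z) = sum_k exp(-pi (k - c)^2 / s^2)
    for the one-dimensional form Q = (1)."""
    center = int(round(c))
    return sum(math.exp(-math.pi * (k - c) ** 2 / s ** 2)
               for k in range(center - tail, center + tail + 1))

# The mass is ~2.0 for every center, as Lemma 2.5 predicts.
for c in (0.0, 0.25, 0.5, 0.77):
    assert abs(rho_sum_1d(2.0, c) - 2.0) < 1e-3
```

(By Poisson summation the relative error here is of order \(e^{-\pi s^2}\), far below the asserted tolerance.)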

Note that the smoothing parameter \(\eta _\epsilon (Q)\) is an invariant of the equivalence class \([Q]\), and so we may also write \(\eta _\epsilon ([ Q ])\). While computing or even approximating the exact smoothing parameter is hard, we can obtain sufficient bounds via the dual form.

Lemma 2.6

(Smoothing bound [29]). For any quadratic form \(Q \in {\mathcal {S}^{>0}_n}\) we have \(\eta _{2^{-n}}(Q) \leqslant \sqrt{n} / \lambda _1(Q^{-1})\) and \(\eta _\epsilon (Q) \leqslant \Vert B_Q^*\Vert \cdot \sqrt{\ln (2n(1+1/\epsilon ))/\pi }\) for \(\epsilon > 0\).

Above the smoothing parameter, the discrete Gaussian distribution is in some sense ‘well behaved’, and we have the tail bound that one would expect from a Gaussian distribution.

Lemma 2.7

(Tail bound [30, Lemma 4.4]). For any quadratic form \(Q \in {\mathcal {S}^{>0}_n}\), \(\epsilon \in (0,1)\), center \({\boldsymbol{c}}\in \mathbb {R}^n\) and parameter \(s \geqslant \eta _\epsilon (Q)\), we have

$$\begin{aligned} \mathop {\Pr }\limits _{X \sim \mathcal D_{Q,s,{\boldsymbol{c}}}} \left[ \left\Vert X - {\boldsymbol{c}}\right\Vert _Q \geqslant s \sqrt{n} \right] \leqslant \frac{1+\epsilon }{1-\epsilon } \cdot 2^{-n}. \end{aligned}$$

A constant factor above the smoothing parameter, we can furthermore lower bound the min-entropy of the distribution.

Lemma 2.8

(Min-entropy [35]). For any quadratic form \(Q \in {\mathcal {S}^{>0}_n}\), positive \(\epsilon >0\), center \({\boldsymbol{c}}\in \mathbb {R}^n\), parameter \(s \geqslant 2\eta _\epsilon (Q)\), and for every \({\boldsymbol{x}}\in \mathbb {Z}^n\), we have

$$\begin{aligned} \mathop {\Pr }\limits _{X \sim \mathcal D_{Q,s,{\boldsymbol{c}}}} \left[ X = {\boldsymbol{x}}\right] \leqslant \frac{1+\epsilon }{1-\epsilon } \cdot 2^{-n}. \end{aligned}$$

Gaussian Sampling. While the discrete Gaussian distribution already is an important theoretical tool, for many practical purposes we want to actually sample (close to) the distribution in an efficient manner. In their breakthrough work, Gentry et al. [12] showed that Klein's randomized variant [19] of Babai's nearest-plane algorithm does exactly that. Given a lattice basis, one can sample statistically close to the discrete Gaussian distribution with a parameter depending on the shortness of the (Gram-Schmidt) basis; a better-reduced basis allows for a lower Gaussian width s. To simplify later proofs we use an exact sampling algorithm by Brakerski et al. [7].

Lemma 2.9

(Discrete Sampling [7, Lemma 2.3]). There is a polynomial-time algorithm \(\mathrm {\mathbf {DiscreteSample}}(Q, s, {\boldsymbol{c}})\) that given a quadratic form \(Q \in {\mathcal {S}^{>0}_n}\), center \({\boldsymbol{c}}\in \mathbb {R}^n\), and a parameter \(s \geqslant \Vert B_Q^*\Vert \cdot \sqrt{\ln (2n+4)/\pi }\), returns a sample distributed as \(\mathcal D_{Q,s,{\boldsymbol{c}}}\).

2.4 Randomness Extractors

A randomness extractor allows one, using a publicly known random seed, to convert a non-uniform randomness source X with high min-entropy \(H_\infty (X) := -\log _2(\max _x \Pr [X=x])\) into a near-uniform random variable [3, 15].

Definition 2.10

(Extractor). An efficient function \(\mathcal E: \mathcal X \times \{0,1\}^z \rightarrow \{0,1\}^v\) is an \((m, \epsilon )\)-extractor if, for every random variable X distributed over \(\mathcal X\) with \(H_\infty (X) \geqslant m\), it holds that

$$\begin{aligned} {{\,\mathrm{\Delta }\,}}\big ( (Z,\mathcal E(X, Z)) , (Z, V)\big ) \leqslant \epsilon \end{aligned}$$

where the seed \(Z \leftarrow \mathcal U(\{0,1\}^z)\) and \(V \leftarrow \mathcal U(\{0,1\}^v)\) are drawn uniformly at random, and independently of X.

When instantiating our scheme, we will rely on the existence of an \((m, \epsilon )\)-extractor with parameters \(m = \varTheta (v)\) and \(\epsilon = 2^{-\varTheta (m)}\).
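One standard way to realize such an extractor is Toeplitz hashing, a 2-universal family that the leftover hash lemma turns into an \((m, \epsilon )\)-extractor for suitable parameters. The sketch below is illustrative only (function name and parameter choices are ours; picking \(m, v, \epsilon \) correctly via the leftover hash lemma is out of scope here):

```python
import random

def toeplitz_extract(x_bits, seed_bits, v):
    """Extractor sketch from Toeplitz hashing: the seed defines a v x m
    Toeplitz matrix over GF(2) (row i is the window seed_bits[i : i + m]),
    and the output is its product with x over GF(2)."""
    m = len(x_bits)
    assert len(seed_bits) == m + v - 1, "Toeplitz seed needs m + v - 1 bits"
    out = []
    for i in range(v):
        row = seed_bits[i:i + m]
        # Output bit i is the GF(2) inner product of row i with x.
        out.append(sum(r & b for r, b in zip(row, x_bits)) % 2)
    return out

# Tiny demo with a fixed seed; note the map is linear in x over GF(2).
rng = random.Random(7)
m, v = 32, 8
seed = [rng.randrange(2) for _ in range(m + v - 1)]
x = [rng.randrange(2) for _ in range(m)]
digest = toeplitz_extract(x, seed, v)
```

A design remark: because each output bit is a parity, the whole map is GF(2)-linear in the source, which makes the family easy to analyze and to implement in hardware.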

3 LIP and Self-reducibility

In this section we lay the foundation for using the Lattice Isomorphism Problem in cryptography. We present an average-case distribution for any quadratic form equivalence class, show how to sample from it, and conclude with a worst-case to average-case reduction. Note that the worst-case to average-case reduction is realized within an equivalence class.

3.1 An Average-Case Distribution

First we define our average-case distribution within an equivalence class \([Q ]\), which can be seen as an extension of the techniques used by Haviv and Regev [17] to show that LIP lies in SZK. While in their work they use a discrete Gaussian sampler [12] to sample a generating set of the lattice, we extend this with a linear algebra step that returns a canonically distributed lattice basis—or in our case, a quadratic form.

A posteriori, this algorithm appears very similar to the heuristic approach of [6], but the use of Gaussian sampling formally guarantees that the output distribution solely depends on the lattice and not on the specific input basis—or in our case, depends only on the class of the input quadratic form.

First we consider the linear algebra step which, given a quadratic form and short linearly independent vectors, returns a well-reduced equivalent form.

Lemma 3.1

(Adapted from [27, Lemma 7.1]). There is a polynomial time algorithm \((R,U) \leftarrow \mathrm {\mathbf {Extract}}(Q, Y)\) that on input a quadratic form Q, and linearly independent vectors \(Y = ({\boldsymbol{y}}_1, \ldots , {\boldsymbol{y}}_n) \in \mathbb {Z}^{n \times n}\), outputs a transformation \(U \in {\mathcal {GL}_n(\mathbb Z)}\) and a quadratic form \(R=U^tQU\) equivalent to Q such that \(\left\Vert B_R^*\right\Vert \leqslant \max _i \left\Vert {\boldsymbol{y}}_i\right\Vert _Q\).

Proof

First let \(U \in {\mathcal {GL}_n(\mathbb Z)}\) be the unique transformation such that \(T = U^{-1}Y\) is the canonical upper-triangular Hermite Normal Form of Y. Let \(R = U^tQU\) and note that R is equivalent to Q. Denote the column vectors of U by \({\boldsymbol{u}}_1, \ldots , {\boldsymbol{u}}_n\). Because T is upper triangular and in Hermite Normal Form we have \({\boldsymbol{y}}_i = \sum _{j=1}^i T_{j,i} {\boldsymbol{u}}_j\), where \(T_{j,j} \geqslant 1\). In particular we have that \(\mathrm {span}({\boldsymbol{y}}_1, \ldots , {\boldsymbol{y}}_k) = \mathrm {span}({\boldsymbol{u}}_1, \ldots , {\boldsymbol{u}}_k)\). Let \({\boldsymbol{y}}_i^*\) and \({\boldsymbol{u}}_i^*\) be the i-th Gram-Schmidt vectors of Y and U respectively w.r.t. Q. Note that \({\boldsymbol{y}}_i^* = T_{i,i} \cdot {\boldsymbol{u}}_i^*\), and thus \(\left\Vert {\boldsymbol{u}}_i^*\right\Vert _Q = \left\Vert {\boldsymbol{y}}_i^*\right\Vert _Q / T_{i,i} \leqslant \left\Vert {\boldsymbol{y}}_i^*\right\Vert _Q \leqslant \left\Vert {\boldsymbol{y}}_i\right\Vert _Q\). We conclude by \(\left\Vert B_R^*\right\Vert = \max _i \left\Vert {\boldsymbol{u}}_i^*\right\Vert _Q \leqslant \max _i \left\Vert {\boldsymbol{y}}_i\right\Vert _Q.\)    \(\square \)
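The extraction step can be sketched in code. The version below (all names are ours; exact integer and rational arithmetic, toy dimensions) brings Y to an upper-triangular T with positive diagonal via elementary integer row operations while maintaining \(Y = U T\); the off-diagonal reduction that would make T the *canonical* HNF (needed for Lemma 3.2) is omitted, as it does not affect the bound of Lemma 3.1:

```python
from fractions import Fraction

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def extract(Q, Y):
    """Sketch of Extract(Q, Y): returns (R, U) with R = U^t Q U and
    Y = U T for an upper-triangular T with positive diagonal."""
    n = len(Y)
    T = [row[:] for row in Y]
    U = [[int(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            while T[i][j] != 0:  # Euclid on (T[j][j], T[i][j])
                q = T[j][j] // T[i][j]
                T[j] = [a - q * b for a, b in zip(T[j], T[i])]
                for k in range(n):          # col_i(U) += q * col_j(U)
                    U[k][i] += q * U[k][j]
                T[j], T[i] = T[i], T[j]     # swap rows of T ...
                for k in range(n):          # ... and columns of U
                    U[k][j], U[k][i] = U[k][i], U[k][j]
        if T[j][j] < 0:                     # normalize the diagonal
            T[j] = [-a for a in T[j]]
            for k in range(n):
                U[k][j] = -U[k][j]
    Ut = [list(r) for r in zip(*U)]
    return mul(Ut, mul(Q, U)), U

def gs_norms_sq(R):
    """Squared Gram-Schmidt norms of any basis with Gram matrix R,
    via an exact LDL^t decomposition: R = L D L^t, norms are the D_j."""
    n = len(R)
    L = [[Fraction(0)] * n for _ in range(n)]
    D = []
    for j in range(n):
        D.append(Fraction(R[j][j]) - sum(L[j][k] ** 2 * D[k] for k in range(j)))
        L[j][j] = Fraction(1)
        for i in range(j + 1, n):
            L[i][j] = (Fraction(R[i][j])
                       - sum(L[i][k] * L[j][k] * D[k] for k in range(j))) / D[j]
    return D

Q = [[2, 1], [1, 2]]
Y = [[4, 7], [2, 5]]   # columns y_1 = (4, 2), y_2 = (7, 5)
R, U = extract(Q, Y)
```

The guarantee of Lemma 3.1 then reads: \(\max _j D_j \leqslant \max _i \left\Vert {\boldsymbol{y}}_i\right\Vert ^2_Q\), which the test below checks on this toy instance.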

For our final distribution to be well defined we need that the extracted quadratic form depends only on the geometry of the input vectors, and not on the particular representative Q.

Lemma 3.2

(Independence of representative). Let \({\boldsymbol{y}}_1, \ldots , {\boldsymbol{y}}_n \in \mathbb {Z}^n\) be linearly independent. If \((R, U) \leftarrow \mathrm {\mathbf {Extract}}(Q, Y)\), and for some unimodular \(V \in {\mathcal {GL}_n(\mathbb Z)}\) we have \((R', U') \leftarrow \mathrm {\mathbf {Extract}}(V^tQV, V^{-1}Y)\), then \(R'=R\), and \(U' = V^{-1} \cdot U\).

Proof

From the canonicity of the Hermite Normal Form we immediately obtain that \((U')^{-1}V^{-1}Y = T = U^{-1}Y\), and thus \(U' = V^{-1} \cdot U\). It follows that \(R' = (V^{-1} \cdot U)^t V^tQV (V^{-1} \cdot U) = U^tQU = R\).    \(\square \)

Now we can formally define our average-case distribution for a parameter \(s > 0\).

Definition 3.3

Given a quadratic form equivalence class \([Q ]\subset {\mathcal {S}^{>0}_n}\) we define the Gaussian form distribution \(\mathcal D_s([Q ])\) over \([Q ]\) with parameter \(s > 0\) algorithmically as follows:

  1. Fix a representative \(Q \in [Q ]\).

  2. Sample n vectors \(({\boldsymbol{y}}_1, \ldots , {\boldsymbol{y}}_n) =: Y\) from \(\mathcal D_{Q,s}\). Repeat until linearly independent.

  3. \((R,U) \leftarrow \mathrm {\mathbf {Extract}}(Q, Y)\).

  4. Return R.

By Lemma 3.2 the output is independent of the chosen representative and thus the distribution is well-defined.

Given the algorithmic definition of \(\mathcal D_s([Q ])\), an actual efficient sampling algorithm follows with only a few adaptations. Firstly, we need to efficiently sample from \(\mathcal D_{Q,s}\), which puts some constraints on the parameter s depending on the reducedness of the representative Q. Secondly, the probability that n sampled vectors are linearly independent can be quite small; instead, we sample vectors one by one and only add them to our set Y if they are independent. Still, we require the additional constraint \(s \geqslant \lambda _n(Q)\) to show that this succeeds with a polynomial number of samples.

Algorithm 1: sampling \((R, U)\) with \(R \sim \mathcal D_s([Q ])\) and \(R = U^tQU\).

Lemma 3.4

For any quadratic form \(Q \in {\mathcal {S}^{>0}_n(\mathbb Z)}\), and parameter

$$s \geqslant \max \{\lambda _n(Q), \Vert B_Q^*\Vert \cdot \sqrt{\ln (2n+4)/\pi } \},$$

Algorithm 1 runs in expected polynomial time and returns a pair (R, U), where R is a sample from \(\mathcal D_s([Q ])\) and \(U \in {\mathcal {GL}_n(\mathbb Z)}\) is unimodular with \(R=U^tQU\). Furthermore, the isomorphism U is uniform over the set of isomorphisms from Q to R.

Proof

By Lemmas 2.9 and 3.1 every step in Algorithm 1 runs in polynomial time. What remains is to show that the number of iterations is polynomially bounded. Let the random variable K be the number of samples before we find n independent ones. If \(|Y|<n\), then because \(s \geqslant \lambda _n(Q)\) we have by [17, Lemma 5.1] that every newly sampled vector \({\boldsymbol{x}}\leftarrow \mathcal D_{Q,s}\) is not in the span of Y with constant probability at least \(C:=1-(1+e^{-\pi })^{-1} > 0\). So K is bounded from above by a negative binomial distribution for n successes with success probability C, which implies that \(\mathbb {E}[K ]\leqslant \frac{n}{C}\), and in particular that \(\Pr [K > n^2 ]\leqslant e^{-\varOmega (n^2)}\). When the while loop succeeds the set Y is distributed as n vectors sampled from \(\mathcal D_{Q,s}\) under the linear independence condition, following exactly Definition 3.3.

Suppose that the algorithm runs and finishes with a final spanning set Y, returning \((R,U) \leftarrow \mathrm {\mathbf {Extract}}(Q,Y)\). For any automorphism \(V\in \mathrm {Aut}(Q)\), i.e. such that \(V^tQV=Q\), it would have been just as likely that the final spanning set equalled VY, because the samples from \(\mathcal D_{Q,s}\) only depend on the norm of the vectors w.r.t. Q. Then by Lemma 3.2 we have:

$$\begin{aligned} \mathrm {\mathbf {Extract}}(Q, VY) = \mathrm {\mathbf {Extract}}((V^{-1})^tQV^{-1}, VY) = (R, VU), \end{aligned}$$

and thus the algorithm would have returned VU with the same probability as U, which makes the returned transformation uniform over the set of isomorphisms \(\{ VU : V \in \text {Aut}(Q) \}\) from Q to R.    \(\square \)

For (exponentially) large parameters s we can always efficiently sample from the average-case distribution by first LLL-reducing the representative.

Lemma 3.5

Given any quadratic form \(Q \in {\mathcal {S}^{>0}_n(\mathbb Z)}\) we can sample from \(\mathcal D_s([Q ])\) in polynomial time for \(s \geqslant 2^{\varTheta (n)} \cdot \lambda _n([Q ])\).

Proof

Run the LLL algorithm on Q to obtain a representative \(Q' \in [Q ]\) for which \(\Vert B_{Q'}^*\Vert \leqslant 2^{\varTheta (n)} \cdot \lambda _n([Q ])\). Then apply Lemma 3.4.    \(\square \)

Lemma 3.6

For any quadratic form \(Q \in {\mathcal {S}^{>0}_n}\), parameter \(\epsilon \in (0,1)\), and \(s \geqslant \max \{ \lambda _n(Q), \eta _\epsilon (Q)\}\), the form R returned by Algorithm 1 satisfies

$$\Pr \left[ \Vert B_R^*\Vert > s\sqrt{n} \right] \leqslant \frac{n}{C} \cdot \frac{1+\epsilon }{1-\epsilon } \cdot 2^{-n}.$$

Proof

Given linearly independent vectors \(Y = \{ {\boldsymbol{y}}_1, \ldots , {\boldsymbol{y}}_n\} \subset \mathbb {Z}^n\) the extractor returns a quadratic form \(Q'\) such that \(\Vert B_{Q'}^*\Vert \leqslant \max _i \left\Vert {\boldsymbol{y}}_i\right\Vert _Q\), and thus we can focus on the norms \(\left\Vert {\boldsymbol{y}}_i\right\Vert _Q\). Let the random variable K be the number of samples \({\boldsymbol{x}}_1, \ldots , {\boldsymbol{x}}_K \leftarrow \mathcal D_{Q,s}\) before we find n independent ones. By Lemma 2.7 we have \(\left\Vert {\boldsymbol{x}}_i\right\Vert _Q > s\sqrt{n}\) with probability at most \((1+\epsilon )/(1-\epsilon ) \cdot 2^{-n}\). By the proof of Lemma 3.4 we have \(\mathbb {E}[K ]\leqslant \frac{n}{C} \leqslant 25n\), and by a union bound we conclude:

$$\begin{aligned} \Pr \left[\max _i \left\Vert {\boldsymbol{y}}_i\right\Vert _Q> s \sqrt{n} \right]&= \sum _{k=n}^\infty \Pr [K=k ]\Pr \left[\max _{1 \leqslant i \leqslant k} \left\Vert {\boldsymbol{x}}_i\right\Vert _Q > s\sqrt{n} \right]\\&\leqslant \underbrace{\sum _{k=n}^\infty \Pr [K=k ]\cdot k}_{\mathbb {E}[K ]} \cdot \frac{1+\epsilon }{1-\epsilon } \cdot 2^{-n} \leqslant \frac{n}{C} \cdot \frac{1+\epsilon }{1-\epsilon } \cdot 2^{-n}. \end{aligned}$$

   \(\square \)

3.2 Average Case LIP

The above definition of a distribution over a class, efficiently sampleable from any representative of that class, leads us to a natural average-case version of both variants of LIP. It is parametrized by a width parameter \(s > 0\).

Definition 3.7

(\(\mathrm {ac\hbox {-}sLIP}_s^Q\)). For a quadratic form \(Q \in {\mathcal {S}^{>0}_n}\) and \(s > 0\) the problem \(\mathrm {ac\hbox {-}sLIP}_s^Q\) is, given a quadratic form sampled as \(Q' \leftarrow \mathcal D_s([Q])\), to find a unimodular \(U \in {\mathcal {GL}_n(\mathbb Z)}\) such that \(Q' = U^t Q U\).

Definition 3.8

(\(\mathrm {ac\hbox {-}\Delta LIP}_s^{Q_0, Q_1}\)). For two quadratic forms \(Q_0, Q_1 \in {\mathcal {S}^{>0}_n}\) and \(s > 0\) the problem \(\mathrm {ac\hbox {-}\Delta LIP}_s^{Q_0, Q_1}\) is, given a quadratic form sampled as \(Q' \leftarrow \mathcal D_s([Q_b])\) where \(b \in \{0, 1\}\) is a uniform random bit, to find b.

Trivially the average-case variants can be reduced to their respective worst-case variants. In the following section we show that the reverse is also true.

3.3 A Worst-Case to Average-Case Reduction

In general, lattice problems become easier when given a short basis, and harder when given a long one. Similarly, one would expect \(\mathrm {ac\hbox {-}sLIP}_s^Q\) and \(\mathrm {ac\hbox {-}\Delta LIP}_s^{Q_0, Q_1}\) to become harder as the parameter \(s > 0\) increases. In fact, when s is large enough the average-case problem becomes at least as hard as any worst-case instance, making the average-case and worst-case problems equivalent.

Lemma 3.9

(\(\mathrm {ac\hbox {-}sLIP}_s^Q \geqslant \mathrm {wc\hbox {-}sLIP}^Q\) for large s). Given an oracle that solves \(\mathrm {ac\hbox {-}sLIP}_s^Q\) for some \(s \geqslant 2^{\varTheta (n)} \cdot \lambda _n(Q)\) in time \(T_0\) with probability \(\epsilon > 0\), we can solve \(\mathrm {wc\hbox {-}sLIP}^Q\) with probability at least \(\epsilon \) in time \(T = T_0 + \mathrm {poly}(n, \log s)\).

Proof

Given any \(Q' \in [Q]\), apply Lemma 3.5 to sample \(Q'' \leftarrow \mathcal D_s([Q])\) for some \(s \geqslant 2^{\varTheta (n)} \cdot \lambda _n([Q ])\), together with a \(U''\) such that \(Q'' = U''^t Q' U'' \). Note that \(\mathcal D_s([Q]) = \mathcal D_s([Q'])\); we can therefore apply our \(\mathrm {ac\hbox {-}sLIP}_s^Q\)-oracle to \(Q''\) and obtain \(U \in {\mathcal {GL}_n(\mathbb Z)}\) such that \(Q'' = U^tQU\). Now for \(U' := U U''^{-1} \in {\mathcal {GL}_n(\mathbb Z)}\) we have:

$$U'^tQU'= (U''^{-1})^t U^t Q U U''^{-1} = (U''^{-1})^t Q'' U''^{-1}=Q'.$$

So given an \(\mathrm {ac\hbox {-}sLIP}_s^Q\)-oracle we can solve \(\mathrm {wc\hbox {-}sLIP}^Q\).    \(\square \)
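The chaining of isomorphisms in this proof can be checked on a toy 2x2 example (all matrices illustrative; the "oracle answer" U is simulated here, since we planted the isomorphisms ourselves):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv_unimodular_2x2(M):
    """Inverse of a 2x2 integer matrix with det = +-1 (adjugate over det)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    assert det in (1, -1)
    return [[d * det, -b * det], [-c * det, a * det]]

Q = [[2, 1], [1, 2]]
V = [[1, 1], [0, 1]]                     # hidden: Q' = V^t Q V is the wc instance
Qp = matmul(transpose(V), matmul(Q, V))
Upp = [[2, 1], [1, 1]]                   # re-randomisation: Q'' = U''^t Q' U''
Qpp = matmul(transpose(Upp), matmul(Qp, Upp))
U = matmul(V, Upp)                       # a valid oracle answer: Q'' = U^t Q U
Uprime = matmul(U, inv_unimodular_2x2(Upp))
# U' := U U''^{-1} is the claimed worst-case solution: U'^t Q U' = Q'
assert matmul(transpose(Uprime), matmul(Q, Uprime)) == Qp
```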

To allow for more efficient schemes we would like to decrease the parameter \(s > 0\) in the worst-case to average-case reduction. We can do so at the cost of stronger lattice reduction than LLL.

Lemma 3.10

Given an oracle that solves \(\mathrm {ac\hbox {-}sLIP}_s^Q\) for some \(s \geqslant \lambda _n(Q)\) in time \(T_0\) with probability \(\epsilon > 0\), we can solve \(\mathrm {wc\hbox {-}sLIP}^Q\) with probability at least \(\frac{1}{2}\) in time

$$\begin{aligned} T = \frac{1}{\epsilon } (T_0+\mathrm {poly}(n, \log s)) + C\left( n, \frac{s}{\lambda _n(Q) \cdot \sqrt{\ln (2n+4)/\pi }}\right) , \end{aligned}$$

where C(n, f) is the cost of solving the Shortest Independent Vectors Problem (SIVP, [39]) within an approximation factor of f.

Proof

The f-approx-SIVP oracle returns n linearly independent vectors of norm at most \(f \cdot \lambda _n(Q)\), and thus using Lemma 3.1 we can construct an equivalent form \(Q' \in [Q]\) with \(\Vert B_{Q'}^*\Vert \leqslant f \cdot \lambda _n(Q)\). For \(f := s/(\lambda _n(Q) \cdot \sqrt{\ln (2n+4)/\pi })\) we obtain that \(s \geqslant \Vert B_{Q'}^*\Vert \cdot \sqrt{\ln (2n+4)/\pi }\), and thus we can sample efficiently from \(\mathcal D_s([Q])\). The rest of the proof follows similarly to that of Lemma 3.9. Additionally, the oracle only succeeds with some probability \(\epsilon > 0\), so we repeat the reduction \(\frac{1}{\epsilon }\) times to obtain a success probability of at least \(\frac{1}{2}\). Note that each additional sample can be computed in polynomial time from the same representative \(Q'\).    \(\square \)

Remark 3.11

Note that the overhead is entirely additive; in particular, it does not suffer from the \(\frac{1}{\epsilon }\) amplification. So, while the reduction is not polynomial time, one can concretely afford huge overheads; for example an overhead of \(2^{100}\) would barely affect an underlying hardness of \(2^{128}\), as \(2^{128} - 2^{100} = 2^{127.999\dots }\). This situation is quite different from the usual inefficient reductions found in the literature, where the overhead is multiplicative.

In Lemma 3.10, the SIVP oracle can be instantiated by a variant of the BKZ algorithm [40]. With a sub-linear blocksize of \(\beta := n/\log (n)\) we could decrease s to a quasi-polynomial factor \(\exp (\log ^2(n)) \cdot \lambda _n(Q)\), with only a sub-exponential additive cost to the reduction. For security based on exponential hardness (e.g. \(T_0/\epsilon = \exp (\varOmega (n))\)) this would still be meaningful, while maintaining a poly-logarithmic bitlength for the integer entries of the manipulated matrices.

Going down to polynomial factors \(s = \mathrm {poly}(n) \cdot \lambda _n(Q)\) (and hence single logarithmic integer bitlength) would require a linear blocksize \(\beta := \varTheta (n)\), and an exponential cost \(2^{cn}\). For small constants \(c > 0\) such that cn is smaller than the security parameter the reduction would still be meaningful. However for provable algorithms this constant c is rather large, and the gap between provable [1] and heuristic results [4] is significant. As we want to keep our reduction non-heuristic in this initial work, we will leave this regime for further research.

Using a similar strategy, one can also establish a worst-case to average-case reduction for \(\mathrm{{\Delta LIP}}\). Note that, because it is a distinguishing problem, the advantage amplification now requires \(O(1/\alpha ^2)\) calls to the average-case oracle.

Lemma 3.12

(\(\mathrm {ac\hbox {-}\Delta LIP}_s^{Q_0,Q_1} \geqslant \mathrm {wc\hbox {-}\Delta LIP}^{Q_0,Q_1}\) for large s). Given an oracle that solves \(\mathrm {ac\hbox {-}\Delta LIP}_s^{Q_0,Q_1}\) for some \(s \geqslant 2^{\varTheta (n)} \cdot \max \{\lambda _n(Q_0), \lambda _n(Q_1)\}\) in time \(T_0\) with advantage \(\alpha > 0\), we can solve \(\mathrm {wc\hbox {-}\Delta LIP}^{Q_0,Q_1}\) with advantage \(\alpha \) in time \(T = T_0 + \mathrm {poly}(n, \log s)\).

Lemma 3.13

Given an oracle that solves \(\mathrm {ac\hbox {-}\Delta LIP}_s^{Q_0,Q_1}\) in time \(T_0\) for some \(s \geqslant \max \{\lambda _n(Q_0),\lambda _n(Q_1)\}\) with advantage \(\alpha > 0\), we can solve \(\mathrm {wc\hbox {-}\Delta LIP}^{Q_0,Q_1}\) with advantage at least \(\frac{1}{4}\) in time

$$\begin{aligned} T = \frac{1}{\alpha ^2} (T_0+\mathrm {poly}(n, \log s)) + C\left( n, \frac{s}{ \max \{\lambda _n(Q_0), \lambda _n(Q_1)\} \cdot \sqrt{\ln (2n+4)/\pi }}\right) , \end{aligned}$$

where C(n, f) is the cost of solving the Shortest Independent Vectors Problem (SIVP, [39]) within an approximation factor of f.

4 Zero Knowledge Proof of Knowledge

At a high level, the protocol of Haviv and Regev [17], as well as ours, is very similar to protocols for other types of isomorphisms, in particular protocols for graph isomorphism [14] and for code isomorphism [5].

A notable difference, however, is that both these protocols [5, 14] relied on the action of a finite group (permutations), allowing one to show zero-knowledge via uniformity of the distribution over an orbit. In our case, the acting group \({\mathcal {GL}_n(\mathbb Z)}\) is not finite, and not even compact, so it admits no such uniform distribution. It is perhaps surprising that uniformity is in fact not required.

4.1 The \(\varSigma \)-Protocol

[Figure: the \(\varSigma \)-protocol]

Efficiency and Completeness. For efficiency of \(\varSigma \) we have to check that Algorithm 1 runs in polynomial time, and indeed by Lemma 3.4 this is the case because

$$s \geqslant \max \left\{ \lambda _n([Q_0]), \Vert B_{Q_0}^*\Vert \cdot \sqrt{\ln (2n+4)/\pi } \right\} .$$

For the verification we have that \(W \in {\mathcal {GL}_n(\mathbb Z)}\) if and only if W is integral and \(\det (W) = \pm 1\), both of which are easy to check in polynomial time.
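In code, the verifier's membership test might look like the following sketch; the Laplace-expansion determinant is a hypothetical helper for illustration only (it is exponential in n, whereas a real verifier would use a polynomial-time determinant algorithm):

```python
def det(M):
    """Determinant by Laplace expansion along the first row (small n only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_unimodular(W):
    """W is in GL_n(Z) iff W is integral and det(W) = +-1."""
    integral = all(isinstance(x, int) for row in W for x in row)
    return integral and det(W) in (1, -1)
```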

For the completeness of \(\varSigma \) note that when the prover executes the protocol honestly we have \(W := U^{-c} \cdot V \in {\mathcal {GL}_n(\mathbb Z)}\) because U and V are both unimodular by definition. Additionally we have

$$\begin{aligned} Q' = V^tQ_0V = \underbrace{(V^t(U^{-c})^t)}_{W^t}\underbrace{((U^c)^tQ_0U^c)}_{Q_c}\underbrace{(U^{-c}V)}_W = W^tQ_cW, \end{aligned}$$

and thus the verifier accepts.

Special Soundness. Suppose we have two accepting conversations \((Q',0,W_0)\) and \((Q',1,W_1)\) of \(\varSigma \) where the first message is identical. The acceptance implies that \(W_0, W_1 \in {\mathcal {GL}_n(\mathbb Z)}\) and \(W_0^tQ_0W_0 = Q' = W_1^tQ_1W_1\), and thus \(U' := W_0 W_1^{-1} \in {\mathcal {GL}_n(\mathbb Z)}\) gives an isomorphism from \(Q_0\) to \(Q_1\) as

$$\begin{aligned} U'^tQ_0U' = (W_1^{-1})^t (W_0^tQ_0W_0)W_1^{-1} = (W_1^{-1})^t (W_1^t Q_1 W_1 )W_1^{-1} = Q_1. \end{aligned}$$

We conclude that \(\varSigma \) has the special soundness property.
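As an illustration of the extractor, here is a toy 2x2 run (all matrices hypothetical, planted so the algebra can be checked exactly):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv_unimodular_2x2(M):
    """Inverse of a 2x2 integer matrix with det = +-1 (adjugate over det)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    assert det in (1, -1)
    return [[d * det, -b * det], [-c * det, a * det]]

Q0 = [[2, 0], [0, 3]]
U  = [[1, 1], [0, 1]]                     # witness: Q1 = U^t Q0 U
Q1 = matmul(transpose(U), matmul(Q0, U))
V  = [[1, 0], [1, 1]]                     # prover randomness: Q' = V^t Q0 V
W0 = V                                    # response to challenge c = 0
W1 = matmul(inv_unimodular_2x2(U), V)     # response to challenge c = 1
# extractor: U' := W0 W1^{-1} is an isomorphism from Q0 to Q1
Uextr = matmul(W0, inv_unimodular_2x2(W1))
assert matmul(transpose(Uextr), matmul(Q0, Uextr)) == Q1
```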

Special Honest-Verifier Zero-Knowledge. We construct a simulator that, given the public input \(Q_0, Q_1\), outputs an accepting conversation with the same probability distribution as between an honest prover and verifier. Note that the first message \(Q'\) is always distributed as \(\mathcal D_s([Q_0])\), the challenge c as \(\mathcal U(\{0,1\})\), and V is uniform over the set of isomorphisms from \(Q_0\) to \(Q'\) by Lemma 3.4. Because U is an isomorphism from \(Q_0\) to \(Q_1\) we have, given the challenge c, that \(W = U^{-c} \cdot V\) is uniform over the set of isomorphisms from \(Q_c\) to \(Q'\).

To simulate this we first sample the uniformly random challenge \(c \leftarrow \mathcal U(\{0,1\})\). If \(c=0\) we can proceed the same as in \(\varSigma \) itself, i.e. sample \(Q' \leftarrow \mathcal D_s([Q_0])\) using Algorithm 1, together with a V such that \(Q'=V^tQ_0V\), and set \(W := V\). The final conversation \((Q',0,W)\) is accepting and follows by construction the same distribution as during an honest execution conditioned on challenge \(c=0\).

If \(c=1\) we use the fact that \([Q_0]=[Q_1]\), and that we can use Algorithm 1 with representative \(Q_1\) as input instead of \(Q_0\). So again we obtain \(Q' \leftarrow \mathcal D_s([Q_1])=\mathcal D_s([Q_0])\) following the same distribution, but now together with a unimodular \(W \in {\mathcal {GL}_n(\mathbb Z)}\) such that \(Q' = W^tQ_1W\). The conversation \((Q',1,W)\) is accepting by construction, and \(Q'\) follows the same distribution \(\mathcal D_s([Q_0])\). Additionally by Lemma 3.4 the transformation W is indeed uniform over the set of isomorphisms from \(Q_1\) to \(Q'\).

We conclude that \(\varSigma \) has the special honest-verifier zero-knowledge property.

4.2 Identification Scheme

The Zero Knowledge Proof of Knowledge in the previous section is worst-case in the sense that given any two equivalent forms \(Q_0, Q_1 \in {\mathcal {S}^{>0}_n(\mathbb Z)}\) and a secret isomorphism \(U \in {\mathcal {GL}_n(\mathbb Z)}\) from \(Q_0\) to \(Q_1\), we can show knowledge of such an isomorphism. However, to turn this \(\varSigma \)-protocol into an Identification Scheme (see e.g. [10]) we need to define a distribution of \(U \in {\mathcal {GL}_n(\mathbb Z)}\) (or alternatively of \(Q_1\) w.r.t. \(Q_0\)). Finding an isomorphism between \(Q_0\) and \(Q_1\) is at most as hard as solving either \(\mathrm {ac\hbox {-}sLIP}_s^{Q_0}\) or \(\mathrm {ac\hbox {-}sLIP}_s^{Q_1}\) for parameter s as in \(\varSigma \). Therefore a natural choice is to have \(Q_1\) distributed according to \(\mathcal D_{s'}([Q_0])\) for some parameter \(s' \geqslant \max \{\lambda _n([Q_0]), \Vert B_{Q_0}^*\Vert \cdot \sqrt{\ln (2n+4)/\pi } \}\), which we can efficiently sample from using Algorithm 1. The security of our identification scheme is then solely based on the hardness of \(\mathrm {ac\hbox {-}sLIP}_{s'}^{Q_0}\).

5 Key Encapsulation Mechanism

In this section we construct a Key Encapsulation Mechanism (KEM) with a security proof based on the hardness of \(\mathrm{{\Delta LIP}}\). In short we will need a quadratic form S along with an efficient decoder up to some radius \(\rho < \lambda _1(S)/2\). The public key will consist of a long equivalent form \(P := U^tSU \leftarrow \mathcal D_s([S ])\), while the unimodular transformation U will be the secret key. Knowledge of the transformation U allows one to decode w.r.t. P via S, without any loss in decoding performance. The key will be a random error \({\boldsymbol{e}}\) of norm \(\left\Vert {\boldsymbol{e}}\right\Vert _P \leqslant \rho \), and it can be encapsulated as the syndrome \(\overline{{\boldsymbol{e}}} := {\boldsymbol{e}}\bmod \mathbb {Z}^n \in [0, 1)^n\). The receiver, with knowledge of the secret transformation U, can recover \({\boldsymbol{e}}\) by decoding via S. The correctness follows from the fact that the decoding is unique due to \(\rho < \lambda _1(S)/2\).

For the security we assume that it is (computationally) hard to distinguish between \(P \leftarrow \mathcal D_s([S])\) and some random sample \(R \leftarrow \mathcal D_s([Q])\) from a special class [Q], a class corresponding to a lattice admitting a dense sublattice. This assumption allows us to replace P by R, which completely breaks the uniqueness of the decoding. That is, the syndrome \(\overline{{\boldsymbol{e}}}\) has many (say \(\exp (\varOmega (\lambda ))\)) nearby points w.r.t. R, and retrieving the exact original point becomes statistically hard.

[Figure: the Key Encapsulation Mechanism \(\mathcal K\)]

Efficiency and Correctness. We consider the efficiency and correctness of the KEM \(\mathcal K:= (\mathrm {\mathbf {Gen}}, \mathrm {\mathbf {Encaps}}, \mathrm {\mathbf {Decaps}})\) instantiated with quadratic form \(S \in {\mathcal {S}^{>0}_n(\mathbb Z)}\) and public parameter

$$s \geqslant \max \{\lambda _n(S), \Vert B_S^*\Vert \cdot \sqrt{\ln (2n+4)/\pi } \}.$$

By the above constraint on s, Algorithm 1 will run in polynomial-time by Lemma 3.4. Furthermore by Lemma 3.6 we have with overwhelming probability that

$$q\rho /\sqrt{n} \geqslant s\sqrt{n}\cdot \sqrt{\ln (2n+4)/\pi } \geqslant \Vert B_P^*\Vert \cdot \sqrt{\ln (2n+4)/\pi },$$

and thus we can efficiently sample from \(\mathcal D_{P,q\rho /\sqrt{n}}\) by Lemma 2.9.

For correctness note that in the key encapsulation algorithm the sampled error \({\boldsymbol{e}}\) has norm at most \(\left\Vert {\boldsymbol{e}}\right\Vert _P \leqslant \rho \) except with negligible probability by Lemma 2.7, and we denote the encapsulated key by \(k := \mathcal E({\boldsymbol{e}}, Z)\), where Z denotes the randomness extractor’s seed. Because \(\rho < \lambda _1(S)/2\) the vector \({\boldsymbol{x}}:= {\boldsymbol{c}}- {\boldsymbol{e}}\in \mathbb {Z}^n\) is the unique closest vector to \({\boldsymbol{c}}\) with respect to P, which makes \(U{\boldsymbol{x}}\) the unique closest vector to \(U{\boldsymbol{c}}\) with respect to \(S=(U^{-1})^tPU^{-1}\). In the decapsulation the decoder computes the unique vector \({\boldsymbol{y}}\) at distance at most \(\rho \) from \(U{\boldsymbol{c}}\), which implies that \({\boldsymbol{y}}= U{\boldsymbol{x}}\). So indeed the output \(k' := \mathcal E({\boldsymbol{c}}-U^{-1}{\boldsymbol{y}}, Z) = \mathcal E({\boldsymbol{c}}-{\boldsymbol{x}}, Z) = \mathcal E({\boldsymbol{e}}, Z) = k\) equals the encapsulated key with overwhelming probability.
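A minimal numeric round trip of this encapsulate/decapsulate logic, assuming a toy instantiation where S is diagonal so that decoding w.r.t. S reduces to coordinate-wise rounding (all concrete values here are illustrative, not the paper's parameters):

```python
import math

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

S = [[4, 0], [0, 4]]                      # Gram matrix of (2Z)^2: lambda_1(S) = 2
U = [[2, 1], [1, 1]]                      # secret unimodular transformation
Uinv = [[1, -1], [-1, 2]]                 # integral inverse, since det(U) = 1
P = matmul(transpose(U), matmul(S, U))    # public form

e = [0.21, -0.13]                         # secret error, ||e||_P < lambda_1(S)/2
c = [x - math.floor(x) for x in e]        # encapsulation: syndrome e mod Z^n

# decapsulation: decode U c w.r.t. S (rounding, since S is diagonal),
# then map the lattice point back through U^{-1}
y = [round(t) for t in matvec(U, c)]
x = matvec(Uinv, y)                       # closest integer vector to c w.r.t. P
e_rec = [ci - xi for ci, xi in zip(c, x)]
```

Since \(\Vert e\Vert _P < \lambda _1(S)/2\), the recovered error `e_rec` matches the original `e` (up to floating-point noise).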

CPA Security. To show that our KEM is CPA-secure we fall back to a lossy trapdoor argument à la [36]. Under the hardness of decisional LIP we can replace our unique \(\rho \)-decodable quadratic form by one that is far from uniquely decodable. For the latter it is enough to have a dense sublattice.

Lemma 5.1

Let \(Q \in {\mathcal {S}^{>0}_n(\mathbb Z)}\) be a quadratic form with a rank r sublattice \(D\mathbb {Z}^r \subset \mathbb {Z}^n\). For any \(\epsilon > 0\), center \({\boldsymbol{c}}\in \mathbb {R}^n\), parameter \(s:=\rho /\sqrt{n} \geqslant 2\eta _\epsilon ([D^tQD ])\), and for every \({\boldsymbol{x}}\in \mathbb {Z}^n\) we have

$$\mathop {\Pr }\limits _{X \sim \mathcal D_{Q,s,{\boldsymbol{c}}}} [X={\boldsymbol{x}}]\leqslant \frac{1+\epsilon }{1-\epsilon } \cdot 2^{-r}.$$

Proof

Let \({\boldsymbol{y}}:= {\boldsymbol{x}}-{\boldsymbol{c}}\in \mathbb {R}^n\), and decompose \({\boldsymbol{y}}=: {\boldsymbol{y}}_D + {\boldsymbol{y}}_{D^\perp }\) where \({\boldsymbol{y}}_D \in \text {span}(D\mathbb {Z}^r)\), and \({\boldsymbol{y}}_{D^\perp }\) is orthogonal to \({\boldsymbol{y}}_D\) w.r.t Q. Then we have

$$\begin{aligned} \mathop {\Pr }\limits _{X \sim \mathcal D_{Q,s,{\boldsymbol{c}}}} [X={\boldsymbol{x}}]&= \frac{\rho _{Q,s,{\boldsymbol{c}}}({\boldsymbol{x}})}{\rho _{Q,s,{\boldsymbol{c}}}(\mathbb {Z}^n)} = \frac{\rho _{Q,s}({\boldsymbol{y}})}{\rho _{Q,s}({\boldsymbol{y}}+\mathbb {Z}^n)} \leqslant \frac{\rho _{Q,s}({\boldsymbol{y}})}{\rho _{Q,s}({\boldsymbol{y}}+D\mathbb {Z}^{r})} \\&= \frac{\rho _{Q,s}({\boldsymbol{y}}_{D^\perp }) \cdot \rho _{Q,s}({\boldsymbol{y}}_D)}{\rho _{Q,s}({\boldsymbol{y}}_{D^\perp }) \cdot \rho _{Q,s}({\boldsymbol{y}}_D+D\mathbb {Z}^{r})} = \frac{\rho _{Q,s}({\boldsymbol{y}}_D)}{ \rho _{Q,s}({\boldsymbol{y}}_D+D\mathbb {Z}^{r})}. \end{aligned}$$

Note that we can write \({\boldsymbol{y}}_D = D{\boldsymbol{z}}\) for some \({\boldsymbol{z}}\in \mathbb {R}^r\), then the above equals \(\Pr _{X \sim \mathcal D_{D^tQD,s,{\boldsymbol{z}}}}[X = 0 ]\), which by Lemma 2.8 is bounded by \(\frac{1+\epsilon }{1-\epsilon } \cdot 2^{-r}\).    \(\square \)

Theorem 5.2

We consider the KEM \(\mathcal K:= (\mathrm {\mathbf {Gen}}, \mathrm {\mathbf {Encaps}}, \mathrm {\mathbf {Decaps}})\) instantiated with quadratic form \(S \in {\mathcal {S}^{>0}_n(\mathbb Z)}\), decoding radius \(\rho \), and public key parameter \(s > 0\). Let \(Q \in {\mathcal {S}^{>0}_n(\mathbb Z)}\) be a quadratic form with a dense rank \(r = \varTheta (n)\) sublattice \(D\mathbb {Z}^{r}\subset \mathbb {Z}^n\), in particular such that \(\eta _\frac{1}{2}(D^tQD) \leqslant \rho /(2\sqrt{n})\). Then \(\mathcal K\) is CPA-secure if \(\mathrm {ac\hbox {-}\Delta LIP}_s^{S,Q}\) is hard.

Proof

Let \(\mathcal A\) be a probabilistic polynomial-time adversary. We present two games \(\mathrm {\mathbf {Game}}_1\) and \(\mathrm {\mathbf {Game}}_2\), where \(\mathrm {\mathbf {Game}}_1\) is the regular CPA-security game with the original scheme, and \(\mathrm {\mathbf {Game}}_2\) is almost identical but with the only change that the public key is drawn from \(\mathcal D_s([Q ])\) instead of \(\mathcal D_s([S ])\). By the hardness of \(\mathrm {ac\hbox {-}\Delta LIP}_s^{S,Q}\) the two games are computationally indistinguishable, and due to the dense sublattice we can conclude that winning \(\mathrm {\mathbf {Game}}_2\) with a non-negligible advantage is statistically impossible.

Let the key-size \(\ell = \varTheta (n)\) be such that \(\ell \leqslant r-\log _2(3)\). The original KEM CPA game \(\mathrm {\mathbf {Game}}_1\) is as follows [18]:

  • \(\mathrm {\mathbf {Gen}}(1^n)\) is run to obtain a public key \(pk=P\). Then \(\mathrm {\mathbf {Encaps}}(pk)\) is run to generate (ck) with \(k \in \{0,1\}^\ell \).

  • A uniform bit \(b \in \{0,1\}\) is chosen. If \(b=0\), set \(\hat{k} := k\), if \(b=1\), choose a uniform \(\hat{k} \in \{0,1\}^\ell \).

  • Given \((pk, c=({\boldsymbol{c}}, Z), \hat{k})\) the adversary \(\mathcal A\) wins the experiment if b is guessed correctly.

The only difference between \(\mathrm {\mathbf {Game}}_1\) and \(\mathrm {\mathbf {Game}}_2\) is that in \(\mathrm {\mathbf {Game}}_2\) we sample the public key P from \(\mathcal D_s([Q])\) instead of \(\mathcal D_s([ S ])\). Note that \(\mathrm {\mathbf {Game}}_1\) and \(\mathrm {\mathbf {Game}}_2\) both only use public information and thus by the hardness of \(\mathrm {ac\hbox {-}\Delta LIP}_s^{S,Q}\) the two are computationally indistinguishable by \(\mathcal A\).

Now we take a look at \(\mathrm {\mathbf {Game}}_2\). Consider the output \((c=({\boldsymbol{c}}, Z),k) \leftarrow \mathrm {\mathbf {Encaps}}(pk)\) where \(pk := Q' \in [Q ]\). For any fixed \({\boldsymbol{c}}\) we have by construction that \(k := \mathcal E({\boldsymbol{e}}, Z)\), where \({\boldsymbol{e}}\leftarrow \frac{1}{q}\mathcal D_{Q',q\rho /\sqrt{n}}\) under the condition that \({\boldsymbol{e}}= {\boldsymbol{c}}\bmod \mathbb {Z}^n\). Equivalently we could say that \({\boldsymbol{e}}\leftarrow {\boldsymbol{c}}-\mathcal D_{Q', \rho /\sqrt{n}, {\boldsymbol{c}}}\), then by Lemma 5.1 we know that \({\boldsymbol{e}}\) has a min-entropy of at least \(r-\log _2(3) \geqslant \ell \), and thus \(k := \mathcal E({\boldsymbol{e}}, Z) \in \{0,1\}^\ell \) is negligibly close to uniform independent of c. So in \(\mathrm {\mathbf {Game}}_2\) we have that \(\hat{k}\) is negligibly close to uniform, independent of c and the choice of \(b \in \{0,1\}\), making it impossible for \(\mathcal A\) to guess b with non-negligible advantage.    \(\square \)
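The min-entropy step above is just Lemma 5.1 instantiated with \(\epsilon = \frac{1}{2}\):

```latex
\max_{\boldsymbol{x} \in \mathbb{Z}^n} \Pr[\, \boldsymbol{e} = \boldsymbol{x} \,]
  \;\leqslant\; \frac{1 + 1/2}{1 - 1/2} \cdot 2^{-r}
  \;=\; 3 \cdot 2^{-r}
  \;=\; 2^{-(r - \log_2 3)}
\quad\Longrightarrow\quad
H_\infty(\boldsymbol{e}) \;\geqslant\; r - \log_2 3 \;\geqslant\; \ell .
```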

6 Signature Scheme

Similar to the Key Encapsulation Mechanism we propose in this section a hash-then-sign signature scheme based on \(\mathrm{{\Delta LIP}}\). The main requirement is a quadratic form S along with an efficient discrete Gaussian sampling algorithm of smallish width \(\rho /\sqrt{n} \geqslant \eta _{2^{-\varTheta (n)}}(S)\).

Again the public key will consist of a less reduced form \(P := U^tSU \leftarrow \mathcal D_s([ S ])\) equivalent to S, where the unimodular transformation U is the secret key. To sign a message we use a full domain hash to obtain a uniform coset \({\boldsymbol{t}}+ \mathbb {Z}^n\); the signature then consists of a nearby vector \({\boldsymbol{\sigma }}\leftarrow \mathcal D_{P,\rho /\sqrt{n},{\boldsymbol{t}}}\) w.r.t. the form P. The nearby vector is obtained via S by the secret transformation U.

The security assumption is similar to, but in some way dual to, that of the KEM. Again we assume that it is computationally hard to distinguish P from some special class of forms [Q]; however, in this case Q must admit a sparse projection (equivalently, its dual should contain a dense sublattice). The sparsity implies that a uniformly random target \({\boldsymbol{t}}\) does not have a nearby vector with overwhelming probability, making signing vacuously hard.

[Figure: the signature scheme \(\mathcal S\)]

Correctness. For correctness we mainly have to check that the returned signature \({\boldsymbol{\sigma }}\in \mathbb {Z}^n\) is indeed close to \({\boldsymbol{t}}:= \mathcal H(m)\) w.r.t. P. Because \(P=U^tSU\) we have:

$$\begin{aligned} \left\Vert {\boldsymbol{\sigma }}-{\boldsymbol{t}}\right\Vert _P = \left\Vert U({\boldsymbol{\sigma }}-{\boldsymbol{t}})\right\Vert _S = \left\Vert {\boldsymbol{\sigma }}'-U{\boldsymbol{t}}\right\Vert _S, \end{aligned}$$

and by Lemma 2.7 we have with overwhelming probability that \(\left\Vert {\boldsymbol{\sigma }}-{\boldsymbol{t}}\right\Vert _P = \left\Vert {\boldsymbol{\sigma }}'-U{\boldsymbol{t}}\right\Vert _S \leqslant \rho /\sqrt{n} \cdot \sqrt{n} = \rho \), concluding the correctness.
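The first equality is a purely algebraic fact about Gram matrices, \(v^t P v = (Uv)^t S (Uv)\) whenever \(P = U^t S U\), which can be checked exactly on integer inputs (matrices illustrative):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def norm_sq(Q, v):
    """Squared norm v^t Q v of v w.r.t. the quadratic form Q."""
    n = len(v)
    return sum(v[i] * Q[i][j] * v[j] for i in range(n) for j in range(n))

S = [[2, 1], [1, 2]]
U = [[1, 2], [0, 1]]
P = matmul(transpose(U), matmul(S, U))    # P = U^t S U
v = [3, -2]                               # stands for sigma - t
assert norm_sq(P, v) == norm_sq(S, matvec(U, v))
```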

Security. For the security proof we first consider a class of quadratic forms for which signing is vacuously hard, i.e. for a random target \({\boldsymbol{t}}\in \mathbb {R}^n/\mathbb {Z}^n\) there exists no nearby vector.

Lemma 6.1

Let \(Q \in {\mathcal {S}^{>0}_n(\mathbb Z)}\) be a quadratic form with a dense rank k sublattice \(D\mathbb {Z}^k \subset \mathbb {Z}^n\), in particular such that \(\rho /\sqrt{k} \leqslant 1/(\sqrt{8\pi e} \cdot \det (D^tQD)^{1/2k})\). Then for the dual form \(Q^{-1}\) we have

$$\mathop {\Pr }\limits _{{\boldsymbol{t}}\sim \mathcal U([ 0, 1 ]^n)} [ |({\boldsymbol{t}}+\mathcal B^n_{Q^{-1}, \rho }) \cap \mathbb {Z}^n| \geqslant 1 ] \leqslant 2^{-k}.$$

Proof

Let \(V := \text {span}(D) \subset \mathbb {R}^n\) such that the orthogonal projection w.r.t. \(Q^{-1}\) of \(\mathbb {Z}^n\) onto V defines a projected lattice \(C\mathbb {Z}^k := \pi _{Q^{-1},V}(\mathbb {Z}^n)\) of rank k, with \(\det (C^tQ^{-1}C) \geqslant 1/\det (D^tQD)\). Because a projection is non-increasing in length we have

$$\begin{aligned} \mathop {\Pr }\limits _{{\boldsymbol{t}}\sim \mathcal U(\mathbb {R}^n/\mathbb {Z}^n)}[ |({\boldsymbol{t}}+\mathcal B^n_{Q^{-1}, \rho }) \cap \mathbb {Z}^n| \geqslant 1 ] \leqslant \mathop {\Pr }\limits _{{\boldsymbol{t}}\sim \mathcal U(\mathbb {R}^k/\mathbb {Z}^k)}[ |({\boldsymbol{t}}+\mathcal B^k_{C^tQ^{-1}C, \rho }) \cap \mathbb {Z}^k| \geqslant 1 ] = (*). \end{aligned}$$

Then using Markov’s inequality we can bound the above by

$$\begin{aligned} (*) \leqslant \mathbb {E}_{{\boldsymbol{t}}\sim \mathcal U(\mathbb {R}^k/\mathbb {Z}^k)}[ |({\boldsymbol{t}}+\mathcal B^k_{C^tQ^{-1}C, \rho }) \cap \mathbb {Z}^k| ]&= \frac{\text {Vol}_{C^tQ^{-1}C}(\mathcal B^k_{C^tQ^{-1}C, \rho })}{\text {Vol}_{C^tQ^{-1}C}(\mathbb {R}^k/\mathbb {Z}^k)} \\&\leqslant \frac{(2\pi e/k)^{k/2} \cdot \rho ^k}{\sqrt{\det (C^tQ^{-1}C)}} \leqslant 2^{-k}. \end{aligned}$$

   \(\square \)

Theorem 6.2

We consider the signature scheme \(\mathcal S:= (\mathrm {\mathbf {Gen}}, \mathrm {\mathbf {Sign}}, \mathrm {\mathbf {Verify}})\) instantiated with quadratic form \(S \in {\mathcal {S}^{>0}_n(\mathbb Z)}\), sampling parameter \(\rho \), and public key parameter \(s > 0\). Let \(Q \in {\mathcal {S}^{>0}_n(\mathbb Z)}\) be a quadratic form with a dense rank \(k =\varTheta (n)\) sublattice \(D\mathbb {Z}^k \subset \mathbb {Z}^n\), in particular such that \(2\rho /\sqrt{k} \leqslant (\sqrt{8\pi e} \cdot \det (D^tQD)^{1/2k})^{-1}\). Then \(\mathcal S\) is EUF-CMA secure if \(\mathrm {ac\hbox {-}\Delta LIP}_s^{S, Q^{-1}}\) is hard.

Proof

Let \(\mathcal A\) be a probabilistic polynomial-time adversary. We present three games \(\mathrm {\mathbf {Game}}_1, \mathrm {\mathbf {Game}}_2, \mathrm {\mathbf {Game}}_3\) where \(\mathrm {\mathbf {Game}}_1\) is the regular EUF-CMA game with the original scheme, \(\mathrm {\mathbf {Game}}_2\) reprograms the random oracle to generate valid signatures without knowledge of the secret key, and \(\mathrm {\mathbf {Game}}_3\) samples the public key from \([ Q^{-1} ]\) instead of [S]. By a standard smoothness argument the adversary’s view of \(\mathrm {\mathbf {Game}}_1\) and \(\mathrm {\mathbf {Game}}_2\) is statistically indistinguishable, and \(\mathrm {\mathbf {Game}}_2\) and \(\mathrm {\mathbf {Game}}_3\) are indistinguishable by the hardness of \(\mathrm {ac\hbox {-}\Delta LIP}_s^{S, Q^{-1}}\). Then we conclude by Lemma 6.1 that the problem of forging a signature in \(\mathrm {\mathbf {Game}}_3\) is statistically hard. The original EUF-CMA game \(\mathrm {\mathbf {Game}}_1\) is as follows [18]:

  • \(\mathrm {\mathbf {Gen}}(1^n)\) is run to obtain keys \((pk=P, sk=U)\).

  • Adversary \(\mathcal A\) is given \(pk=P\) and access to an oracle \(\mathrm {\mathbf {Sign}}(sk, \cdot )\). The adversary then outputs \((m, {\boldsymbol{\sigma }})\) where m was not queried before to the oracle.

  • \(\mathcal A\) succeeds if and only if \(\mathrm {\mathbf {Verify}}(pk, m, {\boldsymbol{\sigma }}) = 1\).

To show that our signature scheme \(\mathcal S\) is EUF-CMA secure we have to show that \(\mathrm {\mathbf {Game}}_1\) succeeds only with negligible probability. We assume that the adversary queries the oracle on \(l = \mathrm {poly}(n)\) distinct messagesFootnote 6 \(m_1, \ldots , m_l\). In \(\mathrm {\mathbf {Game}}_1\) the secret key is used to obtain a valid signature \((m_i, {\boldsymbol{\sigma }}_i)\) where \({\boldsymbol{\sigma }}_i \leftarrow \mathcal D_{P, \rho /\sqrt{n},\mathcal H(m_i)}\). In \(\mathrm {\mathbf {Game}}_2\) instead we first sample a random error \({\boldsymbol{e}}_i \leftarrow \frac{1}{q} \cdot \mathcal D_{P, q\rho /\sqrt{n}}\). By Lemma 3.6 we have \(q\rho /\sqrt{n} \geqslant \Vert B_P^*\Vert \cdot \sqrt{\ln (2n+4)/\pi }\) with overwhelming probability, and thus by Lemma 2.9 we can do the sampling without using the secret key. Then we reprogram the random oracle such that \(\mathcal H(m_i) := {\boldsymbol{t}}_i = {\boldsymbol{e}}_i \bmod \mathbb {Z}^n \in \mathbb T_q^n\), and return the signature pair \((m_i, {\boldsymbol{\sigma }}_i := {\boldsymbol{t}}_i - {\boldsymbol{e}}_i)\). Note that the probability that \({\boldsymbol{t}}_i\) equals any target \({\boldsymbol{t}}\in \mathbb T_q^n\) is proportional to \(\rho _{P, \rho /\sqrt{n}, {\boldsymbol{t}}}(\mathbb {Z}^n)\). So \({\boldsymbol{t}}_i\) is close to uniform by Lemma 2.5 because \(\rho /\sqrt{n} \geqslant \eta _{2^{-\varTheta (n)}}([S])= \eta _{2^{-\varTheta (n)}}([ P ])\), and thus the random oracle is still simulated correctly. Additionally the conditional probability of \({\boldsymbol{\sigma }}_i\) conditioned on \({\boldsymbol{t}}_i\) is exactly the same as in \(\mathrm {\mathbf {Game}}_1\), so we can conclude that \(\mathrm {\mathbf {Game}}_1\) and \(\mathrm {\mathbf {Game}}_2\) are statistically indistinguishable from the adversary’s point of view.

The only difference between \(\mathrm {\mathbf {Game}}_2\) and \(\mathrm {\mathbf {Game}}_3\) is that in \(\mathrm {\mathbf {Game}}_3\) we sample the public key P from \(\mathcal D_s([Q^{-1}])\) instead of \(\mathcal D_s([ S ])\). Note that \(\mathrm {\mathbf {Game}}_2\) and \(\mathrm {\mathbf {Game}}_3\) both only use public information and thus by the hardness of \(\mathrm {ac\hbox {-}\Delta LIP}_s^{S, Q^{-1}}\) the two are computationally indistinguishable.

To conclude, note that for any message m we obtain a random target \({\boldsymbol{t}}:= \mathcal H(m) \in \mathbb T_q^n\). Let \({\boldsymbol{e}}'\) be uniform over the Babai nearest-plane region defined by P; then \(\left\Vert {\boldsymbol{e}}'\right\Vert _P \leqslant \frac{\sqrt{n}}{2} \Vert B_P^*\Vert \), and \({\boldsymbol{t}}' := {\boldsymbol{t}}+\frac{1}{q}{\boldsymbol{e}}'\) is uniform over \(\mathbb {R}^n/\mathbb {Z}^n\). By Lemma 6.1 the uniformly random target \({\boldsymbol{t}}'\) lies at distance at least \(2\rho \) from \(\mathbb {Z}^n\) w.r.t. P with overwhelming probability. So for \({\boldsymbol{t}}\) we have with overwhelming probability that:

$$\begin{aligned} \text {dist}_P({\boldsymbol{t}}, \mathbb {Z}^n)&\geqslant \text {dist}_P({\boldsymbol{t}}', \mathbb {Z}^n) - \left\Vert \frac{1}{q}{\boldsymbol{e}}'\right\Vert _P \geqslant 2\rho - \frac{\sqrt{n} \cdot \Vert B_P^*\Vert }{2q} \\&\geqslant 2\rho - \rho /(2\sqrt{\ln (2n+4)/\pi }) > \rho . \end{aligned}$$

Therefore it is statistically impossible for the adversary to return a valid signature for m, and thus to win \(\mathrm {\mathbf {Game}}_3\).    \(\square \)

7 Cryptanalysis of LIP

Equivalent quadratic forms \(Q, Q' := U^tQU\) (for some \(U \in \mathcal {GL}_n(\mathbb {Z})\)) share many properties, and these invariants can be used to decide that two quadratic forms are not equivalent, or to guide the search for an isomorphism.

7.1 Invariants

Arithmetic Invariants. Firstly we have \(\det (U) = \pm 1\), and thus for two equivalent quadratic forms we have \(\det (Q') = \det (U^t)\det (Q)\det (U) = \det (Q)\). Secondly because U and \(U^{-1}\) are both integral, the quantity \(\gcd (Q) = \gcd \{ Q_{ij} : 1 \leqslant i,j \leqslant n \}\) is also an invariant.

A third and less obvious invariant is the parity of the quadratic form. The notion is standard for unimodular lattices: it is called even if all norms are even, and odd otherwise. More generally, writing \( \Vert {\boldsymbol{x}}\Vert _Q = \sum _i Q_{ii} x^2_i + 2 \sum _{i<j} Q_{ij} x_i x_j\) one gets that \(\gcd \{ \Vert {\boldsymbol{x}}\Vert _Q : {\boldsymbol{x}}\in \mathbb {Z}^n \} \in \{1, 2\} \cdot \gcd (Q)\). We call this factor \(\mathrm {par}(Q) \in \{1,2\}\) the parity of Q. It is also efficiently computable by noting that \(\mathrm {par}(Q) = \gcd (\{ Q_{ii} : 1 \leqslant i \leqslant n\} \cup \{2\gcd (Q)\}) / \gcd (Q)\).
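As an illustration, the invariants \(\gcd (Q)\) and \(\mathrm {par}(Q)\) can be computed directly from the formula above. A small Python sketch (the example Gram matrix below is an illustrative choice of ours, not taken from the text):

```python
from math import gcd

def gcd_form(Q):
    """gcd(Q): the gcd of all entries of the Gram matrix Q."""
    g = 0
    for row in Q:
        for entry in row:
            g = gcd(g, abs(entry))
    return g

def parity(Q):
    """par(Q) in {1, 2}, via par(Q) = gcd({Q_ii} U {2 gcd(Q)}) / gcd(Q)."""
    g = gcd_form(Q)
    d = 2 * g
    for i in range(len(Q)):
        d = gcd(d, Q[i][i])
    return d // g

# Illustrative example: a Gram matrix of the D4 root lattice, which is even,
# so its parity is 2; the identity (Z^n) is odd, with parity 1.
D4 = [[2, -1, 0, 0],
      [-1, 2, -1, -1],
      [0, -1, 2, 0],
      [0, -1, 0, 2]]
```

Since both quantities are preserved by any \(U \in \mathcal {GL}_n(\mathbb {Z})\), a mismatch immediately rules out equivalence.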

Further arithmetic invariants are induced by R-equivalence of quadratic forms for extensions \(R \supset \mathbb {Z}\). The invariants for \(\mathbb {Q}\)-equivalence can be decomposed via a local-global principle, namely the Hasse-Minkowski theorem [42, Thm. 9, pp. 44]. Together with the discriminant, these invariants are complete (they entirely determine quadratic forms up to \(\mathbb Q\)-equivalence), and they can be computed efficiently. They consist of the signature, and the Hasse invariant at each prime p. The Sylvester signature (\(\mathbb R\)-equivalence) is always (n, 0) in our case as we are only considering positive quadratic forms. The Hasse invariant (\(\mathbb Q_p\)-equivalence) for a prime p is given, for a form diagonalized as \(D = \mathrm {diag}(d_1, \dots , d_n)\), by

$$\begin{aligned} h_p = \prod _{i<j} (d_i, d_j)_p \end{aligned}$$
(1)

where \((\, \cdot \, , \, \cdot \,)_p\) denotes the Hilbert symbol at p. Using the \(LDL^t\) decomposition (Cholesky decomposition over the rationals), one can efficiently compute the Hasse invariant of any positive quadratic form.
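To make the diagonalization step concrete, here is an exact rational \(LDL^t\) sketch in Python (using the standard-library Fraction type; the Hilbert-symbol evaluation itself, and hence formula (1), is not implemented here):

```python
from fractions import Fraction

def ldl(Q):
    """Exact LDL^t of a symmetric positive definite matrix over the rationals.

    Returns (L, d) with L unit lower triangular and d the diagonal of D,
    so that Q = L * diag(d) * L^t. The d_i are the rational diagonal entries
    fed into the Hasse invariant product of formula (1)."""
    n = len(Q)
    A = [[Fraction(Q[i][j]) for j in range(n)] for i in range(n)]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    d = [Fraction(0)] * n
    for j in range(n):
        d[j] = A[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * d[k]
                                     for k in range(j))) / d[j]
    return L, d
```

For a positive definite form all pivots \(d_j\) are positive, so no pivoting is needed and the computation stays polynomial in the bit size of Q.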

Similarly, there are also invariants induced by p-adic equivalence of quadratic forms: \(Q' = V^t Q V\) for \(V \in \mathcal {GL}_n(\mathbb {Z}_p)\), see [9, Chap. 15, Sec 4.1].

All these arithmetic invariants provide a fingerprint

$$\begin{aligned} \mathrm {ari}(Q) = (\det (Q), \gcd (Q), \mathrm {par}(Q), [Q]_{\mathbb {Q}}, ([Q]_{\mathbb {Z}_p})_p) \end{aligned}$$
(2)

and they all appear to be efficiently computable, but they are essentially only useful to answer the \(\mathrm{{\Delta LIP}}\) problem in the negative. When instantiating \(\mathrm{{\Delta LIP}}\), we should therefore make sure that these fingerprints match.

The Hull. In the literature on linear code equivalence a relevant notion is that of the efficiently computable hull \(C \cap C^\perp \) of a code \(C \subset \mathbb {F}_q^n\). Properties such as the rank of the hull are invariant under equivalence, and a small rank even allows one to efficiently find the isometry [41]. For a lattice \(\mathcal L\) and its dual \(\mathcal L^*\) we could define the hull as \(\mathcal L\cap \mathcal L^*\). However, for integral lattices (or more generally if the associated quadratic form is integral) we always have \(\mathcal L\subset \mathcal L^*\) and thus the hull \(\mathcal L\cap \mathcal L^* = \mathcal L\) does not present us with new information. We could generalize the definition to consider \(\mathcal L\cap (k \cdot \mathcal L^*)\) for rational \(k \in \mathbb {Q}_{\ne {0}}\), and although we do not see a direct threat to our instantiation (in Sect. 8) from this, we do encourage more research into the geometric properties of the resulting lattices.

Geometric Invariant. The defining and most important property of a unimodular transformation \(U \in \mathcal {GL}_n(\mathbb {Z})\) is that it gives a bijection \(\mathbb {Z}^n \rightarrow \mathbb {Z}^n\) by \({\boldsymbol{x}}\mapsto U{\boldsymbol{x}}\) (or \({\boldsymbol{x}}\mapsto U^{-1}{\boldsymbol{x}}\)). With respect to the quadratic forms \(Q, Q' := U^tQU\) this even gives an isometry (from \(Q'\) to Q) as

$$\begin{aligned} \langle {\boldsymbol{x}}, {\boldsymbol{y}}\rangle _{Q'} = {\boldsymbol{x}}^tQ'{\boldsymbol{y}}={\boldsymbol{x}}^tU^tQU{\boldsymbol{y}}= \langle U{\boldsymbol{x}}, U{\boldsymbol{y}}\rangle _Q \text { for }{\boldsymbol{x}},{\boldsymbol{y}}\in \mathbb {R}^n. \end{aligned}$$

This isometry results in several natural geometric invariants related to the norms and inner products of integral vectors. We have already seen some, namely the first minimum \(\lambda _1(Q)\) and, more generally, the successive minima \(\lambda _i(Q)\). Further geometric invariants can be defined, such as the kissing number \(\kappa (Q) = |\mathrm {Min}(Q)|\) where

$$\begin{aligned} \mathrm {Min}(Q) := \{ {\boldsymbol{x}}\in \mathbb {Z}^n : \left\Vert {\boldsymbol{x}}\right\Vert _Q = \lambda _1(Q) \}, \end{aligned}$$

and more generally the (formal) Theta-series \(\varTheta _Q(q) = \sum _{\ell \geqslant 0} N_\ell q^\ell \) associated to Q, where \(N_\ell = |\left\{ {\boldsymbol{x}}\in \mathbb {Z}^n : \Vert {\boldsymbol{x}}\Vert _Q = \ell \right\} |\).
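In very small dimensions these geometric invariants can be brute-forced by exhaustive enumeration. A Python sketch (the enumeration box radius and the norm cutoff are heuristic knobs of ours; the reported counts are only complete when the radius is large enough for the cutoff):

```python
from itertools import product

def short_vector_stats(Q, radius=2, max_norm=8):
    """Brute-force lambda_1(Q) (as an integral norm x^t Q x), the kissing
    number kappa(Q), and the first Theta-series coefficients N_ell for a
    tiny Gram matrix Q, by enumerating all nonzero x in {-radius..radius}^n."""
    n = len(Q)
    theta = {}
    for x in product(range(-radius, radius + 1), repeat=n):
        if any(x):
            norm = sum(x[i] * Q[i][j] * x[j]
                       for i in range(n) for j in range(n))
            if norm <= max_norm:
                theta[norm] = theta.get(norm, 0) + 1
    lam1 = min(theta)
    return lam1, theta[lam1], dict(sorted(theta.items()))
```

On the Gram matrix [[2, 1], [1, 2]] of the hexagonal lattice this recovers \(\lambda _1 = 2\) and kissing number 6; the exponential cost of the enumeration is precisely why these invariants are plausibly hard to compute in high dimension.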

Computing any of these geometric invariants appears to require finding or even enumerating short vectors; in particular, they are plausibly hard to compute.

7.2 Algorithms for Distinguish-LIP and Hardness Conjecture

In Sect. 8, we will use \(\mathrm{{\Delta LIP}}\) with quadratic forms with different minimal distances \(\lambda _1(Q_0) < \lambda _1(Q_1)\). However we will be careful to ensure that their arithmetic invariants match, i.e. \(\mathrm {ari}(Q_0) = \mathrm {ari}(Q_1)\), so as not to make the problem trivial.

Approximate-SVP Oracle. An f-approx-SVP oracle applied to a form Q finds a short vector of length at most \(f \cdot \lambda _1(Q)\). So \(\mathrm{{\Delta LIP}}\) is no harder than f-approx-SVP for \(f = \lambda _1(Q_1) / \lambda _1(Q_0)\) in any of those lattices.

Unusual-SVP via Lattice Reduction. However, even when the gap between \(\lambda _1(Q_0)\) and \(\lambda _1(Q_1)\) is small, the minimal vectors may individually still be unusually short, which makes them significantly easier to find than in a random lattice. This is usually formalized via the f-unique-SVP problem, but many instances of interest do not have such a gap between \(\lambda _1\) and \(\lambda _2\); in fact \(\mathbb {Z}^n\), Barnes-Wall and Barnes-Sloane lattices all have \(\lambda _1 = \lambda _2 = \dots = \lambda _n\). But practical and heuristic studies have shown that uniqueness is not that relevant to lattice attacks [2]. We therefore introduce yet another lattice problem, called unusual-SVP, to discuss such instances. A formal complexity reduction between unusual-SVP and unique-SVP matching or approaching the heuristic state of the art appears to be a valuable research objective, but is beyond the scope of the present article.

We define f-unusual-SVP: find a minimal vector under the promise that \(\lambda _1(Q) \leqslant \mathrm {gh}(Q)/f\), where the Gaussian Heuristic \(\mathrm {gh}(Q)\) is a heuristic estimate for \(\lambda _1(Q)\) given by:

$$\begin{aligned} \mathrm {gh}(Q) := {\det (Q)}^{1/{2n}} \cdot \frac{1}{\sqrt{\pi }} \cdot \varGamma (1+n/2)^{1/n} \approx {\det (Q)}^{1/{2n}} \cdot \sqrt{\frac{n}{2\pi e}}. \end{aligned}$$
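The Gaussian Heuristic is easy to evaluate numerically; a Python sketch (the function names are our own, and lgamma is used so that the exact formula remains stable in high dimension):

```python
from math import lgamma, exp, pi, sqrt, e

def gaussian_heuristic(det_q, n):
    """gh(Q) = det(Q)^(1/(2n)) * Gamma(1 + n/2)^(1/n) / sqrt(pi)."""
    return det_q ** (1.0 / (2 * n)) * exp(lgamma(1 + n / 2) / n) / sqrt(pi)

def gh_approx(det_q, n):
    """The Stirling-based approximation det(Q)^(1/(2n)) * sqrt(n / (2 pi e))."""
    return det_q ** (1.0 / (2 * n)) * sqrt(n / (2 * pi * e))
```

For the integer lattice (determinant 1) this gives \(\mathrm {gh}(\mathbb {Z}^n) \approx \sqrt{n/(2\pi e)}\) while \(\lambda _1(\mathbb {Z}^n) = 1\), so its ratio \(\mathrm {gh}/\lambda _1\) is of order \(\sqrt{n}\).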

State-of-the-art lattice reduction techniques find these unusually short vectors more easily than longer vectors of length around \(\mathrm {gh}(Q)\), and (heuristically) the hardness is directly driven by the ratio \(f=\mathrm {gh}(Q)/\lambda _1(Q)\) [2]. Given a form \(Q' \in [Q_0 ]\cup [Q_1 ]\) we parametrize the lattice reduction algorithm to find an unusually short vector of length \(\min \{ \lambda _1(Q_0), \lambda _1(Q_1)\}\); depending on its success we then learn whether \(Q' \in [Q_0 ]\) or \(Q' \in [Q_1 ]\).

An Approach of Szydlo. Additionally there is one heuristic algorithm in the literature [46] for \(\mathrm{{\Delta LIP}}\), which applies to lattices obtained by mild sparsification of the orthogonal lattice \(\mathbb {Z}^n\). This algorithm proceeds by sampling vectors of length \(O(\sqrt{n})\) and then decides via a statistical test: the Theta-series of the two lattices differ sufficiently at such small lengths to distinguish them. Remarkably, the parameters of state-of-the-art lattice reduction algorithms set to solve \(O(\sqrt{n})\)-approx-SVP for (mild sparsifications of) \(\mathbb {Z}^n\) match those needed to solve \(\mathrm {gh}(\mathbb {Z}^n)/\lambda _1(\mathbb {Z}^n) = O(\sqrt{n})\)-unusual-SVP; instead of finding approximate vectors we immediately find the shortest vectors. Again we see that the ratio \(\mathrm {gh}(Q)/\lambda _1(Q)\) is what seems to matter.

Conclusion. To conclude, let us also note that any of the above attacks can also be run over the dual. To state a hardness conjecture capturing these attacks we define the primal-dual gap to the Gaussian Heuristic as:

$$\begin{aligned} \mathrm {gap}(Q)&= \max \left\{ \frac{\mathrm {gh}(Q)}{\lambda _1(Q)}, \frac{\mathrm {gh}(Q^{-1})}{\lambda _1(Q^{-1})} \right\} . \end{aligned}$$

Note that this quantity might be slightly lower than 1 (but no lower than 1/2 by Minkowski's bound): there might exist excellent lattice packings beating the Gaussian Heuristic. We will be assuming \(\mathrm {gap}(Q_i) \geqslant 1\), which implies that \(\lambda _1(Q_i) / \lambda _1(Q_{1-i}) \leqslant \mathrm {gap}(Q_i)\), therefore also capturing the first approach.

In all the attacks above, one first searches for vectors no longer than \(f \cdot \lambda _1(Q_i)\) w.r.t. \(Q_i\) for \(f = \mathrm {gap}(Q_i)\), hence the following conjecture.

Conjecture 7.1

(Hardness of \(\mathrm{{\Delta LIP}}\) (Strong)). For any classes of quadratic forms \([Q_0], [Q_1]\) of dimension n, with \(\mathrm {ari}([Q_0]) = \mathrm {ari}([Q_1])\) and \(1 \leqslant \mathrm {gap}([Q_i]) \leqslant f\), the best attack against \(\mathrm {wc\hbox {-}\Delta LIP}^{Q_0, Q_1}\) requires solving f-approx-SVP in the worst case from either \([Q_0]\) or \([Q_1]\).

This conjecture is meant to offer a comparison point with existing lattice-based cryptography in terms of the approximation factor. Beyond contradicting this assumption, we also invite cryptanalysis effort toward a concrete comparison of f-approx-SVP on those instances to SIS and LWE with the same approximation factor f. If one only wishes to argue exponential security in n for the schemes proposed in this paper, the following weaker conjecture suffices.

Conjecture 7.2

(Hardness of \(\mathrm{{\Delta LIP}}\) (Mild)). For any classes of quadratic forms \([Q_0], [Q_1]\) of dimension n, with \(\mathrm {ari}([Q_0]) = \mathrm {ari}([Q_1])\) and \(\mathrm {gap}([Q_i]) \leqslant \mathrm {poly}(n)\), \(\mathrm {wc\hbox {-}\Delta LIP}^{Q_0, Q_1}\) is \(2^{\varTheta (n)}\)-hard.

Note that the conjectures above are “best-case” over the choice of the isomorphism classes, and worst-case over the representation of the class (note, however, that we have a worst-case to average-case reduction over that representation). That is, even though we may only want to use \(\mathrm{{\Delta LIP}}\) for specific choices of isomorphism classes, we gladly invite cryptanalysis effort on \(\mathrm{{\Delta LIP}}\) for any choice of isomorphism classes.

7.3 Algorithms for Search-LIP and Challenges

While the above invariants allow one to semi-decide LIP, the search version requires more effort; all methods known to us require at least the enumeration of short primal or dual vectors. In the extended version of this work we discuss these methods in more detail.

8 Instantiating \(\mathrm{{\Delta LIP}}\) Pairs from Remarkable Lattices

To instantiate our schemes, we do not only need a lattice with efficient decoding or sampling; we also need a second lattice with a specific property to instantiate the \(\mathrm{{\Delta LIP}}\) problem and argue security. This section deals with how the \(\mathrm{{\Delta LIP}}\) pair is constructed from a single remarkable lattice.

8.1 Key Encapsulation Mechanism

To instantiate our KEM we need two quadratic forms: a form S along with an efficient decoder that can decode up to some distance \(\rho < \lambda _1(S)/2\), and a form Q with a dense rank-k sublattice \(D \cdot \mathbb {Z}^k \subset \mathbb {Z}^n\) such that \(\eta _{\frac{1}{2}}(D^tQD) \leqslant \rho /(2\sqrt{n})\). For simplicity of notation we move to the lattice point of view.

We assume to have an n-dimensional lattice \(\varLambda \) for which \(\mathrm {gap}(\varLambda ) \leqslant f=f(n)\), and for which we can decode up to \(\rho = \varTheta (1/f) \cdot \mathrm {gh}(\varLambda ) < \lambda _1(\varLambda )/2\). We consider a general construction leading to a 2n-dimensional primary lattice \(\varLambda _S\) and secondary lattice \(\varLambda _Q\) with gap bounded by \(O(f^3)\) and such that \(\varLambda _Q\) has a dense enough sublattice to instantiate our KEM.

Note that due to the bounded gap of \(\varLambda \) we have by Lemma 2.6 that

$$\eta _{\frac{1}{2}}(\varLambda ) \leqslant \eta _{2^{-n}}(\varLambda ) \leqslant \frac{\sqrt{n}}{\lambda _1(\varLambda ^*)} \leqslant \frac{\sqrt{n} \cdot f}{\mathrm {gh}(\varLambda ^*)} = \varTheta (f \cdot \det (\varLambda )^{1/n}).$$

Now let \(g = \varTheta (f^2)\) be a positive integer and consider the lattices:

$$\begin{aligned} \varLambda _S := g \cdot \varLambda \oplus (g+1) \cdot \varLambda&, \text { and } \varLambda _Q := \varLambda \oplus g(g+1)\varLambda , \end{aligned}$$

where by construction \(\varLambda _Q\) has a dense sublattice \(\varLambda \). Note that we can still decode \(\varLambda _S\) up to radius \(\rho ' := g \cdot \rho = \varTheta (g/f) \cdot \mathrm {gh}(\varLambda )\).
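For intuition, here is a toy Python sketch of how a decoder for \(\varLambda \) extends to \(\varLambda _S = g \cdot \varLambda \oplus (g+1) \cdot \varLambda \), instantiated with \(\varLambda = \mathbb {Z}^n\) where decoding is coordinate-wise rounding (the function names and the example values are our own):

```python
def decode_zn(t):
    """Decode target t to the closest vector of Z^n (coordinate-wise rounding)."""
    return [round(x) for x in t]

def decode_scaled(t, c):
    """Decode in c * Z^n by rescaling, decoding in Z^n, and scaling back;
    the decoding radius scales by c accordingly."""
    return [c * y for y in decode_zn([x / c for x in t])]

def decode_sum(t, n, g):
    """Decode in g*Z^n (+) (g+1)*Z^n componentwise; the guaranteed radius
    is the minimum of the two components, i.e. g times that of Z^n."""
    return decode_scaled(t[:n], g) + decode_scaled(t[n:], g + 1)
```

The direct-sum decoder succeeds whenever each projection of the error stays within its component's radius, which is what yields \(\rho ' = g \cdot \rho \) above.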

Invariants Match. Both lattices have determinant \(g^n(g+1)^n \det (\varLambda )^2\). Due to the coprimality of g and \(g+1\) we still have \(\gcd (\varLambda _S) =\gcd (\varLambda _Q)=\gcd (\varLambda )\), and similarly for the parity. It remains to check rational equivalence and p-adic equivalence for all primes p. Let R denote a quadratic form representing \(\varLambda \). Up to integral equivalence, we have:

$$\begin{aligned} S := \left( \begin{array}{cc} g^2R & 0 \\ 0 & (g+1)^2R \end{array} \right)&\qquad \qquad \;\; Q := \left( \begin{array}{cc} R & 0 \\ 0 & g^2(g+1)^2R \end{array} \right) . \end{aligned}$$
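The determinant claim can be sanity-checked on a toy instance with exact rational arithmetic (the Gram matrix R and the value of g below are illustrative choices of ours):

```python
from fractions import Fraction

def det(M):
    """Exact determinant via fraction-based Gaussian elimination."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    result = Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if A[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            result = -result
        result *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return result

def scale(M, c):
    return [[c * x for x in row] for row in M]

def block_diag(A, B):
    n, m = len(A), len(B)
    return [row + [0] * m for row in A] + [[0] * n + row for row in B]

R = [[2, 1], [1, 2]]   # toy Gram matrix for Lambda (n = 2)
g = 4                  # in the construction g = Theta(f^2)
S = block_diag(scale(R, g * g), scale(R, (g + 1) ** 2))
Q = block_diag(R, scale(R, (g * (g + 1)) ** 2))
# The Gram determinants (squares of the lattice determinants) agree:
# det(S) = det(Q) = g^(2n) * (g+1)^(2n) * det(R)^2.
```

The same computation for other choices of g and R confirms that the equality depends only on the block structure, not on the particular lattice.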

Let \(I_n\) be the \(n\times n\) identity matrix and consider the transformations:

$$\begin{aligned} U_1 := \left( \begin{array}{cc} g^{-1}I_n & 0 \\ 0 & gI_n \end{array} \right)&\qquad \qquad U_2 := \left( \begin{array}{cc} 0 & (g+1)I_n \\ (g+1)^{-1}I_n & 0 \end{array} \right) \end{aligned}$$

Then \(Q = U_1^t S U_1\) over \(\mathbb Q\): this implies \([S]_{\mathbb {Q}} = [Q]_{\mathbb {Q}}\). For any prime p we have that either \(\gcd (g, p)=1\) or \(\gcd (g+1, p)=1\) (or both). So either g or \(g+1\) is invertible over the p-adic integers \(\mathbb {Z}_p\), and thus either \(U_1 \in \mathcal {GL}_{2n}(\mathbb {Z}_p)\) and \(Q = U_1^t S U_1\) over \(\mathbb {Z}_p\), or \(U_2 \in \mathcal {GL}_{2n}(\mathbb {Z}_p)\) and \(Q = U_2^t S U_2\) over \(\mathbb {Z}_p\). In either case, we have established \([S]_{\mathbb {Z}_p} = [Q]_{\mathbb {Z}_p}\), which concludes the comparison of arithmetic invariants: \(\mathrm {ari}(S) = \mathrm {ari}(Q)\).
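Both witnesses can be verified symbolically on a toy instance (exact fractions; R and g are again illustrative choices of ours):

```python
from fractions import Fraction

def mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def T(A):
    return [list(col) for col in zip(*A)]

def blocks(A, B, C, D):
    """Assemble the 2x2 block matrix [[A, B], [C, D]]."""
    return ([ra + rb for ra, rb in zip(A, B)]
            + [rc + rd for rc, rd in zip(C, D)])

def scale(A, c):
    return [[Fraction(c) * x for x in row] for row in A]

n, g = 2, 4
R = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(2)]]
I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
Z = [[Fraction(0)] * n for _ in range(n)]

S = blocks(scale(R, g * g), Z, Z, scale(R, (g + 1) ** 2))
Q = blocks(R, Z, Z, scale(R, (g * (g + 1)) ** 2))
U1 = blocks(scale(I, Fraction(1, g)), Z, Z, scale(I, g))
U2 = blocks(Z, scale(I, g + 1), scale(I, Fraction(1, g + 1)), Z)

# Q = U1^t S U1 and Q = U2^t S U2 hold exactly over the rationals;
# U1 has only powers of g in its denominators, U2 only powers of g + 1,
# which is what makes the p-adic case distinction work.
```

Note that U1 is p-adically integral whenever p does not divide g, and U2 whenever p does not divide g + 1, matching the case distinction in the text.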

Dense Sublattice. We now check the requirements for Theorem 5.2, namely that \(\eta _{\frac{1}{2}}(\varLambda ) \leqslant \rho ' / (2\sqrt{2n})\). Given that \(\eta _{\frac{1}{2}}(\varLambda ) \leqslant \varTheta (f \cdot \mathrm {gh}(\varLambda )/\sqrt{n})\), it is sufficient if

$$\begin{aligned} \varTheta (f \cdot \mathrm {gh}(\varLambda )/\sqrt{n}) \leqslant \rho '/(2\sqrt{2n}) = \varTheta (g/f) \cdot \mathrm {gh}(\varLambda )/\sqrt{n}, \end{aligned}$$

and thus we can conclude that some \(g = \varTheta (f^2)\) indeed suffices.

Following the conclusions from the cryptanalysis in Sect. 7.2 and more specifically Conjecture 7.1, we take a look at the primal-dual gap for \(\varLambda _S\) and \(\varLambda _Q\). We have that \(\mathrm {gap}(\varLambda _S) = \varTheta (\mathrm {gap}(\varLambda )) \leqslant O(f)\), and \(\mathrm {gap}(\varLambda _Q) = \varTheta (g \cdot \mathrm {gap}(\varLambda )) \leqslant O(f^3)\). Note that following the same computation above but for a primal gap of f, dual gap of \(f^*\), and a decoding gap of \(f' \geqslant 2f\) we would have \(g = \varTheta (f^* \cdot f')\) and obtain a final primal-dual gap of \(O(\max (f,f^*)\cdot f^* \cdot f')\).

8.2 Signature Scheme

Our signature scheme can be instantiated with any lattice for which we can sample efficiently at small Gaussian widths, using a \(\mathrm{{\Delta LIP}}\) pair similar to the one above. Namely, we assume to have a lattice \(\varLambda \) with \(\mathrm {gap}(\varLambda )\leqslant f\) and such that we can sample efficiently with parameter \(\rho /\sqrt{n} = \varTheta (\eta _{2^{-\varTheta (n)}}(\varLambda ))\) close to the smoothing bound. Similarly to the KEM we set \(\varLambda _S := g \cdot \varLambda \oplus (g+1) \cdot \varLambda \), and \(\varLambda _{Q^{-1}} := \varLambda \oplus g(g+1) \cdot \varLambda \) for some integer \(g \geqslant 1\). In particular, as in the KEM, we do have \(\mathrm {ari}(S) = \mathrm {ari}(Q^{-1})\).

Then for the dual we have \(\varLambda _{Q} = \varLambda ^* \oplus \frac{1}{g(g+1)}\varLambda ^*\), with \(\frac{1}{g(g+1)}\varLambda ^*\) as a dense sublattice. The constraint of Theorem 6.2 boils down to the inequality \(\varTheta (g \cdot f \cdot \det (\varLambda )^{1/n}) \leqslant \varTheta (g^2 \det (\varLambda )^{1/n})\), and thus some \(g = \varTheta (f)\) suffices. The final primal-dual gap of \(\varLambda _S\) and \(\varLambda _{Q^{-1}}\) is then bounded by \(O(f^2)\).

The simplest lattice for which we have very efficient samplers is of course the integer lattice \(\mathbb {Z}^n\), leading to a gap of O(n) via the above construction. Instantiating our scheme with this lattice would lead to an interesting signature scheme where there is no need to compute any Cholesky decomposition, even for signing, and that could be fully implemented with efficient integer arithmetic.
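A minimal sketch of such a sampler over \(\mathbb {Z}^n\) (plain rejection sampling with a tail-cut parameter of our own choosing; a deployed scheme would use a constant-time sampler instead):

```python
import math
import random

def sample_z(s, tau=10):
    """Rejection sampler for the centered discrete Gaussian D_{Z, s}:
    propose x uniformly in [-tau*s, tau*s] and accept with probability
    rho_s(x) = exp(-pi * x^2 / s^2). tau bounds the (negligible) tail cut."""
    bound = math.ceil(tau * s)
    while True:
        x = random.randint(-bound, bound)
        if random.random() < math.exp(-math.pi * x * x / (s * s)):
            return x

def sample_zn(n, s):
    """A sample from D_{Z^n, s} is just n independent samples from D_{Z, s}:
    the orthogonality of Z^n is what makes sampling coordinate-wise."""
    return [sample_z(s) for _ in range(n)]
```

The coordinate-wise structure is exactly why no Cholesky decomposition is needed: each coordinate is handled independently with integer arithmetic plus a scalar acceptance test.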

We refer to our last open question (Sect. 1.3) regarding lattices with a tighter Gaussian sampler, in order to obtain a signature scheme with a better underlying approximation factor.

Getting Down to O(f). The general constructions presented above turn an efficiently decodable or sampleable lattice \(\varLambda \) with gap f into a primary and a secondary lattice with gaps \(O(f^3)\) and \(O(f^2)\) to instantiate our KEM and signature scheme respectively. We suggest here that these losses might be an artifact of the security proof.

Suppose we can generate a random lattice \(\varLambda _Q\) such that \(\mathrm {ari}(\varLambda _Q)=\mathrm {ari}(\varLambda )\); without the arithmetic constraint we would have with overwhelming probability that \(\mathrm {gap}(\varLambda _Q) = O(1)\) (but even O(f) would suffice). Let us assume that the constraint does not affect this gap. Then, similarly to the scheme of McEliece, by adding the extra security assumption that it is hard to decode in \(\varLambda _Q\) (or hard to sample for the signature scheme), we could remove the lossiness argument from the security proof and directly instantiate our schemes with the pair \((\varLambda , \varLambda _Q)\), leading to a gap of O(f).