1 Introduction

A succinct non-interactive argument of knowledge (SNARK) [45, 58] allows a prover to convince a verifier that they know a witness to an NP statement. The succinctness property demands that the size of the proof and, after preprocessing, the work of the verifier are sublinear in (ideally independent of) the time needed to check the validity of the witness. Over the last decade, SNARKs have witnessed a meteoric rise in their efficiency and applicability [9, 11, 13, 22, 30, 62]. More recently, SNARKs have found their way into real-world systems in the context of blockchain-based cryptocurrencies [10, 15, 18, 20, 47].

The looming threat of quantum computers has given rise to a movement in the cryptographic community to investigate cryptographic constructions from assumptions that would plausibly withstand the presence of a quantum attacker. Unfortunately, present SNARKs based on post-quantum assumptions are in many ways inferior to pre-quantum constructions based on bilinear pairings. The goal of this work is to make progress in this area.

1.1 The Seascape of SNARKs

To put our work into context, we give a brief outline of the current seascape of SNARK constructionsFootnote 1. We split the schemes depending on the underlying cryptographic assumptions used as the source of hardness.

Bilinear Pairings. To date, the most efficient and feature-rich SNARKs are constructed over bilinear pairing groups (e.g. [42]) with a trusted setup. Typically, a pairing-based SNARK proof consists of only a small constant number of base group elements and is also publicly verifiable. Furthermore, offline preprocessing can often be performed, such that the online verification time is sublinear in the size of the statement being proved and the corresponding witness. Moreover, pairing-based SNARKs are favourable because of their algebraic structure, which is known to enable proof batching [21, 50] and efficient recursive composition [12]. However, due to their reliance on the hardness of problems related to discrete logarithms, pairing-based SNARKs are not sound against a cheating quantum prover.

Random Oracles. Promising post-quantum candidates for SNARKs are constructions based on Micali’s CS proofs paradigm: They are obtained by first building an interactive argument using (generalisations of) probabilistically checkable proofs (PCPs) [45], then compiling it into a non-interactive one using the Fiat-Shamir transformation [27] in the random oracle (RO) model.

A major difference between pairing-based and RO-based SNARKs, from both theoretical and practical perspectives, is the algebraic structure of the verification algorithm. In RO-based SNARKs, the verification algorithms query the RO, which is a combinatorial object. This is especially important when recursively composing the SNARK: On the theoretical side, proving the knowledge of a valid RO-based SNARK proof requires specifying the circuit computing the RO. This makes it challenging to formally argue about soundness, even in the RO model. From a practical perspective, the RO is instantiated with cryptographic hash functions, which typically have high multiplicative degree.Footnote 2 Since the multiplicative degree of the relation being proven often dominates the prover’s computational complexity in SNARKs, proving the satisfiability of a cryptographic hash function becomes computationally expensive.

Lattices. A prominent source of hardness for post-quantum security is computational problems over lattices. Not only do lattice-based assumptions allow us to build most standard cryptographic primitives, e.g. [34, 66], but they also enable new powerful primitives [33, 38, 39, 72], which are currently out of the reach of group-based assumptions. Unfortunately, in the context of SNARKs, lattices have yet to be established as competitive alternatives to group-based constructions. So far, lattice-based SNARKs either require designated verifiers [32, 43] or linear-time verification [6, 19].

Beyond their theoretical appeal, one additional motivation for constructing lattice-based SNARKs is that they are potentially more compatible with other basic lattice-based primitives when composing them into more advanced systems. More concretely, consider the task of proving the satisfiability of certain algebraic relations over a ring \(\mathcal {R}\) by a solution vector of norm bounded by some \(\delta \), a language which arises naturally when composing lattice-based building blocks. Using an argument system for proving algebraic relations over a finite field without norm constraints, arithmetisation would be needed to express certain witness components in, say, binary representation and to translate the bounded-norm condition into the satisfiability of a potentially high-degree polynomial, depending on the choice of the norm and the norm bound \(\delta \). In contrast, the bounded-norm constraint could be proven natively if we have an argument system which supports proving the satisfiability of algebraic relations over \(\mathcal {R}\) by solutions of norm bounded by some \(\alpha \le \delta \). This is done by expressing the solution vector in a likely more compact \(O(\alpha )\)-ary representation such that, if the representation has norm bounded by \(\alpha \), then the original solution has norm bounded by \(\delta \).
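
To make the last step concrete, the following sketch (ours, not part of the construction; plain non-negative integers rather than ring elements) decomposes a vector of norm at most \(\delta \) into base-\(\alpha \) digits of norm less than \(\alpha \), so that a native norm-\(\alpha \) constraint on the digits certifies the norm-\(\delta \) bound on the original solution.

```python
# Toy illustration (ours): decomposing a non-negative integer vector of norm <= delta
# into base-alpha digits, so that a norm bound alpha on the digits certifies the
# norm bound delta on the original vector.
from math import ceil, log

def decompose(x, alpha, num_digits):
    """Base-alpha digits of each entry of x, least-significant first."""
    digits = []
    for _ in range(num_digits):
        digits.append([xi % alpha for xi in x])
        x = [xi // alpha for xi in x]
    return digits  # every digit lies in [0, alpha - 1]

def recompose(digits, alpha):
    x = [0] * len(digits[0])
    for k, layer in enumerate(digits):
        x = [xi + (alpha ** k) * d for xi, d in zip(x, layer)]
    return x

delta, alpha = 10**6, 16
num_digits = ceil(log(delta + 1, alpha))      # number of base-alpha digits needed
x = [917_004, 3, 65_535]                      # entries in [0, delta]
digits = decompose(x, alpha, num_digits)
assert all(0 <= d < alpha for layer in digits for d in layer)
assert recompose(digits, alpha) == x
```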

1.2 Our Contributions

In this work, we construct the first lattice-based SNARK for an NP-complete language defined over a ring \(\mathcal {R}\). Specifically, the language being supported is the satisfiability of polynomial maps over \(\mathcal {R}\) by bounded-norm solutions. Our construction qualitatively matches pairing-based SNARKs, i.e. it is publicly verifiable and can achieve sublinear verification time given preprocessing, while requiring a trusted setup. In addition, it is tentatively post-quantum secure. Furthermore, our construction uses only algebraic operations over a ring \(\mathcal {R}\), and is therefore friendly to recursive composition. The soundness of our scheme is based on new lattice-based (knowledge) assumptions. The introduction of new knowledge assumptions is, to some extent, necessary: The work of Gentry and Wichs [35] shows that the soundness of any SNARK cannot be based on falsifiable assumptions in a black-box manner. We summarise the main steps of our work in the following.

(1):

Translation Technique. We put forward a new paradigm for translating pairing-based constructions to the lattice world. Our constructions stem from techniques from the literature on pairing-based cryptography [53], while simultaneously exploiting the ring structure offered by the lattice setting. We develop the necessary technical toolkit that helps us mimic operations of pairing-based VC constructions in the lattice setting. We view this translation strategy as a major conceptual contribution of our work and we expect it to be instrumental in enabling new applications of lattice-based cryptography.

(2):

Vector Commitments for Constant-Degree Polynomials. A vector commitment (VC) allows a committer to commit to a vector of \(w\) values \({\boldsymbol{\textbf{x}}} :=(x_0, \ldots , x_{w-1}) \in \mathcal {R}^w\) and then reveal selected portions of the input vector, or more generically a function \(f: \mathcal {R}^w \rightarrow \mathcal {R}^t\) over the input vector, along with a proof \(\pi \) that can be publicly verified. We require both the commitment and the opening proof to be compact. In terms of security, we want to ensure an adversary cannot output a valid opening proof for an incorrect function evaluation of the input vector. VCs have been established as a central primitive in cryptography [23, 24, 29, 37, 49, 52]. As a central technical contribution, we present the first (lattice-based) VC that supports openings beyond linear functions. Specifically, our VC commits to short vectors of ring elements \({\boldsymbol{\textbf{x}}} \in \mathcal {R}^w\) and supports openings to constant-degree d multivariate polynomial maps. We then show how this VC is sufficient to construct SNARKs for the satisfiability of degree-d polynomial maps (which is NP-complete for \(d \ge 2\)) by bounded-norm solutions.

(3):

New Assumptions and Analysis. Our translation techniques (and consequently the resulting cryptographic schemes) rely on a new family of assumptions that we refer to as the k-Ring-Inhomogeneous Short Integer Solution (or \(k\text {-}R\text {-}\textsf{ISIS}\) for short) assumptions. Roughly, a \(k\text {-}R\text {-}\textsf{ISIS} \) assumption says that it is hard to find a short preimage \({\boldsymbol{\textbf{u}}}_{g^*}\) satisfying \(\left\langle {{\boldsymbol{\textbf{a}}}, {\boldsymbol{\textbf{u}}}_{g^*}} \right\rangle = g^{*}({\boldsymbol{\textbf{v}}}) \bmod q\), where \(g^*\) is a Laurent monomialFootnote 3 and \({\boldsymbol{\textbf{v}}}\) is a random point, given short preimages of other Laurent monomials \(\mathcal {G} \) evaluated on the same random point. Our new assumptions can be viewed as inhomogeneous ring variants of the \(k\text {-}\textsf{SIS} \) assumption [17, 54] (where all the images are zero). The key difference to \(k\text {-}\textsf{SIS} \) is that we allow handing out more preimages than the dimension of \({\boldsymbol{\textbf{a}}}\), but these preimages are all of different images.

In fact, the assumptions we introduce, \(k\text {-}M\text {-}\textsf{ISIS} \), are slightly more general in being defined over modules rather than rings. Our generalisation to modules is motivated by the knowledge assumptions that we also introduce. In the knowledge assumptions, the images live in a moderately sized submodule.

We consider the introduction and study of the \(k\text {-}R\text {-}\textsf{ISIS} \) assumptions as a contribution to the programme of charting the territory between LWE and multilinear maps assumptions called for in [1].

To gain confidence in our newly introduced assumptions, we initiate their study. We show that certain subclasses of the \(k\text {-}R\text {-}\textsf{ISIS} \) problems (parameterised by the algebraic structure on the \(k\text {-}R\text {-}\textsf{ISIS} \) images) are as hard as the \(R\text {-}\textsf{SIS} \) problem. We show that, as expected, the \(k\text {-}M\text {-}\textsf{ISIS} \) problems are as hard as their \(k\text {-}R\text {-}\textsf{ISIS} \) counterparts, although the former have slightly skewed parameters. We also show that certain \(k\text {-}M\text {-}\textsf{ISIS} \) problems are as hard as the \(k\text {-}M\text {-}\textsf{SIS} \) problem, the natural module variant of the \(k\text {-}\textsf{SIS} \) problem, where the former have higher module ranks. Furthermore, we show that the \(k\text {-}M\text {-}\textsf{ISIS} \) problems for \((\mathcal {G},g^*)\) are as hard as those for \((\mathcal {G},0)\), and that the hardness is preserved when scaling both \(\mathcal {G} \) and \(g^*\) multiplicatively by any non-zero Laurent monomial.

However, since none of the reductions from well-established problems cover the case we rely upon in our constructions, we perform cryptanalysis to assess the hardness of general \(k\text {-}M\text {-}\textsf{ISIS} \) problems. While we did not identify any structural weaknesses, we encourage independent analysis to gain confidence in or invalidate our assumptions.

(4):

Post-Quantum Security. As a contribution of independent interest, we show that our VC satisfies a strong notion of binding known as collapsing (as an ordinary commitment, not with respect to functional openings), a recently introduced security notion in the quantum setting [70]. For this, we introduce a new technique of embedding NTRU ciphertexts into the public parameters of our VC. To the best of our knowledge, this is the first VC not based on Merkle trees that is shown to satisfy such a notion.

(5):

New Applications. Our SNARK supports proving the satisfiability of polynomial maps over \(\mathcal {R}\) by bounded-norm solutions, a language which directly captures those statements which naturally arise in lattice-based cryptographic constructions. We highlight two native applications of our SNARK which do not rely on expensive conversions between different NP-complete languages.

The first application is the recursive composition of our SNARK, which refers to the process of using the SNARK to prove knowledge of another SNARK proof and the satisfiability of a polynomial map; for details see the full version. This is natively supported because the verification algorithm of our SNARK construction is itself checking the satisfiability of certain algebraic relations over \(\mathcal {R}\) by a bounded-norm solution. Recursive composition of SNARKs is a general purpose technique for aggregating proofs or proving complex statements in a piece-by-piece fashion. The technique is also useful for constructing incremental verifiable computation [71] and verifiable delay functions [14, 41].

The second application is the aggregation of GPV signatures [34]. While it is folklore that any signature scheme can be aggregated using a SNARK for an NP-complete language, we stress that the GPV verification algorithm, again, checks the satisfiability of certain algebraic relations over \(\mathcal {R}\) by a bounded-norm solution, which our SNARK natively supports. We discuss how to handle relations in \(\mathcal {R}_{q}\) in the full version of this work. Apart from obtaining short aggregated GPV signatures, in the setting where a set of n signers sign a common message, the verification of the aggregated signatures could be preprocessed, resulting in an online verification time sublinear in n. As a bonus result on GPV signatures, we further show how to construct lattice-based adaptor signatures [7] based on the GPV paradigm. Combining the two results, we obtain the first aggregatable adaptor signatures from any assumption.

Open Problems. Our work paves the way for what we believe to be an exciting line of research. As we initiate the study of inhomogeneous variants of the k-SIS assumptions, we ask whether better (possibly quantum) algorithms exploiting the additional algebraic structure can be found for solving these problems. We also conjecture that, for further families of rational functions, the \(k\text {-}R\text {-}\textsf{ISIS} \) assumption can be shown to be as hard as standard lattice problems. Another compelling question is to study new cryptographic applications of the \(k\text {-}R\text {-}\textsf{ISIS}\) family. We expect that such an abstraction will be useful in transferring techniques from pairing-based cryptography into the lattice world.

1.3 Technical Overview

We give a concise overview of the process of obtaining our lattice-based SNARK.

From Vector Commitments to SNARKs. In this work, we are interested in VCs supporting openings to constant-degree-d \(w\)-variate \(t\)-output polynomial maps with bounded coefficients. The standard properties of interest for VCs are:

  • Compactness. Commitments and opening proofs are of size \(\textsf{poly}(\lambda , \textrm{log}\,w, \textrm{log}\,t)\).

  • Binding. It is infeasible to produce a commitment c and proofs for polynomial maps, such that the system of equations induced by them is not satisfiable.Footnote 4

In addition, we require the following stronger notion of binding.

  • Extractability. To produce a commitment c and a proof that the image of a polynomial map f at the committed vector is \({\boldsymbol{\textbf{y}}}\), one must know a preimage \({\boldsymbol{\textbf{x}}}\) such that c is a commitment of \({\boldsymbol{\textbf{x}}}\) and \(f({\boldsymbol{\textbf{x}}}) = {\boldsymbol{\textbf{y}}}\).

It is well known that one can construct SNARKs from VCs supporting linear openings in the RO model [49]. However, in this work we take a different route and adopt a more structured approach to construct SNARKs. Specifically, recall that the satisfiability of systems of degree-d polynomials is NP-complete for any constant \(d \ge 2\). As such, a SNARK can be trivially constructed from a compact and extractable VC for degree-d polynomials: The prover simply commits to a solution \({\boldsymbol{\textbf{x}}}\) of the system \((f,{\boldsymbol{\textbf{y}}})\) and immediately produces an opening proof for \((f,{\boldsymbol{\textbf{y}}})\). As a concrete example, a popular NP-complete language supported by existing SNARKs is rank-1 constraint satisfiability (R1CS). An R1CS instance consists of three matrices \((\textbf{A},\textbf{B},\textbf{C})\) over a field or in general a ring. The instance is satisfied by a vector \({\boldsymbol{\textbf{x}}}\) if \((\textbf{A} \cdot (1,{\boldsymbol{\textbf{x}}})) \circ (\textbf{B} \cdot (1,{\boldsymbol{\textbf{x}}})) = (\textbf{C} \cdot (1,{\boldsymbol{\textbf{x}}}))\), where \(\circ \) denotes the Hadamard product. It is easy to see that an R1CS instance is a special case of an instance \((f,{\boldsymbol{\textbf{y}}})\) of degree-2 polynomial satisfiability where \(f({\boldsymbol{\textbf{X}}}) := (\textbf{A} \cdot (1,{\boldsymbol{\textbf{X}}})) \circ (\textbf{B} \cdot (1,{\boldsymbol{\textbf{X}}})) - (\textbf{C} \cdot (1,{\boldsymbol{\textbf{X}}}))\) and \({\boldsymbol{\textbf{y}}} = {\boldsymbol{\textbf{0}}}\). For a full description of our SNARK we refer the reader to the full version of the paper.
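
As a quick sanity check of this reduction, the following toy sketch (ours, over a small prime field) builds the degree-2 map f from an R1CS instance and tests whether a candidate witness satisfies \(f({\boldsymbol{\textbf{x}}}) = {\boldsymbol{\textbf{0}}}\).

```python
# Toy illustration (ours): an R1CS instance (A, B, C) over Z_q viewed as an
# instance (f, y = 0) of degree-2 polynomial satisfiability.
q = 97

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) % q for row in M]

def f(A, B, C, x):
    """Degree-2 polynomial map f(x) = (A(1,x)) o (B(1,x)) - (C(1,x)) over Z_q."""
    z = [1] + list(x)                        # prepend the constant 1
    a, b, c = mat_vec(A, z), mat_vec(B, z), mat_vec(C, z)
    return [(ai * bi - ci) % q for ai, bi, ci in zip(a, b, c)]

# One constraint encoding x0 * x1 = x2 over the witness (x0, x1, x2).
A = [[0, 1, 0, 0]]   # selects x0
B = [[0, 0, 1, 0]]   # selects x1
C = [[0, 0, 0, 1]]   # selects x2

assert f(A, B, C, [3, 5, 15]) == [0]   # satisfying witness
assert f(A, B, C, [3, 5, 14]) != [0]   # non-satisfying witness
```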

Throughout the rest of this overview, we therefore focus on constructing lattice-based VCs supporting degree-d openings. Since known constructions are restricted to positional openings, we turn our attention to pairing-based schemes (which support linear openings) and develop a new strategy to translate them into lattice-based VCs and simultaneously to extend the degree to \(d > 1\).

General Translation Strategy. Our strategy for constructing a lattice-based VC is a novel translation technique that lets us port techniques from the pairing-land to the lattice-land. We describe a general translation strategy for translating not only VC but also potentially other pairing-based constructions to the lattice setting. For the group setting, we adopt the implicit notation for bilinear groups \(\mathbb {G}_1\), \(\mathbb {G}_2\), and \(\mathbb {G}_t\) of prime order q, i.e. the vector of elements in \(\mathbb {G}_i\) with (entry-wise) discrete logarithm \({\boldsymbol{\textbf{x}}} \in \mathbb {Z}_q\) base an arbitrary fixed generator of \(\mathbb {G}_i\) is denoted by \([{\boldsymbol{\textbf{x}}}]_i\), with group operations written additively, and the pairing product between \([{\boldsymbol{\textbf{x}}}]_1\) and \([{\boldsymbol{\textbf{y}}}]_2\) is written as \(\left\langle {[{\boldsymbol{\textbf{x}}}]_1, [{\boldsymbol{\textbf{y}}}]_2} \right\rangle \). For the lattice setting, we let \(\mathcal {R}\) be a cyclotomic ring, \(q \in \mathbb {N}\) be a large enough rational prime such that random elements in \(\mathcal {R}_q :=\mathcal {R}/ q \mathcal {R}\) are invertible with non-negligible probability.

Consider a pairing-based construction where the elements \( \{{[1]}_1, {[g({\boldsymbol{\textbf{v}}})]}_t\} _{g \in \mathcal {G}}\) are publicly available to all parties, where \(\mathcal {G} \) is a set of linearly-independent rational functions and \({\boldsymbol{\textbf{v}}}\) is a vector of secret exponents. An authority, knowing the secret exponents \({\boldsymbol{\textbf{v}}}\), is responsible for giving out secret elements \( \{{[g({\boldsymbol{\textbf{v}}})]}_2\} _{g \in \mathcal {G}}\) to user A. In turn, user A can compute \({[u]}_2 :=\sum _{g \in \mathcal {G}} c_g \cdot {[g({\boldsymbol{\textbf{v}}})]}_2\) and present it to user B, who can then check the correctness of \({[u]}_2\) by checking

$$ \left\langle {{[1]}_1, {[u]}_2} \right\rangle {\mathop {=}\limits ^{?}} \sum _{g \in \mathcal {G}} c_g \cdot {[g({\boldsymbol{\textbf{v}}})]}_t. $$

Note that in this check one side of the pairing (i.e. \({[1]}_1\)) is public, while the other side (i.e. \({[u]}_2\)) is computed from secrets delegated by the authority to user A. This property will be crucial for our translation technique to apply.

The above structure can be seen in many pairing-based constructions. For example, the secret vector \({\boldsymbol{\textbf{v}}}\) could be a trapdoor, a master secret key of an identity-based encryption scheme, or a signing key; the delegated secrets \( \{{[g({\boldsymbol{\textbf{v}}})]}_2\} _{g \in \mathcal {G}}\) could be hints given alongside the public parameters of a VC, an identity-based secret key, or a signature; and the pairing-product check could be for opening proof verification, decryption, or signature verification.

Our strategy of translating the above to a lattice-based construction is as follows. First, the public elements \( \{{[1]}_1, {[g({\boldsymbol{\textbf{v}}})]}_t\} _{g \in \mathcal {G}}\) over \(\mathbb {G}_1\) and \(\mathbb {G}_t\) are translated to the public vector and elements \( \{{\boldsymbol{\textbf{a}}},\ g({\boldsymbol{\textbf{v}}})\} _{g \in \mathcal {G}}\), where \({\boldsymbol{\textbf{a}}}\) and \({\boldsymbol{\textbf{v}}}\) are random vectors over \(\mathcal {R}_q\) and \(\mathcal {R}_{q}^\times \) respectively. Since \( \{g({\boldsymbol{\textbf{v}}})\} _{g \in \mathcal {G}}\) does not necessarily hide \({\boldsymbol{\textbf{v}}}\) in the lattice setting (e.g. when \(\mathcal {G} \) consists of many linear functions), the authority might as well publicly hand out the vectors \( \{{\boldsymbol{\textbf{a}}}, {\boldsymbol{\textbf{v}}}\} \) directly. Next, the secret elements \( \{{[g({\boldsymbol{\textbf{v}}})]}_2\} _{g \in \mathcal {G}}\) are translated to the short secret vectors \( \{{\boldsymbol{\textbf{u}}}_g\} _{g \in \mathcal {G}}\) satisfying \(\left\langle {{\boldsymbol{\textbf{a}}}, {\boldsymbol{\textbf{u}}}_g} \right\rangle = g({\boldsymbol{\textbf{v}}}) \bmod q\). These short preimages can be sampled given a trapdoor of \({\boldsymbol{\textbf{a}}}\), which the authority should have generated alongside \({\boldsymbol{\textbf{a}}}\). Given \( \{{\boldsymbol{\textbf{u}}}_g\} _{g \in \mathcal {G}}\), user A can similarly compute \({\boldsymbol{\textbf{u}}} :=\sum _{g \in \mathcal {G}} c_g \cdot {\boldsymbol{\textbf{u}}}_g\), although the coefficients \(c_g\) are now required to be short. The pairing-product check is then translated to checking

$$\begin{aligned} \left\langle {{\boldsymbol{\textbf{a}}}, {\boldsymbol{\textbf{u}}}} \right\rangle {\mathop {\equiv }\limits ^{?}}\sum _{g \in \mathcal {G}} c_g \cdot g({\boldsymbol{\textbf{v}}}) \bmod q \qquad \quad \text {and} \qquad \quad {\boldsymbol{\textbf{u}}} \text { is short.} \end{aligned}$$
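
The following toy sketch (ours; plain integer vectors modulo q in place of ring elements, and with the images \(g({\boldsymbol{\textbf{v}}})\) replaced by placeholder values derived from the preimages themselves) illustrates the key feature exploited above: the translated check is linearly homomorphic, and short combinations of short preimages remain short.

```python
# Toy illustration (ours): linear homomorphism of the lattice-side check
# <a, u> = sum_g c_g * g(v) (mod q).  A real scheme samples the short preimages u_g
# with a trapdoor for a; here we simply sample short u_g and treat
# y_g := <a, u_g> mod q as the image, which suffices to exhibit the algebra.
import random

q, dim, num_g = 2**16 + 1, 8, 5
a = [random.randrange(q) for _ in range(dim)]
u_gs = [[random.randrange(-2, 3) for _ in range(dim)] for _ in range(num_g)]   # short preimages
y_gs = [sum(ai * ui for ai, ui in zip(a, u_g)) % q for u_g in u_gs]            # their images
c_gs = [random.randrange(-2, 3) for _ in range(num_g)]                         # short coefficients

u = [sum(c * u_g[j] for c, u_g in zip(c_gs, u_gs)) for j in range(dim)]        # combined opening
lhs = sum(ai * ui for ai, ui in zip(a, u)) % q
rhs = sum(c * y for c, y in zip(c_gs, y_gs)) % q
assert lhs == rhs                                  # <a, u> = sum_g c_g * y_g  (mod q)
assert max(abs(x) for x in u) <= 2 * 2 * num_g     # u remains short
```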

The same strategy can also be used to translate (conjectured-)hard computational problems over bilinear groups to the lattice setting to obtain also seemingly-hard problems. For example, consider a variant of the \(\ell \)-Diffie-Hellman Exponent problem, which asks to find \([v^\ell ]_2\) given \(([1]_1, [1]_2, [v]_2, \ldots , [v^{\ell -1}]_2)\). A natural lattice-counterpart of the problem is to find a short preimage \({\boldsymbol{\textbf{u}}}_{\ell }\) satisfying \(\left\langle {{\boldsymbol{\textbf{a}}}, {\boldsymbol{\textbf{u}}}_{\ell }} \right\rangle \equiv v^{\ell } \bmod q\) given short preimages \(({\boldsymbol{\textbf{u}}}_i)_{i\in \mathbb {Z}_\ell }\) each satisfying \(\left\langle {{\boldsymbol{\textbf{a}}}, {\boldsymbol{\textbf{u}}}_i} \right\rangle = v^i \bmod q\).

We remark that a direct translation of pairing-based constructions does not necessarily yield the most efficient lattice-based scheme. For this reason, it will be useful to generalise pairing-based constructions into a family parameterised by the function class \(\mathcal {G} \). We will then have the freedom to pick \(\mathcal {G} \) to optimise the efficiency of the translated lattice-based scheme.

Translating Vector Commitments. We next demonstrate how the above translation strategy can be applied to translate pairing-based VCs, using the following pairing-based VC with openings to linear forms \(f: \mathbb {Z}_q^w\rightarrow \mathbb {Z}_q\) adapted from [24, 49, 52] as an example.

  • Public parameters: \(\bigg ([1]_1,[1]_2, \left( [v_i]_1\right) _{i \in \mathbb {Z}_w}, \left( [\bar{v}_j]_2\right) _{j \in \mathbb {Z}_w}, \left( [v_i \cdot \bar{v}_j]_2\right) _{i,j \in \mathbb {Z}_w: i \ne j}, [\bar{v}]_t \bigg )\) where \(\bar{v} = \prod _{k \in \mathbb {Z}_w} v_k\) and \(\bar{v}_j = \bar{v}/v_j\).

  • Committing \({\boldsymbol{\textbf{x}}} \in \mathbb {Z}_q^w\): \({[c]}_1 :=\sum _{i \in \mathbb {Z}_w} x_i \cdot {[v_i]}_1 = \left\langle {{[{\boldsymbol{\textbf{v}}}]}_1, {\boldsymbol{\textbf{x}}}} \right\rangle \)

  • Opening f: \({[u]}_2 :=\sum _{i,j \in \mathbb {Z}_w: i \ne j} f_j \cdot x_i \cdot {[v_i \cdot \bar{v}_j]}_2\)

  • Verifying (f, y): \(\left\langle {[1]_1, {[u]}_2} \right\rangle {\mathop {=}\limits ^{?}} \left\langle {{[c]}_1, \sum _{j \in \mathbb {Z}_w} f_j \cdot {[\bar{v}_j]}_2} \right\rangle - y \cdot [\bar{v}]_t\)

The weak binding property of the scheme, i.e. the infeasibility of opening a commitment c to both (f, y) and \((f,y')\) with \(y \ne y'\), relies on the hardness of computing \([\bar{v}]_2\), whose exponent corresponds to evaluating the “target monomial” \(\prod _{k \in \mathbb {Z}_w} X_k\) at \({\boldsymbol{\textbf{v}}}\). Notice that the target monomial is set up in such a way that \([\bar{v}]_t = [v_i]_1 \cdot [\bar{v}_i]_2\) holds for all \(i \in \mathbb {Z}_w\), where \([\bar{v}_i]_2\) can be viewed as a “complement” of \([v_i]_1\). Consequently, the value \(y = \left\langle {{\boldsymbol{\textbf{f}}}, {\boldsymbol{\textbf{x}}}} \right\rangle \) appears as the coefficient of \([\bar{v}]_t\) in the inner product \(\left\langle {\sum _{i \in \mathbb {Z}_w} x_i \cdot {[v_i]}_1, \sum _{j \in \mathbb {Z}_w} f_j \cdot {[\bar{v}_j]}_2} \right\rangle \).
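
For completeness, correctness of the verification equation can be seen by expanding the pairing product (our own expansion of the scheme above):

$$ \left\langle {{[c]}_1, \sum _{j \in \mathbb {Z}_w} f_j \cdot {[\bar{v}_j]}_2} \right\rangle = \sum _{i,j \in \mathbb {Z}_w: i \ne j} f_j \cdot x_i \cdot [v_i \cdot \bar{v}_j]_t + \left\langle {{\boldsymbol{\textbf{f}}}, {\boldsymbol{\textbf{x}}}} \right\rangle \cdot [\bar{v}]_t, $$

where the first summand is exactly \(\left\langle {[1]_1, {[u]}_2} \right\rangle \) for the opening \({[u]}_2\) above; subtracting \(y \cdot [\bar{v}]_t\) with \(y = \left\langle {{\boldsymbol{\textbf{f}}}, {\boldsymbol{\textbf{x}}}} \right\rangle \) thus makes the two sides of the check agree.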

While the above pairing-based scheme is ready to be translated to the lattice setting using our translation strategy, to prepare for our generalised scheme for higher-degree polynomials, we divide the target and complement monomials by \(\prod _{k \in \mathbb {Z}_w} X_k\). The complement of \(X_i\) becomes \(X_i^{-1}\) and the target monomial becomes the constant 1. Concretely, we divide the opening and the verification equation by \(\bar{v}\) to obtain

$$\begin{aligned}{}[u']_2&:=\sum _{i,j \in \mathbb {Z}_w: i \ne j} f_j \cdot x_i \cdot {[v_i / v_j]}_2 \\ \left\langle {[1]_1, [u']_2} \right\rangle&{\mathop {=}\limits ^{?}} \left\langle {{[c]}_1, \sum _{j \in \mathbb {Z}_w} f_j \cdot {[v_j^{-1}]}_2} \right\rangle - y \cdot [1]_t. \end{aligned}$$

Recall that in the VC construction above we relied on the hardness of computing \([\bar{v}]_2\). What we have done here might seem absurd, since the element \([1]_2\) is now publicly given in the group setting; however, finding a short pre-image of a fixed image, say 1, remains seemingly hard in the lattice setting. Indeed, translating the modified scheme, we derive the following lattice-based scheme.

  • Public Parameters: \(\left( {\boldsymbol{\textbf{a}}}, {\boldsymbol{\textbf{v}}}, \left( {\boldsymbol{\textbf{u}}}_{i,j}\right) _{i \ne j \in \mathbb {Z}_w}\right) \) where \(\left\langle {{\boldsymbol{\textbf{a}}}, {\boldsymbol{\textbf{u}}}_{i,j}} \right\rangle \equiv v_i/v_j \bmod q\) and the \({\boldsymbol{\textbf{u}}}_{i,j}\) are short

  • Committing \({\boldsymbol{\textbf{x}}} \in \mathcal {R}^w\): \(c :=\left\langle {{\boldsymbol{\textbf{v}}}, {\boldsymbol{\textbf{x}}}} \right\rangle \bmod q\)

  • Opening f: \({\boldsymbol{\textbf{u}}} :=\sum _{i,j \in \mathbb {Z}_w: i \ne j} f_j \cdot x_i \cdot {\boldsymbol{\textbf{u}}}_{i,j}\)

  • Verifying (f, y): \(\left\langle {{\boldsymbol{\textbf{a}}}, {\boldsymbol{\textbf{u}}}} \right\rangle {\mathop {\equiv }\limits ^{?}}\left( \sum _{j \in \mathbb {Z}_w} f_j \cdot v_j^{-1}\right) \cdot c - y \bmod q\) and \({\boldsymbol{\textbf{u}}}\) is short

For correctness, we require that the committed vector \({\boldsymbol{\textbf{x}}}\) and the function f both have short coefficients.
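
To fix ideas, here is a minimal numerical sketch of this scheme (ours, over \(\mathbb {Z}_q\) rather than \(\mathcal {R}_q\)). The preimages \({\boldsymbol{\textbf{u}}}_{i,j}\) are not sampled short, since that would require a lattice trapdoor; the sketch therefore only exercises the verification algebra, not the security-relevant shortness.

```python
# Toy illustration (ours): the translated VC over Z_q instead of R_q.
# Shortness of the preimages u_{i,j} is deliberately ignored (a real instantiation
# samples them short using a trapdoor for a); we only check the verification algebra.
import random

q, w, ell = 2**16 + 1, 4, 4
v = [random.randrange(1, q) for _ in range(w)]                # "secret exponents", invertible mod prime q
a = [1] + [random.randrange(q) for _ in range(ell - 1)]       # a[0] = 1 eases toy preimage sampling

def preimage(target):
    # Some vector u with <a, u> = target (mod q); NOT short (toy only).
    return [target % q] + [0] * (ell - 1)

u_ij = {(i, j): preimage(v[i] * pow(v[j], -1, q))
        for i in range(w) for j in range(w) if i != j}

x = [random.randrange(-2, 3) for _ in range(w)]               # short committed vector
f = [random.randrange(-2, 3) for _ in range(w)]               # linear form with short coefficients
y = sum(fi * xi for fi, xi in zip(f, x)) % q

c = sum(vi * xi for vi, xi in zip(v, x)) % q                  # commitment
u = [sum(f[j] * x[i] * u_ij[(i, j)][k] for i in range(w) for j in range(w) if i != j) % q
     for k in range(ell)]                                     # opening proof

lhs = sum(ai * ui for ai, ui in zip(a, u)) % q
rhs = (sum(f[j] * pow(v[j], -1, q) for j in range(w)) * c - y) % q
assert lhs == rhs                                             # verification equation holds
```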

The weak binding property of the translated lattice-based scheme relies on the hardness of finding a short preimage of (a small multiple of) 1 given short preimages of \(v_i/v_j\) for all \(i,j \in \mathbb {Z}_w\) with \(i \ne j\) – a new computational assumption obtained by translating its pairing-counterpart, which belongs to a new family of assumptions called the \(k\text {-}R\text {-}\textsf{ISIS}\) assumption family.

Furthermore, the computation of \(\sum _{j \in \mathbb {Z}_w} f_j \cdot v_j^{-1}\) in the verification equation can be preprocessed before knowing the commitment c and the opening proof \({\boldsymbol{\textbf{u}}}\), such that the online verification can be performed in time sublinear in \(w\).

Supporting Higher-Degree Polynomials. Notice that in the group setting the (modified) verification algorithm can be seen as evaluating the linear form f at \(([v_0^{-1}]_2 \cdot [c]_1 , \ldots , [v_{w-1}^{-1}]_2 \cdot [c]_1)\) where \([c]_1\) supposedly encodes \({\boldsymbol{\textbf{x}}}\). In the group setting, f has to be linear since we cannot multiply two \(\mathbb {G}_1\) elements together to get an encoding of the Kronecker product \({\boldsymbol{\textbf{x}}} \otimes {\boldsymbol{\textbf{x}}}\).

In the lattice setting, however, the commitment c is a ring element and thus we can evaluate a non-linear polynomial f at \((v_0^{-1} \cdot c, \ldots , v_{w-1}^{-1} \cdot c)\). Moreover, we notice that each degree-d monomial \({\boldsymbol{\textbf{x}}}^{{\boldsymbol{\textbf{e}}}}\) is encoded in \(c^d\) as (a factor of) the coefficient of \({\boldsymbol{\textbf{v}}}^{{\boldsymbol{\textbf{e}}}}\), which has a natural complement \({\boldsymbol{\textbf{v}}}^{-{\boldsymbol{\textbf{e}}}}\) satisfying \(({\boldsymbol{\textbf{v}}}^{{\boldsymbol{\textbf{e}}}}) \cdot ({\boldsymbol{\textbf{v}}}^{-{\boldsymbol{\textbf{e}}}}) = 1\), our modified target monomial. This suggests the possibility of generalising the translated lattice-based scheme above to support openings to higher-degree polynomials. Indeed, this technique allows us to generalise the scheme to support bounded-coefficient polynomials of degrees up to a constant, whose weak binding property is now based on another member of the \(k\text {-}R\text {-}\textsf{ISIS}\) assumption family.
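
As a small illustration (ours) of this observation, take \(w= 2\) and \(d = 2\): with \(c = x_0 v_0 + x_1 v_1\) and \({\boldsymbol{\textbf{e}}} = (1,1)\),

$$ {\boldsymbol{\textbf{v}}}^{-{\boldsymbol{\textbf{e}}}} \cdot c^2 = v_0^{-1} v_1^{-1} \cdot c^2 = 2\, x_0 x_1 + x_0^2 \cdot v_0 v_1^{-1} + x_1^2 \cdot v_0^{-1} v_1, $$

so the degree-2 monomial \(x_0 x_1\) appears (up to the multinomial factor) as the coefficient of the modified target monomial 1, while every other term carries a non-trivial Laurent monomial in \({\boldsymbol{\textbf{v}}}\), exactly as in the linear case.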

Achieving Compactness and Extractability. The VC scheme obtained above achieves succinctness, i.e. commitments and opening proofs are of size sublinear in \(w\) (not t), and weak binding, which fall short of the compactness and extractability required to construct a SNARK. Indeed, a black-box construction of SNARK using this VC is unlikely since, so far, we are only relying on falsifiable assumptions. To resolve this problem, we propose a knowledge version of the \(k\text {-}R\text {-}\textsf{ISIS}\) assumptions. For concreteness, we will use the following member of the knowledge \(k\text {-}R\text {-}\textsf{ISIS}\) assumption family:

Let \({\boldsymbol{\textbf{a}}}'\) and \({\boldsymbol{\textbf{v}}}\) be random vectors over \(\mathcal {R}_q\) and let \(t \in \mathcal {R}_q\) be a random element such that \(|t \cdot \mathcal {R}_q|\) is super-polynomial in \(\lambda \) and \(|t \cdot \mathcal {R}_q|/|\mathcal {R}_q|\) is negligible in \(\lambda \). If there exists an efficient algorithm \(\mathcal {A}\) which, given short vectors \({\boldsymbol{\textbf{u}}}'_i\) satisfying \(\left\langle {{\boldsymbol{\textbf{a}}}', {\boldsymbol{\textbf{u}}}'_i} \right\rangle = v_i \cdot t \bmod q\) for all \(i \in \mathbb {Z}_w\), produces \((c, {\boldsymbol{\textbf{u}}}')\) such that \({\boldsymbol{\textbf{u}}}'\) is a short vector satisfying \(\left\langle {{\boldsymbol{\textbf{a}}}', {\boldsymbol{\textbf{u}}}'} \right\rangle = c \cdot t \bmod q\), then there exists an efficient extractor \(\mathcal {E}_\mathcal {A}\) which extracts a short vector \({\boldsymbol{\textbf{x}}} \in \mathcal {R}^w\) such that \(\left\langle {{\boldsymbol{\textbf{v}}}, {\boldsymbol{\textbf{x}}}} \right\rangle = c \bmod q\).

Equipped with this \(k\text {-}R\text {-}\textsf{ISIS}\) of knowledge assumption, we can upgrade our VC construction to achieve extractability as follows. First, we let the public parameters additionally include \(({\boldsymbol{\textbf{a}}}', ({\boldsymbol{\textbf{u}}}'_i)_{i \in \mathbb {Z}_w}, t)\). Here \(t\) generates an ideal that is small enough for random elements in \(\mathcal {R}_{q}\) not to be contained within it, but big enough to provide sufficient entropy. Next, we let the committer also include \({\boldsymbol{\textbf{u}}}' = \sum _{i \in \mathbb {Z}_w} x_i \cdot {\boldsymbol{\textbf{u}}}'_i\) in an opening proof. Finally, we let the verifier additionally check that \({\boldsymbol{\textbf{u}}}'\) is short and \(\left\langle {{\boldsymbol{\textbf{a}}}', {\boldsymbol{\textbf{u}}}'} \right\rangle = c \cdot t \bmod q\).

To see why the modified scheme is extractable, suppose an adversary is able to produce a commitment c and a valid opening proof for (f, y). By the \(k\text {-}R\text {-}\textsf{ISIS}\) of knowledge assumption, we can extract a short vector \({\boldsymbol{\textbf{x}}} \in \mathcal {R}^w\) such that \(\left\langle {{\boldsymbol{\textbf{v}}}, {\boldsymbol{\textbf{x}}}} \right\rangle = c \bmod q\). Now, if \(f({\boldsymbol{\textbf{x}}}) = y' \ne y\), we can use the extracted \({\boldsymbol{\textbf{x}}}\) to compute a valid opening proof for \((f,y')\). However, being able to produce valid opening proofs for both (f, y) and \((f,y')\) with \(y \ne y'\) violates the weak binding property. We therefore conclude that \(f({\boldsymbol{\textbf{x}}}) = y\).

It remains to show how we can achieve compactness. Since our lattice-based VC schemes preserve the property of the original pairing-based schemes that the verification algorithm is linearly-homomorphic in the opening proofs, a natural strategy towards compactness is to aggregate multiple opening proofs into one using a random linear combination, with coefficients generated using a random oracle. The binding property of an aggregated opening proof can be proven using a classic rewinding argument which involves inverting a Vandermonde matrix defined by the randomness used for aggregation. This strategy works particularly well in the prime-order group setting since scalars are field elements and Vandermonde matrices defined by distinct field elements are always invertible. In the lattice setting, however, the coefficients used for aggregation have to be chosen from a set where the difference between any pair of elements is (almost) invertible (over \(\mathcal {R}\)) for an analogous argument to go through. This is a severe limitation since sets satisfying this property cannot be too large [4].

To achieve compactness in the lattice setting, we are forced to use a different strategy. Specifically, the coefficients \({\boldsymbol{\textbf{h}}} = (h_i)_{i \in \mathbb {Z}_t} \in \mathcal {R}^t\) that we use to aggregate opening proofs are given by an instance of the \(R\text {-}\textsf{SIS}\) problem over \(\mathcal {R}_p\) (taking smallest \(\mathcal {R}\)-representatives of \(\mathcal {R}_p\) elements) sampled as part of the public parameters, where p is chosen such that the \(R\text {-}\textsf{SIS}\) assumption is believed to hold over \(\mathcal {R}_p\) while p is small relative to q.

To see why extractability still holds, suppose an adversary is able to produce a commitment c and a valid opening proof for (f, y) where \(f = \sum _{i \in \mathbb {Z}_t} h_i \cdot f_i\) and \(y = \sum _{i \in \mathbb {Z}_t} h_i \cdot y_i\). By our previous argument, we can extract \({\boldsymbol{\textbf{x}}}\) satisfying \(f({\boldsymbol{\textbf{x}}}) = y\). Suppose it is not the case that \(f_i({\boldsymbol{\textbf{x}}}) = y_i\) for all \(i \in \mathbb {Z}_t\); then \((f_i({\boldsymbol{\textbf{x}}}) - y_i)_{i \in \mathbb {Z}_t}\) is a short vector satisfying \( \sum _{i \in \mathbb {Z}_t} h_i \cdot (f_i({\boldsymbol{\textbf{x}}}) - y_i) = 0 \) over \(\mathcal {R}\), which implies \( \sum _{i \in \mathbb {Z}_t} h_i \cdot (f_i({\boldsymbol{\textbf{x}}}) - y_i) = 0 \bmod p, \) breaking the \(R\text {-}\textsf{SIS}\) assumption over \(\mathcal {R}_p\).
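
The aggregation step itself is a plain linear combination of constraints, as in the following sketch (ours; linear constraints over the integers for simplicity, with the coefficients \(h_i\) standing in for the \(R\text {-}\textsf{SIS}\) instance over \(\mathcal {R}_p\)).

```python
# Toy illustration (ours): aggregating many constraints (f_i, y_i) into a single pair
# (f, y) via public coefficients h_i.  If every f_i(x) = y_i holds, the aggregated
# constraint holds as well; the converse is where the R-SIS assumption enters.
import random

w, t, p = 6, 5, 257
x = [random.randrange(-2, 3) for _ in range(w)]                       # short committed vector
fs = [[random.randrange(-2, 3) for _ in range(w)] for _ in range(t)]  # t linear constraints f_i
ys = [sum(fi * xi for fi, xi in zip(f_i, x)) for f_i in fs]           # their images y_i = f_i(x)
hs = [random.randrange(p) for _ in range(t)]                          # public aggregation coefficients

f_agg = [sum(h * f_i[k] for h, f_i in zip(hs, fs)) for k in range(w)]
y_agg = sum(h * y_i for h, y_i in zip(hs, ys))

assert sum(fk * xk for fk, xk in zip(f_agg, x)) == y_agg              # aggregated constraint holds
```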

Discussion and Generalisations. We discuss the resulting VC scheme obtained through the aforementioned series of transformations. Our VC scheme supports openings to \(w\)-variate \(t\)-output constant-degree polynomial maps with bounded coefficients. The scheme achieves compactness and extractability, where the latter is based on the standard \(R\text {-}\textsf{SIS}\) assumption over \(\mathcal {R}_p\) and our two new assumptions: the \(k\text {-}R\text {-}\textsf{ISIS}\) and the \(k\text {-}R\text {-}\textsf{ISIS}\) of knowledge assumptions over \(\mathcal {R}_q\), where p is small relative to q. The construction uses only algebraic operations over \(\mathcal {R}\) and \(\mathcal {R}_q\). Furthermore, a major part of the verification equation can be precomputed, so that the online verification time is sublinear in \(w\) and \(t\).

Our construction and the \(k\text {-}R\text {-}\textsf{ISIS}\) (of knowledge) assumption families admit natural generalisations to the module setting, where the vector \({\boldsymbol{\textbf{a}}}\) is replaced by a matrix \(\textbf{A} \) and other components are modified accordingly. Expectedly, we show that the module versions of the \(k\text {-}R\text {-}\textsf{ISIS}\) assumptions are at least as hard as the ring versions for certain parameter choices.

In many applications (e.g. aggregating signatures), often only a main part (e.g. a set of signature verification keys) of the function-image tuple (f, y) is known in advance, while the remaining small part (e.g. a message signed by all parties) is known only when it comes time to perform verification. It is desirable to preprocess the main part of (f, y) offline, so that the online verification cost depends only on the size of the small part. In our formal construction, we capture this flexibility by considering y itself to be a polynomial map, and allowing f and y to take an (additional, for f) public input \({\boldsymbol{\textbf{z}}}\). This allows the maps (f, y) to be preprocessed, such that the online cost depends mostly on \({\boldsymbol{\textbf{z}}}\).

1.4 Application

We highlight an application of interest of our VC, and in particular of the resulting SNARK, in aggregating GPV signatures [34]. As a bonus result, we also show how to build adaptor signatures [7] based on GPV signatures while preserving aggregatability. For more comprehensive details we refer the reader to the full version of the paper.

Aggregate GPV Signatures. GPV signatures [34] are a lattice-based signature scheme paradigm of which an instantiation is a finalist in the NIST Post-Quantum Process (Falcon [65]). On a high level, a GPV signature on a message m is a short vector \({\boldsymbol{\textbf{u}}}\) such that \(\textbf{A} \cdot {\boldsymbol{\textbf{u}}} \equiv {\boldsymbol{\textbf{v}}} \bmod q\), where \({\boldsymbol{\textbf{A}}}\) is the public key and \({\boldsymbol{\textbf{v}}} = H(m)\), with the hash function H modelled as a random oracle in the security analysis. Verification simply checks the linear relation \(\textbf{A} \cdot {\boldsymbol{\textbf{u}}} \equiv {\boldsymbol{\textbf{v}}} \bmod q\) and that \({\boldsymbol{\textbf{u}}}\) is short.
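
The following toy sketch (ours, over \(\mathbb {Z}_q\) with made-up dimensions) spells out this verification relation; the signature is fabricated in reverse rather than produced with a trapdoor, which suffices to illustrate the relation that the SNARK later has to prove.

```python
# Toy illustration (ours): the GPV verification relation  A * u = v (mod q),  ||u|| small.
# A real signer derives u from H(m) using a trapdoor for A; here we go the other way
# (pick a short u, then define v := A * u mod q) purely to exercise the verifier.
import random

q, rows, cols, beta = 12289, 4, 8, 3
A = [[random.randrange(q) for _ in range(cols)] for _ in range(rows)]
u = [random.randrange(-beta, beta + 1) for _ in range(cols)]      # short "signature"
v = [sum(a * x for a, x in zip(row, u)) % q for row in A]         # stands in for H(m)

def gpv_verify(A, u, v, beta):
    ok_norm = max(abs(x) for x in u) <= beta
    ok_lin = all(sum(a * x for a, x in zip(row, u)) % q == vi for row, vi in zip(A, v))
    return ok_norm and ok_lin

assert gpv_verify(A, u, v, beta)
```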

Our SNARK can be used to prove knowledge of GPV signatures natively, since the signature verification involves algebraic operations only. For instance, to aggregate n signatures \(({\boldsymbol{\textbf{u}}}_i)_{i \in \mathbb {Z}_n}\) on the same message m (a scenario that arises in a PoS consensus protocol [26]), the aggregator can compute a SNARK proof of knowledge of short \(({\boldsymbol{\textbf{u}}}_i)_{i \in \mathbb {Z}_n}\) satisfying \(\textbf{A} _i \cdot {\boldsymbol{\textbf{u}}}_{i} = {\boldsymbol{\textbf{v}}} \bmod q \), where \(\textbf{A} _i\) is the public key of the i-th signer. The aggregated signature, i.e. the SNARK proof, can be verified in time sublinear in the number of signers and signatures n by first preprocessing the part of the verification equation depending on \((\textbf{A} _i)_{i \in \mathbb {Z}_n}\). In fact, this preprocessing step is one-time for the given set of signers, and the online verification after knowing m is only logarithmic in n. If the signers sign different messages, a similar SNARK but now over the different messages results in a compact proof, but with verification time linear in n (similar to the case of BLS signatures [16]). Such aggregation can result in compact blocks in a blockchain as shown for the case of BLS signatures [16], but now with post-quantum security.

Aggregate Adaptor Signatures. Adaptor signatures [7] let a user generate an encryption \(\hat{\sigma }\) of a signature \(\sigma \) on a message m with respect to an instance Y of a hard language \(\mathcal {L} \). Here \(\hat{\sigma }\) is also referred to as a pre-signature. Given the public key, it is efficient to verify if a given pre-signature \(\hat{\sigma }\) is indeed valid with respect to the instance and the message. One can adapt the pre-signature \(\hat{\sigma }\) into a valid signature \(\sigma \) given the witness y for the instance Y, and given \(\hat{\sigma }\) and \(\sigma \) one can efficiently extract the witness y. The primitive has found itself useful in enhancing efficiency and privacy of conditional payments in cryptocurrencies [7], and aggregation of signatures adds clear benefits to this primitive. In the following we discuss how GPV signatures can be turned into adaptor signatures, which consequently implies that they can be aggregated via our newly constructed SNARK.

We consider the lattice trapdoor from [61] for our GPV signatures, and view the GPV signatures as follows. The public parameters are given by a uniformly random matrix \({\boldsymbol{\textbf{A}}}\), the signing key is \(\textsf{sk}:= \textbf{X} \), where \(\textbf{X} \) is a short norm matrix such that the public key, \(\textbf{Y} := \textbf{A} \cdot \textbf{X} \), is distributed statistically close to random. The signature is simply \(({\boldsymbol{\textbf{z}}}, {\boldsymbol{\textbf{c}}})\) such that during verification we have \([\textbf{A} | \textbf{G} + \textbf{Y} ] \cdot [{\boldsymbol{\textbf{z}}} | {\boldsymbol{\textbf{c}}}]^{\texttt{T}} = H(m)\) and \(\Vert ({\boldsymbol{\textbf{c}}}, {\boldsymbol{\textbf{z}}})\Vert \) is small as stipulated by GPV signatures. Here \(\textbf{G} \) is the gadget matrix. We choose the hard language

$$ \mathcal {L}:= \{({\boldsymbol{\textbf{A}}}, {\boldsymbol{\textbf{v}}}') : \exists \, {\boldsymbol{\textbf{u}}}' \text { s.t. } {\boldsymbol{\textbf{A}}} \cdot {\boldsymbol{\textbf{u}}}' = {\boldsymbol{\textbf{v}}}' \wedge \Vert {\boldsymbol{\textbf{u}}}'\Vert \le \beta ^* \}, $$

where \({\boldsymbol{\textbf{A}}} \in \mathcal {R}_q^{\eta \times \ell } \), \( {\boldsymbol{\textbf{v}}}' \in \mathcal {R}_q^\eta \). A pre-signature \(\hat{\sigma }\) is simply \(({\boldsymbol{\textbf{c}}}, \hat{{\boldsymbol{\textbf{z}}}})\) with \({\boldsymbol{\mathbf {v'}}}\) as the hard instance, such that during pre-signature verification, it holds that \([\textbf{A} | \textbf{G} + \textbf{Y} ] \cdot [\hat{{\boldsymbol{\textbf{z}}}} | {\boldsymbol{\textbf{c}}}]^{\texttt{T}} = H(m) - {\boldsymbol{\textbf{v}}}'\) and \(\Vert ({\boldsymbol{\textbf{c}}}, \hat{{\boldsymbol{\textbf{z}}}})\Vert \) is small. It is easy to adapt \(\hat{\sigma }\) given the witness \({\boldsymbol{\textbf{u}}}'\) by setting \({\boldsymbol{\textbf{z}}} := \hat{{\boldsymbol{\textbf{z}}}} + {\boldsymbol{\textbf{u}}}'\) and \(\sigma := ({\boldsymbol{\textbf{c}}}, {\boldsymbol{\textbf{z}}})\). To extract a witness one can simply compute \({\boldsymbol{\textbf{u}}}' := {\boldsymbol{\textbf{z}}} - \hat{{\boldsymbol{\textbf{z}}}}\). We have that the extracted \({\boldsymbol{\textbf{u}}}'\) has a slightly higher norm than the one used to adapt the pre-signature. The security of our scheme only relies on the \(M\text {-}\textsf{SIS} \) problem and the RO model.
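
A small numerical sketch (ours, with toy dimensions and ignoring the Gaussian sampling details) of the adapt/extract mechanics: adapting works because \(\textbf{A} \cdot {\boldsymbol{\textbf{u}}}' = {\boldsymbol{\textbf{v}}}'\) cancels the offset \({\boldsymbol{\textbf{v}}}'\) in the pre-signature relation, and extraction is the corresponding subtraction.

```python
# Toy illustration (ours): adaptor-signature adapt/extract on top of the GPV-style
# relation  [A | G + Y] * [z | c]^T = H(m) - v'  (pre-signature)  vs.  = H(m)  (signature).
import random

q, rows, cols = 12289, 3, 6
rand_mat = lambda r, c: [[random.randrange(q) for _ in range(c)] for _ in range(r)]
mat_vec = lambda M, x: [sum(a * b for a, b in zip(row, x)) % q for row in M]
vec_add = lambda x, y: [(a + b) % q for a, b in zip(x, y)]

A, GY = rand_mat(rows, cols), rand_mat(rows, cols)       # A and G + Y
u_wit = [random.randrange(-2, 3) for _ in range(cols)]   # witness u' for the hard instance
v_inst = mat_vec(A, u_wit)                               # instance v' = A * u'
z_hat = [random.randrange(-2, 3) for _ in range(cols)]   # pre-signature components
c_sig = [random.randrange(-2, 3) for _ in range(cols)]
h = vec_add(vec_add(mat_vec(A, z_hat), mat_vec(GY, c_sig)), v_inst)   # plays the role of H(m)

# Pre-signature verification:  A*z_hat + (G+Y)*c = H(m) - v'
assert vec_add(mat_vec(A, z_hat), mat_vec(GY, c_sig)) == [(hi - vi) % q for hi, vi in zip(h, v_inst)]

z = [a + b for a, b in zip(z_hat, u_wit)]                # adapt: z = z_hat + u'
assert vec_add(mat_vec(A, z), mat_vec(GY, c_sig)) == h   # full signature verifies
assert [a - b for a, b in zip(z, z_hat)] == u_wit        # extract: u' = z - z_hat
```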

1.5 Related Work

Apart from applications to succinct arguments [49], VCs have found numerous applications, such as verifiable databases [24], verifiable decentralized storage [23], updatable zero-knowledge sets [55, 59], keyless Proofs of Retrievability (PoR) [28, 29], pseudonymous credentials [44], and cryptocurrencies with stateless transaction validation [25]. Several works have studied various extensions to VC, with updatable commitments and proofs [24], aggregatable opening proofs for different commitments [37], and incremental aggregatable proofs [23].

Libert, Ramanna, and Yung [52] showed that a VC for linear functions over \(\mathbb {Z}_q\) implies a polynomial commitment for polynomials over \(\mathbb {Z}_q\). The result was obtained by VC-committing to the coefficient vector of the polynomial and opening it to a linear function whose coefficients are evaluations of monomials at the evaluation point. Since our VC only allows committing to a short vector \({\boldsymbol{\textbf{x}}} \in \mathcal {R}^w\) and opening to a polynomial map f with short coefficients, we need to suitably tune the norm bound \(\alpha \) of f and \({\boldsymbol{\textbf{x}}}\) to obtain similar applications. Concretely, by setting \(\alpha \approx \delta ^{d+1} \cdot \gamma _\mathcal {R}^d\) where \(\gamma _\mathcal {R}\) is the ring expansion factor of \(\mathcal {R}\), we obtain a polynomial commitment for degree-d multivariate polynomials with coefficients bounded by \(\delta \) which supports evaluations at vectors of norm also bounded by \(\delta \). Note that only constant-degree polynomials are supported by our polynomial commitment since \(\alpha \) depends exponentially on d.

In the same work [52], Libert, Ramanna, and Yung also showed that the polynomial commitment constructed from a VC for linear functions over \(\mathbb {Z}_q\) implies an accumulator for \(\mathbb {Z}_q\) elements; the construction requires committing to the polynomial \(p(X) = \prod _{a \in A} (X - a)\) encoding the set A of elements to be accumulated. The polynomial commitment obtained via our VC unfortunately does not support committing to p(X) since its degree is as large as |A|.

In a recent work [63], Peikert, Pepin, and Sharp proposed a VC for positional openings based on the standard SIS assumption. Relative to our construction outlined in Sect. 1.3, their construction can be interpreted as follows. Instead of handing out preimages \({\boldsymbol{\textbf{u}}}_{i,j}\) with \(\left\langle {{\boldsymbol{\textbf{a}}}, {\boldsymbol{\textbf{u}}}_{i,j}} \right\rangle = v_j/v_i \bmod q\), they sample multiple \({\boldsymbol{\textbf{a}}}_i\) for \(i \in \mathbb {Z}_w\) and let \({\boldsymbol{\textbf{u}}}_{i,j}\) satisfy \(\left\langle {{\boldsymbol{\textbf{a}}}_i, {\boldsymbol{\textbf{u}}}_{i,j}} \right\rangle = v_j \bmod q\). To verify an opening to position i, the vector \({\boldsymbol{\textbf{a}}}_i\) is used. The removal of the non-linear term \(v_j/v_i\) allows proving security from the SIS assumption. On the flip side, using a different vector \({\boldsymbol{\textbf{a}}}_i\) to verify openings to different positions i forbids the standard technique of aggregating openings using a random linear combination. Furthermore, there seems to be no natural way of generalising their construction to support functional openings without significantly changing the VC model, e.g. introducing an authority responsible for issuing functional opening keys [63]. Even if we consider the model with an authority, the resulting VC only satisfies weak binding (using the terminology of our work) making it unsuitable to be transformed into a SNARG: There is in fact an explicit attack when compiling their VC (with authority) into a SNARG.Footnote 5

Prior to our work, all lattice-based SNARKs were in the designated-verifier setting. These constructions [32, 43] are based on “linear-only” assumptions which are similar in spirit to the knowledge \(k\text {-}M\text {-}\textsf{ISIS}\) assumptions introduced in this work but with a key difference: While linear-only assumptions are with respect to specific encryption schemes, our assumptions are with respect to general rings. In terms of applications, linear-only encryption has always been used to construct designated-verifier primitives. In contrast, knowledge \(k\text {-}M\text {-}\textsf{ISIS}\) naturally leads to constructions of publicly verifiable primitives.

2 Preliminaries

Let \(\lambda \in \mathbb {N}\) denote the security parameter. Define \(\mathbb {N}_0 := \mathbb {N}\cup \{0\} \). Let \(\mathcal {R}\) be a ring. We write \(\mathcal {R}[{\boldsymbol{\textbf{X}}}]\) for the (multivariate) polynomial ring over \(\mathcal {R}\) and \(\mathcal {R}({\boldsymbol{\textbf{X}}})\) for the ring of (multivariate) rational functions over \(\mathcal {R}\) with indeterminates \({\boldsymbol{\textbf{X}}} = (X_i: i \in \mathbb {Z}_w)\). We write \(\left\langle {\mathcal {G}} \right\rangle \) for the ideal resp. module spanned by the elements of the set \(\mathcal {G} \subset \mathcal {R}^{\eta }\) for \(\eta \in \mathbb {N}\). When \(\mathcal {G} \) is a singleton set we may suppress the \( \{\cdot \} \) notation. We write \(|\left\langle {\mathcal {G}} \right\rangle |\) for the size of the ideal \(\left\langle {\mathcal {G}} \right\rangle \) as a set.

For \(m \in \mathbb {N}\), let \(\zeta _{m} \in \mathbb {C}\) be any fixed primitive m-th root of unity. Denote by \(\mathcal {K}= \mathbb {Q}(\zeta _m)\) the cyclotomic field of order \(m \ge 2\) and degree \(n= \varphi (m)\), and by \(\mathcal {R}= \mathbb {Z}[\zeta _m]\) its ring of integers, called a cyclotomic ring for short. We have \(\mathcal {R}\cong \mathbb {Z}[x]/\left\langle {\varPhi _{m}(x)} \right\rangle \), where \(\varPhi _{m}(x)\) is the m-th cyclotomic polynomial. If m is a power of 2, we call \(\mathcal {R}\) a power-of-2 cyclotomic ring. If m is a prime-power, we call \(\mathcal {R}\) a prime-power cyclotomic ring. Let \(q \in \mathbb {N}\) be a prime; we write \(\mathcal {R}_q := \mathcal {R}/q\mathcal {R}\) and \(\mathcal {R}_q^\times \) for all invertible elements in \(\mathcal {R}_q\). We have that \(\mathcal {R}_{q}\) splits into \(f\) fields of degree \(\varphi (m)/f\). We write \(\textrm{vec}(r) \in \mathbb {Z}^{n}\) for the coefficient vector of \(r\) (with respect to the powerful basis). For any \(r \in \mathcal {R}\) there exists a matrix \(\textrm{rot}(r) \in \mathbb {Z}^{n\times n}\) s.t. \(\forall s \in \mathcal {R}\) we have \(\textrm{vec}(r \cdot s) = \textrm{rot}(r) \cdot \textrm{vec}(s)\). For elements \(x \in \mathcal {R}\) we denote the infinity norm of its coefficient vector as \(\Vert x\Vert :=\Vert \textrm{vec}(x)\Vert \). If \({\boldsymbol{\textbf{x}}} \in \mathcal {R}^{\ell }\) we write \(\Vert {\boldsymbol{\textbf{x}}}\Vert \) for the infinity norm of \({\boldsymbol{\textbf{x}}}\). We write \(\Vert \cdot \Vert _{p}\) for the \(\ell _{p}\)-norm, e.g. \(\Vert \cdot \Vert _{2}\) for the Euclidean norm. We write \(\mathcal {M}_{\mathcal {G}}(\cdot )\) for a function that takes vectors indexed by \(\mathcal {G} \) and returns a matrix where each column corresponds to one such vector. We write \(\textbf{I} _{n}\) for the identity matrix of dimension \(n\) over whatever ring is clear from context.
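
To illustrate the \(\textrm{vec}(\cdot )\) and \(\textrm{rot}(\cdot )\) notation, the following sketch (ours) specialises to the power-of-2 cyclotomic ring \(\mathbb {Z}[x]/\left\langle {x^{n}+1} \right\rangle \), represents ring elements by their coefficient vectors, and checks the identity \(\textrm{vec}(r \cdot s) = \textrm{rot}(r) \cdot \textrm{vec}(s)\).

```python
# Toy illustration (ours): vec/rot in the power-of-2 cyclotomic ring Z[x]/(x^n + 1).
# rot(r) is the negacyclic matrix whose columns are vec(r), vec(x*r), ..., vec(x^{n-1}*r).
n = 8

def ring_mul(r, s):
    """Multiply two coefficient vectors modulo x^n + 1 (so x^n = -1)."""
    out = [0] * n
    for i, ri in enumerate(r):
        for j, sj in enumerate(s):
            k = i + j
            if k < n:
                out[k] += ri * sj
            else:
                out[k - n] -= ri * sj
    return out

def rot(r):
    """Matrix of multiplication by r: column j is vec(x^j * r)."""
    cols = []
    cur = list(r)
    for _ in range(n):
        cols.append(cur)
        cur = [-cur[-1]] + cur[:-1]          # multiply by x, wrapping with a sign flip
    return [[cols[j][i] for j in range(n)] for i in range(n)]   # transpose columns into rows

r = [3, -1, 0, 2, 0, 0, 1, -2]
s = [1, 4, -2, 0, 5, 0, 0, 1]
Rs = [sum(row[j] * s[j] for j in range(n)) for row in rot(r)]
assert Rs == ring_mul(r, s)                  # vec(r * s) == rot(r) * vec(s)
```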

For \(w\in \mathbb {N}\), \({\boldsymbol{\textbf{x}}} = (x_i: i \in \mathbb {Z}_{w}) \in \mathcal {R}^w\), and \({\boldsymbol{\textbf{e}}} = (e_i: i \in \mathbb {Z}_{w}) \in \mathbb {Z}^w\), we write \({\boldsymbol{\textbf{x}}}^{{\boldsymbol{\textbf{e}}}} :=\prod _{i \in \mathbb {Z}_{w}} x_i^{e_i}\) whenever it is defined. For \({\boldsymbol{\textbf{v}}} = (v_i: i \in \mathbb {Z}_{w}) \in {(\mathcal {R}_{q}^\times )}^w\), we write \(\bar{{\boldsymbol{\textbf{v}}}} :=(v_i^{-1}: i \in \mathbb {Z}_{w})\) for the entry-wise inverse of \({\boldsymbol{\textbf{v}}}\). A Laurent monomial \(g({\boldsymbol{\textbf{X}}}) \in \mathcal {R}({\boldsymbol{\textbf{X}}})\) is an expression \(g({\boldsymbol{\textbf{X}}}) = {\boldsymbol{\textbf{X}}}^{\boldsymbol{\textbf{e}}} :=\prod _{i \in \mathbb {Z}_w} X_i^{e_i}\) with exponent vector \({\boldsymbol{\textbf{e}}} = (e_i: i \in \mathbb {Z}_w) \in \mathbb {Z}^w\).

We may suppress arbitrary subscripts and superscripts from problem and advantage notations when those are clear from context. We write \(x \leftarrow \mathcal {D}\) for sampling from the distribution \(\mathcal {D}\) and \(x \leftarrow \mathcal {S}\) for sampling an element from the finite space \(\mathcal {S}\) uniformly at random. We write \(U(\mathcal {S})\) for the uniform distribution over \(\mathcal {S}\) and \( \{{\boldsymbol{\textbf{u}}}_\mathcal {G} \} := \{{\boldsymbol{\textbf{u}}}_{g}\} _{g \in \mathcal {G}}\).

Definition 1

(Ring Expansion Factor). Let \(\mathcal {R}\) be a ring. The expansion factor of \(\mathcal {R}\), denoted by \(\gamma _{\mathcal {R}}\), is \(\gamma _{\mathcal {R}} :=\max _{a,b \in \mathcal {R}} \frac{\Vert a \cdot b\Vert }{\Vert a\Vert \cdot \Vert b\Vert }\).

Proposition 1

([4]). If \(\mathcal {R}= \mathbb {Z}[\zeta _m]\) is a prime-power cyclotomic ring, then \(\gamma _{\mathcal {R}} \le 2\,n\). If \(\mathcal {R}= \mathbb {Z}[\zeta _m]\) is a power-of-2 cyclotomic ring, then \(\gamma _{\mathcal {R}} \le n\).

Proposition 2

Let \(q = \omega ({(w\cdot f)}^{f/\varphi (m)})\) be a rational prime such that \(\mathcal {R}_q\) splits into f fields each of size \(q^{\varphi (m)/f}\). For a uniformly random \({\boldsymbol{\textbf{v}}} \in \mathcal {R}_q^w\), we have \({\boldsymbol{\textbf{v}}} \in (\mathcal {R}_{q}^\times )^w\) with non-negligible probability.

Proof

The probability that \({\boldsymbol{\textbf{v}}} \in (\mathcal {R}_{q}^\times )^w\) is \({(1-1/q^{\varphi (m)/f})}^{w\cdot f} \ge 1 - (w\cdot f)/q^{\varphi (m)/f}\) which is non-negligible.    \(\square \)

For the rest of this work, we implicitly assume q is large enough so that a uniformly random \({\boldsymbol{\textbf{v}}} \in \mathcal {R}_q^w\) satisfies \({\boldsymbol{\textbf{v}}} \in (\mathcal {R}_{q}^\times )^w\) with non-negligible probability.

2.1 Lattices

We write \(\varLambda (\textbf{B})\) for the Euclidean lattice generated by the columns of \(\textbf{B} = [{\boldsymbol{\textbf{b}}}_0 | \ldots | {\boldsymbol{\textbf{b}}}_{d-1}] \in \mathbb {Z}^{n \times d}\), i.e. \( \{\sum _{i \in \mathbb {Z}_d} z_i \cdot {\boldsymbol{\textbf{b}}}_i \mid z_i \in \mathbb {Z}\} \). When \(\textbf{B} \) has full rank we call it a basis and when \(n=d\) we say that \(\varLambda (\textbf{B})\) has full rank. The determinant of a full rank lattice is the absolute value of the determinant of any of its bases. Minkowski’s theorem implies that there is a vector \({\boldsymbol{\textbf{x}}} \in \varLambda \subset \mathbb {R}^{d}\) of (infinity) norm \(\Vert {\boldsymbol{\textbf{x}}}\Vert \le {\det (\varLambda )}^{1/d}\) when \(\varLambda \) has full rank. The Gaussian heuristic predicts that a random full-rank lattice \(\varLambda \) contains a shortest vector of (Euclidean) norm \(\approx \sqrt{\frac{d}{2\pi \,e}} \cdot {\det (\varLambda )}^{1/d}\).

For any \({\boldsymbol{\textbf{c}}} \in \mathbb {R}^{n}\) and any real \(\sigma >0\), the (spherical) Gaussian function with standard deviation parameter \(\sigma \) and centre \({\boldsymbol{\textbf{c}}}\) is:

$$ \forall {\boldsymbol{\textbf{x}}} \in \mathbb {R}^{n}, \rho _{\sigma , {\boldsymbol{\textbf{c}}}}({\boldsymbol{\textbf{x}}})=\exp \left( -\frac{\pi \cdot \Vert {\boldsymbol{\textbf{x}}}-{\boldsymbol{\textbf{c}}}\Vert _2^{2}}{\sigma ^{2}}\right) . $$

The Gaussian distribution is \(\mathcal {D}_{\sigma , {\boldsymbol{\textbf{c}}}}({\boldsymbol{\textbf{x}}})=\rho _{\sigma , {\boldsymbol{\textbf{c}}}}({\boldsymbol{\textbf{x}}}) / \sigma ^{n}\). The (spherical) discrete Gaussian distribution over a lattice \(\varLambda \in \mathbb {R}^{n}\), with standard deviation parameter \(\sigma >0\) and centre \({\boldsymbol{\textbf{c}}}\) is:

$$ \forall {\boldsymbol{\textbf{x}}} \in \varLambda , \mathcal {D}_{\varLambda , \sigma , {\boldsymbol{\textbf{c}}}}({\boldsymbol{\textbf{x}}})=\frac{\rho _{\sigma , {\boldsymbol{\textbf{c}}}}({\boldsymbol{\textbf{x}}})}{\rho _{\sigma , {\boldsymbol{\textbf{c}}}}(\varLambda )}, $$

where \(\rho _{\sigma , {\boldsymbol{\textbf{c}}}}(\varLambda ) :=\sum _{{\boldsymbol{\textbf{x}}} \in \varLambda } \rho _{\sigma , {\boldsymbol{\textbf{c}}}}({\boldsymbol{\textbf{x}}})\). When \({\boldsymbol{\textbf{c}}} = {\boldsymbol{\textbf{0}}}\) we omit the subscript \({\boldsymbol{\textbf{c}}}\). We may write \(\mathcal {D}_{\mathcal {R},\sigma }\) where we interpret \(\mathcal {R}\) to be the lattice spanned by \(\mathcal {R}\).

The dual of a lattice \(\varLambda \) is defined by \(\varLambda ^{*}=\left\{ {\boldsymbol{\textbf{y}}} \in \mathbb {R}^{n}: {\boldsymbol{\textbf{y}}}^{\texttt{T}} \cdot \varLambda \subseteq \mathbb {Z}\right\} \). The smoothing parameter of an \(n\)-dimensional lattice \(\varLambda \) with respect to \(\epsilon > 0\), denoted \(\eta _{\epsilon }(\varLambda )\), is the smallest \(\sigma > 0\), such that \(\rho _{1/\sigma }(\varLambda ^* \setminus \{0\} ) \le \epsilon \).

Lattice reduction with parameter \(\kappa \) returns a vector of Euclidean norm \(\approx \delta ^{d-1} \cdot {\det (\varLambda )}^{1/d}\) where \(\delta \) is the root Hermite factor, a function of \(\kappa \).Footnote 6 A root Hermite factor \(\delta \approx {\left( \frac{\kappa }{2\,\pi \,e}\right) }^{1/(2\kappa )}\) can be achieved in time \(2^{0.292\, \kappa + o(\kappa )}\) classically using the BKZ algorithm [67] with block size \(\kappa \) and sieving as the SVP oracle [8] (quantum algorithms do not promise a sufficiently substantial speed-up [3, 48]). Concretely, for \(\lambda = 128\) we require \(\kappa \ge 484\) and thus \(\delta \le 1.0034\).

2.2 Sampling Algorithms

The following relies on analogues of the Leftover Hash Lemma over rings attesting that, given uniformly random \({\boldsymbol{\textbf{a}}}_0, \ldots , {\boldsymbol{\textbf{a}}}_{\ell -1} \in \mathcal {R}_q^{\eta }\) and \(r_0, \ldots , r_{\ell -1} \leftarrow \mathcal {D}\), where \(\mathcal {D}\) is a small uniform [60, 69] or discrete Gaussian distribution [57, 68], we have that \(\left( {\boldsymbol{\textbf{a}}}_0, \ldots , {\boldsymbol{\textbf{a}}}_{\ell -1}, \sum _{0 \le i < \ell } {\boldsymbol{\textbf{a}}}_i \cdot r_i\right) \) is close to uniform. In what follows, we will write \(\textsf{lhl} (\mathcal {R}, \eta , q, \mathcal {D})\) for an algorithm that outputs a minimal \(\ell \in \mathbb {N}\) ensuring that the resulting distribution is within \(\textsf{negl}(\lambda )\) statistical distance of uniform. We may also write \(\textsf{lhl} (\mathcal {R}, \eta , q, \beta )\) for some \(\mathcal {D}\) outputting elements bounded by \(\beta \) (with overwhelming probability). In many cases the reader may think \(\ell \in O(\eta \log _\beta (q))\). Let \((\textsf{TrapGen}, \textsf{SampD}, \textsf{SampPre})\) be PPT algorithms with the following syntax and properties [31, 34, 61]:

  • \((\textbf{A}, \textsf{td}) \leftarrow \textsf{TrapGen}(1^\eta , 1^\ell , q ,\mathcal {R},\beta )\) takes dimensions \(\eta , \ell \in \mathbb {N}\), a modulus \(q \in \mathbb {N}\), a ring \(\mathcal {R}\), and a norm bound \(\beta \in \mathbb {R}\). It generates a matrix \(\textbf{A} \in \mathcal {R}^{\eta \times \ell }_q\) and a trapdoor \(\textsf{td}\). For any \(\eta \in \textsf{poly}(\lambda )\) and \(\ell \ge \textsf{lhl} (\mathcal {R}, \eta , q, \beta )\), the distribution of \(\textbf{A} \) is within \(\textsf{negl}(\lambda )\) statistical distance of \(U(\mathcal {R}_q^{\eta \times \ell })\).

  • \({\boldsymbol{\textbf{u}}} \leftarrow \textsf{SampD}(1^{\eta }, 1^\ell , \mathcal {R}, \beta ')\) with \(\ell \ge \textsf{lhl} (\mathcal {R}, \eta , q, \beta )\) outputs an element \({\boldsymbol{\textbf{u}}} \in \mathcal {R}^{\ell }\) with norm bound \(\beta ' \ge \beta \). We have that \({\boldsymbol{\textbf{v}}} :={\textbf{A}}\cdot {{\boldsymbol{\textbf{u}}}} \bmod q\) is within \(\textsf{negl}(\lambda )\) statistical distance of \(U(\mathcal {R}_q^\eta )\).

  • \({\boldsymbol{\textbf{u}}} \leftarrow \textsf{SampPre}(\textsf{td}, {\boldsymbol{\textbf{v}}}, \beta ')\) with \(\ell \ge \textsf{lhl} (\mathcal {R}, \eta , q, \beta )\) takes a trapdoor \(\textsf{td}\), a vector \({\boldsymbol{\textbf{v}}} \in \mathcal {R}_q^{\eta }\), and a norm bound \(\beta ' \ge \beta \). It samples \({\boldsymbol{\textbf{u}}} \in \mathcal {R}^\ell \) satisfying \({\textbf{A}}\cdot {{\boldsymbol{\textbf{u}}}} \equiv {\boldsymbol{\textbf{v}}} \bmod q\) and \(\Vert {\boldsymbol{\textbf{u}}}\Vert \le \beta '\). Furthermore, \({\boldsymbol{\textbf{u}}}\) is within \(\textsf{negl}(\lambda )\) statistical distance to \({\boldsymbol{\textbf{u}}} \leftarrow \textsf{SampD}(1^{\eta }, 1^\ell , \mathcal {R}, \beta ')\) conditioned on \({\boldsymbol{\textbf{v}}} \equiv {{\boldsymbol{\textbf{A}}}}\cdot {{\boldsymbol{\textbf{u}}}} \bmod q\). The syntax can be extended in the natural way for \(\textsf{SampPre}\) to take a matrix \(\textbf{V} \) as input, in which case \(\textsf{SampPre}\) is run on each column of \(\textbf{V} \) and the output vectors are concatenated column-wise to form a matrix.

For all algorithms we may replace \(\beta \) by \(\mathcal {D}\) where it is understood that \(\mathcal {D}\) outputs samples bounded by \(\beta \) (with overwhelming probability).
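The following is a minimal Python sketch of the \(\textsf{lhl} \) interface under the heuristic reading \(\ell \approx 2\,\eta \log _\beta (q)\) of the rule of thumb above; the true minimal \(\ell \) depends on the ring \(\mathcal {R}\), the distribution \(\mathcal {D}\) and the target statistical distance, so this illustrates the syntax rather than giving a drop-in instantiation.

from math import ceil, log

def lhl(R, eta, q, beta):
    # Heuristic sketch: l ~ 2 * eta * log_beta(q).  The ring R is accepted only
    # to mirror the interface lhl(R, eta, q, beta); it is not used here.
    return ceil(2 * eta * log(q) / log(beta))

print(lhl(None, eta=1, q=2 ** 64, beta=2))       # 128
print(lhl(None, eta=1, q=2 ** 64, beta=2 ** 8))  # 16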

2.3 Hard Problems

The Short Integer Solution problem was introduced in the seminal work of Ajtai [2]. It asks to find a short element (of Euclidean norm at most \(\beta _{2}\)) in the kernel of a random matrix mod \(q\). An inhomogeneous version, asking to find a short solution to a linear system mod \(q\), was formalised later [60].

For both problems, it was shown [34] that solving the problem for \(q \ge \beta _{2} \cdot \omega (\sqrt{n\cdot \log n})\) implies solving certain presumed hard lattice problems (finding a short basis) to within approximation factor \(\beta _{2} \cdot \tilde{\mathcal {O}}(\sqrt{n})\). Thus, since \(\beta _{2} \ge \beta _{\infty }\), an appropriate choice of parameters is \(n = \textsf{poly}(\lambda )\), \(q \ge \beta _{\infty } \cdot n \cdot \log n\) and \(\ell \ge 2\,n \log _{\beta _{\infty }} q\). An algorithm solving \(\textsf{ISIS} \) can be used to solve \(\textsf{SIS} \) (by making one of the columns of \(\textbf{A} \) the target), and solving \(\textsf{ISIS} \) twice on the same instance allows one to solve \(\textsf{SIS} \) by considering the difference of the two solutions. Ring variants were introduced in [56, 60, 64]; module variants in [51].
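The relation between \(\textsf{ISIS} \) and \(\textsf{SIS} \) sketched above can be illustrated with a toy brute-force example over \(\mathbb {Z}_q\) (not the ring \(\mathcal {R}\)); the parameters below are purely illustrative and chosen so that, by the pigeonhole principle, some image has two distinct short preimages whose difference is a \(\textsf{SIS} \) solution.

import itertools, random
from collections import defaultdict

random.seed(0)
n, l, q, beta = 2, 8, 13, 1   # toy parameters, far from cryptographic size
A = [[random.randrange(q) for _ in range(l)] for _ in range(n)]

def image(u):
    # A * u mod q
    return tuple(sum(A[i][j] * u[j] for j in range(l)) % q for i in range(n))

# Group all short vectors (infinity norm <= beta) by their image t = A*u mod q.
preimages = defaultdict(list)
for u in itertools.product(range(-beta, beta + 1), repeat=l):
    preimages[image(u)].append(u)

# Since (2*beta + 1)^l > q^n, some target t has two distinct short ISIS solutions;
# their difference is a non-zero SIS solution of infinity norm at most 2*beta.
t, (u0, u1, *_) = next((t, us) for t, us in preimages.items() if len(us) >= 2)
diff = [a - b for a, b in zip(u0, u1)]
assert any(diff) and image(diff) == (0,) * n
print("ISIS target:", t, "SIS solution:", diff)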

Definition 2

Let \(\mathcal {R},\eta ,q,\ell ,\beta \) depend on \(\lambda \). The Module-SIS (or M-SIS) problem, denoted \(M\text {-}\textsf{SIS}_{\mathcal {R}_q,\eta ,\ell ,\beta ^*} \), is: Given a uniformly random \(\textbf{A} \in \mathcal {R}_q^{\eta \times \ell }\) and \({\boldsymbol{\textbf{t}}} \equiv {\boldsymbol{\textbf{0}}} \bmod q\), find some \({\boldsymbol{\textbf{u}}} \ne {\boldsymbol{\textbf{0}}} \in \mathcal {R}^\ell \) such that \(\Vert {\boldsymbol{\textbf{u}}}\Vert _\infty \le \beta ^{*}\) and \({\textbf{A}}\cdot {{\boldsymbol{\textbf{u}}}} \equiv {\boldsymbol{\textbf{t}}} \bmod q\). We consider the advantage of any algorithm \(\mathcal {A}\) in solving \(M\text {-}\textsf{SIS}_{\mathcal {R}_q,\eta , \ell ,\beta ^*} \), and assume that this advantage is \(\textsf{negl}(\lambda )\) for appropriately chosen \(\mathcal {R}_q,\eta ,\ell ,\beta ^{*}\) and PPT \(\mathcal {A}\). When \({\boldsymbol{\textbf{t}}} \ne 0\) we speak of the Module-ISIS or M-ISIS problem, denoted \(M\text {-}\textsf{ISIS}_{\mathcal {R}_q,\eta ,\ell ,\beta ^*} \). When \(\eta =1\) we speak of Ring-(I)SIS or R-(I)SIS, denoted \(R\text {-}\textsf{SIS}_{\mathcal {R}_q,\ell ,\beta ^*} \) or \(R\text {-}\textsf{ISIS}_{\mathcal {R}_q,\ell ,\beta ^*} \).

In [51] it was shown that solving Module-SIS is as hard as finding a short basis in modules. In [56, 64] it was shown that solving Ring-SIS is as hard as finding a short vector in any ideal in \(\mathcal {R}\). A similar result was established for Ring-ISIS [60]. From a cryptanalytic perspective, no known algorithm solves Ring/Module-(I)SIS significantly faster than those solving (I)SIS. Our assumption is a generalisation of the \(k\text {-}\textsf{SIS} \) assumption, adapted to more general rings.

Definition 3

For any integer \(k \ge 0\), an instance of the \(k\text {-}M\text {-}\textsf{SIS}_{\mathcal {R}_q,\eta ,\ell ,\beta ,\beta ^*} \) problem is a matrix \(\textbf{A} \in \mathcal {R}_{q}^{\eta \times \ell }\) and a set of \(k\) vectors \({\boldsymbol{\textbf{u}}}_{0}, \ldots , {\boldsymbol{\textbf{u}}}_{k-1}\) s.t. \(\textbf{A} \cdot {\boldsymbol{\textbf{u}}}_{i} \equiv {\boldsymbol{\textbf{0}}} \bmod q\). A solution to the problem is a nonzero vector \({\boldsymbol{\textbf{u}}} \in \mathcal {R}^{\ell }\) such that

$$\begin{aligned} \Vert {\boldsymbol{\textbf{u}}}\Vert _{\infty } \le \beta , \quad \textbf{A} \cdot {\boldsymbol{\textbf{u}}} \equiv {\boldsymbol{\textbf{0}}},\quad \text {and} \quad {\boldsymbol{\textbf{u}}} \notin \mathcal {K}\text {-}{\text {span}}( \{{\boldsymbol{\textbf{u}}}_i\} _{0 \le i < k}). \end{aligned}$$

If \(\mathcal {B}\) is an algorithm that takes as input a matrix \(\textbf{A} \in \mathcal {R}_{q}^{\eta \times \ell }\) and vectors \({\boldsymbol{\textbf{u}}}_{i} \in \mathcal {R}^{\ell }\) for \(0 \le i < k\), we define the advantage of \(\mathcal {B}\) to be the probability that \(\mathcal {B}\) outputs a solution to the \(k\text {-}M\text {-}\textsf{SIS}_{\mathcal {R}_{q},\eta ,\ell ,\beta ,\beta ^*} \) problem instance \(\textbf{A},{\boldsymbol{\textbf{u}}}_{0}, \ldots , {\boldsymbol{\textbf{u}}}_{k-1}\) over uniformly random \(\textbf{A} \in \mathcal {R}_{q}^{\eta \times \ell }\) and \({\boldsymbol{\textbf{u}}}_{i}\) drawn from \(\textsf{SampD}(1^{\eta }, 1^\ell , \mathcal {R}, \beta )\) conditioned on \(\textbf{A} \cdot {\boldsymbol{\textbf{u}}}_{i} \equiv {\boldsymbol{\textbf{0}}} \bmod q\).

In [17, 54] it is shown that if SIS is hard for \(\mathbb {Z}_{q}^{n \times (\ell -k)}\) and norm bound \(\beta \) then \(k\text {-}M\text {-}\textsf{SIS}_{\mathbb {Z}_q,n,\ell ,\beta ',\beta ''} \) is hard for any \(k < \ell \), and certain \(\beta ', \beta '' \in \textsf{poly}(\beta )\). Looking ahead, here we are interested in \(k\text {-}R\text {-}\textsf{SIS}_{\mathcal {R}_q,\ell ,\beta ,\beta ^*} :=k\text {-}M\text {-}\textsf{SIS}_{\mathcal {R}_q,1,\ell ,\beta ,\beta ^*} \).

3 The \(k\text {-}M\text {-}\textsf{ISIS}\) Assumption

We first introduce a family of assumptions over modules – \(k\text {-}M\text {-}\textsf{ISIS} \) – which we then specialise to rings to obtain \(k\text {-}R\text {-}\textsf{ISIS} \) mentioned above.

We note that the most immediate candidate notion for \(k\text {-}\textsf{ISIS} \), i.e. generalising \(k\text {-}\textsf{SIS} \), is to simply hand out short preimages of random images and then ask the adversary to solve \(\textsf{ISIS} \). This notion is trivially equivalent to \(\textsf{ISIS} \) since short preimages of random images can be efficiently sampled by sampling short \({\boldsymbol{\textbf{u}}} \in \mathbb {Z}^{\ell }\) and computing \({\boldsymbol{\textbf{t}}} :=\textbf{A} \cdot {\boldsymbol{\textbf{u}}}\). The same reasoning can be lifted to \(\mathcal {R}\). On the other hand, \(k\text {-}\textsf{SIS} \) is trivially insecure when \(k \ge \ell \) in the intuitive sense, since then \( \{{\boldsymbol{\textbf{u}}}_{i}\} \) constitutes a trapdoor for \(\textbf{A} \) when the \({\boldsymbol{\textbf{u}}}_{i}\) are linearly independent [34]. Formally, the problem as stated is impossible to solve since all vectors will be in \(\mathbb {Q}\text {-}{\text {span}}( \{{\boldsymbol{\textbf{u}}}_i\} _{0 \le i < k})\), i.e. there are no valid solutions.

Our variants are neither trivially equivalent to \(M\text {-}\textsf{ISIS} \) nor immediately broken when \(k>\ell \), because we impose on the images an algebraic structure which is independent of the challenge matrix \(\textbf{A} \). Before stating our family of assumptions, we define a notion of admissibility to formally rule out trivial wins.

Definition 4

(\(k\text {-}M\text {-}\textsf{ISIS} \)-Admissible). Let \(g({\boldsymbol{\textbf{X}}}) \in \mathcal {R}({\boldsymbol{\textbf{X}}})\) be a Laurent monomial, i.e. \(g({\boldsymbol{\textbf{X}}}) = {\boldsymbol{\textbf{X}}}^{\boldsymbol{\textbf{e}}} := \prod _{i \in \mathbb {Z}_w} X_i^{e_i}\) for some exponent vector \({\boldsymbol{\textbf{e}}} = (e_i: i \in \mathbb {Z}_w) \in \mathbb {Z}^w\). Let \(\mathcal {G} \subset \mathcal {R}({\boldsymbol{\textbf{X}}})\) be a set of Laurent monomials with \(k :=|\mathcal {G} |\) and let \({\boldsymbol{\mathbf {\mathcal {G}}}}\) be a vector of those monomials. Let \(g^{*} \in \mathcal {R}({\boldsymbol{\textbf{X}}})\) be a target Laurent monomial. We call a family \(\mathcal {G} \) \(k\text {-}M\text {-}\textsf{ISIS} \)-admissible if (i) all \(g \in \mathcal {G} \) have constant degree, i.e. \(\Vert {\boldsymbol{\textbf{e}}}\Vert _{1} \in O(1)\); (ii) all \(g \in \mathcal {G} \) are distinct, i.e. \(\mathcal {G} \) is not a multiset; and (iii) \(0 \not \in \mathcal {G} \). We call a family \((\mathcal {G}, g^*)\) \(k\text {-}M\text {-}\textsf{ISIS} \)-admissible if \(\mathcal {G} \) is \(k\text {-}M\text {-}\textsf{ISIS} \)-admissible, \(g^*\) has constant degree, and \(g^* \notin \mathcal {G} \).

Remark 1

Condition (i) rules out monomials that depend on the ring \(\mathcal {R}\), such as \(X^{\phi (m)}\). Condition (ii) rules out that trivial linear combinations of known preimages produce a preimage for the target. Condition (iii) rules out trivially producing multiple preimages of the same image. On the other hand, we do not target full generality here but restrict ourselves to a slight generalisation of what we require in this work. It is plausible that we can replace Laurent monomials by Laurent “terms”, i.e. with coefficients \(\ne 1 \) in \(\mathcal {R}_{q}\), or rational functions.
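The admissibility conditions are easy to check mechanically. Below is a small Python sketch, assuming Laurent monomials are represented by their exponent vectors \({\boldsymbol{\textbf{e}}} \in \mathbb {Z}^w\) and with the “constant degree” condition made explicit via a bound d supplied by the caller; in this representation condition (iii) is automatic, since a monomial \({\boldsymbol{\textbf{X}}}^{\boldsymbol{\textbf{e}}}\) is never the zero element.

def is_admissible(G, g_star, d):
    # G: list of exponent vectors (tuples in Z^w); g_star: target exponent vector;
    # d: explicit bound standing in for the "constant degree" condition.
    def deg(e):
        return sum(abs(ei) for ei in e)
    if any(deg(e) > d for e in G):                # (i) constant degree
        return False
    if len(set(map(tuple, G))) != len(G):         # (ii) all monomials distinct
        return False
    # (iii) 0 not in G holds automatically for monomials X^e.
    if deg(g_star) > d or tuple(g_star) in set(map(tuple, G)):  # target conditions
        return False
    return True

# Example with w = 2: G = {X_0, X_1, X_0 * X_1^{-1}} and g* = X_0 * X_1.
G = [(1, 0), (0, 1), (1, -1)]
print(is_admissible(G, (1, 1), d=2))    # True
print(is_admissible(G, (1, -1), d=2))   # False: g* already lies in G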

Definition 5

Let \(\ell , \eta \in \mathbb {N}\). Let \(q\) be a rational prime, \(\mathcal {R}\) the \(m\)-th cyclotomic ring, and \(\mathcal {R}_{q} :=\mathcal {R}/q \mathcal {R}\). Let \(\mathcal {T} \subset \mathcal {R}_q^\eta \) be such that, for any \({\boldsymbol{\textbf{t}}} = (t_i)_{i \in \mathbb {Z}_\eta } \in \mathcal {T}\), \(\langle \{t_i\} \rangle = \mathcal {R}_q\). Let \(\mathcal {G} \subset \mathcal {R}({\boldsymbol{\textbf{X}}})\) be a set of \(w\)-variate Laurent monomials. Let \(g^{*} \in \mathcal {R}({\boldsymbol{\textbf{X}}})\) be a target Laurent monomial. Let \((\mathcal {G},g^*)\) be \(k\text {-}M\text {-}\textsf{ISIS} \)-admissible. Let \(\bar{\mathcal {G}} :=\mathcal {G} \cup \{g^*\} \). Let \(\beta \ge 1\) and \(\beta ^{*} \ge 1\) be reals. For \(\eta , \ell \in \mathbb {N}\), \(g \in \bar{\mathcal {G}}\), \(\ell \ge \textsf{lhl} (\mathcal {R}, \eta , q, \beta )\), \(\textbf{A} \in \mathcal {R}_q^{\eta \times \ell }\), \({\boldsymbol{\textbf{t}}} \in \mathcal {T}\), and \({\boldsymbol{\textbf{v}}} \in (\mathcal {R}_{q}^\times )^w\), let \(\mathcal {D}_{g,\textbf{A},{\boldsymbol{\textbf{t}}},{\boldsymbol{\textbf{v}}}}\) be a distribution over

$$\begin{aligned} \{{\boldsymbol{\textbf{u}}}_g \in \mathcal {R}^\ell : \textbf{A} \cdot {\boldsymbol{\textbf{u}}}_g \equiv g({\boldsymbol{\textbf{v}}}) \cdot {\boldsymbol{\textbf{t}}} \bmod q, \Vert {\boldsymbol{\textbf{u}}}_g\Vert \le \beta \} . \end{aligned}$$

Let \(\mathcal {D} := \{\mathcal {D}_{g, \textbf{A}, {\boldsymbol{\textbf{t}}}, {\boldsymbol{\textbf{v}}}}: \eta , \ell \in \mathbb {N}, g \in \bar{\mathcal {G}}, \textbf{A} \in \mathcal {R}_q^{\eta \times \ell }, {\boldsymbol{\textbf{t}}} \in \mathcal {T}, {\boldsymbol{\textbf{v}}} \in (\mathcal {R}_{q}^\times )^w\} \) be the family of these distributions. Write \(\textsf{pp} :=(\mathcal {R}_q, \eta , \ell , w, \mathcal {G}, g^*, \mathcal {D}, \mathcal {T}, \beta , \beta ^*)\). The \(k\text {-}M\text {-}\textsf{ISIS}_{\textsf{pp}} \) assumption states that for any PPT adversary \(\mathcal {A}\) the following advantage is \(\textsf{negl}(\lambda )\), where

figure r

Remark 2

Since for any \({\boldsymbol{\textbf{t}}}' \in \mathcal {T}\) there exist matrices \(\textbf{X}, \textbf{Y} \) s.t. \(\textbf{X} \cdot \textbf{Y} \equiv \textbf{I} \) and \(\textbf{X} \cdot {\boldsymbol{\textbf{t}}}' \equiv {(1,0,\ldots ,0)}^{\texttt{T}} \bmod q\), we can assume that \(\mathcal {T} = \{{(1, 0, \ldots , 0)}^{\texttt{T}}\} \) without loss of generality.

Definition 6

When \(\eta =1\) we may write

$$ k\text {-}R\text {-}\textsf{ISIS}_{\mathcal {R}_q, \ell , w, \mathcal {G}, g^*, \mathcal {D}, \mathcal {T}, \beta , \beta ^*} :=k\text {-}M\text {-}\textsf{ISIS}_{\mathcal {R}_q, 1, \ell , w, \mathcal {G}, g^*, \mathcal {D}, \mathcal {T}, \beta , \beta ^*}. $$

Remark 3

Analogous to the \(\ell \)-Diffie-Hellman exponent assumption, an example of \((w, \mathcal {G}, g^*)\) is \(w= 1\), \(\mathcal {G} = \{1,X,\ldots ,X^\ell , X^{\ell +2},\ldots ,X^{2\ell }\} \), and \(g^*(X) = X^{\ell + 1}\) for some \(\ell \in \mathbb {N}\).
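In the exponent-vector representation sketched after Remark 1, this family corresponds to the exponents \(\{0, \ldots , 2\ell \} \setminus \{\ell +1\} \) with target \(\ell +1\); for a concrete illustrative \(\ell \), and taking the degree bound to be \(2\ell \) for this instance, the check from that sketch confirms that the target is excluded.

ell = 8   # illustrative value of the parameter l from Remark 3
G = [(i,) for i in range(2 * ell + 1) if i != ell + 1]
print(is_admissible(G, (ell + 1,), d=2 * ell))   # True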

As written above, we have a separate assumption for each family \((\mathcal {G}, g^*)\), which is application dependent. As we will show below, there are \((\mathcal {G}, g^*)\) that are as hard as \(M\text {-}\textsf{ISIS} \), and our discussion of admissibility indicates that some \((\mathcal {G}, g^*)\) are trivially insecure. However, to encourage analysis and to avoid “bodacious assumptions” [46] we make the following, strong, meta assumption.

Definition 7

For any \(k\text {-}M\text {-}\textsf{ISIS} \)-admissible \((\mathcal {G},g^*)\), \(k\text {-}M\text {-}\textsf{ISIS}_{\textsf{pp}} \) is hard.

3.1 Knowledge Variants

We next propose a “knowledge” version of the \(k\text {-}M\text {-}\textsf{ISIS}\) assumption. It captures the intuition that if the images are restricted to scalar multiples of \({\boldsymbol{\textbf{t}}}\) then the only way to produce preimages of them under \(\textbf{A} \) is to perform a linear combination of the given preimages under \(\textbf{A} \) with small coefficients.

Definition 8

(Knowledge \(k\text {-}M\text {-}\textsf{ISIS}\) Assumption). Adopt the notation from Definition 5, but let \(\textsf{pp} :=(\mathcal {R}_q, \eta , \ell , w, \mathcal {G}, \mathcal {D}, \mathcal {T}, \alpha , \beta , \beta ^*)\) where \(\alpha \ge 1\) is real and \(\eta > 1\). The knowledge \(k\text {-}M\text {-}\textsf{ISIS}_{\textsf{pp}} \) assumption states that for any PPT adversary \(\mathcal {A}\) there exists a PPT extractor \(\mathcal {E}_\mathcal {A}\) such that the following advantage is \(\textsf{negl}(\lambda )\), where

figure w

where the notation \((\mathcal {A}\Vert \mathcal {E}_\mathcal {A})\) means that \(\mathcal {A}\) and \(\mathcal {E}_\mathcal {A}\) are run on the same input including the randomness, and \((c, {\boldsymbol{\textbf{u}}})\) and \({(x_g)}_{g \in \mathcal {G}}\) are the outputs of \(\mathcal {A}\) and \(\mathcal {E}_\mathcal {A}\) respectively.

The knowledge \(k\text {-}M\text {-}\textsf{ISIS}\) assumption, as stated, only makes sense for \(\eta \ge 2\), i.e. not for \(k\text {-}R\text {-}\textsf{ISIS} \). To see this, consider an adversary \(\mathcal {A}\) which does the following: First, it samples random short \({\boldsymbol{\textbf{u}}}\) and checks whether \({\boldsymbol{\textbf{A}}} \cdot {\boldsymbol{\textbf{u}}}\) is in the submodule of \(\mathcal {R}_q^{\eta }\) generated by \({\boldsymbol{\textbf{t}}}\). If not, \(\mathcal {A}\) aborts. If so, it finds c such that \(\textbf{A} \cdot {\boldsymbol{\textbf{u}}} = c \cdot {\boldsymbol{\textbf{t}}} \bmod q\) and outputs \((c,{\boldsymbol{\textbf{u}}})\). When \(\eta = 1\) and assuming without loss of generality that \(\mathcal {T} = \{{(1, 0, \ldots , 0)}^{\texttt{T}}\} \), we observe that \(t = 1\) generates \(\mathcal {R}_q\), which means \(\mathcal {A}\) never aborts. Clearly, when \(\mathcal {A}\) does not abort, it has no “knowledge” of how c can be expressed as a linear combination of \( \{g({\boldsymbol{\textbf{v}}})\} _{g \in \mathcal {G}}\). Note that when \(\eta \ge 2\) the adversary \(\mathcal {A}\) aborts with overwhelming probability since \({\boldsymbol{\textbf{A}}} \cdot {\boldsymbol{\textbf{u}}}\) is close to uniform over \(\mathcal {R}_q^\eta \) but the submodule generated by \({\boldsymbol{\textbf{t}}}\) is only a negligible fraction of \(\mathcal {R}_q^\eta \). However, in order to be able to pun about “crises of knowledge”, we also define a ring version of the knowledge assumption. In the ring setting, we consider proper ideals rather than submodules.
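The role of \(\eta \) in this argument can be checked numerically in a toy model where the ring is replaced by \(\mathbb {Z}_q\) and \({\boldsymbol{\textbf{t}}} = {(1,0,\ldots ,0)}^{\texttt{T}}\): for \(\eta = 1\) every image lies in the submodule generated by \({\boldsymbol{\textbf{t}}}\), while for \(\eta \ge 2\) a near-uniform image \({\boldsymbol{\textbf{A}}} \cdot {\boldsymbol{\textbf{u}}}\) lands in it with probability roughly \(q^{-(\eta -1)}\). The following Monte-Carlo sketch (illustrative parameters only) estimates this fraction.

import random

def fraction_in_submodule(eta, q, l, beta, trials=20000, seed=1):
    # Toy model over Z_q with t = (1, 0, ..., 0)^T: A*u lies in the submodule
    # generated by t iff every coordinate of A*u except the first is 0 mod q.
    rng = random.Random(seed)
    A = [[rng.randrange(q) for _ in range(l)] for _ in range(eta)]
    hits = 0
    for _ in range(trials):
        u = [rng.randrange(-beta, beta + 1) for _ in range(l)]
        v = [sum(A[i][j] * u[j] for j in range(l)) % q for i in range(eta)]
        hits += all(x == 0 for x in v[1:])
    return hits / trials

print(fraction_in_submodule(eta=1, q=257, l=16, beta=2))  # 1.0: the adversary never aborts
print(fraction_in_submodule(eta=2, q=257, l=16, beta=2))  # ~ 1/257, i.e. roughly 0.004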

Definition 9

(Knowledge \(k\text {-}R\text {-}\textsf{ISIS}\) Assumption). Let the parameters \(\textsf{pp} \) be as in Definition 5 except that \(\eta =1\) and \(\mathcal {T}\) contains elements \(t \in \mathcal {R}_{q}\) s.t. \(1/|\left\langle {t} \right\rangle | = \textsf{negl}(\lambda )\) and \(|\left\langle {t} \right\rangle |/|\mathcal {R}_{q}| = \textsf{negl}(\lambda )\).Footnote 7 The knowledge \(k\text {-}R\text {-}\textsf{ISIS}_{\textsf{pp}} \) assumption states that for any PPT adversary \(\mathcal {A}\) there exists a PPT extractor \(\mathcal {E}_\mathcal {A}\) such that the following advantage is \(\textsf{negl}(\lambda )\), where

figure y

Definition 10

For any \(k\text {-}M\text {-}\textsf{ISIS} \)-admissible \(\mathcal {G} \), the knowledge \(k\text {-}M\text {-}\textsf{ISIS}_{\textsf{pp}} \) assumption holds.

We provide reductions for some parameter regimes and some preliminary cryptanalysis of our assumption in the full version of this work.

4 Compact Extractable Vector Commitments

We construct compact extractable vector commitments with openings to constant-degree multivariate polynomial maps from the knowledge \(k\text {-}M\text {-}\textsf{ISIS}\) assumption.

4.1 Definitions

We define a non-interactive variant of vector commitments with preprocessing.

Definition 11

(Vector Commitments (VC)). A (preprocessing non-interactive) vector commitment (VC) scheme is parameterised by the families

$$\begin{aligned} \mathcal {F}&= \{\mathcal {F} _{s,w,t} \subseteq \{f: \mathcal {R}^s\times \mathcal {R}^w\rightarrow \mathcal {R}^t\} \} _{s,w,t\in \mathbb {N}} \text { and }\\ \mathcal {Y}&= \{\mathcal {Y} _{s,t} \subseteq \{y: \mathcal {R}^s\rightarrow \mathcal {R}^t\} \} _{s,t\in \mathbb {N}} \end{aligned}$$

of functions over \(\mathcal {R}\) and an input alphabet \(\mathcal {X} \subseteq \mathcal {R}\). The parameters \(s\), \(w\), and \(t\) are the dimensions of public inputs, secret inputs, and outputs of f respectively. The VC scheme consists of the PPT algorithms \((\textsf{Setup}, \textsf{Com}, \textsf{Open}, \textsf{PreVerify}, \textsf{Verify})\) defined as follows:

  • \(\textsf{pp} \leftarrow \textsf{Setup} (1^\lambda , 1^s, 1^w, 1^t)\): The setup algorithm generates the public parameters on input the security parameter \(\lambda \in \mathbb {N}\) and the size parameters \(s, w, t\in \mathbb {N}\).

  • \((c, \textsf{aux}) \leftarrow \textsf{Com} (\textsf{pp}, {\boldsymbol{\textbf{x}}})\): The commitment algorithm generates a commitment c of a given vector \({\boldsymbol{\textbf{x}}} \in \mathcal {X} ^w\) with some auxiliary opening information \(\textsf{aux} \).

  • \(\pi \leftarrow \textsf{Open} (\textsf{pp}, f, {\boldsymbol{\textbf{z}}}, \textsf{aux})\): The opening algorithm generates a proof \(\pi \) for \(f({\boldsymbol{\textbf{z}}}, \cdot )\) for the public input \({\boldsymbol{\textbf{z}}} \in \mathcal {X} ^s\) and function \(f \in \mathcal {F} _{s,w,t}\).

  • \(\textsf{pp} _{f,y} \leftarrow \textsf{PreVerify} (\textsf{pp}, (f, y))\): Given functions \(f \in \mathcal {F} _{s,w,t}\) and \(y \in \mathcal {Y} _{s,t}\), the verification preprocessing algorithm generates the preprocessed public parameters \(\textsf{pp} _{f,y}\) for verifying proofs for \((f, y)\).

  • \(b \leftarrow \textsf{Verify}(\textsf{pp} _{f,y}, {\boldsymbol{\textbf{z}}}, c, \pi )\): The verification algorithm takes as input the preprocessed public parameters \(\textsf{pp} _{f,y}\), a public input \({\boldsymbol{\textbf{z}}} \in \mathcal {X} ^s\), a commitment c, and an opening proof \(\pi \). It outputs a bit b deciding whether to accept or reject that the vector \({\boldsymbol{\textbf{x}}}\) committed in c satisfies \(f({\boldsymbol{\textbf{z}}},{\boldsymbol{\textbf{x}}}) = y({\boldsymbol{\textbf{z}}})\).
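For reference, the syntax of Definition 11 corresponds to the following minimal Python skeleton; the type aliases and method names are placeholders introduced here for illustration, and the concrete algorithms are those of Fig. 1.

from typing import Any, Callable, Sequence, Tuple

PublicParams = Any   # pp, possibly preprocessed
Commitment = Any     # c
Aux = Any            # aux
Proof = Any          # pi
F = Callable[[Sequence, Sequence], Sequence]  # f: R^s x R^w -> R^t
Y = Callable[[Sequence], Sequence]            # y: R^s -> R^t

class VectorCommitment:
    def setup(self, lam: int, s: int, w: int, t: int) -> PublicParams:
        raise NotImplementedError
    def com(self, pp: PublicParams, x: Sequence) -> Tuple[Commitment, Aux]:
        raise NotImplementedError
    def open(self, pp: PublicParams, f: F, z: Sequence, aux: Aux) -> Proof:
        raise NotImplementedError
    def pre_verify(self, pp: PublicParams, f: F, y: Y) -> PublicParams:
        raise NotImplementedError
    def verify(self, pp_fy: PublicParams, z: Sequence, c: Commitment, pi: Proof) -> bool:
        raise NotImplementedError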

Definition 12

(Correctness). A VC scheme for \((\mathcal {F},\mathcal {X},\mathcal {Y})\) is said to be correct if for any \(\lambda , s, w, t\in \mathbb {N}\), any \(\textsf{pp} \in \textsf{Setup} (1^\lambda , 1^s, 1^w, 1^t)\), any \((f, {\boldsymbol{\textbf{z}}}, {\boldsymbol{\textbf{x}}}, y) \in \mathcal {F} _{s,w,t} \times \mathcal {X} ^s\times \mathcal {X} ^w\times \mathcal {Y} _{s,t}\) satisfying \(f({\boldsymbol{\textbf{z}}},{\boldsymbol{\textbf{x}}}) = y({\boldsymbol{\textbf{z}}})\), any \((c, \textsf{aux}) \in \textsf{Com} (\textsf{pp}, {\boldsymbol{\textbf{x}}})\), any \(\pi \in \textsf{Open} (\textsf{pp}, f, {\boldsymbol{\textbf{z}}}, \textsf{aux})\), and any \(\textsf{pp} _{f,y} \in \textsf{PreVerify} (\textsf{pp}, (f, y))\), it holds that \(\textsf{Verify}(\textsf{pp} _{f,y}, {\boldsymbol{\textbf{z}}}, c, \pi ) = 1\).

Informally, a VC scheme is extractable if, whenever an adversary \(\mathcal {A}\) is able to produce a commitment c and a valid opening proof \(\pi \) for some \((f({\boldsymbol{\textbf{z}}},\cdot ),y({\boldsymbol{\textbf{z}}}))\), then it must “know” a preimage \({\boldsymbol{\textbf{x}}}\) which is committed in c and satisfies \(f({\boldsymbol{\textbf{z}}},{\boldsymbol{\textbf{x}}}) = y({\boldsymbol{\textbf{z}}})\). Clearly, an extractable VC must also be binding, i.e. it is infeasible to open a commitment c to a set \( \{(f_i({\boldsymbol{\textbf{z}}}_i, \cdot ),y_i({\boldsymbol{\textbf{z}}}_i))\} _i\) of inconsistent function-image tuples.

Definition 13

(Extractability). Let \(\kappa : \mathbb {N}^4 \rightarrow [0,1]\). A VC scheme for \((\mathcal {F},\mathcal {X},\mathcal {Y})\) is said to be \(\kappa \)-extractable if for any PPT adversary \(\mathcal {A}\) there exists a PPT extractor \(\mathcal {E}_\mathcal {A}\) such that the following probability is at most \(\kappa (\lambda ,s,w,t)\):

figure aa

In case \(\textsf{Com} \) is deterministic, we suppress the output r of \(\mathcal {E}_\mathcal {A}\). We say that the scheme is extractable if it is \(\kappa \)-extractable and \(\kappa (\lambda ,s,w,t)\) is negligible in \(\lambda \) for any \(s, w, t\in \textsf{poly}(\lambda )\).

Definition 14

(Compactness). A VC scheme for \((\mathcal {F},\mathcal {X},\mathcal {Y})\) is said to be compact if there exists \(p(\lambda ,s,w,t) \in \textsf{poly}(\lambda , \textrm{log}\, s, \textrm{log}\, w, \textrm{log}\, t)\) such that for any \(\lambda , s, w, t\in \mathbb {N}\), any \(\textsf{pp} \in \textsf{Setup} (1^\lambda , 1^s, 1^w, 1^t)\), any \((f, {\boldsymbol{\textbf{z}}}, {\boldsymbol{\textbf{x}}}, y) \in \mathcal {F} _{s,w,t} \times \mathcal {X} ^s\times \mathcal {X} ^w\times \mathcal {Y} _{s,t}\), any \((c, \textsf{aux}) \in \textsf{Com} (\textsf{pp}, {\boldsymbol{\textbf{x}}})\), and any \(\pi \in \textsf{Open} (\textsf{pp}, f, {\boldsymbol{\textbf{z}}}, \textsf{aux})\), it holds that \(\max \{|c|, |\pi |\} \le p(\lambda ,s,w,t)\), where \(|\cdot |\) denotes the description size.

4.2 Construction

A formal description of our VC construction is in Fig. 1 where important parameters and shorthands are listed and explained in Table 1.

Table 1. Parameters and shorthands with \(\lambda \) as security parameter.
Fig. 1. Our VC Construction.

The public parameters consist of a \(k\text {-}M\text {-}\textsf{ISIS}\) instance \((\textbf{A} _0, {\boldsymbol{\textbf{t}}}_0, {\boldsymbol{\textbf{v}}}, \left( {\boldsymbol{\textbf{u}}}_{0,g}\right) _{g \in \mathcal {G} _0})\) over \(\mathcal {R}_q\), a correlated knowledge \(k\text {-}M\text {-}\textsf{ISIS}\) instance \((\textbf{A} _1, {\boldsymbol{\textbf{t}}}_1, {\boldsymbol{\textbf{v}}}, \left( {\boldsymbol{\textbf{u}}}_{1,g}\right) _{g \in \mathcal {G} _1})\) over \(\mathcal {R}_q\) sharing the same \({\boldsymbol{\textbf{v}}}\) as the \(k\text {-}M\text {-}\textsf{ISIS}\) instance, and an \(R\text {-}\textsf{SIS}\) instance \({\boldsymbol{\textbf{h}}}\) over \(\mathcal {R}_p\), where p is small relative to q. Intuitively, the \(k\text {-}M\text {-}\textsf{ISIS}\) instance is for weak binding, the knowledge \(k\text {-}M\text {-}\textsf{ISIS}\) instance is for upgrading weak binding to extractability, and the \(R\text {-}\textsf{SIS}\) instance is for compactness. The commitment c to a vector \({\boldsymbol{\textbf{x}}}\) is simply \(c :=\left\langle {{\boldsymbol{\textbf{v}}}, {\boldsymbol{\textbf{x}}}} \right\rangle \bmod q\).

We next explain the opening and verification mechanism. Suppose for the moment that \(f({\boldsymbol{\textbf{z}}},\cdot )\) is a single-output polynomial, i.e. \(t= 1\). Consider the commitment c of \({\boldsymbol{\textbf{x}}}\) and the evaluation of \(f({\boldsymbol{\textbf{z}}},\cdot )\) at \((v_0^{-1} \cdot c, \ldots , v_{w-1}^{-1} \cdot c)\) as polynomials in \({\boldsymbol{\textbf{v}}}\). The value \(f({\boldsymbol{\textbf{z}}},{\boldsymbol{\textbf{x}}})\) is encoded as the constant term of the evaluation polynomial. To open the commitment c of \({\boldsymbol{\textbf{x}}}\) to a function \(f({\boldsymbol{\textbf{z}}}, \cdot )\), the committer computes the coefficient of each non-zero Laurent monomial \(g \in \mathcal {G} _0\) in the evaluation polynomial, and uses these coefficients in a linear combination of \(({\boldsymbol{\textbf{u}}}_{0,g})_{g \in \mathcal {G} _0}\) to produce \({\boldsymbol{\textbf{u}}}_0\). In general, for \(t\ge 1\), the committer further compresses the multiple instances of \({\boldsymbol{\textbf{u}}}_0\) into a single one using a linear combination with coefficients given by \({\boldsymbol{\textbf{h}}}\). To enable extraction (in the security proof), the committer also provides \({\boldsymbol{\textbf{u}}}_1\), which is a linear combination of \(({\boldsymbol{\textbf{u}}}_{1,g})_{g \in \mathcal {G} _1}\) using \({\boldsymbol{\textbf{x}}}\) as coefficients. Given the above, the meaning behind the verification algorithm is immediate.
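For the linear case (\(d = 1\)), the mechanism can be checked directly in a toy model over \(\mathbb {Z}_q\) instead of \(\mathcal {R}_q\), with the public-input part of \(f({\boldsymbol{\textbf{z}}},\cdot )\) folded into its coefficients and norms ignored: writing \(c = \sum _i v_i x_i\), the evaluation \(\sum _i f_i v_i^{-1} c\) splits into the constant term \(f({\boldsymbol{\textbf{x}}})\) plus cross terms attached to the Laurent monomials \(v_j / v_i\), which are exactly the terms covered by the preimages \({\boldsymbol{\textbf{u}}}_{0,g}\). The snippet below (illustrative parameters only) verifies this identity.

import random

random.seed(0)
q, w = 7681, 6                                    # toy modulus (prime) and dimension
v = [random.randrange(1, q) for _ in range(w)]    # invertible entries since q is prime
x = [random.randrange(-4, 5) for _ in range(w)]   # short committed vector
f = [random.randrange(-4, 5) for _ in range(w)]   # coefficients of a linear f

c = sum(vi * xi for vi, xi in zip(v, x)) % q      # commitment c = <v, x> mod q

# Evaluation of f at (v_0^{-1} c, ..., v_{w-1}^{-1} c) mod q.
evaluation = sum(f[i] * pow(v[i], -1, q) * c for i in range(w)) % q

# Constant term f(x) plus the cross terms carried by the monomials v_j / v_i.
constant_term = sum(fi * xi for fi, xi in zip(f, x)) % q
cross_terms = sum(f[i] * x[j] * v[j] * pow(v[i], -1, q)
                  for i in range(w) for j in range(w) if i != j) % q

assert evaluation == (constant_term + cross_terms) % q
print("f(x) mod q:", constant_term)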

Finally, we explain the choice of p and q in Table 1. First, p is chosen such that the element \(f({\boldsymbol{\textbf{z}}},{\boldsymbol{\textbf{x}}}) - y({\boldsymbol{\textbf{z}}})\) is considered short (in the context of \(R\text {-}\textsf{SIS}\) problems) relative to p for all \(f \in \mathcal {F} _{s,w,t}\), \(y \in \mathcal {Y} _{s,t}\), \({\boldsymbol{\textbf{z}}} \in \mathcal {X} ^s\), and \({\boldsymbol{\textbf{x}}} \in \mathcal {X} ^w\). By some routine calculations, we can see that for such a choice of \((f,{\boldsymbol{\textbf{z}}},{\boldsymbol{\textbf{x}}},y)\), we have \(\Vert f({\boldsymbol{\textbf{z}}},{\boldsymbol{\textbf{x}}}) - y({\boldsymbol{\textbf{z}}})\Vert \le {(s+w+d)}^d \cdot \alpha ^{d+1} \cdot \gamma _{\mathcal {R}}^d\). A standard choice for \(R\text {-}\textsf{SIS}\) problems over \(\mathcal {R}_p\) is for p to be at least \(n \log n\) times the norm bound; we thus simply pick this. Similarly, q is chosen such that \(\delta _0\) and \(\delta _1\) are both considered short relative to q, concretely by setting q to be \(n \log n\) times the maximum of the two.Footnote 8

Remark 4

(Updating Commitments and Opening Proofs). We discuss the cost of updating a commitment of \({\boldsymbol{\textbf{x}}}\) to that of \({\boldsymbol{\textbf{x}}}'\), and an opening proof for \(f({\boldsymbol{\textbf{z}}},{\boldsymbol{\textbf{x}}})\) to one for \(f'({\boldsymbol{\textbf{z}}}',{\boldsymbol{\textbf{x}}}')\), omitting fixed \(\textsf{poly}(\lambda )\) factors. Due to the linearity of the commitment \(c = \left\langle {{\boldsymbol{\textbf{v}}}, {\boldsymbol{\textbf{x}}}} \right\rangle \bmod q\) and of the opening proof component \({\boldsymbol{\textbf{u}}}_1 = \sum _{i \in \mathbb {Z}_w} x_i \cdot {\boldsymbol{\textbf{u}}}_{1,X_i}\) in the committed vector \({\boldsymbol{\textbf{x}}}\), they can easily be updated to a new committed vector \({\boldsymbol{\textbf{x}}}'\) by adding \(\left\langle {{\boldsymbol{\textbf{v}}}, {\boldsymbol{\textbf{x}}}' - {\boldsymbol{\textbf{x}}}} \right\rangle \bmod q\) and \(\sum _{i \in \mathbb {Z}_w} (x'_i - x_i) \cdot {\boldsymbol{\textbf{u}}}_{1,X_i}\) respectively. The computation complexity of the update is \(O(\varDelta )\), where \(\varDelta \) is the Hamming distance between \({\boldsymbol{\textbf{x}}}\) and \({\boldsymbol{\textbf{x}}}'\). Updating the \({\boldsymbol{\textbf{u}}}_{0,{\boldsymbol{\textbf{e}}}}\) terms is more computationally expensive due to their non-linearity in \({\boldsymbol{\textbf{x}}}\). The cost of computing the difference term for \({\boldsymbol{\textbf{u}}}_{0,{\boldsymbol{\textbf{e}}}}\) is linear in \(\left( {\begin{array}{c}w\\ k\end{array}}\right) - \left( {\begin{array}{c}w-\varDelta \\ k\end{array}}\right) = O(\varDelta ^k)\) for each \({\boldsymbol{\textbf{e}}} \in \mathcal {E}_k\) and each \(k \in [d]\). The total work needed for updating \( \{{\boldsymbol{\textbf{u}}}_{0,{\boldsymbol{\textbf{e}}}}\} _{{\boldsymbol{\textbf{e}}} \in \mathcal {E}_k, k \in [d]}\) is thus \(O(w^d \cdot \varDelta ^d)\). For fixed \({\boldsymbol{\textbf{x}}}\), and hence fixed \( \{{\boldsymbol{\textbf{u}}}_{0,{\boldsymbol{\textbf{e}}}}\} _{{\boldsymbol{\textbf{e}}} \in \mathcal {E}_k, k \in [d]}\), updating \({\boldsymbol{\textbf{u}}}_0\) by the same method costs computation linear in the Hamming distance between the coefficient vector of \(f({\boldsymbol{\textbf{z}}},\cdot )\) and that of \(f'({\boldsymbol{\textbf{z}}}',\cdot )\).
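A minimal Python sketch of these linear updates, again in a toy model over \(\mathbb {Z}_q\) with the vectors \({\boldsymbol{\textbf{u}}}_{1,X_i}\) represented as integer lists (illustrative only, no norms tracked): both the commitment and \({\boldsymbol{\textbf{u}}}_1\) are updated by terms that involve only the changed coordinates, giving the \(O(\varDelta )\) cost stated above.

def update_commitment(c, v, x_old, x_new, q):
    # c' = c + <v, x' - x> mod q: touches only coordinates where x changed.
    for i, (a, b) in enumerate(zip(x_old, x_new)):
        if a != b:
            c = (c + v[i] * (b - a)) % q
    return c

def update_u1(u1, u1_basis, x_old, x_new):
    # u_1' = u_1 + sum_i (x'_i - x_i) * u_{1, X_i}: again O(Delta) work,
    # where u1_basis[i] stands for the public preimage u_{1, X_i}.
    u1 = list(u1)
    for i, (a, b) in enumerate(zip(x_old, x_new)):
        if a != b:
            u1 = [ui + (b - a) * bi for ui, bi in zip(u1, u1_basis[i])]
    return u1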

We show that our VC construction is correct, extractable under a knowledge \(k\text {-}M\text {-}\textsf{ISIS} \) assumption, and compact. The formal analysis of the theorems is deferred to the full version.

Theorem 1

For \(d = O(1)\), \(\ell _0 :=\ell _1 :=\textsf{lhl} (\mathcal {R}, \eta , q, \beta )\),

$$\begin{aligned} \delta _0 = 2 \cdot p \cdot t \cdot {(s+d)}^{d} \cdot {(w+d)}^{2d} \cdot \alpha ^{2d+1} \cdot \beta \cdot \gamma _{\mathcal {R}}^{2d+2}\quad \text {and} \quad \delta _1 = w\cdot \alpha \cdot \beta \cdot \gamma _{\mathcal {R}}, \end{aligned}$$

our VC construction in Fig. 1 is correct.

Theorem 2

Our VC construction for \((\mathcal {F}, \mathcal {X}, \mathcal {Y})\) is extractable if it is correct, \(\beta \ge \alpha \), \(\ell _i \ge \textsf{lhl} (\mathcal {R}, \eta _i, q, \beta )\) for \(i \in \{0,1\} \), and the \(k\text {-}M\text {-}\textsf{ISIS}_{\mathcal {R}_q, \eta _0, \ell _0, w, \mathcal {G} _0, 1, \mathcal {D}_0, \mathcal {T}_0, \beta , 2 \delta _0} \) assumption, the knowledge version \(k\text {-}M\text {-}\textsf{ISIS}_{\mathcal {R}_q, \eta _1, \ell _1, w, \mathcal {G} _1, \mathcal {D}_1, \mathcal {T}_1, \alpha , \beta , \delta _1} \) assumption, and the \(R\text {-}\textsf{SIS}_{\mathcal {R}_p,t,2 \delta _p} \) assumption hold, where \(\mathcal {D}_i\) is such that the distribution

figure ab

is statistically close to the distribution

figure ac
Fig. 2. Combined size (in KB) of a commitment and an opening proof for the concrete parameters chosen in Theorem 3, setting \(\lambda = 128\), optimising for \(\rho \) and comparing with SNARK proof sizes in prior works [36, Fig. 5]. We picked \(\alpha = s\).

Theorem 3

For \(n \in \textsf{poly}(\lambda )\), \(q, \delta _0, \delta _1 \in \textsf{poly}(\lambda ,s,w,t)\), and \(\ell _0, \ell _1 \in \varTheta (\log q) = \textsf{polylog}(\lambda ,s,w,t)\), covering the choices of parameters in Theorems 1 and 2, the VC construction in Fig. 1 is compact.

Concretely, let \(\mathcal {R}\) be a power-of-2 cyclotomic ring so that \(\gamma _{\mathcal {R}} = n\). For \(s= w= t\ge n\) and for the following choices of parameters,

$$\begin{aligned} d,\eta _0,\eta _1&= O(1), \quad \beta \ge \alpha \\ \delta _0&= 2 \cdot p \cdot t \cdot {(s+d)}^{d} \cdot {(w+d)}^{2d} \cdot {\alpha }^{2d+1} \cdot \beta \cdot \gamma _{\mathcal {R}}^{2d+2}, \\ \delta _1&= w\cdot \alpha \cdot \beta \cdot \gamma _{\mathcal {R}}, \\ p&\approx \delta _p \cdot n \cdot \log n,\quad q \approx \delta _0 \cdot n \cdot \log n,~\text {and}\\ \ell _0 = \ell _1&= \textsf{lhl} (\mathcal {R}, 1, q, \beta ) \approx 2 \log _{\beta } q, \end{aligned}$$

a commitment and openings are of size \(O(n \log s)\) and \(O(n \cdot {(\log s + \log \beta )}^2 / \log \beta )\), respectively. The minimum is attained at \(\beta = \varTheta (s)\), where an opening proof is of size \(O(n \log s)\).

To translate these into concrete sizes we need to pick \(n\) such that solving \(k\text {-}R\text {-}\textsf{ISIS} \) and \(R\text {-}\textsf{SIS} \) costs \(\approx 2^{\lambda }\) operations. Here it can be beneficial to set \(q = \delta _{0}^{\rho } \cdot n \cdot \log n\) for some parameter \(\rho \in \mathbb {N}\). Specifically, we require that \(R\text {-}\textsf{SIS}_{\mathcal {R}_q, \ell _0, 2\cdot \sqrt{n} \cdot \delta _0} \), \(R\text {-}\textsf{SIS}_{\mathcal {R}_q, \ell _1, 2\cdot \sqrt{n} \cdot \delta _1} \) and \(R\text {-}\textsf{SIS}_{\mathcal {R}_p, t, 2\cdot \sqrt{n} \cdot \delta _p} \) are hard where \(\delta _{p} :={( s + w + d)}^{d} \cdot \alpha ^{d+1} \cdot \gamma _{\mathcal {R}}^{d}.\) The factor of two arises from our reduction and the factor \(\sqrt{n}\) translates between \(\ell _{\infty }\) and \(\ell _{2}\). In Fig. 2 we report the concrete combined size (in KB) of a commitment and an opening proof for the concrete parameters chosen in Theorem 3, specifically setting \(d=2\), \(\eta _0 = \eta _1 = 1\), and \(\beta = s= w= t\in \{2^{10}, 2^{11}, \ldots , 2^{40}\} \).
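To make the parameter selection concrete, the following Python sketch simply evaluates the formulas of Theorem 3 and the choice of p and q above (with \(d=2\), \(\gamma _{\mathcal {R}} = n\), \(\alpha = s\), \(\beta = s= w= t\), logarithms taken base 2, and illustrative values for \(s\) and \(n\)); it reproduces only these formulas and not the further optimisation over \(\rho \) and \(n\) that underlies the concrete sizes reported in Fig. 2.

from math import ceil, log2

def parameter_bit_lengths(s, n, d=2, rho=1):
    # Evaluate the formulas of Theorem 3 with beta = alpha = s = w = t and gamma_R = n.
    w = t = beta = alpha = s
    gamma = n
    delta_p = (s + w + d) ** d * alpha ** (d + 1) * gamma ** d
    p = delta_p * n * ceil(log2(n))                    # p ~ delta_p * n * log n
    delta_0 = (2 * p * t * (s + d) ** d * (w + d) ** (2 * d)
               * alpha ** (2 * d + 1) * beta * gamma ** (2 * d + 2))
    delta_1 = w * alpha * beta * gamma
    q = delta_0 ** rho * n * ceil(log2(n))             # q ~ delta_0^rho * n * log n
    return {name: round(log2(val), 1)
            for name, val in [("delta_0", delta_0), ("delta_1", delta_1), ("p", p), ("q", q)]}

print(parameter_bit_lengths(s=2 ** 20, n=2 ** 12))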

Table 2. Computation complexities (in number of \(\mathcal {R}\) or \(\mathcal {R}_q\) operations) of our VC.

To analyse computation complexity, we assume the concrete parameter choices in Theorem 3 with the exception that \(s\), \(w\), \(t\) are treated as free variables for more fine-grained complexity measures and to highlight the benefits of preprocessing. For simplicity, we assume \(\max \{s, w, t\} \ge n\). The computation complexities (in number of \(\mathcal {R}\) or \(\mathcal {R}_q\) operations) of \(\textsf{Com} \), \(\textsf{Open} \), \(\textsf{PreVerify} \), and \(\textsf{Verify}\) are reported in Table 2. Note that each \(\mathcal {R}\) or \(\mathcal {R}_q\) operation takes at most \(\textsf{poly}(\lambda , \textrm{log}\, s, \textrm{log}\, w, \log t)\) time. In summary, the combined time needed to commit to \({\boldsymbol{\textbf{x}}}\) and open to \(f({\boldsymbol{\textbf{z}}},\cdot )\) is quasi-quadratic in the time needed to compute \(f({\boldsymbol{\textbf{z}}},{\boldsymbol{\textbf{x}}})\), and the time needed to pre-verify (fy) is quasi-linear in the time needed to compute \(f({\boldsymbol{\textbf{z}}},{\boldsymbol{\textbf{x}}})\). We highlight that the online verification cost, i.e. the computation complexity of \(\textsf{Verify}\), is dominated additively by \(s^d\) where \(s\) is the dimension of the public input. In applications where \(s^d = O(\log w+ \log t)\) and setting \(\beta = \varTheta (w+ t)\), the online verification cost (in number of bit operations) is \(O(n \log w+ n \log t)\).