
1 Introduction

Code-based cryptography, introduced by McEliece [29], is one of the most promising approaches for designing cryptosystems that remain secure against quantum attacks. The McEliece public-key encryption scheme, based on binary Goppa codes, has so far resisted all cryptanalysis efforts, but it is rarely used in practice because of its large public-key size. In order to decrease the public-key size, several variants were proposed that restrict the scheme to subclasses of alternant/Goppa codes admitting very compact public matrices, typically quasi-cyclic (QC), quasi-dyadic (QD), or quasi-monoidic (QM) matrices [2, 14, 18, 27, 28, 30, 36]. The security of the McEliece cryptosystem relies on the fact that the public key has no known exploitable structure, so the attacker faces the problem of decoding a random linear code. The standard tool for this task is the Information-Set Decoding (ISD) algorithm, introduced by Prange in 1962 [38]. Its principle is to find an information set that contains no error positions, and its goal is to solve the Computational Syndrome Decoding (CSD) problem.

In this paper, we extend the best known version of the ISD algorithm to arbitrary codes over \(\mathbb {F}_q\) and analyze the security of such codes against this improved version. It is important to note that Peters used the ISD attack to analyze the security of arbitrary codes over \(\mathbb {F}_q\) [37]; later, Ayoub et al. introduced a polynomial-time structural attack against Wild McEliece over quadratic extensions [9]. Recently, Hirose applied the May-Ozerov algorithm for the Nearest-Neighbor problem over \(\mathbb {F}_q\) to generalize Stern’s ISD, and he observed that the May-Ozerov Nearest-Neighbor algorithm may not improve Stern’s ISD [19]. The contribution of this paper is the generalization of Becker, Joux, May and Meurer’s ISD using the May-Ozerov algorithm for the Nearest-Neighbor problem [32] over \(\mathbb {F}_q\), with q an arbitrary prime power. We analyze how the May-Ozerov Nearest-Neighbor algorithm over an arbitrary finite field \(\mathbb {F}_q\) contributes to the performance of Becker, Joux, May and Meurer’s ISD, and we analyze the resulting security over an arbitrary finite field \(\mathbb {F}_q\).

q-ary Computational Syndrome Decoding (CSD) Problem

Input: \(\mathbf {H}\in \mathbb {F}_{q}^{\left( n-k\right) \times n}\), \({\varvec{s}} \in \mathbb {F}_{q}^{n-k}\) and an integer \(\omega >0\).

Output: a vector \({\varvec{e}} \in \mathbb {F}_{q}^{n}\) of weight \(\le \omega \) such that \(\mathbf {H}{\varvec{e}}^{T}={\varvec{s}}\).
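To make the problem statement concrete, the sketch below checks whether a candidate vector solves a given CSD instance; the matrix, syndrome and field size are hypothetical toy values chosen for illustration only.

```python
import numpy as np

def is_csd_solution(H, s, e, w, q):
    """Check that e has weight <= w and H e^T = s over F_q."""
    weight = np.count_nonzero(e)
    syndrome = H.dot(e) % q              # H e^T computed modulo q
    return weight <= w and np.array_equal(syndrome, s % q)

# Toy instance over F_3 (hypothetical values, for illustration only).
q, w = 3, 1
H = np.array([[1, 2, 0, 1],
              [0, 1, 1, 2]])            # (n-k) x n parity-check matrix
e = np.array([0, 0, 2, 0])              # candidate error vector of weight 1
s = H.dot(e) % q                        # its syndrome
print(is_csd_solution(H, s, e, w, q))   # True
```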

Information-Set Decoding (ISD) Algorithm. The best known attacks against the classical McEliece code-based cryptosystem are generic decoding attacks that treat McEliece’s hidden binary Goppa codes as random linear codes. Introduced by Prange in 1962 (see [38]), the ISD algorithm is such a generic decoding attack. Its goal is to solve the CSD problem, taking as inputs only a basis of the code and a noisy codeword. Improvements of this form of ISD were developed by Lee and Brickell [25], Stern [40], May, Meurer and Thomae [31], and Becker, Joux, May and Meurer (BJMM-ISD) [4]; later, May and Ozerov [32] used a nearest-neighbor algorithm to improve BJMM-ISD.
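As a baseline for the improvements discussed in this paper, the following sketch implements plain Prange ISD over \(\mathbb {F}_q\): repeatedly permute the columns of \(\mathbf {H}\), reduce the last \(n-k\) columns to the identity by Gaussian elimination, and hope that all \(\omega \) errors fall outside the chosen information set. This is a didactic, unoptimized sketch that assumes q prime (so modular inverses exist via Fermat's little theorem); it is not the algorithm analyzed in this paper.

```python
import numpy as np
import random

def gauss_mod_q(H, s, q):
    """Reduce the last n-k columns of H to the identity (mod prime q).
    Returns (H_reduced, s_reduced) or None if those columns are singular."""
    H, s = H.copy() % q, s.copy() % q
    r, n = H.shape
    for i in range(r):
        col = n - r + i
        piv = next((j for j in range(i, r) if H[j, col] != 0), None)
        if piv is None:
            return None
        H[[i, piv]], s[[i, piv]] = H[[piv, i]], s[[piv, i]]
        inv = pow(int(H[i, col]), q - 2, q)      # inverse of the pivot, q assumed prime
        H[i], s[i] = (H[i] * inv) % q, (s[i] * inv) % q
        for j in range(r):
            if j != i and H[j, col] != 0:
                f = int(H[j, col])
                H[j] = (H[j] - f * H[i]) % q
                s[j] = (s[j] - f * s[i]) % q
    return H, s

def prange_isd(H, s, w, q, iterations=10000):
    """Plain Prange ISD: find e with H e^T = s and wt(e) = w (toy version)."""
    r, n = H.shape
    for _ in range(iterations):
        perm = list(range(n))
        random.shuffle(perm)                     # random information set
        reduced = gauss_mod_q(H[:, perm], s, q)
        if reduced is None:
            continue
        _, s_bar = reduced
        if np.count_nonzero(s_bar) == w:         # all errors outside the information set
            e = np.zeros(n, dtype=int)
            e[np.array(perm)[n - r:]] = s_bar    # undo the column permutation
            return e
    return None

# Toy demo over F_3 (illustrative parameters only).
q, n, k, w = 3, 14, 7, 2
rng = np.random.default_rng(0)
H = rng.integers(0, q, size=(n - k, n))
e_true = np.zeros(n, dtype=int); e_true[[2, 9]] = [1, 2]
s = H.dot(e_true) % q
e_found = prange_isd(H, s, w, q)
print(e_found is not None and np.array_equal(H.dot(e_found) % q, s))
```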

Organisation of the Paper. The paper is organized as follows: in Sect. 2 we give some definitions and notation from coding theory; in Sect. 3 we summarize previous and recent results on ISD algorithms over an arbitrary finite field \(\mathbb {F}_q\); in Sect. 4 we give the version of BJMM-ISD using the May-Ozerov Nearest-Neighbor algorithm; and in Sect. 5 we give a numerical analysis of the time complexity of our algorithm.

2 Coding Theory Background

2.1 Definitions and Notations

Let \(\mathbb {F}_q\) be a finite field (\(q = p^m\), p prime). A q-ary linear code \(\mathcal {C}\) of length n and dimension k over \(\mathbb {F}_q\) is a k-dimensional vector subspace of the full vector space \(\mathbb {F}_q^n\). It can be specified by a full-rank matrix \(\mathbf {G}\in \mathbb {F}_q^{k\times n}\), called a generator matrix of \(\mathcal {C}\), whose rows span the code. Namely, \(\mathcal {C} =\left\{ {\varvec{x}}\mathbf {G} \ \ such\ \ that \ \ {\varvec{x}}\in \mathbb {F}_q^{k}\right\} .\)

A linear code can also be defined as the right kernel of a matrix \(\mathbf {H}\), called a parity-check matrix of \(\mathcal {C}\), as follows:

$$\begin{aligned} \mathcal {C}=\left\{ {\varvec{x}}\in \mathbb {F}_q^{n} \ \ such\ \ that \ \ \mathbf {H}{\varvec{x}}^T={\varvec{0}}\right\} . \end{aligned}$$

The Hamming distance between two codewords is the number of positions (coordinates) where they differ. The minimal distance of a code is the minimal Hamming distance between any two distinct codewords. The weight of a word \({\varvec{x}}\in \mathbb {F}_q^n\), denoted by \(wt\left( {\varvec{x}}\right) \), is the number of its nonzero positions. The minimal weight of a code \(\mathcal {C}\) is then the minimal weight of all its nonzero codewords. If a code \(\mathcal {C}\) is linear, its minimal distance is equal to its minimal weight.

Let \(\mathcal {C}\) be a q-ary linear code of length n, dimension k and generator matrix \(\mathbf {G}=\left( {\varvec{g}}_0, {\varvec{g}}_1,\ldots ,{\varvec{g}}_{n-1} \right) \), where the columns satisfy \({\varvec{g}}_i\in \mathbb {F}_q^k\) for all \(i\in \left\{ 0, 1,\ldots , n-1 \right\} \). Let \(I\subset \left\{ 0, 1,\ldots , n-1 \right\} \) with \(|I|=k\). We call I an information set if and only if the matrix \(\mathbf {G}_I=\left( {\varvec{g}}_i\right) _{i\in I} \) is invertible.

A vector \({\varvec{u}}\in \mathbb {F}_q^{\ell }\) is called a balanced vector if, for every \(x \in \mathbb {F}_q\), the number of its coordinates equal to x is \(\ell /q\) (which requires q to divide \(\ell \)).

For \({\varvec{x}} = \left( x_1 ,\ldots ,x_n \right) \in \mathbb {F}_q^{n}\) and a nonzero integer \(j < n\), let \({\varvec{x}}_{[j]}= \left( x_1 ,\ldots ,x_j\right) \) and \({\varvec{x}}^{[j]} = \left( x_{n-j+1} ,\ldots ,x_n \right) \).

We denote the q-ary entropy function by:

$$\begin{aligned} H_q(x)=x\log _q\left( q-1\right) -x\log _q\left( x\right) -\left( 1-x\right) \log _q\left( 1-x\right) \end{aligned}$$
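As a sanity check, the following snippet evaluates \(H_q\) numerically and compares the estimate \(\binom{n}{k}\left( q-1\right) ^{k}\approx q^{nH_q\left( k/n\right) }\), which is used later in the complexity analysis; the numerical parameters are arbitrary illustrative choices.

```python
from math import comb, log

def Hq(x, q):
    """q-ary entropy function with logarithms in base q."""
    if x in (0.0, 1.0):
        return 0.0 if x == 0.0 else log(q - 1, q)
    return (x * log(q - 1, q)
            - x * log(x, q)
            - (1 - x) * log(1 - x, q))

# Compare binom(n, k) * (q-1)^k with q^(n * Hq(k/n)) on toy values.
q, n, k = 3, 600, 150
exact = log(comb(n, k) * (q - 1) ** k, q)   # log_q of the left-hand side
approx = n * Hq(k / n, q)
print(exact, approx)                        # the two exponents are close
```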

For any positive integer n, let \([n]=\left\{ 1,\ldots , n \right\} \). If I is a subset of [n], then for any vector \({\varvec{x}}=\left( x_1,\ldots , x_n\right) \), let \({\varvec{x}}_I=\left( x_i\right) _{i\in I}\).

2.2 McEliece’s Cryptosystem

McEliece’s cryptosystem is a public-key encryption scheme introduced in 1978 by McEliece. The original version, based on binary Goppa codes, remains unbroken. The scheme can also be instantiated with any class of codes that has an efficient decoding algorithm.

Secret keys: a generator matrix \(\mathbf {G}\in \mathbb {F}_{2}^{k\times n}\) of a code with an efficient decoding algorithm, an invertible matrix \(\mathbf {S}\in \mathbb {F}_{2}^{k\times k}\), and a random permutation matrix \(\mathbf {P} \in \mathbb {F}_{2}^{n\times n}\).

Public keys: the matrix \(\tilde{\mathbf {G}}=\mathbf {S}\mathbf {G}\mathbf {P}\) and the error-correcting capability t.

Encryption: Let \({\varvec{m}}\) be a plaintext; then the ciphertext \({\varvec{c}}\) is given by:

$$ {\varvec{c}}={\varvec{m}}\tilde{\mathbf {G}} + {\varvec{e}} $$

with \({\varvec{e}}\in \mathbb {F}_2^{n}\) a random vector of length n and weight t.

Decryption: Compute

$$ \tilde{{\varvec{c}}}={\varvec{m}}\tilde{\mathbf {G}}\mathbf {P}^{-1} + {\varvec{e}}\mathbf {P}^{-1} $$

and then use the decoding algorithm to find \( \tilde{{\varvec{m}}}={\varvec{m}}\mathbf {S}\); finally, recover \({\varvec{m}}\) by computing \({\varvec{m}}= \tilde{{\varvec{m}}}\mathbf {S}^{-1}\).
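The correctness of decryption follows from a short computation, spelled out here for completeness:

$$\begin{aligned} \tilde{{\varvec{c}}}={\varvec{c}}\mathbf {P}^{-1}=\left( {\varvec{m}}\mathbf {S}\mathbf {G}\mathbf {P}+{\varvec{e}}\right) \mathbf {P}^{-1}={\varvec{m}}\mathbf {S}\mathbf {G}+{\varvec{e}}\mathbf {P}^{-1}. \end{aligned}$$

Since \(\mathbf {P}^{-1}\) is a permutation matrix, \(wt\left( {\varvec{e}}\mathbf {P}^{-1}\right) =wt\left( {\varvec{e}}\right) =t\), so the decoding algorithm applied to \(\tilde{{\varvec{c}}}\) recovers the codeword \(\tilde{{\varvec{m}}}\mathbf {G}\) with \(\tilde{{\varvec{m}}}={\varvec{m}}\mathbf {S}\).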

2.3 Nearest-Neighbor Problem

The nearest-neighbor (NN) problem over the binary field, defined in [32], was generalized to other finite fields in [19].

Nearest-Neighbor Problem over \(\mathbb {F}_q\) : Let q be a prime power. Let m be a positive integer. Let \(0<\gamma <1/2\) and \(0<\lambda <1\). The (m, \(\gamma \), \(\lambda \))-NN problem is defined as follows:

Input: The constant \(\gamma \) and two lists \(U\subset \mathbb {F}_q^m\), \(V\subset \mathbb {F}_q^m\) of size \(\left| U\right| = \left| V\right| =q^{\lambda m}\) with uniform and pairwise independent vectors.

Output: the set \(\mathcal {C}\subset U\times V\) of pairs \(\left( {\varvec{u}}, {\varvec{v}}\right) \) such that \(wt({\varvec{u}}-{\varvec{v}})=\gamma m\), where \(wt\left( {\varvec{u}}-{\varvec{v}}\right) \) denotes the Hamming weight of \({\varvec{u}}-{\varvec{v}}\).
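The naive way to solve an \((m, \gamma , \lambda )\)-NN instance is to test all \(|U|\cdot |V|\) pairs; the May-Ozerov algorithm improves on this quadratic baseline. The sketch below shows only the naive baseline, on hypothetical toy lists, as a reference point.

```python
import numpy as np

def naive_nn(U, V, gamma, q):
    """Return all pairs (u, v) with wt(u - v) = gamma * m, by exhaustive search."""
    m = U.shape[1]
    target = round(gamma * m)
    pairs = []
    for u in U:
        for v in V:
            if np.count_nonzero((u - v) % q) == target:
                pairs.append((u, v))
    return pairs

# Toy instance over F_3 with m = 6 (illustrative values only).
rng = np.random.default_rng(0)
q, m, gamma = 3, 6, 1 / 3
U = rng.integers(0, q, size=(8, m))
V = rng.integers(0, q, size=(8, m))
print(len(naive_nn(U, V, gamma, q)))
```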

3 Previous Work on Information-Set Decoding over \(\mathbb {F}_q\)

We denote in the rest of the paper the concatenation of two vectors \({\varvec{x}}\) and \({\varvec{y}}\) (respectively of two matrices \(\mathbf {A}\) and \(\mathbf {B}\)) by \(\left( {\varvec{x}}|{\varvec{y}}\right) \) (respectively \(\left( \mathbf {A} |\mathbf {B}\right) \)).

In this section we survey the generalizations of the ISD algorithm over an arbitrary finite field.

Peters: In 2009, Peters was the first to propose a generalization of the ISD algorithm over an arbitrary finite field \(\mathbb {F}_q\). In her paper [37], she proposed a generalization of Stern-ISD, on which all subsequent ISD improvements are based.

Cayrel et al.: In 2010, just a few months after Peters’s paper, Cayrel et al. [34] improved the performance of ISD over an arbitrary finite field and gave a lower bound for ISD algorithms, generalizing the lower-bound formula introduced by Finiasz et al. in [15].

Meurer: In 2012, shortly after the binary-case ISD algorithms of [4, 31], Meurer proposed a new generalization of the ISD algorithm over an arbitrary finite field in his dissertation [33], based on these two papers.

Hirose: In 2016, Hirose gave a generalization of the nearest-neighbor algorithm introduced by May and Ozerov [32] and used it to generalize the Stern-ISD algorithm. He also analyzed the contribution of the May-Ozerov nearest-neighbor algorithm over an arbitrary finite field to the performance of the Stern-ISD algorithm over an arbitrary finite field.

The following tables summarize the complexity results of the previous work on ISD algorithm generalizations. We denote the generalization given by Peters by q-Stern-ISD and Hirose’s generalization by q-Hirose-ISD. Meurer’s dissertation gives two variants of the generalized ISD algorithm; we denote the basic variant by BReps and the extended variant by XBReps.

Table 1. Complexity of ISD algorithm over an arbitrary finite field given in [19].
Table 2. Complexity of ISD algorithm over an arbitrary finite field given by Meurer in [33].

4 Becker, Joux, May and Meurer ISD Using May-Ozerov Nearest-Neighbor Algorithm over \(\mathbb {F}_q\)

The Becker, Joux, May and Meurer ISD using May-Ozerov Nearest-Neighbor algorithm over an arbitrary finite field \(\mathbb {F}_q\) is presented in Algorithm 1.

In this algorithm we construct Base Lists over \(\mathbb {F}_q\) as in [4]. For \(j=0,\ 1\) we denote the Base Lists by \(\mathcal {B}_{j,1}^{\mathcal {L}_j}\), \(\mathcal {B}_{j,2}^{\mathcal {L}_j}\), \(\mathcal {B}_{j,1}^{\mathcal {R}_j}\) and \(\mathcal {B}_{j,2}^{\mathcal {R}_j}\). We define \(\mathcal {B}_{j,1}^{\mathcal {L}_j}\) as follows:

Let \(\mathcal {P}_{j,1}^{\mathcal {L}_j}\) and \(\mathcal {P}_{j,2}^{\mathcal {L}_j}\) be a partition of \([k+\ell ]=\left\{ 1,\ldots ,k+\ell \right\} \) such that \(\left| \mathcal {P}_{j,1}^{\mathcal {L}_j}\right| =\left| \mathcal {P}_{j,2}^{\mathcal {L}_j}\right| =\dfrac{k+\ell }{2}\); then

$$ \mathcal {B}_{j,1}^{\mathcal {L}_j}=\left\{ {\varvec{x}}\in \mathbb {F}_q^{k+\ell }\times \left\{ {\varvec{0}}^{n-k-\ell }\right\} \ \ s.t.\ \ wt\left( {\varvec{x}}\right) =\dfrac{p}{8}+\dfrac{\epsilon _1}{4}+\dfrac{\epsilon _2}{2} \ \ with\ \ {\varvec{x}}_{\mathcal {P}_{j,2}^{\mathcal {L}_j}}=(0,0,\ldots ,0)\right\} $$

where p, \(\epsilon _1\) and \(\epsilon _2\) are parameters of the algorithm such that \(0\le p<k+\ell \), \(0<\epsilon _1 < k+\ell -p\), \(0<\epsilon _2< k+\ell -\dfrac{p}{2}-\epsilon _1\). The construction of \( \mathcal {B}_{j,2}^{\mathcal {L}_j}\), \( \mathcal {B}_{j,1}^{\mathcal {R}_j}\) and \( \mathcal {B}_{j,2}^{\mathcal {R}_j}\) is similar.

We use these Base Lists to compute a vector \({\varvec{e}}\in \mathbb {F}_q^{k+\ell }\times \left\{ {\varvec{0}}^{n-k-\ell }\right\} \) such that \(wt\left( {\varvec{e}}_{[k+\ell ]}\right) =p\) and \({\varvec{e}}={\varvec{e}}_1-{\varvec{e}}_2\) with \({\varvec{e}}_1\), \({\varvec{e}}_2\in \mathbb {F}_q^{k+\ell }\times \left\{ {\varvec{0}}^{n-k-\ell }\right\} \) and \(wt\left( {\varvec{e}}_1\right) =wt({\varvec{e}}_2)=\dfrac{p}{2}+\epsilon _1\).
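The splitting \({\varvec{e}}={\varvec{e}}_1-{\varvec{e}}_2\) is the representation technique of BJMM: each half carries half of the support of \({\varvec{e}}\) together with \(\epsilon _1\) extra positions whose contributions cancel in the difference. The sketch below constructs one such representation explicitly over a small field; it only illustrates the idea and is not part of the list construction of Algorithm 1.

```python
import numpy as np

def one_representation(e, eps, q, rng):
    """Split e (even weight p) into e1 - e2 with wt(e1) = wt(e2) = p/2 + eps over F_q."""
    support = np.flatnonzero(e)
    outside = np.setdiff1d(np.arange(len(e)), support)
    half = len(support) // 2
    left = rng.choice(support, size=half, replace=False)
    right = np.setdiff1d(support, left)
    extra = rng.choice(outside, size=eps, replace=False)
    e1, e2 = np.zeros_like(e), np.zeros_like(e)
    e1[left] = e[left]                   # e1 carries half of the support
    e2[right] = (-e[right]) % q          # e2 carries the other half (negated)
    vals = rng.integers(1, q, size=eps)
    e1[extra] = vals                     # the extra positions cancel in e1 - e2
    e2[extra] = vals
    assert np.array_equal((e1 - e2) % q, e % q)
    return e1, e2

rng = np.random.default_rng(1)
q, n, p, eps = 5, 12, 4, 2
e = np.zeros(n, dtype=int)
pos = rng.choice(n, size=p, replace=False)
e[pos] = rng.integers(1, q, size=p)
e1, e2 = one_representation(e, eps, q, rng)
print(np.count_nonzero(e1), np.count_nonzero(e2))   # both equal p/2 + eps
```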

Proposition 1

[33]. Let \(0\le p\le k+\ell \) be an integer and \({\varvec{e}}\in \mathbb {F}_q^{k+\ell }\times \left\{ {\varvec{0}}^{n-k-\ell }\right\} \) be a vector such that \(wt\left( {\varvec{e}}\right) =p\). For every integer \(\epsilon \) such that \(0\le \epsilon <k+\ell -p\), denote by \(\vartheta \left( k, \ell , \epsilon , p, q \right) \) the number of pairs \(\left( {\varvec{e}}_1, {\varvec{e}}_2\right) \) such that \({\varvec{e}}={\varvec{e}}_1-{\varvec{e}}_2\) with \({\varvec{e}}_1\), \({\varvec{e}}_2\in \mathbb {F}_q^{k+\ell }\times \left\{ {\varvec{0}}^{n-k-\ell }\right\} \) and \(wt\left( {\varvec{e}}_1\right) =wt({\varvec{e}}_2)=\dfrac{p}{2}+\epsilon \). It holds that

$$\begin{aligned} \vartheta \left( k, \ell , \epsilon , p, q \right) =\sum \limits _{i=0}^{\min \left( \frac{p}{2}, \epsilon \right) } \left( {\begin{array}{c}p-2i\\ \frac{p}{2}-i\end{array}}\right) \left( q-2\right) ^{2i}\left( {\begin{array}{c}k+\ell -p\\ \epsilon -i\end{array}}\right) \left( q-1\right) ^{\epsilon -i} \end{aligned}$$

Then \(\vartheta \left( k, \ell , \epsilon , p, q \right) \ge \left( {\begin{array}{c}p\\ \frac{p}{2}\end{array}}\right) \left( {\begin{array}{c}k+\ell -p\\ \epsilon \end{array}}\right) \left( q-1 \right) ^{\epsilon }\).

Asymptotically, using the inequality \(\log _q2<H_q\left( \frac{1}{2}\right) \), we obtain the lower bound \(\log _q \vartheta \left( k, \ell , \epsilon , p, q \right) \ge p\log _q2+\left( k+\ell -p\right) H_q\left( \frac{\epsilon }{k+\ell -p}\right) \) [33]. This brief analysis allows us to impose a constraint on some parameters of our algorithm.
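A minimal numerical check of Proposition 1: the snippet below computes \(\vartheta \left( k, \ell , \epsilon , p, q \right) \) from the sum above and compares it with the stated lower bound. The parameter values are small arbitrary examples.

```python
from math import comb

def nb_representations(k, l, eps, p, q):
    """Number of pairs (e1, e2) with e = e1 - e2, following Proposition 1."""
    return sum(comb(p - 2 * i, p // 2 - i) * (q - 2) ** (2 * i)
               * comb(k + l - p, eps - i) * (q - 1) ** (eps - i)
               for i in range(min(p // 2, eps) + 1))

def lower_bound(k, l, eps, p, q):
    """Lower bound binom(p, p/2) * binom(k+l-p, eps) * (q-1)^eps."""
    return comb(p, p // 2) * comb(k + l - p, eps) * (q - 1) ** eps

# Small illustrative parameters.
k, l, eps, p, q = 20, 6, 3, 4, 5
print(nb_representations(k, l, eps, p, q) >= lower_bound(k, l, eps, p, q))  # True
```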

(Algorithm 1. The q-BJMM-MO algorithm; pseudocode not reproduced here.)

The complexity of the q-BJMM-MO algorithm is given by:

Theorem 1

Let \(\varepsilon >0\) be a real number. The q-BJMM-MO algorithm solves the Syndrome Decoding problem of a random [n, k]-linear code over \(\mathbb {F}_q\) with overwhelming probability in time

$$\begin{aligned} \tau \left( q, n, k, p, \omega , h_x, \varepsilon \right) =\tilde{\mathcal {O}}\left( q^{n\tau _1}\left( q^{n\tau _2}+q^{2n\tau _2-r_1}+q^{4n\tau _2-r_1-\ell }+q^{n\mu }+q^{\left( y+\varepsilon \right) \left( n-k-\ell \right) }\right) \right) \end{aligned}$$

where

$$\begin{aligned} \tau _1=\left( H\left( \frac{\omega }{n}\right) -\left( \frac{k+\ell }{n}\right) H\left( \frac{p}{k+\ell }\right) -\left( 1-\frac{k+\ell }{n} \right) H\left( \frac{\omega -p}{n-k-\ell }\right) \right) \log _q 2, \end{aligned}$$
$$\begin{aligned} \tau _2=\frac{k+\ell }{2n}H_q\left( \frac{\frac{p}{4}+\frac{\epsilon _1}{2}+\epsilon _2}{k+\ell }\right) \ \ \ and \ \ \ \mu =\frac{k+\ell }{n}H_q\left( \frac{\frac{p}{2}+\epsilon _1}{k+\ell }\right) -\frac{\ell }{n} \end{aligned}$$

with

$$\begin{aligned}y=\left( 1-\gamma \right) \left( H_q\left( \beta \right) -\frac{1}{q}\sum \limits _{x\in \mathbb {F}_q}H_q\left( \frac{qh_x-\gamma }{1-\gamma }\beta \right) \right) ,\ \ \ \gamma =\frac{\omega -p}{n-k-\ell }, \ \ \ 0<\beta <1, \end{aligned}$$
$$\begin{aligned} \max \left\{ 0, \omega +k+\ell -n\right\} \le p \le \min \left\{ k+\ell , \omega \right\} , \ \ \ \sum \limits _{x\in \mathbb {F}_q}h_x=1, \end{aligned}$$
$$\begin{aligned} \frac{\gamma }{q}<h_x<\frac{\gamma }{q}+\frac{1-\gamma }{q\beta }\ \ \ for \ \ \ each\ \ \ x\in \mathbb {F}_q , \end{aligned}$$
$$\begin{aligned} \ell =p\log _q2+\left( k+\ell -p\right) H_q \left( \frac{\epsilon _1}{k+\ell -p}\right) \ \ \ and \ \ \ \ell \le \min \left\{ n-k-\omega +p, n-k\right\} \end{aligned}$$
$$\begin{aligned} r_1=\left( \frac{p}{2}+\epsilon _1\right) \log _q2+\left( k+\ell -\frac{p}{2}-\epsilon _1\right) H_q\left( \frac{\epsilon _2}{ k+\ell -\frac{p}{2}-\epsilon _1}\right) \end{aligned}$$
$$\begin{aligned} \lambda =\frac{n \mu }{n-k-\ell }\le H_q\left( \beta \right) -\frac{1}{q}\sum \limits _{x\in \mathbb {F}_q} H_q\left( qh_x\beta \right) . \end{aligned}$$
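To make Theorem 1 easier to use, the following sketch evaluates the exponents \(\tau _1\), \(\tau _2\) and \(\mu \) for a fixed parameter set. It does not perform the numerical optimization over p, \(\ell \), \(\epsilon _1\), \(\epsilon _2\), \(\beta \) and the \(h_x\) carried out in Sect. 5; all parameter values below are placeholders.

```python
from math import log

def H2(x):
    """Binary entropy function."""
    return 0.0 if x in (0.0, 1.0) else -x * log(x, 2) - (1 - x) * log(1 - x, 2)

def Hq(x, q):
    """q-ary entropy function (logarithms in base q)."""
    if x in (0.0, 1.0):
        return 0.0 if x == 0.0 else log(q - 1, q)
    return x * log(q - 1, q) - x * log(x, q) - (1 - x) * log(1 - x, q)

def exponents(q, n, k, l, p, w, eps1, eps2):
    """Exponents tau_1, tau_2 and mu of Theorem 1 for one parameter set."""
    tau1 = (H2(w / n) - (k + l) / n * H2(p / (k + l))
            - (1 - (k + l) / n) * H2((w - p) / (n - k - l))) * log(2, q)
    tau2 = (k + l) / (2 * n) * Hq((p / 4 + eps1 / 2 + eps2) / (k + l), q)
    mu = (k + l) / n * Hq((p / 2 + eps1) / (k + l), q) - l / n
    return tau1, tau2, mu

# Placeholder parameters, for illustration only.
print(exponents(q=3, n=1000, k=500, l=60, p=40, w=100, eps1=10, eps2=5))
```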

Proof

Recall that

$$\begin{aligned} \mathcal {T}(q, n, k, p,\omega , h_{x}, \varepsilon )=\frac{1}{\mathbb {P}(\pi _{succ})}\mathcal {C}_{in} \end{aligned}$$

where \(\mathbb {P}(\pi _{succ})\) is the probability of choosing a good permutation (one that leads to a successful decoding) and \(\mathcal {C}_{in}\) is the cost of one iteration, with:

$$\begin{aligned} \mathbb {P}(\pi _{succ})=\tilde{\mathcal {O}}\left( \frac{\left( {\begin{array}{c}k+\ell \\ p\end{array}}\right) \left( {\begin{array}{c}n-k-\ell \\ \omega -p\end{array}}\right) }{\left( {\begin{array}{c}n\\ \omega \end{array}}\right) }\right) \Longrightarrow \frac{1}{\mathbb {P}(\pi _{succ})}=\tilde{\mathcal {O}}\left( \frac{\left( {\begin{array}{c}n\\ \omega \end{array}}\right) }{\left( {\begin{array}{c}k+\ell \\ p\end{array}}\right) \left( {\begin{array}{c}n-k-\ell \\ \omega -p\end{array}}\right) }\right) . \end{aligned}$$

Using the estimate

$$\begin{aligned} \left( {\begin{array}{c}n\\ k\end{array}}\right) =\tilde{\mathcal {O}}\left( 2^{nH\left( \frac{k}{n}\right) }\right) , \end{aligned}$$

where H is the binary entropy function, we obtain

$$\begin{aligned} \begin{aligned} \frac{1}{\mathbb {P}(\pi _{succ})}&=\tilde{\mathcal {O}}\left( 2^{n\left( H(\frac{\omega }{n})- \frac{k+\ell }{n}H(\frac{p}{k+\ell })-(1-\frac{k+\ell }{n})H(\frac{\omega -p}{n-k-\ell })\right) }\right) \\&= \tilde{\mathcal {O}}\left( q^{n\left( H(\frac{\omega }{n})- \frac{k+\ell }{n}H(\frac{p}{k+\ell })-(1-\frac{k+\ell }{n})H(\frac{\omega -p}{n-k-\ell })\right) \log _{q}2}\right) \\&= \tilde{\mathcal {O}}\left( q^{n\tau _{1}} \right) . \end{aligned} \end{aligned}$$

Let us examine the complexity of each iteration. First, we construct the Base Lists; for each \(i=1, 2\) and \(j=1, 2\), the cardinality of a Base List is given by

$$\begin{aligned} |\mathcal {B}_{j, i}^{\mathcal {L}_{j}}|=\left( {\begin{array}{c}\frac{k+\ell }{2}\\ \frac{p}{8}+\frac{\epsilon _{1}}{4}+\frac{\epsilon _{2}}{2}\end{array}}\right) (q-1)^{\frac{p}{8}+\frac{\epsilon _{1}}{4}+\frac{\epsilon _{2}}{2}}. \end{aligned}$$

Then, by using the estimate

$$\begin{aligned} \left( {\begin{array}{c}n\\ k\end{array}}\right) (q-1)^{k}=\tilde{\mathcal {O}}\left( q^{nH_{q}(\frac{k}{n})} \right) , \end{aligned}$$

the complexity of computing the Base Lists is given by

$$\begin{aligned} \tilde{\mathcal {O}}\left( q^{n\left( \frac{k+\ell }{2n} H_{q}\left( \frac{\frac{p}{4}+\frac{\epsilon _{1}}{2}+\epsilon _{2}}{k+\ell }\right) \right) } \right) =\tilde{\mathcal {O}}\left( q^{n\tau _{2} } \right) . \end{aligned}$$

Second, we use the Base Lists to perform a filtering step that computes \(\mathcal {L}_{i}\) and \(\mathcal {R}_{i}\) for each \(i=1, 2\); the cost of this filtering is given by:

$$\begin{aligned} \tilde{\mathcal {O}}\left( \frac{|\mathcal {B}_{i, 1}^{\mathcal {L}_{i}}||\mathcal {B}_{i, 2}^{\mathcal {L}_{i}}|}{q^{r_{1}}}\right) =\tilde{\mathcal {O}}\left( q^{2n \tau _{2} -r_{1} } \right) . \end{aligned}$$

Third, we compute the lists \(\mathcal {L}\) and \(\mathcal {R}\) by another filtering step, whose cost is given by

$$\begin{aligned} \tilde{\mathcal {O}}\left( \frac{|\mathcal {L}_{1}||\mathcal {L}_{2}|}{q^{\ell -r_{1}}}\right) =\tilde{\mathcal {O}}\left( q^{4n \tau _{2} -r_{1}-\ell }\right) . \end{aligned}$$

Line 20 only gives the upper bound on \(|\mathcal {L}|=|\mathcal {R}|\).

$$\begin{aligned} \tilde{\mathcal {O}}\left( \frac{ \left( {\begin{array}{c}k+\ell \\ \frac{p}{2}+\epsilon _{1}\end{array}}\right) (q-1)^{\frac{p}{2}+\epsilon _{1}}}{q^{\ell }}\right) =\tilde{\mathcal {O}}\left( q^{n\left( \frac{k+\ell }{n}H_{q}\left( \frac{\frac{p}{2}+\epsilon _{1}}{k+\ell }\right) -\frac{\ell }{n}\right) }\right) =\tilde{\mathcal {O}}\left( q^{\mu n}\right) . \end{aligned}$$

Finally, we perform a last filtering step using the May-Ozerov Nearest-Neighbor algorithm, whose cost is given by:

$$\begin{aligned} \tilde{\mathcal {O}}\left( q^{(y+\varepsilon )(n-k-\ell )}\right) . \end{aligned}$$

We have \(|\mathcal {L}|=|\mathcal {R}|=q^{\mu n}\). Thus the May-Ozerov Nearest-Neighbor (MO-NN) algorithm is given an instance of the \((m, \gamma , \lambda )\)-NN problem with:

$$\begin{aligned} m=n-k-\ell , \ \ \ \gamma =\frac{\omega -p}{n-k-\ell } \end{aligned}$$

and

$$\lambda =\frac{\mu n }{n-k-\ell }.$$

According to Lemma 3 in [19] we must have

$$\begin{aligned} \lambda \le H_{q}(\beta ) -\frac{1}{q}\sum \limits _{x\in \mathbb {F}_{q}}H_{q}\left( qh_{x} \beta \right) . \end{aligned}$$

5 Numerical Analysis of Time Complexity

In this section we give numerically optimized time complexities of our algorithm, in the half-distance decoding setting using the code parameters given in [19], and in the full-distance decoding setting using the code parameters given in [33]. We give these complexities for \(q \ge 3\) because the case \(q = 2\) is already covered in [4, 31,32,33].

Table 3. Complexity of the q-BJMM-MO algorithm in the half distance decoding for parameters in [19].
Table 4. Complexity of the q-BJMM-MO algorithm in the full distance decoding for parameters in [33].

6 Conclusion

The May-Ozerov Nearest-Neighbor algorithm allows us to improve the generalization of BJMM-ISD. Tables 1 and 3 show that our generalization is faster than Hirose’s generalization in half-distance decoding, and comparing Tables 2 and 4 shows that it is also faster than Meurer’s generalization.