1 Introduction

The linear complexity and the k-error linear complexity of periodic sequences over finite fields are important randomness measures for pseudorandom sequence generators employed in stream ciphers. The linear complexity of a sequence is the length of the shortest linear feedback shift register (LFSR) that generates the sequence. If the shortest LFSR has length l, then 2l consecutive bits of the output determine its feedback polynomial via the Berlekamp-Massey algorithm [1, 2]. In other words, if the linear complexity of a sequence is l, then the Berlekamp-Massey algorithm requires just 2l bits to recover the whole sequence. Therefore, to resist known-plaintext attacks, the linear complexity of a sequence must be large enough. However, a sequence with large linear complexity is not necessarily difficult to predict. If the linear complexity of a sequence decreases drastically when a few bits in one period are changed, then the sequence is still cryptographically weak, because an attacker can at least determine a sequence which coincides with the correct sequence in all but a “small” number of positions. Thus the linear complexity of a sequence should also remain large even if some of its terms are changed. This observation leads to the study of the k-error linear complexity. The k-error linear complexity of a sequence s with period N is the minimal linear complexity that can be obtained for s by modifying up to k terms in one period (and modifying all other periods in the same way). This notion was defined in [3] and is closely related to the previously defined notions of sphere complexity [4] and weight complexity [5]. In [6], Niederreiter showed that there is a class of periodic sequences which possess large linear complexity and large k-error linear complexity simultaneously.

There exist several efficient algorithms [3, 7-9] which compute the linear complexity and the k-error linear complexity of a given periodic sequence. Stamp and Martin [3] extended the Games-Chan algorithm [7] to compute the k-error linear complexity of a binary sequence of period l=\(2^{n}\) using \(\mathcal {O}(l\log l)\) bit operations. The Stamp-Martin algorithm can be used to compute the entire error linear complexity spectrum of a sequence of period l=\(2^{n}\) using \(\mathcal {O}(l^{2}\log l)\) bit operations. Further, Lauder and Paterson [10] showed that the whole error linear complexity spectrum (i.e., the k-error linear complexity for each value of k) of a binary sequence of period l=\(2^{n}\) can be computed using \(\mathcal {O}(l(\log l)^{2})\) bit operations. The k-error linear complexity spectrum contains all the information about how the linear complexity of a sequence decreases as the number k of allowed bit changes increases. However, it remains a challenging open problem to devise an algorithm which efficiently computes the error linear complexity spectrum of a binary sequence of general period. Recent developments in stream ciphers point towards an interest in word-based (or vectorized) stream ciphers [11-17]. The theory of such stream ciphers requires the study of complexity measures for multisequences, i.e., for parallel streams of finitely many sequences. In this direction, the joint linear complexity and the k-error joint linear complexity of multisequences have been investigated [13-17]. Let \(\mathbb {F}_{2}\) be the finite field with elements 0 and 1, and let t and N be positive integers. A t-fold N-periodic multisequence S over \(\mathbb {F}_{2}\) is of the form S=(S[1],S[2],⋯ ,S[t]), where S[λ]=(S[λ,0],S[λ,1],⋯ ,S[λ,N−1]) is an N-periodic sequence with bits in \(\mathbb {F}_{2}\) for each λ=1,2,⋯ ,t. Here (S[λ,0],S[λ,1],⋯ ,S[λ,N−1]) denotes the first period of S[λ], which is repeated indefinitely. By N-periodic we mean that N is a period length of the sequence, though not necessarily the least period length. The joint linear complexity LC(S) is the least order of a linear recurrence relation that S[1],S[2],⋯ ,S[t] satisfy simultaneously, with LC(S)=0 if S is the zero multisequence. For multisequences, the definition of the k-error joint linear complexity was presented in [14-16]. For an integer k with 0≤k≤tN, the k-error joint linear complexity of a t-fold N-periodic multisequence is defined by \(LC_{k}(\mathbf {S})=\min \limits _{\mathbf {T}} LC(\mathbf {T})\), where the minimum is taken over all N-periodic t-fold multisequences T with term distance d(S,T)≤k. The term distance d(S,T) is defined to be the number of terms in S that differ from the corresponding terms in T. In particular, for k=0 the k-error joint linear complexity of a multisequence is simply its joint linear complexity. The paper [15] marks the beginning of the theory of k-error joint linear complexity measures for multisequences. As suggested in that paper, an interesting task would be to find analogs of all major results on the k-error linear complexity of single sequences for the case of multisequences. Sethumadhavan et al. [17] presented an algorithm to compute the k-error joint linear complexity of a \(2^{n}\)-periodic multisequence over \(\mathbb {F}_{2}\). We presented an algorithm (see [16]) to compute the k-error joint linear complexity of a \(p^{n}\)-periodic multisequence over \(\mathbb {F}_{q}\), where p is an odd prime and q is a primitive root modulo \(p^{2}\).
The k-error joint linear complexity spectrum contains all the information about how the joint linear complexity of a multisequence decreases as the number k of allowed bit changes increases.

In this paper, we present an efficient algorithm by which the k-error joint linear complexity spectrum of a \(p^{n}\)-periodic binary multisequence can be entirely determined, where p is an odd prime, 2 is a primitive root modulo \(p^{2}\), and n is a positive integer. On the one hand, our main results in Section 3, when reduced to the case of a single sequence, i.e., t=1, are a further generalization of the results in [10]. On the other hand, although the algorithm in [16] can also be used to determine the entire k-error joint linear complexity spectrum of a \(p^{n}\)-periodic binary multisequence, it requires \(\mathcal {O}(t(p-1)p^{n}\log p^{n})\) bit operations, where p is an odd prime and 2 is a primitive root modulo \(p^{2}\). The contribution of this paper is an algorithm that accomplishes the same task, i.e., computes the entire k-error joint linear complexity spectrum, using only \(\mathcal {O}(tp^{n}\log p)\) bit operations.

This paper is organized as follows. In Section 2, we give the necessary definitions, present some preliminary results, and explain the concept of costed binary multisequences. Section 3 contains pseudo-code for the main algorithm of the paper, as well as a proof of its correctness and an analysis of its computational complexity; the algorithm computes the k-error joint linear complexity spectrum of a binary multisequence S by computing the k-error joint linear complexity spectrum of its costed binary multisequence \(\mathcal {S}=(\mathbf {S},\sigma ,l)\). Section 4 concludes the paper by presenting an interesting open problem.

2 Definitions and basic results

Let S=(S[1],S[2],⋯ ,S[t]) be a binary multisequence of length l, where S[λ]=(s[λ,0],s[λ,1],⋯ ,s[λ,l−1]), 1≤λ≤t, denotes the λth single sequence of S. Let σ=(σ[1],σ[2],⋯ ,σ[t]) be a nonnegative real multisequence of length l, where σ[λ]=(σ[λ,0],σ[λ,1],⋯ ,σ[λ,l−1]), 1≤λ≤t, and l is a positive integer. We call the triple \(\mathcal {S}=(\mathbf {S},\sigma ,l)\) a costed binary multisequence and σ the cost multisequence of S. Our main algorithm proceeds step by step, and σ[λ,j] measures the cost of changing the bit s[λ,j], 0≤j≤l−1, at the current step without disturbing the results of any previous step. The cost is the contribution to the error count. In particular, when σ[λ,i]=1 for all λ and i, 1≤λ≤t, 0≤i≤l−1, the cost is the Hamming weight of the error vector. This notion is analogous to the cost of a single sequence, first introduced in [1].

Now suppose that the period of the binary multisequence is l=\(p^{n}\), where p is an odd prime, 2 is a primitive root modulo \(p^{2}\), and n is a positive integer. In fact, l=\(p^{n}\) can be understood as the initial value of the variable l. We define two maps \(B(\mathcal {S})=(B(\mathbf {S}),B(\sigma ),\frac {l}{p})\) and \(D(\mathcal {S})=(D(\mathbf {S}),D(\sigma ),\frac {l}{p})\).

The pseudo-code definitions of \(B(\mathcal {S})=(B(\mathbf {S}),B(\sigma ),\frac {l}{p})\) and \(D(\mathcal {S})=(D(\mathbf {S}),D(\sigma ),\frac {l}{p})\) are as follows:

  1. The B map:

    $$\begin{array}{@{}rcl@{}} &Input& \ \ \mathcal{S}=(\mathbf{S},\sigma,l)\\ &Output& \ \ B(\mathcal{S})=(B(\mathbf{S}),B(\sigma),\frac{l}{p})\\ &For& 0\leq i<\frac{l}{p},1\leq \lambda \leq t\\ &&\ \ \ \ \ \ \ B(\mathbf{S})[\lambda,i]=\sum\limits_{j=0}^{p-1}s[\lambda,i+j\frac{l}{p}]\\ &&\ \ \ \ \ \ \ B(\sigma)[\lambda,i]=\min\limits\{\sigma[\lambda, i+j\frac{l}{p}]:j=0,1,\cdots,p-1\} \end{array} $$
  2. The D map:

    $$\begin{array}{@{}rcl@{}} &Input& \; \; \mathcal{S}=(\mathbf{S},\sigma,l)\\ &Output& \; \; D(\mathcal{S})=(D(\mathbf{S}),D(\sigma),\frac{l}{p})\\ &For& 0\leq i<\frac{l}{p},1\leq \lambda \leq t\\ &&\; \; \; \; \; \; \; T_{\lambda,i0}=\sum\limits_{j=0}^{p-1}\sigma[\lambda,i+j\frac{l}{p}]({s[\lambda,i+j\frac{l}{p}]\oplus 1)}\\ &&\ \ \ \ \ \ \ T_{\lambda,i1}=\sum\limits_{j=0}^{p-1}\sigma[\lambda,i+j\frac{l}{p}](s[\lambda,i+j\frac{l}{p}])\\ &&\ \ \ \ \ \ \ if \ s[\lambda,i]=s[\lambda,i+\frac{l}{p}]=\cdots=s[\lambda,i+(p-1)\frac{l}{p}]\\ &&\ \ \ \ \ \ \ \ \ D(\mathbf{S})[\lambda,i]=s[\lambda,i];D(\sigma)[\lambda,i]=\sum\limits_{j=0}^{p-1}\sigma[\lambda,i+j\frac{l}{p}]\\ &&\ \ \ \ \ \ \ else\\ &&\ \ \ \ \ \ \ \ \ \ \ if \ T_{\lambda,i0}>T_{\lambda,i1}\\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ D(\mathbf{S})[\lambda,i]=0,D(\sigma)[\lambda,i]=T_{\lambda,i0}-T_{\lambda,i1}\\ &&\ \ \ \ \ \ \ \ \ \ \ else\\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ D(\mathbf{S})[\lambda,i]=1,D(\sigma)[\lambda,i]=T_{\lambda,i1}-T_{\lambda,i0} \end{array} $$
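For concreteness, the two maps can be coded directly from the pseudo-code above. The following Python sketch is purely illustrative: the list-of-rows representation of S and σ and the names B_map and D_map are choices made here, not notation from the paper.

```python
# Illustrative implementation of the B and D maps on a costed binary
# multisequence (S, sigma, l); S and sigma are lists of t rows of length l.

def B_map(S, sigma, l, p):
    """B(S)[lam][i] is the sum mod 2 of the p bits in the coset
    {i, i + l/p, ..., i + (p-1)l/p}; B(sigma)[lam][i] is the cheapest
    cost among those p positions."""
    m = l // p
    BS = [[sum(row[i + j * m] for j in range(p)) % 2 for i in range(m)] for row in S]
    Bsig = [[min(row[i + j * m] for j in range(p)) for i in range(m)] for row in sigma]
    return BS, Bsig, m

def D_map(S, sigma, l, p):
    """Collapse each coset to the bit value that is cheaper to enforce on the
    whole coset; the new cost is the surplus needed to switch to the other
    value (or the total cost if the coset is already constant)."""
    m = l // p
    t = len(S)
    DS = [[0] * m for _ in range(t)]
    Dsig = [[0] * m for _ in range(t)]
    for lam in range(t):
        for i in range(m):
            bits = [S[lam][i + j * m] for j in range(p)]
            costs = [sigma[lam][i + j * m] for j in range(p)]
            T0 = sum(c for b, c in zip(bits, costs) if b == 0)  # T_{lam,i,0}
            T1 = sum(c for b, c in zip(bits, costs) if b == 1)  # T_{lam,i,1}
            if all(b == bits[0] for b in bits):
                DS[lam][i], Dsig[lam][i] = bits[0], sum(costs)
            elif T0 > T1:
                DS[lam][i], Dsig[lam][i] = 0, T0 - T1
            else:
                DS[lam][i], Dsig[lam][i] = 1, T1 - T0
    return DS, Dsig, m
```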

Now we restate two important results from [16] in the form of the following Lemmas 1 and 2.

Lemma 1

[16] Let S be a t-fold binary multisequence with period \(l=p^{n}\), where p is an odd prime and 2 is a primitive root modulo \(p^{2}\). Then LC(S) = LC(D(S)) if \(s[\lambda ,i]=s[\lambda ,i+\frac {l}{p}]=\cdots =s[\lambda ,i+(p-1)\frac {l}{p}]\) for all \(1\leq \lambda \leq t\), \(0\leq i\leq \frac {l}{p}-1\); otherwise, LC(S) = \((p-1)p^{n-1}\) + LC(B(S)).
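As an illustration of the recursion in Lemma 1, it can be turned into a short routine. The following Python sketch is illustrative only; the base case l = 1 (joint linear complexity 0 for the zero column and 1 otherwise) is a standard fact added here and is not part of the lemma statement.

```python
# Joint linear complexity of a t-fold binary multisequence of period l = p**n,
# following the recursion of Lemma 1 (S is a list of t rows of length l).

def joint_lc(S, l, p):
    if l == 1:
        return 0 if all(row[0] == 0 for row in S) else 1
    m = l // p
    constant_cosets = all(row[i + j * m] == row[i]
                          for row in S for i in range(m) for j in range(p))
    if constant_cosets:
        # here D(S) is simply the first section of each row
        return joint_lc([row[:m] for row in S], m, p)
    BS = [[sum(row[i + j * m] for j in range(p)) % 2 for i in range(m)] for row in S]
    return (p - 1) * m + joint_lc(BS, m, p)
```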

Definition 2

Let \(\mathcal {S}=(\mathbf {S},\sigma ,l)\) be a costed binary multisequence and E=(E[1],E[2],⋯ ,E[t]) be a binary multisequence with length l, where E[λ]=(e[λ,0],e[λ,1],⋯ ,e[λ,l−1]), 1≤λ≤t. Set \(cost (\mathbf {S} \rightarrow \mathbf {S}\oplus \mathbf {E})={\sum }_{e[\lambda ,i]=1}\sigma [\lambda ,i]\); then the k-error joint linear complexity of the costed binary multisequence \(\mathcal {S}\) is defined as

$$LC_{k}(\mathcal{S})=\min\limits_{cost(\mathbf{S}\rightarrow \mathbf{S}\oplus \mathbf{E})\leq k}LC(\mathbf{S}\oplus \mathbf{E})$$

where E is called the error multisequence of \(\mathcal {S}=(\mathbf {S},\sigma ,l)\).

If σ[λ,i]=1 for all λ and i, then the k-error joint linear complexity of \(\mathcal {S}\) given here agrees with the k-error joint linear complexity of the binary multisequence S as defined in [14, 15].

On the basis of the definitions of the two maps B and D, and by Lemma 1, Algorithm 2 in [16] can be expressed in the following form.

Lemma 2

[16] Let \(\mathcal {S}=(\mathbf {S},\sigma ,l)\) be a costed binary multisequence, where \(l = p^{n}\), p is an odd prime, and 2 is a primitive root modulo \(p^{2}\). Let \(T_{\lambda ,i0}=\sum \limits _{j=0}^{p-1}\sigma [\lambda ,i+j\frac {l}{p}](s[\lambda ,i+j\frac {l}{p}]\oplus 1)\), \(T_{\lambda ,i1}=\sum \limits _{j=0}^{p-1}\sigma [\lambda ,i+j\frac {l}{p}]{s[\lambda ,i+j\frac {l}{p}]}\), and \(T=\sum \limits _{\lambda =1}^{t}\sum \limits _{i=0}^{\frac{l}{p}-1}\min \{T_{\lambda ,i0},T_{\lambda ,i1}\}\). Then the k-error joint linear complexity of the costed multisequence \(\mathcal {S}\) is as follows

  1. \(LC_{k}(\mathcal {S})=(p-1)p^{n-1}+LC_{k}(B(\mathcal {S}))\), for 0≤k<T;

  2. \(LC_{k}(\mathcal {S})=LC_{k-T}(D(\mathcal {S}))\), for k≥T.
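The recursion of Lemma 2 can be sketched in a few lines of Python, reusing the B_map and D_map helpers sketched earlier in this section; the base case for l = 1 and the name lc_k are additions made here for illustration.

```python
# k-error joint linear complexity of the costed multisequence (S, sigma, l)
# by the recursion of Lemma 2; assumes B_map and D_map as sketched above.

def lc_k(S, sigma, l, p, k):
    if l == 1:
        if all(row[0] == 0 for row in S):
            return 0
        # cost of zeroing the remaining column (Definition 2)
        w = sum(sigma[lam][0] for lam in range(len(S)) if S[lam][0] == 1)
        return 0 if w <= k else 1
    m = l // p
    T = 0  # minimal cost of making every coset constant
    for lam in range(len(S)):
        for i in range(m):
            T0 = sum(sigma[lam][i + j * m] for j in range(p) if S[lam][i + j * m] == 0)
            T1 = sum(sigma[lam][i + j * m] for j in range(p) if S[lam][i + j * m] == 1)
            T += min(T0, T1)
    if k < T:
        BS, Bsig, _ = B_map(S, sigma, l, p)
        return (p - 1) * m + lc_k(BS, Bsig, m, p, k)
    DS, Dsig, _ = D_map(S, sigma, l, p)
    return lc_k(DS, Dsig, m, p, k - T)
```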

Remark 1

Actually, the reader can prove Lemma 2 directly by using the method of Lemma 3 in [10]. However, without technical improvements, that line of proof becomes extremely tedious for multisequences. Here we present an alternative way of obtaining Lemma 2, on the basis of our results in [16].

Remark 2

By Lemma 1, the term \((p-1)p^{n-1}\) does not contribute to the joint linear complexity if and only if \(s[\lambda ,i]=s[\lambda ,i+\frac {l}{p}]=\cdots =s[\lambda ,i+(p-1)\frac {l}{p}]\) for every λ and i, \(1\leq \lambda \leq t\), \(0\leq i\leq \frac {l}{p}-1\). If we change bits of the binary multisequence so as to force this condition to hold for every λ and i, then the cost of these changes is at least \(T=\sum \limits _{\lambda =1}^{t}\sum \limits _{i=0}^{\frac{l}{p}-1}\min \{T_{\lambda ,i0},T_{\lambda ,i1}\}\).

Definition 3

Let \(\mathcal {S}=(\mathbf {S},\sigma ,l)\) be a costed binary multisequence. The k-error joint linear complexity spectrum (EJLCS) of \(\mathcal {S}\) is defined as the k-error joint linear complexity sequence of \(\mathcal {S}\):

$$LC_{0}(\mathcal{S}),LC_{1}(\mathcal{S}),\cdots,LC_{cost(\mathbf{S}\rightarrow \mathbf{0})}(\mathcal{S}).$$

The spectrum can equivalently be viewed as the ordered list of points \(\{(k,LC_{k}(\mathcal {S})):0\leq k\leq cost(\mathbf {S}\rightarrow \mathbf {0})\}\), where cost(S→0) is cost(S→S⊕S) in the sense of Definition 2.

Note that we do not restrict k to be an integer, so the EJLCS of \(\mathcal {S}\) is not a finite set. However, it is clear from Definition 3 that \(LC_{k}(\mathcal {S})\) takes on only finitely many values, so the EJLCS of \(\mathcal {S}\) can be visualized as a graph with axes for cost and joint linear complexity and with points \((k,LC_{k}(\mathcal {S})), 0\leq k\leq cost(\mathbf {S}\rightarrow 0)\).

Lemma 3

Let \(\mathcal {S}=(\mathbf {S}, \sigma , p^{n})\) be a costed binary multisequence, where p is an odd prime and 2 is a primitive root modulo \(p^{2}\). Let \(B(\mathcal {S})=(B(\mathbf {S}), B(\sigma ), p^{n-1})\) with \(T=\sum \limits _{\lambda =1}^{t}\sum \limits _{i=0}^{p^{n-1}-1}\min \{T_{\lambda ,i0},T_{\lambda ,i1}\}\) and \(D(\mathcal {S})=(D(\mathbf {S}), D(\sigma ), p^{n-1})\) with U = cost(D(S)→0). Then the EJLCS of the multisequence \(\mathcal {S}\) is

$$\{(k,LC_{k}(B(\mathcal{S}))+(p-1)p^{n-1}):0\leq k < T\}\bigcup \{(T+k, LC_{k}(D(\mathcal{S}))):0\leq k \leq U\}. $$

Proof

By Definition 3 of the EJLCS of multisequence \(\mathcal {S}\), we immediately obtain the correctness of Lemma 3 from Lemma 2. □

Definition 4

Let \((k,LC_{k}(\mathcal {S}))\) be a point on the EJLCS of the costed binary multisequence \(\mathcal {S}\). We say that \((k,LC_{k}(\mathcal {S}))\) is critical if for all points \((k^{\prime }, LC_{k^{\prime }}(\mathcal {S}))\) of the EJLCS with \(k^{\prime }<k\) we have that \(LC_{k^{\prime }}(\mathcal {S})>LC_{k}(\mathcal {S})\).

In other words, critical points are the points on the graph of the EJLCS at which a decrease in the k-error joint linear complexity occurs. The sublist of all critical points in the EJLCS of \(\mathcal {S}\) is called the critical error joint linear complexity spectrum (CEJLCS) of \(\mathcal {S}\). We observe that the EJLCS of \(\mathcal {S}\) contains the point \((0, LC(\mathcal {S}))\). Because the joint linear complexity of S⊕E can take on only finitely many different values, the CEJLCS of \(\mathcal {S}\) contains a finite number of points. Note that the CEJLCS of a costed binary multisequence entirely determines its EJLCS and vice versa, which justifies the terminology “critical”. In Example 1 below, the CEJLCS of the period-27 binary multisequence \(\mathcal {S}\) is the set

$$\{(0,27),(1,26),(2,25),(3,24),(4,8),(6,7),(9,6),(13,2),(21,1), (33,0)\}. $$
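To make Definition 4 concrete, the following small Python helper extracts the critical points from an EJLCS given as a list of (cost, complexity) pairs; the function name and the input format are illustrative choices, not part of the paper.

```python
# Keep only the points at which the k-error joint linear complexity strictly
# decreases, i.e., the critical points of Definition 4.

def critical_points(ejlcs):
    crit, best = [], None
    for k, lc in sorted(ejlcs):
        if best is None or lc < best:
            crit.append((k, lc))
            best = lc
    return crit
```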

Lemma 4

Let \(\mathcal {S}=(\mathbf {S}, \sigma , p^{n})\) be a costed binary multisequence, where p is an odd prime and 2 is a primitive root modulo \(p^{2}\). Let \(B(\mathcal {S})=(B(\mathbf {S}), B(\sigma ), p^{n-1})\) with \(T=\sum \limits _{\lambda =1}^{t}\sum \limits _{i=0}^{p^{n-1}-1}\min \{T_{\lambda ,i0},T_{\lambda ,i1}\}\) and \(D(\mathcal {S})=(D(\mathbf {S}), D(\sigma ), p^{n-1})\) with U=cost(D(S)→0). Let the CEJLCS of \(B(\mathcal {S})\) be \(\{(k_{i}, LC_{k_{i}}(B(\mathcal {S}))): 0 \leq i \leq m\}\) for some m, where \(k_{0}=0\) and \(k_{m}=T\), and the CEJLCS of \(D(\mathcal {S})\) be \(\{(K_{i},LC_{K_{i}}(D(\mathcal {S}))):0 \leq i \leq u\}\) for some u, where \(K_{0}=0\) and \(K_{u}=U\). Then the CEJLCS of \(\mathcal {S}\) is

$$\{(k_{0},LC_{k_{0}}(B(\mathcal{S}))+(p-1)p^{n-1}),\cdots, (k_{m-1},LC_{k_{m-1}}(B(\mathcal{S}))+(p-1)p^{n-1})\} $$
$$\bigcup\{(T+K_{0},LC_{K_{0}}(D(\mathcal{S}))),\cdots,(T+K_{u},LC_{K_{u}}(D(\mathcal{S})))\}. $$

Proof

Note that the CEJLCS of \(B(\mathcal {S})\) and of \(D(\mathcal {S})\) are their respective lists of critical points. By Definition 4, we can suppose that the critical points coming from \(B(\mathcal {S})\) are \((k_{0},LC_{k_{0}}(B(\mathcal {S}))+(p-1)p^{n-1}),\cdots , (k_{m-1},\ LC_{k_{m-1}} (B(\mathcal {S}))+(p-1)p^{n-1})\), where \(k_{i}<T\), \(i=0,1,\cdots ,m-1\), and that the critical points coming from \(D(\mathcal {S})\) are \((K_{0},LC_{K_{0}}(D(\mathcal {S}))),\cdots ,(K_{u},LC_{K_{u}}(D(\mathcal {S})))\), where \(0\leq K_{i}\leq U\). From Lemma 3, we immediately obtain the result. □

3 Computing CEJLCS of a costed binary multisequence

In this section we present an algorithm for computing the CEJLCS of a costed binary multisequence with period \(p^{n}\), where p is an odd prime and 2 is a primitive root modulo \(p^{2}\). The algorithm is recursive, calling itself with progressively shorter costed multisequences as input. It can be thought of as exploring a binary tree in which a node at depth i (0≤i≤n) corresponds to a costed binary multisequence of length \(p^{n-i}\) and the two edges emanating from a node correspond to the two mappings B and D that can be applied to this multisequence (Fig. 1).

Fig. 1  The graph of the CEJLCS of multisequence \(\mathcal {S}=(\mathbf {S},\sigma ,27)\)

In addition to a costed binary multisequence \(\mathcal {S}=(\mathbf {S},\sigma , p^{n})\), the algorithm takes three integers tsf, lim, and c as inputs. We now briefly explain these three variables. At any stage in the execution of the algorithm, the variable tsf is set to the total cost of the changes made to the multisequence so far, the variable lim denotes a limit on the total cost of changes one should consider when searching a particular part of the tree, and the variable c records the joint linear complexity already accumulated at the current node (cf. Theorem 1).

Algorithm 1  CEJLCS (pseudo-code)
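Since the boxed pseudo-code of Algorithm 1 is not reproduced here, the following Python sketch reconstructs it from Lemma 4 and the proof of Theorem 1, reusing the B_map and D_map helpers of Section 2. The parameter names follow the text (tsf, lim, c); the list-of-pairs output, the treatment of zero-cost positions, and the base-case cost (taken over positions holding a 1, per Definition 2) are readings made here rather than the authors' exact pseudo-code.

```python
# Critical error joint linear complexity spectrum of the costed multisequence
# (S, sigma, l) with l = p**n, sketched from Lemma 4 / Theorem 1.
# Returns the critical points with cost in [0, lim - tsf], shifted by (tsf, c).
# Costs are assumed to be nonnegative integers (cf. the remark before Remark 4).

def cejlcs(S, sigma, l, p, tsf, lim, c):
    if l == 1:
        if all(row[0] == 0 for row in S):
            return [(tsf, c)]
        # cost of zeroing the remaining column (Definition 2)
        w = sum(sigma[lam][0] for lam in range(len(S)) if S[lam][0] == 1)
        if w == 0:
            return [(tsf, c)]           # the column can be zeroed at no cost
        if tsf + w <= lim:
            return [(tsf, c + 1), (tsf + w, c)]
        return [(tsf, c + 1)]
    m = l // p
    T = 0                               # minimal cost of making every coset constant (Lemma 2)
    for lam in range(len(S)):
        for i in range(m):
            T0 = sum(sigma[lam][i + j * m] for j in range(p) if S[lam][i + j * m] == 0)
            T1 = sum(sigma[lam][i + j * m] for j in range(p) if S[lam][i + j * m] == 1)
            T += min(T0, T1)
    points = []
    if T > 0:                           # B branch: budgets strictly below T (cf. Remark 3)
        BS, Bsig, _ = B_map(S, sigma, l, p)
        points += cejlcs(BS, Bsig, m, p, tsf, min(tsf + T - 1, lim), c + (p - 1) * m)
    if tsf + T <= lim:                  # D branch: budgets of at least T
        DS, Dsig, _ = D_map(S, sigma, l, p)
        points += cejlcs(DS, Dsig, m, p, tsf + T, lim, c)
    return points
```

By Proposition 1, a call of the form cejlcs(S, sigma, p**n, p, 0, cost(S→0), 0) would then return the full CEJLCS.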

Theorem 1

Algorithm CEJLCS ((S,σ,l),tsf,lim,c), where tsf≤lim, outputs the following list of points:

$$\left\{(tsf+k_{0},c+LC_{k_{0}}(\mathcal{S})),(tsf+k_{1},c+LC_{k_{1}}(\mathcal{S})),\cdots, (tsf+k_{u},c+LC_{k_{u}}(\mathcal{S}))\right\} $$

where \((k_{0},LC_{k_{0}}(\mathcal {S})),\cdots ,(k_{u}, LC_{k_{u}}(\mathcal {S}))\) are the critical points of \(\mathcal {S}=(\mathbf {S},\sigma ,l)\) whose first coordinates lie in the range [0,lim−tsf].

Proof

First, we prove that Algorithm CEJLCS ((S,σ,l),tsf,lim,c), where tsf≤lim, outputs the following list of points

$$\left\{(tsf+k_{0},c+LC_{k_{0}}(\mathcal{S})),(tsf+k_{1},c+LC_{k_{1}}(\mathcal{S})),\cdots,(tsf+k_{u},c+LC_{k_{u}}(\mathcal{S}))\right\} $$

where \((k_{i},LC_{k_{i}}(\mathcal {S})), \ k_{i}\in [0, lim-tsf], \ 0\leq i \leq u\), are the critical points of \(\mathcal {S}=(\mathbf {S},\sigma ,l)\). We proceed by induction on n. For l=1, there are two cases:

  1. \((s[1,0],s[2,0],\cdots ,s[t,0])=\vec {0}\), i.e., cost(S→0)=0. In this case there is the single critical point (0,0), and the algorithm outputs (tsf,c) as required.

  2. \((s[1,0],s[2,0],\cdots ,s[t,0])\neq \vec {0}\), i.e., cost(S→0)>0. If \(\sum \limits _{\lambda =1}^{t}\sigma [\lambda ,0] \leq lim-tsf\), then there exist two critical points (0,1) and \(\left (\sum \limits _{\lambda =1}^{t}\sigma [\lambda ,0],0\right )\), and the algorithm outputs (tsf,c+1) and \(\left (tsf+\sum \limits _{\lambda =1}^{t}\sigma [\lambda ,0],c\right )\) as required. If \(\sum \limits _{\lambda =1}^{t}\sigma [\lambda ,0] > lim-tsf\), there exists the single critical point (0,1) in this range, and the algorithm outputs (tsf,c+1) as required.

Suppose now that the algorithm is correct for all costed binary multisequences of length \(p^{n^{\prime }}\) with \(n^{\prime }<n\). Then the computation of the CEJLCS of a costed binary multisequence \(\mathcal {S}=(\mathbf {S},\sigma ,p^{n})\) calls the following two subroutines:

  1. CEJLCS \(((B(\mathbf {S}),B(\sigma ),p^{n-1}),tsf,\min \{tsf+T-1,lim \},c+(p-1)p^{n-1})\), where \(T=\sum \limits ^{t}_{\lambda =1}\sum \limits ^{p^{n-1}-1}_{i=0}\min \{T_{\lambda ,i0},T_{\lambda ,i1}\}\). Note that T>0 here, so \(tsf\leq \min \{tsf+T-1,lim\}\).

  2. If tsf+T≤lim, the algorithm also calls CEJLCS \(((D(\mathbf {S}),D(\sigma ),p^{n-1}),tsf+T,lim,c)\), with T as above (note that in this case \(\min \{tsf+T-1,lim \}=tsf+T-1\)). By the inductive hypothesis applied to subroutine (1), the list of points

    $$\{(tsf+k_{0},c+(p-1)p^{n-1}+LC_{k_{0}}(B(\mathcal{S}))),\cdots,(tsf+k_{v},c+(p-1)p^{n-1}+LC_{k_{v}}(B(\mathcal{S})))\} $$

    is output by subroutine (1), where \((k_{i},LC_{k_{i}}(B(\mathcal {S})))\), \(k_{i}\in [0,\min \{T-1,lim-tsf\}]\), \(0\leq i\leq v\), are the critical points of \(B(\mathcal {S})\) in that range. By Lemma 4, if lim−tsf≤T−1, then these are all the points of the required output. Similarly, by Lemma 4, if lim−tsf≥T, then this list is the first portion of the required output, and by subroutine (2) the remaining critical points are

    $$\{((tsf+T)+k_{0},c+LC_{k_{0}}(D(\mathcal{S}))),\cdots,((tsf+T)+k_{u},c+LC_{k_{u}}(D(\mathcal{S})))\}, $$

    where \((k_{i},\ LC_{k_{i}}(D(\mathcal {S})))\), \(k_{i}\in [0,lim-(tsf+T)]\), \(0\leq i\leq u\), are the critical points of \(D(\mathcal {S})\) in that range.

Thus, Algorithm CEJLCS ((S,σ,l),tsf,lim,c), where tsf≤lim, outputs the list of points

$$\{(tsf+k_{0},c+LC_{k_{0}}(\mathcal{S})),(tsf+k_{1},c+LC_{k_{1}}(\mathcal{S})),\cdots,(tsf+k_{u},c+LC_{k_{u}}(\mathcal{S}))\} $$

where \((k_{i},\ LC_{k_{i}}(\mathcal {S})), \ k_{i}\in [0,lim-tsf], \ 0\leq i \leq u\). These are exactly the critical points in the CEJLCS of \(\mathcal {S}=(\mathbf {S},\sigma ,l)\) whose first coordinates lie in [0,lim−tsf]. In particular, Algorithm CEJLCS ((S,σ,\(p^{n}\)),0,N,0), where N=cost(S→0), correctly outputs the critical error joint linear complexity spectrum of S. □

Proposition 1

Let \(\mathcal {S}=(\mathbf {S},\sigma , l)\) be a costed binary multisequence with \(l=p^{n}\), and write N = cost(S→0). Then the algorithm CEJLCS ((S, σ, \(p^{n}\)), 0, N, 0) correctly outputs the critical error joint linear complexity spectrum of \(\mathcal {S}\).

Proof

By Theorem 1, with this input the algorithm outputs the critical points of \(\mathcal {S}=(\mathbf {S},\sigma ,l)\) whose first coordinates lie between 0 and N. This is the complete critical error joint linear complexity spectrum of \(\mathcal {S}\) because cost(S→0)=N implies that the critical point with the largest first coordinate is (N,0). □

Remark 3

In Algorithm 1, the conditions T>0 and tsf+T≤lim can hold simultaneously. A node branches into two child nodes when both conditions are satisfied; otherwise it has only one child. Moreover, the conditions \(\sum \limits _{\lambda =1}^{t}\sigma [\lambda ,0]> 0 \) and \(tsf+\sum \limits _{\lambda =1}^{t}\sigma [\lambda ,0]\leq lim\) can also hold simultaneously. When both are satisfied, two critical points are output; otherwise, a single critical point is output.

Now we consider the computational complexity of Algorithm 1. For simplicity, we assume that the entries in the cost vectors are scaled and quantized to be nonnegative integers rather than real numbers.

Remark 4

Algorithm 1 explores part of a binary tree, where processing a node at depth i requires \(\mathcal {O}(t(p-1)p^{n-i}(\log M+i\log p)+tp^{n-i}\log M)\) bit operations and M denotes the maximal integer among the entries of the cost multisequence σ. Hence, taking into account that the number of nodes at each depth of the tree can at most double, the total number of bit operations is

$$\mathcal{O}\left\{\sum\limits_{0\leq i\leq n}2^{i}[t(p-1)p^{n-i}(\log M+i\log p)+tp^{n-i}\log M]\right\}. $$

By taking M=1, Algorithm CEJLCS correctly outputs the entire critical error joint linear complexity spectrum of a binary multisequence with period \(p^{n}\) in \(\mathcal {O}(tp^{n}\log p)\) bit operations, where p is an odd prime and 2 is a primitive root modulo \(p^{2}\).

If the algorithm in [16] is used directly to compute the entire error joint linear complexity spectrum of a \(p^{n}\)-periodic multisequence, \(\mathcal {O}(t(p-1)p^{n}\log p^{n})\) bit operations are required.

Example 1

Let \(\mathbf {S}=(S_{1}^{27},S_{2}^{27})=(101101101101101101101101011;\ 101010101101010101101010011)\) be a 27-periodic binary multisequence, taken with unit costs σ[λ,i]=1. Then the CEJLCS of S is the following list of points: {(0,27),(1,26),(2,25),(3,24),(4,8),(6,7),(9,6),(13,2),(21,1),(33,0)}. The execution of Algorithm CEJLCS (S,0,33,0) is shown in Fig. 2.
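For illustration, the cejlcs sketch given after Algorithm 1 can be run on this example with all costs equal to 1, so that cost coincides with the number of changed bits; per Example 1, the expected output is the point list above.

```python
# Hedged check of Example 1 with the cejlcs sketch above (p = 3, n = 3).
S = [[int(b) for b in "101101101101101101101101011"],
     [int(b) for b in "101010101101010101101010011"]]
sigma = [[1] * 27 for _ in S]
N = sum(sum(row) for row in S)   # cost(S -> 0) = 33 for unit costs
print(sorted(cejlcs(S, sigma, 27, 3, 0, N, 0)))
# expected, per Example 1:
# [(0, 27), (1, 26), (2, 25), (3, 24), (4, 8), (6, 7), (9, 6), (13, 2), (21, 1), (33, 0)]
```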

Fig. 2  Schematic execution of Algorithm CEJLCS (S,0,33,0)

4 Conclusions

In this paper, we present an efficient algorithm by which the k-error joint linear complexity spectrum of a \(p^{n}\)-periodic binary multisequence can be completely determined using \(\mathcal {O}(tp^{n}\log p)\) bit operations, where p is an odd prime, 2 is a primitive root modulo \(p^{2}\), and n is a positive integer. Although the algorithm in [16] can also be used to complete the same task, it requires \(\mathcal {O}(t(p-1)p^{n}\log p^{n})\) bit operations. Moreover, our results are a further generalization of the results in [10]. How to efficiently compute the k-error joint linear complexity spectrum of multisequences with general period remains an interesting open problem.