1 Introduction

Let \(\mathbbm {K}\) be an effective field of constants of characteristic zero, so that all field operations can be carried out by algorithms. Given an indeterminate x and the derivation \(\delta = x \partial \), where \(\partial = \partial / \partial x\), it is well known [8, 12, 20, 22, 23] that the skew polynomial ring \(\mathbbm {K} (x) [\delta ]\) behaves very much like an ordinary polynomial ring: there are skew analogues for each of the classical operations of division with remainder, greatest common divisors, least common multiples, etc. In this paper, we will study the complexity of these operations. For this purpose, it will be more appropriate to work in the ring \(\mathbbm {K} [x, \delta ]\) instead of \(\mathbbm {K} (x) [\delta ]\). In analogy with the commutative case, we will give bounds for the computational complexities of the various operations in terms of the complexity of operator multiplication.

For our complexity measures, we make the customary assumption that all field operations in \(\mathbbm {K}\) can be carried out in constant time O (1). We will try to express the complexities of our algorithms in terms of the following standard complexities:

  • The time \(\mathsf {M} (n)\) required for the multiplication of two polynomials of degrees \({<} n\) and coefficients in \(\mathbbm {K}\). It is classical [9, 25, 26] that \(\mathsf {M} (n) = O (n \log n \log \log n)\) and \(\mathsf {M} (n) = O (n \log n)\) if \(\mathbbm {K}\) admits sufficiently many \(2^p\)-th roots of unity [10].

  • The complexity \(O (r^{\omega })\) of multiplying two \(r \times r\) matrices with entries in \(\mathbbm {K}\). It is classical [11, 18, 24, 28] that \(\omega < 2.376\), although \(\omega \approx 3\) in practice.

We will denote by \(\mathbbm {K} [x]_n\) the subset of \(\mathbbm {K} [x]\) of polynomials of degree \(< n\). Likewise, we denote by \(\mathbbm {K} [x, \delta ]_{n, r}\) the set of operators \(L \in \mathbbm {K} [x, \delta ]\) of degree \(\deg _x L < n\) in x and degree \(\deg _{\delta } L < r\) in \(\delta \).
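For concreteness, several of the algorithms below are accompanied by short Python sketches. These are illustrations added here, not part of the formal complexity analysis; they represent an operator \(L \in \mathbbm {K} [x, \delta ]_{n, r}\) by the \(n \times r\) array of its coefficients \(L_{i, j}\), where \(L = \sum _{i, j} L_{i, j} x^i \delta ^j\), and use exact rational arithmetic in place of \(\mathbbm {K}\). The first sketch applies such an operator to a polynomial via the rule \(\delta (x^k) = k x^k\); applying it to r polynomials gives a naive \(O (n^2 r^2)\) realization of \(\mathsf {SV} (n, r)\).

```python
from fractions import Fraction

def apply_op(L, P):
    """Apply L = sum_{i,j} L[i][j] * x^i * delta^j to the polynomial
    P = sum_k P[k] * x^k, using delta(x^k) = k * x^k; returns L(P)."""
    n, r = len(L), len(L[0])
    out = [Fraction(0)] * (n + len(P) - 1)
    for k, c in enumerate(P):                 # treat each monomial c * x^k
        if c == 0:
            continue
        for i in range(n):                    # x^i * L_i(delta) sends x^k
            w = sum(L[i][j] * Fraction(k) ** j for j in range(r))
            out[i + k] += c * w               # ... to L_i(k) * x^{i+k}
    return out

# Example: delta^2 - delta = x^2 * (d/dx)^2 annihilates both 1 and x.
L = [[Fraction(0), Fraction(-1), Fraction(1)]]
print(apply_op(L, [Fraction(1)]), apply_op(L, [Fraction(0), Fraction(1)]))
```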

Now consider two linear differential operators \(K, L \in \mathbbm {K} [x, \delta ]_{n, r}\). We start by studying the following complexities:

  • The complexity \(\mathsf {SM} (n, r)\) of multiplying K and L.

  • The complexity \(\mathsf {SV} (n, r)\) of applying L to a vector of r polynomials in \(\mathbbm {K} [x]_n\).

  • The cost \(\mathsf {SF} (n, r)\) to compute a fundamental system of r solutions to the monic equation \((\delta ^r + L) f = 0\) in \(\mathbbm {K} [[x]]\), up to order \(O (x^n)\), while assuming the existence of such a fundamental system.

  • Given a vector V of r truncated power series in \(\mathbbm {K} [x]\), the cost \(\mathsf {SA} (n, r)\) of computing a monic operator in \(A = \delta ^r +\mathbbm {K} [x, \delta ]_{n, r}\) with \(A (V) = O (x^n)\).

The special case \(n = r\) was first studied in [30], where it was shown that \(\mathsf {SM} (n, n) = O (n^{\omega })\), using evaluation–interpolation techniques. The inverse bound \(n^{\omega } = O (\mathsf {SM} (n, n))\) has been proved in [5]; this paper also contains detailed information on the constant factors involved in these bounds. Recently (and after the writing of a first version of this paper), the quasi-optimal bound \(\mathsf {SM} (n, r) = \tilde{O} (nr (nr)^{\omega - 2})\) was proved in [2].

For fixed constants \(\alpha , \beta > 0\), one has \(\mathsf {M} (\alpha n) = O (\mathsf {M} (n)), (\beta r)^{\omega } = O (r^{\omega }), \mathsf {SM} (\alpha n, \beta r) = O (\mathsf {SM} (n, r))\), etc., by splitting the multiplicands into a finite number of pieces. In this paper, we will freely use this remark without further mention. In order to simplify our complexity estimates, it will be convenient to make a few additional assumptions. First of all, we will assume that \(\omega > 2\), whence in particular \(\mathsf {M} (n) \log n = O (n^{\omega - 1})\). We will also assume that the function \(\mathsf {M} (n) / n\) is increasing and that \(\mathsf {SM} (n, r) / (nr)\) is increasing in both n and r. This will indeed be the case for the complexity bounds for \(\mathsf {SM} (n, r)\) that will be given in Sect. 2.

In Sect. 2, we will first prove (see Theorems 1 and 2) that the problems of multiplication and operator–vector application are essentially equivalent when \(n \geqslant r\). We also recall the best existing bounds for operator multiplication.

In Sect. 3, we show that the problem of computing a fundamental system of solutions and the inverse problem of recovering an operator from its solutions can both be reduced to operator multiplication, modulo a logarithmic overhead (see Theorems 5 and 6). This provides a dual way to perform operations on differential operators by working on their fundamental systems of solutions. In Sect. 3 and all subsequent sections, we always assume that \(n \geqslant r\). This is indeed required for the truncations of the fundamental systems of solutions at order \(O (x^n)\) to be linearly independent.

In Sect. 4, we start with the operations of exact right division and right division with remainder. In Sect. 5, we consider greatest common right divisors (gcrds) and least common left multiples (lclms). Again, we will show how to express the complexities of these operations essentially in terms of the complexity \(\mathsf {SM} (n, r)\) of multiplication (see Theorems 7, 8, 9 and 10).

For several of our algorithms, we need to work at a point where certain operators are non singular. If we only need the input operators to be non singular, then it is easy to find a point where this is the case. If we also need the output operators or certain auxiliary operators to be non singular (as in Sect. 5), then we resort to picking random points, which are non singular with probability 1. In Sect. 5.2 we present additional techniques for turning algorithms which rely on random point picking into randomized algorithms of Las Vegas type and into fully deterministic algorithms.

For technical reasons, we found it convenient to work with respect to the Euler derivation \(\delta \) instead of \(\partial \). Nevertheless, operators L in \(\mathbbm {K} [x, \delta ]\) can be converted efficiently into operators in \(\mathbbm {K} [x, \partial ]\) and vice versa, modulo an increase of the degree n in x with the degree r in \(\delta \) or \(\partial \) (see Lemma 2). Using our assumption that \(n \geqslant r\), such increases of the degree n by r only give rise to constant overheads in the complexity bounds. Hence, the complexity bounds for our main algorithms from Sects. 3, 4 and 5 still hold when replacing \(\delta \) by \(\partial \). In addition, some of the algorithms can be adapted to directly use \(\partial \) instead of \(\delta \), without the need for any conversions (see Remark 3).

To the best of our knowledge, the idea to perform operations on linear differential operators via power series solutions was first proposed (but only partially worked out) in [4, Chapter 10]. In this paper, we use a slightly different technique: instead of a single power series solution, we prefer to consider a fundamental system of solutions. This has the advantage of forcing a clean bijection between operators and solution spaces, thereby avoiding part of the randomness in the proposals from [4, Chapter 10].

It is also possible to mimic classical divide and conquer algorithms for right division, greatest common right divisors and least common left multiples, while using adjoints in order to perform the recursive operations on the appropriate side. Such algorithms were partially implemented inside Mathemagix [34] and we plan to analyze this technique in more detail in a forthcoming paper.

Various complexity results for computations with linear differential operators and other skew polynomials were previously obtained [4, 13–16, 19]. Especially the computation of greatest common right divisors and least common left multiples of two or more operators has received particular attention. After the publication of a first version of this paper [33], the complexities of several classical algorithms [17, 19, 27] for the computation of least common left multiples were studied in great detail in [6], and new improvements were proposed there.

The complexities of most of the algorithms in this paper are stated in terms of the input and output sizes. The uncertified randomized algorithms for gcrds and lclms are optimal up to logarithmic factors from this perspective, which yields an improvement with respect to the previously known complexity bounds. In the context of certified randomized algorithms (i.e. of Las Vegas type), the complexity bounds remain quasi-optimal in terms of the size of a suitable certificate. From the deterministic point of view, the new algorithms for gcrds and lclms are suboptimal.

2 Evaluation and interpolation

The key argument behind the proof from [30] that \(\mathsf {SM} (n, n) = O (n^{\omega })\) is the observation that an operator \(L \in \mathbbm {K} [x, \delta ]_{n, r}\) is uniquely determined by its images on the vector \({x^{; r} = (1, \ldots , x^{r - 1})}\). This makes it possible to use a similar evaluation–interpolation strategy for the multiplication of differential operators as in the case of FFT-multiplication of commutative polynomials. More precisely, given \(L \in \mathbbm {K} [x, \delta ]_{n, r}\), let \(\varPhi _L^{r + n, r}\) be the matrix of the mapping \(\mathbbm {K} [x]_r \rightarrow \mathbbm {K} [x]_{r + n} ; P \mapsto L (P)\) with respect to the bases \(x^{; r}\) and \(x^{; r + n}\):

$$\begin{aligned} \varPhi _L^{r + n, r} = \begin{pmatrix} L (1)_0 & \cdots & L (x^{r - 1})_0\\ \vdots & & \vdots \\ L (1)_{r + n - 1} & \cdots & L (x^{r - 1})_{r + n - 1} \end{pmatrix} . \end{aligned}$$

The evaluation and interpolation steps can be done efficiently using the following lemma, which is essentially contained in [5]:

Lemma 1

Let \(L \in \mathbbm {K} [x, \delta ]_{n, r}\). Then

  (a) We may compute \(\varPhi _L^{r + n, r}\) as a function of L in time \(O \left( n \mathsf {M} (r) \log r \right) \).

  (b) We may recover L from \(\varPhi _L^{r + n, r}\) in time \(O \left( n \mathsf {M} (r) \log r \right) \).

Proof

Consider the expansion of L with respect to x

$$\begin{aligned} L (x, \delta ) = L_0 (\delta ) + \cdots + x^{n - 1} L_{n - 1} (\delta ) . \end{aligned}$$

For all \(i, j\), we have

$$\begin{aligned} L (x, \delta ) (x^j)_{i + j}&= [x^i L_i (\delta )] (x^j)_{i + j}\\&= [x^{i + j} L_i (\delta + j) (1)]_{i + j}\\&= L_i (j) . \end{aligned}$$

In other words, \(\varPhi _L^{r + n, r}\) is a lower triangular band matrix

$$\begin{aligned} \varPhi _L^{r + n, r} = \begin{pmatrix} L_0 (0) & & \\ \vdots & \ddots & \\ L_{n - 1} (0) & & L_0 (r - 1)\\ & \ddots & \vdots \\ & & L_{n - 1} (r - 1) \end{pmatrix} \end{aligned}$$

of bandwidth \(\leqslant n\). The coefficients on the i-th subdiagonal of \(\varPhi _L^{r + n, r}\) are exactly the result of a multipoint evaluation of \(L_i\) at \(0, \ldots , r - 1\). It is classical [3, 21, 29] that both multipoint evaluation and the inverse operation of interpolation can be performed in time \(O \left( \mathsf {M} (r) \log r \right) \). Doing this for each of the polynomials \(L_0, \ldots , L_{n - 1}\) yields the result. \(\square \)
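For concreteness, here is a small Python illustration of Lemma 1 (a sketch of ours; it uses naive evaluation and Lagrange interpolation, cubic in r, where the lemma requires fast multipoint evaluation and interpolation):

```python
from fractions import Fraction

def phi_matrix(L, cols):
    """Band matrix of P -> L(P) on the bases x^{;cols} and x^{;cols+n}:
    the entry in row i + j and column j equals L_i(j)."""
    n, r = len(L), len(L[0])
    M = [[Fraction(0)] * cols for _ in range(cols + n)]
    for j in range(cols):
        for i in range(n):
            M[i + j][j] = sum(L[i][d] * Fraction(j) ** d for d in range(r))
    return M

def interpolate(pts, vals):
    """Unique polynomial of degree < len(pts) taking the given values,
    by naive Lagrange interpolation; returns its coefficient list."""
    coef = [Fraction(0)] * len(pts)
    for p, v in zip(pts, vals):
        basis, denom = [Fraction(1)], Fraction(1)
        for q in pts:
            if q != p:                        # multiply basis by (X - q)
                basis = [(basis[k - 1] if k else 0) -
                         q * (basis[k] if k < len(basis) else 0)
                         for k in range(len(basis) + 1)]
                denom *= p - q
        for k, b in enumerate(basis):
            coef[k] += v * b / denom
    return coef

def recover_op(M, n, r):
    """Part (b): the i-th subdiagonal of M holds L_i(0), ..., L_i(r-1);
    interpolating it recovers the polynomial L_i."""
    pts = [Fraction(j) for j in range(r)]
    return [interpolate(pts, [M[i + j][j] for j in range(r)]) for i in range(n)]
```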

Theorem 1

If \(n \geqslant r\), then

$$\begin{aligned} \mathsf {SM} (n, r) = O (\mathsf {SV} (n, r) + n \mathsf {M} (r) \log r) . \end{aligned}$$
(1)

Proof

Let \(K, L \in \mathbbm {K} [x, \delta ]_{n, r}\) and assume that we want to compute KL. We may evaluate \(L (x^{; 2 r})\) in time \({\mathsf {SV} (\max (n, 2 r), 2 r)} = O (\mathsf {SV} (n, r))\). We may also evaluate \(K (L (x^{; 2 r}))\) in time \({\mathsf {SV} (n + 2 r, 2 r)} = O (\mathsf {SV} (n, r))\). Using Lemma 1, we may recover KL from \(K (L (x^{; 2 r}))\) in time \(O (n \mathsf {M} (r) \log r)\). This completes the proof. \(\square \)

Theorem 2

If \(n \geqslant r\), then

$$\begin{aligned} \mathsf {SV} (n, r) = O (\mathsf {SM} (n, r) + n \mathsf {M} (r) \log r) . \end{aligned}$$
(2)

Proof

Assume now that we are given \(K (x, \delta ) \in \mathbbm {K} [x, \delta ]_{n, r}\), as well as a vector \(V = {(V_0, \ldots , V_{r - 1})} \in \mathbbm {K} [x]_n^r\) and that we want to evaluate \(K (V) = (K (V_0), \ldots , K (V_{r - 1}))\). This is equivalent to evaluating the operator \(K^{*} = K (x, \delta - r)\) at the vector \(x^r V\). It is classical [1] that \(K^{*}\) can be computed in time \(O \left( n \mathsf {M} (r) \right) \). Using Lemma 1, we may compute the unique operator \(L \in \mathbbm {K} [x, \delta ]_{n + r, r}\) with \(L (x^{; r}) = x^r V\) in time \(O ((n + r) \mathsf {M} (r) \log r) = O \left( n \mathsf {M} (r) \log r \right) \). We may next compute the product \(K^{*} L\) in time \({\mathsf {SM} (n + r, r)} = O \left( \mathsf {SM} (n, r) \right) \). Lemma 1 finally allows us to evaluate \(K^{*} L\) at \(x^{; r}\) in time \(O \left( n \mathsf {M} (r) \log r \right) \), thereby yielding K (V). \(\square \)

The above results immediately imply the bound \(\mathsf {SM} (n, n) = O (n^{\omega })\) from [30], by reducing the computation of a product KL to the computation of a matrix product

$$\begin{aligned} \varPhi _{KL}^{2 r + 2 n, 2 r} = \varPhi _K^{2 r + 2 n, 2 r + n} \varPhi _L^{2 r + n, 2 r} . \end{aligned}$$
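This reduction can be made completely explicit. The following self-contained sketch (ours, with naive stand-ins for the fast primitives and exact rationals for \(\mathbbm {K}\)) multiplies two operators by forming the two band matrices above, taking their product, and interpolating the subdiagonals of the result:

```python
from fractions import Fraction

def pad(L, n, r):
    return [[L[i][j] if i < len(L) and j < len(L[0]) else Fraction(0)
             for j in range(r)] for i in range(n)]

def phi(L, cols):                  # entry (i + j, j) of Phi_L equals L_i(j)
    n, r = len(L), len(L[0])
    M = [[Fraction(0)] * cols for _ in range(cols + n)]
    for j in range(cols):
        for i in range(n):
            M[i + j][j] = sum(L[i][d] * Fraction(j) ** d for d in range(r))
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def interp(pts, vals):             # naive Lagrange interpolation
    coef = [Fraction(0)] * len(pts)
    for p, v in zip(pts, vals):
        basis, denom = [Fraction(1)], Fraction(1)
        for q in pts:
            if q != p:
                basis = [(basis[k - 1] if k else 0) -
                         q * (basis[k] if k < len(basis) else 0)
                         for k in range(len(basis) + 1)]
                denom *= p - q
        for k, b in enumerate(basis):
            coef[k] += v * b / denom
    return coef

def skew_mul(K, L):
    """KL via Phi_{KL}^{2r+2n,2r} = Phi_K^{2r+2n,2r+n} * Phi_L^{2r+n,2r}."""
    n = max(len(K), len(L)); r = max(len(K[0]), len(L[0]))
    K, L = pad(K, n, r), pad(L, n, r)
    M = matmul(phi(K, 2 * r + n), phi(L, 2 * r))
    pts = [Fraction(j) for j in range(2 * r)]
    return [interp(pts, [M[i + j][j] for j in range(2 * r)])
            for i in range(2 * n - 1)]

# Example: delta * x = x * delta + x, since delta(x f) = x f + x delta(f).
print(skew_mul([[Fraction(0), Fraction(1)]], [[Fraction(0)], [Fraction(1)]]))
```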

After the publication of a first version of this paper, the following quasi-optimal bound for \(\mathsf {SM} (n, r)\) was established in [2, Theorems 3 and 5].

Theorem 3

  (i) For \(r \geqslant n\), we have \(\mathsf {SM} (n, r) = O \left( n^{\omega - 1} r + n \mathsf {M} (r) \log r \right) \).

  (ii) For \(n \geqslant r\), we have \(\mathsf {SM} (n, r) = O \left( r^{\omega - 1} n + r \mathsf {M} (n) \log n \right) \).

The inverse bound \(n^{\omega } = O (\mathsf {SM} (n, n))\) from [5] can also be generalized:

Theorem 4

If \(n \geqslant r\), then the product of an \(r \times n\) matrix and an \(r \times r\) matrix with coefficients in \(\mathbbm {K}\) can be computed in time \(O (\mathsf {SM} (n, r))\).

Proof

By the result from [5], the problem is equivalent to the computation of the products of \(k = \lceil n / r \rceil \) operators \(K_0, \ldots , K_{k - 1}\) in \(\mathbbm {K} [x, \delta ]_{r, r}\) with a fixed operator \(L \in {\mathbbm {K} [x, \delta ]_{r, r}}\). Setting \(K = K_0 + x^{2 r} K_1 + \cdots + x^{2 r (k - 1)} K_{k - 1}\), we may compute KL in time \(O (\mathsf {SM} (n, r))\). We may directly read off the products \(K_0 L, \ldots , K_{k - 1} L\) from the result.

\(\square \)

In this paper, we have chosen to work with respect to the derivation \(\delta \) instead of \(\partial \). The following result from [5, Section 3.3] can be used to efficiently convert between operators in \(\mathbbm {K} [x, \delta ]\) and \(\mathbbm {K} [x, \partial ]\) (in [30], we proved a somewhat weaker result which would also suffice for the purposes of this paper). We have written \(\mathbbm {K} [x, \partial ]_{n, r}\) for the set of operators of degree \(< n\) in x and degree \(< r\) in \(\partial \).

Lemma 2

  (a) Any operator in \(\mathbbm {K} [x, \delta ]_{n, r}\) can be converted into an operator in \(\mathbbm {K} [x, \partial ]_{n + r, r}\) in time \(O \left( (n + r) \mathsf {M} (r) \log r \right) \).

  (b) Any operator in \(x^r \mathbbm {K} [x, \partial ]_{n, r}\) can be converted into an operator in \(\mathbbm {K} [x, \delta ]_{n + r, r}\) in time \(O \left( (n + r) \mathsf {M} (r) \log r \right) \).
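The conversion of part (b) rests on the identity \(x^k \partial ^k = \varDelta _k (\delta ) = \delta (\delta - 1) \cdots (\delta - k + 1)\), so that \(x^m \partial ^k = x^{m - k} \varDelta _k (\delta )\) whenever \(m \geqslant k\). Here is a naive sketch of this direction (ours, quadratic in r, whereas the lemma relies on the fast basis conversions of [5]):

```python
from fractions import Fraction

def partial_to_delta(B, r):
    """Rewrite A = x^r * B, with B = sum_{i,k} B[i][k] x^i d^k (d = d/dx),
    as A = sum_{m,j} C[m][j] x^m delta^j, using
    x^m d^k = x^{m-k} * Delta_k(delta); assumes r + i >= k when B[i][k] != 0."""
    n, rr = len(B), len(B[0])
    fall = [[Fraction(1)]]                     # Delta_k(delta), delta-coeffs
    for k in range(1, rr):
        prev, cur = fall[-1], [Fraction(0)] * (len(fall[-1]) + 1)
        for j, c in enumerate(prev):           # multiply by (delta - (k - 1))
            cur[j + 1] += c
            cur[j] -= c * (k - 1)
        fall.append(cur)
    C = [[Fraction(0)] * rr for _ in range(n + r)]
    for i in range(n):
        for k in range(rr):
            if B[i][k]:
                assert r + i >= k
                for j, c in enumerate(fall[k]):
                    C[r + i - k][j] += B[i][k] * c
    return C

# Example: x * d = delta, i.e. B = d with r = 1.
print(partial_to_delta([[Fraction(0), Fraction(1)]], 1))
```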

3 Local solutions

From now on, we will assume that \(n \geqslant r\). We recall that an operator \(L \in \mathbbm {K} [x, \partial ]\) of order r is said to be non singular at \(x_0\) if its leading coefficient \(L_r\) does not vanish at \(x_0\). We will say that an operator \(L \in \mathbbm {K} [x, \delta ]\) of order r is non singular (at the origin) if \(x^{-r} L \in {\mathbbm {K} [x, \partial ]}\) and \(x^{- r} L\) is non singular as an operator in \(\partial \).

Given a non singular differential operator \(L \in \mathbbm {K} [x, \delta ]_{n, r + 1}\) of order r, the equation \(L (H) = 0\) admits a canonical fundamental system \(H = {(H_0, \ldots , H_{r - 1})}\) of solutions in \(\mathbbm {K} [[x]]^r\), with the property that \((H_i)_i = 1\) and \((H_i)_j = 0\) for all \(i, j < r\) with \(i \ne j\). Conversely, given a \(\mathbbm {K}\)-linearly independent vector of power series \(H \in \mathbbm {K} [[x]]^r\), there exists a unique monic operator \(L \in \delta ^r +\mathbbm {K} [[x]] [\delta ]\) of order r with \(L (H) = 0\). Let us show how to convert efficiently between these two representations.

Theorem 5

Let \(L \in \mathbbm {K} [x, \delta ]_{n, r + 1}\) be a differential operator of order \(r \leqslant n\), which is non singular at the origin, and let H be its canonical fundamental system of solutions. Then we may compute H up to order \(O (x^n)\) in time \(O (\mathsf {SV} (n, r) \log n)\). In other words,

$$\begin{aligned} \mathsf {SF} (n, r) = O (\mathsf {SM} (n, r) \log n) . \end{aligned}$$
(3)

Proof

Modulo multiplying L on the left by \(L_r^{-1}\), we may assume without loss of generality that L is monic. Since L is non singular at the origin, we have \(x^{- r} L \in \mathbbm {K} [x, \partial ]\). Rewritten in terms of \(\delta \), this means that L is of the form

$$\begin{aligned} L&= \varDelta _r (\delta ) + xC_{r - 1} \varDelta _{r - 1} (\delta ) + \cdots + x^r C_0 \varDelta _0 (\delta ),\\ \varDelta _k (\delta )&= \delta (\delta - 1) \cdots (\delta - k + 1), \end{aligned}$$

for certain \(C_0, \ldots , C_{r - 1} \in \mathbbm {K} [x]\). Setting \(R = \varDelta _r (\delta ) - L \in x\mathbbm {K} [x, \delta ]_{n - 1, r}\), we observe that R maps \(\mathbbm {K} [[x]]\) into \(x^r \mathbbm {K} [[x]]\). We now compute H using the “recursive” formula

$$\begin{aligned} H = \begin{pmatrix} 1\\ \vdots \\ x^{r - 1} \end{pmatrix} + \varDelta _r (\delta )^{- 1} (R (H)), \end{aligned}$$
(4)

where

$$\begin{aligned} \varDelta _r (\delta )^{- 1} \left( \sum _{k \geqslant r} A_k x^k \right) = \sum _{k \geqslant r} \frac{A_k}{\varDelta _r (k)} x^k . \end{aligned}$$

Equation (4) is a schoolbook example of the strategy of relaxed resolution of power series equations [31, 32]. Since \(\varDelta _r (\delta )^{- 1}\) operates coefficientwise, it can be computed in linear time. The main cost of the computation therefore reduces to the relaxed evaluation of R (H). Using fast relaxed multiplication, this amounts to a cost

$$\begin{aligned} \mathsf {SF} (n, r) = 2 \mathsf {SV} (\lceil n / 2 \rceil , r) + 4 \mathsf {SV} (\lceil n / 4 \rceil , r) + \cdots + n \mathsf {SV} (1, r) . \end{aligned}$$

Using the monotonicity assumption and Theorem 2, the result follows. \(\square \)
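A direct, term-by-term transcription of the recursion (4) reads as follows (a sketch of ours, quadratic in n where the theorem's relaxed evaluation is softly linear; it assumes L monic and non singular at the origin, so that the indicial polynomial \(L_0 (\delta ) = \varDelta _r (\delta )\) vanishes at \(0, \ldots , r - 1\) only):

```python
from fractions import Fraction

def fundamental_system(L, N):
    """Canonical fundamental system of L(H) = 0 modulo x^N, for a monic
    operator L = sum_{i,j} L[i][j] x^i delta^j of order r = len(L[0]) - 1
    that is non singular at the origin.  Coefficient k of L(h) equals
    sum_i L_i(k - i) h_{k-i}, and L_0(k) != 0 for k >= r."""
    n, r = len(L), len(L[0]) - 1
    Li = lambda i, j: sum(L[i][d] * Fraction(j) ** d for d in range(r + 1))
    H = []
    for s in range(r):                         # H_s = x^s + O(x^r)
        h = [Fraction(0)] * N
        h[s] = Fraction(1)
        for k in range(r, N):
            acc = sum(Li(i, k - i) * h[k - i]
                      for i in range(1, min(k + 1, n)))
            h[k] = -acc / Li(0, k)             # solve L_0(k) h_k + acc = 0
        H.append(h)
    return H

# Example: (delta - x)(exp(x)) = 0; the solution 1, 1, 1/2, 1/6, ... appears.
print(fundamental_system([[Fraction(0), Fraction(1)],
                          [Fraction(-1), Fraction(0)]], 6))
```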

In what follows, given a non zero series Y in x, we denote by v (Y) its valuation. Given a vector V of elements of a \(\mathbbm {K}\)-vector space, we will also denote by \({\text {Vect}} (V)\) the subspace spanned by the entries of V, and

$$\begin{aligned} v^{\max } (V) = \max \{v (Y) : Y \in {\text {Vect}} (V) {\setminus } \{0\}\} . \end{aligned}$$

Notice that \(v^{\max } (V) \geqslant \dim ({\text {Vect}} (V)) - 1\).

Theorem 6

Let \(H = (H_0, \ldots , H_{r - 1}) \in \mathbbm {K} [[x]]^r\) be \(\mathbbm {K}\)-linearly independent. Then there exists a unique monic operator \(L = {\text {ann}} (H) \in \delta ^r +\mathbbm {K} [[x]] [\delta ]_r\) with \(L (H) = 0\). Moreover, given the truncation of H at order \(O (x^n)\), we may compute L at order \(O (x^{n - v^{\max } (H)})\) in time \(O (\mathsf {SM} (n, r) \log r)\). In other words,

$$\begin{aligned} \mathsf {SA} (n, r) = O (\mathsf {SM} (n, r) \log r) . \end{aligned}$$
(5)

Proof

Modulo a triangularization of H, we may assume without loss of generality that \(v (H_0) < \cdots < v (H_{r - 1}) = v^{\max } (H)\). We define operators \(L^{[0]}, \ldots , L^{[r]}\) by

$$\begin{aligned} L^{[0]}&= 1\\ L^{[i + 1]}&= \left( \delta - \frac{\delta L^{[i]} (H_i)}{L^{[i]} (H_i)} \right) L^{[i]} . \end{aligned}$$

Then \(L = L^{[r]}\) annihilates H and for any other operator \(\tilde{L} \in \delta ^r +\mathbbm {K} [[x]] [\delta ]_r\) with \(\tilde{L} (H) = 0\), we would have \((\tilde{L} - L) (H) = 0\), which is in contradiction with the fact that \(\dim \ker (\tilde{L} - L) < r\). Moreover, by induction over i, we observe that the coefficient of \(x^0\) in \(L^{[i]}\) is given by \((\delta - v (H_0)) \cdots (\delta - v (H_{i - 1}))\) and the coefficients of \(x^0, \ldots , x^{n - 1}\) in \(L^{[i]}\) can be expressed in terms of the coefficients of \(x^0, \ldots , x^{n - 1 + v (H_{i - 1})}\) in \(H_0, \ldots , H_{i - 1}\). In particular, the truncation of L at order \(O (x^{n - v^{\max } (H)})\) is uniquely determined by the truncation of H at order \(O (x^n)\).

In order to explicitly compute L up to a given order, it is more efficient to use a divide and conquer approach. More precisely, given \(H = (H_0, \ldots , H_{r - 1}) \in \mathbbm {K} [x]_n^r\) we compute \({\text {ann}}_n (H) \in \delta ^r +\mathbbm {K} [x, \delta ]_{n, r}\) using the following method:

  • If \(r = 1\), then we take \({\text {ann}}_n (H) = \delta - (\delta H_0 / H_0)\, {\text {mod}} x^n\).

  • Otherwise, let \(r = a + b\) with \(a = \lceil r / 2 \rceil \).

  • Compute \(A :={\text {ann}}_n (H_0, \ldots , H_{a - 1})\).

  • Evaluate \(I :=(A (H_a), \ldots , A (H_{r - 1}))\, {\text {mod}} x^n\).

  • Compute \(B :={\text {ann}}_n (I_0, \ldots , I_{b - 1})\).

  • Return \(L = BA\, {\text {mod}} x^n\).

If \(n > v^{\max } (H)\), then it is easy to check that \({\text {ann}}_n (H) (H) = O (x^{n - v^{\max } (H)})\). For a fixed constant C, we thus have

$$\begin{aligned} \mathsf {SA} (n, 2 r) \leqslant 2 \mathsf {SA} (n, r) + C \mathsf {SM} (n, r) . \end{aligned}$$

The result now follows from the monotonicity assumption. \(\square \)
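For small r, the quadratic iteration from the first half of the proof is already quite usable. The following sketch (ours; truncated series as Fraction lists, with the precision caveat of the theorem: the output is only reliable up to order \(O (x^{N - v^{\max } (H)})\)) computes \({\text {ann}} (H)\) for H with pairwise distinct valuations:

```python
from fractions import Fraction
from math import factorial

def d(f):                                    # delta(f) = x f'
    return [k * c for k, c in enumerate(f)]

def mul(f, g):
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(len(f))]

def div(f, g):
    """f/g for series with v(g) <= v(f); the quotient is padded with
    zeros, so its highest-order terms are only formal."""
    v = next(i for i, c in enumerate(g) if c != 0)
    f, g, N = f[v:], g[v:], len(g)
    q = [Fraction(0)] * (N - v)
    for k in range(N - v):
        q[k] = (f[k] - sum(q[i] * g[k - i] for i in range(k))) / g[0]
    return q + [Fraction(0)] * v

def apply_op(ops, h):
    out, dh = [Fraction(0)] * len(h), h
    for a in ops:                            # a(x) * delta^j applied to h
        out = [o + c for o, c in zip(out, mul(a, dh))]
        dh = d(dh)
    return out

def ann(H):
    """L^{[i+1]} = (delta - delta(g)/g) L^{[i]} with g = L^{[i]}(H_i);
    left multiplication by (delta - c) sends the delta^j coefficient a_j
    to a_{j-1} + delta(a_j) - c a_j."""
    N = len(H[0])
    zero = [Fraction(0)] * N
    L = [[Fraction(1)] + [Fraction(0)] * (N - 1)]        # L^{[0]} = 1
    for h in H:
        g = apply_op(L, h)
        c = div(d(g), g)
        L = [[p + dq - cq for p, dq, cq in
              zip(L[j - 1] if j else zero,
                  d(L[j] if j < len(L) else zero),
                  mul(c, L[j] if j < len(L) else zero))]
             for j in range(len(L) + 1)]
    return L

# Example: H = (1, exp(x) - 1) yields delta^2 - (1 + x) delta,
# i.e. ann(H) = (delta - 1 - x) delta.
e = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, 6)]
print(ann([[Fraction(1)] + [Fraction(0)] * 5, e]))
```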

Remark 1

If \(\mathsf {SM} (n, r) / r^{1 + \epsilon }\) is increasing in r for some \(\epsilon > 0\), then the bound further simplifies to \(\mathsf {SA} (n, r) = O (\mathsf {SM} (n, r))\).

Remark 2

We notice that the operator L in Theorem 6 is non singular if and only if \(v^{\max } (H) = r - 1\), i.e. if and only if \(\{v (Y) : Y \in {\text {Vect}} (H) {\setminus } \{0\}\} = \{0, \ldots , r - 1\}\).

Remark 3

The algorithm from the proof can be adapted so as to produce a vanishing operator in \(x^r \partial ^r +\mathbbm {K} [[x]] [\partial ]_r\) instead of \(\delta ^r +\mathbbm {K} [[x]] [\delta ]_r\). Indeed, for this, it suffices to take

$$\begin{aligned} L^{[i + 1]} = x \left( \partial - \frac{\partial L^{[i]} (H_i)}{L^{[i]} (H_i)} \right) L^{[i]}, \end{aligned}$$

and carefully adapt the truncation orders. \(\square \)

Although a general operator \(L \in \mathbbm {K} [x, \delta ]\) can be singular at the origin, many operations on operators (such as right division and greatest common right divisors) commute with translations \(x \mapsto x + x_0\), and Lemma 2 may be used in conjunction with the following lemma in order to reduce to the case when L is non singular at the origin.

Lemma 3

Given a non zero operator \(L \in \mathbbm {K} [x, \partial ]_{n, r}\), we may find a point \(x_0 \in \mathbbm {K}\) where L is non singular in time \(O (\mathsf {M} (n))\).

Proof

Let \(L_k\) be the leading coefficient of L. Since \(\deg _x L_k < n\), we have \(L_k (x_0) \ne 0\) for some \(x_0 \in \{0, \ldots , n\}\). Using fast multipoint evaluation [7], we may find such a point \(x_0\) in time \(O (\mathsf {M} (n))\). \(\square \)
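A one-line search suffices in practice (sketch of ours, with naive Horner evaluation in time \(O (n^2)\) instead of the fast evaluation on the arithmetic progression \(0, \ldots , n\) from [7]):

```python
from fractions import Fraction

def nonsingular_point(Lk):
    """Return x0 in {0, ..., n} with Lk(x0) != 0, for a nonzero
    polynomial Lk of degree < n given by its coefficient list."""
    n = len(Lk)
    for x0 in range(n + 1):
        v = Fraction(0)
        for c in reversed(Lk):            # Horner scheme
            v = v * x0 + c
        if v != 0:
            return x0                     # exists: Lk has < n roots

print(nonsingular_point([Fraction(0), Fraction(1)]))   # Lk = x -> 1
```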

4 Right division

Both the degrees in x and \(\delta \) are additive for the multiplication of operators \({K, L} \in {\mathbbm {K} [x, \delta ]}\). In particular, if \(K, L \in \mathbbm {K} [x, \delta ]_{n, r}\) and L is left or right divisible by K, then the quotient is again in \(\mathbbm {K} [x, \delta ]_{n, r}\).

Theorem 7

Let \(K, L \in \mathbbm {K} [x, \delta ]_{n, r}\) be such that \(L = QK\) for some \(Q \in \mathbbm {K} [x, \delta ]\) and \(n \geqslant r\). Then we may compute Q in time \(O \left( \mathsf {SM} (n, r) \log n \right) \).

Proof

By Lemmas 2 and 3, and modulo a shift \(x \mapsto x + x_0\), we may assume without loss of generality that K and L are non singular at the origin. We now use the following algorithm:

  • We first compute the canonical fundamental system of solutions H to \(L (H) = 0\) up to order \(O (x^{n + r})\). By Theorem 5, this can be done in time \(O \left( \mathsf {SM} (n, r) \log n \right) \).

  • We next evaluate \(I = K (H)\) and compute a \(\mathbbm {K}\)-basis G for \({\text {Vect}} (I)\) at order \(O (x^{n + r})\). This can be done in time \(O (\mathsf {SM} (n, r))\), by Theorems 2 and 4, and using linear algebra. Since K is non singular, we have \(v (Y) \geqslant \deg _{\delta } K \Rightarrow v (K (Y)) = v (Y)\) for all \(Y \in \mathbbm {K} [[x]]\). In particular, the \(\deg _{\delta } Q = \deg _{\delta } L - \deg _{\delta } K\) elements of H of valuations \(\deg _{\delta } K, \ldots , \deg _{\delta } L - 1\) are mapped to a set which spans a vector space of dimension \(\deg _{\delta } Q\). This shows that \(s = \dim ({\text {Vect}} (I) {\text {mod}} x^r) = \deg _{\delta } Q\).

  • We now compute the monic annihilator \(\varOmega = {\text {ann}} (G)\) of G at order \(O (x^n)\). This can be done in time \(O (\mathsf {SM} (n, r) \log r) = O \left( \mathsf {SM} (n, r) \log n \right) \), by Theorem 6.

  • We return the truncation of \(Q_s \varOmega \) at order \(O (x^n)\), where \(Q_s = L_{\deg _{\delta } L} / K_{\deg _{\delta } K}\).

Since each of the steps can be carried out in time \(O (\mathsf {SM} (n, r) \log n)\), the result follows. \(\square \)

It is classical that euclidean division generalizes to the skew polynomial ring \(\mathbbm {K} (x) [\delta ]\). In other words, given operators \(A, B \in \mathbbm {K} (x) [\delta ]\) where \(B \ne 0\), there exist unique operators \(Q = {\text {quo}} (A, B)\) and \(R = {\text {rem}} (A, B)\) in \(\mathbbm {K} (x) [\delta ]\) with

$$\begin{aligned} A = QB + R, \end{aligned}$$

and \(\deg _{\delta } R < \deg _{\delta } B\). If \(A, B \in \mathbbm {K} [x, \delta ]\) and I is the leading coefficient of B with respect to \(\delta \), then left multiplication of A by \(I^{\deg _{\delta } A - \deg _{\delta } B + 1}\) allows us to remain in the domain \(\mathbbm {K} [x, \delta ]\): there exist unique \(Q = {\text {pquo}} (A, B)\) and \(R = {\text {prem}} (A, B)\) in \(\mathbbm {K} [x, \delta ]\) with

$$\begin{aligned} I^{\deg _{\delta } A - \deg _{\delta } B + 1} A = QB + R, \end{aligned}$$
(6)

and \(\deg _{\delta } R < \deg _{\delta } B\). The operators Q and R are usually called pseudo-quotients and pseudo-remainders. In some cases, a non trivial polynomial can be factored out in the relation (6). Let J be monic, of maximal degree, such that \(J^{- 1} Q, J^{-1} R \in \mathbbm {K} [x, \delta ]\). Then we call \(J^{-1} Q = {\text {quo}}^{*} (A, B)\) and \(J^{-1} R = {\text {rem}}^{*} (A, B)\) the “simplified” pseudo-quotient and pseudo-remainder of A and B.

Lemma 4

Let \(H = (H_0, \ldots , H_{r - 1}) \in \mathbbm {K} [[x]]^r\) be \(\mathbbm {K}\)-linearly independent and define \(p = v^{\max } ({\text {Vect}} (H)) + 1\). Given \(G \in (x^p \mathbbm {K}[[x]])^r\), there exists a unique operator \(L \in \mathbbm {K} [[x]] [\delta ]_r\) of order \(< r\) with \(L (H) = G\) and we may compute its first n terms with respect to x in time \(O (\mathsf {SM} (n + p, r) \log n)\).

Proof

Let \(\alpha _i = v (H_i)\) for each i. Modulo a base change, we may assume without loss of generality that \(\alpha _0 < \cdots < \alpha _{r - 1}\). Let \(\varPhi : \mathbbm {K} [[x]]^r \rightarrow \mathbbm {K} [[x]]^r\) be the operator with

$$\begin{aligned} \varPhi (V_0, \ldots , V_{r - 1}) = (x^{\alpha _0} V_0, \ldots , x^{\alpha _{r - 1}} V_{r - 1}), \end{aligned}$$

and let \(\varPhi ^{- 1}\) denote the inverse operator. Let \(\varPsi : \mathbbm {K} [[x]] [\delta ]_r \rightarrow \mathbbm {K} [[x]]^r\) be the operator with

$$\begin{aligned} \varPsi (K) = \varPhi ^{-1} (K (\varPhi (1))) . \end{aligned}$$

Writing \(K = \sum _{i, k} K_{i, k} x^k \delta ^i\) and \(\varPsi (K)_{i, k} = (\varPsi (K)_i)_k\), we have

$$\begin{aligned} \begin{pmatrix} \varPsi (K)_{0, k}\\ \vdots \\ \varPsi (K)_{r - 1, k} \end{pmatrix} = \begin{pmatrix} 1 & k + \alpha _0 & \cdots & (k + \alpha _0)^{r - 1}\\ \vdots & \vdots & & \vdots \\ 1 & k + \alpha _{r - 1} & \cdots & (k + \alpha _{r - 1})^{r - 1} \end{pmatrix} \begin{pmatrix} K_{0, k}\\ \vdots \\ K_{r - 1, k} \end{pmatrix} . \end{aligned}$$

In other words, \(\varPsi \) and its inverse \(\varPsi ^{- 1}\) operate coefficientwise and n coefficients can be computed in time \(O (r^{\omega } n)\).

Putting \(H_i = x^{\alpha _i} + E_i\) with \(E_i = o (x^{\alpha _i})\) for each i, we may rewrite the equation \(L (H) = G\) as

$$\begin{aligned} L = \varPsi ^{-1} (\varPhi ^{- 1} (G - L (E))) \end{aligned}$$

and we observe that the coefficient of \(x^k\) in the righthand side only depends on the coefficients of \(x^0, \ldots , x^{k - 1}\) in L. In particular, we may solve the equation using a relaxed algorithm. Then the main cost is concentrated in the relaxed evaluation of L (E). As in the proof of Theorem 5, this evaluation can be done in time \(O (\mathsf {SM} (n + p, r) \log n)\). \(\square \)
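Since \(\varPsi \) acts coefficientwise, its inversion reduces to one Vandermonde solve per power of x. A sketch of this single step (ours, with \(O (r^3)\) Gaussian elimination; the pairwise distinct \(\alpha _i\) make the matrix invertible):

```python
from fractions import Fraction

def psi_inverse_step(alphas, k, rhs):
    """Solve sum_j (k + alpha_i)^j K_{j,k} = rhs[i] (i = 0..r-1) for the
    delta-coefficients K_{0,k}, ..., K_{r-1,k} of the operator."""
    r = len(alphas)
    A = [[Fraction(k + a) ** j for j in range(r)] + [rhs[i]]
         for i, a in enumerate(alphas)]
    for c in range(r):                        # elimination with pivoting
        p = next(i for i in range(c, r) if A[i][c] != 0)
        A[c], A[p] = A[p], A[c]
        for i in range(c + 1, r):
            f = A[i][c] / A[c][c]
            A[i] = [u - f * w for u, w in zip(A[i], A[c])]
    x = [Fraction(0)] * r
    for i in reversed(range(r)):              # back substitution
        x[i] = (A[i][r] - sum(A[i][j] * x[j]
                              for j in range(i + 1, r))) / A[i][i]
    return x
```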

Theorem 8

Let \(K, L \in \mathbbm {K} [x, \delta ]_{n, r}\) with \(n \geqslant r\) and \(s = \deg _{\delta } K > 0\). Right pseudo-division of L by K followed by simplification yields a relation

$$\begin{aligned} AL = QK + R, \end{aligned}$$

where \(A, Q = {\text {quo}}^{*} (L, K), R = {\text {rem}}^{*} (L, K) \in \mathbbm {K} [x, \delta ]\). If \(n' \geqslant n\) is such that \({A, Q, R} \in {\mathbbm {K} [x, \delta ]_{n', r}}\), then A, Q and R can be computed in time \(O (\mathsf {SM} (n', r) \log n')\).

Proof

Modulo a shift \(x \mapsto x + x_0\), we may assume without loss of generality that K and L are non singular at the origin. We now use the following algorithm:

  • We compute the canonical fundamental system H of solutions to \(K (H) = 0\) up to order \(O (x^{2 n' + r})\). This requires a time \(O (\mathsf {SM} (n', s) \log n')\).

  • We compute \(G = L (H)\) with \(R (H) = AG\) up to order \(O (x^{2 n' + r})\). This requires a time \(O (\mathsf {SM} (n', r))\).

  • We determine the operator \(\varOmega \in \mathbbm {K} [[x]] [\delta ]_s\) with \(\varOmega (H) = x^s G\) up to order \(O (x^{2 n' + r})\). Lemma 4 shows that this can be done in time \(O (\mathsf {SM} (n', s) \log n')\).

  • By Theorem 6, we have \(R = x^{- s} A \varOmega \) and \(x^{-s} \varOmega \) is known up to order \(O (x^{2 n'})\). Now \(x^{-s} \varOmega _0, \ldots , x^{- s} \varOmega _{s - 1}\) are truncated rational functions, for which the degrees of the numerators and denominators are bounded by \(n'\). Using rational function reconstruction [35], we may thus compute \(N_k / D_k = x^{- s} \varOmega _k\) with \(\gcd (N_k, D_k) = 1\) in time \(O (s \mathsf {M} (n') \log n')\). Taking \(A = {\text {lcm}} (D_0, \ldots , D_{s - 1})\), we find R.

  • Once A and R are known, we compute Q using the algorithm from Theorem 7.

The total complexity of this algorithm is bounded by \(O (\mathsf {SM} (n', r) \log n')\). \(\square \)

Remark 4

In the above proof, we have assumed that \(n'\) is known beforehand. In general, we may still apply the above algorithm for a trial value \(n^{*}\). Then the algorithm may either fail (for instance, if \(\deg {\text {lcm}} (D_0, \ldots , D_{s - 1}) \geqslant n^{*}\)), or return the triple (A, Q, R) under the assumption that \(A, Q, R \in \mathbbm {K} [x, \delta ]_{n^{*}, r}\). We may then check whether the triple is correct in time \(O (\mathsf {SM} (n^{*}, r))\). Applying this procedure for successive guesses \(n^{*} = n, 2 n, 4 n, \ldots \), the algorithm ultimately succeeds for an \(n^{*}\) with \(n^{*} \leqslant 2 n'\). Using the monotonicity hypothesis, the total running time thus remains bounded by \(O (\mathsf {SM} (n^{*}, r) \log n^{*}) = O (\mathsf {SM} (n', r) \log n')\).
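The doubling strategy itself is generic. In skeleton form (with hypothetical `compute` and `verify` callbacks standing in for the pseudo-division and the trial multiplication):

```python
def double_until_success(n, compute, verify):
    """Try truncation orders n, 2n, 4n, ... until a verified answer is
    found; by the monotonicity hypothesis, the total cost stays within a
    constant factor of the cost of the successful run."""
    n_star = n
    while True:
        result = compute(n_star)          # may fail, e.g. deg lcm too large
        if result is not None and verify(result, n_star):
            return result
        n_star *= 2
```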

5 Euclidean operations

5.1 Randomized algorithms

It is classical that greatest common right divisors and least common left multiples exist in the skew euclidean domain \(\mathbbm {K} (x) [\delta ]\): given two operators \(K, L \in \mathbbm {K} (x) [\delta ]\), the greatest common right divisor \(\varGamma = {\text {gcrd}} (K, L)\) and the least common left multiple \(\varLambda = {\text {lclm}} (K, L)\) are the unique monic operators with

$$\begin{aligned} \mathbbm {K} (x) [\delta ] \varGamma&= \mathbbm {K} (x) [\delta ] K +\mathbbm {K} (x) [\delta ] L\\ \mathbbm {K} (x) [\delta ] \varLambda&= \mathbbm {K} (x) [\delta ] K \cap \mathbbm {K} (x) [\delta ] L. \end{aligned}$$

Assume now that \(K, L \in \mathbbm {K} [x, \delta ]\) and let A and B be monic polynomials of minimal degrees, such that \(A \varGamma \) and \(B \varLambda \) are in \(\mathbbm {K} [x, \delta ]\). Then we call \(\varGamma ^{*} = {\text {gcrd}}^{*} (K, L) = A \varGamma \) and \(\varLambda ^{*} = {{\text {lclm}}^{*} (K, L)} = B \varLambda \) the (simplified) pseudo-gcrd and pseudo-lclm of K and L.

Lemma 5

Let \(K, L \in \mathbbm {K} [x, \delta ]_{n, r}\) be such that K and L, as well as \({\text {gcrd}}^{*} (K, L)\) or \({\text {lclm}}^{*} (K, L)\), are non singular at the origin. Let G and H be the canonical fundamental systems of solutions to \(K (G) = 0\) and \(L (H) = 0\). Then

$$\begin{aligned} \deg _{\delta } {\text {lclm}}^{*} (K, L)&= \dim ([{\text {Vect}} (G) + {\text {Vect}} (H)] {\text {mod}} x^{2 r})\\ \deg _{\delta } {\text {gcrd}}^{*} (K, L)&= \dim ([{\text {Vect}} (G) \cap {\text {Vect}} (H)] {\text {mod}} x^{2 r}) . \end{aligned}$$

Proof

Let \(\varGamma ^{*} = {\text {gcrd}}^{*} (K, L), \varLambda ^{*} = {\text {lclm}}^{*} (K, L), s = \deg _{\delta } \varGamma ^{*}\) and \(t = \deg _{\delta } \varLambda ^{*} \leqslant 2 r\). If \(\varLambda ^{*}\) is non singular, then it admits a canonical fundamental system of solutions \(M = (M_0, \ldots , M_{t - 1})\) with \((M_i)_i = 1\) and \((M_i)_j = 0\) for all \(i, j < t\) with \(i \ne j\). In particular, \(\dim ({\text {Vect}} (M) {\text {mod}} x^{2 r}) = t\). Since \(\varLambda ^{*}\) is the least common left multiple of K and L, we also have \({\text {Vect}} (M) = {\text {Vect}} (G) + {\text {Vect}} (H)\), which completes the proof of the first equality. If \(\varGamma ^{*}\) is non singular, then we obtain the second equality in a similar way.

If \(\varLambda ^{*}\) is non singular, then we also have \(\dim ({\text {Vect}} (G) {\text {mod}} x^{2 r}) = \deg _{\delta } K\) and \(\dim ({\text {Vect}} (H) {\text {mod}} x^{2 r}) = \deg _{\delta } L\), since K and L are non singular. Now \(\dim ([{\text {Vect}} (G) \cap {\text {Vect}} (H)] {\text {mod}} x^{2 r}) = \dim ({\text {Vect}} (G) {\text {mod}} x^{2 r}) + \dim ({\text {Vect}} (H) {\text {mod}} x^{2 r}) - \dim ([{\text {Vect}} (G) + {\text {Vect}} (H)] {\text {mod}} x^{2 r})\), whence \(\dim ([{\text {Vect}} (G) \cap {\text {Vect}} (H)] {\text {mod}} x^{2 r}) = \deg _{\delta } K + \deg _{\delta } L - \deg _{\delta } \varLambda ^{*} = \deg _{\delta } \varGamma ^{*}\). If \(\varGamma ^{*}\) is non singular, then we obtain the first equality in a similar way. \(\square \)

Theorem 9

Let \(K, L \in \mathbbm {K} [x, \delta ]_{n, r}\) and \(n' \geqslant n\) be such that \(\varGamma ^{*} = {\text {gcrd}}^{*} (K, L) \in \mathbbm {K} [x, \delta ]_{n', r}\) and \(n \geqslant r\). Assume that K, L and \({\text {gcrd}}^{*} (K, L)\) (or \({\text {lclm}}^{*} (K, L)\)) are non singular at the origin. Then we may compute \(\varGamma ^{*}\) in time \(O (\mathsf {SM} (n', r) \log n')\).

Proof

We compute \(\varGamma ^{*}\) using the following algorithm:

  • We compute the canonical fundamental systems of solutions G and H to \(K (G) = 0\) and \(L (H) = 0\) at order \(O (x^{2 n' + r})\). This can be done in time \(O (\mathsf {SM} (n', r) \log n')\).

  • Using linear algebra, we compute a basis B for \(V = {\text {Vect}} (G) \cap {\text {Vect}} (H)\) at order \(O (x^{2 n' + r})\). This can be done in time \(O (n' r^{\omega - 1})\). By Lemma 5, we have \(s :=\dim (V {\text {mod}} x^{2 r}) = \deg _{\delta } \varGamma ^{*}\). We also notice that \(v^{\max } (B) < r\).

  • We compute \(\varOmega = {\text {ann}} (B) = {\text {gcrd}} (K, L)\) at order \(O (x^{2 n'})\). By Theorem 6, this can be done in time \(O (\mathsf {SM} (n', r) \log n')\).

  • We compute \(\varGamma ^{*}\) from \(\varOmega {\text {mod}} x^{2 n'}\) using rational function reconstruction.

This algorithm requires a total running time \(O (\mathsf {SM} (n', r) \log n')\). \(\square \)
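The intersection of the two solution spaces reduces to a kernel computation; over \(\mathbbm {K} =\mathbbm {Q}\), this step can be realized as follows (our illustration: truncated series as coefficient rows, naive Gaussian elimination in place of fast linear algebra):

```python
from fractions import Fraction

def kernel(M):
    """Basis of the right kernel of M, via reduced row echelon form."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    pivots, pr = [], 0
    for c in range(cols):
        p = next((i for i in range(pr, rows) if M[i][c] != 0), None)
        if p is None:
            continue
        M[pr], M[p] = M[p], M[pr]
        M[pr] = [v / M[pr][c] for v in M[pr]]
        for i in range(rows):
            if i != pr and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[pr])]
        pivots.append(c)
        pr += 1
    basis = []
    for fc in (c for c in range(cols) if c not in pivots):
        v = [Fraction(0)] * cols
        v[fc] = Fraction(1)
        for i, pc in enumerate(pivots):
            v[pc] = -M[i][fc]
        basis.append(v)
    return basis

def intersect(G, H):
    """Basis of Vect(G) intersected with Vect(H) modulo x^N, the series
    being given as coefficient rows of length N: each kernel vector (c, d)
    of the stacked N x (|G|+|H|) matrix satisfies sum c_i G_i = sum d_j H_j."""
    N = len(G[0])
    M = [[*(g[k] for g in G), *(-h[k] for h in H)] for k in range(N)]
    return [[sum(ci * g[k] for ci, g in zip(v[:len(G)], G)) for k in range(N)]
            for v in kernel(M)]
```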

Remark 5

In the above proof, we have again assumed that \(n'\) is known beforehand. Below, we will discuss ways to check the correctness of the computed result for a trial value \(n^{*}\), after which a similar strategy as in Remark 4 can be applied. During the relaxed computation of G and H, we may also check whether \(V = \{0\}\) at each next coefficient. In the particular case when \(\varGamma = 1\), the running time of the algorithm will then be bounded by \(O (\mathsf {SM} (n^{*}, r) \log n^{*})\), where \(n^{*}\) is the smallest order at which common solutions no longer exist. This kind of early termination only works for this very special case.

Remark 6

Notice that \(\varGamma ^{*}\) might be singular at the origin, even if K, L and \({\text {lclm}}^{*} (K, L)\) are not. This happens for instance when K is the minimal annihilator of the vector (1, x) and L the minimal annihilator of the vector \((\mathrm {e}^x, x)\), so that \(\varGamma = \delta - 1\).

Theorem 10

Let \(K, L \in \mathbbm {K} [x, \delta ]_{n, r}\) and \(n' \geqslant n\) be such that \(\varLambda ^{*} = {\text {lclm}}^{*} (K, L) \in {\mathbbm {K} [x, \delta ]_{n', 2 r}}\) and \(n \geqslant r\). If K, L and \({\text {lclm}}^{*} (K, L)\) (or \({\text {gcrd}}^{*} (K, L)\)) are non singular at the origin, then we may compute \(\varLambda ^{*}\) in time \(O (\mathsf {SM} (n', r) \log n')\).

Proof

Similar to the proof of Theorem 9, by taking \(V = {\text {Vect}} (G) + {\text {Vect}} (H)\) instead of \(V = {\text {Vect}} (G) \cap {\text {Vect}} (H)\). \(\square \)

5.2 Certifying correctness

The assumption that \({\text {lclm}}^{*} (K, L)\) should be non singular is still a bit unsatisfactory in Theorems 9 and 10, even though the probability that a randomly chosen point is singular is infinitesimal. If we drop this assumption, then we still have \(s \geqslant \deg _{\delta } \varGamma ^{*}\) in the proof of Theorem 9. Consequently, “candidate” pseudo-gcrds \(\varGamma ^{*}\) found by the algorithm are genuine pseudo-gcrds whenever \(\varGamma ^{*}\) pseudo-divides both K and L. Using the right division algorithms from the previous section, this can be checked in time \(O (\mathsf {SM} (n' r, r) \log n')\) in the case of gcrds and \(O (\mathsf {SM} (nr, r) \log n')\) in the case of lclms.

Remark 7

Using the polynomial linear algebra techniques from [6, 16], it is likely that one may prove that \(PK = A \varGamma ^{*}\) for some \(P \in \mathbbm {K} [x]_{nr}\) and \(A \in \mathbbm {K} [x, \delta ]_{nr, r}\). If this is indeed the case, then the trial divisions of K and L by \(\varGamma ^{*}\) can actually be carried out in time \(O (\mathsf {SM} (nr, r) \log n')\).

An alternative way to check whether candidate gcrds and lclms are correct is to compute Bezout and Ore relations. More precisely, given \(K, L \in \mathbbm {K} (x) [\delta ]\) with \(L \not \in \mathbbm {K} (x) K\), there exist operators \({A, B, C, D} \in \mathbbm {K} (x) [\delta ]\) with

$$\begin{aligned} \begin{pmatrix} \varGamma \\ 0 \end{pmatrix} = \begin{pmatrix} A & B\\ C & D \end{pmatrix} \begin{pmatrix} K\\ L \end{pmatrix} , \end{aligned}$$

\(\deg _{\delta } AK, \deg _{\delta } BL < \deg _{\delta } \varLambda \) and \(CK = - DL = \varLambda \). The \(2 \times 2\) matrix at the righthand side will be called the Euclidean matrix \(\mathrm {E}= {\text {Eucl}} (K, L)\) of K and L. In a similar way as above, we may define a (simplified) pseudo-Euclidean matrix \(\mathrm {E}^{*} = {\text {Eucl}}^{*} (K, L)\) with entries \(A^{*}, B^{*}, C^{*}, D^{*}\) in \(\mathbbm {K} [x, \delta ]\), whenever \(K, L \in \mathbbm {K} [x, \delta ]\). We will say that \({\text {Eucl}} (K, L)\) is non singular at \(x_0\), if the denominators of ABC and D do not vanish at \(x_0\).

Theorem 11

Let \(K, L \in \mathbbm {K} [x, \delta ]_{n, r}\) and \(n' \geqslant n\) be such that \(\mathrm {E}^{*} = {\text {Eucl}}^{*} (K, L) \in {\mathbbm {K} [x, \delta ]_{n', r}^{2 \times 2}}\) and \(n \geqslant r\). If \(K, L, {\text {lclm}}^{*} (K, L)\) and \({\text {Eucl}} (K, L)\) are non singular at the origin, then we may compute \(\varLambda ^{*}\) in time \(O (\mathsf {SM} (n', r) \log n') = \tilde{O} (n' r^{\omega - 1})\).

Proof

Assuming \(n'\) known, we compute \({\text {Eucl}} (K, L) = \begin{pmatrix} A & B\\ C & D \end{pmatrix}\) at order \(O (x^{2 n'})\) as follows:

  • We compute the canonical fundamental systems of solutions G and H to \(K (G) = 0\) and \(L (H) = 0\) at order \(O (x^{2 n' + 3 r})\).

  • We compute a basis X for \({\text {Vect}} (G) \cap {\text {Vect}} (H)\) at order \(O (x^{2 n' + 3 r})\), together with bases \(\hat{G}\) and \(\hat{H}\) for the supplements of \({\text {Vect}} (X)\) in \({\text {Vect}} (G)\) resp. \({\text {Vect}} (H)\). We also compute \(\varGamma = {\text {ann}} (X)\) at order \(O (x^{2 n' + 2 r})\).

  • We solve the systems \(A (K (\hat{H})) = \varGamma (\hat{H})\) and \(B (L (\hat{G})) = \varGamma (\hat{G})\) in A resp. B at order \(O (x^{2 n'})\), using Lemma 4.

  • We compute a basis Y for \({\text {Vect}} (G) + {\text {Vect}} (H)\) at order \(O (x^{2 n' + 2 r})\), as well as bases \(\tilde{H}\) and \(\tilde{G}\) for the vector spaces \({\text {Vect}} (K (Y))\) resp. \({\text {Vect}} (L (Y))\) at order \(O (x^{2 n' + 2 r})\).

  • We compute \(C = K_{\deg _{\delta } K}^{- 1} {\text {ann}} (\tilde{H})\) and \(D = - L_{\deg _{\delta } L}^{- 1} {\text {ann}} (\tilde{G})\) at order \(O (x^{2 n'})\).

We finally compute \(\mathrm {E}^{*}\) from ABC and D using rational function reconstruction. The complexity analysis and the remainder of the proof is done in a similar way as in the proofs of Theorems 8 and 9. \(\square \)

With the above techniques, we may at least verify whether computed pseudo-gcrds or pseudo-lclms are correct. For a fully deterministic algorithm, we still need a way to find a point where \({\text {lclm}}^{*} (K, L)\) is non singular. This can be done by brute force. Let us state the result in the most general setting of pseudo-Euclidean matrices.

Theorem 12

Let \(K, L \in \mathbbm {K} [x, \delta ]_{n, r}\) and \(n' \geqslant n\) be such that \(\mathrm {E}^{*} = {\text {Eucl}}^{*} (K, L) \in {\mathbbm {K} [x, \delta ]_{n', r}^{2 \times 2}}\) and \(n \geqslant r\). Then we may compute \(\mathrm {E}^{*}\) in time \(O (\mathsf {SM} (n', r) \log n' + n' (\mathsf {M} (n) r + r^{\omega } \log r)) = \tilde{O} (n' (r^{\omega } + nr))\).

Proof

Let \(k = \deg _{\delta } K, l = \deg _{\delta } L\), and assume first that we know \(n'\). Let \(x_0, \ldots , x_{n' + n}\) be \(n' + n + 1\) pairwise distinct, randomly chosen points in \(\mathbbm {K}\) at which K and L are non singular. At each \(x_i\), we compute canonical fundamental systems of solutions G and H for K and L at order \(O (x^{k + l})\). We claim that this can be done in time \(O \left( \mathsf {M} (n') r \log n' + n' \left( \mathsf {M} (n) r + r^{\omega } \log r \right) \right) \).

Indeed, it requires a time \(O \left( (n + r) \mathsf {M} (r) \log r \right) \) to rewrite each operator with respect to \(\partial \). We next perform a multipoint evaluation of the coefficients of these operators to obtain the shifted operators at \(x_0, \ldots , x_{n' + n}\) (this requires a time \(O \left( \mathsf {M} (n') r \log n' \right) \)). The truncations of these operators at order \(O (x^{k + l + r})\) are then converted back to the representation with respect to \(\delta \). This can be done in time \(O \left( n' r \mathsf {M} (r) \log r \right) \). Using Theorem 5, we finally compute the required fundamental systems of solutions in time \(O \left( n' \mathsf {SM} (r, r) \log r \right) = O (n' r^{\omega } \log r)\).

From \(\mathrm {E}^{*} \in \mathbbm {K} [x, \delta ]_{n', r}^{2 \times 2}\), we get \(\varLambda ^{*} = {\text {lclm}}^{*} (K, L) \in \mathbbm {K} [x, \delta ]_{n' + n, 2 r}\). Since we assumed \(n'\) to be sufficiently large, it follows that \(\varLambda ^{*} = {\text {lclm}}^{*} (K, L)\) is non singular at one of the points \(x_i\). At such a point \(x_i\), the canonical fundamental systems of solutions G and H generate a vector space \(V = {\text {Vect}} (G) + {\text {Vect}} (H)\) of maximal dimension \(s :=\deg _{\delta } \varLambda ^{*}\), and with a basis \(y_0, \ldots , y_{s - 1}\) such that \(v (y_k) = k\) for all \(0 \leqslant k < s\). We finally apply Theorem 11 in order to obtain \(\mathrm {E}^{*}\). If \(n'\) is unknown, then we use a sequence of guesses \(n' = n, 2 n, 4 n, \ldots \), as in the previous proofs. \(\square \)

Remark 8

In the case of least common left multiples, we may directly compute \(\varLambda ^{*}\) using Theorem 10 and certify the result using trial division by K and L. This allows us to use the weaker assumption \(\varLambda ^{*} \in \mathbbm {K} [x, \delta ]_{n', 2 r}\) instead of \(\mathrm {E}^{*} \in \mathbbm {K} [x, \delta ]_{n', r}^{2 \times 2}\), whereas the complexity bound becomes \(O \left( \mathsf {SM} (nr, r) \log n' + n' (\mathsf {M} (n) r + r^{\omega } \log r) \right) = \tilde{O} (n' (r^{\omega } + nr))\).

5.3 Summary of the complexity bounds for Euclidean operations

We have summarized our complexity bounds for Euclidean operations on two operators \(K, L \in \mathbbm {K} [x, \delta ]_{n, r}\) in Table 1. We systematically write \(n'\) for the degree in x of the result. We also write \(n^{*}\) for the degree of the Euclidean matrix in x.

The algorithms in the first line correspond to applying Theorems 9, 10 and 11 at a randomly chosen point, without checking the result. The second line corresponds to the Las Vegas randomized algorithm for which the answers are certified through trial division (the bound for gcrds might further drop to \(\tilde{O} (nr^{\omega })\) in view of Remark 7; more generally, the bounds can be restated in terms of sizes of certificates). In the third line, we rather use Euclidean matrices for the certification. The fourth line shows complexity bounds for the deterministic versions of our algorithms.

In comparison, several randomized Las Vegas algorithms were given in [6] that achieve the complexity bound \(\tilde{O} (nr^{\omega })\) for lclms. This is in particular the case for Heffter’s algorithm [17], when using Theorem 3. The non determinism is due to the use of a fast Las Vegas randomized algorithm for the computation of kernels of matrices with polynomial entries [6, Theorem 2]. Grigoriev established complexity bounds for gcrds which rely on a similar reduction to polynomial linear algebra. Along the same lines as in [6], this should lead to a Las Vegas randomized algorithm of complexity \(\tilde{O} (nr^{\omega })\), although we did not check this in detail.

In summary, the new algorithms do not achieve any improvements in the worst case. Nevertheless, the uncertified versions of our algorithms admit optimal running times up to logarithmic factors in terms of the combined input and output size. The certified randomized versions satisfy similar complexity bounds in terms of the size of a suitable certificate; such bounds can sometimes be better than the previously known worst case bounds. When performing our expansions at a randomly chosen point in \(\mathbbm {K}\), we also recall that the probability of failure is exponentially small as a function of the bitsize of this point.

Table 1 Complexity bounds for the Euclidean operations on two operators K and L

5.4 Generalizations

The algorithms from Sect. 5.1 extend in a straightforward way to the computation of greatest common right divisors and least common left multiples of more than two operators. For instance, using obvious notations, we obtain the following generalizations of Theorems 10 and 9.

Theorem 13

Let \(L_1, \ldots , L_k \in \mathbbm {K} [x, \delta ]_{n, r}\) with \(n \geqslant r\) and \(r' \geqslant r, n' \geqslant \max (n, r')\) be such that \(\varLambda ^{*} = {{\text {lclm}}^{*} (L_1, \ldots , L_k)} \in \mathbbm {K} {[x, \delta ]_{n', r'}}\). Assume that \(L_1, \ldots , L_k\) and \(\varLambda ^{*}\) are all non singular at the origin. Then we may compute \(\varLambda ^{*}\) in time \(O (\mathsf {SM} (n', r') \log n' + {k \mathsf {SM} (n', r) \log n'} + kr (r')^{\omega - 2} n')\).

Proof

We compute \(\varLambda ^{*}\) using the following algorithm:

  • We first compute the canonical fundamental systems of solutions \(H_i\) to \(L_i (H_i) = 0\) at order \(O (x^{2 n' + r'})\). This can be done in time \(O (k \mathsf {SM} (n', r) \log n')\).

  • Let \(V_{i, j} = {\text {Vect}} (H_i) + \cdots + {\text {Vect}} (H_j)\) for all \(1 \leqslant i \leqslant j \leqslant k\). Using linear algebra, we may recursively compute a basis \(B_{i, j}\) for \(V_{i, j}\) from bases \(B_{i, m}\) and \(B_{m + 1, j}\) for \(V_{i, m}\) and \(V_{m + 1, j}\), where \(m = \lfloor (i + j) / 2 \rfloor \). This algorithm yields a basis B for \(V_{1, k}\) in time \(O (kr (r')^{\omega - 2} n')\). Using a suitable generalization of Lemma 5, we also notice that \(\dim (V_{1, k} {\text {mod}} x^{r'}) = \deg _{\delta } \varLambda ^{*}\).

  • We compute \(\varOmega = {\text {ann}} (B) = {\text {lclm}} (L_1, \ldots , L_k)\) at order \(O (x^{2 n'})\). By Theorem 6, this can be done in time \(O (\mathsf {SM} (n', r') \log n')\).

  • We compute \(\varLambda ^{*}\) from \(\varOmega {\text {mod}} x^{2 n'}\) using rational function reconstruction.

We obtain the result by adding up all complexity bounds. \(\square \)

Remark 9

When taking \(r' = kr \leqslant n'\) and using [2], the complexity bound simplifies to \(O \left( \mathsf {SM} (n', kr) \log n' \right) = O \left( k^{\omega - 1} r^{\omega - 1} n' \log n' + kr \mathsf {M} (n') \log ^2 n' \right) \). By [6, Theorem 6], we may always take \(n' = nrk^2\), after which the bound further reduces to \(\tilde{O} (k^{\omega + 1} r^{\omega } n)\). In our randomized setting, this improves upon the bounds from [6, Figure 1].

Remark 10

If we also require a certification of the result, then we may use the trial division technique. This amounts to k exact divisions of operators in \(\mathbbm {K} [x, \delta ]_{n' + nr', r'}\) by \(L_1, \ldots , L_k\). Using the division algorithm from Sect. 4, and taking \(r' = kr \leqslant n'\) and \(n' = nrk^2\) as above, this can be done in time

$$\begin{aligned} O \left( k \mathsf {SM} (n' + nr', r') \log (nr') \right) = \tilde{O} (k (n' + nr') (r')^{\omega - 1}) = \tilde{O} (k^{\omega + 2} r^{\omega } n) . \end{aligned}$$

This is slightly better than the new bound from [6].

Theorem 14

Let \(L_1, \ldots , L_k \in \mathbbm {K} [x, \delta ]_{n, r}\) and \(n' \geqslant n \geqslant r\) be such that \(\varGamma ^{*} = {{\text {gcrd}}^{*} (L_1, \ldots , L_k)} \in \mathbbm {K} {[x, \delta ]_{n', r}}\). Assume that \(L_1, \ldots , L_k\) and \(\varGamma ^{*}\) are all non singular at the origin. Then we may compute \(\varGamma ^{*}\) in time \(O ({\mathsf {SM} (n', r) \log n'} + k \mathsf {SM} (r, r) \log r)\).

Proof

The proof is similar to the one of Theorem 13, except for the way we compute a basis for \(V = {\text {Vect}} (H_1) \cap \cdots \cap {\text {Vect}} (H_k)\). Indeed, we first compute a basis \(B {\text {mod}} x^r\) for \(V {\text {mod}} x^r\). This requires a time \(O \left( k \mathsf {SM} (r, r) \log r \right) \) for the computation of \(H_1, \ldots , H_k\) modulo \(x^r\) and a time \(O (kr^{\omega })\) for the remaining linear algebra. We next compute the unique constant matrix C such that \(B = CH_1\) modulo \(x^r\). Since \(\varGamma ^{*}\) is non singular, we have \(B = CH_1\) at any order, so it suffices to compute \(H_1\) up to order \(x^{2 n' + r}\) in order to obtain B up to order \(x^{2 n' + r}\). \(\square \)

Remark 11

An interesting question is whether there exists a faster algorithm to compute the orders s and t of \(\varGamma ^{*} = {\text {gcrd}}^{*} (L_1, \ldots , L_k)\) and \(\varLambda ^{*} = {\text {lclm}}^{*} (L_1, \ldots , L_k)\), without computing \(\varGamma ^{*}\) and \(\varLambda ^{*}\) themselves. For this, it suffices to compute the dimensions of \({\text {Vect}} (H_1) \cap \cdots \cap {\text {Vect}} (H_k)\) and \({\text {Vect}} (H_1) + \cdots + {\text {Vect}} (H_k)\). Assuming that we are at a “non singular point”, the answer is therefore yes: using the techniques from the proofs of Theorems 14 and 13, we may compute s in time \(O \left( k \mathsf {SM} (r, r) \log r \right) = \tilde{O} (kr^{\omega })\) and t in time \({O \left( k \mathsf {SM} (t, r) \log t + krt^{\omega - 1} \right) } = \tilde{O} (krt^{\omega - 1})\).