1 Introduction

Let A be a real \(n\times n\) matrix and consider the solution \(Y(t)\in {\mathbb {R}}^n\) of the linear system of differential equations \([DY_1~\ldots ~ DY_n]^T= A [Y_1 ~ \ldots ~ Y_n]^T\) as the independent variable t runs in some open subset X of \({\mathbb {R}}\). (Throughout the paper, D stands for the differential operator.) One classical approach to this problem is to complexify the system and use the Jordan canonical form of A to find certain differential equations of higher orders whose solutions lead to a solution of the given system. (See [1].) One then has to return to the real case and transfer the information obtained over \({\mathbb {C}}\) back to \({\mathbb {R}}\). (It is not generally true that the information obtained via the Jordan canonical form of a matrix can be systematically transferred back to the original field when that field is not algebraically closed; see [2] (pp. 262–263) and [3]; see also [4].) As a direct proof with no change in the underlying field, the authors of [5] apply the rational canonical form to reduce the linear system

$$\begin{aligned} DY=AY \end{aligned}$$
(1)

to the family

$$\begin{aligned} f_j(D)z=0, ~(j=1,2,\ldots , k) \end{aligned}$$
(2)

whose individual solutions sum up to a solution for (1), where \(f_1,f_2,\ldots , f_k\) are the polynomials appearing in the rational canonical form of the matrix A. (See below.)
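For readers who wish to experiment, the following minimal sketch (ours, not from [5]) carries out this reduction for the \(2\times 2\) companion matrix of \(x^2+1\), using the sympy library for verification; the names A, f, Y0, Y1 are illustrative, and the recovery of the second coordinate anticipates formula (13) below.

```python
import sympy as sp

t, x = sp.symbols('t x')
A = sp.Matrix([[0, -1], [1, 0]])     # companion matrix of f(x) = x^2 + 1

# The characteristic polynomial f of A yields the scalar equation f(D)z = 0:
f = A.charpoly(x).as_expr()
print(f)                             # x**2 + 1, i.e. z'' + z = 0

z = sp.Function('z')
print(sp.dsolve(sp.diff(z(t), t, 2) + z(t), z(t)))  # z(t) = C1*sin(t) + C2*cos(t)

# One scalar solution gives Y0; the remaining coordinate is recovered from it
# (here c0 = c2 = 1, so Y1 = -D(Y0), cf. (13) in Sect. 2):
Y0 = sp.cos(t)
Y1 = -sp.diff(Y0, t)
Y = sp.Matrix([Y0, Y1])
assert sp.diff(Y, t) - A * Y == sp.zeros(2, 1)
```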

In the present paper, we continue the work in [5] and obtain a one-to-one correspondence between the solutions of the problems of types (1) and (2). We also follow those authors interested in promoting the algebraic nature of differential equations and lessening their analytical calculations; in this way, we generalize the classical differential operator to any derivation acting on a differential algebra \({\mathcal {D}}\) of functions with common domain and codomain in a field \({\mathbb {F}}\) of arbitrary characteristic \(\mathrm{char}({\mathbb {F}})\); see [6, 7]. Some of our results do not need the full algebraic structure of the collection; a vector space structure may suffice.

The following definition is taken from [7]. Note that the underlying field \({\mathbb {F}}\) may have arbitrary characteristic, which will be explicitly specified in some cases. We thank the referee, whose review resulted in the following elegant definition as well as a logical reorganization of the whole paper.

Definition 1.1

A (commutative) differential algebra over a field \({\mathbb {F}}\) is a commutative \({\mathbb {F}}\)-algebra \({\mathcal {D}}\) equipped with a derivation D; that is, a linear operator \(D:{\mathcal {D}}\rightarrow {\mathcal {D}}\) satisfying the Leibniz product rule

$$\begin{aligned} D(fg)=D(f)g+fD(g). \end{aligned}$$
(3)

Throughout the paper, the following notation is fixed. The derivation D acts on the differential algebra \({\mathcal {D}}\). As mentioned earlier, in some cases the mere linear properties of D and \({\mathcal {D}}\) suffice; in particular, we may not need (3). An element \(f\in {\mathcal {D}}\) is called a “differentiable function” and the function Df is called its derivative. (Note: if we use the derivative in different contexts, we use different notation; for example, if \(f(x)=a_0+a_1x+\cdots +a_nx^n\) is a polynomial with derivative \(f'(x)=a_1+2a_2x+\cdots +na_n x^{n-1}\) and if D is the differentiation on \(C^{(1)}([0,1])\), we would certainly replace the ambiguous notation (Df)(D) by the clear functional calculus notation \(f'(D)\).) Since \(Df\in {\mathcal {D}}\), the elements of \({\mathcal {D}}\) may be regarded as infinitely differentiable functions. In practice, however, we do not want to assume differentiability beyond the order of a differential equation; in the next paragraph, we observe a similar issue with linear systems of differential equations.
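As a concrete toy model of Definition 1.1 and of the notation just fixed, the following sketch (our illustration, assuming the computer algebra package sympy) realizes \({\mathcal {D}}={\mathbb {Q}}[x]\) with D the formal derivative, and checks linearity, the Leibniz rule (3), and the normalization \(D(\mathbf{id})=\mathbf{1}\) that condition (5) imposes later.

```python
import sympy as sp

x = sp.symbols('x')

def D(f):
    """Formal derivative: a derivation on the polynomial algebra Q[x]."""
    return sp.diff(f, x)

f = 3*x**2 + x + 1
g = x**3 - 2

# Linearity and the Leibniz rule (3):
assert sp.expand(D(5*f + 7*g) - (5*D(f) + 7*D(g))) == 0
assert sp.expand(D(f*g) - (D(f)*g + f*D(g))) == 0

# The normalization D(id) = 1 required later in (5):
assert D(x) == 1
```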

Regarding a system of linear differential equations, we need to extend the notion of derivative to “vectors” in \({\mathcal {D}}^n\), the Cartesian product of n copies of \({\mathcal {D}}\). The notation \(Y\in {\mathcal {D}}^n\) stands for a so-called “differentiable vector-valued function”. If \(Y_1,Y_2,\ldots , Y_n\) are the components of Y, the derivative \({{\tilde{D}}}Y\) of the vector Y is the differentiable vector whose components are \(DY_1,DY_2,\ldots ,DY_n\), respectively. Again, requiring each component of Y to lie in \({\mathcal {D}}\) means that Y is assumed to be infinitely differentiable, which is not the case in practical problems involving systems of linear differential equations.

The following lemma resolves these ambiguities. Observe that if \({\mathcal {D}}_1\subset {\mathcal {D}}_0\) are vector spaces, if \(D_1:{\mathcal {D}}_1\rightarrow {\mathcal {D}}_0\) is a linear operator, and if \(g\in {\mathbb {F}}[x]\) is of degree r, then the expression \(g(D_1)\eta \) is meaningful whenever \(\eta , D_1\eta ,D_1^2\eta ,\ldots , D_1^{r-1}\eta \in {\mathcal {D}}_1\).

Lemma 1.2

Let \(D_1:{\mathcal {D}}_1\rightarrow {\mathcal {D}}_0\) be a linear transformation, where \({\mathcal {D}}_1\) is a vector subspace of \({\mathcal {D}}_0\). Define \({\mathcal {D}}:=\{\zeta \in {\mathcal {D}}_1:~D_1^k\zeta \in {\mathcal {D}}_1~\forall k\in {\mathbb {N}}\}\). The following assertions are true.

  1. (i)

    If \(\{\eta ,D_1\eta ,\ldots ,D_1^{r-1}\eta \}\subset {\mathcal {D}}_1\) for some \(r\in {\mathbb {N}}\) and if \(g(D_1)\eta = 0\) for some polynomial \(g \in {\mathbb {F}}[x]\) of degree r, then \(\eta \in {\mathcal {D}}\).

  2. (ii)

    If \(Y\in {\mathcal {D}}_1^n\) and if \({{\tilde{D}}}_1Y=AY\) for some \(n\times n\) scalar matrix A, then \(Y \in {\mathcal {D}}^n\). (Here, \({{\tilde{D}}}_1:{\mathcal {D}}_1^n\rightarrow {\mathcal {D}}_0^n\) is defined in the same manner that \({{\tilde{D}}}:{\mathcal {D}}^n\rightarrow {\mathcal {D}}^n\) was defined.)

  3. (iii)

    If \({\mathcal {D}}_1\) happens to be a commutative \({\mathbb {F}}\)-algebra and if \(D_1\) satisfies the Leibniz formula (3), then \(D:=D_1|_{\mathcal {D}}\) is a derivation.

Proof

By the hypotheses of Part (i), \(D_1^k\eta \in {\mathcal {D}}_1\) for \(k=0,1,\ldots ,r-1\). Assume, by induction, that \(D_1^k\eta \in {\mathcal {D}}_1\) for all k less than some integer \(m\ge r\); we show that \(D_1^m \eta \in {\mathcal {D}}_1\). Assume without loss of generality that \(g(x)=a_0+a_1x+\cdots +a_{r-1}x^{r-1}+x^r\). Applying \(D_1^{m-r}\) to the relation \(g(D_1)\eta =0\) gives

$$\begin{aligned} D_1^m\eta =-a_0D_1^{m-r}\eta -a_1D_1^{m-r+1}\eta -\cdots -a_{r-1}D_1^{m-1}\eta \in {\mathcal {D}}_1, \end{aligned}$$

which shows that \(\eta \in {\mathcal {D}}\).

In Part (ii), observe that if \(Y\in {\mathcal {D}}_1^n\), then \(BY\in {\mathcal {D}}_1^n\) and \({{\tilde{D}}}_1(BY)=B{{\tilde{D}}}_1(Y)\) for any \(n\times n\) scalar matrix B. By assumption, \(Y\in {\mathcal {D}}_1^n\). Assume, by induction, that \({{\tilde{D}}}_1^ {m-1} Y\in {\mathcal {D}}_1^n\) for some positive integer m. Then \(A{{\tilde{D}}}_1^ {m-1} Y\in {\mathcal {D}}_1^n\), which implies that \({{\tilde{D}}}_1^ m Y={{\tilde{D}}}_1^ {m-1}(AY)=A{{\tilde{D}}}_1^ {m-1} Y\in {\mathcal {D}}_1^n\). We conclude that \(Y\in {\mathcal {D}}^n\).

The proof of Part (iii) is straightforward. \(\square \)

So far, polynomials have been treated as (external) operations applied to the differential operator \(D:{\mathcal {D}}\rightarrow {\mathcal {D}}\); often, we also need polynomials as differentiable functions in \({\mathcal {D}}\) upon which D acts as an operator. In the latter case, we need \({\mathcal {D}}\) to be a subalgebra of the \({\mathbb {F}}\)-valued functions defined on a subset X of \({\mathbb {F}}\), and polynomials appear as functions of the form \(f(t)=a_0+a_1t+a_2t^2+\cdots +a_{r-1}t^{r-1}+a_rt^r\) \((t\in X)\). Therefore, in most cases, we require \({\mathcal {D}}\) to be a linear subspace of the functions with a common domain \(X \subset {\mathbb {F}}\) satisfying all or some of the following conditions.

$$\begin{aligned}&\{f|_X:~f\in {\mathbb {F}}[x]\}\subset {\mathcal {D}}. \end{aligned}$$
(4)
$$\begin{aligned}&\mathrm{card}(X)=\infty ,~\mathrm{and~} D(\mathbf{id})=\mathbf{1},~\mathrm{where~}~ \mathbf{id}(t)\equiv t~\mathrm{and~}{} \mathbf{1}(t)\equiv 1. \end{aligned}$$
(5)

Our main concern is function algebras \({\mathcal {D}}\) and linear operators D satisfying (3)–(5). However, the literature on derivatives over the p-adic fields \({\mathbb {Q}}_p\) and over complete topological fields other than \({\mathbb {R}}\) and \({\mathbb {C}}\) deals with kinds of derivatives that lack the product rule; for this, we refer to [8] and the references cited there. The following simple proposition helps Theorem 2.1 loosen the conditions on the derivative and leaves the door open for possible applications to p-adic problems. If we stick to (3)–(5), we have to begin with a suitable definition of the derivative. As our next project, we are working on a version of the derivative on the p-adic fields \({\mathbb {Q}}_p\); although hard, it is an interesting task. In the purely algebraic setting, where \({\mathbb {F}}\) is a general field with no topological structure and \({\mathcal {D}}\) consists of polynomials, we are very limited; for example, the higher order differential equation \(p(D)(h)=0\) can have a nontrivial solution only when \(p(0)=0\) (as in the proofs of Part (ii) of Theorems 2.1 and 2.4).

Proposition 1.3

Let \({\mathcal {D}}\) be a vector space of \({\mathbb {F}}\)-valued functions defined on a common domain X and assume \({\mathcal {D}}\) contains at least one nonzero constant function. Suppose \(D\in L({\mathcal {D}})\) is a linear operator satisfying (3) whenever f and g are constant functions in \({\mathcal {D}}\). Then \(f\in {\mathcal {D}}\) and \(Df=0\) for all constant functions f defined on X.

Proof

Let \(g\in {\mathcal {D}}\) be a nonzero constant function, say \(g\equiv c\) with \(c\ne 0\). Then \(g\cdot g=cg\), so (3) and linearity give \(2cD(g)=D(g\cdot g)=D(cg)=cD(g)\); hence \(D(g)=0\). Every constant function f on X is a scalar multiple of g; thus \(f\in {\mathcal {D}}\) and \(Df=0\). \(\square \)

Throughout the remainder of the paper, the function \(\mathbf{id}^k(t)\) is denoted by the convenient polynomial notation \(t^k\), where t runs in \(X\subset {\mathbb {F}}\). The following theorem relates the solutions of a differential equation of the form \(f(D)h=0\) with those of the differential equation of the form \(f^r(D)h=0\) for a fixed \(f\in {\mathbb {F}}[x]\) and a positive integer r.

Theorem 1.4

Let \(({\mathcal {D}},D)\) satisfy (3)–(5). Then the following assertions are true.

  1. (i)

    For all \(~k\in {\mathbb {N}},~\) \(~D(t^k)=kt^{k-1}\).

  2. (ii)

    If \(f(D)h=0\) for some \(f\in {\mathbb {F}}[x]\) and some \(h\in {\mathcal {D}}\), then

    $$\begin{aligned} f^r(D)(t^rh)=r! [f'(D)]^rh~\mathrm{~ and~}~f^{r+1}(D)(t^rh)= 0, ~\mathrm{~for~ } r=0,1,2,\cdots . \end{aligned}$$
    (6)
  3. (iii)

    Furthermore, if \(f\in {\mathbb {F}}[x]\) is a minimal polynomial satisfying \(f(D)h=0\) and if \(f^r(D)(t^rh)= 0\), then either \(r!= 0\) in \({\mathbb {F}}\) or \(h=0\).

Proof

The proof of (i) follows easily by induction on k. To prove (ii), let \(r\ge 0\) and observe, in view of Newton’s formula, that

$$\begin{aligned} f(D)(t^rh)= & {} \sum _{k=0}^ma_kD^k(t^rh)=\sum _{k=0}^ma_k\sum _{i=0}^{k_r}\left( \begin{array}{c}k\\ i \end{array} \right) \frac{r!}{(r-i)!}t^{r-i}D^{k-i}h\nonumber \\= & {} \sum _{s=0}^r \frac{r!}{s!}t^s\sum _{k=r-s}^m a_k\left( \begin{array}{c}k\\ r-s\end{array}\right) D^{k-r+s} h\nonumber \\= & {} t^rf(D)h+\sum _{s=0}^{r-1}\frac{r!}{s!}t^s\sum _{k=r-s}^m a_k\left( \begin{array}{c}k\\ r-s\end{array}\right) D^{k-r+s} h\nonumber \\= & {} rt^{r-1}f'(D)h+\sum _{s=0}^{r-2}~\sum _{k=r-s}^m b_{sk}t^s D^{k-r+s} h \end{aligned}$$
(7)

for some scalars \(b_{sk}\in {\mathbb {F}}\), where \(k_r:=\min \{k,r\}\). The formulas in (6) are clearly true for \(r=0\). Assume, as the induction step, that there exists an integer \(n\ge 0\) such that (6) holds for all \(0\le r\le n\). Since \(f'(D)h\) is a solution of \(f(D)\zeta =0\), it follows from (7) that

$$\begin{aligned} f^{n+1}(D)(t^{n+1}h)= & {} (n+1) f^n(D)(t^nf'(D)h)+\sum _{s=0}^{n-1}~\sum _{k=n+1-s}^m b_{sk}f^n(D)(t^s D^{k-n-1+s} h)\\= & {} (n+1)![f'(D)]^{n+1}h; \end{aligned}$$

here the double sum vanishes because each \(D^{k-n-1+s}h\) solves \(f(D)\zeta =0\) and, by the induction hypothesis, \(f^{s+1}(D)(t^s\zeta )=0\) for any such solution \(\zeta \), while \(s\le n-1\). Consequently,

$$\begin{aligned} f^{n+2}(D)(t^{n+1}h)= & {} (n+1)!f(D)([f'(D)]^{n+1}h)=0. \end{aligned}$$

(Note that f(D) and \(f'(D)\) commute.) This completes the proof of Part (ii).

Finally, assume further that \(r!\ne 0\) in \({\mathbb {F}}\) and that \(f^r(D)(t^rh)=0\). In view of (6), \([f'(D)]^rh=0\). Since f is minimal and gcd\((f,(f')^r)=1\), it follows that \(h=0\). \(\square \)
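Identity (6) can be checked by machine in a concrete case; the following sketch (ours, not from the paper) takes \({\mathbb {F}}={\mathbb {R}}\), D the usual derivative, \(f(x)=x^2+1\), \(h=\cos t\) and \(r=2\), and verifies both formulas of (6) with sympy.

```python
import sympy as sp

t = sp.symbols('t')
h = sp.cos(t)                # f(D)h = h'' + h = 0 for f(x) = x^2 + 1

def apply_poly(coeffs, u):
    """Apply sum_k coeffs[k] * D^k to u, where coeffs[k] multiplies D^k."""
    return sum(c * sp.diff(u, t, k) for k, c in enumerate(coeffs))

f2 = [1, 0, 2, 0, 1]         # f(x)^2 = 1 + 2x^2 + x^4
fp = [0, 2]                  # f'(x) = 2x

r = 2
lhs = apply_poly(f2, t**r * h)                             # f^2(D)(t^2 h)
rhs = sp.factorial(r) * apply_poly(fp, apply_poly(fp, h))  # 2! [f'(D)]^2 h
assert sp.simplify(lhs - rhs) == 0                         # both equal -8 cos t

# and f^{r+1}(D)(t^r h) = 0:
f3 = [1, 0, 3, 0, 3, 0, 1]                                 # (1 + x^2)^3
assert sp.simplify(apply_poly(f3, t**r * h)) == 0
```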

We conclude this introductory section by fixing some notation regarding the cyclic decomposition of \(T\in L(V)\) for some n-dimensional vector space V over \({\mathbb {F}}\). By the cyclic decomposition theorem,

$$\begin{aligned} T=T_1\oplus T_2\oplus \cdots \oplus T_r, \end{aligned}$$
(8)

where each \(T_j\) is a cyclic operator with minimal polynomial \(f_j\) such that \(f_{j+1}|f_j\) for \(j=1,2,\ldots ,r-1\), and the product

$$\begin{aligned} f=f_1f_2\ldots f_r \end{aligned}$$

is the characteristic polynomial of T. For \(j=1,2,\ldots ,r\), let \(\phi _j\) be a cyclic vector of \(T_j\), and set \(n_j=\deg (f_j)\) and \(d_j:=n_j-1\). Observe that the set

$$\begin{aligned} {\mathcal {B}}_j := \{\phi _j, T\phi _j, T^2\phi _j, \ldots , T^{d_j}\phi _j\} \end{aligned}$$

is a basis for the domain \(V_j\) of \(T_j\), and that the set

$$\begin{aligned} {\mathcal {B}}:=\{\phi _1,T\phi _1,\ldots ,T^{n_1-1}\phi _1, \phi _2,T\phi _2,\ldots ,T^{n_2-1}\phi _2, \ldots ,\phi _r,T\phi _r,\cdots ,T^{n_r-1}\phi _r\} \end{aligned}$$

is a basis for \(V=V_1\oplus V_2\oplus \cdots \oplus V_r\) with respect to which

$$\begin{aligned} {[T]}_{{\mathcal {B}}\times {\mathcal {B}}}=[T_1]_{{\mathcal {B}}_1\times {\mathcal {B}}_1}\oplus [T_2]_{{\mathcal {B}}_2\times {\mathcal {B}}_2}\oplus \cdots \oplus [T_r]_{{\mathcal {B}}_r\times {\mathcal {B}}_r}. \end{aligned}$$
(9)
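For concreteness, each block \([T_j]_{{\mathcal {B}}_j\times {\mathcal {B}}_j}\) in (9) is the companion matrix of \(f_j\) (compare (16) below). The following sketch (ours, using sympy) builds such a direct sum for \(f_1=(x-1)(x^2+1)=x^3-x^2+x-1\) and \(f_2=x-1\), which satisfy the divisibility condition \(f_2\mid f_1\); the helper name companion is ours.

```python
import sympy as sp

def companion(coeffs):
    """Companion matrix of the monic polynomial c0 + c1 x + ... + x^n,
    given coeffs = [c0, c1, ..., c_{n-1}] (the leading 1 is implicit)."""
    n = len(coeffs)
    C = sp.zeros(n, n)
    for i in range(1, n):
        C[i, i - 1] = 1            # subdiagonal of ones
    for i in range(n):
        C[i, n - 1] = -coeffs[i]   # last column carries the coefficients
    return C

T1 = companion([-1, 1, -1])        # f1 = x^3 - x^2 + x - 1
T2 = companion([-1])               # f2 = x - 1
T = sp.diag(T1, T2)                # [T]_{B x B} = [T1] ⊕ [T2] as in (9)
print(T)
```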

2 The Main Results

In the remainder of the paper, we assume that \(({\mathcal {D}},D)\) satisfies (3)–(5). We fix an n-dimensional vector space V over \({\mathbb {F}}\) and let Y denote a V-valued function such that \(\theta (Y)\in {\mathcal {D}}\) for all linear functionals \(\theta \in V'\). The linear mapping D extends uniquely to the collection of all such vectors Y via

$$\begin{aligned} \theta (DY)=D(\theta (Y)), ~\forall \theta \in V' \end{aligned}$$
(10)

or, equivalently, via

$$\begin{aligned} \theta _i(DY)= D(\theta _i(Y))~(i=1,2,\ldots ,n), \end{aligned}$$
(11)

where \(\theta _i(\cdot )\) denotes the \(i^{\mathrm{th}}\) coordinate of a vector with respect to an arbitrary but fixed basis \({\mathcal {A}}\) of V.

In the present section, we study the tight relation between the solution space of the linear system of differential equations \(DY=TY\) and that of the higher order differential equation \(f(D)z=0\), where f is the characteristic polynomial of \([T]_{{\mathcal {A}}\times {\mathcal {A}}}\). Along with the notation X, \({\mathbb {F}}\), A, V, \({\mathcal {A}}\), T, \(T_j\), f, \(f_j\), \(d_j\), \({\mathcal {B}}_j\) and \( {\mathcal {B}}\) described in (8)–(9), we further fix the following notation with no further reference; factorize \(f_j\) as

$$\begin{aligned} f_j=f_{j,1}f_{j,2}\ldots f_{j,r_j}, \end{aligned}$$

where each factor \(f_{j,\ell }\) is a power of a prime polynomial \(p_{j,\ell }\) \((\ell =1,2,\ldots ,r_j)\). Now, the primary decomposition theorem yields

$$\begin{aligned} V_j=V_{j,1}\oplus V_{j,2}\oplus \cdots \oplus V_{j,r_j},~T_j=T_{j,1}\oplus T_{j,2}\oplus \cdots \oplus T_{j,r_j}, \end{aligned}$$

where \(f_{j,\ell }\) is the minimal polynomial of \(T_{j,\ell }=T|_{V_{j,\ell }}\). Moreover, each summand \(V_{j,\ell }\) has a basis

$$\begin{aligned} {\mathcal {B}}_{j,\ell }=\{\phi _{j,\ell },T\phi _{j,\ell },T^2\phi _{j,\ell },\ldots ,T^{d_{j,\ell }}\phi _{j,\ell }\},~\mathrm{~where~}~d_{j,\ell }=\deg (f_{j,\ell })-1. \end{aligned}$$

We will also fix the notation

$$\begin{aligned} Y_{j,\ell }= Y_{j,\ell ,0} \phi _{j,\ell }+ Y_{j,\ell ,1} T\phi _{j,\ell }+ \cdots +Y_{j,\ell ,d_{j,\ell }}T^{d_{j,\ell }}\phi _{j,\ell } \end{aligned}$$

for the projection \(Y_{j,\ell }\) of any vector \(Y\in V\) onto \(V_{j,\ell }\) along the direct sum of the remaining summands. The minimal polynomial of \(T_{j,\ell }\) is denoted as

$$\begin{aligned} f_{j,\ell }(x)=\sum _{k=0}^{m_{j,\ell } }c_{_{j,\ell ,k}}x^k~\mathrm{~with~}~c_{_{j,\ell ,m_{j,\ell }}}=1~\mathrm{~and~}~ m_{j,\ell }:=\deg (f_{j,\ell }). \end{aligned}$$

The following theorem and its proof are essentially the content of Theorems 3.4 and 3.7 of [5]; our slight modifications of the statements and their proofs are reproduced here for ease of reference and for our future needs.

Theorem 2.1

With the notation fixed in Sect. 1 as well as the paragraphs preceding the theorem, the following assertions are true.

  1. (i)

Assume \(f = p^s\) for some prime polynomial p and some positive integer s. Then \(n = s \deg p\), \(f = f_{1,1}\), \({\mathcal {B}} = {\mathcal {B}}_{1,1}\), \(r = r_1 = 1\), \(f(x) = \sum _{k = 0}^n c_k x^k\), and every coordinate \(Y_{k,{\mathcal {B}}}\) of the solution Y can be computed as follows.

    $$\begin{aligned}&\mathrm{If}~f(0)=0,~\mathrm{then} ~Y_{k-1,{\mathcal {B}}}=DY_{k,{\mathcal {B}}},~\mathrm{where~}~Y_{-1, {\mathcal {B}}}:=0,~(0\le k\le n-1). \end{aligned}$$
    (12)
    $$\begin{aligned}&\mathrm{If}~f(0)\ne 0,~\mathrm{then}~ Y_{n-q, {\mathcal {B}}}= -c_{0}^{-1}\sum _{i=0}^{q-1}c_{n-i}D^{q-i}Y_{0,{\mathcal {B}}},~(1\le q\le n). \end{aligned}$$
    (13)
  2. (ii)

With f as in the previous part, if every coordinate \(Y_{k, {\mathcal {B}}}\) satisfies (12)–(13), then Y satisfies the system

    $$\begin{aligned} DY= TY. \end{aligned}$$
    (14)
  3. (iii)

If f is a general polynomial and each \(Y_{j,\ell }\) satisfies the analogue \(DY_{j,\ell }= T_{j,\ell }Y_{j,\ell }\) of (14), then

    $$\begin{aligned} DY=TY, ~\mathrm{where~}~ Y=\oplus _{j,\ell } Y_{j,\ell }. \end{aligned}$$
    (15)

Proof

In Parts (i) and (ii), we drop the subscript \({\mathcal {B}}\) in \(Y_{\mathcal {B}}\) and \(Y_{k,{\mathcal {B}}}\). Observe that the equation \(DY = TY\) has the following matricial form with respect to the basis \({\mathcal {B}}\):

$$\begin{aligned} \left[ \begin{array}{l}DY_0\\ DY_1\\ DY_2\\ \vdots \\ DY_{n-2}\\ DY_{n-1}\end{array}\right] =\left[ \begin{array}{cccccc}0&{}\quad 0&{}\quad 0&{}\cdots &{}0&{}\quad -c_0\\ 1&{}\quad 0&{}\quad 0&{}\cdots &{}0&{}\quad -c_1\\ 0&{}\quad 1&{}\quad 0&{}\cdots &{}0&{}\quad -c_2\\ \vdots &{}\vdots &{}\vdots &{}\ddots &{}\vdots &{}\vdots \\ 0&{}\quad 0&{}\quad 0&{}\cdots &{}0&{}\quad -c_{n-2}\\ 0&{}\quad 0&{}\quad 0&{}\cdots &{}1&{}\quad -c_{n-1} \end{array}\right] \left[ \begin{array}{l}Y_0\\ Y_1\\ Y_2\\ \vdots \\ Y_{n-2}\\ Y_{n-1} \end{array}\right] . \end{aligned}$$
(16)

Setting \(Y_{-1}:=0\), equation (16) is equivalent to

$$\begin{aligned} DY_t=Y_{t-1}-c_t Y_{n-1}, ~ t\in \{0,1,\ldots ,n-1\}. \end{aligned}$$
(17)

We now consider two cases.

Case 1: \(c_0=0\). Since f is a power of a prime, it follows that \(f(x)=x^n\); thus \(c_{n-1}=c_{n-2}=\cdots =c_0=0\) and the equations (12) are immediate.

Case 2: \(c_0\ne 0\). Letting \(t=0\) in (17), it follows that

$$\begin{aligned} Y_{n-1}=-c_0^{-1}DY_0, \end{aligned}$$
(18)

which is (13) for \(q=1\). Assume, by induction on q, that (13) holds for some \(q<n\); i.e.,

$$\begin{aligned} Y_{n-q}=-c_0^{-1}\sum _{i=0}^{q-1}c_{n-i}D^{q-i}Y_0. \end{aligned}$$

Letting \(t=n-q\) in (17) and substituting for \(Y_{n-1}\) from (18) yield

$$\begin{aligned} Y_{n-q-1}=DY_{n-q}+c_{n-q}Y_{n-1}=-c_0^{-1}\sum _{i=0}^{q}c_{n-i}D^{q-i+1}Y_0, \end{aligned}$$
(19)

which completes the proof of (i).

For the converse, let Y be a vector whose coordinates satisfy (12)–(13). We have to show that \(DY=TY\) or, equivalently, that (17) holds. The case \(c_0=0\) is clear. So, assume \(c_0\ne 0\). Then, by (13),

$$\begin{aligned} c_0(DY_t-Y_{t-1})= -\sum _{i=0}^{n-t-1}c_{n-i}D^{n-t-i+1}Y_0+\sum _{i=0}^{n-t}c_{n-i}D^{n-t-i+1}Y_0 =c_tDY_0 \end{aligned}$$

which establishes (17) and completes the proof of (ii).

Finally, let each \(Y_{j,\ell }\) satisfy \(DY_{j,\ell }= T_{j,\ell }Y_{j,\ell }\) as in Part (iii) and let \(\alpha _{j,\ell }\in {\mathbb {F}}\) be arbitrary scalars. Then,

$$\begin{aligned} DY= & {} D(\oplus _{j,\ell }\alpha _{j,\ell }Y_{j,\ell }) = \oplus _{j,\ell }\alpha _{j,\ell } DY_{j,\ell } = \oplus _{j,\ell }\alpha _{j,\ell } T_{j,\ell }Y_{j,\ell }\\= & {} \oplus _{j,\ell }\alpha _{j,\ell } TY_{j,\ell } = T(\oplus _{j,\ell }\alpha _{j,\ell }Y_{j,\ell })= TY. \end{aligned}$$

\(\square \)
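The nilpotent case of Part (i) is easy to verify by machine; the following sketch (ours, with sympy) takes \(f(x)=x^3\), builds the coordinates by (12) starting from \(Y_2=t^2\), and confirms \(DY=TY\).

```python
import sympy as sp

t = sp.symbols('t')
T = sp.Matrix([[0, 0, 0],
               [1, 0, 0],
               [0, 1, 0]])        # companion matrix of f(x) = x^3

Y2 = t**2
Y1 = sp.diff(Y2, t)               # (12): Y1 = D(Y2) = 2t
Y0 = sp.diff(Y1, t)               # (12): Y0 = D(Y1) = 2
Y = sp.Matrix([Y0, Y1, Y2])

assert sp.diff(Y, t) - T * Y == sp.zeros(3, 1)
```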

Note The equations in (12) imply the higher order differential equations \(D^{k+1}Y_{j,\ell ,k}=0\) for \(0\le k\le n-1\); we will later see that the converse may fail. Also, note that the first coordinate \(Y_{j,\ell ,0}\) of \(Y_{j,\ell }\) (with respect to the basis \({\mathcal {B}}_{j,\ell }\)) satisfies the following \(m_{j,\ell }^{\mathrm{th}}\)-order linear differential equation

$$\begin{aligned} p_{j,\ell }^{s_{j,\ell }}(D)Y_{j,\ell ,0}=0, \end{aligned}$$
(20)

where \(p_{j,\ell }\in {\mathbb {F}}[x]\) is a prime polynomial satisfying \(f_{j,\ell }=p_{j,\ell }^{s_{j,\ell }}\) for some positive integer \(s_{j,\ell }\) \((\ell =1,2,\ldots ,r_j;~j=1,2,\ldots ,r)\); in the special case \(~c_{_{j,\ell ,0}}=0,~\) it also satisfies the first-order differential equation

$$\begin{aligned} DY_{j,\ell ,0}=0. \end{aligned}$$
(21)

The following is an immediate corollary of (20)–(21).

Corollary 2.2

The following assertions are equivalent.

  1. (i)

    The system \(DY=TY\) has a nonzero solution in Y.

  2. (ii)

    The system \(DZ=SZ\) has a nonzero solution in Z for every cyclic part S of T whose characteristic polynomial is a power of a prime polynomial.

  3. (iii)

    The higher-order differential equation \(p^s(D)\zeta =0\) has a nonzero solution in \(\zeta \) for every prime polynomial p and every positive integer s such that \(p^s\) divides the characteristic polynomial of T.

Proof

The implications (i)\(\Leftrightarrow \)(ii)\(\Rightarrow \)(iii) were proven in Theorem 2.1 and the note following it. For the implication (iii)\(\Rightarrow \)(ii), we find a nonzero solution \(\zeta \) of \(p^s(D)\zeta = 0\) and consider the two cases \(p(0) = 0\) and \(p(0) \ne 0\). In the first case, we let \(Y_{n-1, {\mathcal {B}}}= \zeta \) and then construct Y as in (12). In case \(p(0) \ne 0\), we let \(Y_{0, {\mathcal {B}}} =\zeta \) and then construct Y as in (13). \(\square \)

The following corollary reveals that the solution space of the equation \(DY=TY\), when T has a prime characteristic polynomial, shares a kind of symmetry enjoyed by the solution set of the numerical equation \(p(t)=0\) in its splitting field; the difference is that the latter always has a solution in its splitting field, whereas we know nothing in general about the existence of a solution for \(DY=TY\) (except, of course, when \({\mathbb {F}}={\mathbb {R}}\) or \({\mathbb {C}}\)).

Corollary 2.3

Assume T has a prime characteristic polynomial p and let \({\mathcal {E}}\) be any given basis of the domain V of T. If Y is any solution of \(DY=TY\), then every coordinate of \([Y]_{\mathcal {E}}\) is a solution of the equation \(p(D)\zeta =0\).

Proof

Assume without loss of generality that \(n=\mathrm{deg}(p)\ge 2\) and let \(0\ne \psi _1\in V\). Define \({\mathcal {C}}_j = \{\psi _j,T\psi _j,T^2\psi _j,\ldots , T^{n-1}\psi _j\}\) for \(j=1,2\), where \(\psi _2:=T\psi _1\). Let \(Y(j)=[Y]_{{\mathcal {C}}_j}\) \((j=1,2)\). Then

$$\begin{aligned} \psi _1=-c_0^{-1}[c_1\psi _2+c_2T\psi _2+\cdots +c_{n-1}T^{n-2}\psi _2+T^{n-1}\psi _2], \end{aligned}$$

and

$$\begin{aligned} Y= & {} Y_0(1)\psi _1+Y_1(1)T\psi _1+\cdots +Y_{n-1}(1)T^{n-1}\psi _1\\= & {} Y_0(2)\psi _2+Y_1(2)T\psi _2+\cdots +Y_{n-1}(2)T^{n-1}\psi _2\\= & {} [Y_1(1)-c_0^{-1}c_1Y_0(1)]\psi _2+[Y_2(1)-c_0^{-1}c_2Y_0(1)]T\psi _2+\cdots +\\&[Y_{n-1}(1)-c_0^{-1}c_{n-1}Y_0(1)]T^{n-2}\psi _2-c_0^{-1}Y_0(1)T^{n-1}\psi _2. \end{aligned}$$

Thus, \(Y_0(2)=Y_1(1)-c_0^{-1}c_1Y_0(1)\). Since the first coordinate of a solution with respect to any cyclic basis satisfies \(p(D)\zeta =0\) (see (20)), we have \(p(D)(Y_0(1))=p(D)(Y_0(2))=0\) and, hence,

$$\begin{aligned} p(D)Y_1(1)=p(D)(Y_1(1)-c_0^{-1}c_1Y_0(1))=p(D)Y_0(2)=0. \end{aligned}$$

By a finite induction, the conclusion holds for all coordinates \(Y_i(1)\) \((i=0,1,2,\ldots ,n-1)\).

To complete the proof, let \({\mathcal {E}}\) be an arbitrary basis of V. Then \([Y]_{\mathcal {E}}=[I]_{{\mathcal {E}}\times {\mathcal {C}}_1}[Y]_{{\mathcal {C}}_1}\) and, thus, every coordinate of \([Y]_{\mathcal {E}}\) is a linear combination of the coordinates of \([Y]_{{\mathcal {C}}_1}\) and we are done. \(\square \)
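The following sketch (ours, assuming sympy) illustrates Corollary 2.3 for \(p(x)=x^2+1\) over \({\mathbb {R}}\): the solution \(Y=(\cos t,\sin t)\) is rewritten in an arbitrarily chosen basis, and every new coordinate still solves \(p(D)\zeta =0\); the matrix E below is an arbitrary illustrative choice.

```python
import sympy as sp

t = sp.symbols('t')
Y = sp.Matrix([sp.cos(t), sp.sin(t)])   # a solution of DY = TY, T as in (16)

E = sp.Matrix([[1, 3], [2, 5]])         # an arbitrary invertible change of basis
Y_E = E.inv() * Y                       # coordinates of Y in the new basis

p = lambda u: sp.diff(u, t, 2) + u      # p(D) = D^2 + 1
assert all(sp.simplify(p(c)) == 0 for c in Y_E)
```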

Note that Corollaries 2.2 and 2.3 do not claim the existence of a solution for the system \(DY=TY\); they only relate the existence of the solution of a system of linear differential equations to the existence of the solution of a differential equation of higher order with constant coefficients.

In the following, let \({\mathcal {S}}_T\) (resp. \({\mathcal {S}}_f\)) be the solution space of the system \(DY=TY\) (resp. the differential equation \(f(D)\zeta =0\)), where \(T \in L(V)\) (resp. \(f\in {\mathbb {F}}[x]).\) We say

$$\begin{aligned} {\mathcal {S}}_T~(\mathrm{resp.}~{\mathcal {S}}_f) ~\mathrm{~ is ~adequate ~if~}~ \dim {\mathcal {S}}_T \ge \dim V~(\mathrm{resp.} ~\dim {\mathcal {S}}_f \ge \deg f). \end{aligned}$$

Theorem 2.4

Let \(X,{\mathbb {F}},{\mathcal {D}},D\) satisfy (3)–(5). Assume the dimension n of V satisfies \(n!\ne 0\) in \({\mathbb {F}}\). Then the following assertions are equivalent.

  1. (i)

    The system \(DY=TY\) has an adequate solution space for any \(T\in L(V)\).

  2. (ii)

    The system \(DZ=SZ\) has an adequate solution space for any \(S\in L(V)\) having a prime characteristic polynomial.

  3. (iii)

    The higher order differential equation \(p(D)\zeta =0\) has an adequate solution space for any prime polynomial p.

Proof

The implication (i)\(~\Rightarrow ~\)(ii) follows from the fact that (ii) is a particular case of (i). Assume (ii) holds, let \(p(x)=c_0+c_1x+c_2x^2+\cdots +c_{m-1}x^{m-1}+x^m\) be a prime polynomial, and let S be a cyclic operator with characteristic polynomial p. Let \(\{Z(h):~h=1,2,\ldots ,m\}\) be a set of linearly independent solutions of \(DZ=SZ\). Now, consider two cases.

Case 1: \(p(0)=0\) or, equivalently, \(p(x)\equiv x\). Then \(S = 0\), \(m=1\), and the system \(DZ = SZ\) reduces to the equation \(D(Z(1))=0\). Similarly, the equation \(p(D)\zeta =0\) reduces to the equation \(D\zeta =0\) and, hence, (ii)\(~\Rightarrow ~\)(iii).

Case 2: \(p(0)\ne 0\). Choose m linearly independent solutions \(\{Z(1),Z(2),\ldots ,Z(m)\}\) of the system \(DZ=SZ\) represented as follows:

$$\begin{aligned} Z(h)=Z_0(h)e_0+Z_1(h)Se_0+\cdots +Z_{m-1}(h)S^{m-1}e_0,~(h=1,2,\ldots ,m), \end{aligned}$$

where \(e_0\) is a cyclic vector of S.

By (20), each \(Z_0(h)\) satisfies \(p(D)Z_0(h)=0\). It remains to show that the set

$$\begin{aligned} \{Z_0(1),Z_0(2),\ldots ,Z_0(m)\} \end{aligned}$$

is a linearly independent set of solutions of \(p(D)\zeta =0\). Assume \(\sum _{h=1}^m \alpha _hZ_0(h)=0\). By (13),

$$\begin{aligned} \sum _{h=1}^m\alpha _hZ_{m-q}(h)= & {} -c_0^{-1}\sum _{h=1}^m\alpha _h \sum _{i=0}^{q-1}c_{m-i} D^{q-i} Z_0(h)\\= & {} -c_0^{-1}\sum _{i=0}^{q-1}c_{m-i}D^{q-i}\sum _{h=1}^m\alpha _hZ_0(h)=0~{~}~(1\le q\le m). \end{aligned}$$

Therefore, \(\sum _{h=1}^m\alpha _hZ(h)=0\) and, hence, \(\alpha _1=\alpha _2=\cdots =\alpha _m=0\). This completes the proof of (ii)\(~\Rightarrow ~\)(iii).

We now assume (iii) is true and prove (i). In view of (14)–(15), the system \(DY=TY\) has solutions of the form \(Y=\oplus _{j,\ell } Y_{j,\ell }\), where \(Y_{j,\ell }\) is a solution of \(DY_{j,\ell }= T_{j,\ell }Y_{j,\ell }\) and the characteristic/minimal polynomial of \(T_{j,\ell }\) is a power \(p_{j,\ell }^{s_{j,\ell }}\) of a prime polynomial. It is therefore sufficient to show that the system \(DY=TY\) has an adequate solution space whenever the characteristic/minimal polynomial of T is \(p^s\) for some prime \(p\in {\mathbb {F}}[x]\) and some \(s\ge 1\). We drop the indices \(j,\ell \) and assume T has characteristic/minimal polynomial \(p^s\) with \(n=s\deg (p)\). Again, here, we consider two cases.

Case 1’: \(p(0)=0\) or, equivalently, \(p(x)\equiv x\). For \(h=0,1,2,\ldots ,n-1\), define

$$\begin{aligned} Y_k(h)(t)=D^{n-k-1}t^h~\forall t\in X, \end{aligned}$$

and observe that \((DY_k(h))(t)=Y_{k-1}(h)(t)\) for \(k=0,1,2,\ldots ,n-1\), where \(Y_{-1}(h)(t):=0\). It is easy to see that the vector \(Y(h)\in ({\mathbb {F}}^n)^X\) defined by \(Y(h):=Y_0(h)e_0+Y_1(h) Te_0+\cdots +Y_{n-1}(h)T^{n-1}e_0\) satisfies (12) and, hence, (14). Assume \(\sum _{h=0}^{n-1}\alpha _hY(h)=0\) for some \(\alpha _0,\alpha _1,\ldots ,\alpha _{n-1}\in {\mathbb {F}}\). Then

$$\begin{aligned} 0= & {} \alpha _{n-1} \frac{(n-1)!}{0!}e_0,\nonumber \\ 0= & {} [\alpha _{n-1} \frac{(n-1)!}{1!}t+\alpha _{n-2}\frac{(n-2)!}{0!}]Te_0,\nonumber \\ 0= & {} [\alpha _{n-1} \frac{(n-1)!}{2!}t^2+\alpha _{n-2}\frac{(n-2)!}{1!}t+\alpha _{n-3}\frac{(n-3)!}{0!}]T^2e_0,\nonumber \\&\vdots&\nonumber \\ 0= & {} [\alpha _{n-1}t^{n-1}+\alpha _{n-2}t^{n-2}+\cdots +\alpha _0]T^{n-1}e_0, \end{aligned}$$
(22)

and, hence, \(\alpha _0=\alpha _1=\cdots =\alpha _{n-1}=0\) which proves the desired adequacy.

Case 2’: \(p(0)\ne 0\). Choose linearly independent solutions \(\zeta _1,\zeta _2,\ldots ,\zeta _m\) of the equation \(p(D)\zeta =0\), where \(m=\deg (p)\). Assume \(p^s(x)=b_0+b_1x+\cdots +b_{n-1}x^{n-1}+x^n\), where \(n=sm\). For each \(h\in \{0,1,2,\ldots , s-1\}\) and each \(k\in \{1,2,\ldots ,m\}\), define \(Y(h,k)=Y_0(h,k)e_0+Y_1(h,k)Te_0+\cdots +Y_{n-1}(h,k)T^{n-1}e_0\) as follows:

$$\begin{aligned} Y_0(h,k)= & {} t^h\zeta _k,\nonumber \\ Y_{n-q}(h,k)= & {} -b_0^{-1}\sum _{i=0}^{q-1}b_{n-i}D^{q-i}Y_0(h,k),~(1\le q\le n-1). \end{aligned}$$
(23)

In view of Theorem 1.4, \(~p^s(D)Y_0(h,k)=0\) and, hence, the coordinates of each Y(h,k) satisfy (13). Thus, \(\{Y(h,k):~h=0,1,2,\ldots ,s-1;~k=1,2,\ldots ,m\}\) is a subset of the solution space of \(DY=TY\) with \(n(=ms)\) elements. To prove adequacy, it is sufficient to show that this set is linearly independent.

Let \(\alpha _{hk}\in {\mathbb {F}}\) be scalars such that \(\sum _{h=0}^{s-1}\sum _{k=1}^m\alpha _{hk}Y(h,k)=0\). Define

$$\begin{aligned} \eta _j=\sum _{h=0}^{s-j}\sum _{k=1}^m\alpha _{hk}Y_0(h,k)=\sum _{h=0}^{s-j}\sum _{k=1}^m\alpha _{hk}t^h\zeta _k,~\forall j=1,2,\ldots ,s, \end{aligned}$$

and observe that \(\eta _1=0\). We will show that \(\alpha _{hk}=0\) for \(h=0,1,\ldots ,s-1;~ k=1,2,\ldots ,m\). Since \(p^{s-1}(D)(t^h\zeta _k)=0\) for \(h=0,1, \ldots ,s-2\), it follows that

$$\begin{aligned} 0=p^{s-1}(D)\eta _1=p^{s-1}(D)(t^{s-1} \sum _{k=1}^m\alpha _{s-1,k}\zeta _k). \end{aligned}$$

By Part (iii) of Theorem 1.4, \(\alpha _{s-1,1}=\alpha _{s-1,2}=\cdots =\alpha _{s-1,m}=0\). Thus, \(\eta _2=0\) and, by a similar argument, \(\alpha _{s-2,1}=\alpha _{s-2,2}=\cdots =\alpha _{s-2,m}=0\). A finite induction on j completes the proof. \(\square \)
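Case 2’ of the preceding proof can be made concrete; the sketch below (ours, with sympy) takes \(p(x)=x^2+1\) and \(s=2\), so that \(p^2(D)\zeta =\zeta ''''+2\zeta ''+\zeta =0\), and verifies that the \(n=4\) functions \(t^h\zeta _k\) are solutions and are linearly independent (via their Wronskian).

```python
import sympy as sp

t = sp.symbols('t')
zetas = [sp.cos(t), sp.sin(t)]               # solutions of p(D)ζ = ζ'' + ζ = 0
sols = [t**h * z for h in (0, 1) for z in zetas]

p2 = lambda u: sp.diff(u, t, 4) + 2*sp.diff(u, t, 2) + u   # p^2(D)
assert all(sp.simplify(p2(u)) == 0 for u in sols)

# Linear independence via the Wronskian (rows = derivative orders):
W = sp.Matrix([[sp.diff(u, t, k) for u in sols] for k in range(4)])
assert sp.simplify(W.det()) != 0             # the determinant equals 4
```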

Corollary 2.5

With the hypotheses of Theorem 2.4 and the notation established before, let \(V=W_1\oplus W_2\oplus \cdots \oplus W_m\) be a decomposition of \(V={\mathbb {F}}^n\) into a direct sum of cyclic invariant subspaces of T such that the minimal/characteristic polynomial of each \(T_j=T|_{W_j}\) is of the form \(p_j^{s_j}\) for some prime polynomial \(p_j\in {\mathbb {F}}[x]\) and some positive integer \(s_j\) \((j=1,2,\ldots ,m)\). Let \(n_j=s_j\deg (p_j)\) and assume \({\mathcal {C}}_j=\{\psi _j,T\psi _j,\ldots , T^{n_j-1}\psi _j\}\) is a basis for \(W_j\) \((j=1,2,\ldots ,m)\); define \({\mathcal {C}}=\cup _{j=1}^m{\mathcal {C}}_j\). Then the following assertions are true.

  1. (i)

    If \(\zeta _j\) is a solution of \(p_j(D)\zeta =0\), then the function \(u_j(t) \zeta _j(t)~(t \in X)\) is a solution of \(p_j^{s_j}(D)\zeta =0\) for any polynomial \(u_j\in {\mathbb {F}}[x]\) of degree at most \(s_j-1\). Moreover,

    $$\begin{aligned} \mathrm{dim}({\mathcal {S}}_{p_j^{s_j}})\ge s_j\mathrm{dim}({\mathcal {S}}_{p_j}). \end{aligned}$$
  2. (ii)

    If \(\mu _j\) is a solution of the equation \(p_j^{s_j}(D)\zeta =0\), then the linear system \(DZ=T_jZ\) has a solution of the form

$$\begin{aligned} Z(j)=Z_0(j)\psi _j+Z_1(j)T\psi _j+\cdots +Z_{n_j-2}(j)T^{n_j-2}\psi _j+Z_{n_j-1}(j)T^{n_j-1}\psi _j, \end{aligned}$$

where the coefficients \(Z_i(j)\) are obtained from (22)–(23). Moreover,

    $$\begin{aligned} \mathrm{dim}({\mathcal {S}}_{p_j^{s_j}})=\mathrm{dim}({\mathcal {S}}_{T_j}) \end{aligned}$$

    holds.

  3. (iii)

Let \(f=\Pi _{j=1}^mp_j^{s_j}\) be the characteristic polynomial of T. If \(\mu _j\) is a solution of the equation \(p_j^{s_j}(D)\zeta =0\) and if Z(j) is the corresponding solution of the system \(DZ=T_jZ\), then the function \(\mu =\sum _{j=1}^m\alpha _j\mu _j\) is a solution of the equation \(f(D)\zeta =0\), and the vector \(Y =\oplus _{j=1}^m \beta _j Z(j)\) is a solution of the system \(DY=TY\), for arbitrary scalars \(\alpha _j,\beta _j\in {\mathbb {F}}\).

3 Classical Examples

In this section, we study some applications of the main results established in the previous section. In case \(X=(a,b)\subset {\mathbb {R}}={\mathbb {F}}\), one classical approach to the solution of the system \(DY=TY\) is to complexify the problem and use the Jordan canonical form of \([T]_{{\mathcal {A}}\times {\mathcal {A}}}\). Let us try the simplest example whose Jordan canonical approach requires complexification.

Consider the real system \(DY=TY\) with the following matricial form:

$$\begin{aligned}{}[DY]_{\mathcal {A}}=\left[ \begin{array}{cc}0&{}-1\\ 1&{}0\end{array}\right] [Y]_{\mathcal {A}}. \end{aligned}$$

One can find a basis \({\mathcal {B}}\subset {\mathbb {C}}^2\) such that

$$\begin{aligned}{}[DY]_{\mathcal {B}}=\left[ \begin{array}{cc}i&{}0\\ 0&{}-i\end{array}\right] [Y]_{\mathcal {B}}. \end{aligned}$$

Then \([Y]_{\mathcal {B}}=[\alpha e^{it},\beta e^{-it}]^T=[\alpha \cos t+i\alpha \sin t,\beta \cos t-i\beta \sin t ]^T\) for arbitrary complex numbers \(\alpha \) and \(\beta \). If the system \(DY=TY\) has a real solution \([Y]_{\mathcal {A}}\), then \([Y]_{\mathcal {B}}=[I]_{{\mathcal {B}}\times {\mathcal {A}}}[Y]_{\mathcal {A}}\) is surely a complex solution of \(DY=TY\), but the converse is not true. For instance, even if a nonzero complex solution \([Y]_{\mathcal {B}}\) of \(DY=TY\) yields a real solution \([Y]_{\mathcal {A}}=[I]_{{\mathcal {A}}\times {\mathcal {B}}}[Y]_{\mathcal {B}}\), the complex solution \([iY]_{\mathcal {A}}=[I]_{{\mathcal {A}}\times {\mathcal {B}}}[iY]_{\mathcal {B}}\) is clearly not real. So, from a complex solution alone, it is not clear how to extract a real one, especially when the characteristic polynomial of \([T]_{{\mathcal {A}}\times {\mathcal {A}}}\) has a complicated factorization.

The following example explains the approach via the rational canonical form for solving real systems. In the next example, \({\mathbb {F}}={\mathbb {R}}\) and X is an open subset of \({\mathbb {F}}\). The prime polynomials in \({\mathbb {R}}[x]\) are of the forms \(p_0(t)=t\), \(p_1(t)=t-\alpha \) and \(p_2(t)=(t-\beta )^2+\gamma ^2\) for some real numbers \(\alpha ,~\beta ,~\gamma \) with \(\alpha \ne 0\) and \(\gamma \ne 0\). The (obviously unique, up to the constants a, b, c) solutions of the equations (i) \(D\zeta =0\), (ii) \(D\zeta =\alpha \zeta \), and (iii) \(D^2\zeta -2\beta D\zeta +(\beta ^2+\gamma ^2)\zeta =0\) are (i’) \(\zeta =a\), (ii’) \(\zeta (t)=ae^{\alpha t}\), and (iii’) \(\zeta (t)=e^{\beta t}(b \cos \gamma t+c\sin \gamma t)\), respectively. Thus, instead of the indirect approach of complexification/decomplexification of the solutions, we use a direct application of the rational canonical form to solve the linear system \(DY=TY\) of real differential equations.
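The solution formulas (ii’) and (iii’) quoted above are easily confirmed by direct substitution; the following check (ours, using sympy) is a sketch, with the symbols a, b, c, α, β, γ standing for the arbitrary constants.

```python
import sympy as sp

t, a, b, c = sp.symbols('t a b c')
alpha, beta, gamma = sp.symbols('alpha beta gamma', real=True)

# (ii') ζ = a e^{α t} solves Dζ = α ζ:
zeta2 = a * sp.exp(alpha * t)
assert sp.simplify(sp.diff(zeta2, t) - alpha * zeta2) == 0

# (iii') ζ = e^{β t}(b cos γt + c sin γt) solves D²ζ - 2β Dζ + (β² + γ²)ζ = 0:
zeta3 = sp.exp(beta * t) * (b * sp.cos(gamma * t) + c * sp.sin(gamma * t))
res = sp.diff(zeta3, t, 2) - 2*beta*sp.diff(zeta3, t) + (beta**2 + gamma**2)*zeta3
assert sp.simplify(res) == 0
```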

Example 3.1

Let \(T \in L({\mathbb {R}}^n)\) and \(W_j,p_j,s_j\) be as in Corollary 2.5. Then every solution of the system \(DY=TY\) is the direct sum of arbitrary scalar multiples of the solutions Z(j) established in Part (iii) of Corollary 2.5. So, to find Y, it is sufficient to find each Z(j). Here, there are three possible types of p. For convenience, we drop the index j and assume the minimal polynomial is of the form \(p^s\).

  1. (i)

Case \(p(t)=t\). In this case, let u be any polynomial of degree at most \(s-1\) and let \(\psi \) be any cyclic vector of \(T|_W\); then define

    $$\begin{aligned} Z=(D^{s-1}u)\psi +(D^{s-2}u)T\psi +\cdots +(Du)T^{s-2}\psi +uT^{s-1}\psi . \end{aligned}$$
  2. (ii)

    Case \(p(t)=t-\alpha \). In this case, let \(\psi \) be any cyclic vector of \(T|_W\) and define

    $$\begin{aligned} Z=Z_0\psi +Z_1T\psi +\cdots +Z_{s-1}T^{s-1}\psi , \end{aligned}$$
    (24)

    where \(Z_0=u(t)e^{\alpha t}\), u is an arbitrary polynomial of degree \(\le s-1\), and, for \(1\le i\le s-1\),

    $$\begin{aligned} Z_i=-[\left( \begin{array}{c}s\\ i\end{array}\right) \alpha ^{-i}D+\left( \begin{array}{c}s\\ i+1\end{array}\right) \alpha ^{-i-1}D^2+\cdots +\left( \begin{array}{c}s\\ s-1\end{array}\right) \alpha ^{-s+1}D^{s-i}](u(t)e^{\alpha t}). \end{aligned}$$
  3. (iii)

Case \(p(t)=(t-\beta )^2+\gamma ^2\). In this case, let \(\psi \) and u be as in Part (ii) and write Z as in (24), in which \(Z_0(t)=u(t)e^{\beta t}(a \cos \gamma t+b\sin \gamma t)\) and, for \(1\le i\le s-1\),

    $$\begin{aligned} Z_i=-c_0^{-1}[c_iD+c_{i+1}D^2+\cdots +c_{s-1}D^{s-i}](u(t)e^{\beta t}(a \cos \gamma t+b\sin \gamma t)), \end{aligned}$$

where \(p^s(t)=c_0+c_1t+\cdots +c_{s-1}t^{s-1}+t^s\) and \(a,b \in {\mathbb {R}}\) are arbitrary.

\(\square \)
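As a worked instance of case (ii), the sketch below (ours, with sympy) avoids the closed-form expression for \(Z_i\) above and instead builds the coordinates directly from (13), taking \(\alpha =2\), \(s=2\) and \(u(t)=3+5t\); it then confirms \(DZ=TZ\).

```python
import sympy as sp

t = sp.symbols('t')
T = sp.Matrix([[0, -4], [1, 4]])          # companion matrix of (t - 2)^2 = t^2 - 4t + 4

Z0 = (3 + 5*t) * sp.exp(2*t)              # u(t) e^{2t} with deg(u) <= s - 1
Z1 = -sp.Rational(1, 4) * sp.diff(Z0, t)  # (13) with q = 1: Z1 = -c0^{-1} c2 D(Z0)
Z = sp.Matrix([Z0, Z1])

assert sp.simplify(sp.diff(Z, t) - T * Z) == sp.zeros(2, 1)
```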

Example 3.2

Take \({\mathbb {F}}={\mathbb {C}}\) and let X be any connected open subset of the complex plane \({\mathbb {C}}\sim {\mathbb {R}}^2\). Here, the only prime polynomials p are of the forms \(p(z)=z\) and \(p(z)=z-\alpha \) for some nonzero \(\alpha \in {\mathbb {C}}\). Also, the essentially unique solutions of the equations (i) \(D\zeta =0\) and (ii) \(D\zeta =\alpha \zeta \) are, respectively, (i’) \(\zeta =a\) and (ii’) \(\zeta =ae^{\alpha z}\), where \(\zeta \) is an analytic function of the complex variable z. To solve \(DY=TY\), we follow the arguments of the previous example and distinguish two cases similar to items (i) and (ii) of that example. \(\square \)

Example 3.3

Let \(X=(a,b)\subset {\mathbb {R}}\) and \({\mathbb {F}}={\mathbb {C}}\). As in the previous example, there are two types of prime polynomials, \(p_0(z)=z\) and \(p_1(z)=z-\alpha \). The essentially unique solutions of the corresponding differential equations are the same as in Example 3.2, and so are the arguments leading to the complete solution of the system \(DY=TY\). \(\square \)