1 Introduction

Solutions of some differential equations can be found exactly by explicit formulas, which is a remarkable feature. A class of such differential equations is studied in [6]. In this paper, we present a wider class of ODEs that are exactly solvable, i.e., whose solutions can be found by explicit formulas. Such formulas are presented in Sects. 2 and 3 of this paper. We emphasize that our study of such differential equations is also motivated by the series of papers [1,2,3,4,5], which are important for understanding significant atmospheric flows such as sea breezes and atmospheric undular bores, as discussed in these papers. We also present related results for systems of linear ODEs in Sect. 4. Related results are presented in [7, 8], but it appears that our results are not contained in those comprehensive books.

2 Second order scalar ODEs

Motivated by [1,2,3,4,5,6,7,8], we consider a linear second order differential equation with a Dirichlet boundary condition

$$\begin{aligned} \begin{array}{rcl} (pu')'(s)+qu(s)&=&h(s),\quad s\in I=[0,1],\\ u(0)=u(1)&=&0, \end{array} \end{aligned}$$
(1)

where \(p\in C^2(I,\mathbb {R})\), \(q,h\in C(I,\mathbb {R})\) are functions and \('=\frac{d}{ds}\). To solve (1), we first need to solve the homogeneous equation

$$\begin{aligned} (pu')'(s)+qu(s)=0. \end{aligned}$$
(2)

We set

$$\begin{aligned} u(s)=r(s)U(f(s)) \end{aligned}$$
(3)

in (2) for \(f,r\in C^2(I,\mathbb {R})\) to get

$$\begin{aligned}&prf'^2(s)U''(f(s))+U'(f(s))\left( prf''(s)+f'(s)\left( rp'(s)+2pr'(s)\right) \right) \\&\quad +U(f(s))\left( p'r'(s)+pr''(s)+qr(s)\right) =0. \end{aligned}$$
(4)

Our goal is to write (4) as the following equation with constant coefficients

$$\begin{aligned} U''(z)+\mu U'(z)+\nu U(z)=0 \end{aligned}$$
(5)

for some \(\mu ,\nu \in \mathbb {R}\) and \(z=f(s)\). This means we have to solve

$$\begin{aligned}&prf'^2(s)=1,\\&prf''(s)+f'(s)\left( rp'(s)+2pr'(s)\right) =\mu ,\\&p'r'(s)+pr''(s)+qr(s)=\nu . \end{aligned}$$
(6)

From the 1st equation of (6), we get

$$\begin{aligned} p(s)=\frac{1}{rf'^2(s)} \end{aligned}$$
(7)

by assuming

$$\begin{aligned} r(s)>0,\quad f'(s)>0,\quad s\in I. \end{aligned}$$
(8)

By inserting (7) into the 2nd equation of (6), we obtain

$$\begin{aligned} \frac{\left( f'r'(s)-rf''(s)\right) }{rf'^2(s)}=\mu . \end{aligned}$$
(9)

Solving (9), we have

$$\begin{aligned} r'(s)=\frac{r(s)\left( f''(s)+\mu f'^2(s)\right) }{f'(s)}. \end{aligned}$$
(10)

Then integrating (10) and taking the integration constants to be zero, we derive

$$\begin{aligned} \int \frac{r'}{r}(s)ds&=\int \frac{f''(s)+\mu f'^2(s)}{f'(s)}ds, \\ \ln r(s)&=\ln f'(s)+\mu f(s), \\ r(s)&=f'(s)e^{\mu f(s)}. \end{aligned}$$
(11)

Plugging (11) into (7), we get

$$\begin{aligned} p(s)=\frac{e^{-\mu f(s)}}{f'^3(s)}. \end{aligned}$$
(12)

Finally, putting (7) and (11) into the 3rd equation of (6), we have

$$\begin{aligned} \frac{f^{(3)}f'(s)-f''(s)\left( 3f''(s)+\mu f'^2(s)\right) }{f'^4(s)}+qf'(s)e^{\mu f(s)}=\nu . \end{aligned}$$
(13)

From (13), we get

$$\begin{aligned} q(s)=\frac{e^{-\mu f(s)}\left( 3f''^2(s)+\nu f'^4(s)-f^{(3)}f'(s)+\mu f'^2 f''(s)\right) }{f'^5(s)}. \end{aligned}$$
(14)
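The consistency of (11), (12) and (14) with the system (6) can be checked symbolically. The following sketch (a sympy computation added for illustration, not part of the derivation) confirms that all three equations of (6) become identities:

```python
# Symbolic check that r from (11), p from (12) and q from (14) satisfy
# the three equations of system (6) for an arbitrary smooth f with f' != 0.
import sympy as sp

s, mu, nu = sp.symbols('s mu nu')
f = sp.Function('f')(s)
f1, f2, f3 = f.diff(s), f.diff(s, 2), f.diff(s, 3)

r = f1 * sp.exp(mu * f)                                                   # (11)
p = sp.exp(-mu * f) / f1**3                                               # (12)
q = sp.exp(-mu * f) * (3*f2**2 + nu*f1**4 - f3*f1 + mu*f1**2*f2) / f1**5  # (14)

eq1 = sp.simplify(p * r * f1**2 - 1)                                  # 1st eq of (6)
eq2 = sp.simplify(p*r*f2 + f1*(r*p.diff(s) + 2*p*r.diff(s)) - mu)     # 2nd eq of (6)
eq3 = sp.simplify(p.diff(s)*r.diff(s) + p*r.diff(s, 2) + q*r - nu)    # 3rd eq of (6)
print(eq1, eq2, eq3)  # all three residuals are 0
```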

To simplify the above formulas, we take

$$ f(s)=\ln g(s) $$

for \(g(s)>0\), \(s\in I\), to obtain

$$\begin{aligned} r(s)&=g^{\mu -1}g'(s), \\ p(s)&=\frac{g^{3-\mu }}{g'^3}(s),\\ q(s)&=(g')^{-5}g^{1-\mu }(s)\Big ((\nu +1-\mu )g'^4(s)-(3-\mu )gg'^2g''(s)\\&\quad +g^2(s)\left( 3g''^2(s)-g'g^{(3)}(s)\right) \Big ). \end{aligned}$$
(15)

Since \(f'(s)=\displaystyle \left( \frac{g'}{g}\right) (s)\), condition (8) reads

$$\begin{aligned} g(s)>0,\quad g'(s)>0,\quad s\in I. \end{aligned}$$
(16)

Summarizing, we arrive at the following result.

Theorem 2.1

Let \(g\in C^3(I,\mathbb {R})\) satisfy (16) and let \(f=\ln g\). If U(z) is a solution of (5), then u(s) given by (3) is a solution of (2) with the corresponding functions of (15).

Of course, solutions of (5) are given by well-known formulas. As a simple example, we take

$$ g(s)=s+a,\quad a>0. $$

Then (15) gives

$$\begin{aligned} f(s)&=\ln (a+s), \\ r(s)&=(a+s)^{\mu -1}, \\ p(s)&=(a + s)^{3-\mu },\\ q(s)&=(\nu +1-\mu )(a+s)^{1-\mu }. \end{aligned}$$
(17)
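The example can be verified directly. The sketch below (an illustrative sympy computation; the choice \(U(z)=e^{\lambda z}\) with \(\lambda \) a root of \(\lambda ^2+\mu \lambda +\nu =0\) is an assumption) builds q from (14) with \(f(s)=\ln (a+s)\) and confirms that \(u=rU(f)\) solves (2):

```python
# Check that u(s) = r(s) U(f(s)) solves (p u')' + q u = 0 for g(s) = s + a,
# with r, p as in (17) and q computed from (14).
import sympy as sp

s, a = sp.symbols('s a', positive=True)
mu, nu, lam = sp.symbols('mu nu lambda', real=True)

f = sp.log(a + s)
f1, f2, f3 = f.diff(s), f.diff(s, 2), f.diff(s, 3)
r = (a + s)**(mu - 1)
p = (a + s)**(3 - mu)
q = (a + s)**(-mu) * (3*f2**2 + nu*f1**4 - f3*f1 + mu*f1**2*f2) / f1**5   # (14)

U = (a + s)**lam          # U(z) = e^{lambda z} evaluated at z = ln(a + s)
u = r * U                 # the substitution (3)
res = sp.simplify(((p * u.diff(s)).diff(s) + q * u).subs(nu, -lam**2 - mu*lam))
print(res)  # 0 once lambda^2 + mu lambda + nu = 0 is imposed
```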

We consider g(s) in (15) as a parameter, so we have a rich family of ODEs in Theorem 2.1.

3 The nonhomogeneous problem

In this application section, we will use the method of Green’s functions.

Theorem 3.1

Let \(g\in C^3(I,\mathbb {R})\) satisfy (16), \( h\in C(I,\mathbb {R}) \) and \( \mu ,\nu \in \mathbb {R} \) be given constants. Consider the nonhomogeneous problem (1) with the corresponding functions of (15) and let \( \lambda _{1,2} \) be the roots of the equation \( \lambda ^{2}+\mu \lambda +\nu =0 \). Then the following holds:

  1.

    Assume either \( \mu ^{2}-4\nu \ge 0 \) or

    $$\begin{aligned} g(1)\ne g(0)\textrm{e}^{\frac{k\pi }{\beta }} \text{ for } \text{ every } k\in \mathbb {N}. \end{aligned}$$
    (18)

    Then the solution of (1) is given by

    $$\begin{aligned} u(t)=\frac{u_{1}(t)}{N}\int _{0}^{t}u_{0}h(s)\;\textrm{d}s+\frac{u_{0}(t)}{N}\int _{t}^{1}u_{1}h(s)\;\textrm{d}s. \end{aligned}$$
    (19)

    The functions \( u_{0,1} \) and \( N\in \mathbb {R} \) are specified in the following:

    (a)

      if \( \mu ^{2}-4\nu >0 \) and \( \lambda _{1,2} \) are the two distinct real roots then

      $$\begin{aligned} u_{j}(s)=r(s)\left( g^{\lambda _{2}}(j)g^{\lambda _{1}}(s)-g^{\lambda _{1}}(j)g^{\lambda _{2}}(s)\right) ,\;j=0,1, \end{aligned}$$

      and \( N=(g^{\lambda _{2}}(0)g^{\lambda _{1}}(1)-g^{\lambda _{1}}(0)g^{\lambda _{2}}(1))(\lambda _{1}-\lambda _{2}) \) is nonzero,

    (b)

      if \( \mu ^{2}-4\nu <0 \), the assumption (18) is valid, and \( \lambda _{1}=\alpha +\beta i \), \( \lambda _{2}=\alpha -\beta i \) for \( \beta \ne 0 \) then

      $$\begin{aligned} u_{j}(s)=rg^{\alpha }(s)\sin \left( \beta \,\textrm{ln}\,\frac{g(s)}{g(j)}\right) ,\;j=0,1, \end{aligned}$$

      and \( N=\beta \sin \left( \beta \,\textrm{ln}\,\frac{g(1)}{g(0)}\right) \) is nonzero,

    (c)

      if \( \mu ^{2}-4\nu =0 \) and \( \lambda =\lambda _{1}=\lambda _{2} \) is the multiple root then

      $$\begin{aligned} u_{j}(s)=rg^{\lambda }(s)(\textrm{ln}\,g(j)-\textrm{ln}\,g(s)),\;j=0,1, \end{aligned}$$

      and \( N=\textrm{ln}\frac{g(1)}{g(0)} \) is nonzero.

  2.

    Let \( \mu ^{2}-4\nu <0 \) and \( \lambda _{1}=\alpha +\beta i \), \( \lambda _{2}=\alpha -\beta i \) for \( \beta \ne 0 \). Assume that (18) is not true. Denote \( u_{0}(s)=rg^{\alpha }(s)\sin \left( \beta \,\textrm{ln}\,\frac{g(s)}{g(0)}\right) \) and \( u_{2}(s)=rg^{\alpha }(s)\cos \left( \beta \,\textrm{ln}\,\frac{g(s)}{g(0)}\right) \). Then for every \( h\in C(I,\mathbb {R}) \) such that

    $$\begin{aligned} \int _{0}^{1}u_{0}h(s)\;\textrm{d}s=0, \end{aligned}$$
    (20)

    the function

    $$\begin{aligned} u(t)=&-\frac{u_{0}(t)}{\beta }\int _{t}^{1}u_{2}h(s)\;\textrm{d}s -\frac{u_{2}(t)}{\beta }\int _{0}^{t}u_{0}h(s)\;\textrm{d}s\\&+\frac{u_{0}(t)}{\beta ||u_{0}||_{2}^{2}} \left( \int _{0}^{1}u_{2}h(s)\int _{0}^{s}u_{0}^{2}(\tau )\;\textrm{d}\tau \;\textrm{d}s +\int _{0}^{1}u_{0}h(s)\int _{s}^{1}u_{0}u_{2}(\tau )\;\textrm{d}\tau \;\textrm{d}s \right) \end{aligned}$$

    is the unique solution of the problem (1) such that \( \int _{0}^{1}u_{0} u(t)\;\textrm{d}t=0. \)

Proof

Let us prove case 1. We solve the problem (1) for a given \(h\in C(I,\mathbb {R})\), where the functions r, p, q are given in (15). Our goal is to find a Green's function for (1) of the form

$$\begin{aligned} G:I\times I\rightarrow \mathbb {R},\quad G(s,t)={\left\{ \begin{array}{ll} A(s)u_{0}(t), &{} 0\le t \le s\le 1,\\ B(s)u_{1}(t), &{} 0\le s <t\le 1. \end{array}\right. } \end{aligned}$$
(21)

The functions \( u_{0} \) and \( u_{1} \) in (21) are nontrivial solutions of the homogeneous equation (2) such that it holds \( u_{0}(0)=0 \) and \( u_{1}(1)=0 \). This assures that G satisfies the boundary conditions in (1). The functions A and B are unknown and can be determined using the basic properties of Green functions: the continuity of G at the diagonal of \( I\times I \) and the jump discontinuity of the derivative \( G^{\prime }_{t} \) at the same diagonal.

Let \( \mu ,\nu \in \mathbb {R} \) be given. The form of the general solution of (5) depends on the choice of these constants. Due to this, we divide case 1 into three subcases.

Case 1a. Let \( \mu ^{2}-4\nu >0 \). In this case, it holds that \( U(z)=c\textrm{e}^{\lambda _{1}z}+d\textrm{e}^{\lambda _{2}z} \) for some constants \( c,d\in \mathbb {R} \). From (3) and our choice of f, we deduce that \( u(s)=r(s)(cg(s)^{\lambda _{1}}+dg(s)^{\lambda _{2}}) \), and the functions \( u_{0,1} \) are the particular solutions that satisfy \( u_{j}(j)=0 \) for \( j=0,1 \); they are needed to construct the Green's function of the form (21). Due to the assumption (16), it holds

$$\begin{aligned} g(0)\ne g(1) \end{aligned}$$

or equivalently,

$$\begin{aligned} g^{\lambda _{2}}(0)g^{\lambda _{1}}(1)-g^{\lambda _{1}}(0)g^{\lambda _{2}}(1)\ne 0. \end{aligned}$$
(22)

This means that the functions \( u_{0} \) and \( u_{1} \) are linearly independent and the Green function defined by (21) exists. Based on the properties of the Green function, it holds \( Au_{0}(s)=Bu_{1}(s) \) and \( Bu^{\prime }_{1}(s)-Au^{\prime }_{0}(s)=\frac{1}{p(s)} \) for \( s\in I \). Thus A and B are the solutions of the system

$$\begin{aligned} \left( \begin{matrix} u_{0}&{}-u_{1}\\ -u_{0}^{\prime } &{}u_{1}^{\prime } \end{matrix} \Bigg |\begin{matrix} 0\\ \frac{1}{p} \end{matrix} \right) . \end{aligned}$$
(23)

Observe that \( pr^{2}g^{\prime }g^{\lambda _{1}+\lambda _{2}-1}(s)=1 \). By a straightforward calculation, one can prove that

$$ det \left( \begin{matrix} u_{0}&{}-u_{1}\\ -u_{0}^{\prime }&{}u_{1}^{\prime } \end{matrix} \right) = (g^{\lambda _{2}}(0)g^{\lambda _{1}}(1)-g^{\lambda _{1}}(0)g^{\lambda _{2}}(1))(\lambda _{1}-\lambda _{2})r^{2}g^{\prime }g^{\lambda _{1}+\lambda _{2}-1}(s)=\frac{N}{p(s)} $$

is nonzero due to (22). Applying Cramer's rule, we deduce that \( A=\frac{u_{1}}{N} \) and \( B=\frac{u_{0}}{N} \). The function

$$\begin{aligned} u(t)=\int _{0}^{1}G(s,t)h(s)\;\textrm{d}s=u_{1}(t)\int _{0}^{t}Bh(s)\;\textrm{d}s+u_{0}(t)\int _{t}^{1}Ah(s)\;\textrm{d}s \end{aligned}$$

is the desired solution of the problem (1).
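Case 1a can also be tested numerically. In the sketch below (assumed data for illustration: g(s) = s + 1, \(\mu =1\), \(\nu =-2\), hence \(\lambda _1=1\), \(\lambda _2=-2\), \(p(s)=(1+s)^2\), \(r\equiv 1\); one checks directly that the functions \(u_0\), \(u_1\) solve \(((1+s)^2u')'-2u=0\) with \(u_0(0)=u_1(1)=0\)), formula (19) with \(h\equiv 1\) is compared against the exact solution obtained by elementary means:

```python
# Numerical sketch of case 1 (distinct real roots). Assumed data for
# illustration: g(s) = s + 1, mu = 1, nu = -2, so lambda_1 = 1 and
# lambda_2 = -2, p(s) = (1+s)^2, r = 1, and u0, u1 below solve
# ((1+s)^2 u')' - 2u = 0 with u0(0) = u1(1) = 0.
def simpson(F, a, b, n=200):
    # composite Simpson rule with n (even) subintervals
    step = (b - a) / n
    return step / 3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * F(a + i * step)
                          for i in range(n + 1))

u0 = lambda s: (1 + s) - (1 + s)**-2
u1 = lambda s: (1 + s) / 4 - 2 * (1 + s)**-2
N = (1 * 2 - 1 * 0.25) * (1 - (-2))        # = 21/4, as in case 1
h = lambda s: 1.0                           # right-hand side of (1)

def u(t):                                   # formula (19)
    return u1(t) / N * simpson(lambda s: u0(s) * h(s), 0, t) \
         + u0(t) / N * simpson(lambda s: u1(s) * h(s), t, 1)

# exact solution of ((1+s)^2 u')' - 2u = 1, u(0) = u(1) = 0
exact = lambda t: -0.5 + 3 * (1 + t) / 14 + 2 / (7 * (1 + t)**2)
for t in (0.2, 0.5, 0.8):
    assert abs(u(t) - exact(t)) < 1e-8
print("formula (19) reproduces the exact solution")
```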

Case 1b. Let \( \mu ^{2}-4\nu <0 \) and (18) be true. The corresponding real-valued solution is \( U(z)=\textrm{e}^{\alpha z}(c\sin \beta z+d\cos \beta z) \) where \( c,d\in \mathbb {R} \). Hence \( u(s)=r(s)g^{\alpha }(s)(c\sin (\beta \,\textrm{ln}\,g(s))+d\cos (\beta \,\textrm{ln}\,g(s))) \) and the functions \( u_{0,1} \) are particular solutions such that \( u_{j}(j)=0 \) for \( j=0,1 \).

The assumption (18) implies that the functions \( u_{0} \) and \( u_{1} \) are linearly independent. Note that since g is increasing, it suffices to take \( k\in \mathbb {N} \) instead of \( k\in \mathbb {Z} \) in (18). Similarly as in Case 1a, we come to the system (23) and the determinant

$$ det \left( \begin{matrix} u_{0}&{}-u_{1} \\ -u_{0}^{\prime }&{}u_{1}^{\prime } \end{matrix} \right) = \beta r^{2}g^{\prime }g^{2\alpha -1}(s)\sin \left( \beta \,\textrm{ln}\,\frac{g(1)}{g(0)}\right) =\frac{ N}{ p} $$

is nonzero due to (18). Hence \( A=\frac{u_{1}}{N} \), \( B=\frac{u_{0}}{N} \) and the solution of the problem (1) can be expressed by (19).

Case 1c. Let \( \mu ^{2}-4\nu =0 \). In this case, the equation \( \lambda ^{2}+\mu \lambda +\nu =0 \) has one multiple root \( \lambda \in \mathbb {R} \). The corresponding solution is \( U(z)=c\textrm{e}^{\lambda z}+dz\textrm{e}^{\lambda z}\) where \( c,d\in \mathbb {R} \). Hence \( u(s)=rg^{\lambda }(s)(c+d\,\textrm{ln}\,g(s)) \) and again, \( u_{j}(j)=0 \) for \( j=0,1 \).

Again, we come to the system (23) and the determinant

$$ det \left( \begin{matrix} u_{0}&{}-u_{1} \\ -u_{0}^{\prime }&{}u_{1}^{\prime } \end{matrix} \right) = r^{2}g^{\prime }g^{2\lambda -1}(s)\textrm{ln}\,\frac{g(1)}{g(0)} =\frac{N}{p(s)} $$

is nonzero. Hence \( A=\frac{u_{1}}{N} \), \( B=\frac{u_{0}}{N} \) and the solution of the problem (1) can be expressed by (19).

Now we prove case 2. In this case, zero is an eigenvalue of the operator

$$ L:L^{2}(0,1)\rightarrow L^{2}(0,1),\quad Lu(s)=(pu')'(s)+qu(s) $$

defined on the domain \( \mathcal {D}(L)=\{u\in C^{2}(I,\mathbb {R});\;u(0)=u(1)=0\} \). The space \( L^{2}(0,1) \) is equipped with the standard integral norm \( ||u||_{2}:=\sqrt{\int _{0}^{1}u^{2}(s)\;\textrm{d}s} \). The function \( u_{0} \) is an eigenfunction of L corresponding to the zero eigenvalue. It is known that a solution of the nonhomogeneous problem (1) exists if and only if (20) is valid; see e.g. [9]. Moreover, such a solution is not unique.

If (18) is not true, then the Green's function of the form (21) does not exist. However, it is still possible to find a Green's function in a different form and to obtain a solution u of the problem (1). Let \( h\in C(I,\mathbb {R}) \) be such that (20) holds.

Note that \( u_{2} \) is a solution of (2) and \( u_{0},u_{2} \) are linearly independent. We take the following Green’s function

$$\begin{aligned} G:I\times I\rightarrow \mathbb {R},\quad G(s,t)={\left\{ \begin{array}{ll} C(s)u_{0}(t), &{} 0\le t \le s\le 1,\\ A(s)u_{0}(t)+B(s)u_{2}(t), &{} 0\le s <t\le 1. \end{array}\right. } \end{aligned}$$
(24)

Clearly, \( G(s,0)=0 \) for \( s\in I \). Since we require that \( \int _{0}^{1}u_{0} u(s)\;\textrm{d}s=0 \), we determine the functions A, B, C so that

$$\begin{aligned} \int _{0}^{1}G(s,t)u_{0}(t)\;\textrm{d}t=0 \text{ for } s\in I. \end{aligned}$$
(25)

This implies that \( u(t)=\int _{0}^{1}G(s,t)h(s)\;\textrm{d}s \) is the desired solution. Indeed, it holds

$$\begin{aligned} \int _{0}^{1}u_{0}u(t)\;\textrm{d}t&=\int _{0}^{1}u_{0}(t)\int _{0}^{1}G(s,t)h(s)\;\textrm{d}s\;\textrm{d}t\\&=\int _{0}^{1}h(s)\int _{0}^{1}G(s,t)u_{0}(t)\;\textrm{d}t\;\textrm{d}s=0. \end{aligned}$$

Using (25) and (24), we come to the equation

$$ A(s)\int _{s}^{1}u_{0}^{2}\;\textrm{d}t+B(s)\int _{s}^{1}u_{0}u_{2}\;\textrm{d}t+C(s)\int _{0}^{s}u_{0}^{2}\;\textrm{d}t=0. $$

This, along with the equations derived from the properties of Green's functions, namely the continuity of G and the jump discontinuity of the derivative \( G^{\prime }_{t} \), leads to the system

$$\begin{aligned} \left( \begin{matrix} \int _{s}^{1}u_{0}^{2}\;\textrm{d}t &{} \int _{s}^{1}u_{0}u_{2}\;\textrm{d}t &{} \int _{0}^{s}u_{0}^{2}\;\textrm{d}t\\ u_{0} &{} u_{2} &{} -u_{0}\\ u_{0}^{\prime } &{} u_{2}^{\prime } &{} -u_{0}^{\prime } \end{matrix} \Bigg |\begin{matrix} 0\\ 0\\ \frac{1}{p} \end{matrix} \right) \end{aligned}$$

for the unknown functions A, B, C.

A simple computation shows that \( u_{0}u^{\prime }_{2}-u_{2}u^{\prime }_{0}=-\beta r^{2}g^{\prime }g^{2\alpha -1} \) and

$$\begin{aligned} det \left( \begin{matrix} \int _{s}^{1}u_{0}^{2}\;\textrm{d}t &{} \int _{s}^{1}u_{0}u_{2}\;\textrm{d}t &{} \int _{0}^{s}u_{0}^{2}\;\textrm{d}t \\ u_{0} &{} u_{2} &{} -u_{0}\\ u_{0}^{\prime } &{} u_{2}^{\prime } &{} -u_{0}^{\prime } \end{matrix} \right) = -\beta ||u_{0}||_{2}^{2} r^{2}g^{\prime }g^{2\alpha -1}(s)=-\frac{\beta }{p} ||u_{0}||_{2}^{2} \end{aligned}$$

is obviously nonzero. Thus

$$\begin{aligned} A(s)&=\frac{u_{2}(s)\int _{0}^{s}u_{0}^{2}\;\textrm{d}t+u_{0}(s)\int _{s}^{1}u_{0}u_{2}\;\textrm{d}t}{\beta ||u_{0}||_{2}^{2}},\\ B(s)&=-\frac{u_{0}(s)}{\beta },\\ C(s)&=\frac{-u_{2}(s)\int _{s}^{1}u_{0}^{2}\;\textrm{d}t+u_{0}(s)\int _{s}^{1}u_{0}u_{2}\;\textrm{d}t}{\beta ||u_{0}||_{2}^{2}}, \end{aligned}$$

and the corresponding solution

$$\begin{aligned} u(t)=u_{0}(t)\int _{0}^{t}Ah(s)\;\textrm{d}s+u_{2}(t)\int _{0}^{t}Bh(s)\;\textrm{d}s+u_{0}(t)\int _{t}^{1}Ch(s)\;\textrm{d}s \end{aligned}$$

satisfies the condition \( u(0)=0 \) and \( \int _{0}^{1}u_{0} u(s)\;\textrm{d}s=0 \). Moreover, due to (20), there holds \( u(1)=0 \). Rearranging the terms in the last expression, we obtain

$$\begin{aligned} u(t)&=-\frac{u_{2}(t)}{\beta }\int _{0}^{t}u_{0}h(s)\;\textrm{d}s\\&\quad +\frac{u_{0}(t)}{\beta ||u_{0}||_{2}^{2}}\left( \int _{0}^{t}u_{2}h(s)\int _{0}^{s}u_{0}^{2}(\tau )\;\textrm{d}\tau \;\textrm{d}s -\int _{t}^{1}u_{2}h(s)\left( ||u_{0}||_{2}^{2}-\int _{0}^{s}u_{0}^{2}(\tau )\;\textrm{d}\tau \right) \;\textrm{d}s \right) \\&\quad +\frac{u_{0}(t)}{\beta ||u_{0}||_{2}^{2}}\left( \int _{0}^{t}u_{0}h(s)\int _{s}^{1}u_{0}u_{2}(\tau )\;\textrm{d}\tau \;\textrm{d}s +\int _{t}^{1}u_{0}h(s)\int _{s}^{1}u_{0}u_{2}(\tau )\;\textrm{d}\tau \;\textrm{d}s \right) \end{aligned}$$

and the assertion follows. \(\square \)

4 First order systems of ODEs

In this section, we find exact fundamental matrices of certain linear ODEs. Let M(n) be the linear space of real \(n\times n\) matrices, and \(I\subset \mathbb {R}\) be an open interval.

Lemma 4.1

If \(G: I\rightarrow M(n)\) has a derivative \(G'(z_0)\) at \(z_0\in I\) then also \(H: I\rightarrow M(n)\) given as \(H(z)=e^{G(z)}\) has a derivative at \(z_0\). If in addition

$$\begin{aligned} G'(z_0)G(z_0)=G(z_0)G'(z_0), \end{aligned}$$
(26)

then

$$\begin{aligned} H'(z_0)=G'(z_0)e^{G(z_0)}=e^{G(z_0)}G'(z_0). \end{aligned}$$
(27)

Proof

The mapping \(e: M(n)\rightarrow M(n)\) given by \(e^K=\sum _{i=0}^\infty \frac{K^i}{i!}\) is \(C^\infty \)-smooth, and a straightforward computation of the derivative \(De^KS\) of \(e^K\) at K in the direction \(S\in M(n)\) leads to

$$ De^KS=e^KS=Se^K $$

whenever \(KS=SK\). Since \(H(z)=e^{G(z)}\), the chain rule gives

$$ H'(z_0)=De^{G(z_0)}G'(z_0)=e^{G(z_0)}G'(z_0)=G'(z_0)e^{G(z_0)}. $$

The proof is completed. \(\square \)
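A quick numerical sketch of Lemma 4.1 (the matrix M and the point \(z_0\) below are arbitrary illustrative choices): for G(z) a polynomial in a fixed matrix M, condition (26) holds automatically, and (27) can be confirmed by a difference quotient.

```python
# Numerical sketch of Lemma 4.1: for G(z) = z M + z^2 M^2, a polynomial
# in a fixed matrix M, condition (26) holds automatically and formula (27)
# can be confirmed by a central difference quotient.
import numpy as np
from scipy.linalg import expm

M = np.array([[0.1, 0.5], [0.2, 0.3]])
G = lambda z: z * M + z**2 * (M @ M)
dG = lambda z: M + 2 * z * (M @ M)           # G'(z), commutes with G(z)
H = lambda z: expm(G(z))                     # H(z) = e^{G(z)}

z0 = 0.8
dH = (H(z0 + 1e-6) - H(z0 - 1e-6)) / 2e-6    # numerical H'(z0)
assert np.allclose(dG(z0) @ G(z0), G(z0) @ dG(z0))   # condition (26)
assert np.allclose(dH, dG(z0) @ H(z0), atol=1e-6)    # formula (27)
assert np.allclose(dH, H(z0) @ dG(z0), atol=1e-6)
print("formula (27) verified numerically")
```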

Assume \(0\in I\). If \(A\in L^1_{loc}(I,M(n))\), then it is well known that \(G(z)=\int _0^zA(s)ds\) has a derivative for almost every (f.a.e.) \(z\in I\) with \(G'(z)=A(z)\). Then applying Lemma 4.1, we obtain

Lemma 4.2

If \(A\in L^1_{loc}(I,M(n))\) then

$$\begin{aligned} X(z)=e^{\int _0^zA(s)ds} \end{aligned}$$
(28)

has a derivative \(X'(z)\) f.a.e. \(z\in I\). If in addition

$$\begin{aligned} A(z_1)A(z_2)=A(z_2)A(z_1)\quad \forall z_1,z_2\in I, \end{aligned}$$
(29)

then

$$\begin{aligned} X'(z)=A(z)X(z)=X(z)A(z). \end{aligned}$$
(30)

Proof

(29) implies

$$\begin{aligned} A(z)\int _0^zA(s)ds=\int _0^zA(z)A(s)ds=\int _0^zA(s)A(z)ds=\left( \int _0^zA(s)ds\right) A(z). \end{aligned}$$
(31)

This verifies (26), and the proof is finished. \(\square \)
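Lemma 4.2 can be illustrated with a commuting family built from a single nilpotent matrix (a sketch; B and the coefficient \(\cos z\) are assumptions chosen for illustration, so that the exponential (28) is a first-order polynomial in B):

```python
# Sketch of Lemma 4.2 for the commuting family A(z) = cos(z) B with a
# nilpotent B (B @ B = 0), so the exponential (28) is simply I + sin(z) B.
import numpy as np

B = np.array([[0.0, 1.0], [0.0, 0.0]])      # B @ B = 0
A = lambda z: np.cos(z) * B                 # A(z1) A(z2) = A(z2) A(z1)
X = lambda z: np.eye(2) + np.sin(z) * B     # X(z) = e^{int_0^z A(s) ds}

for z in (0.0, 0.7, 2.1):
    dX = (X(z + 1e-6) - X(z - 1e-6)) / 2e-6          # numerical X'(z)
    assert np.allclose(dX, A(z) @ X(z), atol=1e-8)   # equation (30)
    assert np.allclose(A(z) @ X(z), X(z) @ A(z))
print("X' = A X = X A verified")
```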

Lemma 4.2 shows that (28) is the fundamental matrix solution of (30) with \(X(0)=I\). If, in addition, A(z) is T-periodic in Lemma 4.2, i.e., \(A(z+T)=A(z)\) for any \(z\in I=\mathbb {R}\), then (28) has the form

$$ X(z)=e^{\int _0^zA(s)ds}=e^{\int _0^zA(s)ds-\frac{z}{T}\int _0^TA(s)ds}e^{\frac{z}{T}\int _0^TA(s)ds}=P(z)e^{\frac{z}{T}\int _0^TA(s)ds}, $$

with \(P(z+T)=P(z)\) for any \(z\in \mathbb {R}\), which is Floquet's theorem.
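The Floquet factorization can be illustrated with a nilpotent matrix B (an assumed toy example, not from the paper; all exponentials then reduce to first-order polynomials in B):

```python
# Sketch of the Floquet factorization for the T-periodic commuting family
# A(z) = (1 + cos z) B with a nilpotent B (B @ B = 0).
import numpy as np

B = np.array([[0.0, 1.0], [0.0, 0.0]])      # B @ B = 0
T = 2 * np.pi
# int_0^z A(s) ds = (z + sin z) B and (1/T) int_0^T A(s) ds = B, hence
# X(z) = I + (z + sin z) B and e^{-z B} = I - z B.
X = lambda z: np.eye(2) + (z + np.sin(z)) * B
P = lambda z: X(z) @ (np.eye(2) - z * B)    # P(z) = X(z) e^{-z B}

for z in (0.3, 1.4, 5.0):
    assert np.allclose(P(z + T), P(z))      # the factor P is T-periodic
print("Floquet factorization verified")
```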

Assumption (29) seems to be very special, but it is satisfied by the following family of A(z):

$$\begin{aligned} A(z)=\sum _{i=1}^ka_i(z)A_i \end{aligned}$$
(32)

for \(1\le k\le n^2\), \(a_i\in L^1_{loc}(I,\mathbb {R})\), and \(A_i\in M(n)\) which commute pairwise:

$$\begin{aligned} A_iA_j=A_jA_i. \end{aligned}$$
(33)

Indeed, we have

$$ \begin{gathered} A(z_1)A(z_2)=\sum _{i=1}^ka_i(z_1)A_i\sum _{j=1}^ka_j(z_2)A_j=\sum _{i,j=1}^ka_i(z_1)a_j(z_2)A_iA_j=\\ \sum _{i,j=1}^ka_j(z_2)a_i(z_1)A_jA_i=\sum _{j=1}^ka_j(z_2)A_j\sum _{i=1}^ka_i(z_1)A_i=A(z_2)A(z_1), \end{gathered} $$

so (29) is verified. Then (28) has a form

$$\begin{aligned} X(z)=e^{\sum _{i=1}^k\int _0^za_i(s)dsA_i}. \end{aligned}$$
(34)

If in addition, it holds

$$\begin{aligned} A_iA_j=A_jA_i=0,\quad i\ne j, \end{aligned}$$
(35)

then we derive

$$\begin{aligned} X(z)&=e^{\sum _{i=1}^k\int _0^za_i(s)dsA_i}=\sum _{j=0}^\infty \frac{1}{j!}\left( \sum _{i=1}^k\int _0^za_i(s)dsA_i\right) ^j\\&=I+\sum _{i=1}^k\sum _{j=1}^\infty \frac{1}{j!}\left( \int _0^za_i(s)ds\right) ^jA_i^j=\sum _{i=1}^ke^{\int _0^za_i(s)dsA_i}-(k-1)I. \end{aligned}$$
(36)
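Formula (36) can be checked on the simplest example of orthogonal projections (a sketch; the matrices \(A_1\), \(A_2\) and the values \(b_1\), \(b_2\) are chosen for illustration):

```python
# Check of formula (36) for k = 2 orthogonal projections satisfying
# A1 @ A2 = A2 @ A1 = 0, i.e. condition (35).
import numpy as np
from math import exp

A1 = np.diag([1.0, 0.0])
A2 = np.diag([0.0, 1.0])
b1, b2 = 0.7, -1.3                          # sample values of int_0^z a_i(s) ds
X_direct = np.diag([exp(b1), exp(b2)])      # e^{b1 A1 + b2 A2}, computed by hand
# right-hand side of (36): e^{b1 A1} + e^{b2 A2} - (k - 1) I with k = 2
X_formula = (np.eye(2) + (exp(b1) - 1) * A1) \
          + (np.eye(2) + (exp(b2) - 1) * A2) - np.eye(2)
assert np.allclose(X_direct, X_formula)
print("formula (36) verified for k = 2")
```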

We note that any \(B\in M(n)\) with \(n\ge 3\), which is not a multiple of I, generates such a 3-parametric family by

$$ A(z)=a_1(z)I+a_2(z)B+a_3(z)C $$

for any \(C\in Com(B)\setminus [I,B]\), where \([I,B]\) denotes the linear span of I and B, and

$$ Com(B)=\{C\in M(n): CB=BC\}. $$

Formula (32) can be extended to

$$\begin{aligned} A(z)=\sum _{i=1}^\infty a_i(z)A_i \end{aligned}$$
(37)

under assumption (33) along with

$$\begin{aligned} \sum _{i=1}^\infty \Vert a_i\Vert _\infty \Vert A_i\Vert <\infty . \end{aligned}$$

A special case is

$$\begin{aligned} A(z)=\sum _{i=1}^\infty a_i(z)A^i \end{aligned}$$

under assumption

$$\begin{aligned} \sum _{i=1}^\infty \Vert a_i\Vert _\infty \Vert A\Vert ^i<\infty , \end{aligned}$$

that is

$$\begin{aligned} \Vert A\Vert <\frac{1}{\limsup _{i\rightarrow \infty }\root i \of {\Vert a_i\Vert _\infty }}. \end{aligned}$$

Another family is given on the quaternion field \(\mathbb {H}\) by

$$\begin{aligned} A(z)q=\sum _{i=1}^\infty p_i(z)qq_i(z),\quad q\in \mathbb {H}\end{aligned}$$

for \(p_i,q_i\in L^\infty (I,\mathbb {C})\) satisfying

$$\begin{aligned} \sum _{i=1}^\infty \Vert p_i\Vert _\infty \Vert q_i\Vert _\infty <\infty . \end{aligned}$$

Remark 4.3

The above results can be directly extended either to Banach spaces or to Banach algebras.