
This chapter deals with linear systems of ordinary differential equations (ODEs), both homogeneous and nonhomogeneous. Linear systems are extremely useful for analyzing nonlinear systems. The main emphasis is on finding solutions of linear systems with constant coefficients, by methods that extend easily to higher dimensional systems. The well-known eigenvalue–eigenvector method and the fundamental matrix method are described in detail. The properties of the fundamental matrix, the fundamental theorem, and important properties of the exponential matrix function are given in this chapter. It is important to note that the set of all solutions of a linear system forms a vector space; solutions built from the eigenvectors span this solution space. The general solution procedure for linear systems using the fundamental matrix, the concept of a generalized eigenvector, and solutions for multiple eigenvalues, both real and complex, are discussed.

2.1 Linear Systems

Consider a linear system of ordinary differential equations as follows:

$$ \left. {\begin{array}{*{20}l} {\frac{{{\text{d}}x_{1} }}{{{\text{d}}t}} = \dot{x}_{1} = a_{11} x_{1} + a_{12} x_{2} + \cdots + a{}_{1n}x_{n} + b_{1} } \hfill \\ {\frac{{{\text{d}}x_{2} }}{{{\text{d}}t}} = \dot{x}_{2} = a_{21} x_{1} + a_{22} x_{2} + \cdots + a{}_{2n}x_{n} + b_{2} } \hfill \\ \vdots \hfill \\ {\frac{{{\text{d}}x_{n} }}{{{\text{d}}t}} = \dot{x}_{n} = a_{n1} x_{1} + a_{n2} x_{2} + \cdots + a{}_{nn}x_{n} + b_{n} } \hfill \\ \end{array} } \right\} $$
(2.1)

where \( a_{ij} ,b_{j} (i,j = 1,2, \ldots ,n) \) are all given constants. The system (2.1) can be written in matrix notation as

$$ {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} + {\mathop b\limits_{\sim}} $$
(2.2)

where \( {\mathop x\limits_{\sim}} (t) = \left( {x_{1} (t),x_{2} (t), \ldots ,x_{n} (t)} \right)^{t} ,\;{\mathop b\limits_{\sim}} = \left( {b_{1} ,b_{2} , \ldots ,b_{n} } \right)^{t} \) are column vectors and \( A = \left[ {a_{ij} } \right]_{n \times n} \) is the square matrix of order n, known as the coefficient matrix of the system. The system (2.2) is said to be homogeneous if \( {\mathop b\limits_{\sim}} = {\mathop 0\limits_{\sim}} \), that is, if all \( b_{i} \)’s are identically zero. On the other hand, if \( {\mathop b\limits_{\sim}} \ne {\mathop 0\limits_{\sim}} \), that is, if at least one \( b_{i} \) is nonzero, then the system is called nonhomogeneous. We first consider the linear homogeneous system

$$ {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} $$
(2.3)

A differentiable function \( {\mathop x\limits_{\sim}} (t ) \) is said to be a solution of (2.3) if it satisfies the equation \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \). Let \( \mathop x\limits_{\sim}{_{1}} (t ) \) and \( \mathop x\limits_{\sim}{_{2}} (t ) \) be two solutions of (2.3). Then any linear combination \( {\mathop x\limits_{\sim}} (t )= c_{1} \mathop x\limits_{\sim}{_{1}} (t )+ c_{2} \mathop x\limits_{\sim}{_{2}} (t ) \) of \( \mathop x\limits_{\sim}{_{1}} (t ) \) and \( \mathop x\limits_{\sim}{_{2}} (t ) \) is also a solution of (2.3). This can be shown very easily as below.

$$ {\mathop {\dot{x}}\limits_{\sim }} = c_{1} \mathop {\dot{x}}\limits_{\sim}{_{1}} + c_{2} \mathop {\dot{x}}\limits_{\sim}{_{2}} $$

and so

$$ A{\mathop x\limits_{\sim}} = A (c_{1} \mathop x\limits_{\sim}{_{1}} + c_{2} \mathop x\limits_{\sim}{_{2}} )= c_{1} A \mathop x\limits_{\sim}{_{1}} + c_{2} A \mathop x\limits_{\sim}{_{2}} = c_{1} \mathop {\dot{x}}\limits_{\sim}{_{1}} + c_{2} \mathop {\dot{x}}\limits_{\sim}{_{2}} = {\mathop {\dot{x}}\limits_{\sim }}. $$

When \( \mathop x\limits_{\sim}{_{1}} \) and \( \mathop x\limits_{\sim}{_{2}} \) are linearly independent, the solution \( {\mathop x\limits_{\sim}} = c_{1} \mathop x\limits_{\sim}{_{1}} + c_{2} \mathop x\limits_{\sim}{_{2}} \) is known as the general solution of the system (2.3). Thus the general solution of a linear system is a linear combination of linearly independent solutions of that system (superposition principle). Since the system is linear, we may seek a nontrivial solution of (2.3) of the form

$$ {\mathop x\limits_{\sim}} (t )= {\mathop \alpha \limits_{ \sim }} e^{\lambda t} $$
(2.4)

where \( {\mathop \alpha \limits_{ \sim }} \) is a column vector with components \( {\mathop \alpha \limits_{ \sim }} = \left( {\alpha_{1} ,\alpha_{2} , \ldots ,\alpha_{n} } \right)^{t} \) and λ is a number. Substituting (2.4) into (2.3) we obtain

$$ \begin{aligned} \lambda {\mathop \alpha \limits_{ \sim }} e^{\lambda t} & = A{\mathop \alpha \limits_{ \sim }} e^{\lambda t} \\ {\text{or}},\left( {A - \lambda I} \right){\mathop \alpha \limits_{ \sim }} & = {\mathop 0\limits_{\sim}} \\ \end{aligned} $$
(2.5)

where I is the identity matrix of order n. Equation (2.5) gives a nontrivial solution if and only if

$$ {\text{det(}}A - \lambda I )= 0 $$
(2.6)

On expansion, Eq. (2.6) gives a polynomial equation of degree n in λ, known as the characteristic equation of matrix A. The roots of the characteristic equation (2.6) are called the characteristic roots or eigenvalues or latent roots of A. The vector \( {\mathop \alpha \limits_{ \sim }} \), which is a nontrivial solution of (2.5), is known as an eigenvector of A corresponding to the eigenvalue λ. If \( {\mathop \alpha \limits_{ \sim }} \) is an eigenvector of a matrix A corresponding to an eigenvalue λ, then \( {\mathop x\limits_{\sim}} (t )= e^{\lambda t} {\mathop \alpha \limits_{ \sim }} \) is a solution of the system \( \dot{{\mathop x\limits_{\sim}} } = A{\mathop x\limits_{\sim}} \). The solutions built from a set of linearly independent eigenvectors span the solution space of the linear homogeneous system, which is a vector space, and all properties of a vector space hold for it. We now discuss the general solution of a linear system.
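The eigenpair relation (2.5) and the resulting solution \( e^{\lambda t} {\mathop \alpha \limits_{ \sim }} \) can be checked numerically. The following sketch assumes NumPy is available and uses a hypothetical 2 × 2 matrix, not one from the text:

```python
import numpy as np

# A hypothetical 2x2 coefficient matrix, used only to illustrate
# the relation (A - lambda*I) alpha = 0 of Eq. (2.5).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# numpy returns the eigenvalues and eigenvectors (columns of `vecs`).
vals, vecs = np.linalg.eig(A)
lam, alpha = vals[0], vecs[:, 0]

# alpha is a nontrivial solution of (A - lambda*I) alpha = 0.
residual = (A - lam * np.eye(2)) @ alpha

# x(t) = alpha * e^{lam t} should satisfy x' = A x; compare a central
# finite difference of x at t = 0.3 with A x(t).
t, h = 0.3, 1e-6
x = lambda t: alpha * np.exp(lam * t)
xdot = (x(t + h) - x(t - h)) / (2 * h)

print(np.allclose(residual, 0), np.allclose(xdot, A @ x(t), atol=1e-4))
```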

2.2 Eigenvalue–Eigenvector Method

As we know, the solutions of a linear system constitute a linear space, spanned by solutions built from the eigenvectors of the coefficient matrix. Several possibilities arise according to the eigenvalues and corresponding eigenvectors of matrix A. We now proceed case-wise as follows.

Case I: Eigenvalues of A are real and distinct

If the coefficient matrix A has n real distinct eigenvalues, then it has n linearly independent (L.I.) eigenvectors. Let \( \mathop \alpha \limits_{ \sim }{_{1}} ,\mathop \alpha \limits_{ \sim }{_{2}} , \ldots , \mathop \alpha \limits_{ \sim }{_{n}} \) be the eigenvectors corresponding to the eigenvalues \( \lambda_{1} ,\lambda_{2} , \ldots ,\lambda_{n} \) of matrix A. Then each \( \mathop x\limits_{\sim}{_{j}} (t )= \mathop \alpha \limits_{ \sim }{_{j}} \,e^{{{{\lambda_{j} {t}}} }} \), j = 1, 2, …, n is a solution of \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \). The general solution is a linear combination of the solutions \( \mathop x\limits_{\sim}{_{j}}(t) \) and is given by

$$ {\mathop x\limits_{\sim}} (t )= \sum\limits_{j = 1}^{n} {c_{j}\,\mathop x\limits_{\sim}{_{j}}(t)} $$

where \( c_{1} ,c_{2} , \ldots,c_{n} \) are arbitrary constants. In \( {\mathbf{\mathbb{R}}}^{2} \), the solution can be written as

$$ {\mathop x\limits_{\sim}} (t )= \sum\limits_{j = 1}^{2} {c_{j} \mathop \alpha \limits_{ \sim}{_{j}} e^{{{{\lambda_{j} t}} }} } = c_{1} \mathop \alpha \limits_{ \sim}{_{1}} e^{{{{\lambda_{1} t}} }} + c_{2} \mathop \alpha \limits_{ \sim}{_{2}} e^{{{{\lambda_{2} t}} }}. $$

Case II: Eigenvalues of A are real but repeated

In this case matrix A may have either n linearly independent eigenvectors, or fewer than n linearly independent eigenvectors, corresponding to the repeated eigenvalues. In the latter case, generalized eigenvectors are used to construct the missing linearly independent solutions. We discuss this case in the following two sub-cases.

Sub-case 1: Matrix A has linearly independent eigenvectors

Let \( \mathop \alpha \limits_{ \sim}{_{1}} , \mathop \alpha \limits_{ \sim}{_{2}} , \ldots , \mathop \alpha \limits_{ \sim}{_{n}} \) be n linearly independent eigenvectors corresponding to the repeated real eigenvalue λ of matrix A. In this case the general solution of the linear system is given by

$$ {\mathop x\limits_{\sim}} (t) = \sum\limits_{i = 1}^{n} {c_{i} \mathop \alpha \limits_{ \sim}{_{i}} e^{\lambda t} }. $$

Sub-case 2: Matrix A has fewer than n linearly independent eigenvectors

First, we give the definition of a generalized eigenvector of A. Let λ be an eigenvalue of the n × n matrix A of multiplicity m ≤ n. Then for k = 1, 2, …, m, any nonzero solution \( {\mathop v\limits_{\sim}} \) of the equation \( (A - \lambda I)^{k} {\mathop v\limits_{\sim}} = {\mathop 0\limits_{\sim}} \) is called a generalized eigenvector of A. For simplicity, consider a two-dimensional system. Let the eigenvalue λ be repeated, with only one linearly independent eigenvector, say \( \mathop \alpha \limits_{ \sim}{_{1} }\). Let \( \mathop \alpha \limits_{ \sim }{_{2}} \) be a generalized eigenvector of the 2 × 2 matrix A. Then \( \mathop \alpha \limits_{ \sim }{_{2}} \) can be obtained from the relation \( (A - \lambda I) \mathop \alpha \limits_{ \sim}{_{2}} = \mathop \alpha \limits_{ \sim}{_{1}} \Rightarrow A \mathop \alpha \limits_{ \sim }{_{2}} = \lambda \mathop \alpha \limits_{ \sim }{_{2}} + \mathop \alpha \limits_{ \sim }{_{1}} \). So the general solution of the system is given by

$$ {\mathop x\limits_{\sim}} (t) = c_{1} \mathop \alpha \limits_{ \sim }{_{1}} e^{\lambda t} + c_{2} (t \mathop \alpha \limits_{ \sim }{_{1}} e^{\lambda t} + {\mathop \alpha \limits_{ \sim }}_{2} e^{\lambda t} ). $$

Similarly, for an n × n matrix A, the general solution may be written as \( {\mathop x\limits_{\sim}} (t) = \sum\nolimits_{i = 1}^{n} {c_{i} \mathop x\limits_{\sim}{_{i}} (t)} \), where

$$ \begin{array}{*{20}l} \mathop x\limits_{\sim}{_{1}} (t) =\mathop \alpha \limits_{ \sim }{_{1}} e^{\lambda t}, \hfill \\ \mathop{x}\limits_{\sim}{_{2}} (t) = t \mathop \alpha \limits_{ \sim }{_{1}} e^{\lambda t} + \mathop \alpha \limits_{ \sim}{_{2}} e^{\lambda t}, \hfill \\ \mathop x\limits_{\sim}{_{3}} (t) = \frac{{t^{2} }}{2 !} \mathop \alpha \limits_{ \sim }{_{1}} e^{\lambda t} + t \mathop \alpha \limits_{ \sim }{_{2}} e^{\lambda t} + \mathop \alpha \limits_{ \sim }{_{3}} e^{\lambda t} , \hfill \\ \vdots \hfill \\ {\mathop x\limits_{\sim}{_{n}} (t) = \frac{{t^{n - 1} }}{(n - 1) !} \mathop \alpha \limits_{ \sim }{_{1}} e^{\lambda t} + \cdots + \frac{{t^{2} }}{2 !} \mathop \alpha \limits_{ \sim }{_{n - 2}} e^{\lambda t} + t \mathop \alpha \limits_{ \sim }{_{n - 1}} e^{\lambda t} + \mathop \alpha \limits_{ \sim }{_{n}} e^{\lambda t}.} \hfill \\ \end{array} $$
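The chain of solutions above can be verified numerically. The sketch below (assuming NumPy is available) uses a single hypothetical 3 × 3 Jordan block, which has a triply repeated eigenvalue but only one ordinary eigenvector, and checks that the third chain solution solves \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \):

```python
import numpy as np

# A hypothetical 3x3 Jordan block: eigenvalue `lam` repeated three
# times with only one ordinary eigenvector.
lam = 2.0
A = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])

# Chain of generalized eigenvectors:
# (A - lam I) a1 = 0, (A - lam I) a2 = a1, (A - lam I) a3 = a2.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.0, 1.0, 0.0])
a3 = np.array([0.0, 0.0, 1.0])

# Third chain solution: x3(t) = (t^2/2! a1 + t a2 + a3) e^{lam t}.
x3 = lambda t: (t**2 / 2 * a1 + t * a2 + a3) * np.exp(lam * t)

# Check x3' = A x3 with a central finite difference at t = 0.5.
t, h = 0.5, 1e-6
xdot = (x3(t + h) - x3(t - h)) / (2 * h)
print(np.allclose(xdot, A @ x3(t), atol=1e-4))
```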

Case III: Matrix A has non-repeated complex eigenvalues

Suppose the real n × n matrix A has m pairs of complex eigenvalues \( a_{j} \pm ib_{j} ,\;j = 1,2, \ldots ,m \). Let \( {\mathop \alpha \limits_{ \sim }}_{j} \pm i{\mathop \beta \limits_{ \sim }}_{j} ,\;j = 1,2, \ldots ,m \) denote the corresponding eigenvectors. Then the solution of the system \( {\mathop {\dot{x}}\limits_{\sim }} (t )= A{\mathop x\limits_{\sim}} (t ) \) for these complex eigenvalues is given by

$$ {\mathop x\limits_{\sim}} (t) = \sum\limits_{j = 1}^{m} {\left( {c_{j} \mathop u\limits_{\sim}{_{j}} + d_{j} \mathop v\limits_{\sim}{_{j}} } \right)} $$

where \( \mathop u\limits_{\sim}{_{j}} = \exp (a_{j} t)\{ {\mathop \alpha \limits_{ \sim }}_{j} \cos (b_{j} t) - {\mathop \beta \limits_{ \sim }}_{j} \sin (b_{j} t)\} \), \( \mathop v\limits_{\sim}{_{j}} = \exp (a_{j} t)\{ {\mathop \alpha \limits_{ \sim }}_{j} \sin(b_{j} t) + {\mathop \beta \limits_{ \sim }}_{j} \cos(b_{j} t)\} \) and \( c_{j} ,d_{j} (j = 1,2, \ldots ,m) \) are arbitrary constants. We discuss each of the above cases through specific examples below.

Example 2.1

Find the general solution of the following linear homogeneous system using eigenvalue-eigenvector method:

$$ \begin{aligned} \dot{x} & = 5x + 4y \\ \dot{y} & = x + 2y. \\ \end{aligned} $$

Solution

In matrix notation, the system can be written as \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where \( {\mathop x\limits_{\sim}} = \left( {\begin{array}{*{20}c} x \\ y \\ \end{array} } \right) \) and \( A = \left( {\begin{array}{*{20}c} 5 & 4 \\ 1 & 2 \\ \end{array} } \right) \). The eigenvalues of A satisfy the equation

$$ \begin{aligned} & {\text{det(}}A - \lambda I )= 0 \\ & \Rightarrow \;\left| {\begin{array}{*{20}c} {5 - \lambda } & 4 \\ 1 & {2 - \lambda } \\ \end{array} } \right| = 0 \\ & \Rightarrow \;\left( {5 - \lambda } \right)\left( {2 - \lambda } \right) - 4 = 0 \\ & \Rightarrow \;\lambda^{2} - 7\lambda + 6 = 0. \\ \end{aligned} $$

The roots of the characteristic equation \( \lambda^{2} - 7\lambda + 6 = 0 \) are λ = 1, 6. So the eigenvalues of A are real and distinct. We shall now find the eigenvectors corresponding to these eigenvalues.

Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{1} = 1 \). Then

$$ \begin{aligned} & \left( {A - I} \right){\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}} \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {5 - 1} & 4 \\ 1 & {2 - 1} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {4e_{1} + 4e_{2} } \\ {e_{1} + e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad 4e_{1} + 4e_{2} = 0,\;e_{1} + e_{2} = 0. \\ \end{aligned} $$

We can choose \( e_{{_{1} }} = 1,\;e_{{_{2} }} = - 1 \). So, the eigenvector corresponding to the eigenvalue \( \lambda_{{_{1} }} = 1 \) is \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right). \)

Again, let \( {\mathop e\limits_{\sim}}^{\prime} = \left( {\begin{array}{*{20}c} {e_{1}^{\prime} } \\ {e_{2}^{\prime}} \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{2} = 6 \). Then

$$ \begin{aligned} & \left( {A - 6I} \right){\mathop e\limits_{\sim}}^{\prime} = {\mathop 0\limits_{\sim}} \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {5-6} & 4 \\ 1 & {2-6} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1}^{\prime} } \\ {e_{2}^{\prime} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} { - e_{1}^{\prime} + 4e_{2}^{\prime} } \\ {e_{1}^{\prime} - 4e_{2}^{\prime} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad - e_{1}^{\prime} + 4e_{2}^{\prime} = 0,\;e_{1}^{\prime} - 4e_{2}^{\prime}= 0. \\ \end{aligned} $$

We can choose \( e_{1}^{\prime} = 4,e_{2}^{\prime} = 1 \). So, the eigenvector corresponding to the eigenvalue \( \,\lambda_{2} = 6\, \) is \( {\mathop e\limits_{\sim}}^{\prime} = \left( {\begin{array}{*{20}c} 4 \\ 1 \\ \end{array} } \right) \). The eigenvectors \( {\mathop e\limits_{\sim}} \), \( {\mathop e\limits_{\sim}}^{\prime} \) are linearly independent. Hence the general solution of the system is given as

$$ {\mathop x\limits_{\sim}} \left( t \right) = c_{1} {\mathop e\limits_{\sim}} \;e^{t} + c_{2} {\mathop e\limits_{\sim}}^{\prime} e^{6\,t} = c_{1} \left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right)e^{t} + c_{2} \left( {\begin{array}{*{20}c} 4 \\ 1 \\ \end{array} } \right)e^{6\,t} $$

or, \( \left. {\begin{array}{*{20}l} {x\left( t \right) = c_{1} e^{t} + 4c_{2} e^{6\,t} } \hfill \\ {y\left( t \right) = - c_{1} e^{t} + c_{2} e^{6\,t} } \hfill \\ \end{array} } \right\} \), where \( c_{1} ,c_{2} \) are arbitrary constants.
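The hand computation of Example 2.1 can be cross-checked numerically. This is a minimal sketch assuming NumPy is available:

```python
import numpy as np

# Coefficient matrix of Example 2.1.
A = np.array([[5.0, 4.0],
              [1.0, 2.0]])

# Eigenvalues should be 1 and 6.
vals = np.sort(np.linalg.eigvals(A))
print(vals)

# The eigenvectors found by hand satisfy A e = lambda e.
e1 = np.array([1.0, -1.0])   # lambda = 1
e2 = np.array([4.0, 1.0])    # lambda = 6
print(np.allclose(A @ e1, e1), np.allclose(A @ e2, 6 * e2))

# Any member of the general solution solves the system; take c1 = 2,
# c2 = -1 and compare a finite difference of x(t) against A x(t).
c1, c2 = 2.0, -1.0
x = lambda t: c1 * e1 * np.exp(t) + c2 * e2 * np.exp(6 * t)
t, h = 0.2, 1e-6
print(np.allclose((x(t + h) - x(t - h)) / (2 * h), A @ x(t), atol=1e-4))
```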

Example 2.2

Find the general solution of the linear system

$$ \frac{d}{{{\text{d}}t}}\left( {\begin{array}{*{20}c} x \\ y \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 3 & 0 \\ 0 & 3 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} x \\ y \\ \end{array} } \right) $$

Solution

The characteristic equation of matrix A is

$$ \begin{aligned} & {\text{det(}}A - \lambda I )= 0 \\ & {\text{or}},\;\left| {\begin{array}{*{20}c} {3 - \lambda } & 0 \\ 0 & {3 - \lambda } \\ \end{array} } \right| = 0 \\ & {\text{or}},\;\left( {3 - \lambda } \right)^{2} = 0 \\ & {\text{or}},\;\lambda = 3,3. \\ \end{aligned} $$

So, the eigenvalues of A are 3, 3, which are real and repeated. Clearly, \( \mathop e\limits_{\sim}{_{1}} = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \) and \( \mathop e\limits_{\sim}{_{2}} = \left( {\begin{array}{*{20}c} 0 \\ 1 \\ \end{array} } \right) \) are two linearly independent eigenvectors corresponding to the repeated eigenvalue λ = 3. Thus, the general solution of the system is

$$ {\mathop x\limits_{\sim}} \left( t \right) = c_{1} \mathop e\limits_{\sim}{_{1}} e^{\lambda \,t} + c_{2} \mathop e\limits_{\sim}{_{2}} e^{\lambda \,t} $$
$$ \begin{aligned} & \Rightarrow \quad \left( {\begin{array}{*{20}c} {x\left( t \right)} \\ {y\left( t \right)} \\ \end{array} } \right) = c_{1} \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right)e^{3t} + c_{2} \left( {\begin{array}{*{20}c} 0 \\ 1 \\ \end{array} } \right)e^{3t} = \left( {\begin{array}{*{20}c} {c_{1} e^{3t} } \\ {c_{2} e^{3t} } \\ \end{array} } \right). \\ & \Rightarrow \quad x\left( t \right) = c_{1} e^{3t} ,\;y\left( t \right) = c_{2} e^{3t},\;{\text{where}}\;c_{1} ,\;c_{2} \;{\text{are arbitrary constants}}. \end{aligned} $$

Example 2.3

Find the general solution of the system

$$ \begin{aligned} \dot{x} & = 3x - 4y \\ \dot{y} & = x - y \\ \end{aligned} $$

using eigenvalue-eigenvector method.

Solution

The characteristic equation of matrix A is

$$ \begin{aligned} {\text{det(}}A - \lambda I )& = 0 \\ \Rightarrow \quad \left| {\begin{array}{*{20}c} {3 - \lambda } & { - 4} \\ 1 & { - 1 - \lambda } \\ \end{array} } \right| & = 0 \\ \Rightarrow \quad \lambda^{2} - 2\lambda + 1 & = 0 \\ \Rightarrow \quad \lambda & = 1,1 \\ \end{aligned} $$

So matrix A has repeated real eigenvalues λ = 1, 1.

Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue λ = 1. Then

$$ \begin{aligned} & \left( {A - I} \right){\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}} \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {3 - 1} & { - 4} \\ 1 & { - 1-1} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {2e_{1} - 4e_{2} } \\ {e_{1} - 2e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad 2e_{1} - 4e_{2} = 0,e_{1} - 2e_{2} = 0 \\ \end{aligned} $$

We can choose \( e_{1} = 2,e_{2} = 1 \). Therefore, \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right) \).

Let \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} {g_{1} } \\ {g_{2} } \\ \end{array} } \right) \) be the generalized eigenvector corresponding to the eigenvalue λ = 1. Then

$$ \begin{aligned} & \left( {A - I} \right){\mathop g\limits_{\sim}} = {\mathop e\limits_{\sim}} \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {3 - 1} & { - 4} \\ 1 & { - 1-1} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {g_{1} } \\ {g_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right) \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {2g_{1} - 4g_{2} } \\ {g_{1} - 2g_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right) \\ & \Rightarrow \quad 2g_{1} - 4g_{2} = 2,g_{1} - 2g_{2} = 1 \\ \end{aligned} $$

We can choose \( g_{2} = 1,g_{1} = 3 \). Therefore \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 3 \\ 1 \\ \end{array} } \right) \).

Therefore the general solution of the system is

$$ \begin{aligned} {\mathop x\limits_{\sim}} \left( t \right) & = c_{1} {\mathop e\limits_{\sim}} e^{t} + c_{2} \left( {{\mathop e\limits_{\sim}} \,t\,e^{t} + {\mathop g\limits_{\sim}} \,e^{t} } \right) \\ {\text{or}},\;\left( {\begin{array}{*{20}c} {x\left( t \right)} \\ {y\left( t \right)} \\ \end{array} } \right) & = c_{1} \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right)e^{t} + c_{2} \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right)te^{t} + c_{2} \left( {\begin{array}{*{20}c} 3 \\ 1 \\ \end{array} } \right)e^{t} \\ \end{aligned} $$

or, \( \left. {\begin{array}{*{20}l} {x\left( t \right) = \left\{ {2c_{1} + \left( {2t + 3} \right)c_{2} } \right\}e^{t} } \hfill \\ {y\left( t \right) = \left\{ {c_{1} + \left( {t + 1} \right)c_{2} } \right\}e^{t} } \hfill \\ \end{array} } \right\} \), where \( c_{1} \) and \( c_{2} \) are arbitrary constants.
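The generalized-eigenvector construction of Example 2.3 can be cross-checked numerically; a sketch assuming NumPy is available:

```python
import numpy as np

# Coefficient matrix of Example 2.3 (repeated eigenvalue lambda = 1).
A = np.array([[3.0, -4.0],
              [1.0, -1.0]])

e = np.array([2.0, 1.0])   # ordinary eigenvector: (A - I) e = 0
g = np.array([3.0, 1.0])   # generalized eigenvector: (A - I) g = e

I = np.eye(2)
print(np.allclose((A - I) @ e, 0), np.allclose((A - I) @ g, e))

# The second solution (t e + g) e^t should also satisfy x' = A x.
x2 = lambda t: (t * e + g) * np.exp(t)
t, h = 0.4, 1e-6
print(np.allclose((x2(t + h) - x2(t - h)) / (2 * h), A @ x2(t), atol=1e-4))
```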

Example 2.4

Find the general solution of the linear system

$$ \begin{aligned} \dot{x} & = 10x - y \\ \dot{y} & = 25x + 2y \\ \end{aligned} $$

Solution

Given system can be written as

$$ {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}}, \,{\text{where}}\, A = \left( {\begin{array}{*{20}c} {10} & { - 1} \\ {25} & 2 \\ \end{array} } \right) \,{\text{and}}\, {\mathop x\limits_{\sim}} = \left( {\begin{array}{*{20}c} x \\ y \\ \end{array} } \right). $$

The characteristic equation of matrix A is

$$ \begin{aligned} & {\text{det(}}A - \lambda I )= 0 \\ & \Rightarrow \quad \left| {\begin{array}{*{20}c} {10 - \lambda } & { - 1} \\ {25} & {2 - \lambda } \\ \end{array} } \right| = 0 \\ & \Rightarrow \quad \lambda^{2} - 12\lambda + 45 = 0 \\ & \Rightarrow \quad \lambda = 6 \pm 3i. \\ \end{aligned} $$

Therefore, matrix A has a pair of complex conjugate eigenvalues 6 ± 3i.

Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue λ = 6 + 3i. Then

$$ \begin{aligned} & \left( {A - \left( {6 + 3i} \right)I} \right){\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}} \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {10-6 - 3i} & { - 1} \\ {25} & {2-6 - 3i} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {\left( {4 - 3i} \right)e_{1} - e_{2} } \\ {25e_{1} - \left( {4 + 3i} \right)e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad \left( {4 - 3i} \right)e_{1} - e_{2} = 0,\;25e_{1} - \left( {4 + 3i} \right)e_{2} = 0. \\ \end{aligned} $$

A nontrivial solution of this system is

$$ e_{1} = 1,\;e_{2} = 4 - 3i. $$

Therefore \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ {4 - 3i} \\ \end{array} } \right) \) \( = \left( {\begin{array}{*{20}c} 1 \\ 4 \\ \end{array} } \right) + i\left( {\begin{array}{*{20}c} 0 \\ { - 3} \\ \end{array} } \right) \) \( = {\mathop \alpha \limits_{ \sim }}{_{1}} + i{\mathop \alpha \limits_{ \sim }}{_{2}}\), where \( {\mathop \alpha \limits_{ \sim }}{_{1}} = \left( {\begin{array}{*{20}c} 1 \\ 4 \\ \end{array} } \right) \) and \( {\mathop \alpha \limits_{ \sim }}{_{2}} = \left( {\begin{array}{*{20}c} 0 \\ { - 3} \\ \end{array} } \right) \).

Similarly, the eigenvector corresponding to the eigenvalue λ = 6 − 3i is \( {\mathop e\limits_{\sim}}^{\prime} = \left( {\begin{array}{*{20}c} 1 \\ {4 + 3i} \\ \end{array} } \right) \) \( = {\mathop \alpha \limits_{ \sim }}{_{1}} - i{\mathop \alpha \limits_{ \sim }}{_{2}} \). Therefore,

$$ \mathop u\limits_{\sim}{_{1}} = e^{a\,t} \left( {{\mathop \alpha \limits_{ \sim }}{_{1}} \cos bt - {\mathop \alpha \limits_{ \sim }}{_{2}} \sin bt} \right) = e^{6t} \left\{ {\left( {\begin{array}{*{20}c} 1 \\ 4 \\ \end{array} } \right)\cos 3t - \left( {\begin{array}{*{20}c} 0 \\ { - 3} \\ \end{array} } \right)\sin 3t} \right\} $$

and

$$ \mathop v\limits_{\sim}{_{1}} = e^{{{{a{\kern 1pt} {\kern 1pt} t}} }} \left( {{\mathop \alpha \limits_{ \sim }}{_{1}} \sin bt + {\mathop \alpha \limits_{ \sim }}{_{2}} \cos bt} \right) = e^{6t} \left\{ {\left( {\begin{array}{*{20}c} 1 \\ 4 \\ \end{array} } \right)\sin 3t + \left( {\begin{array}{*{20}c} 0 \\ { - 3} \\ \end{array} } \right)\cos 3t} \right\}. $$

Therefore, the general solution is

$$ \begin{aligned} {\mathop x\limits_{\sim}} \left( t \right) & = c_{1} \mathop u\limits_{\sim}{_{1}} + d_{1} \mathop v\limits_{\sim}{_{1}} \\ & = e^{6t} \left\{ {\left( {\begin{array}{*{20}c} 1 \\ 4 \\ \end{array} } \right)c_{1} \cos 3t - \left( {\begin{array}{*{20}c} 0 \\ { - 3} \\ \end{array} } \right)c_{1} \sin 3t} \right\} \\&\quad + e^{6t} \left\{ {\left( {\begin{array}{*{20}c} 1 \\ 4 \\ \end{array} } \right)d_{1} \sin 3t + \left( {\begin{array}{*{20}c} 0 \\ { - 3} \\ \end{array} } \right)d_{1} \cos 3t} \right\} \\ & = e^{6t} \left( {\begin{array}{*{20}c} {c_{1} \cos 3t + d_{1} \sin 3t} \\ {\left( {4c_{1} - 3d_{1} } \right)\cos 3t + \left( {3c_{1} + 4d_{1} } \right)\sin 3t} \\ \end{array} } \right) \\ \Rightarrow \quad x\left( t \right) & = e^{6t} \left( {c_{1} \cos 3t + d_{1} \sin 3t} \right),\\ & \quad y\left( t \right) = e^{6t} \left[ {\left( {4c_{1} - 3d_{1} } \right)\cos 3t + \left( {3c_{1} + 4d_{1} } \right)\sin 3t} \right] \\ \end{aligned} $$

where \( c_{1} \,\,{\text{and }}d_{1} \) are arbitrary constants.
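The real solutions built from the complex eigenpair in Example 2.4 can be cross-checked numerically; a sketch assuming NumPy is available:

```python
import numpy as np

# Coefficient matrix of Example 2.4; eigenvalues are 6 +/- 3i.
A = np.array([[10.0, -1.0],
              [25.0,  2.0]])

vals = np.linalg.eigvals(A)

a1 = np.array([1.0, 4.0])    # real part of the eigenvector
a2 = np.array([0.0, -3.0])   # imaginary part of the eigenvector

# The two real solutions constructed from the complex pair:
u = lambda t: np.exp(6 * t) * (a1 * np.cos(3 * t) - a2 * np.sin(3 * t))
v = lambda t: np.exp(6 * t) * (a1 * np.sin(3 * t) + a2 * np.cos(3 * t))

# Both should satisfy x' = A x; check with central finite differences.
t, h = 0.1, 1e-6
du = (u(t + h) - u(t - h)) / (2 * h)
dv = (v(t + h) - v(t - h)) / (2 * h)
print(np.allclose(du, A @ u(t), atol=1e-4), np.allclose(dv, A @ v(t), atol=1e-4))
```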

Example 2.5

Find the solution of the system

$$ \dot{x} = x - 5y,\dot{y} = x - 3y $$

satisfying the initial condition x(0) = 1, y(0) = 1. Describe the behavior of the solution as t → ∞.

Solution

The characteristic equation of matrix A is

$$ \begin{aligned} & {\text{det(}}A - \lambda I )= 0 \\ & \Rightarrow \quad \left| {\begin{array}{*{20}c} {1 - \lambda } & { - 5} \\ 1 & { - 3 - \lambda } \\ \end{array} } \right| = 0 \\ & \Rightarrow \quad \lambda^{2} + 2\lambda + 2 = 0 \\ & \Rightarrow \quad \lambda = - 1 \pm i. \\ \end{aligned} $$

So, matrix A has a pair of complex conjugate eigenvalues (−1 ± i).

Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue λ = −1 + i. Then

$$ \begin{aligned} & \left( {A - \left( { - 1 + i} \right)I} \right){\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}} \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {2 - i} & { - 5} \\ 1 & { - 2 - i} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {\left( {2 - i} \right)e_{1} - 5e_{2} } \\ {e_{1} - \left( {2 + i} \right)e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad \left( {2 - i} \right)e_{1} - 5e_{2} = 0,e_{1} - \left( {2 + i} \right)e_{2} = 0. \\ \end{aligned} $$

A nontrivial solution of this system is

$$ e_{1} = 2 + i,e_{2} = 1. $$

Therefore \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {2 + i} \\ 1 \\ \end{array} } \right) \) \( = \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right) + i\left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \) \( = {\mathop \alpha \limits_{ \sim }}{_{1}} + i{\mathop \alpha \limits_{ \sim }}{_{2}} \), where \( {\mathop \alpha \limits_{ \sim }}{_{1}} = \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right) \) and \( {\mathop \alpha \limits_{ \sim }}{_{2}} = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \).

Similarly, the eigenvector corresponding to the eigenvalue λ = −1 − i is \( {\mathop e\limits_{\sim}}^{\prime} = \left( {\begin{array}{*{20}c} {2 - i} \\ 1 \\ \end{array} } \right) \) \( = {\mathop \alpha \limits_{ \sim }}{_{1}} - i{\mathop \alpha \limits_{ \sim }}{_{2}} \).

$$ \begin{aligned} \therefore \mathop u\limits_{\sim}{_{1}} & = e^{a\,t} \left( {{\mathop \alpha \limits_{ \sim }}{_{1}} \cos bt - {\mathop \alpha \limits_{ \sim }}{_{2}} \sin bt} \right) = e^{ - t} \left\{ {\left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right)\cos t - \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right)\sin t} \right\}\;{\text{and}} \\ \mathop v\limits_{\sim}{_{1}} & = e^{a\,t} \left( {{\mathop \alpha \limits_{ \sim }}{_{1}} \sin bt + {\mathop \alpha \limits_{ \sim }}{_{2}} \cos bt} \right) = e^{ - t} \left\{ {\left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right)\sin t + \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right)\cos t} \right\} \\ \end{aligned} $$

Therefore, the general solution is \( {\mathop x\limits_{\sim}} \left( t \right) = c_{1} \mathop u\limits_{\sim}{_{1}} + d_{1} \mathop v\limits_{\sim}{_{1}} \). Since \( \mathop u\limits_{\sim}{_{1}} (0) = {\mathop \alpha \limits_{ \sim }}{_{1}} \) and \( \mathop v\limits_{\sim}{_{1}} (0) = {\mathop \alpha \limits_{ \sim }}{_{2}} \), the initial condition x(0) = 1, y(0) = 1 gives \( 2c_{1} + d_{1} = 1,\;c_{1} = 1 \), that is, \( c_{1} = 1,\;d_{1} = - 1 \). Hence the required solution is

$$ \begin{aligned} {\mathop x\limits_{\sim}} \left( t \right) & = \mathop u\limits_{\sim}{_{1}} - \mathop v\limits_{\sim}{_{1}} \\ & = e^{ - t} \left\{ {\left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right)\cos t - \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right)\sin t} \right\} - e^{ - t} \left\{ {\left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right)\sin t + \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right)\cos t} \right\} \\ & = e^{ - t} \left\{ {\left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right)\left( {\cos t - \sin t} \right) - \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right)\left( {\cos t + \sin t} \right)} \right\}. \\ \end{aligned} $$

When t → ∞, \( e^{ - \,t} \to 0 \). So, in this case \( {\mathop x\limits_{\sim}} \left( t \right) \to {\mathop 0\limits_{\sim}} \), that is, the solution of the system is stable in the usual sense.
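The initial-value solution of Example 2.5 and its decay can be cross-checked numerically. This sketch (assuming NumPy is available) fits the constants \( c_{1} = 1,\;d_{1} = -1 \) to the initial condition and verifies both the ODE and the behavior as t grows:

```python
import numpy as np

# Coefficient matrix of Example 2.5; eigenvalues are -1 +/- i.
A = np.array([[1.0, -5.0],
              [1.0, -3.0]])

a1 = np.array([2.0, 1.0])   # real part of the eigenvector
a2 = np.array([1.0, 0.0])   # imaginary part of the eigenvector

u = lambda t: np.exp(-t) * (a1 * np.cos(t) - a2 * np.sin(t))
v = lambda t: np.exp(-t) * (a1 * np.sin(t) + a2 * np.cos(t))

# Fitting x(0) = (1, 1)^t to c1*u + d1*v gives c1 = 1, d1 = -1.
x = lambda t: u(t) - v(t)
print(x(0.0))  # [1. 1.]

# x should satisfy x' = A x ...
t, h = 0.7, 1e-6
print(np.allclose((x(t + h) - x(t - h)) / (2 * h), A @ x(t), atol=1e-4))

# ... and the factor e^{-t} drives it to the origin as t grows.
print(np.linalg.norm(x(20.0)) < 1e-7)
```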

Example 2.6

Find the solution of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where

$$ A = \left( {\begin{array}{*{20}c} { - 1} & 2 & 3 \\ 0 & { - 2} & 1 \\ 0 & 3 & 0 \\ \end{array} } \right). $$

Solution

The characteristic equation of A is

$$ \begin{aligned} &\qquad \quad {\text{det(}}A - \lambda I )= 0 \\ & \Rightarrow \quad \left| {\begin{array}{*{20}c} { - 1 - \lambda } & 2 & 3 \\ 0 & { - 2 - \lambda } & 1 \\ 0 & 3 & { - \lambda } \\ \end{array} } \right| = 0 \\ & \Rightarrow \quad \left( {\lambda + 1} \right)\left( {\lambda - 1} \right)\left( {\lambda + 3} \right) = 0 \\ & \Rightarrow \quad \lambda = - 1,1, - 3 \\ \end{aligned} $$

Therefore the eigenvalues of matrix A are λ = −1, 1, −3.

We shall now find the eigenvector corresponding to each of the eigenvalues.

Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ {e_{3} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue λ = −1. Then

$$ \begin{aligned} & (A + I){{{\mathop e\limits_{\sim}} }} = {\mathop 0\limits_{\sim}} \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} { - 1 + 1} & 2 & 3 \\ 0 & { - 2 + 1} & 1 \\ 0 & 3 & 1 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ {e_{3} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad 2e_{2} + 3e_{3} = 0,\; - e_{2} + e_{3} = 0,\;3e_{2} + e_{3} = 0 \\ & \Rightarrow \quad e_{2} = e_{3} = 0\;{\text{and}}\;e_{1}\;{\text{is arbitrary}}.\end{aligned} $$

We choose \( e_{1} = 1 \). Therefore, the eigenvector corresponding to the eigenvalue λ = −1 is \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ \end{array} } \right) \). Similarly, the eigenvectors corresponding to λ = 1 and λ = −3 are, respectively, \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} {11/2} \\ 1 \\ 3 \\ \end{array} } \right) \) and \( {\mathop \alpha \limits_{ \sim }} = \left( {\begin{array}{*{20}c} {1/2} \\ 1 \\ { - 1} \\ \end{array} } \right) \). Therefore the general solution is

$$ \begin{aligned} {\mathop x\limits_{\sim}} (t) & = c_{1} {\mathop e\limits_{\sim}} e^{ - t} + c_{2} {\mathop g\limits_{\sim}} e^{t} + c_{3} {\mathop \alpha \limits_{ \sim }} e^{ - 3t} \\ & = c_{1} \left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ \end{array} } \right)e^{ - t} + c_{2} \left( {\begin{array}{*{20}c} {11/2} \\ 1 \\ 3 \\ \end{array} } \right)e^{t} + c_{3} \left( {\begin{array}{*{20}c} {1/2} \\ 1 \\ { - 1} \\ \end{array} } \right)e^{ - 3t} \\ \end{aligned} $$

where \( c_{1} \),\( c_{2} \) and \( c_{3} \) are arbitrary constants.
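The eigenvalues and the eigenvalue equation \( A{\mathop e\limits_{\sim}} = \lambda {\mathop e\limits_{\sim}} \) from this example are easy to cross-check numerically. The following is a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# coefficient matrix of this example
A = np.array([[-1.0,  2.0, 3.0],
              [ 0.0, -2.0, 1.0],
              [ 0.0,  3.0, 0.0]])

vals, vecs = np.linalg.eig(A)
order = np.argsort(vals)               # sort the eigenvalues as -3, -1, 1
vals, vecs = vals[order], vecs[:, order]

# eigenvalues agree with the hand computation
assert np.allclose(vals, [-3.0, -1.0, 1.0])

# each column of `vecs` satisfies A v = lambda v
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```

Note that `np.linalg.eig` returns eigenvectors normalized to unit length, so they are scalar multiples of the eigenvectors chosen above.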

Example 2.7

Solve the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where

$$ A = \left( {\begin{array}{*{20}c} 1 & { - 3} & 3 \\ 3 & { - 5} & 3 \\ 6 & { - 6} & 4 \\ \end{array} } \right). $$

Solution

The characteristic equation of matrix A is

$$ \begin{aligned} & {\text{det(}}A - \lambda I )= 0 \\ & \Rightarrow \quad \left| {\begin{array}{*{20}c} {1 - \lambda } & { - 3} & 3 \\ 3 & { - 5 - \lambda } & 3 \\ 6 & { - 6} & {4 - \lambda } \\ \end{array} } \right| = 0 \\ & \Rightarrow \quad \lambda = 4, - 2, - 2. \\ \end{aligned} $$

So (−2) is a repeated eigenvalue of A. The eigenvector for the eigenvalue \( \lambda_{1} = 4 \) is given as \( \left( {\begin{array}{*{20}c} 1 \\ 1 \\ 2 \\ \end{array} } \right) \). The eigenvector corresponding to the repeated eigenvalue \( \lambda_{2} = \lambda_{3} = - 2 \) is \( \left( {\begin{array}{*{20}c} {e_{1} } & {e_{2} } & {e_{3} } \\ \end{array} } \right)^{T} \) such that

$$ \left( {\begin{array}{*{20}c} 3 & { - 3} & 3 \\ 3 & { - 3} & 3 \\ 6 & { - 6} & 6 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ {e_{3} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ 0 \\ \end{array} } \right) $$

which is equivalent to

$$ 3e_{1} - 3e_{2} + 3e_{3} = 0,\quad 3e_{1} - 3e_{2} + 3e_{3} = 0,\quad 6e_{1} - 6e_{2} + 6e_{3} = 0, $$

that is, \( e_{1} - e_{2} + e_{3} = 0 \).

We can choose \( e_{1} = 1 \), \( e_{2} = 1 \) and \( e_{3} = 0 \), and so we can take one eigenvector as \( \left( {\begin{array}{*{20}c} 1 \\ 1 \\ 0 \\ \end{array} } \right) \). Again, we can choose \( e_{1} = 0 \), \( e_{2} = 1 \) and \( e_{3} = 1 \). Then we obtain another eigenvector \( \left( {\begin{array}{*{20}c} 0 \\ 1 \\ 1 \\ \end{array} } \right) \). Clearly, these two eigenvectors are linearly independent. Thus, we have two linearly independent eigenvectors corresponding to the repeated eigenvalue −2. Hence, the general solution of the system is given by

$$ {\mathop x\limits_{\sim}} (t) = c_{1} \left( {\begin{array}{*{20}c} 1 \\ 1 \\ 2 \\ \end{array} } \right)e^{4t} + c_{2} \left( {\begin{array}{*{20}c} 1 \\ 1 \\ 0 \\ \end{array} } \right)e^{ - 2t} + c_{3} \left( {\begin{array}{*{20}c} 0 \\ 1 \\ 1 \\ \end{array} } \right)e^{ - 2t} $$

where \( c_{1} \), \( c_{2} \) and \( c_{3} \) are arbitrary constants.
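Why could two independent eigenvectors be found for the repeated eigenvalue? Because \( A + 2I \) has rank 1, so its null space (the eigenspace of −2) has dimension 3 − 1 = 2. A short NumPy sketch confirming this:

```python
import numpy as np

A = np.array([[1.0, -3.0, 3.0],
              [3.0, -5.0, 3.0],
              [6.0, -6.0, 4.0]])

# eigenvalues: 4 and the repeated value -2
assert np.allclose(np.sort(np.linalg.eigvals(A)), [-2.0, -2.0, 4.0])

# rank(A + 2I) = 1, so the eigenspace of -2 is two-dimensional
assert np.linalg.matrix_rank(A + 2.0 * np.eye(3)) == 1

# the two eigenvectors chosen above indeed satisfy A v = -2 v
for v in (np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])):
    assert np.allclose(A @ v, -2.0 * v)
```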

Example 2.8

Solve the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) where

$$ A = \left[ {\begin{array}{*{20}c} { - 1} & { - 1} & 0 & 0 \\ 1 & { - 1} & 0 & 0 \\ 0 & 0 & 0 & { - 2} \\ 0 & 0 & 1 & 2 \\ \end{array} } \right] $$

Solution

Here matrix A has two pairs of complex conjugate eigenvalues \( \lambda_{1} = - 1 \pm i \) and \( \lambda_{2} = 1 \pm i \). The corresponding pairs of eigenvectors are

$$ \begin{aligned} \mathop w\limits_{\sim}{_{1}} &= {\mathop \alpha \limits_{\sim }}{_{1}} \pm i{\mathop \beta \limits_{ \sim }}{_{1}} = \left( {\begin{array}{*{20}c} { \pm i} \\ 1 \\ 0 \\ 0 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 1 \\ 0 \\ 0 \\ \end{array} } \right) \pm i\left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ 0 \\ \end{array} } \right)\;{\text{and}}\\\;\mathop w\limits_{\sim}{_{2}} &= {\mathop \alpha \limits_{ \sim }}{_{2}} \pm i{\mathop \beta \limits_{ \sim }}{_{2}} = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ { - 1 \pm i} \\ 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ { - 1} \\ 1 \\ \end{array} } \right) \pm i\left( {\begin{array}{*{20}c} 0 \\ 0 \\ 1 \\ 0 \\ \end{array} } \right) \end{aligned} $$

Therefore, writing \( \mathop u\limits_{\sim}{_{j}} (t) = {\text{Re}}\left( {\mathop w\limits_{\sim}{_{j}} e^{{\lambda_{j} t}} } \right) \) and \( \mathop v\limits_{\sim}{_{j}} (t) = {\text{Im}}\left( {\mathop w\limits_{\sim}{_{j}} e^{{\lambda_{j} t}} } \right) \) for the two real solutions arising from each complex pair (taking \( \lambda_{j} \) with positive imaginary part), the general solution of the system is expressed as

$$ \begin{aligned} {\mathop x\limits_{\sim}} (t) & = \sum\limits_{j = 1}^{2} {c_{j} } \mathop u\limits_{\sim}{_{j}} + d_{j} \mathop v\limits_{\sim}{_{j}} \\ & = c_{1} e^{ - t} \left\{ {\left( {\begin{array}{*{20}c} 0 \\ 1 \\ 0 \\ 0 \\ \end{array} } \right)\cos t - \left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ 0 \\ \end{array} } \right)\sin t} \right\} + c_{2} e^{t} \left\{ {\left( {\begin{array}{*{20}c} 0 \\ 0 \\ { - 1} \\ 1 \\ \end{array} } \right)\cos t - \left( {\begin{array}{*{20}c} 0 \\ 0 \\ 1 \\ 0 \\ \end{array} } \right)\sin t} \right\} \\ &\quad+ d_{1} e^{ - t} \left\{ {\left( {\begin{array}{*{20}c} 0 \\ 1 \\ 0 \\ 0 \\ \end{array} } \right)\sin t + \left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ 0 \\ \end{array} } \right)\cos t} \right\} \\ & \quad + d_{2} e^{t} \left\{ {\left( {\begin{array}{*{20}c} 0 \\ 0 \\ { - 1} \\ 1 \\ \end{array} } \right)\sin t + \left( {\begin{array}{*{20}c} 0 \\ 0 \\ 1 \\ 0 \\ \end{array} } \right)\cos t} \right\} \\ & = \left( {\begin{array}{*{20}c} {e^{ - t} (d_{1} \cos t - c_{1} \sin t)} \\ {e^{ - t} (c_{1} \cos t + d_{1} \sin t)} \\ {e^{t} \{ (d_{2} - c_{2} ) \cos t - (d_{2} + c_{2} )\sin t\} } \\ {e^{t} (c_{2} \cos t + d_{2} \sin t)} \\ \end{array} } \right) \\ \end{aligned} $$

where \( c_{j} ,d_{j} (j = 1,2) \) are arbitrary constants.
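Since A is block diagonal, each 2 × 2 block contributes one complex conjugate pair. A quick NumPy check of the eigenvalues and one complex eigenvector:

```python
import numpy as np

A = np.array([[-1.0, -1.0, 0.0,  0.0],
              [ 1.0, -1.0, 0.0,  0.0],
              [ 0.0,  0.0, 0.0, -2.0],
              [ 0.0,  0.0, 1.0,  2.0]])

vals = np.linalg.eigvals(A)                     # complex in general
expected = np.array([-1 + 1j, -1 - 1j, 1 + 1j, 1 - 1j])

# every computed eigenvalue matches one of the expected values
for lam in vals:
    assert np.min(np.abs(lam - expected)) < 1e-9

# the complex eigenvector (i, 1, 0, 0)^T belongs to lambda = -1 + i
w = np.array([1j, 1.0, 0.0, 0.0])
assert np.allclose(A @ w, (-1 + 1j) * w)
```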

2.3 Fundamental Matrix

A set \( \{ {\mathop x\limits_{\sim}{_{1}} (t ),\mathop x\limits_{\sim}{_{2}} (t ), \ldots ,\mathop x\limits_{\sim}{_{n}} (t )} \} \) of solutions of a linear homogeneous system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) is said to be a fundamental set of solutions of that system if it satisfies the following two conditions:

  1. (i)

    The set \( \{ {\mathop x\limits_{\sim}{_{1}} (t ), \mathop x\limits_{\sim}{_{2}} (t ), \ldots, \mathop x\limits_{\sim}{_{n}} (t )} \} \) is linearly independent, that is, for \( c_{1} ,c_{2} , \ldots ,c_{n} \in {\mathbf{\mathbb{R}}},\;c_{1} \mathop x\limits_{\sim}{_{1}} + c_{2} \mathop x\limits_{\sim}{_{2}} + \cdots + c_{n} \mathop x\limits_{\sim}{_{n}} = {\mathop 0\limits_{\sim}} \; \Rightarrow c_{1} = c_{2} = \cdots = c_{n} = 0. \)

  2. (ii)

    For any solution \( {\mathop x\limits_{\sim}} (t ) \) of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), there exist \( c_{1} ,c_{2} , \ldots ,c_{n} \in {\mathbf{\mathbb{R}}} \) such that \( {\mathop x\limits_{\sim}} (t )= c_{1} \mathop x\limits_{\sim}{_{1}} (t )+ c_{2} \mathop x\limits_{\sim}{_{2}} (t )+ \cdots + c_{n} \mathop x\limits_{\sim}{_{n}} (t ),\forall t \in {\mathbf{\mathbb{R}}} \).

The solution, expressed as a linear combination of a fundamental set of solutions of a system, is called a general solution of the system.

Let \( \{ {\mathop x\limits_{\sim}{_{1}} (t ), \mathop x\limits_{\sim}{_{2}} (t ), \ldots , \mathop x\limits_{\sim}{_{n}} (t )} \} \) be a fundamental set of solutions of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) for \( t \in I = \left[ {a,b} \right];\,\,a,b \in {\mathbf{\mathbb{R}}} \). Then the matrix

$$ \Phi (t )= \left( { \mathop x\limits_{\sim}{_{1}} (t ), \mathop x\limits_{\sim}{_{2}} (t ), \ldots , \mathop x\limits_{\sim}{_{n}} (t )} \right) $$

is called a fundamental matrix of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), \( {\mathop x\limits_{\sim}} \in {\mathbf{\mathbb{R}}}^{n} \). Since the set \( \{ { \mathop x\limits_{\sim}{_{1}} (t ), \mathop x\limits_{\sim}{_{2}} (t ), \ldots , \mathop x\limits_{\sim}{_{n}} (t )} \} \) is linearly independent, the fundamental matrix \( \Phi (t ) \) is nonsingular. Now the general solution of the system is

$$ \begin{aligned} {\mathop x\limits_{\sim}} (t ) & = c_{1} \mathop x\limits_{\sim}{_{1}} (t )+ c_{2} \mathop x\limits_{\sim}{_{2}} (t )+ \cdots + c_{n} \mathop x\limits_{\sim}{_{n}} (t )\\ & = \left( \mathop x\limits_{\sim}{_{1}} (t ), \mathop x\limits_{\sim}{_{2}} { (t ), \ldots ,\mathop x\limits_{\sim}{_{n}} (t )} \right)\left( {\begin{array}{*{20}c} {c_{1} } \\ {c_{2} } \\ \vdots \\ {c_{n} } \\ \end{array} } \right) \\ & =\Phi (t ){\kern 1pt} {\kern 1pt} {\mathop c\limits_{\sim}} \\ \end{aligned} $$

where \( {\mathop c\limits_{\sim}} = (c_{1} ,c_{2} , \ldots ,c_{n} )^{t} \) is a constant column vector. If the initial condition is \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \), then

$$ \begin{aligned} &\Phi ( 0 ){\kern 1pt} {\mathop c\limits_{\sim}} = \mathop x\limits_{\sim}{_{0}} \\ & \Rightarrow {\mathop c\limits_{\sim}} =\Phi ^{ - 1} ( 0 ){\kern 1pt} \mathop x\limits_{\sim}{_{0}} [ {\text{Since }}\Phi (t ) {\text{ is nonsingular for all }}t ]. \\ \end{aligned} $$

Thus the solution of the initial value problem \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) with the initial conditions \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \) can be expressed in terms of the fundamental matrix Φ(t) as

$$ {\mathop x\limits_{\sim}} (t )=\Phi (t )\Phi ^{ - 1} ( 0 ){\kern 1pt} \mathop x\limits_{\sim}{_{0}} $$
(2.7)

Note that two different homogeneous systems cannot have the same fundamental matrix. Again, if \( \Phi (t ) \) is a fundamental matrix of \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), then for any nonzero constant c, \( c\Phi (t ) \) is also a fundamental matrix of the system.

Example 2.9

Find the fundamental matrix of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where \( A = \left( {\begin{array}{*{20}c} 1 & { - 2} \\ { - 3} & 2 \\ \end{array} } \right) \). Hence find its solution.

Solution

The characteristic equation of matrix A is

$$ \begin{aligned} & \left| {A - \lambda I} \right| = 0 \\ & \Rightarrow \quad \left| {\begin{array}{*{20}c} {1 - \lambda } & { - 2} \\ { - 3} & {2 - \lambda } \\ \end{array} } \right| = 0 \\ & \Rightarrow \quad \left( {1 - \lambda } \right)\left( {2 - \lambda } \right) - 6 = 0 \\ & \Rightarrow \quad \lambda^{2} - 3\lambda - 4 = 0 \\ & \Rightarrow \quad \lambda = - 1,4. \\ \end{aligned} $$

So, the eigenvalues of matrix A are −1, 4, which are real and distinct.

Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{1} = - 1 \). Then

$$ \begin{aligned} & \left( {A + I} \right){\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}} \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {1 + 1} & { - 2} \\ { - 3} & {2 + 1} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad 2e_{1} - 2e_{2} = 0, - 3e_{1} + 3e_{2} = 0. \\ \end{aligned} $$

A nontrivial solution of this system is \( e_{1} = 1,e_{2} = 1 \).

$$ \therefore {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ 1 \\ \end{array} } \right). $$

Again, let \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} {g_{1} } \\ {g{}_{2}} \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{2} = 4 \). Then

$$ \begin{aligned} & \left( {A - 4I} \right){\mathop g\limits_{\sim}} = {\mathop 0\limits_{\sim}} \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {1-4} & { - 2} \\ { - 3} & {2-4} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {g_{1} } \\ {g_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad 3g_{1} + 2g_{2} = 0 \\ \end{aligned} $$

Choose \( g_{1} = 2,g_{2} = - 3 \). Therefore, \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 2 \\ { - 3} \\ \end{array} } \right) \).

Therefore the eigenvectors corresponding to the eigenvalues λ = −1, 4 are respectively \( \left( {\begin{array}{*{20}c} 1 \\ 1 \\ \end{array} } \right) \) and \( \left( {\begin{array}{*{20}c} 2 \\ { - 3} \\ \end{array} } \right) \), which are linearly independent. So two fundamental solutions of the system are

$$ \mathop x\limits_{\sim}{_{1}} \left( t \right) = \left( {\begin{array}{*{20}c} 1 \\ 1 \\ \end{array} } \right)e^{ - t} ,\mathop x\limits_{\sim}{_{2}} \left( t \right) = \left( {\begin{array}{*{20}c} 2 \\ { - 3} \\ \end{array} } \right)e^{4t} $$

and a fundamental matrix of the system is

$$ \Phi (t )= \left( {\begin{array}{*{20}c} {\mathop x\limits_{\sim}{_{1}} (t )} & {\mathop x\limits_{\sim}{_{2}} (t )} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {e^{ - t} } & {2e^{4t} } \\ {e^{ - t} } & { - 3e^{4t} } \\ \end{array} } \right). $$

Now \( \Phi ( 0 )= \left( {\begin{array}{*{20}c} 1 & 2 \\ 1 & { - 3} \\ \end{array} } \right) \) and so \( \Phi ^{ - 1} ( 0 )= \frac{1}{5}\left( {\begin{array}{*{20}c} 3 & 2 \\ 1 & { - 1} \\ \end{array} } \right) \).

Therefore the general solution of the system is given by

$$ \begin{aligned} {\mathop x\limits_{\sim}} (t )=\Phi (t )\Phi ^{ - 1} ( 0 ){\kern 1pt} \mathop x\limits_{\sim}{_{0}} & = \frac{1}{5}\left( {\begin{array}{*{20}c} {e^{ - t} } & {2e^{4t} } \\ {e^{ - t} } & { - 3e^{4t} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 3 & 2 \\ 1 & { - 1} \\ \end{array} } \right) \mathop x\limits_{\sim}{_{0}} \\ & = \frac{1}{5}\left( {\begin{array}{*{20}c} {3e^{ - t} + 2e^{4t} } & {2e^{ - t} - 2e^{4t} } \\ {3e^{ - t} - 3e^{4t} } & {2e^{ - t} + 3e^{4t} } \\ \end{array} } \right)\mathop x\limits_{\sim}{_{0}}. \\ \end{aligned} $$
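The defining properties of this fundamental matrix can be verified numerically: Φ(t) should satisfy the matrix equation \( {\dot{\Phi }} = A\Phi \), and \( {\mathop x\limits_{\sim}} (t )=\Phi (t )\Phi ^{ - 1} ( 0 )\mathop x\limits_{\sim}{_{0}} \) should meet the initial condition. A sketch assuming NumPy, with the derivative checked by a central difference:

```python
import numpy as np

A = np.array([[ 1.0, -2.0],
              [-3.0,  2.0]])

def Phi(t):
    # fundamental matrix assembled from the two eigensolutions above
    return np.array([[np.exp(-t),  2.0 * np.exp(4.0 * t)],
                     [np.exp(-t), -3.0 * np.exp(4.0 * t)]])

# Phi satisfies dPhi/dt = A Phi (central-difference check)
t, h = 0.3, 1e-6
dPhi = (Phi(t + h) - Phi(t - h)) / (2.0 * h)
assert np.allclose(dPhi, A @ Phi(t), atol=1e-4)

# x(t) = Phi(t) Phi(0)^{-1} x0 meets the initial condition x(0) = x0
x0 = np.array([2.0, -1.0])

def x(t):
    return Phi(t) @ np.linalg.inv(Phi(0.0)) @ x0

assert np.allclose(x(0.0), x0)
```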

2.3.1 General Solution of Linear Systems

Consider a simple linear equation

$$ \dot{x} = ax $$
(2.8)

with initial condition \( x ( 0 )= x_{0} \), where \( a \) and \( x_{0} \) are given constants. The solution of this initial value problem (IVP) is \( x (t )= x_{0} e^{at} \). We may then expect that the solution of the initial value problem for the n × n system

$$ {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \;{\text{with}}\;{\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} $$
(2.9)

can be expressed in terms of the matrix exponential function as

$$ {\mathop x\limits_{\sim}} \left( t \right) = e^{At} \mathop x\limits_{\sim}{_{0}} $$
(2.10)

where A is an n × n matrix. Comparing (2.10) with the solution (2.7) obtained by the fundamental matrix method, we have the relation

$$ e^{At} =\Phi (t )\Phi ^{ - 1} ( 0 ) $$
(2.11)

Thus we see that if \( \Phi (t ) \) is a fundamental matrix of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), then \( \Phi ( 0 ) \) is invertible and \( e^{At} =\Phi (t )\Phi ^{ - 1} ( 0 ) \). Note that if \( \Phi ( 0 )= I \), then \( \Phi ^{ - 1} ( 0 )= I \) and so, \( e^{At} =\Phi (t ) { }I =\Phi (t ) \).

Example 2.10

Is \( \Phi (t )= \left( {\begin{array}{*{20}c} {2e^{t} } & { - e^{ - 3t} } \\ { - 4e^{t} } & {2e^{ - 3t} } \\ \end{array} } \right) \) a fundamental matrix for a system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \)?

Solution

We know that if \( \Phi (t ) \) is a fundamental matrix, then \( \Phi ( 0 ) \) is invertible.

Here \( \Phi (t )= \left( {\begin{array}{*{20}c} {2e^{t} } & { - e^{ - 3t} } \\ { - 4e^{t} } & {2e^{ - 3t} } \\ \end{array} } \right) \). So, \( \Phi ( 0 )= \left( {\begin{array}{*{20}c} 2 & { - 1} \\ { - 4} & 2 \\ \end{array} } \right) \).

Since \( {\text{det(}}\Phi ( 0 ) )= 4-4 = 0 \), \( \Phi ( 0 ) \) is not invertible and hence the given matrix is not a fundamental matrix for the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \).

Example 2.11

Find \( e^{At} \) for the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where \( A = \left( {\begin{array}{*{20}c} 1 & 1 \\ 4 & 1 \\ \end{array} } \right) \).

Solution

The characteristic equation of A is

$$ \begin{aligned} & \left| {A - \lambda I} \right| = 0 \\ & \Rightarrow \quad \left| {\begin{array}{*{20}c} {1 - \lambda } & 1 \\ 4 & {1 - \lambda } \\ \end{array} } \right| = 0 \\ & \Rightarrow \quad \left( {\lambda - 1} \right)^{2} \,-\, 4 = 0 \\ & \Rightarrow \quad \lambda = 3, -1. \\ \end{aligned} $$

So, the eigenvalues of A are λ = 3, −1. The eigenvectors corresponding to λ = 3, −1 are, respectively, \( \left( {\begin{array}{*{20}c} 1 \\ 2 \\ \end{array} } \right) \) and \( \left( {\begin{array}{*{20}c} 1 \\ { - 2} \\ \end{array} } \right) \), which are linearly independent. So, two fundamental solutions of the system are \( \mathop x\limits_{\sim}{_{1}} \left( t \right) = \left( {\begin{array}{*{20}c} 1 \\ 2 \\ \end{array} } \right)e^{3t} , \mathop x\limits_{\sim}{_{2}} \left( t \right) = \left( {\begin{array}{*{20}c} 1 \\ { - 2} \\ \end{array} } \right)e^{ - t} \). Therefore a fundamental matrix of the system is

$$ \Phi (t )= \left( {\begin{array}{*{20}c} {\mathop x\limits_{\sim}{_{1}} (t )} & {\mathop x\limits_{\sim}{_{2}} (t )} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {e^{3t} } & {e^{ - t} } \\ {2e^{3t} } & { - 2e^{ - t} } \\ \end{array} } \right). $$

Now, \( \Phi ( 0 )= \left( {\begin{array}{*{20}c} 1 & 1 \\ 2 & { - 2} \\ \end{array} } \right) \) and \( \Phi ^{ - 1} ( 0 )= - \frac{1}{4}\left( {\begin{array}{*{20}c} { - 2} & { - 1} \\ { - 2} & 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {\frac{1}{2}} & {\frac{1}{4}} \\ {\frac{1}{2}} & { - \frac{1}{4}} \\ \end{array} } \right) \).

Therefore,

$$ \begin{aligned} e^{At} & =\Phi (t )\Phi ^{ - 1} ( 0 )\\ & = \left( {\begin{array}{*{20}c} {e^{3t} } & {e^{ - t} } \\ {2e^{3t} } & { - 2e^{ - t} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\frac{1}{2}} & {\frac{1}{4}} \\ {\frac{1}{2}} & { - \frac{1}{4}} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {\frac{1}{2}\left( {e^{3t} + e^{ - t} } \right)} & {\frac{1}{4}\left( {e^{3t} - e^{ - t} } \right)} \\ {\left( {e^{3t} - e^{ - t} } \right)} & {\frac{1}{2}\left( {e^{3t} + e^{ - t} } \right)} \\ \end{array} } \right). \\ \end{aligned} $$
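The closed form obtained for \( e^{At} \) can be checked against a library matrix exponential. The sketch below assumes SciPy is available and uses its `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm   # SciPy's matrix exponential

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])

def eAt(t):
    # closed form derived in Example 2.11
    ep, em = np.exp(3.0 * t), np.exp(-t)
    return np.array([[0.5 * (ep + em), 0.25 * (ep - em)],
                     [      ep - em,   0.5  * (ep + em)]])

for t in (0.0, 0.5, 1.3):
    assert np.allclose(eAt(t), expm(A * t))
```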

2.3.2 Fundamental Matrix Method

The fundamental matrix can be used to obtain the general solution of a linear system. The fundamental theorem gives the existence and uniqueness of the solution of a linear system \( \dot{{\mathop x\limits_{\sim}} } = A{\mathop x\limits_{\sim}} \), \( {\mathop x\limits_{\sim}} \in {\mathbf{\mathbb{R}}}^{n} \) subject to the initial condition \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \in {\mathbf{\mathbb{R}}}^{n} \). We now present the fundamental theorem.

Theorem 2.1

(Fundamental theorem) Let A be an n × n matrix. Then, for any given initial condition \( \mathop x\limits_{\sim}{_{0}} \in {\mathbf{\mathbb{R}}}^{n} \), the initial value problem \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) with \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \) has the unique solution \( {\mathop x\limits_{\sim}} (t )= e^{At} \mathop x\limits_{\sim}{_{0}} \).

Proof

The initial value problem is

$$ {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} ,\quad {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} $$
(2.12)

We have

$$ e^{At} = I + At + \frac{{A^{2} t^{2} }}{2\, !} + \frac{{A^{3} t^{3} }}{3\, !} + \cdots $$
(2.13)

Differentiating (2.13) w.r.to t,

$$ \begin{aligned} \frac{d}{{{\text{d}}t}}\left( {e^{At} } \right) & = \frac{d}{{{\text{d}}t}}\left( {I + At + \frac{{A^{2} t^{2} }}{2 !} + \frac{{A^{3} t^{3} }}{3 !} + \cdots } \right) \\ & = \frac{d}{{{\text{d}}t}}\left( I \right) + \frac{d}{{{\text{d}}t}}\left( {At} \right) + \frac{d}{{{\text{d}}t}}\left( {\frac{{A^{2} t^{2} }}{2 !}} \right) + \frac{d}{{{\text{d}}t}}\left( {\frac{{A^{3} t^{3} }}{3 !}} \right) + \cdots \\ \end{aligned} $$

The term-by-term differentiation is valid because the series for \( e^{At} \) converges uniformly on every bounded interval of t.

$$ \begin{aligned} {\text{or}},\;\frac{d}{{{\text{d}}t}}\left( {e^{At} } \right) & = \varphi + A + A^{2} t + \frac{{A^{3} t^{2} }}{2\, !} + \frac{{A^{4} t^{3} }}{3 !} + \cdots \\ & = A\left( {I + At + \frac{{A^{2} t^{2} }}{2 !} + \frac{{A^{3} t^{3} }}{3 !} + \cdots } \right) \\ & = Ae^{At}. \\ \end{aligned} $$

Therefore,

$$ \frac{d}{{{\text{d}}t}}\left( {e^{At} } \right) = Ae^{At} $$
(2.14)

This shows that the matrix \( e^{At} \) satisfies the matrix differential equation \( {\dot{\Phi }} = A\Phi \); that is, each column of \( e^{At} \) is a solution of \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \). The matrix \( e^{At} \) is known as the fundamental matrix of the system (2.12). Now using (2.14)

$$ \begin{aligned} \frac{d}{{{\text{d}}t}}\left( {e^{At} \mathop x\limits_{\sim}{_{0}} } \right) & = \frac{d}{{{\text{d}}t}}\left( {e^{At} } \right)\mathop x\limits_{\sim}{_{0}} = Ae^{At} \mathop x\limits_{\sim}{_{0}} \\ & \Rightarrow \quad {\mathop {\dot{x}}\limits_{\sim }} = \frac{d}{{{\text{d}}t}}( {{\mathop x\limits_{\sim}} }) = A{\mathop x\limits_{\sim}} , \\ \end{aligned} $$

where \( {\mathop x\limits_{\sim}} = e^{At} \mathop x\limits_{\sim}{_{0}} \).

Also, \( {\mathop x\limits_{\sim}} ( 0 )= \left[ {e^{At} \mathop x\limits_{\sim}{_{0}} } \right]_{t = 0} \) \( = \left[ {e^{At} } \right]_{t = 0} \mathop x\limits_{\sim}{_{0}} \) \( = I\, \mathop x\limits_{\sim}{_{0}} = \mathop x\limits_{\sim}{_{0}} \). Thus \( {\mathop x\limits_{\sim}} (t )= e^{At} \mathop x\limits_{\sim}{_{0}} \) is a solution of (2.12). We now prove uniqueness. Let \( {\mathop x\limits_{\sim}} (t ) \) be any solution of (2.12) and set \( {\mathop y\limits_{\sim}} (t )= e^{ - At} {\mathop x\limits_{\sim}} (t ) \). Then

$$ \begin{aligned} {\mathop {\dot{y}}\limits_{\sim }} (t )& = - Ae^{ - At} {\mathop x\limits_{\sim}} (t )+ e^{ - At} {\mathop {\dot{x}}\limits_{\sim }} (t )\\ & = - Ae^{ - At} {\mathop x\limits_{\sim}} (t )+ Ae^{ - At} {\mathop x\limits_{\sim}} (t ) \, = \, 0. \\ \end{aligned} $$

This implies that \( {\mathop y\limits_{\sim}} (t ) \) is constant for all \( t \in {\mathbf{\mathbb{R}}} \). Setting t = 0 shows that \( {\mathop y\limits_{\sim}} (t )= {\mathop y\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \). Therefore any solution of the IVP (2.12) is given as \( {\mathop x\limits_{\sim}} (t )= e^{At} {\mathop y\limits_{\sim}} (t )= e^{At} \mathop x\limits_{\sim}{_{0}} \). This completes the proof.
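The proof rests on the power series (2.13). A sketch using NumPy only: truncate that series to approximate \( e^{At} \mathop x\limits_{\sim}{_{0}} \) and verify the two defining facts of the theorem, \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \) and \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) (the latter to finite-difference accuracy). The 2 × 2 matrix chosen here is purely illustrative.

```python
import numpy as np

def expm_series(M, terms=40):
    # partial sums of e^M = I + M + M^2/2! + ...   (eq. 2.13)
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])      # illustrative coefficient matrix
x0 = np.array([1.0, 0.0])

def x(t):
    return expm_series(A * t) @ x0

# x(0) = x0, and dx/dt = A x holds to finite-difference accuracy
assert np.allclose(x(0.0), x0)
t, h = 0.4, 1e-6
assert np.allclose((x(t + h) - x(t - h)) / (2.0 * h), A @ x(t), atol=1e-5)
```

For matrices of small norm, forty terms of the series already reach machine precision; production code would instead use a scaling-and-squaring routine such as SciPy's `expm`.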

2.3.3 Matrix Exponential Function

From the fundamental theorem , the general solution of a linear system can be obtained using the exponential matrix function. The exponential matrix function has some interesting properties in which the general solution can be obtained easily. For an n × n matrix A, the matrix exponential function \( e^{A} \) of A is defined as

$$ e^{A} = \sum\limits_{n = 0}^{\infty } {\frac{{A^{n} }}{n !}} = I + A + \frac{{A^{2} }}{2 !} + \cdots $$
(2.15)

Note that the infinite series (2.15) converges for every n × n matrix A. If A = [a], a 1 × 1 matrix, then \( e^{A} = \left[ {e^{a} } \right] \) (see the book by L. Perko [1]). We now discuss some of the important properties of the matrix exponential function \( e^{A} \).

Property 1

If A = φ, the null matrix, then \( e^{At} = I \).

Proof

By definition

$$ \begin{aligned} e^{At} & = I + At + \frac{{A^{2} t^{2} }}{2 !} + \frac{{A^{3} t^{3} }}{3 !} + \cdots \\ & = I + {\varphi }t + \frac{{{\varphi }^{2} t^{2} }}{2 !} + \frac{{{\varphi }^{3} t^{3} }}{3 !} + \cdots \\ & = I. \\ \end{aligned} $$

So, \( e^{At} = I \) for A = φ.

Property 2

Let A = I, the identity matrix. Then

$$ e^{At} = \left[ {\begin{array}{*{20}c} {e^{t} } & 0 \\ 0 & {e^{t} } \\ \end{array} } \right] = Ie^{t}. $$

Proof

We know that \( e^{At} = I + At + \frac{{A^{2} t^{2} }}{2\, !} + \frac{{A^{3} t^{3} }}{3\, !} + \cdots \). Therefore

$$ \begin{aligned} e^{It} & = I + It + \frac{{I^{2} t^{2} }}{2\, !} + \frac{{I^{3} t^{3} }}{3\, !} + \cdots \\ & = I + It + \frac{{It^{2} }}{2\, !} + \frac{{It^{3} }}{3\, !} + \cdots \\ & = I\left( {1 + t + \frac{{t^{2} }}{2\, !} + \frac{{t^{3} }}{3\, !} + \cdots } \right) \\ & = Ie^{t} = e^{t} \left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {e^{t} } & 0 \\ 0 & {e^{t} } \\ \end{array} } \right]. \\ \end{aligned} $$

Note

If A = αI, α being a scalar, then

$$ e^{At} = e^{\alpha It} = Ie^{\alpha t} = \left[ {\begin{array}{*{20}c} {e^{\alpha t} } & 0 \\ 0 & {e^{\alpha t} } \\ \end{array} } \right]. $$

Property 3

Suppose \( D = \left[ {\begin{array}{*{20}c} {\lambda_{1} } & 0 \\ 0 & {\lambda_{2} } \\ \end{array} } \right] \) , a diagonal matrix. Then

$$ e^{Dt} = \left[ {\begin{array}{*{20}c} {e^{{{{\lambda_{1} t}} }} } & 0 \\ 0 & {e^{{{{\lambda_{2} t}} }} } \\ \end{array} } \right] $$

Proof

By definition

$$ \begin{aligned} e^{Dt} & = I + Dt + \frac{{D^{2} t^{2} }}{2\, !} + \frac{{D^{3} t^{3} }}{3\, !} + \cdots \\ & = \left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {\lambda_{1} } & 0 \\ 0 & {\lambda_{2} } \\ \end{array} } \right]t + \left[ {\begin{array}{*{20}c} {\lambda_{1} } & 0 \\ 0 & {\lambda_{2} } \\ \end{array} } \right]^{2} \frac{{t^{2} }}{2\, !} + \cdots \\ & = \left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {\lambda_{1} } & 0 \\ 0 & {\lambda_{2} } \\ \end{array} } \right]t + \left[ {\begin{array}{*{20}c} {\lambda_{1}^{2} } & 0 \\ 0 & {\lambda_{2}^{2} } \\ \end{array} } \right]\frac{{t^{2} }}{2\, !} + \cdots \\ & = \left[ {\begin{array}{*{20}c} {1 + \lambda_{1} t + \frac{{\lambda_{1}^{2} t^{2} }}{2\, !} + \cdots } & 0 \\ 0 & {1 + \lambda_{2} t + \frac{{\lambda_{2}^{2} t^{2} }}{2\, !} + \cdots } \\ \end{array} } \right] \\ & = \left[ {\begin{array}{*{20}c} {e^{{{{\lambda_{1} t}} }} } & 0 \\ 0 & {e^{{{{\lambda_{2} t}} }} } \\ \end{array} } \right]. \\ \end{aligned} $$
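Property 3 can be confirmed in one line with a library matrix exponential; the sketch below assumes SciPy is available:

```python
import numpy as np
from scipy.linalg import expm

D = np.diag([2.0, -1.0])   # diagonal matrix with lambda_1 = 2, lambda_2 = -1
t = 0.7

# e^{Dt} is the diagonal matrix of scalar exponentials e^{lambda_i t}
assert np.allclose(expm(D * t), np.diag([np.exp(2.0 * t), np.exp(-1.0 * t)]))
```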

Property 4

Let \( P^{ - 1} AP = D \), D being a diagonal matrix. Then

$$ e^{At} = Pe^{Dt} P^{ - 1} = P\left[ {\begin{array}{*{20}c} {e^{{{{\lambda_{1} t}} }} } & 0 \\ 0 & {e^{{{{\lambda_{2} t}} }} } \\ \end{array} } \right]P^{ - 1} , \,{\text{where}}\, D = \left[ {\begin{array}{*{20}c} {\lambda_{1} } & 0 \\ 0 & {\lambda_{2} } \\ \end{array} } \right]. $$

Proof

We have

$$ \begin{aligned} e^{At} & = \mathop { \lim }\limits_{n \to \infty } \,\sum\limits_{k = 0}^{n} {\frac{{A^{k} t^{k} }}{k\, !}} \\ & = \mathop { \lim }\limits_{n \to \infty } \sum\limits_{k = 0}^{n} {\frac{{\left( {PDP^{ - 1} } \right)^{k} t^{k} }}{k\, !}} \, [\because \,D = P^{ - 1} AP,{\text{ so }}A = PDP^{ - 1} ]\\ & = \mathop { \lim }\limits_{n \to \infty } \sum\limits_{k = 0}^{n} {\frac{{\left( {PD^{k} P^{ - 1} } \right)t^{k} }}{k\, !}} \left[ \begin{array}{l} \left( {PDP^{ - 1} } \right)^{k} = \left( {PDP^{ - 1} } \right)\left( {PDP^{ - 1} } \right) \cdots \left( {PDP^{ - 1} } \right) \hfill \\ = PD\left( {P^{ - 1} P} \right)D\left( {P^{ - 1} P} \right) \cdots \left( {P^{ - 1} P} \right)DP^{ - 1} \hfill \\ = PD^{k} P^{ - 1} \hfill \\ \end{array} \right] \\ & = P\left( {\mathop { \lim }\limits_{n \to \infty } \sum\limits_{k = 0}^{n} {\frac{{D^{k} t^{k} }}{k\, !}} } \right)P^{ - 1} \\ & = Pe^{Dt} P^{ - 1} \\ & = P\left[ {\begin{array}{*{20}c} {e^{{{{\lambda_{1} t}} }} } & 0 \\ 0 & {e^{{{{\lambda_{2} t}} }} } \\ \end{array} } \right]P^{ - 1} \\ \end{aligned} $$
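A numerical sketch of Property 4, assuming SciPy: build A from a chosen invertible P and diagonal D, then compare \( e^{At} \) with \( Pe^{Dt} P^{ - 1} \).

```python
import numpy as np
from scipy.linalg import expm

# A = P D P^{-1} for a chosen invertible P and diagonal D
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])
D = np.diag([2.0, -3.0])
A = P @ D @ np.linalg.inv(P)

t = 0.5
lhs = expm(A * t)
rhs = P @ np.diag([np.exp(2.0 * t), np.exp(-3.0 * t)]) @ np.linalg.inv(P)
assert np.allclose(lhs, rhs)
```

This is the practical route to \( e^{At} \) whenever A is diagonalizable: exponentiate the eigenvalues and conjugate back.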

Property 5

Let N be a nilpotent matrix of order k. Then \( e^{Nt} \) is a series containing finite terms only.

Proof

A matrix N is said to be a nilpotent matrix of order (or index) k if k is the least positive integer such that \( N^{k} = \varphi \) but \( N^{k - 1} \ne \varphi \), φ being the null matrix. Hence every power \( N^{m} \) with \( m \ge k \) is the null matrix. Therefore

$$ \begin{aligned} e^{{{{N{\kern 1pt} t}} }} & = I + Nt + \frac{{N^{2} t^{2} }}{2\, !} + \frac{{N^{3} t^{3} }}{3\, !} + \cdots + \frac{{N^{k - 1} t^{k - 1} }}{{\left( {k - 1} \right)\, !}} + \frac{{N^{k} t^{k} }}{k\, !} + \cdots \\ & = I + Nt + \frac{{N^{2} t^{2} }}{2\, !} + \frac{{N^{3} t^{3} }}{3\, !} + \cdots + \frac{{N^{k - 1} t^{k - 1} }}{{\left( {k - 1} \right)\, !}} \\ \end{aligned} $$

which is a series of finite terms only.
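For instance, the 3 × 3 shift matrix is nilpotent of index 3, and its exponential series terminates after the \( t^{2} \) term. A sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm

# N is nilpotent of index 3: N^2 != 0 but N^3 = 0
N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
assert np.any(N @ N != 0) and np.all(N @ N @ N == 0)

t = 2.0
# the exponential series terminates: e^{Nt} = I + Nt + N^2 t^2 / 2!
finite_sum = np.eye(3) + N * t + (N @ N) * t**2 / 2.0
assert np.allclose(expm(N * t), finite_sum)
```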

Property 6

If \( A = \left[ {\begin{array}{*{20}c} a & { - b} \\ b & a \\ \end{array} } \right] \), then \( e^{At} = e^{a\,I\,t} \left[ {I\cos (bt )+ J\sin (bt )} \right] \) , where \( I = \left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right] \) and \( J = \left[ {\begin{array}{*{20}c} 0 & { - 1} \\ 1 & 0 \\ \end{array} } \right] \).

Proof

We have

$$ \begin{aligned} A & = \left[ {\begin{array}{*{20}c} a & { - b} \\ b & a \\ \end{array} } \right] = a\left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right] + b\left[ {\begin{array}{*{20}c} 0 & { - 1} \\ 1 & 0 \\ \end{array} } \right] = aI + bJ,\;{\text{where}}\;I = \left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right], \\ J & = \left[ {\begin{array}{*{20}c} 0 & { - 1} \\ 1 & 0 \\ \end{array} } \right]. \end{aligned} $$

Therefore

$$ \begin{aligned} e^{A\,t} & = e^{a\,I\,t + b\,J\,t} \\ & = e^{aIt} \cdot e^{bJt} = e^{aIt} \left[ {I + bJt + \frac{{ (bJt )^{2} }}{2 !} + \frac{{ (bJt )^{3} }}{3 !} + \cdots } \right] \\ & = e^{a\,I\,t} \left[ {I\left( {1 - \frac{{b^{2} t^{2} }}{2 !} + \frac{{b^{4} t^{4} }}{4 !} + \cdots } \right) + J\left( {bt - \frac{{b^{3} t^{3} }}{3 !} + \cdots } \right)} \right] \\ & = e^{aIt} \left[ {I\cos \left( {bt} \right) + J\sin \left( {bt} \right)} \right]\left[ \begin{array}{l} \because\, J^{2} = \left[ {\begin{array}{*{20}c} 0 & { - 1} \\ 1 & 0 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} 0 & { - 1} \\ 1 & 0 \\ \end{array} } \right] \hfill \\ = \left[ {\begin{array}{*{20}c} { - 1} & 0 \\ 0 & { - 1} \\ \end{array} } \right] = - I \hfill \\ J^{3} = J^{2} J = \left( { - I} \right)J = - J \hfill \\ J^{4} = J^{3} J = \left( { - J} \right)J = - J^{2} = I \hfill \\ {\text{etc}}. \ldots \hfill \\ \end{array} \right] \\ \end{aligned} $$
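Property 6 says that \( e^{At} \) for this matrix is a spiral: a scaling by \( e^{at} \) composed with a rotation through angle bt. A numerical sketch assuming SciPy, with illustrative values of a and b:

```python
import numpy as np
from scipy.linalg import expm

a, b, t = 0.5, 2.0, 0.9
A = np.array([[a, -b],
              [b,  a]])
I = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# e^{At} = e^{at} (I cos bt + J sin bt)
closed = np.exp(a * t) * (I * np.cos(b * t) + J * np.sin(b * t))
assert np.allclose(expm(A * t), closed)
```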

Property 7

\( e^{A{+}B} = e^{A} e^{B} \), provided AB = BA.

Proof

Suppose AB = BA. Then by Binomial theorem,

$$ \left( {A{+}B} \right)^{n} \, =\, \sum\limits_{k = 0}^{n} {\frac{n\, !}{{\left( {n - k} \right)\, !k\, !}}} A^{n - k} B^{k} = n\, !\sum\limits_{j + k = n} {\frac{{A^{j} B^{k} }}{j\, !k\, !}}. $$

Therefore

$$ \begin{aligned} e^{A{+}B} & = \sum\limits_{n = 0}^{\infty } {\frac{{\left( {A{+}B} \right)^{n} }}{n\, !}} = \sum\limits_{n = 0}^{\infty } {\sum\limits_{j + k = n} {\frac{{A^{j} B^{k} }}{j\, !k\, !}} } \\ & = \sum\limits_{j = 0}^{\infty } {\frac{{A^{j} }}{j\, !}} \sum\limits_{k = 0}^{\infty } {\frac{{B^{k} }}{k\, !}} \\ & = e^{A} e^{B}. \\ \end{aligned} $$

It is thus true that \( e^{A{+}B} = e^{A} e^{B} \) whenever AB = BA. But if \( AB \ne BA \), then in general \( e^{A{+}B} \ne e^{A} e^{B} \).
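Both sides of Property 7 are easy to probe numerically. The sketch below (assuming SciPy) uses a commuting pair, where B is a scalar multiple of A, and a non-commuting pair of nilpotent matrices for which the identity fails:

```python
import numpy as np
from scipy.linalg import expm

# commuting pair: B is a polynomial in A, so AB = BA
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = 2.0 * A
assert np.allclose(A @ B, B @ A)
assert np.allclose(expm(A + B), expm(A) @ expm(B))

# non-commuting pair: e^{C+D} != e^C e^D
C = np.array([[0.0, 1.0],
              [0.0, 0.0]])
D = np.array([[0.0, 0.0],
              [1.0, 0.0]])
assert not np.allclose(C @ D, D @ C)
assert not np.allclose(expm(C + D), expm(C) @ expm(D))
```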

Property 8

For any n × n matrix A, \( \frac{d}{{{\text{d}}t}}\left( {e^{At} } \right) = Ae^{At} \).

Proof

By definition

$$ \begin{aligned} e^{At} & = I + At + \frac{{A^{2} t^{2} }}{2\, !} + \frac{{A^{3} t^{3} }}{3\, !} + \cdots \\ \therefore \frac{d}{{{\text{d}}t}}\left( {e^{At} } \right) & = \frac{d}{{{\text{d}}t}}\left( {I + At + \frac{{A^{2} t^{2} }}{2\, !} + \frac{{A^{3} t^{3} }}{3\, !} + \cdots } \right) \\ & = \frac{d}{{{\text{d}}t}}\left( I \right) + \frac{d}{{{\text{d}}t}}\left( {At} \right) + \frac{d}{{{\text{d}}t}}\left( {\frac{{A^{2} t^{2} }}{2\, !}} \right) + \frac{d}{{{\text{d}}t}}\left( {\frac{{A^{3} t^{3} }}{3\, !}} \right) + \cdots \\ \end{aligned} $$

The term-by-term differentiation is valid because the series for \( e^{At} \) converges uniformly on every bounded interval of t.

$$ \begin{aligned} {\text{or}},\;\frac{d}{{{\text{d}}t}}\left( {e^{At} } \right) & = \varphi + A + A^{2} t + \frac{{A^{3} t^{2} }}{2\, !} + \frac{{A^{4} t^{3} }}{3\, !} + \cdots \\ & = A\left( {I + At + \frac{{A^{2} t^{2} }}{2\, !} + \frac{{A^{3} t^{3} }}{3\, !} + \cdots } \right) \\ & = Ae^{At}. \\ \end{aligned} $$

Therefore, \( \frac{d}{{{\text{d}}t}}\left( {e^{At} } \right) \) \( = Ae^{At} \).

We now establish the important result below.

Result

Multiplying both sides of \( \frac{d}{{{\text{d}}t}}\left( {e^{At} } \right) = Ae^{At} \) on the right by \( \Phi ( 0 ) \), we have

$$ \begin{aligned} &\frac{d}{{{\text{d}}t}}\left( {e^{At} } \right)\Phi ( 0 )= Ae^{At}\Phi ( 0 )\\ &\Rightarrow \;\frac{d}{{{\text{d}}t}}\left( {e^{At}\Phi ( 0 )} \right) = Ae^{At}\Phi ( 0 )\\ &\Rightarrow \;\frac{d}{{{\text{d}}t}}\left( {\Phi (t )\Phi ^{ - 1} ( 0 )\Phi ( 0 )} \right) = A\Phi (t )\Phi ^{ - 1} ( 0 )\Phi ( 0 ) {\text{ [since }}e^{At} =\Phi (t )\Phi ^{ - 1} ( 0 ) ]\\ &\Rightarrow \;\frac{d}{{{\text{d}}t}}\left( {\Phi (t )} \right) = {\dot{\Phi }} (t )= A\Phi (t ). \\ \end{aligned} $$

This shows that the fundamental matrix \( \Phi (t ) \) satisfies the matrix equation \( {\dot{\Phi }} (t )= A\Phi (t ) \) for all t; in particular, it holds at t = 0. Putting t = 0 in \( {\dot{\Phi }} (t )= A\Phi (t ) \), we get

$$ {\dot{\Phi }} ( 0 )= A\Phi ( 0 )\Rightarrow A = {\dot{\Phi }} ( 0 )\Phi ^{ - 1} ( 0 ). $$

Thus the coefficient matrix A can be recovered from the fundamental matrix \( \Phi (t ) \).

Example 2.12

Is \( \Phi (t )= \left( {\begin{array}{*{20}c} {e^{t} } & {e^{ - 2t} } \\ {2e^{t} } & {3e^{ - 2t} } \\ \end{array} } \right) \) a fundamental matrix for the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \)? If so, find the matrix A.

Solution

We know that if \( \Phi (t ) \) is a fundamental matrix, then \( \Phi ( 0 ) \) is invertible.

Here \( \Phi (t )= \left( {\begin{array}{*{20}c} {e^{t} } & {e^{ - 2t} } \\ {2e^{t} } & {3e^{ - 2t} } \\ \end{array} } \right) \). So, \( \Phi ( 0 )= \left( {\begin{array}{*{20}c} 1 & 1 \\ 2 & 3 \\ \end{array} } \right) \).

Since \( { \det }\left( {\Phi ( 0 )} \right) = 3 - 2 = 1 \ne 0 \), \( \Phi ( 0 ) \) is invertible. Hence the given matrix is a fundamental matrix for the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \). We shall now find the coefficient matrix A.

We have \( \Phi ( 0 )= \left( {\begin{array}{*{20}c} 1 & 1 \\ 2 & 3 \\ \end{array} } \right) \). So \( \Phi ^{ - 1} ( 0 )= \left( {\begin{array}{*{20}c} 3 & { - 1} \\ { - 2} & 1 \\ \end{array} } \right) \).

Also \( {\dot{\Phi }} (t )= \left( {\begin{array}{*{20}c} {e^{t} } & { - 2e^{ - 2t} } \\ {2e^{t} } & { - 6e^{ - 2t} } \\ \end{array} } \right) \), and \( {\dot{\Phi }} ( 0 )= \left( {\begin{array}{*{20}c} 1 & { - 2} \\ 2 & { - 6} \\ \end{array} } \right) \).

Therefore the matrix A is

$$ A = {\dot{\Phi }} ( 0 )\Phi ^{ - 1} ( 0 )= \left( {\begin{array}{*{20}c} 1 & { - 2} \\ 2 & { - 6} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 3 & { - 1} \\ { - 2} & 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 7 & { - 3} \\ {18} & { - 8} \\ \end{array} } \right). $$
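This example is easy to verify numerically. The Python sketch below (illustrative, not part of the original solution) recomputes \( A = {\dot{\Phi }} ( 0 )\Phi ^{ - 1} ( 0 ) \) and confirms the defining property \( {\dot{\Phi }} (t )= A\Phi (t ) \) at an arbitrary sample time; the function names are our own.

```python
import numpy as np

def Phi(t):
    # the given fundamental matrix
    return np.array([[np.exp(t),     np.exp(-2*t)],
                     [2*np.exp(t), 3*np.exp(-2*t)]])

def Phi_dot(t):
    # its derivative, differentiated entry by entry
    return np.array([[np.exp(t),    -2*np.exp(-2*t)],
                     [2*np.exp(t),  -6*np.exp(-2*t)]])

A = Phi_dot(0.0) @ np.linalg.inv(Phi(0.0))
assert np.allclose(A, [[7.0, -3.0], [18.0, -8.0]])
# Phi satisfies Phi' = A Phi for every t, e.g. t = 0.7:
assert np.allclose(Phi_dot(0.7), A @ Phi(0.7))
```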

Example 2.13

Find \( e^{At} \) for the matrix \( A = \left( {\begin{array}{*{20}c} 3 & 1 \\ 1 & 3 \\ \end{array} } \right) \). Hence find the solution of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \).

Solution

We see that the eigenvectors corresponding to the eigenvalues λ = 2, 4 of A are respectively \( {\mathop e\limits_{\sim}} = \) \( \left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right) \) and \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ 1 \\ \end{array} } \right) \), which are linearly independent. Therefore, two fundamental solutions of the system are \( \mathop x\limits_{\sim}{_{1}} (t )= \left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right)e^{2t} \) and \( \mathop x\limits_{\sim}{_{2}} (t )= \left( {\begin{array}{*{20}c} 1 \\ 1 \\ \end{array} } \right)e^{4t} \). So a fundamental matrix of the system is

$$ \Phi (t )= \left( {\begin{array}{*{20}c} {\mathop x\limits_{\sim}{_{1}} (t )} & {\mathop x\limits_{\sim}{_{2}} (t )} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {e^{2t} } & {e^{4\,t} } \\ { - e^{2t} } & {e^{4\,t} } \\ \end{array} } \right). $$

We find \( \Phi ( 0 )= \left( {\begin{array}{*{20}c} 1 & 1 \\ { - 1} & 1 \\ \end{array} } \right) \) and \( \Phi ^{ - 1} ( 0 )= \frac{1}{2}\left( {\begin{array}{*{20}c} 1 & { - 1} \\ 1 & 1 \\ \end{array} } \right) \). Therefore

$$ e^{At} =\Phi (t )\Phi ^{ - 1} ( 0 )= \frac{1}{2}\left( {\begin{array}{*{20}c} {e^{2t} } & {e^{4t} } \\ { - e^{2t} } & {e^{4t} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 & { - 1} \\ 1 & 1 \\ \end{array} } \right) = \frac{1}{2}\left( {\begin{array}{*{20}c} {e^{2t} + e^{4\,t} } & {e^{4t} - e^{2t} } \\ {e^{4t} - e^{2t} } & {e^{2t} + e^{4\,t} } \\ \end{array} } \right). $$

By the fundamental theorem, the solution of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) is

$$ {\mathop x\limits_{\sim}} (t )= e^{At} \mathop x\limits_{\sim}{_{0}} = \frac{1}{2}\left( {\begin{array}{*{20}c} {e^{2t} + e^{4\,t} } & {e^{4t} - e^{2t} } \\ {e^{4t} - e^{2t} } & {e^{2t} + e^{4\,t} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {c_{1} } \\ {c_{2} } \\ \end{array} } \right) $$

where \( \mathop x\limits_{\sim}{_{0}} = \left( {\begin{array}{*{20}c} {c_{1} } \\ {c_{2} } \\ \end{array} } \right) \) is an arbitrary constant column vector.
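The closed form of \( e^{At} \) obtained above can be checked against a direct series evaluation. The sketch below (illustrative, not part of the original solution; `expm_series` is our own helper) confirms the agreement at a sample time.

```python
import numpy as np

def expm_series(M, terms=40):
    # truncated Taylor series for e^{M}
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[3.0, 1.0], [1.0, 3.0]])
t = 0.3
closed = 0.5 * np.array(
    [[np.exp(2*t) + np.exp(4*t), np.exp(4*t) - np.exp(2*t)],
     [np.exp(4*t) - np.exp(2*t), np.exp(2*t) + np.exp(4*t)]])
assert np.allclose(closed, expm_series(A * t))
```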

2.4 Solution Procedure of Linear Systems

The general solution of a linear homogeneous system can be easily deduced from the fundamental theorem. According to this theorem the solution of \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) with \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \) is given as \( {\mathop x\limits_{\sim}} (t) = e^{At} \mathop x\limits_{\sim}{_{0}} \) and this solution is unique.

For a simple change of coordinates \( {\mathop x\limits_{\sim}} = P{\kern 1pt} {\mathop y\limits_{\sim}} \) where P is an invertible matrix, the equation \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \, \) is transformed as

$$ \begin{aligned} {\mathop {\dot{x}}\limits_{\sim }} & = A{\mathop x\limits_{\sim}} \\ \Rightarrow \;P{\mathop {\dot{y}}\limits_{\sim }} & = AP{\mathop y\limits_{\sim}} \\ \Rightarrow \;{\mathop {\dot{y}}\limits_{\sim }} & = P^{ - 1} AP{\kern 1pt} {\kern 1pt} {\mathop y\limits_{\sim}} \\ \Rightarrow \;{\mathop {\dot{y}}\limits_{\sim }} & = C{\mathop y\limits_{\sim}} , \,{\text{where}}\, C = P^{ - 1} AP. \\ \end{aligned} $$

The initial conditions \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \) become \( {\mathop y\limits_{\sim}} ( 0 )= P^{ - 1} {\mathop x\limits_{\sim}} ( 0 )= P^{ - 1} \mathop x\limits_{\sim}{_{0}} = \mathop y\limits_{\sim}{_{0}} \). So, the new system is\( {\mathop {\dot{y}}\limits_{\sim }} = C{\mathop y\limits_{\sim}} \) with \( {\mathop y\limits_{\sim}} ( 0 )= \mathop y\limits_{\sim}{_{0}} \), where \( C = P^{ - 1} AP \).

It has the solution

$$ {\mathop y\limits_{\sim}} (t )= e^{Ct} \mathop y\limits_{\sim}{_{0}}. $$

Hence the solution of the original system is

$$ {\mathop x\limits_{\sim}} (t )= P{\mathop y\limits_{\sim}} (t )\; = Pe^{Ct} \mathop y\limits_{\sim}{_{0}} = Pe^{Ct} P^{ - 1} \mathop x\limits_{\sim}{_{0}}. $$

We see that \( e^{At} = Pe^{{{{C{\kern 1pt} t}} }} P^{ - 1} \). The matrix P is chosen in such a way that matrix C takes a simple form. We now discuss three cases.

(i) Matrix A has distinct real eigenvalues

Let \( \lambda_{1} ,\lambda_{2} , \ldots ,\lambda_{n} \) be the distinct real eigenvalues of A with corresponding eigenvectors \( {\mathop \alpha \limits_{ \sim }}{_{1}} ,{\mathop \alpha \limits_{ \sim }}{_{2}} , \ldots ,{\mathop \alpha \limits_{ \sim }}{_{n}} \). Since eigenvectors corresponding to distinct eigenvalues are linearly independent, the matrix \( P = \left( {{\mathop \alpha \limits_{ \sim }}{_{1}} ,{\mathop \alpha \limits_{ \sim }}{_{2}} , \ldots ,{\mathop \alpha \limits_{ \sim }}{_{n}} } \right) \) is invertible, and \( C = P^{ - 1} AP = {\text{diag}}(\lambda_{1} ,\lambda_{2} , \ldots ,\lambda_{n} ) \) is a diagonal matrix. Hence the exponential function of C becomes

$$ e^{Ct} = {\text{diag}}(e^{{\lambda_{1} t}} ,e^{{\lambda_{2} t}} , \ldots ,e^{{\lambda_{n} t}} ). $$

Therefore we can write the solution of \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) with \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \) as \( {\mathop x\limits_{\sim}} (t )= e^{At} \mathop x\limits_{\sim}{_{0}} = Pe^{Ct} P^{ - 1} \mathop x\limits_{\sim}{_{0}} \). So

$$ {\mathop x\limits_{\sim}} (t) = P{\kern 1pt} {\text{diag}}(e^{{\lambda_{1} t}} ,e^{{\lambda_{2} t}} , \ldots ,e^{{\lambda_{n} t}} )P^{ - 1} \mathop x\limits_{\sim}{_{0}} $$

where \( \mathop x\limits_{\sim}{_{0}} = (c_{1} ,c_{2} , \ldots ,c_{n} )^{t} \) is an arbitrary constant vector.
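Case (i) translates directly into a few lines of NumPy (an illustrative sketch, not part of the original text): `np.linalg.eig` supplies the eigenvalues and an eigenvector matrix P, and the solution is \( P\,{\text{diag}}(e^{{\lambda_{j} t}} )P^{ - 1} \mathop x\limits_{\sim}{_{0}} \). The sample matrix and initial condition are our own.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0]])   # distinct eigenvalues 2 and 4
lam, P = np.linalg.eig(A)
x0 = np.array([1.0, -2.0])               # a sample initial condition

def x(t):
    # x(t) = P diag(e^{lambda_j t}) P^{-1} x0
    return P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P) @ x0

assert np.allclose(x(0.0), x0)           # initial condition recovered
h, t = 1e-6, 0.4
deriv = (x(t + h) - x(t - h)) / (2 * h)  # numerical derivative
assert np.allclose(deriv, A @ x(t), atol=1e-4)
```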

(ii) Matrix A has real repeated eigenvalues

In this case the following theorems (proofs are available in Hirsch and Smale [2]) are relevant for finding the general solution of a linear system when matrix A has repeated eigenvalues.

Theorem 2.2

Let the n × n matrix A have real eigenvalues λ 1, λ 2, …, λ n repeated according to their multiplicity. Then there exists a basis of generalized eigenvectors \( \{ {\mathop \alpha \limits_{ \sim }}{_{1}} ,{\mathop \alpha \limits_{ \sim }}{_{2}} , \ldots ,{\mathop \alpha \limits_{ \sim }}{_{n}} \} \) such that the matrix \( P = ({\mathop \alpha \limits_{ \sim }}{_{1}} ,{\mathop \alpha \limits_{ \sim }}{_{2}} , \ldots ,{\mathop \alpha \limits_{ \sim }}{_{n}} ) \) is invertible and A = S + N, where \( P^{ - 1} SP = {\text{diag}}(\lambda_{1} ,\lambda_{2} , \ldots ,\lambda_{n} ) \) and N(=A − S) is nilpotent of order k ≤ n, and S and N commute.

Using this theorem, the linear system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) subject to the initial conditions \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \) has the solution

$$ {\mathop x\limits_{\sim}} (t) = P{\kern 1pt} {\text{diag}}(e^{{\lambda_{j} t}} )P^{ - 1} \left[ {I + Nt + \cdots + \frac{{N^{k - 1} t^{k - 1} }}{(k - 1)!}} \right] \mathop x\limits_{\sim}{_{0}}. $$

(iii) Matrix A has complex eigenvalues

Theorem 2.3

Let A be a 2n × 2n matrix with complex eigenvalues a j  ± ib j , j = 1, 2, …, n. Then there exist generalized complex eigenvectors \( ({\mathop \alpha \limits_{ \sim }}{_{j}} \pm i{\mathop \beta \limits_{ \sim }}{_{j}} ),j = 1,2, \ldots ,n \) such that the matrix \( P = ({\mathop \beta \limits_{ \sim }}{_{1}} ,{\mathop \alpha \limits_{ \sim }}{_{1}} ,{\mathop \beta \limits_{ \sim }}{_{2}} ,{\mathop \alpha \limits_{ \sim }}{_{2}} , \ldots ,{\mathop \beta \limits_{ \sim }}{_{n}} ,{\mathop \alpha \limits_{ \sim }}{_{n}} ) \) is invertible and A = S + N, where \( P^{ - 1} SP = {\text{diag}}\left[ {\begin{array}{*{20}c} {a_{j} } & { - b_{j} } \\ {b_{j} } & {a_{j} } \\ \end{array} } \right] \), and N(=A − S) is a nilpotent matrix of order k ≤ 2n, and S and N commute.

Using this theorem, the linear system subject to the initial conditions \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \) has the solution

$$ {\mathop x\limits_{\sim}} (t) = P{\text{diag}}(e^{{a_{j} t}} )\left[ {\begin{array}{*{20}c} {\cos (b_{j} t)} & { - \sin (b_{j} t)} \\ {\sin (b_{j} t)} & {\cos (b_{j} t)} \\ \end{array} } \right]P^{ - 1} \left[ {I + Nt + \cdots + \frac{{N^{k - 1} t^{k - 1} }}{(k - 1)!}} \right] \mathop x\limits_{\sim}{_{0}}. $$

For a 2 × 2 matrix A with complex eigenvalues \( (\alpha \pm i\beta) \) the solution is given by

$$ {\mathop x\limits_{\sim}} (t )= Pe^{\alpha t} \left( {\begin{array}{*{20}c} {\cos \beta t} & { - \sin \beta t} \\ {\sin \beta t} & {\cos \beta t} \\ \end{array} } \right)P^{ - 1} \mathop x\limits_{\sim}{_{0}}. $$

Example 2.14

Solve the initial value problem

$$ \dot{x} = x + y,\dot{y} = 4x - 2y $$

with initial condition \( {\mathop x\limits_{\sim}} ( 0 )= \left( {\begin{array}{*{20}c} 2 \\ { - 3} \\ \end{array} } \right) \).

Solution

The characteristic equation of matrix A is

$$ \begin{aligned} & \left| {A - \lambda I} \right| = 0 \\ & \Rightarrow \;\left| {\begin{array}{*{20}c} {1 - \lambda } & 1 \\ 4 & { - 2 - \lambda } \\ \end{array} } \right| = 0 \\ & \Rightarrow \;\left( {\lambda - 1} \right)\left( {\lambda + 2} \right) - 4 = 0 \\ & \Rightarrow \;\lambda^{2} + \lambda - 6 = 0 \\ & \Rightarrow \;\lambda = 2, - 3 \\ \end{aligned} $$

So the eigenvalues of matrix A are 2, −3, which are real and distinct.

Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{1} = 2 \). Then

$$ \begin{aligned} & \left( {A - 2I} \right){\mathop e\limits_{\sim}} = 0 \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {1-2} & 1 \\ 4 & { - 2-2} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad - e_{1} + e_{2} = 0,4e_{1} - 4e_{2} = 0 \\ \end{aligned} $$

A nontrivial solution of this system is \( e_{1} = 1,e_{2} = 1 \).

$$ \therefore \;{\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ 1 \\ \end{array} } \right). $$

Again let \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} {g_{1} } \\ {g{}_{2}} \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{2} = - 3 \). Then

$$ \begin{aligned} & \left( {A + 3I} \right){\mathop g\limits_{\sim}} = 0 \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {1 + 3} & 1 \\ 4 & { - 2 + 3} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {g_{1} } \\ {g_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad 4g_{1} + g_{2} = 0,4g_{1} + g_{2} = 0 \\ \end{aligned} $$

A nontrivial solution of this system is \( g_{1} = 1,g_{2} = - 4 \).

$$ \therefore \;{\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ { - 4} \\ \end{array} } \right). $$

Let \( P = \left( {{\mathop e\limits_{\sim}} ,{\mathop g\limits_{\sim}} } \right) = \left( {\begin{array}{*{20}c} 1 & 1 \\ 1 & { - 4} \\ \end{array} } \right) \). Then \( P^{ - 1} = - \frac{1}{5}\left( {\begin{array}{*{20}c} { - 4} & { - 1} \\ { - 1} & 1 \\ \end{array} } \right) = \frac{1}{5}\left( {\begin{array}{*{20}c} 4 & 1 \\ 1 & { - 1} \\ \end{array} } \right) \)

$$ \begin{aligned} \therefore \;C = P^{ - 1} AP &= \frac{1}{5}\left( {\begin{array}{*{20}c} 4 & 1 \\ 1 & { - 1} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 & 1 \\ 4 & { - 2} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 & 1 \\ 1 & { - 4} \\ \end{array} } \right) \\ & = \frac{1}{5}\left( {\begin{array}{*{20}c} 8 & 2 \\ { - 3} & 3 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 & 1 \\ 1 & { - 4} \\ \end{array} } \right) = \frac{1}{5}\left( {\begin{array}{*{20}c} {10} & 0 \\ 0 & { - 15} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 2 & 0 \\ 0 & { - 3} \\ \end{array} } \right) \\ \end{aligned} $$
$$ \therefore e^{Ct} = \left( {\begin{array}{*{20}c} {e^{2t} } & 0 \\ 0 & {e^{ - 3t} } \\ \end{array} } \right) $$

Therefore, by the fundamental theorem, the solution of the system is

$$ \begin{aligned} {\mathop x\limits_{\sim}} (t )& = e^{At} \mathop x\limits_{\sim}{_{0}} = Pe^{Ct} P^{ - 1} \mathop x\limits_{\sim}{_{0}} \\ & = \left( {\begin{array}{*{20}c} 1 & 1 \\ 1 & { - 4} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e^{2t} } & 0 \\ 0 & {e^{ - 3t} } \\ \end{array} } \right)\frac{1}{5}\left( {\begin{array}{*{20}c} 4 & 1 \\ 1 & { - 1} \\ \end{array} } \right)\mathop x\limits_{\sim}{_{0}} \\ & = \frac{1}{5}\left( {\begin{array}{*{20}c} {4e^{2t} + e^{ - 3t} } & {e^{2t} - e^{ - 3t} } \\ {4e^{2t} - 4e^{ - 3t} } & {e^{2t} + 4e^{ - 3t} } \\ \end{array} } \right) \mathop x\limits_{\sim}{_{0}} \\ & \Rightarrow \;\left( {\begin{array}{*{20}c} {x (t )} \\ {y (t )} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {\frac{4}{5}e^{2t} + \frac{1}{5}e^{ - 3t} } & {\frac{1}{5}e^{2t} - \frac{1}{5}e^{ - 3t} } \\ {\frac{4}{5}e^{2t} - \frac{4}{5}e^{ - 3t} } & {\frac{1}{5}e^{2t} + \frac{4}{5}e^{ - 3t} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 2 \\ { - 3} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {e^{2t} + e^{ - 3t} } \\ {e^{2t} - 4e^{ - 3t} } \\ \end{array} } \right) \\ & \Rightarrow \;x (t )= e^{2t} + e^{ - 3t} ,y (t )= e^{2t} - 4e^{ - 3t}. \\ \end{aligned} $$
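The answer can be verified numerically: the closed-form solution should satisfy both the initial condition and the differential equation. The Python sketch below (illustrative, not part of the original solution) performs that check with a central-difference derivative.

```python
import numpy as np

A = np.array([[1.0, 1.0], [4.0, -2.0]])

def x(t):
    # solution found above: x = e^{2t} + e^{-3t}, y = e^{2t} - 4 e^{-3t}
    return np.array([np.exp(2*t) + np.exp(-3*t),
                     np.exp(2*t) - 4*np.exp(-3*t)])

assert np.allclose(x(0.0), [2.0, -3.0])          # initial condition
h, t = 1e-6, 0.3
deriv = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(deriv, A @ x(t), atol=1e-4)   # x' = A x
```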

Example 2.15

Solve the system

$$ \dot{x}_{1} = - x_{1} - 3x_{2} ,\dot{x}_{2} = 2x_{2}. $$

Also sketch the phase portrait.

Solution

The characteristic equation of matrix A is

$$ \begin{aligned} & \left| {A - \lambda I} \right| = 0 \\ & \Rightarrow \;\left| {\begin{array}{*{20}c} { - 1 - \lambda } & { - 3} \\ 0 & {2 - \lambda } \\ \end{array} } \right| = 0 \\ & \Rightarrow \;\left( {\lambda + 1} \right)\left( {\lambda - 2} \right) = 0 \\ & \Rightarrow \;\lambda = - 1,2 \\ \end{aligned} $$

The eigenvalues of matrix A are −1, 2, which are real and distinct.

Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{1} = - 1 \). Then

$$ \begin{aligned} & \left( {A + I} \right){\mathop e\limits_{\sim}} = 0 \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} { - 1 + 1} & { - 3} \\ 0 & {2 + 1} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad - 3e_{2} = 0,3e_{2} = 0 \\ & \Rightarrow \quad e_{2} = 0 \, {\text{and}} \, e_{1} \,{\text{is\,arbitrary}}. \\ \end{aligned} $$

Choose \( e_{1} \) = 1 so that \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \).

Again, let \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} {g_{1} } \\ {g{}_{2}} \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{2} = 2 \). Then

$$ \begin{aligned} & \left( {A - 2I} \right){\mathop g\limits_{\sim}} = {\mathop 0\limits_{\sim}} \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} { - 1-2} & { - 3} \\ 0 & {2-2} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {g_{1} } \\ {g_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad g_{1} + g_{2} = 0 \\ \end{aligned} $$

Choose \( g_{1} = 1,g_{2} = - 1 \). Then \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right) \).

Let \( P = \left( {{\mathop e\limits_{\sim}} ,{\mathop g\limits_{\sim}} } \right) = \left( {\begin{array}{*{20}c} 1 & 1 \\ 0 & { - 1} \\ \end{array} } \right) \). Then \( P^{ - 1} = \left( {\begin{array}{*{20}c} 1 & 1 \\ 0 & { - 1} \\ \end{array} } \right) \)

Therefore

$$ \begin{aligned} C & = P^{ - 1} AP = \left( {\begin{array}{*{20}c} 1 & 1 \\ 0 & { - 1} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} { - 1} & { - 3} \\ 0 & 2 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 & 1 \\ 0 & { - 1} \\ \end{array} } \right) \\ & = \left( {\begin{array}{*{20}c} { - 1} & { - 1} \\ 0 & { - 2} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 & 1 \\ 0 & { - 1} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} { - 1} & 0 \\ 0 & 2 \\ \end{array} } \right) \\ \end{aligned} $$

and so \( e^{Ct} = \left( {\begin{array}{*{20}c} {e^{ - t} } & 0 \\ 0 & {e^{2t} } \\ \end{array} } \right) \).

Therefore, by the fundamental theorem, the solution of the system is

$$ \begin{aligned} {\mathop x\limits_{\sim}} (t )& = e^{At} \mathop x\limits_{\sim}{_{0}} = Pe^{Ct} P^{ - 1} \mathop x\limits_{\sim}{_{0}} \\ & = \left( {\begin{array}{*{20}c} 1 & 1 \\ 0 & { - 1} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e^{ - t} } & 0 \\ 0 & {e^{2t} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 & 1 \\ 0 & { - 1} \\ \end{array} } \right) \mathop x\limits_{\sim}{_{0}} \\ & = \left( {\begin{array}{*{20}c} {e^{ - t} } & {e^{ - t} - e^{2t} } \\ 0 & {e^{2t} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {c_{1} } \\ {c_{2} } \\ \end{array} } \right) \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {x_{1} (t )} \\ {x_{2} (t )} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {e^{ - t} } & {e^{ - t} - e^{2t} } \\ 0 & {e^{2t} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {c_{1} } \\ {c_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {\left( {c_{1} + c_{2} } \right)e^{ - t} - c_{2} e^{2t} } \\ {c_{2} e^{2t} } \\ \end{array} } \right) \\ & \Rightarrow \quad x_{1} (t )= c_{1} e^{ - t} + c_{2} \left( {e^{ - t} - e^{2t} } \right),x_{2} (t )= c_{2} e^{2t} \\ \end{aligned} $$

where \( c_{1} ,c_{2} \) are arbitrary constants. The phase diagram is presented in Fig. 2.1.

Fig. 2.1
figure 1

A typical phase portrait of the system

Example 2.16

Solve the following system using the fundamental theorem.

$$ \begin{aligned} \dot{x} & = 5x + 4y \\ \dot{y} & = - x + y \\ \end{aligned} $$

Solution

The characteristic equation of matrix A is

$$ \begin{aligned} & \left| {A - \lambda I} \right| = 0 \\ & \Rightarrow \quad \left| {\begin{array}{*{20}c} {5 - \lambda } & 4 \\ { - 1} & {1 - \lambda } \\ \end{array} } \right| = 0 \\ & \Rightarrow \quad \left( {\lambda - 1} \right)\left( {\lambda - 5} \right) + 4 = 0 \\ & \Rightarrow \quad \lambda^{2} - 6\lambda + 9 = 0 \\ & \Rightarrow \quad \lambda = 3,3. \\ \end{aligned} $$

This shows that matrix A has an eigenvalue λ = 3 of multiplicity 2. Then \( S = \left[ {\begin{array}{*{20}c} 3 & 0 \\ 0 & 3 \\ \end{array} } \right] \) and \( N = A - S = \left[ {\begin{array}{*{20}c} 2 & 4 \\ { - 1} & { - 2} \\ \end{array} } \right] \). Clearly, matrix N is a nilpotent matrix of order 2. So, the general solution of the system is given by

$$ \begin{aligned} {\mathop x\limits_{\sim}} (t) = e^{At} \mathop x\limits_{\sim}{_{0}} = e^{(S + N)t} \mathop x\limits_{\sim}{_{0}} = e^{St} e^{Nt} \mathop x\limits_{\sim}{_{0}} & = \left[ {\begin{array}{*{20}c} {e^{3t} } & 0 \\ 0 & {e^{3t} } \\ \end{array} } \right]\left[ {I + Nt} \right] \mathop x\limits_{\sim}{_{0}} \\ & = \left[ {\begin{array}{*{20}c} {e^{3t} } & 0 \\ 0 & {e^{3t} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {1 + 2t} & {4t} \\ { - t} & {1-2t} \\ \end{array} } \right] \mathop x\limits_{\sim}{_{0}}. \\ \end{aligned} $$
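The S + N decomposition used here is easy to verify in code. The sketch below (illustrative, not part of the original solution) confirms that N is nilpotent of order 2, so \( e^{At} = e^{3t} (I + Nt) \), and checks that the resulting trajectory satisfies the system.

```python
import numpy as np

A = np.array([[5.0, 4.0], [-1.0, 1.0]])
S = 3.0 * np.eye(2)
N = A - S
assert np.allclose(N @ N, 0.0)       # N is nilpotent of order 2

def x(t, x0):
    # e^{At} = e^{3t}(I + Nt), since S = 3I commutes with N and N^2 = 0
    return np.exp(3*t) * (np.eye(2) + N * t) @ x0

x0 = np.array([1.0, 2.0])            # a sample initial condition
h, t = 1e-6, 0.5
deriv = (x(t + h, x0) - x(t - h, x0)) / (2 * h)
assert np.allclose(deriv, A @ x(t, x0), atol=1e-3)
```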

Example 2.17

Find the general solution of the system of linear equations

$$ \begin{aligned} \dot{x} = 4x - 2y \hfill \\ \dot{y} = 5x + 2y \hfill \\ \end{aligned} $$

Solution

The characteristic equation of matrix A is

$$ \begin{aligned} & \left| {A - \lambda I} \right| = 0 \\ & \Rightarrow \quad \left| {\begin{array}{*{20}c} {4 - \lambda } & { - 2} \\ 5 & {2 - \lambda } \\ \end{array} } \right| = 0 \\ & \Rightarrow \quad \left( {\lambda - 4} \right)\left( {\lambda - 2} \right) + 10 = 0 \\ & \Rightarrow \quad \lambda^{2} - 6\lambda + 18 = 0 \\ & \Rightarrow \quad \lambda = \frac{{6 \pm \sqrt {36-72} }}{2} = 3 \pm 3i. \\ \end{aligned} $$

So matrix A has a pair of complex conjugate eigenvalues 3 ± 3i.

Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{1} = 3 + 3i \). Then

$$ \begin{aligned} & \left( {A - \left( {3 + 3i} \right)I} \right){\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}} \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {4 - \left( {3 + 3i} \right)} & { - 2} \\ 5 & {2 - \left( {3 + 3i} \right)} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} {1-3i} & { - 2} \\ 5 & { - 1-3i} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad \left( {1-3i} \right)e_{1} - 2e_{2} = 0,5e_{1} + \left( {1 + 3i} \right)e_{2} = 0 \\ \end{aligned} $$

A nontrivial solution of this system is \( e_{1} = 2,\;e_{2} = 1-3i \).

$$ \therefore \;{\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 2 \\ {1-3i} \\ \end{array} } \right). $$

Similarly, the eigenvector corresponding to the eigenvalue \( \lambda_{2} = 3-3i\; \) is \( \quad {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 2 \\ {1 + 3i} \\ \end{array} } \right) \).

Following Theorem 2.3, write \( {\mathop e\limits_{\sim}} = {\mathop \alpha \limits_{ \sim }} + i{\mathop \beta \limits_{ \sim }} \) with \( {\mathop \alpha \limits_{ \sim }} = \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right) \), \( {\mathop \beta \limits_{ \sim }} = \left( {\begin{array}{*{20}c} 0 \\ { - 3} \\ \end{array} } \right) \), and let \( P = \left( {{\mathop \beta \limits_{ \sim }} ,{\mathop \alpha \limits_{ \sim }} } \right) = \left( {\begin{array}{*{20}c} 0 & 2 \\ { - 3} & 1 \\ \end{array} } \right) \). Then \( P^{-1} = \frac{1}{6} \left( {\begin{array}{*{20}c} 1 & { - 2} \\ 3 & 0 \\ \end{array} } \right) \).

Let \( C = P^{ - 1} AP \). Then \( C = P^{ - 1} AP = \frac{1}{6}\left( {\begin{array}{*{20}c} 1 & { - 2} \\ 3 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 4 & { - 2} \\ 5 & 2 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 0 & 2 \\ { - 3} & 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 3 & { - 3} \\ 3 & 3 \\ \end{array} } \right). \)

So,

$$ e^{Ct} = e^{3t} \left( {\begin{array}{*{20}c} {\cos 3t} & { - \sin 3t} \\ {\sin 3t} & {\cos 3t} \\ \end{array} } \right). $$

Therefore, the solution of the system is

$$ \begin{aligned} {\mathop x\limits_{\sim}} (t )& = e^{At} \mathop x\limits_{\sim}{_{0}} = Pe^{Ct} P^{ - 1} \mathop x\limits_{\sim}{_{0}} \\ & = \frac{1}{6}e^{3t} \left( {\begin{array}{*{20}c} 0 & 2 \\{ - 3} & 1 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\cos 3t} & { - \sin 3t} \\ {\sin 3t} & {\cos 3t} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 & { - 2} \\ 3 & 0 \\ \end{array} } \right) \mathop x\limits_{\sim}{_{0}}. \\ \end{aligned} $$
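The key step here, \( e^{Ct} = e^{3t} R(3t) \) with R a rotation matrix, can be checked numerically. The sketch below (illustrative, not part of the original solution; `expm_series` is our own helper) compares the closed form against a truncated series evaluation of \( e^{Ct} \).

```python
import numpy as np

def expm_series(M, terms=60):
    # truncated Taylor series for e^{M}
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

C = np.array([[3.0, -3.0], [3.0, 3.0]])
t = 0.4
closed = np.exp(3*t) * np.array([[np.cos(3*t), -np.sin(3*t)],
                                 [np.sin(3*t),  np.cos(3*t)]])
assert np.allclose(closed, expm_series(C * t))
```

This works because C = aI + bJ with a = b = 3 and \( J = \left( {\begin{array}{*{20}c} 0 & { - 1} \\ 1 & 0 \\ \end{array} } \right) \), and J generates rotations.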

Example 2.18

Solve the initial value problem \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), with \( {\mathop x\limits_{\sim}} ( 0 )= \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \), where \( A = \left( {\begin{array}{*{20}c} { - 2} & { - 1} \\ 1 & { - 2} \\ \end{array} } \right),{\mathop x\limits_{\sim}} = \left( {\begin{array}{*{20}c} x \\ y \\ \end{array} } \right) \). Also sketch the solution curve in the phase plane \( {\mathbf{\mathbb{R}}}^{2} \).

Solution

The characteristic equation of matrix A is

$$ \begin{aligned} & \left| {A - \lambda I} \right| = 0 \\ & \Rightarrow \quad \left| {\begin{array}{*{20}c} { - 2 - \lambda } & { - 1} \\ 1 & { - 2 - \lambda } \\ \end{array} } \right| = 0 \\ & \Rightarrow \quad \left( {\lambda + 2} \right)^{2} + 1 = 0 \\ & \Rightarrow \quad \lambda^{2} + 4\lambda + 5 = 0 \\ & \Rightarrow \quad \lambda = \frac{{ - 4 \pm \sqrt {16 - 20} }}{2} = - 2 \pm i. \\ \end{aligned} $$

So matrix A has a pair of complex conjugate eigenvalues −2 ± i.

Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{1} = - 2 + i \). Then

$$ \begin{aligned} & \left( {A - \left( { - 2 + i} \right)I} \right){\mathop e\limits_{\sim}} = 0 \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} { - 2 - \left( { - 2 + i} \right)} & { - 1} \\ 1 & { - 2 - \left( { - 2 + i} \right)} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad \left( {\begin{array}{*{20}c} { - i} & { - 1} \\ 1 & { - i} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right) \\ & \Rightarrow \quad - ie_{1} - e_{2} = 0,e_{1} - ie_{2} = 0 \\ \end{aligned} $$

A nontrivial solution of this system is \( e_{1} = 1,e_{2} = - i \).

\( \therefore\,{\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ { - i} \\ \end{array} } \right) \). Similarly, the eigenvector corresponding to the eigenvalue \( \lambda_{2} = - 2 - i \) is \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ i \\ \end{array} } \right) \). Writing \( {\mathop e\limits_{\sim}} = {\mathop \alpha \limits_{ \sim }} + i{\mathop \beta \limits_{ \sim }} \) with \( {\mathop \alpha \limits_{ \sim }} = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \) and \( {\mathop \beta \limits_{ \sim }} = \left( {\begin{array}{*{20}c} 0 \\ { - 1} \\ \end{array} } \right) \), we take \( P = \left( {{\mathop \beta \limits_{ \sim }} ,{\mathop \alpha \limits_{ \sim }} } \right) = \left( {\begin{array}{*{20}c} 0 & 1 \\ { - 1} & 0 \\ \end{array} } \right) \). Then \( P^{ - 1} = \left( {\begin{array}{*{20}c} 0 & { - 1} \\ 1 & 0 \\ \end{array} } \right) \) and

$$ C = P^{ - 1} AP = \left( {\begin{array}{*{20}c} 0 & { - 1} \\ 1 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} { - 2} & { - 1} \\ 1 & { - 2} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 0 & 1 \\ { - 1} & 0 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} { - 2} & { - 1} \\ 1 & { - 2} \\ \end{array} } \right). $$

So,

$$ e^{Ct} = e^{ - 2t} \left( {\begin{array}{*{20}c} {\cos t} & { - \sin t} \\ {\sin t} & {\cos t} \\ \end{array} } \right). $$

Hence the solution of the system is

$$ \begin{aligned} {\mathop x\limits_{\sim}} (t )& = e^{At} \mathop x\limits_{\sim}{_{0}} = Pe^{Ct} P^{ - 1} \mathop x\limits_{\sim}{_{0}} \\ & = e^{ - 2t} \left( {\begin{array}{*{20}c} 0 & 1 \\ { - 1} & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\cos t} & { - \sin t} \\ {\sin t} & {\cos t} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 0 & { - 1} \\ 1 & 0 \\ \end{array} } \right) \mathop x\limits_{\sim}{_{0}}. \\ & = e^{ - 2t} \left( {\begin{array}{*{20}c} {\sin t} & {\cos t} \\ { - \cos t} & {\sin t} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 0 & { - 1} \\ 1 & 0 \\ \end{array} } \right) \mathop x\limits_{\sim}{_{0}} \\ & = e^{ - 2t} \left( {\begin{array}{*{20}c} {\cos t} & { - \sin t} \\ {\sin t} & {\cos t} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \\ & = e^{ - 2t} \left( {\begin{array}{*{20}c} {\cos t} \\ {\sin t} \\ \end{array} } \right) \\ \therefore\, x (t )& = e^{ - 2t} \cos t,y (t )= e^{ - 2t} \sin t. \\ \end{aligned} $$
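A quick numerical check of this spiral solution (illustrative, not part of the original solution) confirms the initial condition and the differential equation:

```python
import numpy as np

A = np.array([[-2.0, -1.0], [1.0, -2.0]])

def x(t):
    # solution found above: x = e^{-2t} cos t, y = e^{-2t} sin t
    return np.exp(-2*t) * np.array([np.cos(t), np.sin(t)])

assert np.allclose(x(0.0), [1.0, 0.0])           # initial condition
h, t = 1e-6, 0.8
deriv = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(deriv, A @ x(t), atol=1e-4)   # x' = A x
```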

Phase Portrait

The phase portrait of the solution curve is shown in Fig. 2.2.

Fig. 2.2
figure 2

Phase portrait of the solution curve

Example 2.19

Solve the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) with \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \), where \( A = \left( {\begin{array}{*{20}c} 2 & 1 & 3 & { - 1} \\ 0 & 2 & 2 & { - 1} \\ 0 & 0 & 2 & { - 5} \\ 0 & 0 & 0 & 2 \\ \end{array} } \right). \)

Solution

Clearly, matrix A has the eigenvalue λ = 2 with multiplicity 4. Therefore,

$$ S = \left( {\begin{array}{*{20}c} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \\ \end{array} } \right)\;{\text{and}}\;N = A - S = \left( {\begin{array}{*{20}c} 0 & 1 & 3 & { - 1} \\ 0 & 0 & 2 & { - 1} \\ 0 & 0 & 0 & { - 5} \\ 0 & 0 & 0 & 0 \\ \end{array} } \right). $$

It is easy to check that the matrix N is nilpotent of order 4. Therefore, the solution of the system is

$$ {\mathop x\limits_{\sim}} (t) = e^{St} \left( {I + Nt + \frac{{N^{2} t^{2} }}{2!} + \frac{{N^{3} t^{3} }}{3!}} \right) \mathop x\limits_{\sim}{_{0}}. $$
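The claimed nilpotency of N, and the truncated-exponential solution it yields, can be verified numerically. The sketch below (illustrative, not part of the original solution; `expm_series` is our own helper) checks \( N^{3} \ne 0 \), \( N^{4} = 0 \), and compares the finite sum with a series evaluation of \( e^{At} \).

```python
import numpy as np

A = np.array([[2.0, 1.0, 3.0, -1.0],
              [0.0, 2.0, 2.0, -1.0],
              [0.0, 0.0, 2.0, -5.0],
              [0.0, 0.0, 0.0, 2.0]])
N = A - 2.0 * np.eye(4)
N2, N3 = N @ N, N @ N @ N
assert not np.allclose(N3, 0.0)                        # N^3 != 0 ...
assert np.allclose(np.linalg.matrix_power(N, 4), 0.0)  # ... but N^4 = 0

def expm_series(M, terms=60):
    # truncated Taylor series for e^{M}
    out, term = np.eye(4), np.eye(4)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

t = 0.2
# e^{St} = e^{2t} I, so the solution operator is the finite sum below
closed = np.exp(2*t) * (np.eye(4) + N*t + N2 * t**2/2 + N3 * t**3/6)
assert np.allclose(closed, expm_series(A * t))
```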

2.5 Nonhomogeneous Linear Systems

The most general form of a nonhomogeneous linear system is given as

$$ {\mathop {\dot{x}}\limits_{\sim }} (t) = A(t){\mathop x\limits_{\sim}} (t) + {\mathop b\limits_{\sim}} (t) $$
(2.16)

where A(t) is an n × n matrix, in general time dependent, and \( {\mathop b\limits_{\sim}} (t) \) is a time-dependent column vector. Here we take the coefficient matrix to be time independent, that is, A(t) ≡ A. Then (2.16) becomes

$$ {\mathop {\dot{x}}\limits_{\sim }} (t) = A{\mathop x\limits_{\sim}} (t) + {\mathop b\limits_{\sim}} (t) $$
(2.17)

The corresponding homogeneous system is given as

$$ {\mathop {\dot{x}}\limits_{\sim }} (t) = A{\mathop x\limits_{\sim}} (t) $$
(2.18)

We have described solution techniques for the homogeneous system (2.18). We now find the solution of the nonhomogeneous system (2.17), subject to the initial conditions \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \).

As discussed earlier, if Φ(t) is a fundamental matrix of (2.18), then the solution of (2.18) with \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0} }\) is given by

$$ {\mathop x\limits_{\sim}} (t) = \Phi (t)\Phi^{ - 1} (0) \mathop x\limits_{\sim}{_{0}} $$

We assume that

$$ {\mathop x\limits_{\sim}} (t) = \Phi (t)\Phi^{ - 1} (0) \mathop x\limits_{\sim}{_{0}} + \Phi (t)\Phi^{ - 1} (0){\mathop u\limits_{\sim}} (t) $$
(2.19)

be the solution of the nonhomogeneous linear system (2.17). Setting t = 0 in (2.19) and using \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \) gives \( {\mathop u\limits_{\sim}} (0) = 0 \). Differentiating (2.19) with respect to t, we get

$$ {\mathop {\dot{x}}\limits_{\sim }} (t) = \dot{\Phi }(t)\Phi^{ - 1} (0) \mathop x\limits_{\sim}{_{0}} + \dot{\Phi }(t)\Phi^{ - 1} (0){\mathop u\limits_{\sim}} (t) + \Phi (t)\Phi^{ - 1} (0)\dot{{\mathop u\limits_{\sim}} }(t) $$
(2.20)

Substituting (2.20) and (2.19) into (2.17),

$$ \begin{aligned} \dot{\Phi }(t)\Phi^{ - 1} (0) \mathop x\limits_{\sim}{_{0}} & + \dot{\Phi }(t)\Phi^{ - 1} (0){\mathop u\limits_{\sim}} (t) + \Phi (t)\Phi^{ - 1} (0)\dot{{\mathop u\limits_{\sim}} }(t) \\ & = A\Phi (t)\Phi^{ - 1} (0) \mathop x\limits_{\sim}{_{0}} + A\Phi (t)\Phi^{ - 1} (0){\mathop u\limits_{\sim}} (t) + {\mathop b\limits_{\sim}} (t) \\ \end{aligned} $$
(2.21)

Since Φ(t) is a fundamental matrix solution of (2.18),

$$ {\dot{\Phi }}(t) = A\Phi (t). $$

Using this in (2.21), we get

$$ \begin{aligned} &\Phi (t)\Phi ^{ - 1} (0)\dot{{\mathop u\limits_{\sim}} }(t) = {\mathop b\limits_{\sim}} (t) \\ & \Rightarrow \;\dot{{\mathop u\limits_{\sim}} }(t) =\Phi (0)\Phi ^{ - 1} (t){\mathop b\limits_{\sim}} (t) \\ \end{aligned} $$

Integrating with respect to t and using \( {\mathop u\limits_{\sim}} (0) = 0 \), we get

$$ {\mathop u\limits_{\sim}} (t) = \int\limits_{0}^{t} {\Phi (0)\Phi ^{ - 1} (\alpha ){\mathop b\limits_{\sim}} (\alpha ){\text{d}}\alpha }. $$

Hence the general solution of the nonhomogeneous system (2.17) subject to \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \) is given by

$$ {\mathop x\limits_{\sim}} (t) =\Phi (t)\Phi ^{ - 1} (0) \mathop x\limits_{\sim}{_{0}} +\Phi (t)\int\limits_{0}^{t} {\Phi ^{ - 1} (\alpha ){\mathop b\limits_{\sim}} (\alpha ){\text{d}}\alpha } $$
(2.22)
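The variation-of-parameters formula (2.22) can be checked numerically. The following Python sketch uses an illustrative 2 × 2 system (the matrix A, the forcing b, and all function names are hypothetical choices, not from the text), takes \( \Phi (t) = e^{At} \) computed by a truncated Taylor series (so that \( \Phi^{-1}(0) = I \) and \( \Phi^{-1}(\alpha) = e^{-A\alpha} \)), approximates the integral in (2.22) by the trapezoid rule, and cross-checks the result against a direct explicit-Euler integration of \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} + {\mathop b\limits_{\sim}} (t) \).

```python
# Illustrative 2x2 system x' = A x + b(t); A and b are arbitrary test choices.
A = [[0.0, 1.0], [-2.0, -3.0]]          # eigenvalues -1 and -2
def b(t):
    return [0.0, 1.0]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(P, v):
    return [sum(P[i][k] * v[k] for k in range(2)) for i in range(2)]

def expm(M, t, terms=40):
    # e^{Mt} by truncated Taylor series; adequate for moderate |t|*||M||.
    R = [[1.0, 0.0], [0.0, 1.0]]        # running sum, starts at I
    T = [[1.0, 0.0], [0.0, 1.0]]        # current term (Mt)^k / k!
    for k in range(1, terms):
        T = matmul(T, [[M[i][j] * t / k for j in range(2)] for i in range(2)])
        R = [[R[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return R

def solution(t, x0, n=2000):
    # Formula (2.22) with Phi(t) = e^{At}: Phi^{-1}(0) = I, Phi^{-1}(a) = e^{-Aa}.
    # The integral over [0, t] is approximated by the trapezoid rule.
    h = t / n
    acc = [0.0, 0.0]
    for k in range(n + 1):
        a = k * h
        w = 0.5 if k in (0, n) else 1.0
        v = matvec(expm(A, -a), b(a))
        acc = [acc[i] + w * h * v[i] for i in range(2)]
    Phi = expm(A, t)
    return [matvec(Phi, x0)[i] + matvec(Phi, acc)[i] for i in range(2)]

def euler(t, x0, n=100000):
    # Direct explicit-Euler integration of the ODE, for cross-checking.
    h = t / n
    x = list(x0)
    for k in range(n):
        f = matvec(A, x)
        bk = b(k * h)
        x = [x[i] + h * (f[i] + bk[i]) for i in range(2)]
    return x

xa = solution(1.0, [1.0, 0.0])
xb = euler(1.0, [1.0, 0.0])
assert all(abs(xa[i] - xb[i]) < 1e-3 for i in range(2))
```

Because \( \Phi (t) = e^{At} \) gives \( \Phi (0) = I \), the factor \( \Phi^{-1}(0) \) drops out here; with any other fundamental matrix it must be kept.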

Example 2.20

Find the solution of the nonhomogeneous system \( \dot{x} = x + y + t,\;\dot{y} = - y + 1 \) with the initial conditions x(0) = 1, y(0) = 0.

Solution

In matrix notation, the system takes the form \( {\mathop {\dot{x}}\limits_{\sim }} (t) = A{\mathop x\limits_{\sim}} (t) + {\mathop b\limits_{\sim}} (t) \), where \( A = \left( {\begin{array}{*{20}c} 1 & 1 \\ 0 & { - 1} \\ \end{array} } \right) \) and \( {\mathop b\limits_{\sim}} (t) = \left( {\begin{array}{*{20}c} t \\ 1 \\ \end{array} } \right). \)

The initial conditions become \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \), where \( \mathop x\limits_{\sim}{_{0}} = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \). Matrix A has eigenvalues \( \lambda_{1} = 1 \), \( \lambda_{2} = - 1 \) with corresponding eigenvectors \( \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \) and \( \left( {\begin{array}{*{20}c} 1 \\ { - 2} \\ \end{array} } \right) \). Therefore

$$ \Phi (t) = \left( {\begin{array}{*{20}c} {e^{t} } & {e^{ - t} } \\ 0 & { - 2e^{ - t} } \\ \end{array} } \right). $$

This gives

$$ \Phi ^{ - 1} (t) = \frac{1}{2}\left( {\begin{array}{*{20}c} {2e^{ - t} } & {e^{ - t} } \\ 0 & { - e^{t} } \\ \end{array} } \right),\;\Phi (0) = \left( {\begin{array}{*{20}c} 1 & 1 \\ 0 & { - 2} \\ \end{array} } \right) {\text{and}}\;\Phi ^{ - 1} (0) = \frac{1}{2}\left( {\begin{array}{*{20}c} 2 & 1 \\ 0 & { - 1} \\ \end{array} } \right). $$

Therefore the required solution is

$$ \begin{aligned} {\mathop x\limits_{\sim}} (t) & =\Phi (t)\Phi ^{ - 1} (0) \mathop x\limits_{\sim}{_{0}} +\Phi (t)\int\limits_{0}^{t} {\Phi ^{ - 1} (\alpha ){\mathop b\limits_{\sim}} (\alpha ) \text{d} \alpha } \\ & = \frac{1}{2}\Phi (t)\left\{ {\left( {\begin{array}{*{20}c} 2 & 1 \\ 0 & { - 1} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) + \int\limits_{0}^{t} {\left( {\begin{array}{*{20}c} {2e^{ - \alpha } } & {e^{ - \alpha } } \\ 0 & { - e^{\alpha } } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} \alpha \\ 1 \\ \end{array} } \right){\text{d}}\alpha } } \right\} \\ & = \frac{1}{2}\Phi (t)\left\{ {\left( {\begin{array}{*{20}c} 2 \\ 0 \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} {3 - (2t + 3)e^{ - t} } \\ {1 - e^{t} } \\ \end{array} } \right)} \right\} \\ & = \frac{1}{2}\left( {\begin{array}{*{20}c} {e^{t} } & {e^{ - t} } \\ 0 & { - 2e^{ - t} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {5 - (2t + 3)e^{ - t} } \\ {1 - e^{t} } \\ \end{array} } \right) = \frac{1}{2}\left( {\begin{array}{*{20}c} {5e^{t} - 2t - 4 + e^{ - t} } \\ {2 - 2e^{ - t} } \\ \end{array} } \right), \\ \end{aligned} $$

that is, \( x(t) = \frac{1}{2}\left( {5e^{t} - 2t - 4 + e^{ - t} } \right) \) and \( y(t) = 1 - e^{ - t} \).
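As a sanity check (a Python sketch, not part of the original solution), the closed-form components \( x(t) = \frac{1}{2}(5e^{t} - 2t - 4 + e^{-t}) \) and \( y(t) = 1 - e^{-t} \) (the latter follows directly from integrating \( \dot{y} = -y + 1 \) with \( y(0) = 0 \)) can be tested against the system and the initial conditions, with derivatives approximated by central differences:

```python
import math

# Closed-form solution of Example 2.20: the system is
# x' = x + y + t, y' = -y + 1, with x(0) = 1, y(0) = 0.
def x(t):
    return 0.5 * (5.0 * math.exp(t) - 2.0 * t - 4.0 + math.exp(-t))

def y(t):
    return 1.0 - math.exp(-t)

# Initial conditions.
assert abs(x(0.0) - 1.0) < 1e-12
assert abs(y(0.0)) < 1e-12

# The differential equations, checked at a few sample times using
# central-difference approximations to the derivatives.
h = 1e-6
for t in (0.1, 0.5, 1.0, 2.0):
    dx = (x(t + h) - x(t - h)) / (2.0 * h)
    dy = (y(t + h) - y(t - h)) / (2.0 * h)
    assert abs(dx - (x(t) + y(t) + t)) < 1e-6
    assert abs(dy - (-y(t) + 1.0)) < 1e-6
```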

Example 2.21

Prove that the flow evolution operator \( \phi_{t} ({\mathop x\limits_{\sim}} ) = e^{At} {\mathop x\limits_{\sim}} \) satisfies the following properties:

  1. (i)

    \( \phi_{0} ({\mathop x\limits_{\sim}} ) = {\mathop x\limits_{\sim}} \),

  2. (ii)

    \( \phi_{ - t} \circ \phi_{t} ({\mathop x\limits_{\sim}} ) = {\mathop x\limits_{\sim}} \),

  3. (iii)

    \( \phi_{t} \circ \phi_{s} ({\mathop x\limits_{\sim}} ) = \phi_{t + s} ({\mathop x\limits_{\sim}} ) \)

for all s, \( t \in {\mathbf{\mathbb{R}}} \) and \( {\mathop x\limits_{\sim}} \in {\mathbf{\mathbb{R}}}^{n} \). Is \( \phi_{t} \circ \phi_{s} = \phi_{s} \circ \phi_{t}? \)

Solution

We have

  1. (i)

    \( \phi_{0} ({\mathop x\limits_{\sim}} ) = e^{A \cdot \,0} {\mathop x\limits_{\sim}} = {\mathop x\limits_{\sim}}. \)

  2. (ii)

    \( \phi_{ - t} \circ \phi_{t} ({\mathop x\limits_{\sim}} ) = \phi_{ - t} ({\mathop y\limits_{\sim}} ) = e^{ - At} {\mathop y\limits_{\sim}} = e^{ - At} e^{At} {\mathop x\limits_{\sim}} = {\mathop x\limits_{\sim}} \), where \( {\mathop y\limits_{\sim}} = e^{At} {\mathop x\limits_{\sim}} \).

  3. (iii)

    \( \phi_{t} \circ \phi_{s} ({\mathop x\limits_{\sim}} ) = \phi_{t} ({\mathop y\limits_{\sim}} ) = e^{At} {\mathop y\limits_{\sim}} = e^{At} e^{As} {\mathop x\limits_{\sim}} = e^{A(t + s)} {\mathop x\limits_{\sim}} = \phi_{t + s} ({\mathop x\limits_{\sim}} ), \) where \( {\mathop y\limits_{\sim}} = e^{As} {\mathop x\limits_{\sim}} \); here \( e^{At} e^{As} = e^{A(t + s)} \) holds since the matrices \( At \) and \( As \) commute.

Now,

$$ \phi_{t} \circ \phi_{s} ({\mathop x\limits_{\sim}} ) = \phi_{t} ({\mathop y\limits_{\sim}} ) = e^{At} {\mathop y\limits_{\sim}} = e^{At} e^{As} {\mathop x\limits_{\sim}} = e^{As} e^{At} {\mathop x\limits_{\sim}} = \phi_{s} ({\mathop z\limits_{\sim}} ) = \phi_{s} \circ \phi_{t} ({\mathop x\limits_{\sim}} ) $$

for all \( {\mathop x\limits_{\sim}} \in {\mathbf{\mathbb{R}}}^{n} \), where \( {\mathop z\limits_{\sim}} = e^{At} {\mathop x\limits_{\sim}} \) and we have used \( e^{At} e^{As} = e^{As} e^{At} \), which holds because \( At \) and \( As \) commute.

Hence \( \phi_{t} \circ \phi_{s} = \phi_{s} \circ \phi_{t} \), that is, the flow evolution operators commute.
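These properties can be illustrated numerically. The sketch below (with an arbitrarily chosen matrix A and the matrix exponential computed by a truncated Taylor series; all names are illustrative) verifies (i)–(iii) and the commutativity for sample values of t and s:

```python
# Arbitrary 2x2 matrix; the flow properties hold for any A because
# At and As always commute for scalars t, s.
A = [[0.0, 1.0], [-1.0, 0.0]]

def expm(M, t, terms=40):
    # e^{Mt} by truncated Taylor series (accurate for moderate |t|*||M||).
    R = [[1.0, 0.0], [0.0, 1.0]]        # running sum, starts at I
    T = [[1.0, 0.0], [0.0, 1.0]]        # current term (Mt)^k / k!
    for k in range(1, terms):
        T = [[sum(T[i][m] * M[m][j] * t / k for m in range(2))
              for j in range(2)] for i in range(2)]
        R = [[R[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return R

def phi(t, x):
    # Flow evolution operator phi_t(x) = e^{At} x.
    E = expm(A, t)
    return [sum(E[i][j] * x[j] for j in range(2)) for i in range(2)]

def close(u, v):
    return all(abs(u[i] - v[i]) < 1e-9 for i in range(2))

x0 = [1.0, 2.0]
t, s = 0.7, -0.3

assert close(phi(0.0, x0), x0)                        # (i)   phi_0 = identity
assert close(phi(-t, phi(t, x0)), x0)                 # (ii)  phi_{-t} o phi_t = id
assert close(phi(t, phi(s, x0)), phi(t + s, x0))      # (iii) group property
assert close(phi(t, phi(s, x0)), phi(s, phi(t, x0)))  # commutativity
```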

2.6 Exercises