Abstract
This chapter deals with linear systems of ordinary differential equations (ODEs), both homogeneous and nonhomogeneous. Linear systems are extremely useful for analyzing nonlinear systems. The main emphasis is on finding solutions of linear systems with constant coefficients, so that the solution methods extend easily to higher dimensional systems. The well-known eigenvalue–eigenvector method and the fundamental matrix method are described in detail, together with the properties of the fundamental matrix, the fundamental theorem, and the important properties of the exponential matrix function. It is important to note that the set of all solutions of a linear system forms a vector space, and the eigenvectors constitute the solution space of the linear system. The general solution procedure for linear systems using the fundamental matrix, the concept of generalized eigenvector, and solutions for multiple eigenvalues, both real and complex, are also discussed.
Keywords
- Exponential Matrix Function
- Fundamental Matrix Method
- Eigenvalue-eigenvector Method
- General Solution Procedure
- Nonhomogeneous Linear System
2.1 Linear Systems
Consider a linear system of ordinary differential equations as follows:
\[ \dot{x}_{i} (t) = a_{i1} x_{1} (t) + a_{i2} x_{2} (t) + \cdots + a_{in} x_{n} (t) + b_{i} ,\quad i = 1,2, \ldots ,n, \quad (2.1) \]
where \( a_{ij} ,b_{i} \;(i,j = 1,2, \ldots ,n) \) are all given constants. The system (2.1) can be written in matrix notation as
\[ {\mathop {\dot{x}}\limits_{\sim }} (t) = A{\mathop x\limits_{\sim}} (t) + {\mathop b\limits_{\sim}} , \quad (2.2) \]
where \( {\mathop x\limits_{\sim}} (t) = \left( {x_{1} (t),x_{2} (t), \ldots ,x_{n} (t)} \right)^{t} ,\;{\mathop b\limits_{\sim}} = \left( {b_{1} ,b_{2} , \ldots ,b_{n} } \right)^{t} \) are the column vectors and A = [a ij ] n×n is the square matrix of order n, known as the coefficient matrix of the system. The system (2.2) is said to be homogeneous if \( {\mathop b\limits_{\sim}} = {\mathop 0\limits_{\sim}} \), that is, if all \( b_{i} \)’s are identically zero. On the other hand, if \( {\mathop b\limits_{\sim}} \ne {\mathop 0\limits_{\sim}} \), that is, if at least one \( b_{i} \) is nonzero, then the system is called nonhomogeneous. We first consider the linear homogeneous system
\[ {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} . \quad (2.3) \]
A differentiable function \( {\mathop x\limits_{\sim}} (t ) \) is said to be a solution of (2.3) if it satisfies the equation \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \). Let \( \mathop x\limits_{\sim}{_{1}} (t ) \) and \( \mathop x\limits_{\sim}{_{2}} (t ) \) be two solutions of (2.3). Then any linear combination \( {\mathop x\limits_{\sim}} (t )= c_{1} \mathop x\limits_{\sim}{_{1}} (t )+ c_{2} \mathop x\limits_{\sim}{_{2}} (t ) \) of \( \mathop x\limits_{\sim}{_{1}} (t ) \) and \( \mathop x\limits_{\sim}{_{2}} (t ) \) is also a solution of (2.3). This can be shown very easily as below. We have
\[ \mathop {\dot{x}}\limits_{\sim}{_{1}} = A \mathop x\limits_{\sim}{_{1}} ,\quad \mathop {\dot{x}}\limits_{\sim}{_{2}} = A \mathop x\limits_{\sim}{_{2}} , \]
and so
\[ {\mathop {\dot{x}}\limits_{\sim }} = c_{1} \mathop {\dot{x}}\limits_{\sim}{_{1}} + c_{2} \mathop {\dot{x}}\limits_{\sim}{_{2}} = c_{1} A \mathop x\limits_{\sim}{_{1}} + c_{2} A \mathop x\limits_{\sim}{_{2}} = A \left( c_{1} \mathop x\limits_{\sim}{_{1}} + c_{2} \mathop x\limits_{\sim}{_{2}} \right) = A {\mathop x\limits_{\sim}} . \]
The solution \( {\mathop x\limits_{\sim}} = c_{1} \mathop x\limits_{\sim}{_{1}} + c_{2} \mathop x\limits_{\sim}{_{2}} \) is known as the general solution of the system (2.3). Thus the general solution of a system is a linear combination of a set of independent solutions of that system (superposition principle). Since the system is linear, we may seek a nontrivial solution of (2.3) in the form
\[ {\mathop x\limits_{\sim}} (t) = {\mathop \alpha \limits_{ \sim }} \, e^{\lambda t} , \quad (2.4) \]
where \( {\mathop \alpha \limits_{ \sim }} = \left( {\alpha_{1} ,\alpha_{2} , \ldots ,\alpha_{n} } \right)^{t} \) is a column vector and λ is a number. Substituting (2.4) into (2.3) we obtain
\[ \left( A - \lambda I \right) {\mathop \alpha \limits_{ \sim }} = {\mathop 0\limits_{\sim}} , \quad (2.5) \]
where I is the identity matrix of order n. Equation (2.5) has a nontrivial solution if and only if
\[ \det \left( A - \lambda I \right) = 0. \quad (2.6) \]
On expansion, Eq. (2.6) gives a polynomial equation of degree n in λ, known as the characteristic equation of matrix A. The roots of the characteristic equation (2.6) are called the characteristic roots or eigenvalues or latent roots of A. The vector \( {\mathop \alpha \limits_{ \sim }} \), which is a nontrivial solution of (2.5), is known as an eigenvector of A corresponding to the eigenvalue λ. If \( {\mathop \alpha \limits_{ \sim }} \) is an eigenvector of a matrix A corresponding to an eigenvalue λ, then \( {\mathop x\limits_{\sim}} (t )= e^{\lambda t} {\mathop \alpha \limits_{ \sim }} \) is a solution of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \). The linearly independent eigenvectors generate the solution space of the linear homogeneous system, which is a vector space, and all properties of a vector space hold for it. We now discuss the general solution of a linear system below.
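The eigenvalue–eigenvector recipe can be checked numerically. The following Python sketch (an illustration added here, not part of the text; the sample matrix is our own choice) computes the eigenpairs with numpy, verifies the defining relation \( A{\mathop \alpha \limits_{ \sim }} = \lambda {\mathop \alpha \limits_{ \sim }} \), and assembles the general solution.

```python
import numpy as np

# Illustrative sketch: build the general solution
# x(t) = sum_j c_j * alpha_j * exp(lambda_j * t)
# for a sample matrix A with real distinct eigenvalues.
A = np.array([[5.0, 4.0],
              [1.0, 2.0]])   # sample coefficient matrix (our choice)

lam, V = np.linalg.eig(A)    # eigenvalues lam[j], eigenvectors V[:, j]

def x(t, c):
    """General solution for chosen constants c = (c1, ..., cn)."""
    return sum(c[j] * V[:, j] * np.exp(lam[j] * t) for j in range(len(lam)))

# Check: each eigenpair satisfies A @ alpha = lambda * alpha.
for j in range(len(lam)):
    assert np.allclose(A @ V[:, j], lam[j] * V[:, j])
```

Any choice of the constants `c` gives a solution; the check below the function confirms each column of `V` is a genuine eigenvector.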
2.2 Eigenvalue–Eigenvector Method
As we know, the solutions of a linear system form a linear space, generated by the eigenvectors of the coefficient matrix. Four cases arise according to the eigenvalues and corresponding eigenvectors of matrix A. We now proceed case-wise as follows.
Case I: Eigenvalues of A are real and distinct
If the coefficient matrix A has n real distinct eigenvalues, then it has n linearly independent (L.I.) eigenvectors. Let \( \mathop \alpha \limits_{ \sim }{_{1}} ,\mathop \alpha \limits_{ \sim }{_{2}} , \ldots , \mathop \alpha \limits_{ \sim }{_{n}} \) be the eigenvectors corresponding to the eigenvalues \( \lambda_{1} ,\lambda_{2} , \ldots ,\lambda_{n} \) of matrix A. Then each \( \mathop x\limits_{\sim}{_{j}} (t )= \mathop \alpha \limits_{ \sim }{_{j}} \,e^{{\lambda_{j} t}} \), j = 1, 2, …, n is a solution of \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \). The general solution is a linear combination of the solutions \( \mathop x\limits_{\sim}{_{j}}(t) \) and is given by
\[ {\mathop x\limits_{\sim}} (t) = c_{1} \mathop \alpha \limits_{ \sim }{_{1}} e^{\lambda_{1} t} + c_{2} \mathop \alpha \limits_{ \sim }{_{2}} e^{\lambda_{2} t} + \cdots + c_{n} \mathop \alpha \limits_{ \sim }{_{n}} e^{\lambda_{n} t} , \]
where \( c_{1} ,c_{2} , \ldots,c_{n} \) are arbitrary constants. In \( {\mathbf{\mathbb{R}}}^{2} \), the solution can be written as
\[ {\mathop x\limits_{\sim}} (t) = c_{1} \mathop \alpha \limits_{ \sim }{_{1}} e^{\lambda_{1} t} + c_{2} \mathop \alpha \limits_{ \sim }{_{2}} e^{\lambda_{2} t} . \]
Case II: Eigenvalues of A are real but repeated
In this case matrix A may have either n linearly independent eigenvectors, or fewer than n linearly independent eigenvectors, corresponding to the repeated eigenvalues. When the eigenvectors are deficient, generalized eigenvectors are used to complete a linearly independent set of solutions. We discuss this case in the following two sub-cases.
Sub-case 1: Matrix A has linearly independent eigenvectors
Let \( \mathop \alpha \limits_{ \sim}{_{1}} , \mathop \alpha \limits_{ \sim}{_{2}} , \ldots , \mathop \alpha \limits_{ \sim}{_{n}} \) be n linearly independent eigenvectors corresponding to the repeated real eigenvalue λ of matrix A. In this case the general solution of the linear system is given by
\[ {\mathop x\limits_{\sim}} (t) = \left( c_{1} \mathop \alpha \limits_{ \sim}{_{1}} + c_{2} \mathop \alpha \limits_{ \sim}{_{2}} + \cdots + c_{n} \mathop \alpha \limits_{ \sim}{_{n}} \right) e^{\lambda t} . \]
Sub-case 2. Matrix A has fewer than n linearly independent eigenvectors
First, we give the definition of a generalized eigenvector of A. Let λ be an eigenvalue of the n × n matrix A of multiplicity m ≤ n. Then for k = 1, 2, …, m, any nonzero solution of the equation \( (A - \lambda I)^{k} {\mathop v\limits_{\sim}} = {\mathop 0\limits_{\sim}} \) is called a generalized eigenvector of A. For simplicity consider a two-dimensional system. Let the eigenvalue λ be repeated with only one linearly independent eigenvector, say \( \mathop \alpha \limits_{ \sim}{_{1} }\). Let \( \mathop \alpha \limits_{ \sim }{_{2}} \) be a generalized eigenvector of the 2 × 2 matrix A. Then \( \mathop \alpha \limits_{ \sim }{_{2}} \) can be obtained from the relation \( (A - \lambda I) \mathop \alpha \limits_{ \sim}{_{2}} = \mathop \alpha \limits_{ \sim}{_{1}} \Rightarrow A \mathop \alpha \limits_{ \sim }{_{2}} = \lambda \mathop \alpha \limits_{ \sim }{_{2}} + \mathop \alpha \limits_{ \sim }{_{1}} \). So the general solution of the system is given by
\[ {\mathop x\limits_{\sim}} (t) = c_{1} \mathop \alpha \limits_{ \sim}{_{1}} e^{\lambda t} + c_{2} \left( \mathop \alpha \limits_{ \sim}{_{2}} + t \mathop \alpha \limits_{ \sim}{_{1}} \right) e^{\lambda t} . \]
Similarly, for an n × n matrix A, the general solution may be written as \( {\mathop x\limits_{\sim}} (t) = \sum\nolimits_{i = 1}^{n} {c_{i} \mathop x\limits_{\sim}{_{i}} (t)} \), where the solutions \( \mathop x\limits_{\sim}{_{i}} (t) \) are constructed from chains of eigenvectors and generalized eigenvectors in the same manner.
Case III: Matrix A has non-repeated complex eigenvalues
Suppose the real n × n matrix A has m pairs of complex eigenvalues \( a_{j} \pm ib_{j} ,\;j = 1,2, \ldots ,m \). Let \( {\mathop \alpha \limits_{ \sim }}_{j} \pm i{\mathop \beta \limits_{ \sim }}_{j} ,\;j = 1,2, \ldots ,m \) denote the corresponding eigenvectors. Then the solution of the system \( {\mathop {\dot{x}}\limits_{\sim }} (t )= A{\mathop x\limits_{\sim}} (t ) \) for these complex eigenvalues is given by
\[ {\mathop x\limits_{\sim}} (t) = \sum\nolimits_{j = 1}^{m} {\left( c_{j} \mathop u\limits_{\sim}{_{j}} (t) + d_{j} \mathop v\limits_{\sim}{_{j}} (t) \right)} , \]
where \( \mathop u\limits_{\sim}{_{j}} = \exp (a_{j} t)\{ {\mathop \alpha \limits_{ \sim }}_{j} \cos (b_{j} t) - {\mathop \beta \limits_{ \sim }}_{j} \sin (b_{j} t)\} \), \( \mathop v\limits_{\sim}{_{j}} = \exp (a_{j} t)\{ {\mathop \alpha \limits_{ \sim }}_{j} \sin(b_{j} t) + {\mathop \beta \limits_{ \sim }}_{j} \cos(b_{j} t)\} \) and \( c_{j} ,d_{j} (j = 1,2, \ldots ,m) \) are arbitrary constants. We discuss each of the above cases through specific examples below.
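The construction of the real solutions \( \mathop u\limits_{\sim}{_{j}} \) and \( \mathop v\limits_{\sim}{_{j}} \) can be sketched numerically. In the following Python illustration (added here; the sample matrix is our own assumption, chosen to have eigenvalues −1 ± i) we extract one complex eigenpair with numpy, split the eigenvector into real and imaginary parts, and verify by finite differences that both real combinations solve \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \).

```python
import numpy as np

# Sketch: from one complex eigenvalue a + ib with eigenvector alpha + i*beta,
# form the two real solutions
#   u(t) = e^{at}(alpha*cos(bt) - beta*sin(bt)),
#   v(t) = e^{at}(alpha*sin(bt) + beta*cos(bt)).
A = np.array([[1.0, -5.0],
              [1.0, -3.0]])          # assumed sample matrix, eigenvalues -1 +/- i

lam, V = np.linalg.eig(A)
j = np.argmax(lam.imag)              # pick the eigenvalue with positive imaginary part
a, b = lam[j].real, lam[j].imag
alpha, beta = V[:, j].real, V[:, j].imag

def u(t):
    return np.exp(a * t) * (alpha * np.cos(b * t) - beta * np.sin(b * t))

def v(t):
    return np.exp(a * t) * (alpha * np.sin(b * t) + beta * np.cos(b * t))

# Both u and v are real solutions of x' = A x; check by central differences.
for f in (u, v):
    t, h = 0.3, 1e-6
    assert np.allclose((f(t + h) - f(t - h)) / (2 * h), A @ f(t), atol=1e-5)
```

Note that numpy may scale the eigenvector by an arbitrary complex factor; the real and imaginary parts then differ from the text's \( {\mathop \alpha \limits_{ \sim }}_{1} ,{\mathop \beta \limits_{ \sim }}_{1} \) by a real linear combination, but they still span the same pair of real solutions.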
Example 2.1
Find the general solution of the following linear homogeneous system using the eigenvalue-eigenvector method:
\[ \dot{x} = 5x + 4y, \quad \dot{y} = x + 2y. \]
Solution
In matrix notation, the system can be written as \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where \( {\mathop x\limits_{\sim}} = \left( {\begin{array}{*{20}c} x \\ y \\ \end{array} } \right) \) and \( A = \left( {\begin{array}{*{20}c} 5 & 4 \\ 1 & 2 \\ \end{array} } \right) \). The eigenvalues of A satisfy the equation
\[ \det (A - \lambda I) = \left| {\begin{array}{*{20}c} {5 - \lambda } & 4 \\ 1 & {2 - \lambda } \\ \end{array} } \right| = 0. \]
The roots of the characteristic equation \( \lambda^{2} - 7\lambda + 6 = 0 \) are λ = 1, 6. So the eigenvalues of A are real and distinct. We shall now find the eigenvectors corresponding to these eigenvalues.
Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{1} = 1 \). Then
\[ (A - I){\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}} \Rightarrow 4e_{1} + 4e_{2} = 0,\; e_{1} + e_{2} = 0. \]
We can choose \( e_{1} = 1,\; e_{2} = - 1 \). So, the eigenvector corresponding to the eigenvalue \( \lambda_{1} = 1 \) is \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right). \)
Again, let \( {\mathop e\limits_{\sim}}^{\prime} = \left( {\begin{array}{*{20}c} {e_{1}^{\prime} } \\ {e_{2}^{\prime}} \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{2} = 6 \). Then
\[ (A - 6I){\mathop e\limits_{\sim}}^{\prime} = {\mathop 0\limits_{\sim}} \Rightarrow - e_{1}^{\prime} + 4e_{2}^{\prime} = 0,\; e_{1}^{\prime} - 4e_{2}^{\prime} = 0. \]
We can choose \( e_{1}^{\prime} = 4,e_{2}^{\prime} = 1 \). So, the eigenvector corresponding to the eigenvalue \( \,\lambda_{2} = 6\, \) is \( {\mathop e\limits_{\sim}}^{\prime} = \left( {\begin{array}{*{20}c} 4 \\ 1 \\ \end{array} } \right) \). The eigenvectors \( {\mathop e\limits_{\sim}} \), \( {\mathop e\limits_{\sim}}^{\prime} \) are linearly independent. Hence the general solution of the system is given as
\[ {\mathop x\limits_{\sim}} (t) = c_{1} \left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right) e^{t} + c_{2} \left( {\begin{array}{*{20}c} 4 \\ 1 \\ \end{array} } \right) e^{6t} , \]
or, \( \left. {\begin{array}{*{20}l} {x\left( t \right) = c_{1} e^{t} + 4c_{2} e^{6\,t} } \hfill \\ {y\left( t \right) = - c_{1} e^{t} + c_{2} e^{6\,t} } \hfill \\ \end{array} } \right\} \), where \( c_{1} ,c_{2} \) are arbitrary constants.
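The result of Example 2.1 is easy to confirm numerically; the short Python check below (an added illustration, not part of the text) verifies the eigenpairs and that the component solution satisfies the system.

```python
import numpy as np

# Numerical check of Example 2.1: A = [[5, 4], [1, 2]] has eigenvalues 1 and 6,
# and x(t) = c1*(1,-1)*e^t + c2*(4,1)*e^{6t} satisfies x' = A x.
A = np.array([[5.0, 4.0],
              [1.0, 2.0]])
lam = np.linalg.eigvals(A)
assert np.allclose(sorted(lam), [1.0, 6.0])

def sol(t, c1, c2):
    return (c1 * np.exp(t) * np.array([1.0, -1.0])
            + c2 * np.exp(6 * t) * np.array([4.0, 1.0]))

t, h = 0.5, 1e-6
lhs = (sol(t + h, 2, -1) - sol(t - h, 2, -1)) / (2 * h)   # numerical x'(t)
assert np.allclose(lhs, A @ sol(t, 2, -1), atol=1e-3)
```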
Example 2.2
Find the general solution of the linear system
\[ \dot{x} = 3x, \quad \dot{y} = 3y, \quad \text{that is,} \quad {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \;\text{with}\; A = \left( {\begin{array}{*{20}c} 3 & 0 \\ 0 & 3 \\ \end{array} } \right). \]
Solution
The characteristic equation of matrix A is
\[ (3 - \lambda )^{2} = 0. \]
So, the eigenvalues of A are 3, 3, which are real and repeated. Clearly, \( \mathop e\limits_{\sim}{_{1}} = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \) and \( \mathop e\limits_{\sim}{_{2}} = \left( {\begin{array}{*{20}c} 0 \\ 1 \\ \end{array} } \right) \) are two linearly independent eigenvectors corresponding to the repeated eigenvalue λ = 3. Thus, the general solution of the system is
\[ {\mathop x\limits_{\sim}} (t) = \left\{ c_{1} \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) + c_{2} \left( {\begin{array}{*{20}c} 0 \\ 1 \\ \end{array} } \right) \right\} e^{3t} . \]
Example 2.3
Find the general solution of the system
\[ \dot{x} = 3x - 4y, \quad \dot{y} = x - y, \quad \text{that is,} \quad {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \;\text{with}\; A = \left( {\begin{array}{*{20}c} 3 & { - 4} \\ 1 & { - 1} \\ \end{array} } \right), \]
using eigenvalue-eigenvector method.
Solution
The characteristic equation of matrix A is
\[ \lambda^{2} - 2\lambda + 1 = 0, \; \text{that is,} \; (\lambda - 1)^{2} = 0. \]
So matrix A has repeated real eigenvalues λ = 1, 1.
Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue λ = 1. Then
\[ (A - I){\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}} \Rightarrow 2e_{1} - 4e_{2} = 0,\; e_{1} - 2e_{2} = 0. \]
We can choose \( e_{1} = 2,e_{2} = 1 \). Therefore, \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right) \).
Let \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} {g_{1} } \\ {g_{2} } \\ \end{array} } \right) \) be the generalized eigenvector corresponding to the eigenvalue λ = 1. Then
\[ (A - I){\mathop g\limits_{\sim}} = {\mathop e\limits_{\sim}} \Rightarrow 2g_{1} - 4g_{2} = 2,\; g_{1} - 2g_{2} = 1. \]
We can choose \( g_{2} = 1,g_{1} = 3 \). Therefore \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 3 \\ 1 \\ \end{array} } \right) \).
Therefore the general solution of the system is
\[ {\mathop x\limits_{\sim}} (t) = c_{1} \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right) e^{t} + c_{2} \left\{ \left( {\begin{array}{*{20}c} 3 \\ 1 \\ \end{array} } \right) + t \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right) \right\} e^{t} , \]
or, \( \left. {\begin{array}{*{20}l} {x\left( t \right) = \left\{ {2c_{1} + \left( {2t + 3} \right)c_{2} } \right\}e^{t} } \hfill \\ {y\left( t \right) = \left\{ {c_{1} + \left( {t + 1} \right)c_{2} } \right\}e^{t} } \hfill \\ \end{array} } \right\} \), where \( c_{1} \) and \( c_{2} \) are arbitrary constants.
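A generalized eigenvector can also be computed numerically by solving the singular linear system \( (A - \lambda I){\mathop g\limits_{\sim}} = {\mathop e\limits_{\sim}} \). The Python sketch below (added here for illustration) uses a matrix with the same eigenstructure as Example 2.3 (λ = 1 repeated, eigenvector (2, 1), generalized eigenvector satisfying the chain relation); the specific matrix entries are our own choice, consistent with those data.

```python
import numpy as np

# Sketch: computing a generalized eigenvector numerically.
# A has lambda = 1 repeated with the single eigenvector (2, 1) (our sample).
A = np.array([[3.0, -4.0],
              [1.0, -1.0]])
lam = 1.0
B = A - lam * np.eye(2)

e = np.array([2.0, 1.0])          # ordinary eigenvector: B @ e = 0
assert np.allclose(B @ e, 0)

# Generalized eigenvector g solves (A - lam*I) g = e; B is singular, so use
# lstsq, which returns the minimum-norm solution of the consistent system.
g, *_ = np.linalg.lstsq(B, e, rcond=None)
assert np.allclose(B @ g, e)

# The two fundamental solutions: e*exp(t) and (g + t*e)*exp(t).
def x2(t):
    return (g + t * e) * np.exp(t)

t, h = 0.4, 1e-6
assert np.allclose((x2(t + h) - x2(t - h)) / (2 * h), A @ x2(t), atol=1e-5)
```

Any solution of \( (A - \lambda I){\mathop g\limits_{\sim}} = {\mathop e\limits_{\sim}} \) works here: two choices differ by a multiple of the eigenvector, which only reshuffles the arbitrary constants in the general solution.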
Example 2.4
Find the general solution of the linear system
\[ \dot{x} = 10x - y, \quad \dot{y} = 25x + 2y. \]
Solution
The given system can be written as
\[ {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} , \quad A = \left( {\begin{array}{*{20}c} {10} & { - 1} \\ {25} & 2 \\ \end{array} } \right). \]
The characteristic equation of matrix A is
\[ \lambda^{2} - 12\lambda + 45 = 0. \]
Therefore, matrix A has a pair of complex conjugate eigenvalues 6 ± 3i.
Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue λ = 6 + 3i. Then
A nontrivial solution of this system is \( e_{1} = 1,\; e_{2} = 4 - 3i \).
Therefore \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ {4 - 3i} \\ \end{array} } \right) \) \( = \left( {\begin{array}{*{20}c} 1 \\ 4 \\ \end{array} } \right) + i\left( {\begin{array}{*{20}c} 0 \\ { - 3} \\ \end{array} } \right) \) \( = {\mathop \alpha \limits_{ \sim }}{_{1}} + i{\mathop \alpha \limits_{ \sim }}{_{2}}\), where \( {\mathop \alpha \limits_{ \sim }}{_{1}} = \left( {\begin{array}{*{20}c} 1 \\ 4 \\ \end{array} } \right) \) and \( {\mathop \alpha \limits_{ \sim }}{_{2}} = \left( {\begin{array}{*{20}c} 0 \\ { - 3} \\ \end{array} } \right) \).
Similarly, the eigenvector corresponding to the eigenvalue λ = 6 − 3i is \( {\mathop e\limits_{\sim}}^{\prime} = \left( {\begin{array}{*{20}c} 1 \\ {4 + 3i} \\ \end{array} } \right) \) \( = {\mathop \alpha \limits_{ \sim }}{_{1}} - i{\mathop \alpha \limits_{ \sim }}{_{2}} \). Therefore,
\[ \mathop u\limits_{\sim}{_{1}} (t) = e^{6t} \left\{ {\mathop \alpha \limits_{ \sim }}{_{1}} \cos (3t) - {\mathop \alpha \limits_{ \sim }}{_{2}} \sin (3t) \right\} \]
and
\[ \mathop v\limits_{\sim}{_{1}} (t) = e^{6t} \left\{ {\mathop \alpha \limits_{ \sim }}{_{1}} \sin (3t) + {\mathop \alpha \limits_{ \sim }}{_{2}} \cos (3t) \right\} . \]
Therefore, the general solution is
\[ {\mathop x\limits_{\sim}} (t) = c_{1} \mathop u\limits_{\sim}{_{1}} (t) + d_{1} \mathop v\limits_{\sim}{_{1}} (t) , \]
where \( c_{1} \,\,{\text{and }}d_{1} \) are arbitrary constants.
Example 2.5
Find the solution of the system
\[ \dot{x} = x - 5y, \quad \dot{y} = x - 3y \]
satisfying the initial condition x(0) = 1, y(0) = 1. Describe the behavior of the solution as t → ∞.
Solution
The characteristic equation of matrix A is
\[ \lambda^{2} + 2\lambda + 2 = 0. \]
So, matrix A has a pair of complex conjugate eigenvalues (−1 ± i).
Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue λ = −1 + i. Then
A nontrivial solution of this system is \( e_{1} = 2 + i,\; e_{2} = 1 \).
Therefore \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {2 + i} \\ 1 \\ \end{array} } \right) \) \( = \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right) + i\left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \) \( = {\mathop \alpha \limits_{ \sim }}{_{1}} + i{\mathop \alpha \limits_{ \sim }}{_{2}} \), where \( {\mathop \alpha \limits_{ \sim }}{_{1}} = \left( {\begin{array}{*{20}c} 2 \\ 1 \\ \end{array} } \right) \) and \( {\mathop \alpha \limits_{ \sim }}{_{2}} = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \).
Similarly, the eigenvector corresponding to the eigenvalue λ = −1 − i is \( {\mathop e\limits_{\sim}}^{\prime} = \left( {\begin{array}{*{20}c} {2 - i} \\ 1 \\ \end{array} } \right) \) \( = {\mathop \alpha \limits_{ \sim }}{_{1}} - i{\mathop \alpha \limits_{ \sim }}{_{2}} \).
Therefore, the solution of the system satisfying x(0) = 1, y(0) = 1 is
\[ x(t) = e^{ - t} \left( \cos t - 3\sin t \right), \quad y(t) = e^{ - t} \left( \cos t - \sin t \right). \]
When t → ∞, \( e^{ - \,t} \to 0 \). So, in this case \( {\mathop x\limits_{\sim}} \left( t \right) \to {\mathop 0\limits_{\sim}} \), that is, the solution of the system is stable in the usual sense.
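The decay of the solution can be confirmed by direct numerical integration. The Python sketch below (an added illustration; the sample matrix is our own choice, with eigenvalues −1 ± i as in this example) integrates the system with the classical RK4 scheme and checks that the trajectory spirals into the origin.

```python
import numpy as np

# Sketch: with eigenvalues -1 +/- i, every solution decays like e^{-t}.
A = np.array([[1.0, -5.0],
              [1.0, -3.0]])           # assumed sample matrix
assert np.allclose(np.linalg.eigvals(A).real, [-1.0, -1.0])

# Integrate x' = A x with classical RK4 from x(0) = (1, 1).
def rk4(A, x0, t_end, n):
    x, dt = np.array(x0, float), t_end / n
    for _ in range(n):
        k1 = A @ x
        k2 = A @ (x + dt / 2 * k1)
        k3 = A @ (x + dt / 2 * k2)
        k4 = A @ (x + dt * k3)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x5 = rk4(A, [1.0, 1.0], 5.0, 2000)
assert np.linalg.norm(x5) < 0.05       # the trajectory has decayed sharply
```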
Example 2.6
Find the solution of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where
Solution
The characteristic equation of A is
\[ (\lambda + 1)(\lambda - 1)(\lambda + 3) = 0. \]
Therefore the eigenvalues of matrix A are λ = −1, 1, −3.
We shall now find the eigenvector corresponding to each of the eigenvalues.
Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ {e_{3} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue λ = −1. Then
We choose \( e_{1} = 1 \). Therefore, the eigenvector corresponding to the eigenvalue λ = −1 is \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ \end{array} } \right) \). Similarly, the eigenvectors corresponding to λ = 1 and λ = −3 are, respectively, \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} {11/2} \\ 1 \\ 3 \\ \end{array} } \right) \) and \( {\mathop \alpha \limits_{ \sim }} = \left( {\begin{array}{*{20}c} {1/2} \\ 1 \\ { - 1} \\ \end{array} } \right) \). Therefore the general solution is
\[ {\mathop x\limits_{\sim}} (t) = c_{1} \left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ \end{array} } \right) e^{ - t} + c_{2} \left( {\begin{array}{*{20}c} {11/2} \\ 1 \\ 3 \\ \end{array} } \right) e^{t} + c_{3} \left( {\begin{array}{*{20}c} {1/2} \\ 1 \\ { - 1} \\ \end{array} } \right) e^{ - 3t} , \]
where \( c_{1} \),\( c_{2} \) and \( c_{3} \) are arbitrary constants.
Example 2.7
Solve the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where
Solution
The characteristic equation of matrix A is
\[ (\lambda - 4)(\lambda + 2)^{2} = 0. \]
So (−2) is a repeated eigenvalue of A. The eigenvector corresponding to the eigenvalue \( \lambda_{1} = 4 \) is \( \left( {\begin{array}{*{20}c} 1 \\ 1 \\ 2 \\ \end{array} } \right) \). An eigenvector corresponding to the repeated eigenvalue \( \lambda_{2} = \lambda_{3} = - 2 \) is \( \left( {\begin{array}{*{20}c} {e_{1} } & {e_{2} } & {e_{3} } \\ \end{array} } \right)^{T} \) such that
\[ (A + 2I) \left( {\begin{array}{*{20}c} {e_{1} } & {e_{2} } & {e_{3} } \\ \end{array} } \right)^{T} = {\mathop 0\limits_{\sim}} , \]
which is equivalent to the single equation \( e_{1} - e_{2} + e_{3} = 0 \).
We can choose \( e_{1} = 1 \), \( e_{2} = 1 \) and \( e_{3} = 0 \), and so we can take one eigenvector as \( \left( {\begin{array}{*{20}c} 1 \\ 1 \\ 0 \\ \end{array} } \right) \). Again, we can choose \( e_{1} = 0 \), \( e_{2} = 1 \) and \( e_{3} = 1 \). Then we obtain another eigenvector \( \left( {\begin{array}{*{20}c} 0 \\ 1 \\ 1 \\ \end{array} } \right) \). Clearly, these two eigenvectors are linearly independent. Thus, we have two linearly independent eigenvectors corresponding to the repeated eigenvalue −2. Hence, the general solution of the system is given by
\[ {\mathop x\limits_{\sim}} (t) = c_{1} \left( {\begin{array}{*{20}c} 1 \\ 1 \\ 2 \\ \end{array} } \right) e^{4t} + \left\{ c_{2} \left( {\begin{array}{*{20}c} 1 \\ 1 \\ 0 \\ \end{array} } \right) + c_{3} \left( {\begin{array}{*{20}c} 0 \\ 1 \\ 1 \\ \end{array} } \right) \right\} e^{ - 2t} , \]
where \( c_{1} \), \( c_{2} \) and \( c_{3} \) are arbitrary constants.
Example 2.8
Solve the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) where
Solution
Here matrix A has two pairs of complex conjugate eigenvalues \( \lambda_{1} = - 1 \pm i \) and \( \lambda_{2} = 1 \pm i \). The corresponding pairs of eigenvectors are
Therefore, the general solution of the system is expressed as
where \( c_{j} ,d_{j} (j = 1,2) \) are arbitrary constants.
2.3 Fundamental Matrix
A set \( \{ {\mathop x\limits_{\sim}{_{1}} (t ),\mathop x\limits_{\sim}{_{2}} (t ), \ldots ,\mathop x\limits_{\sim}{_{n}} (t )} \} \) of solutions of a linear homogeneous system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) is said to be a fundamental set of solutions of that system if it satisfies the following two conditions:
-
(i)
The set \( \{ {\mathop x\limits_{\sim}{_{1}} (t ), \mathop x\limits_{\sim}{_{2}} (t ), \ldots, \mathop x\limits_{\sim}{_{n}} (t )} \} \) is linearly independent, that is, for \( c_{1} ,c_{2} , \ldots ,c_{n} \in {\mathbf{\mathbb{R}}},\;c_{1} \mathop x\limits_{\sim}{_{1}} + c_{2} \mathop x\limits_{\sim}{_{2}} + \cdots + c_{n} \mathop x\limits_{\sim}{_{n}} = {\mathop 0\limits_{\sim}} \; \Rightarrow c_{1} = c_{2} = \cdots = c_{n} = 0. \)
-
(ii)
For any solution \( {\mathop x\limits_{\sim}} (t ) \) of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), there exist \( c_{1} ,c_{2} , \ldots ,c_{n} \in {\mathbf{\mathbb{R}}} \) such that \( {\mathop x\limits_{\sim}} (t )= c_{1} \mathop x\limits_{\sim}{_{1}} (t )+ c_{2} \mathop x\limits_{\sim}{_{2}} (t )+ \cdots + c_{n} \mathop x\limits_{\sim}{_{n}} (t ),\forall t \in {\mathbf{\mathbb{R}}} \).
The solution, expressed as a linear combination of a fundamental set of solutions of a system, is called a general solution of the system.
Let \( \{ {\mathop x\limits_{\sim}{_{1}} (t ), \mathop x\limits_{\sim}{_{2}} (t ), \ldots , \mathop x\limits_{\sim}{_{n}} (t )} \} \) be a fundamental set of solutions of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) for \( t \in I = \left[ {a,b} \right];\,\,a,b \in {\mathbf{\mathbb{R}}} \). Then the matrix
\[ \Phi (t) = \left[ \mathop x\limits_{\sim}{_{1}} (t), \mathop x\limits_{\sim}{_{2}} (t), \ldots , \mathop x\limits_{\sim}{_{n}} (t) \right] , \]
whose columns are these solutions,
is called a fundamental matrix of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), \( {\mathop x\limits_{\sim}} \in {\mathbf{\mathbb{R}}}^{n} \). Since the set \( \{ { \mathop x\limits_{\sim}{_{1}} (t ), \mathop x\limits_{\sim}{_{2}} (t ), \ldots , \mathop x\limits_{\sim}{_{n}} (t )} \} \) is linearly independent, the fundamental matrix \( \Phi (t ) \) is nonsingular. Now the general solution of the system is
\[ {\mathop x\limits_{\sim}} (t) = \Phi (t) {\mathop c\limits_{\sim}} , \]
where \( {\mathop c\limits_{\sim}} = (c_{1} ,c_{2} , \ldots, c_{n} )^{t} \) is a constant column vector. If the initial condition is \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \), then
\[ \mathop x\limits_{\sim}{_{0}} = \Phi (0) {\mathop c\limits_{\sim}} \Rightarrow {\mathop c\limits_{\sim}} = \Phi^{ - 1} (0) \mathop x\limits_{\sim}{_{0}} . \]
Thus the solution of the initial value problem \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) with the initial condition \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \) can be expressed in terms of the fundamental matrix Φ(t) as
\[ {\mathop x\limits_{\sim}} (t) = \Phi (t) \Phi^{ - 1} (0) \mathop x\limits_{\sim}{_{0}} . \]
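The fundamental-matrix recipe can be sketched numerically. In the following Python illustration (added here; the sample matrix is our own choice) the columns of \( \Phi (t) \) are eigen-solutions, and the IVP solution is assembled as \( \Phi (t)\Phi^{ - 1} (0)\mathop x\limits_{\sim}{_{0}} \).

```python
import numpy as np

# Sketch: columns of Phi(t) are independent solutions alpha_j * exp(lambda_j t);
# the IVP solution is x(t) = Phi(t) @ inv(Phi(0)) @ x0.
A = np.array([[5.0, 4.0],
              [1.0, 2.0]])            # sample matrix with eigenvalues 1 and 6
lam, V = np.linalg.eig(A)

def Phi(t):
    # column j is V[:, j] * exp(lam[j] * t)
    return V @ np.diag(np.exp(lam * t))

x0 = np.array([1.0, 0.0])

def x(t):
    return Phi(t) @ np.linalg.inv(Phi(0.0)) @ x0

assert np.allclose(x(0.0), x0)        # the initial condition is met
t, h = 0.3, 1e-6
assert np.allclose((x(t + h) - x(t - h)) / (2 * h), A @ x(t), atol=1e-4)
```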
Note that two different homogeneous systems cannot have the same fundamental matrix. Again, if \( \Phi (t ) \) is a fundamental matrix of \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), then \( C\Phi (t ) \) is also a fundamental matrix of the system for any nonzero constant scalar C, and more generally so is \( \Phi (t ) C \) for any nonsingular constant matrix C.
Example 2.9
Find the fundamental matrix of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where \( A = \left( {\begin{array}{*{20}c} 1 & { - 2} \\ { - 3} & 2 \\ \end{array} } \right) \). Hence find its solution.
Solution
The characteristic equation of matrix A is
\[ \lambda^{2} - 3\lambda - 4 = 0. \]
So, the eigenvalues of matrix A are −1, 4, which are real and distinct.
Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{1} = - 1 \). Then
\[ (A + I){\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}} \Rightarrow 2e_{1} - 2e_{2} = 0,\; - 3e_{1} + 3e_{2} = 0. \]
A nontrivial solution of this system is \( e_{1} = 1,e_{2} = 1 \).
Again, let \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} {g_{1} } \\ {g{}_{2}} \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{2} = 4 \). Then
\[ (A - 4I){\mathop g\limits_{\sim}} = {\mathop 0\limits_{\sim}} \Rightarrow - 3g_{1} - 2g_{2} = 0. \]
Choose \( g_{1} = 2,g_{2} = - 3 \). Therefore, \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 2 \\ { - 3} \\ \end{array} } \right) \).
Therefore the eigenvectors corresponding to the eigenvalues λ = −1, 4 are respectively \( \left( {\begin{array}{*{20}c} 1 \\ 1 \\ \end{array} } \right) \) and \( \left( {\begin{array}{*{20}c} 2 \\ { - 3} \\ \end{array} } \right) \), which are linearly independent. So two fundamental solutions of the system are
\[ \mathop x\limits_{\sim}{_{1}} (t) = \left( {\begin{array}{*{20}c} 1 \\ 1 \\ \end{array} } \right) e^{ - t} , \quad \mathop x\limits_{\sim}{_{2}} (t) = \left( {\begin{array}{*{20}c} 2 \\ { - 3} \\ \end{array} } \right) e^{4t} , \]
and a fundamental matrix of the system is
\[ \Phi (t) = \left( {\begin{array}{*{20}c} {e^{ - t} } & {2e^{4t} } \\ {e^{ - t} } & { - 3e^{4t} } \\ \end{array} } \right). \]
Now \( \Phi ( 0 )= \left( {\begin{array}{*{20}c} 1 & 2 \\ 1 & { - 3} \\ \end{array} } \right) \) and so \( \Phi ^{ - 1} ( 0 )= \frac{1}{5}\left( {\begin{array}{*{20}c} 3 & 2 \\ 1 & { - 1} \\ \end{array} } \right) \).
Therefore the general solution of the system is given by
\[ {\mathop x\limits_{\sim}} (t) = \Phi (t) \Phi^{ - 1} (0) \mathop x\limits_{\sim}{_{0}} = \frac{1}{5} \left( {\begin{array}{*{20}c} {3e^{ - t} + 2e^{4t} } & {2e^{ - t} - 2e^{4t} } \\ {3e^{ - t} - 3e^{4t} } & {2e^{ - t} + 3e^{4t} } \\ \end{array} } \right) \mathop x\limits_{\sim}{_{0}} . \]
2.3.1 General Solution of Linear Systems
Consider a simple linear equation
\[ \dot{x} = ax \]
with initial condition \( x ( 0 )= x_{0} \), where \( a{\text{ and }}x_{0} \) are certain constants. The solution of this initial value problem (IVP) is given as \( x (t )= x_{0} e^{at} \). Then we may expect that the solution of the initial value problem for the n × n system
\[ {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} , \quad {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \]
can be expressed in terms of the exponential matrix function as
\[ {\mathop x\limits_{\sim}} (t) = e^{At} \mathop x\limits_{\sim}{_{0}} , \quad (2.10) \]
where A is an n × n matrix. Comparing (2.10) with the solution obtained by the fundamental matrix, we have the relation
\[ e^{At} \mathop x\limits_{\sim}{_{0}} = \Phi (t) \Phi^{ - 1} (0) \mathop x\limits_{\sim}{_{0}} , \quad \text{that is,} \quad e^{At} = \Phi (t) \Phi^{ - 1} (0). \]
Thus we see that if \( \Phi (t ) \) is a fundamental matrix of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), then \( \Phi ( 0 ) \) is invertible and \( e^{At} =\Phi (t )\Phi ^{ - 1} ( 0 ) \). Note that if \( \Phi ( 0 )= I \), then \( \Phi ^{ - 1} ( 0 )= I \) and so, \( e^{At} =\Phi (t ) { }I =\Phi (t ) \).
Example 2.10
Is \( \Phi (t )= \left( {\begin{array}{*{20}c} {2e^{t} } & { - e^{ - 3t} } \\ { - 4e^{t} } & {2e^{ - 3t} } \\ \end{array} } \right) \) a fundamental matrix for a system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \)?
Solution
We know that if \( \Phi (t ) \) is a fundamental matrix, then \( \Phi ( 0 ) \) is invertible.
Here \( \Phi (t )= \left( {\begin{array}{*{20}c} {2e^{t} } & { - e^{ - 3t} } \\ { - 4e^{t} } & {2e^{ - 3t} } \\ \end{array} } \right) \). So, \( \Phi ( 0 )= \left( {\begin{array}{*{20}c} 2 & { - 1} \\ { - 4} & 2 \\ \end{array} } \right) \).
Since \( {\text{det(}}\Phi ( 0 ) )= 4-4 = 0 \), \( \Phi ( 0 ) \) is not invertible and hence the given matrix is not a fundamental matrix for the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \).
Example 2.11
Find \( e^{At} \) for the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where \( A = \left( {\begin{array}{*{20}c} 1 & 1 \\ 4 & 1 \\ \end{array} } \right) \).
Solution
The characteristic equation of A is
\[ \lambda^{2} - 2\lambda - 3 = 0. \]
So, the eigenvalues of A are λ = 3, −1. The eigenvectors corresponding to the eigenvalues λ = 3, −1 are, respectively, \( \left( {\begin{array}{*{20}c} 1 \\ 2 \\ \end{array} } \right) \) and \( \left( {\begin{array}{*{20}c} 1 \\ { - 2} \\ \end{array} } \right) \), which are linearly independent. So, two fundamental solutions of the system are \( \mathop x\limits_{\sim}{_{1}} \left( t \right) = \left( {\begin{array}{*{20}c} 1 \\ 2 \\ \end{array} } \right)e^{3t} , \mathop x\limits_{\sim}{_{2}} \left( t \right) = \left( {\begin{array}{*{20}c} 1 \\ { - 2} \\ \end{array} } \right)e^{ - t} \). Therefore a fundamental matrix of the system is
\[ \Phi (t) = \left( {\begin{array}{*{20}c} {e^{3t} } & {e^{ - t} } \\ {2e^{3t} } & { - 2e^{ - t} } \\ \end{array} } \right). \]
Now, \( \Phi ( 0 )= \left( {\begin{array}{*{20}c} 1 & 1 \\ 2 & { - 2} \\ \end{array} } \right) \) and \( \Phi ^{ - 1} ( 0 )= - \frac{1}{4}\left( {\begin{array}{*{20}c} { - 2} & { - 1} \\ { - 2} & 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {\frac{1}{2}} & {\frac{1}{4}} \\ {\frac{1}{2}} & { - \frac{1}{4}} \\ \end{array} } \right) \).
Therefore,
\[ e^{At} = \Phi (t) \Phi^{ - 1} (0) = \left( {\begin{array}{*{20}c} {\frac{1}{2}e^{3t} + \frac{1}{2}e^{ - t} } & {\frac{1}{4}e^{3t} - \frac{1}{4}e^{ - t} } \\ {e^{3t} - e^{ - t} } & {\frac{1}{2}e^{3t} + \frac{1}{2}e^{ - t} } \\ \end{array} } \right). \]
2.3.2 Fundamental Matrix Method
The fundamental matrix can be used to obtain the general solution of a linear system. The fundamental theorem gives the existence and uniqueness of the solution of a linear system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), \( {\mathop x\limits_{\sim}} \in {\mathbf{\mathbb{R}}}^{n} \) subject to the initial condition \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \in {\mathbf{\mathbb{R}}}^{n} \). We now present the fundamental theorem.
Theorem 2.1
(Fundamental theorem) Let A be an n × n matrix. Then for any given initial condition \( \mathop x\limits_{\sim}{_{0}} \in {\mathbf{\mathbb{R}}}^{n} \), the initial value problem \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) with \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \) has the unique solution \( {\mathop x\limits_{\sim}} (t )= e^{At} \mathop x\limits_{\sim}{_{0}} \).
Proof
The initial value problem is
\[ {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} , \quad {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} . \quad (2.12) \]
We have
\[ e^{At} = I + At + \frac{A^{2} t^{2} }{2!} + \frac{A^{3} t^{3} }{3!} + \cdots \quad (2.13) \]
Differentiating (2.13) w.r.t. t,
\[ \frac{d}{dt} \left( e^{At} \right) = A + A^{2} t + \frac{A^{3} t^{2} }{2!} + \cdots = A \left( I + At + \frac{A^{2} t^{2} }{2!} + \cdots \right) = Ae^{At} . \quad (2.14) \]
The term-by-term differentiation is valid because the series for \( e^{At} \) converges uniformly on every bounded interval of t.
Therefore, \( \frac{d}{dt}\left( e^{At} \right) = Ae^{At} \). This shows that the matrix function \( e^{At} \) satisfies the matrix differential equation \( {\dot{\Phi }} = A\Phi \); the matrix \( e^{At} \) is known as the fundamental matrix of the system (2.12). Now using (2.14),
\[ \frac{d}{dt} \left( e^{At} \mathop x\limits_{\sim}{_{0}} \right) = Ae^{At} \mathop x\limits_{\sim}{_{0}} = A{\mathop x\limits_{\sim}} , \]
where \( {\mathop x\limits_{\sim}} = e^{At} \mathop x\limits_{\sim}{_{0}} \).
Also, \( {\mathop x\limits_{\sim}} ( 0 )= \left[ {e^{At} \mathop x\limits_{\sim}{_{0}} } \right]_{t = 0} \) \( = \left[ {e^{At} } \right]_{t = 0} \mathop x\limits_{\sim}{_{0}} \) \( = I\, \mathop x\limits_{\sim}{_{0}} = \mathop x\limits_{\sim}{_{0}} \). Thus \( {\mathop x\limits_{\sim}} (t )= e^{At} \mathop x\limits_{\sim}{_{0}} \) is a solution of (2.12). We prove the uniqueness of the solution as follows. Let \( {\mathop x\limits_{\sim}} (t ) \) be any solution of (2.12) and set \( {\mathop y\limits_{\sim}} (t )= e^{ - At} {\mathop x\limits_{\sim}} (t ) \). Then
\[ {\mathop {\dot{y}}\limits_{\sim }} (t) = - Ae^{ - At} {\mathop x\limits_{\sim}} (t) + e^{ - At} {\mathop {\dot{x}}\limits_{\sim }} (t) = - Ae^{ - At} {\mathop x\limits_{\sim}} (t) + e^{ - At} A{\mathop x\limits_{\sim}} (t) = {\mathop 0\limits_{\sim}} , \]
since A commutes with \( e^{ - At} \). This implies that \( {\mathop y\limits_{\sim}} (t ) \) is constant for all \( t \in {\mathbf{\mathbb{R}}} \); putting t = 0 gives \( {\mathop y\limits_{\sim}} (t )= \mathop x\limits_{\sim}{_{0}} \). Therefore any solution of the IVP (2.12) is given as \( {\mathop x\limits_{\sim}} (t )= e^{At} {\mathop y\limits_{\sim}} (t )= e^{At} \mathop x\limits_{\sim}{_{0}} \). This completes the proof.
2.3.3 Matrix Exponential Function
From the fundamental theorem, the general solution of a linear system can be obtained using the exponential matrix function. The exponential matrix function has some interesting properties by which the general solution can be obtained easily. For an n × n matrix A, the matrix exponential function \( e^{A} \) of A is defined as
\[ e^{A} = I + A + \frac{A^{2} }{2!} + \frac{A^{3} }{3!} + \cdots = \sum\nolimits_{k = 0}^{\infty } {\frac{A^{k} }{k!}} . \quad (2.15) \]
Note that the infinite series (2.15) converges for every n × n matrix A. If A = [a], a 1 × 1 matrix, then \( e^{A} = \left[ {e^{a} } \right] \) (see the book by L. Perko [1]). We now discuss some of the important properties of the matrix exponential function \( e^{A} \).
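The defining series (2.15) is easy to evaluate numerically. The following Python sketch (added here for illustration) sums a truncated series and compares it, for a diagonalizable sample matrix, against the closed form \( Pe^{Dt} P^{ - 1} \) built from the eigen-decomposition.

```python
import numpy as np

# Sketch: truncated series e^{At} = I + At + (At)^2/2! + ... ; for a
# diagonalizable A this should match V diag(e^{lam t}) V^{-1}.
def expm_series(A, t, terms=30):
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ (A * t) / k        # now holds (At)^k / k!
        result = result + term
    return result

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])               # eigenvalues 3 and -1
lam, V = np.linalg.eig(A)
t = 0.5
exact = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)
assert np.allclose(expm_series(A, t), exact)
```

Truncating at 30 terms is more than enough here; in floating point the terms \( (At)^k /k! \) fall below machine precision long before that.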
Property 1
If A = φ, the null matrix, then \( e^{At} = I \).
Proof
By definition,
\[ e^{At} = I + \varphi t + \frac{\varphi^{2} t^{2} }{2!} + \cdots = I. \]
So, \( e^{At} = I \) for A = φ.
Property 2
Let A = I, the identity matrix. Then
\[ e^{At} = e^{It} = e^{t} I. \]
Proof
We know that \( e^{At} = I + At + \frac{{A^{2} t^{2} }}{2\, !} + \frac{{A^{3} t^{3} }}{3\, !} + \cdots \). Therefore
\[ e^{It} = I + It + \frac{t^{2} }{2!} I + \cdots = \left( 1 + t + \frac{t^{2} }{2!} + \cdots \right) I = e^{t} I. \]
Note
If A = αI, α being a scalar, then
\[ e^{At} = e^{\alpha t} I. \]
Property 3
Suppose \( D = \left[ {\begin{array}{*{20}c} {\lambda_{1} } & 0 \\ 0 & {\lambda_{2} } \\ \end{array} } \right] \), a diagonal matrix. Then
\[ e^{Dt} = \left[ {\begin{array}{*{20}c} {e^{\lambda_{1} t} } & 0 \\ 0 & {e^{\lambda_{2} t} } \\ \end{array} } \right]. \]
Proof
By definition,
\[ e^{Dt} = I + Dt + \frac{D^{2} t^{2} }{2!} + \cdots = \left[ {\begin{array}{*{20}c} {\sum\nolimits_{k = 0}^{\infty } {\frac{\lambda_{1}^{k} t^{k} }{k!}} } & 0 \\ 0 & {\sum\nolimits_{k = 0}^{\infty } {\frac{\lambda_{2}^{k} t^{k} }{k!}} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {e^{\lambda_{1} t} } & 0 \\ 0 & {e^{\lambda_{2} t} } \\ \end{array} } \right]. \]
Property 4
Let \( P^{ - 1} AP = D \), D being a diagonal matrix. Then
\[ e^{At} = Pe^{Dt} P^{ - 1} . \]
Proof
We have \( A = PDP^{ - 1} \), and so \( A^{k} = PD^{k} P^{ - 1} \) for every k. Therefore
\[ e^{At} = \sum\nolimits_{k = 0}^{\infty } {\frac{\left( PDP^{ - 1} \right)^{k} t^{k} }{k!}} = P\left( \sum\nolimits_{k = 0}^{\infty } {\frac{D^{k} t^{k} }{k!}} \right) P^{ - 1} = Pe^{Dt} P^{ - 1} . \]
Property 5
Let N be a nilpotent matrix of order k. Then \( e^{Nt} \) is a series containing finite terms only.
Proof
A matrix N is said to be a nilpotent matrix of order or index k if k is the least positive integer such that \( N^{k} = \varphi \) but \( N^{k - 1} \ne \varphi \), φ being the null matrix.
Since N is a nilpotent matrix of order k, \( N^{k - 1} \ne \varphi \) but \( N^{k} = \varphi. \) Therefore
\[ e^{Nt} = I + Nt + \frac{N^{2} t^{2} }{2!} + \cdots + \frac{N^{k - 1} t^{k - 1} }{(k - 1)!} , \]
which is a series of finite terms only.
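The terminating series is easy to check numerically. The Python sketch below (an added illustration; the nilpotent matrix is our own sample) evaluates the finite sum for a shift matrix with \( N^{3} = \varphi \).

```python
import numpy as np
import math

# Sketch: for a nilpotent N with N^k = 0 the exponential series terminates,
# so e^{Nt} is a matrix polynomial in t of degree k - 1.
N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])        # N^3 = 0, nilpotent of index 3
assert np.allclose(np.linalg.matrix_power(N, 3), 0)

def exp_nilpotent(N, t, k):
    # finite sum I + Nt + ... + (Nt)^{k-1}/(k-1)!
    return sum(np.linalg.matrix_power(N * t, j) / math.factorial(j)
               for j in range(k))

E = exp_nilpotent(N, 2.0, 3)
expected = np.array([[1.0, 2.0, 2.0],
                     [0.0, 1.0, 2.0],
                     [0.0, 0.0, 1.0]])
assert np.allclose(E, expected)        # I + 2N + 2N^2
```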
Property 6
If \( A = \left[ {\begin{array}{*{20}c} a & { - b} \\ b & a \\ \end{array} } \right] \), then \( e^{At} = e^{a\,I\,t} \left[ {I\cos (bt )+ J\sin (bt )} \right] \) , where \( I = \left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right] \) and \( J = \left[ {\begin{array}{*{20}c} 0 & { - 1} \\ 1 & 0 \\ \end{array} } \right] \).
Proof
We have
\[ A = aI + bJ, \quad \text{where} \quad J^{2} = - I, \]
and the matrices aI and bJ commute.
Therefore
\[ e^{At} = e^{aIt} e^{bJt} = e^{aIt} \left( I + bJt - \frac{b^{2} t^{2} }{2!} I - \frac{b^{3} t^{3} }{3!} J + \cdots \right) = e^{aIt} \left[ I\cos (bt )+ J\sin (bt ) \right]. \]
Property 7
\( e^{A{+}B} = e^{A} e^{B} \), provided AB = BA.
Proof
Suppose AB = BA. Then by the Binomial theorem,
\[ (A + B)^{n} = \sum\nolimits_{k = 0}^{n} {\frac{n!}{k!(n - k)!} A^{k} B^{n - k} } . \]
Therefore
\[ e^{A + B} = \sum\nolimits_{n = 0}^{\infty } {\frac{(A + B)^{n} }{n!}} = \sum\nolimits_{n = 0}^{\infty } {\sum\nolimits_{k = 0}^{n} {\frac{A^{k} }{k!}\,\frac{B^{n - k} }{(n - k)!}} } = \left( \sum\nolimits_{k = 0}^{\infty } {\frac{A^{k} }{k!}} \right) \left( \sum\nolimits_{m = 0}^{\infty } {\frac{B^{m} }{m!}} \right) = e^{A} e^{B} . \]
It is true that \( e^{A{+}B} = e^{A} e^{B} \) if AB = BA. But in general \( e^{A{+}B} \ne e^{A} e^{B} \).
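The role of commutativity is worth seeing concretely. In the Python sketch below (added here; the matrices are our own samples) the identity \( e^{A + B} = e^{A} e^{B} \) fails for a non-commuting pair but holds for commuting diagonal matrices.

```python
import numpy as np

# Sketch: e^{A+B} = e^A e^B holds when AB = BA but fails in general.
def expm_series(M, terms=40):
    result, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k              # M^k / k!
        result = result + term
    return result

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])
assert not np.allclose(A @ B, B @ A)     # they do not commute
assert not np.allclose(expm_series(A + B), expm_series(A) @ expm_series(B))

C = np.diag([1.0, 2.0])
D = np.diag([3.0, -1.0])                 # diagonal matrices commute
assert np.allclose(expm_series(C + D), expm_series(C) @ expm_series(D))
```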
Property 8
For any n × n matrix A, \( \frac{d}{{{\text{d}}t}}\left( {e^{At} } \right) = Ae^{At} \).
Proof
By definition,
\[ \frac{d}{dt} \left( e^{At} \right) = \frac{d}{dt} \left( I + At + \frac{A^{2} t^{2} }{2!} + \cdots \right) = A + A^{2} t + \frac{A^{3} t^{2} }{2!} + \cdots = A\left( I + At + \frac{A^{2} t^{2} }{2!} + \cdots \right). \]
The term-by-term differentiation is valid because the series for \( e^{At} \) converges uniformly on every bounded interval of t.
Therefore, \( \frac{d}{{{\text{d}}t}}\left( {e^{At} } \right) \) \( = Ae^{At} \).
We now establish the important result below.
Result
Multiplying both sides of \( \frac{d}{{{\text{d}}t}}\left( {e^{At} } \right) = Ae^{At} \) on the right by \( \Phi ( 0 ) \), and writing \( \Phi (t) = e^{At} \Phi (0) \), we have
\[ {\dot{\Phi }} (t) = A\Phi (t). \]
This shows that the fundamental matrix \( \Phi (t ) \) must satisfy the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \). This is true for all t. So, it is true for t = 0. Putting t = 0 in \( {\dot{\Phi }} (t )= A\Phi (t ) \), we get
This gives that the coefficient matrix A can be expressed in terms of the fundamental matrix \( \Phi (t ) \).
Example 2.12
Is \( \Phi (t )= \left( {\begin{array}{*{20}c} {e^{t} } & {e^{ - 2t} } \\ {2e^{t} } & {3e^{ - 2t} } \\ \end{array} } \right) \) a fundamental matrix for the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \)? If so, then find the matrix A.
Solution
We know that if \( \Phi (t ) \) is a fundamental matrix, then \( \Phi ( 0 ) \) is invertible.
Here \( \Phi (t )= \left( {\begin{array}{*{20}c} {e^{t} } & {e^{ - 2t} } \\ {2e^{t} } & {3e^{ - 2t} } \\ \end{array} } \right) \). So, \( \Phi ( 0 )= \left( {\begin{array}{*{20}c} 1 & 1 \\ 2 & 3 \\ \end{array} } \right) \).
Since \( { \det }\left( {\Phi ( 0 )} \right) = 3 - 2 = 1 \ne 0 \), \( \Phi ( 0 ) \) is invertible. Hence the given matrix is a fundamental matrix for the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \). We shall now find the coefficient matrix A.
We have \( \Phi ( 0 )= \left( {\begin{array}{*{20}c} 1 & 1 \\ 2 & 3 \\ \end{array} } \right) \). So \( \Phi ^{ - 1} ( 0 )= \left( {\begin{array}{*{20}c} 3 & { - 1} \\ { - 2} & 1 \\ \end{array} } \right) \).
Also \( {\dot{\Phi }} (t )= \left( {\begin{array}{*{20}c} {e^{t} } & { - 2e^{ - 2t} } \\ {2e^{t} } & { - 6e^{ - 2t} } \\ \end{array} } \right) \), and \( {\dot{\Phi }} ( 0 )= \left( {\begin{array}{*{20}c} 1 & { - 2} \\ 2 & { - 6} \\ \end{array} } \right) \).
Therefore the matrix A is \( A = {\dot{\Phi }} ( 0 )\Phi ^{ - 1} ( 0 )= \left( {\begin{array}{*{20}c} 1 & { - 2} \\ 2 & { - 6} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 3 & { - 1} \\ { - 2} & 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 7 & { - 3} \\ {18} & { - 8} \\ \end{array} } \right). \)
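The computation of Example 2.12 can be sketched in pure Python; \( \Phi(0) \) and \( \dot{\Phi}(0) \) are read off from the given fundamental matrix, and A is recovered as \( A = \dot{\Phi}(0)\Phi^{-1}(0) \).

```python
def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

Phi0 = [[1, 1], [2, 3]]        # Phi(0)
dPhi0 = [[1, -2], [2, -6]]     # Phi'(0)
A = mmul(dPhi0, inv2(Phi0))
print(A)                       # [[7.0, -3.0], [18.0, -8.0]]
```

As a sanity check, A has eigenvalues 1 and −2, matching the growth rates \( e^{t} \) and \( e^{-2t} \) appearing in the columns of \( \Phi(t) \).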
Example 2.13
Find \( e^{At} \) for the matrix \( A = \left( {\begin{array}{*{20}c} 3 & 1 \\ 1 & 3 \\ \end{array} } \right) \). Hence find the solution of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \).
Solution
We see that the eigenvectors corresponding to the eigenvalues λ = 2, 4 of A are respectively \( {\mathop e\limits_{\sim}} = \) \( \left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right) \) and \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ 1 \\ \end{array} } \right) \), which are linearly independent. Therefore, two fundamental solutions of the system are \( \mathop x\limits_{\sim}{_{1}} (t )= \left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right)e^{2t} \) and \( \mathop x\limits_{\sim}{_{2}} (t )= \left( {\begin{array}{*{20}c} 1 \\ 1 \\ \end{array} } \right)e^{4t} \). So a fundamental matrix of the system is
We find \( \Phi ( 0 )= \left( {\begin{array}{*{20}c} 1 & 1 \\ { - 1} & 1 \\ \end{array} } \right) \) and \( \Phi ^{ - 1} ( 0 )= \frac{1}{2}\left( {\begin{array}{*{20}c} 1 & { - 1} \\ 1 & 1 \\ \end{array} } \right) \). Therefore \( e^{At} = \Phi (t )\Phi ^{ - 1} ( 0 )= \frac{1}{2}\left( {\begin{array}{*{20}c} {e^{2t} + e^{4t} } & {e^{4t} - e^{2t} } \\ {e^{4t} - e^{2t} } & {e^{2t} + e^{4t} } \\ \end{array} } \right). \)
By the fundamental theorem, the solution of the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) is \( {\mathop x\limits_{\sim}} (t )= e^{At} \mathop x\limits_{\sim}{_{0}}, \)
where \( \mathop x\limits_{\sim}{_{0}} = \left( {\begin{array}{*{20}c} {c_{1} } \\ {c_{2} } \\ \end{array} } \right) \) is an arbitrary constant column vector.
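A numerical cross-check of Example 2.13 (a sketch in pure Python): the truncated Taylor series for \( e^{At} \) agrees with the closed form obtained from the fundamental matrix.

```python
from math import exp, factorial

def mexp(M, terms=40):
    """Truncated Taylor series for e^{M} (2x2)."""
    S = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for m in range(1, terms):
        P = [[sum(P[i][k] * M[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
        S = [[S[i][j] + P[i][j] / factorial(m) for j in range(2)] for i in range(2)]
    return S

t = 0.3
At = [[3 * t, t], [t, 3 * t]]                     # A t with A = [[3,1],[1,3]]
E = mexp(At)
# Closed form: e^{At} = (1/2) [[e^{2t}+e^{4t}, e^{4t}-e^{2t}], [e^{4t}-e^{2t}, e^{2t}+e^{4t}]]
closed = [[(exp(2*t) + exp(4*t)) / 2, (exp(4*t) - exp(2*t)) / 2],
          [(exp(4*t) - exp(2*t)) / 2, (exp(2*t) + exp(4*t)) / 2]]
assert all(abs(E[i][j] - closed[i][j]) < 1e-9 for i in range(2) for j in range(2))
print("series and closed form agree at t = 0.3")
```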
2.4 Solution Procedure of Linear Systems
The general solution of a linear homogeneous system can be easily deduced from the fundamental theorem. According to this theorem the solution of \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) with \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \) is given as \( {\mathop x\limits_{\sim}} (t) = e^{At} \mathop x\limits_{\sim}{_{0}} \) and this solution is unique.
Under the change of coordinates \( {\mathop x\limits_{\sim}} = P{\kern 1pt} {\mathop y\limits_{\sim}} \), where P is an invertible matrix, the equation \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) is transformed into \( {\mathop {\dot{y}}\limits_{\sim }} = P^{ - 1} AP{\mathop y\limits_{\sim}}. \)
The initial conditions \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \) become \( {\mathop y\limits_{\sim}} ( 0 )= P^{ - 1} {\mathop x\limits_{\sim}} ( 0 )= P^{ - 1} \mathop x\limits_{\sim}{_{0}} = \mathop y\limits_{\sim}{_{0}} \). So, the new system is \( {\mathop {\dot{y}}\limits_{\sim }} = C{\mathop y\limits_{\sim}} \) with \( {\mathop y\limits_{\sim}} ( 0 )= \mathop y\limits_{\sim}{_{0}} \), where \( C = P^{ - 1} AP \).
It has the solution \( {\mathop y\limits_{\sim}} (t )= e^{Ct} \mathop y\limits_{\sim}{_{0}}. \)
Hence the solution of the original system is \( {\mathop x\limits_{\sim}} (t )= P{\mathop y\limits_{\sim}} (t )= Pe^{Ct} P^{ - 1} \mathop x\limits_{\sim}{_{0}}. \)
We see that \( e^{At} = Pe^{Ct} P^{ - 1} \). The matrix P is chosen in such a way that the matrix C takes a simple form. We now discuss three cases.
(i) Matrix A has distinct real eigenvalues
Let \( \lambda_{1} ,\lambda_{2} , \ldots ,\lambda_{n} \) be the distinct real eigenvalues of A with corresponding linearly independent eigenvectors \( {\mathop \alpha \limits_{ \sim }}{_{1}} ,{\mathop \alpha \limits_{ \sim }}{_{2}} , \ldots ,{\mathop \alpha \limits_{ \sim }}{_{n}} \), and let \( P = \left( {{\mathop \alpha \limits_{ \sim }}{_{1}} ,{\mathop \alpha \limits_{ \sim }}{_{2}} , \ldots ,{\mathop \alpha \limits_{ \sim }}{_{n}} } \right) \) so that \( P^{ - 1} \) exists. The matrix \( C = P^{ - 1} AP = {\text{diag}}(\lambda_{1} ,\lambda_{2} , \ldots ,\lambda_{n} ) \) is diagonal. Hence the exponential function of C becomes \( e^{Ct} = {\text{diag}}\left( {e^{\lambda_{1} t} ,e^{\lambda_{2} t} , \ldots ,e^{\lambda_{n} t} } \right). \)
Therefore we can write the solution of \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) with \( {\mathop x\limits_{\sim}} ( 0 )= \mathop x\limits_{\sim}{_{0}} \) as \( {\mathop x\limits_{\sim}} (t )= e^{At} \mathop x\limits_{\sim}{_{0}} = Pe^{Ct} P^{ - 1} \mathop x\limits_{\sim}{_{0}} \). So \( {\mathop x\limits_{\sim}} (t )= P\,{\text{diag}}\left( {e^{\lambda_{1} t} , \ldots ,e^{\lambda_{n} t} } \right)P^{ - 1} \mathop x\limits_{\sim}{_{0}}, \) where \( \mathop x\limits_{\sim}{_{0}} = (c_{1} ,c_{2} , \ldots ,c_{n} )^{t} \) is an arbitrary constant vector.
(ii) Matrix A has real repeated eigenvalues
In this case the following theorems (proofs are available in the book by Hirsch and Smale [2]) are relevant for finding the general solution of a linear system when matrix A has repeated eigenvalues.
Theorem 2.2
Let the n × n matrix A have real eigenvalues \( \lambda_{1} ,\lambda_{2} , \ldots ,\lambda_{n} \), repeated according to their multiplicity. Then there exists a basis of generalized eigenvectors \( \{ {\mathop \alpha \limits_{ \sim }}{_{1}} ,{\mathop \alpha \limits_{ \sim }}{_{2}} , \ldots ,{\mathop \alpha \limits_{ \sim }}{_{n}} \} \) such that the matrix \( P = ({\mathop \alpha \limits_{ \sim }}{_{1}} ,{\mathop \alpha \limits_{ \sim }}{_{2}} , \ldots ,{\mathop \alpha \limits_{ \sim }}{_{n}} ) \) is invertible and A = S + N, where \( P^{ - 1} SP = {\text{diag}}(\lambda_{1} ,\lambda_{2} , \ldots ,\lambda_{n} ) \) and N(=A − S) is nilpotent of order k ≤ n, and S and N commute.
Using this theorem, the linear system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) subject to the initial conditions \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \) has the solution \( {\mathop x\limits_{\sim}} (t) = e^{St} e^{Nt} \mathop x\limits_{\sim}{_{0}} = P\,{\text{diag}}\left( {e^{\lambda_{j} t} } \right)P^{ - 1} \left[ {I + Nt + \cdots + \frac{{N^{k - 1} t^{k - 1} }}{ (k - 1 )\, !}} \right]\mathop x\limits_{\sim}{_{0}}. \)
(iii) Matrix A has complex eigenvalues
Theorem 2.3
Let A be a 2n × 2n matrix with complex eigenvalues \( a_{j} \pm ib_{j} \), j = 1, 2, …, n. Then there exist generalized complex eigenvectors \( ({\mathop \alpha \limits_{ \sim }}{_{j}} \pm i{\mathop \beta \limits_{ \sim }}{_{j}} ),j = 1,2, \ldots ,n \) such that the matrix \( P = ({\mathop \beta \limits_{ \sim }}{_{1}} ,{\mathop \alpha \limits_{ \sim }}{_{1}} ,{\mathop \beta \limits_{ \sim }}{_{2}} ,{\mathop \alpha \limits_{ \sim }}{_{2}} , \ldots ,{\mathop \beta \limits_{ \sim }}{_{n}} ,{\mathop \alpha \limits_{ \sim }}{_{n}} ) \) is invertible and A = S + N, where \( P^{ - 1} SP = {\text{diag}}\left[ {\begin{array}{*{20}c} {a_{j} } & { - b_{j} } \\ {b_{j} } & {a_{j} } \\ \end{array} } \right] \), and N(=A − S) is a nilpotent matrix of order k ≤ 2n, and S and N commute.
Using this theorem, the linear system of equations subject to the initial conditions \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \) has the solution \( {\mathop x\limits_{\sim}} (t) = P\,{\text{diag}}\left( {e^{a_{j} t} \left[ {\begin{array}{*{20}c} {\cos b_{j} t} & { - \sin b_{j} t} \\ {\sin b_{j} t} & {\cos b_{j} t} \\ \end{array} } \right]} \right)P^{ - 1} \left[ {I + Nt + \cdots + \frac{{N^{k - 1} t^{k - 1} }}{ (k - 1 )\, !}} \right]\mathop x\limits_{\sim}{_{0}}. \)
For a 2 × 2 matrix A with complex eigenvalues \( (\alpha \pm i\beta) \) the solution is given by \( {\mathop x\limits_{\sim}} (t) = e^{\alpha t} P\left[ {\begin{array}{*{20}c} {\cos \beta t} & { - \sin \beta t} \\ {\sin \beta t} & {\cos \beta t} \\ \end{array} } \right]P^{ - 1} \mathop x\limits_{\sim}{_{0}}. \)
Example 2.14
Solve the initial value problem \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where \( A = \left( {\begin{array}{*{20}c} 1 & 1 \\ 4 & { - 2} \\ \end{array} } \right), \)
with initial condition \( {\mathop x\limits_{\sim}} ( 0 )= \left( {\begin{array}{*{20}c} 2 \\ { - 3} \\ \end{array} } \right) \).
Solution
The characteristic equation of matrix A is \( \lambda^{2} + \lambda - 6 = 0, \) that is, \( (\lambda - 2 ) (\lambda + 3 )= 0. \)
So the eigenvalues of matrix A are 2, −3, which are real and distinct.
Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{1} = 2 \). Then \( (A - 2I ){\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}}. \)
A nontrivial solution of this system is \( e_{1} = 1,e_{2} = 1 \).
Again let \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} {g_{1} } \\ {g_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{2} = - 3 \). Then \( (A + 3I ){\mathop g\limits_{\sim}} = {\mathop 0\limits_{\sim}}. \)
A nontrivial solution of this system is \( g_{1} = 1,g_{2} = - 4 \).
Let \( P = \left( {{\mathop e\limits_{\sim}} ,{\mathop g\limits_{\sim}} } \right) = \left( {\begin{array}{*{20}c} 1 & 1 \\ 1 & { - 4} \\ \end{array} } \right) \). Then \( P^{ - 1} = - \frac{1}{5}\left( {\begin{array}{*{20}c} { - 4} & { - 1} \\ { - 1} & 1 \\ \end{array} } \right) = \frac{1}{5}\left( {\begin{array}{*{20}c} 4 & 1 \\ 1 & { - 1} \\ \end{array} } \right) \)
Therefore by the fundamental theorem, the solution of the system is \( {\mathop x\limits_{\sim}} (t )= Pe^{Ct} P^{ - 1} {\mathop x\limits_{\sim}} ( 0 )= \left( {\begin{array}{*{20}c} 1 & 1 \\ 1 & { - 4} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {e^{2t} } & 0 \\ 0 & {e^{ - 3t} } \\ \end{array} } \right)\frac{1}{5}\left( {\begin{array}{*{20}c} 4 & 1 \\ 1 & { - 1} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 2 \\ { - 3} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {e^{2t} + e^{ - 3t} } \\ {e^{2t} - 4e^{ - 3t} } \\ \end{array} } \right). \)
Example 2.15
Solve the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where \( A = \left( {\begin{array}{*{20}c} { - 1} & { - 3} \\ 0 & 2 \\ \end{array} } \right). \)
Also sketch the phase portrait .
Solution
The characteristic equation of matrix A is \( \lambda^{2} - \lambda - 2 = 0, \) that is, \( (\lambda + 1 ) (\lambda - 2 )= 0. \)
The eigenvalues of matrix A are −1, 2, which are real and distinct.
Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{1} = - 1 \). Then \( (A + I ){\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}} \), which gives \( e_{2} = 0 \) with \( e_{1} \) arbitrary.
Choose \( e_{1} \) = 1 so that \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \).
Again, let \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} {g_{1} } \\ {g_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{2} = 2 \). Then \( (A - 2I ){\mathop g\limits_{\sim}} = {\mathop 0\limits_{\sim}} \), which gives \( g_{2} = - g_{1} \).
Choose \( g_{1} = 1,g_{2} = - 1 \). Then \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right) \).
Let \( P = \left( {{\mathop e\limits_{\sim}} ,{\mathop g\limits_{\sim}} } \right) = \left( {\begin{array}{*{20}c} 1 & 1 \\ 0 & { - 1} \\ \end{array} } \right) \). Then \( P^{ - 1} = \left( {\begin{array}{*{20}c} 1 & 1 \\ 0 & { - 1} \\ \end{array} } \right) \)
Therefore \( C = P^{ - 1} AP = \left( {\begin{array}{*{20}c} { - 1} & 0 \\ 0 & 2 \\ \end{array} } \right), \)
and so \( e^{Ct} = \left( {\begin{array}{*{20}c} {e^{ - t} } & 0 \\ 0 & {e^{2t} } \\ \end{array} } \right) \).
Therefore by the fundamental theorem, the solution of the system is \( {\mathop x\limits_{\sim}} (t )= Pe^{Ct} P^{ - 1} \mathop x\limits_{\sim}{_{0}} = c_{1} e^{ - t} \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) + c_{2} e^{2t} \left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right), \)
where \( c_{1} ,c_{2} \) are arbitrary constants. The phase diagram is presented in Fig. 2.1.
Example 2.16
Solve the following system using the fundamental theorem: \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where \( A = \left( {\begin{array}{*{20}c} 5 & 4 \\ { - 1} & 1 \\ \end{array} } \right). \)
Solution
The characteristic equation of matrix A is \( \lambda^{2} - 6\lambda + 9 = 0, \) that is, \( (\lambda - 3 )^{2} = 0. \)
This shows that matrix A has an eigenvalue λ = 3 of multiplicity 2. Then \( S = \left[ {\begin{array}{*{20}c} 3 & 0 \\ 0 & 3 \\ \end{array} } \right] \) and \( N = A - S = \left[ {\begin{array}{*{20}c} 2 & 4 \\ { - 1} & { - 2} \\ \end{array} } \right] \). Clearly, matrix N is a nilpotent matrix of order 2. So, the general solution of the system is given by \( {\mathop x\limits_{\sim}} (t )= e^{3t} \left( {I + Nt} \right)\mathop x\limits_{\sim}{_{0}} = e^{3t} \left[ {\begin{array}{*{20}c} {1 + 2t} & {4t} \\ { - t} & {1 - 2t} \\ \end{array} } \right]\mathop x\limits_{\sim}{_{0}}. \)
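Example 2.16 can be sketched numerically in pure Python. Here A = S + N is assembled from the S and N quoted above, and the truncated exponential series is compared with the finite form \( e^{At} = e^{3t}(I + Nt) \), which holds because S and N commute and N is nilpotent of order 2.

```python
from math import exp, factorial

def mexp(M, terms=60):
    """Truncated Taylor series for e^{M} (2x2)."""
    S = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for m in range(1, terms):
        P = [[sum(P[i][k] * M[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
        S = [[S[i][j] + P[i][j] / factorial(m) for j in range(2)] for i in range(2)]
    return S

t = 0.5
A = [[5, 4], [-1, 1]]                           # A = S + N with S = 3I, N = [[2,4],[-1,-2]]
At = [[A[i][j] * t for j in range(2)] for i in range(2)]
E = mexp(At)
closed = [[exp(3*t) * (1 + 2*t), exp(3*t) * 4 * t],
          [exp(3*t) * (-t),      exp(3*t) * (1 - 2*t)]]   # e^{3t}(I + Nt)
assert all(abs(E[i][j] - closed[i][j]) < 1e-8 for i in range(2) for j in range(2))
print("e^{At} = e^{3t}(I + Nt) verified at t = 0.5")
```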
Example 2.17
Find the general solution of the system of linear equations \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), where \( A = \left( {\begin{array}{*{20}c} 4 & { - 2} \\ 5 & 2 \\ \end{array} } \right). \)
Solution
The characteristic equation of matrix A is \( \lambda^{2} - 6\lambda + 18 = 0. \) So matrix A has a pair of complex conjugate eigenvalues \( 3 \pm 3i \).
Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{1} = 3 + 3i \). Then \( \left[ {A - (3 + 3i )I} \right]{\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}} \), that is, \( (1 - 3i )e_{1} - 2e_{2} = 0, \quad 5e_{1} - (1 + 3i )e_{2} = 0. \)
A nontrivial solution of this system is \( e_{1} = 2,\;e_{2} = 1-3i \).
Similarly, the eigenvector corresponding to the eigenvalue \( \lambda_{2} = 3-3i\; \) is \( \quad {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 2 \\ {1 + 3i} \\ \end{array} } \right) \).
Let \( P = \left( {\begin{array}{*{20}c} 0 & 2 \\ { - 3} & 1 \\ \end{array} } \right) \). Then \( P^{-1} = \frac{1}{6} \left( {\begin{array}{*{20}c} 1 & { - 2} \\ 3 & 0 \\ \end{array} } \right) \).
Let \( C = P^{ - 1} AP \). Then \( C = P^{ - 1} AP = \frac{1}{6}\left( {\begin{array}{*{20}c} 1 & { - 2} \\ 3 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 4 & { - 2} \\ 5 & 2 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 0 & 2 \\ { - 3} & 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 3 & { - 3} \\ 3 & 3 \\ \end{array} } \right). \)
So, \( e^{Ct} = e^{3t} \left( {\begin{array}{*{20}c} {\cos 3t} & { - \sin 3t} \\ {\sin 3t} & {\cos 3t} \\ \end{array} } \right). \)
Therefore, the solution of the system is \( {\mathop x\limits_{\sim}} (t) = Pe^{Ct} P^{ - 1} \mathop x\limits_{\sim}{_{0}} = e^{3t} \left( {\begin{array}{*{20}c} 0 & 2 \\ { - 3} & 1 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\cos 3t} & { - \sin 3t} \\ {\sin 3t} & {\cos 3t} \\ \end{array} } \right)\frac{1}{6}\left( {\begin{array}{*{20}c} 1 & { - 2} \\ 3 & 0 \\ \end{array} } \right)\mathop x\limits_{\sim}{_{0}}, \) where \( \mathop x\limits_{\sim}{_{0}} \) is an arbitrary constant vector.
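The rotation form of the exponential used in Example 2.17 can be checked numerically (a sketch, pure Python): for \( C = \begin{pmatrix} a & -b \\ b & a \end{pmatrix} \) with a = b = 3, the truncated series matches \( e^{Ct} = e^{at}\begin{pmatrix} \cos bt & -\sin bt \\ \sin bt & \cos bt \end{pmatrix} \).

```python
from math import exp, cos, sin, factorial

def mexp(M, terms=40):
    """Truncated Taylor series for e^{M} (2x2)."""
    S = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for m in range(1, terms):
        P = [[sum(P[i][k] * M[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
        S = [[S[i][j] + P[i][j] / factorial(m) for j in range(2)] for i in range(2)]
    return S

a, b, t = 3.0, 3.0, 0.2
Ct = [[a * t, -b * t], [b * t, a * t]]
E = mexp(Ct)
closed = [[exp(a*t) * cos(b*t), -exp(a*t) * sin(b*t)],
          [exp(a*t) * sin(b*t),  exp(a*t) * cos(b*t)]]
assert all(abs(E[i][j] - closed[i][j]) < 1e-9 for i in range(2) for j in range(2))
print("rotation form of e^{Ct} verified at t = 0.2")
```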
Example 2.18
Solve the initial value problem \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \), with \( {\mathop x\limits_{\sim}} ( 0 )= \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \), where \( A = \left( {\begin{array}{*{20}c} { - 2} & { - 1} \\ 1 & { - 2} \\ \end{array} } \right),{\mathop x\limits_{\sim}} = \left( {\begin{array}{*{20}c} x \\ y \\ \end{array} } \right) \). Also sketch the solution curve in the phase plane \( {\mathbf{\mathbb{R}}}^{2} \).
Solution
The characteristic equation of matrix A is \( \lambda^{2} + 4\lambda + 5 = 0. \) So matrix A has a pair of complex conjugate eigenvalues \( - 2 \pm i \).
Let \( {\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} {e_{1} } \\ {e_{2} } \\ \end{array} } \right) \) be the eigenvector corresponding to the eigenvalue \( \lambda_{1} = - 2 + i \). Then \( \left[ {A - ( - 2 + i )I} \right]{\mathop e\limits_{\sim}} = {\mathop 0\limits_{\sim}} \), that is, \( - ie_{1} - e_{2} = 0, \quad e_{1} - ie_{2} = 0. \)
A nontrivial solution of this system is \( e_{1} = 1,e_{2} = - i \).
\( \therefore\,{\mathop e\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ { - i} \\ \end{array} } \right) \). Similarly, the eigenvector corresponding to the eigenvalue \( \lambda_{2} = - 2 - i \) is \( {\mathop g\limits_{\sim}} = \left( {\begin{array}{*{20}c} 1 \\ i \\ \end{array} } \right) \). Let \( P = \left( {\begin{array}{*{20}c} 0 & 1 \\ { - 1} & 0 \\ \end{array} } \right) \). Then \( P^{ - 1} = \left( {\begin{array}{*{20}c} 0 & { - 1} \\ 1 & 0 \\ \end{array} } \right) \) and \( C = P^{ - 1} AP = \left( {\begin{array}{*{20}c} { - 2} & { - 1} \\ 1 & { - 2} \\ \end{array} } \right). \)
So, \( e^{Ct} = e^{ - 2t} \left( {\begin{array}{*{20}c} {\cos t} & { - \sin t} \\ {\sin t} & {\cos t} \\ \end{array} } \right). \)
Hence the solution of the system is \( {\mathop x\limits_{\sim}} (t) = Pe^{Ct} P^{ - 1} {\mathop x\limits_{\sim}} ( 0 )= e^{ - 2t} \left( {\begin{array}{*{20}c} {\cos t} \\ {\sin t} \\ \end{array} } \right). \)
Phase Portrait
The phase portrait of the solution curve is shown in Fig. 2.2.
Example 2.19
Solve the system \( {\mathop {\dot{x}}\limits_{\sim }} = A{\mathop x\limits_{\sim}} \) with \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \), where \( A = \left( {\begin{array}{*{20}c} 2 & 1 & 3 & { - 1} \\ 0 & 2 & 2 & { - 1} \\ 0 & 0 & 2 & { - 5} \\ 0 & 0 & 0 & 2 \\ \end{array} } \right). \)
Solution
Clearly, matrix A has the eigenvalue λ = 2 with multiplicity 4. Therefore, \( S = 2I \) and \( N = A - S = \left( {\begin{array}{*{20}c} 0 & 1 & 3 & { - 1} \\ 0 & 0 & 2 & { - 1} \\ 0 & 0 & 0 & { - 5} \\ 0 & 0 & 0 & 0 \\ \end{array} } \right). \) It is easy to check that the matrix N is nilpotent of order 4. Therefore, the solution of the system is \( {\mathop x\limits_{\sim}} (t) = e^{2t} \left( {I + Nt + \frac{{N^{2} t^{2} }}{2\, !} + \frac{{N^{3} t^{3} }}{3\, !}} \right)\mathop x\limits_{\sim}{_{0}}. \)
2.5 Nonhomogeneous Linear Systems
The most general form of a nonhomogeneous linear system is given as
where A(t) is an n × n matrix, in general depending on time, and \( {\mathop b\limits_{\sim}} (t) \) is a time-dependent column vector. Here we consider the matrix A(t) to be time independent, that is, A(t) ≡ A. Then (2.16) becomes
The corresponding homogeneous system is given as
We have described solution techniques for homogeneous system (2.18). We now find the solution of the nonhomogeneous system (2.17), subject to initial conditions \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \).
As discussed earlier, if Φ(t) is the fundamental matrix of (2.18) with \( \Phi ( 0 )= I \), then the solution of (2.18) with \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0} }\) is given by \( {\mathop x\limits_{\sim}} (t) = \Phi (t )\mathop x\limits_{\sim}{_{0}}. \)
We assume that \( {\mathop x\limits_{\sim}} (t) = \Phi (t )\left[ {\mathop x\limits_{\sim}{_{0}} + {\mathop u\limits_{\sim}} (t)} \right] \quad (2.19) \) is the solution of the nonhomogeneous linear system (2.17). Then the initial condition gives \( {\mathop u\limits_{\sim}} (0) = 0 \). Differentiating (2.19) with respect to t, we get \( {\mathop {\dot{x}}\limits_{\sim }} (t) = {\dot{\Phi }} (t )\left[ {\mathop x\limits_{\sim}{_{0}} + {\mathop u\limits_{\sim}} (t)} \right] + \Phi (t ){\mathop {\dot{u}}\limits_{\sim }} (t). \quad (2.20) \)
Substituting (2.20) and (2.19) into (2.17), we get \( {\dot{\Phi }} (t )\left[ {\mathop x\limits_{\sim}{_{0}} + {\mathop u\limits_{\sim}} (t)} \right] + \Phi (t ){\mathop {\dot{u}}\limits_{\sim }} (t) = A\Phi (t )\left[ {\mathop x\limits_{\sim}{_{0}} + {\mathop u\limits_{\sim}} (t)} \right] + {\mathop b\limits_{\sim}} (t). \quad (2.21) \) Since Φ(t) is a fundamental matrix solution of (2.18), \( {\dot{\Phi }} (t )= A\Phi (t ) \). Using this in (2.21), we get \( \Phi (t ){\mathop {\dot{u}}\limits_{\sim }} (t) = {\mathop b\limits_{\sim}} (t), \) that is, \( {\mathop {\dot{u}}\limits_{\sim }} (t) = \Phi ^{ - 1} (t ){\mathop b\limits_{\sim}} (t). \)
Integrating with respect to t and using \( {\mathop u\limits_{\sim}} (0) = 0 \), we get \( {\mathop u\limits_{\sim}} (t) = \int\limits_{0}^{t} {\Phi ^{ - 1} (s ){\mathop b\limits_{\sim}} (s)\,{\text{d}}s}. \) Hence the general solution of the nonhomogeneous system (2.17) subject to \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \) is given by \( {\mathop x\limits_{\sim}} (t) = \Phi (t )\mathop x\limits_{\sim}{_{0}} + \Phi (t )\int\limits_{0}^{t} {\Phi ^{ - 1} (s ){\mathop b\limits_{\sim}} (s)\,{\text{d}}s} = e^{At} \mathop x\limits_{\sim}{_{0}} + \int\limits_{0}^{t} {e^{A (t - s )} {\mathop b\limits_{\sim}} (s)\,{\text{d}}s}. \)
Example 2.20
Find the solution of the nonhomogeneous system \( \dot{x} = x + y + t,\;\dot{y} = - y + 1 \) with the initial conditions x(0) = 1, y(0) = 0.
Solution
In matrix notation, the system takes the form \( {\mathop {\dot{x}}\limits_{\sim }} (t) = A{\mathop x\limits_{\sim}} (t) + {\mathop b\limits_{\sim}} (t) \), where \( A = \left( {\begin{array}{*{20}c} 1 & 1 \\ 0 & { - 1} \\ \end{array} } \right) \) and \( {\mathop b\limits_{\sim}} (t) = \left( {\begin{array}{*{20}c} t \\ 1 \\ \end{array} } \right). \)
The initial conditions become \( {\mathop x\limits_{\sim}} (0) = \mathop x\limits_{\sim}{_{0}} \), where \( \mathop x\limits_{\sim}{_{0}} = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \). Matrix A has eigenvalues \( \lambda_{1} = 1 \), \( \lambda_{2} = - 1 \) with corresponding eigenvectors \( \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \) and \( \left( {\begin{array}{*{20}c} 1 \\ { - 2} \\ \end{array} } \right) \). Therefore \( e^{At} = \left( {\begin{array}{*{20}c} {e^{t} } & {\frac{1}{2}\left( {e^{t} - e^{ - t} } \right)} \\ 0 & {e^{ - t} } \\ \end{array} } \right). \) This gives \( {\mathop x\limits_{\sim}} (t) = e^{At} \mathop x\limits_{\sim}{_{0}} + \int\limits_{0}^{t} {e^{A (t - s )} {\mathop b\limits_{\sim}} (s)\,{\text{d}}s}. \) Therefore the required solution is \( x(t) = \frac{5}{2}e^{t} + \frac{1}{2}e^{ - t} - t - 2, \quad y(t) = 1 - e^{ - t}. \)
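The answer of Example 2.20 can be cross-checked numerically (a sketch, pure Python): integrate \( \dot{x} = x + y + t, \; \dot{y} = -y + 1 \) with a standard RK4 stepper and compare at t = 1 with the closed form \( x(t) = \tfrac{5}{2}e^{t} + \tfrac{1}{2}e^{-t} - t - 2, \; y(t) = 1 - e^{-t} \).

```python
from math import exp

def f(t, x, y):
    """Right-hand side of the nonhomogeneous system."""
    return (x + y + t, -y + 1.0)

def rk4(T, n=2000):
    """Classical fourth-order Runge-Kutta from t = 0 to t = T."""
    t, x, y = 0.0, 1.0, 0.0          # initial conditions x(0) = 1, y(0) = 0
    h = T / n
    for _ in range(n):
        k1 = f(t, x, y)
        k2 = f(t + h/2, x + h/2 * k1[0], y + h/2 * k1[1])
        k3 = f(t + h/2, x + h/2 * k2[0], y + h/2 * k2[1])
        k4 = f(t + h, x + h * k3[0], y + h * k3[1])
        x += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
    return x, y

T = 1.0
x_num, y_num = rk4(T)
x_exact = 2.5 * exp(T) + 0.5 * exp(-T) - T - 2.0
y_exact = 1.0 - exp(-T)
assert abs(x_num - x_exact) < 1e-6 and abs(y_num - y_exact) < 1e-6
print("RK4 agrees with the closed-form solution at t = 1")
```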
Example 2.21
Prove that the flow evolution operator \( \phi_{t} ({\mathop x\limits_{\sim}} ) = e^{At} {\mathop x\limits_{\sim}} \) satisfies the following properties:
(i) \( \phi_{0} ({\mathop x\limits_{\sim}} ) = {\mathop x\limits_{\sim}} \),
(ii) \( \phi_{ - t} \circ \phi_{t} ({\mathop x\limits_{\sim}} ) = {\mathop x\limits_{\sim}} \),
(iii) \( \phi_{t} \circ \phi_{s} ({\mathop x\limits_{\sim}} ) = \phi_{t + s} ({\mathop x\limits_{\sim}} ) \)
for all s, \( t \in {\mathbf{\mathbb{R}}} \) and \( {\mathop x\limits_{\sim}} \in {\mathbf{\mathbb{R}}}^{n} \). Is \( \phi_{t} \circ \phi_{s} = \phi_{s} \circ \phi_{t}? \)
Solution
We have
(i) \( \phi_{0} ({\mathop x\limits_{\sim}} ) = e^{A \cdot \,0} {\mathop x\limits_{\sim}} = {\mathop x\limits_{\sim}}. \)
(ii) \( \phi_{ - t} \circ \phi_{t} ({\mathop x\limits_{\sim}} ) = \phi_{ - t} ({\mathop y\limits_{\sim}} ) = e^{ - At} {\mathop y\limits_{\sim}} = e^{ - At} e^{At} {\mathop x\limits_{\sim}} = {\mathop x\limits_{\sim}} \), where \( {\mathop y\limits_{\sim}} = e^{At} {\mathop x\limits_{\sim}} \).
(iii) \( \phi_{t} \circ \phi_{s} ({\mathop x\limits_{\sim}} ) = \phi_{t} ({\mathop y\limits_{\sim}} ) = e^{At} {\mathop y\limits_{\sim}} = e^{At} e^{As} {\mathop x\limits_{\sim}} = e^{A(t + s)} {\mathop x\limits_{\sim}} = \phi_{t + s} ({\mathop x\limits_{\sim}} ), \) where \( {\mathop y\limits_{\sim}} = e^{As} {\mathop x\limits_{\sim}} \).
Now, \( \phi_{t} \circ \phi_{s} ({\mathop x\limits_{\sim}} ) = \phi_{t} ({\mathop z\limits_{\sim}} ) = e^{At} e^{As} {\mathop x\limits_{\sim}} = e^{A(t + s)} {\mathop x\limits_{\sim}} = e^{As} e^{At} {\mathop x\limits_{\sim}} = \phi_{s} \circ \phi_{t} ({\mathop x\limits_{\sim}} ) \) for all \( {\mathop x\limits_{\sim}} \in {\mathbf{\mathbb{R}}}^{n} \), where \( {\mathop z\limits_{\sim}} = e^{As} {\mathop x\limits_{\sim}} \).
Hence \( \phi_{t} \circ \phi_{s} = \phi_{s} \circ \phi_{t} \). This indicates that the given flow evolution operator is commutative.
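A small numerical illustration of the flow properties (the matrix A below is chosen arbitrarily for the sketch and is not from the text): \( \phi_0 \) is the identity, \( \phi_{-t} \) inverts \( \phi_t \), and \( \phi_t \circ \phi_s = \phi_{t+s} = \phi_s \circ \phi_t \).

```python
from math import factorial

def mexp(M, terms=40):
    """Truncated Taylor series for e^{M} (2x2)."""
    S = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for m in range(1, terms):
        P = [[sum(P[i][k] * M[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
        S = [[S[i][j] + P[i][j] / factorial(m) for j in range(2)] for i in range(2)]
    return S

def phi(t, x, A=((1.0, 2.0), (0.0, -1.0))):
    """Flow evolution operator phi_t(x) = e^{At} x."""
    At = [[A[i][j] * t for j in range(2)] for i in range(2)]
    E = mexp(At)
    return (E[0][0] * x[0] + E[0][1] * x[1], E[1][0] * x[0] + E[1][1] * x[1])

x = (1.0, -2.0)
s, t = 0.3, 0.5
a = phi(t, phi(s, x))            # phi_t o phi_s
b = phi(s, phi(t, x))            # phi_s o phi_t
c = phi(t + s, x)                # phi_{t+s}
assert all(abs(a[i] - c[i]) < 1e-9 and abs(b[i] - c[i]) < 1e-9 for i in range(2))
back = phi(-t, phi(t, x))        # phi_{-t} o phi_t = identity
assert all(abs(back[i] - x[i]) < 1e-9 for i in range(2))
print("flow properties verified")
```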
2.6 Exercises
References
Perko, L.: Differential Equations and Dynamical Systems, 3rd edn. Springer, New York (2001)
Hirsch, M.W., Smale, S.: Differential Equations, Dynamical Systems and Linear Algebra. Academic Press, London (1974)
Robinson, R.C.: An Introduction to Dynamical Systems: Continuous and Discrete. American Mathematical Society (2012)
Jordan, D.W., Smith, P.: Non-linear Ordinary Differential Equations. Oxford University Press, Oxford (2007)
Friedberg, S.H., Insel, A.J., Spence, L.E.: Linear Algebra. Prentice Hall (2003)
Hoffman, K., Kunze, R.: Linear Algebra. Prentice Hall (1971)
© 2015 Springer India
Layek, G.C. (2015). Linear Systems. In: An Introduction to Dynamical Systems and Chaos. Springer, New Delhi. https://doi.org/10.1007/978-81-322-2556-0_2
Publisher Name: Springer, New Delhi
Print ISBN: 978-81-322-2555-3
Online ISBN: 978-81-322-2556-0