In this book we consider only processes whose future configurations are completely determined by their initial states, i.e. the so-called deterministic evolutionary processes. Since this chapter deals with basic concepts of the theory of ordinary differential equations, the reader is encouraged to consult the monographs [10, 11, 54, 63, 70, 84, 103, 105, 111, 114, 122, 193, 216, 217, 236, 257].

A solution \(x =\varphi (t)\) to the system of ordinary differential equations

$$\displaystyle{ \dot{x} = F(t,x), }$$
(1.1)

where \(x = (x_{1},\ldots,x_{n})\) and \(F = (F_{1},\ldots,F_{n})\), is a function \(\varphi (t)\) which, substituted into (1.1), satisfies it identically. We assume that \(F_{1},\ldots,F_{n}\) are \(C^{r}\) (\(r \geq 1\)) smooth functions.

The representation of \(\varphi (t)\) in the space \(\mathbb{R}^{n+1}\) of the variables (t, x) is called an integral curve of (1.1).

The solutions of (1.1) have the following properties:

  1. (i)

    If \(x =\varphi (t)\) is a solution to (1.1), then \(x =\varphi (t + t_{0})\) is also a solution to (1.1). Both correspond to the same initial point \(x_{0}\) but to different time instants.

  2. (ii)

    Let the solution satisfy the initial condition \(x_{0} =\varphi (t_{0})\). Then it can be written in the form \(x =\varphi (t - t_{0},x_{0})\), where \(\varphi (0,x_{0}) = x_{0}\), t ≥ t 0.

  3. (iii)

    The following group property is satisfied:

$$\displaystyle{ \varphi (t_{2},\varphi (t_{1},x_{0})) =\varphi (t_{1} + t_{2},x_{0}). }$$
(1.2)
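
The group property (1.2) can be verified numerically for any explicitly solvable system. The following Python sketch (an illustration added here, not part of the original treatment) uses the flow \(\varphi (t,x_{0}) = x_{0}e^{-t}\) of the scalar system \(\dot{x} = -x\), chosen purely for convenience.

```python
import math

# Flow of the scalar system dx/dt = -x (illustrative choice):
# phi(t, x0) = x0 * exp(-t).
def phi(t, x0):
    return x0 * math.exp(-t)

# Check the group property (1.2): phi(t2, phi(t1, x0)) = phi(t1 + t2, x0).
x0, t1, t2 = 1.5, 0.7, 2.3
assert abs(phi(t2, phi(t1, x0)) - phi(t1 + t2, x0)) < 1e-12
```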

There exist two geometrical representations of a solution to (1.1). Let \(\tilde{D} \subset \mathbb{R}^{n+1}\) denote the so-called extended phase space \(\tilde{D} = D \times \mathbb{R}^{1}\). As the parameter t (time) varies, the solution traces out a curve of points in the phase space D.

When only smooth systems are considered, the velocity vector F(t, x) is tangent to the phase curve at a given point \(x_{0}\), and there is only one such curve passing through an arbitrary point (this question will be clarified in more detail later). The phase curves are also called phase trajectories.

As it has been mentioned earlier, the space \(\tilde{D}\) contains integral curves. Their projections onto the phase space D are phase trajectories or singular points (equilibrium states). If a non-singular trajectory corresponds to a solution \(\varphi (t)\) of (1.1), and \(\varphi (t_{1}) =\varphi (t_{2})\) for \(t_{1}\neq t_{2}\), then \(\varphi (t)\) is defined for all t and is periodic.

However, there are also trajectories having no points of self-intersection, e.g. quasi-periodic or chaotic orbits. Since any two distinct solutions corresponding to the same trajectory are identical up to a time shift \(t \rightarrow t + t_{0}\), all solutions corresponding to the same periodic trajectory are periodic with the same period.

A solution also possesses a mechanical interpretation. The path traced by a point as time increases is called a motion. Sometimes it is useful to consider a ‘time-reversed’ system

$$\displaystyle{ \dot{x} = -F(-t,x), }$$
(1.3)

which is obtained from (1.1) by reversing the direction of each tangent vector. Knowing the trajectories of one system, we can easily find the corresponding trajectories of the other simply by reversing the direction of the arrowheads.

Dynamical systems are systems whose solutions can be continued for all time \(t \in (-\infty,\infty )\); the corresponding trajectories are called entire trajectories.

Since in Eq. (1.1) the dependent variable x(t) as well as the function F are treated as vector functions, one can investigate the system state via its state vector. Hence, analysis of the function \(x_{t} =\varphi _{t}(x_{0})\) with respect to time t and the initial condition is referred to as the investigation of the system’s dynamics.

Let \(Y\) be a metric space and \(\varphi _{t}: Y \rightarrow Y\) a family of transformations depending on the parameter t in a smooth way.

Definition 1.1.

If for all \(z \in Y\) and \(t,t^{{\ast}} \in [0,\infty )\) the family of transformations satisfies the identity

$$\displaystyle{ \varphi _{t}[\varphi _{t^{{\ast}}}(z)] =\varphi _{t+t^{{\ast}}}(z), }$$
(1.4)

then the pair \((Y,\varphi _{t})\) is called a dynamical system (with continuous time) or a flow. If the transformations are homeomorphisms and \(t \in (-\infty,+\infty )\), then \(\varphi _{t}^{-1} =\varphi _{-t}\).

When time takes discrete (natural number) values, one obtains the definition of a cascade.

Definition 1.2.

The pair \((Y,\varphi )\), for natural values of the parameter t, where Y is a metric space and \(\varphi: Y \rightarrow Y\), is called a cascade (or a dynamical system with discrete time).

Let \(\varphi ^{m} =\mathop{\underbrace{ \varphi \circ \varphi \circ \ldots \circ \varphi }}\limits _{m\mbox{ -}\mathrm{times}}\). Then for all \(z \in Y\) and \(n,m = 0,1,2,\ldots\) we have \(\varphi ^{n}[\varphi ^{m}(z)] =\varphi ^{n+m}(z)\). If \(\varphi\) is a homeomorphism, i.e. a one-to-one continuous mapping with continuous inverse, then the above identity holds for all integers n and m. The notion of a cascade arises in the analysis of all problems related to numerical solutions of ordinary differential equations. Now we give some basic notations and definitions related to either cascades or flows.
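
As a minimal sketch of a cascade (our own illustration; the logistic map below is an assumed example, not one used in the text), one may verify the identity \(\varphi ^{n}[\varphi ^{m}(z)] =\varphi ^{n+m}(z)\) numerically:

```python
# A cascade (Y, phi) with Y = [0, 1] and phi the logistic map
# (an assumed illustrative choice).
def phi(z, r=3.5):
    return r * z * (1.0 - z)

def phi_m(z, m):
    """The m-fold composition phi^m = phi o phi o ... o phi (m times)."""
    for _ in range(m):
        z = phi(z)
    return z

# Semigroup identity phi^n[phi^m(z)] = phi^(n+m)(z).
z0, n, m = 0.2, 4, 7
assert abs(phi_m(phi_m(z0, m), n) - phi_m(z0, n + m)) < 1e-12
```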

The sequence \(\{x_{n}\}_{n=-\infty }^{+\infty }\), where \(x_{n+1} =\varphi (x_{n})\), is called a trajectory of the point \(x_{0}\). The following distinct types of trajectories occur:

  1. 1.

    When \(\varphi (x_{0}) = x_{0}\), then \(x_{0}\) is called a fixed point (or a periodic point with period 1).

  2. 2.

    When \(x_{i} =\varphi ^{i}(x_{0})\), \(i = 0,1,\ldots,k - 1\) and \(x_{0} =\varphi ^{k}(x_{0})\) (\(x_{i}\neq x_{j}\) for \(i\neq j\)), then each \(x_{i}\) is called a periodic point of period k.

  3. 3.

    When \(x_{i}\neq x_{j}\) for all \(i\neq j\), \(k \rightarrow \pm \infty\), then the sequence \(\{x_{k}\}_{-\infty }^{+\infty }\) is called a bi-infinite (or unclosed) trajectory.

Recall that a set B is called invariant if \(B =\varphi (t,B)\) for any t. If \(x \in B\), then the trajectory \(\varphi (t,x) \in B\) for all t.

A point \(x_{0}\) is called a wandering point if there is an open neighbourhood \(U(x_{0})\) of \(x_{0}\) and T > 0 such that

$$\displaystyle{ U(x_{0}) \cap \varphi (t,U(x_{0})) = \varnothing \quad \text{for}\ t > T. }$$
(1.5)

The set of wandering points \(\mathcal{W}\) is open and invariant. On the contrary, the set of non-wandering points \(\mathcal{M} = D\setminus \mathcal{W}\) is closed and invariant (it contains equilibrium states and periodic trajectories). The points on bi-asymptotic trajectories tending to equilibrium states and periodic trajectories as \(t \rightarrow \pm \infty\) are also non-wandering.

Definition 1.3.

A point \(x_{0}\) is said to be positively Poisson-stable if for any given neighbourhood \(U(x_{0})\) and any T > 0 there is t > T such that \(\varphi (t,x_{0}) \in U(x_{0})\). If for any T > 0 there exists t < −T such that \(\varphi (t,x_{0}) \in U(x_{0})\), then the point \(x_{0}\) is called negatively Poisson-stable. A point is said to be Poisson-stable (p) if it is both positively (\(p^{+}\)) and negatively (\(p^{-}\)) Poisson-stable. Note that \(p^{+}\), \(p^{-}\) and p trajectories consist of non-wandering points.

A very important result has been obtained by Birkhoff.

Theorem 1.1.

If a \(p(p^{-},p^{+})\) -trajectory is unclosed, then its closure \(\bar{p}(\bar{p}^{-},\bar{p}^{+})\) contains a continuum of unclosed p-trajectories.

Consider an \(\varepsilon\)-neighbourhood \(U_{\varepsilon }(x_{0})\) of a point \(x_{0}\) and denote by the sequence \(\{t_{n}(\varepsilon )\}_{-\infty }^{+\infty }\) the successive times at which the trajectory intersects \(U_{\varepsilon }(x_{0})\); the values \(\tau _{n}(\varepsilon ) = t_{n+1}(\varepsilon ) - t_{n}(\varepsilon )\) are called the Poincaré return times. When the sequence \(\{\tau _{n}(\varepsilon )\}\) is bounded for a finite \(\varepsilon\), the p-trajectory is said to be recurrent. The closure \(\bar{p}\) corresponding to such a trajectory is non-empty, invariant and closed, and it constitutes a so-called minimal set. In general, however, the return times need not be bounded.

When the sequence \(\{\tau _{n}(\varepsilon )\}\) is unbounded, the closure \(\bar{p}\) of the p-trajectory is called a quasi-minimal set. In this case the closure may contain other objects, such as periodic or quasi-periodic trajectories, which the flow approaches arbitrarily closely.
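
The Poincaré return times can be observed numerically. The sketch below (our own illustration, assuming the irrational circle rotation \(x \mapsto x +\omega \pmod 1\) as a standard example of a recurrent trajectory) records the successive returns to an \(\varepsilon\)-neighbourhood and shows that the gaps between them stay bounded:

```python
import numpy as np

# Irrational circle rotation: a recurrent trajectory (illustrative choice).
omega = (np.sqrt(5.0) - 1.0) / 2.0   # irrational rotation number
eps, x0 = 0.05, 0.0
x, hit_times = x0, []
for n in range(1, 20000):
    x = (x + omega) % 1.0
    # distance on the circle of circumference 1
    if min(abs(x - x0), 1.0 - abs(x - x0)) < eps:
        hit_times.append(n)

returns = np.diff(hit_times)          # tau_n = t_{n+1} - t_n
print(returns.min(), returns.max())   # bounded return times -> recurrence
```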

Definition 1.4.

A point \(x^{{\ast}}\) is called an ω-limit (α-limit) point of the trajectory \(\{\varphi ^{t}(x)\}\) if there exists a sequence \(t_{k} \rightarrow +\infty\) (\(t_{k} \rightarrow -\infty\)) such that

$$\displaystyle{ \mathop{\lim }\limits _{t_{k}\rightarrow \pm \infty }\varphi ^{t_{k} }(x) = x^{{\ast}}. }$$
(1.6)

The set of all ω-limit (α-limit) points of the trajectory L is denoted by \(\Omega _{L}\) (\(A_{L}\)).

Observe that all of the points of a periodic trajectory L are both α- and ω-limit points and \(L = \Omega _{L} = A_{L}\). For an unclosed Poisson-stable trajectory \(\bar{L} = \Omega _{L} = A_{L}\), where \(\bar{L}\) is the closure of L. As stated earlier, \(\bar{L}\) is either a minimal or a quasi-minimal set.

Owing to the earlier investigations of Poincaré and Bendixson, only three topological cases can be found in two-dimensional (planar) systems: (a) equilibria; (b) periodic orbits; (c) cycles. In the latter case one deals with either ω-limit homoclinic (one equilibrium state) or heteroclinic (two or more equilibrium states) cycles. A similar observation holds for negative semi-trajectories. Planar systems will be considered in more detail later.

Another important feature of trajectory studies is topological equivalence. Two objects in the phase space are topologically equivalent if there is a homeomorphism mapping the trajectories of one object onto those of the other. If the space D is compact (for instance a closed and bounded subset of \(\mathbb{R}^{n}\)), then for every x the set \(\Omega (x)\) is non-empty and closed.

Definition 1.5.

A subset \(I \subset D\) is called an invariant set of the cascade \((D,\varphi )\) if \(\varphi (I) = I\).

Let us introduce a set of homeomorphisms \(H =\{ h_{i}\}\) and the metric dist\((h_{1},h_{2}) =\mathop{ \sup }\limits _{x\in G}\vert \vert h_{1}x - h_{2}x\vert \vert \).

Definition 1.6.

If the condition \(h_{i}L = L\) holds for all homeomorphisms \(h_{i}\) satisfying dist\((h_{i},I) <\varepsilon\), where I is the identity homeomorphism, then L ∈ G is called a special trajectory.

Closed trajectories are equilibrium states and periodic orbits.

In the case of cascades, the fixed points, periodic points and ω-limit sets are invariant. If the mapping \(\varphi\) is invertible, then an entire trajectory \(\{\varphi ^{k}\}\), \(k = 0,\pm 1,\pm 2,\ldots\) is also invariant. It can be shown that a union or an intersection of invariant sets is also an invariant set.

Finally, let us introduce the definition of an attractor (repeller).

Definition 1.7.

A closed, bounded and invariant set \(A \subset D\) is called an attractor of the dynamical system \((D,\varphi )\) if it has a neighbourhood U(A) such that for any x ∈ U(A) the trajectory \(\{\varphi ^{n}(x)\}\) remains in U(A) and tends to A as \(n \rightarrow \infty\), i.e.

$$\displaystyle{ \mathop{\lim }\limits _{t\rightarrow +\infty }\rho (\varphi (t,x),A) = 0, }$$
(1.7)

where

$$\displaystyle{ \rho (x,A) =\mathop{ \inf }\limits _{x_{0}\in A}\vert \vert x - x_{0}\vert \vert. }$$
(1.8)

The set of all x for which \(\{\varphi ^{n}(x)\}\) tends to A is called the basin of attraction of A. A similar definition can be formulated in the case of a repeller.

Definition 1.8.

A closed, bounded and invariant set \(R \subset D\) is called a repeller of the dynamical system \((D,\varphi )\) if there exists a neighbourhood U(R) of R such that if \(x\notin R\) and x ∈ U(R), then there is n such that \(\varphi ^{k}(x)\notin U(R)\) for k > n.

The equilibrium states, periodic and quasi-periodic orbits can be either attractors or repellers depending on their stability. The question of how to find regular attractors or repellers will be addressed later.
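
The distance (1.8) and the attracting property (1.7) are easy to check numerically for a cascade. The following sketch (our own illustration; the logistic map with r = 2.8, whose fixed point \(x^{{\ast}} = 1 - 1/r\) is attracting, is an assumed example) shows a trajectory from the basin of attraction tending to the attractor:

```python
import numpy as np

# rho(x, A) of (1.8) for a finite set A.
def rho(x, A):
    return min(abs(x - a) for a in A)

r = 2.8
A = [1.0 - 1.0 / r]        # attractor: a single attracting fixed point
x = 0.1                     # a point in the basin of attraction
for n in range(200):
    x = r * x * (1.0 - x)   # logistic cascade (illustrative choice)

print(rho(x, A))            # ~0: the trajectory has tended to A, cf. (1.7)
```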

1.1 Existence of a Solution

It is well known that Eq. (1.1) may have a unique solution on a given interval, may have no solution at all, may have infinitely many solutions, or may have a few distinct solutions. However, finding an explicit analytical solution to a specific initial value problem governed by (1.1) is rarely an easy task. In many cases, even if we cannot find a solution, we still want answers to the following questions: how can we know that a given ODE possesses a solution, and if it does, is that solution unique?

Theorem 1.2 (Picard’s Theorem).

Let the function F(t,x) of (1.1) be continuous on the rectangle \(\Pi =\{ (t,x): \vert t - t_{0}\vert \leq a,\vert x - x_{0}\vert \leq b,a > 0,b > 0\}\) and satisfy the Lipschitz condition uniformly with respect to x, i.e.

$$\displaystyle{\vert F(t,x_{1}) - F(t,x_{2})\vert \leq L\vert x_{1} - x_{2}\vert }$$

for all t, where |t − t 0 |≤ a, \(\vert x_{1} - x_{0}\vert \leq b\), \(\vert x_{2} - x_{0}\vert \leq b\) . Let \(M =\max \limits _{(t,x)\in \Pi }\vert F(t,x)\vert \), \(t^{{\ast}} =\min \left (a, \frac{b} {M}\right )\) . Then the Cauchy problem associated with (1.1) has a unique solution in the interval

$$\displaystyle{\begin{array}{*{10}c} \vert t - t_{0}\vert \leq \alpha,&\alpha <\min \left (a, \frac{b} {M}, \frac{1} {L}\right ). \end{array} }$$

A proof of Theorem 1.2 is given in [191] and is omitted here. Picard’s theorem not only establishes the existence of a solution to (1.1), but also guarantees its uniqueness.

In what follows we apply this theorem to find a solution when we cannot find it using elementary approaches. If the assumptions of Theorem 1.2 are satisfied, then the solution to (1.1) can be found as the limit \(\lim \limits _{n\rightarrow \infty }x_{n}(t)\) of the uniformly convergent sequence \(\{x_{n}(t)\}\) defined by the following recurrence

$$\displaystyle{ x_{n+1}(t) = x_{0} +\int \limits _{ t_{0}}^{t}F(y,x_{ n}(y))dy,\quad x_{0}(t) = x_{0},\quad n = 0,1,2,\ldots }$$
(1.9)

Furthermore, one may also estimate the error introduced by the n-th approximation \(x_{n}(t)\) via the following inequality

$$\displaystyle{ \vert x(t) - x_{n}(t)\vert \leq \frac{ML^{n-1}} {n!} (t^{{\ast}})^{n}. }$$
(1.10)
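
A minimal numerical realisation of the recurrence (1.9) is sketched below (our own illustration, using trapezoidal quadrature on a grid; the test equation is that of Example 1.1, whose exact solution is \(2e^{t} - t - 1\)):

```python
import numpy as np

def picard(F, x0, t0, t_end, n_iter=8, n_grid=201):
    """Picard iterates (1.9) on a grid, with trapezoidal quadrature."""
    t = np.linspace(t0, t_end, n_grid)
    x = np.full_like(t, x0)                      # x_0(t) = x0
    for _ in range(n_iter):
        f = F(t, x)
        increments = 0.5 * (f[1:] + f[:-1]) * np.diff(t)
        x = x0 + np.concatenate(([0.0], np.cumsum(increments)))
    return t, x

# dx/dt = t + x, x(0) = 1; compare with the exact solution 2e^t - t - 1.
t, x = picard(lambda t, x: t + x, 1.0, 0.0, 0.5)
print(np.max(np.abs(x - (2.0 * np.exp(t) - t - 1.0))))  # small residual
```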

Example 1.1.

Consider the one-dimensional initial value problem

$$\displaystyle{\begin{array}{*{10}c} \frac{dx} {dt} = t + x,&x(0) = 1.\end{array} }$$

Equation (1.9) takes the form

$$\displaystyle{ x_{n+1}(t) = 1 +\int \limits _{ 0}^{t}(y + x_{ n}(y))dy,\quad n = 0,1,2,\ldots,\quad x_{0}(t) = 1. }$$

We substitute successively \(n = 0, 1, 2,\ldots\) into the above equation to get

$$\displaystyle\begin{array}{rcl} & x_{0}(t) = 1; & {}\\ & x_{1}(t) = 1 +\int \limits _{ 0}^{t}(y + 1)dy = 1 + t + \frac{t^{2}} {2}; & {}\\ & x_{2}(t) = 1 +\int \limits _{ 0}^{t}\left (y + 1 + y + \frac{y^{2}} {2} \right )dy = 1 + t + t^{2} + \frac{t^{3}} {3!}; & {}\\ & x_{3}(t) = 1 +\int \limits _{ 0}^{t}\left (y + 1 + y + y^{2} + \frac{y^{3}} {3!} \right )dy = 1 + t + t^{2} + \frac{2t^{3}} {3!} + \frac{t^{4}} {4!};& {}\\ & x_{n}(t) = 1 +\int \limits _{ 0}^{t}\left (y + 1 + y + y^{2} + \frac{y^{3}} {3} + \cdots + \frac{2y^{n-1}} {(n-1)!} + \frac{y^{n}} {n!} \right )dy & {}\\ & = 1 + t + t^{2} + \frac{2t^{3}} {3!} + \cdots + \frac{2t^{n}} {n!} + \frac{t^{n+1}} {(n+1)!} & {}\\ \end{array}$$

The final result can be presented in the equivalent form

$$\displaystyle{x_{n}(t) = 2\sum \limits _{k=0}^{n}\frac{t^{k}} {k!} + \frac{t^{n+1}} {(n + 1)!} - t - 1.}$$

The solution to our problem follows

$$\displaystyle{x(t) =\lim \limits _{n\rightarrow \infty }x_{n}(t) = 2\sum \limits _{k=0}^{\infty }\frac{t^{k}} {k!} - t - 1 = 2e^{t} - t - 1.}$$

 □ 
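
The hand computation above can be reproduced symbolically; a short sketch (our own illustration, assuming sympy is available) generates the iterate \(x_{3}(t)\):

```python
import sympy as sp

# Symbolic Picard iterates for dx/dt = t + x, x(0) = 1.
t, y = sp.symbols('t y')
x = sp.Integer(1)                                   # x_0(t) = 1
for _ in range(3):
    x = 1 + sp.integrate(y + x.subs(t, y), (y, 0, t))

print(sp.expand(x))   # 1 + t + t**2 + t**3/3 + t**4/24, i.e. x_3(t)
```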

Example 1.2.

Apply the method of successive approximations to the following Cauchy problem: \(\frac{dx} {dt} = t - x^{2}\), x(0) = 0, defined on the rectangle | t | ≤ 1, | x | ≤ 1. Estimate the interval of convergence of the successive approximations guaranteed by Picard’s theorem, as well as the error between the exact solution and its second-order approximation.

Observe that the function \(F(t,x) = t - x^{2}\) is continuously differentiable with respect to x, and \(\frac{\partial F} {\partial x} = -2x\). The function F satisfies the Lipschitz condition with \(L =\max \left \vert \frac{\partial F} {\partial x} \right \vert = 2\). Since

$$\displaystyle\begin{array}{rcl} & M =\,\max \limits _{\vert t\vert \leq 1;\vert x\vert \leq 1}\vert F(t,x)\vert =\max \limits _{\vert t\vert \leq 1;\vert x\vert \leq 1}\vert t - x^{2}\vert = 2,& {}\\ & t^{{\ast}} =\min \left (a, \frac{b} {M}\right ) =\min \left (1, \frac{1} {2}\right ) = \frac{1} {2}. & {}\\ \end{array}$$

Therefore, the Picard approximations converge in the interval \([-\frac{1} {2}, \frac{1} {2}]\). The successive approximations obey the following rule

$$\displaystyle{\begin{array}{*{10}c} x_{n+1}(t) =\int \limits _{ 0}^{t}(y - x_{n}^{2}(y))dy,&n = 0,1,2,\ldots,\end{array} }$$

and hence

$$\displaystyle{\begin{array}{*{10}c} n = 0:& x_{1}(t) =\int \limits _{ 0}^{t}(y - 0)dy = \frac{t^{2}} {2}; \\ n = 1:&x_{2}(t) =\int \limits _{ 0}^{t}\left (y -\left (\frac{y^{2}} {2} \right )^{2}\right )dy = \frac{t^{2}} {2} - \frac{t^{5}} {20}. \end{array} }$$

Equation (1.10) yields the estimate

$$\displaystyle{\vert x(t) - x_{2}(t)\vert \leq \frac{ML^{1}} {2!} (t^{{\ast}})^{2} = \frac{2 \cdot 2} {2} \left (\frac{1} {2}\right )^{2} = \frac{1} {2}.}$$

 □ 
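
The guaranteed bound can be compared with the actual error; in the sketch below (our own illustration) scipy’s solve_ivp supplies a reference solution of \(\frac{dx} {dt} = t - x^{2}\), x(0) = 0, against which the second approximation \(x_{2}(t) = \frac{t^{2}} {2} - \frac{t^{5}} {20}\) is tested on \([0, \frac{1} {2}]\):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reference solution of dx/dt = t - x^2, x(0) = 0.
sol = solve_ivp(lambda t, x: t - x**2, (0.0, 0.5), [0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)
t = np.linspace(0.0, 0.5, 101)
x_ref = sol.sol(t)[0]
x2 = t**2 / 2 - t**5 / 20            # second Picard approximation

print(np.max(np.abs(x_ref - x2)))    # well below the guaranteed bound 1/2
```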

Example 1.3.

Show that the so-called Riccati equation

$$\displaystyle{\frac{dx} {dt} = a(t)x^{2} + b(t)x + c(t) \equiv F(t,x),}$$

where a(t), b(t) and c(t) are continuous functions, cannot have a singular solution.

The function F(t, x) is continuously differentiable with respect to x, and \(\frac{\partial F} {\partial x} = 2a(t)x + b(t)\) is bounded on an arbitrary rectangle

$$\displaystyle{\Pi =\{ (t,x) \in R^{2};\vert t - t_{ 0}\vert \leq a,\vert x - x_{0}\vert \leq b\}.}$$

The function F(t, x) satisfies the Lipschitz condition on the rectangle \(\Pi \) with respect to x, and hence the assumptions of Picard’s theorem are satisfied. The studied equation therefore has no singular solutions. □

The Peano theorem given below guarantees the existence of a solution, but not its uniqueness.

Theorem 1.3 (Peano’s Theorem).

Let the function F(t,x) of (1.1) be continuous on the rectangle

$$\displaystyle{\Pi =\{ (t,x): t \in [t_{0},t_{0} + a],\vert x - x_{0}\vert \leq b\},}$$

where \(\sup \limits _{(t,x)\in \Pi }\vert F(t,x)\vert = M\) . Then the Cauchy problem for (1.1) has a solution in the interval \([t_{0},t_{0} + \alpha ]\) , where \(\alpha =\min \left (a, \frac{b} {M}\right )\) .

Example 1.4.

Find singular solutions to the equation \(\frac{dx} {dt} = 1 + \frac{3} {2}(x - t)^{\frac{1} {3} }\).

Let us introduce the new variable \(y = x - t\); the problem then reduces to the investigation of the equation \(\frac{dy} {dt} = \frac{3} {2}y^{\frac{1} {3} }\). At y = 0 the last equation does not satisfy the Lipschitz condition, i.e. the assumptions of Picard’s theorem are not satisfied, although those of the Peano theorem are. The studied equation can be solved by the following steps:

$$\displaystyle{\begin{array}{*{10}c} \int y^{-\frac{1} {3} }dy = \frac{3} {2}\int dt;&y^{\frac{2} {3} } = (t - C);&y = (\sqrt{t - C})^{3}.\end{array} }$$

The initial condition \(y(t_{0}) = 0\) is satisfied both by \(y = (\sqrt{t - t_{0}})^{3}\) and by y = 0. Therefore, y = 0 is a singular solution of \(\frac{dy} {dt} = \frac{3} {2}y^{\frac{1} {3} }\), and the function x = t is a singular solution of the initial equation. The remaining solutions are defined by the formula \(x = t + (t - C)^{\frac{3} {2} }\). □
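
The non-uniqueness at y = 0 can be confirmed directly; the sketch below (our own illustration) checks that both candidate solutions of Example 1.4 satisfy \(\frac{dy} {dt} = \frac{3} {2}y^{\frac{1} {3} }\) with y(0) = 0:

```python
import numpy as np

f = lambda y: 1.5 * np.cbrt(y)       # right-hand side (3/2) y^(1/3)
t = np.linspace(0.0, 2.0, 201)

# Residuals dy/dt - f(y) for the two solutions through y(0) = 0,
# using the exact derivatives 0 and (3/2) sqrt(t).
y1 = np.zeros_like(t)                # singular solution y = 0
y2 = t**1.5                          # y = (sqrt(t))^3
print(np.max(np.abs(0.0 - f(y1))))               # exactly 0
print(np.max(np.abs(1.5 * np.sqrt(t) - f(y2))))  # ~0 up to rounding
```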

In general, if the function F(t, x) satisfies the assumptions of Picard’s theorem on the closed rectangle \(\Pi \), then any of its solutions \(x = x(t)\), \(x(t_{0}) = x_{0}\), \((t_{0},x_{0}) \in \Pi \) can be extended outside the rectangle. Furthermore, if the function F(t, x) is continuous in a slab \(\alpha _{1} \leq t \leq \alpha _{2}\), \(\vert x\vert < \infty\) \((\alpha _{1} \geq -\infty,\alpha _{2} \leq +\infty )\) and satisfies the inequality | F(t, x) | ≤ a(t) | x | + b(t), where a(t), b(t) are continuous functions, then any solution of Eq. (1.1) can be extended to the interval \(\alpha _{1} < t < \alpha _{2}\).

Theorem 1.4 (Global Existence of Solutions).

Let F be a vector-valued function of n + 1 real variables, and let I be an open interval containing t = t 0 . If F(t,x) is continuous and satisfies the Lipschitz condition for all t in I and for all \(x_{1},x_{2} \in \mathbb{R}^{n}\) , then the initial value problem (1.1), \(x(t_{0}) = x_{0}\) , has a solution on the entire interval I.

Proof ([84]).

In what follows we demonstrate that the sequence \(\{x_{n}(t)\}_{0}^{\infty }\) of successive approximations

$$\displaystyle{\begin{array}{*{10}c} x_{0}(t) = x_{0},&x_{n+1}(t) = x_{0} +\int \limits _{ t_{0}}^{t}F(x_{ n}(s),s)ds \end{array} }$$

converges to a solution x(t) of \(\frac{dx} {dt} = F(t,x)\), \(x(t_{0}) = x_{0}\).

We take \(t_{0} = 0\) and consider t ≥ 0. We show that if \([0,t^{{\ast}}]\) is a closed and bounded subinterval of I, then \(\{x_{n}(t)\}\) converges uniformly on \([0,t^{{\ast}}]\) to a limit x(t). In other words, given \(\varepsilon > 0\), there is an integer N such that

$$\displaystyle{\vert x_{n}(t) - x(t)\vert <\varepsilon,}$$

for all n ≥ N and all \(t \in [0,t^{{\ast}}]\). Let \(M =\max \vert F(x_{0},t)\vert \) for \(t \in [0,t^{{\ast}}]\) and let m denote the Lipschitz constant of F; then

$$\displaystyle{\vert x_{1}(t) - x_{0}(t)\vert = \left \vert \int \limits _{0}^{t}F(x_{ 0}(s),s)ds\right \vert \leq \int \limits _{0}^{t}\vert F(x_{ 0}(s),s)\vert ds \leq Mt,}$$

and similarly

$$\displaystyle{\vert x_{2}(t) - x_{1}(t)\vert = \left \vert \int \limits _{0}^{t}[F(x_{ 1}(s),s) - F(x_{0}(s),s)]ds\right \vert \leq m\int \limits _{0}^{t}\vert x_{ 1}(s) - x_{0}(s)\vert ds}$$

and therefore

$$\displaystyle{\vert x_{2}(t) - x_{1}(t)\vert \leq m\int \limits _{0}^{t}Msds = \frac{1} {2}mMt^{2}.}$$

Assuming that

$$\displaystyle{\vert x_{n}(t) - x_{n-1}(t)\vert \leq \frac{M} {m} \frac{(mt)^{n}} {n!},}$$

it follows (using induction)

$$\displaystyle{\vert x_{n+1}(t)-x_{n}(t)\vert = \left \vert \int \limits _{0}^{t}[F(x_{ n}(s),s)-F(x_{n-1}(s),s)]ds\right \vert \leq m\int \limits _{0}^{t}\vert x_{ n}(s)-x_{n-1}(s)\vert ds,}$$

and finally

$$\displaystyle{\vert x_{n+1}(t) - x_{n}(t)\vert \leq m\int \limits _{0}^{t}\frac{M} {m} \frac{(ms)^{n}} {n!} ds = \frac{M} {m} \frac{(mt)^{n+1}} {(n + 1)!}.}$$

It means that the terms of the series

$$\displaystyle{x_{0}(t) +\sum \limits _{ n=1}^{\infty }[x_{ n}(t) - x_{n-1}(t)]}$$

are dominated by the terms of the convergent numerical series with positive terms \(\sum \limits _{n}\frac{M} {m} \frac{(mt^{{\ast}})^{n}} {n!} = \frac{M} {m} (e^{mt^{{\ast}} }- 1)\). Since the sequence of partial sums of this series, which converges uniformly on \([0,t^{{\ast}}]\), is the original sequence \(\{x_{n}(t)\}_{0}^{\infty }\), its uniform convergence has been proved. Standard theorems of advanced calculus yield the following conclusions:

  1. (i)

    The limit function x(t) is continuous on \([0,t^{{\ast}}]\).

  2. (ii)

    The Lipschitz continuity of F gives the estimation

    $$\displaystyle{\vert F(x_{n}(t),t) - F(x(t),t)\vert \leq m\vert x_{n}(t) - x(t)\vert < m\varepsilon }$$

    for \(t \in [0,t^{{\ast}}]\) and n ≥ N; it means that the sequence \(\{F(x_{n}(t),t)\}_{0}^{\infty }\) converges uniformly to F(x(t), t) on \([0,t^{{\ast}}]\).

  3. (iii)

    It follows that

    $$\displaystyle\begin{array}{rcl} x(t)=\lim \limits _{n\rightarrow \infty }x_{n+1}(t)& =& x_{0}+\lim \limits _{n\rightarrow \infty }\int \limits _{0}^{t}F(x_{ n}(s),s)ds=x_{0}+\int \limits _{0}^{t}\lim F(x_{ n}(s),s)ds {}\\ & =& x_{0}+\int \limits _{0}^{t}F(x(s),s)ds. {}\\ \end{array}$$
  4. (iv)

    Since x(t) is continuous on \([0,t^{{\ast}}]\), the latter results imply that \(dx/dt = F(x(t),t)\) on \([0,t^{{\ast}}]\). Because this is true on every closed subinterval of I, it is true on the entire interval I. □

Theorem 1.5 (Global Existence of a Solution of Linear Systems).

Let the n × n matrix-valued function A(t) and the vector-valued function f(t) be continuous on the open interval I containing t = t 0 . Then the Cauchy problem

$$\displaystyle{\frac{dx} {dt} = A(t)x + f(t),\quad x(t_{0}) = x_{0}}$$

has a solution on the entire interval I.

Proof.

We need to show that for each closed and bounded subinterval of I there is a Lipschitz constant L such that

$$\displaystyle{\vert [A(t)x_{1} + f(t)] - [A(t)x_{2} + f(t)]\vert \leq L\vert x_{1} - x_{2}\vert.}$$

For this it suffices to note that

$$\displaystyle{\vert A(t)(x_{1} - x_{2})\vert \leq \vert \vert A(t)\vert \vert \cdot \vert x_{1} - x_{2}\vert,}$$

where \(\vert \vert A\vert \vert = \sqrt{\sum \limits _{i,j=1 }^{n }(a_{ij } )^{2}}\). Hence one may take \(L =\max \vert \vert A(t)\vert \vert \), and because A(t) is continuous on the closed and bounded subinterval of I, its norm | | A | | is bounded on the considered subinterval. The global existence theorem for linear systems has been proved. □
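
The role of \(\vert \vert A\vert \vert \) as a Lipschitz constant can be checked numerically; the following sketch (our own illustration with a random sample matrix) verifies \(\vert A(x_{1} - x_{2})\vert \leq \vert \vert A\vert \vert \cdot \vert x_{1} - x_{2}\vert \):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))           # a sample matrix A
norm_A = np.sqrt(np.sum(A**2))            # ||A|| = sqrt(sum a_ij^2)

x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
lhs = np.linalg.norm(A @ x1 - A @ x2)     # |A x1 - A x2|
rhs = norm_A * np.linalg.norm(x1 - x2)    # ||A|| |x1 - x2|
assert lhs <= rhs
```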

In the case of nonlinear ordinary differential equations a solution may exist only in a small neighbourhood of t = t 0, and the length of the existence interval can depend on the nonlinear differential equation and on the initial condition \(x(t_{0}) = x_{0}\).

If F(t, x) of (1.1) is continuously differentiable in a vicinity of the point \((x_{0},t_{0})\) in (n + 1)-dimensional space, then it can be concluded that F(x, t) satisfies the Lipschitz condition on a rectangle \(\Pi \) centered at \((x_{0},t_{0})\) of the form \(\vert t - t_{0}\vert < \alpha\), \(\vert x_{i} - x_{0i}\vert < \beta _{i}\), \(i = 1,\ldots,n\). If one applies the successive approximation

$$\displaystyle{x_{n+1}(t) = x_{0} +\int \limits _{ t_{0}}^{t}F(x_{ n}(s),s)ds,}$$

then the point \((x_{n}(t),t)\) lies in the rectangle \(\Pi \) only for a suitable choice of t.

Theorem 1.6 (Local Solutions Existence).

If the first-order partial derivatives of F in (1.1) all exist and are continuous in a neighbourhood of the point \((x_{0},t_{0})\) , then the Cauchy problem (1.1) has a solution on some open interval containing t = t 0 .

However, if the Lipschitz condition is satisfied, then an investigated solution is in addition unique.

Theorem 1.7 (Uniqueness).

Let the function F(x,t) in (1.1) be continuous on some region Q in (n + 1)-space and satisfy the Lipschitz condition

$$\displaystyle{\vert F(x_{1},t) - F(x_{2},t)\vert \leq L\vert x_{1} - x_{2}\vert.}$$

If x 1 (t) and x 2 (t) are solutions to the Cauchy problem (1.1) on some open interval I, where t = t 0 ∈ I such that the solution curves (x 1 (t),t) and (x 2 (t),t) lie in Q for all t in I, then \(x_{1}(t) = x_{2}(t)\) for all t in I.

Proof.

We consider only the 1D case, where x is real, and we follow the steps given in [191].

Consider the function

$$\displaystyle{\upphi (t) = [x_{1}(t) - x_{2}(t)]^{2},}$$

where \(x_{1}(t_{0}) = x_{2}(t_{0}) = x_{0}\), i.e. \(\upphi (t_{0}) = 0\).

Differentiating the above equation one gets

$$\displaystyle{\vert \dot{\upphi }(t)\vert = \vert 2(x_{1}-x_{2})\cdot (\dot{x}_{1}-\dot{x}_{2})\vert = \vert 2(x_{1}-x_{2})\cdot (F(x_{1},t)-F(x_{2},t))\vert \leq 2L\vert x_{1}-x_{2}\vert ^{2} = 2L\upphi (t).}$$

On the other hand a solution to the differential equation

$$\displaystyle{\dot{\varphi }(t) = 2L\varphi (t),\quad \varphi (t_{0}) =\varphi _{0}}$$

is as follows

$$\displaystyle{\varphi (t) =\varphi _{0}e^{2L(t-t_{0})}.}$$

For \(\upphi (t_{0}) =\varphi (t_{0})\) it yields

$$\displaystyle{\begin{array}{*{10}c} \upphi (t) \leq \varphi (t)&\text{for}&t \geq t_{0}.\end{array} }$$

Therefore

$$\displaystyle{0 \leq (x_{1}(t) - x_{2}(t))^{2} \leq (x_{ 1}(t_{0}) - x_{2}(t_{0}))^{2}e^{2L(t-t_{0})},}$$

and taking square roots we finally obtain

$$\displaystyle{0 \leq \vert x_{1}(t) - x_{2}(t)\vert \leq \vert x_{1}(t_{0}) - x_{2}(t_{0})\vert e^{L(t-t_{0})}.}$$

Because \(x_{1}(t_{0}) - x_{2}(t_{0}) = 0,\) then \(x_{1}(t) \equiv x_{2}(t)\). □ 

The proof carried out so far also illustrates how solutions of (1.1) depend continuously on the initial value x(t 0). Namely, if we take \(\vert x_{1}(t_{0}) - x_{2}(t_{0})\vert \leq \delta\), then the last inequality implies that

$$\displaystyle{\vert x_{1}(t) - x_{2}(t)\vert \leq \delta e^{L(t^{{\ast}}-t_{ 0})} = \varepsilon }$$

for all \(t_{0} \leq t \leq t^{{\ast}}\). Cauchy problems are said to be well posed as mathematical models of real-world processes if the considered differential equation has unique solutions that depend continuously on the initial values.
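
Continuous dependence on the initial value is easily observed numerically. The sketch below (our own illustration, assuming the system \(\dot{x} =\sin x\) with Lipschitz constant L = 1) integrates two solutions started \(\delta\) apart and checks the bound \(\delta e^{L(t-t_{0})}\):

```python
import numpy as np
from scipy.integrate import solve_ivp

L, delta, t_end = 1.0, 1e-3, 2.0          # sin(x) is 1-Lipschitz
t = np.linspace(0.0, t_end, 101)
sols = [solve_ivp(lambda s, x: np.sin(x), (0.0, t_end), [x0],
                  dense_output=True, rtol=1e-10, atol=1e-12).sol(t)[0]
        for x0 in (1.0, 1.0 + delta)]

gap = np.abs(sols[0] - sols[1])
assert np.all(gap <= delta * np.exp(L * t) + 1e-12)
```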

As we will see further, only one integral curve of Eq. (1.1) satisfying a given initial condition may pass through a point \((t_{0},x_{0})\), which in many cases corresponds to a proper modelling of real-world processes. However, more fundamental questions should be addressed before one starts to solve a given differential equation. The theorems discussed so far allow one to verify whether a solution actually exists, and whether it is unique.

As it will be shown further, one may deal with (a) failure of existence; (b) failure of uniqueness; (c) one, a few or infinitely many solutions.