
It is well known that some important properties of nonlinear equations can be determined through qualitative analysis. The general theory and solution methods for linear equations are highly developed in mathematics, whereas very little is known about nonlinear equations. Linearization of a nonlinear system does not capture the actual solution behaviors of the original nonlinear system. Nonlinear systems have interesting solution features. It is natural to ask under what conditions an equation has periodic or bounded solutions. Systems may have isolated closed trajectories, known as limit cycles, toward which neighboring trajectories tend. What are the conditions for the existence of such limiting solutions? Under what conditions does a system have a unique limit cycle? These were questions of both theoretical and engineering interest at the beginning of the twentieth century. This chapter deals with oscillatory solutions of linear and nonlinear equations, their mathematical foundations, properties, and some applications.

5.1 Oscillatory Solutions

In our everyday life, we encounter many systems, in engineering devices or in natural phenomena, some of which exhibit oscillatory motion and some of which do not. The undamped pendulum is such a device that executes an oscillatory motion. Oscillatory motions have a wide range of applications. In fact, no system in the macroscopic world is a simple oscillator, because of the damping force, however small, present in the system. Of course, in some cases these forces are so small that we may neglect them on the time scale of interest and treat the system as a simple oscillator. Oscillation and periodicity are closely related to each other. Both linear and nonlinear systems may exhibit oscillation, but of qualitatively different kinds. Linear oscillators have interesting properties. To explore these properties, we begin with second-order linear systems. Note that a first-order linear system cannot have an oscillatory solution.

Consider a second-order linear homogeneous differential equation represented as

$$ {\ddot x} + a(t)\dot{x} + b(t)x = 0 $$
(5.1)

where \( a(t) \) and \( b(t) \) are real-valued functions of the real variable \( t \), and are continuous on an interval \( I \subset {\mathbb{R}} \), that is, \( a(t),b(t) \in C(I) \).

A solution \( x(t) \) of the Eq. (5.1) is said to be oscillating on \( I \), if it vanishes there at least two times, that is, if \( x(t) \) has at least two zeros on \( I \). Otherwise, it is called non-oscillating on \( I \). For example, consider a linear equation

$$ {\ddot x} - m^{2} x = 0,\;\;x,m \in {\mathbb{R}}{\mkern 1mu}. $$

Its general solution is given by

$$ x(t) = \left\{ {\begin{array}{*{20}l} {ae^{mt} + be^{ - mt} ,{\text{if}}\, m \ne 0} \hfill \\ {a + bt,{\text{if}}\, m = 0} \hfill \\ \end{array} } \right. $$

which is non-oscillating in \( {\mathbb{R}} \). On the other hand, the general solution

$$ x(t) = a\cos (mt) + b\sin (mt) = A\sin (mt + \delta ) \, $$

of the equation \( {\ddot x} + m^{2} x = 0,\;m \in {\mathbb{R}},\;m \ne 0 \) is oscillating, where \( A = \sqrt {(a^{2} + b^{2} )} \) and \( \delta = \tan^{ - 1} (a/b) \) are the amplitude and the initial phase of the solution, respectively. All solutions of this equation are oscillating with period of oscillation \( \left( {2\pi /m} \right) \). The distance between two successive zeros is \( \left( {\pi /m} \right) \). The above two equations give a good illustration of the existence/nonexistence of oscillatory solutions for the general second-order linear equation

$$ {\ddot x} + p(t)x = 0,\quad p \in C(I) $$
(5.2)

The Eq. (5.2) can be derived from (5.1) by applying the transformation

$$ x(t) = z(t)\exp \left( { - \frac{1}{2}\int\limits_{{t_{0} }}^{t} a (\tau ){\text{d}}\tau } \right),\;\;t_{0} ,t \in I $$

with \( p(t) = - \frac{{a^{2} (t)}}{4} - \frac{{\dot{a}(t)}}{2} + b(t){\mkern 1mu}. \) The transformation preserves the zeros of the solutions of the equations. We now derive conditions under which the Eq. (5.2) has oscillating and/or non-oscillating solutions. We first assume that \( p(t) = {\text{constant}} \). If \( p \,>\, 0 \), every solution

$$ x(t) = a\cos (\sqrt p t) + b\sin (\sqrt p t) = A\sin (\sqrt p t + \delta ) $$

of (5.2) has infinitely many zeros, and the distance between two successive zeros is \( \left( {\pi /\sqrt p } \right) \). So the solution is oscillating. On the other hand, if \( p \,\le\, 0 \), we cannot find any nonzero solution of (5.2) that vanishes at more than one point. Such a solution is called non-oscillating.
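The zero spacing \( \pi /\sqrt p \) for constant \( p \,>\, 0 \) can be verified numerically. A minimal sketch, assuming NumPy and SciPy are available (the value \( p = 4 \) is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

p = 4.0  # constant coefficient, p > 0

# Integrate x'' + p x = 0 with x(0) = 0, x'(0) = 1 and record the zeros of x(t)
sol = solve_ivp(lambda t, u: [u[1], -p * u[0]], (0.0, 10.0), [0.0, 1.0],
                events=lambda t, u: u[0], rtol=1e-10, atol=1e-12)
gaps = np.diff(sol.t_events[0])
print(gaps, np.pi / np.sqrt(p))  # every gap equals pi/sqrt(p)
```

The exact solution here is \( x(t) = \tfrac{1}{2}\sin 2t \), whose zeros are spaced by \( \pi /2 \), in agreement with the computed gaps.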

5.2 Theorems on Linear Oscillatory Systems

Theorem 5.1

(On non-oscillating solutions ) If \( p(t) \,\le\,0 \) for all \( t \in I \), then all nontrivial solutions of (5.2) are non-oscillating on \( I \).

Proof

If possible, let a solution \( X(t)\not \equiv 0 \) of (5.2) have at least two zeros on \( I \) and let \( t_{0} \), \( t_{1} {\mkern 1mu} (t_{0} \,<\, t_{1} ) \) be two of them. We also assume that the function \( X(t) \) has no zeros in the interval \( (t_{0} ,t_{1} ) \). Since \( X(t) \) is continuous and has no zeros in \( (t_{0} ,t_{1} ) \), it has the same sign (positive or negative) throughout \( (t_{0} ,t_{1} ) \).

Fig. 5.1 Graphical representation of the solution \( X\left( t \right) \)

Without loss of generality, we assume that \( X(t) \,>\, 0 \) in \( (t_{0} ,t_{1} ) \). Then from Fig. 5.1 it follows that \( X(t) \) attains a maximum at some point, say \( c \in (t_{0} ,t_{1} ) \), and consequently \( {\ddot X}(t) \,<\, 0 \) in some neighborhood of \( c \). Now, if \( p(t) \,\le\,0 \) on \( I \), then from (5.2) it follows that \( {\ddot X}(t) = - p(t)X(t) \,\ge\, 0 \) wherever \( X(t) \,>\, 0 \), in particular in the neighborhood of \( c \). This gives a contradiction. So, our assumption is wrong, and hence the solution \( X(t) \) of (5.2) cannot have two or more zeros on \( I \). Consequently, \( X(t) \) is non-oscillating on \( I \). Since \( X(t) \) is arbitrary, every nontrivial solution of (5.2) is non-oscillating on \( I \). This completes the proof.

Corollary 5.1

If \( p(t) \,>\, 0 \) on \( I \) , all nontrivial solutions of (5.2) are oscillating on \( I \).

Lemma 5.1

Zeros of any nontrivial solution of Eq. (5.1) in \( I \) are simple and isolated.

Corollary 5.2

Any nontrivial solution of (5.1) has a finite number of zeros on any compact interval \( I \).

Theorem 5.2

(Sturm’s separation theorem) Let \( t_{0} \), \( t_{1} \) be two successive zeros of a nontrivial solution \( x_{1} (t) \) of the Eq. (5.1) and \( x_{2} (t) \) be another linearly independent solution of that equation. Then there exists exactly one zero of \( x_{2} (t) \) between \( t_{0} \) and \( t_{1} \) , that is, the zeros of two linearly independent solutions separate each other.

Proof

Without loss of generality, we assume that \( t_{0} \,<\, t_{1} \). If possible, let \( x_{2} (t) \) have no zeros in the interval \( (t_{0} ,t_{1} ) \). Also, since \( x_{1} (t) \) and \( x_{2} (t) \) are linearly independent and \( x_{1} (t) \) has the two zeros \( t_{0} \) and \( t_{1} \) in \( I \), \( x_{2} (t) \) does not vanish at \( t = t_{0} ,{\mkern 1mu} t_{1} \). Then the Wronskian

$$ W(x_{1} ,x_{2} ;t) = \left| {\begin{array}{*{20}c} {x_{1} (t)} \hfill & {x_{2} (t)} \hfill \\ {\dot{x}_{1} (t)} \hfill & {\dot{x}_{2} (t)} \hfill \\ \end{array} } \right| = x_{1} (t)\dot{x}_{2} (t) - x_{2} (t)\dot{x}_{1} (t) $$
(5.3)

does not vanish on \( [t_{0} ,t_{1} ] \). We assume that \( W(x_{1} ,x_{2} ;t) \,>\, 0 \) on \( [t_{0} ,t_{1} ] \). Dividing both sides of (5.3) by \( x_{2}^{2} (t) \), we get

$$ \frac{{W(x_{1} ,x_{2} ;t)}}{{x_{2}^{2} (t)}} = \frac{{x_{1} (t)\dot{x}_{2} (t) - x_{2} (t)\dot{x}_{1} (t)}}{{x_{2}^{2} (t)}} = - \frac{d}{{{\text{d}}t}}\left( {\frac{{x_{1} (t)}}{{x_{2} (t)}}} \right) $$
(5.4)

Integrating (5.4) with respect to \( t \) from \( t_{0} \) to \( t_{1} \),

$$ \begin{aligned} \int\limits_{{t_{0} }}^{{t_{1} }} {\frac{{W(x_{1} ,x_{2} ;t)}}{{x_{2}^{2} (t)}}{\text{d}}t} & = - \int\limits_{{t_{0} }}^{{t_{1} }} {d\left( {\frac{{x_{1} (t)}}{{x_{2} (t)}}} \right)} \\ & = - \left. {\left( {\frac{{x_{1} (t)}}{{x_{2} (t)}}} \right)} \right|_{{t_{0} }}^{{t_{1} }} \\ & = \frac{{x_{1} (t_{0} )}}{{x_{2} (t_{0} )}} - \frac{{x_{1} (t_{1} )}}{{x_{2} (t_{1} )}} \\ & = 0. \\ \end{aligned} $$

(Since \( x_{2} (t) \) does not have zeros at \( t_{0} ,t_{1} \), that is, \( x_{2} (t_{0} ) \ne 0 \) and \( x_{2} (t_{1} ) \ne 0 \).)

This gives a contradiction, because

$$ \frac{{W(x_{1} ,x_{2} ;t)}}{{x_{2}^{2} (t)}} \,>\, 0\;\forall \,t \in [t_{0} ,t_{1} ]{\mkern 1mu}. $$

So, our assumption is wrong, and hence \( x_{2} (t) \) has at least one zero in the interval \( (t_{0} ,t_{1} ) \). To prove the uniqueness, let \( t_{2} \),\( t_{3} \) be two distinct zeros of \( x_{2} (t) \) in \( (t_{0} ,t_{1} ) \) with \( t_{2} \,<\, t_{3} \). That is, \( x_{2} (t_{2} ) = x_{2} (t_{3} ) = 0 \), where \( t_{0} \,<\, t_{2} \,<\, t_{3} \,<\, t_{1} \). Since \( x_{1} (t) \) and \( x_{2} (t) \) are linearly independent, \( x_{1} (t) \) must have at least one zero in \( (t_{2} ,t_{3} ) \), that is, in \( (t_{0} ,t_{1} ) \). This is a contradiction, which ensures that \( x_{2} (t) \) has exactly one zero between \( t_{0} \) and \( t_{1} \). This completes the proof. In the same way, one can also prove it when \( W(x_{1} ,x_{2} ;t) \,<\, 0 \) on \( [t_{0} ,t_{1} ] \).

Corollary 5.3

If at least one solution of the Eq. (5.2) has more than two zeros on \( I \) , then all the solutions of (5.2) are oscillating on \( I \).
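The separation of zeros can also be observed numerically for a variable coefficient. A minimal sketch, assuming SciPy is available; the coefficient \( p(t) = 2 + \sin t \) and the initial conditions are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# x'' + p(t) x = 0 with an arbitrarily chosen variable coefficient
p = lambda t: 2.0 + np.sin(t)
rhs = lambda t, u: [u[1], -p(t) * u[0]]
zero_event = lambda t, u: u[0]  # fires at the zeros of x(t)

# Two linearly independent solutions: their Wronskian at t = 0 equals 1
s1 = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0], events=zero_event, rtol=1e-10, atol=1e-12)
s2 = solve_ivp(rhs, (0.0, 30.0), [0.0, 1.0], events=zero_event, rtol=1e-10, atol=1e-12)
z1, z2 = s1.t_events[0], s2.t_events[0]

# Count zeros of the second solution strictly between consecutive zeros of the first
counts = [int(np.sum((z2 > lo) & (z2 < hi))) for lo, hi in zip(z1[:-1], z1[1:])]
print(counts)  # each entry is 1: the zeros interlace
```

Every count equals one, exactly as Sturm's separation theorem predicts.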

Theorem 5.3

(Sturm’s comparison theorem) Consider two equations \( {\ddot y} + p(t)y = 0 \) and \( {\ddot z} + q(t)z = 0 \) where \( p(t),q(t) \in C(I) \) and \( q(t) \,\ge\,p(t),\;\;t \in I \) . Let a pair \( t_{0} ,t_{1} \) \( (t_{0} \,<\, t_{1} ) \) of successive zeros of a nontrivial solution \( y(t) \) be such that there exists \( t \in (t_{0} ,t_{1} ) \) such that \( q(t) \,>\, p(t) \) . Then any nontrivial solution \( z(t) \) has at least one zero between \( t_{0} \) and \( t_{1} \).

Proof

Let \( y(t) \) be a solution of the first equation such that \( y(t_{0} ) = y(t_{1} ) = 0 \) and \( y(t) \,>\, 0 \) in \( t \in (t_{0} ,t_{1} ) \). If possible, let there exist a solution \( z(t) \) such that \( z(t) \,>\, 0\;\;\forall t \in (t_{0} ,t_{1} ) \). Note that if there exists a solution \( z(t) \,<\, 0 \) we may consider the solution \( - z(t) \) instead of \( z(t) \). Multiplying the first equation by \( z(t) \) and the second by \( y(t) \) and then subtracting, we get

$$ \begin{aligned} & \quad {\ddot y}(t)z(t) - {\ddot z}(t)y(t) = (q(t) - p(t))y(t)z(t) \\ & \Rightarrow \frac{d}{{{\text{d}}t}}\left( {\dot{y}(t)z(t) - \dot{z}(t)y(t)} \right) = (q(t) - p(t))y(t)z(t) \\ \end{aligned} $$

Integrating this with respect to \( t \) from \( t_{0} \) to \( t_{1} \) and using \( y(t_{0} ) = y(t_{1} ) = 0 \), we get

$$ \dot{y}(t_{1} )z(t_{1} ) - \dot{y}(t_{0} )z(t_{0} ) = \int\limits_{{t_{0} }}^{{t_{1} }} {(q(t) - p(t))y(t)z(t){\text{d}}t} $$
(5.5)

The right-hand side of (5.5) is positive, since \( y(t),z(t) \) are positive on \( (t_{0} ,t_{1} ) \) and \( q(t) \,\ge\,p(t) \) with strict inequality at some point of \( (t_{0} ,t_{1} ) \). But the left-hand side is nonpositive, because \( \dot{y}(t_{0} ) \,>\, 0 \), \( \dot{y}(t_{1} ) \,<\, 0 \) and \( z(t_{0} ),z(t_{1} ) \,\ge\,0 \). So, we arrive at a contradiction. This completes the proof.

Sturm’s comparison theorem is of great importance in determining the distance between two successive zeros of any nontrivial solution of (5.2). Let us consider three equations \( {\ddot x} + q(t)x = 0 \), \( {\ddot y} + my = 0 \) and \( {\ddot z} + Mz = 0 \), where \( q(t) \,>\, 0 \) for all \( t \), \( m = \mathop {\text{min}}\nolimits_{{t \in [t_{0} ,t_{1} ]}} q(t) \), and \( M = \mathop {\text{max}}\nolimits_{{t \in [t_{0} ,t_{1} ]}} q(t) \). We also assume that \( M \,>\, m \), so that \( q(t) \) is not constant over the interval. Applying Sturm’s comparison theorem to the first two equations, we see that the distance between two successive zeros of any solution of \( {\ddot x} + q(t)x = 0 \) is not greater than \( \left( {\pi /\sqrt m } \right) \). Similarly, taking the first and the third equations and applying Sturm’s comparison theorem, we see that the distance between two successive zeros of \( {\ddot x} + q(t)x = 0 \) is not smaller than \( \left( {\pi /\sqrt M } \right) \). If \( \mathop {\lim }\nolimits_{t \to \infty } q(t) = q \,>\, 0 \), then any solution of the equation \( {\ddot x} + q(t)x = 0 \) is infinitely oscillating, and the distance between two successive zeros tends to \( \left( {\pi /\sqrt q } \right) \). From this discussion, we have the following theorem.

Theorem 5.4

(Estimate of distance between two successive zeros of solutions of the Eq. (5.2)) Let the inequality

$$ 0 \,<\, m^{2} \,\le\,p(t) \,\le\,M^{2} $$

be true on \( [t_{0} ,t_{1} ] \subset I \) . Then the distance \( d \) between two successive zeros of any nontrivial solution of (5.2) is estimated as

$$ \frac{\pi }{M} \,\le\,d \,\le\,\frac{\pi }{m}{\mkern 1mu}. $$

We illustrate two examples as follows:

(I)

Estimate the distance between two successive zeros of the equation

$$ {\ddot x} + 2t^{2} \dot{x} + t(2t^{3} + 3)x = 0 $$

on \( \left[ {1,2} \right] \).

Solution

Consider the transformation \( x(t) = z(t)e^{{ - t^{3} /3}} \). It transforms the given equation into the equation

$$ {\ddot z} + (t^{4} + t)z = 0{\mkern 1mu}. $$

Comparing this with (5.2), we get

$$ p(t) = t^{4} + t{\mkern 1mu}. $$

Since \( 1 \,\le\,t \,\le\,2 \), we have \( 0 \,<\, 2 \,\le\,p(t) \,\le\,18 \). Therefore, by Theorem 5.4, the distance \( d \) is estimated as

$$ \frac{\pi }{3\sqrt 2 } \,\le\,d \,\le\,\frac{\pi }{\sqrt 2 }{\mkern 1mu}. $$
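The transformation used in this example can be checked symbolically. A sketch using SymPy (assumed available):

```python
import sympy as sp

t = sp.symbols('t')
z = sp.Function('z')
x = z(t) * sp.exp(-t**3 / 3)  # the transformation x = z e^{-t^3/3}

# Substitute into the original equation and strip the exponential factor
lhs = sp.diff(x, t, 2) + 2*t**2 * sp.diff(x, t) + t*(2*t**3 + 3) * x
reduced = sp.expand(sp.simplify(lhs * sp.exp(t**3 / 3)))
print(reduced)  # reduces to z'' + (t**4 + t) z
```

The output confirms that the transformed equation is \( {\ddot z} + (t^{4} + t)z = 0 \), i.e., \( p(t) = t^{4} + t \).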
(II)

Transform the Bessel's equation of order \( n \) into the form \( {\ddot x} + p(t)x = 0 \). Show that if \( 0 \,\le\,n \,<\, 1/2 \), the distance between two successive zeros of \( x(t) \) is less than \( \pi \) and tends to \( \pi \) as \( t \to \infty \).

Solution

The Bessel’s equation of order \( n \) is given by

$$ t^{2} {\ddot y} + t\dot{y} + (t^{2} - n^{2} )y = 0,\;t \,>\, 0. $$

Taking \( y(t) = x(t)t^{ - 1/2} \), we obtain the transformed equation as

$$ {\ddot x} + \left( {1 - \frac{{n^{2} - 1/4}}{{t^{2} }}} \right)x = 0{\mkern 1mu}. $$

Therefore, we have

$$ p(t) = 1 - \frac{{n^{2} - 1/4}}{{t^{2} }}{\mkern 1mu}. $$

Now, if \( 0 \,\le\,n \,<\, 1/2 \), then \( p(t) \,>\, 1 \) and the distance between two successive zeros of \( x(t) \) is

$$ d \,<\, \frac{\pi }{1} = \pi {\mkern 1mu}. $$

As the number of zeros increases, that is, for sufficiently large \( t \), \( p(t) \) becomes arbitrarily close to 1, and so the distance between two successive zeros of \( x(t) \) tends to \( \pi \). In general, for the Bessel's equation of order \( n \), the expression \( \left( {1 - \frac{{n^{2} - 1/4}}{{t^{2} }}} \right) \) can be made arbitrarily close to unity for sufficiently large \( t \). Therefore, for sufficiently large values of \( t \), the distance between two successive zeros of the solutions of Bessel's equation is arbitrarily close to \( \pi \).
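This behavior can be checked against the computed zeros of the Bessel function \( J_{0} \) (order \( n = 0 \,<\, 1/2 \)). A minimal sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.special import jn_zeros

zeros = jn_zeros(0, 30)   # first 30 positive zeros of the Bessel function J_0
gaps = np.diff(zeros)
print(gaps[0], gaps[-1])  # all gaps lie below pi and approach pi
```

Every gap is smaller than \( \pi \approx 3.14159 \), and the gaps creep up toward \( \pi \) as \( t \) grows, as predicted.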

We now give a lower estimate of the distance between two successive zeros of the Eq. (5.1) without reducing it into the Eq. (5.2).

Theorem 5.5

(de la Vallée Poussin) Let the coefficients \( a(t) \) and \( b(t) \) of the Eq. (5.1) be such that

$$ |a(t)| \,\le\,M_{1} ,\quad |b(t)| \,\le\,M_{2} ,\;t \in I{\mkern 1mu}. $$

Then the distance \( d \) between two successive zeros of any nontrivial solution of (5.1) satisfies

$$ d \,\ge\,\frac{{\sqrt {4M_{1}^{2} + 8M_{2} } {\mkern 1mu} - 2M_{1} }}{{M_{2} }}{\mkern 1mu}. $$

See the book of Tricomi [1].

Remark 5.1

The lower estimate of the distance \( d \) between two successive zeros of the Eq. (5.1) can be determined using one of the following:

(1)

Use directly the statement of Theorem 5.5;

(2)

First, apply the transformation given earlier to reduce (5.1) to (5.2), and then use the left part of the inequality of Theorem 5.4.

In general, we cannot say which one gives the better estimate for the distance between two successive zeros. In some cases, Theorem 5.5 gives the better estimate. Let us consider an example for this purpose.

(I)

Consider the equation

$$ {\ddot x} + 2t^{2} \dot{x} + t(2t^{3} + 3)x = 0,\;\;t \in [1,2]{\mkern 1mu}. $$
(1)

Applying Theorem 5.4:

      $$ p(t) = t^{4} + t,\;1 \,\le\,t \,\le\,2 \Rightarrow 2 \,\le\,p(t) \,\le\,18 \Rightarrow d \,\ge\,\frac{\pi }{3\sqrt 2 }{\mkern 1mu} $$
(2)

Applying Theorem 5.5:

      $$ |a(t)| = |2t^{2} | \,\le\,8 = M_{1} ,\;|b(t)| = |t(2t^{3} + 3)| \,\le\,38 = M_{2} {\mkern 1mu}. $$

Therefore, the distance \( d \) is given by

$$ d \,\ge\,\frac{{\sqrt {4 \times 8^{2} + 8 \times 38} {\mkern 1mu} - 2 \times 8}}{38} \approx 0.2{\mkern 1mu}. $$

Therefore, Theorem 5.4 gives a better result for approximate distance between successive zeros.
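The two lower bounds can be compared directly. A quick numerical check, assuming NumPy is available:

```python
import numpy as np

M1, M2 = 8.0, 38.0  # bounds on |a(t)| and |b(t)| over [1, 2]
d_vallee = (np.sqrt(4*M1**2 + 8*M2) - 2*M1) / M2  # lower bound from Theorem 5.5
d_sturm = np.pi / np.sqrt(18.0)                   # lower bound from Theorem 5.4 (M^2 = 18)
print(d_vallee, d_sturm)  # ~0.202 versus ~0.740: Theorem 5.4 is sharper here
```

Since a larger lower bound is more informative, Theorem 5.4 indeed gives the better estimate for this equation.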

We now give our attention on nonlinear oscillating systems.

5.3 Nonlinear Oscillatory Systems

Linear oscillators obey the superposition principle, and their frequency of oscillation is independent of the amplitude and the initial conditions. In contrast, nonlinear oscillators do not obey the superposition principle, and their frequencies of oscillation depend on both the amplitude and the initial conditions. Nonlinear oscillations have characteristic features such as resonance, jump or hysteresis, limit cycles, noisy output, etc. Some of these features are useful in communication engineering, electrical circuits, the cyclic action of the heart, cardiovascular flow, neural systems, biological and chemical reactions, dynamic interactions of various species, etc. Relaxation oscillations are a special type of periodic phenomenon. Such an oscillation is characterized by intervals of time in which very little change takes place, followed by short intervals of time in which significant change occurs. Relaxation oscillations occur in many branches of physics, engineering sciences, economics, and geophysics. In mathematical biology one finds applications in the heartbeat rhythm, respiratory movements of the lungs, and other cyclic phenomena. We illustrate four physical problems in which nonlinear oscillations occur.

(i)

Simple pendulum: The simplest nonlinear oscillating system is the undamped pendulum. The equation of motion of a simple pendulum is given by \( {\ddot \theta } + (g/L)\sin \theta = 0 \), where \( g \) and \( L \) are the acceleration due to gravity and the length of the light inextensible string, respectively. Due to the presence of the nonlinear term \( \sin \theta \), the equation is nonlinear. Under the small angle approximation \( \sin \theta = \theta - \frac{{\theta^{3} }}{6} + O(\theta^{5} ) \), the equation can be written as \( {\ddot \theta } + (g/L)\theta - (g/6L)\theta^{3} = 0 \). This is a good approximation even for angles as large as \( \pi /4 \). The original system has equilibrium points at \( (n\pi ,0),n \in {\mathbf{\mathbb{Z}}} \). The equilibrium solutions \( (\pi ,0) \) and \( ( - \pi ,0) \) are unstable, whereas the equilibrium solution (0, 0) is stable. Consider a periodic solution with the initial condition \( \theta (0) = a \), \( \dot{\theta }(0) = 0 \), where \( 0 \,<\, a \,<\, \pi \). We now calculate the period of this periodic solution. The equation \( {\ddot \theta } + (g/L)\sin \theta = 0 \) has the first integral

$$ \begin{aligned} & \quad \frac{1}{2}\dot{\theta }^{2} - \left( {\frac{g}{L}} \right)\cos \theta = - \frac{g}{L}\cos a \\ & \Rightarrow \frac{{{\text{d}}\theta }}{{{\text{d}}t}} = \pm \left[ {2(g/L)\left( {\cos \theta - \cos a} \right)} \right]^{1/2} \\ \end{aligned} $$

satisfying the above initial condition. The period \( T \) of the periodic solutions is then given by

$$ T = 4\int\limits_{0}^{a} {\frac{{{\text{d}}\theta }}{{\left[ {2(g/L)\left( {\cos \theta - \cos a} \right)} \right]^{1/2} }}}. $$

The period \( T \) can be expressed in terms of Jacobian elliptic functions. So the period \( T \) depends nontrivially on the initial condition. On the other hand, the linear pendulum has the constant period \( T = 2\pi \sqrt {L/g} \).
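The period integral can be evaluated numerically and compared with its standard complete-elliptic-integral form \( T = 4\sqrt{L/g}\,K(k) \), \( k = \sin (a/2) \). A sketch assuming SciPy is available; the values of \( g \), \( L \), and \( a \) are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

g, L = 9.81, 1.0
a = np.pi / 3  # amplitude: theta(0) = a, theta'(0) = 0

# Direct quadrature of the period integral (integrable singularity at theta = a)
integrand = lambda th: 1.0 / np.sqrt(2.0 * (g / L) * (np.cos(th) - np.cos(a)))
T_quad = 4.0 * quad(integrand, 0.0, a)[0]

# Closed form via the complete elliptic integral (scipy's ellipk takes m = k^2)
T_elliptic = 4.0 * np.sqrt(L / g) * ellipk(np.sin(a / 2.0) ** 2)

print(T_quad, T_elliptic, 2 * np.pi * np.sqrt(L / g))
```

Both evaluations agree, and the nonlinear period exceeds the linear period \( 2\pi \sqrt{L/g} \), growing with the amplitude \( a \).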

(ii)

Nonlinear electric circuits: Electric circuits may be analyzed by applying Kirchhoff's laws. For a simple electric circuit, the voltage drop across the inductance is \( L\frac{{{\text{d}}I}}{{{\text{d}}t}} \), \( L \) being the inductance. But for an iron-core inductance coil, the voltage drop can be expressed as \( {\text{d}}\phi /{\text{d}}t \), where \( \phi \) is the magnetic flux. The voltage drop across the capacitor is \( Q/C \), where \( Q \) is the charge on the capacitor and \( C \) the capacitance. The current in the circuit is given by \( I = \frac{{{\text{d}}Q}}{{{\text{d}}t}} \). The equation of the current in the circuit for an iron-core inductance coil connected in parallel with a charged condenser may be expressed by the equation

$$ \frac{{{\text{d}}\phi }}{{{\text{d}}t}} + \frac{Q}{C} = 0 \Rightarrow \frac{{{\text{d}}^{2} \phi }}{{{\text{d}}t^{2} }} + \frac{I}{C} = 0. $$

For an elementary circuit, there is a linear relationship between the current and the flux, that is, \( I = \phi /L \). It is known that for an iron-core inductance the relationship is \( I = A\phi - B\phi^{3} \), where \( A \) and \( B \) are positive constants, for small values of magnetic flux. This gives the equation of the current in the circuit as

$$ \frac{{{\text{d}}^{2} \phi }}{{{\text{d}}t^{2} }} + \left( {\frac{A}{C}} \right)\phi - \left( {\frac{B}{C}} \right)\phi^{3} = 0. $$

It is a nonlinear second-order equation which may exhibit oscillatory solutions for certain values of the parameters \( A \), \( B \) and \( C \) (see Mickens [2], Lakshmanan and Rajasekar [3]).

(iii)

Brusselator chemical reactions: This is a widely used model for chemical reactions, proposed by Prigogine and Lefever (1968). The following set of chemical reactions is considered:

$$ \left. {\begin{array}{*{20}l} {A \to X} \hfill \\ {B + X \to D + Y} \hfill \\ {Y + 2X \to 3X} \hfill \\ {X \to E} \hfill \\ \end{array} } \right\} $$

The net effect of the above set of reactions is to convert the two reactants \( A \) and \( B \) into the products \( D \) and \( E \). If the concentrations of \( A \) and \( B \) are kept large, so that they remain effectively constant, the rate equations for the concentrations of \( X \) and \( Y \) may be expressed by the following equations

$$ \left. {\begin{array}{*{20}l} {\frac{{{\text{d}}x}}{{{\text{d}}t}} = a - (1 + b)x + x^{2} y} \hfill \\ {\frac{{{\text{d}}y}}{{{\text{d}}t}} = bx - x^{2} y} \hfill \\ \end{array} } \right\} $$

The constants \( a \) and \( b \) are proportional to the concentrations of the chemical reactants \( A \) and \( B \), and the dimensionless variables \( x \) and \( y \) are proportional to the concentrations of \( X \) and \( Y \), respectively. The nonlinear equations may have a stable limit cycle for certain values of the parameters \( a \) and \( b \). This system exhibits oscillatory changes in the concentrations of \( X \) and \( Y \) depending upon the values of \( a \) and \( b \).
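A short simulation illustrates the oscillation. The parameter choice \( a = 1 \), \( b = 3 \) is an illustrative one for which the fixed point \( (a, b/a) \) is unstable (the trace of the Jacobian there is \( b - 1 - a^{2} \,>\, 0 \)); SciPy is assumed available:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 3.0  # illustrative values with b > 1 + a^2

def brusselator(t, u):
    x, y = u
    return [a - (1.0 + b) * x + x**2 * y, b * x - x**2 * y]

# Start near the fixed point (a, b/a); the trajectory spirals out to a limit cycle
sol = solve_ivp(brusselator, (0.0, 100.0), [1.05, 3.0], dense_output=True, rtol=1e-9)
x_late = sol.sol(np.linspace(80.0, 100.0, 2000))[0]
print(x_late.min(), x_late.max())  # sustained large-amplitude oscillation
```

The late-time concentration \( x \) keeps swinging over a wide range instead of settling down, indicating a stable limit cycle.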

(iv)

Glycolysis: This is a fundamental biochemical reaction in which living cells obtain energy by breaking down glucose. This biochemical process may give rise to oscillations in the concentrations of various intermediate chemical reactants; the period of the oscillations is of the order of minutes. The oscillatory pattern of the reactions and the time period of the oscillations are crucial in reaching the final product. A set of equations was derived by Sel’kov (1968) for this biochemical reaction. The biochemical reaction process may be expressed by the equations

$$ \left. {\begin{array}{*{20}l} {\dot{x} = - x + ay + x^{2} y} \hfill \\ {\dot{y} = b - ay - x^{2} y} \hfill \\ \end{array} } \right\} $$

where \( x \) and \( y \) are proportional to the concentrations of adenosine diphosphate (ADP) and fructose-6-phosphate (F6P), respectively. The positive constants \( a \) and \( b \) are kinetic parameters. The same nonlinear term \( (x^{2} y) \) is present in both equations with opposite signs. A stable limit cycle may exist for certain relations between \( a \) and \( b \). The existence of a stable limit cycle for the glycolysis mechanism indicates that the biochemical reaction finally reaches its desired product.

5.4 Periodic Solutions

The existence of periodic solutions is of great importance in dynamical systems. Consider a nonlinear autonomous system represented by

$$ \left. {\begin{array}{*{20}l} {\dot{x} = f(x,y)} \hfill \\ {\dot{y} = g(x,y)} \hfill \\ \end{array} } \right\} $$
(5.6)

where the functions \( f(x,y) \) and \( g(x,y) \) are continuous and have continuous first-order partial derivatives throughout the \( xy \)-plane. Global properties of phase paths are those which describe their behaviors over large regions of the phase plane. The main problem of the global theory is to examine the existence of closed paths of the system (5.6). Closed paths correspond to periodic solutions of a system. A solution \( (x(t),y(t)) \) of (5.6) is said to be periodic if there exists a real number \( T \,>\, 0 \) such that

$$ x(t + T) = x(t) \,{\text{and}} \,y(t + T) = y(t) \forall t. $$

The least value of \( T \) for which this relation is satisfied is called the period (prime period) of the periodic solution . Note that if a solution of (5.6) is periodic of period \( T \), then it is periodic of period \( nT \) for every \( n \in {\mathbb{N}} \). The periodic solutions represent a closed path which is traversed once as \( t \) increases from \( t_{0} \) to \( (t_{0} + T) \) for any \( t_{0} \). Conversely, if \( C = [x(t),y(t)] \) is a closed path of the system, then \( (x(t),y(t)) \) is a periodic solution. There are some systems which have no closed paths and so they have no periodic solutions. We now discuss the existence/nonexistence criteria for closed paths.

5.4.1 Gradient Systems

An autonomous system \( {\mathop {\dot{x}}\limits_{\sim }} = {\mathop f\limits_{\sim}} ({\mathop x\limits_{\sim}}) \) in \( {\mathbb{R}}^{n} \) is said to be a gradient system if there exists a single-valued, continuously differentiable scalar function \( V = V({\mathop x\limits_{\sim}}) \) such that \( {\mathop {\dot{x}}\limits_{\sim }} = - \nabla V \). The function \( V \) is known as the potential function of the system, analogous to the potential energy of a mechanical system. In terms of components, the equation can be written as

$$ \dot{x}_{i} = - \frac{\partial V}{{\partial x_{i} }};\,i = 1,2, \ldots ,n. $$

Every one-dimensional system can be expressed as a gradient system. Consider a two-dimensional system (5.6). This will represent a gradient system if there exists a potential function \( V = V(x,y) \) such that

$$ \begin{aligned} \dot{x} & = - \frac{\partial V}{\partial x},\dot{y} = - \frac{\partial V}{\partial y} \\ {\text{that is,}}\quad f & = - \frac{\partial V}{\partial x},g = - \frac{\partial V}{\partial y}{\mkern 1mu}. \\ \end{aligned} $$

Differentiating partially the first equation by \( y \) and the second by \( x \) and then subtracting, we get

$$ \, \frac{\partial f}{\partial y} - \frac{\partial g}{\partial x} = 0 \Rightarrow \frac{\partial f}{\partial y} = \frac{\partial g}{\partial x}{\mkern 1mu}. $$

This is the condition under which a two-dimensional system can be expressed as a gradient system.

Theorem 5.6

Gradient systems cannot have closed orbits.

Proof

If possible, let there be a closed orbit \( C \) in a gradient system in \( {\mathbb{R}}^{n} \). Then there exists a potential function \( V \) such that \( \dot{{\mathop x\limits_{\sim}} } = - \nabla V \). Consider a change \( {\Delta} V \) of the potential function \( V \) in one circuit. Let \( T \) be the time of one complete rotation along the closed orbit \( C \). Since \( V \) is a single valued scalar function, we have \( {\Delta} V = 0 \). Again using the definition, we get

$$ \begin{aligned} {\Delta}V = \int\limits_{0}^{T} {\frac{{{\text{d}}V}}{{{\text{d}}t}}} {\text{d}}t & = \int\limits_{0}^{T} {(\nabla V \cdot {\dot{{\mathop x\limits_{\sim}} }} )} {\text{d}}t\left[ \because{\frac{{{\text{d}}V}}{{{\text{d}}t}} = \frac{\partial V}{{\partial x_{1} }}\dot{x}_{1} + \frac{\partial V}{{\partial x_{2} }}\dot{x}_{2} + \cdots + \frac{\partial V}{{\partial x_{n} }}\dot{x}_{n} = \nabla V \cdot {\dot{{\mathop x\limits_{\sim}} }} } \right] \\ & = - \int\limits_{0}^{T} {({\dot{{\mathop x\limits_{\sim}} }} \cdot {\dot{{\mathop x\limits_{\sim}} }} )} {\text{d}}t \\ & = - \int\limits_{ 0}^{\text{T}} {\| {{\dot{{\mathop x\limits_{\sim}} }} } \|}^{2} {\text{d}}t, \\ \end{aligned} $$

where \( \| {\dot{{\mathop x\limits_{\sim}} }} \| \) is the norm of \( {\mathop {\dot{x}}\limits_{\sim }} \) in \( {\mathbf{\mathbb{R}}}^{n} \). The last integral is strictly negative unless \( {\dot{{\mathop x\limits_{\sim}} }} \equiv 0 \), which is impossible on a closed orbit, so \( {\Delta}V \,<\, 0 \). This is a contradiction. So, our assumption is wrong. Hence there are no closed orbits in a gradient system. This completes the proof.

We give a few examples as follows:

(I)

Consider the two-dimensional system \( \dot{x} = 2xy + y^{3} ,\;\dot{y} = x^{2} + 3xy^{2} - 2y \). Here we take \( f(x,y) = 2xy + y^{3} \) and \( g(x,y) = x^{2} + 3xy^{2} - 2y \). Now, calculate the derivatives as

$$ \frac{\partial f}{\partial y} = 2x + 3y^{2} ,\;\;\frac{\partial g}{\partial x} = 2x + 3y^{2} {\mkern 1mu}. $$

Since \( \frac{\partial f}{\partial y} = \frac{\partial g}{\partial x} \), the system is a gradient system, and so it has no closed path. The given system does not exhibit a periodic solution. We now determine the potential function \( V = V(x,y) \) for this system. By definition, we get

$$ \frac{\partial V}{\partial x} = - f = - 2xy - y^{3} ,\;\;\frac{\partial V}{\partial y} = - g = - x^{2} - 3xy^{2} + 2y{\mkern 1mu}. $$

From the first relation,

$$ \frac{\partial V}{\partial x} = - 2xy - y^{3} \Rightarrow V = - x^{2} y - xy^{3} + h(y) $$

where \( h(y) \) is a function of \( y \) only. Differentiating this relation partially with respect to \( y \) and then using the value of \( \frac{\partial V}{\partial y} \), we get

$$ \begin{aligned} & \quad - x^{2} - 3xy^{2} + 2y = - x^{2} - 3xy^{2} + \frac{{{\text{d}}h(y)}}{{{\text{d}}y}} \\ & \Rightarrow \frac{{{\text{d}}h}}{{{\text{d}}y}} = 2y \\ & \Rightarrow h(y) = y^{2} \,[{\text{Neglecting the constant of integration}}.] \\ \end{aligned} $$

Therefore, the potential function of the system is \( V(x,y) = - x^{2} y - xy^{3} + y^{2} {\mkern 1mu}. \)
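The gradient-system condition and the potential function found above can be verified symbolically. A sketch using SymPy (assumed available):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 2*x*y + y**3
g = x**2 + 3*x*y**2 - 2*y
V = -x**2*y - x*y**3 + y**2  # the potential function found above

print(sp.diff(f, y) - sp.diff(g, x))    # 0: the gradient-system condition holds
print(sp.simplify(-sp.diff(V, x) - f))  # 0: -dV/dx = f
print(sp.simplify(-sp.diff(V, y) - g))  # 0: -dV/dy = g
```

All three expressions vanish, confirming that \( V \) reproduces the vector field via \( (\dot{x},\dot{y}) = - \nabla V \).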

(II)

Let \( V:{\mathbb{R}}^{n} \to {\mathbb{R}} \) be the potential of a gradient system \( {\mathop {\dot{x}}\limits_{\sim }} = {\mathop f\limits_{\sim}} ({\mathop x\limits_{\sim}} ) \), \( {\mathop x\limits_{\sim}} \in {\mathbb{R}}^{n} \). Show that \( \dot{V}({\mathop x\limits_{\sim}} ) \,\le\,0\;\;\forall {\mkern 1mu} {\mathop x\limits_{\sim}} \) and \( \dot{V}({\mathop x\limits_{\sim}} ) = 0 \) if and only if \( {\mathop x\limits_{\sim}} \) is an equilibrium point.

Solution

Using the chain rule of differentiation, we have

$$ \begin{aligned} \dot{V}({\mathop x\limits_{\sim}} ) & = \sum\limits_{i = 1}^{n} {\frac{\partial V}{{\partial x_{i} }}} \dot{x}_{i} \\ & = \nabla V \cdot {\mathop {\dot{x}}\limits_{\sim }} \\ & = \nabla V \cdot ( - \nabla V)\quad [{\text{Since }}{\mathop {\dot{x}}\limits_{\sim }} = - \nabla V] \\ & = - |\nabla V|^{2} \,\le\,0. \\ \end{aligned} $$

Now, \( \dot{V} = 0 \) if and only if \( \nabla V = 0 \), that is, if and only if \( {\mathop {\dot{x}}\limits_{\sim }} = 0 \). Hence \( {\mathop x\limits_{\sim}} \) is an equilibrium point.

5.4.2 Poincaré Theorem

Given below is an important theorem of Poincaré on the existence of a closed path of a two-dimensional system.

Theorem 5.7

A closed path of a two-dimensional system (5.6) necessarily surrounds at least one equilibrium point of the system.

Proof

If possible, let there be a closed path \( C \) of the system (5.6) that does not surround any equilibrium point of the system. Let \( A \) be the region (area) bounded by \( C \). Then \( f^{2} + g^{2} \ne 0 \) in \( A \). Let \( \theta \) be the angle between the tangent to the closed path \( C \) and the \( x \)-axis. Then

$$ \oint\limits_{C} {{\text{d}}\theta } = 2\pi $$
(5.7)

But \( \tan \theta = \frac{{{\text{d}}y}}{{{\text{d}}x}} = \frac{g}{f}{\mkern 1mu}. \) On differentiation, we get

$$ \begin{aligned} \sec^{2} \theta {\text{d}}\theta & = \,\frac{{f{\text{d}}g - g{\text{d}}f}}{{f^{2} }} \\ \Rightarrow \left( {1 + \frac{{g^{2} }}{{f^{2} }}} \right){\text{d}}\theta & =\, \frac{{f{\text{d}}g - g{\text{d}}f}}{{f^{2} }} \\ \Rightarrow {\text{d}}\theta & =\, \frac{{f{\text{d}}g - g{\text{d}}f}}{{f^{2} + g^{2} }} \\ \end{aligned} $$

Substituting this value in (5.7),

$$ \begin{aligned} & \quad \oint\limits_{C} {\left( {\frac{{f{\text{d}}g - g{\text{d}}f}}{{f^{2} + g^{2} }}} \right)} = 2\pi \\ & \Rightarrow \oint\limits_{C} {\left\{ {\left( {\frac{f}{{f^{2} + g^{2} }}} \right){\text{d}}g - \left( {\frac{g}{{f^{2} + g^{2} }}} \right){\text{d}}f} \right\}} = 2\pi \\ \end{aligned} $$

Using Green’s theorem in the plane, we have

$$ \begin{aligned} \iint\limits_{A} {\left\{ {\frac{\partial }{\partial f}\left( {\frac{f}{{f^{2} + g^{2} }}} \right) + \frac{\partial }{\partial g}\left( {\frac{g}{{f^{2} + g^{2} }}} \right)} \right\}}{\text{d}}f{\text{d}}g & = \oint\limits_{C} {\left\{ {\left( {\frac{f}{{f^{2} + g^{2} }}} \right){\text{d}}g - \left( {\frac{g}{{f^{2} + g^{2} }}} \right){\text{d}}f} \right\}} \\ & = 2\pi \\ \end{aligned} $$
(5.8)

But

$$ \frac{\partial }{\partial f}\left( {\frac{f}{{f^{2} + g^{2} }}} \right) + \frac{\partial }{\partial g}\left( {\frac{g}{{f^{2} + g^{2} }}} \right) = \frac{{g^{2} - f^{2} }}{{(f^{2} + g^{2})^{2} }} + \frac{{f^{2} - g^{2} }}{{(f^{2} + g^{2})^{2} }} = 0{\mkern 1mu}. $$

So, finally we get \( 0 = 2\pi \), which is a contradiction. Hence a closed path of the system (5.6) must surround at least one equilibrium point of the system. This completes the proof.
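The vanishing of the integrand used at the end of the proof can be confirmed symbolically; a small sympy sketch, treating \( f \) and \( g \) as independent variables:

```python
import sympy as sp

f, g = sp.symbols('f g', real=True)

# Divergence of the field (f, g)/(f^2 + g^2) with respect to (f, g)
expr = sp.diff(f / (f**2 + g**2), f) + sp.diff(g / (f**2 + g**2), g)

# The integrand vanishes identically, as used in the proof
assert sp.simplify(expr) == 0
```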

This theorem also implies that a system without any equilibrium point in a given region cannot have closed paths in that region.

5.4.3 Bendixson’s Negative Criterion

Bendixson’s negative criterion gives one of the easiest ways of ruling out the existence of periodic orbits for a system in \( {\mathbb{R}}^{2} \). The theorem is as follows.

Theorem 5.8

There are no closed paths in a simply connected region of the phase plane of the system (5.6) on which \( \left( {\frac{\partial f}{\partial x} + \frac{\partial g}{\partial y}} \right) \) is not identically zero and is of one sign.

Proof

Let \( D \) be a simply connected region of the phase plane of the system (5.6) in which \( \left( {\frac{\partial f}{\partial x} + \frac{\partial g}{\partial y}} \right) \) is of one sign. If possible, let \( C \) be a closed path in \( D \) and let \( A \) be the region bounded by \( C \). Then by the divergence theorem, we have

$$ \begin{aligned} \iint\limits_{A} {\left( {\frac{\partial f}{\partial x} + \frac{\partial g}{\partial y}} \right)}{\text{d}}x{\text{d}}y & = \oint\limits_{C} {(f,g)} \cdot {\mathop n\limits_{\sim}} {\text{d}}l \\ & = 0\,[\because \, (f,g) \bot {\mathop n\limits_{\sim}} ]. \\ \end{aligned} $$

where \( {\text{d}}l \) is an undirected line element of the path \( C \). This is a contradiction, since \( \left( {\frac{\partial f}{\partial x} + \frac{\partial g}{\partial y}} \right) \) is of one sign, that is, either positive or negative, and hence the integral cannot be zero. Therefore, \( C \) cannot be a closed path of the system. This completes the proof.

We now illustrate the theorem through some examples.

Example I

Show that the equation \( {\ddot x} + f(x)\dot{x} + g(x) = 0 \) cannot have periodic solutions whose phase paths lie in a region where \( f \) is of one sign.

Solution

Let \( D \) be the region where \( f \) is of one sign. The given equation can be written as

$$ \left. {\begin{array}{*{20}c} \hfill {\dot{x} = y = F(x,y)} \\ \hfill {\dot{y} = - yf(x) - g(x) = G(x,y)} \\ \end{array} } \right\} $$

Therefore,

$$ \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y} = 0 - f(x) = - f(x){\mkern 1mu}. $$

This shows that \( \left( {\frac{\partial F}{\partial x} + \frac{\partial G}{\partial y}} \right) \) is of one sign in \( D \), since \( f \) is of one sign in \( D \). Hence by Bendixson’s negative criterion, there is no closed path of the system in \( D \). Therefore, the given equation cannot have periodic solution in the region where \( f \) is of one sign.

Example II

Consider the system \( \dot{x} = p(y) + x^{n} ,\;\;\dot{y} = q(x) \), where \( p,q \in C^{1} \) and \( n \in {\mathbb{N}} \). Derive a sufficient condition for \( n \) so that the system has no periodic solution.

Solution

Let \( f(x,y) = p(y) + x^{n} \) and \( g(x,y) = q(x) \). Then

$$ \frac{\partial f}{\partial x} + \frac{\partial g}{\partial y} = \frac{\partial }{\partial x}(p(y) + x^{n} ) + \frac{\partial }{\partial y}q(x) = nx^{n - 1} {\mkern 1mu}. $$

This expression is of one sign if \( n \) is odd. So, by Bendixson’s negative criterion, the given system has no periodic solutions if \( n \) is odd. This is the required sufficient condition on \( n \).

Example III

Show that the system

$$ \dot{x} = - y + x(x^{2} + y^{2} - 1),\;\dot{y} = x + y(x^{2} + y^{2} - 1) $$

has no closed orbits inside the circle with center at \( (0,0) \) and radius \( \frac{1}{\sqrt 2 } \).

Solution

Let \( f(x,y) = - y + x(x^{2} + y^{2} - 1) \) and \( g(x,y) = x + y(x^{2} + y^{2} - 1) \). Now

$$ \frac{\partial f}{\partial x} + \frac{\partial g}{\partial y} = 3x^{2} + y^{2} - 1 + x^{2} + 3y^{2} - 1 = 4\left( {x^{2} + y^{2} - \frac{1}{2}} \right) $$

Clearly, \( \left( {\frac{\partial f}{\partial x} + \frac{\partial g}{\partial y}} \right) \) is of one sign inside the circle \( x^{2} + y^{2} = \frac{1}{2} \), which is a simply connected region in \( {\mathbb{R}}^{2} \). Hence by Bendixson’s negative criterion, the given system has no closed orbits inside the circle with center at \( (0,0) \) and radius \( \frac{1}{\sqrt 2 } \). We also see that \( \left( {\frac{\partial f}{\partial x} + \frac{\partial g}{\partial y}} \right) \) is of one sign outside this circle, but the exterior region is not simply connected, so Bendixson’s criterion cannot be applied to it.
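The divergence computation in this example is easy to confirm symbolically; a sympy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = -y + x * (x**2 + y**2 - 1)
g = x + y * (x**2 + y**2 - 1)

div = sp.diff(f, x) + sp.diff(g, y)

# The divergence equals 4*(x^2 + y^2 - 1/2): negative inside r = 1/sqrt(2)
assert sp.simplify(div - 4 * (x**2 + y**2 - sp.Rational(1, 2))) == 0
```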

5.4.4 Dulac’s Criterion

There are some systems for which Bendixson’s negative criterion fails to rule out the existence of closed paths. For example, consider the system \( \dot{x} = y,\;\dot{y} = x - y + y^{2} \). Here \( \frac{\partial f}{\partial x} + \frac{\partial g}{\partial y} = 2y - 1 \) changes sign, so Bendixson’s negative criterion is inconclusive. However, a generalization of this criterion, due to Dulac, may still rule out closed paths. The criterion is given below.

Theorem 5.9

Consider the system \( \dot{x} = f(x,y),\;\dot{y} = g(x,y) \) , where \( f(x,y) \), \( g(x,y) \) are continuously differentiable functions in a simply connected region \( D \) of \( {\mathbb{R}}^{2} \) . If there exists a real-valued continuously differentiable function \( \rho = \rho (x,y) \) in \( D \) such that

$$ \frac{\partial (\rho f)}{\partial x} + \frac{\partial (\rho g)}{\partial y} $$

is of one sign throughout \( D \), then the system has no closed orbits (periodic solutions) lying entirely in \( D \).

Proof

If possible, let there be a closed orbit \( C \) lying in the region \( D \). Let \( A \) be the region bounded by \( C \). Then by the divergence theorem, we have

$$ \iint\limits_{A} {\left( {\frac{\partial (\rho f)}{\partial x} + \frac{\partial (\rho g)}{\partial y}} \right)}{\text{d}}x{\text{d}}y = \oint\limits_{C} {(\rho f,\rho g)} \cdot {\mathop n\limits_{\sim}} {\text{d}}l = \oint\limits_{C} \rho (f,g) \cdot {\mathop n\limits_{\sim}} {\text{d}}l $$

where \( {\mathop n\limits_{\sim}} \) is the unit outward drawn normal to the closed orbit \( C \) and \( {\text{d}}l \) is an elementary line element along \( C \) (Fig. 5.2).

Fig. 5.2
figure 2

Sketch of the domains

Since \( (f,g) \) is perpendicular to \( {\mathop n\limits_{\sim}} \), we have

$$ \oint\limits_{C} \rho (f,g) \cdot {\mathop n\limits_{\sim}} {\text{d}}l = 0{\mkern 1mu}. $$

So, \( \iint\limits_{A} {\left( {\frac{\partial (\rho f)}{\partial x} + \frac{\partial (\rho g)}{\partial y}} \right)}{\text{d}}x{\text{d}}y = 0{\mkern 1mu} \). This yields a contradiction, since \( \left( {\frac{\partial (\rho f)}{\partial x} + \frac{\partial (\rho g)}{\partial y}} \right) \) is of one sign. Hence no such closed orbit \( C \) can exist.

Remarks

  1. (i)

    The function \( \rho = \rho (x,y) \) is called the weight function.

  2. (ii)

    Bendixson’s negative criterion is a particular case of Dulac’s criterion with \( \rho = 1 \).

  3. (iii)

    The main difficulty of Dulac’s criterion is to choose the weight function \( \rho \). There is no specific rule for choosing this function.

Example

Show that the system \( \dot{x} = x(\alpha - ax - by),\;\dot{y} = y(\beta - cx - dy) \) where \( a,d \,>\, 0 \) has no closed orbits in the positive quadrant of \( {\mathbb{R}}^{2} \).

Solution

Let \( D = \left\{ {(x,y) \in {\mathbb{R}}^{2} :x,y \,>\, 0} \right\} \). Clearly, \( D \) is a simply connected region in \( {\mathbb{R}}^{2} \). Let \( f(x,y) = x(\alpha - ax - by),\;g(x,y) = y(\beta - cx - dy){\mkern 1mu}. \) Consider the weight function \( \rho (x,y) = \frac{1}{xy}{\mkern 1mu} \). Then

$$ \frac{\partial (\rho f)}{\partial x} + \frac{\partial (\rho g)}{\partial y} = \frac{\partial }{\partial x}\left( {\frac{\alpha - ax - by}{y}} \right) + \frac{\partial }{\partial y}\left( {\frac{\beta - cx - dy}{x}} \right) = - \left( {\frac{a}{y} + \frac{d}{x}} \right) \,<\, 0\;\;\forall (x,y) \in D{\mkern 1mu}. $$

Again, the functions \( f \), \( g \) and \( \rho \) are continuously differentiable in \( D \). Hence by Dulac’s criterion, the system has no closed orbits in the positive quadrant \( x,y \,>\, 0 \) of \( {\mathbb{R}}^{2} \).
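The effect of the weight function can be verified symbolically; a sympy sketch of the computation above:

```python
import sympy as sp

x, y, a, b, c, d, alpha, beta = sp.symbols('x y a b c d alpha beta',
                                           positive=True)
f = x * (alpha - a * x - b * y)
g = y * (beta - c * x - d * y)
rho = 1 / (x * y)  # Dulac weight function used in the example

div = sp.simplify(sp.diff(rho * f, x) + sp.diff(rho * g, y))

# div = -(a/y + d/x), strictly negative for x, y > 0
assert sp.simplify(div + a / y + d / x) == 0
```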

5.5 Limit Cycles

A limit cycle (a cycle in the limiting sense) is an isolated closed path such that the neighboring paths (or trajectories) are not closed. The neighboring trajectories approach the closed path or move away from it spirally. This is a nonlinear phenomenon and occurs in many physical systems such as the path of a satellite, biochemical processes, predator–prey models, nonlinear electric circuits, economic growth models, ecology, the beating of the heart, self-excited vibrations in bridges and airplane wings, daily rhythms in human body temperature, hormone secretion, etc. Linear systems cannot support limit cycles. There are basically three types of limit cycles, namely stable, unstable, and semistable limit cycles. A limit cycle is said to be stable (or attracting) if it attracts all neighboring trajectories. If the neighboring trajectories are repelled from the limit cycle, then it is called an unstable (or repelling) limit cycle. A semistable limit cycle is one which attracts trajectories from one side and repels them from the other. These three types of limit cycles are shown in Fig. 5.3.

Fig. 5.3
figure 3

a Stable, b unstable, c semistable limit cycles

Scientifically, the stable limit cycles are very important.

5.5.1 Poincaré–Bendixson Theorem

So far we have discussed theorems that give criteria for the nonexistence of periodic orbits of a system in some region of the phase plane. It is extremely difficult to prove the existence of a limit cycle or periodic solution for a nonlinear system of \( n \,\ge\,3 \) variables. The Poincaré–Bendixson theorem permits us to prove the existence of at least one periodic orbit of a system in \( {\mathbf{\mathbb{R}}}^{2} \) under certain conditions. The main objective in applying this theorem is to find an ‘annular region’ that does not contain any equilibrium point of the system, in which one can then find at least one periodic orbit. The proof of the theorem is quite complicated because of the topological concepts involved in it.

Theorem 5.10

Suppose that

  1. (i)

    \( R \) is a closed, bounded subset of the phase plane.

  2. (ii)

    \( {\mathop {\dot{x}}\limits_{\sim }} = {\mathop f\limits_{\sim}} ({\mathop x\limits_{\sim}} ) \) is a continuously differentiable vector field on an open set containing \( R \).

  3. (iii)

    \( R \) does not contain any fixed points of the system.

  4. (iv)

    There exists a trajectory \( C \) of the system that lies in \( R \) at some time \( t_{0} \) , say, and remains in \( R \) for all future time \( t \,\ge \,t_{0} \).

    Then \( C \) is either itself a closed orbit or it spirals towards a closed orbit as time \( t \to \infty \) . In either case, the system has a closed orbit in \( R \).

To explain the theorem, we consider a region \( R \) consisting of the two dashed curves together with the ring-shaped region between them, as depicted in Fig. 5.4. Every path \( C \) through a boundary point at \( t = t_{0} \) must enter \( R \) and cannot leave it. The theorem asserts that \( C \) must spiral toward a closed path \( C_{0} \). The closed path \( C_{0} \) must surround a fixed point, say \( P \), and the region \( R \) must exclude all fixed points of the system.

Fig. 5.4
figure 4

Sketch of the annular region

The Poincaré–Bendixson theorem is quite satisfying from the theoretical point of view. But in general, it is rather difficult to apply. We give an example that shows how to use the theorem to prove the existence of at least one periodic orbit of the system.

Example

Consider the system \( \dot{x} = x - y - x(x^{2} + 2y^{2} ),\;\dot{y} = x + y - y(x^{2} + 2y^{2} ){\mkern 1mu}. \) The origin \( (0,0) \) is a fixed point of the system. Other fixed points must satisfy \( \dot{x} = 0,\;\dot{y} = 0 \). These give

$$ x = y + x(x^{2} + 2y^{2} ),\;x = - y + y(x^{2} + 2y^{2} ){\mkern 1mu}. $$

A sketch of these two curves shows that they cannot intersect except at the origin. So, the origin is the only fixed point of the system. We now convert the system into polar coordinates \( (r,\theta ) \) using the relations \( x = r\cos \theta ,\;y = r\sin \theta \) where \( r^{2} = x^{2} + y^{2} \) and \( \tan \theta = y/x \). Differentiating the expression \( r^{2} = x^{2} + y^{2} \) with respect to \( t \), we have

$$ \begin{aligned} r\dot{r} & = x\dot{x} + y\dot{y} \\ & = x[x - y - x(x^{2} + 2y^{2} )] + y[x + y - y(x^{2} + 2y^{2} )] \\ & = x^{2} + y^{2} - (x^{2} + y^{2} )(x^{2} + 2y^{2} ) \\ & = r^{2} - r^{2} (r^{2} + r^{2} \sin^{2} \theta ) \\ & = r^{2} - (1 + \sin^{2} \theta )r^{4} \\ \Rightarrow \dot{r} & = r - (1 + \sin^{2} \theta )r^{3} {\mkern 1mu}. \\ \end{aligned} $$

Similarly, differentiating \( \tan \theta = y/x \) with respect to \( t \), we get \( \dot{\theta } = 1 \).

We see that \( \dot{r} \,>\, 0 \) for all \( \theta \) if \( (r - 2r^{3} ) \,>\, 0 \), that is, if \( r^{2} \,<\, 1/2 \), that is, if \( r \,<\, 1/\sqrt 2 \), and \( \dot{r} \,<\, 0 \) for all \( \theta \) if \( (r - r^{3} ) \,<\, 0 \), that is, if \( r^{2} \,>\, 1 \), that is, if \( r \,>\, 1 \). We now define an annular region \( R \) on which we can apply the Poincaré–Bendixson theorem. Consider the annular region

$$ R = \left\{ {(r,\theta ):\frac{1}{\sqrt 2 } \,\le\,r \,\le\,1} \right\}{\mkern 1mu}. $$

Since the origin is the only fixed point of the system, the region \( R \) does not contain any fixed point of the system. Again, since \( \dot{r} \,>\, 0 \) for \( r \,<\, 1/\sqrt 2 \) and \( \dot{r} \,<\, 0 \) for \( r \,>\, 1 \), all trajectories in \( R \) will remain in \( R \) for all future time. Hence, by Poincaré–Bendixson theorem, there exists at least one periodic orbit of the system in the annular region \( R \).
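The trapping property of the annulus can be spot-checked numerically; a sketch evaluating \( \dot{r} = r - (1 + \sin^{2}\theta)r^{3} \) just inside and just outside the annulus (the test radii 0.7 and 1.05 are my choice):

```python
import numpy as np

def r_dot(r, theta):
    # Radial equation of the example system in polar coordinates
    return r - (1.0 + np.sin(theta)**2) * r**3

theta = np.linspace(0.0, 2.0 * np.pi, 1000)

# The flow points outward just inside r = 1/sqrt(2) and inward just
# outside r = 1, so trajectories entering the annulus cannot leave it.
assert np.all(r_dot(0.70, theta) > 0)
assert np.all(r_dot(1.05, theta) < 0)
```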

Again, consider another system below

$$ \begin{aligned} \dot{x} & = - y + x(x^{2} + y^{2} )\sin \frac{1}{{\sqrt {x^{2} + y^{2} } }} \\ \dot{y} & = x + y(x^{2} + y^{2} )\sin \frac{1}{{\sqrt {x^{2} + y^{2} } }}. \\ \end{aligned} $$

The origin is an equilibrium point of the system. In polar coordinates, the system can be transformed as

$$ \dot{r} = r^{3} \sin \frac{1}{r},\;\dot{\theta } = 1. $$

This system has limit cycles \( {\Gamma}_{n} \) lying on the circles \( r = 1/(n\pi ) \). These limit cycles accumulate at the origin as \( n \) increases; that is, the distance between the limit cycle \( \Gamma _{n} \) and the equilibrium point at the origin decreases as \( n \) increases and tends to zero as \( n \to \infty \). Among these limit cycles, the cycles \( \Gamma _{2n} \) are stable while the others are unstable. The existence of finitely many limit cycles is of great importance physically. The following theorem gives a criterion for a system to have a finite number of limit cycles.
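A numerical sketch confirming the invariant circles and the alternating sign of \( \dot{r} \) between them:

```python
import numpy as np

def r_dot(r):
    # Radial equation r' = r^3 * sin(1/r) of the example system
    return r**3 * np.sin(1.0 / r)

# Each circle r = 1/(n*pi) is invariant: r' vanishes there (up to rounding)
for n in range(1, 8):
    assert abs(r_dot(1.0 / (n * np.pi))) < 1e-12

# Between consecutive cycles the sign of r' alternates, which is why
# successive limit cycles alternate between attracting and repelling
assert r_dot(2.0 / (3.0 * np.pi)) < 0   # between r = 1/(2*pi) and r = 1/pi
assert r_dot(2.0 / (5.0 * np.pi)) > 0   # between r = 1/(3*pi) and r = 1/(2*pi)
```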

Theorem 5.11

(Dulac) In any bounded region of the plane, a planar analytic system \( {\mathop {\dot{x}}\limits_{\sim }} = {\mathop f\limits_{\sim}} ({\mathop x\limits_{\sim}}) \) with \( {\mathop f\limits_{\sim}} ({\mathop x\limits_{\sim}}) \) analytic in \( {\mathbf{\mathbb{R}}}^{2} \) has at most a finite number of limit cycles. In other words, any polynomial system has at most a finite number of limit cycles in \( {\mathbf{\mathbb{R}}}^{2} \).

Theorem 5.12

(Poincaré) A planar analytic system (5.6) cannot have an infinite number of limit cycles that accumulate on a cycle of (5.6).

5.5.2 Liénard System

Consider the system

$$ \left. {\begin{array}{*{20}l} {\dot{x} = y - F(x)} \hfill \\ {\dot{y} = - g(x)} \hfill \\ \end{array} } \right\} $$
(5.9)

This system can also be written as a second-order differential equation of the form

$$ {\ddot x} + f(x)\dot{x} + g(x) = 0 $$
(5.10)

where \( f(x) = F^{\prime}(x) \). This equation is popularly known as the Liénard equation, named after the French physicist A. Liénard, who derived it in 1928 in connection with nonlinear electrical circuits. The Liénard equation is a generalization of the equation of the van der Pol oscillator (due to the Dutch electrical engineer B. van der Pol, in connection with a diode circuit):

$$ {\ddot x} + \mu (x^{2} - 1)\dot{x} + x = 0{\mkern 1mu} , $$

where \( \mu \,\ge\,0 \) is a parameter. The Liénard equation can also be interpreted as the motion of a unit mass subject to a nonlinear damping force \( ( - f(x)\dot{x}) \) and a nonlinear restoring force \( ( - g(x)) \). Under certain conditions on the functions \( F \) and \( g \), Liénard proved the existence and uniqueness of a stable limit cycle of the system.

Theorem 5.13

(Liénard Theorem ) Suppose two functions \( f(x) \) and \( g(x) \) satisfy the following conditions:

  1. (i)

    \( f(x) \) and \( g(x) \) are continuously differentiable for all \( x \),

  2. (ii)

    \( g( - x) = - g(x)\;\forall {\mkern 1mu} x \) , that is, \( g(x) \) is an odd function,

  3. (iii)

    \( g(x) \,>\, 0 \,{\text{for}} \,x \,>\, 0 \),

  4. (iv)

    \( f( - x) = f(x)\;\forall {\mkern 1mu} x \) , that is, \( f(x) \) is an even function,

  5. (v)

    The odd function, \( F(x) = \int\limits_{0}^{x} f (u)du \) , has exactly one positive zero at \( x = \alpha \) , say, \( F(x) \) is negative for \( 0 \,<\, x \,<\, \alpha \) , and \( F(x) \) is positive and nondecreasing for \( x \,>\, \alpha \) and \( F(x) \to \infty \) as \( x \to \infty \).

Then the Liénard equation (5.10) has a unique stable limit cycle surrounding the origin of the phase plane .

The theorem can also be stated as follows.

Under the assumptions that \( F,g \in C^{1} ({\mathbf{\mathbb{R}}}) \), \( F \) and \( g \) are odd functions of \( x \), \( xg(x) \,>\, 0 \) for \( x \ne 0 \), \( F(0) = 0 \), \( F^{\prime}(0) \,<\, 0 \), \( F \) has single positive zero at \( x = \alpha \) , and \( F \) increases monotonically to infinity for \( x \,\ge\,\alpha \) as \( x \to \infty \) , it follows that the Liénard system (5.9) has a unique stable limit cycle.

The Liénard system is a very special type of equation that has a unique stable limit cycle. The following results concern the existence of finitely many limit cycles for some special classes of equations. In 1958, the Chinese mathematician Zhang Zhifen proved a theorem on the existence and number of limit cycles of a system. The theorem is given below.

Theorem 5.14

(Zhang theorem-I) Under the assumptions that \( a \,<\, 0 \,<\, b \), \( F,g \in C^{1} (a,b) \), \( xg(x) \,>\, 0 \) for \( x \ne 0 \), \( G(x) \to \infty \) as \( x \to a \) if \( a = - \infty \) and \( G(x) \to \infty \) as \( x \to b \) if \( b = \infty \), \( f(x)/g(x) \) is monotone increasing on \( (a,0) \,\cup\, (0,b) \) and is not constant in any neighborhood of \( x = 0 \) , it follows that the system (5.9) has at most one limit cycle in the region \( a \,<\, x \,<\, b \) and, if it exists, it is stable, where \( G(x) = \int\limits_{0}^{x} {g(u){\text{d}}u} \).

Again in 1981, he proved another theorem relating to the number of limit cycles of Liénard-type systems.

Theorem 5.15

(Zhang theorem-II) Under the assumptions that \( g(x) = x \), \( F \in C^{1} ({\mathbf{\mathbb{R}}}) \), \( f(x) \) is an even function with exactly two positive zeros \( a_{1} ,a_{2} (a_{1} \,<\, a_{2} ) \) with \( F(a_{1} ) \,>\, 0 \) and \( F(a_{2} ) \,<\, 0 \) , and \( f(x) \) is monotone increasing for \( x \,>\, a_{2} \) , it follows that the system (5.9) has at most two limit cycles.

If \( g(x) = x \) and \( F(x) \) is a polynomial, then one can ascertain the number of limit cycles of a system from the following theorem.

Theorem 5.16

(Lins, de Melo and Pugh) The system (5.9) with \( g(x) = x \), \( F(x) = a_{1} x + a_{2} x^{2} + a_{3} x^{3} \) , and \( a_{1} a_{3} \,<\, 0 \) has exactly one limit cycle. It is stable if \( a_{1} \,<\, 0 \) and unstable if \( a_{1} \,>\, 0 \).

Remark

The Russian mathematician Rychkov proved that the system (5.9) with \( g(x) = x \) and \( F(x) = a_{1} x + a_{3} x^{3} + a_{5} x^{5} \) has at most two limit cycles.

See Perko [4] for a detailed discussion.

The great mathematician David Hilbert presented 23 outstanding mathematical problems to the Second International Congress of Mathematicians in 1900, and the 16th Hilbert problem asks for the maximum number of limit cycles \( H_{n} \) of an \( n \)th degree polynomial system

$$ \begin{aligned} \dot{x} & = \sum\limits_{i + j = 0}^{n} {a_{ij} x^{i} y^{j} } \\ \dot{y} & = \sum\limits_{i + j = 0}^{n} {b_{ij} x^{i} y^{j} } \\ \end{aligned} $$

For given \( (a,b) \in {\mathbf{\mathbb{R}}}^{(n + 1)(n + 2)} \), the number of limit cycles \( H_{n} (a,b) \) of the above system is finite; this follows from Dulac’s theorem. Yet even for a nonlinear system in \( {\mathbf{\mathbb{R}}}^{2} \), it is difficult to determine the number of limit cycles. In 1962, the Russian mathematician N.V. Bautin proved that any quadratic system has at most three limit cycles bifurcating from a focus. However, in 1979 the Chinese mathematicians S.L. Shi, L.S. Chen, and M.S. Wang established that a quadratic system can have four limit cycles; this was also proved by Y.X. Chin in 1984. A cubic system can have at least 11 limit cycles. Thus the determination of the number of limit cycles of a general system is extremely difficult.

5.5.3 van der Pol Oscillator

We now discuss the van der Pol equation, which is a special type of nonlinear oscillator. Oscillators of this type appear in electrical circuits, diode valves, etc. The van der Pol equation is given by

$$ {\ddot x} + \mu (x^{2} - 1)\dot{x} + x = 0,\;\mu \,>\, 0{\mkern 1mu}. $$

Comparing with the Liénard equation (5.10), we get \( f(x) = \mu (x^{2} - 1) \) and \( g(x) = x \). Clearly, the conditions (i) to (iv) of the Liénard theorem are satisfied. We only check the condition (v). So we have

$$ \begin{aligned} F(x) & = \int\limits_{0}^{x} f(u){\text{d}}u = \int\limits_{0}^{x} {\mu (u^{2} - 1){\text{d}}u} \\ & = \mu \left[ {\frac{{u^{3} }}{3} - u} \right]_{0}^{x} = \mu \left( {\frac{{x^{3} }}{3} - x} \right) \\ \Rightarrow F(x) & = \frac{1}{3}\mu x(x^{2} - 3). \\ \end{aligned} $$

The function \( F(x) \) is odd. \( F(x) \) has exactly one positive zero at \( x = \sqrt 3 \). \( F(x) \) is negative for \( 0 \,<\, x \,<\, \sqrt 3 \), it is positive and nondecreasing for \( x \,>\, \sqrt 3 \), and \( F(x) \to \infty \) as \( x \to \infty \). So, condition (v) is satisfied with \( \alpha = \sqrt 3 \). Hence, the van der Pol equation has a unique stable limit cycle, provided \( \mu \,>\, 0 \). The graphical representations of the limit cycle and the solution \( x(t) \) are displayed below, taking the initial point \( x(0) = 0.5,\;\dot{x}(0) = 0 \) and \( \mu = 1.5 \) and \( 0.5 \), respectively.
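A numerical sketch (using scipy, with the parameter values above) showing that trajectories settle onto a limit cycle of amplitude close to 2:

```python
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, state, mu):
    # van der Pol equation written as a first-order system
    x, v = state
    return [v, -mu * (x**2 - 1.0) * v - x]

mu = 1.5
sol = solve_ivp(vdp, (0.0, 100.0), [0.5, 0.0], args=(mu,),
                dense_output=True, rtol=1e-9, atol=1e-9)

# After the transient, the maximum displacement on the cycle is close to 2
t_late = np.linspace(80.0, 100.0, 4000)
x_late = sol.sol(t_late)[0]
assert 1.9 < np.max(x_late) < 2.1
```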

The behavior of solutions for large values of the parameter \( \mu \) can be understood from the figures. The graph of \( t\,{\text{ versus}}\,x \) for \( \mu = 1.5 \) (cf. Fig. 5.5a) is characterized by fast changes of the position \( x \) near certain values of time \( t \). The van der Pol equation can be expressed as \( {\ddot x} + \mu \dot{x}(x^{2} - 1) = \frac{{\text{d}}}{{{\text{d}}t}}\left( {\dot{x} + \mu \left( {\frac{1}{3}x^{3} - x} \right)} \right) \). Now, we put \( f(x) = - x + x^{3} /3 \) and \( \mu y = \dot{x} + \mu f(x) \). So, the van der Pol equation becomes

Fig. 5.5
figure 5

a van der Pol oscillator for \( \mu = 1.5 \). b van der Pol oscillator for \( \mu = 0.5 \)

$$ \begin{aligned} \dot{x} & = \mu (y - f(x)) \\ \dot{y} & = - x/\mu \\ \end{aligned} $$

Therefore,

$$ \frac{{{\text{d}}y}}{{{\text{d}}x}} = \frac{{\dot{y}}}{{\dot{x}}} = - \frac{x}{{\mu^{2} \left( {y - f(x)} \right)}} \Rightarrow \left( {y - f(x)} \right)\frac{{{\text{d}}y}}{{{\text{d}}x}} = - \frac{x}{{\mu^{2} }}. $$

For \( \mu \,\gg \,1 \), the right-hand side of the above equation is small and the orbits can be approximately described by the equation

$$ \left( {y - f(x)} \right)\frac{{{\text{d}}y}}{{{\text{d}}x}} = 0 \Rightarrow {\text{either }}y = f(x) \, {\text{or}} \, y = {\text{constant}} . $$

A sketch of the function \( y = f(x) = - x + x^{3} /3 \) indicates that the variable \( x(t) \) changes quickly away from the curve, whereas the variable \( y(t) \) changes very slowly. Applying the Poincaré–Bendixson theorem in the annular region \( R = \left\{ {(x,y): - 2 \,\le\,x \,\le\,2, - 1 \,\le\,y \,\le\,1} \right\}\backslash \left\{ {(x,y): - 1 \,\le\,x \,\le\,1, - 0.5 \,\le\,y \,\le\,0.5} \right\} \), which does not contain the equilibrium point of the system (the origin), we find that the limit cycle must be located in a neighborhood of the curve \( y = f(x) \), as shown in Fig. 5.6. Now, the relaxation period \( T \) is given by

Fig. 5.6
figure 6

Relaxation oscillation of the van der Pol equation for \( \mu = 90 \)

$$ T = - \mu \int\limits_{ABCDA} {\frac{{{\text{d}}y}}{x}} = - 2\mu \int\limits_{AB} {\frac{{{\text{d}}y}}{x} - 2\mu \int\limits_{BC} {\frac{{{\text{d}}y}}{x}} }. $$

where the first integral corresponds to the slow motion and so gives the largest contribution to the period \( T \). With \( y = f(x) = - x + x^{3} /3 \), we see that

$$ - 2\mu \int\limits_{AB} {\frac{{{\text{d}}y}}{x} = - 2\mu \int\limits_{ - 2}^{ - 1} {\frac{{ - 1 + x^{2} }}{x}{\text{d}}x = (3 - 2\log 2)\mu } }. $$

and with the equation \( \left( {y - f(x)} \right)\frac{{{\text{d}}y}}{{{\text{d}}x}} = - \frac{x}{{\mu^{2} }} \), the second integral is obtained as

$$ - 2\mu \int\limits_{BC} {\frac{{{\text{d}}y}}{x} = \frac{2}{\mu }\int\limits_{BC} {\frac{x}{y - f(x)}{\text{d}}x} }\,. $$

For \( \mu \,\gg\, 1 \), we get approximately the equation \( \left( {y - f(x)} \right)\frac{{{\text{d}}y}}{{{\text{d}}x}} = 0 \). The orders of \( \left( {y - f(x)} \right) \) and \( \frac{{{\text{d}}y}}{{{\text{d}}x}} \) depend upon the parameter \( \mu \) and they are inversely proportional to each other. One can obtain the orders of \( x \) and \( y \). From Fig. 5.6, it is easily found that for \( \mu \,\gg\, 1 \), \( y \sim O(\mu^{2/3} ) \), and so \( y - f(x) \sim O(\mu^{ - 2/3} ) \). Therefore, the integral \( - 2\mu \int\limits_{BC} {\frac{{{\text{d}}y}}{x}} \) must be of order \( O(\mu^{ - 1/3} ) \) for \( \mu \,\gg\, 1 \). Hence the period \( T \) of the periodic solution of the van der Pol equation is given by

$$ T = (3 - 2\log 2)\mu + O(\mu^{ - 1/3} )\,{\text{when}}\,\mu \,\gg\, 1. $$

Relaxation oscillation is a periodic phenomenon in which a slow build-up is followed by a fast discharge. One may think that there are two time scales, viz., the fast and slow scales that operate sequentially.
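The asymptotic period can be checked against direct numerical integration; a sketch with \( \mu = 40 \) (my choice), measuring the period from upward zero crossings of \( x(t) \):

```python
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, state, mu):
    # van der Pol equation as a first-order system; stiff for large mu
    x, v = state
    return [v, -mu * (x**2 - 1.0) * v - x]

mu = 40.0
sol = solve_ivp(vdp, (0.0, 400.0), [2.0, 0.0], args=(mu,),
                method='LSODA', dense_output=True, rtol=1e-8, atol=1e-8)

# Locate upward zero crossings of x(t) after the transient has died out
t = np.linspace(100.0, 400.0, 200000)
x = sol.sol(t)[0]
crossings = t[1:][(x[:-1] < 0) & (x[1:] >= 0)]
T_measured = np.mean(np.diff(crossings))

# Leading-order relaxation period (3 - 2 log 2) * mu; the O(mu^(-1/3))
# correction accounts for the remaining few-percent discrepancy
T_asymptotic = (3.0 - 2.0 * np.log(2.0)) * mu
assert abs(T_measured / T_asymptotic - 1.0) < 0.05
```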

We now discuss another consequence of nonlinear oscillations. Weakly nonlinear oscillations have been observed in many physical systems. Let us consider the van der Pol equation \( {\ddot x} + \varepsilon (x^{2} - 1)\dot{x} + x = 0 \) and the Duffing equation \( {\ddot x} + x + \varepsilon x^{3} = 0 \) where \( \varepsilon \,\ll\, 1 \). The phase diagrams for the two nonlinear oscillators with the initial condition \( x(0) = a,\dot{x}(0) = 0 \) are shown in Fig. 5.7.

Fig. 5.7
figure 7

Limit cycles of the a van der Pol oscillator and b Duffing oscillator for \( \varepsilon = 0.1, a = 0.5 \)

The phase trajectories of the van der Pol oscillator build up slowly, taking many cycles for the amplitude to grow. The trajectories finally reach the circular limit cycle of radius 2, as shown in Fig. 5.7a. In the case of the Duffing oscillator, the phase trajectories form a closed path and the frequency of oscillation depends on \( \varepsilon \) and \( a \). The Duffing equation is conservative. It has a nonlinear center at the origin. For sufficiently small \( \varepsilon \), all orbits close to the origin are periodic, with no long-term change in amplitude. The mathematical theories, such as regular perturbation and the method of averaging, are well established and can be found in Guckenheimer and Holmes [5], Grimshaw [6], Verhulst [7], and Strogatz [8]. We present a few examples of limit cycles as follows:

Example 5.1

Show that the equation \( {\ddot x} + \mu (x^{4} - 1)\dot{x} + x = 0 \) has a unique stable limit cycle if \( \mu \,>\, 0 \).

Solution

Comparing the given equation with the Liénard equation (5.10), we get \( f(x) = \mu (x^{4} - 1),\;g(x) = x{\mkern 1mu} \). Let \( \mu \,>\, 0 \). Clearly, conditions (i) to (iv) of the Liénard theorem are satisfied. We now check condition (v). We have

$$ \begin{aligned} F(x) & = \int\limits_{0}^{x} f(u){\text{d}}u = \int\limits_{0}^{x} {\mu (u^{4} - 1){\text{d}}u} \\ & = \mu \left[ {\frac{{u^{5} }}{5} - u} \right]_{0}^{x} = \mu \left( {\frac{{x^{5} }}{5} - x} \right) \\ \Rightarrow F(x) & = \frac{1}{5}\mu x(x^{4} - 5) = \frac{1}{5}\mu x(x^{2} + \sqrt 5 {\mkern 1mu} )(x^{2} - \sqrt 5 ). \\ \end{aligned} $$

The function \( F(x) \) is odd. \( F(x) \) has exactly one positive zero at \( x = \sqrt[4]{5} \), is negative for \( 0 \,<\, x \,<\, \sqrt[4]{5} \), is positive and nondecreasing for \( x \,>\, \sqrt[4]{5} \), and \( F(x) \to \infty \) as \( x \to \infty \). So, condition (v) is satisfied with \( \alpha = \sqrt[4]{5} \). Thus the given equation has a unique stable limit cycle if \( \mu \,>\, 0 \).

Example 5.2

Find analytical solution of the following system

$$ \begin{aligned} \dot{x} & = - y + x(1 - x^{2} - y^{2} ) \\ \dot{y} & = x + y(1 - x^{2} - y^{2} ) \\ \end{aligned} $$

and then obtain limit cycle of the system.

Solution

Let us convert the system into polar coordinates \( (r,\theta ) \) by putting \( x = r\cos \theta ,\;y = r\sin \theta \),

$$ \begin{aligned} r\dot{r} & = x\dot{x} + y\dot{y} \\ & = x[ - y + x(1 - x^{2} - y^{2} )] + y[x + y(1 - x^{2} - y^{2} )] \\ & = (x^{2} + y^{2} )(1 - x^{2} - y^{2} ) \\ & = r^{2} (1 - r^{2} ) \\ \Rightarrow \dot{r} & = r(1 - r^{2} ) \\ \end{aligned} $$

Similarly, differentiating \( \tan \theta = y/x \) with respect to \( t \), we get

$$ \begin{aligned} & \quad \sec^{2} (\theta )\dot{\theta } = \frac{{x\dot{y} - y\dot{x}}}{{x^{2} }} \\ & \Rightarrow \left( {1 + \frac{{y^{2} }}{{x^{2} }}} \right)\dot{\theta } = \frac{{x[x + y(1 - x^{2} - y^{2} )] - y[ - y + x(1 - x^{2} - y^{2} )]}}{{x^{2} }} \\ & \Rightarrow (x^{2} + y^{2} )\dot{\theta } = x^{2} + y^{2} \\ & \Rightarrow \dot{\theta } = 1 \\ \end{aligned} $$

So, the system becomes

$$ \left. {\begin{array}{*{20}l} {\dot{r} = r(1 - r^{2} )} \hfill \\ {\dot{\theta } = 1} \hfill \\ \end{array} } \right\} $$
(5.11)

We now solve this system. We have

$$ \dot{r} = \frac{{{\text{d}}r}}{{{\text{d}}t}} = r(1 - r^{2} ) \Rightarrow \left( {\frac{1}{r} + \frac{r}{{1 - r^{2} }}} \right){\text{d}}r = {\text{d}}t $$

Integrating, we get

$$ \log r - \frac{1}{2}\log (1 - r^{2} ) = t - \frac{1}{2}\log c \Rightarrow \, \frac{1}{{r^{2} }} = 1 + ce^{ - 2t} $$

where \( c \) is an arbitrary constant. Similarly

$$ \dot{\theta } = \frac{{{\text{d}}\theta }}{{{\text{d}}t}} = 1 \Rightarrow \theta (t) = t + \theta_{0} $$

where \( \theta_{0} = \theta (t = 0) \). Therefore, the solution of the system (5.11) is

$$ \left. {\begin{array}{*{20}l} {r = \frac{1}{{\sqrt {1 + ce^{ - 2t} } }}} \hfill \\ {\theta = t + \theta_{0} } \hfill \\ \end{array} } \right\} $$

Hence, the corresponding general solution of the original system is given by

$$ \left. {\begin{array}{*{20}l} {x(t) = \frac{{\cos (t + \theta_{0} )}}{{\sqrt {1 + ce^{ - 2t} } }}} \hfill \\ {y(t) = \frac{{\sin (t + \theta_{0} )}}{{\sqrt {1 + ce^{ - 2t} } }}} \hfill \\ \end{array} } \right\} $$

Now, if \( c = 0 \), we have the solutions \( r = 1 \), \( \theta = t + \theta_{0} \). This represents the closed path \( x^{2} + y^{2} = 1 \) in anticlockwise direction (since \( \theta \) increases as \( t \) increases).

If \( c \,<\, 0 \), clearly \( r \,>\, 1 \) and \( r \to 1 \) as \( t \to \infty \). Again, if \( c \,>\, 0 \), then \( r \,<\, 1 \) and \( r \to 1 \) as \( t \to \infty \). This shows that there exists a single closed path \( (r = 1) \), and all other paths approach it spirally from the outside or the inside as \( t \to \infty \). Solution curves for different values of \( c \) are drawn in Fig. 5.8.

Fig. 5.8
figure 8

Solution curves for different values of \( c \)
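The spiraling behavior seen in Fig. 5.8 can be reproduced numerically. The sketch below (an illustration, not from the text) integrates the original system with a classical fourth-order Runge–Kutta scheme and checks that trajectories started inside and outside the unit circle both approach \( r = 1 \):

```python
import math

def rhs(x, y):
    # right-hand side of Example 5.2
    s = 1.0 - x * x - y * y
    return -y + x * s, x + y * s

def rk4(x, y, dt, steps):
    # classical fourth-order Runge-Kutta integration
    for _ in range(steps):
        k1 = rhs(x, y)
        k2 = rhs(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
        k3 = rhs(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
        k4 = rhs(x + dt * k3[0], y + dt * k3[1])
        x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return x, y

# start once inside (r = 0.2) and once outside (r = 2) the unit circle
for x0 in (0.2, 2.0):
    x, y = rk4(x0, 0.0, 0.01, 2000)   # integrate up to t = 20
    r = math.hypot(x, y)
    assert abs(r - 1.0) < 1e-3        # both trajectories approach r = 1
```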

The important point is that the closed orbit \( r = 1 \) is isolated: it is determined by the system itself (its parameters), not by the initial conditions.

From Fig. 5.9, we see that all solutions of the equation tend to the periodic solution (limit cycle) \( S = \left\{ {(x,y):x^{2} + y^{2} = 1} \right\} \).

Fig. 5.9
figure 9

Sketch of the limit cycle of the system

Example 5.3

Show that the system

$$ \dot{x} = y + \frac{x}{{\sqrt {x^{2} + y^{2} } }}\left\{ {1 - (x^{2} + y^{2} )} \right\},\;\dot{y} = - x + \frac{y}{{\sqrt {x^{2} + y^{2} } }}\left\{ {1 - (x^{2} + y^{2} )} \right\} $$

has a stable limit cycle.

Solution

Let us convert the system into polar coordinates \( (r,\theta ) \) using \( x = r\cos \theta ,\;y = r\sin \theta \). Then

$$ r^{2} = x^{2} + y^{2} ,\;\tan \theta = \frac{y}{x} $$

Then the system can be written as

$$ \dot{x} = y + \frac{x}{r}\left( {1 - r^{2} } \right),\;\dot{y} = - x + \frac{y}{r}\left( {1 - r^{2} } \right) $$

Differentiating \( r^{2} = x^{2} + y^{2} \) with respect to \( t \), we have

$$ \begin{aligned} r\dot{r} & = x\dot{x} + y\dot{y} \\ & = x\left[ {y + \frac{x}{r}\left( {1 - r^{2} } \right)} \right] + y\left[ { - x + \frac{y}{r}\left( {1 - r^{2} } \right)} \right] \\ & = \frac{{(x^{2} + y^{2} )}}{r}(1 - r^{2} ) \\ & = r(1 - r^{2} ) \\ \Rightarrow \dot{r} & = 1 - r^{2} \\ \end{aligned} $$

Similarly, differentiating \( \tan \theta = y/x \) with respect to \( t \), we get

$$ \begin{aligned} & \quad \sec^{2} (\theta )\dot{\theta } = \frac{{x\dot{y} - y\dot{x}}}{{x^{2} }} \\ & \Rightarrow \left( {1 + \frac{{y^{2} }}{{x^{2} }}} \right)\dot{\theta } = \frac{{x\left[ { - x + \frac{y}{r}(1 - r^{2} )} \right] - y\left[ {y + \frac{x}{r}(1 - r^{2} )} \right]}}{{x^{2} }} \\ & \Rightarrow (x^{2} + y^{2} )\dot{\theta } = - (x^{2} + y^{2} ) \\ & \Rightarrow \dot{\theta } = - 1 \\ \end{aligned} $$

So, the system becomes

$$ \left. {\begin{array}{*{20}l} {\dot{r} = 1 - r^{2} } \hfill \\ {\dot{\theta } = - 1} \hfill \\ \end{array} } \right\} $$

We now solve this system. We have

$$ \frac{{\text{d}}r}{{{\text{d}}t}} = \dot{r} = 1 - r^{2} \Rightarrow \left( {\frac{1}{1 + r} + \frac{1}{1 - r}} \right){\text{d}}r = 2{\text{d}}t $$

Integrating, we get

$$ \begin{aligned} & \quad \log (1 + r) - \log (1 - r) = 2t + \log A \\ & \Rightarrow \log \left( {\frac{1 + r}{1 - r}} \right) = 2t + \log A \\ & \Rightarrow \frac{1 + r}{1 - r} = Ae^{2t} \\ & \Rightarrow r = \frac{{Ae^{2t} - 1}}{{Ae^{2t} + 1}} \\ \end{aligned} $$

where \( A = \frac{{1 + r_{0} }}{{1 \,-\, r_{0} }} \), \( r_{0} \ne 1 \) being the initial condition.

Now, \( r \to 1 \) as \( t \to \infty \) and the limit cycle in this case is a circle of unit radius.

If \( r_{0} \,>\, 1 \), the trajectory spirals onto the circle \( r = 1 \) from the outside in the clockwise direction (since \( \dot{\theta } = - 1 \)), and if \( r_{0} \,<\, 1 \), it spirals onto the unit circle from the inside in the same direction. Therefore, the limit cycle is stable.
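One can check the closed-form radial solution against a direct numerical integration of \( \dot{r} = 1 - r^{2} \). The following Python fragment (illustrative only) does so for one initial radius inside and one outside the unit circle:

```python
import math

def r_exact(t, r0):
    # r(t) = (A e^{2t} - 1)/(A e^{2t} + 1) with A = (1 + r0)/(1 - r0)
    A = (1.0 + r0) / (1.0 - r0)
    e = A * math.exp(2.0 * t)
    return (e - 1.0) / (e + 1.0)

def r_numeric(t, r0, n=20000):
    # RK4 integration of dr/dt = 1 - r^2
    dt, r = t / n, r0
    f = lambda r: 1.0 - r * r
    for _ in range(n):
        k1 = f(r); k2 = f(r + 0.5*dt*k1); k3 = f(r + 0.5*dt*k2); k4 = f(r + dt*k3)
        r += dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    return r

for r0 in (0.3, 1.8):                                 # inside and outside r = 1
    assert abs(r_exact(5.0, r0) - r_numeric(5.0, r0)) < 1e-9
    assert abs(r_exact(20.0, r0) - 1.0) < 1e-12       # r -> 1 as t -> infinity
```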

Example 5.4

Find the limit cycle of the system

$$ \dot{x} = (x^{2} + y^{2} - 1)x - y\sqrt {x^{2} + y^{2} } {\mkern 1mu} ,\;\dot{y} = (x^{2} + y^{2} - 1)y + x\sqrt {x^{2} + y^{2} } $$

and investigate its stability.

Solution

Let us convert the system into polar coordinates \( (r,\theta ) \) putting

$$ x = r\cos \theta ,\;y = r\sin \theta $$

where \( r^{2} = x^{2} + y^{2} \) and \( \tan \theta = y/x \). Then the system can be written as

$$ \dot{x} = (r^{2} - 1)x - yr,\;\dot{y} = (r^{2} - 1)y + xr{\mkern 1mu}. $$

Differentiating \( r^{2} = x^{2} + y^{2} \) with respect to \( t \)

$$ \begin{aligned} r\dot{r} & = x\dot{x} + y\dot{y} \\ & = x[(r^{2} - 1)x - yr] + y[(r^{2} - 1)y + xr] \\ & = (x^{2} + y^{2} )(r^{2} - 1) = r^{2} (r^{2} - 1) \\ \Rightarrow \dot{r} & = r(r^{2} - 1) \\ \end{aligned} $$

Differentiating \( \tan \theta = y/x \) with respect to \( t \)

$$ \begin{aligned} & \quad \sec^{2} (\theta )\dot{\theta } = \frac{{x\dot{y} - y\dot{x}}}{{x^{2} }} \\ & \Rightarrow \left( {1 + \frac{{y^{2} }}{{x^{2} }}} \right)\dot{\theta } = \frac{{x[(r^{2} - 1)y + xr] - y[(r^{2} - 1)x - yr]}}{{x^{2} }} \\ & \Rightarrow (x^{2} + y^{2} )\dot{\theta } = (x^{2} + y^{2} )r \\ & \Rightarrow \dot{\theta } = r \\ \end{aligned} $$

Therefore, the given system reduces to

$$ \left. {\begin{array}{*{20}l} {\dot{r} = r(r^{2} - 1)} \hfill \\ {\dot{\theta } = r} \hfill \\ \end{array} } \right\} $$

Now,

$$ \frac{{\text{d}}r}{{{\text{d}}t}} = \dot{r} = r(r^{2} - 1) \Rightarrow \left( {\frac{r}{{r^{2} - 1}} - \frac{1}{r}} \right){\text{d}}r = {\text{d}}t $$

Integrating, we have

$$ \log (r^{2} - 1) - 2\log r = 2t + \log c \Rightarrow \, r = \frac{1}{{\sqrt {1 - ce^{2t} } }} $$

where \( c = (r_{0}^{2} - 1)/r_{0}^{2} \), \( r_{0} \ne 0 \) being the initial condition. This gives an unstable limit cycle at \( r = 1 \) as presented in Fig. 5.10.

Fig. 5.10
figure 10

Graphical representation of the unstable limit cycle for the given system
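The instability of the cycle \( r = 1 \) is transparent from the exact radial solution. The short script below (an illustrative check, not part of the text) shows that a trajectory started just inside spirals toward the origin, while one started just outside blows up in finite time:

```python
import math

# Radial equation of Example 5.4: dr/dt = r(r^2 - 1).
# The exact solution r(t) = 1/sqrt(1 - c e^{2t}), c = (r0^2 - 1)/r0^2,
# shows trajectories leaving r = 1 in both directions.

def r_exact(t, r0):
    c = (r0 * r0 - 1.0) / (r0 * r0)
    return 1.0 / math.sqrt(1.0 - c * math.exp(2.0 * t))

# start just inside the unit circle: the trajectory spirals toward the origin
assert r_exact(10.0, 0.99) < 0.01

# start just outside: r grows without bound (escape before t = 3 here)
c = (1.01**2 - 1.0) / 1.01**2          # c > 0
t_escape = 0.5 * math.log(1.0 / c)     # time at which 1 - c e^{2t} vanishes
assert 0.0 < t_escape < 3.0
assert r_exact(0.98 * t_escape, 1.01) > 2.0
```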

Example 5.5

Show that the system

$$\begin{aligned} \dot{x} &= - y + x\left( {\sqrt {x^{2} + y^{2} } - 1} \right)\left( {2 - \sqrt {x^{2} + y^{2} } } \right),\\ \dot{y}& = x + y\left( {\sqrt {x^{2} + y^{2} } - 1} \right)\left( {2 - \sqrt {x^{2} + y^{2} } } \right) \end{aligned} $$

has exactly two limit cycles.

Solution

Setting \( x = r\cos \theta ,y = r\sin \theta \), the given system can be transformed into polar coordinates \( (r,\theta ) \) as

$$ \dot{r} = r(r - 1)(2 - r),\;\dot{\theta } = 1. $$

Limit cycles can occur where \( r(r - 1)(2 - r) = 0 \), that is, at \( r = 0,1,2 \). But \( r = 0 \) corresponds to the unique stable equilibrium point of the system. So the system has exactly two limit cycles, at \( r = 1 \) and \( r = 2 \): the limit cycle at \( r = 1 \) is unstable, while that at \( r = 2 \) is stable. A computer-generated plot of the limit cycles is presented in Fig. 5.11.

Fig. 5.11
figure 11

Plot of the limit cycles of the given system
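The stability assertions follow from the sign of \( \dot{r} \) between the zeros of \( r(r - 1)(2 - r) \); a minimal numerical check (illustrative only) in Python:

```python
def rdot(r):
    # radial equation of Example 5.5
    return r * (r - 1.0) * (2.0 - r)

# between 0 and 1 trajectories fall toward r = 0 (away from r = 1) ...
assert all(rdot(r) < 0 for r in (0.2, 0.5, 0.9))
# ... between 1 and 2 they rise away from r = 1 toward r = 2 ...
assert all(rdot(r) > 0 for r in (1.1, 1.5, 1.9))
# ... and beyond 2 they fall back toward r = 2
assert all(rdot(r) < 0 for r in (2.1, 3.0, 10.0))
# hence r = 1 is an unstable limit cycle and r = 2 a stable one
```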

Example 5.6

Show that the system

$$\begin{aligned} \dot{x} &= - y + x(x^{2} + y^{2} )\sin \left( {\log \left( {\sqrt {x^{2} + y^{2} } } \right)} \right),\\ \dot{y} &= x + y(x^{2} + y^{2} )\sin \left( {\log \left( {\sqrt {x^{2} + y^{2} } } \right)} \right)\end{aligned}$$

has an infinite number of periodic orbits.

Solution

The given system can be transformed into polar coordinates as follows:

$$ \dot{r} = r^{3} \sin (\log r),\;\dot{\theta } = 1. $$

It has periodic orbits where \( \sin (\log r) = 0 \), that is, \( \log r = n\pi \), \( n \in {\mathbb{Z}} \). Hence there is an infinite sequence of isolated periodic orbits, the circles of radii \( r = e^{n\pi } \), \( n = 0, \pm 1, \pm 2, \ldots \), each traversed with period \( 2\pi \) since \( \dot{\theta } = 1 \).
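A quick numerical confirmation (illustrative, not from the text): \( \dot{r} \) vanishes on the circles \( r = e^{n\pi } \), and its sign alternates between successive circles, so the periodic orbits are alternately stable and unstable:

```python
import math

def rdot(r):
    # radial equation of Example 5.6: dr/dt = r^3 sin(log r)
    return r**3 * math.sin(math.log(r))

# rdot vanishes (to rounding) on every circle r = e^{n*pi}
for n in range(-3, 4):
    r = math.exp(n * math.pi)
    assert abs(math.sin(math.log(r))) < 1e-12

# between successive circles the sign of rdot alternates, so the
# periodic orbits are alternately unstable and stable
signs = [rdot(math.exp((n + 0.5) * math.pi)) > 0 for n in range(-2, 3)]
assert signs == [True, False, True, False, True]
```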

5.6 Applications

In this section, we discuss two dynamical processes, viz., the biochemical reaction of glycolysis in living cells and the interaction dynamics of prey and predator populations. Both processes exhibit limit cycles when certain conditions on the parameters of the model equations are fulfilled.

5.6.1 Glycolysis

Every living organism requires energy to perform its daily activities, and this energy is produced through respiration. Our daily meals contain a large amount of glucose (C6H12O6). Glycolysis is the process that breaks down glucose, producing pyruvic acid and two moles of ATP. Through the Krebs cycle, the pyruvic acid is finally converted into energy, which is stored in the cells. So glycolysis is a fundamental biochemical reaction by which living cells obtain energy from breaking down glucose. It was observed that the process may give rise to oscillations in the concentrations of various intermediate chemical reactants. A set of equations for this oscillatory motion was derived by Sel’kov (1968). In dimensionless form, the equations are expressed by

$$ \left. {\begin{array}{*{20}l} {\dot{x} = - x + ay + x^{2} y} \hfill \\ {\dot{y} = b - ay - x^{2} y} \hfill \\ \end{array} } \right\} $$

where \( x \) and \( y \) are proportional to the concentrations of adenosine diphosphate (ADP) and fructose-6-phosphate (F6P), respectively. The positive constants \( a \) and \( b \) are kinetic parameters of the glycolysis process. The same nonlinear term appears in both equations with opposite signs. The system has a unique equilibrium point

$$ (x^*, y^*) = \left( {b,\frac{b}{{a + b^{2} }}} \right). $$

At this equilibrium point, the Jacobian matrix is

$$ J = \left( {\begin{array}{*{20}c} {\frac{{b^{2} - a}}{{a + b^{2} }}} & {a + b^{2} } \\ { - \frac{{2b^{2} }}{{a + b^{2} }}} & { - (a + b^{2} )} \\ \end{array} } \right). $$

Therefore, \( {\Delta} = \det (J) = (a + b^{2} ) \,>\, 0 \), and the trace of the Jacobian matrix is \( \tau = - \frac{{b^{4} + (2a - 1)b^{2} + a(1 + a)}}{{a + b^{2} }} \). So the fixed point is stable when \( \tau \,<\, 0 \) and unstable when \( \tau \,>\, 0 \). A bifurcation may occur when \( \tau = 0 \), that is, when \( b^{2} = \frac{{(1-2a) \,\pm\, \sqrt {(1-8a)} }}{2} \). The curve \( \tau = 0 \) in the \( (a,b) \)-plane is shown in Fig. 5.12.

Fig. 5.12
figure 12

Representation of the stable fixed point and limit cycles in the \( (a,b) \)-plane

The figure shows that the region corresponding to \( \tau \,>\, 0 \) is bounded. A unique stable limit cycle is obtained for parameter values \( (a,b) \) lying inside this region. A typical limit cycle for \( a = 0.07 \) and \( b = 0.8 \) is shown in Fig. 5.13.

Fig. 5.13
figure 13

Limit cycle of the Glycolysis problem for \( a = 0.07 \) and \( b = 0.8 \)
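The stability boundary can be checked numerically for concrete parameter values. The fragment below (illustrative only) evaluates \( \tau \) and \( \Delta \) at the fixed point and confirms that \( (a,b) = (0.07, 0.8) \) lies in the limit cycle region; the second parameter pair \( (0.3, 0.8) \) is an assumed contrasting choice:

```python
def jacobian_trace_det(a, b):
    # trace and determinant of J at the fixed point (b, b/(a + b^2))
    s = a + b * b
    tr = (b * b - a) / s - s
    det = s
    return tr, det

# parameters used for the limit cycle in Fig. 5.13
tr, det = jacobian_trace_det(0.07, 0.8)
assert det > 0
assert tr > 0      # unstable focus: trajectories wind onto a limit cycle

# a larger value of 'a' gives a stable fixed point instead
tr2, _ = jacobian_trace_det(0.3, 0.8)
assert tr2 < 0
```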

5.6.2 Predator-Prey Model

Lotka and Volterra formulated a model to describe the interaction of two species, the prey and the predator. The dynamical equations of the predator–prey model are given by

$$ \left. {\begin{array}{*{20}l} {\dot{x} = x(\alpha - \beta y)} \hfill \\ {\dot{y} = y(\delta x - \gamma )} \hfill \\ \end{array} } \right\} $$
(5.12)

where \( \alpha ,\beta ,\gamma , \) and \( \delta \) are all positive constants, and \( x(t) \), \( y(t) \), and \( t \) denote the population density of the prey, the population density of the predator, and time, respectively. Here \( \alpha \) represents the growth rate of the prey in the absence of interaction with the predators, whereas \( \gamma \) represents the death rate of the predators in the absence of interaction with the prey; \( \beta \) and \( \delta \) are the interaction parameters, taken as constants in this simple model. Note that the survival of the predators depends entirely on the population of the prey: if \( x(0) = 0 \), then \( y(t) = y(0) \exp \left( { - \gamma t} \right) \) and \( \mathop {\lim }\nolimits_{t \to \infty } y(t) = 0 \). We now study the model from the dynamical systems point of view. First we calculate the fixed points of the system by solving \( \dot{x} = \dot{y} = 0 \). This gives the following fixed points:

$$ x^{*} = 0,y^{*} = 0;\,{\text{and}}\,x^{*} = \frac{\gamma }{\delta },y^{*} = \frac{\alpha }{\beta }. $$

The Jacobian of the linearization of the model Eq. (5.12) is obtained as

$$ J(x,y) = \left( {\begin{array}{*{20}c} {\alpha - \beta y} & { - \beta x} \\ {\delta y} & {\delta x - \gamma } \\ \end{array} } \right). $$

It is easy to show that the Jacobian matrix \( J(0,0) \) at the origin has the eigenvalues \( \alpha \) and \( -\gamma \), which are of opposite signs; therefore, the origin is a saddle point. Now, at the fixed point \( \left( {\frac{\gamma }{\delta },\frac{\alpha }{\beta }} \right) \) the Jacobian matrix is calculated as follows:

$$ J = \left( {\begin{array}{*{20}c} 0 & { - \frac{\beta \gamma }{\delta }} \\ {\frac{\alpha \delta }{\beta }} & 0 \\ \end{array} } \right) $$

which yields the purely imaginary eigenvalues \( \pm \,i\sqrt {\alpha \gamma } \). Therefore, the fixed point \( \left( {\frac{\gamma }{\delta },\frac{\alpha }{\beta }} \right) \) is always a center of the linearized system, giving closed paths in its neighborhood. This fixed point is of nonhyperbolic type, so its stability cannot be determined from the linearized system. The phase paths of the system are obtained easily from

$$ \frac{{{\text{d}}y}}{{{\text{d}}x}} = \frac{y(\delta x - \gamma )}{x(\alpha - \beta y)}, $$

which gives the solution curves \( f(x,y) \equiv x^{\gamma } y^{\alpha } e^{ - (\delta x + \beta y)} = k \), where \( k \) is a constant; different values of \( k \) give different solution curves. The solution curves can also be written as \( f(x,y) \equiv g(x)h(y) = k \), where \( g(x) = x^{\gamma } e^{ - \delta x} \) and \( h(y) = y^{\alpha } e^{ - \beta y} \). Linearization of the system gives periodic solutions in a neighborhood of the equilibrium point; using the conserved quantity \( f(x,y) \), this result can be verified for the original nonlinear system. Note that \( g(x) \) and \( h(y) \) have the same form: both are positive for \( x,y \in (0,\infty ) \), and each attains a single maximum in this interval. It is easy to verify that \( g(x) \) and \( h(y) \) attain their maxima at \( x = \frac{\gamma }{\delta } \) and \( y = \frac{\alpha }{\beta } \), respectively. Therefore, the function \( f(x,y) \) has its maximum at the fixed point \( \left( {\frac{\gamma }{\delta },\frac{\alpha }{\beta }} \right) \), and the trajectories surrounding this point are closed curves, since the point \( \left( {\frac{\gamma }{\delta },\frac{\alpha }{\beta }} \right) \) is a center. Figure 5.14 depicts the functions \( g(x) \) and \( h(y) \) for some typical values of the parameters involved in the system.

Fig. 5.14
figure 14

Graphs of \( g(x) \) and \( h(y) \) for some typical values of \( \alpha ,\beta ,\gamma \), and \( \delta \)

The solution curves may be interpreted as follows. The prey population increases when the predators are few, and it attains its maximum when the predator density reaches \( \alpha /\beta \). Thereafter, the number of predators increases rapidly at the cost of the prey. As the prey decline, the predator population must also decline. These features are clearly depicted in the figure.

The phase portrait of the system is presented in Fig. 5.15. It shows closed curves in the neighborhood of the equilibrium point.

Fig. 5.15
figure 15

Phase portrait of the system (5.12) for \( \alpha = 1.5,\beta = 1,\gamma = 2,\delta = 1 \)
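The closed orbits of Fig. 5.15 can be confirmed numerically: along any trajectory, the quantity \( f(x,y) = x^{\gamma } y^{\alpha } e^{ - (\delta x + \beta y)} \) must stay constant. The Python sketch below (illustrative, with the same parameter values as the figure and an assumed initial point \( (1,1) \)) integrates the system with a Runge–Kutta scheme and checks the conservation:

```python
import math

alpha, beta, gamma, delta = 1.5, 1.0, 2.0, 1.0   # values used in Fig. 5.15

def rhs(x, y):
    # Lotka-Volterra vector field (5.12)
    return x * (alpha - beta * y), y * (delta * x - gamma)

def f(x, y):
    # conserved quantity f(x, y) = x^gamma y^alpha e^{-(delta x + beta y)}
    return x**gamma * y**alpha * math.exp(-(delta * x + beta * y))

# RK4 integration of the model from (1, 1) up to t = 20
x, y, dt = 1.0, 1.0, 0.001
f0 = f(x, y)
for _ in range(20000):
    k1 = rhs(x, y)
    k2 = rhs(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1])
    k3 = rhs(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1])
    k4 = rhs(x + dt*k3[0], y + dt*k3[1])
    x += dt * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6.0
    y += dt * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6.0

# f stays (numerically) constant along the trajectory: the orbit is closed
assert abs(f(x, y) - f0) < 1e-6 * f0
```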

The equations of Lotka and Volterra are too simple a model to represent real prey–predator populations. Nevertheless, they serve as a crude first approximation for two interacting species living together.

5.7 Exercises

  1. (1)

    What do you mean by oscillation of a system? Give examples of two systems in which one is oscillating and the other is non-oscillating.

  2. (2)

    Prove that one-dimensional system cannot oscillate.

  3. (3)

    Prove that zeros of two linearly independent solutions of the equation \( {\ddot x} + p(t)x = 0,\;\;p(t) \in C(I) \) separate one another.

  4. (4)

    Prove that if at least one solution of the equation \( {\ddot x} + p(t)x = 0 \) has more than two zeros on an interval \( I \), then all solutions of this equation are oscillating on \( I \).

  5. (5)

    Find the relation between \( \alpha \,>\, 0 \) and \( k \in {\mathbb{N}} \), sufficient for any solution of \( {\ddot x} + (\alpha \sin t)x = 0 \) to be non-oscillating on \( [0,2 k\pi ] \).

  6. (6)

    Find the relation between \( A \,>\, 0 \) and \( n \in {\mathbb{N}} \), sufficient for any solution of \( {\ddot x} + (e^{ - At} \cos t)x = 0 \) to be non-oscillating on \( [0,2n\pi ] \).

  7. (7)

For which \( a \,>\, 0 \) and \( n \in {\mathbb{N}} \) do all solutions of the equation \( {\ddot x} + \alpha xe^{ - at} = 0 \) oscillate on \( [0,2n\pi ] \), where \( \alpha \) is a nonzero real number?

  8. (8)

    Discuss nonlinear oscillation of a system with example. Give few examples where nonlinear oscillations are needed for practical applications.

  9. (9)

    Consider the system \( \dot{x} = \sin y,\;\dot{y} = x\cos y \). Verify that the system is a gradient system . Also find the potential function. Show that the system has no closed orbits.

  10. (10)

    Show that the system \( \dot{x} = 2xy + y^{3} ,\;\dot{y} = x^{2} + 3xy^{2} - 2y \) has no closed orbits.

  11. (11)

Determine the critical points of the system \( {\ddot x} - \mu \dot{x} - (1 - \mu )(2 - \mu )x = 0,\;\mu \in {\mathbb{R}} \), and characterize them. Sketch the flow in the \( x \)–\( \dot{x} \) phase plane .

  12. (12)

    Show that the system \( \dot{x} = x(2-3x - y),\;\dot{y} = y(4x - 4x^{2} - 3) \) has no closed orbits in the positive quadrant in \( {\mathbb{R}}^{2} \).

  13. (13)

    Show that there can be no periodic orbit of the system

    $$ \dot{x} = y,\;\dot{y} = ax + by - x^{2} y - x^{3} $$

    if \( b \,<\, 0 \).

  14. (14)

    Verify that the system \( \dot{x} = y,\;\dot{y} = - x + y + x^{2} + y^{2} \) has no periodic solutions.

  15. (15)

    Consider the system

    $$ \dot{x} = y(1 + x - y^{2} ),\;\;\dot{y} = x(1 + y - x^{2} ) $$

    where \( x \,\ge\,0 \), \( y \,\ge\,0 \). Does a periodic solution exist?

  16. (16)

    Consider the system \( \dot{x} = f(y),\;\dot{y} = g(x) + y^{n + 1} \), where \( f,g \in C^{1} \) and \( n \in {\mathbb{N}} \). Derive a sufficient condition for \( n \) so that the system has no periodic solutions in \( {\mathbb{R}}^{2} \).

  17. (17)

    Give an example showing that the requirement in Dulac’s criterion that \( D \) be simply connected is essential.

  18. (18)

    Show that the system \( \dot{x} = \alpha x - ax^{2} + bxy,\;\dot{y} = \beta y - cy^{2} + {\text{d}}xy \) with \( a,c \,>\, 0 \) has no periodic orbits in the positive quadrant of \( {\mathbb{R}}^{2} \).

  19. (19)

    Consider the system \( \dot{x} = - y + x^{2} - xy,\;\dot{y} = x + xy \). Show that Bendixson’s negative criterion fails to establish the nonexistence of closed orbits, but that Dulac’s criterion succeeds with the function \( \rho (x,y) = - (1 + y)^{ - 3} (1 + x)^{ - 1} \).

  20. (20)

    Show that the following systems have no periodic solutions

    1. (a)

      \( \dot{x} = y + x^{3} ,\;\dot{y} = x + y + y^{3}. \)

    2. (b)

      \( \dot{x} = y,\;\dot{y} = x^{9} + (1 + x^{4} )y. \)

    3. (c)

      \( \dot{x} = \sin x\sin y + y^{3} ,\;\dot{y} = \cos x\cos y + x^{3} + y. \)

    4. (d)

      \( \dot{x} = x(1 + y^{2} ) + y,\;\dot{y} = y(1 + x^{2} ). \)

    5. (e)

      \( \dot{x} = x(y^{2} + 1) + y,\;\dot{y} = x^{2} y + x. \)

    6. (f)

      \( \dot{x} = - x + y^{2} ,\;\dot{y} = - y^{3} + x^{2}. \)

    7. (g)

      \( \dot{x} = y^{2} ,\;\dot{y} = - y^{3} + x^{2}. \)

    8. (h)

      \( \dot{x} = x^{2} - y - 1,\;\dot{y} = xy - 2y. \)

    9. (i)

      \( \dot{x} = - xe^{y} + y^{2} ,\;\dot{y} = x - x^{2} y. \)

    10. (j)

      \( \dot{x} = x + y + x^{3} - y^{3} ,\;\dot{y} = - x + 2y - x^{2} y + y^{3}. \)

    11. (k)

      \( \dot{x} = x(y - 1),\;\dot{y} = x + y - 2y^{2}. \)

  21. (21)

    Show that the system

    $$ \dot{x} = - y + x(x^{2} + y^{2} - 2x - 3),\;\;\dot{y} = x + y(x^{2} + y^{2} - 2x - 3) $$

    has no closed orbits inside the circle with center \( \left( {\frac{3}{4},0} \right) \) and radius \( \frac{{\sqrt {33} }}{4} \).

  22. (22)

    Show that the system \( \dot{x} = x + y^{3} - xy^{2} ,\;\dot{y} = 3y + x^{3} - x^{2} y \) has no periodic orbit in the region \( \left\{ {(x,y) \in {\mathbb{R}}:x^{2} + y^{2} \,\le\,4} \right\} \).

  23. (23)

    Consider the system \( \dot{x} = y - x^{3} + \mu x,\;\dot{y} = - x \). For what values of the parameter \( \mu \) does a periodic solution exist? Describe what happens as \( \mu \to 0 \) ?

  24. (24)

    Consider the system \( \dot{x} = f(x,y),\;\dot{y} = g(x,y) \), where \( f,g \in C^{1} (D) \), \( D \) being a simply connected region in \( {\mathbb{R}}^{2} \). If \( \frac{\partial g}{\partial x} = \frac{\partial f}{\partial y} \) in \( D \), show that the system has no closed orbit in \( D \). Hence show that the system \( \dot{x} = 2xy + y,\;\dot{y} = x + x^{2} - y^{2} \) has no closed orbits.

  25. (25)

    Define ‘limit cycle’ and ‘periodic solution ’. Give few physical phenomena where limit cycles form. Find the limit cycles and investigate their stabilities for the following systems:

    1. (a)

      \( \dot{x} = (x^{2} + y^{2} - 1)x - y\sqrt {x^{2} + y^{2} } ,\,\dot{y} = (x^{2} + y^{2} - 1)y + x\sqrt {x^{2} + y^{2} } \)

    2. (b)

      \( \dot{x} = - y + x(x^{2} + y^{2} - 1),\;\dot{y} = x + y(x^{2} + y^{2} - 1) \)

    3. (c)

      \( \dot{x} = - y + x(\sqrt {x^{2} + y^{2} } {\mkern 1mu} - 1)^{2} ,\;\dot{y} = x + y(\sqrt {x^{2} + y^{2} } {\mkern 1mu} - 1)^{2} \)

    4. (d)

      \( \dot{x} = y + x(x^{2} + y^{2} )^{ - 1/2} ,\;\dot{y} = - x + y(x^{2} + y^{2} )^{ - 1/2} \)

    5. (e)

      \( \dot{x} = y + x(1 - x^{2} - y^{2} ),\;\dot{y} = - x + y(1 - x^{2} - y^{2} ) \)

  26. (26)

    Show that the system

    $$ \dot{x} = x - y\left( {x^{2} + \frac{3}{2}y^{2} } \right),\;\dot{y} = x + y - y\left( {x^{2} + \frac{1}{2}y^{2} } \right) $$

    has a periodic solution .

  27. (27)

    Show that the system

    $$ \dot{x} = x - y - x(x^{2} + 5y^{2} ),\;\dot{y} = x + y - y(x^{2} + y^{2} ) $$

    has a limit cycle in some ‘trapping region’ to be determined.

  28. (28)

    State Poincaré–Bendixson theorem. Prove that the system given by

    $$ \dot{r} = r(1 - r^{2} ) + \mu r\cos \theta ,\;\dot{\theta } = 1 $$

    has a closed orbit for \( \mu = 0 \).

  29. (29)

    Show that the system \( \dot{x} = - y + x(1 - x^{2} - y^{2} ),\;\dot{y} = x + y(1 - x^{2} - y^{2} ) \) has at least one periodic solution.

  30. (30)

    Consider the system \( \dot{x} = y + ax(1-2b - r^{2} ),\;\dot{y} = - x + ay(1 - r^{2} ) \), where \( r^{2} = x^{2} + y^{2} \) and \( a \) and \( b \) are two constants with \( 0 \,\le\,b \,\le\,1/2 \) and \( 0 \,<\, a \,\le\, 1 \). Prove that the system has at least one periodic orbit and that, if there are several periodic orbits, they all have the same period \( P(b,a) \). Also prove that if \( b = 0 \), then there is exactly one periodic orbit of the system.

  31. (31)

    Consider the system

    $$ \dot{x} = - y + x(x^{4} + y^{4} - 3x^{2} + 2x^{2} y^{2} - 3y^{2} + 1),\;\dot{y} = x + y(x^{4} + y^{4} - 3x^{2} + 2x^{2} y^{2} - 3y^{2} + 1) $$
    1. (a)

      Determine the stability of the fixed points of the system.

    2. (b)

      Convert the system into polar coordinates, using \( x = r\cos \theta \) and \( y = r\sin \theta \).

    3. (c)

      Use Poincaré–Bendixson theorem to show that the system has a limit cycle in an annular region to be determined.

  32. (32)

    Consider the system

    $$ \dot{x} = - y + x(1-2x^{2} - 3y^{2} ),\;\dot{y} = x + y(1-2x^{2} - 3y^{2} ){\mkern 1mu}. $$
    1. (a)

      Find all fixed points of the system and define their stability.

    2. (b)

      Transfer the system into polar coordinates \( (r,\theta ) \).

    3. (c)

      Find a trapping region \( R(a,b) = \left\{ {(r,\theta ):a \,\le\,r \,\le\,b} \right\} \) and then use Poincaré–Bendixson theorem to prove that the system has a limit cycle in the region \( R \).

  33. (33)

    Show that the system given by

    $$ \begin{aligned} \dot{r} & = r(1 - r^{2} ) + \mu r\cos \theta \\ \dot{\theta } & = 1 \\ \end{aligned} $$

    has a closed orbit in the annular region \( \sqrt {1 - \mu } {\mkern 1mu} \,<\, r \,<\, \sqrt {1 + \mu } \) for all \( 0 \,<\, \mu \,\ll\, 1 \).

  34. (34)

    Consider the system \( \dot{x} = x - y - x^{3} /3,\;\dot{y} = - x \). Show that the system has a limit cycle in some annular region to be determined.

  35. (35)

    Show that the system represented by \( {\ddot x} + \mu (x^{2} - 1)\dot{x} + \tanh x = 0 \), for \( \mu \,>\, 0 \), has exactly one periodic solution and classify its stability.

  36. (36)

    Show that the equation \( {\ddot x} + x = \mu (1 - \dot{x}^{2} )x \), \( \mu \,>\, 0 \) has a unique periodic solution and classify its stability.

  37. (37)

    Show that the equation

    $$ {\ddot x} + \frac{{x^{3} - x}}{{x^{2} + 1}}\dot{x} + x = 0 $$

    has exactly one stable limit cycle.

  38. (38)

    Show that the system \( \dot{x} = 2x + y + x^{3} ,\;\dot{y} = 3x - y + y^{3} \) has no limit cycles and hence no periodic solutions, using Bendixson’s negative criterion or otherwise, while the system \( \dot{x} = - y,\;\dot{y} = x \) has periodic solutions.

  39. (39)

    Show that the system \( \dot{r} = r(r^{2} - 2r\cos \theta - 3),\dot{\theta } = 1 \) contains one or more limit cycles in the annulus \( 1 \,<\, r \,<\, 3 \).

  40. (40)

    Show that the Rayleigh equation \( {\ddot x} + x - \mu \left( {1 - \dot{x}^{2} } \right)x = 0,\mu \,>\, 0 \) has a unique periodic solution.

  41. (41)

    Show that the system \( \dot{x} = y + x(x^{2} + y^{2} - 1),\;\dot{y} = - x + y(x^{2} + y^{2} - 1) \) has an unstable limit cycle.

  42. (42)

    Show that the system

    $$ \dot{x} = y + x\sqrt {(x^{2} + y^{2} )} (x^{2} + y^{2} - 1)^{2} ,\;\dot{y} = - x + y\sqrt {(x^{2} + y^{2} )} (x^{2} + y^{2} - 1)^{2} $$

    has a semistable limit cycle.

  43. (43)

    Find the limit cycle of the system \( \dot{z} = - iz + \alpha (1 - |z|^{2} ),\;z \in {\mathbb{C}} \), where \( \alpha \) is a real parameter. Also investigate its stability.

  44. (44)

    Find the radius of the limit cycle and its period approximately for the equation represented by \( {\ddot x} + \epsilon(x^{2} - 1)\dot{x} + x - \epsilon x^{3} = 0 \), \( 0 \,<\, \epsilon \,<\, 1 \).

  45. (45)

    Find the phase paths of the Lotka–Volterra prey–predator model equations \( \dot{x} = \alpha x - \beta xy,\,\dot{y} = \beta xy - \gamma y \) with \( x,y \,\ge\,0 \) and positive parameters \( \alpha ,\beta ,\gamma \). Also, show that the solutions are periodic in the neighborhood of the critical point \( \left( {\gamma /\beta ,\alpha /\beta } \right) \).

  46. (46)

    Consider the following Lotka–Volterra model taking into account the saturation effect caused by a large number of prey in the prey–predator populations: \( \dot{x} = \alpha x - \beta xy/(1 + \delta x),\,\dot{y} = \beta xy/(1 + \delta x) - \gamma y \) with \( x,y \,\ge\,0 \) and \( \alpha ,\beta ,\gamma ,\delta \,>\, 0 \). Determine the critical points and sketch the flow of the linearized system. Explain the saturation effect of the prey for the cases \( \delta \to 0 \) and \( \delta \to \infty \).

  47. (47)

    Discuss the solution behaviors of the two populations \( x(t) \) and \( y(t) \) in the Lotka–Volterra model \( \dot{x} = \alpha x - \beta xy,\,\dot{y} = \beta xy - \gamma y \) when the birth rate \( \alpha \) of the prey is much smaller than the death rate \( \gamma \) of the predator, that is, \( \alpha /\gamma = \epsilon_{1} \), a small quantity.

  48. (48)

    Find the creeping velocity of the system with large friction \( {\ddot x} + \mu \dot{x} + f(x) = 0 \), \( \mu \,\gg\, 1 \). Also, show that for a creeping motion the solution \( x(t) \) approximately follows \( \dot{x} = - \frac{1}{\mu }f(x) \), which is called a gradient flow.

  49. (49)

    Find the period of oscillation \( T(\varepsilon ) \) for the Duffing oscillator \( {\ddot x} + x + \varepsilon x^{3} = 0 \) with \( x(0) = a,\dot{x}(0) = 0 \), where \( 0 \,<\, \varepsilon \,\ll\, 1 \).

  50. (50)

    Obtain the frequency of small oscillation (amplitude \( \ll 1 \)) of the equation \( {\ddot x} + x - x^{3} /6 = 0 \).