1 Stability and Instability of a Mechanical System

The concepts presented in this section are a simplified version of what is available in many textbooks on dynamical systems and classical mechanics. For a deeper discussion of the topics proposed here, we refer the reader to Hirsch et al. [1] and Arnold [1].

The state of a mechanical system is uniquely determined by a number N of spatial coordinates, \(q_1, q_2, \ldots , q_N \), and a number N of velocities associated with these coordinates, \(v_1, v_2, \ldots , v_N \). The state of the system is thus a point of a 2N-dimensional space called phase space. The number N is the number of degrees of freedom of the system.

For simplicity of notation, the N coordinates will be denoted by the N-dimensional vector \(\mathbf {q}\), while the N velocities will be denoted by the N-dimensional vector \(\mathbf {v}\). The motion of the mechanical system is described by the system of first-order differential equations,

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\mathrm {d}\, {\mathbf {q}}}{\mathrm {d}\, {t}} = \mathbf {v} \;,\\ \displaystyle \frac{\mathrm {d}\, {\mathbf {v}}}{\mathrm {d}\, {t}} = \mathbf {F}({\mathbf {q}},\mathbf {v}) \;, \end{array}\right. } \end{aligned}$$
(4.1)

where the N-dimensional vector \(\mathbf {F}\) is built with the components of the force per unit mass acting on the system. For instance, if the studied system consists of \(N_p\) pointlike masses, then \(N = 3\, N_p\). In this case, if \(q_i\) is the jth coordinate (where j varies from 1 to 3) of the \(\ell \)th pointlike mass (where \(\ell \) varies from 1 to \(N_p\)) with mass \(m_\ell \), then the component \(F_i({\mathbf {q}}, \mathbf {v})\) is the jth component of the force acting on the \(\ell \)th pointlike mass of the system divided by the mass \(m_\ell \).

The solution of Eq. (4.1) requires the specification of the initial state, i.e. the state \(\{\mathbf {q}(0), \mathbf {v}(0)\}\) of the system at the initial instant of time, \(t = 0\). Geometrically, this solution yields a trajectory in the phase space.

The concept of stability of a solution of Eq. (4.1) is formulated according to Lyapunov’s definition. A motion of the mechanical system, i.e. a solution \(\{\mathbf {q}(t), \mathbf {v}(t)\}\) of the system of Eqs. (4.1), is called stable if for any positive real number \(\varepsilon \), there is a corresponding positive real number \(\delta _\varepsilon \) such that if the distance between two initial conditions, \(\{\mathbf {q}(0), \mathbf {v}(0)\}\) and \(\{\mathbf {q}^{\star }(0), \mathbf {v}^{\star }(0)\}\), is less than \(\delta _\varepsilon \), then the two trajectories in the phase space, \(\{\mathbf {q}(t), \mathbf {v}(t)\}\) and \(\{\mathbf {q}^{\star }(t), \mathbf {v}^{\star }(t)\}\), have a distance less than \(\varepsilon \) for every instant of time \(t>0\). In mathematical form, this definition can be expressed as follows,

$$\begin{aligned} \forall \ \varepsilon> 0 \;, \quad \exists \ \delta _\varepsilon > 0 :\qquad \left\| \{\mathbf {q}(0), \mathbf {v}(0)\} - \{\mathbf {q}^{\star }(0), \mathbf {v}^{\star }(0)\} \right\| < \delta _\varepsilon \end{aligned}$$
(4.2)

implies that

$$\begin{aligned} \left\| \{\mathbf {q}(t), \mathbf {v}(t)\} - \{\mathbf {q}^{\star }(t) \;, \mathbf {v}^{\star }(t)\} \right\| < \varepsilon \;, \quad \forall \ t > 0 \;. \end{aligned}$$
(4.3)

The distance \(\Vert \cdot \Vert \) between any two points in the phase space is the Euclidean distance

$$\begin{aligned} \left\| \{\mathbf {q}, \mathbf {v}\} - \{\mathbf {q}^{\star }, \mathbf {v}^{\star }\} \right\| = \left[ \frac{1}{{\mathscr {A}}^2}\sum _{i=1}^N (q_i -q^{\star }_i)^2 + \frac{1}{{\mathscr {V}}^2}\sum _{i=1}^N (v_i -v^{\star }_i)^2 \right] ^{1/2} , \end{aligned}$$
(4.4)

where we introduced two constants, \(\mathscr {A}\) and \(\mathscr {V}\), with the dimensions of a length and a velocity, respectively. These constants, whose value is set conventionally, are introduced for the sole purpose of defining the distance between any two points of the phase space in a dimensionless way.
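
As a minimal illustration of Eq. (4.4), the following Python sketch (the function name and the reference scales are chosen here only for the example) evaluates the dimensionless phase-space distance between two states:

```python
import numpy as np

def phase_distance(q, v, q_star, v_star, A=1.0, V=1.0):
    """Dimensionless phase-space distance of Eq. (4.4); A and V are the
    reference length and velocity fixing the non-dimensionalisation."""
    q, v, q_star, v_star = map(np.asarray, (q, v, q_star, v_star))
    return np.sqrt(np.sum((q - q_star)**2) / A**2
                   + np.sum((v - v_star)**2) / V**2)

# Two nearby states of a system with N = 2 degrees of freedom
print(phase_distance([0.0, 1.0], [0.1, 0.0], [0.0, 1.05], [0.1, -0.02]))
```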

Fig. 4.1  Qualitative sketch of stable and unstable trajectories in phase space according to Lyapunov’s definition

In order to give a visual representation of the concept of stability of motion as stated above, we can imagine that around a stable trajectory in phase space, there is a cylinder of radius \(\varepsilon \) within which all the trajectories that differ from the stable trajectory by a small perturbation of the initial conditions are contained. A graphical representation of this notion is given in Fig. 4.1.

The concept of stability of motion for a mechanical system also applies to those particular motions of the system that correspond to equilibrium states. A solution of the equations of motion (4.1) is called an equilibrium state if it takes the form

$$\begin{aligned} \{\mathbf {q}(t), \mathbf {v}(t)\} = \{\mathbf {q}_0, \mathbf {0} \}\;, \qquad \forall \, t \ge 0 \;, \end{aligned}$$
(4.5)

where \(\mathbf {q}_0\) is an N-dimensional constant vector, and \(\mathbf {0}\) is the N-dimensional vector with zero components. Thus, an equilibrium state corresponds to a trivial trajectory that degenerates into a point. The equilibrium states admitted by a system of forces \(\mathbf {F}(\mathbf {q}, \mathbf {v}) \) are of course obtained as solutions of the vector equation

$$\begin{aligned} \mathbf {F}(\mathbf {q}, \mathbf {0}) = \mathbf {0} \;. \end{aligned}$$
(4.6)

According to the most general definition expressed by Eqs. (4.2) and (4.3), an equilibrium state is deemed stable if

$$\begin{aligned} \forall \ \varepsilon> 0 \;, \quad \exists \ \delta _\varepsilon > 0 :\qquad \left\| \{\mathbf {q}_0, \mathbf {0}\} - \{\mathbf {q}^{\star }(0), \mathbf {v}^{\star }(0)\} \right\| < \delta _\varepsilon \end{aligned}$$
(4.7)

implies that

$$\begin{aligned} \left\| \{\mathbf {q}_0, \mathbf {0}\} - \{\mathbf {q}^{\star }(t) \;, \mathbf {v}^{\star }(t)\} \right\| < \varepsilon \;, \quad \forall \ t > 0 \;. \end{aligned}$$
(4.8)

This notion of stability of an equilibrium state is often referred to as stability according to Lyapunov.

A stable equilibrium state, \(\{\mathbf {q}_0, \mathbf {0}\}\), of a mechanical system is called asymptotically stable if there exists a positive real number \({\mathscr {R}}\) such that

$$\begin{aligned} \left\| \{\mathbf {q}_0, \mathbf {0} \} - \{\mathbf {q}^{\star }(0), \mathbf {v}^{\star }(0)\} \right\| < {\mathscr {R}} \end{aligned}$$
(4.9)

implies that

$$\begin{aligned} \lim _{t \rightarrow \infty } \left\| \{\mathbf {q}_0, \mathbf {0} \} - \{\mathbf {q}^{\star }(t), \mathbf {v}^{\star }(t)\} \right\| = 0 \;. \end{aligned}$$
(4.10)

For an asymptotically stable equilibrium state, any trajectory in phase space that originates from an initial state \(\{\mathbf {q}^{\star }(0), \mathbf {v}^{\star }(0)\}\), lying in a small neighbourhood of the equilibrium state \(\{\mathbf {q}_0, \mathbf {0} \}\), tends to collapse to this state when time tends to infinity.

We note that asymptotic stability is a condition stronger than stability, so that asymptotic stability of an equilibrium state implies stability, but not vice versa.

It should also be noted that the concepts of stability and of asymptotic stability for an equilibrium state have, in general, a local meaning. In other words, these concepts are the result of a criterion, Lyapunov’s criterion, which refers only to those motions that originate from the neighbourhood of an equilibrium state, i.e. for initial conditions that lie in a neighbourhood of this state. Lyapunov’s criterion does not provide information on those trajectories whose initial condition is very far from the equilibrium state. The local or global nature of the stability of an equilibrium state of a mechanical system relies, ultimately, on the linearity or nonlinearity of the system. A mechanical system is said to be linear if the vector function \(\mathbf {F}({\mathbf {q}}, \mathbf {v})\) is linear, otherwise it is deemed nonlinear. Generally speaking, the stability has a local character for nonlinear mechanical systems and has a global character for linear systems. For nonlinear mechanical systems, around an asymptotically stable equilibrium state, there is a region of phase space called basin of attraction, such that any state within the basin of attraction evolves along a trajectory that for \(t \rightarrow \infty \) collapses onto the equilibrium state. On the contrary, any state outside the basin of attraction evolves along a trajectory that cannot enter the basin of attraction, for every instant of time \(t > 0\).

1.1 A Simple Mechanical System

As an example, consider the simplest case of a mechanical system, namely a system with only one degree of freedom, \(N =1\). For this system, the phase space is two-dimensional, the state is described by the pair \(\{q, v\}\), and the equations of motion take the form

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\mathrm {d}\, {q}}{\mathrm {d}\, {t}} = v \;, \\ \displaystyle \frac{\mathrm {d}\, {v}}{\mathrm {d}\, {t}} = F(q, v) \;. \end{array}\right. } \end{aligned}$$
(4.11)

In other words, we consider a pointlike mass m subject to an external force. We assume that the function \(F(q,v)\) is given by

$$\begin{aligned} F(q,v) = \frac{1}{m} \left( - k\, q + h\, q^3 - \beta \, v \right) \;, \end{aligned}$$
(4.12)

where the constants k, h and \(\beta \) are non-negative. The system is therefore subject to an attractive elastic force, \(- k \, q\), a repulsive force, \(h \, q^3\), and a dissipative friction force, \(- \beta \, v\).

Fig. 4.2  Plot of the potential energy \(\varphi (q)\)

If \(k \ne 0\) and \(h \ne 0\), we may infer that there are three equilibrium states of the system corresponding to the positions,

$$\begin{aligned} F(q,0) = 0 \quad \Rightarrow \quad q= 0\; , \quad q= \pm \sqrt{\frac{k}{h}} \;. \end{aligned}$$
(4.13)

Conversely, if either \(k = 0\) or \(h = 0\), there is only one equilibrium state in the position \(q = 0\).

We can associate a potential energy to the attractive and repulsive forces, given by

$$\begin{aligned} \varphi (q) = k\, \frac{q^2}{2} - h\, \frac{q^4}{4} \;. \end{aligned}$$
(4.14)

The shape of the potential energy \(\varphi (q)\) is shown in Fig. 4.2. With this definition, Eq. (4.12) can be rewritten as

$$\begin{aligned} F(q,v) = - \frac{1}{m} \left[ \frac{\mathrm {d}\, {\varphi (q)}}{\mathrm {d}\, {q}} + \beta \, v \right] \;. \end{aligned}$$
(4.15)

We can also define the total energy as the sum of the kinetic energy and the potential energy,

$$\begin{aligned} E(q,v) = m\; \frac{v^2}{2} + \varphi (q) \;. \end{aligned}$$
(4.16)

The derivative of E with respect to time reads

$$\begin{aligned} \frac{\mathrm {d}\, {E}}{\mathrm {d}\, {t}} = m\, v\; \frac{\mathrm {d}\, {v}}{\mathrm {d}\, {t}} + \frac{\mathrm {d}\, {\varphi }}{\mathrm {d}\, {q}}\; \frac{\mathrm {d}\, {q}}{\mathrm {d}\, {t}} = - v \left( \frac{\mathrm {d}\, {\varphi }}{\mathrm {d}\, {q}} + \beta \, v \right) + \frac{\mathrm {d}\, {\varphi }}{\mathrm {d}\, {q}}\; v = - \beta \, v^2 \;, \end{aligned}$$
(4.17)

where Eqs. (4.11) and (4.15) have been employed. In the non-dissipative case, where \(\beta = 0\), Eq. (4.17) leads to the conclusion that the total energy remains invariant during the system evolution. In this case, the force per unit mass F acting on the system is associated with the potential energy,

$$\begin{aligned} F = -\,\frac{1}{m}\; \frac{\mathrm {d}\varphi }{\mathrm {d}q} \;. \end{aligned}$$
(4.18)

Since the force can be expressed in terms of the gradient of the potential energy, the system is conservative. On the other hand, the energy E is not invariant in the dissipative case, \(\beta \ne 0\). The effect of the dissipative force is a decrease in time of the total energy, E, as demonstrated by Eq. (4.17).
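
The computation leading to Eq. (4.17) can be verified symbolically. A minimal sketch with SymPy (assumed available), in which the equations of motion (4.11) and (4.15) are substituted into the time derivative of the total energy:

```python
import sympy as sp

t = sp.symbols('t')
m, k, h, beta = sp.symbols('m k h beta', positive=True)
q = sp.Function('q')(t)
v = sp.Function('v')(t)

phi = k*q**2/2 - h*q**4/4            # potential energy, Eq. (4.14)
E = m*v**2/2 + phi                   # total energy, Eq. (4.16)

# Equations of motion (4.11) and (4.15): dq/dt = v, m dv/dt = -(dphi/dq + beta v)
dE_dt = sp.diff(E, t).subs({sp.Derivative(q, t): v,
                            sp.Derivative(v, t): -(sp.diff(phi, q) + beta*v)/m})
print(sp.simplify(dE_dt))            # prints -beta*v(t)**2, as in Eq. (4.17)
```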

Fig. 4.3  Constant energy curves in the phase space. The thicker line corresponds to \(E=\varphi _{\max }\)

Thus, if \(\beta = 0\), every trajectory of the system corresponds to a given energy E. Stated differently, in the non-dissipative case, the trajectories in phase space coincide with the curves at constant energy.

Figure 4.3 displays curves at constant energy. Among them, a special one is the curve corresponding to \(E = \varphi _{\max }\), where \(\varphi _{\max }\) is the maximum value of the potential energy, Eq. (4.14), given by

$$\begin{aligned} \varphi _{\max } = \frac{k^2}{4\,h}\;. \end{aligned}$$
(4.19)

The shape of the trajectories for \(\beta =0\) suggests that, among the three equilibrium states defined by Eq. (4.13), only one is stable: that corresponding to the position \(q = 0\). Within the region enclosed by the curve \(E = \varphi _{\max }\), all constant energy curves are closed orbits of smaller and smaller size as E decreases. Lyapunov’s criterion is then satisfied by the equilibrium state with \(q=0\). On the contrary, all trajectories around the equilibrium states with \(q = \pm \sqrt{k / h}\) cannot be confined within a small neighbourhood of these points. This behaviour is the effect of instability.

Fig. 4.4  Trajectories in phase space for \(\beta > 0\). The thicker line is the constant energy curve for \(E=\varphi _{\max }\)

If \(\beta > 0\), nothing changes with respect to either the stability of the state \(\{0,0 \}\) or the instability of the states \(\{- \sqrt{k / h}, 0 \}\) and \(\{\sqrt{k / h}, 0 \}\) (Fig. 4.4). Nevertheless, there is an important difference: the energy defined by Eq. (4.16) is not conserved along the trajectories in phase space. In other words, the trajectories do not coincide with the closed curves of constant energy. The stable equilibrium state \(\{0,0 \}\) is now asymptotically stable. The basin of attraction of this equilibrium state is the bounded domain around the origin enclosed by the curve \(E = \varphi _{\max }\). Within the basin of attraction, the trajectories are no longer closed orbits, as in the non-dissipative case, \(\beta =0\). Instead, they appear as spirals converging to the stable equilibrium state \(\{0,0 \}\). This behaviour is typical of asymptotic stability as described by Eq. (4.10).
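
This qualitative picture can be reproduced numerically. The following Python sketch (NumPy and SciPy assumed; the values of m, k, h, \(\beta \) and the initial state are illustrative choices) integrates Eq. (4.20) and monitors the total energy of Eq. (4.16): for \(\beta = 0\) the energy stays constant and the orbit is closed, while for \(\beta > 0\) the energy decays and the trajectory spirals towards the asymptotically stable state \(\{0,0\}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, h = 1.0, 1.0, 1.0          # illustrative parameters (so that sqrt(k/h) = 1)

def rhs(t, s, beta):
    """Right-hand side of Eq. (4.20): s = (q, v)."""
    q, v = s
    return [v, (-k*q + h*q**3 - beta*v) / m]

def energy(q, v):
    """Total energy of Eq. (4.16) with the potential of Eq. (4.14)."""
    return 0.5*m*v**2 + 0.5*k*q**2 - 0.25*h*q**4

t_eval = np.linspace(0.0, 40.0, 2001)
s0 = [0.5, 0.0]                   # initial state inside the basin of attraction

for beta in (0.0, 0.3):
    sol = solve_ivp(rhs, (0.0, 40.0), s0, args=(beta,), t_eval=t_eval, rtol=1e-9)
    q, v = sol.y
    E = energy(q, v)
    print(f"beta = {beta}: E(0) = {E[0]:.4f}, E(40) = {E[-1]:.4f}, "
          f"final distance from the origin = {np.hypot(q[-1], v[-1]):.4f}")
```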

1.2 The Method of Small Perturbations

An alternative analysis of the stability or instability of the equilibrium states of a mechanical system is based on the method of small perturbations. We consider again the simple mechanical system described in Sect. 4.1.1. Its equations of motion are given by Eqs. (4.11) and (4.12), namely

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\mathrm {d}\, {q}}{\mathrm {d}\, {t}} = v \;, \\ \displaystyle \frac{\mathrm {d}\, {v}}{\mathrm {d}\, {t}} = \frac{1}{m} \left( - k\, q + h\, q^3 - \beta \, v \right) \;. \end{array}\right. } \end{aligned}$$
(4.20)

If \(\{q_0, 0\}\) is any equilibrium state, then it is a solution of Eq. (4.20). Let us perturb this equilibrium state by superposing a very small disturbance. Mathematically, this means writing

$$\begin{aligned} q = q_0 + \varepsilon \, \hat{q}\;, \quad v = 0 + \varepsilon \, \hat{v} = \varepsilon \, \hat{v} \;, \end{aligned}$$
(4.21)

where \(\varepsilon \) is a positive and very small number, called perturbation parameter. By substituting Eq. (4.21) into the equations of motion, (4.20), we obtain

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \varepsilon \; \frac{\mathrm {d}\, {\hat{q}}}{\mathrm {d}\, {t}} = \varepsilon \, \hat{v} \;, \\ \displaystyle \varepsilon \; \frac{\mathrm {d}\, {\hat{v}}}{\mathrm {d}\, {t}} = \frac{1}{m} \left[ - k\, q_0 - k\, \varepsilon \, \hat{q} + h\left( q_0 + \varepsilon \, \hat{q} \right) ^3 - \beta \, \varepsilon \, \hat{v} \right] \;. \end{array}\right. } \end{aligned}$$
(4.22)

Since \(\{q_0, 0\}\) is a solution of the equations of motion, we can simplify Eq.  (4.22),

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \varepsilon \; \frac{\mathrm {d}\, {\hat{q}}}{\mathrm {d}\, {t}} = \varepsilon \, \hat{v} \;, \\ \displaystyle \varepsilon \; \frac{\mathrm {d}\, {\hat{v}}}{\mathrm {d}\, {t}} = \frac{1}{m} \left[ - k\, \varepsilon \, \hat{q} + h\left( \varepsilon ^3\, \hat{q}^3 + 3\, \varepsilon ^2\, \hat{q}^2 \, q_0 + 3\, \varepsilon \, \hat{q}\, q_0^2 \right) - \beta \, \varepsilon \, \hat{v} \right] \;. \end{array}\right. } \end{aligned}$$
(4.23)

The perturbation parameter is small, namely \(\varepsilon \ll 1\), so that we can safely neglect terms \(O(\varepsilon ^2)\) or higher with respect to terms \(O(\varepsilon )\). Thus, Eq. (4.23) yields

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \varepsilon \; \frac{\mathrm {d}\, {\hat{q}}}{\mathrm {d}\, {t}} = \varepsilon \, \hat{v} \;, \\ \displaystyle \varepsilon \; \frac{\mathrm {d}\, {\hat{v}}}{\mathrm {d}\, {t}} = \frac{1}{m} \left( - k\, \varepsilon \, \hat{q} + 3\, \varepsilon \,h\, \hat{q}\, q_0^2 - \beta \, \varepsilon \, \hat{v} \right) \;. \end{array}\right. } \end{aligned}$$
(4.24)

Division by \(\varepsilon \) now leads to the equations of motion for small perturbations,

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\mathrm {d}\, {\hat{q}}}{\mathrm {d}\, {t}} = \hat{v} \;, \\ \displaystyle \frac{\mathrm {d}\, {\hat{v}}}{\mathrm {d}\, {t}} = \frac{1}{m} \left( - k\, \hat{q} + 3 \,h\, \hat{q}\, q_0^2 - \beta \, \hat{v} \right) \;. \end{array}\right. } \end{aligned}$$
(4.25)

Equations (4.25) are linear. This is a consequence of having neglected terms of order higher than \(\varepsilon \). For this reason, the assumption of small perturbations leads to linearised equations of motion.

Equations (4.25) can be combined into a single differential equation, namely

$$\begin{aligned} \frac{\mathrm {d}^2 {\hat{q}}}{\mathrm {d}\, {t}^2} = \frac{1}{m} \left( - k\, \hat{q} + 3 \,h\, \hat{q}\, q_0^2 - \beta \; \frac{\mathrm {d}\, {\hat{q}}}{\mathrm {d}\, {t}} \right) \;. \end{aligned}$$
(4.26)

For the three equilibrium states \(q_0=0\) and \(q_0=\pm \sqrt{k/h}\), we get analytical solutions. In particular, for \(q_0=0\), we obtain

$$\begin{aligned}&\displaystyle \hat{q}(t) = {\text {e}}^{- \beta t/(2 m)} \Bigg [ \hat{q}(0) \cosh \left( \frac{\sqrt{\beta ^2 - 4\, k\,m}}{2\, m}\; t \right) \nonumber \\&\displaystyle +\frac{2\, m\, \hat{v}(0) + \beta \, \hat{q}(0)}{\sqrt{\beta ^2 - 4\, k\,m}}\; \sinh \left( \frac{\sqrt{\beta ^2 - 4\, k\,m}}{2\, m}\; t \right) \Bigg ] \; . \end{aligned}$$
(4.27)

Equation (4.27) shows that the perturbation \(\hat{q}(t)\) always decreases in time if \(\beta > 0\). If \(\beta ^2 \geqslant 4\,k\,m\), the perturbation undergoes an exponential decay, where the leading exponential is

$$\begin{aligned} \exp \left[ - \left( \frac{\beta }{2\,m} - \frac{\sqrt{\beta ^2 - 4\, k\,m}}{2\, m} \right) t \right] \; . \end{aligned}$$
(4.28)

One can easily check that the coefficient of this exponential is always negative, if \(k>0\), or zero, if \(k=0\). In both cases, the perturbation remains \(O(\varepsilon )\) for every \(t>0\), thus ensuring stability according to Lyapunov’s criterion. If \(0< \beta ^2 < 4\,k\,m\), the argument of the hyperbolic cosine and sine becomes imaginary, so that these contributions can be rewritten in terms of trigonometric cosine and sine. As a consequence, in this case, Eq. (4.27) describes a decaying exponential multiplied by a periodic function of time. Again, we conclude that the equilibrium state \(q_0=0\) is stable. Finally, if we consider the non-dissipative case, \(\beta =0\), Eq. (4.27) shows that the solution is purely oscillatory, so that the perturbation remains \(O(\varepsilon )\) at any time.
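
Equation (4.27) can be evaluated directly for all these regimes by allowing the square root to become imaginary, in which case the hyperbolic functions turn into their trigonometric counterparts. A minimal Python sketch, with parameter values chosen only for illustration (the marginal case \(\beta ^2 = 4\,k\,m\), which requires a separate limit, is not handled):

```python
import numpy as np

def q_hat(t, q0_hat, v0_hat, m=1.0, k=1.0, beta=0.3):
    """Perturbation around q0 = 0, Eq. (4.27); complex arithmetic covers
    both the non-oscillatory (beta**2 > 4*k*m) and oscillatory cases."""
    s = np.sqrt(complex(beta**2 - 4.0*k*m)) / (2.0*m)
    damping = np.exp(-beta*t/(2.0*m))
    out = damping * (q0_hat*np.cosh(s*t)
                     + (2.0*m*v0_hat + beta*q0_hat)/(2.0*m*s) * np.sinh(s*t))
    return out.real          # the imaginary part vanishes up to round-off

t = np.linspace(0.0, 30.0, 7)
for beta in (2.5, 0.3, 0.0):     # non-oscillatory, oscillatory, non-dissipative
    print(beta, np.round(q_hat(t, 1.0, 0.0, beta=beta), 4))
```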

For \(q_0 = \pm \sqrt{k/h}\), the analytical solution of Eq. (4.26) is

$$\begin{aligned}&\displaystyle \hat{q}(t) = {\text {e}}^{- \beta t/(2 m)} \Bigg [ \hat{q}(0) \cosh \left( \frac{\sqrt{\beta ^2 + 8\, k\,m}}{2\, m}\; t \right) \nonumber \\&\displaystyle +\frac{2\, m\, \hat{v}(0) + \beta \, \hat{q}(0)}{\sqrt{\beta ^2 + 8\, k\,m}}\; \sinh \left( \frac{\sqrt{\beta ^2 + 8\, k\,m}}{2\, m}\; t \right) \Bigg ] \; . \end{aligned}$$
(4.29)

The solution has an exponential behaviour in time, where the leading exponential is

$$\begin{aligned} \exp \left[ \left( \frac{\sqrt{\beta ^2 + 8\, k\,m}}{2\, m} - \frac{\beta }{2\,m} \right) t \right] \; . \end{aligned}$$
(4.30)

This exponential grows in time for every \(k>0\). This means that the perturbation will not remain confined within a small neighbourhood of the equilibrium state and, hence, we have instability in the sense of Lyapunov. Considering \(k=0\) is not significant, as \(q_0 = \pm \sqrt{k/h}\) would then coincide with \(q_0=0\).
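
The same conclusions can be read off from the characteristic roots of the linearised Eq. (4.26): substituting \(\hat{q} \propto {\text {e}}^{s t}\) gives \(m\,s^2 + \beta \, s + (k - 3\,h\,q_0^2) = 0\), and the sign of the largest real part of the roots discriminates between stability and instability. A minimal Python sketch with illustrative parameter values:

```python
import numpy as np

m, k, h, beta = 1.0, 1.0, 1.0, 0.3   # illustrative values

for q0 in (0.0, np.sqrt(k/h), -np.sqrt(k/h)):
    # characteristic polynomial of Eq. (4.26): m s^2 + beta s + (k - 3 h q0^2) = 0
    roots = np.roots([m, beta, k - 3.0*h*q0**2])
    growth = max(roots.real)
    verdict = "unstable" if growth > 0 else "stable"
    print(f"q0 = {q0:+.3f}: max Re(s) = {growth:+.4f}  ->  {verdict}")
```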

We can conclude that the method of small perturbations entirely confirms the results of the stability analysis obtained by a direct evaluation of the trajectories in phase space undergone by the mechanical system. A limitation in the use of this method arises due to the local character of the information. We can only consider small distances of the initial conditions from the equilibrium state. Moreover, in the case of instability, we can only predict the time evolution of perturbations at early times. When the growth in time makes the perturbation larger than \(O(\varepsilon )\), the linearised Eq. (4.26) becomes unreliable. In other words, nonlinearity becomes dominant in governing the time evolution of the system.

2 Flow Stability with Burgers Equation

Let us consider the one-dimensional Burgers equation with a linear forcing term,

$$\begin{aligned} \frac{\partial {W}}{\partial {t}} + W\; \frac{\partial {W}}{\partial {x}} = \frac{\partial ^2 {W}}{\partial {x}^2} + R \left( W - W_0 \right) \;, \end{aligned}$$
(4.31)

where \(R \in \mathbb {R}\) and \(W_0 \in \mathbb {R}\). We mention that Burgers equation is a toy model for the one-dimensional flow of a fluid. In a 1939 paper by J. M. Burgers, entitled “Mathematical examples illustrating relations occurring in the theory of turbulent fluid motion”, a slightly different form of Eq. (4.31) was presented as a simplified governing equation for a system developing turbulence [13].

Evidently, \(W = W_0\) is a solution of Eq. (4.31). This solution is stationary and, as a consequence, it can be defined as an equilibrium state for Eq. (4.31). We can investigate the stability of this equilibrium state, according to Lyapunov’s theory, by perturbing it and checking the evolution in time of the perturbation. This procedure is an extension of what has been done for a discrete mechanical system in Sect. 4.1. Here, we have a continuous flow system, meaning that we have a partial differential governing equation, Eq. (4.31), where the variable evolving in time is distributed in space. In this simple model, space is one-dimensional and, hence, the flow is one-dimensional as well, occurring along the real x-axis.

Hereafter, \(W=W_0\) will be called the basic solution of Eq. (4.31). To test its stability, we will carry out an analysis of small perturbations along the lines discussed in Sect. 4.1.2.

2.1 Linear Stability Analysis

A linear stability analysis of the basic solution, \(W = W_0\), is performed by superposing a small perturbation to the basic solution, namely

$$\begin{aligned} W = W_0 + \varepsilon \, w \;, \qquad \varepsilon > 0 \;, \end{aligned}$$
(4.32)

where \(\varepsilon \) is a perturbation parameter such that \(\varepsilon \ll 1\). We now substitute Eq. (4.32) into (4.31),

$$\begin{aligned} \varepsilon \, \frac{\partial {w}}{\partial {t}} + \varepsilon \, W_0\; \frac{\partial {w}}{\partial {x}} + \varepsilon ^2\, w\; \frac{\partial {w}}{\partial {x}} = \varepsilon \, \frac{\partial ^2 {w}}{\partial {x}^2} + \varepsilon \, R \, w \;. \end{aligned}$$
(4.33)

Then, neglecting terms \(O(\varepsilon ^2)\) and dividing by \(\varepsilon \), we obtain

$$\begin{aligned} \frac{\partial {w}}{\partial {t}} + W_0\ \frac{\partial {w}}{\partial {x}} = \frac{\partial ^2 {w}}{\partial {x}^2} + R\, w \;. \end{aligned}$$
(4.34)

We employ the Fourier transform to solve Eq. (4.34), namely

$$\begin{aligned}&\displaystyle \tilde{w}(k,t) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } w(x,t)\, {\text {e}}^{-I\, k x}\, {\text {d}}\,x \;, \nonumber \\&\displaystyle w(x,t) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } \tilde{w}(k,t)\, {\text {e}}^{I\, k x}\, {\text {d}}\,k \;. \end{aligned}$$
(4.35)

Here, k is the wave number. We can transform Eq. (4.34) by employing the properties of the Fourier transform of partial derivatives, given by Eqs. (2.17) and (2.18). Thus, we obtain

$$\begin{aligned} \frac{\partial {\tilde{w}}}{\partial {t}} = \lambda (k) \, \tilde{w} \;, \end{aligned}$$
(4.36)

where

$$\begin{aligned} \lambda (k) = R - k^2 - I\, k \, W_0 \;. \end{aligned}$$
(4.37)

The solution of Eq. (4.36) is

$$\begin{aligned} \tilde{w}(k,t) = \tilde{w}(k,0) \; {\text {e}}^{\lambda (k) \, t} \;. \end{aligned}$$
(4.38)

If we substitute Eq. (4.38) into the expression of w(x, t) given by Eq. (4.35), we can write the perturbation as

$$\begin{aligned} w(x,t) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } \tilde{w}(k,0) \; {\text {e}}^{I\, k x} \, {\text {e}}^{\lambda (k) \, t}\, {\text {d}}\,k \;. \end{aligned}$$
(4.39)

The solution w(x, t) expressed by Eq. (4.39) depends on the initial condition, w(x, 0), through its Fourier transform \(\tilde{w}(k,0)\). Moreover, w(x, t) is represented as a wave packet where

$$\begin{aligned} \omega (k) = - \mathfrak {I}(\lambda (k)) = k\, W_0 \end{aligned}$$
(4.40)

is the angular frequency, and

$$\begin{aligned} b(k,t) = \frac{1}{\sqrt{2 \pi }} \; \tilde{w}(k,0) \; {\text {e}}^{\mathfrak {R}(\lambda (k)) \, t} \end{aligned}$$
(4.41)

is the time-dependent amplitude of the normal mode. The single normal mode, with a given wave number \(k_a\), represents the evolution of an initial perturbation having the shape of a plane wave, namely

$$\begin{aligned} w(x,0) = \frac{1}{\sqrt{2 \pi }} \; {\text {e}}^{I\, k_a x} \;. \end{aligned}$$
(4.42)

In fact, on account of Eqs. (2.9) and (2.10), from Eq. (4.42) one obtains a Dirac’s delta distribution for \(\tilde{w}(k,0)\),

$$\begin{aligned} \tilde{w}(k,0) = \delta (k - k_a) \;. \end{aligned}$$
(4.43)

Then, Eq. (4.39) yields

$$\begin{aligned} w(x,t) = \frac{{\text {e}}^{\mathfrak {R}(\lambda (k_a))\, t}}{\sqrt{2 \pi }} \; {\text {e}}^{I\left[ k_a x - \omega (k_a) t \right] } \;, \end{aligned}$$
(4.44)

where the angular frequency \(\omega (k)\) is given by Eq. (4.40). Equation (4.44) defines a plane wave perturbation propagating with a phase velocity \(\omega (k_a)/k_a\), whose amplitude grows unboundedly in time if \(\mathfrak {R}(\lambda (k_a))>0\), or it is damped in time if \(\mathfrak {R}(\lambda (k_a))<0\). The former alternative defines an unstable behaviour, while the latter yields a stable character of the perturbation. We can now formally define the concept of convective instability.

Definition 4.1

(Convective Instability) A single normal mode perturbation with a given wave number k is deemed to be convectively stable if \(\mathfrak {R}(\lambda (k))<0\). It is said to be convectively unstable if \(\mathfrak {R}(\lambda (k))>0\). The marginal condition where \(\mathfrak {R}(\lambda (k))=0\) is called neutral stability.

On account of Eq. (4.37), the condition of convective instability reads

$$\begin{aligned} R > k^2 \;, \end{aligned}$$
(4.45)

with the curve given by \(R=k^2\) defining neutral stability.

Fig. 4.5  Qualitative sketch of the definition of convective instability as implied by Eq. (4.45)

A simple sketch summarising the concept of convective instability and the marginal condition of neutral stability is displayed in Fig. 4.5. In this figure, only the domain of positive wave numbers is represented, as the condition of convective instability just involves \(k^2\) and is thus independent of the sign of k.

We note that convective instability, for some wave number k, is possible only when R exceeds a critical value, denoted as \(R_\mathrm{c}\), which corresponds to the absolute minimum of R along the neutral stability curve. The corresponding value of k is the critical wave number, \(k_\mathrm{c}\). Thus, we have

$$\begin{aligned} k_\mathrm{c} = 0\,, \qquad R_\mathrm{c} = 0\,. \end{aligned}$$
(4.46)

Hereafter, a situation where \(R < R_\mathrm{c} = 0\) will be termed subcritical, while the condition \(R > R_\mathrm{c} = 0\) will be termed supercritical.

The convective instability concerns the behaviour of quite special initial perturbations of the basic solution, having the form of plane waves with a given wave number. These perturbations have an intrinsic non-local character as their support is spread over the whole real x-axis. A more general perturbation comes from a superposition of infinitely many plane waves with all possible wave numbers, as represented by the Fourier integral, Eq. (4.39). These wave packets may describe perturbations with a localised support, as happens, for instance, when the initial condition w(x, 0) is a Gaussian signal. In general, as pointed out in Sect. 2.2.1, the initial condition w(x, 0) must be absolutely integrable over the real x-axis. Otherwise, the Fourier integral can only make sense in a space of generalised functions, or distributions. This is the reason why the normal mode initial condition, given by Eq. (4.42), leads to a Fourier transform given by a Dirac’s delta. A normal mode is not absolutely integrable and Dirac’s delta is not a function in the traditional sense, but a distribution.

Definition 4.2

(Absolute Instability) A perturbation w(x, t) is deemed to be absolutely unstable if it is absolutely integrable over the real x-axis and if

$$\begin{aligned} \lim _{t \rightarrow +\infty } | w(x,t) | = + \infty \;, \end{aligned}$$
(4.47)

for every \(x \in \mathbb {R}\) .

Deciding whether a perturbation w(x, t) expressed through Eq. (4.39) is absolutely unstable means checking the large-time behaviour of the Fourier integral on the right-hand side of Eq. (4.39). This task can be accomplished by employing the steepest-descent approximation described in Sect. 3.5.3. The first step is to determine the saddle points of \(\lambda (k)\). In fact, Eq. (4.37) yields

$$\begin{aligned} \lambda '(k) = - 2\, k - I\, W_0 \;. \end{aligned}$$
(4.48)

Equation (4.48) shows that there is just one, purely imaginary, saddle point,

$$\begin{aligned} k_0 = - \frac{I\, W_0}{2} \; . \end{aligned}$$
(4.49)

We must now check that the holomorphy requirement is satisfied by \(\tilde{w}(k,t)\) expressed by Eq. (4.38). We know that \(\lambda (k)\) is holomorphic for every \(k \in \mathbb {C}\). On the other hand, \(\tilde{w}(k,0)\) is arbitrary. However, in order to employ the steepest-descent approximation as specified in Sect. 3.5.3, we need the assumption that no singularity of \(\tilde{w}(k,0)\) exists in the region of the complex plane bounded by the real k-axis, \(\mathfrak {I}(k)=0\), and the deformed curve \(\gamma ^*\) locally crossing \(k_0\) through a path of steepest descent. If this hypothesis regarding the initial state w(x, 0) holds, we can approximately evaluate |w(x, t)| for large times, by employing Eq. (3.163), as

$$\begin{aligned} |w(x,t)| \approx \frac{\left| \tilde{w}(k_0,0)\right| }{\sqrt{2 \, t\,}} \; {\text {e}}^{\mathfrak {R}(\lambda (k_0))\, t} \;. \end{aligned}$$
(4.50)

As a consequence of Eqs. (4.47) and (4.50), one can conclude that absolute instability is attained when \(\mathfrak {R}(\lambda (k_0)) > 0\). On account of Eqs. (4.37) and (4.49), this means

$$\begin{aligned} R > R_\mathrm{a} = \frac{W_0^2}{4} \; , \end{aligned}$$
(4.51)

where \(R_\mathrm{a}\) denotes the threshold for the onset of absolute instability. It is important to emphasize that the condition of absolute instability is independent of the details of the initial perturbation, w(x, 0), provided that it is absolutely integrable over the real x-axis and its Fourier transform, \(\tilde{w}(k,0)\), allows one to satisfy the holomorphy requirement relative to the steepest-descent method applied to the integral on the right-hand side of Eq. (4.39).
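
A small Python sketch (NumPy assumed; the function name and parameter values are illustrative) collects the results of Eqs. (4.37), (4.46), (4.49) and (4.51): it evaluates \(\lambda \) at the saddle point and classifies a given pair \((R, W_0)\) as stable, convectively unstable or absolutely unstable.

```python
import numpy as np

def classify_burgers(R, W0):
    """Classification of the basic state of Eq. (4.31), using lambda(k) of Eq. (4.37)."""
    k0 = -0.5j * W0                    # saddle point, Eq. (4.49)
    lam_k0 = R - k0**2 - 1j*k0*W0      # equals R - W0**2/4
    R_c, R_a = 0.0, 0.25 * W0**2       # Eqs. (4.46) and (4.51)
    if R <= R_c:
        verdict = "stable"
    elif lam_k0.real > 0:              # equivalently R > R_a
        verdict = "absolutely unstable"
    else:
        verdict = "convectively unstable"
    return R_c, R_a, lam_k0.real, verdict

for R in (-0.1, 0.2, 0.3):
    R_c, R_a, growth, verdict = classify_burgers(R, W0=1.0)
    print(f"R = {R:+.2f}: R_c = {R_c}, R_a = {R_a}, "
          f"Re(lambda(k0)) = {growth:+.3f} -> {verdict}")
```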

Fig. 4.6  Qualitative sketch of the definitions of convective instability and absolute instability as implied by Eqs. (4.45) and (4.51)

A qualitative sketch of the concepts of convective instability and absolute instability is displayed in Fig. 4.6. This figure highlights that absolute instability is not a modal condition, meaning that its validity does not depend on the behaviour of individual normal modes, but on the asymptotic behaviour of a general class of perturbations. Interestingly enough, the condition of absolute instability turns out to be a parametric condition, given by Eq. (4.51), mostly independent of the detailed characteristics of the initial perturbations superposed to the basic stationary solution through Eq. (4.32).

There is a physical picture of how the mathematical condition of absolute instability can be viewed. One can imagine the basic flow \(W_0\) as observed from a laboratory reference frame and from a co-moving reference frame. An observer in the latter frame travels downstream with speed \(W_0\) and detects normal modes of perturbation growing in time or damped in time. On the other hand, the view of an observer in the laboratory reference frame is different. Such an observer sees the flowing fluid with uniform velocity \(W_0\), detects the perturbations, but will also experience some difficulty in checking the ultimate behaviour of perturbations at large times. In fact, normal modes of perturbation initially growing in time are convected downstream by the basic flow, so that a growing normal mode can be driven away so fast that its time growth is not actually perceived with the instruments employed by this observer. If the basic flow velocity \(W_0\) is sufficiently low (remember that the absolute instability condition can be reformulated as \(W_0^2 < 4\,R\)), then the unbounded growth of each growing normal mode is correctly detected in the laboratory reference frame.

Fig. 4.7  Plots of the time evolution of the Gaussian perturbation for \(W_0=1\), at different positions, x, and with different values of R such that \(R < R_\mathrm{c}\), \(R = R_\mathrm{c}\), \(R_\mathrm{c}< R < R_\mathrm{a}\) and \(R > R_\mathrm{a}\)

2.2 Time Evolution of a Special Perturbation Wave Packet

We can check the results of the steepest-descent approximation by a direct evaluation of w(x, t) for a very special initial wave packet given by a Gaussian distribution,

$$\begin{aligned} w(x,0) = {\text {e}}^{-x^2} \;. \end{aligned}$$
(4.52)

Its Fourier transform is readily determined, namely

$$\begin{aligned} \tilde{w}(k,0) = \frac{1}{\sqrt{2}}\, {\text {e}}^{-\frac{k^2}{4}} \;. \end{aligned}$$
(4.53)

Then, from Eqs. (4.37) and (4.38), we obtain

$$\begin{aligned} \tilde{w}(k,t) = \frac{1}{\sqrt{2}}\, \exp \left[ -\frac{k^2}{4} + (R - k^2 - I\, k \, W_0)\; t \right] \;. \end{aligned}$$
(4.54)

The inverse Fourier transform of \(\tilde{w}(k,t)\), given by Eq. (4.54), is evaluated analytically as

$$\begin{aligned} w(x,t) = \frac{1}{\sqrt{4\, t + 1}}\, \exp \left[ R\, t-\frac{(x - W_0\, t)^2}{4 t + 1} \right] \;. \end{aligned}$$
(4.55)

Plots showing the time evolution of w(x, t), given by Eq. (4.55), are presented in Fig. 4.7 for the choice \(W_0=1\). Different positions, x, are considered. Each frame corresponds to a value of R that is either subcritical, critical or supercritical. Among the supercritical cases, \(R = 0.2\) or \(R = 0.3\), the expected difference between the behaviour when \(R < R_\mathrm{a}\) and that when \(R > R_\mathrm{a}\) is clearly displayed. The frame with \(R = 0.3\) clearly shows the large-time growing trend of the plots of |w(x, t)| versus t for different positions, x. This behaviour is precisely what one expects on the basis of the asymptotic expression of |w(x, t)| given by Eq. (4.50) and based on the steepest-descent approximation.

Fig. 4.8  Plots of the spatial distribution of the Gaussian perturbation for \(W_0=1\), at different times, t, and with different values of R such that \(R < R_\mathrm{c}\), \(R = R_\mathrm{c}\), \(R_\mathrm{c}< R < R_\mathrm{a}\) and \(R > R_\mathrm{a}\)

The spatial distribution of the Gaussian perturbation at different times is illustrated in Fig. 4.8. The same cases considered in Fig. 4.7 are reported. We see that, when R is subcritical or critical, there is a net decrease in height of the Gaussian maximum, accompanied by a rightward displacement and a spreading, as time increases. This is not the case when R is supercritical, as the maximum decreases at first, reaches a minimum, but eventually it increases unboundedly in time. This behaviour is easily gathered from Eq. (4.55), as the position of the maximum is \(x=W_0\,t\), and its height is

$$\begin{aligned} \max _{x \in \mathbb {R}} |w(x,t)| = \frac{1}{\sqrt{4\, t + 1}}\, {\text {e}}^{R\, t} \;. \end{aligned}$$
(4.56)

The height decreases monotonically when \(R \leqslant 0\), but it behaves non-monotonically when \(0< R < 2\). In fact, it decreases at first, reaches a minimum when

$$\begin{aligned} t = \frac{2-R}{4\,R}\;, \end{aligned}$$
(4.57)

and then it increases unboundedly. This explains the behaviour of the frames in Fig. 4.8 corresponding to \(R=0.2\) and \(R=0.3\). What makes the difference between the supercritical behaviour for \(R<R_\mathrm{a}\) and \(R>R_\mathrm{a}\), so well evident in Fig. 4.7, is the competition between the speed of the rightward displacement and the gradual increase of the maximum height at sufficiently large times. This competition results in a signal at a given x gradually decreasing in time, if \(R<R_\mathrm{a}\), and gradually increasing in time, if \(R>R_\mathrm{a}\). This is the essence of the transition from convective to absolute instability.
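
This competition is easy to observe from the closed-form solution, Eq. (4.55). The Python sketch below (with \(W_0 = 1\), so that \(R_\mathrm{a} = 0.25\); the observation point and the sampling times are illustrative) evaluates |w(x, t)| at a fixed position for two supercritical values of R, one below and one above \(R_\mathrm{a}\).

```python
import numpy as np

def w_exact(x, t, R, W0=1.0):
    """Closed-form wave packet of Eq. (4.55) for the Gaussian initial condition."""
    return np.exp(R*t - (x - W0*t)**2 / (4.0*t + 1.0)) / np.sqrt(4.0*t + 1.0)

x = 10.0                                   # fixed observation point
for R in (0.2, 0.3):                       # below and above R_a = W0**2/4 = 0.25
    w_mid = abs(w_exact(x, 50.0, R))
    w_late = abs(w_exact(x, 200.0, R))
    trend = "growing" if w_late > w_mid else "decaying"
    print(f"R = {R}: |w|(t=50) = {w_mid:.3e}, |w|(t=200) = {w_late:.3e}  ->  {trend}")
```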

3 Stability of Channelised Burgers Flow

The analysis of the instability occurring in Burgers flow can be made three-dimensional if we imagine that the flow along the x direction takes place, in fact, through a channel with a rectangular cross section. In a rectangular channel, where \(x \in \mathbb {R}\) , \(y \in [0, L_1]\) and \(z \in [0, L_2]\), the three-dimensional version of Burgers equation (4.31) is given by

$$\begin{aligned} \frac{\partial {\mathbf {W}}}{\partial {t}} + \left( \mathbf {W} \varvec{\cdot }\varvec{\nabla }\right) \mathbf {W} = \nabla ^2 \mathbf {W} + R \left( \mathbf {W} - \mathbf {W}_0 \right) \;, \end{aligned}$$
(4.58)

where \(\mathbf {W}_0\) is the constant vector \((W_0, 0, 0)\), and \(W_0 \in \mathbb R\) is the same constant considered in the one-dimensional case envisaged in Sect. 4.2. Evidently, a basic stationary solution of Eq. (4.58) is

$$\begin{aligned} \mathbf {W} = \mathbf {W}_0 \;. \end{aligned}$$
(4.59)

We imagine the confining walls of the channel positioned at \(y = 0\), \(y = L_1\), \(z = 0\), \(z = L_2\) as impermeable surfaces moving along the flow direction with velocity \(\mathbf {W}_0\). This assumption is compatible with a uniform velocity in the channel, as implied by Eq. (4.59). Thus, we assume the system of boundary conditions,

$$\begin{aligned} t>0\;; \quad x \in \mathbb {R}\;; \quad y = 0,\, L_1\;; \quad z \in [0,L_2]\;&: \qquad \mathbf {W} = \mathbf {W}_0\;, \nonumber \\ t>0\;; \quad x \in \mathbb {R}\;; \quad y \in [0, L_1]\;; \quad z = 0,\, L_2\;&: \qquad \mathbf {W} = \mathbf {W}_0\;. \end{aligned}$$
(4.60)

3.1 Linear Stability Analysis

The linear stability analysis of the solution, \(\mathbf {W} = \mathbf {W}_0\), can be carried out by writing

$$\begin{aligned} \mathbf {W} = \mathbf {W}_0 + \varepsilon \, \mathbf {w} \;, \qquad \varepsilon > 0 \;, \end{aligned}$$
(4.61)

where \(\varepsilon \) is a small perturbation parameter, \(\varepsilon \ll 1\). By substituting Eq. (4.61) into Eq. (4.58),

$$\begin{aligned} \varepsilon \, \frac{\partial {\mathbf {w}}}{\partial {t}} + \varepsilon \, W_0 \; \frac{\partial {\mathbf {w}}}{\partial {x}} + \varepsilon ^2\, \left( \mathbf {w} \varvec{\cdot }\varvec{\nabla }\right) \mathbf {w} = \varepsilon \, \nabla ^2 \mathbf {w} + R\, \varepsilon \, \mathbf {w} \;. \end{aligned}$$
(4.62)

We neglect terms \(O(\varepsilon ^2)\) and divide by \(\varepsilon \), so that we obtain

$$\begin{aligned} \frac{\partial {\mathbf {w}}}{\partial {t}} + W_0 \; \frac{\partial {\mathbf {w}}}{\partial {x}} = \nabla ^2 \mathbf {w} + R\, \mathbf {w} \;. \end{aligned}$$
(4.63)

Equation (4.63) governs the evolution of the linear perturbations \(\mathbf {w}\) and, as a consequence of Eqs. (4.60) and (4.61), its boundary conditions are

$$\begin{aligned} t>0\;; \quad x \in \mathbb {R}\;; \quad y = 0,\, L_1\;; \quad z \in [0,L_2]\;&: \qquad \mathbf {w} = 0\;, \nonumber \\ t>0\;; \quad x \in \mathbb {R}\;; \quad y \in [0, L_1]\;; \quad z = 0,\, L_2\;&: \qquad \mathbf {w} = 0\;. \end{aligned}$$
(4.64)

Due to the linearity of Eqs. (4.63) and (4.64), solutions can be sought as a series. The method to be employed is the separation of variables, described in Appendix A. Thus, we can write

$$\begin{aligned} \mathbf {w} = \sum _{n=1}^{\infty }\, \sum _{m=1}^{\infty } \mathbf {w}_{n m}(x,t)\, \sin (\alpha _n\, y)\, \sin (\beta _m \, z) \;, \end{aligned}$$
(4.65)

where

$$\begin{aligned} \alpha _n = \frac{\pi \, n}{L_1}\;, \quad \beta _m = \frac{\pi \, m}{L_2}\;. \end{aligned}$$
(4.66)

Series solutions described by Eq. (4.65) identically satisfy the boundary conditions, Eq. (4.64), provided that \(\mathbf {w}_{n m}(x,t)\) is a solution of

$$\begin{aligned} \frac{\partial {\mathbf {w}_{n m}}}{\partial {t}} + W_0 \; \frac{\partial {\mathbf {w}_{n m}}}{\partial {x}} = \frac{\partial ^2 {\mathbf {w}_{n m}}}{\partial {x}^2} + \left( R - \alpha _n^2 - \beta _m^2 \right) \mathbf {w}_{n m} \;. \end{aligned}$$
(4.67)

We note that Eq. (4.34) is entirely equivalent to Eq. (4.67), provided that we replace R with

$$\begin{aligned} R_{n m} = R - \alpha _n^2 - \beta _m^2\;, \end{aligned}$$
(4.68)

and w(xt) with the vector function \(\mathbf {w}_{n m}(x,t)\). Thus, \(\mathbf {w}_{n m}(x,t)\) can be expressed through the Fourier integral

$$\begin{aligned} \mathbf {w}_{n m}(x,t) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } \tilde{\mathbf {w}}_{n m}(k,0) \; {\text {e}}^{I\, k x} \, {\text {e}}^{\lambda (k) \, t}\, {\text {d}}\,k \;, \end{aligned}$$
(4.69)

where \(\lambda (k)\) is now given by

$$\begin{aligned} \lambda (k) = R_{n m} - k^2 - I\, k \, W_0 \;, \end{aligned}$$
(4.70)

and \(\tilde{\mathbf {w}}_{n m}(k,0)\) is the Fourier transform of the initial perturbation \(\mathbf {w}_{n m}(x,0)\).

We can follow step by step the analysis described in Sect. 4.2.1 to conclude that, on account of Eq. (4.45), convective instability occurs when

$$\begin{aligned} R_{n m} > k^2\; . \end{aligned}$$
(4.71)

This means

$$\begin{aligned} R > \alpha _n^2 + \beta _m^2 + k^2\;. \end{aligned}$$
(4.72)

This condition is satisfied with the minimum value of R occurring when \(n=1\), \(m=1\) and \(k=0\). In other words, the critical values \((k_\mathrm{c}, R_\mathrm{c})\) for the onset of convective instability are

$$\begin{aligned} k_\mathrm{c} = 0\;, \quad R_\mathrm{c} = \frac{\pi ^2}{L_1^2} + \frac{\pi ^2}{L_2^2}\;. \end{aligned}$$
(4.73)

On the other hand, on account of Eq. (4.51), absolute instability is detected when

$$\begin{aligned} R_{n m} > \frac{W_0^2}{4} \; . \end{aligned}$$
(4.74)

As for the convective instability, the modes that allow the inequality (4.74) to be satisfied with the least value of R are those with \(n=1\) and \(m=1\). Thus, the widest region of absolute instability is defined by

$$\begin{aligned} R > R_\mathrm{a} = \frac{W_0^2}{4} + \frac{\pi ^2}{L_1^2} + \frac{\pi ^2}{L_2^2} \; . \end{aligned}$$
(4.75)

The channelisation of Burgers flow thus yields a stabilisation of the basic solution, Eq. (4.59), by raising the thresholds for convective instability, \(R_\mathrm{c}\), and for absolute instability, \(R_\mathrm{a}\). The stabilisation is due to the restriction on the allowed modes of perturbation imposed by the boundary conditions, Eq. (4.60). In fact, the results described in Sect. 4.2.1 for the one-dimensional study are readily recovered on taking the limit of an infinite channel cross section, namely \(L_1 \rightarrow \infty \) and \(L_2 \rightarrow \infty \) .
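
A short Python sketch (illustrative channel sizes and basic velocity) evaluates the thresholds of Eqs. (4.73) and (4.75); as the cross section widens, \(R_\mathrm{c}\) and \(R_\mathrm{a}\) approach the one-dimensional values \(R_\mathrm{c} = 0\) and \(R_\mathrm{a} = W_0^2/4\) of Sect. 4.2.1.

```python
import numpy as np

def channel_thresholds(L1, L2, W0):
    """Critical values for channelised Burgers flow, Eqs. (4.73) and (4.75)."""
    R_c = np.pi**2 / L1**2 + np.pi**2 / L2**2
    R_a = 0.25 * W0**2 + R_c
    return R_c, R_a

for L in (1.0, 5.0, 100.0):                # square cross sections of growing size
    R_c, R_a = channel_thresholds(L, L, W0=1.0)
    print(f"L1 = L2 = {L:>6}: R_c = {R_c:.4f}, R_a = {R_a:.4f}")
```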

4 Stability of a Convective Cahn–Hilliard Process

The convective Cahn–Hilliard equation is a partial differential equation formulated as a model of the phase separation due to spinodal decomposition [6, 7]. In one-dimensional form, it can be written as,

$$\begin{aligned} \frac{\partial {\varPsi }}{\partial {t}} = \alpha \, \varPsi \; \frac{\partial {\varPsi }}{\partial {x}} - \frac{\partial ^2 {}}{\partial {x}^2} \left( \varPsi - \varPsi ^3 + \frac{\partial ^2 {\varPsi }}{\partial {x}^2} \right) \;, \end{aligned}$$
(4.76)

where \(\alpha \) is a real positive constant which represents the driving force parameter. Equation (4.76) can be equivalently expressed as

$$\begin{aligned} \frac{\partial {\varPsi }}{\partial {t}} = \alpha \, \varPsi \; \frac{\partial {\varPsi }}{\partial {x}} + 6\, \varPsi \left( \frac{\partial {\varPsi }}{\partial {x}} \right) ^2 + \left( 3\, \varPsi ^2 - 1 \right) \frac{\partial ^2 {\varPsi }}{\partial {x}^2} - \frac{\partial ^4 \varPsi }{\partial \,x^4} \;, \end{aligned}$$
(4.77)

A possible basic stationary solution of Eq. (4.77) is given by

$$\begin{aligned} \varPsi = \varPsi _0 = constant \;. \end{aligned}$$
(4.78)

4.1 Linear Stability Analysis

The linear stability of the basic solution, \(\varPsi = \varPsi _0\), is studied by superposing to \(\varPsi _0\) a small perturbation, namely

$$\begin{aligned} \varPsi = \varPsi _0 + \varepsilon \, \psi \;, \qquad \varepsilon > 0 \;. \end{aligned}$$
(4.79)

As always, we consider \(\varepsilon \) as a small perturbation parameter, \(\varepsilon \ll 1\). Substitution of Eq. (4.79) into (4.77) yields

$$\begin{aligned}&\displaystyle \varepsilon \, \frac{\partial {\psi }}{\partial {t}} = \varepsilon \, \alpha \, \varPsi _0 \; \frac{\partial {\psi }}{\partial {x}} + \varepsilon ^2\, \alpha \, \psi \; \frac{\partial {\psi }}{\partial {x}} + 6\, \varepsilon ^2\, \varPsi _0\left( \frac{\partial {\psi }}{\partial {x}} \right) ^2 + 6\, \varepsilon ^3\, \psi \left( \frac{\partial {\psi }}{\partial {x}} \right) ^2 \nonumber \\&\quad \,\, \displaystyle + \varepsilon \left( 3\, \varPsi _0^2 - 1 \right) \frac{\partial ^2 {\psi }}{\partial {x}^2} + 6\, \varepsilon ^2 \, \varPsi _0\, \psi \; \frac{\partial ^2 {\psi }}{\partial {x}^2} + 3\, \varepsilon ^3 \, \psi ^2 \; \frac{\partial ^2 {\psi }}{\partial {x}^2} - \varepsilon \, \frac{\partial ^4 \psi }{\partial \,x^4} \;. \end{aligned}$$
(4.80)

According to the hypothesis of small perturbations, we neglect the terms \(O(\varepsilon ^2)\) and \(O(\varepsilon ^3)\). Then, we divide Eq. (4.80) by \(\varepsilon \), and we obtain the linearised equation

$$\begin{aligned} \frac{\partial {\psi }}{\partial {t}} = \alpha \, \varPsi _0 \; \frac{\partial {\psi }}{\partial {x}} + \left( 3\, \varPsi _0^2 - 1 \right) \frac{\partial ^2 {\psi }}{\partial {x}^2} - \frac{\partial ^4 \psi }{\partial \,x^4} \;. \end{aligned}$$
(4.81)

Let us apply the Fourier transform to solve Eq. (4.81), namely

$$\begin{aligned}&\displaystyle \tilde{\psi }(k,t) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } \psi (x,t)\, {\text {e}}^{-I\, k x}\, {\text {d}}\,x \;, \nonumber \\&\displaystyle \psi (x,t) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } \tilde{\psi }(k,t)\, {\text {e}}^{I\, k x}\, {\text {d}}\,k \;. \end{aligned}$$
(4.82)

The transform of Eq. (4.81) is obtained by employing the properties of the Fourier transform of partial derivatives, given by Eqs. (2.17) and (2.18). Then, we write

$$\begin{aligned} \frac{\partial {\tilde{\psi }}}{\partial {t}} = \lambda (k) \, \tilde{\psi } \;, \end{aligned}$$
(4.83)

where

$$\begin{aligned} \lambda (k) = I\, \alpha \, \varPsi _0 \, k - \left( 3\, \varPsi _0^2 - 1 \right) k^2 - k^4 \;. \end{aligned}$$
(4.84)

The solution of Eq. (4.83) is

$$\begin{aligned} \tilde{\psi }(k,t) = \tilde{\psi }(k,0) \; {\text {e}}^{\lambda (k) \, t} \;. \end{aligned}$$
(4.85)

On substituting Eq. (4.85) into the expression of \(\psi (x,t)\) given by Eq. (4.82), we write the perturbation as

$$\begin{aligned} \psi (x,t) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } \tilde{\psi }(k,0) \; {\text {e}}^{I\, k x} \, {\text {e}}^{\lambda (k) \, t}\, {\text {d}}\,k \;. \end{aligned}$$
(4.86)

As implied by Definition 4.1, convective instability occurs when \(\mathfrak {R}(\lambda (k))>0\). On account of Eq. (4.84), this means

$$\begin{aligned} |\varPsi _0| < \sqrt{\frac{1 - k^2}{3}} \;. \end{aligned}$$
(4.87)

We note that the right-hand side of Eq. (4.87) is a function of k with an upper bound, \(1/\sqrt{3}\). Thus, the meaning of Eq. (4.87) is that, whatever the values of the constants \(\varPsi _0\), with \(|\varPsi _0| < 1/\sqrt{3}\), and \(\alpha \), there always exists a normal mode with a suitable wave number k that can destabilise the basic solution \(\varPsi = \varPsi _0\). In other words, convective instability to some normal modes is always possible provided that \(|\varPsi _0| < 1/\sqrt{3}\). Furthermore, the value of the constant \(\alpha \) does not influence in any way the onset of convective instability.

We now investigate the transition from convective to absolute instability by employing Definition 4.2 and the steepest-descent approximation described in Sect. 3.5.3. We first determine the saddle points of \(\lambda (k)\), namely the solutions of

$$\begin{aligned} \lambda '(k) = I\, \alpha \, \varPsi _0 - 2 \left( 3\, \varPsi _0^2 - 1 \right) k - 4\, k^3 = 0 \;. \end{aligned}$$
(4.88)

For every assigned pair \((\alpha , \varPsi _0)\), there are three saddle points: \(k_{01}\), \(k_{02}\) and \(k_{03}\). In general, by fixing the value of \(\varPsi _0\), we can trace graphically the value of \(\mathfrak {R}(\lambda (k_{0 i}))\), with \(i=1,2,3\), as a function of \(\alpha \).
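
This graphical procedure is easily automated: the saddle points are the roots of the cubic in Eq. (4.88), and \(\lambda \) of Eq. (4.84) can be evaluated at each of them. The Python sketch below (NumPy assumed; the value of \(\varPsi _0\) is illustrative) tabulates \(\mathfrak {R}(\lambda )\) at the three saddle points for a few values of \(\alpha \); deciding which saddle points are actually admissible for the steepest-descent path, as discussed later for \(k_{03}\), is a separate step not attempted here.

```python
import numpy as np

Psi0 = 0.3                                 # illustrative value, 0 < Psi0 < 1/sqrt(3)

def lam(k):
    """Dispersion function of Eq. (4.84)."""
    return 1j*alpha*Psi0*k - (3.0*Psi0**2 - 1.0)*k**2 - k**4

def saddle_points(alpha):
    """Roots of lambda'(k) = 0, Eq. (4.88): -4 k^3 - 2(3 Psi0^2 - 1) k + I alpha Psi0 = 0."""
    return np.roots([-4.0, 0.0, -2.0*(3.0*Psi0**2 - 1.0), 1j*alpha*Psi0])

for alpha in (0.0, 0.5, 1.0, 2.0):
    growth = [lam(k).real for k in saddle_points(alpha)]
    print(f"alpha = {alpha}: Re(lambda) at the saddle points =",
          np.round(sorted(growth, reverse=True), 4))
```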

A notable case is the limit where the driving force becomes vanishingly small, \(\alpha \rightarrow 0\). In this limit, the three saddle points are

$$\begin{aligned} k_{01} = 0\;, \quad k_{02} = \sqrt{\frac{ 1 - 3\, \varPsi _0^2 }{2}}\;, \quad k_{03} = - \sqrt{\frac{ 1 - 3\, \varPsi _0^2 }{2}} \;. \end{aligned}$$
(4.89)

The saddle points \(k_{02}\) and \(k_{03}\) can be either purely imaginary or real depending on whether \(|\varPsi _0|\) is larger or smaller than \(1/\sqrt{3}\). In every case, we obtain from Eq. (4.84)

$$\begin{aligned} \lambda (k_{01}) = 0\;, \quad \lambda (k_{02}) = \lambda (k_{03}) = \frac{1}{4} \left( 3\, \varPsi _0^2 - 1 \right) ^2 \;. \end{aligned}$$
(4.90)

The conclusion drawn from Eq. (4.90) is that the large-time approximation of the wave packet growth rate can never be negative. Thus, according to the steepest-descent approximation of Eq. (4.86), the dominant saddle points for the assessment of the large-time behaviour of \(|\psi (x,t)|\) are \(k_{02}\) and \(k_{03}\), which are endowed with the largest value of \(\mathfrak {R}(\lambda )\). This means that, in the limiting case \(\alpha \rightarrow 0\), the transition from stability to absolute instability takes place when \(\varPsi _0 = 1/\sqrt{3}\). Stated differently, in the limit \(\alpha \rightarrow 0\), every constant solution \(\varPsi = \varPsi _0 < 1/\sqrt{3}\) can be destabilised by normal modes with suitable values of k. Moreover, the amplitude of a wave packet perturbation of \(\varPsi = \varPsi _0 < 1/\sqrt{3}\) ultimately grows in time, when t is sufficiently large.

Fig. 4.9  Regions of convective and absolute instabilities for a convective Cahn–Hilliard process

Let us now consider a nonzero driving force parameter, \(\alpha \). A quite simple case is \(\varPsi _0 = 1/\sqrt{3}\), where we obtain

$$\begin{aligned}&\displaystyle k_{01} = \left( \frac{\alpha }{4 \sqrt{3}} \right) ^{1/3} \frac{\sqrt{3} + I}{2}\;, \quad k_{02} = - \left( \frac{\alpha }{4 \sqrt{3}} \right) ^{1/3} \frac{\sqrt{3} - I}{2}\;, \nonumber \\&\qquad \qquad \qquad \,\,\,\displaystyle k_{03} = - I\left( \frac{\alpha }{4 \sqrt{3}} \right) ^{1/3} \;. \end{aligned}$$
(4.91)

On account of Eqs. (4.84) and (4.91), we can write

$$ \mathfrak {R}(\lambda (k_{01})) = \mathfrak {R}(\lambda (k_{02})) = - \frac{1}{8} \left( \frac{3}{4} \right) ^{1/3} \alpha ^{4/3} \;, $$
$$\begin{aligned} \lambda (k_{03}) = \frac{1}{4} \left( \frac{3}{4} \right) ^{1/3} \alpha ^{4/3} \;. \end{aligned}$$
(4.92)

What can be concluded from Eq. (4.92), and from the steepest-descent approximation of the right-hand side of Eq. (4.86), is that the saddle points that are pertinent to establish the large-time behaviour of \(|\psi (x,t)|\) are \(k_{01}\) and \(k_{02}\). They are equivalent in the sense that they yield the same negative growth rate, \(\mathfrak {R}(\lambda (k_{01})) = \mathfrak {R}(\lambda (k_{02}))\), as shown by Eq. (4.92). On the other hand, the saddle point \(k_{03}\) is to be excluded as the steepest-descent paths departing from this point run along the imaginary k-axis and cannot be employed for the steepest-descent approximation of the perturbation wave packet. We can state that the solution \(\varPsi = \varPsi _0 = 1/\sqrt{3}\) is linearly stable for every positive value of \(\alpha \). The same conclusion is achieved for every choice of \(\varPsi _0\) with \(\varPsi _0 > 1/\sqrt{3}\). On the other hand, when \(0< \varPsi _0 < 1/\sqrt{3}\), the transition to absolute instability takes place only for a sufficiently small \(\alpha \), as illustrated in Fig. 4.9.

We point out that the holomorphy requirement is automatically satisfied, as \(\lambda (k)\) is holomorphic throughout the complex k-plane, as shown by Eq. (4.84). Obviously, since we are applying the steepest-descent approximation to the wave packet \(\psi (x,t)\) expressed by Eq. (4.86), the initial condition must be such that \(\tilde{\psi }(k,0)\) is a holomorphic function of k. In fact, as discussed in Sect. 3.5.3, we have to assume the absence of any singularity of \(\tilde{\psi }(k,0)\) in the region of the complex plane bounded by the real k-axis, \(\mathfrak {I}(k)=0\), and the deformed curve \(\gamma ^*\) locally crossing the pertinent saddle points through a path of steepest descent.

5 Some Considerations on Convective and Absolute Instabilities

There is a wide literature regarding the concepts of convective and absolute instabilities. Most of the references concern fluid dynamics and, among them, we mention the books by Charru [15], Manneville [12], Schmid and Henningson [4]. A quite detailed analysis of absolute instability in flow systems can be found in the review papers by Huerre [9] and Huerre and Monkewitz [10].

The origin of the concept of absolute instability is usually dated back to studies in the field of plasma physics as reported by Dysthe [5]. A discussion of the concept of absolute instability compared to convective instability is available in the second edition of the book on fluid mechanics by Landau and Lifshitz [11]. It is also worth mentioning that a slightly different version of the example of one-dimensional Burgers flow, employed in Sect. 4.2 as a test case to introduce convective and absolute instabilities, was previously discussed by Brevdo and Bridges [3], as well as by Barletta and Alves [2].

Several studies available in the literature, and the paper by Brevdo and Bridges [3] is an example, approach the discussion of the transition from convective to absolute instability by employing a representation of the perturbation wave packet in terms of a double Fourier–Laplace transform. This choice yields a slightly more complicated version of the mathematical analysis employed for the study of instability and, generally speaking, it is not strictly necessary for achieving a rigorous definition of the concept of absolute instability.

Another aspect of the literature that tends to complicate life for newcomers to absolute instability is the tendency to mix this topic with that of spatial normal modes. Spatial stability analysis aims to establish the growth or decay, downstream of the basic flow, of a localized perturbation that is periodic in time. Hence, the analysis is arranged so as to monitor the growth in space of a perturbation instead of assessing its growth in time at a given position, as happens with the convective stability analysis. In practice, spatial normal modes differ from the temporal normal modes, that is, the usual Fourier modes employed throughout this book, in that the former feature a complex wave number, k, and a purely imaginary time growth, \(\mathfrak {R}(\lambda (k))=0\), which is often described as a purely real angular frequency. For instance, the book by Schmid and Henningson [15] presents spatial normal modes as some sort of prerequisite for the rigorous definition of absolute instability. This choice is perfectly correct, although the purely mathematical process of saddle-point detection in the complex k-plane is endowed with a physical meaning, i.e. the dynamics of spatial normal modes, that may sound a bit cryptic on a first approach to absolute instability. In fact, the analysis in the complex k-plane is needed as an implementation of the steepest-descent approximation of a wave packet perturbation. As such, no physical meaning for the complex values of k is strictly necessary as a justification of the method. Following the presentation of absolute instability in terms of spatial normal modes, what is purely mathematical, such as the holomorphy requirement discussed in Sect. 3.5.3, becomes a physical process of collision between different branches of spatial normal modes, described through the so-called Briggs’ method [14, 15]. Such a scheme, only apparently different from that presented here, can be quite illuminating when the concept of absolute instability is already familiar to the reader. On the other hand, it may appear a little convoluted as a first approach to this matter.