
There are well-known and rigorously established algorithms for the solution of linear problems. The physical meaning of the solution of any linear problem is clear on an intuitive level, and the particulars of a given linear system do not play an essential role. However, if we want to deal with real situations, we must take into account two new elements: non-linearity and noise. Non-linearity leads to severe complications in solution technique, and the combination of non-linearity with noise complicates the situation even more. In attempts to predict the behavior of such systems, even the most refined intuition fails. The stochastic resonance effect is an example of the paradoxical behavior of non-linear systems under the influence of noise. The term “stochastic resonance” unites a group of phenomena for which the growth of disorder (noise amplitude) at the input of a non-linear system leads, under certain conditions, to an increase of order at the output. Quantitatively, the effect manifests itself in the fact that integral system characteristics such as the gain and the signal-to-noise ratio have a clearly marked maximum at some optimal noise level. At the same time, the system entropy reaches a minimum, giving evidence of noise-induced growth of order.

7.1 Qualitative Description of the Effect

The concept of stochastic resonance was first proposed in papers [1, 2] and independently in [3, 4]. The authors of these works studied the problem of the alternation of ice ages on Earth. Analysis showed that ice ages alternate with a period on the order of 100,000 years. This result seemed curious. The only quantity of this time scale in the dynamics of the Earth is the period of the oscillations of the Earth’s orbital eccentricity, connected to planetary gravitational perturbations. However, the change in the flow of solar energy reaching the Earth associated with those oscillations amounts to only about 0.1% of the total flow. At first glance this seemed absolutely insufficient for such radical climate changes, and one would have to seek principally new mechanisms to amplify the weak periodic oscillations. One possible solution to the problem was found in accounting for the joint action of two mechanisms: the simultaneous action of a periodic perturbation (oscillations of the Earth’s orbital eccentricity) and noise (climatic fluctuations) under certain conditions (stochastic resonance) led to a qualitative explanation of the observed climatic changes.

The first works on stochastic resonance necessarily included the following fundamental elements: non-linearity, bistability, an external periodic signal, and noise. Later, it became clear that the latter three elements are not necessary attributes of the effect. It turned out that there is no need to consider only bistable systems, and stochastic resonance can be presented as a purely threshold effect. It is also possible to set up the problem in the absence of an external periodic signal: in many non-linear systems coherent motion is stimulated by the internal dynamics of the non-linear system rather than by an external force. However paradoxical it may seem, it is even possible to set up the stochastic resonance problem without external noise [5]: in chaotic systems noise can be generated by their own chaotic dynamics, and the role of noise intensity is played by the system parameters determining the measure of chaoticity. There are also less radical deviations from the initial formulation of the problem. For example, the typical signals perceived by biological systems have a complex spectral composition, and a monochromatic signal seems an excessive idealization for them. Colored noise, i.e. noise with a finite correlation time and limited spectrum, can be more appropriate than white noise for the reality being studied.

Fig. 7.1 Stochastic resonance in a bistable symmetric well with periodic perturbation

The physical mechanism of the stochastic resonance effect is demonstrated in the simplest way on its canonical model: a Brownian particle in a symmetric bistable potential

$$\displaystyle{ V (x) = -\frac{a} {2}x^{2} + \frac{b} {4}x^{4};\quad (a,b > 0)\,, }$$
(7.1)

subjected to a weak periodic signal F cosω t. The potential minima are situated at the points \(x_{\min } = \pm \sqrt{a/b}\) and are separated by a potential barrier at x = 0 of height ΔV = a 2∕4b. As is well known, fluctuating forces cause rare (at low temperature) random transitions over the potential barrier. The rate of these transitions is determined by the famous Kramers formula [6]

$$\displaystyle{ W_{k} \sim e^{-\varDelta V/D}\,, }$$
(7.2)

where D is the intensity of the fluctuations, and the pre-exponential factor depends on the potential geometry. Now suppose that the particle is subjected to an additional deterministic force: a weak periodic perturbation of frequency ω. The term “weak” means that the force by itself cannot result in the barrier being overcome. In the presence of the periodic perturbation, the initial parity of the potential is broken (see Fig. 7.1), which leads to a dependence of the transition rates W k on time. By varying the intensity of the noise, it is possible to achieve a situation where the Kramers rate is close to the frequency of the potential barrier modulation. This is achieved provided the condition

$$\displaystyle{ W_{k}^{-1}(D) \equiv t_{ k}(D) \approx \frac{\pi } {\omega }\,, }$$
(7.3)

is fulfilled. Analogous considerations can also be made for the more general case of two meta-stable states 1, 2, where the height of the barrier between them changes due to a periodic perturbation of frequency ω = 2π∕T. Suppose that the particle performs transitions between these states in average times T + (1 → 2) and T − (2 → 1). It is natural to assume that the system is optimally adjusted to the external perturbation under the condition

$$\displaystyle{ \frac{2\pi } {\omega } \approx T^{+} + T^{-}\,. }$$
(7.4)

In the symmetric case T + = T − = t K we return to the relation (7.3).
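The matching condition (7.3) is easy to explore numerically. The sketch below assumes, purely for illustration, a Kramers rate with unit pre-exponential factor and barrier height ΔV = 0.25, and solves t K(D) = π∕ω for the optimal noise intensity by bisection:

```python
import math

def kramers_rate(D, dV=0.25, w0=1.0):
    """Kramers rate (7.2); w0 is an assumed pre-exponential factor."""
    return w0 * math.exp(-dV / D)

def optimal_noise(omega, dV=0.25, w0=1.0):
    """Solve the matching condition t_K(D) = pi/omega, Eq. (7.3), by bisection.

    kramers_rate is monotonically increasing in D, so bisection applies.
    """
    target = omega / math.pi            # required transition rate
    lo, hi = 1e-6, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if kramers_rate(mid, dV, w0) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

D_opt = optimal_noise(omega=0.01)
# at D_opt the mean escape time matches half the modulation period
assert abs(1.0 / kramers_rate(D_opt) - math.pi / 0.01) < 1.0
```

Note that for slower modulation (smaller ω) the optimal noise level decreases only logarithmically, reflecting the exponential sensitivity of the Kramers rate to D.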

For negligibly low noise levels (D → 0) the Kramers transition rate is practically zero and no coherence is present. For very high noise levels coherence is also absent, as the system response becomes practically random. Between these two extreme cases there is an optimal noise intensity, defined by the relation (7.3), which maximizes coherence. For this situation it seems natural to use the term “resonance,” although this phenomenon evidently differs considerably from familiar deterministic resonances; nevertheless, statistically the term has a well-defined meaning, and in the last few years it has become widely accepted. The effect consists in the possibility of tuning a stochastic system, by variation of the noise intensity, to a regime of maximal amplification of the modulation signal. Stochastic resonance can be realized in any non-linear system where several characteristic time scales exist, one of which can be controlled with the help of noise [7, 8].

Systems demonstrating stochastic resonance are in some sense intermediate between regular and irregular: they are described by a random process where jumps do not obey a deterministic law, but nevertheless in a resonance regime they have some degree of regularity. In particular, the coordinate correlation function does not tend to zero over long times.

7.2 The Interaction Between the Particle and Its Surrounding Environment: Langevin’s Equation

For a quantitative description of the stochastic resonance effect, it is necessary to obtain a dynamical equation describing the interaction between a Brownian particle and its environment [9–12]. The equation must contain only the coordinates of the particle, accounting for the interaction with the environment phenomenologically.

Suppose a Brownian particle of mass M moves in the potential V (R) (for example, in a bistable potential, considered in the previous section). The Lagrange function L for such motion reads

$$\displaystyle{ L_{0}(R, \dot{R} ) = \frac{1} {2}M \dot{R} ^{2} - V (R)\,. }$$
(7.5)

For the thermal reservoir (environment) we take the simplest model considering it to be a set of a large number of non-interacting harmonic oscillators with coordinates q i , masses m i , and frequencies ω i

$$\displaystyle{ L_{hb}(q_{i},\dot{q}_{i}) =\sum \limits _{i}\frac{m_{i}} {2} (\dot{q}_{i}^{2} -\omega _{ i}^{2}q_{ i}^{2})\,. }$$
(7.6)

The interaction between the particle and the thermal reservoir is assumed to be separable and linear in the oscillator coordinates. As a result, we obtain for the full Lagrangian

$$\displaystyle{ L(R, \dot{R};q_{i},\dot{q}_{i}) = L_{0}(R, \dot{R} ) + L_{hb}(q_{i},\dot{q}_{i}) +\sum \limits _{i}f_{i}(R)q_{i}\,. }$$
(7.7)

The corresponding Euler–Lagrange equations take the form

$$\displaystyle\begin{array}{rcl} M \ddot{R}& =& -\frac{dV (R)} {dR} +\sum \limits _{i}q_{i}\frac{df_{i}(R)} {dR} \,, \\ m_{i}\ddot{q}_{i}& =& -m_{i}\omega _{i}^{2}q_{ i} + f_{i}(R)\,. {}\end{array}$$
(7.8)

In order to obtain an equation for the coordinate R alone, it is necessary to eliminate the reservoir coordinates q i from these equations. The latter are equal to

$$\displaystyle{ q_{i}(t) = q_{i}^{0}(t) +\int \limits _{ 0}^{t}ds\frac{f_{i}(R(s))} {m_{i}\omega _{i}} \sin \omega _{i}(t - s)\,, }$$
(7.9)

where q i 0(t) are the solutions of the free equation for the reservoir coordinates,

$$\displaystyle{ q_{i}^{0}(t) = q_{ i0}\cos \omega _{i}(t - t_{0}) + (\,p_{i0}/m_{i}\omega _{i})\sin \omega _{i}(t - t_{0})\,, }$$
(7.10)

where q i0 and p i0 are the coordinates and momenta of the oscillators at the initial time moment t 0. Substituting (7.9) into the first of Eqs. (7.8), we obtain the result

$$\displaystyle{ M \ddot{R}=\tilde{ F}(R) + F_{\mathrm{frict}}(R, \dot{R} ) + F_{L}(R,t)\,. }$$
(7.11)

Here we have introduced the renormalized potential force \(\tilde{F}(R)\), the friction force \(F_{\mathrm{frict}}(R, \dot{R} )\), which depends on the coordinates and velocities, and the random (Langevin) force F L (R, t). These forces are

$$\displaystyle\begin{array}{rcl} \tilde{F}(R)& =& -\frac{d\tilde{V }(R)} {dR};\quad \tilde{V }(R) = V (R) -\sum \limits _{i} \frac{1} {2m_{i}\omega _{i}^{2}}\left [\,f_{i}(R)\right ]^{2}\,,{}\end{array}$$
(7.12)
$$\displaystyle\begin{array}{rcl} F_{\mathrm{frict}}(R, \dot{R} )& =& -\sum \limits _{i} \frac{1} {m_{i}\omega _{i}^{2}}\int \limits _{t_{0}}^{t}ds\frac{df_{i}(R(t))} {dR} \cos \omega _{i}(t - s)\frac{df_{i}(R(s))} {dR} \dot{R} (s)\,,{}\end{array}$$
(7.13)
$$\displaystyle\begin{array}{rcl} F_{L}(R,t)& =& \sum \limits _{i}q_{i}^{0}(t)\frac{df_{i}(R)} {dR} \,.{}\end{array}$$
(7.14)

The friction force (7.13) represents a delayed force of the form

$$\displaystyle{ F_{\mathrm{frict}}(R, \dot{R} ) = -\int \limits _{t_{0}}^{t}ds\gamma (t,s)\dot{R} (s)\,, }$$
(7.15)

where the integral kernel (assuming equal masses m i  = m and identical form-factors f i (R) = f(R)) is

$$\displaystyle{ \gamma (t,s) = \frac{df(R(t))} {dR} \frac{df(R(s))} {dR} \sum \limits _{i} \frac{1} {m\omega _{i}^{2}}\cos \omega _{i}(t - s)\,. }$$
(7.16)

The sum of the oscillating terms in (7.16) is approximately zero everywhere except in the region \(\left \vert t - s\right \vert \leqslant \varepsilon\). Therefore we can approximately write

$$\displaystyle{ \sum \limits _{i} \frac{1} {m\omega _{i}^{2}}\cos \omega _{i}(t - s) \approx 2\gamma _{0}\delta _{\varepsilon }(t - s)\,, }$$
(7.17)

where δ ɛ (t − s) is the so-called smeared δ-function

$$\displaystyle{ \delta _{\varepsilon }(t-s) = \left \{\begin{array}{@{}l@{\quad }l@{}} 1/\varepsilon,\quad &\mbox{ if $ -\varepsilon /2 \leq (t - s) \leq \varepsilon /2$},\\ 0, \quad &\text{otherwise} \end{array} \right.\,. }$$
(7.18)

The factor 2 is introduced in (7.17) for later convenience. From (7.17) it follows that

$$\displaystyle{ 2\gamma _{0} =\int \limits _{ -\infty }^{+\infty }dt\sum \limits _{i} \frac{1} {m\omega _{i}^{2}}\cos \omega _{i}t\,. }$$
(7.19)

Using (7.17) for the kernel γ(t, s) we get the expression

$$\displaystyle{ \gamma (t,s) = 2\gamma (R)\delta _{\varepsilon }(t - s);\quad \gamma (R) =\gamma _{0}\left [\frac{df(R)} {dR} \right ]^{2}\,. }$$
(7.20)

We have made the assumption that R(s) ≈ R(t) for \(\left \vert s - t\right \vert \leqslant \varepsilon\). Substituting (7.20) into (7.15), we finally find that, under the above assumptions, the friction force is a local function of the coordinates

$$\displaystyle{ F_{\mathrm{frict}} = -\gamma (R)\dot{R} \,. }$$
(7.21)

Now we turn to the transformation of the expression (7.14) for the random force.

Assuming that the form factors are equal for all oscillators, we get

$$\displaystyle{ F_{L}(R,t) = \frac{df(R)} {dR} \xi (t)\,, }$$
(7.22)

where

$$\displaystyle{ \xi (t) =\sum \limits _{i}q_{i}^{0}(t)\,. }$$
(7.23)

The initial conditions q i0, p i0 for the oscillators representing the thermal reservoir should naturally be treated as independent random variables with the following statistical properties

$$\displaystyle\begin{array}{rcl} \left \langle q_{i0}\right \rangle & =& 0;\;\left \langle p_{i0}\right \rangle = 0\,, \\ \left \langle q_{i0}q_{j0}\right \rangle & =& \delta _{ij}\left \langle q_{i0}^{2}\right \rangle;\;\left \langle p_{ i0}p_{j0}\right \rangle =\delta _{ij}\left \langle p_{i0}^{2}\right \rangle;\;\left \langle q_{ i0}p_{j0}\right \rangle = 0\,.{}\end{array}$$
(7.24)

Using (7.10) and (7.24) for the random variable ξ(t) determining the Langevin force, we get

$$\displaystyle{ \left \langle \xi (t)\right \rangle = 0;\quad \left \langle \xi (t)\xi (t')\right \rangle \approx \sum \limits _{i} \frac{\left \langle \varepsilon _{i0}\right \rangle } {m\omega _{i}^{2}}\cos \omega _{i}(t - t')\,, }$$
(7.25)

where 〈ɛ i0〉 is the average energy of the ith oscillator

$$\displaystyle{ \langle \varepsilon _{i0}\rangle = \frac{\left \langle p_{i0}^{2}\right \rangle } {2m_{i}} + \frac{1} {2}m_{i}\omega _{i}^{2}\left \langle q_{ i0}^{2}\right \rangle \,. }$$
(7.26)

If the thermal reservoir is in thermal equilibrium at temperature T, then

$$\displaystyle{ \left \langle \varepsilon _{i0}\right \rangle = \frac{1} {2}k_{B}T }$$
(7.27)

and

$$\displaystyle{ \left \langle \xi (t)\xi (t')\right \rangle \approx \frac{1} {2}k_{B}T\sum \limits _{i} \frac{1} {m\omega _{i}^{2}}\cos \omega _{i}(t - t')\,. }$$
(7.28)

Taking into account (7.17), the sum on the right-hand side of (7.28) can be replaced by 2γ 0 δ ɛ (t − t′), and for the correlation function of the quantity ξ(t) we get

$$\displaystyle{ \left \langle \xi (t)\xi (t')\right \rangle =\gamma _{0}k_{B}T\delta _{\varepsilon }(t - t')\,. }$$
(7.29)

It is convenient to introduce the normalized random variable Γ(t),

$$\displaystyle{ \varGamma (t) = \left (1/\sqrt{\gamma _{0 } k_{B } T}\right )\xi (t)\,, }$$
(7.30)

which, according to the central limit theorem, has a Gaussian distribution, and its mean value and correlation function equal to:

$$\displaystyle{ \left \langle \varGamma (t)\right \rangle = 0;\quad \left \langle \varGamma (t)\varGamma (t')\right \rangle =\delta _{\varepsilon }(t - t')\,. }$$
(7.31)

Substituting (7.30) into (7.22) we get the resulting expression for the Langevin force

$$\displaystyle{ F_{L}(R,t) = \sqrt{D(R)}\varGamma (t);\quad D(R) = \left [\frac{df(R)} {dR} \right ]^{2}\gamma _{ 0}k_{B}T\,. }$$
(7.32)

Comparing (7.20) with (7.32), we find that the intensity of the Langevin force is connected to the friction coefficient and the reservoir temperature by a relation known as a particular case of the fluctuation-dissipation theorem [13–15]

$$\displaystyle{ D(R) =\gamma (R)k_{B}T\,. }$$
(7.33)

The origin of this connection is a consequence of the fact that both the friction force and the Langevin force are generated by the same interaction of the Brownian particle with the thermal reservoir. Thus, we have shown that Brownian particle dynamics can be phenomenologically described by a stochastic differential equation (equation with random force)

$$\displaystyle{ M \ddot{R}= F(R) -\gamma (R)\dot{R}+ F\cos \omega t + \sqrt{D(R)}\varGamma (t)\,. }$$
(7.34)

In Eq. (7.34) we have included an external periodic force, which is independent of the interaction with the thermal reservoir. An equation of the form (7.34) (together with this direct probabilistic method of describing the dynamics) was first proposed by Langevin [16] and is named after him. Let us consider the Langevin equation (7.34) from a more general point of view. It represents a particular case of a stochastic differential equation of the form

$$\displaystyle{ \dot{x}(t) = G\left (x(t),t,\xi (t)\right )\,, }$$
(7.35)

where the variable ξ(t) describes some stochastic (random) process. It can be presented as a family of functions ξ u (t) depending on the result u of some experiment S. Therefore, the stochastic differential equation (7.35) is a family of ordinary differential equations, one for every result (realization) u

$$\displaystyle{ \dot{x}_{u}(t) = G\left (x_{u}(t),t,\xi _{u}(t)\right )\,. }$$
(7.36)

The family of solutions of those equations x u (t) for different u represents the stochastic process x(t). We can say that each realization ξ u (t) of the random process ξ(t) corresponds to a realization x u (t) of the random process x(t). Thus the solution x(t) becomes a functional of the process ξ(t).
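This family picture can be made concrete in a few lines of code. In the sketch below the drift is an assumed linear one (chosen only for brevity, not the bistable force considered above), and a random seed plays the role of the experiment result u: fixing u fixes the realization ξ u (t) and hence the trajectory x u (t):

```python
import math
import random

def realization(u, T=10.0, dt=1e-3, x0=1.0, D=0.5):
    """One realization x_u(t) of the SDE dx = -x dt + sqrt(D) xi(t) dt,
    illustrating Eq. (7.36); the seed u indexes the noise realization."""
    rng = random.Random(u)              # the experiment result u fixes xi_u(t)
    x = x0
    for _ in range(int(T / dt)):
        # discretized white noise: standard deviation 1/sqrt(dt) per step
        xi = rng.gauss(0.0, 1.0) / math.sqrt(dt)
        x += (-x + math.sqrt(D) * xi) * dt
    return x

# the same u reproduces the same trajectory; different u give different ones
assert realization(1) == realization(1)
assert realization(1) != realization(2)
```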

7.3 The Two-State Model

We now turn to an investigation of stochastic resonance in the simplest analytically solvable model—the two-state model [17]. This model represents a discrete analogue of a continuous bistable system where the state variable takes only two discrete values x ± with probabilities n ±. The rates of transition from one state to another W ± are assumed to be known. It is obvious that they can be obtained only from models accounting for the internal system dynamics, which itself determines the rates of transitions between local minima. If the distribution function P(x) of the continuous analogue of the two-state model is known, then

$$\displaystyle{ n_{-} = 1 - n_{+} =\int \limits _{ -\infty }^{x_{\max }}P(x)dx\,, }$$
(7.37)

where x max is the position of the maximum in the two-well potential. The master equation can be written in the form

$$\displaystyle{ \frac{dn_{+}} {dt} = -\frac{dn_{-}} {dt} = W_{-}(t)n_{-}- W_{+}(t)n_{+} = W_{-}(t) -\left [W_{-}(t) + W_{+}(t)\right ]n_{+}\,. }$$
(7.38)

Transition probabilities depend on time due to the presence of the periodic perturbation. The master equation (7.38) is applicable only in the so-called adiabatic limit, when the perturbation period is much longer than the characteristic relaxation times. For the two-well problem, by “relaxation time” we understand the time for thermal equilibrium to be established within a separate well.

The solution of the linear differential equation (7.38) with periodic coefficients has the form

$$\displaystyle\begin{array}{rcl} n_{+}(t)& =& g(t)\left [n_{+}(t_{0}) +\int _{ t_{0}}^{t}W_{ +}(\tau )g^{-1}(\tau )d\tau \right ]\,, \\ g(t)& =& \exp \left (-\int _{t_{0}}^{t}\left [W_{ +}(\tau ) + W_{-}(\tau )\right ]d\tau \right )\,, {}\end{array}$$
(7.39)

where n +(t 0) is an as yet undefined initial condition.

Let us now assume that the transition rates have the form

$$\displaystyle{ W_{\pm }(t) = f(\mu \pm \eta _{0}\cos \omega t)\,, }$$
(7.40)

where η 0 is the dimensionless parameter determining the perturbation intensity, and at η 0 = 0 the function f turns into the Kramers rate (7.2) and no longer depends on time. In other words, we assume that the effect of the perturbation reduces solely to a modulation of the height of the potential barrier determining the Kramers rate. In the case of weak periodic perturbation we can expand the transition rates (7.40) in the small parameter η 0cosω t:

$$\displaystyle\begin{array}{rcl} W_{\pm }(t)& =& \frac{1} {2}\left (\alpha _{0} \mp \alpha _{1}\eta _{0}\cos \omega t +\alpha _{2}\eta _{0}^{2}\mathop{ \cos }\nolimits ^{2}\omega t \mp \cdots \right )\,, \\ W_{+}(t) + W_{-}(t)& =& \alpha _{0} +\alpha _{2}\eta _{0}^{2}\mathop{ \cos }\nolimits ^{2}\omega t\,, {}\end{array}$$
(7.41)

where

$$\displaystyle{ \frac{1} {2}\alpha _{0} = f(\mu ),\quad \frac{1} {2}\alpha _{n} = \frac{(-1)^{n}} {n!} \left.\frac{d^{n}f} {d\eta ^{n}} \right \vert _{\eta =0}\,. }$$
(7.42)

The expression (7.39) can now be integrated, and to first order in the small parameter η 0cosω t we get

$$\displaystyle\begin{array}{rcl} n_{+}\left (\left.t\right \vert x_{0},t_{0}\right )& =& \frac{1} {2}\left [e^{-\alpha _{0}(t-t_{0})}\left [2\delta _{ x_{0}c} - 1 - \frac{\alpha _{1}\eta _{0}\cos (\omega t_{0}-\varphi )} {(\alpha _{0}^{2} +\omega ^{2})^{1/2}}\right ]\right. \\ & & \qquad \quad \left.+1 + \frac{\alpha _{1}\eta _{0}\cos (\omega t-\varphi )} {(\alpha _{0}^{2} +\omega ^{2})^{1/2}}\right ]\,, {}\end{array}$$
(7.43)

where \(\varphi =\arctan (\omega /\alpha _{0})\). The Kronecker symbol \(\delta _{x_{0}c}\) equals unity if at the time moment t 0 the particle was in the state x + = c, and zero if it was in the state x − = −c. The quantity \(n_{+}\left (\left.t\right \vert x_{0},t_{0}\right )\) is the conditional probability for the particle to be in the state c at time t, given that at time t 0 it was in the state x 0 (c or − c).
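The asymptotic form of (7.43) can be checked against a direct numerical integration of the master equation (7.38) with the linearized rates (7.41). All parameter values below are illustrative assumptions:

```python
import math

a0, a1, eta0, omega = 1.0, 1.0, 0.2, 0.5     # assumed model parameters

def W(sign, t):
    """Transition rates W_+ (sign=+1) and W_- (sign=-1), Eq. (7.41),
    kept to first order in eta0."""
    return 0.5 * (a0 - sign * a1 * eta0 * math.cos(omega * t))

def integrate(n_plus=1.0, dt=1e-3, T=60.0):
    """Euler integration of dn+/dt = W- - (W- + W+) n+, Eq. (7.38)."""
    t = 0.0
    for _ in range(int(T / dt)):
        n_plus += (W(-1, t) - (W(-1, t) + W(+1, t)) * n_plus) * dt
        t += dt
    return n_plus, t

n_plus, t = integrate()
# asymptotic part of (7.43): n+ -> 1/2 + (a1*eta0/2) cos(wt - phi) / sqrt(a0^2 + w^2)
phi = math.atan(omega / a0)
pred = 0.5 + 0.5 * a1 * eta0 * math.cos(omega * t - phi) / math.hypot(a0, omega)
assert abs(n_plus - pred) < 1e-2
```

By T = 60∕α 0 the transient term proportional to e^{−α 0(t − t 0)} has long since decayed, so only the oscillatory part of (7.43) survives.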

The obtained probabilities allow us to calculate all the statistical characteristics of the process. In particular, reduction to the “continuous” bistable system is done with the help of the relations

$$\displaystyle{ P\left (\left.x,t\right \vert x_{0},t_{0}\right ) = n_{+}\left (\left.t\right \vert x_{0},t_{0}\right )\delta (x - c) + n_{-}\left (\left.t\right \vert x_{0},t_{0}\right )\delta (x + c)\,. }$$
(7.44)

For example, for the average value of coordinate x we get

$$\displaystyle{ \left \langle \left.x(t)\right \vert x_{0},t_{0}\right \rangle =\int xP\left (\left.x,t\right \vert x_{0},t_{0}\right )dx = cn_{+}\left (\left.t\right \vert x_{0},t_{0}\right ) - cn_{-}\left (\left.t\right \vert x_{0},t_{0}\right )\,. }$$
(7.45)

In the absence of periodic modulation, in the state of equilibrium n + = n − = 1∕2 and the average value equals zero.

Analogously, we can find the autocorrelation function

$$\displaystyle{ \left \langle \left.x(t+\tau )x(t)\right \vert x_{0},t_{0}\right \rangle =\int xyP\left (\left.x,t+\tau \right \vert y,t\right )P\left (\left.y,t\right \vert x_{0},t_{0}\right )dxdy\,, }$$
(7.46)

which we will further need to find the power spectrum. Using the expressions (7.44) for conditional probabilities, we get

$$\displaystyle\begin{array}{rcl} & & \left \langle \left.x(t+\tau )x(t)\right \vert x_{0},t_{0}\right \rangle \\ & & \quad = c^{2}n_{ +}\left (\left.t+\tau \right \vert c,t\right )n_{+}\left (\left.t\right \vert x_{0},t_{0}\right ) - c^{2}n_{ +}\left (\left.t+\tau \right \vert - c,t\right )n_{-}\left (\left.t\right \vert x_{0},t_{0}\right ) \\ & & \qquad - c^{2}n_{ -}\left (\left.t+\tau \right \vert c,t\right )n_{+}\left (\left.t\right \vert x_{0},t_{0}\right ) + c^{2}n_{ -}\left (\left.t+\tau \right \vert - c,t\right )n_{-}\left (\left.t\right \vert x_{0},t_{0}\right ) \\ & & \quad = c^{2}\{2[n_{ +}(\left.t+\tau \right \vert c,t) + n_{+}(\left.t+\tau \right \vert - c,t) - 1]n_{+}(\left.t\right \vert x_{0},t_{0}) \\ & & \qquad - 2n_{+}(\left.t+\tau \right \vert - c,t) + 1\}\,. {}\end{array}$$
(7.47)

The asymptotic limit of the autocorrelation function at t 0 → −∞ is of particular interest. In that limit, using (7.43), we get

$$\displaystyle\begin{array}{rcl} \left \langle x(t)x(t+\tau )\right \rangle & =& \mathop{\lim }\limits _{t_{0}\rightarrow -\infty }\left \langle \left.x(t)x(t+\tau )\right \vert x_{0},t_{0}\right \rangle \\ & =& c^{2}e^{-\alpha _{0}\left \vert \tau \right \vert }\left [1 -\frac{\alpha _{1}^{2}\eta _{ 0}^{2}\mathop{ \cos }\nolimits ^{2}(\omega t-\varphi )} {\alpha _{0}^{2} +\omega ^{2}} \right ] \\ & & \quad + \frac{c^{2}\alpha _{1}^{2}\eta _{0}^{2}\left \{\cos \omega \tau +\cos \left [\omega (2t+\tau ) + 2\varphi \right ]\right \}} {2\left (\alpha _{0}^{2} +\omega ^{2}\right )} \,.{}\end{array}$$
(7.48)

Due to the presence of the periodic perturbation, the autocorrelation function depends not only on the time shift τ, but also periodically on time. To calculate the characteristics of stochastic resonance we should average over the perturbation period. This procedure is equivalent to averaging over an ensemble of random initial phases of the perturbation, and it corresponds to the experimental methods of measuring statistical characteristics obtained from correlation functions. Considering t as a random variable uniformly distributed over the perturbation period \(\left [0,2\pi /\omega \right ]\), we get

$$\displaystyle\begin{array}{rcl} \left \langle x(t)x(t+\tau )\right \rangle _{t}& =& \frac{\omega } {2\pi }\int \limits _{0}^{2\pi /\omega }\left \langle x(t)x(t+\tau )\right \rangle dt \\ & =& c^{2}e^{-\alpha _{0}\left \vert \tau \right \vert }\left [1 - \frac{\alpha _{1}^{2}\eta _{0}^{2}} {2\left (\alpha _{0}^{2} +\omega ^{2}\right )}\right ] + \frac{c^{2}\alpha _{1}^{2}\eta _{0}^{2}\cos \omega \tau } {2\left (\alpha _{0}^{2} +\omega ^{2}\right )}\,.{}\end{array}$$
(7.49)

Let us recall that (see Sect. 3.2) the power spectrum (spectral density) is the Fourier transform of the autocorrelation function [18], averaged over the perturbation period

$$\displaystyle\begin{array}{rcl} \left \langle S(\varOmega )\right \rangle _{t}& =& \int _{-\infty }^{\infty }\left \langle \left \langle x(t)x(t+\tau )\right \rangle \right \rangle _{ t}e^{-i\varOmega \tau }d\tau \\ & =& \left [1 - \frac{\alpha _{1}^{2}\eta _{0}^{2}} {2\left (\alpha _{0}^{2} +\omega ^{2}\right )}\right ]\left [ \frac{2c^{2}\alpha _{0}} {\alpha _{0}^{2} +\varOmega ^{2}}\right ] \\ & & \quad + \frac{\pi c^{2}\alpha _{1}^{2}\eta _{0}^{2}} {2\left (\alpha _{0}^{2} +\omega ^{2}\right )}\left [\delta \left (\varOmega -\omega \right ) +\delta \left (\varOmega +\omega \right )\right ]\,.{}\end{array}$$
(7.50)

Further we will use the power spectrum S(Ω), defined only for positive Ω,

$$\displaystyle\begin{array}{rcl} S(\varOmega )& =& \left \langle S(\varOmega )\right \rangle _{t} + \left \langle S(-\varOmega )\right \rangle _{t} \\ & =& \left [1 - \frac{\alpha _{1}^{2}\eta _{0}^{2}} {2\left (\alpha _{0}^{2} +\omega ^{2}\right )}\right ]\left [ \frac{4c^{2}\alpha _{0}} {\alpha _{0}^{2} +\varOmega ^{2}}\right ] + \frac{\pi c^{2}\alpha _{1}^{2}\eta _{0}^{2}} {\left (\alpha _{0}^{2} +\omega ^{2}\right )}\delta (\varOmega -\omega ) \\ & =& S_{N}(\varOmega ) + \frac{\pi c^{2}\alpha _{1}^{2}\eta _{0}^{2}} {\left (\alpha _{0}^{2} +\omega ^{2}\right )}\delta (\varOmega -\omega )\,. {}\end{array}$$
(7.51)

The power spectrum naturally divides into two parts: one describing the periodic component of the output signal at the perturbation frequency (proportional to the δ-function) and the noise component S N (Ω). The noise spectrum is the product of the Lorentzian factor α 0∕(α 0 2 +Ω 2) and a correction factor describing the influence of the signal on the noise. At low signal amplitudes the correction factor is close to unity. This factor describes the pumping of energy from the noise background into the periodic component. It is interesting to note that the total power at the system output depends neither on the frequency nor on the amplitude of the signal: the contributions from the correction factor and from the periodic component exactly compensate each other, as

$$\displaystyle{ \int _{0}^{\infty }d\varOmega \frac{\alpha _{0}} {\alpha _{0}^{2} +\varOmega ^{2}} = \frac{\pi } {2}\,. }$$
(7.52)

This exact compensation is a characteristic feature of the two-state model. It follows from Parseval’s theorem: the time integral of the signal squared equals the power spectrum integrated over all frequencies. In the two-state model the time integral over any time interval T equals c 2 T and does not depend on the perturbation frequency or amplitude. Therefore, the power spectrum integrated over all frequencies must also remain constant.
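The integral (7.52) underlying this compensation argument is easy to confirm numerically; the cutoff frequency and grid below are assumptions of the sketch:

```python
import math

def lorentzian_power(alpha0, n=1_000_000, cut=5_000.0):
    """Midpoint-rule integral of alpha0 / (alpha0^2 + w^2) over [0, cut],
    approximating Eq. (7.52); the truncated tail is of order alpha0/cut."""
    h = cut / n
    total = 0.0
    for k in range(n):
        w = (k + 0.5) * h
        total += alpha0 / (alpha0 ** 2 + w ** 2) * h
    return total

# the integral equals pi/2 independently of alpha0
assert abs(lorentzian_power(1.0) - math.pi / 2) < 1e-3
assert abs(lorentzian_power(5.0) - math.pi / 2) < 1e-2
```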

Let us now return to the continuous analogue of the discrete two-state model: the symmetric double well (7.1). In the presence of periodic perturbation the potential energy of the system takes the form

$$\displaystyle{ V (x,t) = -\frac{a} {2}x^{2} + \frac{b} {4}x^{4} - Fx\cos \omega t\,. }$$
(7.53)

For further convenience we present the potential energy in the form

$$\displaystyle{ V (x,t) = V _{0}\left [-2(x/c)^{2} + (x/c)^{4}\right ] - V _{ 1}(x/c)\cos \omega t\,, }$$
(7.54)

where ± c = \(\pm \sqrt{a/b}\) are the positions of the potential minima at F = 0, V 0 = a 2∕4b is the height of the potential barrier, and V 1 = Fc is the amplitude of the modulation of the barrier height.

As was shown in the previous section, the time evolution of a particle in a potential field interacting with an equilibrium thermal reservoir can be described by the Langevin equation (7.34). We will further consider the so-called over-damped case: the case of strong friction, when the inertial (mass) term in the equation of motion can be neglected. In this approximation, assuming the coordinate independence of the friction coefficient (and therefore of the Langevin force intensity), the Langevin equation can be presented in the form

$$\displaystyle{ \dot{x}= -\frac{\partial V (x,t)} {\partial x} + \sqrt{D}\varGamma (t)\,. }$$
(7.55)

The statistical properties of the random force Γ(t) are determined by the relation (7.31).
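A minimal Euler–Maruyama integration of (7.55), with illustrative assumed parameter values, shows the mechanism directly: at weak noise the particle stays trapped in one well, while at moderate noise interwell transitions become frequent:

```python
import math
import random

def count_switches(D, F=0.1, omega=0.01, a=1.0, b=1.0,
                   T=2000.0, dt=0.01, seed=1):
    """Euler-Maruyama integration of the overdamped Langevin equation (7.55)
    in the modulated double well (7.54); counts sign changes of x, i.e.
    interwell transitions. All parameter values are illustrative."""
    rng = random.Random(seed)
    x = math.sqrt(a / b)                      # start in the right-hand well
    switches = 0
    for k in range(int(T / dt)):
        t = k * dt
        drift = a * x - b * x ** 3 + F * math.cos(omega * t)   # -dV/dx
        x_new = x + drift * dt + math.sqrt(D * dt) * rng.gauss(0.0, 1.0)
        if x_new * x < 0:                     # crossed the barrier at x = 0
            switches += 1
        x = x_new
    return switches

# weak noise (D << V0 = 0.25): trapped; moderate noise: many transitions
assert count_switches(D=0.02) <= 1
assert count_switches(D=0.25) > 10
```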

As we have already mentioned, in the absence of modulation (F = 0) the average rate of the over-barrier transitions—the Kramers rate—is determined by the relation

$$\displaystyle{ W_{k} = \frac{\left [\left \vert V ''(0)\right \vert V ''(c)\right ]^{1/2}} {2\pi } e^{-V _{0}/D} = \frac{a} {\sqrt{2}\pi } e^{-V _{0}/D}\,. }$$
(7.56)

As the Kramers rate depends only on the barrier height and the potential curvature at its extrema, the exact form of the potential is irrelevant. Therefore, the results obtained below are qualitatively applicable to a wide class of bistable systems.

Using the expression (7.40) for transition rates in the presence of periodic perturbation, we obtain

$$\displaystyle{ W_{\pm }(t) = \frac{a} {\sqrt{2}\pi }\exp \left [-\left (V _{0} \pm V _{1}\cos \omega t\right )/D\right ]\,. }$$
(7.57)

We recall that the Kramers rate (7.56) is obtained under the assumption that the particle is in equilibrium with the thermal reservoir. For that condition to hold in the presence of a time-dependent perturbation, the perturbation frequency must be much smaller than the characteristic rate at which thermal equilibrium is established in the well. The latter is determined by the quantity V ″(±c) = 2a. Therefore, the applicability condition of the adiabatic approximation is given by the inequality ω ≪ 2a.

As one of the main characteristics of stochastic resonance we will use the signal-to-noise ratio (SNR), by which we understand the ratio of the spectral densities of signal and noise at the signal frequency, i.e.

$$\displaystyle{ \mathrm{SNR} = \left [\mathop{\lim }\limits _{\varDelta \varOmega \rightarrow 0}\int _{\omega -\varDelta \varOmega }^{\omega +\varDelta \varOmega }S(\varOmega )d\varOmega \right ]/S_{ N}(\omega ) = \frac{S(\omega )} {S_{N}(\omega )}\,. }$$
(7.58)

From the relation (7.51), neglecting the influence of the signal on the background of noise, we get

$$\displaystyle{ \mathrm{SNR} = \frac{\pi } {4} \frac{\alpha _{1}^{2}} {\alpha _{0}} \eta _{0}^{2}\,. }$$
(7.59)

The coefficients α 0 and α 1 can be found with the help of the relations (7.42), in which

$$\displaystyle{ f(\mu +\eta _{0}\cos \omega t) = \frac{a} {\sqrt{2}\pi }e^{-(\mu +\eta _{0}\cos \omega t)}\,, }$$
(7.60)

where μ = V 0∕D and η 0 = V 1∕D = Fc∕D. As a result, in the considered approximation we finally get for the SNR

$$\displaystyle{ \mathrm{SNR} \approx \frac{a} {\sqrt{2}}\left (\frac{Fc} {D} \right )^{2}e^{-V _{0}/D}\,. }$$
(7.61)

For D ≪ V 0 the exponential tends to zero faster than the prefactor (Fc∕D)2 grows, and SNR → 0. For large D, the growth of the denominator again drives the SNR to zero. In the intermediate region, at D ∼ V 0∕2, the approximate expression (7.61) for the SNR has a unique maximum (see Fig. 7.2).
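The position of this maximum is easy to confirm by scanning the approximate expression (7.61) over the noise level; the parameter values are illustrative assumptions:

```python
import math

def snr(D, a=1.0, F=0.1, c=1.0, V0=0.25):
    """Adiabatic two-state signal-to-noise ratio, Eq. (7.61)."""
    return (a / math.sqrt(2)) * (F * c / D) ** 2 * math.exp(-V0 / D)

# scan a grid of noise levels; analytically d(SNR)/dD = 0 gives D = V0/2
grid = [0.001 * k for k in range(1, 1000)]
D_max = max(grid, key=snr)
assert abs(D_max - 0.125) < 0.002          # V0/2 = 0.125
```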

Fig. 7.2
figure 2

Dependence of the signal-to-noise ratio (SNR) on the noise level D in the two-state model

The two-state model also allows us to find the contributions of higher orders in the parameter η 0 = Fc∕D, i.e. to calculate the higher harmonics of stochastic resonance. The power spectrum \(S\left (\varOmega \right )\), taking these contributions into account, can be represented as a superposition of the noise background S N (Ω) and δ-peaks centered at Ω = (2n + 1)ω. The generation of only odd harmonics of the input signal frequency is a consequence of the symmetry of the considered non-linear system [19]. We give the expressions for the SNR at the third and fifth harmonics [20]:

$$\displaystyle\begin{array}{rcl} \mathrm{SNR}_{3}& =& \frac{\pi } {72}\omega z\left (\frac{Fc} {D} \right )^{6}\frac{z^{2} + 1/16} {4z^{2} + 1} \,, \\ \mathrm{SNR}_{5}& =& \frac{\pi } {10^{2} \cdot 2^{13}}\omega z\left (\frac{Fc} {D} \right )^{10}\frac{\left (64/3z^{2} - 1\right )^{2} + \left (14z\right )^{2}} {\left (4z^{2} + 1\right )\left (4z^{2} + 9\right )} \,,{}\end{array}$$
(7.62)

where z ≡ W k∕ω. The maxima of these curves are located at the points D = V 0∕(2k) (k is the harmonic number).
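Under the same illustrative assumptions one can evaluate SNR₃ from (7.62) and confirm that it also passes through a single maximum, at a smaller noise level than the first harmonic (the precise location depends on the driving frequency ω through z = W k∕ω):

```python
import numpy as np

# Evaluate SNR_3 from Eq. (7.62) with z = W_k/omega and the Kramers rate
# W_k = (a/(sqrt(2)*pi)) * exp(-V0/D).  All parameter values are illustrative.
a, V0, Fc, omega = 1.0, 1.0, 0.1, 0.01

def Wk(D):
    return a / (np.sqrt(2.0) * np.pi) * np.exp(-V0 / D)

def snr3(D):
    z = Wk(D) / omega
    return (np.pi / 72.0) * omega * z * (Fc / D) ** 6 \
        * (z ** 2 + 1.0 / 16.0) / (4.0 * z ** 2 + 1.0)

D = np.linspace(0.05, 1.5, 20000)
D3 = D[np.argmax(snr3(D))]
print(D3)   # interior maximum, below the first-harmonic optimum V0/2
```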

Besides the SNR, the average value of the coordinate x (more precisely, the asymptotic limit of that average as t 0 → −∞), defined by relation (7.45), is also of interest. Using (7.43), we get

$$\displaystyle{ \left \langle x(t)\right \rangle = A(D)\cos \left [\omega t +\phi (D)\right ]\,, }$$
(7.63)

where the amplitude A(D) and phase shift ϕ(D) are determined by the expressions

$$\displaystyle\begin{array}{rcl} A(D)& =& \frac{Fc^{2}} {D} \frac{2W_{k}} {\left (4W_{k}^{2} +\omega ^{2}\right )^{1/2}}\,,{}\end{array}$$
(7.64)
$$\displaystyle\begin{array}{rcl} \phi (D)& =& -\arctan \frac{\omega } {2W_{k}}\,.{}\end{array}$$
(7.65)

From the amplitude of the response at the system output we determine the power amplification coefficient η,

$$\displaystyle{ \eta = \frac{A^{2}(D)} {F^{2}} = \frac{4W_{k}^{2}c^{4}} {D^{2}\left (4W_{k}^{2} +\omega ^{2}\right )}\,. }$$
(7.66)

From (7.56) and (7.66) it follows that the amplification coefficient η, as a function of the noise intensity D, has a unique maximum.
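A quick numerical check of this statement: the sketch below combines the Kramers rate W k = (a∕(√2 π)) e^{−V 0∕D} with (7.66) and locates the maximum of η(D). All parameter values are illustrative assumptions.

```python
import numpy as np

# Combine the Kramers rate with Eq. (7.66) and locate the maximum of the
# power amplification coefficient eta(D).  Parameter values are illustrative.
a, V0, c, omega = 1.0, 1.0, 1.0, 0.01

def Wk(D):
    return a / (np.sqrt(2.0) * np.pi) * np.exp(-V0 / D)

def eta(D):
    w = Wk(D)
    return 4.0 * w ** 2 * c ** 4 / (D ** 2 * (4.0 * w ** 2 + omega ** 2))

D = np.linspace(0.05, 3.0, 20000)
D_opt = D[np.argmax(eta(D))]
print(D_opt)   # unique interior maximum, as stated in the text
```

At small D the exponentially small W k suppresses η, while at large D the 1∕D² prefactor takes over, so the maximum lies in between.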

7.4 Stochastic Resonance in Chaotic Systems

The coexistence of several attractors is typical for the phase space of chaotic systems. These attractors undergo an infinite number of bifurcations as the system parameters vary. As a result, such systems are very sensitive to external perturbations, which may generate a series of interesting effects connected to the interaction of the attractors, including noise-induced transitions. Therefore, deterministic chaotic systems open new possibilities for setting up the stochastic resonance problem. In particular, one may consider the interaction of two chaotic attractors under the influence of external noise and/or the variation of some control parameter. This interaction is also characterized by a switching frequency, which depends on the noise intensity and the parameter value. We can therefore expect the appearance of resonance effects and, as a consequence, the possibility of observing a peculiar kind of stochastic resonance in the presence of additional modulation.

Fig. 7.3
figure 3

Stationary probability density P(x) for the system (7.67) for two values of the parameter a: (a) a < a∗, (b) a > a∗ [21]

Following [21], let us consider, as an example, the discrete system

$$\displaystyle{ x_{n+1} = (a - 1)x_{n} - ax_{n}^{3}\,. }$$
(7.67)

For 0 < a < 2 there is a unique stable fixed point at x 1 = 0. At a = 2 a bifurcation takes place, as a result of which in the region 2 < a < 3 there are two stable fixed points x 2, 3 = ±c, \(c = \left [\left (a - 2\right )/a\right ]^{1/2}\), and one unstable point at the origin. In the region \(3\leqslant a < 3.3\) a cascade of period-doubling bifurcations takes place, after which, for \(a\geqslant 3.3\), the mapping (7.67) demonstrates chaotic behavior.
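These regimes are easy to verify by direct iteration; the sketch below uses nothing beyond the map itself.

```python
# Check the quoted regimes of the map x_{n+1} = (a-1)x_n - a*x_n^3 directly.
def iterate(a, x0, n):
    x = x0
    for _ in range(n):
        x = (a - 1.0) * x - a * x ** 3
    return x

# 2 < a < 3: the orbit settles onto the stable fixed point +c.
c = ((2.5 - 2.0) / 2.5) ** 0.5            # c = sqrt((a-2)/a) ~ 0.447
print(abs(iterate(2.5, 0.3, 100) - c))    # ~ 0

# 3.3 < a <= 3.6: chaotic, but confined to the x > 0 attractor.
x = 0.3
for _ in range(10000):
    x = (3.4 - 1.0) * x - 3.4 * x ** 3
    assert x > 0.0                        # never crosses the separatrix x = 0
```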

If \(3.3 < a\leqslant 3.6\), there are two disconnected symmetric attractors whose basins of attraction are separated by the separatrix x = 0. The stationary probability density P(x) for this case is presented in Fig. 7.3a. At \(a\cong a{\ast} = 3.598\) the attractors merge and a new chaotic attractor appears, with the probability density shown in Fig. 7.3b for a > a∗. The merging bifurcation is accompanied by the alternation phenomenon of the chaos–chaos type [22]: the trajectory lives for a long time in the basin of one attractor, and then makes a random transition into the basin of the other. The average residence time τ 1 on each of the attractors obeys the universal critical relation

$$\displaystyle{ \tau _{1} \sim \left (a - a{\ast}\right )^{-\gamma };\quad \gamma = 0.5\,. }$$
(7.68)

The alternation effect of the chaos–chaos type can also be achieved as a result of the action of additive noise. In this case the dependence (7.68) is preserved, but the critical index γ becomes a function of the noise intensity, γ = γ(D).

Let us introduce periodic modulation and additive noise into the mapping (7.67)

$$\displaystyle{ x_{n+1} = (a - 1)x_{n} - ax_{n}^{3} +\varepsilon \sin (2\pi f_{ 0}n) +\xi (n)\,, }$$
(7.69)

where ɛ and f 0 are the amplitude and frequency of the modulation, and the statistical properties of the noise are the following:

$$\displaystyle{ \left \langle \xi (n)\right \rangle = 0,\quad \left \langle \xi (n)\xi (n + k)\right \rangle = 2D\delta (k)\,. }$$
(7.70)

Let us study the system (7.69) in the two-state approximation, replacing the coordinate x(n) by + 1 if x(n) > 0 and by − 1 if x(n) < 0. In the approximation \(\dot{x}= x_{n+1} - x_{n}\), we can transform the discrete model (7.67) into the differential equation

$$\displaystyle{ \dot{x}= (a - 2)x - ax^{3} }$$
(7.71)

and introduce the potential U(x):

$$\displaystyle{ U(x) = -\frac{a - 2} {2} x^{2} + \frac{a} {4}x^{4}\,. }$$
(7.72)

This allows us to determine the Kramers rate

$$\displaystyle{ W_{k} = \frac{a - 2} {\pi \sqrt{2}} \exp \left [-\frac{(a - 2)^{2}} {4aD} \right ] }$$
(7.73)

and to obtain the expression for SNR in an adiabatic approximation

$$\displaystyle{ \mathrm{SNR} = \frac{(a - 2)^{2}\varepsilon ^{2}} {aD^{2}} \exp \left [-\frac{(a - 2)^{2}} {4aD} \right ]\,. }$$
(7.74)

Let us consider the dynamics of (7.69) at a = 3.4, which corresponds to the coexistence of two disconnected attractors. The addition of noise (at ɛ = 0) smoothes the probability density and induces transitions between the attractors. The basic characteristics of the dynamics in the absence of modulation (the probability density P(x), the power spectrum S( f), the residence time distribution function p(n) for an attractor, and the average frequency f s of transitions between the attractors) as functions of the noise amplitude D are presented in Fig. 7.4. They reflect the typical features of a bistable system in the presence of noise.
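A direct simulation illustrates these noise-induced transitions. The sketch below iterates (7.69) with ɛ = 0 and counts switches of sign(x); the noise level D and the clipping guard (the bare cubic map diverges once |x| exceeds roughly 1, which rare noise kicks can cause) are pragmatic choices of this illustration, not part of the model.

```python
import numpy as np

# Iterate (7.69) with eps = 0 and count noise-induced switches of sign(x).
# D and the clipping guard are pragmatic choices for this illustration.
rng = np.random.default_rng(0)
a, D, n_steps = 3.4, 0.01, 50000
x, switches, prev_sign = 0.5, 0, True
for _ in range(n_steps):
    xi = rng.normal(0.0, np.sqrt(2.0 * D))    # <xi(n)xi(n+k)> = 2D delta(k)
    x = (a - 1.0) * x - a * x ** 3 + xi
    x = float(np.clip(x, -1.0, 1.0))          # guard: bare map diverges for |x| > ~1
    sign = x > 0.0
    if sign != prev_sign:
        switches += 1
    prev_sign = sign
print(switches)   # many transitions between the two attractors
```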

Fig. 7.4
figure 4

Basic dynamical characteristics of (7.69) in the absence of modulation: (a) probability density P(x), (b) power spectrum S( f), (c) distribution function p(n) of the residence times on an attractor, (d) average frequency f s of transitions between the attractors, as functions of the noise amplitude D; a = 3.4 [21]

Fig. 7.5
figure 5

Results of the numerical analysis of the mapping (7.69) with periodic perturbation included, with ɛ = 0.05 and f 0 = 0.125: (a) power spectrum, (b) residence time distribution function, (c) signal-to-noise ratio [21]

Fig. 7.6
figure 6

Results of the SNR(a) calculation for the system (7.69) in the absence of noise (ɛ = 0.05, f 0 = 0.125, D = 0): (a) in the two-state approximation, (b) exact dynamics [21]

Figure 7.5 presents the results of the numerical analysis of the mapping (7.69) with the periodic perturbation included, with ɛ = 0.05 and f 0 = 0.125. A sharp peak appears in the power spectrum at the frequency f 0. Peaks also appear in the residence time distribution function on top of the decaying background; they are centered at times equal to odd multiples of the perturbation half-period. And finally, SNR(D) demonstrates a clear maximum at a certain noise intensity, in agreement with the theoretical prediction (7.74). It may be said that replacing the potential wells by isolated chaotic attractors preserves all the features of stochastic resonance.
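The spectral peak can be reproduced with a short simulation: iterate (7.69), apply the two-state filter sign(x(n)), and average periodograms of the resulting telegraph signal. The values of ɛ and D and the clipping guard are illustrative choices for this sketch, not those of [21].

```python
import numpy as np

# Two-state filtering of (7.69) and an averaged periodogram of the resulting
# telegraph signal.  eps, D and the clipping guard are illustrative choices.
rng = np.random.default_rng(1)
a, eps, f0, D = 3.4, 0.1, 0.125, 0.072
n_seg, seg_len = 8, 8192
spec = np.zeros(seg_len // 2)
x, n = 0.5, 0
for _ in range(n_seg):
    s = np.empty(seg_len)
    for i in range(seg_len):
        xi = rng.normal(0.0, np.sqrt(2.0 * D))
        x = (a - 1.0) * x - a * x ** 3 \
            + eps * np.sin(2.0 * np.pi * f0 * n) + xi
        x = float(np.clip(x, -1.0, 1.0))   # guard against divergence of the map
        s[i] = 1.0 if x > 0.0 else -1.0    # two-state filter
        n += 1
    spec += np.abs(np.fft.rfft(s))[: seg_len // 2] ** 2
k0 = int(f0 * seg_len)                     # bin of the driving frequency
background = np.mean(np.r_[spec[k0 - 20:k0 - 2], spec[k0 + 3:k0 + 21]])
print(spec[k0] / background)               # peak at f0 well above the background
```

The driving period of 8 steps aligns f 0 exactly with an FFT bin, so the δ-peak of the signal stands out against the smooth noise background.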

Now we turn to the case of the absence of external noise (D = 0). As we have already said (see Fig. 7.3), at a > a∗ random transitions between the attractors occur due to the internal deterministic dynamics of the system. We can assume that in this case the synchronization between those transitions and the periodic perturbation also leads to an analogue of the usual stochastic resonance with external noise. A numerical calculation of the SNR(a) dependence confirms this assumption. Figure 7.6 shows the SNR(a) dependencies in the two-state approximation (Fig. 7.6a) and for the full dynamics (Fig. 7.6b) described by (7.69). In both cases we observe clear maxima of the SNR curves at the values of the parameter a corresponding to frequency ratios f s : f 0 = 1 : 3, 1 : 1, 4 : 3.

This result is understandable from the point of view of the general stochastic resonance philosophy. As we have already mentioned, stochastic resonance is a generic phenomenon for non-linear systems with several time scales. The dependence of one of these scales on an external parameter allows us to ensure certain resonance conditions. In the original setup of a bistable system perturbed by a periodic signal and noise, one exploits the dependence of the Kramers rate of transitions between the potential minima on the noise amplitude. To obtain analogous results in a chaotic system with several attractors in the chaos–chaos alternation regime, one can use the dependence of the average frequency of transitions between the attractors on the control parameter a.

7.5 Stochastic Resonance and Global Change in the Earth’s Climate

Now we discuss in more detail how to use the stochastic resonance effect for a qualitative explanation of the alternation of ice ages on Earth [14].

The chronology of the ice ages (the global volume of ice on Earth) can be reconstructed from the ratio of the isotopes 16O and 18O in organic sediments [23]. Almost all of the oxygen in water consists of the 16O isotope, and only fractions of a percent belong to the heavier 18O. As the evaporation of the heavier isotope is less probable, precipitation on land (determined mainly by evaporation from the oceans) is depleted in 18O. During ice ages the continental glacial cover grows at the expense of the ocean (in the last ice age, 18,000 years ago, the ocean level was almost 100 m lower than at present, and up to 5 % of the total water volume was on land in the form of ice), so the ocean waters become enriched in the 18O isotope. The ratio that interests us can be determined by analyzing the isotope composition of the calcium carbonate CaCO3 of which the shells of sea animals are made. These shells accumulate on the ocean floor in the form of sedimentary layers. The larger the 18O∕16O ratio in those sediments, the larger the continental ice volume was at the moment of shell formation.

Fig. 7.7
figure 7

The power spectrum of climatic changes for the last 700,000 years [2]

The time dependence of the isotope composition [2], constructed on the basis of these measurements, clearly demonstrates the periodicity of the variation of the global ice quantity on the planet: the ice ages recur every 100,000 years. Of course, this dependence is non-trivial: as the power spectrum in Fig. 7.7 shows, the dominating 100,000-year cycle is superposed on additional smaller oscillations. What external effects could produce such a periodic dependence?

In the first half of the twentieth century the Yugoslavian astronomer M. Milankovich developed a theory connecting the global changes of the Earth's climate to variations in insolation (the quantity of solar energy reaching the Earth). Even if we assume that the solar radiation is constant, the global insolation still depends on the geometrical factors describing the Earth's orbit. To describe the dynamics of the insolation, one should study the time dependence of three parameters: the tilt of the Earth's axis relative to the orbital plane, the orbital eccentricity, and the precession of the Earth's axis. Gravitational interaction with the Moon and the other planets makes these parameters time dependent. Measurements and calculations have shown that during the last million years these dependencies had an almost periodic character. The tilt of the Earth's axis changes between 22.1° and 24.5° with a period of about 40,000 years (at present it is 23.5°). The eccentricity of the Earth's orbit oscillates between 0.005 and 0.06 (it is 0.017 at present) with a period of 100,000 years (the very time scale that interests us). And finally, the period of the precession of the Earth's axis is 26,000 years. What is the role of these factors in the dynamics of the Earth's climate? An increase in the axial tilt increases the amplitude of the seasonal oscillations. The precession weakly affects the insolation and mostly determines the time of perihelion passage; the latter smoothes the seasonal contrasts in one hemisphere and amplifies them in the other.
Therefore, the first two factors do not affect the total insolation, but only redistribute it over latitudes and seasons. Only the variation of the eccentricity changes the total annual insolation. However, the insolation oscillations connected with that effect do not exceed 0.3 %, which leads to average temperature changes of not more than a few tenths of a degree, while during an ice age the average annual temperature decreases by about 10 °C. So how can variations in the parameters of the Earth's orbit cause global climate changes? The answer is given by the following statement: the simultaneous account of a small external periodic force with a period of 10⁵ years (modeling the oscillations of the eccentricity of the Earth's orbit) and of random noise (modeling climate fluctuations on shorter time scales, connected with random processes in the atmosphere and in the oceanic currents) in the dynamics of climate changes allows us to reproduce satisfactorily the observed periodicity of the ice ages.

To prove the above statement, we consider a simple model that allows us to account for the influence of the insolation variation on the average temperature T of the Earth. The model is the heat-balance equation for the radiation R in coming to the Earth and the radiation R out emitted by it:

$$\displaystyle{ C\frac{dT} {dt} = R_{\mathrm{in}}(T) - R_{\mathrm{out}}(T)\,, }$$
(7.75)

where C is the Earth’s thermal capacity. For the quantities R in and R out we use the following parametrization

$$\displaystyle\begin{array}{rcl} R_{\mathrm{in}}(T)& =& Q\mu \,, \\ R_{\mathrm{out}}(T)& =& \alpha (T)Q\mu +\varepsilon (T)\,.{}\end{array}$$
(7.76)

Here Q is the solar radiation flux reaching the Earth, averaged over a long time period; μ is a dimensionless parameter allowing us to introduce an explicit time variation of the incident flow; α(T) is the average albedo of the Earth's surface (the albedo is the photometric quantity characterizing the ability of a matte surface to reflect incident radiation, i.e. the ratio of the radiation reflected by the surface to the incident radiation); ɛ(T) is the long-wave radiation of the heated Earth's surface \(\left (\varepsilon (T) \sim T^{4}\right )\).

Let us rewrite (7.75) in the form

$$\displaystyle{ \frac{dT} {dt} = F(T);\quad F(T) \equiv \left (R_{\mathrm{in}}(T) - R_{\mathrm{out}}(T)\right )/C\,. }$$
(7.77)

Solutions of the equation F(T) = 0 represent physically observable equilibrium states of the considered model (7.75). They are usually called “climates” [2]. The properties of climatic stability are determined by the pseudo-potential Φ,

$$\displaystyle{ \varPhi = -\int F(T)dT\,. }$$
(7.78)

It is evident that

$$\displaystyle{ F(T) = - \frac{\partial \varPhi } {\partial T} }$$
(7.79)

and therefore the extrema of the function Φ correspond to the above-introduced notion of climate. A climate is stable, and hence physically observable, if it corresponds to a minimum of Φ. The model (7.75), or (7.77), must reproduce the two following basic observable facts of the Earth's climate dynamics:

  1. Local climatic changes are limited by a temperature scale of the order of a few degrees.

  2. On time scales of the order of 10⁵ years, substantially larger average temperature changes occur (of the order of 10 °C), resulting in drastic changes of the planet's climate.

For a description of such dynamics it is natural to use a pseudo-potential Φ with two stable climates T 1 and T 3 (the minima), separated by a temperature interval of the order of 10 °C, and one unstable climate T 2 (the maximum) between them (see Fig. 7.8). One of the minima \(\left (T_{1}\right )\) corresponds to the ice-age climate, the second \(\left (T_{3}\right )\) corresponds to the present climate. The appearance of an unstable climate \(\left (T_{2}\right )\) in the intermediate region is easily understood from simple physical considerations. Let the unstable state correspond to some quantity of planetary ice cover. If, due to local fluctuations, the temperature decreases, the ice surface will grow, which will increase the Earth's local albedo and lead to a further temperature decrease. The analogous runaway also takes place for a local temperature increase in the vicinity of the same point.

Let us now introduce into Eq. (7.75) the time dependent factor μ(t), accounting for the variations of insolation, connected to the oscillations of the eccentricity of the Earth’s orbit

$$\displaystyle{ \mu (t) = 1 + 0.0005\cos \omega t;\quad \omega = \frac{2\pi } {10^{5}\ \text{years}}\,. }$$
(7.80)

The transition F(T) → F(T, t) corresponds to the introduction of a time-dependent potential Φ(T, t). A time dependence (7.80) of such a low amplitude leads only to small temperature oscillations in the vicinity of the states (climates) T 1 and T 3, and it cannot explain the alternation of the ice ages.

Fig. 7.8
figure 8

Pseudo-potential Φ with two stable climates (T 1, T 3) and one unstable climate (T 2)

Let us now take into account the short-time-scale climate fluctuations by including white noise in Eq. (7.77), transforming it into the stochastic differential equation

$$\displaystyle{ \frac{dT} {dt} = F(T,t) +\sigma \xi (t)\,. }$$
(7.81)

In this formulation, the problem of the Earth's climate changes coincides exactly with the problem considered above of particle dynamics in a symmetric double well under the simultaneous influence of a weak periodic perturbation and noise. Figure 7.9 shows the numerical solution of Eq. (7.81) with the parameters T 3 − T 1 = 10 K and a white-noise dispersion of 0.15 K²/year. The figure clearly shows the stochastic resonance effect, which manifests itself in the periodic transitions T 1 ↔ T 3, accompanied by small oscillations around the stable states.
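A minimal numerical sketch of (7.81) reproduces this behavior. The cubic drift below is a generic double-well choice with minima at T 1 = 280 K and T 3 = 290 K; the relaxation constant lam and the forcing amplitude A are illustrative assumptions chosen so that noise-induced transitions occur on the 10⁵-year scale, not values taken from [2].

```python
import numpy as np

# Minimal sketch of Eq. (7.81): a cubic drift with stable climates T1, T3 and
# unstable climate T2, a weak 10^5-year forcing, and white noise of dispersion
# 0.15 K^2/year.  The drift shape and the constants lam and A are assumptions.
rng = np.random.default_rng(2)
T1, T2, T3 = 280.0, 285.0, 290.0
lam, A, period = 0.003, 0.015, 1.0e5        # drift scale, forcing (K/yr), yr
dt, n_steps = 1.0, 1_000_000
noise = rng.normal(0.0, np.sqrt(0.15 * dt), n_steps)

T, cold, switches = T1, True, 0
for i in range(n_steps):
    F = -lam * (T - T1) * (T - T2) * (T - T3) \
        + A * np.cos(2.0 * np.pi * i * dt / period)
    T += F * dt + noise[i]                  # Euler-Maruyama step
    if cold and T > T3 - 1.0:               # reached the warm climate
        cold, switches = False, switches + 1
    elif not cold and T < T1 + 1.0:         # back to the ice-age climate
        cold, switches = True, switches + 1
print(switches)   # repeated T1 <-> T3 transitions over 10^6 years
```

Small fluctuations stay within a fraction of a kelvin of each stable climate, while the noise occasionally carries the system over the unstable climate T 2, as in Fig. 7.9.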

Fig. 7.9
figure 9

Results of the numerical solution of the stochastic differential equation (7.81) [2]