
The synchronization of stable oscillations is a well-known non-linear phenomenon, frequently found in nature and widely used in technology [15]. By synchronization one usually understands the ability of coupled oscillators to switch, as the coupling constant increases, from an independent oscillation regime characterized by beats to a regime of stable coupled oscillations with identical or rationally related frequencies.

The statement of the problem of chaotic oscillation synchronization may appear paradoxical when contrasted with stable oscillations. Two identical autonomous chaotic systems with almost the same initial conditions diverge exponentially fast in phase space. This is the main difficulty, which at first sight makes it impossible to create synchronized chaotic systems that will function in reality. Nevertheless, there are several reasons that make the realization of chaotic synchronization a very promising goal.

The noise-like behavior of chaotic systems suggests that they can be useful for secure communications. Even a fleeting glance at the Fourier spectrum of a chaotic system confirms this: no dominating peaks, no dominating frequencies, just a broadband spectrum. Any attempt to use a chaotic signal for communication purposes requires the recipient to have a duplicate of the signal used in the transmitter (i.e., the synchronized signal). In practice, synchronization is needed for many communication systems, not just chaotic ones. Unfortunately, existing synchronization methods are not suitable for chaotic systems, and so this purpose requires the development of new ones.

Chaos is widely used in cybernetic, synergetic, and biological applications [57]. If we have a system composed of several chaotic subsystems, then it is clear that their efficient joint functioning is possible only after the synchronization problem is solved.

In spatially extended systems, we often face the transition from spatially homogeneous motion to motion varying in space (including chaotic variation). For example, in the Belousov–Zhabotinsky reaction, the dynamics can be chaotic but spatially homogeneous. This means that different spatial parts are synchronized with each other, i.e., they perform the same motions at the same moment of time, even if those motions are chaotic. Under other conditions, however, the homogeneity loses stability and the system becomes inhomogeneous. Such spatial homogeneity ↔ inhomogeneity transitions are typical for extended systems, and synchronization must play a key role there.

The interest in the chaotic synchronization problem goes far beyond the limits of the natural sciences. It seems natural, for instance, that the efficiency of an advertisement is determined by the ability of the objects of advertising to synchronize. The same can also be said about the unified perception of mass culture.

6.1 Statement of the Problem

The first works on synchronization of coupled chaotic systems were written by Yamada and Fujisaka [8]. They used local analysis (special Lyapunov exponents) to investigate changes in the dynamical systems as the coupling constant increased. Afraimovich et al. [9] introduced the basic notions now used in the description of the chaotic synchronization process. A fundamentally important role in the development of chaotic synchronization theory was played by the paper [10], where a new geometrical point of view on the synchronization phenomenon was developed.

Let us formulate the synchronization problem for a dynamical system described by a system of ordinary differential equations [10]. A generalization for the case of mappings requires only minimal changes. Consider an n-dimensional dynamical system

$$\displaystyle{ \dot{u} = f(u)\,. }$$
(6.1)

Let us divide the system arbitrarily into two subsystems u = (v, w)

$$\displaystyle{\dot{v} = g(v,w)\,,}$$
$$\displaystyle{ \dot{w} = h(v,w)\,, }$$
(6.2)

where

$$\displaystyle\begin{array}{rcl} v& =& \left (u_{1}\ldots u_{m}\right );\quad w = \left (u_{m+1}\ldots u_{n}\right )\,, \\ g& =& \left (\,f_{1}(u)\ldots f_{m}(u)\right );\quad h = \left (\,f_{m+1}(u)\ldots f_{n}(u)\right )\,.{}\end{array}$$
(6.3)

Now we create a new subsystem w′, identical to w, make the substitution v′ → v in the function h, and append to (6.2) the equation for the new subsystem

$$\displaystyle\begin{array}{rcl} \dot{v}& =& g(v,w)\,, \\ \dot{w}& =& h(v,w)\,, \\ \dot{w}'& =& h(v,w')\,.{}\end{array}$$
(6.4)

The coordinates \(v = \left (v_{1}\ldots v_{m}\right )\) are called the forcing variables, and \(w' = \left (w'_{m+1}\ldots w'_{n}\right )\) the forced variables. Consider the difference Δ w = w′ − w. The subsystem components w and w′ will be considered synchronized if Δ w → 0 as t → ∞. In the limit Δ w → 0 the equation for the variation Δ w ≡ ξ reads:

$$\displaystyle{ \dot{\xi }_{i} = \left [D_{w}h(v(t),w(t))\right ]_{ij}\xi _{j}\,, }$$
(6.5)

where D w h is the Jacobian of the w subsystem with respect to the variable w only. It is clear that if ξ(t) → 0 as t → ∞, then the trajectories of one subsystem converge to those of the other. In other words, the subsystems are synchronized. The necessary condition for this subsystem synchronization is the negativity of the Lyapunov exponents of the equation system (6.5); it can be shown [11] that these are negative exactly when the conditional Lyapunov exponents of the subsystem w are negative. This condition is necessary but not sufficient for synchronization. One should separately consider the question of the set of initial conditions w′ that can be synchronized with w.

6.2 Geometry and Dynamics of the Synchronization Process

Let us begin the description of the synchronization process with the example of the well-known Lorenz system; general cases and types of synchronization will be considered below. Assume that we have two identical chaotic Lorenz systems, already considered in the previous chapter. Can we synchronize these two chaotic systems by transmitting some signal from the first system to the second one? Let this signal be the x component of the first Lorenz system. Everywhere in the second system, we replace the x component with the signal from the first system. Such an operation is commonly called a complete replacement [12]. Thus, we get a system of five coupled equations:

$$\displaystyle\begin{array}{rcl} \dot{x}_{1}& =& \sigma (\,y_{1} - x_{1})\,, \\ \dot{y}_{1}& =& -x_{1}z_{1} + rx_{1} - y_{1},\quad \dot{y} _{2} = -x_{1}z_{2} + rx_{1} - y_{2}\,, \\ \dot{z}_{1}& =& x_{1}y_{1} - bz_{1},\quad \dot{z} _{2} = x_{1}y_{2} - bz_{2}\,. {}\end{array}$$
(6.6)

The variable x 1 can be considered the driving force for the second system. If we start (6.6) with arbitrary initial conditions, then, analyzing the numerical solution of the system, we will see that y 2 converges to y 1, and z 2 to z 1, after several oscillations, so that in the long-time asymptotics y 2 = y 1,  z 2 = z 1 (see Fig. 6.1). Hence we have two synchronized chaotic systems. This situation is usually called identical synchronization, since both subsystems are identical and their components become equal.
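This convergence is easy to reproduce numerically. The following sketch (assuming the classic parameter values σ = 10, r = 28, b = 8/3, which the text does not fix) integrates the five equations (6.6) with a standard fourth-order Runge–Kutta step and checks that the response variables lock onto the driving ones:

```python
import numpy as np

# Classic Lorenz parameters (an assumption; the text does not fix them)
SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0

def rhs(u):
    """Right-hand side of the five coupled equations (6.6):
    drive (x1, y1, z1) and x1-driven response (y2, z2)."""
    x1, y1, z1, y2, z2 = u
    return np.array([
        SIGMA * (y1 - x1),
        -x1 * z1 + R * x1 - y1,
        x1 * y1 - B * z1,
        -x1 * z2 + R * x1 - y2,   # x2 is replaced by the driving signal x1
        x1 * y2 - B * z2,
    ])

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.array([1.0, 1.0, 1.0, -5.0, 20.0])   # response starts far away
dt = 0.002
for _ in range(int(30.0 / dt)):
    u = rk4_step(u, dt)
x1, y1, z1, y2, z2 = u
print(abs(y2 - y1), abs(z2 - z1))   # both differences are negligible
```

Whatever initial conditions are chosen for (y 2, z 2), the differences decay to numerical zero after a short transient, as in Fig. 6.1.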

Fig. 6.1

Time dependence of the z(t) coordinate for the driving (dashed line) and the driven (solid line) Lorenz systems [13]

The equations y 1 = y 2 and z 1 = z 2 determine a hyperplane in the original five-dimensional phase space \(\left (x_{2} \rightarrow x_{1}\right )\). The confinement of the motion to this hyperplane is the geometrical image of identical synchronization. Therefore, this hyperplane is sometimes [12] called the synchronization manifold.

In the example of two synchronized Lorenz systems considered above, we saw that the differences \(\left \vert y_{1} - y_{2}\right \vert \rightarrow 0\) and \(\left \vert z_{1} - z_{2}\right \vert \rightarrow 0\) as t → ∞. This is possible only if the synchronization manifold is stable. In order to make sure of this, we transform to the new coordinates

$$\displaystyle\begin{array}{rcl} x_{1}& =& x_{1}\,, \\ y_{\perp }& =& y_{1} - y_{2};\quad y_{\parallel } = y_{1} + y_{2}\,, \\ z_{\perp }& =& \;\;z_{1} - z_{2};\quad z_{\parallel } = z_{1} + z_{2}.{}\end{array}$$
(6.7)

In the new variables the three coordinates \(\left (x_{1},y_{\parallel },z_{\parallel }\right )\) belong to the synchronization manifold, and the two others \(\left (y_{\perp },z_{\perp }\right )\) to the transversal one. The synchronization condition requires that the variables y  ⊥  and z  ⊥  tend to zero as t → ∞. In other words, the point \(\left (0,0\right )\) of the transversal manifold must be stable. The system dynamics in the vicinity of that point is described by the equation

$$\displaystyle{ \left (\begin{array}{*{20}c} \dot{y} _{\perp } \\ \dot{z} _{\perp }\\ \end{array} \right ) = \left (\begin{array}{*{20}c} -1&-x_{1} \\ x_{1} & -b\\ \end{array} \right )\left (\begin{array}{*{20}c} y_{\perp } \\ z_{\perp }\\ \end{array} \right )\,. }$$
(6.8)

The general condition of stability is the negativity of the Lyapunov exponents of Eq. (6.8). This condition is equivalent to the negativity of the Lyapunov exponents for the variables y 2, z 2 of the system (6.6), since the Jacobi matrices for these subsystems are identical. Therefore, we can consider the driven system \(\left (y_{2},z_{2}\right )\) as a separate dynamical system, driven by the signal x 1, and we can calculate the Lyapunov exponents of that subsystem in the usual way. Those Lyapunov exponents depend on x 1 and are therefore called conditional Lyapunov exponents [13]. Their values for a given dynamical system depend on the choice of the driving coordinate.
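The maximal conditional (transversal) Lyapunov exponent can be estimated directly from Eq. (6.8): integrate the driving Lorenz system together with a transversal tangent vector and average its logarithmic growth rate. A minimal sketch, again assuming σ = 10, r = 28, b = 8/3:

```python
import numpy as np

SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0   # assumed classic values

def rhs(s):
    """Driving Lorenz system plus a transversal tangent vector (e1, e2)
    evolving under the linearized equation (6.8)."""
    x, y, z, e1, e2 = s
    return np.array([
        SIGMA * (y - x),
        -x * z + R * x - y,
        x * y - B * z,
        -e1 - x * e2,
        x * e1 - B * e2,
    ])

def rk4_step(s, dt):
    k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, t_avg = 0.005, 200.0
s = np.array([1.0, 1.0, 1.0, 1.0, 0.0])
for _ in range(2000):                      # transient onto the attractor
    s = rk4_step(s, dt)
    s[3:] /= np.linalg.norm(s[3:])
acc, n = 0.0, int(t_avg / dt)
for _ in range(n):
    s = rk4_step(s, dt)
    norm = np.linalg.norm(s[3:])
    acc += np.log(norm)
    s[3:] /= norm
lam = acc / (n * dt)
print(lam)   # negative: x1-driving synchronizes the Lorenz system
```

The estimate is negative, in agreement with the stability of the synchronization manifold observed above.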

This complete replacement scheme can be slightly modified [14]. The modification consists in introducing the driving coordinate into some, but not all, of the driven system equations. The choice of the equations where the replacement is performed is dictated by two factors: first, whether the replacement leads to stable synchronization; second, whether the corresponding replacement can be realized in the real physical device which we want to construct. Let us consider the following example of partial replacement, based on the Lorenz system

$$\displaystyle\begin{array}{rcl} \dot{x}_{1}& =& \sigma \left (y_{1} - x_{1}\right ),\quad \quad \dot{x} _{2} =\sigma (\,y_{1} - x_{2})\,, \\ \dot{y}_{1}& =& rx_{1} - y_{1} - x_{1}z_{1},\quad \dot{y} _{2} = rx_{2} - y_{2} - x_{2}z_{2}\,, \\ \dot{z}_{1}& =& x_{1}y_{1} - bz_{1},\quad \quad \quad \dot{z} _{2} = x_{2}y_{2} - bz_{2}\,. {}\end{array}$$
(6.9)

In (6.9) the replacement y 2 → y 1 was made only in the equation for x 2. This replacement leads to a new Jacobi matrix defining the stability condition. Now it is a 3 × 3 matrix with zeroes in the positions of the partial replacement

$$\displaystyle{ \left (\begin{array}{*{20}c} \dot{x} _{\perp } \\ \dot{y} _{\perp } \\ \dot{z} _{\perp }\\ \end{array} \right ) \approx \left (\begin{array}{*{20}c} -\sigma & 0 & 0\\ r - z_{2} & -1& -x_{2} \\ y_{2} & x_{2} & -b\\ \end{array} \right )\left (\begin{array}{*{20}c} x_{\perp } \\ y_{\perp } \\ z_{\perp }\\ \end{array} \right )\,. }$$
(6.10)

Generally speaking, in such cases the stability conditions differ from those for complete replacement, and sometimes they turn out to be preferable.
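A quick numerical check of the partial replacement scheme (6.9), under the same assumed parameter values as before, shows that the full state of the response converges to that of the drive:

```python
import numpy as np

SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0   # assumed classic values

def rhs(u):
    """The six equations (6.9); the drive's y1 replaces y2 only in the
    equation for x2 (partial replacement)."""
    x1, y1, z1, x2, y2, z2 = u
    return np.array([
        SIGMA * (y1 - x1),
        R * x1 - y1 - x1 * z1,
        x1 * y1 - B * z1,
        SIGMA * (y1 - x2),        # partial replacement: y2 -> y1 here only
        R * x2 - y2 - x2 * z2,
        x2 * y2 - B * z2,
    ])

def rk4_step(u, dt):
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.array([1.0, 1.0, 1.0, 8.0, -7.0, 30.0])
dt = 0.002
for _ in range(int(60.0 / dt)):
    u = rk4_step(u, dt)
err = float(np.linalg.norm(u[:3] - u[3:]))
print(err)   # close to zero: this partial replacement also synchronizes
```

Here x ⊥ decays on its own (the first row of (6.10)), after which the remaining error obeys the same stable transversal dynamics as in the complete replacement case.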

In some cases, it may be useful to send the driving signal only at random moments of time. In this synchronization version (called “random synchronization” [15]), the driven system is subject to the driving influence only at random moments, and in the intervals between them it evolves freely. It is interesting to note that with this approach it is sometimes possible to achieve stability of the synchronized state even in cases when continuous driving does not work.

From a more general point of view, the synchronization of chaotic systems can be considered in terms of negative feedback, which we used earlier in the example of continuous control. Introducing a damping term into the equations for the driven system, we get the following:

$$\displaystyle{ \dot{\mathbf{x}}_{1} =\mathbf{ F}(\mathbf{x}_{1}),\quad \quad \dot{\mathbf{x}}_{2} =\mathbf{ F}(\mathbf{x}_{2}) +\alpha \hat{E} (\mathbf{x}_{1} -\mathbf{ x}_{2})\,, }$$
(6.11)

where the matrix \(\hat{E}\) determines the linear combinations of the \(\mathbf{x}\)-components which form the feedback loop, and α is the coupling constant. For example, for the Rössler system

$$\displaystyle\begin{array}{rcl} \dot{x}_{1}& =& -(\,y_{1} + z_{1}),\quad \dot{x} _{2} = -(\,y_{2} + z_{2}) +\alpha (x_{1} - x_{2})\,, \\ \dot{y}_{1}& =& x_{1} + ay_{1},\quad \dot{y} _{2} = x_{2} + ay_{2}\,, \\ \dot{z}_{1}& =& b + z_{1}(x_{1} - c);\quad \dot{z} _{2} = b + z_{2}(x_{2} - c)\,. {}\end{array}$$
(6.12)

In this case

$$\displaystyle{ \hat{E}= \left (\begin{array}{*{20}c} 1&0&0\\ 0 &0 &0 \\ 0&0&0\\ \end{array} \right )\,. }$$
(6.13)


This gives us a new equation of motion for the transversal manifold coordinates

$$\displaystyle{ \left (\begin{array}{*{20}c} \dot{x} _{\perp } \\ \dot{y} _{\perp } \\ \dot{z} _{\perp }\\ \end{array} \right ) = \left (\begin{array}{*{20}c} -\alpha &-1& -1\\ 1 & a & 0 \\ z & 0 &x - c\\ \end{array} \right )\left (\begin{array}{*{20}c} x_{\perp } \\ y_{\perp } \\ z_{\perp }\\ \end{array} \right )\,. }$$
(6.14)

By calculating the conditional Lyapunov exponents for the matrix in (6.14), we can see whether the transversal perturbations are damped and, therefore, whether the synchronization manifold is stable. In practice, it is sufficient to find only the maximal transversal Lyapunov exponent \(\lambda _{\max }^{\perp }\): its negativity guarantees the stability of the synchronization manifold.

Fig. 6.2

The maximal transversal Lyapunov exponent \(\lambda _{\max }^{\perp }\) as a function of the coupling constant α for the Rössler system [12]

Figure 6.2 shows the dependence of the maximal transversal Lyapunov exponent on the coupling constant α for the Rössler system. The introduction of feedback initially leads to a decrease in the Lyapunov exponent, so in some intermediate region of coupling constant values the two Rössler systems can be synchronized. However, with a further increase of the coupling constant, \(\lambda _{\max }^{\perp }\) becomes positive and synchronization is impossible. It is easy to see that for extremely large values of α we have x 2 → x 1, and the feedback introduced in (6.12) becomes equivalent to the complete replacement considered above. Then the sign of the quantity \(\lambda _{\max }^{\perp }\left (\alpha \rightarrow \infty \right )\) determines the possibility of system synchronization in the case of complete replacement.
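The qualitative picture of Fig. 6.2 can be probed numerically by integrating the Rössler system together with the linearized transversal dynamics (6.14) and renormalizing the tangent vector. A sketch, assuming the common parameter values a = b = 0.2, c = 5.7 (not fixed in the text) and assuming α = 1 lies inside the synchronization window:

```python
import numpy as np

A_R, B_R, C_R = 0.2, 0.2, 5.7   # common Rössler parameters (an assumption)

def rhs(s, alpha):
    """Rössler drive plus transversal tangent vector (e1, e2, e3)
    evolving under Eq. (6.14)."""
    x, y, z, e1, e2, e3 = s
    return np.array([
        -(y + z),
        x + A_R * y,
        B_R + z * (x - C_R),
        -alpha * e1 - e2 - e3,
        e1 + A_R * e2,
        z * e1 + (x - C_R) * e3,
    ])

def rk4_step(s, dt, alpha):
    k1 = rhs(s, alpha); k2 = rhs(s + 0.5 * dt * k1, alpha)
    k3 = rhs(s + 0.5 * dt * k2, alpha); k4 = rhs(s + dt * k3, alpha)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def lam_perp(alpha, t_avg=1000.0, dt=0.01):
    s = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0])
    for _ in range(int(100.0 / dt)):       # transient onto the attractor
        s = rk4_step(s, dt, alpha)
        s[3:] /= np.linalg.norm(s[3:])
    acc, n = 0.0, int(t_avg / dt)
    for _ in range(n):
        s = rk4_step(s, dt, alpha)
        norm = np.linalg.norm(s[3:])
        acc += np.log(norm)
        s[3:] /= norm
    return acc / (n * dt)

lam_uncoupled = lam_perp(0.0)   # reduces to the ordinary maximal exponent
lam_coupled = lam_perp(1.0)     # assumed to lie inside the stability window
print(lam_uncoupled, lam_coupled)
```

At α = 0 the matrix in (6.14) reduces to the Rössler Jacobian, so the routine returns the (positive) maximal Lyapunov exponent, while at the intermediate coupling the transversal exponent turns negative.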

6.3 General Definition of Dynamical System Synchronization

In the last decade many new types of chaotic synchronization have appeared: apart from those mentioned in the preceding sections, there are phase synchronization, delayed synchronization, generalized synchronization, and others. As almost always happens in the first stages of the investigation of a newly discovered phenomenon, there are no strict universal definitions. Such definitions are replaced by a “list”: when researchers face a new effect in the phenomenon, they simply extend the list. This situation is clearly unsatisfactory, and at some stage the list must be replaced by a strict definition encompassing all known effects connected with the phenomenon, as well as those to be discovered in the future.

In the present section, following [16], we will make an attempt to give such a definition for finite-dimensional systems. Although we discuss explicitly the case of synchronization for two time-continuous dynamical systems, the results can be generalized for N systems, both continuous and discrete in time.

In order to construct the definition, let us assume that some large stationary dynamical system can be divided into two subsystems

$$\displaystyle\begin{array}{rcl} \dot{\mathbf{x}}& =& \mathbf{f}_{1}(\mathbf{x},\mathbf{y};t)\,, \\ \dot{\mathbf{y}}& =& \mathbf{f}_{2}(\mathbf{x},\mathbf{y};t)\,.{}\end{array}$$
(6.15)

The vectors \(\mathbf{x}\) and \(\mathbf{y}\) can have different dimensions. The phase space and the vector field of the big system are direct products of the phase spaces and vector fields of the subsystems. The list of phenomena described by (6.15) is inexhaustible.

Generally speaking, by synchronization we understand the time-correlated behavior of two different processes. The Oxford English Dictionary defines synchronization as “to agree in time” and “to happen at the same time.” This intuitive definition implies that there are ways of measuring the characteristics of the subsystems, as well as a criterion of concordance in time of these measured data. If these conditions are satisfied, we can say that the systems are synchronized. Below we will attempt to formalize each of these intuitive concepts. Let \(\varphi (\mathbf{z}_{0})\) be a trajectory of the original system (6.15) with the initial condition \(\mathbf{z}_{0} = \left [\mathbf{x}_{0},\mathbf{y}_{0}\right ]\). Respectively, the curves \(\varphi _{x}(\mathbf{z}_{0})\) and \(\varphi _{y}(\mathbf{z}_{0})\) are obtained by elimination of the \(\mathbf{y}\) and \(\mathbf{x}\) components, i.e., by projection. The functions \(\varphi _{x}(\mathbf{z}_{0})\) and \(\varphi _{y}(\mathbf{z}_{0})\) may be considered as the trajectories of the first and the second subsystem, respectively. The set of trajectories of each subsystem can be used to construct subsystem characteristics \(g(\mathbf{x})\) or \(g\left (\mathbf{y}\right )\). The measurable characteristic can either depend on time explicitly [for example, the first subsystem coordinate at the time moment t, \(\mathbf{x}(t) =\mathbf{ g}(\mathbf{x})\)], or represent a time average [for example, the Lyapunov exponent \(\lambda = g\left (\mathbf{x}\right )\)].

Let us now give the following definition of synchronization: two subsystems (6.15) are synchronized on the trajectory \(\varphi \left (\mathbf{z}_{0}\right )\) with respect to the properties \(\mathbf{g}_{x}\) and \(\mathbf{g}_{y}\) if there is a time-independent comparison function \(\mathbf{h}\) for which

$$\displaystyle{ \left \|\mathbf{h}\left [\mathbf{g}\left (\mathbf{x}\right ),\mathbf{g}\left (\mathbf{y}\right )\right ]\right \| = 0\,. }$$
(6.16)

We would like to emphasize that this definition must be satisfied on all trajectories. The definition is convenient because it does not depend a priori either on the measured characteristics or on the comparison function.

The most frequently used types of comparison functions are

$$\displaystyle\begin{array}{rcl} \mathbf{h}\left [\mathbf{g}\left (\mathbf{x}\right ),\mathbf{g}\left (\mathbf{y}\right )\right ]& \equiv & \mathbf{g}\left (\mathbf{x}\right ) -\mathbf{ g}\left (\mathbf{y}\right )\,, \\ \mathbf{h}\left [\mathbf{g}\left (\mathbf{x}\right ),\mathbf{g}\left (\mathbf{y}\right )\right ]& \equiv & \mathop{\lim }\limits _{t\rightarrow \infty }\left [\mathbf{g}\left (\mathbf{x}\right ) -\mathbf{ g}\left (\mathbf{y}\right )\right ]\,, \\ \mathbf{h}\left [\mathbf{g}\left (\mathbf{x}\right ),\mathbf{g}\left (\mathbf{y}\right )\right ]& \equiv & \mathop{\lim }\limits _{T\rightarrow \infty }\frac{1} {T}\int _{t}^{t+T}\left [\mathbf{g}\left (\mathbf{x}(s)\right ) -\mathbf{ g}\left (\mathbf{y}(s)\right )\right ]ds\,.{}\end{array}$$
(6.17)

This definition is quite useful because the most important characteristic of finite motion is the frequency spectrum. The measured frequencies \(\omega _{x} = g\left (\mathbf{x}\right )\) and \(\omega _{y} = g\left (\mathbf{y}\right )\) represent peaks in the power spectrum. To study frequency synchronization we usually take the comparison function in the form:

$$\displaystyle{ h\left [g\left (\mathbf{x}\right ),g\left (\mathbf{y}\right )\right ] = n_{x}\omega _{x} - n_{y}\omega _{y} = 0\,. }$$
(6.18)

In the case of identical synchronization, the second relation in (6.17) serves to compare the trajectory of one system with that of the other, i.e., \(\mathbf{g}\left (\mathbf{x}\right ) =\mathbf{ x}\left (t\right ),\;\mathbf{g}\left (\mathbf{y}\right ) =\mathbf{ y}\left (t\right )\).

This definition also covers so-called delayed synchronization, when some measured characteristics lag behind others by the same time interval τ. In that case, we can take \(\mathbf{g}\left (\mathbf{x}\right ) =\mathbf{ x}\left (t\right )\) and \(\mathbf{g}\left (\mathbf{y}\right ) =\mathbf{ y}\left (t+\tau \right )\) and use the first relation in (6.17) as the comparison function.

Therefore, the definition (6.16) includes all the examples of finite-dimensional dynamical system synchronization listed above.

6.4 Chaotic Synchronization of Hamiltonian Systems

Up to now we have considered chaotic synchronization only for dissipative systems. In the present section we show [17] that, using the same approach as for dissipative systems, we can synchronize two Hamiltonian systems. At first glance, any attempt to synchronize two chaotic Hamiltonian systems seems doomed to failure. Indeed, as was shown above, the necessary condition of any synchronization is local synchronization, provided by the negativity of all Lyapunov exponents of the driven subsystem (recall that we called them conditional Lyapunov exponents, because they depend on the driving subsystem coordinates). A Hamiltonian system, however, preserves phase volume, as we have seen in Chap. 3, so the sum of its Lyapunov exponents equals zero, and it would seem that synchronization is impossible. But it does not follow from this that the sum of the conditional Lyapunov exponents of a subsystem also equals zero: a subsystem of a phase-volume-preserving system does not necessarily preserve phase volume, and therefore a Hamiltonian system can be synchronized.

Let us consider as an example the so-called standard mapping , which we dealt with in the previous chapter, in the following form:

$$\displaystyle\begin{array}{rcl} I_{n+1}& =& I_{n} + k\sin \theta _{n}\,, \\ \theta _{n+1}& =& \theta _{n} + I_{n} + k\sin \theta _{n},\;\bmod 2\pi;\quad k > 0\,.{}\end{array}$$
(6.19)

We will further drop \(\bmod 2\pi\). In the variable I the mapping has period 2π; therefore, it is sufficient to study it on the square \(\left [0,2\pi \right ] \times \left [0,2\pi \right ]\) with opposite sides identified. The mapping has a well-known physical interpretation [18]: the frictionless pendulum driven by periodic pulses. In this interpretation I n and θ n represent the angular momentum and the angular coordinate immediately before the nth pulse.

Following the standard synchronization procedure, we make a duplicate of the original system

$$\displaystyle\begin{array}{rcl} J_{n+1}& =& J_{n} + k\sin \phi _{n}\,, \\ \phi _{n+1}& =& \phi _{n} + J_{n} + k\sin \phi _{n}\,.{}\end{array}$$
(6.20)

Let us choose the angular momentum I of the first system as the driving variable. Then the full system will be described by the system of coupled equations

$$\displaystyle\begin{array}{rcl} I_{n+1}& =& I_{n} + k\sin \theta _{n}\,, \\ \theta _{n+1}& =& \theta _{n} + I_{n} + k\sin \theta _{n}\,, \\ J_{n+1}& =& I_{n} + k\sin \phi _{n}\,, \\ \phi _{n+1}& =& \phi _{n} + I_{n} + k\sin \phi _{n}\,.{}\end{array}$$
(6.21)

The subsystems will be synchronized provided the condition

$$\displaystyle{ \mathop{\lim }\limits _{n\rightarrow \infty }\left \vert \theta _{n} -\phi _{n}\right \vert = 0\,. }$$
(6.22)
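Condition (6.22) can be checked by direct iteration of the coupled system (6.21); a sketch with an assumed coupling strength k = 1.5:

```python
import math, random

def sync_standard_maps(k=1.5, n_steps=5000, seed=7):
    """Iterate the coupled system (6.21) and return the wrapped
    angular difference |theta_n - phi_n| after n_steps iterations."""
    rng = random.Random(seed)
    I  = rng.uniform(0.0, 2.0 * math.pi)   # driving momentum
    th = rng.uniform(0.0, 2.0 * math.pi)   # driving angle
    ph = rng.uniform(0.0, 2.0 * math.pi)   # driven angle
    J  = rng.uniform(0.0, 2.0 * math.pi)   # driven momentum (passive slave)
    for _ in range(n_steps):
        I_next = I + k * math.sin(th)
        th = (th + I + k * math.sin(th)) % (2.0 * math.pi)
        J  = I + k * math.sin(ph)          # momentum overwritten by the drive
        ph = (ph + I + k * math.sin(ph)) % (2.0 * math.pi)
        I = I_next
    d = abs(th - ph) % (2.0 * math.pi)
    return min(d, 2.0 * math.pi - d)

diff = sync_standard_maps()
print(diff)
```

For this k the angular difference collapses to numerical zero, in agreement with the negativity of the conditional Lyapunov exponent derived below.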

The difference between the driving and the driven angular variables obeys

$$\displaystyle{ \theta _{n+1} -\phi _{n+1} =\theta _{n} -\phi _{n} + k(\sin \theta _{n} -\sin \phi _{n})\,. }$$
(6.23)

Linearization of (6.23) at small deviations of φ n from the driving angular variable θ n gives

$$\displaystyle{ \varDelta _{n+1} =\varDelta _{n}(1 + k\cos \theta _{n})\,, }$$
(6.24)

where \(\varDelta _{n} =\theta _{n} -\phi _{n}\). Equation (6.24) has the solution

$$\displaystyle{ \varDelta _{n} =\prod \limits _{ j=0}^{n-1}(1 + k\cos \theta _{ j})\,\varDelta _{0}\,. }$$
(6.25)

Local synchronization takes place if this product tends to zero as n → ∞. This is equivalent to the requirement that the conditional Lyapunov exponent for the angular variable

$$\displaystyle{ \lambda _{\theta } =\mathop{ \lim }\limits _{n\rightarrow \infty }\frac{1} {n}\sum \limits _{j=0}^{n-1}\ln \left \vert 1 + k\cos \theta _{ j}\right \vert }$$
(6.26)

is negative. The sum entering (6.26) represents the time average of the function \(g(\theta ) =\ln \left \vert 1 + k\cos \theta \right \vert \). This time averaging can be formally represented as a mean value of that function with respect to the invariant measure ρ(θ) (see Chap. 3). The latter determines the iteration density for the mapping θ n+1 = f(θ n ) and is defined in the following way:

$$\displaystyle{ \rho (\theta ) =\mathop{ \lim }\limits _{n\rightarrow \infty }\frac{1} {n}\sum \limits _{i=0}^{n-1}\delta [\theta -f^{i}(\theta _{ 0})]\,. }$$
(6.27)

It allows us to replace the time average \(\bar{g}(\theta )\) by the average over the invariant measure

$$\displaystyle{ \bar{g}(\theta ) =\mathop{ \lim }\limits _{n\rightarrow \infty }\frac{1} {n}\sum \limits _{i=0}^{n-1}g(\theta _{ i}) =\mathop{ \lim }\limits _{n\rightarrow \infty }\frac{1} {n}\sum \limits _{i=0}^{n-1}g\left [f^{i}(\theta _{ 0})\right ] =\int d\theta \rho (\theta )g(\theta )\,. }$$
(6.28)

Let us use this expression to transform the relation (6.26).
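The equality (6.28) of the time average and the measure average is easy to check numerically for the function g(θ) = ln|1 + k cos θ|, taking the measure as approximately uniform at large k; a sketch with an assumed value k = 3:

```python
import math

K = 3.0   # assumed value in the strongly chaotic regime

def g(th):
    return math.log(abs(1.0 + K * math.cos(th)))

def time_average(n=200000, i0=0.3, th0=1.0):
    """Left-hand side of (6.28): time average along a standard-map orbit."""
    I, th = i0, th0
    acc = 0.0
    for _ in range(n):
        acc += g(th)
        I += K * math.sin(th)
        th = (th + I) % (2.0 * math.pi)
    return acc / n

def measure_average(m=100000):
    """Right-hand side of (6.28) with the uniform measure rho = 1/(2 pi)."""
    h = 2.0 * math.pi / m
    return sum(g((i + 0.5) * h) for i in range(m)) * h / (2.0 * math.pi)

t_avg = time_average()
m_avg = measure_average()
print(t_avg, m_avg)   # the two averages nearly coincide
```

The small residual discrepancy reflects the regular islands that survive in the chaotic sea, i.e., the roughness of the uniform-measure approximation used next.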

Fig. 6.3

Conditional Lyapunov exponent for the standard mapping as function of k [17]

In a rough approximation, for chaotic orbits of the standard mapping (6.19) the invariant measure can be considered homogeneous on the interval \(\left [0,2\pi \right ]\), i.e., ρ(θ) = 1∕(2π), and for λ θ we obtain

$$\displaystyle{ \lambda _{\theta } = \frac{1} {2\pi }\int _{0}^{2\pi }\ln \left \vert 1 + k\cos \theta \right \vert d\theta \,. }$$
(6.29)

The integral (6.29) can be calculated analytically,

$$\displaystyle{ \lambda _{\theta } = \left \{\begin{array}{@{}l@{\quad }l@{}} \ln \left (\frac{1+\sqrt{1-k^{2}}} {2} \right ),\quad &0 \leq k \leq 1 \\ \ln \left (\frac{k} {2} \right ), \quad &k \geq 1\\ \quad \end{array} \right.\,. }$$
(6.30)

Figure 6.3 presents the conditional Lyapunov exponent λ θ as a function of k. The quantity λ θ is negative for k < 2. As is well known, the Chirikov criterion of non-linear resonance overlap places the transition to global stochasticity in the standard mapping at k ≈ 1. It follows that in the global stochasticity region 1 < k < 2 it is possible to synchronize the Hamiltonian system (6.19) if we choose the angular momentum I as the driving variable. It is interesting to note that the minimal value of the conditional Lyapunov exponent, \(\left (\lambda _{\theta }\right )_{\min } = -\ln 2\), is achieved at k = 1; this value of k therefore corresponds to the minimal time needed to achieve synchronization.
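The closed form (6.30) can be checked against a direct numerical quadrature of the integral (6.29); a minimal sketch:

```python
import math

def lam_quadrature(k, n=400000):
    """Midpoint-rule evaluation of the integral (6.29); midpoints avoid
    the integrable logarithmic singularities at 1 + k cos(theta) = 0."""
    h = 2.0 * math.pi / n
    acc = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        acc += math.log(abs(1.0 + k * math.cos(th)))
    return acc * h / (2.0 * math.pi)

def lam_closed(k):
    """The closed form (6.30)."""
    if k <= 1.0:
        return math.log((1.0 + math.sqrt(1.0 - k * k)) / 2.0)
    return math.log(k / 2.0)

for k in (0.5, 1.5, 3.0):
    print(k, lam_quadrature(k), lam_closed(k))
```

The two branches of (6.30) match the quadrature on both sides of k = 1, and both give λ θ = −ln 2 at k = 1.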

Fig. 6.4

(a ) A chaotic trajectory for the driving system (standard mapping). Arrows point to the initial conditions for the two subsystems; (b ) difference of angular coordinates for the driving and the driven subsystems as function of time (or iteration number) [17]

Figure 6.4a presents a chaotic trajectory of the driving system \(\left (I,\theta \right )\) and shows the initial conditions for the two subsystems. In Fig. 6.4b the difference of the angular coordinates Δ n is plotted as a function of the iteration number n. Complete synchronization is achieved at n ∼ 100. If we instead take the angular coordinate θ as the driving variable, it can be shown that the conditional Lyapunov exponent equals zero. Synchronization is then impossible, because each subsystem preserves phase volume separately.

6.5 Realization of Chaotic Synchronization Using Control Methods

In this section, following [19], we will try to answer the following question. Suppose that we have two almost identical chaotic systems. Can we, using the OGY parametric control method considered in the previous chapter, achieve synchronization of their chaotic trajectories? In other words, if the original OGY method was used to stabilize unstable periodic orbits, can we modify it in order to stabilize a chaotic trajectory of one system in a relatively small vicinity of the chaotic trajectory of another? A positive answer to this question was already obtained by using continuous control methods. Now we consider this question as applied to discrete parametric OGY control.

Suppose we have two chaotic systems A and B, and let some system parameter (say, of system B) be available for alteration. Let us also assume that some variables of both systems can be measured. Based on those measurements, we can choose a moment of time when the measured variables are close to each other. Having calculated the required parameter perturbation by the OGY method, we can then synchronize the systems in a short time. Due to the inevitable presence of noise, there is a finite probability of losing the synchronization. However, because of ergodicity, after some time the systems' trajectories will again come close in phase space, and we will be able to synchronize them anew.

Let us realize this scheme for the case of two almost identical chaotic systems, which can be described by the following two-dimensional mappings:

$$\displaystyle\begin{array}{rcl} \mathbf{x}_{n+1}& =& \mathbf{F}(\mathbf{x}_{n},p_{0})\;\;[A]\,, \\ \mathbf{y}_{n+1}& =& \mathbf{F}(\mathbf{y}_{n},p)\quad [B]\,,{}\end{array}$$
(6.31)

where \(\mathbf{x}_{n},\mathbf{y}_{n} \in \text{R}^{2}\), \(\mathbf{F}\) is an analytic function of its variables, p 0 is a fixed parameter of the system A, and p is an externally adjustable parameter of the system B. As in the OGY control case, we require a small variation region of the parameter p: \(\left \vert p - p_{0}\right \vert <\delta\). Suppose that the systems start from different initial conditions. Generally speaking, the chaotic trajectories describing the evolution of each system are completely uncorrelated. However, due to the ergodicity of motion, with unit probability they will come arbitrarily close to each other at some later moment n c . Without control, the trajectories begin to diverge exponentially for n > n c . Our goal is to program the variation of the parameter p in such a way that \(\left \vert \mathbf{y}_{n} -\mathbf{ x}_{n}\right \vert \rightarrow 0\) for \(n\geqslant n_{c}\). The linearized dynamics in the vicinity of the target trajectory \(\left \{\mathbf{x}_{n}\right \}\) is

$$\displaystyle{ \mathbf{y}_{n+1} -\mathbf{ x}_{n+1}(\,p_{0}) = \hat{A} \left [\mathbf{y}_{n} -\mathbf{ x}_{n}(\,p_{0})\right ] +\mathbf{ B}\delta p_{n} }$$
(6.32)

(see definitions in Sect. 5.3 of Chap. 5). As we have already pointed out when considering chaos control in Hamiltonian systems, due to the conservation of phase volume the Jacobi matrix can have complex eigenvalues in this case. That is why, for the description of the linearized dynamics, it is convenient to pass from eigenvectors to stable and unstable directions at every point of the chaotic orbit. Let \(\mathbf{e}_{s(n)}\) and \(\mathbf{e}_{u(n)}\) be unit vectors in those directions, and \(\left \{\mathbf{f}_{s(n)},\mathbf{f}_{u(n)}\right \}\) the corresponding “orthogonal” basis, defined by the relations (5.15) in Chap. 5. Then, in this basis, the condition under which the vector \(\mathbf{y}_{n+1}\) gets onto the stable direction of the point \(\mathbf{x}_{n+1}(\,p_{0})\), required for synchronization, reads:

$$\displaystyle{ \left [\mathbf{y}_{n+1} -\mathbf{ x}_{n+1}(\,p_{0})\right ] \cdot \mathbf{ f}_{u(n+1)} = 0\,. }$$
(6.33)

Using (6.32) and (6.33), we get the parameter perturbation δ p n  = p n p 0, necessary to satisfy that condition:

$$\displaystyle{ \delta p_{n} = \frac{\left \{\hat{A}\cdot \left [\mathbf{y}_{n} -\mathbf{ x}_{n}(\,p_{0})\right ]\right \} \cdot \mathbf{ f}_{u(n+1)}} {-\mathbf{B} \cdot \mathbf{ f}_{u(n+1)}} \,. }$$
(6.34)

If the perturbation \(\delta p_{n}\) calculated according to (6.34) exceeds δ in absolute value, we set \(\delta p_{n} = 0\).

Fig. 6.5 Synchronization of two Hénon mappings: (a) two chaotic trajectories before and after the control is switched on; (b) time dependence of \(\varDelta x = x_{2} - x_{1}\), corresponding to (a) [19]

Let us check the efficiency of this scheme for the Hénon mapping ((5.15), Chap. 5). We fix the value \(p = p_{0} = 1.4\) for one of the systems; for the other, we treat p as a fitting parameter, changing according to (6.34) within the small interval \(\left [1.39,1.41\right ]\). Let the two systems start at the moment t = 0 from different initial conditions: \(\left (x_{1},y_{1}\right ) = \left (0.5,-0.8\right )\) and \(\left (x_{2},y_{2}\right ) = \left (0.001,0.001\right )\). The two systems then move along completely uncorrelated chaotic trajectories. At some moment, the systems come sufficiently close to each other; the required proximity of the trajectories is determined by the magnitude of the parameter δ. When that happens, we switch on the synchronization mechanism, i.e., the perturbation of the parameter p according to (6.34). Figure 6.5a shows time sequences for the two chaotic trajectories (crosses and squares) before and after the synchronization mechanism is switched on. It is clear that after the control is switched on (at approximately the 2500th iteration) the crosses and the squares overlap, though the trajectories remain chaotic. Figure 6.5b presents the time dependence of \(\varDelta x(t) = x_{2}(t) - x_{1}(t)\), which tends to zero after the synchronization mechanism is switched on. The time needed to achieve synchronization, as well as the control setup time, grows dramatically as δ decreases. Unfortunately, a direct application of the targeting methods considered in the previous chapter, which would allow us to shorten the control setup time considerably, is impossible: in the control problem the target unstable periodic orbit is fixed, while in the synchronization problem the target is not only not fixed, but moves chaotically, which makes the problem extremely complicated.
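The scheme is easy to try numerically. Below is a minimal Python sketch for two Hénon maps written as x' = p − x² + by, y' = x with b = 0.3; this concrete form of the map, the closed-form stable/unstable directions, and the iteration budget are our illustrative assumptions, not the exact setup of [19]:

```python
import math

B_HENON = 0.3          # fixed Jacobian parameter b of the Henon map
P0, DELTA = 1.4, 0.01  # nominal parameter and allowed window |p - p0| < delta

def step(x, y, p):
    return p - x * x + B_HENON * y, x

def unstable_covector(x):
    """Dual vector f_u at the point whose first coordinate is x.

    The Jacobian is A = [[-2x, b], [1, 0]]; its eigenvectors are (lam, 1),
    so f_u (with f_u . e_u = 1, f_u . e_s = 0) is available in closed form.
    """
    a = -2.0 * x
    d = math.sqrt(a * a + 4.0 * B_HENON)
    lam1, lam2 = (a + d) / 2.0, (a - d) / 2.0
    lam_s, lam_u = (lam1, lam2) if abs(lam1) < abs(lam2) else (lam2, lam1)
    det = lam_s - lam_u
    return (-1.0 / det, lam_s / det)

xa, ya = 0.5, -0.8      # driving system A (parameter fixed at p0)
xb, yb = 0.001, 0.001   # driven system B (adjustable parameter)
min_dist = float("inf")
for n in range(100_000):
    xa1, ya1 = step(xa, ya, P0)
    fu = unstable_covector(xa1)          # f_u at the next point of A
    dx, dy = xb - xa, yb - ya
    # Image of the difference under the Jacobian at the current point of A:
    adx, ady = -2.0 * xa * dx + B_HENON * dy, dx
    # Perturbation (6.34); here B = dF/dp = (1, 0), so B . f_u = fu[0]
    dp = (adx * fu[0] + ady * fu[1]) / (-fu[0])
    if abs(dp) > DELTA:
        dp = 0.0                         # outside the window: no control
    xb, yb = step(xb, yb, P0 + dp)
    xa, ya = xa1, ya1
    if n > 1000:
        min_dist = min(min_dist, math.hypot(xb - xa, yb - ya))
print(min_dist)
```

Once the two trajectories wander close enough that the required \(\delta p_{n}\) fits in the window, the control locks on and the distance collapses toward machine precision, mirroring Fig. 6.5.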

The following problem [20] is very close in formulation to the problems of periodic control, where stabilization is achieved by the purposeful alteration of the system parameters. Suppose

$$\displaystyle{ \dot{\mathbf{x}} =\mathbf{ f}\left (\mathbf{x},\mathbf{p}\right ) }$$
(6.35)

is an experimental realization of a dynamical system whose parameters \(\mathbf{p} \in \text{R}^{m}\) are unknown. We assume that we know the time dependence of some scalar observable quantity \(s = h(\mathbf{x})\) and the function \(\mathbf{f}\) describing the model dynamics. Suppose, then, that we can construct the system

$$\displaystyle{ \dot{\mathbf{y}} =\mathbf{ g}\left (s,\mathbf{y},\mathbf{q}\right )\,, }$$
(6.36)

which will be synchronized with the first one \(\left (\mathbf{y} \rightarrow \mathbf{ x},\;t \rightarrow \infty \right )\) if \(\mathbf{q} =\mathbf{ p}\). If the functional form of the vector field \(\mathbf{f}\) is known, then for the construction of the required subsystem we can use the decomposition methods considered in Sect. 6.1. The question that we are interested in is the following: can we construct a system of ordinary differential equations for the parameters \(\mathbf{q}\),

$$\displaystyle{ \dot{\mathbf{q}} =\mathbf{ u}\left (s,\mathbf{y},\mathbf{q}\right ) }$$
(6.37)

such that \(\left (\mathbf{y},\mathbf{q}\right ) \rightarrow \left (\mathbf{x},\mathbf{p}\right )\) as \(t \rightarrow \infty \)? Let us show with a concrete example that, generally speaking, the answer to this question is positive. To that end, we again address the Lorenz system

$$\displaystyle\begin{array}{rcl} \dot{x}_{1}& =& \sigma \left (x_{2} - x_{1}\right )\,, \\ \dot{x}_{2}& =& p_{1}x_{1} - p_{2}x_{2} - x_{1}x_{3} + p_{3}\,, \\ \dot{x}_{3}& =& x_{1}x_{2} - bx_{3}\,, {}\end{array}$$
(6.38)

with \(p_{1} = 28\), \(p_{2} = 1\), \(p_{3} = 0\), \(b = 8/3\). We will assume that the observable variable is \(s = h\left (\mathbf{x}\right ) = x_{2}\) and use it as the driving variable,

$$\displaystyle\begin{array}{rcl} \dot{y}_{1}& =& \sigma \left (s - y_{1}\right )\,, \\ \dot{y}_{2}& =& q_{1}y_{1} - q_{2}y_{2} - y_{1}y_{3} + q_{3}\,, \\ \dot{y}_{3}& =& y_{1}y_{2} - by_{3}\,. {}\end{array}$$
(6.39)

Suppose that the parameters q variation process is described by the following system of equations:

$$\displaystyle\begin{array}{rcl} \dot{q}_{1}& =& u_{1}\left (s,\mathbf{y},\mathbf{q}\right ) = \left [s - h\left (\mathbf{y}\right )\right ]y_{1} = \left (x_{2} - y_{2}\right )y_{1}\,, \\ \dot{q}_{2}& =& u_{2}\left (s,\mathbf{y},\mathbf{q}\right ) = -\left [s - h\left (\mathbf{y}\right )\right ]y_{2} = -\left (x_{2} - y_{2}\right )y_{2}\,, \\ \dot{q}_{3}& =& u_{3}\left (s,\mathbf{y},\mathbf{q}\right ) = \left [s - h\left (\mathbf{y}\right )\right ] = \left (x_{2} - y_{2}\right )\,. {}\end{array}$$
(6.40)

In order to show that \(\left (\mathbf{y},\mathbf{q}\right ) = \left (\mathbf{x},\mathbf{p}\right )\) is the stable solution of (6.39), (6.40), it is necessary to study the dynamics of the differences \(\mathbf{e} \equiv \mathbf{ y} -\mathbf{ x}\) and \(\mathbf{f} \equiv \mathbf{ q} -\mathbf{ p}\). Those differences obey the following system of equations:

$$\displaystyle\begin{array}{rcl} \dot{e}_{1}& =& -\sigma e_{1}\,, \\ \dot{e}_{2}& =& q_{1}y_{1} - p_{1}x_{1} - q_{2}y_{2} + p_{2}x_{2} - y_{1}y_{3} + x_{1}x_{3} + f_{3}\,, \\ \dot{e}_{3}& =& y_{1}y_{2} - x_{1}x_{2} - be_{3} \\ \dot{f}_{1}& =& -e_{2}y_{1},\quad \dot{f}_{2} = e_{2}y_{2},\quad \dot{f}_{3} = -e_{2}\,. {}\end{array}$$
(6.41)

where the parameters \(\mathbf{p}\) are assumed to be constant. From the first equation it follows that e 1 → 0, i.e., y 1 → x 1. In the limit \(t \rightarrow \infty \) the system (6.41) reduces to

$$\displaystyle\begin{array}{rcl} \dot{e}_{2}& =& f_{1}y_{1} - f_{2}y_{2} - p_{2}e_{2} - y_{1}e_{3} + f_{3}\,, \\ \dot{e}_{3}& =& y_{1}e_{2} - be_{3}\,, \\ \dot{f}_{1}& =& -e_{2}y_{1},\quad \dot{f}_{2} = e_{2}y_{2},\quad \dot{f}_{3} = -e_{2}\,. {}\end{array}$$
(6.42)

In order to study the global stability of the system we will use the method of Lyapunov functions [21], whose main principle is the following. Suppose that on a plane (the method works in a space of any dimension, but we restrict ourselves to the plane) there is a vector field with a fixed point \((\bar{x},\bar{y})\), and we want to know whether it is stable. In accordance with intuitive ideas about stability, it suffices to find some neighborhood U of the fixed point such that a trajectory starting in U remains inside it at all subsequent moments of time. This condition is satisfied if the vector field on the boundary of U is directed either inside the region, towards \(\left (\bar{x},\bar{y}\right )\), or tangentially to the boundary (see Fig. 6.6a). The Lyapunov functions method allows us to answer the question of whether the considered vector field has such a geometry.

Fig. 6.6 (a) Vector field on the boundary of U. (b) Gradient of V at different points of the boundary

Suppose that the considered vector field is defined by the equations

$$\displaystyle\begin{array}{rcl} \frac{dx} {dt} & =& f(x,y)\,, \\ \frac{dy} {dt} & =& g(x,y)\,.{}\end{array}$$
(6.43)

Let V (x, y) be some scalar function on \(\text{R}^{2}\), at least once differentiable, with \(V (\bar{x},\bar{y}) = 0\). Assume also that \(V \left (x,y\right ) > 0\) elsewhere and that, for different values of C, the sets of points satisfying the condition V (x, y) = C form closed curves surrounding the point \(\left (\bar{x},\bar{y}\right )\) (see Fig. 6.6b). It is easy to see that if the vector field has the above geometry, then

$$\displaystyle{ \nabla V (x,y) \cdot \left (dx/dt,dy/dt\right ) =\dot{ V }\leqslant 0\,. }$$
(6.44)

Thus, if it is possible to construct a function with given properties (the Lyapunov function), satisfying the condition (6.44), then the considered fixed point is globally stable.

Let us now return to considering the stability of the system (6.42). For the Lyapunov function we choose the following:

$$\displaystyle{ V = e_{2}^{2} + e_{ 3}^{2} + f_{ 1}^{2} + f_{ 2}^{2} + f_{ 3}^{2}\,. }$$
(6.45)

Using Eq. (6.42), we get

$$\displaystyle{ \dot{V } = -2\left (p_{2}e_{2}^{2} + be_{ 3}^{2}\right )\,. }$$
(6.46)
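Indeed, differentiating (6.45) along the trajectories of (6.42), all cross terms cancel in pairs and only the negative-definite part survives:

$$\displaystyle\begin{array}{rcl} \dot{V }& =& 2e_{2}\dot{e}_{2} + 2e_{3}\dot{e}_{3} + 2f_{1}\dot{f}_{1} + 2f_{2}\dot{f}_{2} + 2f_{3}\dot{f}_{3} \\ & =& 2e_{2}\left (\,f_{1}y_{1} - f_{2}y_{2} - p_{2}e_{2} - y_{1}e_{3} + f_{3}\right ) + 2e_{3}\left (y_{1}e_{2} - be_{3}\right ) \\ & & -2f_{1}e_{2}y_{1} + 2f_{2}e_{2}y_{2} - 2f_{3}e_{2} = -2\left (\,p_{2}e_{2}^{2} + be_{3}^{2}\right )\,. \end{array}$$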

For \(p_{2} > 0\) (and \(b = 8/3 > 0\)) this derivative is negative and, therefore, according to (6.44), on large time scales the driven system parameters \(\mathbf{q}\) tend to the values of the parameters \(\mathbf{p}\) of the initial system. Figure 6.7a illustrates this process. For the initial conditions we have chosen \(\mathbf{x} = \left (0.1,0.1,0.1\right ),\;\mathbf{y} = \left (-0.1,0.1,0\right )\), \(\mathbf{q} = \left (10,10,10\right )\). The dots on the figure mark the parameter values \(p_{1}/10 = 2.8\), \(p_{2} = 1\), \(p_{3} = 0\) (the first coefficient is divided by ten for convenience). In this case, we assume that all other coefficients of the two systems coincide exactly. The figure shows quite rapid convergence \(\left (\mathbf{q} \rightarrow \mathbf{ p}\right )\). Figure 6.7b shows the same process for the case when the driving system parameter σ = 10 is replaced by σ = 10.1 in the driven system. In this case there is no exact convergence, but the parameters \(\mathbf{q}\) oscillate around the exact values.
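The adaptation scheme (6.38)-(6.40) can be reproduced with a few lines of explicit Euler integration; the step size, integration time, and accuracy thresholds below are our illustrative choices, not those of [20]:

```python
# Euler-scheme sketch of the parameter-adaptation equations (6.38)-(6.40).
SIGMA, B = 10.0, 8.0 / 3.0
P = (28.0, 1.0, 0.0)           # "unknown" parameters p1, p2, p3 of (6.38)
dt, steps = 0.001, 400_000     # integrate up to t = 400

x = [0.1, 0.1, 0.1]            # driving system (6.38)
y = [-0.1, 0.1, 0.0]           # driven system (6.39)
q = [10.0, 10.0, 10.0]         # parameter estimates

for _ in range(steps):
    s = x[1]                                        # observable s = x2
    dx = [SIGMA * (x[1] - x[0]),
          P[0] * x[0] - P[1] * x[1] - x[0] * x[2] + P[2],
          x[0] * x[1] - B * x[2]]
    dy = [SIGMA * (s - y[0]),                       # x2 drives the y1 equation
          q[0] * y[0] - q[1] * y[1] - y[0] * y[2] + q[2],
          y[0] * y[1] - B * y[2]]
    e2 = s - y[1]                                   # s - h(y)
    dq = [e2 * y[0], -e2 * y[1], e2]                # adaptation law (6.40)
    x = [xi + dt * di for xi, di in zip(x, dx)]
    y = [yi + dt * di for yi, di in zip(y, dy)]
    q = [qi + dt * di for qi, di in zip(q, dq)]

errors = [abs(qi - pi) for qi, pi in zip(q, P)]
print(errors)
```

By the Lyapunov argument above, the estimates drift toward \(\mathbf{p}\) as the chaotic driving keeps exciting the error equations.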

Fig. 6.7 (a) Process of \(\overline{q} \rightarrow \overline{p}\) convergence for coinciding values of the other parameters. (b) The same process for σ = 10 (driving system) and σ = 10.1 (driven system) [20]

6.6 Synchronization Induced by Noise

In this section we will consider one more example of the constructive role of chaos: the synchronization of chaotic systems with the help of additive noise [22]. The effect that we intend to study consists in the fact that the introduction of noise (of sufficiently high intensity) into independent copies of a system makes them collapse onto the same trajectory, independently of the initial conditions of each copy. This synchronization of chaotic systems represents one more example contradicting the intuitive idea of the purely destructive role of noise. We want to clarify the essence of the effect and to analyze the structural stability of the phenomenon.

Noise-induced synchronization has a short but interesting history. The ordering effects of noise in chaotic systems were first considered in [23], whose authors came to the conclusion that noise can make a system less chaotic. Later, in [24], a noise-induced transition from chaos to regularity was demonstrated. Noise-induced synchronization was considered for the first time in [25]: the authors showed that particles in an external potential, subject to random forces, tend to collapse onto the same trajectory. Among the further papers written on the topic we would emphasize [26], which provoked heated polemics. The authors of that paper analyzed the logistic mapping

$$\displaystyle{ x_{n+1} = 4x_{n}(1 - x_{n}) +\xi _{n}\,, }$$
(6.47)

where \(\xi _{n}\) is a noise term uniformly distributed on the interval \(\left [-W,W\right ]\).

They showed that if W is sufficiently large (i.e., for high noise intensities), two different trajectories starting from distinct initial conditions but subject to identical noise (the same sequence of random numbers) will eventually end up on the same trajectory. The authors showed that the same situation also takes place for the Lorenz system. This result provoked harsh criticism [27], based on the fact that two systems can be synchronized only if the maximal Lyapunov exponent is negative. For the logistic mapping in the presence of noise, the maximal Lyapunov exponent is positive, and therefore the observed synchronization is an artifact of the loss of calculation accuracy. It was also noted [28] that the noise used in the simulation (6.47) is in reality not symmetric. A non-zero mean value \(\left \langle \xi _{n}\right \rangle\) appears because the requirement \(x_{n} \in \left [0,1\right ]\) forces us to exclude those random numbers that would violate that condition. The introduction of noise with a non-zero mean value means that the authors of [26] essentially changed the properties of the original deterministic mapping. Nevertheless, a whole series of later works showed that some chaotic mappings can be synchronized by additive noise with zero mean. The mechanism leading to synchronization was explained in [29]; its essence is the following. As we have already mentioned, synchronization can be achieved only when the Lyapunov exponents are negative. In the presence of noise, due to the reconstruction of the distribution function, the system spends more time in the regions of stability, where the local Lyapunov exponents are negative, and this ensures the global negativity of the Lyapunov exponents. Let us analyze this reasoning in more detail.
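Both points of the criticism are easy to check numerically. The sketch below implements (6.47) with the resampling constraint discussed in [27, 28]: ξ is redrawn until x stays in [0, 1] (which biases its mean), and the Lyapunov exponent is accumulated along the noisy trajectory. The noise amplitude W and the trajectory length are our illustrative assumptions:

```python
import math
import random

random.seed(1)
W, N = 0.2, 200_000

x, lyap_sum, noise_sum = 0.3, 0.0, 0.0
for _ in range(N):
    lyap_sum += math.log(abs(4.0 - 8.0 * x))   # |F'(x)| = |4 - 8x|
    while True:
        xi = random.uniform(-W, W)
        x_new = 4.0 * x * (1.0 - x) + xi
        if 0.0 <= x_new <= 1.0:
            break                              # reject xi that leaves [0, 1]
    noise_sum += xi
    x = x_new

lyap = lyap_sum / N          # stays positive: no true synchronization
noise_mean = noise_sum / N   # the rejection makes the accepted noise biased
print(lyap, noise_mean)
```

The positive exponent is precisely the point of [27]: any apparent collapse of two such trajectories must come from finite arithmetic precision, not from genuine contraction.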

Let us consider the mapping

$$\displaystyle{ x_{n+1} = F(x_{n}) = f(x_{n}) +\varepsilon \xi _{n}\,, }$$
(6.48)

where \(\left \{\xi _{n}\right \}\) is a set of uncorrelated Gaussian variables with zero mean and unit variance. As a concrete realization of (6.48) we choose the following:

$$\displaystyle{ f(x) =\exp \left [-\left (\frac{x - 0.5} {\omega } \right )^{2}\right ]\,. }$$
(6.49)

The investigation of the relative behavior of two trajectories described by (6.48) and starting from different initial conditions is equivalent to the analysis of two identical systems of the form (6.48) subject to the same noise, by which we understand the use of the same sequence of random numbers \(\left \{\xi _{n}\right \}\). Figure 6.8 shows the bifurcation diagram for that mapping in the absence of noise. The chaoticity regions are clearly visible on the diagram. In those regions the maximal Lyapunov exponent is positive; for example, for ω = 0.3 (this case will be analyzed further) λ ≈ 0.53. In Fig. 6.9 one can see that at a sufficient noise level ɛ, this Lyapunov exponent becomes negative for most values of ω. Thus for ω = 0.3 and ɛ = 0.2 we find λ = −0.17.
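The two quoted values of λ can be reproduced directly from (6.50); the transient length, trajectory length, and initial point below are our assumptions:

```python
import math
import random

# Lyapunov exponent (6.50) for the mapping (6.48) with f(x) from (6.49).
random.seed(2)
OMEGA, N = 0.3, 200_000

def f(x):
    return math.exp(-((x - 0.5) / OMEGA) ** 2)

def fprime(x):
    return f(x) * (-2.0 * (x - 0.5) / OMEGA ** 2)

def lyapunov(eps):
    x, total = 0.4, 0.0
    for _ in range(1000):                       # discard a transient
        x = f(x) + eps * random.gauss(0.0, 1.0)
    for _ in range(N):
        total += math.log(abs(fprime(x)) + 1e-300)   # guard against f' = 0
        x = f(x) + eps * random.gauss(0.0, 1.0)
    return total / N

lam0 = lyapunov(0.0)   # positive (around 0.53 according to the text)
lam2 = lyapunov(0.2)   # negative (around -0.17 according to the text)
print(lam0, lam2)
```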

Fig. 6.8 Bifurcation diagram for the mapping (6.48) in the absence of noise [22]

Fig. 6.9 Lyapunov exponent for the mapping (6.49): ɛ = 0 (solid line), ɛ = 0.1 (dashed line), ɛ = 0.2 (dash-dot line) [22]

Fig. 6.10 The synchronization diagram (\(x^{(2)}\) as a function of \(x^{(1)}\)) for the case ω = 0.3: (a) ɛ = 0, (b) ɛ = 0.2 [22]

The positivity of the Lyapunov exponent in the noiseless case means that two trajectories starting from different initial conditions and driven both by the deterministic part f(x n ) and by the same random sequence \(\left \{\xi _{n}\right \}\) will not coincide at any arbitrarily large n. In this case, the synchronization diagram (\(x^{(2)}\) as a function of \(x^{(1)}\)) represents a wide and almost uniform distribution (Fig. 6.10a). However, at \(\varepsilon \geqslant 0.2\), when the maximal Lyapunov exponent becomes negative, we observe almost complete synchronization (Fig. 6.10b). The noise intensity is not high enough for the deterministic term in (6.48) to be neglected. Therefore, the synchronization mechanism that we want to understand is far from trivial.
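The collapse itself takes only a few lines to observe: two copies of (6.48), (6.49) are driven by the same Gaussian sequence (the initial points and iteration count are our assumptions):

```python
import math
import random

random.seed(3)
OMEGA, EPS = 0.3, 0.2

def f(x):
    return math.exp(-((x - 0.5) / OMEGA) ** 2)

x1, x2 = 0.9, 0.1            # two distinct initial conditions
dist_before = abs(x1 - x2)
for _ in range(5000):
    xi = random.gauss(0.0, 1.0)   # common noise realization
    x1 = f(x1) + EPS * xi
    x2 = f(x2) + EPS * xi
dist_after = abs(x1 - x2)
print(dist_before, dist_after)
```

Since λ ≈ −0.17 at this noise level, the separation shrinks on average by a factor \(e^{\lambda}\) per step and the two copies land on one noisy trajectory.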

Fig. 6.11 Distribution function for the mapping (6.48) in the case ω = 0.3: (a) ɛ = 0, (b) ɛ = 0.2 [22]

The Lyapunov exponent determining the synchronization condition for the mapping (6.48) can be represented in the form

$$\displaystyle{ \lambda =\mathop{ \lim }\limits _{N\rightarrow \infty } \frac{1} {N}\sum \limits _{i=1}^{N}\ln \left \vert F'(x_{ i})\right \vert \,. }$$
(6.50)

This expression represents the mean value of the logarithm of the absolute value of the derivative F′ (the slope), calculated along the trajectory \(\left \{x_{i}\right \}\). Slopes with absolute value less than unity give a negative contribution to λ, leading to synchronization; slopes with absolute value greater than unity give a positive contribution to λ and generate divergence of the trajectories. At first sight it seems that, since F′ = f′, the presence of noise does not modify the Lyapunov exponent. However, this is not so. The modification of the Lyapunov exponent by noise is connected with the noise-induced modification of the trajectory along which the averaging (6.50) takes place. In order to understand this, we will use the expression for the Lyapunov exponent in terms of the stationary distribution function P st(x),

$$\displaystyle{ \lambda = \left \langle \log \left \vert F'(x)\right \vert \right \rangle = \left \langle \log \left \vert \,f'(x)\right \vert \right \rangle \equiv \int P_{\mathrm{st}}(x)\log \left \vert \,f'(x)\right \vert dx\,. }$$
(6.51)

We see that when any perturbation is included there are two mechanisms that can modify the Lyapunov exponent: a change of \(\left \vert \,f'(x)\right \vert \) and a reconstruction of the distribution function. For additive noise it is the latter mechanism that operates. In Fig. 6.11, one can see the reconstruction of the stationary distribution function for the mapping (6.48). We conclude that synchronization will be a common feature of those mappings [for example, (6.48)] for which, upon the inclusion of noise, the regions with \(\left \vert \,f'(x)\right \vert < 1\) acquire sufficient statistical weight.

Fig. 6.12 Noise-induced synchronization for the Lorenz system [22]: (a) ɛ = 0, (b) ɛ = 40

Let us consider one more example: noise-induced synchronization in the Lorenz system, with additive noise introduced into the equation for the coordinate y,

$$\displaystyle\begin{array}{rcl} \dot{x}& =& p(\,y - x)\,, \\ \dot{y}& =& -xz + rx - y+\varepsilon \xi \,, \\ \dot{z}& =& xy - bz\,. {}\end{array}$$
(6.52)

Here ξ(t) is white noise, a Gaussian random process with zero mean: \(\left \langle \xi (t)\right \rangle = 0,\;\left \langle \xi (t)\xi (t')\right \rangle =\delta (t - t')\). As we have already seen in the previous chapter, for the parameter values p = 10,  b = 8∕3,  r = 28 and in the absence of noise \(\left (\varepsilon = 0\right )\), the system (6.52) is chaotic (the maximal Lyapunov exponent is λ ≈ 0.9 > 0). Therefore, the trajectories starting from different initial conditions are absolutely uncorrelated (see Fig. 6.12a). The same situation also takes place at low noise intensities. However, at a noise intensity that makes the maximal Lyapunov exponent negative (for ɛ = 40, λ ≈ −0.2), almost complete synchronization of all three coordinates is observed (see Fig. 6.12b for the coordinate z). We stress that, although the noise intensity is relatively high, the strange attractor preserves the “butterfly” topology characteristic of the deterministic case. This fact stresses once more that in the considered examples we are not dealing with the trivial synchronization which takes place when the deterministic terms in the mapping (or in the equations of motion) can be neglected.
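An Euler-Maruyama sketch of (6.52) for two copies driven by the same noise realization looks as follows; the step size, integration time, and initial conditions are our illustrative assumptions:

```python
import math
import random

random.seed(4)
P, B, R, EPS = 10.0, 8.0 / 3.0, 28.0, 40.0
dt, steps = 0.001, 250_000        # integrate up to t = 250
sqdt = math.sqrt(dt)

def drift(v):
    x, y, z = v
    return (P * (y - x), -x * z + R * x - y, x * y - B * z)

v1 = [1.0, 1.0, 1.0]              # two copies with different initial points
v2 = [-5.0, 0.0, 20.0]
for _ in range(steps):
    xi = random.gauss(0.0, 1.0)   # common Wiener increment
    d1, d2 = drift(v1), drift(v2)
    v1 = [v1[i] + dt * d1[i] for i in range(3)]
    v2 = [v2[i] + dt * d2[i] for i in range(3)]
    v1[1] += EPS * sqdt * xi      # noise enters the y equation only
    v2[1] += EPS * sqdt * xi

dist = math.dist(v1, v2)
print(dist)
```

With λ ≈ −0.2 at ɛ = 40, the separation decays roughly as \(e^{\lambda t}\), so by t = 250 the two copies ride the same noisy attractor.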

A natural question arises about the structural stability of the considered phenomenon. Unlike two identical Lorenz systems (with the same coefficients p, b, r), two real systems never have identical sets of parameters. Therefore, if we intend to use noise-induced synchronization, for example for communication purposes, we should first estimate the permissible difference between the parameters of the transmitter and the receiver. In order to address this problem, we numerically analyze the dynamics of two Lorenz systems with slightly different parameters (p 1, b 1, r 1) and (p 2, b 2, r 2), but subject to the same noise term ɛ ξ. In order to estimate the effect of variation of each of the parameters, we vary them independently. The result of the procedure is presented in Fig. 6.13, where we plot the fraction of the full observation time (in percent) during which the systems were synchronized to within 10 %: the trajectories of the two systems were considered synchronized if the relative difference of their coordinates was less than 10 %. From Fig. 6.13 one can conclude that for parameter variations of the order of 1 %, the systems remain synchronized during 85 % of the total observation time.

Fig. 6.13 The synchronization time for the Lorenz system (as a percentage of the total observation time) as a function of the parameters [22]: (a) b, (b) r, (c) p

6.7 Synchronization of Space-Temporal Chaos

Most physical phenomena in domains dealing with extended physical objects (hydrodynamics, electromagnetism, plasma physics, chemical dynamics, biological physics, and many others) can be described only with the help of partial differential equations. Only under certain simplifying assumptions do those equations reduce to a system of coupled ordinary differential equations or coupled map lattices. All of the examples of chaotic system synchronization that we have considered belong to finite-dimensional (moreover, low-dimensional) systems. The behavior of spatially extended non-linear systems is considerably complicated by space-temporal chaos (turbulence), which is characteristic of most of them. In these cases, chaotic behavior is observed both in time and in space. A natural question arises: how efficient will the above low-dimensional synchronization methods be for space-temporal chaos? We will not dwell on this question in detail, referring the reader instead to the reviews [12, 30]. We will only consider the possibility of space-temporal chaos synchronization [31] using the example of an autocatalytic model demonstrating chaos [32],

$$\displaystyle\begin{array}{rcl} \frac{\partial u_{1}} {\partial t} & =& -u_{1}v_{1}^{2} + a(1 - u_{ 1}) + D_{u}\nabla ^{2}u_{ 1}\,, \\ \frac{\partial v_{1}} {\partial t} & =& u_{1}v_{1}^{2} - (a + b)v_{ 1} + D_{v}\nabla ^{2}v_{ 1}\,,{}\end{array}$$
(6.53)

where \(u_{1}\) and \(v_{1}\) are the reactant and activator concentrations, respectively, a, b are reaction parameters, and \(D_{u}\), \(D_{v}\) are diffusion constants. We will consider the system (6.53) as driving with respect to the analogous system

$$\displaystyle\begin{array}{rcl} \frac{\partial u_{2}} {\partial t} & =& -u_{2}v_{2}^{2} + a(1 - u_{ 2}) + D_{u}\nabla ^{2}u_{ 2}\,, \\ \frac{\partial v_{2}} {\partial t} & =& u_{2}v_{2}^{2} - (a + b)v_{ 2} + D_{v}\nabla ^{2}v_{ 2} + f(x,t)\,.{}\end{array}$$
(6.54)

Suppose \(v_{2}(t - 0)\) is the value of \(v_{2}\) immediately before the time moment t. The driving function f(x, t) acts on the system in the following way. Let L be the linear dimension of the chemical reactor, L = NX, t = kT, where T > 0, X > 0, and N, k are integers. At every moment of time t = kT, at the N spatial points \(x = 0, X, 2X,\ldots,(N - 1)X\), the driving performs simultaneously the transformation

$$\displaystyle{ v_{2}(kT - 0) \rightarrow v_{2}(kT) = v_{2}(kT - 0) +\varepsilon \left [v_{1}(kT) - v_{2}(kT - 0)\right ]\,. }$$
(6.55)

At the time moments \(t\neq kT\) the systems (6.53) and (6.54) are not connected and evolve independently. We note that for X = T = 0, ɛ = 1 such driving reduces to the full replacement considered above. The motivation for selecting the driving in the form (6.55) is twofold: on the one hand, we intend to achieve synchronization by controlling only a finite number N of spatial points; on the other hand, we want to do this using a time-discrete perturbation.

Fig. 6.14 The results of numerical simulation of the evolution described by the systems (6.53), (6.54): (a) space-temporal dependence u 1(x, t); (b) difference | u 1 − u 2 | ; (c) global synchronization error e(t) (6.56) [30]

The results of the numerical simulation of the evolution described by (6.53), (6.54) are presented in Fig. 6.14. For the integration, the Euler scheme was implemented with M = 256 spatial nodes and a time step \(\varDelta t = 0.05\). The following parameter values were chosen:

$$\displaystyle{a = 0.028,\;b = 0.053,\;D_{v} = 1.0 \times 10^{-5},\;D_{ u} = 2D_{v},\;L = 2.5\,.}$$

Figure 6.14a demonstrates the space-temporal evolution u 1(x, t), described by (6.53), with initial conditions u(x) = 1, v(x) = 0.

To simulate the partial differential equation systems (6.53), (6.54) with the condition (6.55), the following parameter values were taken: ɛ = 0.2, T = 20Δt, \(X = \left (8/256\right )L\). In other words, the perturbation acted on 32 of the 256 spatial nodes. It turned out that there is a critical value \(X_{\mathrm{cr}}\) such that for all X < X cr the systems (6.53) and (6.54) can be synchronized. For the chosen parameter set \(X_{\mathrm{cr}} = \left (14/256\right )L\), and this number does not change with an increase of M. This important example shows that an infinite-dimensional system can be synchronized by the perturbation of a finite number of points, i.e., synchronization is achieved with the help of a driving signal in the form of an N-dimensional vector.

Suppose the driving function is turned on at t = 5000. Figure 6.14b presents the difference \(\left \vert u_{1} - u_{2}\right \vert \) (the turn-on moment is denoted by the dashed line). Those regions of \(\left (x,t\right )\) space, where that function is large, i.e., the desynchronization regions, are painted in black. One can see that such regions are present only before the moment the driving signal is turned on, t < 5000. In order to make the effect clearer, we introduce the global synchronization error e(t),

$$\displaystyle{ e = \sqrt{ \frac{1} {L}\int _{0}^{L}\left [\left (u_{1} - u_{2}\right )^{2} + \left (v_{1} - v_{2}\right )^{2}\right ]dx}\,. }$$
(6.56)

As one can see from Fig. 6.14c, that error tends to zero after the synchronization mechanism (6.55) is turned on.
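A compact NumPy sketch of the pinned-driving scheme is given below. Note one assumption we must add: the stated initial condition u = 1, v = 0 is a homogeneous steady state of (6.53), so we seed both systems with localized perturbations to start the pattern dynamics; the seeding, run times, and random perturbation are ours, not from [31]:

```python
import numpy as np

# 1D sketch of (6.53), (6.54) with the pinned driving (6.55).
a, b, Dv = 0.028, 0.053, 1.0e-5
Du, L, M, dt = 2 * Dv, 2.5, 256, 0.05
dx = L / M
eps, Tkick = 0.2, 20           # kick strength and period T = 20*dt (in steps)
pinned = np.arange(0, M, 8)    # 32 of 256 nodes, spacing X = (8/256) L

def rhs(u, v):
    lap_u = (np.roll(u, 1) + np.roll(u, -1) - 2 * u) / dx**2
    lap_v = (np.roll(v, 1) + np.roll(v, -1) - 2 * v) / dx**2
    return (-u * v**2 + a * (1 - u) + Du * lap_u,
            u * v**2 - (a + b) * v + Dv * lap_v)

rng = np.random.default_rng(0)
u1, v1 = np.ones(M), np.zeros(M)
v1[M // 2 - 8 : M // 2 + 8] = 0.25            # seed pattern formation
u2 = np.ones(M) + 0.01 * rng.standard_normal(M)
v2 = np.zeros(M)
v2[M // 4 - 8 : M // 4 + 8] = 0.25            # different initial condition

kick_residuals = []
for n in range(40_000):                        # integrate up to t = 2000
    du1, dv1 = rhs(u1, v1)
    du2, dv2 = rhs(u2, v2)
    u1, v1 = u1 + dt * du1, v1 + dt * dv1
    u2, v2 = u2 + dt * du2, v2 + dt * dv2
    if n >= 20_000 and n % Tkick == 0:         # driving on after t = 1000
        before = v2[pinned] - v1[pinned]
        v2[pinned] += eps * (v1[pinned] - v2[pinned])   # transformation (6.55)
        after = v2[pinned] - v1[pinned]
        # each kick shrinks the pinned-node mismatch by exactly (1 - eps)
        kick_residuals.append(np.max(np.abs(after - (1 - eps) * before)))

err = np.sqrt(np.mean((u1 - u2)**2 + (v1 - v2)**2))     # global error (6.56)
print(err)
```

The choice dt = 0.05 is comfortably below the explicit-scheme stability limit \(dx^{2}/(2D_{u})\) for these diffusion constants.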

6.8 Influence of Additive Noise and System Non-identity on Synchronization Effects

In the present section we intend to make a quantitative investigation of the transition from the initial idealized problem formulation (synchronization of identical systems in the absence of noise) to a realistic one, accounting for the inevitable presence of internal noise and for deviations in the system parameters [33]. The latter implies that the free dynamics of the driving and of the driven systems will differ even for the same initial conditions. In the transition from idealization to reality we face the problem of synchronizing experimentally measured time series. By the driving system we will understand an experimentally observable system whose dynamics are known only in the sense that time series of measurements of the system's characteristics are given. The driven system represents a model that can be constructed based on the temporal measurements made on the driving system. Suppose that the unknown dynamics of the driving system in some “work phase space” is represented by the equation

Fig. 6.15 An example of synchronization for the model (6.58). Lower curve: squared distance between the driving and the driven trajectories \(\vert \mathbf{z}\vert ^{2} = \vert \mathbf{x} -\mathbf{ y}\vert ^{2}\); upper curve: squared distance between the driving trajectory and the free trajectory of the model system \(\vert \mathbf{z}\vert ^{2} = \vert \mathbf{x} -\mathbf{ w}\vert ^{2}\) [33]

$$\displaystyle{ \dot{\mathbf{x}} =\mathbf{ G}(\mathbf{x}) }$$
(6.57)

and the model dynamics in the same space is

$$\displaystyle{ \dot{\mathbf{x}} =\mathbf{ F}(\mathbf{x})\,. }$$
(6.58)

We assume that the corresponding embedding theorems (see Chap. 4) guarantee the existence of (6.57) in the work phase space. Figure 6.15 represents an example of synchronization of the model (6.58) with the time series obtained from (6.57). Let \(\mathbf{x}(t)\) be some trajectory measured with the help of the “experimental setup” (6.57). We now use that trajectory and the model (6.58) to generate two new trajectories. The trajectory \(\mathbf{w}(t)\) is obtained by forward time integration of (6.58), using the first point of the trajectory \(\mathbf{x}(t)\) as the initial condition. The trajectory \(\mathbf{y}(t)\) results from the synchronization process: the substitution of the measured time series for one coordinate in the model equation (6.58). The lower curve in Fig. 6.15 represents the square of the distance between the driving and the driven trajectories, \(\left \vert \mathbf{z}\right \vert ^{2} = \left \vert \mathbf{x} -\mathbf{ y}\right \vert ^{2}\). The upper curve is the squared distance between the driving trajectory and the free trajectory of the model system, \(\left \vert \mathbf{z}\right \vert ^{2} = \left \vert \mathbf{x} -\mathbf{ w}\right \vert ^{2}\). The degree of smallness of the lower curve with respect to the upper one determines the quality of the synchronization.

In the total absence of noise and model errors (i.e., for \(\mathbf{F} =\mathbf{ G}\)) we expect exact synchronization, \(\left \vert \mathbf{z}\right \vert ^{2} = 0\). For physical devices and model equations this never happens, as the driving signal always contains a noise component and model errors are inevitable. Therefore, a physical device and a model can be synchronized only approximately. As there are no two exactly identical devices, this also concerns the synchronization of two experimental setups. It is natural to expect that with an increase of the noise level or of the magnitude of the model errors, the amplitude of the lower curve in Fig. 6.15 will grow. It is the character of that growth that determines the quantitative measure of the influence of noise and model errors on the synchronization process.

Let us now use the following quantity as the driving signal:

$$\displaystyle{ \mathbf{x} +\sigma \mathbf{ u}\,. }$$
(6.59)

Here \(\mathbf{x}\) is the time series (6.57), \(\sigma \mathbf{u}\) is an additive noise term associated with errors in the measurements of the driving signal, σ is the noise level, and \(\mathbf{u}\) is a random Gaussian vector with zero mean and unit variance of the components. Errors may be induced by random deviations of the device parameters from their nominal values and by background noise measured together with the signal. To synchronize the device (6.57) and the model (6.58) we use negative feedback with the driving signal (6.59),

$$\displaystyle{ \dot{\mathbf{y}} =\mathbf{ F}(\mathbf{y}) -\hat{E} \left [\mathbf{y} - (\mathbf{x} +\sigma \mathbf{ u})\right ]\,. }$$
(6.60)

The matrix \(\hat{E}\) determines the connection between \(\mathbf{y}\) and the experimentally measured time series. Further, we assume that the matrix has a single non-zero element on the diagonal, E ii  = ɛ, if the ith component of \(\mathbf{x} +\sigma \mathbf{ u}\) is used as the driving signal. Inside some region of values of ɛ for which the maximal conditional Lyapunov exponent is negative, the feedback (6.60) must lead to synchronization between \(\mathbf{x}\) and \(\mathbf{y}\), and all deviations are connected either with model errors or with the presence of noise. Assuming the smallness of \(\left \vert \mathbf{z}\right \vert \), the deviation of the model dynamics from the device dynamics, the linearized time evolution of \(\mathbf{z}\) is described by the equation

$$\displaystyle{ \dot{\mathbf{z}} = \left [\mathbf{D}\mathbf{F}(\mathbf{x}) -\hat{E} \right ]\mathbf{z} +\sigma \hat{E} \mathbf{u} +\varDelta \mathbf{ G}\left (x\right ), }$$
(6.61)

where \(\varDelta \mathbf{G} =\mathbf{ F} -\mathbf{ G}\), \(\left (\mathbf{D}\mathbf{F}\right )_{ij} = \frac{\partial F_{i}} {\partial x_{j}}\). The quantity \(\varDelta \mathbf{G}\) has two potential sources. The first is the error arising in the modeling of the unknown vector field \(\mathbf{G}\): in any real situation \(\mathbf{F}\) and \(\mathbf{G}\) never coincide. The second is connected with the fact that the dynamics of the driving signal differ from the dynamics reproduced by the time series used to construct the model. In order to separate these two sources, we assume that the time series used to construct the model comes from the vector field \(\mathbf{G}\), while the driving signal is generated by the field \(\mathbf{G}'\). We will consider that the distinction between those two fields is connected with the variation of some parameter set \(\mathbf{p}\) of the driving system, i.e.,

$$\displaystyle{ \mathbf{G}\cong \mathbf{G}' + \left (\partial \mathbf{G}'/\partial \mathbf{p}\right ) \cdot \delta \mathbf{ p} }$$
(6.62)

then

$$\displaystyle{ \varDelta \mathbf{G}\left (\mathbf{x}\right )\cong \varDelta \mathbf{G}'\left (\mathbf{x}\right ) + \left (\partial /\partial \mathbf{p}\left (\varDelta \mathbf{G}'\left (\mathbf{x}\right )\right )\right ) \cdot \delta \mathbf{ p}\,, }$$
(6.63)

where \(\varDelta \mathbf{G}' \equiv \mathbf{ F} -\mathbf{ G}'\). Equation (6.61) [accounting for (6.63)] is an evolution equation for the coupled device-model system in the vicinity of synchronized motion. In the absence of noise \(\left (\sigma = 0\right )\) and for ideal model dynamics \(\left (\varDelta \mathbf{G} = 0\right )\), we have

$$\displaystyle{ \dot{\mathbf{z}} = \left [\mathbf{D}\mathbf{F}\left (\mathbf{x}\right ) -\hat{E} \right ]\mathbf{z}\,. }$$
(6.64)

The formal solution of that homogeneous linear equation reads

$$\displaystyle{ \mathbf{z}(t) =\exp \left [\int _{t_{0}}^{t}\left [\mathbf{D}\mathbf{F}(\tau ) -\hat{E} \right ]d\tau \right ]\mathbf{z}(t_{ 0}) \equiv \hat{U } (t,t_{0})\mathbf{z}(t_{0})\,, }$$
(6.65)

where \(\mathbf{D}\mathbf{F}(\tau ) =\mathbf{ D}\mathbf{F}\left [\mathbf{x}(\tau )\right ]\). The evolution operator \(\hat{U } (t,t_{0})\) maps the initial condition \(\mathbf{z}(t_{0})\) forward in time, accounting for the coupling but in the absence of noise and modeling errors. In order to obtain the general solution of Eq. (6.61), one should add a particular solution to the general solution (6.65) of the homogeneous equation. To obtain the particular solution, we make the change of variables \(\mathbf{z}(t) = \hat{U } (t,t_{0})\mathbf{w}(t)\). Substitution into (6.61) gives

$$\displaystyle{ \frac{d\mathbf{w}} {dt} = \hat{U }^{-1}(t,t_{0})\left [\varDelta \mathbf{G}(t) +\sigma \hat{E} \cdot \mathbf{ u}(t)\right ]\,. }$$
(6.66)

Solving this equation taking into account (6.65), we obtain the general solution of Eq. (6.61) in the form

$$\displaystyle{ \mathbf{z}(t) = \hat{U } (t,t_{0}) \cdot \mathbf{ z}(t_{0}) +\int _{ t_{0}}^{t}\hat{U } (t,\tau ) \cdot \left [\varDelta \mathbf{G}(\tau ) +\sigma \hat{E}\cdot \mathbf{ u}(\tau )\right ]d\tau \,. }$$
(6.67)

This equation describes the time evolution of the difference between the trajectory given by Eq. (6.58) and the “exact” system trajectory. Such a solution is valid only under conditions close to the synchronization regime. Because of the stability of synchronized motion, we can neglect the first term in (6.67), as it tends exponentially quickly to zero with increasing time. The second term in (6.67) describes a complicated non-local dependence of \(\mathbf{z}(t)\) on model errors and noise: the degree of synchronization at the moment t is determined by model errors and noise fluctuations at all preceding moments.
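As an illustration (not from the source), the structure of the evolution-operator solution is easy to verify numerically in the special case of a constant Jacobian, where \(\hat{U } (t,t_{0})\) reduces to a matrix exponential. The matrix A (standing in for \(\mathbf{D}\mathbf{F} -\hat{E}\)) and the initial condition below are arbitrary illustrative choices:

```python
import numpy as np

# Toy check: for a CONSTANT Jacobian A, U(t, t0) = exp[A (t - t0)],
# so z(t) = exp[A t] z(0).  A and z0 are arbitrary, not from the text.
A = np.array([[-2.0, 1.0],
              [0.5, -1.0]])
z0 = np.array([1.0, -0.5])
T = 1.0

# Matrix exponential via eigendecomposition (A is diagonalizable here)
w, V = np.linalg.eig(A)
z_exact = (V @ np.diag(np.exp(w * T)) @ np.linalg.inv(V) @ z0).real

# Direct RK4 integration of dz/dt = A z for comparison
dt, z = 1e-3, z0.astype(float)
for _ in range(int(T / dt)):
    k1 = A @ z
    k2 = A @ (z + 0.5 * dt * k1)
    k3 = A @ (z + 0.5 * dt * k2)
    k4 = A @ (z + dt * k3)
    z = z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print(z_exact, z)  # the two results should agree to high accuracy
```

For a time-dependent Jacobian the exponential must be replaced by the (time-ordered) operator \(\hat{U } (t,t_{0})\), but the constant case already shows how the homogeneous part of (6.67) propagates the initial deviation.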

Returning to Fig. 6.15, we note that although the time dependence of \(\left \vert \mathbf{z}\right \vert ^{2}\) is very complex, its mean value is practically constant. This mean can be used to characterize the degree of synchronization between the exact driving signal and the one generated by the model. We define this characteristic by the following time average:

$$\displaystyle{ \left [\left \langle \left \vert \mathbf{z}\right \vert ^{2}\right \rangle \right ]^{1/2} = \left [\lim _{t\rightarrow \infty } \frac{1} {t - t_{0}}\int _{t_{0}}^{t}\left \vert \mathbf{z}(\tau )\right \vert ^{2}d\tau \right ]^{1/2}\,. }$$
(6.68)

This expression can be represented in the form

$$\displaystyle{ \left [\left \langle \left \vert \mathbf{z}\right \vert ^{2}\right \rangle \right ]^{1/2} = \left [A^{2} + \left (\sigma B\right )^{2}\right ]^{1/2}\,, }$$
(6.69)

where A is some complicated function of the model errors, and the quantity B is determined by the statistical properties of the noise. We stress that neither A nor B depends on the noise level σ. The dependence (6.69) is confirmed by numerical experiments [33].
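In practice the time average (6.68) is estimated from a finite, sampled record of \(\mathbf{z}(t)\). A minimal sketch (the function name and test signal are my own choices, not from [33]):

```python
import numpy as np

def sync_degree(z):
    """Estimate the measure (6.68), [<|z|^2>]^(1/2), from equally spaced
    samples of z(t); z may be scalar-valued (1-D) or vector-valued
    (2-D, one row per time sample)."""
    z = np.asarray(z, dtype=float)
    sq = z**2 if z.ndim == 1 else np.sum(z**2, axis=1)  # |z(t_k)|^2
    return float(np.sqrt(np.mean(sq)))

# Sanity check with a known signal: for z(t) = sin t the time average of
# |z|^2 is 1/2, so the measure tends to 1/sqrt(2) for a long record.
t = np.arange(0.0, 1000.0, 0.01)
print(round(sync_degree(np.sin(t)), 3))  # prints 0.707
```

The smaller this number, the better the model tracks the device; according to (6.69) its square decomposes into a model-error part A² and a noise part (σB)².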

We finish this section by discussing the connection between the obtained results and their possible applications. One of them is the identification of chaotic sources. Suppose that the only available information about some non-linear system is a previously measured time dependence \(\mathbf{x}(t)\). At some later moment we obtain a new time dependence \(\mathbf{x}'(t)\), and we want to know whether both signals come from the same system. In order to answer this question, we should construct a model approximately reproducing the series \(\mathbf{x}(t)\) and try to synchronize it with an analogous model for \(\mathbf{x}'(t)\). If synchronization is possible, then there is a high probability that \(\mathbf{x}(t)\) and \(\mathbf{x}'(t)\) have a common source. Noise and errors in model construction will obviously affect the synchronization quality. Therefore, if we want to use synchronization as a system identification method, we must know how to estimate the influence of noise and model errors.

An interesting application of the obtained results is connected with the realization of so-called non-destructive control methods. Let us consider some device to be placed in a difficult-to-access working space (for example, a sensor in a nuclear reactor). Before use, the device is subjected to a calibrating signal and the corresponding time dependence is recorded. After that, a model of the device is constructed and one determines the degree of synchronization between the model and the recorded time dependence. After some time we again act on the device with the calibrating signal and record the new time series. Then we try to synchronize that series with the old model. If the device has been under strong environmental influence, its dynamics will have changed, and this in turn leads to changes in the degree of synchronization. Observing these changes, we can draw conclusions about the need to repair or replace the device. In order to draw correct conclusions, one needs the above quantitative estimates of the influence of dynamics changes on synchronization.

6.9 Synchronization of Chaotic Systems and Transmission of Information

The possibility of synchronizing chaotic systems opens wide prospects for the application of chaos to information transmission. Any new information transmission scheme must satisfy some fairly evident requirements:

  • Competitiveness (simplicity of realization and at least partial superiority over existing analogues).

  • High performance.

  • Reliability and stability with respect to noise of different types: self-noise and external noise.

  • Guarantee of a given security level.

  • Simultaneous access to multiple users.

Of course, every new scheme is initially oriented toward success in one of the above points, but one should then show that the proposed scheme satisfies, to some extent, all the other requirements as well. We choose as the central requirement the achievement of a security level exceeding that of available analogues. Our choice is dictated by the fact that this requirement draws on new physics: the synchronization of chaotic systems.

Codes appeared in antiquity. Caesar had his own secret alphabet. In the Middle Ages, Bacon, Viète, and Cardano worked on inventing secret ciphers. Edgar Allan Poe and Sir Arthur Conan Doyle did a great deal to popularize the deciphering process. During the Second World War, the unraveling of the enemy’s ciphers played an important role in the outcome of particular episodes. Finally, Shannon demonstrated that it is possible to construct a cryptogram which cannot be deciphered if the method of its composition is unknown.

Random signals have many advantages for secure information transmission. First, a random signal can be unrecognizable against a background of natural noise. Second, even if the signal could be detected, the unpredictability of its variation furnishes no direct clues to the information contained in it. Also, a broadband chaotic signal is harder to jam. However, the legitimate recipient should be able to decode the information. In principle, a secret communication system of this type could use two identical chaotic oscillators: one as a transmitter and another as a receiver. The chaotic oscillations of the transmitter would be used for coding and those of the receiver for decoding. The idea is simple but difficult to realize, because any small difference in the initial conditions and parameters of the chaotic systems will lead to totally different output signals.

Different ways to overcome this fundamental difficulty have been investigated. In the end, the most promising direction turned out to be the chaotic synchronization considered in the present chapter. The use of synchronized chaos for secret communications was the topic of a series of papers published in the 1990s (see [34–36]).

The principal scheme for the transmission of coded information based on the chaos synchronization effect is presented in Fig. 6.16. The transmitter adds the chaotic x-component generated by the driving system to the informational (for example, sound) signal i. The addition should be understood in a broad sense, including: (1) the transmission of the proper sum of the chaotic x(t) and informational i(t) signals; (2) the transmission of the product x(t)i(t); and (3) the transmission of the combination x(t)[1 + i(t)]. The summed signal is detected by the receiver. The synchronized signal generated in the receiver is subtracted from the received message. The difference approximately equals the coded informative signal.

It is evident that the ability of such a scheme to work at all rests on the robustness (structural stability) of the synchronization process: the addition of a weak informative signal to the chaotic one does not affect its ability to synchronize the receiver and the transmitter.

Fig. 6.16
figure 16

Principal scheme of the coded information transmission, based on the chaos synchronization effect

Fig. 6.17
figure 17

Analog realization of the Van der Pol–Duffing oscillator model

Let us analyze in more detail the operation of this scheme [36] using the example of a physically interesting model: the Van der Pol–Duffing oscillator. Its analog realization is presented in Fig. 6.17. Recall that by an analog setup we understand a system where every instantaneous value of a quantity entering the input relations corresponds to an instantaneous value of another quantity, often different from the original one in its physical nature. Every elementary mathematical operation on the machine’s quantities corresponds to some physical law, which establishes the dependence between the physical quantities at the input and output of the computing element: for example, Ohm’s law corresponds to division, Kirchhoff’s laws to addition, and the Lorentz force to the vector product.

We introduce a cubically non-linear element N into the circuit (Fig. 6.17), which gives the following relation:

$$\displaystyle{ I\left (V \right ) = aV + bV ^{3};\quad a < 0,\;b > 0 }$$
(6.70)

between the current I and the applied voltage V. Applying Kirchhoff’s laws to different parts of the circuit and rescaling the variables, we obtain the following set of dynamical equations:

$$\displaystyle\begin{array}{rcl} \dot{x}& =& -\gamma (x^{3} -\alpha x - y)\,, \\ \dot{y}& =& x - y - z\,, \\ \dot{z}& =& \beta y\,. {}\end{array}$$
(6.71)

Here x, y, z are the rescaled voltages across \(C_{1}\), \(C_{2}\) and the current through L, respectively; α, β, γ are rescaled circuit parameters. Numerical simulation of Eqs. (6.71) with fixed α, γ demonstrates a transition to chaos via the period-doubling scenario as β decreases. In particular, for γ = 100, α = 0.35, β = 300 a chaotic attractor is observed in the phase space. We will consider the system (6.71) as the driving one, with the coordinate x as the full-replacement (driving) variable. Then the equations of motion for the driven system (its coordinates are primed) read

$$\displaystyle\begin{array}{rcl} x'& =& x\,, \\ \dot{y}'& =& x - y' - z'\,, \\ \dot{z}'& =& \beta y'\,. {}\end{array}$$
(6.72)
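The chaotic regime of the driving system (6.71) claimed above can be probed numerically. The sketch below (parameters are those of the text; the initial conditions, step size, and run length are my own assumptions) integrates two nearby trajectories with a classical RK4 scheme: for a chaotic attractor their separation grows by several orders of magnitude while both trajectories stay bounded.

```python
import numpy as np

ALPHA, BETA, GAMMA = 0.35, 300.0, 100.0   # parameters from the text

def f(u):
    """Right-hand side of the driving system (6.71)."""
    x, y, z = u
    return np.array([-GAMMA * (x**3 - ALPHA * x - y),
                     x - y - z,
                     BETA * y])

def rk4_step(u, dt):
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 1e-3, 30_000                  # integrate to t = 30
u = np.array([0.1, 0.1, 0.1])             # assumed initial condition
v = u + np.array([1e-6, 0.0, 0.0])        # tiny perturbation
max_sep = max_amp = 0.0
for _ in range(steps):
    u, v = rk4_step(u, dt), rk4_step(v, dt)
    max_sep = max(max_sep, float(np.linalg.norm(u - v)))
    max_amp = max(max_amp, float(np.abs(u).max()))

print(f"max amplitude = {max_amp:.2f}, max separation = {max_sep:.2e}")
```

The growth of an initially microscopic offset is exactly the sensitivity that makes the unsynchronized receiver useless, and that the scheme below must overcome.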

Let us now show that the subsystem (6.72), which we have chosen as the driven one, is globally stable. For this we use the Lyapunov function method. Denoting \(y - y' = y^{{\ast}}\), \(z - z' = z^{{\ast}}\), from (6.71), (6.72) we get

$$\displaystyle{ \left (\begin{array}{*{20}c} \dot{y} ^{{\ast}} \\ \dot{z} ^{{\ast}}\\ \end{array} \right ) = \left (\begin{array}{*{20}c} -1&-1\\ \beta & 0\\ \end{array} \right )\left (\begin{array}{*{20}c} y^{{\ast}} \\ z^{{\ast}}\\ \end{array} \right )\,. }$$
(6.73)

For the Lyapunov function we take the following:

$$\displaystyle{ L = \frac{1} {2}\left [\left (\beta y^{{\ast}} + z^{{\ast}}\right )^{2} +\beta y^{{\ast}2} + (1+\beta )z^{{\ast}2}\right ]\,. }$$
(6.74)

Using the equations of motion (6.73), we find

$$\displaystyle\begin{array}{rcl} \dot{L}& =& \left (\beta y^{{\ast}} + z^{{\ast}}\right )\left (\beta \dot{y} ^{{\ast}} + \dot{z} ^{{\ast}}\right ) +\beta y^{{\ast}}\dot{y} ^{{\ast}} + \left (1+\beta \right )z^{{\ast}}\dot{z} ^{{\ast}} \\ & =& -\beta \left (y^{{\ast}2} + z^{{\ast}2}\right )\leqslant 0,\quad \left (\beta > 0\right )\,. {}\end{array}$$
(6.75)

Therefore the subsystem (6.72) is globally stable, i.e., for \(t \rightarrow \infty \)

$$\displaystyle{ \left \vert y - y'\right \vert \rightarrow 0,\quad \left \vert z - z'\right \vert \rightarrow 0\,. }$$
(6.76)
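The argument (6.73)–(6.76) can be checked numerically: along any solution of the linear error system (6.73), the Lyapunov function (6.74) must decrease monotonically to zero. A short sketch (the initial error and the step size are my own choices):

```python
# Numerical check of the Lyapunov-function argument (6.73)-(6.75);
# the initial error and step size are illustrative choices.
BETA = 300.0

def rhs(ys, zs):                 # error dynamics (6.73)
    return -ys - zs, BETA * ys

def lyap(ys, zs):                # Lyapunov function (6.74)
    return 0.5 * ((BETA * ys + zs)**2 + BETA * ys**2 + (1 + BETA) * zs**2)

dt, steps = 1e-3, 50_000         # integrate to t = 50 with RK4
ys, zs = 0.5, -0.3               # arbitrary initial error (y*, z*)
Ls = [lyap(ys, zs)]
for _ in range(steps):
    k1 = rhs(ys, zs)
    k2 = rhs(ys + 0.5 * dt * k1[0], zs + 0.5 * dt * k1[1])
    k3 = rhs(ys + 0.5 * dt * k2[0], zs + 0.5 * dt * k2[1])
    k4 = rhs(ys + dt * k3[0], zs + dt * k3[1])
    ys += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    zs += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    Ls.append(lyap(ys, zs))

print(f"L(0) = {Ls[0]:.3e}, L(50) = {Ls[-1]:.3e}")
```

Since \(\dot{L} = -\beta (y^{{\ast}2} + z^{{\ast}2})\) is strictly negative away from the origin, the recorded sequence of L values decreases at every step and collapses by many orders of magnitude.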

There is an interesting possibility to obtain a cascade of driven subsystems [37]. Suppose that the driving system is represented by (6.71), and the first driven system is written in terms of the variables y′, z′, excited by x(t). In addition, we can imagine a system containing the variable x″, excited by the variable y′. The total cascade of systems looks as follows.

The driving system

$$\displaystyle\begin{array}{rcl} \dot{x}& =& -\gamma (x^{3} -\alpha x - y)\,, \\ \dot{y}& =& x - y - z\,, \\ \dot{z}& =& \beta y\,. {}\end{array}$$
(6.77)

The first driven system

$$\displaystyle\begin{array}{rcl} \dot{y}'& =& x - y' - z'\,, \\ \dot{z}'& =& \beta y'\,. {}\end{array}$$
(6.78)

The second driven system

$$\displaystyle{ \dot{x} ''= -\gamma \left [\left (x''\right )^{3} -\alpha \left (x''\right ) - y'\right ]\,. }$$
(6.79)

If all the systems are synchronized, the signal x″(t) is identical to the driving signal x(t).
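This claim can be illustrated numerically. The sketch below (initial conditions, step size, and run length are my own assumptions; the parameters are those of the text) integrates the transmitter (6.77) together with the cascade (6.78), (6.79) and checks that both y′ → y and x″ → x:

```python
import numpy as np

ALPHA, BETA, GAMMA = 0.35, 300.0, 100.0   # parameters from the text

def f(s):
    """Joint right-hand side: transmitter (6.77) + cascade (6.78), (6.79)."""
    x, y, z, yp, zp, xpp = s
    return np.array([
        -GAMMA * (x**3 - ALPHA * x - y),        # (6.77)
        x - y - z,
        BETA * y,
        x - yp - zp,                            # (6.78), driven by x(t)
        BETA * yp,
        -GAMMA * (xpp**3 - ALPHA * xpp - yp),   # (6.79), driven by y'(t)
    ])

def rk4_step(s, dt):
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 1e-3
s = np.array([0.1, 0.1, 0.1,    # transmitter x, y, z
              -0.4, 0.2, 0.0])  # receiver y', z', x'' (deliberately mismatched)
for _ in range(40_000):         # integrate to t = 40
    s = rk4_step(s, dt)

err_y, err_x = abs(s[1] - s[3]), abs(s[0] - s[5])
print(f"|y - y'| = {err_y:.2e}, |x - x''| = {err_x:.2e}")
```

The convergence of y′ is guaranteed by the Lyapunov argument above; the convergence of x″ relies on the negativity of the conditional Lyapunov exponent of (6.79), as asserted in the source.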

Let us now focus our attention on using the constructed cascade for the transmission of secret information. In accordance with the above principal scheme, we use the signal x(t) as the masking noise, and s(t) as the information carrier. Let the receiver detect the transmitted signal r(t) = x(t) + s(t). As an analysis of the system of equations (6.77)–(6.79) shows [36], if the power level of the informative signal is considerably lower than the power level of the noise medium, \(\left \vert x(t)\right \vert \gg \left \vert s(t)\right \vert \), then \(\left \vert x(t) - x''(t)\right \vert \ll \left \vert s(t)\right \vert \). This, in turn, means that the signal \(s^{(1)}\), obtained as the result of the operation

$$\displaystyle{ s^{(1)} = r(t) - x''(t) = x(t) + s(t) - x''(t) \approx s(t) }$$
(6.80)

will be close to the initial informative signal s(t). The authors of [36] numerically solved the system of equations (6.77)–(6.79) with the parameters α = 0.35, β = 300, γ = 100. The informative signal s(t) was chosen in the following three forms:

Monochromatic signal:

s(t) = F sin(ω t), F = 0.02, ω = 1.

Amplitude-modulated signal:

s(t) = F sin(ω t)[1 + f sin(Ω t)], F = 0.02, ω = 1, f = 1, Ω = 0.2.

Frequency-modulated signal:

\(s(t) = F\sin [\omega t + f\sin \left (\varOmega t\right )]\), F = 0.02, ω = 1, f = 0.2, Ω = 0.2.
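A rough end-to-end simulation of the scheme for the monochromatic case can be sketched as follows. Here the receiver cascade (6.78), (6.79) is driven by the transmitted mixture r(t) = x(t) + s(t), and the message is recovered as in (6.80); the initial conditions, step size, transient cutoff, and error threshold are my own assumptions, not values from [36]:

```python
import numpy as np

ALPHA, BETA, GAMMA = 0.35, 300.0, 100.0   # parameters from the text
F, W = 0.02, 1.0                          # monochromatic s(t) = F sin(W t)

def f(state, r):
    """Transmitter (6.77) plus receiver cascade (6.78), (6.79) driven by r."""
    x, y, z, yp, zp, xpp = state
    return np.array([
        -GAMMA * (x**3 - ALPHA * x - y), x - y - z, BETA * y,   # transmitter
        r - yp - zp, BETA * yp,                                 # (6.78) with r(t)
        -GAMMA * (xpp**3 - ALPHA * xpp - yp),                   # (6.79)
    ])

def rk4_step(state, t, dt):
    # r(t) = x(t) + s(t) is rebuilt at every RK4 stage
    g = lambda st, tt: f(st, st[0] + F * np.sin(W * tt))
    k1 = g(state, t)
    k2 = g(state + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = g(state + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = g(state + dt * k3, t + dt)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 1e-3, 40_000
state = np.array([0.1, 0.1, 0.1, -0.4, 0.2, 0.0])
recovered, original = [], []
for i in range(steps):
    t = i * dt
    state = rk4_step(state, t, dt)
    if t > 25.0:                            # discard synchronization transient
        s_val = F * np.sin(W * (t + dt))
        r_val = state[0] + s_val            # transmitted signal r = x + s
        recovered.append(r_val - state[5])  # s1 = r - x''   (6.80)
        original.append(s_val)

recovered, original = np.array(recovered), np.array(original)
ratio = np.sqrt(np.mean((recovered - original)**2)) / np.sqrt(np.mean(original**2))
print(f"relative recovery error: {ratio:.3f}")
```

A relative error well below one means the recovered waveform tracks s(t) despite the chaotic mask being roughly fifty times stronger, in line with the spectra of Fig. 6.18.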

Fig. 6.18
figure 18

Power spectra for the informative signal s(t), the transmitted signal r(t), and the reconstructed signal \(s^{(1)}(t)\) for the monochromatic (a), amplitude-modulated (b), and frequency-modulated (c) signals s(t) [36].

The informative signal \(s^{(1)}(t)\) was reconstructed from the numerical results according to (6.80). Figure 6.18 presents the power spectra of the informational signal s(t), the transmitted signal r(t) = x(t) + s(t), and the reconstructed signal \(s^{(1)}(t)\) for all three cases. When the power level of the informational signal is considerably lower than that of the chaotic medium, the frequency components of the informational signal are not detectable in the transmitted signal, at least visually, while the spectrum of the reconstructed signal is of a quality comparable to that of the original informative signal.