
2.1 Noise

This section aims at familiarizing the reader with the most common noise models in hysteretic systems, emphasizing both the disruptive and the constructive effects of noise on system behavior. It also addresses the main numerical techniques used for noise simulation.

2.1.1 Introduction

Everybody hates noise, yet the world tends to become ever noisier. There is an increasing amount of evidence regarding the negative effects of noise on human health and the environment, while the measures taken against noise pollution have proved to be inefficient. For example, the World Health Organization (WHO) recommends an average below 35 dB for continuous background noise in hospitals, but most of the measurements presented in numerous scientific articles have indicated average noise levels between 50 and 70 dB, featuring generally flat spectra over the 60–2000 Hz band. A relevant survey on this topic is provided by Busch-Vishniac and colleagues from Johns Hopkins University in Ref. [1], indicating a trend of increasing noise levels in hospitals over the last half-century in spite of WHO recommendations and the implementation of modern noise reduction techniques. While the general public is much more aware of and concerned about this acoustic noise, scientists and engineers are most commonly challenged by electromagnetic noise, from the cosmic microwave background radiation generated by the Big Bang to the electronic noise generated by all electronic circuits.

One of the first areas to address the noise problem systematically was communication. It is well known that a transmitted signal can be significantly altered by the noise present in a communication channel due to the thermal agitation of molecules, interference with other signals moving simultaneously through the same channel or neighboring ones, defects of the material structure, etc. Various techniques are used to reduce these disruptive effects of noise added to the signal, such as filtering the noise out, using redundant coding routines for the transmitted signal, controlling the transmission environment, or additional processing of the received signal [2, 3].

The interest in noise analysis has expanded significantly in recent years with the advances in nanoscience and nanotechnology. Noise plays a major role in the behavior of nanoscale systems and its effects become increasingly pronounced as system size decreases. Let us consider the case of magnetic recording nanotechnology, where thermal noise poses fundamental limits to further improvements in magnetic data storage density. As predicted theoretically by Néel-Arrhenius theory [4] and proved experimentally by Wernsdorfer and his collaborators [5, 6], the switching fields of magnetic nanoparticles decrease with increasing temperature up to some blocking temperature at which the magnetization becomes completely unstable. For a 3 nm cubo-octahedral Co nanoparticle considered in the experiments, the blocking temperature is about 14 K and, in general, for nanoparticles with diameters below 20 nm the blocking temperatures were found to be below 200 K [6–8]. It is apparent that this superparamagnetic effect found in magnetic nanoparticles and nanograins limits the advances in magnetic data storage density under the current paradigm. On the other hand, thermal noise may also play a positive role in achieving higher storage densities through the recently developed technology referred to as thermally assisted magnetic recording [9, 10]. While high-anisotropy media are used in order to provide sufficiently stable magnetic bits at room temperature, the data are recorded at high temperature, which significantly reduces the coercive field to values accessible to current recording heads (see Fig. 2.1). It is foreseen that this recording nanotechnology will be the key to exceeding 1 Tb/in² storage density. In conclusion, thermal noise in nanoscale devices might jeopardize the future development of several nanotechnologies, such as magnetic data recording, but it could also provide the keys to solving the challenges encountered by such technologies.

Fig. 2.1 Schematic representation of a heat-assisted magnetic recording system. The laser heats the memory cell in order to generate fast thermally induced switching of the magnetization at magnetic fields accessible to the recording head (formed by a current source and a yoke which amplifies the field in the air-gap)

Since noise can have only negative effects in linear systems, its potential benefits seem rather counterintuitive and were overlooked by researchers for a long period of time [11]. However, recent studies on stochastically driven nonlinear systems proved that such phenomena are quite common and their applications range from signal processing (dithering effect) and nanotechnology (thermally assisted magnetic recording; noise-enhanced characteristics of nanotube transistors) to neuroscience (neuron models) and climate models (possible explanations of ice ages) [11–15]. These constructive aspects of noise in hysteretic systems will be addressed in Chap. 6.

2.1.2 Wiener Process

Almost two centuries ago, the Scottish botanist Robert Brown was the first to systematically analyze the perpetual irregular motion of small pollen grains suspended in water. In general, this random drifting, known today as Brownian motion, is observed for any small particles suspended in a fluid. A pertinent explanation of these phenomena did not come until the beginning of the twentieth century, when Albert Einstein published his first paper on Brownian motion, which contains the key ideas for developing a stochastic analysis. The Wiener process can be seen as the limiting case of particle Brownian motion as the number of particles and the collision rate go to infinity. In addition to its practical applications in various areas such as physics, biology, and finance, the Wiener process plays a vital role in stochastic analysis, being the foundation for defining more complicated stochastic processes.

The transition probability function of the Wiener process satisfies the following Fokker-Planck equation (FPE) [16]:

$$ \frac{\partial }{\partial t}p\left( {x,t|x_{0} ,t_{0} } \right) = \frac{1}{2}\frac{{\partial^{2} }}{{\partial x^{2} }}\,p\left( {x,t|x_{0} ,t_{0} } \right), $$
(2.1)

with the initial condition \( p\left( {x,t_{0} |x_{0} ,t_{0} } \right) = \delta \left( {x - x_{0} } \right) \). By applying the Fourier transformation with respect to the x variable, \( \tilde{p}\left( {s,t|x_{0} ,t_{0} } \right) = \int_{ - \infty }^{\infty } {p\left( {x,t|x_{0} ,t_{0} } \right)e^{isx} dx} \), Eq. (2.1) becomes:

$$ \frac{\partial }{\partial t}\tilde{p}\left( {s,t|x_{0} ,t_{0} } \right) = - \frac{1}{2}s^{2} \tilde{p}\left( {s,t|x_{0} ,t_{0} } \right), $$
(2.2)

subject to the initial condition \( \tilde{p}\left( {s,t_{0} |x_{0} ,t_{0} } \right) = e^{{isx_{0} }} \), which can be solved simply by separation of variables, leading to the following solution:

$$ \tilde{p}\left( {s,t|x_{0} ,t_{0} } \right) = e^{{isx_{0} - \frac{1}{2}s^{2} (t - t_{0} )}} $$
(2.3)

By Fourier inversion, the solution of Eq. (2.1) can be obtained as follows:

$$ p\left( {x,t|x_{0} ,t_{0} } \right) = \frac{1}{{\sqrt {2\pi (t - t_{0} )} }}e^{{ - \frac{{(x - x_{0} )^{2} }}{{2(t - t_{0} )}}}} $$
(2.4)

As a result, the transition probability of the Wiener process has a Gaussian shape, centered at \( x_{0} \) and with variance \( (t - t_{0}) \). Thus the initial δ-distribution spreads in time (see Fig. 2.2) and the variance becomes infinite as t → ∞, indicating the high irregularity of the sample paths illustrated in Fig. 2.3a.
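
The Gaussian solution (2.4) can be checked symbolically against Eq. (2.1). The following minimal sketch uses the sympy library; the variable names are our own choices for illustration:

```python
import sympy as sp

x, x0 = sp.symbols('x x0', real=True)
tau = sp.symbols('tau', positive=True)   # elapsed time t - t0

# Transition probability density (2.4) of the Wiener process
p = sp.exp(-(x - x0)**2 / (2 * tau)) / sp.sqrt(2 * sp.pi * tau)

# Residual of FPE (2.1): since tau = t - t0, d/dt equals d/dtau
residual = sp.diff(p, tau) - sp.Rational(1, 2) * sp.diff(p, x, 2)
print(sp.simplify(residual))   # prints 0, confirming that (2.4) solves (2.1)
```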

Fig. 2.2 Evolution of the transition probability density for a Wiener process

Fig. 2.3 (a) Simulated sample paths of the Wiener process with σ = 1 starting at \( x_{0} = 0 \); (b) power spectral density of the Wiener process with σ = 1 starting at \( x_{0} = 0 \)

Although the Wiener process has continuous paths, they are almost everywhere not differentiable and have unbounded variation on any finite time interval [17]. If we return to the physical origins of the Wiener process, this indicates an infinite speed of the Brownian particle, which is obviously one of the drawbacks of the Wiener model. A more realistic, but also more complex, model of Brownian motion is the Ornstein-Uhlenbeck process, which is analyzed in Sect. 2.1.5.

Another important property of the Wiener process is its autocorrelation function, defined as follows:

$$ \left\langle {X(t_{1} ) \cdot X(t_{2} )|x_{0} ,t_{0} } \right\rangle = \iint\limits_{{R^{2} }} {x_{1} x_{2} p\left( {x_{1} ,t_{1} ;x_{2} ,t_{2} |x_{0} ,t_{0} } \right)dx_{1} dx_{2} } $$
(2.5)

By using the Markovian property of the Wiener process and assuming that \( t_{2} > t_{1} \), the autocorrelation function can be written as follows:

$$ \left\langle {X(t_{2} ) \cdot X(t_{1} )|x_{0} ,t_{0} } \right\rangle = \iint\limits_{{R^{2} }} {x_{2} x_{1} p\left( {x_{2} ,t_{2} |x_{1} ,t_{1} } \right)p\left( {x_{1} ,t_{1} |x_{0} ,t_{0} } \right)dx_{1} dx_{2} } $$
(2.6)

By taking into account the expression for the first two moments of transition probability density (2.4), one can simply derive the following:

$$ \begin{aligned} \left\langle {X(t_{2} ) \cdot X(t_{1} )|x_{0} ,t_{0} } \right\rangle & = \int\limits_{R} {\left\langle {X(t_{2} )|x_{1} ,t_{1} } \right\rangle x_{1} p\left( {x_{1} ,t_{1} |x_{0} ,t_{0} } \right)dx_{1} } \\ & = \frac{1}{{\sqrt {2\pi (t_{1} - t_{0} )} }}\int\limits_{R} {x_{1}^{2} e^{{ - \frac{{(x_{1} - x_{0} )^{2} }}{{2(t_{1} - t_{0} )}}}} dx_{1} } \\ \end{aligned} $$
(2.7)

As a result, the Wiener autocorrelation function is

$$ \left\langle {X(t_{2} ) \cdot X(t_{1} )|x_{0} ,t_{0} } \right\rangle = (t_{1} - t_{0} ) + x_{0}^{2} $$
(2.8)

When \( t_{2} \) is smaller than \( t_{1} \), the Wiener autocorrelation is obtained by simply replacing \( t_{1} \) with \( t_{2} \) in the final formula. Thus, the general expression can be written as:

$$ \left\langle {X(t_{2} ) \cdot X(t_{1} )|x_{0} ,t_{0} } \right\rangle = (\hbox{min} \{ t_{1} ,t_{2} \} - t_{0} ) + x_{0}^{2} $$
(2.9)

It is apparent from formula (2.9) that the Wiener process is not stationary, and consequently the power spectral density cannot be expressed in classical terms as the Fourier transform of the autocorrelation function. However, a time-dependent spectrum can be defined according to the Wigner-Ville approach:

$$ S_{WV} (t,\omega ) = \int\limits_{ - \infty }^{\infty } {x\left( {t + \tau /2} \right)x^{*} \left( {t - \tau /2} \right)e^{ - i\tau \omega } d\tau } $$
(2.10)

where the equality is understood in the mean-square sense, x* denotes the complex conjugate of x, and \( i = \sqrt { - 1} \). For real-valued processes only the real part of the formula is considered. Applying this formula to the Wiener process with \( t_{0} = 0 \) and \( x_{0} = 0 \), it is found that:

$$ S_{WV} (t,\omega ) = 2\left( {\frac{\sin (\omega t)}{\omega }} \right)^{2} u(t) $$
(2.11)

where u(t) is the unit step function, simply indicating that t > 0.

It is also customary to define the average spectrum over a given interval of length T:

$$ S_{WV} (\omega ) = \frac{1}{T}\int\limits_{0}^{T} {S_{WV} (t,\omega )dt} $$
(2.12)

In the case of the Wiener process, the average spectrum is inversely proportional to \( \omega^{2} \), as suggested by the simulation presented in Fig. 2.3b.

By using the autocorrelation formula (2.9) and simple algebraic calculations, it can be proven that the increments of the Wiener process, X(t) − X(s), are uncorrelated and have variance (t − s). Since the difference of two Gaussian variables is also Gaussian, we can conclude that the increments of the Wiener process are independent and identically distributed (i.i.d.) Gaussian random variables with zero mean and variance (t − s). In addition to its relation to white noise and stochastic differential equations, this property is also useful for the numerical simulation of the Wiener process. Thus, a random number Z is generated at each time step according to the standardized normal distribution N(0, 1) and the increments of the sample paths are computed according to the formula

$$ x(t_{n} ) - x(t_{n - 1} ) = Z\sqrt {(t_{n} - t_{n - 1} )} $$
(2.13)

Simulations of the sample paths using this procedure are presented in Fig. 2.3a.
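
A minimal sketch of this simulation procedure, written here in Python/NumPy with arbitrary illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(seed=0)   # seeded for reproducibility
t0, T, n = 0.0, 10.0, 1000            # time interval and number of steps
t = np.linspace(t0, T, n + 1)
dt = np.diff(t)

# Eq. (2.13): each increment is an independent N(0, dt) random variable
increments = rng.standard_normal(n) * np.sqrt(dt)
x = np.concatenate(([0.0], np.cumsum(increments)))   # path starting at x0 = 0
```

Averaging the periodograms of many such paths reproduces the \( 1/\omega^{2} \) trend of the spectrum in Fig. 2.3b.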

2.1.3 Itô Stochastic Integral and Differential Equations

Stochastic calculus aims at extending the benefits of deterministic calculus to the area of stochastic processes. After several less successful approaches developed by Wiener and his collaborators, the Japanese mathematician Kiyosi Itô introduced a kind of Riemann-Stieltjes integral having the Wiener process as integrator and proved the convergence of the integral sums. For the introduction of Itô's construction, let us denote the Wiener process by W(t) and consider a left-continuous function of time denoted by G(t), which can be either deterministic or stochastic. The stochastic integral \( \int_{{t_{0} }}^{t} {G(t^{\prime})dW(t^{\prime})} \) is defined using the Riemann-Stieltjes approach as the limit of the integral sums:

$$ S_{n} = \sum\limits_{i = 1}^{n} {G(t_{i - 1} )\left[ {W(t_{i} ) - W(t_{i - 1} )} \right]} $$
(2.14)

over all possible partitions \( (t_{0} \le t_{1} \le t_{2} \le \cdots \le t_{n - 1} \le t_{n} = t) \) of the interval \( [t_{0}, t] \), with n approaching infinity. The limit is considered in the mean-square sense over the probability space Ω, i.e. \( \mathop {\lim }\limits_{n \to \infty } \int_{\Upomega } {\left[ {S_{n} (\omega ) - S(\omega )} \right]^{2}\; p(\omega )d\omega } = 0 \). The convergence of Itô's integral sums is rather counterintuitive, knowing that W(t) is almost nowhere differentiable and has unbounded variation on any finite time interval. However, let us note that the choice of the intermediate points is restricted to the left endpoints of the partition intervals, which is essential in obtaining the convergence of the stochastic integral sums.

The construction of a stochastic integral opens the way towards defining and characterizing more complex stochastic processes via stochastic differential equations. Thus, a stochastic process X(t) is considered a solution of Itô’s stochastic differential equation (SDE) written as:

$$ dX(t) = b\left[ {X(t),t} \right]dt + \sigma \left[ {X(t),t} \right]dW(t) $$
(2.15)

if for all t and t 0,

$$ X(t) = X(t_{0} ) + \int\limits_{{t_{0} }}^{t} {b\left[ {X(t^{\prime}),t^{\prime}} \right]dt^{\prime}} + \int\limits_{{t_{0} }}^{t} {\sigma \left[ {X(t^{\prime}),t^{\prime}} \right]dW(t^{\prime})} $$
(2.16)

where b is the drift coefficient and σ is the diffusion coefficient.

The existence and uniqueness of the solution of this equation in a time interval \( [t_{0}, T] \), subject to a given initial condition, can be proven [18] under the following restrictions imposed on the equation coefficients:

  • Lipschitz condition: a constant \( K_{L} \) exists such that for all x and y, and all t in the interval \( [t_{0}, T] \),

    $$ \left| {b(x,t) - b(y,t)} \right| + \left| {\sigma (x,t) - \sigma (y,t)} \right| \le K_{L} \left| {x - y} \right|; $$
    (2.17)
  • Growth condition: a constant \( K_{G} \) exists such that for all x, and for all t in the interval \( [t_{0}, T] \),

    $$ \left| {b(x,t)} \right|^{2} \;+\; \left| {\sigma (x,t)} \right|^{2} \le K_{G} \left( {1 + \left| x \right|^{2} } \right). $$
    (2.18)

The Lipschitz condition is usually satisfied by the stochastic differential equations used in practice, but the growth condition is often violated. This does not preclude the existence of a solution; rather, it indicates that the solution may be unbounded on the given finite time interval.

In order to connect the two approaches introduced in this chapter for describing a stochastic process, let us mention that the time evolution of the probability density characterizing the stochastic process defined by (2.15) is the solution of the FPE:

$$ \frac{\partial }{\partial t}p\left( {x,t|x_{0} ,t_{0} } \right) = - \frac{\partial }{\partial x}\left[ {b(x,t)p\left( {x,t|x_{0} ,t_{0} } \right)} \right] + \frac{1}{2}\frac{{\partial^{2} }}{{\partial x^{2} }}\left[ {\sigma^{2} (x,t)p\left( {x,t|x_{0} ,t_{0} } \right)} \right] $$
(2.19)

subject to a δ-initial condition \( p\left( {x,t_{0} |x_{0} ,t_{0} } \right) = \delta \left( {x - x_{0} } \right) \) and given boundary conditions.

The generalization of SDE and FPE to multi-dimensional stochastic processes \( \varvec{X}(t) \) is quite straightforward. Thus, multi-dimensional Itô’s SDE reads:

$$ d\varvec{X}(t) = \varvec{b}\left[ {\varvec{X}(t),t} \right]dt + \tilde{\sigma }\left[ {\varvec{X}(t),t} \right]d\varvec{W}(t) $$
(2.20)

where \( \varvec{b} \) is the drift vector function and \( \tilde{\sigma } \) is the diffusion tensor function, while \( \varvec{W}(t) \) is the standard multi-dimensional Wiener process. The associated FPE is:

$$ \begin{aligned} \frac{\partial }{\partial t}p\left( {\varvec{x},t|\varvec{x}_{0} ,t_{0} } \right) & = - \sum\limits_{i = 1}^{n} {\frac{\partial }{{\partial x_{i} }}\left[ {b_{i} (\varvec{x},t)p\left( {\varvec{x},t|\varvec{x}_{0} ,t_{0} } \right)} \right]} \\ & + \frac{1}{2}\sum\limits_{i,j = 1}^{n} {\frac{{\partial^{2} }}{{\partial x_{i} \partial x_{j} }}\left[ {\left( {\sum\limits_{k = 1}^{n} {\sigma_{ik} (\varvec{x},t)\sigma_{jk} (\varvec{x},t)} } \right)p\left( {\varvec{x},t|\varvec{x}_{0} ,t_{0} } \right)} \right]} \\ \end{aligned} $$
(2.21)

where \( b_{i} \) and \( \sigma_{ij} \) are elements of the drift vector and the diffusion tensor, respectively.

2.1.4 White Noise

White noise is a stochastic process formed by uncorrelated random variables with constant mean and nonzero variance. It is apparent that the autocorrelation of a white noise is a delta function and consequently, its power spectral density is constant. This explains its name drawn from “white light” which has a flat power spectral density over the visible electromagnetic frequency band.

The definition of white noise places no restriction on the probability distribution functions describing the random variables, except the constant mean and variance. Usually, the notion of white noise is used in a stronger form, in which the component random variables are i.i.d. The numerical implementation of white noise used in this book is based on this idea, but various probability density functions (p.d.f.) are considered. Samples of white noise simulations obtained for Gaussian, uniform, Cauchy, and Laplace distributions are shown in Fig. 2.4. Although there is an infinite variety of white noises, the Gaussian type is the overwhelmingly common noise model in science and engineering, so common that people use it by default when referring to white noise. That is partially related to the central limit theorem of probability theory, which states that the average of a large number of independent random variables converges, under some conditions, to a random variable with Gaussian distribution [19]. In addition, white Gaussian noise (WGN) is the formal derivative of the Wiener process, so it plays a central role in the theory of stochastic differential equations, as is discussed next.
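
The white noise sequences of Fig. 2.4 can be generated directly from i.i.d. samples. A sketch in NumPy follows; the scale parameters are our illustrative choices, made so that the finite-variance distributions have unit variance (note that the Cauchy distribution has no finite variance, so it falls outside the strict definition above):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 4096   # number of samples per sequence

white_noises = {
    'Gaussian': rng.standard_normal(n),
    'uniform':  rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), n),   # unit variance
    'Laplace':  rng.laplace(0.0, 1.0 / np.sqrt(2.0), n),       # unit variance
    'Cauchy':   rng.standard_cauchy(n),   # heavy tails: variance is undefined
}
```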

Fig. 2.4 Sample paths of white noise for the uniform, Gaussian, Laplace, and Cauchy distributions, which are represented on the left-hand side

Let us now recall a definition of the generalized derivative for a deterministic function. The generalized derivative of a function w integrable over a real domain D exists and is denoted by dw/dt if the following equality is satisfied for all infinitely differentiable functions g with compact support in D:

$$ \int\limits_{D} {g(t)\frac{dw}{dt}(t)dt} = - \int\limits_{D} {w(t)\frac{dg}{dt}(t)dt} $$
(2.22)

For differentiable functions, the above formula is nothing else than integration by parts, so the classical derivative is equal (almost everywhere) to the generalized derivative. It is known that the Wiener process has continuous paths, but they are almost everywhere not differentiable in the classical sense. Nevertheless, the generalized derivatives exist and are expected to be realizations of a WGN, since the derivative should involve increments of the Wiener process, which are known to be independent and Gaussian. As a result, the stochastic differential Eq. (2.15) is often written in the following form, known as Langevin's equation:

$$ \frac{dX}{dt}(t) = b\left[ {X(t),t} \right] + \sigma \left[ {X(t),t} \right]\xi (t) $$
(2.23)

where \( \xi (t) \) is a WGN.

At the end of this section, let us mention that white noise bears a physical inconsistency: it requires infinite energy. Indeed, integrating the constant power spectral density over an infinite frequency band results in an infinite quantity. In practice, a random signal is considered "white noise" if it has a flat spectrum over the definite bandwidth which is of interest for a specific application (for example, the audio frequency band or the radio frequency band).

The physical bandwidth of white noise is limited in practice by various factors, such as the mechanism of noise generation, the transmission medium, and finite observation capabilities. A finite spectral band implies some correlation between the random variables of the noise process, which significantly increases the mathematical complexity of the problem. A representative example of finite-band white noise is the Ornstein-Uhlenbeck process, which is addressed in the next section.

2.1.5 Ornstein-Uhlenbeck Noise

The Ornstein-Uhlenbeck (OU) process belongs to the class of finite-band Gaussian noises, whose spectral densities are constant in the low-frequency region and decrease to zero inversely proportional to the squared frequency in the high-frequency region. More specifically, the OU spectral density has a Lorentzian shape, \( S(f) = \sigma^{2} /(b^{2} + 4\pi^{2} f^{2} ) \), where σ and b are constants characteristic of the process and f is the frequency (see Fig. 2.5). It is used in modeling various thermal relaxation processes as well as the evolution of exchange rates, bank interest rates, or prices.

The mathematical description of the OU process can be simply obtained by adding a linear drift term to the FPE characterizing the Wiener process. Thus, the FPE for the transition probability function of the OU noise reads:

$$ \frac{\partial }{\partial t}p\left( {x,t|x_{0} ,t_{0} } \right) = \frac{\partial }{\partial x}\left[ {b(x - x_{s} )p\left( {x,t|x_{0} ,t_{0} } \right)} \right] + \frac{{\sigma^{2} }}{2}\frac{{\partial^{2} }}{{\partial x^{2} }}p\left( {x,t|x_{0} ,t_{0} } \right), $$
(2.24)

where b, \( x_{s} \), and σ are constants known as the drift coefficient, the stationary average, and the diffusion coefficient, respectively. The solution is subject to the initial condition \( p\left( {x,t_{0} |x_{0} ,t_{0} } \right) = \delta \left( {x - x_{0} } \right) \) and has to decay to zero as x goes to infinity. In physical terms, the OU process can be interpreted as a Brownian particle diffusing in a parabolic potential U(x) with derivative \( U^{\prime}(x) = b(x - x_{s} ) \).

Fig. 2.5 Power spectral density of the Ornstein-Uhlenbeck process on (a) linear scale and (b) logarithmic scale

To find the solution of FPE (2.24) let us consider, by a simple translation of variables, \( t_{0} = 0 \) and \( x_{s} = 0 \). By applying the Fourier transformation with respect to the x variable, \( \tilde{p}\left( {s,t|x_{0} ,0} \right) = \int_{ - \infty }^{\infty } {p\left( {x,t|x_{0} ,0} \right)e^{isx} dx} \), Eq. (2.24) becomes:

$$ \frac{\partial }{\partial t}\tilde{p}\left( {s,t|x_{0} ,0} \right) + bs\frac{\partial }{\partial s}\tilde{p}\left( {s,t|x_{0} ,0} \right) = - \frac{{\sigma^{2} }}{2}s^{2} \tilde{p}\left( {s,t|x_{0} ,0} \right), $$
(2.25)

subject to the initial condition \( \tilde{p}\left( {s,0|x_{0} ,0} \right) = e^{{isx_{0} }} \), which can be solved by the method of characteristics. Thus, let us find the characteristic curves from the associated Lagrange-Charpit equations:

$$ \frac{dt}{1} = \frac{ds}{bs} = - \frac{{2d\tilde{p}}}{{\sigma^{2} s^{2} \tilde{p}}} $$
(2.26)

By integrating the first equation in (2.26) using separation of variables and imposing the initial condition \( s(0) = s_{0} \), the following solution is found:

$$ s(t) = s_{0} e^{bt} $$
(2.27)

By plugging this expression for s into the last term of formula (2.26) and solving the corresponding differential equation with respect to t, subject to the initial condition \( \tilde{p}(s_{0} ,0|x_{0} ,0) = e^{{is_{0} x_{0} }} \), one can use separation of variables to obtain the following solution:

$$ \tilde{p}(s_{0} ,t|x_{0} ,0) = \exp \left( {is_{0} x_{0} + \frac{{\sigma^{2} s_{0}^{2} }}{4b}\left( {1 - e^{2bt} } \right)} \right) $$
(2.28)

For clarity, two notations are used for the exponential function in the previous formula. By substituting into (2.28) the expression of \( s_{0} \) as a function of s and t obtained from (2.27), one arrives at the following solution of the partial differential Eq. (2.25):

$$ \tilde{p}(s,t|x_{0} ,0) = \exp \left( {ix_{0} se^{ - bt} - \frac{{\sigma^{2} s_{{}}^{2} }}{4b}\left( {1 - e^{ - 2bt} } \right)} \right) $$
(2.29)

By performing Fourier inversion, a Gaussian distribution with mean \( x_{0} e^{ - bt} \) and variance \( (\sigma^{2} /2b)(1 - e^{ - 2bt} ) \) is obtained. Taking into account the translation of variables used at the beginning of this derivation, the solution of FPE (2.24) is obtained. Thus, the transition probability function of the Ornstein-Uhlenbeck noise, characterized by the drift coefficient b, the diffusion coefficient σ, and the stationary average \( x_{s} \), has the following expression:

$$ p\left( {x,t|x_{0} ,t_{0} } \right) = \frac{1}{{\sigma \sqrt {(\pi /b)(1 - e^{{ - 2b(t - t_{0} )}} )} }}\exp \left[ { - \frac{{b(x - x_{s} - (x_{0} - x_{s} )e^{{ - b(t - t_{0} )}} )^{2} }}{{2\sigma^{2} (1 - e^{{ - 2b(t - t_{0} )}} )}}} \right], $$
(2.30)

When t goes to infinity, the transition probability exponentially approaches the stationary distribution, which is Gaussian with mean \( x_{s} \) and variance \( \sigma^{2} /2b \). Thus the initial δ-distribution spreads in time (see Fig. 2.6), as happens for the Wiener process, but the standard deviation converges to a finite value as t → ∞. In addition, the distribution center drifts away from the initial condition \( x_{0} \) towards the stationary average \( x_{s} \).

Fig. 2.6 Evolution of the transition probability density for an Ornstein-Uhlenbeck process starting at \( x_{0} \)

Let us now compute the autocorrelation function of the OU process by using definition (2.5) and the transition probability function previously derived. Based on the Markovian property of the OU process and assuming that t 2 > t 1 > t 0, the autocorrelation function can be rewritten as follows:

$$ \begin{aligned} \left\langle {X(t_{2} ) \cdot X(t_{1} )|x_{0} ,t_{0} } \right\rangle & = \iint\limits_{{R^{2} }} {x_{2} x_{1} p\left( {x_{2} ,t_{2} |x_{1} ,t_{1} } \right)p\left( {x_{1} ,t_{1} |x_{0} ,t_{0} } \right)dx_{1} dx_{2} } \\ & = \int\limits_{R} {\left\langle {X\left( {t_{2} } \right)|x_{1} ,t_{1} } \right\rangle x_{1} p\left( {x_{1} ,t_{1} |x_{0} ,t_{0} } \right)dx_{1} } \\ \end{aligned} $$
(2.31)

As previously derived, the expression for the average at time \( t_{2} \) of an OU process initiated at \( (x_{1}, t_{1}) \) is \( (x_{1} - x_{s} )e^{{ - b(t_{2} - t_{1} )}} \), and consequently formula (2.31) becomes:

$$ \begin{aligned} \left\langle {X(t_{2} ) \cdot X(t_{1} )|x_{0} ,t_{0} } \right\rangle & = e^{{ - b(t_{2} - t_{1} )}} \int\limits_{R} {(x_{1} - x_{s} )x_{1} p\left( {x_{1} ,t_{1} |x_{0} ,t_{0} } \right)dx_{1} } \\ & = e^{{ - b(t_{2} - t_{1} )}} \left[ {\left\langle {X^{2} (t_{1} )|x_{0} ,t_{0} } \right\rangle - x_{s} \left\langle {X(t_{1} )|x_{0} ,t_{0} } \right\rangle } \right] \\ \end{aligned} $$
(2.32)

Since the second moment is the sum of the square average and variance, the autocorrelation function of the OU process becomes:

$$ \left\langle {X(t_{2} ) \cdot X(t_{1} )|x_{0} ,t_{0} } \right\rangle = e^{{ - b(t_{2} - t_{1} )}} \left\{ {\frac{{\sigma^{2} }}{2b} - x_{s} (x_{0} - x_{s} )e^{{ - b(t_{1} - t_{0} )}} + \left[ {(x_{0} - x_{s} )^{2} - \frac{{\sigma^{2} }}{2b}} \right]e^{{ - 2b(t_{1} - t_{0} )}} } \right\} $$
(2.33)

Let us observe that the autocorrelation expression simplifies significantly when the initial condition is the stationary average or is considered in the remote past. The latter is of much more practical interest and is coined the stationary correlation function, denoted by \( \left\langle {X(t_{2} )X(t_{1} )} \right\rangle_{s} \). By letting \( t_{0} \to - \infty \) in formula (2.33), one gets:

$$ \left\langle {X(t_{2} ) \cdot X(t_{1} )} \right\rangle_{s} \;= \frac{{\sigma^{2} }}{2b}e^{{ - b|t_{2} - t_{1} |}} $$
(2.34)

where the absolute value accounts for both \( t_{2} > t_{1} \), as considered in the previous derivation, and \( t_{1} > t_{2} \). The fact that the autocorrelation function depends only on the time difference is characteristic of stationary processes. It is natural to require that a stochastic process modeling noise be a stationary memoryless (i.e., Markovian) process. If the Gaussian requirement for the distribution function is added, then the OU process is the only one that satisfies all three of these natural characteristics, as proven by Doob's theorem [20].

The power spectral density of the OU process can now be easily obtained as the Fourier transform of the autocorrelation function (2.34), according to the Wiener-Khinchine theorem [16]. Because we deal with an even correlation function, it is sufficient to compute the Fourier integral on the positive axis. Thus,

$$ S(\omega ) = 2\text{Re} \left\{ {\int\limits_{0}^{\infty } {\frac{{\sigma^{2} }}{2b}e^{ - b\tau } e^{ - i\omega \tau } d\tau } } \right\} = \frac{{\sigma^{2} }}{b}\text{Re} \left\{ {\frac{1}{b + i\omega }} \right\} = \frac{{\sigma^{2} }}{{b^{2} + \omega^{2} }} $$
(2.35)

which proves the Lorentzian shape of the OU spectrum mentioned at the beginning of this section and illustrated in Fig. 2.5.

Based on FPE (2.24) for OU processes, the associated Itô stochastic differential equation can be simply written down as:

$$ dX\left( t \right) = - b\left[ {X\left( t \right) - x_{s} } \right]dt + \sigma \cdot dW\left( t \right) $$
(2.36)

where W(t) is the Wiener process, b and σ are the drift and diffusion coefficients of X(t), respectively, while \( x_{s} \) is the average of the stationary process. The process is also subject to the initial condition \( X(0) = x_{0} \). While the analytical calculations involving the OU process performed in this book are mainly based on the FPE approach, the numerical simulations use the Itô SDE description, as discussed next.

By using the finite difference technique and the fact that \( W\left( {s + t} \right) = W\left( s \right) + N\left( {0,1} \right)t^{1/2} \) (in distribution), where \( N\left( {0,1} \right) \) is a random variable normally distributed with zero average and unit variance, one obtains the following approximate updating formula:

$$ x\left( {t + \Delta t} \right) \approx x\left( t \right) - b\left[ {x\left( t \right) - x_{s} } \right]\Delta t + \sigma \cdot N\left( {0,1} \right)\left( {\Delta t} \right)^{1/2} $$
(2.37)

Although Eq. (2.37) has often been used in the literature to generate OU processes, it is reliable only when \( \Delta t \) is relatively small. An exact updating formula has been derived in [21] by integrating (2.36) and using the properties of normal variables:

$$ x(t + \Delta t) = x(t)e^{ - b\Delta t} + \left[ {\left( {\sigma^{2} /2b} \right)(1 - e^{ - 2b\Delta t} )} \right]^{1/2} N(0,1) $$
(2.38)

in which it is assumed that \( x_{0} = x_{s} = 0 \).

As expected, this updating formula reduces to (2.37) when \( \Delta t \ll 1/b \). It is noteworthy that Eq. (2.38) splits the random process explicitly into two terms: the first is the mean and the second is proportional to the standard deviation of \( x\left( t \right) \). Since the time step \( \Delta t \) is usually constant, the factors in (2.38) can be computed in advance and stored in order to increase the computational efficiency. This latter approach has been used in this book to generate OU processes numerically. Sample paths of the OU process are shown in Fig. 2.7.
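
A minimal sketch of the exact updating scheme (2.38) in NumPy, with the two factors precomputed as suggested above. The parameter values are arbitrary, and the \( x_{s} \) shift is the straightforward generalization of (2.38), which was written for \( x_{s} = 0 \):

```python
import numpy as np

rng = np.random.default_rng(seed=2)
b, sigma, x_s = 1.0, 0.5, 0.0    # drift, diffusion, and stationary average
dt, n = 0.01, 10_000

# Factors of the exact update (2.38), computed once and reused at every step
decay = np.exp(-b * dt)
noise_amp = np.sqrt(sigma**2 / (2.0 * b) * (1.0 - np.exp(-2.0 * b * dt)))

x = np.empty(n + 1)
x[0] = x_s   # start at the stationary average
for k in range(n):
    # Mean term plus fluctuation term, Eq. (2.38)
    x[k + 1] = x_s + (x[k] - x_s) * decay + noise_amp * rng.standard_normal()
```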

Fig. 2.7 Simulated sample paths of the Ornstein-Uhlenbeck process starting at \( x_{0} \) for different values of b, σ, and \( x_{s} \)

2.1.6 Brownian Motion in a Double-Well Potential

In this section, the discussion is extended from Brownian motion in a one-well potential, reflected by the Ornstein-Uhlenbeck process, to Brownian motion in a double-well potential, which obeys the following Fokker-Planck equation:

$$ \frac{\partial }{\partial t}p\left( {x,t|x_{0} ,t_{0} } \right) = \frac{\partial }{\partial x}\left[ {\frac{dU}{dx}(x)p\left( {x,t|x_{0} ,t_{0} } \right)} \right] + \frac{{\sigma^{2} }}{2}\frac{{\partial^{2} }}{{\partial x^{2} }}p\left( {x,t|x_{0} ,t_{0} } \right), $$
(2.39)

where U(x) denotes a function at least twice differentiable having two minima inside the interval of interest for the problem, as illustrated in Fig. 2.8.

Fig. 2.8 Plot of the double-well potential U(x) and the stationary distribution \( p_{s}(x) \)

Let us first find the stationary distribution, which is obtained by solving the following differential equation:

$$ 0 = \frac{d}{dx}\left[ {\frac{dU}{dx}(x)p_{s} \left( x \right)} \right] + \frac{{\sigma^{2} }}{2}\frac{{d^{2} p_{s} (x)}}{{dx^{2} }} $$
(2.40)

Since both the probability function and its derivatives have to approach zero when x goes to infinity, the constant corresponding to the first integration is zero and Eq. (2.40) is equivalent to the following:

$$ \frac{{dp_{s} (x)}}{dx} = - \frac{2}{{\sigma^{2} }}\frac{dU}{dx}(x)p_{s} \left( x \right) $$
(2.41)

which can be easily solved using separation of variables.

It is apparent from formula (2.41) that the minima of the potential U(x) are maxima of the stationary probability \( p_{s}(x) \), representing metastable states (see also Fig. 2.8). A natural problem to discuss is the transition between the two metastable states induced by noise. It is intuitively clear that the time needed to pass from one metastable state to the other is mostly spent surmounting the potential barrier between the states. The latter can be seen as the time needed for the Brownian particle, initially located in one minimum, to escape from the corresponding half-bounded interval ending at the maximum point. In order to compute this exit time, let us impose an absorbing boundary condition on Eq. (2.39) at the maximum point M, i.e. \( p(M,t|x_{0}, 0) = 0 \). The corresponding solution provides the probability that, at time t, the particle starting at \( x_{0} \) is still in the first potential well; this probability will be denoted by \( G(x_{0}, t) \) and has the following expression:

$$ G(x_{0} ,t) = \int_{ - \infty }^{M} {p\left( {x,t|x_{0} ,0} \right)dx} $$
(2.42)

In other words, \( G(x_{0}, t) \) represents the tail distribution of the first exit time from the potential well and consequently, the mean first exit time, denoted by \( T(x_{0} ) \), can be expressed as follows:

$$ T\left( {x_{0} } \right) = \left\langle t \right\rangle = \int\limits_{0}^{\infty } t \frac{\partial }{\partial t}(1 - G(x_{0} ,t))dt = \int\limits_{0}^{\infty } {G(x_{0} ,t)dt} $$
(2.43)

where the last equality is obtained using integration by parts.

By taking into account that the transition probability satisfies the backward Fokker-Planck equation as a function of the initial condition \( x_{0} \) and time t, \( G(x_{0}, t) \) obeys the following equation:

$$ \frac{\partial }{\partial t}G\left( {x_{0} ,t} \right) = \frac{dU}{{dx_{0} }}(x_{0} )\frac{\partial }{{\partial x_{0} }}G\left( {x_{0} ,t} \right) + \frac{{\sigma^{2} }}{2}\frac{{\partial^{2} }}{{\partial x_{0}^{2} }}G\left( {x_{0} ,t} \right) $$
(2.44)

subject to the initial condition \( G(x_{0}, 0) = 1 \) for all \( x_{0} \) smaller than M, the boundary condition G(M, t) = 0 for all t > 0, and the requirement that G decays to zero as \( x_{0} \) goes to minus infinity. By integrating this equation over time from 0 to infinity, the equation for the mean first exit time is obtained:

$$ \frac{dU}{{dx_{0} }}(x_{0} )\frac{d}{{dx_{0} }}T\left( {x_{0} } \right) + \frac{{\sigma^{2} }}{2}\frac{{d^{2} }}{{dx_{0}^{2} }}T\left( {x_{0} } \right) = - 1. $$
(2.45)

with the boundary conditions T(M) = 0 and \( dT/dx_{0} \to 0 \) as \( x_{0} \) goes to minus infinity. It is apparent that Eq. (2.45) is a linear first-order differential equation in terms of the derivative of T, so the analytical solution is readily available:

$$ \frac{dT}{{dx_{0} }}\left( {x_{0} } \right) = e^{{\frac{{2U(x_{0} )}}{{\sigma^{2} }}}} \left( { - \frac{2}{{\sigma^{2} }}\int\limits_{ - \infty }^{{x_{0} }} {e^{{ - \frac{2U(x)}{{\sigma^{2} }}}} dx} + c} \right) $$
(2.46)

where c is an integration constant to be determined from the boundary conditions on T. Let us mention that if, instead of −∞, a finite left bound with a reflective boundary condition is considered, the derivative of T is equal to zero at that point, so the constant c is also zero. By integrating (2.46) and taking into account the boundary condition T(M) = 0, the following closed-form expression is obtained for the mean first exit time:

$$ T\left( {x_{0} } \right) = \frac{2}{{\sigma^{2} }}\int\limits_{{x_{0} }}^{M} {e^{{\frac{2U(x)}{{\sigma^{2} }}}} \left( {\int\limits_{ - \infty }^{x} {e^{{ - \frac{2U(y)}{{\sigma^{2} }}}} dy} } \right)dx} $$
(2.47)

Once U(x) is explicitly given, expression (2.47) can be further simplified by computing the two integrals. Here, let us consider that the diffusion strength σ² is relatively small compared to the height of the potential barrier. On the one hand, exp[2U(x)/σ²] is sharply peaked at x = M, so the main contribution to the outer integral comes from a close neighborhood of M, where U(x) can be approximated by U(M) − β(x − M)², with β a constant from the Taylor approximation formula. On the other hand, exp[−2U(x)/σ²] is very small near x = M, so the inner integral varies very slowly over the close neighborhood of M that is significant for the outer integral. As a result, the inner integral can be approximated by setting its upper limit to x = M, and the resulting constant can be moved outside the outer integral. Moreover, the main contribution to the inner integral comes from the neighborhood of the minimum \( m_{1} \), where U(x) can be approximated by \( U(m_{1}) + \alpha (x - m_{1})^{2} \), with α a constant that comes from the Taylor approximation formula. By taking all these observations into consideration, the mean first exit time of a particle located at the metastable state \( m_{1} \) can be approximated by the following formula:

$$ \begin{aligned} T\left( {m_{1} } \right) & \approx \frac{2}{{\sigma^{2} }}\int\limits_{ - \infty }^{M} {e^{{ - \frac{{2[U(m_{1} ) + \alpha (y - m_{1} )^{2} ]}}{{\sigma^{2} }}}} dy} \int\limits_{{m_{1} }}^{M} {e^{{\frac{{2[U(M) - \beta (x - M)^{2} ]}}{{\sigma^{2} }}}} dx} \\ & \approx \frac{2}{{\sigma^{2} }}e^{{\frac{{2(U(M) - U(m_{1} ))}}{{\sigma^{2} }}}} \int\limits_{ - \infty }^{\infty } {e^{{ - \frac{{2\alpha (y - m_{1} )^{2} }}{{\sigma^{2} }}}} dy} \int\limits_{ - \infty }^{M} {e^{{ - \frac{{2\beta (x - M)^{2} }}{{\sigma^{2} }}}} dx} \\ \end{aligned} $$
(2.48)

It is relatively easy to show that the first integral gives \( \sigma \sqrt {\pi /2\alpha } \) and the second integral gives \( \sigma \sqrt {\pi /8\beta } \). In conclusion, when the noise strength is relatively small compared to the potential barrier, the escape time can be approximated by the following expression:

$$ T\left( m_{1} \right) \approx \frac{\pi }{{2\sqrt {\alpha \beta } }}e^{{\frac{{2(U(M) - U(m_{1} ))}}{{\sigma^{2} }}}} $$
(2.49)

This result is known as the Arrhenius formula and has been frequently used in modeling thermal relaxation phenomena, where the noise strength is proportional to the absolute temperature of the system (σ²/2 = kT, with k the Boltzmann constant).
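
The quality of the approximation (2.49) can be probed numerically by evaluating the exact double integral (2.47) with quadrature. The sketch below assumes, purely for illustration, the symmetric quartic potential \( U(x) = x^{4}/4 - x^{2}/2 \), for which \( m_{1} = -1 \), M = 0, α = 1, β = 1/2, and the barrier height is 1/4:

```python
import numpy as np
from scipy.integrate import quad

sigma2 = 0.2   # noise strength sigma^2, small compared to the barrier height
U = lambda x: x**4 / 4.0 - x**2 / 2.0
m1, M = -1.0, 0.0   # left minimum and position of the barrier top

# Exact mean first exit time, Eq. (2.47)
inner = lambda x: quad(lambda y: np.exp(-2.0 * U(y) / sigma2), -np.inf, x)[0]
T_exact = (2.0 / sigma2) * quad(
    lambda x: np.exp(2.0 * U(x) / sigma2) * inner(x), m1, M)[0]

# Arrhenius approximation, Eq. (2.49), with alpha = 1 and beta = 1/2
alpha, beta = 1.0, 0.5
T_approx = np.pi / (2.0 * np.sqrt(alpha * beta)) \
    * np.exp(2.0 * (U(M) - U(m1)) / sigma2)

print(T_exact, T_approx)   # the two values agree closely for small sigma2
```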

The analytical solutions for the transition probability function of Brownian motion in a double-well potential are much more difficult to find than in the case of the one-well potential, where the Fourier method was effective. These solutions can be obtained in terms of eigenfunctions of FPE (2.39), with the eigenvalues determining the rates of decay to the stationary state [16, 22]. Here, we focus on numerical simulations of the process, which are addressed by solving the associated SDE:

$$ dX\left( t \right) = - \frac{dU}{dx}(X(t))dt + \sigma \cdot dW\left( t \right) $$
(2.50)

By using the finite difference technique and \( W\left( {s + t} \right) = W\left( s \right) + N\left( {0,1} \right)t^{1/2} \), where N(0,1) is a random variable normally distributed with zero average and unit variance, one obtains the following approximate updating formula:

$$ x\left( {t + \Delta t} \right) \approx x\left( t \right) - \frac{dU}{dx}\left( {x(t)} \right)\Delta t + \sigma \cdot N\left( {0,1} \right)\left( {\Delta t} \right)^{1/2} $$
(2.51)

Simulated sample paths of Brownian motion in a Landau potential and variants thereof are plotted in Fig. 2.9. The Landau potential is a standard example of a double-well potential, with \( U(x) = - (b/2)x^{2} + dx^{4} \), where b and d are positive constants.
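
A minimal sketch of the Euler scheme (2.51) for the Landau potential, with illustrative parameter values; for a sufficiently large noise strength, the simulated particle occasionally hops over the barrier between the two wells:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
b, d, sigma = 1.0, 0.25, 0.6            # Landau coefficients and noise strength
dU = lambda x: -b * x + 4.0 * d * x**3  # derivative of U(x) = -(b/2)x^2 + d*x^4
dt, n = 0.01, 50_000

x = np.empty(n + 1)
x[0] = -np.sqrt(b / (4.0 * d))   # start in the left minimum
for k in range(n):
    # Eq. (2.51): drift down the potential gradient plus a Gaussian kick
    x[k + 1] = x[k] - dU(x[k]) * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal()
```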

Fig. 2.9 Simulated sample paths of Brownian motion in various potentials \( U(x) = (bx_{s} )x - (b/2)x^{2} + dx^{4} \) and the associated spectra for different values of b, \( x_{s} \), and d

2.1.7 Pink (1/f) Noise

Pink noise, also known as "1/f noise", is a stochastic process with a power spectral density inversely proportional to frequency. The name "pink noise" is often extended to any noise with a power spectral density of the form \( 1/f^{\alpha } \), where α is usually close to 1. Pink noise can be considered an intermediate class of noise between white noise, obtained for α = 0, and Wiener noise, featuring a spectrum with α = 2.

Pink noise was first observed experimentally by Johnson in 1925 [23], while he was trying to measure the noise spectrum in triode vacuum tubes. In addition to the white noise spectrum predicted by Schottky [24], he also observed an unexpected 1/f noise at low frequency. In the following years this strange noise appeared again and again in many different electrical devices, as well as in systems from other areas of science and technology, such as biology, astronomy, geophysics, psychology, and economics [25, 26]. Several examples are provided in Fig. 2.10.

Fig. 2.10 Examples of 1/f noises. Curves are illustrative, based on data from the indicated sources. Adjacent pairs of tick marks on the horizontal axis beneath each figure indicate one decade of frequency. Reprinted with permission from [26]

Although these phenomena are widespread in nature and their analysis has led to more than 1500 scientific publications [27], a unified explanation is still missing. An early approach, proposed by Johnson [23] and Schottky [24], was to consider the superposition of various OU relaxation processes with different relaxation rates. This model was successful in explaining the pink noise in vacuum tubes but less suited to other cases from the area of electronics. Another idea was to look for diffusion processes as possible origins of pink noise. That was not very difficult from the mathematical point of view, but failed at giving consistent physical meaning to the mathematical assumptions used to derive the 1/f spectrum [28]. Following Mandelbrot's work [29] on fractals, pink noise has often been associated with fractal phenomena due to its scale invariance, i.e. it does not change if the scales of frequency or time are multiplied by a common factor. Moreover, since various nonlinear systems have fractal attractors, many researchers have looked for dynamical systems with complex behavior mimicking a noise process. Pioneering work in this direction was performed by Bak, Tang, and Wiesenfeld, who introduced the so-called self-organized criticality as an explanation for 1/f noise [30, 31]. In conclusion, numerous models have been designed to explain the origin of pink noise and generate its characteristics. Although no universal approach has been developed, ad hoc models have been quite successful in studying these ubiquitous phenomena.

The simulations of pink noise used in this book are based on the generation of white noise processes and Fourier transforms. If Wiener noise, featuring a 1/f² spectrum, can be interpreted as the integral of white noise, then pink noise, featuring a 1/f spectrum, can be seen as a kind of half-integral of white noise. Let us consider that ξ(t) is a sample path of white noise and compute its Fourier transform. Then, dividing the result by \( \omega^{1/2} \) and taking the inverse Fourier transform, one obtains a function of time, denoted by p(t), which defines a sample path of pink noise. This procedure can be mathematically expressed as follows:

$$ \begin{aligned} p(t) & = \frac{1}{2\pi }\int\limits_{ - \infty }^{\infty } {\left( {\int\limits_{ - \infty }^{\infty } {\xi (\tilde{t})e^{{j\omega \tilde{t}}} d\tilde{t}} } \right)} \,\omega^{ - 1/2} e^{j\omega t} d\omega \\ & = \int\limits_{ - \infty }^{\infty } {\xi (\tilde{t})\left( {\frac{1}{2\pi }\int\limits_{ - \infty }^{\infty } {\omega^{ - 1/2} e^{{j\omega (\tilde{t} - t)}} d\omega } } \right)} \,d\tilde{t} \\ \end{aligned} $$
(2.52)

This equality shows explicitly that pink noise can be constructed as a linear convolution of white noise with a specific kernel (or Green's function) and explains the time correlations in pink noise. Since there are various types of white noise depending on the probability distribution, a given 1/f spectrum can also be associated with a variety of pink noise processes, including Gaussian and Laplace noises (Fig. 2.11).
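
A discrete version of this half-integration can be written with FFTs. The sketch below assumes NumPy's FFT conventions and leaves the overall normalization arbitrary:

```python
import numpy as np

rng = np.random.default_rng(seed=4)
n = 2**14
xi = rng.standard_normal(n)          # sample path of white Gaussian noise

spectrum = np.fft.rfft(xi)
freqs = np.fft.rfftfreq(n)           # nonnegative frequency bins
spectrum[1:] /= np.sqrt(freqs[1:])   # divide by omega^{1/2}, cf. Eq. (2.52)
spectrum[0] = 0.0                    # drop the (divergent) DC component

pink = np.fft.irfft(spectrum, n)     # sample path of pink (1/f) noise
```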

Fig. 2.11 Simulated sample paths of pink noise (left) and the associated spectra (right) for Gaussian and Laplace distributions

2.1.8 Other Classes of Colored Noise

In general, colored noise is the complementary notion to white noise, including noises with a flat spectrum over only a finite frequency band and noises with non-flat spectra. In other words, the samples of a colored noise are correlated with each other. As previously mentioned, white noise bears a physical inconsistency, since it requires infinite energy. Practically, all real noises are colored to some degree, and pure white noise is only used in theoretical analyses due to its simplicity.

The Wiener process featuring a 1/f² spectrum, pink noise characterized by a 1/f spectrum, and Ornstein-Uhlenbeck noise with its Lorentzian spectrum are the most common models of colored noise. However, in practice one encounters a large variety of colored noises, so modeling noise with an arbitrary spectrum is desired. Our approach generalizes the technique used in the previous section for simulating pink noise. Thus, let us consider an arbitrary positive frequency function f(ω), sought as the noise spectrum. One first generates numerically an i.i.d. process ξ(t) in the time domain, as is done in the white noise case. This process is then converted to the frequency domain by using standard FFT techniques. In order to obtain the desired colored noise c(t), one has to multiply the flat spectrum of the converted signal by the square root of the chosen function and convert the signal back to the time domain. This procedure can be mathematically expressed as follows:

$$ \begin{aligned} c(t) & = \frac{1}{2\pi }\int\limits_{ - \infty }^{\infty } {\left( {\int\limits_{ - \infty }^{\infty } {\xi (\tilde{t})e^{{j\omega \tilde{t}}} d\tilde{t}} } \right)} \,\sqrt {f(\omega )} e^{j\omega t} d\omega \\ & = \int\limits_{ - \infty }^{\infty } {\xi (\tilde{t})\left( {\frac{1}{2\pi }\int\limits_{ - \infty }^{\infty } {\sqrt {f(\omega )} e^{{j\omega (\tilde{t} - t)}} d\omega } } \right)} \,d\tilde{t} \\ \end{aligned} $$
(2.53)

It should be noted that the computational cost for the generation of the colored noise is relatively small, scaling as \( O\left( {n\log n} \right) \), where \( n \) is the length of the signal.
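
The same FFT-based shaping works for any target spectrum f(ω). The following sketch (the function name colored_noise is our own) generates, for example, blue or violet noise by passing the appropriate spectral shape:

```python
import numpy as np

def colored_noise(n, shape, rng=None):
    """Generate n samples of noise whose PSD is proportional to shape(f)."""
    if rng is None:
        rng = np.random.default_rng()
    xi = rng.standard_normal(n)            # start from white Gaussian noise
    spectrum = np.fft.rfft(xi)
    freqs = np.fft.rfftfreq(n)
    spectrum *= np.sqrt(shape(freqs))      # multiply by sqrt(f), cf. Eq. (2.53)
    return np.fft.irfft(spectrum, n)

blue = colored_noise(2**14, lambda f: f)         # PSD proportional to f
violet = colored_noise(2**14, lambda f: f**2)    # PSD proportional to f^2
```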

In Fig. 2.12a, sample paths of colored noise with a spectrum directly proportional to f (represented in Fig. 2.12b) are generated from white Gaussian noise by using the algorithm previously described. This type of noise is known as blue noise and is often detected and used in image processing. Efficient algorithms for dithering were developed by using blue noise. It was found that retina cells are arranged in a blue-noise-type pattern, which provides good visual resolution [32]. Simulated sample paths of a colored noise with a spectrum directly proportional to f² are plotted in Fig. 2.13a and the corresponding averaged spectrum is represented in Fig. 2.13b. This is known as violet noise and can be seen as a derivative of white noise. It is apparent that infinite-band blue or violet noise also requires infinite energy, so such noises can only exist on a finite band. Figure 2.14b shows a finite-band spectrum with a triangular shape, which has a violet part and a Brownian part. The sample paths associated with this spectrum are represented in the left part of that figure.

Fig. 2.12 Sample paths of blue noise (left) and the associated spectra (right)

Fig. 2.13 Sample paths of violet noise (left) and the associated spectra (right)

Fig. 2.14 Sample paths of colored noise (left) with the finite-band triangular spectrum represented in the right figure

Many other classes of noise can be defined based on the characteristics of their spectra, but they are much less frequently encountered in hysteretic systems and consequently, they are not addressed in this book. However, they can be easily generated and applied to specific applications by using the Noise Module of HysterSoft and by following the procedure presented above.

In conclusion, the most common noise models in hysteretic systems have been presented, along with the main numerical techniques used in this book for noise simulations. It has been emphasized that noise may also play a constructive role in nonlinear systems, in opposition to the general image of noise as a nuisance. Regardless of their positive or negative roles, it is clear that a physical system is influenced by internal or external noise, leading to a stochastic behavior of the system output. The next part of the chapter is devoted to the theory of stochastic processes defined on graphs, which proved to be naturally suited to the stochastic analysis of hysteretic system outputs.

2.2 Stochastic Processes Defined on Graphs

This section is devoted to introducing the theory of stochastic processes defined on graphs, which was recently developed by Freidlin and Wentzell. Their papers [33, 34] are used as a guide for presenting the basic concepts of this theory. First, several definitions and general properties of stochastic processes are discussed, stressing the relation between transition probabilities of Markov processes and semigroups of contractions. This relation allows the characterization of diffusion processes defined on a graph, which is addressed in the second part of this section and is later applied to the analytical study of hysteretic systems with stochastic input. Readers without a background in measure theory and functional analysis might find this theoretical construction difficult to follow; they may pass directly to Sect. 2.3.3, where the theory is applied to the case of the Ornstein-Uhlenbeck process defined on a graph.

2.2.1 General Properties of Diffusion Processes

Consider a probability space {Ω, F, P}, where Ω is the set of outcomes known as the sample space, F is a collection of subsets of Ω which forms a σ-algebra, and P is the probability measure returning the probability of a specific event in Ω. In addition, let us consider two real intervals X (phase space) and T (time interval). Let us recall that a stochastic process is a family of random variables {X(t)}, t ∈ T, defined on Ω with values in X. For each fixed ω ∈ Ω, a function x: T → X is obtained as x(t) = X(t)(ω) and is known as the trajectory or sample path of the process X(t). A stochastic process is called (right) continuous if "almost all" of its trajectories are (right) continuous, where "almost all" means a property valid on a subset of Ω which has measure 1.

The collection of probability distribution functions \( f_{{t_{1} t_{2} \ldots t_{r} }} \) of the random variables (X(t₁), X(t₂), …, X(t_r)), for any natural number r and any t₁, t₂, …, t_r ∈ T, is known as the finite-dimensional family of distributions of the process X(t). In general, the finite-dimensional family of distributions does not uniquely define a stochastic process, but there is a large class of stochastic processes for which it determines an "almost" unique continuous stochastic process. All processes considered in this section satisfy this property.

A homogeneous Markov process with respect to a non-decreasing system of σ-algebras \( N_{t} \subset F \), where \( t \in T = \left[ {0,\infty } \right) \), is by definition a pair formed by a stochastic process X(t) and a collection of probability measures \( P_{x} \), \( x \in X \), on {Ω, F}, which satisfy the following conditions:

  (1)

    for any t, random variable X(t) is measurable with respect to σ-algebra N t ;

  (2)

    for any t and any Borel set \( \Upgamma \subset X \), \( P(t,\Upgamma |x) = P_{x} (X(t) \in \Upgamma ) \) is a Borel function with respect to variable x;

  (3)

    \( P(0,X\backslash \{ x\} |x) = 0 \);

  (4)

    if \( t,u \in T,\,t \le u,\,x \in X,\, \)and \( \Upgamma \, \subset X \) is a Borel set, then equality

    $$ P_{x} \{ X(u) \in \Upgamma |N_{t} \} = P(u - t,\Upgamma |x) $$

    is satisfied almost surely with respect to the measure \( P_{x} \), where \( P_{x} \{ A|N_{t} \} \) represents the conditional probability of the event A with respect to the σ-algebra \( N_{t} \);

  (5)

    if \( u \ge 0 \), then for each \( \omega \in \Upomega \) there exists \( \omega^{\prime} \in \Upomega \) such that the equality

    $$ \left( {X(t + u)} \right)(\omega^{\prime}) = \left( {X(t)} \right)(\omega ) $$

    is satisfied for all t.

Intuitively, Markov processes can be interpreted as stochastic processes without memory. The definition above considers that the process X(t) is defined for any \( t \in [0,\infty ) \). However, it should be noted that many problems lead to processes \( \left( {X( \cdot )} \right)(\omega ) \) that are defined only for a finite range \( [0,\xi (\omega )] \), where the random variable \( \xi (\omega ) \) is called the terminal time. Since no such processes are used in this book, we have simplified this definition to a certain extent.

The notion of homogeneity for a Markov process is directly related to property (5), which implies the invariance of the set of Markov process trajectories under time translation. The function P(t,Γ|x) is called the transition probability function of the Markov process and determines, to a certain degree of equivalence, the stochastic process. Thus, the properties and analysis of Markov processes are often reduced to the properties and analysis of transition probabilities. For a rigorous foundation of this schematic presentation, the reader may consult the monographs by Dynkin [34] and Mandl [35].

A Markov process can be associated with a semigroup of contractions \( S_{t} \) acting on the Banach space B of bounded and measurable functions on X, endowed with the supremum norm. It is defined by the formula:

$$ (S_{t} f)(x) = \int_{X} {f(y)} P(t,dy|x) $$
(2.54)

The infinitesimal generator A of this semigroup, and hence of the associated Markov process, is defined by the following formula:

$$ Af = \mathop {\lim }\limits_{t \to 0} \frac{{S_{t} f - f}}{t} $$
(2.55)

where the convergence is considered in the supremum norm. In general, A cannot be defined for all elements of B. A special problem related to the definition of the infinitesimal generator is the boundary condition (\( x \in Fr\{ X\} \)). Thus, different types of behavior of the Markov process at the phase space boundary correspond to different boundary conditions for the functions f defining the domain D(A) of the infinitesimal generator. For each function \( f \in D(A) \), the function \( u_{t} (x) = S_{t} f(x) \) is the unique (bounded) solution of the following Cauchy problem:

$$ \frac{{\partial u_{t} (x)}}{\partial t} = Au_{t} (x),\quad \mathop {\lim }\limits_{t \to 0} u_{t} (x) = f(x) $$
(2.56)

If the transition probability of the stochastic process is continuous, then the semigroup \( S_{t} \) (and hence the infinitesimal generator A) uniquely determines this transition probability and the entire finite-dimensional family of distributions of the Markov process.

An important class of Markov processes is formed by the diffusion processes, which require some additional restrictions on the transition probability functions. Let us assume that, for each \( x \in X \), the following limits exist (the first one for every \( \varepsilon > 0 \)):

$$ \mathop {\lim }\limits_{t \to 0} t^{ - 1} \int\limits_{|y - x| > \varepsilon } {P(t,dy|x)} = 0 $$
(2.57)
$$ \mathop {\lim }\limits_{t \to 0} t^{ - 1} \int\limits_{X} {(y - x)P(t,dy|x)} = b(x) $$
(2.58)
$$ \mathop {\lim }\limits_{t \to 0} t^{ - 1} \int\limits_{X} {(y - x)^{2} P(t,dy|x)} = \sigma^{2} (x) $$
(2.59)

where the function b(x) is known as the drift coefficient and σ²(x) ≥ 0 as the diffusion coefficient of the transition probability, and hence of the associated Markov process. A Markov process satisfying these conditions is called a diffusion process. Note that the action of the infinitesimal generator associated with a diffusion process on functions of class C²(X) is given by:

$$ Af = \mathop {\lim }\limits_{t \to 0} t^{ - 1} \left[ {\int\limits_{X} {f(y)P(t,dy|x) - f(x)} } \right] = \frac{1}{2}\sigma^{2} (x)\frac{{d^{2} f}}{{dx^{2} }}(x) + b(x)\frac{df}{dx}(x) $$
(2.60)

which clarifies, to some extent, the conditions imposed in the definition of diffusion processes. This relationship also suggests a deep connection between diffusion processes and elliptic differential operators.
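Formula (2.60) can be checked numerically. In the following minimal sketch, the left-hand side \( (S_{t} f - f)/t \) is estimated by Monte Carlo simulation of sampled paths and compared with the right-hand side; the drift b(x) = −x, the constant diffusion σ = 1, the test function f(x) = sin x, and the Euler–Maruyama discretization are arbitrary choices made only for this illustration.

```python
import numpy as np

# Monte Carlo check of formula (2.60): the finite difference (S_t f - f)/t,
# estimated from simulated paths, is compared with Af = (1/2) sigma^2 f'' + b f'.
# Drift b(x) = -x, sigma = 1 and f(x) = sin(x) are assumptions of this sketch.
rng = np.random.default_rng(0)
sigma = 1.0
x0, t = 0.3, 0.01
n_steps, n_paths = 100, 200_000
dt = t / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):                 # Euler-Maruyama sampling of X(t)
    x += -x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

lhs = (np.sin(x).mean() - np.sin(x0)) / t                 # (S_t f - f)(x0)/t
rhs = 0.5 * sigma**2 * (-np.sin(x0)) + (-x0) * np.cos(x0)  # (2.60) at x0
print(lhs, rhs)                          # agree up to O(t) and sampling error
```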

The differential operator G defined by the formula:

$$ G = \frac{1}{2}\sigma^{2} (x)\frac{{\partial^{2} }}{{\partial x^{2} }} + b(x)\frac{\partial }{\partial x} $$
(2.61)

is known as the differential generator of the diffusion process. Under weak regularity conditions imposed on the drift and diffusion coefficients, the diffusion process is uniquely determined by its differential generator, meaning that any two processes with the same differential generator induce the same distribution in the space of trajectories (sample paths).

If there is a positive real constant K such that, for any \( x,y \in X \) and any t, these coefficients satisfy:

  • Lipschitz condition:
    $$ \left| {b(x,t) - b(y,t)} \right| + \left| {\sigma (x,t) - \sigma (y,t)} \right| \le K\left| {x - y} \right| ; $$
    (2.62)
  • Growth condition:
    $$ \left| {b(x,t)} \right|^{2} + \left| {\sigma (x,t)} \right|^{2} \le K\left( {1 + \left| x \right|^{2} } \right) $$
    (2.63)

then there exists a unique fundamental solution, denoted by ρ(t,y|x), of the equation \( \partial u/\partial t = Gu \) satisfying the appropriate initial and boundary conditions. This solution is precisely the transition probability density associated with the given diffusion process. Thus,

$$ P\left( {t,\Upgamma |x} \right) = \int\limits_{\Upgamma } {\rho \left( {t,y|x} \right)dy} $$
(2.64)

The equation \( \partial \rho /\partial t = G\rho \) is called the backward Kolmogorov equation of the diffusion process. Finally, let us note that a stochastic process is called conservative if \( P\left( {t,X|x} \right) = 1 \) for any t and x.
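The relation between a diffusion process and its fundamental solution can be illustrated by simulation: paths generated with the Euler–Maruyama scheme sample the transition probability density ρ(t,y|x). In the sketch below, the linear drift −b(x − x s ) and constant diffusion σ are assumed for illustration, in which case the fundamental solution is the Gaussian density that appears later in (2.78).

```python
import numpy as np

# Sketch: Euler-Maruyama paths of the diffusion with drift -b(x - x_s) and
# constant diffusion sigma sample the fundamental solution rho(t, y | x0).
# All parameter values are illustrative assumptions.
rng = np.random.default_rng(1)
b, sigma, x_s = 1.0, 1.0, -0.5
x0, t = 0.0, 0.5
n_steps, n_paths = 500, 100_000
dt = t / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += -b * (x - x_s) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# For this linear drift the fundamental solution is Gaussian, with mean
# x_s + (x0 - x_s) e^{-bt} and variance sigma^2 (1 - e^{-2bt}) / (2b).
print(x.mean(), x_s + (x0 - x_s) * np.exp(-b * t))
print(x.var(), sigma**2 * (1 - np.exp(-2 * b * t)) / (2 * b))
```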

2.2.2 Diffusion Processes Defined on Graphs

The theory of stochastic processes on graphs has been developed relatively recently by Freidlin and Wentzell [36]. This theory was first applied to the study of random perturbations of Hamiltonian dynamical systems [33, 36]. It was then realized that this mathematical technique is naturally suited to the analysis of noise in hysteretic systems [37–41]. In this section, we give a short description of diffusion processes on a graph based on the previously cited references and adapted to the problems of interest in this book. At the end of the section, the initial-boundary value problem for the transition probability density of a diffusion process Z(t) defined on a graph is derived.

Consider a connected graph Z with vertices V 1 ,…, V m and edges E 1 ,…, E n (see the example in Fig. 2.15). A coordinate x j is taken on each edge E j , and the distance between two points on the graph is the length of the shortest path connecting those two points, measured using these coordinates. Note that the definition of Markov processes given in the previous section is easily generalized to the case when the phase space is the graph Z, by replacing the symbol X representing a real interval with the symbol Z representing the connected graph. Similarly, a semigroup of contractions S t and an infinitesimal generator A are associated with the Markov process.

Fig. 2.15

The graph on which the diffusion process is defined

Several edges can meet at a vertex V k ; we will write E j  ~ V k if the edge E j has the vertex V k as one of its ends. For a function \( f:Z \to R \) and an edge E j  ~ V k , \( (df/dx_{j} )(V_{k} ) \) denotes the derivative of the function f with respect to the coordinate x j , taken in the direction pointing into the edge E j . A diffusion process X j (t) is associated with each edge E j and is defined by the differential generator:

$$ G_{j} = b_{j} \left( {x_{j} } \right)\frac{\partial }{{\partial x_{j} }} + \frac{{\sigma_{j}^{2} \left( {x_{j} } \right)}}{2}\frac{{\partial^{2} }}{{\partial x_{j}^{2} }} $$
(2.65)

where b j and σ j are continuous functions that satisfy the Lipschitz condition (2.62) and the growth condition (2.63).

For any nonnegative constants α k and χ kj , with \( \alpha_{k} + \sum\limits_{{j:E_{j} \sim V_{k} }} {\chi_{kj} } > 0 \) for k = 1,…,m, one can define an operator A as:

$$ Af({\mathbf{z}}) = G_{j} f({\mathbf{z}}),\quad {\text{for}}\;{\mathbf{z}} \in E_{j} $$
(2.66)

for all functions f from C(Z) that satisfy the following conditions:

  (1) f is twice continuously differentiable in the interior of each edge E j ;

  (2) if E j  ~ V k , then \( \mathop {\lim }\limits_{{{\mathbf{z}} \to V_{k} ,{\mathbf{z}} \in E_{j} }} G_{j} f\left( {\mathbf{z}} \right) \) exists and is independent of j; this limit will be denoted by Gf (V k );

  (3) for each vertex V k

    $$ \alpha_{k} Af\left( {V_{k} } \right) = \sum\limits_{{j:E_{j} \sim V_{k} }} {\chi_{kj} \frac{\partial f}{{\partial x_{j} }}\left( {V_{k} } \right)} ; $$
    (2.67)

these conditions at the vertices will be referred to in what follows as the “gluing” conditions.

The following result has been proven by Freidlin and Wentzell in Ref. [36]:

Theorem

The operator A defined above is the infinitesimal generator of a continuous semigroup of linear operators on C(Z) corresponding to a continuous conservative Markov process Z (t) on the graph Z.

Conversely, let Z (t) be a continuous conservative Markov process defined on the graph Z whose trajectories coincide, up to the exit from the edge E j , with the diffusion process generated by the operator G j defined by formula ( 2.65 ), and whose associated semigroup of linear operators maps C(Z) into itself. Then there exist unique nonnegative constants χ kj and α k satisfying \( \alpha_{k} + \sum\limits_{{j:E_{j} \sim V_{k} }} {\chi_{kj} } > 0 \) such that the infinitesimal generator associated with the Markov process Z (t) is the operator A defined above.

Intuitively, the constants α k describe how much time the process spends at V k , while the constants χ kj are (roughly speaking) proportional to the probabilities that the process will “move” from vertex V k along the edges E j .

For the models used in the next chapters the following facts can be established:

  • Since the process has no delay at the vertices, α k  = 0 for all k.

  • At each interior vertex of the graph three edges are connected; there is zero probability that the process will move from the vertex along one of them (so the associated χ kj coefficient is zero), while random motions along the other two are equally probable (so the associated χ kj coefficients are equal to one).

The graphs shown in Fig. 2.16 represent typical vertex connections for the problems discussed in this book. For these graphs there is zero probability of moving from V 1 along the edge E 3 and equal probabilities of moving from V 1 along the edges E 1 and E 2 . Consequently, χ 13  = 0, χ 11  = 1, χ 12  = 1 and, taking into account the coordinates on each edge, the following gluing condition can be derived for vertex V 1 :

$$ \frac{{df_{{E_{1} }} }}{dx}\left( {x_{1} } \right) = \frac{{df_{{E_{2} }} }}{dx}(x_{1} ) $$
(2.68)
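This condition follows directly from the gluing relation (2.67): with α 1  = 0 and the coefficients above, and assuming (as suggested by Fig. 2.16) that the common coordinate x increases along E 1 towards V 1 and along E 2 away from V 1 , the inward derivatives along E 1 and E 2 are \( - df_{{E_{1} }} /dx \) and \( + df_{{E_{2} }} /dx \), so that

$$ 0 = \chi_{11} \frac{df}{{dx_{1} }}\left( {V_{1} } \right) + \chi_{12} \frac{df}{{dx_{2} }}\left( {V_{1} } \right) = - \frac{{df_{{E_{1} }} }}{dx}\left( {x_{1} } \right) + \frac{{df_{{E_{2} }} }}{dx}\left( {x_{1} } \right), $$

which is exactly (2.68).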

Similar assertions are valid for each interior vertex and analogous interface conditions can be derived.

Fig. 2.16

Typical graph configurations used in the analysis of hysteretic systems

The next task is to specify the partial differential equations for the transition probability density \( \rho \left( {\left. {t,{\mathbf{z}}} \right|{\mathbf{z}}_{0} ,0} \right) \) corresponding to the Markov process Z(t). The following notation for the transition probability density is used on each edge E j :

$$ \rho^{\left( j \right)} \left( {\left. {t,x} \right|{\mathbf{z}}_{0} ,0} \right) = \left. {\rho \left( {\left. {t,{\mathbf{z}}} \right|{\mathbf{z}}_{0} ,0} \right)} \right|_{{{\mathbf{z}} \in E_{j} }} $$
(2.69)

According to the theory of Markov processes, the following equality is valid for \( \rho^{\left( j \right)} \):

$$ \sum\limits_{j = 1}^{n} {\int\limits_{{E_{j} }} {f\frac{{\partial \rho^{(j)} }}{\partial t}dx = \sum\limits_{j = 1}^{n} {\int\limits_{{E_{j} }} {\left( {G_{j} f} \right)\rho^{(j)} dx} } } } $$
(2.70)

Integrating by parts in formula (2.70) and taking into account the interface conditions presented above, together with the fact that f can be chosen arbitrarily in the domain of the infinitesimal generator A, one finds that the transition probability density \( \rho \left( {\left. {t,{\mathbf{z}}} \right|{\mathbf{z}}_{0} ,0} \right) \) satisfies the following forward Kolmogorov equation:

$$ \frac{{\partial \rho^{(j)} \left( {\left. {x,t} \right|{\mathbf{z}}_{0} ,0} \right)}}{\partial t} + \hat{L}_{j} \rho^{(j)} \left( {\left. {x,t} \right|{\mathbf{z}}_{0} ,0} \right) = 0\;{\text{on each edge}}\;E_{j} $$
(2.71)

where

$$ \hat{L}_{j} \rho = - \frac{1}{2}\frac{{\partial^{2} }}{{\partial x^{2} }}\left( {\sigma_{j}^{2} \left( x \right)\rho } \right) + \frac{\partial }{\partial x}\left( {b_{j} \left( x \right)\rho } \right) $$
(2.72)

together with “vertex”-type boundary conditions, which express the continuity of the transition probability density at the passage between two of the edges meeting at a vertex (for example, edges E 1 and E 2 in the graphs shown in Fig. 2.16) and impose a zero boundary condition on the third edge connected at that vertex. In addition, the probability current has to be conserved at each vertex. For vertex V 1 in Fig. 2.16, these conditions can be expressed analytically as follows:

$$ \begin{aligned} & \rho_{1} \left( {\left. {x_{1} ,t} \right|{\mathbf{z}}_{0} ,0} \right) = \rho_{2} \left( {\left. {x_{1} ,t} \right|{\mathbf{z}}_{0} ,0} \right)\,,\quad \rho_{3} \left( {\left. {x_{1} ,t} \right|{\mathbf{z}}_{0} ,0} \right) = 0\,, \\ & \frac{{\partial \rho_{2} }}{\partial x}\left( {\left. {x_{1} ,t} \right|{\mathbf{z}}_{0} ,0} \right) + \frac{{\partial \rho_{3} }}{\partial x}\left( {\left. {x_{1} ,t} \right|{\mathbf{z}}_{0} ,0} \right) = \frac{{\partial \rho_{1} }}{\partial x}\left( {\left. {x_{1} ,t} \right|{\mathbf{z}}_{0} ,0} \right)\,. \\ \end{aligned} $$
(2.73)

while the transition probability density decays to zero at the exterior (infinite) ends of the graph. In addition, the initial condition is \( \rho \left( {\left. {{\mathbf{z}},0} \right|{\mathbf{z}}_{0} ,0} \right) = \delta ({\mathbf{z}} - {\mathbf{z}}_{0} ) \).

Fig. 2.17

The graph on which the Ornstein-Uhlenbeck process is defined

2.2.3 Examples: Ornstein-Uhlenbeck Processes on Graphs

In this section, it is shown how the theory of stochastic processes on graphs can be applied to specific problems. Examples of Ornstein-Uhlenbeck processes defined on graphs are presented, and the explicit forms of the initial-boundary value problems for the associated transition probability functions are derived and solved.

In the first example, the transition probability function \( \rho \left( {\left. {t,{\mathbf{z}}} \right|{\mathbf{z}}_{0} ,0} \right) \) can be expressed by its four components \( \rho_{i} \left( {\left. {t,x} \right|x_{0} ,0} \right) \) corresponding to the four edges \( E_{i} \) of the graph represented in Fig. 2.17, which are defined on the following intervals:

$$ \begin{aligned} & \rho_{1} \left( {\left. {t,x} \right|x_{0} ,0} \right)\,\,{\text{is}}\,\,{\text{defined}}\,{\text{for}}\,x \in \left( { - \infty ,\beta } \right) \\ & \rho_{2} \left( {\left. {t,x} \right|x_{0} ,0} \right)\,\,{\text{is}}\,\,{\text{defined}}\,{\text{for}}\,x \in \left( {\beta ,\alpha } \right) \\ & \rho_{3} \left( {\left. {t,x} \right|x_{0} ,0} \right)\,\,{\text{is}}\,\,{\text{defined}}\,{\text{for}}\,x \in \left( {\beta ,\alpha } \right) \\ & \rho_{4} \left( {\left. {t,x} \right|x_{0} ,0} \right)\,\,{\text{is}}\,\,{\text{defined}}\,{\text{for}}\,x \in \left( {\alpha ,\infty } \right) \\ \end{aligned} $$
(2.74)

Since the components \( \rho_{i} \left( {\left. {t,x} \right|x_{0} ,0} \right) \) are associated with Ornstein-Uhlenbeck processes on these intervals, they are the solutions of the corresponding Fokker–Planck equations defined on the intervals given in formula (2.74):

$$ \frac{\partial }{\partial t}\rho_{i} \left( {x,t|x_{0} ,0} \right) = \frac{\partial }{\partial x}\left[ {b(x - x_{s} )\rho_{i} \left( {x,t|x_{0} ,0} \right)} \right] + \frac{{\sigma^{2} }}{2}\frac{{\partial^{2} }}{{\partial x^{2} }}\rho_{i} \left( {x,t|x_{0} ,0} \right), $$
(2.75)

where x 0 is the coordinate of the initial point z 0 located on the edge \( E_{{i_{0} }} \). The solutions of Eqs. (2.75) are subject to the initial condition \( \rho_{i} \left( {x,0|x_{0} ,0} \right) = \delta_{{ii_{0} }} \delta (x - x_{0} ) \) and to the following “vertex” boundary conditions:

$$ \begin{array}{*{20}c} {\rho_{1} (\beta^{ - } ,t|x_{0} ,0) = \rho_{2} (\beta^{ + } ,t|x_{0} ,0)} \\ {\rho_{3} (\beta^{ + } ,t|x_{0} ,0) = 0} \\ {\rho_{3} (\alpha^{ - } ,t|x_{0} ,0) = \rho_{4} (\alpha^{ + } ,t|x_{0} ,0)} \\ {\rho_{2} (\alpha^{ - } ,t|x_{0} ,0) = 0} \\ \end{array} $$
(2.76)
$$ \begin{aligned} & \frac{{\partial \rho_{1} }}{\partial x}(\beta^{ - } ,t|x_{0} ,0) = \frac{{\partial \rho_{2} }}{\partial x}(\beta^{ + } ,t|x_{0} ,0) + \frac{{\partial \rho_{3} }}{\partial x}(\beta^{ + } ,t|x_{0} ,0) \\ & \frac{{\partial \rho_{4} }}{\partial x}(\alpha^{ + } ,t|x_{0} ,0) = \frac{{\partial \rho_{2} }}{\partial x}(\alpha^{ - } ,t|x_{0} ,0) + \frac{{\partial \rho_{3} }}{\partial x}(\alpha^{ - } ,t|x_{0} ,0) \\ \end{aligned} $$

while \( \rho_{1} \left( {x,t|x_{0} ,0} \right) \) and \( \rho_{4} \left( {x,t|x_{0} ,0} \right) \) have to decay to zero as x goes to minus infinity and plus infinity, respectively.

In order to solve these initial-boundary value problems, let us observe that the sum \( \hat{\rho }\left( {\left. {x,t} \right|x_{0} ,0} \right) \) of these components, defined in Eq. (2.77), satisfies Eq. (2.75) for all real values of x except α and β, while the vertex boundary conditions ensure the continuity and differentiability of this function at α and β, as well as its decay to zero at \( \pm \infty \).

$$ \hat{\rho }\left( {\left. {x,t} \right|x_{0} ,0} \right) = \left\{ \begin{gathered} \rho_{1} \left( {\left. {x,t} \right|x_{0} ,0} \right)\,\,{\text{for}}\,x \in \left( { - \infty ,\beta } \right) \hfill \\ \rho_{2} \left( {\left. {x,t} \right|x_{0} ,0} \right)\, + \,\rho_{3} \left( {\left. {x,t} \right|x_{0} ,0} \right)\,\,{\text{for}}\,x \in \left( {\beta ,\alpha } \right) \hfill \\ \rho_{4} \left( {\left. {x,t} \right|x_{0} ,0} \right)\,\,{\text{for}}\,x \in \left( {\alpha ,\infty } \right) \hfill \\ \end{gathered} \right. $$
(2.77)

By continuity extension, it is clear that \( \hat{\rho }\left( {\left. {x,t} \right|x_{0} ,0} \right) \) satisfies Eq. (2.75) on the entire real axis, subject to the initial condition \( \delta (x - x_{0} ) \) and zero decay at infinity as boundary conditions. Consequently, \( \hat{\rho }\left( {\left. {x,t} \right|x_{0} ,0} \right) \) is the standard time-dependent transition probability function of the OU process, which was found in Sect. 2.1.5 to have expression (2.30). As a result,

$$ \frac{\sqrt b }{{\sqrt {\pi \sigma^{2} (1 - e^{ - 2bt} )} }}\text{e}^{{ - \frac{{b(x - x_{s} - (x_{0} - x_{s} )e^{ - bt} )^{2} }}{{\sigma^{2} (1 - e^{ - 2bt} )}}}} = \left\{ \begin{array}{ll} \rho_{1} \left( {\left. {x,t} \right|x_{0} ,0} \right)\,,\,\,x \in \left( { - \infty ,\beta } \right) \hfill \\ \rho_{2} \left( {\left. {x,t} \right|x_{0} ,0} \right)\, + \,\rho_{3} \left( {\left. {x,t} \right|x_{0} ,0} \right)\,,\,\,x \in \left( {\beta ,\alpha } \right) \hfill \\ \rho_{4} \left( {\left. {x,t} \right|x_{0} ,0} \right)\,,\,\,x \in \left( {\alpha ,\infty } \right) \hfill \\ \end{array} \right. $$
(2.78)

The transition probability function of the OU process is thus completely defined on the edges E 1 and E 4 of the graph. In addition, the sum of the two transition probability components corresponding to edges E 2 and E 3 is determined, so only one of them remains to be found in order to solve the problem completely. Let us assume that i 0  ≠ 2 and solve for \( \rho_{2} \) (otherwise \( \rho_{3} \) would be chosen). The function \( \rho_{2} \) is the solution of Eq. (2.75) on the interval \( \left( {\beta ,\alpha } \right) \), subject to the initial condition \( \rho_{2} \left( {x,0|x_{0} ,0} \right) = 0 \) and to the boundary conditions:

$$ \rho_{2} \left( {\left. {\beta ,t} \right|x_{0} ,0} \right) = \frac{\sqrt b }{{\sigma \sqrt {\pi (1 - e^{ - 2bt} )} }}e^{{ - \frac{{b(\beta - x_{s} - (x_{0} - x_{s} )e^{ - bt} )^{2} }}{{\sigma^{2} (1 - e^{ - 2bt} )}}}} ,\quad \rho_{2} \left( {\left. {\alpha ,t} \right|x_{0} ,0} \right) = 0 $$
(2.79)

By using the Laplace transformation \( \tilde{\rho }_{2} (x,s|x_{0} ,0) = \int_{0}^{\infty } {e^{ - st} \rho_{2} (x,t|x_{0} ,0)\,} dt \), Eq. (2.75) becomes:

$$ \frac{{\sigma^{2} }}{2}\frac{{\partial^{2} \tilde{\rho }_{2} }}{{\partial x^{2} }}\left( {x,s|x_{0} ,0} \right) + b(x - x_{s} )\frac{{\partial \tilde{\rho }_{2} }}{\partial x}\left( {x,s|x_{0} ,0} \right) + (b - s)\tilde{\rho }_{2} \left( {x,s|x_{0} ,0} \right) = 0 $$
(2.80)

which can be solved in terms of special mathematical functions. Hence, by considering \( \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\rho }_{2} (x,s|x_{0} ,0) = \tilde{\rho }_{2} (x,s|x_{0} ,0)\exp (b(x - x_{s} )^{2} /(2\sigma^{2} )) \), one obtains the following equation:

$$ \frac{{\sigma^{2} }}{2}\frac{{\partial^{2} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\rho }_{2} }}{{\partial x^{2} }}\left( {x,s|x_{0} ,0} \right) - \left[ {\frac{{b^{2} (x - x_{s} )^{2} }}{{2\sigma^{2} }} + s - \frac{b}{2}} \right]\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\rho }_{2} \left( {x,s|x_{0} ,0} \right) = 0 $$
(2.81)

This equation has two linearly independent solutions, known as the parabolic cylinder functions \( U\left( {\frac{s}{b} - \frac{1}{2},\frac{{\sqrt {2b} }}{\sigma }(x - x_{s} )} \right) \) and \( V\left( {\frac{s}{b} - \frac{1}{2},\frac{{\sqrt {2b} }}{\sigma }(x - x_{s} )} \right) \) [42]. Consequently, the solution can be expressed as a linear combination of U and V, with the s-dependent coefficients determined from the boundary conditions. In conclusion, a closed-form analytical expression for the transition probability \( \rho_{2} \) can be found in terms of inverse Laplace transforms of the parabolic cylinder functions.
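A numerical sketch of this route is given below, assuming mpmath's implementations pcfu/pcfv of the parabolic cylinder functions U and V and its invertlaplace routine; the Gaver-Stehfest inversion is chosen because it samples the transform only at real positive s, where the numerically computed transform of the boundary data converges. All parameter values are illustrative, and this is a sketch rather than a production implementation.

```python
import mpmath as mp

# Sketch of the inverse-Laplace evaluation of rho_2 (illustrative parameters).
mp.mp.dps = 30
b, sigma, x_s = mp.mpf(1), mp.mpf(1), mp.mpf('-0.5')
beta, alpha, x0 = mp.mpf(-1), mp.mpf(1), mp.mpf(0)

def g(t):
    # Boundary value (2.79): the free-line OU density evaluated at x = beta.
    v = sigma**2 * (1 - mp.exp(-2*b*t))
    m = x_s + (x0 - x_s) * mp.exp(-b*t)
    return mp.sqrt(b / (mp.pi * v)) * mp.exp(-b * (beta - m)**2 / v)

def g_tilde(s):
    # Laplace transform of the boundary data, by numerical quadrature.
    return mp.quad(lambda t: mp.exp(-s*t) * g(t), [0, mp.inf])

def W(x, s):
    # Solution of (2.80) vanishing at x = alpha, built from U and V.
    a = s/b - mp.mpf(1)/2
    z = lambda u: mp.sqrt(2*b) * (u - x_s) / sigma
    pc = (mp.pcfu(a, z(x)) * mp.pcfv(a, z(alpha))
          - mp.pcfv(a, z(x)) * mp.pcfu(a, z(alpha)))
    return mp.exp(-b * (x - x_s)**2 / (2*sigma**2)) * pc

def rho2(x, t):
    # Scale W to match the boundary data at beta, then invert the transform.
    F = lambda s: g_tilde(s) * W(x, s) / W(beta, s)
    return mp.invertlaplace(F, t, method='stehfest')

print(rho2(mp.mpf('-0.5'), mp.mpf(1)))
```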

Much simpler analytical results can be found for the stationary distributions. By taking t → ∞ in expression (2.78), one obtains:

$$ \hat{\rho }^{st} \left( x \right) = \frac{\sqrt b }{\sigma \sqrt \pi }e^{{ - \frac{{b(x - x_{s} )^{2} }}{{\sigma^{2} }}}} = \left\{ \begin{array}{ll} \rho_{1}^{st} \left( x \right)\,,\,\,x \in \left( { - \infty ,\beta } \right) \hfill \\ \rho_{2}^{st} \left( x \right)\, + \,\rho_{3}^{st} \left( x \right)\,,\,\,x \in \left( {\beta ,\alpha } \right) \hfill \\ \rho_{4}^{st} \left( x \right)\,,\,\,x \in \left( {\alpha ,\infty } \right) \hfill \\ \end{array} \right. $$
(2.82)

while \( \rho_{2}^{st} \) has to satisfy the equation:

$$ \frac{{\sigma^{2} }}{2}\frac{{\partial^{2} }}{{\partial x^{2} }}\rho_{2}^{st} \left( x \right) + \frac{\partial }{\partial x}\left[ {b(x - x_{s} )\rho_{2}^{st} (x)} \right] = 0 $$
(2.83)

and boundary conditions:

$$ \rho_{2}^{st} \left( \beta \right) = \sqrt {\frac{b}{{\pi \sigma^{2} }}} e^{{ - \frac{{b(\beta - x_{s} )^{2} }}{{\sigma^{2} }}}} ,\quad \rho_{2}^{st} \left( \alpha \right) = 0 $$
(2.84)

It is known that the general solution of the linear differential equation (2.83) has the following form:

$$ \rho_{2}^{st} \left( x \right) = e^{{ - \frac{{b(x - x_{s} )^{2} }}{{\sigma^{2} }}}} \left( {c\int\limits_{x}^{\alpha } {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} + d} \right) $$
(2.85)

where c and d are constants that can be found from boundary conditions (2.84). The null-condition at \( x = \alpha \) implies d = 0, while the condition at \( x = \beta \) leads to:

$$ c = \sqrt {\frac{b}{{\pi \sigma^{2} }}} \left( {\int\limits_{\beta }^{\alpha } {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} } \right)^{ - 1} $$
(2.86)

In conclusion, the stationary probability function of the Ornstein-Uhlenbeck process defined on the graph Z has the following expression; a sample obtained for a noise input characterized by b = 1, σ = 1, x s  = −0.5, and vertex coordinates β = −1 and α = 1 is plotted in Fig. 2.18:

$$ \begin{aligned} \rho_{1}^{st} \left( x \right) & = \frac{\sqrt b }{\sigma \sqrt \pi }e^{{ - \frac{{b(x - x_{s} )^{2} }}{{\sigma^{2} }}}} ,\,\,\,\,\,x \in \left( { - \infty ,\beta } \right) \\ \rho_{2}^{st} \left( x \right) & = \frac{\sqrt b }{\sigma \sqrt \pi }e^{{ - \frac{{b(x - x_{s} )^{2} }}{{\sigma^{2} }}}} \left( {\int\limits_{x}^{\alpha } {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} } \right)\left( {\int\limits_{\beta }^{\alpha } {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} } \right)^{ - 1} ,\,\,\,\,x \in \left( {\beta ,\alpha } \right) \\ \rho_{3}^{st} \left( x \right) & = \frac{\sqrt b }{\sigma \sqrt \pi }e^{{ - \frac{{b(x - x_{s} )^{2} }}{{\sigma^{2} }}}} \left( {\int\limits_{\beta }^{x} {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} } \right)\left( {\int\limits_{\beta }^{\alpha } {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} } \right)^{ - 1} ,\,\,\,\,x \in \left( {\beta ,\alpha } \right) \\ \rho_{4}^{st} \left( x \right) & = \frac{\sqrt b }{\sigma \sqrt \pi }e^{{ - \frac{{b(x - x_{s} )^{2} }}{{\sigma^{2} }}}} ,\,\,\,\,\,x \in \left( {\alpha ,\infty } \right) \\ \end{aligned} $$
(2.87)
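The components (2.87) are straightforward to evaluate by numerical quadrature. A minimal sketch, assuming the parameter values quoted above for Fig. 2.18, is given below; the final line checks that \( \rho_{2}^{st} + \rho_{3}^{st} \) reproduces the free-line stationary density inside (β, α).

```python
import numpy as np
from scipy.integrate import quad

# Direct evaluation of the stationary components (2.87); the parameter values
# are those quoted above for Fig. 2.18.
b, sigma, x_s = 1.0, 1.0, -0.5
beta, alpha = -1.0, 1.0

gauss = lambda x: np.sqrt(b) / (sigma * np.sqrt(np.pi)) \
                  * np.exp(-b * (x - x_s)**2 / sigma**2)
w = lambda y: np.exp(b * (y - x_s)**2 / sigma**2)    # inverse Gaussian weight
Z = quad(w, beta, alpha)[0]                          # normalizing integral

rho1 = gauss                                          # x < beta
rho2 = lambda x: gauss(x) * quad(w, x, alpha)[0] / Z  # beta < x < alpha
rho3 = lambda x: gauss(x) * quad(w, beta, x)[0] / Z   # beta < x < alpha
rho4 = gauss                                          # x > alpha

x = 0.0
print(rho2(x) + rho3(x), gauss(x))   # the two interior components sum to gauss
```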

An approximation of the transition probability function defined by (2.75) and (2.76) can be obtained by replacing, in (2.87), the stationary distribution \( \hat{\rho }^{st} \left( x \right) \) of the Ornstein-Uhlenbeck process on the real line with the time-dependent transition probability function \( \hat{\rho }\left( {\left. {x,t} \right|x_{0} ,0} \right) \) of the Ornstein-Uhlenbeck process on the real line:

$$ \begin{aligned} \rho_{1} \left( {x,t|x_{0} ,0} \right) & = \frac{\sqrt b }{{\sqrt {\pi \sigma^{2} (1 - e^{ - 2bt} )} }}\text{e}^{{ - \frac{{b(x - x_{s} - (x_{0} - x_{s} )e^{ - bt} )^{2} }}{{\sigma^{2} (1 - e^{ - 2bt} )}}}} ,\,\,\,\,\,x \in \left( { - \infty ,\beta } \right) \\ \rho_{2} \left( {x,t|x_{0} ,0} \right) & \approx \frac{\sqrt b }{{\sqrt {\pi \sigma^{2} (1 - e^{ - 2bt} )} }}\text{e}^{{ - \frac{{b(x - x_{s} - (x_{0} - x_{s} )e^{ - bt} )^{2} }}{{\sigma^{2} (1 - e^{ - 2bt} )}}}} \int\limits_{x}^{\alpha } {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} \left( {\int\limits_{\beta }^{\alpha } {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} } \right)^{ - 1} ,\,\,\,\,x \in \left( {\beta ,\alpha } \right) \\ \rho_{3} \left( {x,t|x_{0} ,0} \right) & \approx \frac{\sqrt b }{{\sqrt {\pi \sigma^{2} (1 - e^{ - 2bt} )} }}\text{e}^{{ - \frac{{b(x - x_{s} - (x_{0} - x_{s} )e^{ - bt} )^{2} }}{{\sigma^{2} (1 - e^{ - 2bt} )}}}} \int\limits_{\beta }^{x} {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} \left( {\int\limits_{\beta }^{\alpha } {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} } \right)^{ - 1} ,\,\,\,\,x \in \left( {\beta ,\alpha } \right) \\ \rho_{4} \left( {x,t|x_{0} ,0} \right) & = \frac{\sqrt b }{{\sqrt {\pi \sigma^{2} (1 - e^{ - 2bt} )} }}\text{e}^{{ - \frac{{b(x - x_{s} - (x_{0} - x_{s} )e^{ - bt} )^{2} }}{{\sigma^{2} (1 - e^{ - 2bt} )}}}} ,\,\,\,\,\,x \in \left( {\alpha ,\infty } \right) \\ \end{aligned} $$
(2.88)
Fig. 2.18

Stationary probability components for an Ornstein-Uhlenbeck process defined on graph Z

Samples of these transition probability functions, obtained for a noise input characterized by b = 1, σ = 1, x s  = −0.5, x 0  = 0 and vertex coordinates β = −1 and α = 1, are plotted in Fig. 2.19 at selected instants of time.

Fig. 2.19

Evolution of the transition probability components for an Ornstein-Uhlenbeck process defined on graph Z
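These components can also be validated by direct Monte Carlo simulation. In the sketch below (parameter values as above; the step size and path count are arbitrary assumptions), the position performs the free-line OU dynamics, while an edge label records through which vertex the interval (β, α) was last entered; this is precisely the memory encoded by the boundary conditions (2.76), and the fractions of paths carrying each label estimate the integrals of the corresponding components.

```python
import numpy as np

# Monte Carlo sketch of the diffusion on the graph Z of Fig. 2.17.
rng = np.random.default_rng(2)
b, sigma, x_s = 1.0, 1.0, -0.5
beta, alpha = -1.0, 1.0
x0, edge0 = 0.0, 2            # initial point on edge E2 (chosen arbitrarily)
t, n_steps, n_paths = 2.0, 2000, 100_000
dt = t / n_steps

x = np.full(n_paths, x0)
edge = np.full(n_paths, edge0)
for _ in range(n_steps):
    x += -b * (x - x_s) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    edge[x <= beta] = 1                     # on E1
    edge[x >= alpha] = 4                    # on E4
    inside = (x > beta) & (x < alpha)
    edge[inside & (edge == 1)] = 2          # re-entered through beta -> E2
    edge[inside & (edge == 4)] = 3          # re-entered through alpha -> E3

# Occupation probabilities of the four edges; for large t they approach the
# integrals of the stationary components (2.87).
for j in (1, 2, 3, 4):
    print(j, np.mean(edge == j))
```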

In the next chapters, it is proven that the stochastic analysis of various hysteretic systems driven by OU processes can be reduced to the analysis of OU processes defined on graphs and the solutions derived here will be useful in expressing the stochastic characteristics of the output.

In the second example, we consider the same graph Z represented in Fig. 2.17, but the Ornstein-Uhlenbeck processes \( X^{i} (t) \) on the edges are governed by different differential generators:

$$ G_{i} = - b\left( {x - x_{s}^{i} } \right)\frac{\partial }{{\partial x}} + \frac{{\sigma^{2} }}{2}\frac{{\partial^{2} }}{{\partial x^{2} }} $$
(2.89)

where \( x_{s}^{1} = x_{s}^{2} = \tilde{x}_{s} \) and \( x_{s}^{3} = x_{s}^{4} = x_{s} \). On edges E 1 and E 2 , the process can be interpreted as a Brownian motion in a parabolic potential defined on \( ( - \infty ,\alpha ] \) reaching its minimum at \( \tilde{x}_{s} \), while on edges E 3 and E 4 it can be interpreted as a Brownian motion in a parabolic potential defined on \( [\beta ,\infty ) \) reaching its minimum at \( x_{s} \). A graphic representation of these potentials is shown in Fig. 2.20, with continuous and dashed lines, respectively. The associated transition probability functions are the solutions of the following Fokker-Planck equations on the corresponding intervals:

$$ \frac{\partial }{\partial t}\rho_{i} \left( {x,t|x_{0} ,0} \right) = \frac{\partial }{\partial x}\left[ {b(x - x_{s}^{i} )\rho_{i} \left( {x,t|x_{0} ,0} \right)} \right] + \frac{{\sigma^{2} }}{2}\frac{{\partial^{2} }}{{\partial x^{2} }}\rho_{i} \left( {x,t|x_{0} ,0} \right), $$
(2.90)

and are subject to the initial and boundary conditions described in the previous example and partially given in (2.76). A similar procedure based on the Laplace transformation can be used to find closed-form analytical expressions for these components of the transition probability function in terms of inverse Laplace transforms of the parabolic cylinder functions. Much simpler analytical results can be found for the stationary distributions. The components of the stationary distribution for the OU process defined on the graph Z are the solutions of the following equations:

$$ \frac{{\sigma^{2} }}{2}\frac{{\partial^{2} }}{{\partial x^{2} }}\rho_{i}^{st} \left( x \right) + \frac{\partial }{\partial x}\left[ {b(x - x_{s}^{i} )\rho_{i}^{st} \left( x \right)} \right] = 0,\quad i = 1, \ldots ,4 $$
(2.91)

and are subject to the following boundary conditions:

$$ \rho_{1}^{st} (\beta^{ - } ) = \rho_{2}^{st} (\beta^{ + } ),\;\rho_{3}^{st} (\beta^{ + } ) = 0,\;\rho_{3}^{st} (\alpha^{ - } ) = \rho_{4}^{st} (\alpha^{ + } ),\;\rho_{2}^{st} (\alpha^{ - } ) = 0 $$
(2.92)
$$ \frac{{\partial \rho_{1}^{st} }}{\partial x}(\beta^{ - } ) = \frac{{\partial \rho_{2}^{st} }}{\partial x}(\beta^{ + } ) + \frac{{\partial \rho_{3}^{st} }}{\partial x}(\beta^{ + } ),\;\frac{{\partial \rho_{4}^{st} }}{\partial x}(\alpha^{ + } ) = \frac{{\partial \rho_{2}^{st} }}{\partial x}(\alpha^{ - } ) + \frac{{\partial \rho_{3}^{st} }}{\partial x}(\alpha^{ - } ) $$

while \( \rho_{1}^{st} \left( x \right) \) and \( \rho_{4}^{st} \left( x \right) \) have to decay to zero as x goes to minus infinity and plus infinity, respectively.

Fig. 2.20

The potential wells for the Brownian motion representing the noise characterization for edges E 1 and E 2 (continuous line) and E 3 and E 4 (dashed line), respectively

It is known that the general solutions of the linear differential equations (2.91) can be written in the following forms:

$$ \begin{aligned} \rho_{i}^{st} \left( x \right) & = e^{{ - \frac{{b(x - \tilde{x}_{s} )^{2} }}{{\sigma^{2} }}}} \left( {c_{i} \int\limits_{x}^{\alpha } {e^{{\frac{{b(y - \tilde{x}_{s} )^{2} }}{{\sigma^{2} }}}} dy} + d_{i} } \right),\,\,\,\,i = 1,2 \\ \rho_{i}^{st} \left( x \right) & = e^{{ - \frac{{b(x - x_{s} )^{2} }}{{\sigma^{2} }}}} \left( {c_{i} \int\limits_{\beta }^{x} {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} + d_{i} } \right),\,\,\,\,i = 3,4 \\ \end{aligned} $$
(2.93)

where c i and d i are constants that are determined in our problem from the boundary conditions (2.92). The null conditions \( \rho_{2}^{st} (\alpha^{ - } ) = 0 \) and \( \rho_{3}^{st} (\beta^{ + } ) = 0 \) imply d 2  = d 3  = 0, while the zero decay at minus infinity and plus infinity of \( \rho_{1}^{st} \left( x \right) \) and \( \rho_{4}^{st} \left( x \right) \), respectively, implies c 1  = c 4  = 0. Moreover, \( \rho_{1}^{st} (\beta^{ - } ) = \rho_{2}^{st} (\beta^{ + } ) \) implies \( d_{1} = c_{2} \int_{\beta }^{\alpha } {\exp (b(y - \tilde{x}_{s} )^{2} /\sigma^{2} )dy} \), while \( \rho_{3}^{st} (\alpha^{ - } ) = \rho_{4}^{st} (\alpha^{ + } ) \) leads to the relation \( d_{4} = c_{3} \int_{\beta }^{\alpha } {\exp (b(y - x_{s} )^{2} /\sigma^{2} )dy} \). The boundary conditions for the derivatives in (2.92) imply c 2  = c 3 , which will be denoted by c. As a result,

$$ \begin{aligned} \rho_{1}^{st} \left( x \right) & = c\left( {\int\limits_{\beta }^{\alpha } {e^{{\frac{{b(y - \tilde{x}_{s} )^{2} }}{{\sigma^{2} }}}} dy} } \right)e^{{ - \frac{{b(x - \tilde{x}_{s} )^{2} }}{{\sigma^{2} }}}} ,\quad \rho_{2}^{st} \left( x \right) = c\left( {\int\limits_{x}^{\alpha } {e^{{\frac{{b(y - \tilde{x}_{s} )^{2} }}{{\sigma^{2} }}}} dy} } \right)e^{{ - \frac{{b(x - \tilde{x}_{s} )^{2} }}{{\sigma^{2} }}}} , \\ \rho_{3}^{st} \left( x \right) & = c\left( {\int\limits_{\beta }^{x} {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} } \right)e^{{ - \frac{{b(x - x_{s} )^{2} }}{{\sigma^{2} }}}} ,\quad \rho_{4}^{st} \left( x \right) = c\left( {\int\limits_{\beta }^{\alpha } {e^{{\frac{{b(y - x_{s} )^{2} }}{{\sigma^{2} }}}} dy} } \right)e^{{ - \frac{{b(x - x_{s} )^{2} }}{{\sigma^{2} }}}} , \\ \end{aligned} $$
(2.94)

where c is determined from the normalization condition for the total stationary probability function \( \int_{ - \infty }^{\beta } {\rho_{1}^{st} (x)dx} + \int_{\beta }^{\alpha } {\rho_{2}^{st} (x)dx} + \int_{\beta }^{\alpha } {\rho_{3}^{st} (x)dx} + \int_{\alpha }^{\infty } {\rho_{4}^{st} (x)dx} = 1 \). An example of the stationary distribution (2.94) obtained for a noise input characterized by b = 1, σ = 1, x s  = −0.5, \( \tilde{x}_{s} = 0.5 \), and vertex coordinates β = −1 and α = 1 is plotted in Fig. 2.21.
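The normalization constant c can be computed by numerical quadrature; a minimal sketch is given below, assuming the parameter values quoted above for Fig. 2.21 and truncating the infinite edges at ±10 for the numerical integration.

```python
import numpy as np
from scipy.integrate import quad

# Normalization of the two-well stationary components (2.94).
b, sigma = 1.0, 1.0
x_s, x_s_t = -0.5, 0.5                    # x_s and x_s-tilde
beta, alpha = -1.0, 1.0

g = lambda x, m: np.exp(-b * (x - m)**2 / sigma**2)   # Gaussian factor
w = lambda y, m: np.exp(b * (y - m)**2 / sigma**2)    # inverse weight

I_t = quad(lambda y: w(y, x_s_t), beta, alpha)[0]     # constant d_1 / c
I_s = quad(lambda y: w(y, x_s), beta, alpha)[0]       # constant d_4 / c

rho1 = lambda x: I_t * g(x, x_s_t)                                # up to c
rho2 = lambda x: quad(lambda y: w(y, x_s_t), x, alpha)[0] * g(x, x_s_t)
rho3 = lambda x: quad(lambda y: w(y, x_s), beta, x)[0] * g(x, x_s)
rho4 = lambda x: I_s * g(x, x_s)

total = (quad(rho1, -10, beta)[0] + quad(rho2, beta, alpha)[0]
         + quad(rho3, beta, alpha)[0] + quad(rho4, alpha, 10)[0])
c = 1.0 / total                           # normalization constant in (2.94)
print(c)
```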

Fig. 2.21

Stationary probability components for a “two-wells” Ornstein-Uhlenbeck process defined on graph Z

This example of an OU process governed by different equations on each edge of the graph is used to describe the stochastic behavior of bistable hysteretic systems in which the noise is state dependent. In Chap. 5 we will prove that coherence resonance phenomena take place in such systems driven by state-dependent noise.