Glossary

Additive noise:

Random fluctuations that add to the phase space flow of model systems.

Center manifold theorem:

Mathematical theorem describing the slaving principle in complex systems.

Slaving principle:

Units in a complex system that interact nonlinearly with other units evolve on different time scales. Close to instability points, fast units obey the dynamics of slow units and are enslaved by them. Such units may be spatial modes in spatially extended systems or neural ensembles in neural populations.

Introduction

The dynamics of natural systems is complex, e.g., due to the variety of processes involved and their interactions on different temporal and spatial scales. Several of these processes appear to be random in nature, i.e., they cannot be predicted by known laws. In this context, it is not necessary to know whether these processes are truly random or whether we simply do not know their deterministic law so that they merely appear random. The insight that unknown laws of processes may be replaced or modelled by laws for random processes is helpful in modelling complex systems. Examples of such replacements are manifold; we mention model parametrization in meteorology (Noilhan and Planton 1989) and stimulus parametrization in biology (Doiron et al. 2004).

When including random processes (or noise) in dynamical models, it is important how they enter. If the randomness is taken into account in multiplicative factors, e.g., parametrizing the unknown underlying dynamics of such a factor, we call this multiplicative noise. Its effect has been studied extensively over the last decades in physics and mathematics, e.g., see the books of Horsthemke and Lefever (1984) and Garcia-Ojalvo and Sancho (1999). Conversely, additive noise is included in a model when the randomness is simply added to the phase space flow. For instance, in a model of differential equations in time, additive noise is added directly to the temporal derivative of the state. For a long time, it has been known that multiplicative noise easily shifts the stability of systems, i.e., may shift bifurcations, whereas additive noise does not. This paradigm has been challenged recently in studies of spatially extended systems (Hutt et al. 2007, 2008; Hutt 2008) and delayed systems (Lefebvre et al. 2012; Lefebvre and Hutt 2013; Hutt et al. 2012; Hutt and Lefebvre 2016). These studies show that additive noise may induce bifurcation shifts close to bifurcation points. This recent finding is illustrated and explained in a later section. Moreover, additive noise may not only affect the stability of systems close to instability points but may also tune intrinsic time scales. We show in a later section that this effect occurs both close to and far from the bifurcation point.

Taking a close look at complex systems subjected to additive noise, one learns that additive noise affects the coupling of the system's elements. Such elements may be spatial modes in spatially extended systems or microscopic elements, such as single neurons, whose interactions generate novel macroscopic order parameter modes, such as the macroscopic population dynamics. To illustrate such an interaction before we apply the concept to complex problems, we briefly present the major elements of the slaving principle in synergetics (Haken 1996, 2004) in the subsequent section and relate it to its mathematical counterpart, the center manifold theorem.

Slaving Principle and Center Manifold Theorem

We start with the illustration of the major concept of the center manifold theorem and finally put the concept into a physical context to explain the slaving principle.

For illustration, let us consider the dynamical system

$$ {\displaystyle \begin{array}{l}\dot{x}=\alpha x+{ax}^3+ xy-{xy}^2\\ {}\dot{y}=-y+{bx}^2+{x}^2y\end{array}} $$
(1)

with x, y, α, a, b ∈ ℝ. The stationary state is the fixed point x = y = 0. Linearizing about this fixed point yields the two eigenvalues λ1 = α and λ2 = −1. Hence the fixed point is an asymptotically stable node for α < 0 and a saddle point for α > 0.

At the Stability Threshold

For the moment, we assume α = 0. Then, close to the fixed point, y evolves in the stable subspace spanned by the eigenvector (0, 1)t of the linearized system with corresponding eigenvalue λ2 < 0, and x evolves in the center subspace. We observe that x = 0 is an invariant manifold, namely the stable manifold of the origin, since on it dx/dt = 0 and dy/dt = −y. Hence, for initial points (0, y0)t, the system evolves on the stable manifold.

Now the question arises of how one can find the invariant manifold on which the origin is neutrally stable, corresponding to the eigenvalue λ1 = 0, i.e., we want to find the center manifold. The center manifold theorem (Carr 1981) applies and states that y = h(x) close to the origin, where h(·) is a nonlinear function with h(0) = 0 and dh(x)/dx = 0 at x = 0. This stipulates

$$ {\displaystyle \begin{array}{rl}\dot{y}& =\frac{\partial h}{\partial x}\dot{x}\\ {}-h(x)+{bx}^2+{x}^2h(x)& =\frac{\partial h}{\partial x}\left(\alpha x+{ax}^3+ xh(x)-{xh}^2(x)\right).\end{array}} $$

Inserting the polynomial ansatz

$$ h(x)={h}_2{x}^2+{h}_3{x}^3+\cdots $$
(2)

and sorting by orders of x, we gain h2 = b, h3 = 0, h4 = −b(2a + 2b − 1) and thus y = bx2 − b(2a + 2b − 1)x4 up to fourth order. Thus, for α = 0, the system on the center manifold obeys

$$ \dot{x}=\left(a+b\right){x}^3-b\left(2a+3b-1\right){x}^5+O\left({x}^6\right). $$

For a + b < 0, the origin is attractive, and the manifold is called a slow manifold close to the fixed point.
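
The coefficient matching above can be verified symbolically. The following is a minimal sketch using sympy (an illustration added here, not part of the cited works): it inserts the ansatz (2) into the invariance condition at α = 0 and recovers the coefficients and the reduced dynamics.

```python
# Symbolic check of the coefficient matching at alpha = 0 (sketch using sympy).
import sympy as sp

x, a, b, h2, h3, h4 = sp.symbols('x a b h2 h3 h4')
h = h2*x**2 + h3*x**3 + h4*x**4                     # ansatz (2) up to fourth order

xdot = a*x**3 + x*h - x*h**2                        # \dot{x} with y = h(x), alpha = 0
residual = sp.expand(-h + b*x**2 + x**2*h - sp.diff(h, x)*xdot)

coeffs = sp.Poly(residual, x).all_coeffs()[::-1]    # index k holds coefficient of x^k
sol = sp.solve(coeffs[2:5], [h2, h3, h4], dict=True)[0]   # match orders x^2, x^3, x^4
print(sol)                        # expected: {h2: b, h3: 0, h4: -b*(2*a + 2*b - 1)}
print(sp.expand(xdot.subs(sol)))  # leading terms: (a+b)*x**3 - b*(2*a+3*b-1)*x**5
```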

In physical terms, the variable x evolves on a much longer time scale than y, since the time scales are inversely proportional to the magnitudes of the corresponding eigenvalues of the linearized system. Moreover, the variable y follows the slow variable x on the center manifold. In other words, the slow variable x enslaves the fast variable y and determines the dynamics of the full system. This prominent role of x is the reason why it is called an order parameter. Hence, at bifurcation points, the slow variables enslave the fast variables. This slaving concept applies at all bifurcations that fulfil the rather general conditions of the center manifold theorem and allows one to describe most bifurcations observed in nature (Haken 1983), be it oscillatory instabilities in the laser (Haken 1985) or human motor-coordination phase transitions in the brain (Fuchs et al. 1992; Jirsa et al. 1995). By virtue of the generality of this concept, it is called the slaving principle. It is often formulated equivalently by an adiabatic approach in which the fast slaved variable decays rapidly and follows the slow order parameter dynamics (Haken 1996; Schanz and Pelster 2003; Schoener and Haken 1986).
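
A minimal numerical illustration of the slaving (with illustrative values a = −1, b = 0.5 and an arbitrary initial condition, chosen here for demonstration): integrating system (1) at α = 0, trajectories collapse rapidly onto y ≈ bx2 while x relaxes slowly.

```python
# Numerical illustration: y(t) collapses onto the slow manifold y ~ b*x^2.
# a, b and the initial condition are assumed illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

a, b, alpha = -1.0, 0.5, 0.0

def rhs(t, z):
    x, y = z
    return [alpha*x + a*x**3 + x*y - x*y**2,
            -y + b*x**2 + x**2*y]

sol = solve_ivp(rhs, (0.0, 50.0), [0.4, 0.3], dense_output=True, rtol=1e-8)
x, y = sol.sol(np.linspace(5.0, 50.0, 200))  # evaluate after the fast transient
print(np.max(np.abs(y - b*x**2)))            # small: y is slaved to x
```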

Close to the Stability Threshold

Now we consider the case α ≠ 0, i.e., the system does not necessarily evolve at the bifurcation point. This is the more general case. To still be able to apply the center manifold theorem, we augment the phase space by the variable α and reformulate the model (1) as

$$ {\displaystyle \begin{array}{l}\dot{\alpha}=0\, \\ {}\dot{x}=\alpha x+{ax}^3+ xy-{xy}^2\\ {}\dot{y}=-y+{bx}^2+{x}^2y.\end{array}} $$

The linearized problem about the fixed point (0, 0, 0)t has the eigenvalues λ1,2 = 0 and λ3 = −1, since the term αx is now treated as a nonlinear term. The variables α, x evolve in the center subspace, while y is still enslaved via y = h(α, x). The center manifold is defined to obey h(0, 0) = 0, ∂h/∂α = ∂h/∂x = 0 at (α, x)t = (0, 0)t. With the ansatz

$$ h\left(\alpha, x\right)={h}_{2,1}{\alpha}^2+{h}_{2,2}\alpha x+{h}_{2,3}{x}^2 $$

we obtain y = bx2 up to second order and the order parameter obeys

$$ \dot{x}=\alpha x+\left(a+b\right){x}^3+O\left({x}^4\right). $$

This solution naturally extends the case α = 0.

The discussion so far assumes deterministic dynamics; stochastic dynamics on center manifolds close to bifurcation points can be studied as well. This is shown in the subsequent section.

Additive Noise in Low-Dimensional Models: Stochastic Center Manifold Theory

The effects of additive noise emerge in multidimensional systems, e.g., in low-dimensional nondelayed systems or in infinite-dimensional delayed systems. The subsequent sections consider both cases.

Nondelayed Systems

Additive noise in systems close to the bifurcation point has been shown previously to trigger stochastic bifurcations (Boxler 1989; Arnold 1998; Schoener and Haken 1986). To see this, we consider here a reduced system of amplitude equations describing spatial modes of a stochastic Turing bifurcation (Hutt et al. 2007, 2008):

$$ {\displaystyle \begin{array}{l}{du}_c=\left({\alpha}_c{u}_c+2{\beta}_c{u}_0{u}_c+2{\gamma}_c{u}_c^3\right) dt\\ {}{du}_0=\left({\alpha}_0{u}_0+4{\beta}_0{u}_c^2\right) dt+\eta dW(t)\end{array}} $$

with the slow order parameter uc, the slaved fast mode u0, constants αc, α0, βc, β0, γc, and noise level η. The control parameter αc, the fast mode u0, and the order parameter uc are scaled as αc ∼ O(ε), u0 ∼ O(ε), and uc ∼ O(ε1/2). The noise process W(t) is a zero-mean Wiener process with 〈dW(t)dW(τ)〉 = 2δ(t − τ). Here, amplitudes and noise levels are taken into account up to order O(ε3/2). Applying the stochastic center manifold analysis (Boxler 1989; Xu and Roberts 1996; Hutt et al. 2007, 2008; Bloemker et al. 2005; Bloemker 2003) and an adiabatic Fokker–Planck approximation (Drolet and Vinals 1998, 2001; Hutt et al. 2007, 2008), we obtain, for large times and in terms of the scaled order parameter ūc and scaled time T, the Fokker–Planck equation (Hutt et al. 2008)

$$ \frac{\partial P\left({\overline{u}}_c,T\right)}{\partial T}=-\frac{\partial }{\partial {\overline{u}}_c}\left(\alpha {\overline{u}}_c+a{\overline{u}}_c^3\right)P\left({\overline{u}}_c,T\right). $$

with new constants α, a. We observe that, at this order, the additive noise dW(t) in the slaved fast mode u0 has no effect on the order parameter ūc.

At higher orders of amplitude and noise, O(ε5/2), in the same stochastic Turing bifurcation problem, the amplitudes obey

$$ {\displaystyle \begin{array}{l}{du}_c=\left({\alpha}_c{u}_c+{bu}_0{u}_c+2{\gamma}_c{u}_c^3+3{\gamma}_c{u}_c{u}_0^2\right) dt\\ {}{du}_0=\left({\alpha}_0{u}_0+4{\beta}_0{u}_c^2+{\beta}_0{u}_0^2+2{\gamma}_0{u}_0{u}_c^2\right) dt+\eta dW(t).\end{array}} $$

After an adiabatic Fokker–Planck approximation, we obtain the Fokker–Planck equation for the order parameter (Hutt et al. 2008)

$$ {\displaystyle \begin{array}{l}\frac{\partial P\left({\overline{u}}_c,T\right)}{\partial T}=-\frac{\partial }{\partial {\overline{u}}_c}\\ {}\left(\left(\alpha -{\alpha}_{th}\left(\eta \right)\right){\overline{u}}_c+C{\overline{u}}_c^3+D{\overline{u}}_c^5\right)\;P\left({\overline{u}}_c,T\right).\end{array}} $$
(3)

with the control parameter shift αth(η) ∼ η2. Figure 1 shows this shift of the bifurcation by additive noise.

Fig. 1

Additive noise shifts the bifurcation point. The stationary state ustat is the state at the maximum of the stationary probability density P(ūc, T) from Eq. (3). (a) η = 0. (b) η = 0.02 (modified from Fig. 9 in Hutt et al. 2008)

Equation (3) and Fig. 1 reveal that the bifurcation point of the order parameter uc is shifted by additive noise in the slaved mode u0. The underlying mechanism is known from multiplicative noise and can be understood as follows: the fast mode u0 is stochastic, and nonlinear coupling terms \( {u}_c{u}_0^n \) of even order n yield an effective shift of the control parameter, since \( \left\langle {u}_0^n\right\rangle \ne 0 \), whereas nonlinear coupling of odd order does not yield a shift, since \( \left\langle {u}_0^n\right\rangle =0 \). In sum, additive noise in a mode that is nonlinearly coupled at even order to the order parameter dynamics acts like multiplicative noise and hence tunes the bifurcation.
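
This mechanism can be made concrete with a toy model (a sketch under assumed parameter values, not the full Turing problem above): an order parameter coupled at even order to an Ornstein–Uhlenbeck mode driven by additive noise acquires the effective growth rate α + c〈u02〉, i.e., a noise-dependent shift of the bifurcation point.

```python
# Toy model (not the full Turing problem): even-order coupling u_c*u_0^2 to a
# stochastic fast mode shifts the effective growth rate of the order parameter.
# Noise convention as above: <dW dW> = 2 dt. All parameter values are assumed.
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps = 1e-3, 200_000
alpha, c, alpha0, eta = -0.02, -1.0, -1.0, 0.3

uc, u0, u0_sq = 0.1, 0.0, []
for _ in range(n_steps):
    uc += (alpha*uc + c*uc*u0**2 - uc**3) * dt                    # order parameter
    u0 += alpha0*u0*dt + eta*np.sqrt(2*dt)*rng.standard_normal()  # fast OU mode
    u0_sq.append(u0**2)

# averaging out the fast mode predicts the shifted effective growth rate:
print("alpha_eff ~", alpha + c*np.mean(u0_sq))  # ~ alpha - eta^2, shifted by noise
```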

Delayed Systems

The nonlinear coupling of stochastic modes occurs in systems where multiple elements couple nonlinearly. This is the case in high-dimensional systems, such as spatially extended systems (Hutt et al. 2008; Bloemker 2003), or in delayed systems. To illustrate the corresponding stochastic effect in delayed systems, let us consider the stochastic delay differential equation (Hutt et al. 2012)

$$ dx(t)=\left(-x(t)+\beta x\left(t-\tau \right)-\gamma {x}^3\left(t-\tau \right)\right) dt+\kappa dW(t) $$
(4)

with constants β, γ > 0, the noise level κ, and the Wiener process W(t). A stochastic center manifold analysis for delayed systems (Hutt and Lefebvre 2016; Lefebvre et al. 2012; Lefebvre and Hutt 2013; Hutt et al. 2012) permits the derivation of a delay-free stochastic order parameter equation on the center manifold. Applying an adiabatic approximation, the Fokker–Planck equation for the order parameter u reads, up to a certain order in noise and magnitude (Hutt et al. 2012),

$$ {\displaystyle \begin{array}{l}\frac{\partial P\left(u,t\right)}{\partial t}=-\frac{\partial }{\partial u}\left(\left[{A}_1+{A}_{1,\mathrm{shift}}\right]u+\left[{A}_3+{A}_{3,\mathrm{shift}}\right]{u}^3+{A}_5{u}^5+{A}_7{u}^7+{A}_9{u}^9\right)\\ {}\times P\left(u,t\right)+D\frac{\partial^2}{\partial {u}^2}P\left(u,t\right)\end{array}} $$
(5)

with A1,shift, A3,shift, D ∼ κ2 and constants A1, A3, A5, A7, A9. We observe that additive noise in delayed systems induces a stochastic bifurcation and shifts the bifurcation point (Hutt and Lefebvre 2016; Lefebvre et al. 2012; Lefebvre and Hutt 2013; Hutt et al. 2012). Figure 2 shows the stationary probability density functions of the original system (4) and of the Fokker–Planck equation of the order parameter (5) for two different delay values. We observe that the stationary probability density function of the order parameter Ps(u) is in good accordance with the original probability density function Ps(x) for small delays (a), while differences are visible for larger delays (b). In addition, increasing the noise level κ moves the maxima to smaller magnitudes and hence shifts the bifurcation. This effect of additive noise is new; it was previously known for multiplicative noise only.

Fig. 2

Stationary probability density functions of the original system Ps(x) and of the order parameter Ps(u) as a solution of Eq. (5). The functions Ps(u) (dotted line) and Ps(x) (solid line) are computed for τ = 0.5 (a) and τ = 1.0 (b) for noise levels κ = 0.005 (green), κ = 0.01 (red), and κ = 0.015 (black) (taken from Hutt et al. 2012 by permission)
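
For readers who wish to reproduce the qualitative behavior, a minimal Euler–Maruyama sketch of Eq. (4) with a delay history buffer follows; the parameter values are illustrative assumptions, not those of the cited study.

```python
# Euler-Maruyama sketch of the delayed SDE (4) with a history buffer; the
# stationary density of x is estimated from a long trajectory.
# Parameter values are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)
dt, tau = 1e-3, 1.0
beta, gamma, kappa = 1.1, 1.0, 0.01
n_delay = int(tau/dt)
n_steps, n_transient = 500_000, 100_000

x = np.zeros(n_delay + 1)       # x[0] holds x(t - tau), x[-1] holds x(t)
samples = []
for step in range(n_steps):
    x_tau = x[0]
    x_new = x[-1] + (-x[-1] + beta*x_tau - gamma*x_tau**3)*dt \
            + kappa*np.sqrt(dt)*rng.standard_normal()
    x = np.roll(x, -1)
    x[-1] = x_new
    if step > n_transient:
        samples.append(x_new)

hist, edges = np.histogram(samples, bins=100, density=True)  # estimate of Ps(x)
```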

Additive Noise in Discrete Network Models

To extend these results on additive noise to larger and more realistic systems, we now consider network models evolving far from bifurcation points.

Neural Mass Network

We consider a random network of N elements with activities un(t), n = 1, …, N, which evolve in time according to (Hutt et al. 2016)

$$ \frac{1}{\alpha}\frac{du_n}{dt}=-{u}_n+\frac{1}{N}\sum \limits_{m=1}^N{w}_{nm}f\left[{u}_m\left(t-\tau \right)\right]+{I}_n(t). $$
(6)

The network elements interact with each other with delay τ > 0; 1/α is the characteristic time scale of each element, and wnm is the random connection weight between elements n and m with

$$ {w}_{nm}=g+s{\eta}_{nm} $$
(7)

and constants g, s > 0 and the statistically uncorrelated variables ηnm with

$$ \sum \limits_{n=1}^N{\eta}_{nm}=0\, \forall m=1,\dots, N $$
(8)
$$ \sum \limits_{m=1}^N{\eta}_{nm}=0\, \forall n=1,\dots, N $$
(9)
$$ \frac{1}{N}\sum \limits_{n=1}^N{\eta}_{nm}{\eta}_{ln}=\left[\frac{1}{N}\sum \limits_{n=1}^N{\eta}_{nm}\right]\left[\frac{1}{N}\sum \limits_{n=1}^N{\eta}_{ln}\right]\, \forall l,m=1,\dots, N. $$
(10)

The last equation expresses the assumption that all columns and rows are statistically independent of each other.

The variable In = I0 + ξn(t) denotes the external input driving each element, with

$$ \sum \limits_{n=1}^N{\xi}_n(t)=0,\, \left\langle {\xi}_n{\xi}_m\right\rangle =2D{\delta}_{n,m}, $$
(11)

where 〈⋅〉 denotes the ensemble average, D is the noise intensity, and I0 is a spatially constant stimulus bias.

In neural systems, the model (6) describes the spatially coarse-grained potential of N spatial patches subjected to afferent activity from other neural populations (Hutt et al. 2016). The function f[·] represents the activation or output function of each element; typically, it has a sigmoidal shape. Figure 3 presents the topology of the network.

Fig. 3

Simple spatial network topology
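
A minimal simulation sketch of the network (6) with weights (7) reads as follows; all parameter values are illustrative assumptions, and the logistic output function is one possible choice of sigmoidal f.

```python
# Sketch: Euler integration of the delayed random network (6) with weights (7).
# All parameter values are assumed for illustration; f is a logistic sigmoid.
import numpy as np

rng = np.random.default_rng(2)
N, alpha, tau, dt = 100, 1.0, 1.0, 1e-2
g, s, I0, D = 2.0, 0.5, 0.5, 0.05

eta = rng.standard_normal((N, N))
eta -= eta.mean(axis=0, keepdims=True)   # enforce condition (8): zero column sums
eta -= eta.mean(axis=1, keepdims=True)   # enforce condition (9): zero row sums
W = g + s*eta                            # connection weights, Eq. (7)

def f(u):                                # sigmoidal output function
    return 1.0/(1.0 + np.exp(-u))

n_delay = int(tau/dt)
buf = np.zeros((n_delay + 1, N))         # rows: u(t - tau), ..., u(t)
for _ in range(20_000):
    u_now, u_tau = buf[-1], buf[0]
    xi = np.sqrt(2*D/dt)*rng.standard_normal(N)   # <xi_n xi_m> = 2D delta_nm
    u_new = u_now + alpha*(-u_now + (W @ f(u_tau))/N + I0 + xi)*dt
    buf = np.roll(buf, -1, axis=0)
    buf[-1] = u_new

print(buf[-1].mean())                    # network mean, cf. Eq. (12)
```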

Analysis of the Global Synchronization

In the following, we study the degree of global synchronization in the network by decomposing the activity into the network mean ū(t) and the deviations vn(t) from it,

$$ {u}_n(t)=\overline{u}(t)+{v}_n(t) $$
(12)

with

$$ \overline{u}(t)=\frac{1}{N}\sum \limits_{n=1}^N{u}_n(t),\, \sum \limits_{n=1}^N{v}_n(t)=0. $$
(13)

Then inserting (12) into (6) leads to

$$ {\displaystyle \begin{array}{l}\frac{1}{\alpha}\frac{d}{dt}\left(\overline{u}(t)+{v}_n(t)\right)=-\overline{u}(t)-{v}_n(t)\\ {}+\frac{1}{N}\sum \limits_{m=1}^N{w}_{nm}f\left[\overline{u}\left(t-\tau \right)+{v}_m\left(t-\tau \right)\right]\\ {}+{I}_0+{\xi}_n(t).\end{array}} $$
(14)

Global Mode

After averaging Eq. (14) over all elements N one obtains the evolution equation of the spatial mean

$$ {\displaystyle \begin{array}{ll}\frac{1}{\alpha}\frac{d}{dt}\overline{u}(t)=& -\overline{u}(t)\\ {}& +\underset{={S}_0}{\underbrace{\frac{1}{N^2}\sum \limits_{n,m=1}^N{w}_{nm}f\left[\overline{u}\left(t-\tau \right)+{v}_m\left(t-\tau \right)\right]}}+{I}_0.\end{array}} $$
(15)

At first we note that with (8)

$$ {\displaystyle \begin{array}{l}\frac{1}{N}\sum \limits_{n=1}^N{w}_{nm}=g+s\frac{1}{N}\sum \limits_{n=1}^N{\eta}_{nm}=g\\ {}\\ {}\forall m=1,\dots, N.\end{array}} $$
(16)

Hence

$$ {S}_0=g\frac{1}{N}\sum \limits_{m=1}^Nf\left[\overline{u}\left(t-\tau \right)+{v}_m\left(t-\tau \right)\right] $$
(17)

If we denote Vm = f[ū(t) + vm(t)] and regard the vm as samples of a random variable v with probability density function p(v, t) at time instant t, we can write

$$ {S}_0=g\,E\left[V\right]=g{\int}_{-\infty}^{\infty }f\left[\overline{u}(t)+v\right]\,p\left(v,t\right)\, dv $$
(18)

Then

$$ {\displaystyle \begin{array}{ll}\frac{1}{\alpha}\frac{d}{dt}\overline{u}(t)=& -\overline{u}(t)+g{\int}_{-\infty}^{\infty }f\left[\overline{u}\left(t-\tau \right)+v\right]\\ {}& \times p\left(v,t-\tau \right) dv+{I}_0.\end{array}} $$
(19)

Fluctuation Modes

Inserting Eq. (15) back into (14) yields

$$ {\displaystyle \begin{array}{l}\frac{1}{\alpha}\frac{d}{dt}{v}_n(t)=-{v}_n(t)+{\xi}_n(t)\\ {}+\frac{1}{N}\left[\underset{={S}_1}{\underbrace{\sum \limits_{m=1}^N{w}_{nm}{f}_m\left(t-\tau \right)}}-\underset{={S}_2}{\underbrace{\frac{1}{N}\sum \limits_{n,m=1}^N{w}_{nm}{f}_m\left(t-\tau \right)}}\right]\end{array}} $$
(20)

with fm(t) = f[ū(t) + vm(t)]. With the definition (7) and property (8), we have

$$ \sum \limits_{n=1}^N{w}_{nm}= Ng\quad \forall m=1,\dots, N $$
(21)

and hence

$$ {S}_2=\sum \limits_{m=1}^N{gf}_m\left(t-\tau \right). $$
(22)

To calculate S1, we note that

$$ \sum \limits_{m=1}^N{w}_{nm}{f}_m\left(t-\tau \right)=\sum \limits_{m=1}^N{gf}_m\left(t-\tau \right)+s\sum \limits_{m=1}^N{\eta}_{nm}{f}_m\left(t-\tau \right). $$
(23)

If Xm = fm and Ym = ηnm (for fixed n) are statistically independent of each other, then

$$ \frac{1}{N}\sum \limits_{m=1}^N{X}_m{Y}_m=\frac{1}{N}\sum \limits_{k=1}^N{X}_k\frac{1}{N}\sum \limits_{m=1}^N{Y}_m. $$
(24)

This assumption holds true in most cases, since the ηnm are static and chosen independently of any dynamics, while the fm evolve over time. Consequently,

$$ \sum \limits_{m=1}^N{\eta}_{nm}{f}_m\left(t-\tau \right)=\frac{1}{N}\sum \limits_{k=1}^N{\eta}_{nk}\sum \limits_{m=1}^N{f}_m\left(t-\tau \right). $$
(25)

and with condition (9)

$$ \sum \limits_{m=1}^N{\eta}_{nm}{f}_m\left(t-\tau \right)=0 $$
(26)

and hence

$$ {S}_1=\sum \limits_{m=1}^N{gf}_m\left(t-\tau \right). $$
(27)

Since S1 = S2, we obtain from Eq. (20)

$$ \frac{1}{\alpha}\frac{d}{dt}{v}_n(t)=-{v}_n(t)+{\xi}_n(t) $$
(28)

and the fluctuations vn(t) are independent of the spatial mean, whereas the spatial mean depends on the vn(t), cf. Eq. (15).

If the external stimulus is random, uncorrelated, and normally distributed with variance σ2 about the mean I0, then vn(t) is random as well and obeys an Ornstein–Uhlenbeck process. For large times, vn(t) approaches a stationary state with the stationary probability density function

$$ {p}_s\left({v}_n\right)=\frac{1}{\sqrt{2\pi \alpha}\sigma }{e}^{-{v}_n^2/2{\alpha \sigma}^2}. $$
(29)
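
A quick numerical check of Eq. (29) (a sketch with assumed values, using the noise convention 〈dW dW〉 = 2dt as above): simulating the fluctuation equation (28) and comparing the sample variance with ασ2.

```python
# Check of Eq. (29): simulate the fluctuation dynamics (28) and compare the
# sample variance with alpha*sigma^2. Noise convention <dW dW> = 2 dt; values assumed.
import numpy as np

rng = np.random.default_rng(3)
alpha, sigma, dt = 1.0, 0.2, 1e-3
v, samples = 0.0, []
for _ in range(500_000):
    v += alpha*(-v*dt + sigma*np.sqrt(2*dt)*rng.standard_normal())
    samples.append(v)

print(np.var(samples), alpha*sigma**2)   # the two values should be close
```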

Merging Global and Fluctuation Dynamics

The global mode evolution (19) depends on the probability density function p(v, t) of the fluctuations. For large times, the fluctuations approach their stationary state much faster than the global mode evolves, i.e., p(v, t) → ps(v) given in Eq. (29). Hence the global mode (19) evolves on a relatively large time scale according to

$$ \frac{1}{\alpha}\frac{d}{dt}\overline{u}(t)=-\overline{u}(t)+\frac{g}{\sqrt{2\pi \alpha}\,\sigma }{\int}_{-\infty}^{\infty }f\left[\overline{u}\left(t-\tau \right)+v\right]\;{e}^{-{v}^2/2{\alpha \sigma}^2} dv+{I}_0. $$
(30)

For illustration, let us consider the special case of McCulloch–Pitts neurons, whose transfer function is a step function, i.e., f[u] = Θ(u − uth) with threshold uth. Then the integral in Eq. (30) takes a rather simple form and we gain (Hutt et al. 2016)

$$ {\displaystyle \begin{array}{ll}\frac{1}{\alpha}\frac{d}{dt}\overline{u}(t)=& -\overline{u}(t)\\ {}& +\underset{=F\left[\overline{u}\left(t-\tau \right)\right]}{\underbrace{\frac{g}{2}\left(1+\operatorname{erf}\left[\frac{\overline{u}\left(t-\tau \right)-{u}_{th}}{\sqrt{2\alpha}\sigma}\right]\right)}}\\ {}& +{I}_0\end{array}} $$
(31)

with the error function erf(·). The transfer function F[ū] has a sigmoidal shape, and the noise variance σ2 determines its form: weak noise, i.e., small σ, yields a rather steep sigmoid, whereas strong noise renders the sigmoid flatter.

From (30), we learn that F always has a sigmoidal shape if the single-element transfer function f(u) is a monotonically increasing function of u (Hutt and Buhry 2014). In the following, we assume the standard logistic sigmoid function \( F\left[u\right]={F}_0/\left(1+{e}^{-\left(u-{u}_{th}\right)/c}\right) \). Here weak noise, corresponding to a low steepness parameter c, yields a steep, step-like function, whereas increasing noise, corresponding to larger values of c, flattens the sigmoid. This is illustrated in Fig. 4.

Fig. 4

Illustration of the transfer function and the resulting stationary constant state. The dashed line denotes the left-hand side of Eq. (32); uth = 3
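
The Gaussian average in Eq. (30) can be checked numerically for the step-function case. The following sketch (with assumed illustrative values for g, uth, α, σ) compares a Monte Carlo estimate of F with the closed erf form of Eq. (31).

```python
# Monte Carlo check of the effective transfer function: Gaussian average (30)
# of a step function f(u) = Theta(u - u_th) versus the erf expression (31).
# g, u_th, alpha, sigma are assumed illustrative values.
import numpy as np
from scipy.special import erf

g, u_th, alpha, sigma = 1.0, 3.0, 1.0, 0.5
rng = np.random.default_rng(4)

def F_numeric(u_bar, n=400_000):
    v = np.sqrt(alpha)*sigma*rng.standard_normal(n)   # v ~ N(0, alpha*sigma^2), Eq. (29)
    return g*np.mean(np.heaviside(u_bar + v - u_th, 0.5))

def F_closed(u_bar):
    return 0.5*g*(1.0 + erf((u_bar - u_th)/(np.sqrt(2.0*alpha)*sigma)))

for ub in (2.0, 3.0, 4.0):
    print(ub, F_numeric(ub), F_closed(ub))   # increasing sigma flattens F
```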

Equation (30) describes the mean-field evolution of the global mode and permits the illustration of coherent structures. If ū = 0, the network elements are not coherent, whereas ū ≠ 0 reflects coherent activity. In the following examples, we will see that coherence emerges in certain frequency bands depending on the external noise level σ.

To gain some insight into how coupling strength and noise strength modify the system dynamics, let us first consider stationary solutions with dun/dt = 0, i.e., dū(t)/dt = 0. This yields

$$ {u}_0-{I}_0=F\left[{u}_0\right] $$
(32)

Figure 4 shows both sides of this equation and illustrates that increasing the coupling strength (increasing F0) changes the stationary state u0 and, even more importantly, the nonlinear gain dF/dū computed at ū = u0. Linearizing about that stationary state yields for deviations x

$$ \frac{1}{\alpha}\frac{dx(t)}{dt}=-x(t)+\gamma x\left(t-\tau \right),\, \gamma =\frac{\partial F}{\partial \overline{u}}\left|{}_{\overline{u}={u}_0}\right.. $$
(33)

Hence the stability and time scale of solutions about the stationary state depend on the nonlinear gain. If the level of external noise increases (decreases), the transfer function becomes flatter (steeper) and the gain γ at the upper and lower stationary states in Fig. 4 increases (decreases). In the case of an oscillatory stable stationary state, the noise level determines the frequency of solutions (Hutt et al. 2016).
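
The stationary states and the gain γ can be computed numerically. A sketch follows, assuming illustrative values of F0, c, uth, and I0 (not taken from the cited works); the roots of Eq. (32) are bracketed on a grid and refined.

```python
# Stationary states from Eq. (32) with the logistic F, and the nonlinear gain
# gamma of Eq. (33) at each state. F0, c, u_th, I0 are assumed values.
import numpy as np
from scipy.optimize import brentq

F0, c, u_th, I0 = 6.0, 0.5, 3.0, 0.5

F = lambda u: F0/(1.0 + np.exp(-(u - u_th)/c))
G = lambda u: u - I0 - F(u)          # zeros of G are stationary states, Eq. (32)

# bracket sign changes of G on a grid, then refine each root
us = np.linspace(-2.0, 10.0, 2001)
roots = [brentq(G, lo, hi) for lo, hi in zip(us[:-1], us[1:]) if G(lo)*G(hi) < 0]

for u0 in roots:
    gamma = F(u0)*(1.0 - F(u0)/F0)/c   # dF/du of the logistic function
    print(f"u0 = {u0:.3f}, gain gamma = {gamma:.3f}")
```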

Coupling Induces Self-Organization in the Presence of Noise and Noise Affects System Frequency

To illustrate the evolution of the network dynamics subjected to noise, we simulate the full network of N = 100 elements obeying (6) and increase the coupling strength between elements, γ = g/N, continuously, cf. Fig. 5, lower panel. At the same time, the noise level σ is held constant. This holds up to a certain time T.

Fig. 5

Spectral distribution and phase-locking value (PLV) in a simple spatial neural mass network subjected to varying noise and coupling strength. Here \( \mu =\sqrt{60}\sigma \) and T = 20 s; see the main text

We compute the time-dependent spectral power distribution and the phase-locking value (PLV) (Lachaux et al. 1999) over time. The spectral power is computed by a windowed Fourier transform with a 4 s window width; the PLV is computed as the circular variance of the phases of 30 randomly chosen elements for each time-frequency pair. The phases result from a Morlet wavelet transform. The maximum value PLV = 1 reflects complete synchronization in the network, whereas the minimum value PLV = 0 reflects vanishing synchrony. Figure 5 shows that the network elements do not synchronize at low coupling strengths, since power and PLV are low. However, synchronization emerges at larger coupling, expressed by large power and large PLV at ν = 45 Hz. It is well known that complex systems self-organize if the interactions between subunits are strong enough. This is seen in our simple example. Analytically, the stationary state u0 is a stable focus when power and PLV are low. When synchronization sets in at stronger coupling, the stable focus becomes unstable, the system oscillates along a limit cycle, and the power is much larger.
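
As an illustration of the analysis pipeline, the following sketch computes a PLV across elements at a single frequency with a hand-rolled complex Morlet wavelet; it is a simplified stand-in for the Morlet transform and circular statistics used in the cited works, applied here to surrogate data rather than to the simulation output.

```python
# Sketch: PLV across elements at one frequency via a complex Morlet wavelet;
# `u` (elements x samples) stands in for simulated activity of Eq. (6).
import numpy as np

def plv_at_frequency(u, fs, freq, n_cycles=6):
    """PLV(t) = |mean_m exp(i*phi_m(t))|; 1 = full synchrony, 0 = none."""
    sigma_t = n_cycles/(2*np.pi*freq)                   # wavelet time spread
    t = np.arange(-4*sigma_t, 4*sigma_t, 1.0/fs)        # sampled at the signal rate
    wavelet = np.exp(2j*np.pi*freq*t)*np.exp(-t**2/(2*sigma_t**2))
    phases = np.array([np.angle(np.convolve(x, wavelet, mode="same")) for x in u])
    return np.abs(np.mean(np.exp(1j*phases), axis=0))

# surrogate data: 30 noisy, partially phase-locked 45 Hz oscillators
fs, M = 1000.0, 30
time = np.arange(0.0, 2.0, 1.0/fs)
rng = np.random.default_rng(5)
u = np.sin(2*np.pi*45*time + 0.3*rng.standard_normal((M, 1))) \
    + 0.5*rng.standard_normal((M, time.size))
print(plv_at_frequency(u, fs, 45.0).mean())   # close to 1 for strong locking
```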

Until now, the noise level has been kept constant. Now, removing the noise while retaining the coupling strength, cf. Fig. 5, lower panel for t > T = 20 s, the PLV jumps to very high values while the maximum power jumps to lower values. Interestingly, the oscillation frequency with maximum power drops to ν = 40 Hz, which represents the system's endogenous oscillatory rhythm in the absence of noise. This drop clearly demonstrates that the observed system frequency may depend heavily on the intrinsic noise level.

We learn that noise diminishes the synchronization between network elements and moves the system frequency. This is confirmed by Fig. 6, presenting epochs of single time series at low (left) and large coupling strength (center) at high noise levels, and in the absence of any noise (right).

Fig. 6

Activity at two spatial locations in simple spatial network. (Left panel) Weak coupling, with noise. (Center panel) Strong coupling, with noise. (Right panel) Strong coupling, no noise

Analytically, this behavior can be understood from Eqs. (30) and (32) and Fig. 4: additive noise tunes the effective transfer function and thus determines the stationary state and its stability, and consequently the amplitude and frequency of the system dynamics.

Noise Can Destroy Self-Organization While It Changes the System Frequency

The previous section shows that denoising enhances oscillatory power and shifts the frequency of power peaks. Similarly, increasing the noise level may tune the system's rhythmic activity and even destroy it at large enough noise levels. Figure 7 demonstrates that an increasing noise level shifts the frequency of the system and, finally, destroys the oscillatory state. At large noise levels, the transfer function F is flat and a single stationary state exists. Hence, increasing the noise level either makes the system jump from large values u0 to low values of u0 and/or the corresponding nonlinear gain renders the stationary state stable.

Fig. 7

Time-frequency spectral power and global phase locking of simulations of the simple spatial neural mass network with increasing noise level

Synchronization in a Spiking Neural Network

To illustrate that the noise-induced change of synchronization may also occur in biologically more realistic networks, we study a spiking neural network of Poisson neurons (Lefebvre et al. 2017), cf. Fig. 8.

Fig. 8

Network modeling the thalamo-cortical feedback circuit present in vertebrates. All neurons in the network receive spectrally white Gaussian noise with zero mean and finite variance

Figure 9 shows the average electric potential of cortical neurons (EEG) and their firing activity in a raster plot for two different noise levels D. We observe that denoising induces synchronization between neurons and enhances EEG power. A corresponding mean-field description of the spiking neural network, e.g., along the lines of the derivation shown in Hutt and Buhry (2014), permits a description of the noise effect. Essentially, the mechanism is the same as the one shown in the sections above and in previous studies (Hutt and Buhry 2014; Hutt et al. 2016; Lefebvre et al. 2015; Herrmann et al. 2016): increasing additive noise of network elements smooths the effective transfer function, consequently shifts the stationary state, tunes its stability, and may induce a transition to a new state.

Fig. 9

Noise reduction induces synchronization of spiking neurons. (Top panel) EEG(t) is the average electric potential of cortical neurons. (Bottom panel) Firing activity of neurons in the network

Future Directions

Additive noise may have a strong impact on complex systems. The previous sections have presented the corresponding conditions and mathematical techniques. Additive noise may shift instability thresholds and tune the frequency and amplitude of rhythmic activity. We showed that additive noise at a lower level of description, e.g., in neurons or neural ensemble patches, may destroy synchronization at an upper level, e.g., in neural populations or populations of ensemble patches.

The insight that additive noise affects endogenous brain activity points to an impact of electric brain stimulation on the behavior of subjects. Corresponding experiments have been performed over the last decades, both for rhythmic stimulation (Herrmann et al. 2013) and for noise stimulation (Terney et al. 2008). Noise stimulation has been shown to improve perceptual learning when high-frequency noise (>100 Hz) is applied (Fertonani et al. 2011). Understanding how noise stimulation affects neural activity and how it enhances perceptual learning is one of the great challenges of the coming years.