13.1 Introduction

The Merriam-Webster Thesaurus [1] lists the following definition for the word “synchronize”: to occur or exist at the same time. It lists the following synonyms: accompany, attend, co-occur, coexist, coincide, concur. None of these convey the full breadth of synchronization phenomena, nor do they exclude behaviors that occur due to an external force, such as electrons flowing in the same direction under an applied field; the latter are not usually thought of as synchronization. From Wikipedia [2]: “Tropical fireflies, in particular, in Southeast Asia, routinely synchronise their flashes among large groups. This phenomenon is explained as phase synchronization and spontaneous order.” There is a plethora of examples of synchronization phenomena of discrete units, each most likely responding to a different detailed mechanism: heart tissues of different origins can ‘beat’ in sync, dancers coordinate their movements, neurons synchronize in neural networks. The examples are endless in the world of computer science. The synchronizing population may consist of two units, of many units, or of infinitely many units.

Synchronization usually refers to emergent macroscopic behavior in systems consisting of microscopic or mesoscopic nonlinearly interacting units. Usually synchronization refers to temporal coincidence in a time dependent situation, for example, a moving pattern of most units being in the same state (e.g. an oscillation). The moving or oscillating units may move in continuous time as do dancers, or in discrete steps as do fireflies. The interactions among units may be short range or long range or anything in between. Furthermore, synchronization may also refer to a static coincidence in the state space of units, that is, to the formation of stationary patterns or agglomerations in a particular state. The topic is vast and can and does fill many books [3, 4].

In this Chapter we must narrow our discussion a great deal, and we focus on the emergence of collective behavior in systems of identical discrete units stepping from one state to another in discrete jumps. We assume all-to-all interactions of equal strength, in which case geometry plays no role. In the absence of any disturbances, if the number of units is infinite, the system behaves as described by mean field theory. The final state of the system is then sharply determined.

The synchronization may not be perfect, for example, if there is noise in the system. Imperfect synchronization leads to a distribution of behaviors around a maximum that usually represents the behavior of perfect synchronization. This distribution may be stationary or may move in time, depending on the details of the model. Imperfect synchronization could arise, for instance, if the units are not identical, a case that we do not address here. We do address an important source of noise: a finite rather than infinite number of units. Although the synchronization is not perfect in this case, we will loosely use the terminology of dynamical systems to describe the stochastic counterpart. In any case, our goal is and has been to understand synchronization models that are simple enough for complete or partial analytic study. In all cases we choose units that can be described by the smallest possible number of variables that still allow for synchronization.

We introduce two-state models (“on-off”) as well as three-state models. This is accomplished in Sect. 13.2. We take the interactions among units to be nonlinear. Nonlinear interactions are essential to achieve synchronization, and we work with polynomial interactions in the two-state case and exponential interactions in the three-state model. In the two-state models a Markovian transition rate of each unit between the two states leads to a patterned stationary distribution, that is, one of the states turns out to be more populated than the other. To obtain time dependent patterning in the two-state case it is necessary to introduce a memory whereby one of the two transitions of each unit is non-Markovian. In the three-state case we take the transitions to be unidirectional and obtain time-dependent effects such as waves of the majority of units being in one of the three states followed by them being in another state, in turn followed by the third state and then back to the first state. When the number of units in the system is infinite, the final behaviors, be they time dependent or time independent, can often be handled analytically using mean field theory, and the long-time distributions are infinitely sharp. However, if the number of units is finite the problem becomes much more difficult because the evolution equations now acquire a noise term (i.e. they are now Langevin equations) and need to be handled at least in part numerically. We address the mean field theory cases as well as the cases of finite numbers of units in Sect. 13.2.

It is of course well known that Kuramoto studied one of the first mathematical models of synchronization [5]. His original model describes a continuous-phase, continuous-time array. We asked ourselves this question: can we take Kuramoto’s model and coarse grain it to arrive at discrete models? We discuss this in Sect. 13.3 and arrive at a result that is difficult to “guess” a priori. We will leave this suspense until the reader arrives at that section. Finally, in Sect. 13.4 we end with some conclusions and perspectives.

An additional final note: the various models that we discuss, namely arrays of two-state units, arrays of three-state units, and the coarse graining of Kuramoto’s model, are separate in the sense that the interactions among units are different, albeit all nonlinear. This Chapter is thus meant as a presentation of various discrete unit models without necessarily a comparison between them.

13.2 Finite Versus Infinite Population Models

In this section we discuss the role of the number of units in the synchronization of two-state units (Sect. 13.2.1) and three-state units (Sect. 13.2.2) with global coupling. In particular, whenever the steady state (\(t \rightarrow \infty \) limit) of the infinite population model presents bistability, taking the large population limit of the finite population model destroys this bistability. That is, the order of the limits of large times and large numbers of units matters in determining the fate of the steady state.

13.2.1 Two-State Models

The problem of synchronization of arrays of globally coupled two-state units was discussed in [6, 7]. Here the term synchronization is used loosely to indicate that a transition to an ordered state with more units in one state than the other is achieved.

13.2.1.1 Infinite Population

An infinite ensemble of two-state (states 1 and 2) stochastic units is governed by the mean field equation

$$\begin{aligned} \dot{n}_1(t) = \gamma _2 n_2(t) - \gamma _1 n_1(t) = \gamma _2 - \left( \gamma _1 + \gamma _2 \right) n_1(t), \end{aligned}$$
(13.1)

where \(n_1(t)\) and \(n_2(t)\) are the densities of units in state 1 or 2 at time t, respectively. Here, we used density normalization \(n_1(t) + n_2(t) = 1\) to write the last equality. Despite the apparent simplicity of this mean field equation, there is a wealth of possibilities hidden in the transition rates. For instance, the units may be coupled or uncoupled, explicitly time-dependent or not, etc. Here, we are concerned with Markovian globally coupled units, that is, the transition rates at any time t depend on the densities of units in the states 1 and 2 at that time. Again, using the density normalization, we may write the transition rates as \(\gamma _1(n_1)\) and \(\gamma _2(n_1)\).
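As a minimal numerical illustration, (13.1) can be integrated by forward Euler. The sketch below uses hypothetical constant rates \(\gamma _1 = 1\) and \(\gamma _2 = 2\) (uncoupled units), for which the steady state is exactly \(n_1^* = \gamma _2/(\gamma _1+\gamma _2) = 2/3\).

```python
# Minimal sketch (hypothetical parameter values): forward-Euler integration of
# the mean field equation dn1/dt = gamma2 - (gamma1 + gamma2) * n1 for constant
# (uncoupled) rates, where the exact steady state is n1* = gamma2/(gamma1+gamma2).

def relax_mean_field(n1, gamma1, gamma2, dt=1e-3, steps=20000):
    """Integrate dn1/dt = gamma2 - (gamma1 + gamma2) * n1 by forward Euler."""
    for _ in range(steps):
        n1 += (gamma2 - (gamma1 + gamma2) * n1) * dt
    return n1

gamma1, gamma2 = 1.0, 2.0
n1_final = relax_mean_field(0.1, gamma1, gamma2)
n1_exact = gamma2 / (gamma1 + gamma2)   # = 2/3
```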

It is worth noting that no fluctuations appear because the population is infinite. Therefore, for infinite populations we have a deterministic evolution that is completely determined by the initial conditions and, obviously, the transition rates. Hence, determining the steady state for infinite populations is a matter of finding the steady state of a one-dimensional dynamical system, and phase transitions for those systems correspond to bifurcations of the dynamical system. Bearing that in mind, we try to map the mean field equation (13.1) onto one of the well-known normal forms presented, for example, in [8]. In order to do that, we write the transition rates as polynomials,

$$\begin{aligned} \gamma _1(n_1)= & {} \sum ^{\infty }_{k=0} \gamma _1^{(k)} n_1^k, \end{aligned}$$
(13.2)
$$\begin{aligned} \gamma _2(n_1)= & {} \sum ^{\infty }_{k=0} \gamma _2^{(k)} n_1^k. \end{aligned}$$
(13.3)

Different relations between these series lead to different normal forms. For more general transition rates, the normal forms can still be seen as approximations near the bifurcations (phase transitions). Therefore, we can write the mean field equation as

$$\begin{aligned} \dot{n}_1 = \sum ^{\infty }_{k=0} a_k n_1^k, \end{aligned}$$
(13.4)

where

$$\begin{aligned} a_0= & {} \gamma _2^{(0)}, \nonumber \\ a_k= & {} -\gamma _1^{(k-1)} -\gamma _2^{(k-1)} + \gamma _2^{(k)}. \end{aligned}$$
(13.5)
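These relations are easy to verify numerically. The sketch below (with hypothetical low-order rate coefficients) builds the \(a_k\) from (13.5) and checks that \(\sum _k a_k n_1^k\) reproduces the right-hand side of (13.1):

```python
def mean_field_coefficients(g1, g2):
    """a_0 = gamma_2^(0); a_k = -gamma_1^(k-1) - gamma_2^(k-1) + gamma_2^(k),
    following (13.5). g1 and g2 are the polynomial coefficient lists of the
    rates gamma_1(n1) and gamma_2(n1)."""
    K = max(len(g1), len(g2))
    g1 = g1 + [0.0] * (K - len(g1))
    g2 = g2 + [0.0] * (K - len(g2))
    a = [g2[0]]
    for k in range(1, K + 1):
        gk = g2[k] if k < K else 0.0
        a.append(-g1[k - 1] - g2[k - 1] + gk)
    return a

def rhs_direct(n, g1, g2):
    """gamma_2(n) - [gamma_1(n) + gamma_2(n)] n, evaluated directly."""
    gam1 = sum(c * n**k for k, c in enumerate(g1))
    gam2 = sum(c * n**k for k, c in enumerate(g2))
    return gam2 - (gam1 + gam2) * n

g1, g2 = [0.5, 1.0], [2.0, 0.3]    # hypothetical low-order rates
a = mean_field_coefficients(g1, g2)
# sum_k a_k n^k reproduces rhs_direct(n, g1, g2) for every n in [0, 1]
```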

13.2.1.2 Finite Populations

For finite populations, we need to take into account fluctuations due to the finite number of units. Mathematically, we need to consider a Langevin equation instead of the deterministic mean field equation of the previous section. Hence, we start our analysis by writing the time evolution of the number of units in state 1, \(N_1(t)\),

$$\begin{aligned} N_1(t + {\text {d}}t) = N_1(t) - \sum ^{N_1}_{k=1} \theta \left( \gamma _1(n_1) {\text {d}}t - \zeta _k \right) +\sum ^{N}_{k=N_1+1} \theta \left( \gamma _2(n_1) {\text {d}}t - \zeta _k \right) , \end{aligned}$$
(13.6)

where \(\theta (\cdot )\) is the Heaviside step function and the \(\zeta _k\) are independent random variables uniformly distributed in the interval [0, 1]. Furthermore, dt is an infinitesimal time increment, so that \(\gamma _k(n_1) {\text {d}}t\) is the probability that a unit in state k jumps during dt. Thus, the first sum represents the number of units that jump out of state 1 and the second sum represents the number of units jumping into state 1. Equation (13.6) leads to the Langevin equation [6]

$$\begin{aligned} \dot{n}_1 = \gamma _2(n_1) - \left[ \gamma _1(n_1)+\gamma _2(n_1) \right] n_1 + \sqrt{\left( 1-n_1 \right) \gamma _2(n_1) + n_1 \gamma _1(n_1)} \frac{\xi (t)}{\sqrt{N}}, \end{aligned}$$
(13.7)

where \(\xi (t)\) is zero-centered Gaussian white noise. Comparing the Langevin equation (13.7) with the mean field equation (13.1), we notice that they differ by the fluctuation term (the last term in the Langevin equation). In the limit of large populations, \(N \rightarrow \infty \), the fluctuations vanish and we recover the mean field equation. Note, however, that two limits must be taken to obtain the steady state for large populations: time and number of units must both go to infinity. While the mean field approach takes the limit \(N \rightarrow \infty \) first, the Langevin approach takes the limit \(t \rightarrow \infty \) first. Therefore, when considering the Langevin approach we should not take the limit \(N \rightarrow \infty \) at this point. Instead, we must find the steady state before taking the infinite population limit.
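The update rule (13.6) is straightforward to simulate. The following sketch uses hypothetical constant rates \(\gamma _1 = 1\) and \(\gamma _2 = 2\) (uncoupled units), for which the mean field fixed point is \(n_1^* = 2/3\); the finite-\(N\) run fluctuates around this value with amplitude of order \(1/\sqrt{N}\):

```python
import random

# Monte Carlo sketch of the microscopic update (13.6). The constant rates are a
# hypothetical choice (uncoupled units) for which the stationary density of
# state 1 is gamma_2/(gamma_1 + gamma_2) = 2/3, with O(1/sqrt(N)) fluctuations.

def step(N1, N, gamma1, gamma2, dt, rng):
    """One realization of Eq. (13.6): each unit jumps with probability gamma*dt."""
    n1 = N1 / N
    out_of_1 = sum(1 for _ in range(N1) if rng.random() < gamma1(n1) * dt)
    into_1 = sum(1 for _ in range(N - N1) if rng.random() < gamma2(n1) * dt)
    return N1 - out_of_1 + into_1

rng = random.Random(1)
N, N1, dt = 500, 50, 0.02
gamma1 = lambda n1: 1.0            # hypothetical constant rates
gamma2 = lambda n1: 2.0
for _ in range(3000):
    N1 = step(N1, N, gamma1, gamma2, dt, rng)
# N1/N fluctuates around 2/3 with standard deviation of order 1/sqrt(N)
```

Coupled rates are obtained simply by letting `gamma1` and `gamma2` depend on `n1`.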

Using the Itô interpretation [9] for this Langevin equation, we obtain a Fokker–Planck equation for the probability of finding a fraction \(n_1\) of units in state 1 at time t,

$$\begin{aligned} \frac{\partial P(n_1,t)}{\partial t} = - \frac{\partial }{\partial n_1} \left[ \mu (n_1) P(n_1,t) \right] + \frac{\partial ^2}{\partial n_1^2} \left[ D(n_1,N) P(n_1, t) \right] , \end{aligned}$$
(13.8)

where

$$\begin{aligned} \mu (n_1) = \gamma _2(n_1) - \left[ \gamma _1(n_1)+\gamma _2(n_1) \right] n_1 \end{aligned}$$
(13.9)

is the drift and

$$\begin{aligned} D(n_1, N) = \frac{\gamma _2(n_1) + \left[ \gamma _1(n_1)-\gamma _2(n_1) \right] n_1}{2 N} \end{aligned}$$
(13.10)

is the diffusion coefficient. The Fokker–Planck equation (13.8) has the stationary solution

$$\begin{aligned} P_{st}(n_1) = c_N \frac{\exp \left[ \int _0^{n_1} \frac{\mu (n)}{D(n,N)}dn \right] }{D(n_1,N)}, \end{aligned}$$
(13.11)

where \(c_N\) is an N-dependent normalization constant for the probability density \(P_{st}(n_1)\). At this point, a few comments are relevant. First, since the drift of the Fokker–Planck equation equals the right hand side of the mean field equation, we may expect that for large populations (small fluctuations) the two approaches should give similar results. However, contrary to the infinite population case, the steady state probability density for finite N retains no memory of the initial conditions. Therefore, the two approaches must differ whenever two or more stable solutions of the mean field equation coexist. Here, we will discuss this point in more detail for the saddle-node bifurcation. The reader interested in other types of bifurcations should refer to [7].
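The stationary solution (13.11) can be evaluated on a grid. A sketch, again with the hypothetical constant rates \(\gamma _1 = 1\) and \(\gamma _2 = 2\), for which \(P_{st}\) should peak near the mean field fixed point \(n_1^* = 2/3\):

```python
import math

# Numerical sketch of the stationary density (13.11) for hypothetical constant
# rates gamma_1 = 1, gamma_2 = 2 (uncoupled units). The mean field fixed point
# is n1* = 2/3, and P_st should peak near it for moderate N.

def gamma1(n): return 1.0
def gamma2(n): return 2.0

def stationary_density(N, m=2000):
    """Unnormalized P_st on a uniform grid, via trapezoidal integration of mu/D."""
    def mu(n): return gamma2(n) - (gamma1(n) + gamma2(n)) * n          # (13.9)
    def D(n):  return (gamma2(n) + (gamma1(n) - gamma2(n)) * n) / (2.0 * N)  # (13.10)
    h = 1.0 / m
    grid = [i * h for i in range(m + 1)]
    P, integral, prev = [], 0.0, mu(0.0) / D(0.0)
    for n in grid:
        cur = mu(n) / D(n)
        if n > 0.0:
            integral += 0.5 * (prev + cur) * h   # accumulate int_0^n mu/D
        prev = cur
        P.append(math.exp(integral) / D(n))      # Eq. (13.11), unnormalized
    return grid, P

grid, P = stationary_density(N=50)
n_peak = grid[P.index(max(P))]   # close to the mean field fixed point 2/3
```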

Typically, the saddle-node bifurcation creates two fixed points (one stable and one unstable) out of nothing. Hence, there is no coexistence of stable solutions and we should not see the behavior described above. However, the normal form for the saddle-node bifurcation, \(\dot{x} = r + x^2\), poses a problem for our model. For \(r<0\) there are two fixed points: the stable fixed point \(x^* = -\sqrt{-r}\) and the unstable fixed point \(x^* = \sqrt{-r}\). Any initial condition greater than the unstable fixed point will grow unboundedly, while our variable \(n_1\) must lie in the interval [0,1]. Therefore, we must add another fixed point above the unstable fixed point. A possible way to add this fixed point without perturbing the bifurcation is to consider the following dynamical system

$$\begin{aligned} \dot{n}_1 = \left[ r + \left( n_1 - n_B \right) ^2 \right] \left\{ 1 - A \left[ r + \left( n_1 - n_B \right) ^2 \right] \right\} , \end{aligned}$$
(13.12)

where A is a positive constant and \(n_B\) is a constant in the interval [0, 1] chosen so that the two fixed points arising from the saddle-node bifurcation both lie in this interval. For values of A near the bifurcation point, the additional factor introduces two new fixed points: a stable fixed point at a large value of \(n_1\) (but still smaller than 1) and an unstable negative fixed point. Therefore, only one new fixed point lies in the interval of interest. Moreover, this new fixed point introduces a bistability region (see Fig. 13.1). For \(r<0\) there are two stable fixed points: the one from the saddle-node bifurcation and the new one introduced by the factor \(\left\{ 1 - A \left[ r + \left( n_1 - n_B \right) ^2 \right] \right\} \). Consequently, in this bistability region of the infinite population model, any initial condition below the dashed line in the figure (the unstable fixed point) ends up at the bottom fixed point, while initial conditions above the dashed line evolve to the upper fixed point. As the bifurcation parameter r increases and crosses zero, the bottom stable fixed point collides with the unstable fixed point in a saddle-node bifurcation and they disappear, leaving only one stable fixed point to which all initial conditions evolve.

Fig. 13.1
figure 1

Bifurcation diagram for the model given by (13.12) with \(A=15\) and \(n_B=0.15\). The full lines represent stable fixed points while the dashed line represents unstable fixed points
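The fixed point structure of (13.12) can be checked directly. The sketch below uses the parameters of Figs. 13.1 and 13.2 (\(A=15\), \(n_B=0.15\), \(r=-0.01\), inside the bistability region): three fixed points lie in [0, 1], the outer two stable and the middle one unstable.

```python
import math

# Fixed points of Eq. (13.12) for A = 15, n_B = 0.15 (Fig. 13.1) and
# r = -0.01 (the value used in Fig. 13.2): the saddle-node pair
# n_B +/- sqrt(-r), plus the pair from the bracketed factor,
# (n - n_B)^2 = 1/A - r.

A, nB, r = 15.0, 0.15, -0.01

def rhs(n):
    g = r + (n - nB) ** 2
    return g * (1.0 - A * g)    # right-hand side of (13.12)

roots = [nB - math.sqrt(-r), nB + math.sqrt(-r),
         nB - math.sqrt(1.0 / A - r), nB + math.sqrt(1.0 / A - r)]
in_range = [n for n in roots if 0.0 <= n <= 1.0]

def stable(n, eps=1e-5):
    """Numerical stability check: negative slope of the flow at the fixed point."""
    return (rhs(n + eps) - rhs(n - eps)) < 0.0

# three fixed points lie in [0, 1]; the outer two are stable, the middle unstable
```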

For finite population, however, comparing (13.4) and (13.12), and using (13.5), we have

$$\begin{aligned} \gamma _2^{(0)}= & {} -A n_B^4-2 A n_B^2 r-A r^2+n_B^2+r, \\ \gamma _2^{(1)} - \gamma _1^{(0)}= & {} 4 A n_B^3+4 A n_B r-2 n_B + \gamma _2^{(0)}, \\ \gamma _2^{(2)} - \gamma _1^{(1)}= & {} -6 A n_B^2-2 A r+1 + \gamma _2^{(1)}, \\ \gamma _2^{(3)} - \gamma _1^{(2)}= & {} 4 A n_B + \gamma _2^{(2)}, \\ \gamma _2^{(4)} - \gamma _1^{(3)}= & {} -A + \gamma _2^{(3)}, \\ \gamma _2^{(k+1)} - \gamma _1^{(k)}= & {} \gamma _2^{(k)} \qquad \text {for} \; k>3. \end{aligned}$$

Hence, specifying the bifurcation model does not completely determine the transition rates: different choices of the rates \(\gamma _1(n_1)\) and \(\gamma _2(n_1)\) lead to the same mean field equation. Therefore, different finite population models with different steady states may correspond to the same infinite population model. Moreover, the steady state probability density \(P_{st}(n_1)\) favors one of the two stable states. The finite population fluctuations thus destroy the coexistence, as shown in Fig. 13.2. In this figure we clearly see that as the number of units increases, the predominance of one peak becomes stronger. That is, as the number of units goes to infinity, one state becomes more and more probable, thus confirming the destruction of the coexistence.

Fig. 13.2
figure 2

Steady state probability density as a function of the density of units in state 1. For this figure, we used the following values for the parameters: \(A=15, n_B=0.15, r=-0.01, \gamma _2^{(1)} = 3/2, \gamma _2^{(2)}=1, \gamma _2^{(3)}= 1/5, \gamma _2^{(4)}=1/4\) and \(\gamma _2^{(k)}=0\) for \(k>4\). The different curves correspond to different numbers of units: \(N=300\) full line, \(N=500\) dashed line and \(N=5000\) dotted line
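The relations above can be verified numerically: starting from the \(\gamma _2^{(k)}\) of Fig. 13.2, the sketch below recovers the \(\gamma _1^{(k)}\) and confirms that the resulting mean field equation coincides with the right-hand side of (13.12):

```python
# Sketch: recover the gamma_1 coefficients from the gamma_2 coefficients of
# Fig. 13.2 via the relations above, then confirm that the resulting mean
# field equation reproduces the right-hand side of (13.12).

A, nB, r = 15.0, 0.15, -0.01

# a_k: polynomial coefficients of (13.12) expanded in powers of n1
a = [-A * nB**4 - 2 * A * nB**2 * r - A * r**2 + nB**2 + r,
     4 * A * nB**3 + 4 * A * nB * r - 2 * nB,
     -6 * A * nB**2 - 2 * A * r + 1,
     4 * A * nB,
     -A]

g2 = [a[0], 1.5, 1.0, 0.2, 0.25, 0.0]          # gamma_2^(k) of Fig. 13.2
# gamma_1^(k) = gamma_2^(k+1) - gamma_2^(k) - a_(k+1), with a_k = 0 for k > 4
g1 = [g2[k + 1] - g2[k] - (a[k + 1] if k + 1 < len(a) else 0.0)
      for k in range(5)]

def poly(c, n): return sum(ck * n**k for k, ck in enumerate(c))

def mean_field(n):      # gamma_2(n1) - [gamma_1(n1) + gamma_2(n1)] n1
    return poly(g2, n) - (poly(g1, n) + poly(g2, n)) * n

def normal_form(n):     # right-hand side of (13.12)
    g = r + (n - nB)**2
    return g * (1.0 - A * g)

# mean_field(n) and normal_form(n) coincide for every n1 in [0, 1]
```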

We end this section with an important comment: while arrays of Markovian two-state units can only lead to stationary ordering, the inclusion of memory, e.g. a refractory period that forces units arriving in state 2 to wait a certain amount of time before returning to state 1, yields time-dependent oscillations [10].

13.2.2 Three-State Model

While two-state Markovian models can only provide synchronization as an asymmetric steady state for which one state is more populated than the other, three-state Markovian models may lead to the more striking form of synchronization in which an aggregate of units move together from one state to the next, to the next, and so on. We consider a set of states (in this case states 1, 2 and 3) and transition rates that dictate the dynamics. Here, we use Wood’s model [11], for which the transitions are unidirectional (units in a given state can only stay there or move to the next state in a cyclic way: \(1 \rightarrow 2 \rightarrow 3 \rightarrow 1)\).

13.2.2.1 Infinite Population

An infinite array is thus governed by the mean field equations

$$\begin{aligned} \dot{n}_1= & {} \gamma _{31} - \left( \gamma _{12}+\gamma _{31} \right) n_1 - \gamma _{31} n_2, \nonumber \\ \dot{n}_2= & {} \gamma _{12} n_1 - \gamma _{23} n_2. \end{aligned}$$
(13.13)

Once again, we used the density normalization, which in this case reads \(n_1 + n_2 + n_3 = 1\), to eliminate the density of one of the states (\(n_3\)). Moreover, in Wood’s model [11], the transition rates are given by

$$\begin{aligned} \gamma _{i,i+1} = \gamma \exp \left[ a \left( U n_{i+1} + V n_{i-1} + W n_i \right) \right] , \end{aligned}$$
(13.14)

where the indices are cyclical as noted above.

The symmetry of the model implies that the point \(n_1=n_2=n_3=1/3\) is always a fixed point. A linear analysis [11] shows that this fixed point is stable for \(a < a_c = 3/(U-W)\). Further, for \(U \ne V\), there is a Hopf bifurcation at \(a = a_c\). The type of Hopf bifurcation (subcritical or supercritical) is determined by the sign of the first Lyapunov coefficient \(l_1\), which is found to be

$$\begin{aligned} l_1 = - \frac{9\sqrt{3} \left( U+V-2W \right) }{3\left( U-W \right) }. \end{aligned}$$
(13.15)

For \(l_1>0\) the Hopf bifurcation is supercritical, that is, we have a continuous transition and no coexistence region. More interesting for our discussion, for \(l_1<0\) the Hopf bifurcation is subcritical and presents a coexistence region (see Fig. 13.3). In this case, at \(a = a_{lc}\) a pair of limit cycles (one stable and one unstable) is created. The stable limit cycle coexists with the symmetry-dictated fixed point. As a increases further and approaches \(a_c\), the unstable limit cycle shrinks while the stable one grows. At \(a=a_c\) the radius of the unstable limit cycle vanishes as it collides with the fixed point \(n_1=n_2=n_3=1/3\). For \(a<a_{lc}\) and \(a>a_c\) there is only one attractor, the fixed point or the limit cycle, respectively. In between, for \(a_{lc}<a<a_c\), there are two stable attractors and there is coexistence.

Fig. 13.3
figure 3

Bifurcation diagram for the three-state model. For the transition rates we used \(U=1\), \(V=4\), and \(W=0\). The horizontal line represents the fixed point and the curves indicate the maximum and minimum values of \(n_1\) in the limit cycle. In both cases the solid lines represent the stable attractors and the dashed lines the unstable ones
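The limit cycle can be observed by direct integration of the mean field equations (13.13) with the rates (13.14). The sketch below uses \(\gamma = 1\) and the parameter set of Figs. 13.4–13.6 (\(U=1\), \(V=-4\), \(W=0\)), with the hypothetical value \(a = 3.5\) above \(a_c = 3/(U-W) = 3\), where the limit cycle is the only attractor:

```python
import math

# Forward-Euler sketch of the mean field equations (13.13) with rates (13.14),
# using gamma = 1 and the parameters of Figs. 13.4-13.6 (U=1, V=-4, W=0).
# The value a = 3.5 is a hypothetical choice above a_c = 3, where the
# symmetric fixed point is unstable and the population cycles 1 -> 2 -> 3 -> 1.

U, V, W, a = 1.0, -4.0, 0.0, 3.5

def rates(n1, n2):
    n3 = 1.0 - n1 - n2
    g12 = math.exp(a * (U * n2 + V * n3 + W * n1))
    g23 = math.exp(a * (U * n3 + V * n1 + W * n2))
    g31 = math.exp(a * (U * n1 + V * n2 + W * n3))
    return g12, g23, g31

def integrate(n1, n2, dt=1e-3, steps=200_000, keep=50_000):
    """Euler integration of (13.13); returns the n1 trace of the last `keep` steps."""
    trace = []
    for step in range(steps):
        g12, g23, g31 = rates(n1, n2)
        dn1 = g31 * (1.0 - n1 - n2) - g12 * n1
        dn2 = g12 * n1 - g23 * n2
        n1 += dn1 * dt
        n2 += dn2 * dt
        if step >= steps - keep:
            trace.append(n1)
    return trace

trace = integrate(0.5, 0.3)
amplitude = max(trace) - min(trace)   # macroscopic: the array sweeps the states
```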

13.2.2.2 Finite Populations

Next we move to the finite population case. As in the two-state model, we start by writing the evolution equation for the number of units in each state. However, for the three-state case, we need to follow the number of units in two states, say 1 and 2, the third one being determined from the condition \(N = N_1 + N_2 + N_3\). A simple counting protocol leads to

$$\begin{aligned} N_{1}\left( t + {\text {d}}t\right)= & {} N_{1}\left( t\right) -\sum _{k=1}^{N_1}\theta \left( \gamma _{12} \left( n_1,n_2\right) {\text {d}}t-\zeta _k\right) +\sum _{k=N_1+N_2 +1}^{N}\theta \left( \gamma _{31} \left( n_1,n_2\right) {\text {d}}t-\zeta _k\right) , \nonumber \\ N_{2}\left( t + {\text {d}}t\right)= & {} N_{2}\left( t\right) -\sum _{k=N_1+1}^{N_1+N_2}\theta \left( \gamma _{23} \left( n_1,n_2\right) {\text {d}}t-\zeta _k\right) +\sum _{k=1}^{N_1}\theta \left( \gamma _{12} \left( n_1,n_2\right) {\text {d}}t-\zeta _k\right) , \nonumber \\ \end{aligned}$$
(13.16)

where the first sum in the \(N_1\) equation counts the number of units leaving state 1 to state 2 between t and \(t+{\text {d}}t\), and the second one counts the number of units arriving at state 1 coming from state 3 during the same time interval. A similar interpretation holds for the \(N_2\) equation. From these microscopic equations, one arrives at the Langevin equations

$$\begin{aligned} \dot{n}_{1}&= \gamma _{31}(1-n_1-n_2)-\gamma _{12} n_1 +\sqrt{\gamma _{31}(1-n_1-n_2)}\ \frac{\xi _1\left( t\right) }{\sqrt{N}} -\sqrt{\gamma _{12}n_1}\ \frac{\xi _2\left( t\right) }{\sqrt{N}}, \nonumber \\ \dot{n}_{2}&= \gamma _{12} n_1-\gamma _{23}n_2 +\sqrt{\gamma _{12}n_1}\ \frac{\xi _2\left( t\right) }{\sqrt{N}} -\sqrt{\gamma _{23}n_2}\ \frac{\xi _3\left( t\right) }{\sqrt{N}}. \end{aligned}$$
(13.17)

From there using the Itô interpretation we obtain the Fokker–Planck equation

$$\begin{aligned} \frac{\partial P\left( n_1, n_2, t \right) }{\partial t} = \frac{\partial \Phi _1 }{\partial n_1} + \frac{\partial \Phi _2 }{\partial n_2} \end{aligned}$$
(13.18)

where

$$\begin{aligned} \Phi _i = - \mu _i P + \sum _{j=1}^{2}\frac{\partial (D_{ij}P)}{\partial n_j}, \end{aligned}$$
(13.19)

with

$$\begin{aligned} \mu = \left( \begin{array} [c]{c}\gamma _{31}(1-n_1-n_2)-\gamma _{12} n_1 \\ \gamma _{12} n_1-\gamma _{23}n_2 \\ \end{array} \right) , \end{aligned}$$
(13.20)

being the drift vector, and

$$\begin{aligned} D=\frac{1}{2N}\begin{pmatrix} \gamma _{12} n_1+\gamma _{31}(1-n_1-n_2) &{} -\gamma _{12}n_1 \\ -\gamma _{12} n_1 &{} \gamma _{12} n_1+\gamma _{23}n_2 \end{pmatrix}, \end{aligned}$$
(13.21)

the diffusion matrix.

The steady state solution \(P_{ss}(n_1,n_2)\) of the Fokker–Planck equation, (13.18), can be obtained numerically. We illustrate the qualitative behavior of \(P_{ss} (n_1, n_2)\) for two values of the control parameter a (both in the infinite population coexistence region) and two different numbers of units. In Fig. 13.4, for the smaller value of a (\(a = 2.85\)), we can see that for small N (left panel) there is a coexistence of the symmetric fixed point (center of the triangle) and the limit cycle, characterized by the brighter triangular region. For larger populations (right panel) there is only one bright spot in the middle of the triangle, indicating that the steady state only presents small fluctuations around the state \(n_1=n_2=n_3=1/3\) and there is no limit cycle and hence no coexistence. This result is thus similar to that of the two-state model and is again due to the fact that the \(t\rightarrow \infty \) and \(N\rightarrow \infty \) limits do not commute.

Fig. 13.4
figure 4

Steady state probability density for the three-state model with \(U=1, \; V=-4\) and \(W=0\). For both panels, \(a=2.85\), while \(N=500\) for the left panel and \(N=5000\) for the right panel. The gray shaded horizontal bar indicates the code for the values of \(P_{ss}\) and the arrows indicate the direction of increase (small values of \(P_{ss}\) are darker and larger values are brighter)

When we increase the value of a toward \(a_c\) (Fig. 13.5), the situation for small populations (left panel) barely changes: the limit cycle is now slightly favored over the fixed point, but the coexistence is still there. However, for the larger population (right panel) the limit cycle clearly dominates (there is only a very weak bright spot in the center of the triangle, which completely fades out for even larger populations). Therefore, once again, the finite population fluctuations destroy the coexistence in the limit of large populations.

Fig. 13.5
figure 5

Steady state probability density for the three state model with \(U=1, \; V=-4\) and \(W=0\). For both panels, \(a=2.87\), while \(N=500\) for the left panel and \(N=5000\) for the right panel. The gray shaded horizontal bar indicates the code for the values of \(P_{ss}\) and the arrows indicate the direction of increase (small values of \(P_{ss}\) are darker and larger values are brighter)

An order parameter that is a discrete version of one used by Kuramoto for continuous phases is

$$\begin{aligned} r(n_1,n_2) = \left| \frac{1}{N}\sum _{k=1}^{N} \exp (i \phi _k) \right| =\left| n_1 + n_2 \exp (i 2\pi /3) + (1-n_1-n_2)\exp (i 4\pi /3) \right| , \end{aligned}$$
(13.22)

where the phase \(\phi _k\) of state k is defined as \(\phi _k =\frac{2\pi }{3} (k-1)\). The average of the order parameter over the steady state is then given by

$$\begin{aligned} \langle r \rangle = \int \int r(n_1, n_2) P_{ss} (n_1, n_2) dn_1 dn_2. \end{aligned}$$
(13.23)

In Fig. 13.6 we show the average order parameter as a function of a for several numbers of units and also for infinite N. As N increases, the order parameter curve becomes steeper and steeper, indicating a first-order transition and confirming our assertion that the coexistence is destroyed.

Fig. 13.6
figure 6

Kuramoto order parameter as a function of the control parameter a for \(U=1, \; V=-4\) and \(W=0\). The full lines represent different numbers of units (\(N=250\), 500, 1000, 5000 and 50000) and the dashed line corresponds to the case of an infinite number of units (\(N\rightarrow \infty \) and then \(t\rightarrow \infty \)). The gray region indicates the coexistence region for the infinite number of units case, for which the order parameter is double valued
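For finite \(N\), the average (13.23) can also be estimated as a time average over a stochastic simulation of the update rule (13.16). A sketch with \(N = 300\) and two hypothetical values of \(a\), one below and one above the transition region:

```python
import math, random

# Monte Carlo sketch of the three-state update rule (13.16), estimating the
# time-averaged Kuramoto order parameter (13.22)-(13.23). U, V, W follow
# Figs. 13.4-13.6; the two values of a (one disordered, one ordered, with
# a_c = 3) are hypothetical illustration choices.

U, V, W = 1.0, -4.0, 0.0

def order_parameter(n1, n2):
    """Eq. (13.22): |n1 + n2 e^{i 2pi/3} + n3 e^{i 4pi/3}|."""
    n3 = 1.0 - n1 - n2
    re = n1 - 0.5 * (n2 + n3)
    im = (math.sqrt(3.0) / 2.0) * (n2 - n3)
    return math.hypot(re, im)

def simulate(N, a, dt=0.01, steps=10_000, seed=2):
    rng = random.Random(seed)
    N1, N2 = N // 3, N // 3
    r_sum, samples = 0.0, 0
    for step in range(steps):
        n1, n2 = N1 / N, N2 / N
        n3 = 1.0 - n1 - n2
        p12 = math.exp(a * (U * n2 + V * n3 + W * n1)) * dt
        p23 = math.exp(a * (U * n3 + V * n1 + W * n2)) * dt
        p31 = math.exp(a * (U * n1 + V * n2 + W * n3)) * dt
        out1 = sum(1 for _ in range(N1) if rng.random() < p12)
        out2 = sum(1 for _ in range(N2) if rng.random() < p23)
        out3 = sum(1 for _ in range(N - N1 - N2) if rng.random() < p31)
        N1 += out3 - out1
        N2 += out1 - out2
        if step >= steps // 2:   # time average over the second half of the run
            r_sum += order_parameter(N1 / N, N2 / N)
            samples += 1
    return r_sum / samples

r_disordered = simulate(300, a=2.0)
r_ordered = simulate(300, a=3.5)
# r_ordered is substantially larger than r_disordered
```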

13.3 Coarse Graining Kuramoto’s Model

So far we have only discussed discrete-state models. In this section we will present a formal connection between continuous phase and discrete phase stochastic dynamics to explore whether coarse graining a continuous phase model can lead to, say, a three-state model. The results, as we will see, are somewhat unexpected. For this purpose we will use the normal form formalism. Full details of this approach can be found in [12].

We start with a globally coupled array of N continuous phase oscillators. The state of each oscillator can be described by a d-dimensional vector \(\mathbf {X}\). That is, the entire array is described by the variables \(\left\{ \mathbf {X}_s\right\} _{s=1}^{N}\), which obey the equations of motion

$$\begin{aligned} \dot{\mathbf {X}}_s = \mathcal {F}\left( \mathbf {X}_s\right) +\mathcal {I}\left( \mathbf {X}_1, \ldots , \mathbf {X}_N\right) + \mathbf {\chi }_s(t). \end{aligned}$$
(13.24)

Here \(\mathcal {F}\) accounts for the internal dynamics of each unit. These dynamics may be schematically represented by Fig. 13.7a, which is meant to show an arbitrary limit cycle. The function \(\mathcal {I}\) accounts for the interactions among the members of the ensemble, which, in the mean field approach, takes the form

$$\begin{aligned} \mathcal {I}\left( \mathbf {X}_1, \ldots , \mathbf {X}_N\right) = \mathcal {I}\left( \mathcal {R}\right) ~~~ \text {where} ~~~ \mathcal {R} = \frac{1}{N}\sum _{s = 1}^{N} A_{s}. \end{aligned}$$
(13.25)

The functions \(\mathcal {F}\) and \(\mathcal {I}\) are identical for all the oscillators and hence do not carry an index. The last term of (13.24), \(\mathbf {\chi }_s(t)\), represents the inherent fluctuations of each oscillator.

In the vicinity of a Hopf bifurcation that gives rise to the formation of a limit cycle, the dynamics of each oscillator can be reduced (center manifold theorem) to that of a complex amplitude, i.e. two real dimensions, obeying the normal form

$$\begin{aligned} \dot{A}_s = J \left( 1 - \left| A_s\right| ^2\right) A_s + K f\left( \left| \mathcal {R}\right| ^2\right) \mathcal {R} + \sqrt{\eta }\zeta _s(t). \end{aligned}$$
(13.26)

Here the real constant parameter J governs the internal dynamics of each unit, and we have scaled out irrelevant constants. In particular, this equation is written in a moving frame: we have removed the natural frequency \(\omega \) of the oscillators. In these amplitude variables, the internal dynamics lead to a perfectly circular limit cycle, as illustrated in Fig. 13.7b. The parameter K is a measure of the strength of the interactions, which are written in a way that respects the phase invariance. Moreover, the generic function f is positive definite so as to model an attractive interaction between oscillators. In the original Kuramoto model [5], \(f(\left| \mathcal {R}\right| ^2) =1\) and the interaction is then linear in \(\mathcal {R}\). For the fluctuations we choose \(\delta \)-correlated complex Gaussian noises:

$$\begin{aligned} \zeta _s(t) = \zeta _R^{s}(t) + i\zeta _I^{s}(t), \end{aligned}$$
(13.27)

where \( \zeta _R^{s}(t)\) and \( \zeta _I^{s}(t)\) are independent real Gaussian white noises of zero mean and correlation functions

$$\begin{aligned} \left\langle \zeta _R^{s}\left( t\right) \zeta _R^{s^{\prime }}\left( t^{\prime }\right) \right\rangle = \left\langle \zeta _I^{s}\left( t\right) \zeta _I^{s^{\prime }}\left( t^{\prime }\right) \right\rangle =\delta _{ss^{\prime }}\delta \left( t-t^{\prime }\right) , ~~~ \text {and} ~~~ \left\langle \zeta _R^{s}\left( t\right) \zeta _I^{s^{\prime }}\left( t^{\prime }\right) \right\rangle = 0. \end{aligned}$$
(13.28)
Fig. 13.7
figure 7

Schematic picture of the coarse-graining of the phase variable. a Schematic of an arbitrary limit cycle. b Amplitude equation near the point where the limit cycle first develops. c Reduction to phase dynamics. d Markov chain model for coarse-grained phases

If the internal dynamics dominate over the interactions and fluctuations (\(J\gg K\) and \(J\gg \eta \)), then after a short transient \(\left| A_s\right| \sim 1\) and the system can be described by the phase equations

$$\begin{aligned} \dot{\phi }_s = K F(r)\sin \left( \psi - \phi _s\right) + \sqrt{\eta }\xi _s\left( t\right) ~~~ \text {where} ~~~ R = \frac{1}{N}\sum _{s = 1}^{N} e^{i\phi _{s}} \equiv r e^{i\psi }, \end{aligned}$$
(13.29)

with \( F(r) = rf(r^2)\). The phase evolution is represented by Fig. 13.7c, where only the phase of oscillation is relevant.
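The phase equations (13.29) can be simulated directly by the Euler–Maruyama method. The sketch below takes the original Kuramoto choice \(f(r) = 1\), for which \(F(r) = r\) and \(K_c = \eta \); the population size, time step, and the two values of \(K\) are hypothetical choices:

```python
import math, random

# Euler-Maruyama sketch of the phase equations (13.29) for identical
# oscillators in the co-moving frame, with the Kuramoto choice f(r) = 1,
# so F(r) = r and K_c = eta. Population size, time step, and the two
# values of K are hypothetical illustration choices.

def simulate(N, K, eta, dt=0.01, steps=3000, seed=0):
    rng = random.Random(seed)
    phi = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    noise = math.sqrt(eta * dt)
    r_sum, samples = 0.0, 0
    for step in range(steps):
        # mean field R = r e^{i psi} over the current phases
        cx = sum(math.cos(p) for p in phi) / N
        sx = sum(math.sin(p) for p in phi) / N
        r, psi = math.hypot(cx, sx), math.atan2(sx, cx)
        phi = [p + K * r * math.sin(psi - p) * dt + noise * rng.gauss(0.0, 1.0)
               for p in phi]
        if step >= steps // 2:   # time average over the second half of the run
            r_sum += r
            samples += 1
    return r_sum / samples

eta = 0.5                              # then K_c = eta / f(0) = 0.5
r_sync = simulate(300, K=2.0, eta=eta)    # well above K_c: synchronized
r_async = simulate(300, K=0.1, eta=eta)   # well below K_c: r ~ O(1/sqrt(N))
```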

At the mean field level, this set of stochastic differential equations can be described by a nonlinear Fokker–Planck equation for the one-particle probability density \(\rho \left( \phi , t\right) \), which takes the form

$$\begin{aligned} \frac{\partial \rho }{\partial t} = \frac{\eta }{2}\frac{\partial ^2\rho }{\partial \phi ^2} - K\frac{\partial }{\partial \phi }\left\{ \rho \Omega \left[ \rho ,\phi \right] \right\} , \end{aligned}$$
(13.30)

where the second derivative term on the right is the diffusion term, and where the drift contains

$$\begin{aligned} \Omega \left[ \rho ,\phi \right] = F(r\left[ \rho \right] )\sin \left( \psi \left[ \rho \right] - \phi \right) , \end{aligned}$$
(13.31)

with

$$\begin{aligned} R=r\left[ \rho \right] e^{i\psi \left[ \rho \right] } \equiv \int _{0}^{2\pi } \rho \left( \phi , t\right) e^{i\phi } d \phi . \end{aligned}$$
(13.32)

The asynchronous state corresponds to a uniform distribution of phases,

$$\begin{aligned} \rho (\phi ) = \frac{1}{2\pi }, \end{aligned}$$
(13.33)

which destabilizes at the critical point

$$\begin{aligned} K_c = \frac{\eta }{f(0)} . \end{aligned}$$
(13.34)
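As a quick numerical check of (13.32) and (13.33), the uniform distribution indeed carries a vanishing order parameter:

```python
import numpy as np

M_grid = 2000
phi = np.linspace(0.0, 2 * np.pi, M_grid, endpoint=False)
dphi = 2 * np.pi / M_grid
rho_uniform = np.full(M_grid, 1.0 / (2 * np.pi))

# R = integral over [0, 2pi] of rho(phi) exp(i phi) dphi, cf. (13.32)
R = np.sum(rho_uniform * np.exp(1j * phi)) * dphi
r = np.abs(R)    # vanishes for the asynchronous state (13.33)
```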

Near the onset of synchronization, the dynamics can be described by the normal form

$$\begin{aligned} \dot{R} = \left( \alpha -\beta \left| R\right| ^2\right) R ~~~ \text {with} ~~~ \alpha = \frac{f(0)}{2}\left( K - K_c\right) , ~~~ \text {and} ~~ \beta = \frac{K_c}{2} \left( \frac{1}{2}f(0) - f^{\prime }(0)\right) , \end{aligned}$$
(13.35)

and \(f^{\prime }(0) \equiv \left. (df(r)/dr)\right| _{r=0}\). The bifurcation is supercritical if \(\beta >0\) and subcritical if \(\beta <0\). Furthermore, the synchronization is tighter if f(r) is an increasing function, representing stronger interactions with increasing r. Since we are interested in a coarse-graining of the phase, for which a coarser synchronization is more propitious, we will focus on a decreasing function f(r). Then \(\beta \) is always positive and the bifurcation will always be supercritical.
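The quantities in (13.34) and (13.35) are simple to evaluate. A sketch, using the illustrative choice \(f(r) = e^{-r/a}\) (so \(f(0)=1\) and \(f^{\prime }(0)=-1/a\)) with the parameter values quoted in Fig. 13.8:

```python
import numpy as np

def bifurcation_coefficients(K, eta, f0, fprime0):
    """K_c from (13.34) and the normal-form coefficients from (13.35)."""
    K_c = eta / f0
    alpha = 0.5 * f0 * (K - K_c)
    beta = 0.5 * K_c * (0.5 * f0 - fprime0)
    return K_c, alpha, beta

a, eta, K = 0.3, 0.98696, 1.5708
K_c, alpha, beta = bifurcation_coefficients(K, eta, f0=1.0, fprime0=-1.0 / a)

supercritical = beta > 0            # guaranteed here since f is decreasing
r_star = np.sqrt(alpha / beta)      # steady-state |R|, valid only near onset
```

Since \(f^{\prime }(0) < 0\) for any decreasing f, \(\beta > 0\) and the bifurcation is supercritical, as stated above; note that \(r^{*}\) is a small-amplitude estimate, reliable only close to \(K_c\).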

We are now ready to perform the coarse graining of the phase. Consider M discrete phases,

$$\begin{aligned} \phi \in \left[ 0,2\pi \right] ~ \rightarrow ~ \phi \in \left\{ j\Delta \phi \right\} _{j=0}^{M-1}, ~~~ \text {where} ~~~ \Delta \phi = \frac{2\pi }{M}. \end{aligned}$$

Discretizing the nonlinear Fokker–Planck equation (13.30), we obtain

$$\begin{aligned} \dot{P}_j = - \left( w_{j\rightarrow j+1} + w_{j\rightarrow j-1}\right) P_j + w_{j +1 \rightarrow j}P_{j+1} + w_{j - 1 \rightarrow j}P_{j-1}, \end{aligned}$$
(13.36)

where \(P_j (t)\) is the probability to be in the jth phase, and

$$\begin{aligned} w_{j\rightarrow j\pm 1} = \frac{\eta }{2(\Delta \phi )^2} \mp \frac{K}{2\Delta \phi }\Omega _{j}, ~~~ \text {with} ~~~ \Omega _j = F(r)\sin \left( \psi - j\Delta \phi \right) . \end{aligned}$$
(13.37)

For this to be an acceptable physical description, the transition rates (13.37) must be positive. A bound that ensures this is (see [12])

$$\begin{aligned} K < K_\mathrm{max} = \frac{\eta }{F_\mathrm{max}\Delta \phi }, \end{aligned}$$
(13.38)

where \(F_\mathrm{max}\) is the maximum of the function F(r) in the interval \(r \in [0,1]\). The coarse-grained dynamics can be interpreted as a Markov chain, as illustrated in Fig. 13.7d.
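The construction (13.36)–(13.38) can be sketched directly. The following assembles the rates (13.37) for an illustrative seven-state chain, evaluates the positivity bound (13.38), and checks that one Euler step of the master equation conserves probability (parameter values are illustrative, not those of Fig. 13.8):

```python
import numpy as np

def step_rates(M, K, eta, r, psi, F):
    """Transition rates (13.37) for the M coarse-grained phases."""
    dphi = 2 * np.pi / M
    Omega = F(r) * np.sin(psi - np.arange(M) * dphi)
    w_plus = eta / (2 * dphi**2) - K * Omega / (2 * dphi)    # j -> j+1
    w_minus = eta / (2 * dphi**2) + K * Omega / (2 * dphi)   # j -> j-1
    return w_plus, w_minus

a, eta, M = 0.3, 0.98696, 7
F = lambda r: r * np.exp(-r**2 / a)           # F(r) = r f(r^2), f decreasing

F_max = F(np.linspace(0.0, 1.0, 1001)).max()  # maximum of F on [0, 1]
dphi = 2 * np.pi / M
K_max = eta / (F_max * dphi)                  # positivity bound (13.38)

# Rates at the worst-case r (the maximizer of F) for K below the bound.
K = 0.9 * K_max
w_plus, w_minus = step_rates(M, K, eta, r=np.sqrt(a / 2), psi=0.3, F=F)

# One Euler step of the master equation (13.36), periodic in j.
P = np.full(M, 1.0 / M)
dt = 1e-3
gain = np.roll(w_minus, -1) * np.roll(P, -1) + np.roll(w_plus, 1) * np.roll(P, 1)
P_new = P + dt * (-(w_plus + w_minus) * P + gain)
```

The periodic shifts (`np.roll`) implement the \(j\pm 1\) neighbors of the ring of phases; summing (13.36) over j shows that total probability is conserved exactly.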

This discrete formalism reproduces the bifurcation structure of the continuous phase model for \(M\ge 4\). Figure 13.8 displays our numerical results for both dynamics, continuous and discrete phases. Figure 13.8a shows numerical simulations of the amplitude equations (13.26): \(\left| A_s\right| \sim 1\) with a small dispersion, but with an agglomeration of the phases indicating that the system has crossed the critical point where phase synchronization first occurs. The seven-bar histogram of Fig. 13.8b uses the same data as Fig. 13.8a, while the continuous curve is a numerical solution of the nonlinear Fokker–Planck equation (13.30). The black squares in Fig. 13.8b are the result of a seven-node Markov chain following the prescription described above, showing good agreement between the two approaches.

Fig. 13.8 Numerical computations for \(J=200\), \(N = 5000\), \(K=1.5708 > K_c=0.98696\), \(\eta =0.98696\), and \(f(r) = \exp \left( -r/a\right) \) with \(a=0.3\). a Amplitude equations (13.26), showing an agglomeration of phases along the limit cycle. b Comparison between the phase distribution from the solution of the nonlinear Fokker–Planck equation (13.30) and a seven-node Markov chain

The case of \(M=3\), however, turns out to be pathological, which we did not anticipate. The bifurcation becomes transcritical, and the system displays dynamical behavior very different from that of the continuous phase model. A full analysis of these sorts of three-state units can be found in [12].

13.4 Conclusions and Perspectives

The vast majority of the enormous literature on synchronization phenomena assumes continuous time, and often continuous state space, for the evolution of the interacting units of interest. However, a number of years ago we concluded that discrete time and discrete state space would make these problems more approachable. In this Chapter we discussed two aspects of synchronization related to discrete state models, namely, the roles of the number of units and of the number of states in synchronization.

We presented arrays of two-state Markovian units and arrays of three-state Markovian units. In both cases, when arrays with an infinite number of units present coexistence of stable states, the fluctuations created by a finite number of units destroy the coexistence and the system “chooses” one of the stable states, even in the limit of a large number N of units. We traced the apparent contradiction, that taking the limit of a large number of units does not recover the coexistence, to an ergodicity breaking: the order of the limits \(t\rightarrow \infty \) and \(N \rightarrow \infty \) matters.

For arrays of globally coupled two-state Markovian units, when there is an infinite number of these units in the array the problem becomes deterministic (described by mean field equations) unless there is some external source of noise, which we have not included in our description. We were particularly interested in the steady state of the system, which is necessarily time independent: two-state Markovian units do not lead to time dependent configurations as \(t\rightarrow \infty \). We assumed a (modified) nonlinear (polynomial) normal form for the interactions among the units, which leads the system to a saddle-node bifurcation. For values of the control parameter below the bifurcation value, there are two stable states, one arising from the ordinary normal form for this type of bifurcation, and the other from the modification that keeps the fractions of units in each of the two states within the physical range 0 to 1. (There is also an unstable state in the bifurcation regime.) Hence, there is a coexistence of two stable states in this region, the steady state being entirely settled by the deterministic equation and the initial condition.

We then considered a globally coupled array of a finite number of two-state units. The finite number introduces a noise term that entirely changes the behavior of the system. In particular, as \(t\rightarrow \infty \) one state becomes more populated than the other, and any memory of the initial state is forgotten. This imbalance in the population of the two states describes a steady state synchronization. If we now take the limit \(N\rightarrow \infty \), the system does not recover the deterministic description of the previous paragraph: the initial state is forgotten in this case. In other words, as noted above, the limits \(t\rightarrow \infty \) and \(N\rightarrow \infty \) do not commute. As an aside, we noted that the inclusion of a memory, for instance a refractory period that forces any unit arriving at, say, state 2 to wait for some time before returning to state 1, yields time-dependent oscillations [10].

We next considered a globally coupled array of three-state Markovian units. Once again we started with a deterministic mean field noiseless model, with a different form of the interactions, namely, exponential (highly nonlinear). As in the two-state case, there is a coexistence of two stable states. In this case, however, the bifurcation is more interesting: it involves not two fixed points but a fixed point and a limit cycle. The limit cycle implies that the steady state encompasses a periodic variation of the densities. This is a more striking form of synchronization, that is, one in which the units move together in unison. We moved on to the case of a finite number of globally coupled units in the array, which naturally introduces a noise contribution. Depending on the number of units and other parameter values, the coexistence is still there, but one or the other stable state becomes weaker (i.e. fewer units are in one stable state than in the other) as the number of units increases. Once again the limits \(t\rightarrow \infty \) and \(N\rightarrow \infty \) do not commute, and the coexistence is washed away.

Finally we considered an entirely different question: is it possible to coarse grain a collection of nonlinearly interacting Kuramoto oscillators to arrive at a three-state (or two-state) model? The Kuramoto system resides in continuous time and continuous phase, the latter ranging continuously from 0 to \(2\pi \), while our models have a finite number of states. We showed that reduction by coarse graining is possible, but that our particular three-state model cannot be obtained via a simple coarse-graining procedure of the Kuramoto model. We are able to reduce the continuum model to discrete ones in this manner, but only with more than three states. On the other hand, reduction to a three-state model is possible, but with different critical synchronization behavior than that of our model.

What remains to be done? Of course, in any attempt to model synchronization phenomena there is a very large field that others have populated and that we have not yet approached. There is also a huge literature on synchronization applications. To name just a few directions we may take, we note the problem of the range of interactions in three-state models, the role of the fluctuations due to the finite number of units in systems with growing populations, and the establishment of a general framework for the coarse graining of continuous models.

In [13], we showed that for a two-state model the range of interaction produces a transition from a disordered state (short-range interactions) to an ordered state (long-range interactions). Preliminary studies for a three-state model have indicated that increasing the range of interactions interpolates between the disordered and ordered (synchronized) states through patterned spatial formations (spirals appear for intermediate-range interactions). A variant of Wood’s model for which there is a birth term was proposed in [14]. Such a model also presents coexistence in the mean field approach. The question we pose is whether the fluctuations produced by the finite number of units also destroy the coexistence for this model. Finally, we mention again that we studied the coarse graining of Kuramoto’s model. However, the possibility of mapping the phase transition of other continuous models into discrete coarse-grained models is still an open question. We hope our work along these lines will continue to shed light on these phenomena.