1 Introduction

Cortical networks can generate a wide variety of oscillatory rhythms with frequencies spanning five orders of magnitude (Buzsáki and Draguhn 2004). Slow oscillatory activity (0.1–1 Hz) has been observed in vivo during periods of decreased alertness, such as slow wave sleep and anesthesia (Steriade et al. 1993). Furthermore, such activity can be produced in vitro by bathing cortical slices in a medium with typical extracellular ion concentrations (Sanchez-Vives and McCormick 2000). The function of slow oscillations during sleep remains uncertain, but there is growing evidence that they play a role in memory consolidation processes (Marshall et al. 2006).

A key feature of these slow oscillations is that they tend to be an alternating sequence of two bistable states, referred to as the up and down states. Up states in networks are characterized by high levels of firing activity, due to depolarization in single cells. Down states in networks typically appear quiescent, due to hyperpolarization in single cells. There is strong evidence that up states are generated by recurrent connections (Cossart et al. 2003). This suggests that up states may be spontaneous remnants of stimulus-induced persistent states utilized for working memory (Wang 2001) and other network computations (Major and Tank 2004). Furthermore, recent two-photon imaging in vivo has revealed that subthreshold NMDA-receptor-dependent calcium signals are widespread during up states (Chen et al. 2013). Such signals may serve to reinforce cellular transformations underlying memory consolidation during sleep (Marshall et al. 2006). In addition, while neurons tend to be closer to spike threshold during up states, sensory stimuli may actually produce weaker responses during these periods (Petersen et al. 2003). Correspondences between spontaneous neural activity and underlying network stimulus tuning have been observed in a variety of brain areas (Fox and Raichle 2007). For instance, the dynamic evolution of spontaneous visual cortical activity has been linked to the architecture producing orientation maps (Kenet et al. 2003).

Several different cellular and synaptic mechanisms have been suggested to underlie the transitions between up and down states. One possibility is that the network is recurrently coupled with excitation, stabilizing both a quiescent and active state (Amit and Brunel 1997; Renart et al. 2007). Fluctuations due to probabilistic synapses, channel noise, and randomness in network connectivity can then lead to spontaneous transitions between the quiescent and active state (Parga and Abbott 2007; Bressloff 2010; Litwin-Kumar and Doiron 2012). Alternatively, switches between low and high activity states may arise by some underlying systematic slow process. For instance, it has been shown that competition between recurrent excitation and the negative feedback produced by activity-dependent synaptic depression can lead to slow oscillations in firing rate whose timescale is set by the depression timescale (Bart et al. 2005; Holcman and Tsodyks 2006; Kilpatrick and Bressloff 2010). Excitatory-inhibitory networks with facilitation can also produce slow oscillations: slow facilitation of feedback inhibition terminates the up state, and the down state is then rekindled by positive feedback from recurrent excitation (Melamed et al. 2008). These mechanisms rely on dynamic changes in the strength of network connections. However, Compte et al. (2003) proposed that single cell mechanisms can also switch networks between up and down states. The up state is maintained by strong recurrent excitation balanced by inhibition, and transitions to the down state occur due to a slow hyperpolarizing current. Once in the down state, the slow hyperpolarizing current is inactivated, and excitation reinitiates the up state. Slow hyperpolarizing currents are prime examples of mechanisms underlying spike rate adaptation (Benda and Herz 2003).
One particularly well-studied example is the class of currents generated by calcium-gated potassium channels, often referred to as medium afterhyperpolarization currents (Madison and Nicoll 1984; Pedarzani and Storm 1993; Higgs et al. 2006). A similar mechanism has been utilized in models of perceptual rivalry, where dominance switches between two mutually inhibiting populations occur due to the buildup of network-wide spike rate adaptation (Laing and Chow 2002; Moreno-Bote et al. 2007).

In this paper, we utilize a rate-based model of an excitatory network with spike rate adaptation to explore the impact that noise perturbations have upon the relative phase and duration of slow oscillations. The adaptation variable quantifies the average effect of slow hyperpolarizing currents across the network, as derived by Vreeswijk and Hansel (2001). We find that, as in the spiking model studied by Compte et al. (2003), the interplay between recurrent excitation and the adaptation variable produces a slow oscillation in the firing rate of the network. In fact, for slow adaptation timescales, the oscillations evolve as fast switches between a low and high activity state, the stable fixed points of the adaptation-free system. Since the timescale and slow dynamics of the oscillation are set by the adaptation variable, we mainly focus on the impact of perturbations to the adaptation variable in our model. As we will show, perturbations of the activity variable have much less impact on the oscillation phase than perturbations of the slow adaptation variable.

Another remarkable feature of slow oscillations, observed during slow-wave sleep and anesthesia, is that the up and down states tend to be synchronized across different regions of cortex and thalamus (Steriade et al. 1993; Massimini et al. 2004). Specifically, both the up and down states start near synchronously in cells located up to 12 mm apart (Volgushev et al. 2006). Such remarkable coherence between distant network activity cannot be accomplished by single cell mechanisms, but requires either long range network connectivity or some external signal forcing entrainment (Traub et al. 1996; Smeal et al. 2010). Activity tends to originate from several different foci in the network, quickly spreading across the rest of the network on a timescale orders of magnitude faster than the oscillation itself (Compte et al. 2003; Massimini et al. 2004). While the synchronous initiation of up states may be explained by recurrent architecture, synchronization of the down states is more difficult to explain and remains an open problem (Volgushev et al. 2006). The fact that the onset of quiescence is fast and well synchronized means there must be either a rapid relay signal between all foci or some global signal cueing the down state. Rather than suggest a disynaptic relay, using long range excitation acting on local inhibition, we suggest that background noise can serve as a synchronizing global signal (Ermentrout et al. 2008). For example, up/down state correlations in visual cortex have also been observed across 500 \(\upmu \)m, and it has been suggested this may arise due to common input from LGN (Lampl et al. 1999). Noisy but correlated inputs have been shown to be capable of synchronizing uncoupled populations of phase oscillators (Teramae and Tanaka 2004) as well as experimentally recorded cells in vitro (Galán et al. 2006).
Here we will show that correlated noise is a viable mechanism for coordinating slow oscillations in distinct uncoupled neural populations.

The paper is organized as follows: We introduce the neural population model in Sect. 2, indicating the way external noise is incorporated into the model. In Sect. 3, we characterize the periodic solutions that emerge in the noise-free model, showing that analytical expressions for the oscillation period can be derived in the case of steep firing rate functions. Then, in Sect. 4 we show how to derive phase sensitivity functions that describe how external perturbations to the periodic solution impact the asymptotic phase of the oscillation. As demonstrated, the impact of perturbations to the adaptation variable is much stronger than that of activity variable perturbations, especially for longer adaptation timescales. Thus, our studies of the impact of noise mainly focus on the effects of fluctuations in the adaptation variable. We find, in Sect. 5, that adding noise to the adaptation variable leads to up and down state durations that are shorter and more balanced, so that the up and down states last for similar lengths of time. In Sect. 6, we demonstrate that slow oscillations in distinct populations can become entrained to one another when both populations are forced by the same common noise signal. This phenomenon is robust to the introduction of independent noise in each population, as we show in Sect. 7. Lastly, we demonstrate that the rate and spike patterns of two uncoupled spiking networks can be synchronized by common noise in Sect. 8.

2 Adaptive neural populations: deterministic and stochastic models

We begin by describing the models we will use to explore the impact of external perturbations on slow oscillations. Motivated by Compte et al. (2003), we will focus on a neural population model with spike rate adaptation, akin to mutual inhibitory models used to study perceptual rivalry (Laing and Chow 2002; Moreno-Bote et al. 2007).

Single population model In a single population, neural activity u(t) receives negative feedback due to a subtractive spike rate adaptation variable (Benda and Herz 2003)

$$\begin{aligned} \dot{u}(t)&= -u(t) + f(\alpha u(t) - a(t) + I), \end{aligned}$$
(1a)
$$\begin{aligned} \tau \dot{a}(t)&= -a(t) + \phi u (t). \end{aligned}$$
(1b)

Here, u represents the mean firing rate of the neural population with excitatory connection strength \(\alpha \). The negative feedback variable a represents spike frequency adaptation with strength \(\phi \) and time constant \(\tau \). For some of our analysis we will utilize the assumption \(\tau \gg 1\), based on the fact that many forms of spike rate adaptation tend to be much slower than neural membrane time constants (Benda and Herz 2003). The constant tonic drive I initiates the high firing rate (up) state, and slow adaptation eventually attenuates activity to a low firing rate (down) state. Weak but positive drive \(I>0\) is meant to model the presence of low spiking threshold cells that spontaneously fire, utilized as a mechanism for initiating the up state in Compte et al. (2003). The firing rate function f is a monotone, saturating function such as the sigmoid

$$\begin{aligned} f(x) = \frac{1}{1 + \mathrm{e}^{- \gamma x}}. \end{aligned}$$
(2)

Commonly, in studies of neural field models, the high gain limit (\(\gamma \rightarrow \infty \)) of Eq. (2) is taken to yield the Heaviside firing rate function (Amari 1977; Laing and Chow 2002)

$$\begin{aligned} H(x)= & {} \left\{ \begin{array}{l@{\quad }l} 1 : &{} x \ge 0, \\ 0 : &{} x < 0, \end{array} \right. \end{aligned}$$
(3)

which often allows for a more straightforward analytical study of model dynamics. We exploit this fact extensively in our study. Nonetheless, we have also carried out many numerical simulations of the model for the smooth firing rate function Eq. (2), and they correspond to the results we present for sufficiently high gain. Note that this form of adaptation is often referred to as subtractive negative feedback, as current is subtracted from the population input. Alternative models of slow neural population oscillations have employed short term synaptic depression (Tabak et al. 2000; Bart et al. 2005; Holcman and Tsodyks 2006; Kilpatrick and Bressloff 2010), a form of divisive negative feedback.

A primary concern of this work is the response of Eq. (1) to external perturbations, acting on the activity u and adaptation a variables. To do so, we will use both an exact method and a linearization to identify the phase response curve of the limit cycle solutions to Eq. (1). Understanding the susceptibility of limit cycles of Eq. (1) to inputs will help us understand ways in which noise will influence the frequency and regularity of oscillations.

Stochastic single population model Following our analysis of the noise-free system, we will consider how fluctuations influence oscillatory solutions to Eq. (1). To do so, we will employ the following stochastic differential equations (SDEs) for Eq. (1) forced by white noise

$$\begin{aligned} \mathrm{d}u(t)&= \left[ - u(t) + f(\alpha u(t) - a(t) + I ) \right] \mathrm{d}t + \mathrm{d}\xi _u(t) \end{aligned}$$
(4a)
$$\begin{aligned} \mathrm{d}a(t)&= \left[ - a(t) + \phi u(t) \right] \mathrm{d}t/ \tau + \mathrm{d}\xi _a(t), \end{aligned}$$
(4b)

where we have introduced the independent Gaussian white noise processes \(\xi _u(t)\) and \(\xi _a(t)\) with zero mean \(\langle \xi _u(t) \rangle = \langle \xi _a(t) \rangle = 0\) and variances \(\langle \xi _u(t)^2 \rangle = \sigma _u^2 t\) and \(\langle \xi _a(t)^2 \rangle = \sigma _a^2 t\). Extending our results concerning the phase response curve, we will explore how noise forcing impacts the statistics of the resulting stochastic oscillations in Eq. (4). In particular, since we find noise tends to impact the phase of the oscillation more strongly when applied to the adaptation variable, we will tend to focus on the case \(\xi _u \equiv 0\).
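
A standard Euler-Maruyama discretization of Eq. (4) is sketched below (function name ours); each noise increment over a step dt has standard deviation \(\sigma \sqrt{dt}\), consistent with \(\langle \xi (t)^2 \rangle = \sigma ^2 t\). By default it treats the case the text focuses on, \(\xi _u \equiv 0\).

```python
import numpy as np

def simulate_sde(T_total=600.0, dt=0.01, alpha=0.5, phi=1.0, tau=100.0,
                 I=0.2, gamma=15.0, sigma_u=0.0, sigma_a=0.005, seed=0):
    """Euler-Maruyama integration of Eq. (4); noise enters the adaptation
    equation outside the 1/tau factor, as in Eq. (4b)."""
    rng = np.random.default_rng(seed)
    n = int(T_total / dt)
    u, a = np.empty(n), np.empty(n)
    u[0], a[0] = 0.9, 0.2
    sq = np.sqrt(dt)
    for k in range(n - 1):
        fr = 1.0 / (1.0 + np.exp(-gamma * (alpha * u[k] - a[k] + I)))
        u[k + 1] = (u[k] + dt * (-u[k] + fr)
                    + sigma_u * sq * rng.standard_normal())
        a[k + 1] = (a[k] + dt * (-a[k] + phi * u[k]) / tau
                    + sigma_a * sq * rng.standard_normal())
    return u, a

u, a = simulate_sde()
```

For weak adaptation noise the slow oscillation persists, but the up and down state durations fluctuate from cycle to cycle.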

Stochastic dual population model Finally, we will focus on how correlations in noise-forcing impact the coherence of two distinct uncoupled populations

$$\begin{aligned} \mathrm{d}u_1&= \left[ - u_1(t) + f(\alpha u_1(t) - a_1(t) + I ) \right] \mathrm{d}t + \mathrm{d}\xi _{u} \end{aligned}$$
(5a)
$$\begin{aligned} \mathrm{d}a_1&= \left[ - a_1(t) + \phi u_1(t) \right] \mathrm{d}t / \tau + \mathrm{d}\xi _{a} \end{aligned}$$
(5b)
$$\begin{aligned} \mathrm{d}u_2&= \left[ -u_2 (t) + f(\alpha u_2(t) - a_2(t) + I ) \right] \mathrm{d}t + \mathrm{d}\xi _{u} \end{aligned}$$
(5c)
$$\begin{aligned} \mathrm{d}a_2&= \left[ -a_2 (t) + \phi u_2 (t) \right] \mathrm{d}t / \tau + \mathrm{d}\xi _{a}. \end{aligned}$$
(5d)

Thus, the system Eq. (5) describes the dynamics of two distinct neural populations \(u_1\) and \(u_2\), each receiving input I. Our main interest lies in the impact the noise terms have upon the phase relationship between the two systems’ states. In this version of the model, the noise \(\xi _u\) to the activity variables is identical (fully correlated) across the two populations, as is the noise \(\xi _a\) to the adaptation variables. All means are zero, \(\langle \xi _{u}^2 (t) \rangle = \sigma _u^2 t = \mathbf{D}_{11} t\), and \(\langle \xi _{a}^2 (t) \rangle = \sigma _a^2 t = \mathbf{D}_{22} t\). For this study, we assume there are no correlations between activity and adaptation noise, so \(\langle \xi _u(t) \xi _{a} (t) \rangle = 0\). A more general version of the model Eq. (5) would consider the possibility of independent noise in each population

$$\begin{aligned} \mathrm{d}u_1&= \left[ - u_1(t) + f(\alpha u_1(t) - a_1(t) + I ) \right] \mathrm{d}t + \chi _u \mathrm{d}\xi _{uc} + \sqrt{1 - \chi _u^2} \mathrm{d}\xi _{u1} \end{aligned}$$
(6a)
$$\begin{aligned} \mathrm{d}a_1&= \left[ - a_1(t) + \phi u_1(t) \right] \mathrm{d}t / \tau + \chi _a \mathrm{d}\xi _{ac} + \sqrt{1 - \chi _a^2} \mathrm{d}\xi _{a1} \end{aligned}$$
(6b)
$$\begin{aligned} \mathrm{d}u_2&= \left[ -u_2 (t) + f(\alpha u_2(t) - a_2(t) + I ) \right] \mathrm{d}t + \chi _u \mathrm{d}\xi _{uc} + \sqrt{1 - \chi _u^2} \mathrm{d}\xi _{u2} \end{aligned}$$
(6c)
$$\begin{aligned} \mathrm{d}a_2&= \left[ -a_2 (t) + \phi u_2 (t) \right] \mathrm{d}t / \tau + \chi _a \mathrm{d}\xi _{ac} + \sqrt{1 - \chi _a^2} \mathrm{d}\xi _{a2} . \end{aligned}$$
(6d)

Noise terms all have zero mean, with variances defined by \(\langle \xi _{uj}^2 (t) \rangle = \sigma _{uj}^2 t = \mathbf{D}_{uj} t\) and \(\langle \xi _{aj}^2 (t) \rangle = \sigma _{aj}^2 t= \mathbf{D}_{aj} t\) (\(j=1,2,c\)). To ease calculations, we take \(\mathbf{D}_{u1} = \mathbf{D}_{u2} \equiv \mathbf{D}_{ul} = \sigma _u^2\) and \(\mathbf{D}_{a1} = \mathbf{D}_{a2} \equiv \mathbf{D}_{al} = \sigma _a^2\). The degree of noise correlation between populations is controlled by the parameters \(\chi _u\) and \(\chi _a\), so in the limit \(\chi _{u,a} \rightarrow 1\), the model Eq. (6) reduces to Eq. (5).
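
The correlation structure implied by Eq. (6) is easy to verify directly: each population receives the increment \(\chi \, \mathrm{d}\xi _{c} + \sqrt{1-\chi ^2} \, \mathrm{d}\xi _{j}\), so increments in the two populations retain total variance \(\sigma ^2 \mathrm{d}t\) while their correlation coefficient is \(\chi ^2\). A minimal sketch (function name ours):

```python
import numpy as np

def correlated_increments(n, chi, sigma=1.0, dt=0.01, seed=1):
    """Two streams of partially correlated noise increments, as in Eq. (6):
    chi * dxi_c + sqrt(1 - chi^2) * dxi_j, j = 1, 2."""
    rng = np.random.default_rng(seed)
    common = rng.standard_normal(n)          # shared source dxi_c
    ind1 = rng.standard_normal(n)            # independent source dxi_1
    ind2 = rng.standard_normal(n)            # independent source dxi_2
    s = sigma * np.sqrt(dt)
    d1 = s * (chi * common + np.sqrt(1 - chi**2) * ind1)
    d2 = s * (chi * common + np.sqrt(1 - chi**2) * ind2)
    return d1, d2

d1, d2 = correlated_increments(200000, chi=0.6)
rho = np.corrcoef(d1, d2)[0, 1]   # expected to be near chi^2 = 0.36
```

This mixing construction preserves the single-population statistics while letting \(\chi _u\) and \(\chi _a\) tune only the between-population correlation.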

Fig. 1

Single adapting neural population Eq. (1) generates slow oscillations. a Numerical simulation of Eq. (1) for adaptation timescale \(\tau = 100\) (1 s) and input \(I=0.2\). b Partitioning of \((\tau , I)\) parameter space shows the range of inputs I leading to oscillations expands as the adaptation timescale \(\tau \) is increased, according to Eq. (9). c,d Bifurcation diagram showing lower (\(I_{-H}\)) and upper (\(I_{+H}\)) boundaries of instability that arise as the input is increased for (c) \(\tau =10\) and (d) \(\tau =100\). Firing rate function is sigmoidal Eq. (2). Other parameters are \(\phi = 1\), \(\alpha = 0.5\), and \(\gamma = 15\). Numerical simulations employ Euler’s method with a timestep \(dt = 10^{-6}\) s

3 Periodic solutions of a single population

We begin by studying periodic solutions of the single population system Eq. (1), as demonstrated in Fig. 1a. First, we note that for firing rate functions f with finite gain, we can identify the emergence of oscillations by analyzing the stability of the equilibria of Eq. (1). That is, we assume \((\dot{u}, \dot{a}) = (0,0)\), so the system becomes

$$\begin{aligned} \bar{u}&= f( \alpha \bar{u} - \bar{a} + I), \\ \bar{a}&= \phi \bar{u}, \end{aligned}$$

which can be reduced to the single equation

$$\begin{aligned} \bar{u}&= f((\alpha - \phi ) \bar{u} + I) = g(\bar{u}). \end{aligned}$$
(7)

Roots of Eq. (7), defining fixed points of Eq. (1), are plotted as a function of the input I in Fig. 1c, d. Utilizing the sigmoidal firing rate function f given by Eq. (2), we can show that there will be a single fixed point as long as \(\phi > \alpha \). In this case, we can compute

$$\begin{aligned} \frac{\mathrm{d}g(\bar{u})}{\mathrm{d}\bar{u}} = - (\phi - \alpha ) f'((\alpha - \phi ) \bar{u} + I) = - \frac{(\phi - \alpha ) \mathrm{e}^{- \gamma ((\alpha - \phi ) \bar{u} + I)}}{\left( 1 + \mathrm{e}^{- \gamma ((\alpha - \phi ) \bar{u} + I)} \right) ^2} <0. \end{aligned}$$

Since \(g(\bar{u})\) is monotone decreasing, \(\bar{u} - g(\bar{u})\) is monotone increasing. Further, noting \(\lim _{\bar{u} \rightarrow \pm \infty } \left[ \bar{u} - g(\bar{u}) \right] = \pm \infty \), it is clear \(\bar{u} - g(\bar{u})\) crosses zero exactly once, so Eq. (7) has a single root when \(\phi > \alpha \). Stability of this equilibrium is given by the eigenvalues of the associated Jacobian
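
This uniqueness argument also makes the root easy to find numerically: \(\bar{u} - g(\bar{u})\) is monotone increasing and changes sign on [0, 1], so bisection converges to the single fixed point. A minimal sketch with the parameters of Fig. 1 (function names ours):

```python
import math

def g(u, alpha=0.5, phi=1.0, I=0.2, gamma=15.0):
    """Right-hand side of the scalar fixed-point equation, Eq. (7)."""
    return 1.0 / (1.0 + math.exp(-gamma * ((alpha - phi) * u + I)))

def fixed_point(tol=1e-12, **kw):
    """Bisection on u - g(u), which is monotone increasing when phi > alpha,
    so the root is unique (see text)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - g(mid, **kw) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ubar = fixed_point()
```

Note that for inputs inside the oscillatory regime this unique fixed point is unstable, which is precisely what permits the limit cycle.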

$$\begin{aligned} J(\bar{u}, \bar{a}) = \left( \begin{array}{c@{\quad }c} -1 + \alpha f'((\alpha - \phi ) \bar{u} + I ) &{}\quad - f'((\alpha - \phi ) \bar{u} +I) \\ \phi / \tau &{}\quad -1/ \tau \end{array} \right) . \end{aligned}$$

We note that the sigmoid Eq. (2) satisfies the Riccati equation \(f' = \gamma f (1 - f)\), so we can use Eq. (7) to write

$$\begin{aligned} J(\bar{u}, \bar{a}) = \left( \begin{array}{c@{\quad }c} -1 + \alpha \gamma \bar{u} (1- \bar{u}) &{}\quad - \gamma \bar{u} (1- \bar{u}) \\ \phi / \tau &{}\quad -1 / \tau \end{array} \right) . \end{aligned}$$

Oscillatory instabilities arise when the complex eigenvalues associated with fixed points \((\bar{u}, \bar{a})\) cross from the left to the right half plane. In all of our numerical simulations, these instabilities lead to limit cycles (Fig. 1b). We identify the parameter values at which this destabilization occurs using the conditions \(\mathrm{tr}(J) = 0\) and \(\mathrm{tr}(J)^2 < 4 \mathrm{det} (J)\). Thus, a necessary condition for the onset of oscillatory instabilities is that the equilibrium value \(\bar{u}\) satisfies

$$\begin{aligned} \alpha \gamma \bar{u} ( 1- \bar{u}) = 1 + 1 / \tau . \end{aligned}$$

Solving this for \(\bar{u}\) yields

$$\begin{aligned} \bar{u}_{\pm H} = \frac{1}{2} \left[ 1 \pm \sqrt{1 - 4 \chi } \right] , \quad \chi = \frac{1 + 1 / \tau }{\alpha \gamma }. \end{aligned}$$
(8)

Thus, these instabilities will only occur when the timescale of adaptation is sufficiently large, \(\tau > \left[ \alpha \gamma / 4 - 1\right] ^{-1}\). Plugging the formula Eq. (8) back into the fixed point equation Eq. (7) and solving for the input I, we can parameterize the boundary of instability by the equation

$$\begin{aligned} I_{\pm H} = \frac{1}{\gamma } \ln \left[ \frac{\bar{u}_{\pm H}}{1 - \bar{u}_{\pm H}} \right] - (\alpha - \phi ) \bar{u}_{\pm H}, \end{aligned}$$
(9)

along with the additional condition \(\mathrm{tr}(J)^2 < 4 \mathrm{det} (J)\) which becomes

$$\begin{aligned} \frac{4}{\tau ^2} < \frac{4 \phi }{\alpha \tau ^2} + \frac{4 \phi }{\alpha \tau }, \end{aligned}$$
(10)

which always holds as long as \(\phi > \alpha \). We partition the parameter space \((\tau , I)\) using our formula Eq. (9) in Fig. 1b. As demonstrated, there tend to be either two or zero stability transitions for a given timescale \(\tau \), and the two transition points coalesce where \(\tau = \left[ \alpha \gamma / 4 - 1\right] ^{-1}\).
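
Equations (8) and (9) can be evaluated directly; the sketch below (function name ours) returns the lower and upper instability boundaries \(I_{-H}\) and \(I_{+H}\), or nothing when \(\tau \) lies below the critical value \(\left[ \alpha \gamma /4 - 1 \right]^{-1}\).

```python
import math

def hopf_boundaries(tau, alpha=0.5, phi=1.0, gamma=15.0):
    """Instability boundaries I_{-H}, I_{+H} from Eqs. (8)-(9).
    Returns None when tau is below [alpha*gamma/4 - 1]^{-1}."""
    chi = (1.0 + 1.0 / tau) / (alpha * gamma)
    disc = 1.0 - 4.0 * chi
    if disc < 0:
        return None                      # no oscillatory instability
    bounds = []
    for sgn in (+1.0, -1.0):
        ub = 0.5 * (1.0 + sgn * math.sqrt(disc))    # Eq. (8)
        bounds.append(math.log(ub / (1.0 - ub)) / gamma
                      - (alpha - phi) * ub)          # Eq. (9)
    return tuple(sorted(bounds))

I_lo, I_hi = hopf_boundaries(100.0)
```

For the parameters of Fig. 1 with \(\tau = 100\), this gives \(I_{-H} \approx -0.03\) and \(I_{+H} \approx 0.53\), so the input \(I = 0.2\) used in Fig. 1a lies well inside the oscillatory regime.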

In the limit of slow adaptation \(\tau \gg 1\), we can separate the timescales of the activity u and adaptation a variables, finding u will equilibrate according to the equation

$$\begin{aligned} \hat{u}(t) = f(\alpha \hat{u}(t) - a(t) + I), \end{aligned}$$
(11)

and subsequently a will slowly evolve according to the equation

$$\begin{aligned} \dot{a}(t) = \left[ \phi \hat{u}(t) - a \right] / \tau . \end{aligned}$$
(12)

We always have an implicit formula for \(\hat{u}(t)\) in terms of a(t), so the dynamics will tend to slowly evolve along the direction of the a variable. This demonstrates why periodic solutions to Eq. (1) are comprised of a slow rise and decay phase of a, punctuated by fast excursions in the activity variable u. In general, it is not straightforward to analytically treat the pair of Eqs. (11) and (12), but we will show how computing solutions of the singular system becomes straightforward when we take the high gain limit \(\gamma \rightarrow \infty \).

Having established the existence of oscillatory instabilities and numerically shown periodic solutions to Eq. (1) in the case of sigmoid firing rates Eq. (2), we now explore the system in the high gain limit \(\gamma \rightarrow \infty \) whereby the firing rate function becomes a Heaviside Eq. (3). In this case, fixed points \((\bar{u},\bar{a})\) satisfy the equations

$$\begin{aligned} \bar{u} = H( ( \alpha - \phi ) \bar{u} + I ) = \left\{ \begin{array}{l@{\quad }l} 1 \ &{}: \bar{u} < I/(\phi - \alpha ), \\ 0 \ &{}: \bar{u} > I/(\phi - \alpha ), \end{array} \right. \end{aligned}$$

and \(\bar{a} = \phi \bar{u}\). Thus, assuming \(\phi > \alpha \), then \(\bar{u}=0\) when \(I<0\) and \(\bar{u} = 1\) when \(I> (\phi - \alpha )\). In both cases, the fixed points are linearly stable. When \(0 < I < (\phi - \alpha )\), there are no fixed points and we expect to find oscillatory solutions. Assuming \(\tau \gg 1\), we can exploit a separation of timescales to identify the shape and period of these limit cycles. To begin, we note that on fast timescales

$$\begin{aligned} \dot{u}(t)&= - u(t) + H(I + \alpha u(t) - a_0), \end{aligned}$$

where a(t) is assumed to change slowly enough that it is in a quasi-steady state \(a_0\) on short timescales. On longer timescales of order \(\tau \), u(t) quickly equilibrates and

$$\begin{aligned} u(t)&= H(I + \alpha u(t) - a(t)), \end{aligned}$$
(13a)
$$\begin{aligned} \tau \dot{a}(t)&= -a(t) + \phi u(t). \end{aligned}$$
(13b)

Periodic solutions to Eq. (1) must obey the condition \((u(t),a(t)) = (u(t+nT),a(t+nT))\) for \(t \in [0,T]\) and \(n \in {{\mathbb {Z}}}\), so we focus on the domain \(t \in [0,T]\). Examining Eq. (13), we can see oscillations in Eq. (1) involve switches between \(u(t) \approx 1\) and \(u(t) \approx 0\). We translate time so that \(u(t) \approx 1\) on \(t \in [0,T_1)\) and \(u(t) \approx 0\) on \(t \in [T_1,T)\). Subsequently, this means for \(t \in [0,T_1]\) the system Eq. (13) becomes \(u \equiv 1\) and \(\tau \dot{a} = -a + \phi \) so \(a(t) = \phi - (\phi - I) \mathrm{e}^{-t/\tau }\). We know \(a(0)=I\) because \(u(0^-) \equiv 0\) in Eq. (13), and the argument of H(x) must have crossed zero at \(t=0\). In a similar way, we find on \(t \in [T_1,T)\) that \(u \equiv 0\) and \(a(t) = (I+ \alpha ) \mathrm{e}^{-(t-T_1)/\tau }\). Using the conditions \(a(T_1) = I+ \alpha \) and \(a(T) = I\), we find that the rise time of the adaptation variable (or the duration of the up state) is

$$\begin{aligned} T_1&= \tau \ln \left[ \frac{\phi - I}{\phi - \alpha - I} \right] , \end{aligned}$$
Fig. 2

Analytical approximations to periodic solutions of Eq. (1) with a Heaviside firing rate function Eq. (3). a Numerical simulation (solid lines) of the periodic solution is well approximated by the analytical approximation (dashed lines) given by Eq. (15) when \(I=0.2\) and \(\tau = 100\). b The period of the oscillation T computed from numerical simulations (dots) is accurately approximated by the analytical formula (solid lines) given by Eq. (14). Other parameters are \(\alpha = 0.5\) and \(\phi = 1\)

and the decay time (or the duration of the down state) is

$$\begin{aligned} T_2&= \tau \ln \left[ \frac{I+\alpha }{I} \right] , \end{aligned}$$

and the total period of the oscillation is

$$\begin{aligned} T&= \tau \ln \left[ \frac{(I+ \alpha ) (\phi - I)}{I(\phi - \alpha - I)} \right] . \end{aligned}$$
(14)

Thus, approximate periodic solutions to Eq. (1) in the case of a Heaviside firing rate Eq. (3) take the form

$$\begin{aligned} u(t)&= \left\{ \begin{array}{l@{\quad }l} 1 \ &{} : t \in [0,T_1), \\ 0 \ &{} : t \in [T_1,T], \end{array} \right. \end{aligned}$$
(15a)
$$\begin{aligned} a(t)&= \left\{ \begin{array}{l@{\quad }l} \phi - (\phi - I) \mathrm{e}^{-t/\tau } \ &{} : t \in [0,T_1), \\ (I+ \alpha ) \mathrm{e}^{-(t-T_1)/\tau } \ &{} : t \in [T_1,T]. \end{array} \right. \end{aligned}$$
(15b)

We demonstrate the accuracy of the approximation Eq. (15) in Fig. 2a. Furthermore, we show that the relationship between the period T and model parameters is well captured by the formula Eq. (14). Notice there is a non-monotonic relationship between the period T and the input I. We can understand this further by noting that the rise time \(T_1\) of the adaptation variable a increases monotonically with input

$$\begin{aligned} \frac{\mathrm{d}T_1}{\mathrm{d}I} = \frac{\tau \alpha }{(\phi - I)(\phi - \alpha - I )} > 0, \end{aligned}$$

when \(0 < I < (\phi - \alpha )\). Furthermore, the decay time \(T_2\) of the adaptation variable a decreases monotonically with input

$$\begin{aligned} \frac{\mathrm{d}T_2}{\mathrm{d}I} = - \frac{\tau \alpha }{I(I+ \alpha )} < 0, \end{aligned}$$

when \(0 < I < (\phi - \alpha )\). Thus, as \(I \rightarrow 0^+\), the slow oscillation’s period T is dominated by very long decay times \(T_2 \gg 1\) and as \(I \rightarrow (\phi - \alpha )^-\), it is dominated by very long rise times \(T_1 \gg 1\). We can identify the minimal period as a function of the input I by finding the critical point of T(I). To do so, we differentiate and simplify

$$\begin{aligned} \frac{\mathrm{d}T}{\mathrm{d}I} = \frac{\tau \alpha \phi (2 I - (\phi - \alpha ))}{I(I+\alpha )(\phi - I)(\phi - \alpha - I)}, \end{aligned}$$

so the critical point of T(I) is at \(I_{crit} = (\phi - \alpha )/2\), which corresponds to the minimal value of the period \(T_{min} = T(I_{crit}) = 2 \tau \ln \left[ (\phi + \alpha )/(\phi - \alpha ) \right] \), as pictured in Fig. 2b.
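
The singular-limit formulas for \(T_1\), \(T_2\), and T can be packaged as follows (a sketch; names ours). Note that at \(I_{crit}\) the up and down states have equal duration, since \(T_1 = T_2 = \tau \ln \left[ (\phi + \alpha )/(\phi - \alpha ) \right]\) there.

```python
import math

def up_down_durations(I, alpha=0.5, phi=1.0, tau=100.0):
    """Up-state duration T1, down-state duration T2, and period T = T1 + T2
    of the singular (Heaviside, tau >> 1) limit cycle, Eq. (14).
    Valid for 0 < I < phi - alpha."""
    T1 = tau * math.log((phi - I) / (phi - alpha - I))
    T2 = tau * math.log((I + alpha) / I)
    return T1, T2, T1 + T2

alpha, phi, tau = 0.5, 1.0, 100.0
I_crit = (phi - alpha) / 2.0                           # input minimizing T
T_min = 2.0 * tau * math.log((phi + alpha) / (phi - alpha))
```

For the standard parameters (\(\alpha = 0.5\), \(\phi = 1\), \(\tau = 100\)), the minimal period is \(200 \ln 3 \approx 219.7\), while \(I = 0.2\) yields a longer, down-state-dominated period near 223.4.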

4 Phase response curves

We can further understand the dynamics of the slow oscillations in Eq. (1) by computing phase response curves for both the case of a sigmoidal firing rate Eq. (2) and the Heaviside firing rate Eq. (3). As we will show, perturbations of the activity variable u have decreasing impact as the timescale of adaptation \(\tau \) and the gain \(\gamma \) of the firing rate are increased. Perturbations of the adaptation variable a typically lead to larger phase shifts than perturbations to the neural activity variable u. This is due to the fact that the rise and decay of the slow adaptation variable a are primarily what determines the phase of the oscillation.

Fig. 3

a, b, c Periodic solution (ua) and d, e, f phase sensitivity function \((Z_u,Z_a)\) of Eq. (1) plotted as a function of phase \(\theta = t/T\) for a sigmoidal firing rate function Eq. (2). a, d For shorter adaptation timescale \(\tau = 10\) and input \(I=0.2\), the activity variable u has a more rounded trajectory, so perturbations to activity influence the oscillation phase more [note size of lobes on \(Z_u\) in (d)]. b, e As the adaptation timescale is increased to \(\tau =100\), with \(I=0.2\), the influence of perturbations to the neural activity variable decrease [compare lobes of \(Z_u\) to those in (d)]. Perturbations of the adaptation variable influence the phase more strongly as shown by the change in the relative amplitude of \(Z_a\). c,f Increasing the input \(I=0.4\), with \(\tau = 10\), increases the relative duration of the rise time of a. As a result, there is a wider region where perturbations to a advance the phase. Other parameters are \(\alpha = 0.5\), \(\phi =1\), and \(\gamma = 15\)

To begin, we derive a general formula that linearly approximates the influence of small perturbations on limit cycle solutions \((u_0(t), a_0(t))\) to Eq. (1). Essentially, we utilize the fact that solutions \(\mathbf{Z}(t)\) to the adjoint equation associated with linearization about the limit cycle solution \((u_0(t), a_0(t))\) provide a complete description of how infinitesimal perturbations of the limit cycle impact its phase (Ermentrout 1996; Brown et al. 2004). To start, we note that

$$\begin{aligned} {\mathcal L} \left( \begin{array}{c} u_1 \\ a_1 \end{array} \right) = \left( \begin{array}{l} \dot{u_1} +u_1 - \alpha f'(\alpha u_0 - a_0 + I) u_1 + f'(\alpha u_0 - a_0 + I) a_1 \\ \dot{a_1} - \phi u_1/\tau + a_1/\tau \end{array} \right) = \left( \begin{array}{c} 0 \\ 0 \end{array} \right) , \end{aligned}$$

is the linearization of Eq. (1) about the limit cycle \((u_0(t), a_0(t))\). Defining the inner product on T-periodic functions in \({{\mathbb {R}}}^2\) as \(\langle F(t), G(t) \rangle = \int _0^T F(t) \cdot G(t) \mathrm{d}t\), we can find the adjoint operator \({\mathcal L}^*\) by noting it satisfies \(\langle F, {\mathcal L}G \rangle = \langle {\mathcal L}^*F, G \rangle \) for all \(L^2\) integrable vector functions FG. We can then compute

$$\begin{aligned} {\mathcal L}^* \left( \begin{array}{c} v \\ b \end{array} \right) = \left( \begin{array}{l} - \dot{v} + v - \alpha f'(\alpha u_0 - a_0 + I) v - \phi b / \tau \\ -\dot{b} + f'(\alpha u_0 - a_0 + I) v +b/ \tau \end{array} \right) . \end{aligned}$$
(16)

It can be shown that the null space of \({\mathcal L}^*\) describes the response of the phase of the limit cycle \((u_0(t),a_0(t))\) to infinitesimal perturbations (Brown et al. 2004). Note that if \((u_0(t),a_0(t))\) is a stable limit cycle, then the nullspace of \({\mathcal L}\) is spanned by \((u_0'(t),a_0'(t))\). Furthermore, appropriate normalization requires that \(\mathbf{Z}(t) \cdot (u_0'(t),a_0'(t)) = 1\) along with \({\mathcal L}^* \mathbf{Z}= 0\) (Ermentrout 1996). To numerically compute \(\mathbf{Z}(t) = (Z_u(t), Z_a(t))\), we thus integrate the system

$$\begin{aligned} \dot{Z}_u(t)&= Z_u(t) - \alpha f'(\alpha u_0(t) - a_0(t) + I) Z_u(t) - \phi Z_a(t) / \tau , \end{aligned}$$
(17a)
$$\begin{aligned} \dot{Z}_a(t)&= f'(\alpha u_0(t) - a_0(t) + I) Z_u(t) + Z_a(t)/ \tau , \end{aligned}$$
(17b)

backward in time, taking the long time limit to find \((Z_u(t), Z_a(t))\) on \(t \in [0,T]\), and rescaling so that \(\langle (Z_u(t), Z_a(t)),(u_0'(t),a_0'(t)) \rangle = 1\). We demonstrate this result in Fig. 3, showing the relationship between the shape and relative amplitude of the phase sensitivity functions \((Z_u,Z_a)\) and the model parameters. Notably, perturbations of the activity variable u become less influential as the timescale of adaptation \(\tau \) is increased (see \(Z_u\)). Furthermore, there is a sharper transition between the phase advance and phase delay regions of the adaptation phase response (\(Z_a\)) for larger timescales \(\tau \).

In addition to a general formula for the phase sensitivity functions \((Z_u(t),Z_a(t))\), we can derive an amplitude-dependent formula for the response of limit cycle solutions \((u_0(t),a_0(t))\) of Eq. (1) with a Heaviside firing rate Eq. (3), assuming \(\tau \gg 1\). In this case, we utilize the formula for the period Eq. (14) and limit cycle Eq. (15), derived using a separation of timescales assumption. Then, we can compute the change to the variables (u, a) as a result of a perturbation \((\delta _u, \delta _a)\), which we denote \((u_0(t),a_0(t)) \mathop {\longmapsto }\limits ^{(\delta _u, \delta _a)} (\tilde{u}_0(t), \tilde{a}_0(t))\). We are primarily interested in how the relative time in the limit cycle is altered by a perturbation \(\delta _u\): that is, how much closer or farther the system is to the end of the period T after being perturbed. We can readily determine this by first inverting the formula for \((u_0(t),a_0(t))\), given by Eq. (15), to see how the pair \((u_0,a_0)\) determines the time \(t_0\) along the limit cycle

$$\begin{aligned} t_0 (u_0,a_0) = \left\{ \begin{array}{l} \tau \ln \left[ (\phi - I)/(\phi - a_0) \right] : u_0 = 1, \\ \tau \ln \left[ (\phi - I)(I+\alpha )/a_0/(\phi - \alpha - I) \right] : u_0 = 0. \end{array} \right. \end{aligned}$$
(18)

Using this formula, we can now map the value \((\tilde{u}_0,\tilde{a}_0)\) to an associated updated relative time \(t_0\) along the oscillation.

Here, we decompose the impact of perturbations to the u and a variables. We begin by studying the impact of perturbations \(\delta _u\) to the activity variable u. We can directly compute

$$\begin{aligned} \tilde{u}_0(t) = H(I + \alpha \left[ u_0(t) + \delta _u \right] - a_0(t)). \end{aligned}$$

Thus, the singular system Eq. (13) will be unaffected by such perturbations if \(\mathrm{sgn}(I + \alpha [u + \delta _u] - a ) = \mathrm{sgn}(I + \alpha u - a )\). This is related to the flatness of the susceptibility function \(Z_u\) over much of the time domain in Fig. 3d–f. However, if \(\mathrm{sgn}(I + \alpha [u + \delta _u] - a ) \ne \mathrm{sgn}(I + \alpha u - a )\), then \(\tilde{u}_0(t) = 1-u_0(t)\), as detailed in the following piecewise smooth map:

$$\begin{aligned} \begin{array}{l@{\quad }l} u_0(t) = 0 \mapsto \tilde{u}_0(t) = 1 \ &{} : \delta _u > - (I - a_0(t))/ \alpha > 0 , \\ u_0(t) = 0 \mapsto \tilde{u}_0(t) = 0 \ &{} : \delta _u < - (I - a_0(t))/ \alpha , \\ u_0(t) = 1 \mapsto \tilde{u}_0(t) = 0 \ &{} : \delta _u < - ( I + \alpha - a_0(t))/ \alpha < 0, \\ u_0(t) = 1 \mapsto \tilde{u}_0(t) = 1 \ &{} : \delta _u > - ( I + \alpha - a_0(t))/ \alpha , \\ \end{array} \end{aligned}$$

where \((u_0(t),a_0(t))\) are defined by Eq. (15). The formula Eq. (18) can then be utilized to compute the updated relative time \(\tilde{t}_0 := t_0(\tilde{u}_0,\tilde{a}_0)\), finding

$$\begin{aligned} \tilde{t}_0 = \left\{ \begin{array}{l@{\quad }l} \tau \ln \left[ (\phi - I)(I+ \alpha )/a_0/(\phi - \alpha - I) \right] \ &{} : -\delta _u > ( I + \alpha - a_0)/ \alpha > 0, \ u_0 = 1 \\ T + \tau \ln \left[ ( \phi - I)/(\phi - a_0) \right] \ &{} : \delta _u > (a_0 - I)/ \alpha > 0 , \ u_0 = 0 \\ t_0(u_0,a_0) \ &{} : \mathrm{otherwise}, \end{array} \right. \end{aligned}$$
(19)

where \(a_0 = \phi - (\phi - I)\mathrm{e}^{-t_0/\tau }\) if \(u_0=1\) and \(a_0 = (I+\alpha )\mathrm{e}^{-(t_0-T_1)/\tau }\) if \(u_0 = 0\). We refer to the function \(\tilde{t}_0/T\), where \(\tilde{t}_0\) is defined by Eq. (19), as the phase transition curve for u perturbations. Thus, the function \(G_u ( \theta , \delta _u) = (\tilde{t}_0 -t_0)/T\), where \(\theta = t_0/T\), is the phase response curve: phase advances correspond to positive values and phase delays to negative values. We plot the function \(G_u(\theta , \delta _u)\) in Fig. 4a for different values of \(\delta _u\), demonstrating that the dependence on the perturbation amplitude is nontrivial: increasing the amplitude does not simply rescale the curve but expands the region of non-zero phase shift. Due to the singular nature of the fast–slow limit cycle Eq. (15), the size of the phase perturbation has a piecewise constant dependence on the amplitude of the u perturbation. Note that this formulation allows us to quantify phase shifts that would not be captured by a perturbative theory for phase sensitivity functions, as computed for the general system in Eq. (17).
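The inversion formula Eq. (18) and the resulting map can be checked directly in code. The sketch below (our own illustration under the singular-limit assumptions, with the illustrative parameter values of the figures) reconstructs the Heaviside limit cycle from the expressions for \(a_0\) above, verifies that \(t_0(u_0(t),a_0(t)) = t\), and evaluates the finite-amplitude phase response to a u perturbation directly from the switching condition \(H(I + \alpha [u_0 + \delta _u] - a_0)\) rather than from the tabulated cases:

```python
import math

# illustrative parameters: alpha, phi, I, tau as in the text's figures
alpha, phi, Iin, tau = 0.5, 1.0, 0.2, 50.0
T1 = tau * math.log((phi - Iin) / (phi - alpha - Iin))   # up state duration
T2 = tau * math.log((Iin + alpha) / Iin)                 # down state duration
T = T1 + T2                                              # period, Eq. (14)

def cycle(t):
    """(u0, a0) on the singular limit cycle, t in [0, T)."""
    if t < T1:
        return 1, phi - (phi - Iin) * math.exp(-t / tau)
    return 0, (Iin + alpha) * math.exp(-(t - T1) / tau)

def t0_of(u0, a0):
    """Invert the limit cycle for the relative time, Eq. (18)."""
    if u0 == 1:
        return tau * math.log((phi - Iin) / (phi - a0))
    return tau * math.log((phi - Iin) * (Iin + alpha)
                          / (a0 * (phi - alpha - Iin)))

def G_u(theta, du):
    """Finite-amplitude phase response to a u perturbation, cf. Eq. (19)."""
    t = theta * T
    u0, a0 = cycle(t)
    u_new = 1 if Iin + alpha * (u0 + du) - a0 > 0 else 0
    if u_new == u0:
        return 0.0            # no switch: the phase is unchanged
    t_new = t0_of(u_new, a0)
    if u0 == 0:               # down -> up switch: advance past the cycle end
        t_new += T
    return (t_new - t) / T
```

Small perturbations that do not flip the Heaviside argument return exactly zero phase shift, reflecting the flatness of \(Z_u\) noted above.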

Fig. 4

Phase response curves of the fast–slow timescale separated system \(\tau \gg 1\). a, b Amplitude \(\delta _u\)- and \(\delta _a\)-dependent phase response curves \(G_u(\theta , \delta _u)\) and \(G_a(\theta , \delta _a)\) characterizing phase advances/delays resulting from perturbations of the neural activity u and adaptation a. We compare analytical formulae (solid lines) to numerically computed PRCs (dashed lines). c Phase response curve associated with perturbations of the adaptation variable a in the small amplitude \(0 < |\delta _a| \ll 1\) limit. We compare the large amplitude formula (solid line) determined by Eq. (24), the linear approximation (dotted line) given by Eq. (25), and numerical computations (dashed line)

For perturbations \(\delta _a\) of the adaptation variable a, there is a more graded dependence of the phase advance/delay amplitude on the perturbation amplitude \(\delta _a\). We expect this, as it was a property we observed in \(Z_a\) as we varied parameters in Fig. 3. We can partition the limit cycle \((u_0(t),a_0(t))\) into four different regions: two advance/delay regions of exponential saturation and two early threshold crossings. First, note if \(u_0(t) = 1\) and \(a_0(t) + \delta _a < I + \alpha \), then

$$\begin{aligned} \tilde{u}_0(t) = 1, \quad \tilde{a}_0(t) = \phi - (\phi - I)\mathrm{e}^{-t/\tau } + \delta _a, \end{aligned}$$
(20)

so \(\tilde{t}_0 = T_1 - t_w\) with \(t_w = \tau \ln \left[ (\phi - a_0 - \delta _a)/(\phi - I - \alpha ) \right] \), but if \(a_0(t) + \delta _a > I + \alpha \), then

$$\begin{aligned} \tilde{u}_0(t) = 0, \quad \tilde{a}_0(t) = \phi - (\phi - I)\mathrm{e}^{-t/\tau } + \delta _a. \end{aligned}$$
(21)

Determining the relative time of the perturbed variables \((\tilde{u}_0(t), \tilde{a}_0(t))\) in Eq. (20) is straightforward using the mapping Eq. (18). However, to determine the relative time described by Eq. (21), we compute the time, after the perturbation, until \(\tilde{a}_0 (t) = I+\alpha \), which will be \(t_w = \tau \ln \left[ (a_0 + \delta _a)/(I+\alpha ) \right] \), so \(\tilde{t}_0 = T_1 - t_w\). Second, note if \(u_0(t) = 0\) and \(a_0(t) + \delta _a > I \), then

$$\begin{aligned} \tilde{u}_0(t) = 0,\quad \tilde{a}_0(t) = (I+ \alpha ) \mathrm{e}^{-(t-T_1)/\tau } + \delta _a, \end{aligned}$$
(22)

so \(\tilde{t}_0 = T - t_w\) with \(t_w = \tau \ln \left[ (a_0 + \delta _a)/I \right] \), but if \(a_0(t) + \delta _a < I\), so that it is necessary that \(\delta _a<0\), then

$$\begin{aligned} \tilde{u}_0(t) = 1, \quad \tilde{a}_0(t) = (I+\alpha ) \mathrm{e}^{-(t-T_1)/\tau } + \delta _a. \end{aligned}$$
(23)

In the case of Eq. (23), we note that \(t_w = \tau \ln \left[ (\phi - a_0 - \delta _a)/(\phi - I) \right] \), so \(\tilde{t}_0 = T - t_w\). Combining our results, we find we can map the relative time to the perturbed relative time as

$$\begin{aligned} \tilde{t}_0 = \left\{ \begin{array}{l@{\quad }l} T_1 - \tau \ln \left[ (\phi - a_0 - \delta _a)/(\phi - I - \alpha ) \right] \ &{} : \delta _a < I + \alpha - a_0, \ u_0 = 1, \\ T_1 - \tau \ln \left[ (a_0 + \delta _a)/(I + \alpha ) \right] \ &{} : \delta _a > I + \alpha - a_0, \ u_0 = 1, \\ T - \tau \ln \left[ (a_0 + \delta _a)/I \right] \ &{} : \delta _a > I - a_0, \ u_0 = 0, \\ T - \tau \ln \left[ (\phi - a_0 - \delta _a )/ (\phi - I) \right] \ &{} : \delta _a < I - a_0, \ u_0 = 0, \end{array} \right. \end{aligned}$$
(24)

where \(a_0 = \phi - ( \phi - I) \mathrm{e}^{-t_0/\tau }\) if \(u_0 = 1\) and \(a_0 = (I+ \alpha ) \mathrm{e}^{- (t_0 - T_1)/\tau }\) if \(u_0 = 0\). Again, we have a phase transition curve given by the function \(\tilde{t}_0/T\) and phase response curve given by \(G_a(\theta , \delta _a ) = (\tilde{t}_0 - t_0)/T\), where \(\theta = t_0/T\). As opposed to the case of u perturbations, the phase perturbation here depends smoothly on the amplitude of the a perturbation \(\delta _a\).

Furthermore, we can obtain a perturbative description of the phase response curve for the singular system Eq. (13) in two ways: (a) Taylor expanding the amplitude-dependent phase response curve expressions defined by Eqs. (19) and (24) and truncating to linear order or (b) solving the adjoint equations Eq. (17) in the case of a Heaviside firing rate Eq. (3) and a long adaptation timescale \(\tau \gg 1\). We begin with the first derivation, which simply requires differentiating Eq. (19) to demonstrate that the infinitesimal phase response curve (iPRC) associated with perturbations of the u variable is zero almost everywhere. However, differentiating Eq. (24) reveals that the iPRC associated with perturbations of the adaptation variable a is given by the piecewise smooth function

$$\begin{aligned} Z_a (t) = \left\{ \begin{array}{l@{\quad }l} \displaystyle \frac{\tau }{T(\phi - I)} \mathrm{e}^{t/\tau } \ &{} : t \in (0, T_1), \\ \displaystyle - \frac{\tau }{T(I+\alpha )} \mathrm{e}^{(t - T_1)/\tau } \ &{} : t \in (T_1, T). \end{array} \right. \end{aligned}$$
(25)
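As a numerical cross-check (our own sketch, using the illustrative parameter values from the figures), the map Eq. (24) can be differentiated by a central finite difference and compared against the closed-form iPRC Eq. (25):

```python
import math

# illustrative parameters
alpha, phi, Iin, tau = 0.5, 1.0, 0.2, 50.0
T1 = tau * math.log((phi - Iin) / (phi - alpha - Iin))
T2 = tau * math.log((Iin + alpha) / Iin)
T = T1 + T2

def t_tilde(t0, da):
    """Perturbed relative time under an adaptation kick, Eq. (24)."""
    if t0 < T1:                                        # up state
        a0 = phi - (phi - Iin) * math.exp(-t0 / tau)
        if da < Iin + alpha - a0:
            return T1 - tau * math.log((phi - a0 - da) / (phi - Iin - alpha))
        return T1 - tau * math.log((a0 + da) / (Iin + alpha))
    a0 = (Iin + alpha) * math.exp(-(t0 - T1) / tau)    # down state
    if da > Iin - a0:
        return T - tau * math.log((a0 + da) / Iin)
    return T - tau * math.log((phi - a0 - da) / (phi - Iin))

def G_a(theta, da):
    """Phase response curve for adaptation perturbations."""
    return (t_tilde(theta * T, da) - theta * T) / T

def Z_a(t):
    """Closed-form iPRC, Eq. (25)."""
    if t < T1:
        return tau * math.exp(t / tau) / (T * (phi - Iin))
    return -tau * math.exp((t - T1) / tau) / (T * (Iin + alpha))

def fd(theta, eps=1e-6):
    """Central finite difference of G_a in the perturbation amplitude."""
    return (G_a(theta, eps) - G_a(theta, -eps)) / (2 * eps)
```

The finite difference of Eq. (24) and the formula Eq. (25) agree to high precision, and the expected signs appear: positive adaptation kicks advance the phase in the up state and delay it in the down state.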

Note, we could derive the same result by solving the adjoint equations Eq. (17) in the case of Heaviside firing rate Eq. (3), so that

$$\begin{aligned} \dot{Z}_u(t)&= - Z_u(t) + \alpha \delta ( \alpha u_0(t) - a_0(t) + I) Z_u(t) + \phi Z_a(t) / \tau , \end{aligned}$$
(26a)
$$\begin{aligned} \dot{Z}_a(t)&= - \delta ( \alpha u_0(t) - a_0 (t) + I) Z_u(t) - Z_a (t) / \tau . \end{aligned}$$
(26b)

We have reversed time \(t \mapsto - t\), so we can simply solve the system forward. Furthermore, we can use the identity

$$\begin{aligned} \delta ( \alpha u_0(t) - a_0(t) + I) = \frac{\delta (t)}{u'(0) - a'(0)} + \frac{\delta (t-T_1)}{u'(T_1) - a'(T_1)}. \end{aligned}$$
(27)

Utilizing the separation of timescales, \(\tau \gg 1\), we find that almost everywhere (except where \(t=0, T_1, T\)), we have that Eq. (26) becomes the system

$$\begin{aligned} \dot{Z}_u(t) = -Z_u(t), \quad \tau \dot{Z}_a(t) = -Z_a(t). \end{aligned}$$
(28)

As before, \(Z_u(t)\) will be zero almost everywhere, whereas, upon mapping back to the original time variable, \(Z_a(t) = A(t) \mathrm{e}^{t/\tau }\), where A(t) is a piecewise constant function taking two different values on \(t \in (0,T_1)\) and \(t \in (T_1, T)\), determined by considering the \(\delta \) distribution terms. This indicates how one would derive the formula Eq. (25) using the adjoint equations Eq. (26).

Note, in previous work (Jayasuriya and Kilpatrick 2012), we explored the entrainment of slowly adapting populations to external forcing, comprised of smooth and non-smooth inputs to the system Eq. (1). In the next section, we explore the impact of external noise forcing on the slow oscillations of Eq. (1), subsequently demonstrating that noise can be utilized to entrain the up and down states of two distinct networks.

5 Impact of noise on the timing of up/down states

We now study the effects of noise on the duration of up and down states in the single population model Eq. (1). Switches between high and low firing rate states can occur at irregular intervals (Sanchez-Vives and McCormick 2000), suggesting that internal or external sources of noise shape state changes. This section focuses on how noise can reshape the mean duration of up and down state residence times. Motivated by our findings in the previous sections, we focus on noise applied to the adaptation variable: as we have shown, very weak perturbations to the neural activity variable have a negligible effect on the phase of oscillations. We conceive of the noise as arising from ion channel fluctuations (White et al. 2000), specifically those related to the slow adaptive potassium currents that hyperpolarize individual cells in the network (Stocker et al. 1999). Analytic calculations are presented for the piecewise smooth system with Heaviside firing rate Eq. (3), for which accurate approximations of the mean up and down state durations can be computed.

Fig. 5

Noise alters the duration of up and down states. a Numerical simulation of the stochastically driven population model Eq. (4) demonstrates that up and down state durations (e.g., \(T_1\) and \(T_2\)) are variable when driven by adaptation noise \(\xi _a\) with \(\langle \xi _a^2 \rangle = \sigma _a^2 t\), \(\sigma _a = 0.01\). Switches are determined by the threshold crossings of the adaptation variable \(a(t)=I\) and \(a(t) = I+ \alpha \). b Up/down states become more variable when the noise amplitude is \(\sigma _a = 0.02\). c Mean durations of the up and down states, \(\langle T_1 \rangle \) and \(\langle T_2 \rangle \), decrease as a function of noise amplitude \(\sigma _a\). d Impact of noise \(\sigma _a\) on the balance of up to down state durations \(\bar{T}_1/\bar{T}_2\) as input I is varied. Firing rate is given by the Heaviside function Eq. (3). Other parameters are \(\alpha = 0.5\), \(\phi = 1\), and \(\tau = 50\)

Our approach is to derive expressions for the mean first passage times of both the up and down state (\(\bar{T}_1\) and \(\bar{T}_2\)) of the stochastic population model Eq. (4). Focusing on adaptation noise allows us to utilize the separation of fast–slow timescales, and recast the pair of equations as a stochastic-hybrid system

$$\begin{aligned} u(t)&= H( \alpha u(t) + I - a(t)), \\ \mathrm{d}{a}(t)&= \left[ -a(t) + \phi u(t) \right] \mathrm{d}t/ \tau + \mathrm{d}\xi _a(t), \end{aligned}$$

where \(\xi _a\) is white noise with mean \(\langle \xi _a \rangle = 0\) and variance \(\langle \xi _a^2 \rangle = \sigma _a^2 t\). To begin, assume the system has just switched to the up state, so the initial conditions are \(u(0) = 1\) and \(a(0) = I\). Determining the amount of time until a switch to the down state requires we calculate the time \(T_1\) until the threshold crossing \(a(T_1) = I + \alpha \) where a(t) is determined by the SDE

$$\begin{aligned} \mathrm{d}a(t) = \left[ - a(t) + \phi \right] \mathrm{d}t/ \tau + \mathrm{d}\xi _a, \end{aligned}$$

which is the well-known threshold crossing problem for an Ornstein-Uhlenbeck process (Gardiner 2004). The mean \(\bar{T}_1\) of the passage time distribution is thus given by defining the potential \(V(a)=\frac{a^2}{2 \tau } - \frac{\phi a}{\tau }\) and computing the integral

$$\begin{aligned} \bar{T}_1&= \frac{1}{\sigma _a^2} \int _{I}^{I + \alpha } \int _{- \infty }^{x} \mathrm{e}^{\left[ V(x) - V(y) \right] / \sigma _a^2} \mathrm{d}y \mathrm{d}x. \end{aligned}$$

Next, note that the duration of the down state \(T_2\) will be the amount of time until the threshold crossing \(a(T_2) = I\) given \(u(0) = 0\) and \(a(0) = I+ \alpha \), where a(t) obeys the SDE

$$\begin{aligned} \mathrm{d}a(t) = \left[ -a(t) \right] \mathrm{d}t/ \tau + \mathrm{d}\xi _a(t). \end{aligned}$$

Again, defining the potential \(V(a) = \frac{a^2}{2 \tau }\), we can compute

$$\begin{aligned} \bar{T}_2&= \frac{1}{\sigma _a^2} \int _{-I-\alpha }^{-I} \int _{- \infty }^{x} \mathrm{e}^{\left[ V(x) - V(y) \right] / \sigma _a^2} \mathrm{d}y \mathrm{d}x. \end{aligned}$$

In Fig. 5c, we compare the mean durations predicted by these first passage time calculations with residence times computed numerically. Notice that increasing the noise amplitude tends to shorten both up and down state durations on average, due to early threshold crossings of the variable a(t).
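The passage time calculation above can be checked directly. The sketch below (our own illustration) evaluates the double integral for \(\bar{T}_1\) by quadrature and compares it with an Euler–Maruyama simulation of the up state; the truncation of the inner integral, the grid sizes, and the trial count are arbitrary choices, and we take \(\sigma _a = 0.01\) as in Fig. 5a.

```python
import numpy as np

alpha, phi, Iin, tau, sig = 0.5, 1.0, 0.2, 50.0, 0.01

def V(a):
    """Potential for the up state, V(a) = a^2/(2 tau) - phi a / tau."""
    return a * a / (2 * tau) - phi * a / tau

def trap(vals, h):
    """Composite trapezoid rule on uniformly spaced samples."""
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

# theory: double integral for T1-bar; the inner integral is truncated
# 0.4 below x, where the integrand is negligible for this noise level
xs = np.linspace(Iin, Iin + alpha, 1001)
inner = np.empty_like(xs)
for i, xv in enumerate(xs):
    ys = np.linspace(xv - 0.4, xv, 2001)
    inner[i] = trap(np.exp((V(xv) - V(ys)) / sig**2), ys[1] - ys[0])
T1_theory = trap(inner, xs[1] - xs[0]) / sig**2

# simulation: 400 up-state realizations in parallel, started at a = I,
# absorbed at the threshold a = I + alpha
rng = np.random.default_rng(1)
dt, ntr = 0.005, 400
a = np.full(ntr, Iin)
thit = np.zeros(ntr)
alive = np.ones(ntr, dtype=bool)
t = 0.0
while alive.any() and t < 400.0:
    na = alive.sum()
    a[alive] += (dt * (phi - a[alive]) / tau
                 + sig * np.sqrt(dt) * rng.standard_normal(na))
    t += dt
    hit = alive & (a >= Iin + alpha)
    thit[hit] = t
    alive &= ~hit
T1_sim = thit[thit > 0].mean()
```

For this weak noise, both numbers sit close to the deterministic duration \(T_1 = \tau \ln [(\phi - I)/(\phi - \alpha - I)] \approx 49\).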

Furthermore, we can examine how noise reshapes the relative balance of up versus down state durations. Specifically, we will explore how the relative fraction of time the up state persists, \(\bar{T}_1/(\bar{T}_1 + \bar{T}_2)\), changes with noise intensity \(\sigma _a\) and input I. First, notice that, in the absence of noise, the ratio

$$\begin{aligned} \frac{T_1}{T_1+T_2} = \frac{\ln \left[ (\phi - I)/(\phi - \alpha - I) \right] }{\ln \left[ (I+ \alpha )(\phi - I)/I(\phi - \alpha - I) \right] }. \end{aligned}$$
(29)

The up and down states have equal duration when \(T_1/(T_1 + T_2) = 1/2\), or when the input \(I = (\phi - \alpha )/2\), as shown in Fig. 5d. Interestingly, this is the precise input value at which the period attains its minimum, as we demonstrated in Sect. 3. Along with our plot of Eq. (29) in the noise-free case (\(\sigma _a = 0\)), we also study the impact of noise on this measure of up–down state balance. Noise leads to up and down state durations becoming more similar, so the corresponding ratio of the means \(\bar{T}_1\) and \(\bar{T}_2\) flattens as a function of the input I. This is due to the fact that long durations, wherein the variable a(t) occupies the tail of the exponentially saturating functions \(A_0 + A_1 \mathrm{e}^{-t/\tau }\), are shortened by early threshold crossings due to the external noise forcing. This is consistent with the experimental findings of Fröhlich and McCormick (2010), which showed that applied electric fields decrease the period of the slow oscillation. The speeding up of the oscillation is mostly due to less time on average being spent in the down state. In our model, the parameter regime where one would expect this behavior is one where the background input I is low to begin with, as shown in Fig. 2b.
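The balance point is easy to verify numerically: with \(\phi = 1\) and \(\alpha = 0.5\), the ratio Eq. (29) equals 1/2 exactly at \(I = (\phi - \alpha )/2 = 0.25\), which is also where the noise-free period is smallest (a quick sketch; \(\tau \) cancels in the ratio and is therefore omitted):

```python
import math

alpha, phi = 0.5, 1.0

def T1(I):
    """Up state duration (up to the factor tau), cf. Eq. (14)."""
    return math.log((phi - I) / (phi - alpha - I))

def T2(I):
    """Down state duration (up to the factor tau)."""
    return math.log((I + alpha) / I)

def up_fraction(I):
    """Noise-free fraction of the period spent in the up state, Eq. (29)."""
    return T1(I) / (T1(I) + T2(I))
```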

Fig. 6

Noise reshapes the distributions of (a) up and (b) down state durations. As the level of noise \(\sigma _a\) is increased, the intrinsic period of the deterministic oscillation is masked by the predominance of durations punctuated by noise-driven transitions. This results in an exponentially decaying distribution, rather than a peaked distribution, for large noise levels \(\sigma _a\). Firing rate is given by the Heaviside function Eq. (3). Other parameters are \(I=0.2\), \(\alpha = 0.5\), \(\phi = 1\), and \(\tau = 50\)

We can further observe the impact of noise on the up and down state durations by studying their distributions in Fig. 6. Noise perturbs the limit cycle present in the deterministic system (\(\sigma _a = 0\)), so that there is a wide range of durations for the up and down state. The broadness of the distribution increases as the level of noise \(\sigma _a\) is increased. Furthermore, the peak of the distribution shifts to shorter dominance times for larger noise levels. Similar observations have been made in the slowly changing energy landscape model of perceptual rivalry by Moreno-Bote et al. (2007). When noise dominates transition dynamics, the lifetimes of up and down states are distributed exponentially (Gardiner 2004). However, when adaptation plays a role in reshaping the energy landscape explored by the stochastic system, the barrier the system state must surmount shrinks over time. This leads to a resonance in the state durations represented by the peak of the distribution in Fig. 6 for smaller noise values \(\sigma _a\). The prominence of a specific range of state durations is well supported by many previous experimental papers exploring the statistics of up and down states (Isomura et al. 2006; Sanchez-Vives and McCormick 2000; Steriade et al. 1993; Cunningham et al. 2006).

6 Synchronizing two uncoupled populations

Now, we demonstrate that common noise can synchronize the up and down states of two distinct and uncoupled populations. We begin with the case of identical noise and then, in Sect. 7, relax this assumption to show that some level of coherence is still possible when each population also has its own independent source of noise. This is motivated by the observation that the SDE derived in the large system-size limit of a neural master equation tends to possess intrinsic noise in each population, in addition to an extrinsic common noise term (Bressloff and Lai 2011). As we will show, intrinsic noise tends to disrupt the phase synchronization due to extrinsic noise.

To begin, we recast the stochastic system Eq. (5), describing a pair of adapting noise-driven neural populations, as a pair of phase equations:

$$\begin{aligned} \mathrm{d}\theta _1 (t)&= \omega \mathrm{d}t + \mathbf{Z}( \theta _1(t) ) \cdot \mathrm{d}{\varvec{\xi }}(t), \end{aligned}$$
(30a)
$$\begin{aligned} \mathrm{d}\theta _2 (t)&= \omega \mathrm{d}t + \mathbf{Z}( \theta _2(t)) \cdot \mathrm{d}{\varvec{\xi }}(t), \end{aligned}$$
(30b)

where \(\theta _1\) and \(\theta _2\) are the phases of the first and second neural populations, respectively. Note that the phase equations Eq. (30) are in Stratonovich form since the original noise term in Eq. (5) was converted to Stratonovich form in anticipation of the standard rules of calculus needed for the phase reduction (Ermentrout 2009). As we demonstrate in Fig. 7a, this introduction of common noise tends to drive the oscillation phases \(\theta _1(t)\) and \(\theta _2(t)\) toward one another. Note that since the governing equations of both populations are the same, the phase sensitivity function \(\mathbf{Z}(\theta )\) will be the same for both. Furthermore, the synchronized solution \(\theta _1(t) = \theta _2(t)\) is absorbing: once the phases synchronize, they remain so. We can analytically calculate the Lyapunov exponent \(\lambda \) of the synchronized state to determine its stability. In particular, we are interested in how this stability depends on the parameters that shape the dynamics of adaptation.
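A direct simulation of Eq. (30) illustrates the absorbing nature of the synchronized state. In the sketch below (our illustration, not the paper's code), a toy sensitivity function \(Z(\theta ) = \sin (2\pi \theta )\) acts through a single common noise component, and the logarithm of the phase difference decreases roughly linearly in time:

```python
import numpy as np

# toy phase oscillators, Eq. (30): identical frequency and sensitivity,
# driven by the SAME white noise realization (common noise)
rng = np.random.default_rng(0)
omega, sig, dt, Tend = 1.0, 0.3, 1e-3, 5.0
nsteps, ntrial = int(Tend / dt), 200

def Z(th):
    return np.sin(2 * np.pi * th)

th1 = np.zeros(ntrial)
th2 = np.full(ntrial, 0.05)          # initial phase difference
for _ in range(nsteps):
    dW = np.sqrt(dt) * rng.standard_normal(ntrial)   # shared increment
    th1 = th1 + omega * dt + sig * Z(th1) * dW
    th2 = th2 + omega * dt + sig * Z(th2) * dW

# log phase difference relative to its initial value; its mean should be
# strongly negative, reflecting a negative Lyapunov exponent
y = np.log(np.abs(th1 - th2) / 0.05)
```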

Fig. 7

Synchronizing slow oscillations in two uncoupled populations described by Eq. (5) with sigmoidal firing rate Eq. (2). a Single realization of the system Eq. (5) driven by common noise \(\xi _a\) to the adaptation variable (\(\langle \xi _a^2 \rangle = \varepsilon ^2 t\), \(\varepsilon = 0.01\)) with input \(I=0.2\) and adaptation timescale \(\tau = 50\). Notice that the phase difference \(\psi (t) = \theta _1(t) - \theta _2(t)\) roughly decreases over time. b Plot of the log of the phase difference \(y(t) = \ln \psi (t)\) for several realizations (thin lines) compared with the theory (thick line) of the mean \(y(0) + \lambda t\) computed using the Lyapunov exponent Eq. (31). c Lyapunov exponent \(\lambda \) decreases as a function of the adaptation timescale \(\tau \), for \(I = 0.2\). We compare numerical simulations (dots) to theory (solid). d Lyapunov exponent \(\lambda \) varies non-monotonically with the strength of the input I. Other parameters are \(\alpha = 0.5\), \(\gamma = 15\), and \(\phi = 1\)

Following the work of Teramae and Tanaka (2004), we can convert Eq. (30) to the equivalent Ito form, linearize, and average to approximate the Lyapunov exponent

$$\begin{aligned} \lambda&= - \int _0^1 \mathbf{Z}'^T(\theta ) \mathbf{D}\mathbf{Z}'(\theta ) \mathrm{d}\theta . \end{aligned}$$
(31)

Assuming noise to the activity variable u and adaptation variable a is not correlated, \(\mathbf{D}\) will be diagonal. In this case, we can further decompose the phase sensitivity function into its Fourier expansion

$$\begin{aligned} \mathbf{Z}(\theta ) = \sum _{k=0}^{\infty } \mathbf{a}_k \sin (2 \pi k \theta ) + \mathbf{b}_k \cos (2 \pi k \theta ), \end{aligned}$$

where \(\mathbf{a}_k = (\mathbf{a}_{k1}, \mathbf{a}_{k2})^T\) and \(\mathbf{b}_k = (\mathbf{b}_{k1}, \mathbf{b}_{k2})^T\) are vectors in \({{\mathbb {R}}}^2\) so that

$$\begin{aligned} \mathbf{Z}' (\theta ) = \sum _{k=0}^{\infty } 2 \pi k \left[ \mathbf{a}_k \cos (2 \pi k \theta ) - \mathbf{b}_k \sin (2 \pi k \theta ) \right] , \end{aligned}$$

and we can expand the terms in Eq. (31) to yield

$$\begin{aligned} \lambda = - \sum _{k = 0}^{\infty } 2 \pi ^2 k^2 \left[ \left( \mathbf{a}_{k1}^2 + \mathbf{b}_{k1}^2 \right) D_{11} + \left( \mathbf{a}_{k2}^2 + \mathbf{b}_{k2}^2 \right) D_{22} \right] . \end{aligned}$$

Thus, as long as \(\mathbf{Z}(\theta )\) is continuous and non-constant, the Lyapunov exponent \(\lambda \) will be negative, so the synchronous state \(\theta _1 = \theta _2\) will be stable. Note that continuity is not satisfied in the case of our singular approximation to \(\mathbf{Z}(\theta )\). We demonstrate the accuracy of our theory Eq. (31) in Fig. 7c, d, showing that \(\lambda \) decreases as a function of \(\tau \) and is non-monotonic in I. Thus, slow oscillations with longer periods are synchronized more quickly, relative to the number of oscillation cycles. Since the Lyapunov exponent has the largest amplitude \(|\lambda |\) for both low and high values of the tonic input I, we suspect this is also related to the period of the oscillation T.
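The series expression for \(\lambda \) can be sanity-checked against the integral Eq. (31) for any smooth \(\mathbf{Z}\) with known Fourier coefficients. The sketch below does this for a toy two-component sensitivity function (our own choice) with diagonal \(\mathbf{D}\):

```python
import numpy as np

# toy sensitivity: Z1 = sin(2 pi theta) (a_{11} = 1, k = 1),
#                  Z2 = 0.5 cos(4 pi theta) (b_{22} = 0.5, k = 2)
D11, D22 = 0.3, 0.1
theta = np.linspace(0.0, 1.0, 20001)
dx = theta[1] - theta[0]

Z1p = 2 * np.pi * np.cos(2 * np.pi * theta)                  # Z1'
Z2p = -0.5 * 4 * np.pi * np.sin(4 * np.pi * theta)           # Z2'

# integral form, Eq. (31): lambda = -int Z'^T D Z' dtheta
# (left Riemann sum; exact to machine precision for periodic integrands)
quad = -np.sum((Z1p**2 * D11 + Z2p**2 * D22)[:-1]) * dx

# series form: lambda = -sum_k 2 pi^2 k^2 [(a_k1^2+b_k1^2) D11 + ...]
series = -(2 * np.pi**2 * 1**2 * 1.0**2 * D11
           + 2 * np.pi**2 * 2**2 * 0.5**2 * D22)
```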

Fig. 8

Slow oscillations in Eq. (5) can also be synchronized via common noise to the neural activity variables \(u_j\) (\(\langle \xi _u^2 \rangle = \varepsilon ^2 t\)). Lyapunov exponent \(\lambda \) decreases as a function of the adaptation timescale \(\tau \), for \(I=0.2\) in both numerical simulations (dots) and theory (solid). Other parameters are \(\alpha = 0.5\), \(\gamma = 15\), and \(\phi = 1\)

Furthermore, we demonstrate that common noise to the neural activity variables \(u_j\) can stochastically synchronize the phases of the two uncoupled populations in Eq. (5). As shown in Fig. 8, the Lyapunov exponent \(\lambda \) again decreases as a function of the adaptation timescale \(\tau \). Thus, whether noisy inputs arrive through the adaptation or activity variables, the two population oscillations will eventually be synchronized. As expected, for the same level of noise, the amplitude of the Lyapunov exponent is smaller in the case of input to the neural activity variable.

7 Impact of intrinsic noise on stochastic synchronization

We now extend our results from the previous section by studying the impact of independent noise in each population. In order for our theory of stochastic synchronization of slow oscillations to be generalizable, it must be robust to the effects of local noise in each of the neural populations. We demonstrate here that oscillation phases of stochastically driven populations still remain relatively close in this case (Fig. 9). Independent noise is incorporated into the modified model Eq. (6). Since there is a periodic solution to the noise-free version of this system, phase-reduction methods can be used to obtain approximate SDEs for the phase variables (Nakao et al. 2007)

$$\begin{aligned} \mathrm{d}\theta _1&= \omega \mathrm{d}t + \mathbf{Z}( \theta _1(t) ) \cdot \left[ \mathrm{d}{\varvec{\xi }}_c (t) + \mathrm{d}{\varvec{\xi }}_1 (t) \right] , \end{aligned}$$
(32a)
$$\begin{aligned} \mathrm{d}\theta _2&= \omega \mathrm{d}t + \mathbf{Z}( \theta _2(t) ) \cdot \left[ \mathrm{d}{\varvec{\xi }}_c (t) + \mathrm{d}{\varvec{\xi }}_2 (t) \right] , \end{aligned}$$
(32b)

where the noise vectors are \({\varvec{\xi }}_c = (\chi _u \xi _{uc}, \chi _a \xi _{ac})^T\) and \({\varvec{\xi }}_j = (\sqrt{1- \chi _u^2} \xi _{uj}, \sqrt{1 - \chi _a^2} \xi _{aj})^T\) (\(j=1,2\)). Reformulating the system Eq. (32) in terms of an Ito SDE, we can analyze the resulting Fokker–Planck equation along the lines of Nakao et al. (2007) to identify the stationary distribution \(M_0(\psi )\) for the phase-difference \(\psi = \theta _1 - \theta _2\):

$$\begin{aligned} M_0( \psi ) = \frac{m_0}{\sigma _u^2 \left[ (2-\chi _u^2) g_u(0) - \chi _u^2 g_u( \psi ) \right] + \sigma _a^2 \left[ (2-\chi _a^2) g_a(0) - \chi _a^2 g_a(\psi ) \right] }, \end{aligned}$$
(33)

where \(m_0\) is a normalization factor and we have simplified the expression using \(\mathbf{D}_{u1} = \mathbf{D}_{u2} \equiv \mathbf{D}_{ul} = \sigma _u^2\) and \(\mathbf{D}_{a1} = \mathbf{D}_{a2} \equiv \mathbf{D}_{al} = \sigma _a^2\) and defined

$$\begin{aligned} g_j( \psi ) = \int _0^1 Z_j( \theta ) Z_j( \theta + \psi ) \mathrm{d}\theta . \end{aligned}$$
Fig. 9

Stationary density \(M_0(\psi )\) of the phase difference \(\psi = \theta _1 - \theta _2\) for two slowly oscillating neural populations driven by both common and independent noise Eq. (6). As the degree of noise correlation is decreased from (a) \(\chi _a = 0.95\) to (b) \(\chi _a = 0.90\), the density spreads, but there is still a peak at \(\psi = 0\), the phase-locked state. We focus on noise in the adaptation variable, so \(\sigma _u = 0\) and \(\sigma _a = 0.01\). Other parameters are \(\alpha = 0.5\), \(\gamma = 15\), \(\phi = 1\), and \(\tau = 20\)

When noise to each layer is independent (\(\chi _u, \chi _a \rightarrow 0\)), then \(M_0(\psi ) = 1\) is constant, so all phase differences are equally likely. When noise is totally correlated (\(\chi _u, \chi _a \rightarrow 1\)), then \(M_0(\psi ) = \delta ( \psi )\). The stationary distribution \(M_0(\psi )\) will broaden as the correlations \(\chi _u\) and \(\chi _a\) are decreased from unity, with a peak remaining at \(\psi = 0\). We demonstrate the accuracy of the formula Eq. (33) for the stationary density of the phase difference \(\psi \) in Fig. 9, showing that it widens as the level of noise correlation is decreased. Again, we focus on the impact of adaptation noise. Thus, even when independent noise is introduced, there is some semblance of synchronization in the phases of the two noise-driven neural populations Eq. (6).
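The structure of Eq. (33) is easy to explore in a sketch. Below we take adaptation noise only (\(\sigma _u = 0\)) and a toy sensitivity function \(Z_a(\theta ) = \sin (2\pi \theta )\) (our simplification; the correlation values match Fig. 9), compute \(g_a(\psi )\) by quadrature, and confirm that the normalized density peaks at \(\psi = 0\) and sharpens as the correlation grows:

```python
import numpy as np

sig_a = 0.01
psi = np.linspace(-0.5, 0.5, 2001)
theta = np.linspace(0.0, 1.0, 2001)
Za = np.sin(2 * np.pi * theta)          # toy sensitivity function

def g(p):
    """g_a(psi) = int_0^1 Z_a(theta) Z_a(theta + psi) dtheta."""
    return np.mean(Za * np.sin(2 * np.pi * (theta + p)))

def M0(chi):
    """Stationary density of Eq. (33) with sigma_u = 0, normalized."""
    gpsi = np.array([g(p) for p in psi])
    dens = 1.0 / (sig_a**2 * ((2 - chi**2) * g(0.0) - chi**2 * gpsi))
    dens /= np.sum(dens) * (psi[1] - psi[0])
    return dens

M95, M90 = M0(0.95), M0(0.90)           # chi_a = 0.95 vs 0.90, as in Fig. 9
```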

8 Entrainment in a pair of spiking populations

Thus far, we have examined the response of a rate model of slow oscillations Eq. (1) to external perturbations. Mean field analyses of recurrently coupled spiking neurons tend to yield such coarse-grained descriptions of network activity (Treves 1993; Brunel 2000). In particular, mean field analyses of spiking neurons with an afterhyperpolarizing current responsible for spike rate adaptation tend to yield firing rate models analogous to Eq. (1) (Vreeswijk and Hansel 2001; Benda and Herz 2003). However, these mean field descriptions average out the effects of heterogeneity and noise that can arise at the level of single neurons. We demonstrate here that these microscopic effects do not significantly modify the qualitative results of the previous sections. Namely, two uncoupled populations of spiking cells with adaptation can still support slow oscillations that become entrained when the slow hyperpolarizing currents receive common stochastic forcing.

We will demonstrate our results using the excitatory-inhibitory network of integrate-and-fire cells studied by Vreeswijk and Hansel (2001). Each of the \(N_e\) excitatory neurons possesses an independent afterhyperpolarizing current, while the \(N_i\) inhibitory neurons have no such current. The pattern of connectivity between neurons is described by the matrices \(G_{ij}^{AB}\), where i, j index individual neurons and \(A,B \in \{e,i\} \) index the population type. For the simulations presented here, we consider all-to-all coupling (\(G_{ij}^{AB} = \bar{G}_{AB}\)), but similar results hold for a wider variety of couplings. The network is characterized by the following set of differential equations

$$\begin{aligned} \mathrm{d}v_j^e&= \left[ I_j^e - v_j^e + E_j^e - q_j \right] \mathrm{d}t/ \tau _e + \mathrm{d}\xi _{ej}, \quad j=1,\ldots ,N_e, \end{aligned}$$
(34a)
$$\begin{aligned} \mathrm{d}q_j&= \left[ - q_j \right] \mathrm{d}t / \tau _q + \mathrm{d}\xi _{qj}, \quad j=1,\ldots ,N_e, \end{aligned}$$
(34b)
$$\begin{aligned} \mathrm{d}v_j^i&= \left[ I_j^i - v_j^i + E_j^i \right] \mathrm{d}t/ \tau _i + \mathrm{d}\xi _{ij},\quad j=1,\ldots ,N_i. \end{aligned}$$
(34c)

Excitatory cells receive external input \(I_j^e\), synaptic input \(E_j^e\), and a slowly evolving hyperpolarizing current \(q_j\), while inhibitory cells are only subject to external and synaptic input. We must also incorporate reset conditions whereby if at time t, \(v_j^e\) reaches 1, a spike occurs and \(v_j^e\) is instantaneously reset to 0 (\(v_j^e(t^+)=0\)), and the hyperpolarizing current \(q_j\) is increased by \(g_q/ \tau _q\) (\(q_j(t^+) = q_j(t^-) + g_q/\tau _q\)). The same reset condition holds for inhibitory cells \(v_j^i(t^+)=0\) when \(v_j^i(t^-) = 1\), but they engage no slow hyperpolarizing current (Ahmed et al. 1998). Excitatory and inhibitory neurons are assumed to have typical membrane time constants of \(\tau _e = \tau _i = 10\) ms. Synaptic currents are assumed to increment instantaneously and decay exponentially according to the spike-triggered sums

$$\begin{aligned} E_j^A = \sum _{B = e,i} \sum _{k,l} G_{jk}^{AB} H(t - t_{kl}^B) \mathrm{e}^{-(t-t_{kl}^B)/ \tau _{sB}}, \end{aligned}$$
(35)

where \(G_{jk}^{AB}\) is the connectivity matrix, \(\tau _{sB}\) is the decay time constant of the synapses from population B, \(H\) is the Heaviside step function, and \(t_{kl}^B\) is the lth spike time of neuron k in population B. Note that excitatory couplings \(G_{jk}^{ee}\) and \(G_{jk}^{ie}\) are non-negative, while inhibitory couplings \(G_{jk}^{ei}\) and \(G_{jk}^{ii}\) are non-positive. Lastly, both the cell voltages and hyperpolarizing currents are subject to white noise processes \(\xi _{Aj}(t)\) with zero mean and variance \(\langle \xi _{Aj}(t)^2 \rangle = \sigma _{Aj}^2 t\) for \(A \in \left\{ e, i, q \right\} \).
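Because the coupling is all-to-all, the double sum in Eq. (35) collapses to two population-wide synaptic traces, which makes direct simulation cheap. The following Python sketch integrates Eq. (34) by the Euler–Maruyama method with the Fig. 10 parameters; the coarser timestep, the random initial voltages, and the explicit negative signs on the inhibitory weights are assumptions of this illustrative reconstruction, not a replication of the paper's simulations.

```python
import numpy as np

def simulate_network(T=1.0, dt=1e-4, Ne=80, Ni=20, seed=0):
    """Euler-Maruyama integration of network Eq. (34) with Fig. 10 parameters.
    Timestep is coarser than the paper's 1e-6 s (an expedient assumption)."""
    rng = np.random.default_rng(seed)
    tau_e = tau_i = 0.010                       # membrane time constants (s)
    tau_q, tau_se, tau_si = 0.5, 0.040, 0.030   # adaptation, synaptic decay (s)
    Ie, Ii, sig = 1.05, 0.95, 0.01
    gq = 0.8 / tau_q
    # All-to-all weights; inhibitory couplings taken non-positive per the text.
    Gee, Gei = 0.4 / Ne, -0.32 / Ni
    Gie, Gii = 0.15 / Ne, -0.01 / Ni
    ve = rng.uniform(0.0, 1.0, Ne)              # excitatory voltages v_j^e
    vi = rng.uniform(0.0, 1.0, Ni)              # inhibitory voltages v_j^i
    q = np.zeros(Ne)                            # afterhyperpolarizing currents
    Se, Si = 0.0, 0.0                           # population synaptic traces
    nsteps = int(round(T / dt))
    rate = np.empty(nsteps)
    for n in range(nsteps):
        Ee = Gee * Se + Gei * Si                # synaptic drive E_j^e
        Ei = Gie * Se + Gii * Si                # synaptic drive E_j^i
        ve += (Ie - ve + Ee - q) * dt / tau_e + sig * np.sqrt(dt) * rng.standard_normal(Ne)
        q += -q * dt / tau_q + sig * np.sqrt(dt) * rng.standard_normal(Ne)
        vi += (Ii - vi + Ei) * dt / tau_i + sig * np.sqrt(dt) * rng.standard_normal(Ni)
        spk_e, spk_i = ve >= 1.0, vi >= 1.0     # threshold crossings
        ve[spk_e] = 0.0                         # reset v_j^e -> 0
        q[spk_e] += gq / tau_q                  # spike-triggered adaptation step
        vi[spk_i] = 0.0                         # reset v_j^i -> 0
        Se = Se * np.exp(-dt / tau_se) + spk_e.sum()
        Si = Si * np.exp(-dt / tau_si) + spk_i.sum()
        rate[n] = spk_e.mean() / dt             # excitatory population rate (Hz)
    return rate
```

Smoothing `rate` over tens of milliseconds yields a population-rate trace of the kind plotted in Fig. 10c.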

Fig. 10

Slow oscillations in an excitatory-inhibitory spiking network of \(N_e=80\) excitatory cells and \(N_i = 20\) inhibitory cells. a The voltage \(v_j^e\) of two randomly selected excitatory cells shows that periods of quiescence and activity alternate synchronously. b Up and down state transitions are apparent in the spike raster plot of all the excitatory cells. c Average spike rates of the excitatory population similarly show the slow switching between the two stable states of the system: a low and a high firing rate state. Parameters are \(I_e = 1.05\), \(I_i = 0.95\), \(g_q = 0.8/\tau _q\), \(\tau _{se} = 40\) ms, \(\tau _{si} = 30\) ms, \(\tau _q = 500\) ms, \(\bar{G}_{ee} = 0.4/N_e\), \(\bar{G}_{ei} = 0.32/N_i\), \(\bar{G}_{ie} = 0.15/N_e\), \(\bar{G}_{ii} = 0.01/N_i\), \(\sigma _e = \sigma _i = \sigma _q = 0.01\). Numerical simulations of Eq. (34) employ the Euler–Maruyama method with a timestep \(dt = 10^{-6}\) s

The recurrent excitatory connectivity of Eq. (34) generates a bistable network. Sufficiently high spike rates are self-sustaining, due to repeated reactivation of excitatory cells, whereas low spike rates fail to ignite persistent activity. Transitions between these two states are generated by the slow buildup and decay of the hyperpolarizing currents of the excitatory cells. We demonstrate the ability of the network Eq. (34) to generate synchronous up and down state transitions in Fig. 10. Single cells tend to occupy either a depolarized state, in which they spike repeatedly, or a hyperpolarized state, in which they are quiescent (Fig. 10a). This reflects the network-wide high activity (up) and low activity (down) states (Fig. 10b, c), and most cells transition between these states synchronously due to the recurrent coupling.

We also explore the impact of common noise on a pair of networks, each described by the equations Eq. (34), and indexed by 1 and 2:

$$\begin{aligned} \mathrm{d}v_j^{1e}&= \left[ I_j^{1e} - v_j^{1e} + E_j^{1e} - q_j^1 \right] \mathrm{d}t/ \tau _e + \mathrm{d}\xi _{1ej}, \quad j=1,\ldots ,N_e, \end{aligned}$$
(36a)
$$\begin{aligned} \mathrm{d}q_j^1&= \left[ - q_j^1 \right] \mathrm{d}t / \tau _q + \mathrm{d}\xi _{1qj}, \quad j=1,\ldots ,N_e, \end{aligned}$$
(36b)
$$\begin{aligned} \mathrm{d}v_j^{1i}&= \left[ I_j^{1i} - v_j^{1i} + E_j^{1i} \right] \mathrm{d}t/ \tau _i + \mathrm{d}\xi _{1ij}, \quad j=1,\ldots ,N_i, \end{aligned}$$
(36c)
$$\begin{aligned} \mathrm{d}v_j^{2e}&= \left[ I_j^{2e} - v_j^{2e} + E_j^{2e} - q_j^2 \right] \mathrm{d}t/ \tau _e + \mathrm{d}\xi _{2ej}, \quad j=1,\ldots ,N_e, \end{aligned}$$
(36d)
$$\begin{aligned} \mathrm{d}q_j^2&= \left[ - q_j^2 \right] \mathrm{d}t / \tau _q + \mathrm{d}\xi _{2qj}, \quad j=1,\ldots ,N_e, \end{aligned}$$
(36e)
$$\begin{aligned} \mathrm{d}v_j^{2i}&= \left[ I_j^{2i} - v_j^{2i} + E_j^{2i} \right] \mathrm{d}t/ \tau _i + \mathrm{d}\xi _{2ij}, \quad j=1,\ldots ,N_i. \end{aligned}$$
(36f)

Each network’s state is initialized by selecting a random time point in the simulation presented in Fig. 10, so that both networks begin in a randomly chosen phase of an evolving slow oscillation. Noise to the voltage variables \(v_j^{BA}\) of each network is taken to be uncorrelated, but noise to the adaptation variables \(q_j^B\) is taken to be fully correlated, so that each variable receives an identical white noise sample. As a result, the spike and rate patterns of these two uncoupled networks become more correlated over time (Fig. 11a). We quantify the effect on spike correlation by binning all spike times of each network’s excitatory population into 10 ms bins and then using MATLAB’s xcorr function to compute an unnormalized correlation function between each pair of neurons drawn from networks 1 and 2. This is then normalized by dividing by the geometric mean \(\sqrt{\nu _j \nu _k}\) of the two neurons’ total firing rates \(\nu _j\) and \(\nu _k\) over the time interval. The time interval [0, 3] s is compared to [3, 6] s in Fig. 11b, demonstrating that the correlation coefficient increases at later times. Thus, common noise in the slow hyperpolarizing currents can help to correlate the temporal evolution of firing rate and spiking in this spiking network model.
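The binning-and-normalization procedure just described can be expressed compactly. The original analysis used MATLAB's xcorr; the Python sketch below is an equivalent reconstruction, in which the function name, the lag truncation, and the assumption of non-empty spike trains are ours.

```python
import numpy as np

def binned_cc(spikes1, spikes2, t_start, t_stop, bin_width=0.010, max_lag=20):
    """Correlation function between two spike trains restricted to
    [t_start, t_stop]: bin at 10 ms, compute the unnormalized correlation
    (as MATLAB's xcorr does), then divide by the geometric mean of the two
    total firing rates. Assumes both trains are non-empty on the interval."""
    edges = np.arange(t_start, t_stop + bin_width, bin_width)
    x1 = np.histogram(spikes1, edges)[0].astype(float)
    x2 = np.histogram(spikes2, edges)[0].astype(float)
    full = np.correlate(x1, x2, mode="full")    # unnormalized correlation
    mid = len(x1) - 1                           # index of zero lag
    cc = full[mid - max_lag: mid + max_lag + 1] # keep lags within +/- max_lag
    duration = t_stop - t_start
    nu1, nu2 = x1.sum() / duration, x2.sum() / duration   # total rates
    lags = np.arange(-max_lag, max_lag + 1) * bin_width
    return lags, cc / np.sqrt(nu1 * nu2)        # normalize by geometric mean
```

For two identical trains this reduces to an autocorrelation, which peaks at zero lag; for trains from the two networks, an increase in the zero-lag value between the early and late windows reflects the noise-induced correlation of Fig. 11b.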

Fig. 11

Noise-induced correlation of the spike patterns of two uncoupled excitatory-inhibitory networks Eq. (36) with adaptation. Noise to all adaptation variables (\(\sigma _{q} = 0.001\)) in both networks is fully correlated. a Excitatory population-wide spike rates of the first (solid line) and second (dashed line) networks become more correlated over time. Up and down transitions begin to occur more coherently at later times. b Correlation coefficient (CC) associated with the spike trains of network 1 as compared to those of network 2, averaged across all possible pairings. The CC for the later time window [3, 6] s (solid line) is larger than that for the earlier time window [0, 3] s (dashed line), demonstrating the noise-induced increase in activity correlation between the two uncoupled networks. Other parameters and methods are the same as in Fig. 10

9 Discussion

We have studied the impact of deterministic and stochastic perturbations to a neural population model of slow oscillations. The model consisted of a single recurrently coupled excitatory population with negative feedback from a slow adaptive current (Laing and Chow 2002; Jayasuriya and Kilpatrick 2012). By examining the phase sensitivity function \((Z_u, Z_a)\), we found that perturbations of the adaptation variable lead to much larger changes in oscillation phase than perturbations of neural activity. Furthermore, this effect becomes more pronounced as the timescale \(\tau \) of adaptation is increased. Introducing noise in the model decreases the oscillation period and helps to balance the mean durations of the oscillation’s up and down states. When two uncoupled populations receive common noise, their oscillation phases \(\theta _1\) and \(\theta _2\) eventually become synchronized, which can be shown by deriving a formula for the Lyapunov exponent of the absorbing state \(\theta _1 \equiv \theta _2\) (Teramae and Tanaka 2004). When independent noise is introduced to each population, in addition to common noise, the long-term state of the system is described by a probability density for the phase difference \(\psi = \theta _1 - \theta _2\), which peaks at \(\psi \equiv 0\).
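The common-noise synchronization mechanism summarized above can be illustrated with a minimal numerical sketch. Two uncoupled phase oscillators receive identical noise through an assumed sensitivity function \(Z(\theta) = \sin\theta\) (an illustrative stand-in, not the \((Z_u, Z_a)\) computed in the text); linearizing about the absorbing state \(\psi = 0\) gives a negative Lyapunov exponent (\(\lambda = -\sigma^2/4\) for this choice of \(Z\) under Itô dynamics), so the wrapped phase difference contracts over time, in the spirit of Teramae and Tanaka (2004).

```python
import numpy as np

def common_noise_sync(sigma=0.5, omega=1.0, T=500.0, dt=0.01, seed=1):
    """Two identical, uncoupled phase oscillators driven through an assumed
    phase sensitivity Z(theta) = sin(theta) by a COMMON white noise.
    Returns the wrapped phase difference |psi| at time T."""
    rng = np.random.default_rng(seed)
    th1, th2 = 0.0, 0.5                           # distinct initial phases
    for _ in range(int(round(T / dt))):
        dW = np.sqrt(dt) * rng.standard_normal()  # shared noise increment
        th1 += omega * dt + sigma * np.sin(th1) * dW
        th2 += omega * dt + sigma * np.sin(th2) * dW
    return abs(np.angle(np.exp(1j * (th1 - th2))))  # wrapped |theta1 - theta2|
```

Replacing the shared increment `dW` with independent samples for each oscillator removes the contraction, mirroring the distinction between common and independent noise in the text.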

Our study was motivated by the observation that recurrent cortical networks can spontaneously generate stochastic oscillations between up and down states. Guided by previous work in spiking models (Compte et al. 2003), we explored a rate model of a recurrent excitatory network with slow spike frequency adaptation. We expect that we would have obtained similar results from an excitatory-inhibitory network, since inhibition tends to act faster than excitation, essentially reducing the effective recurrent excitatory strength (Pinto and Ermentrout 2001). One of the open questions about up and down state transitions concerns the degree to which they are generated by noise or by more deterministic mechanisms, such as slow currents or short term plasticity (Cossart et al. 2003). Here, we have provided some characteristic features that emerge as the level of noise responsible for transitions is increased. Similar questions have been explored in the context of models of perceptual rivalry (Moreno-Bote et al. 2007). In addition, we have provided a plausible mechanism whereby the onset of up and down states could be synchronized in distinct networks (Volgushev et al. 2006).

Nonmonotonic residence time distributions for up states provide compelling evidence for the theory that switches from up to down states are partially governed by deterministic neural processes (Cheng-yu et al. 2009). This idea is explored in detail in a recent study which employed a neuronal network model with short term depression (Dao Duc et al. 2015). Recordings presented therein from both auditory and barrel cortices revealed up state duration distributions that are peaked away from zero. Furthermore, the tail of the duration distribution has an oscillatory decay with several peaks, which may arise due to specific properties of the underlying network’s dynamics. Indeed, the authors were able to account for these peaks in a neuronal network model with an up state whose attracting trajectories are oscillatory. It would be interesting to extend the present study to understand how external inputs might entrain up and down state transitions that occur via such more complex dynamics.

Synchronizing up and down states across multiple areas of the brain may be particularly important for memory consolidation processes (Diekelmann and Born 2010). Long term potentiation (LTP), the process by which synapses are strengthened for a lasting period of time (Alberini 2009), is one of the chief mechanisms thought to underlie memory formation (Takeuchi et al. 2014). Both cortical and hippocampal LTP are typically restricted to the up states of slow oscillations during slow wave sleep (Rosanova and Ulrich 2005). Furthermore, up states may then repetitively activate memory traces in hippocampus, along with thalamus and cortex, reinforcing memory persistence (Marshall and Born 2007). Thus, subnetworks whose slow oscillations are coordinated are more likely to be further linked through long term plasticity. Indeed, boosting slow oscillations with external potential fields has been shown to enhance declarative memories, providing further evidence that coherent up and down state transitions may subserve memory consolidation processes (Marshall et al. 2006). In total, synchrony may provide a functionally relevant way to link the activities of related neuronal assemblies, allowing appropriate reactivation during waking hours (Steriade 2006).

We have proposed two possible ways for synchrony of up and down states to occur: (a) common noise in the activating currents of neurons in distinct populations and (b) common noise in the slow hyperpolarizing currents of distinct neural populations. The first mechanism could arise through common excitatory input to each population, as in previous studies of correlation-induced synchrony in olfactory bulb neurons (Galán et al. 2006). The second mechanism must arise via common chemical forcing of hyperpolarizing currents. One way this could occur is via common astrocytic calcium signaling (Volterra et al. 2014). Calcium propagates rapidly in waves through astrocytes (Newman 2001), which could provide a common signal to calcium-activated hyperpolarizing currents (Bond et al. 2004). Furthermore, slow afterhyperpolarizing currents can be modulated by acetylcholine (Faber and Sah 2005). Global modulations of acetylcholine are often observed during slow wave sleep (Steriade 2004), so this may provide another mechanism for the common perturbation of slow afterhyperpolarizing currents.

Other previous studies have explored phenomenological models of up/down state transitions in neural populations. For instance, Holcman and Tsodyks (2006) introduced an excitatory network with activity-dependent synaptic depression having two attractors that represented the up and down states of the network. Synaptic noise, rather than a systematic slow process like rate adaptation, drove the system between these two attractors. These authors explored the stochastic dynamics of a single neural population that did not possess a deterministic limit cycle in the absence of noise. Parga and Abbott (2007) addressed such dynamics in a complementary way, by studying a network of integrate-and-fire neurons with a nonlinear membrane current. The resulting bistability in the resting membrane potential of single cells is inherited by the dynamics of the full network. The noise-driven network ceaselessly switches between low and high firing rate states. The durations of up and down states are exponentially distributed, since they arise from noise-induced escape from a local attractor. Our work is distinct from these previous studies in several ways. First, the mechanism underlying transitions between up and down states is a combination of slow rate adaptation and noise in our full stochastic model. Under small noise assumptions, this allows us to examine the susceptibility of the network state to external perturbations using a phase reduction method (Brown et al. 2004). Second, a chief concern of our work is the synchronization of multiple populations undergoing coherent slow oscillations, as in Volgushev et al. (2006). One particularly interesting extension of previous studies of noise-induced up and down state transitions (Holcman and Tsodyks 2006; Parga and Abbott 2007) would be to define some semblance of a phase response curve, based on knowledge of the network’s underlying state.
Recent work on the PRCs of excitable systems and asymptotically phaseless systems may be particularly helpful (Shaw et al. 2012; Wilson and Moehlis 2015).

There are several other potential extensions to this work. For instance, we could examine the impact of long-range connections between networks to see how these interact with common and independent noise to shape the phase coherence of oscillations. Similar studies have been performed in spiking models by Ly and Ermentrout (2009). Interestingly, shared noise can actually stabilize the anti-phase locked state in this case, even though it is unstable in the absence of noise. Furthermore, it is known that coupling spanning long distances can be subject to axonal delays. In spite of this, networks of distantly coupled clusters of cells can still sustain zero-lag synchronized states (Vicente et al. 2008). However, there are some cases in which such delays can destabilize phase-locked states (Earl and Strogatz 2003; Ermentrout and Ko 2009), in which case another mechanism would be needed to explain the synchronization of up/down states. Thus, we could also explore the impact of delayed coupling, determining how features of the phase sensitivity function interact with delay to promote in-phase or anti-phase synchronized states. Lastly, we note that a systematic analysis of phase equations for relaxation oscillators has been applied to the general case of slow variables in Izhikevich (2000). We expect that the approach developed therein, using the Malkin theorem, could be applied to the system Eq. (1), even in the case of a discontinuous firing rate function.