Abstract
Slow oscillations in the firing rate of neural populations are commonly observed during slow wave sleep. These oscillations are partitioned into up and down states, where the population switches between high and low firing rates (Sanchez-Vives and McCormick in Nat Neurosci 3:1027–1034, 2000). Transitions between up and down states can be synchronized over considerably long ranges (Volgushev et al. in J Neurosci 26:5665–5672, 2006). To explore how external perturbations shape the phase of slow oscillations, we analyze a reduced model of up and down state transitions involving a population neural activity variable and a global adaptation variable. The adaptation variable represents the average of all the slow hyperpolarizing currents received by neurons in a large population. Recurrent connectivity leads to a bistable neural population, where a low firing rate state coexists with a high firing rate state in which persistent activity is maintained via excitatory connections. Adaptation eventually inactivates the high activity state, and the low activity state then persists until adaptation has significantly decayed. We analyze the phase response of the rate model by taking advantage of the separation of timescales between the fast activity and slow adaptation variables. This analysis reveals that perturbations to the neural activity variable have a considerably weaker effect on the oscillation phase than adaptation perturbations. When noise is not incorporated into the rate model, the period of the slow oscillation is determined by the timescale of the slow adaptation variable. In the presence of noise, the times at which the population transitions between the low and high activity states become variable. This is because the rise and decay of the adaptation variable are now stochastically driven, leading to a distribution of transition times. Interestingly, common noise in the adaptation variable can lead to correlations between two distinct slowly oscillating populations. This effect remains significant even when each population contains its own local sources of noise. We also show this phenomenon can occur in a spiking network. Our results demonstrate the relative contributions of excitatory input and hyperpolarizing current fluctuations to the phase of slow oscillations.
1 Introduction
Cortical networks can generate a wide variety of oscillatory rhythms with frequencies spanning five orders of magnitude (Buzsáki and Draguhn 2004). Slow oscillatory activity (0.1–1 Hz) has been observed in vivo during periods of decreased alertness, such as slow wave sleep and anesthesia (Steriade et al. 1993). Furthermore, such activity can be produced in vitro by bathing cortical slices in a medium with typical extracellular ion concentrations (Sanchez-Vives and McCormick 2000). The function of slow oscillations during sleep remains uncertain, but there is growing evidence that they play a role in memory consolidation processes (Marshall et al. 2006).
A key feature of these slow oscillations is that they tend to be an alternating sequence of two bistable states, referred to as the up and down states. Up states in networks are characterized by high levels of firing activity, due to depolarization in single cells. Down states in networks typically appear quiescent, due to hyperpolarization in single cells. There is strong evidence that up states are generated by recurrent connections (Cossart et al. 2003). This suggests up states may be spontaneous remnants of stimulus-induced persistent states utilized for working memory (Wang 2001) and other network computations (Major and Tank 2004). Furthermore, recent two-photon imaging in vivo has revealed that subthreshold NMDA-receptor-dependent calcium signals are widespread during up states (Chen et al. 2013). Such signals may serve to reinforce cellular transformations underlying memory consolidation during sleep (Marshall et al. 2006). In addition, while neurons tend to be closer to spike threshold during up states, sensory stimuli may actually produce weaker responses during these periods (Petersen et al. 2003). Correspondences between spontaneous neural activity and underlying network stimulus tuning have been observed in a variety of brain areas (Fox and Raichle 2007). For instance, the dynamic evolution of spontaneous visual cortical activity has been linked to the architecture producing orientation maps (Kenet et al. 2003).
Several different cellular and synaptic mechanisms have been suggested to underlie the transitions between up and down states. One possibility is that the network is recurrently coupled with excitation, stabilizing both a quiescent and an active state (Amit and Brunel 1997; Renart et al. 2007). Fluctuations due to probabilistic synapses, channel noise, and randomness in network connectivity can then lead to spontaneous transitions between the quiescent and active state (Parga and Abbott 2007; Bressloff 2010; Litwin-Kumar and Doiron 2012). Alternatively, switches between low and high activity states may arise from some underlying systematic slow process. For instance, it has been shown that competition between recurrent excitation and the negative feedback produced by activity-dependent synaptic depression can lead to slow oscillations in firing rate whose timescale is set by the depression timescale (Bart et al. 2005; Holcman and Tsodyks 2006; Kilpatrick and Bressloff 2010). Excitatory-inhibitory networks with facilitation can also produce slow oscillations: the slow facilitation of feedback inhibition terminates the up state, and the down state is then rekindled by positive feedback from recurrent excitation (Melamed et al. 2008). These network mechanisms utilize dynamic changes in the strength of synaptic connections. However, Compte et al. (2003) proposed that single cell mechanisms can also shift network states between up and down states. The up state is maintained by strong recurrent excitation balanced by inhibition, and transitions to the down state occur due to a slow hyperpolarizing current. Once in the down state, the slow hyperpolarizing current is inactivated, and excitation reinitiates the up state. Slow hyperpolarizing currents are prime examples of mechanisms underlying spike rate adaptation (Benda and Herz 2003).
One particularly well studied example is the class of currents generated by calcium-gated potassium channels, often referred to as medium afterhyperpolarization currents (Madison and Nicoll 1984; Pedarzani and Storm 1993; Higgs et al. 2006). A similar mechanism has been utilized in models of perceptual rivalry, where dominance switches between two mutually inhibiting populations occur due to the build up of network-wide spike rate adaptation (Laing and Chow 2002; Moreno-Bote et al. 2007).
In this paper, we utilize a rate-based model of an excitatory network with spike rate adaptation to explore the impact that noise perturbations have upon the relative phase and duration of slow oscillations. The adaptation variable quantifies the average effects of slow hyperpolarizing currents across the network, as derived by Vreeswijk and Hansel (2001). We find that, as in the spiking model studied by Compte et al. (2003), the interplay between recurrent excitation and the adaptation variable produces a slow oscillation in the firing rate of the network. In fact, for slow timescale adaptation, the oscillations evolve as fast switches between a low and high activity state, the stable fixed points of the adaptation-free system. Since the timescale and slow dynamics of the oscillation are set by the adaptation variable, we mainly focus on the impact of perturbations to the adaptation variable in our model. As we will show, perturbations of the activity variable have a much weaker impact on the oscillation phase than perturbations to the slow adaptation variable.
Another remarkable feature of slow oscillations, observed during slow-wave sleep and anesthesia, is that the up and down states tend to be synchronized across different regions of cortex and thalamus (Steriade et al. 1993; Massimini et al. 2004). Specifically, both the up and down states start near synchronously in cells located up to 12 mm apart (Volgushev et al. 2006). Such remarkable coherence between distant network activity cannot be accomplished by single cell mechanisms alone; it requires either long range network connectivity or some external signal forcing entrainment (Traub et al. 1996; Smeal et al. 2010). Activity tends to originate from several different foci in the network, quickly spreading across the rest of the network on a timescale orders of magnitude faster than the oscillation itself (Compte et al. 2003; Massimini et al. 2004). While the synchronous initiation of up states may be explained by recurrent architecture, synchronization of the down states is more difficult to explain and remains an open problem (Volgushev et al. 2006). The fact that the onset of quiescence is fast and well synchronized means there must be either a rapid relay signal between all foci or some global signal cueing the down state. Rather than suggest a disynaptic relay, in which long range excitation acts on local inhibition, we suggest that background noise can serve as a synchronizing global signal (Ermentrout et al. 2008). For example, up/down state correlations in visual cortex have also been observed across 500 \(\upmu \)m, and it has been suggested this may arise due to common input from LGN (Lampl et al. 1999). Noisy but correlated inputs have been shown to be capable of synchronizing uncoupled populations of phase oscillators (Teramae and Tanaka 2004) as well as experimentally recorded cells in vitro (Galán et al. 2006).
Here we will show that correlated noise is a viable mechanism for coordinating slow oscillations in distinct uncoupled neural populations.
The paper is organized as follows: We introduce the neural population model in Sect. 2, indicating the way external noise is incorporated into the model. In Sect. 3, we demonstrate the periodic solutions that emerge in the noise-free model, demonstrating it is possible to derive analytical expressions for the oscillation period in the case of steep firing rate functions. Then, in Sect. 4 we show how to derive phase sensitivity functions that describe how external perturbations to the periodic solution impact the asymptotic phase of the oscillation. As demonstrated, the impact of perturbations to the adaptation variable is much stronger than activity variable perturbations, especially for longer adaptation timescales. Thus, our studies of the impact of noise mainly focus on the effects of fluctuations in the adaptation variable. We find, in Sect. 5, that adding noise to the adaptation variable leads to up and down state durations that are shorter and more balanced, so that the up and down states last for similar lengths of time. In Sect. 6, we demonstrate that slow oscillations in distinct populations can become entrained to one another when both populations are forced by the same common noise signal. This phenomenon is robust to the introduction of independent noise in each population, as we show in Sect. 7. Lastly, we demonstrate that the rate and spike patterns of two uncoupled spiking networks can be synchronized by common noise in Sect. 8.
2 Adaptive neural populations: deterministic and stochastic models
We begin by describing the models we will use to explore the impact of external perturbations on slow oscillations. Motivated by Compte et al. (2003), we will focus on a neural population model with spike rate adaptation, akin to mutual inhibitory models used to study perceptual rivalry (Laing and Chow 2002; Moreno-Bote et al. 2007).
Single population model In a single population, neural activity u(t) receives negative feedback due to a subtractive spike rate adaptation variable (Benda and Herz 2003)

$$\dot{u} = -u + f(\alpha u - a + I), \qquad \tau \dot{a} = -a + \phi u.$$
Here, u represents the mean firing rate of the neural population with excitatory connection strength \(\alpha \). The negative feedback variable a is spike frequency adaptation with strength \(\phi \) and time constant \(\tau \). For some of our analysis we will utilize the assumption \(\tau \gg 1\), based on the fact that many forms of spike rate adaptation tend to be much slower than neural membrane time constants (Benda and Herz 2003). The constant tonic drive I initiates the high firing rate (up) state, and slow adaptation eventually attenuates activity to a low firing rate (down) state. Weak but positive drive \(I>0\) is meant to model the presence of low spiking threshold cells that spontaneously fire, utilized as a mechanism for initiating the up state in Compte et al. (2003). The firing rate function f is a monotone, saturating function such as the sigmoid

$$f(x) = \frac{1}{1 + \mathrm{e}^{-\gamma x}}.$$
Commonly, in studies of neural field models, the high gain limit (\(\gamma \rightarrow \infty \)) of Eq. (2) is taken to yield the Heaviside firing rate function (Amari 1977; Laing and Chow 2002)

$$H(x) = \begin{cases} 1, & x \ge 0, \\ 0, & x < 0, \end{cases}$$
which often allows for a more straightforward analytical study of model dynamics. We exploit this fact extensively in our study. Nonetheless, we have also carried out many numerical simulations of the model for a smooth firing rate function Eq. (2), and they correspond to the results we present for sufficiently high gain. Note, this form of adaptation is often referred to as subtractive negative feedback, as current is subtracted from the population input. Alternative models of slow neural population oscillations have employed short term synaptic depression (Tabak et al. 2000; Bart et al. 2005; Holcman and Tsodyks 2006; Kilpatrick and Bressloff 2010), a form of divisive negative feedback.
A primary concern of this work is the response of Eq. (1) to external perturbations, acting on the activity u and adaptation a variables. To do so, we will use both an exact method and a linearization to identify the phase response curve of the limit cycle solutions to Eq. (1). Understanding the susceptibility of limit cycles of Eq. (1) to inputs will help us understand ways in which noise will influence the frequency and regularity of oscillations.
Stochastic single population model Following our analysis of the noise-free system, we will consider how fluctuations influence oscillatory solutions to Eq. (1). To do so, we will employ the following stochastic differential equations (SDEs) for Eq. (1) forced by white noise
where we have introduced the independent Gaussian white noise processes \(\xi _u(t)\) and \(\xi _a(t)\) with zero mean \(\langle \xi _u(t) \rangle = \langle \xi _a(t) \rangle = 0\) and variances \(\langle \xi _u(t)^2 \rangle = \sigma _u^2 t\) and \(\langle \xi _a(t)^2 \rangle = \sigma _a^2 t\). Extending our results concerning the phase response curve, we will explore how noise forcing impacts the statistics of the resulting stochastic oscillations in Eq. (4). In particular, since we find noise tends to impact the phase of the oscillation more strongly when applied to the adaptation variable, we will tend to focus on the case \(\xi _u \equiv 0\).
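As a concrete illustration, the stochastic model can be integrated with a simple Euler–Maruyama scheme. The sketch below is ours, not the paper's code: parameter values are illustrative (chosen to satisfy \(0< I < \phi - \alpha\)), and the exact placement of the adaptation noise relative to the \(1/\tau\) factor is an assumption.

```python
import numpy as np

def f(x, gamma=20.0):
    """Sigmoidal firing rate, Eq. (2), with gain gamma."""
    return 1.0 / (1.0 + np.exp(-gamma * x))

def simulate(T=200.0, dt=0.01, alpha=1.0, phi=2.0, I=0.5,
             tau=20.0, sigma_u=0.0, sigma_a=0.05, seed=0):
    """Euler-Maruyama integration of the noise-forced rate model.
    Illustrative parameters; noise placement is an assumption."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    t = np.arange(n) * dt
    u = np.zeros(n)
    a = np.zeros(n)
    for k in range(n - 1):
        # fast activity variable with (optional) additive white noise
        u[k + 1] = u[k] + (-u[k] + f(alpha * u[k] - a[k] + I)) * dt \
            + sigma_u * np.sqrt(dt) * rng.standard_normal()
        # slow adaptation variable with additive white noise
        a[k + 1] = a[k] + (-a[k] + phi * u[k]) * dt / tau \
            + sigma_a * np.sqrt(dt) * rng.standard_normal()
    return t, u, a
```

With weak adaptation noise, the trajectory still alternates between a high activity state near \(u \approx 1\) and a quiescent state near \(u \approx 0\), but the transition times jitter from cycle to cycle.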
Stochastic dual population model Finally, we will focus on how correlations in noise-forcing impact the coherence of two distinct uncoupled populations
Thus, the system Eq. (5) describes the dynamics of two distinct neural populations \(u_1\) and \(u_2\), with inputs I. Our main interest lies in the impact the noise terms have upon the phase relationship between the two systems’ states. In this version of the model, noise to the activity variables \(\xi _u\) is totally correlated, as is noise to the adaptation variables \(\xi _a\). Thus, all means are zero and \(\langle \xi _{u}^2 (t) \rangle = \sigma _u^2 t = \mathbf{D}_{11} t\). Furthermore, \(\langle \xi _{a}^2 (t) \rangle = \sigma _a^2 t = \mathbf{D}_{22} t\). For this study, we assume there are no correlations between activity and adaptation noise, so \(\langle \xi _u(t) \xi _{a} (t) \rangle = 0\). A more general version of the model Eq. (5) would consider the possibility of independent noise in each population
Noise terms all have zero mean and variances defined \(\langle \xi _{uj}^2 (t) \rangle = \sigma _{uj}^2 t = \mathbf{D}_{uj} t\) and \(\langle \xi _{aj}^2 (t) \rangle = \sigma _{aj}^2 t= \mathbf{D}_{aj} t\) (\(j=1,2,c\)). To ease calculations, we take \(\mathbf{D}_{u1} = \mathbf{D}_{u2} \equiv \mathbf{D}_{ul} = \sigma _u^2\) and \(\mathbf{D}_{a1} = \mathbf{D}_{a2} \equiv \mathbf{D}_{al} = \sigma _a^2\). The degree of noise correlation between populations is controlled by the parameters \(\chi _u\) and \(\chi _a\), so in the limit \(\chi _{u,a} \rightarrow 1\), the model Eq. (6) becomes Eq. (5).
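One standard way to generate the partially correlated increments described above is to mix a common stream with population-specific streams. The sketch below uses our own function name and mixing weights \(\sqrt{\chi}\) and \(\sqrt{1-\chi}\), a construction consistent with the limits \(\chi \rightarrow 0\) (independent populations) and \(\chi \rightarrow 1\) (the fully common-noise model Eq. (5)).

```python
import numpy as np

def correlated_increments(n_steps, dt, sigma, chi, rng):
    """White-noise increments for two populations with pairwise
    correlation chi: mix one common stream with two local streams.
    Each output stream has per-step variance sigma**2 * dt."""
    common = rng.standard_normal(n_steps)       # shared across populations
    local = rng.standard_normal((2, n_steps))   # private to each population
    return sigma * np.sqrt(dt) * (np.sqrt(chi) * common
                                  + np.sqrt(1.0 - chi) * local)
```

The empirical correlation coefficient of the two returned streams converges to \(\chi\) as the number of steps grows, since the common component contributes covariance \(\chi\) while each stream has unit normalized variance.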
3 Periodic solutions of a single population
We begin by studying periodic solutions of the single population system Eq. (1), as demonstrated in Fig. 1a. First, we note that for firing rate functions f with finite gain, we can identify the emergence of oscillations by analyzing the stability of the equilibria of Eq. (1). That is, we assume \((\dot{u}, \dot{a}) = (0,0)\), so the system becomes

$$\bar{u} = f(\alpha \bar{u} - \bar{a} + I), \qquad \bar{a} = \phi \bar{u},$$
which can be reduced to the single equation

$$\bar{u} = f \left( (\alpha - \phi ) \bar{u} + I \right) =: g(\bar{u}).$$
Roots of Eq. (7), defining fixed points of Eq. (1), are plotted as a function of the input I in Fig. 1c, d. Utilizing the sigmoidal firing rate function f given by Eq. (2), we can show that there will be a single fixed point as long as \(\phi > \alpha \). In this case, we can compute

$$g'(\bar{u}) = (\alpha - \phi ) \gamma g(\bar{u}) \left[ 1 - g(\bar{u}) \right] < 0.$$
Since \(g(\bar{u})\) is monotone decreasing, \(\bar{u} - g(\bar{u})\) is monotone increasing. Further, noting \(\lim _{\bar{u} \rightarrow \pm \infty } \left[ \bar{u} - g(\bar{u}) \right] = \pm \infty \), it is clear \(\bar{u} - g(\bar{u})\) crosses zero once, so Eq. (7) has a single root when \(\phi > \alpha \). Stability of this equilibrium is given by the eigenvalues of the associated Jacobian

$$J(\bar{u}, \bar{a}) = \begin{pmatrix} -1 + \alpha f'(\alpha \bar{u} - \bar{a} + I) & - f'(\alpha \bar{u} - \bar{a} + I) \\ \phi / \tau & -1/\tau \end{pmatrix}.$$
We note that the sigmoid Eq. (2) satisfies the Riccati equation \(f' = \gamma f (1 - f)\), so we can use Eq. (7) to write

$$f'(\alpha \bar{u} - \bar{a} + I) = \gamma \bar{u} (1 - \bar{u}).$$
Oscillatory instabilities arise when the complex eigenvalues associated with fixed points \((\bar{u}, \bar{a})\) cross from the left to the right half plane. All numerical simulations have shown that these instabilities lead to limit cycles (Fig. 1b). We identify the parameter values at which this destabilization occurs by using the relations: \(\mathrm{tr}(J) = 0\) and \(\mathrm{tr}(J)^2 < 4 \mathrm{det} (J)\). Thus, a necessary condition for the onset of oscillatory instabilities is that the equilibrium value \(\bar{u}\) satisfies

$$\gamma \bar{u} (1 - \bar{u}) = \frac{\tau + 1}{\alpha \tau }.$$
Solving this for \(\bar{u}\) yields

$$\bar{u} = \frac{1}{2} \left[ 1 \pm \sqrt{1 - \frac{4 (\tau + 1)}{\alpha \gamma \tau }} \right].$$
Thus, these instabilities will only occur when the timescale of adaptation is sufficiently large \(\tau > \left[ \alpha \gamma / 4 - 1\right] ^{-1}\). Plugging the formula Eq. (8) back into the fixed point equation Eq. (7) and solving for the input I, we can parameterize the boundary of instability based upon the equation

$$I = (\phi - \alpha ) \bar{u} + \frac{1}{\gamma } \ln \left[ \frac{\bar{u}}{1 - \bar{u}} \right],$$
along with the additional condition \(\mathrm{tr}(J)^2 < 4 \mathrm{det} (J)\), which becomes

$$1 + (\phi - \alpha ) \gamma \bar{u} (1 - \bar{u}) > 0,$$
which will always hold as long as \(\phi > \alpha \). We partition the parameter space \((\tau , I)\) using our formula Eq. (9) in Fig. 1b. As demonstrated, there tend to be either two or zero stability transitions for a given timescale \(\tau \), and the coalescence of the two points is given by the point where \(\tau = \left[ \alpha \gamma / 4 - 1\right] ^{-1}\).
In the limit of slow adaptation \(\tau \gg 1\), we can separate the timescales of the activity u and adaptation a variables, finding u will equilibrate according to the equation

$$\hat{u} = f(\alpha \hat{u} - a + I),$$
and subsequently a will slowly evolve according to the equation

$$\tau \dot{a} = -a + \phi \hat{u}(a).$$
We always have an implicit formula for \(\hat{u}(t)\) in terms of a(t), so the dynamics will tend to slowly evolve along the direction of the a variable. This demonstrates why periodic solutions to Eq. (1) are comprised of a slow rise and decay phase of a, punctuated by fast excursions in the activity variable u. In general, it is not straightforward to analytically treat the pair of Eqs. (11) and (12), but we will show how computing solutions of the singular system becomes straightforward when we take the high gain limit \(\gamma \rightarrow \infty \).
Having established the existence of oscillatory instabilities and numerically shown periodic solutions to Eq. (1) in the case of sigmoid firing rates Eq. (2), we now explore the system in the high gain limit \(\gamma \rightarrow \infty \) whereby the firing rate function becomes a Heaviside Eq. (3). In this case, fixed points \((\bar{u},\bar{a})\) satisfy the equations

$$\bar{u} = H(\alpha \bar{u} - \bar{a} + I)$$
and \(\bar{a} = \phi \bar{u}\). Thus, assuming \(\phi > \alpha \), then \(\bar{u}=0\) when \(I<0\) and \(\bar{u} = 1\) when \(I> (\phi - \alpha )\). In both cases, the fixed points are linearly stable. When \(0 < I < (\phi - \alpha )\), there are no fixed points and we expect to find oscillatory solutions. Assuming \(\tau \gg 1\), we can exploit a separation of timescales to identify the shape and period of these limit cycles. To begin, we note that on fast timescales

$$\dot{u} = -u + H(I + \alpha u - a_0),$$
where a(t) is assumed to be changing slowly enough that it is in a quasi-steady state \(a_0\) on short timescales. On longer timescales of order \(\tau \), u(t) quickly equilibrates and

$$\tau \dot{a} = -a + \phi u, \qquad u = H(I + \alpha u - a).$$
Periodic solutions to Eq. (1) must obey the condition \((u(t),a(t)) = (u(t+nT),a(t+nT))\) for \(t \in [0,T]\) and \(n \in {{\mathbb {Z}}}\), so we focus on the domain \(t \in [0,T]\). Examining Eq. (13), we can see oscillations in Eq. (1) involve switches between \(u(t) \approx 1\) and \(u(t) \approx 0\). We translate time so that \(u(t) \approx 1\) on \(t \in [0,T_1)\) and \(u(t) \approx 0\) on \(t \in [T_1,T)\). Subsequently, this means for \(t \in [0,T_1]\) the system Eq. (13) becomes \(u \equiv 1\) and \(\tau \dot{a} = -a + \phi \) so \(a(t) = \phi - (\phi - I) \mathrm{e}^{-t/\tau }\). We know \(a(0)=I\) because \(u(0^-) \equiv 0\) in Eq. (13), and the argument of H(x) must have crossed zero at \(t=0\). In a similar way, we find on \(t \in [T_1,T)\) that \(u \equiv 0\) and \(a(t) = (I+ \alpha ) \mathrm{e}^{-(t-T_1)/\tau }\). Using the conditions \(a(T_1) = I+ \alpha \) and \(a(T) = I\), we find that the rise time of the adaptation variable (or the duration of the up state) is

$$T_1 = \tau \ln \left[ \frac{\phi - I}{\phi - I - \alpha } \right],$$
and the decay time (or the duration of the down state) is

$$T_2 = \tau \ln \left[ \frac{I + \alpha }{I} \right],$$
and the total period of the oscillation is

$$T = T_1 + T_2 = \tau \ln \left[ \frac{(\phi - I)(I + \alpha )}{(\phi - I - \alpha ) I} \right].$$
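The rise time, decay time, and period are straightforward to evaluate from these formulas; the helper below (our naming, with illustrative usage) collects them.

```python
import numpy as np

def up_down_period(alpha, phi, I, tau):
    """Up-state duration T1, down-state duration T2, and period T
    from the singular-limit formulas, valid for tau >> 1 and
    0 < I < phi - alpha."""
    T1 = tau * np.log((phi - I) / (phi - I - alpha))  # up-state duration
    T2 = tau * np.log((I + alpha) / I)                # down-state duration
    return T1, T2, T1 + T2
```

For instance, with \(\alpha = 1\), \(\phi = 2\), \(I = 1/2\), and \(\tau = 20\), both durations equal \(20 \ln 3\), so the up and down states are perfectly balanced at this input level.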
Thus, approximate periodic solutions to Eq. (1) in the case of a Heaviside firing rate Eq. (3) take the form

$$u_0(t) = \begin{cases} 1, & t \in [0, T_1), \\ 0, & t \in [T_1, T), \end{cases} \qquad a_0(t) = \begin{cases} \phi - (\phi - I) \mathrm{e}^{-t/\tau }, & t \in [0, T_1), \\ (I + \alpha ) \mathrm{e}^{-(t - T_1)/\tau }, & t \in [T_1, T). \end{cases}$$
We demonstrate the accuracy of the approximation Eq. (15) in Fig. 2a. Furthermore, we show that the relationship between the period T and model parameters is well captured by the formula Eq. (14). Notice there is a non-monotonic relationship between the period T and the input I. We can understand this further by noting that the rise time \(T_1\) of the adaptation variable a increases monotonically with input

$$\frac{\mathrm{d} T_1}{\mathrm{d} I} = \frac{\tau \alpha }{(\phi - I)(\phi - I - \alpha )} > 0$$
when \(0 < I < (\phi - \alpha )\). Furthermore, the decay time \(T_2\) of the adaptation variable a decreases monotonically with input

$$\frac{\mathrm{d} T_2}{\mathrm{d} I} = - \frac{\tau \alpha }{I (I + \alpha )} < 0$$
when \(0 < I < (\phi - \alpha )\). Thus, as \(I \rightarrow 0^+\), the slow oscillation’s period T is dominated by very long decay times \(T_2 \gg 1\) and as \(I \rightarrow (\phi - \alpha )^-\), it is dominated by very long rise times \(T_1 \gg 1\). We can identify the minimal period as a function of the input I by finding the critical point of T(I). To do so, we differentiate and simplify

$$\frac{\mathrm{d} T}{\mathrm{d} I} = \tau \alpha \left[ \frac{1}{(\phi - I)(\phi - I - \alpha )} - \frac{1}{I (I + \alpha )} \right],$$
so the critical point of T(I) is at \(I_{crit} = (\phi - \alpha )/2\), which corresponds to the minimal value of the period \(T_{min}(I) = 2 \tau \ln \left[ (\phi + \alpha )/(\phi - \alpha ) \right] \) as pictured in Fig. 2b.
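A quick numerical scan of T(I) over the admissible input range confirms the location and value of this minimum; parameter values below are illustrative.

```python
import numpy as np

alpha, phi, tau = 1.0, 2.0, 20.0  # illustrative parameters

def period(I):
    """Total period T(I) of the singular limit cycle, Eq. (14)."""
    return tau * (np.log((phi - I) / (phi - I - alpha))
                  + np.log((I + alpha) / I))

# scan the admissible range 0 < I < phi - alpha for the minimum
I_grid = np.linspace(1e-3, (phi - alpha) - 1e-3, 100001)
I_min = I_grid[np.argmin(period(I_grid))]
T_min = period(I_min)
```

The minimizer sits at \(I_{crit} = (\phi - \alpha)/2\), where the rise and decay times are balanced, and the minimal period matches \(2 \tau \ln [(\phi + \alpha)/(\phi - \alpha)]\).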
4 Phase response curves
We can further understand the dynamics of the slow oscillations in Eq. (1) by computing phase response curves for both the case of a sigmoidal firing rate Eq. (2) and the Heaviside firing rate Eq. (3). As we will show, perturbations of the activity variable u have decreasing impact as the timescale of adaptation \(\tau \) and the gain \(\gamma \) of the firing rate are increased. Perturbations of the adaptation variable a typically lead to larger phase shifts than perturbations to the neural activity variable u. This is due to the fact that the rise and decay of the slow adaptation variable a are primarily what determines the phase of the oscillation.
To begin, we derive a general formula that linearly approximates the influence of small perturbations on limit cycle solutions \((u_0(t), a_0(t))\) to Eq. (1). Essentially, we utilize the fact that solutions \(\mathbf{Z}(t)\) to the adjoint equation associated with linearization about the limit cycle solution \((u_0(t), a_0(t))\) provide a complete description of how infinitesimal perturbations of the limit cycle impact its phase (Ermentrout 1996; Brown et al. 2004). To start, we note that
is the linearization of Eq. (1) about the limit cycle \((u_0(t), a_0(t))\). Defining the inner product on T-periodic functions in \({{\mathbb {R}}}^2\) as \(\langle F(t), G(t) \rangle = \int _0^T F(t) \cdot G(t) \mathrm{d}t\), we can find the adjoint operator \({\mathcal L}^*\) by noting it satisfies \(\langle F, {\mathcal L}G \rangle = \langle {\mathcal L}^*F, G \rangle \) for all \(L^2\) integrable vector functions F, G. We can then compute
It can be shown that the null space of \({\mathcal L}^*\) describes the response of the phase of the limit cycle \((u_0(t),a_0(t))\) to infinitesimal perturbations (Brown et al. 2004). Note that if \((u_0(t),a_0(t))\) is a stable limit cycle then the nullspace of \({\mathcal L}\) is spanned by scalar multiples of \((u_0'(t),a_0'(t))\). Furthermore, appropriate normalization requires that \(\mathbf{Z}(t) \cdot (u_0'(t),a_0'(t)) = 1\) along with \({\mathcal L}^* \mathbf{Z}= 0\) (Ermentrout 1996). To numerically compute \(\mathbf{Z}(t) = (Z_u(t), Z_a(t))\), we thus integrate the system
backward in time, taking the long time limit to find \((Z_u(t), Z_a(t))\) on \(t \in [0,T]\), and normalizing \(\langle (Z_u(t), Z_a(t)),(u_0'(t),a_0'(t)) \rangle = 1\) by rescaling appropriately. We demonstrate this result in Fig. 3, showing the relationship between the shape and relative amplitude of the phase sensitivity functions \((Z_u,Z_a)\) and the parameters. Notably, perturbations of the activity variable u become less influential as the timescale of adaptation \(\tau \) is increased (\(Z_u\)). Furthermore, there is a sharper transition between the phase advance and phase delay regions of the adaptation phase response (\(Z_a\)) for larger timescales \(\tau \).
In addition to a general formula for the phase sensitivity functions \((Z_u(t),Z_a(t))\), we can derive an amplitude-dependent formula for the response of limit cycle solutions \((u_0(t),a_0(t))\) of Eq. (1) with a Heaviside firing rate Eq. (3), assuming \(\tau \gg 1\). In this case, we utilize the formula for the period Eq. (14) and limit cycle Eq. (15), derived using a separation of timescales assumption. Then, we can compute the change to the variables (u, a) as a result of a perturbation \((\delta _u, \delta _a)\), which we denote \((u_0(t),a_0(t)) \mathop {\longmapsto }\limits ^{(\delta _u, \delta _a)} (\tilde{u}_0(t), \tilde{a}_0(t))\). We are primarily interested in how the relative time in the limit cycle is altered by a perturbation: how much closer or further the limit cycle is to the end of the period T after being perturbed. We can readily determine this by first inverting the formula we have for \((u_0(t),a_0(t))\), given by Eq. (15), to see how this value determines the time \(t_0\) along the limit cycle

$$t_0(u_0, a_0) = \begin{cases} \tau \ln \left[ \frac{\phi - I}{\phi - a_0} \right], & u_0 = 1, \\ T_1 + \tau \ln \left[ \frac{I + \alpha }{a_0} \right], & u_0 = 0. \end{cases}$$
Using this formula, we can now map the value \((\tilde{u}_0,\tilde{a}_0)\) to an associated updated relative time \(t_0\) along the oscillation.
Here, we decompose the impact of perturbations to the u and a variables. We begin by studying the impact of perturbations \(\delta _u\) to the activity variable u. We can directly compute
Thus, the singular system Eq. (13) will be unaffected by such perturbations if \(\mathrm{sgn}(I + \alpha [u + \delta _u] - a ) = \mathrm{sgn}(I + \alpha u - a )\). This is related to the flatness of the susceptibility function \(Z_u\) over much of the time domain in Fig. 3d–f. However, if \(\mathrm{sgn}(I + \alpha [u + \delta _u] - a ) \ne \mathrm{sgn}(I + \alpha u - a )\), then \(\tilde{u}_0(t) = 1-u_0(t)\), as detailed in the following piecewise smooth map:
where \((u_0(t),a_0(t))\) are defined by Eq. (15). The formula Eq. (18) can then be utilized to compute the updated relative time \(\tilde{t}_0 := t_0(\tilde{u}_0,\tilde{a}_0)\), finding
where \(a_0 = \phi - (\phi - I)\mathrm{e}^{-t_0/\tau }\) if \(u_0=1\) and \(a_0 = (I+\alpha )\mathrm{e}^{-(t_0-T_1)/\tau }\) if \(u_0 = 0\). We can refer to the function \(\tilde{t}_0/T\), where \(\tilde{t}_0\) is defined by Eq. (19), as the phase transition curve for u perturbations. Thus, the function \(G_u ( \theta , \delta _u) = (\tilde{t}_0 -t_0)/T\) will be the phase response curve, where \(\theta = t_0/T\), and phase advances occur for positive values and phase delays occur for negative values. We plot the function \(G_u(\theta , \delta _u)\) in Fig. 4a for different values of \(\delta _u\), demonstrating that the dependence on the perturbation amplitude is not a simple rescaling but rather an expansion of the region of non-zero phase shift. Due to the singular nature of the fast–slow limit cycle Eq. (15), the size of the phase perturbation has a piecewise constant dependence on the amplitude of the u perturbation. Note, this formulation allows us to quantify phase shifts that would not be captured by a perturbative theory for phase sensitivity functions, as computed for the general system in Eq. (17).
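The piecewise constant character of the u-perturbation response can be made concrete with a small sketch of the map: a kick \(\delta_u\) changes the phase only if it flips the sign of the Heaviside argument. Function names and parameter values here are ours, for illustration.

```python
import numpy as np

# illustrative parameters satisfying 0 < I < phi - alpha
alpha, phi, I, tau = 1.0, 2.0, 0.5, 20.0
T1 = tau * np.log((phi - I) / (phi - I - alpha))  # up-state duration
T2 = tau * np.log((I + alpha) / I)                # down-state duration
T = T1 + T2

def cycle_state(t0):
    """Singular limit cycle (u0, a0) at relative time t0, Eq. (15)."""
    if t0 < T1:
        return 1.0, phi - (phi - I) * np.exp(-t0 / tau)
    return 0.0, (I + alpha) * np.exp(-(t0 - T1) / tau)

def time_of(u0, a0):
    """Invert Eq. (15): relative time along the cycle from (u0, a0)."""
    if u0 == 1.0:
        return tau * np.log((phi - I) / (phi - a0))
    return T1 + tau * np.log((I + alpha) / a0)

def shifted_time_u(t0, du):
    """Relative time after a kick du to the activity variable: the
    state flips only if du changes the sign of the Heaviside argument."""
    u0, a0 = cycle_state(t0)
    if np.sign(I + alpha * (u0 + du) - a0) != np.sign(I + alpha * u0 - a0):
        return time_of(1.0 - u0, a0)  # early switch at unchanged a0
    return t0
```

Small kicks leave the phase exactly unchanged, while a sufficiently large kick forces an immediate state switch, reproducing the flat regions and jumps described above.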
For perturbations \(\delta _a\) of the adaptation variable a, there is a more graded dependence of the phase advance/delay amplitude on the perturbation amplitude \(\delta _a\). We expect this, as it was a property we observed in \(Z_a\) as we varied parameters in Fig. 3. We can partition the limit cycle \((u_0(t),a_0(t))\) into four different regions: two advance/delay regions of exponential saturation and two early threshold crossings. First, note if \(u_0(t) = 1\) and \(a_0(t) + \delta _a < I + \alpha \), then
so \(\tilde{t}_0 = T_1 - t_w\) with \(t_w = \tau \ln \left[ (\phi - a_0 - \delta _a)/(\phi - I - \alpha ) \right] \), but if \(a_0(t) + \delta _a > I + \alpha \), then
Determining the relative time of the perturbed variables \((\tilde{u}_0(t), \tilde{a}_0(t))\) in Eq. (20) is straightforward using the mapping Eq. (18). However, to determine the relative time described by Eq. (21), we compute the time, after the perturbation, until \(\tilde{a}_0 (t) = I+\alpha \), which will be \(t_w = \tau \ln \left[ (a_0 + \delta _a)/(I+\alpha ) \right] \), so \(\tilde{t}_0 = T_1 - t_w\). Second, note if \(u_0(t) = 0\) and \(a_0(t) + \delta _a > I \), then
so \(\tilde{t}_0 = T - t_w\) with \(t_w = \tau \ln \left[ (a_0 + \delta _a)/I \right] \), but if \(a_0(t) + \delta _a < I\), which requires \(\delta _a<0\), then
In the case of Eq. (23), we note that \(t_w = \tau \ln \left[ (\phi - a_0 - \delta _a)/(\phi - I) \right] \), so \(\tilde{t}_0 = T - t_w\). Combining our results, we find we can map the relative time to the perturbed relative time as
where \(a_0 = \phi - ( \phi - I) \mathrm{e}^{-t_0/\tau }\) if \(u_0 = 1\) and \(a_0 = (I+ \alpha ) \mathrm{e}^{- (t_0 - T_1)/\tau }\) if \(u_0 = 0\). Again, we have a phase transition curve given by the function \(\tilde{t}_0/T\) and phase response curve given by \(G_a(\theta , \delta _a ) = (\tilde{t}_0 - t_0)/T\), where \(\theta = t_0/T\). As opposed to the case of u perturbations, the phase perturbation here depends smoothly on the amplitude of the a perturbation \(\delta _a\).
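The four-branch map for adaptation kicks can be implemented directly. The sketch below (our naming, illustrative parameters) returns the perturbed relative time and the resulting phase response, assuming the kicked adaptation value stays within \((0, \phi)\).

```python
import numpy as np

# illustrative parameters satisfying 0 < I < phi - alpha
alpha, phi, I, tau = 1.0, 2.0, 0.5, 20.0
T1 = tau * np.log((phi - I) / (phi - I - alpha))
T2 = tau * np.log((I + alpha) / I)
T = T1 + T2

def shifted_time_a(t0, da):
    """Perturbed relative time after a kick da to the adaptation
    variable, following the four branches summarized in Eq. (24)."""
    if t0 < T1:                                  # up state, a0 rising
        a0 = phi - (phi - I) * np.exp(-t0 / tau)
        if a0 + da < I + alpha:                  # remain in the up state
            tw = tau * np.log((phi - a0 - da) / (phi - I - alpha))
        else:                                    # immediate down switch
            tw = tau * np.log((a0 + da) / (I + alpha))
        return T1 - tw
    a0 = (I + alpha) * np.exp(-(t0 - T1) / tau)  # down state, a0 decaying
    if a0 + da > I:                              # remain in the down state
        tw = tau * np.log((a0 + da) / I)
    else:                                        # immediate up switch
        tw = tau * np.log((phi - a0 - da) / (phi - I))
    return T - tw

def G_a(theta, da):
    """Phase response curve for adaptation kicks, (t0~ - t0)/T."""
    t0 = theta * T
    return (shifted_time_a(t0, da) - t0) / T
```

As the text describes, a positive kick to a during the up state hastens the down transition (phase advance), while the same kick during the down state prolongs the decay back to threshold (phase delay), and the response varies smoothly with \(\delta_a\).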
Furthermore, we can obtain a perturbative description of the phase response curve for the singular system Eq. (13) in two ways: (a) Taylor expand the amplitude-dependent phase response curve expressions defined by Eqs. (19) and (24) and truncate to linear order or (b) solve the adjoint Eq. (17) in the case of a Heaviside firing rate Eq. (3) and long adaptation timescale \(\tau \gg 1\). We begin with the first derivation, which simply requires differentiating Eq. (19) to demonstrate that the infinitesimal phase response curve (iPRC) associated with perturbations of the u variable is zero almost everywhere. However, differentiating Eq. (24) reveals that the iPRC associated with perturbations of the adaptation variable a is given by the piecewise smooth function
Note, we could derive the same result by solving the adjoint equations Eq. (17) in the case of Heaviside firing rate Eq. (3), so that
We have reversed time \(t \mapsto - t\), so we can simply solve the system forward. Furthermore, we can use the identity
Utilizing the separation of timescales, \(\tau \gg 1\), we find that almost everywhere (except where \(t=0, T_1, T\)), we have that Eq. (26) becomes the system
As before, \(Z_u(t)\) will be zero almost everywhere, whereas \(Z_a(t) = A(t) \mathrm{e}^{t/\tau }\), where A(t) is a piecewise constant function taking two different values on \(t \in (0,T_1)\) and \(t \in (T_1, T)\), determined by considering the \(\delta \) distribution terms. This indicates how one would derive the formula Eq. (25) using the adjoint equations Eq. (26).
Note, in previous work (Jayasuriya and Kilpatrick 2012), we explored the entrainment of slowly adapting populations to external forcing, comprised of smooth and non-smooth inputs to the system Eq. (1). In the next section, we explore the impact of external noise forcing on the slow oscillations of Eq. (1), subsequently demonstrating that noise can be utilized to entrain the up and down states of two distinct networks.
5 Impact of noise on the timing of up/down states
We now study the effects of noise on the duration of up and down states in the single population model Eq. (1). Switches between high and low firing rate states can occur at irregular intervals (Sanchez-Vives and McCormick 2000), suggesting that internal or external sources of noise determine state changes. This section focuses on how noise reshapes the mean up and down state residence times. Guided by our findings in the previous sections, we focus on noise applied to the adaptation variable: as we have shown, perturbations to the neural activity variable have a comparatively weak effect on the oscillation phase. We conceive of the noise arising due to ion channel fluctuations (White et al. 2000), specifically related to the slow adaptive potassium currents that serve to hyperpolarize individual cells in the network (Stocker et al. 1999). Analytic calculations are presented for the piecewise smooth system with Heaviside firing rate Eq. (3), as accurate approximations of the mean up and down state durations can be computed.
Our approach is to derive expressions for the mean first passage times of both the up and down state (\(\bar{T}_1\) and \(\bar{T}_2\)) of the stochastic population model Eq. (4). Focusing on adaptation noise allows us to utilize the separation of fast–slow timescales, and recast the pair of equations as a stochastic-hybrid system
where \(\xi _a\) is white noise with mean \(\langle \xi _a \rangle = 0\) and variance \(\langle \xi _a^2 \rangle = \sigma _a^2 t\). To begin, assume the system has just switched to the up state, so the initial conditions are \(u(0) = 1\) and \(a(0) = I\). Determining the amount of time until a switch to the down state requires we calculate the time \(T_1\) until the threshold crossing \(a(T_1) = I + \alpha \) where a(t) is determined by the SDE
which is the well-known threshold crossing problem for an Ornstein-Uhlenbeck process (Gardiner 2004). The mean \(\bar{T}_1\) of the passage time distribution is thus given by defining the potential \(V(a)=\frac{a^2}{2 \tau } - \frac{\phi a}{\tau }\) and computing the integral
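As a sanity check on this construction, the mean passage times can also be estimated by direct Euler–Maruyama simulation of the threshold-crossing problems. The sketch below (hypothetical parameter values, chosen only for illustration) estimates the mean time for a(t), started at \(a(0) = I\), to reach \(I + \alpha\) under the up-state drift \((\phi - a)/\tau\), and likewise the down-state passage from \(I + \alpha\) back down to \(I\) under the drift \(-a/\tau\).

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical parameters (phi > I + alpha); sigma_a sets the noise amplitude
phi, alpha, I, tau, sigma_a = 1.0, 0.3, 0.3, 50.0, 0.05
dt, n_trials, t_max = 0.05, 500, 500.0

def mean_passage_time(a0, target, crossing_up, drift_to):
    """Euler-Maruyama estimate of the mean first passage time to `target`."""
    times = []
    for _ in range(n_trials):
        a, t = a0, 0.0
        while t < t_max:
            a += (drift_to - a) / tau * dt + sigma_a * np.sqrt(dt) * rng.standard_normal()
            t += dt
            if (crossing_up and a >= target) or (not crossing_up and a <= target):
                break
        times.append(t)  # censored at t_max if no crossing occurred
    return float(np.mean(times))

# Deterministic (noise-free) durations for comparison
T1 = tau * np.log((phi - I) / (phi - I - alpha))
T2 = tau * np.log((I + alpha) / I)

T1_bar = mean_passage_time(I, I + alpha, True, phi)   # up state: a relaxes toward phi
T2_bar = mean_passage_time(I + alpha, I, False, 0.0)  # down state: a decays toward 0
```

In line with the comparison in Fig. 5c, early threshold crossings make both mean durations shorter than their deterministic counterparts at this noise level.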
Next, note that the duration of the down state \(T_2\) will be the amount of time until the threshold crossing \(a(T_2) = I\) given \(u(0) = 0\) and \(a(0) = I+ \alpha \), where a(t) obeys the SDE
Again, defining the potential \(V(a) = \frac{a^2}{2 \tau }\), we can compute
We compare the theory we have developed utilizing passage time problems to residence times computed numerically in Fig. 5c. Notice that increasing the noise amplitude tends to shorten both up and down state durations on average, due to early threshold crossings of the variable a(t).
Furthermore, we can examine how noise reshapes the relative balance of up versus down state durations. Specifically, we will explore how the relative fraction of time the up state persists, \(\bar{T}_1/(\bar{T}_1 + \bar{T}_2)\), changes with noise intensity \(\sigma _a\) and input I. First, notice that, in the absence of noise, the ratio
The up and down states have equal durations when \(T_1/(T_1 + T_2) = 1/2\), or when the input \(I = (\phi - \alpha )/2\), as shown in Fig. 5d. Interestingly, this is the precise input value at which the period attains a minimum, as we demonstrated in Sect. 3. Along with our plot of Eq. (29) in the noise-free case (\(\sigma _a = 0\)), we also study the impact of noise on this measure of up–down state balance. Noise leads to up and down state durations becoming more similar, so the ratio Eq. (29) of the means \(\bar{T}_1\) and \(\bar{T}_2\) flattens as a function of the input I. This is because long durations, wherein the variable a(t) occupies the tail of the exponentially saturating functions \(A_0 + A_1 \mathrm{e}^{-t/\tau }\), are cut short by early threshold crossings induced by the external noise forcing. This is consistent with the experimental findings of Fröhlich and McCormick (2010), which showed that applied electric fields decrease the period of the slow oscillation. The speeding up of the oscillation is mostly due to less time being spent, on average, in the down state. In our model, the parameter regime where one would expect to find this behavior is one where the background input I is low to begin with, as shown in Fig. 2b.
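The balance point can be verified directly from the noise-free durations implied by the exponential trajectories of the singular limit cycle. A small check (Python, with hypothetical \(\phi\), \(\alpha\), \(\tau\) chosen only for illustration) confirms that \(T_1 = T_2\), and that the period is minimized, at \(I = (\phi - \alpha)/2\).

```python
import numpy as np

# Hypothetical parameters for illustration
phi, alpha, tau = 1.0, 0.3, 50.0

def T1(I):  # noise-free up-state duration
    return tau * np.log((phi - I) / (phi - I - alpha))

def T2(I):  # noise-free down-state duration
    return tau * np.log((I + alpha) / I)

I_star = (phi - alpha) / 2          # predicted balance point, here 0.35
ratio = T1(I_star) / (T1(I_star) + T2(I_star))
```

At `I_star` the two logarithms coincide, giving a ratio of exactly 1/2, while inputs below the balance point spend relatively more time in the down state.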
We can further observe the impact of noise on the up and down state durations by studying their distributions in Fig. 6. Noise perturbs the limit cycle present in the deterministic system (\(\sigma _a = 0\)), so that there is a wide range of durations for the up and down state. The broadness of the distribution increases as the level of noise \(\sigma _a\) is increased. Furthermore, the peak of the distribution shifts to shorter dominance times for larger noise levels. Similar observations have been made in the slowly changing energy landscape model of perceptual rivalry by Moreno-Bote et al. (2007). When noise dominates transition dynamics, the lifetimes of up and down states are distributed exponentially (Gardiner 2004). However, when adaptation plays a role in reshaping the energy landscape explored by the stochastic system, the barrier the system state must surmount shrinks over time. This leads to a resonance in the state durations represented by the peak of the distribution in Fig. 6 for smaller noise values \(\sigma _a\). The prominence of a specific range of state durations is well supported by many previous experimental papers exploring the statistics of up and down states (Isomura et al. 2006; Sanchez-Vives and McCormick 2000; Steriade et al. 1993; Cunningham et al. 2006).
6 Synchronizing two uncoupled populations
Now, we demonstrate that common noise can synchronize the up and down states of two distinct and uncoupled populations. We begin with the case of identical noise and then, in Sect. 7, relax these assumptions to show that some level of coherence is still possible when each population has an intrinsic and an independent source of noise. This is motivated by the observation that the SDE derived in the large system-size limit of a neural master equation tends to possess intrinsic noise in each population, in addition to an extrinsic common noise term (Bressloff and Lai 2011). As we will show, intrinsic noise tends to disrupt the phase synchronization due to extrinsic noise.
To begin, we recast the stochastic system Eq. (5), describing a pair of adapting noise-driven neural populations, as a pair of phase equations:
where \(\theta _1\) and \(\theta _2\) are the phases of the first and second neural populations. Note that the phase equations Eq. (30) are in Stratonovich form, since the original noise term in Eq. (5) was converted to Stratonovich form in anticipation of the standard rules of calculus needed for the phase reduction (Ermentrout 2009). As we demonstrate in Fig. 7a, the introduction of common noise tends to drive the oscillation phases \(\theta _1(t)\) and \(\theta _2(t)\) toward one another. Since the governing equations of both populations are the same, the phase sensitivity function \(\mathbf{Z}(\theta )\) will be the same for both. Furthermore, the synchronized solution \(\theta _1(t) = \theta _2(t)\) is absorbing: once the phases synchronize, they remain so. We can analytically calculate the Lyapunov exponent \(\lambda \) of the synchronized state to determine its stability. In particular, we are interested in how this stability depends on the parameters that shape the dynamics of adaptation.
Following the work of Teramae and Tanaka (2004), we can convert Eq. (30) to the equivalent Ito form, linearize, and average to approximate the Lyapunov exponent
Assuming noise to the activity variable u and adaptation variable a is not correlated, \(\mathbf{D}\) will be diagonal. In this case, we can further decompose the phase sensitivity function into its Fourier expansion
where \(\mathbf{a}_k = (\mathbf{a}_{k1}, \mathbf{a}_{k2})^T\) and \(\mathbf{b}_k = (\mathbf{b}_{k1}, \mathbf{b}_{k2})^T\) are vectors in \({{\mathbb {R}}}^2\) so that
and we can expand the terms in Eq. (31) to yield
Thus, as long as \(\mathbf{Z}(\theta )\) is continuous and non-constant, the Lyapunov exponent \(\lambda \) will be negative, so the synchronous state \(\theta _1 = \theta _2\) will be stable. Note, continuity is not satisfied in the case of our singular approximation to \(\mathbf{Z}(\theta )\). We demonstrate the accuracy of our theory Eq. (31) in Fig. 7c, d, showing that \(\lambda \) decreases as a function of \(\tau \) and is non-monotonic in I. Thus, slow oscillations with longer periods are synchronized more quickly, relative to the number of oscillation cycles. Since the Lyapunov exponent has highest amplitude \(|\lambda |\) for both low and high values of the tonic input I, we also suspect this is related to the period of the oscillation T.
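The negativity of \(\lambda\) for a smooth, non-constant sensitivity function can be illustrated numerically. The sketch below uses \(Z(\theta) = \sin\theta\) on \([0, 2\pi)\) as a hypothetical smooth stand-in for the sensitivity function (it is not the one derived above) and integrates the variational equation for the log phase difference along a single common-noise-driven trajectory; for this choice, the Teramae–Tanaka averaging formula gives \(\lambda = -\sigma^2/4\).

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in sensitivity function Z(theta) = sin(theta);
# sigma is the common-noise amplitude, omega the natural frequency
omega, sigma, dt, n_steps = 1.0, 0.3, 0.01, 400_000

theta, log_psi = 0.0, 0.0
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()   # common noise increment
    s, c = np.sin(theta), np.cos(theta)
    # Stratonovich phase equation, written in Ito form (drift correction s*c)
    theta += (omega + 0.5 * sigma**2 * s * c) * dt + sigma * s * dW
    # variational equation for log|theta_1 - theta_2|, also in Ito form:
    # d log psi = sigma Z'(theta) dW + (sigma^2/2) Z''(theta) Z(theta) dt
    log_psi += sigma * c * dW - 0.5 * sigma**2 * s * s * dt

lam_numeric = log_psi / (n_steps * dt)
lam_theory = -sigma**2 / 4   # averaged formula for Z = sin, uniform phase density
```

The numerically estimated exponent is negative and close to the averaged prediction, so an initially small phase difference contracts exponentially under purely common forcing.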
Furthermore, we demonstrate that common noise to the neural activity variables \(u_j\) can stochastically synchronize the phases of the two uncoupled populations in Eq. (5). As shown in Fig. 8, the Lyapunov exponent \(\lambda \) again decreases as a function of the adaptation timescale \(\tau \). Thus, whether noisy inputs arrive through the adaptation or activity variables, the two population oscillations will eventually be synchronized. As expected, the amplitude of the Lyapunov exponent is weaker, for the same level of noise, in the case of input to the neural activity variable.
7 Impact of intrinsic noise on stochastic synchronization
We now extend our results from the previous section by studying the impact of independent noise in each population. In order for our theory of stochastic synchronization of slow oscillations to be generalizable, it must be robust to the effects of local noise in each of the neural populations. We demonstrate here that oscillation phases of stochastically driven populations still remain relatively close in this case (Fig. 9). Independent noise is incorporated into the modified model Eq. (6). Since there is a periodic solution to the noise-free version of this system, phase-reduction methods can be used to obtain approximate SDEs for the phase variables (Nakao et al. 2007)
where the noise vectors \({\varvec{\xi }}_c = (\chi _u \xi _{uc}, \chi _a \xi _{ac})^T\) and \({\varvec{\xi }}_j = (\sqrt{1- \chi _u^2} \xi _{uj}, \sqrt{1 - \chi _a^2} \xi _{aj})^T\) (\(j=1,2\)). Reformulating the system Eq. (32) in terms of an Ito SDE, we can analyze the resulting Fokker–Planck equation along the lines of Nakao et al. (2007) to identify the stationary distribution \(M_0(\psi )\) for the phase-difference \(\psi = \theta _1 - \theta _2\):
where \(m_0\) is a normalization factor and we have simplified the expression using \(\mathbf{D}_{u1} = \mathbf{D}_{u2} \equiv \mathbf{D}_{ul} = \sigma _u^2\) and \(\mathbf{D}_{a1} = \mathbf{D}_{a2} \equiv \mathbf{D}_{al} = \sigma _a^2\) and defined
When noise to each layer is independent (\(\chi _u, \chi _a \rightarrow 0\)), then \(M_0(\psi ) = 1\) is uniform. When noise is totally correlated (\(\chi _u, \chi _a \rightarrow 1\)), then \(M_0(\psi ) = \delta ( \psi )\). The stationary distribution \(M_0(\psi )\) will broaden as the correlations \(\chi _u\) and \(\chi _a\) are decreased from unity, with a peak remaining at \(\psi = 0\). We demonstrate the accuracy of the formula Eq. (33) for the stationary density of the phase difference \(\psi \) in Fig. 9, showing that it widens as the level of noise correlation is decreased. Again, we focus on the impact of adaptation noise. Thus, even when independent noise is introduced, there is some semblance of synchronization in the phases of the two noise-driven neural populations Eq. (6).
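The qualitative shape of \(M_0(\psi)\) is easy to reproduce in a reduced setting: in the sketch below (again with the hypothetical stand-in \(Z(\theta) = \sin\theta\) and illustrative parameter values), two phase oscillators share a fraction \(\chi^2\) of their noise variance, and the histogram of the wrapped phase difference concentrates near \(\psi = 0\), broadening as \(\chi\) decreases from unity.

```python
import numpy as np

rng = np.random.default_rng(2)
omega, sigma, chi = 1.0, 0.4, 0.95   # chi: noise-correlation level (hypothetical)
dt, n_steps, burn = 0.01, 200_000, 20_000

def step(theta, dWc, dWl):
    # Stratonovich phase equation in Ito form, Z(theta) = sin(theta);
    # each oscillator mixes common (dWc) and intrinsic (dWl) increments
    s, c = np.sin(theta), np.cos(theta)
    dW = chi * dWc + np.sqrt(1 - chi**2) * dWl
    return theta + (omega + 0.5 * sigma**2 * s * c) * dt + sigma * s * dW

th1, th2, psis = 0.0, 2.0, []
for k in range(n_steps):
    dWc = np.sqrt(dt) * rng.standard_normal()   # common noise
    dW1 = np.sqrt(dt) * rng.standard_normal()   # intrinsic, population 1
    dW2 = np.sqrt(dt) * rng.standard_normal()   # intrinsic, population 2
    th1, th2 = step(th1, dWc, dW1), step(th2, dWc, dW2)
    if k >= burn and k % 20 == 0:
        psis.append((th1 - th2 + np.pi) % (2 * np.pi) - np.pi)  # wrap to (-pi, pi]

psis = np.array(psis)
frac_near_zero = np.mean(np.abs(psis) < np.pi / 2)
```

With strong but imperfect correlation, most of the stationary mass of \(\psi\) sits in a narrow peak around zero, punctuated by occasional phase slips driven by the intrinsic noise.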
8 Entrainment in a pair of spiking populations
Thus far, we have examined the response of a rate model of slow oscillations Eq. (1) to external perturbations. Mean field analyses of recurrently coupled spiking neurons tend to yield such coarse-grained descriptions of network activity (Treves 1993; Brunel 2000). In particular, mean field analyses of spiking neurons with an afterhyperpolarizing current responsible for spike rate adaptation tend to yield firing rate models analogous to Eq. (1) (Vreeswijk and Hansel 2001; Benda and Herz 2003). However, these mean field descriptions average out the effects of heterogeneity and noise that can arise at the level of single neurons. We demonstrate here that these microscopic effects do not significantly modify the qualitative results of the previous sections. Namely, two uncoupled populations of spiking cells with adaptation can still support slow oscillations that become entrained when the slow hyperpolarizing currents receive common stochastic forcing.
We will demonstrate our results using the excitatory-inhibitory network of integrate-and-fire cells studied by Vreeswijk and Hansel (2001). Each of the \(N_e\) excitatory neurons possesses an independent afterhyperpolarizing current, while the \(N_i\) inhibitory neurons have no such current. The pattern of connectivity between neurons is described by the matrices \(G_{ij}^{AB}\), where i, j index individual neurons and \(A,B \in \{e,i\} \) index the population types. For the simulations presented here, we consider all-to-all coupling (\(G_{ij}^{AB} = \bar{G}_{AB}\)), but similar results hold for a wider variety of couplings. The network is characterized by the following set of differential equations
Excitatory cells receive external input \(I_j^e\), synaptic input \(E_j^e\), and a slowly evolving hyperpolarizing current \(q_j\), while inhibitory cells are only subject to external and synaptic input. We must also incorporate reset conditions whereby if at time t, \(v_j^e\) reaches 1, a spike occurs and \(v_j^e\) is instantaneously reset to 0 (\(v_j^e(t^+)=0\)), and the hyperpolarizing current \(q_j\) is increased by \(g_q/ \tau _q\) (\(q_j(t^+) = q_j(t^-) + g_q/\tau _q\)). The same reset condition holds for inhibitory cells \(v_j^i(t^+)=0\) when \(v_j^i(t^-) = 1\), but they engage no slow hyperpolarizing current (Ahmed et al. 1998). Excitatory and inhibitory neurons are assumed to have typical membrane time constants of \(\tau _e = \tau _i = 10\) ms. Synaptic currents are assumed to increment instantaneously and decay exponentially according to the spike-triggered sums
where \(G_{jk}^{AB}\) is the connectivity matrix, \(\tau _{sB}\) is the decay time constant of the synapses from population B, and \(t_l^B\) is the lth spike time in population B. Note that excitatory couplings \(G_{kl}^{ee}\) and \(G_{kl}^{ie}\) are non-negative and inhibitory couplings \(G_{kl}^{ei}\) and \(G_{kl}^{ii}\) are non-positive. Lastly, both cell voltages and hyperpolarizing currents are subject to white noise processes \(\xi _{Aj}(t)\) with zero mean and variance \(\langle \xi _{Aj}(t)^2 \rangle = \sigma _{Aj}^2 t\) for \(A = \left\{ e, i, q \right\} \).
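The increment-and-decay structure of the hyperpolarizing current \(q_j\) can be illustrated on a single excitatory cell with constant drive and no synaptic input, a deliberately stripped-down caricature of Eq. (34) with hypothetical parameter values: each spike increments the current by \(g_q/\tau_q\), which then lengthens subsequent interspike intervals.

```python
import numpy as np

# Hypothetical single-cell parameters: membrane and adaptation time constants (ms),
# constant drive above the v_th = 1 threshold, adaptation strength g_q
tau_m, tau_q, I_ext, g_q = 10.0, 200.0, 1.5, 40.0
dt, t_max = 0.05, 600.0

v, q, spikes, t = 0.0, 0.0, [], 0.0
while t < t_max:
    v += (-v + I_ext - q) / tau_m * dt   # leaky integration with adaptation current
    q += -q / tau_q * dt                 # slow decay of the hyperpolarizing current
    t += dt
    if v >= 1.0:                         # reset rule from the text
        spikes.append(t)
        v = 0.0
        q += g_q / tau_q                 # spike-triggered increment of adaptation

isis = np.diff(spikes)                   # interspike intervals
```

Interspike intervals lengthen as q accumulates, the single-cell signature of the spike frequency adaptation that terminates network up states.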
The recurrent excitatory connectivity of Eq. (34) generates a bistable network. Sufficiently high spike rates are sustained by repeated reactivation of excitatory cells, whereas low spike rates fail to ignite persistent activity. Transitions between these two states are generated by the slow build-up and decay of the slow hyperpolarizing currents of the excitatory cells. We demonstrate the ability of the network Eq. (34) to generate synchronous up and down state transitions in Fig. 10. Single cells tend to occupy either a depolarized or hyperpolarized state, in which they spike repeatedly or are quiescent (Fig. 10a). This reflects the network-wide states, which are either high activity (up) or low activity (down) (Fig. 10b, c), and most cells transition between these states synchronously due to the recurrent coupling.
We also explore the impact of common noise on a pair of networks, each described by the equations Eq. (34), and indexed by 1 and 2:
Each network’s state is initialized randomly, by selecting a random time point in the simulation presented in Fig. 10, so that both networks are in a randomly chosen phase of an evolving slow oscillation. Noise to the voltage variables \(v_j^{BA}\) of each network is taken to be uncorrelated, but noise to the adaptation variables \(q_j^B\) is taken to be fully correlated so that each variable receives an identical white noise sample. As a result, the spike and rate patterns of these two uncoupled networks become more correlated over time (Fig. 11a). We quantify the effect on spike correlation by digitizing all spike times of each network’s excitatory population into 10 ms bins and then use MATLAB’s xcorr function to compute an unnormalized correlation function between network 1 and network 2. This is then normalized by dividing by the geometric mean \(\sqrt{\nu _j \nu _k}\) of the two networks’ total firing rates \(\nu _j\) and \(\nu _k\) over the time interval. The time interval [0, 3]s is compared to [3, 6]s in Fig. 11b, demonstrating that the correlation coefficient increases at later times. Thus, common noise in the slow hyperpolarizing currents can help to correlate the temporal evolution of firing rate and spiking in this spiking network model.
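The binning-and-normalization step does not require MATLAB; the following numpy sketch (with hypothetical random spike trains, roughly 50 Hz over 10 s, standing in for the simulated networks) bins two spike-time arrays at 10 ms and computes the zero-lag correlation normalized by the geometric mean of the total spike counts, analogous in spirit to the xcorr-based measure used here.

```python
import numpy as np

def bin_spikes(spike_times_ms, t_max_ms, bin_ms=10.0):
    """Spike counts in consecutive bins of width bin_ms."""
    edges = np.arange(0.0, t_max_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    return counts

def zero_lag_corr(x, y):
    """Zero-lag cross-correlation, normalized by the geometric mean of totals."""
    return float(np.dot(x, y) / np.sqrt(x.sum() * y.sum()))

# Illustration with hypothetical random spike trains (500 spikes in 10 s each)
rng = np.random.default_rng(3)
t_max = 10_000.0
train = np.sort(rng.uniform(0, t_max, 500))   # one train
other = np.sort(rng.uniform(0, t_max, 500))   # an independent train

x = bin_spikes(train, t_max)
c_same = zero_lag_corr(x, x)                       # train against itself
c_indep = zero_lag_corr(x, bin_spikes(other, t_max))  # against independent train
```

A train compared with itself yields a markedly larger coefficient than two independent trains, which is the signature the comparison of the [0, 3]s and [3, 6]s windows looks for.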
9 Discussion
We have studied the impact of deterministic and stochastic perturbations to a neural population model of slow oscillations. The model was comprised of a single recurrently coupled excitatory population with negative feedback from a slow adaptive current (Laing and Chow 2002; Jayasuriya and Kilpatrick 2012). By examining the phase sensitivity function \((Z_u, Z_a)\), we found that perturbations of the adaptation variable lead to much larger changes in oscillation phase than perturbations of neural activity. Furthermore, this effect becomes more pronounced as the timescale \(\tau \) of adaptation is increased. Introducing noise in the model decreases the oscillation period and helps to balance the mean duration of the oscillation’s up and down states. When two uncoupled populations receive common noise, their oscillation phases \(\theta _1\) and \(\theta _2\) eventually become synchronized, which can be shown by deriving a formula for the Lyapunov exponent of the absorbing state \(\theta _1 \equiv \theta _2\) (Teramae and Tanaka 2004). When independent noise is introduced to each population, in addition to common noise, the long-term state of the system is described by a probability density for \(\psi = \theta _1 - \theta _2\), which peaks at \(\psi \equiv 0\).
Our study was motivated by the observation that recurrent cortical networks can spontaneously generate stochastic oscillations between up and down states. Guided by previous work in spiking models (Compte et al. 2003), we explored a rate model of a recurrent excitatory network with slow spike frequency adaptation. We expect that we would have obtained similar results from an excitatory-inhibitory network, since inhibition tends to act faster than excitation, essentially reducing the effective recurrent excitatory strength (Pinto and Ermentrout 2001). One of the open questions about up and down state transitions concerns the degree to which they are generated by noise or by more deterministic mechanisms, such as slow currents or short term plasticity (Cossart et al. 2003). Here, we have provided some characteristic features that emerge as the level of noise responsible for transitions is increased. Similar questions have been explored in the context of models of perceptual rivalry (Moreno-Bote et al. 2007). In addition, we have provided a plausible mechanism whereby the onset of up and down states could be synchronized in distinct networks (Volgushev et al. 2006).
Nonmonotonic residence time distributions for up states provide compelling evidence for the theory that switches from up to down states are partially governed by deterministic neural processes (Cheng-yu et al. 2009). This idea is explored in detail in a recent study that employed a neuronal network model with short-term depression (Dao Duc et al. 2015). Recordings presented therein from both auditory and barrel cortices revealed up state duration distributions that are peaked away from zero. Furthermore, the tail of the duration distribution has an oscillatory decay with several peaks, which may arise due to specific properties of the underlying network’s dynamics. Indeed, the authors were able to account for these peaks in a neuronal network model with an up state whose attracting trajectories are oscillatory. It would be interesting to extend the present study to understand how external inputs might entrain such up and down state transitions that occur via more complex dynamics.
Synchronizing up and down states across multiple areas of the brain may be particularly important for memory consolidation processes (Diekelmann and Born 2010). Long term potentiation (LTP), the process by which synapses are strengthened for a lasting period of time (Alberini 2009), is one of the chief mechanisms thought to underlie memory formation (Takeuchi et al. 2014). Both cortical and hippocampal LTP are typically restricted to the up states of slow oscillations during slow wave sleep (Rosanova and Ulrich 2005). Furthermore, up states may then repetitively activate memory traces in hippocampus, along with thalamus and cortex, reinforcing memory persistence (Marshall and Born 2007). Thus, subnetworks whose slow oscillations are coordinated are more likely to be further linked through long term plasticity. Indeed, boosting slow oscillations by external potential fields has been shown to enhance declarative memories, providing further evidence that coherent up and down state transitions may subserve memory consolidation processes (Marshall et al. 2006). In total, synchrony may provide a functionally relevant way to link the activities of related neuronal assemblies, allowing appropriate reactivation during waking hours (Steriade 2006).
We have proposed two possible ways for synchrony of up and down states to occur: (a) common noise in the activating currents of neurons in distinct populations and (b) common noise in the slow hyperpolarizing currents of distinct neural populations. The first mechanism could arise through common excitatory input to each population, as in previous studies of correlation-induced synchrony in olfactory bulb neurons (Galán et al. 2006). The second mechanism must arise via common chemical forcing of hyperpolarizing current. One way this could occur is via common astrocytic calcium signaling (Volterra et al. 2014). Calcium propagates rapidly in waves through astrocytes (Newman 2001), which could provide a common drive to calcium-activated hyperpolarizing currents (Bond et al. 2004). Furthermore, slow afterhyperpolarizing currents can be modulated by acetylcholine (Faber and Sah 2005). Global modulations of acetylcholine are often observed during slow wave sleep (Steriade 2004), so this may provide another mechanism for the common perturbation of slow afterhyperpolarizing currents.
Other previous studies have explored phenomenological models of up/down state transitions in neural populations. For instance, Holcman and Tsodyks (2006) introduced an excitatory network with activity-dependent synaptic depression having two attractors that represented the up and down state of the network. Synaptic noise, rather than a systematic slow process like rate adaptation, drove the system between these two attractors. These authors explored the stochastic dynamics of a single neural population that did not possess a deterministic limit cycle in the absence of noise. Parga and Abbott (2007) have addressed such dynamics in a complementary way, by studying a network of integrate-and-fire neurons with nonlinear membrane current. The resulting bistability in the resting membrane potential of single cells is inherited by the dynamics of the full network. The noise-driven network ceaselessly switches between low and high firing rate states. The durations of up and down states are given by exponential distributions, since they arise from noise-induced escape from a local attractor. Our work is distinct from these previous studies in several ways. First, we note that the mechanism underlying transitions between up and down states is a combination of slow rate adaptation and noise in our full stochastic model. Under small noise assumptions, this allows us to examine the susceptibility of the network state to external perturbations using a phase reduction method (Brown et al. 2004). Second, a chief concern of our work is the synchronization of multiple populations undergoing coherent slow oscillations, as in Volgushev et al. (2006). One particularly interesting extension of previous studies of noise-induced up and down state transitions (Holcman and Tsodyks 2006; Parga and Abbott 2007) would be to define some semblance of a phase response curve, based on knowledge of the network’s underlying state.
Recent work on the PRCs of excitable systems and asymptotically phaseless systems may be particularly helpful (Shaw et al. 2012; Wilson and Moehlis 2015).
There are several other potential extensions to this work. For instance, we could examine the impact of long-range connections between networks to see how these interact with common and independent noise to shape the phase coherence of oscillations. Similar studies have been performed in spiking models by Ly and Ermentrout (2009). Interestingly, shared noise can actually stabilize the anti-phase locked state in this case, even though it is unstable in the absence of noise. Furthermore, it is known that coupling spanning long distances can be subject to axonal delays. In spite of this, networks of distantly coupled clusters of cells can still sustain zero-lag synchronized states (Vicente et al. 2008). However, there are some cases in which such delays can destabilize the phase-locked states (Earl and Strogatz 2003; Ermentrout and Ko 2009), in which case another mechanism would be needed to explain the synchronization of up/down states. Thus, we could also explore the impact of delayed coupling, determining how features of the phase sensitivity function interact with delay to promote in-phase or anti-phase synchronized states. Lastly, we note that a systematic analysis of phase equations for relaxation oscillators has been applied to the general case of slow variables in Izhikevich (2000). We expect that the approach developed therein, using the Malkin theorem, could be applied to the system Eq. (1), even in the case of a discontinuous firing rate function.
References
Ahmed B, Anderson J, Douglas R, Martin K, Whitteridge D (1998) Estimates of the net excitatory currents evoked by visual stimulation of identified neurons in cat visual cortex. Cereb Cortex 8:462–476
Alberini C (2009) Transcription factors in long-term memory and synaptic plasticity. Physiol Rev 89:121–145
Amari S (1977) Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 27:77–87
Amit DJ, Brunel N (1997) Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cereb Cortex 7:237–252
Bart E, Bao S, Holcman D (2005) Modeling the spontaneous activity of the auditory cortex. J Comput Neurosci 19:357–378
Benda J, Herz AVM (2003) A universal model for spike-frequency adaptation. Neural Comput 15:2523–2564
Bond CT, Herson PS, Strassmaier T, Hammond R, Stackman R, Maylie J, Adelman JP (2004) Small conductance Ca\(^{2+}\)-activated K\(^+\) channel knock-out mice reveal the identity of calcium-dependent afterhyperpolarization currents. J Neurosci 24:5301–5306
Bressloff PC (2010) Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics. Phys Rev E 82:051903
Bressloff PC, Lai YM (2011) Stochastic synchronization of neuronal populations with intrinsic and extrinsic noise. J Math Neurosci (JMN) 1:1–28
Brown E, Moehlis J, Holmes P (2004) On the phase reduction and response dynamics of neural oscillator populations. Neural Comput 16:673–715
Brunel N (2000) Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 8:183–208
Buzsáki G, Draguhn A (2004) Neuronal oscillations in cortical networks. Science 304:1926–1929
Chen X, Rochefort NL, Sakmann B, Konnerth A (2013) Reactivation of the same synapses during spontaneous up states and sensory stimuli. Cell Rep 4:31–39
Cheng-yu TL, Poo MM, Dan Y (2009) Burst spiking of a single cortical neuron modifies global brain state. Science 324:643–646
Compte A, Sanchez-Vives MV, McCormick DA, Wang XJ (2003) Cellular and network mechanisms of slow oscillatory activity (<1 Hz) and wave propagations in a cortical network model. J Neurophysiol 89:2707–2725
Cossart R, Aronov D, Yuste R (2003) Attractor dynamics of network up states in the neocortex. Nature 423:283–288
Cunningham M, Pervouchine D, Racca C, Kopell N, Davies C, Jones R, Traub R, Whittington M (2006) Neuronal metabolism governs cortical network response state. Proc Nat Acad Sci 103:5597–5601
Dao Duc K, Parutto P, Chen X, Epsztein J, Konnerth A, Holcman D (2015) Synaptic dynamics and neuronal network connectivity are reflected in the distribution of times in up states. Front Comput Neurosci 9:96
Diekelmann S, Born J (2010) The memory function of sleep. Nat Rev Neurosci 11:114–126
Earl MG, Strogatz SH (2003) Synchronization in oscillator networks with delayed coupling: a stability criterion. Phys Rev E 67:036204
Ermentrout B (1996) Type I membranes, phase resetting curves, and synchrony. Neural Comput 8:979–1001
Ermentrout B, Ko TW (2009) Delays and weakly coupled neuronal oscillators. Philos Trans R Soc Lond A Math Phys Eng Sci 367:1097–1115
Ermentrout GB (2009) Noisy oscillators. In: Stochastic methods in neuroscience. Oxford University Press, Oxford
Ermentrout GB, Galán RF, Urban NN (2008) Reliability, synchrony and noise. Trends Neurosci 31:428–434
Faber E, Sah P (2005) Independent roles of calcium and voltage-dependent potassium currents in controlling spike frequency adaptation in lateral amygdala pyramidal neurons. Eur J Neurosci 22:1627–1635
Fox MD, Raichle ME (2007) Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nat Rev Neurosci 8:700–711
Fröhlich F, McCormick DA (2010) Endogenous electric fields may guide neocortical network activity. Neuron 67:129–143
Galán RF, Fourcaud-Trocmé N, Ermentrout GB, Urban NN (2006) Correlation-induced synchronization of oscillations in olfactory bulb neurons. J Neurosci 26:3646–3655
Gardiner CW (2004) Handbook of stochastic methods for physics, chemistry, and the natural sciences. Springer, Berlin
Higgs MH, Slee SJ, Spain WJ (2006) Diversity of gain modulation by noise in neocortical neurons: regulation by the slow afterhyperpolarization conductance. J Neurosci 26:8787–8799
Holcman D, Tsodyks M (2006) The emergence of up and down states in cortical networks. PLoS Comput Biol 2:e23–e23
Isomura Y, Sirota A, Ozen S, Montgomery S, Mizuseki K, Henze DA, Buzsáki G (2006) Integration and segregation of activity in entorhinal-hippocampal subregions by neocortical slow oscillations. Neuron 52:871–882
Izhikevich EM (2000) Phase equations for relaxation oscillators. SIAM J Appl Math 60:1789–1804
Jayasuriya S, Kilpatrick ZP (2012) Effects of time-dependent stimuli in a competitive neural network model of perceptual rivalry. Bull Math Biol 74:1396–1426
Kenet T, Bibitchkov D, Tsodyks M, Grinvald A, Arieli A (2003) Spontaneously emerging cortical representations of visual attributes. Nature 425:954–956
Kilpatrick ZP, Bressloff PC (2010) Effects of synaptic depression and adaptation on spatiotemporal dynamics of an excitatory neuronal network. Physica D 239:547–560
Laing CR, Chow CC (2002) A spiking neuron model for binocular rivalry. J Comput Neurosci 12:39–53
Lampl I, Reichova I, Ferster D (1999) Synchronous membrane potential fluctuations in neurons of the cat visual cortex. Neuron 22:361–374
Litwin-Kumar A, Doiron B (2012) Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat Neurosci 15:1498–1505
Ly C, Ermentrout GB (2009) Synchronization dynamics of two coupled neural oscillators receiving shared and unshared noisy stimuli. J Comput Neurosci 26:425–443
Madison D, Nicoll R (1984) Control of the repetitive discharge of rat CA1 pyramidal neurones in vitro. J Physiol 354:319–331
Major G, Tank D (2004) Persistent neural activity: prevalence and mechanisms. Curr Opin Neurobiol 14:675–684
Marshall L, Born J (2007) The contribution of sleep to hippocampus-dependent memory consolidation. Trends Cogn Sci 11:442–450
Marshall L, Helgadóttir H, Mölle M, Born J (2006) Boosting slow oscillations during sleep potentiates memory. Nature 444:610–613
Massimini M, Huber R, Ferrarelli F, Hill S, Tononi G (2004) The sleep slow oscillation as a traveling wave. J Neurosci 24:6862–6870
Melamed O, Barak O, Silberberg G, Markram H, Tsodyks M (2008) Slow oscillations in neural networks with facilitating synapses. J Comput Neurosci 25:308–316
Moreno-Bote R, Rinzel J, Rubin N (2007) Noise-induced alternations in an attractor network model of perceptual bistability. J Neurophysiol 98:1125–1139
Nakao H, Arai K, Kawamura Y (2007) Noise-induced synchronization and clustering in ensembles of uncoupled limit-cycle oscillators. Phys Rev Lett 98:184101
Newman EA (2001) Propagation of intercellular calcium waves in retinal astrocytes and Müller cells. J Neurosci 21:2215–2223
Parga N, Abbott LF (2007) Network model of spontaneous activity exhibiting synchronous transitions between up and down states. Front Neurosci 1:57–66
Pedarzani P, Storm JF (1993) PKA mediates the effects of monoamine transmitters on the K\(^+\) current underlying the slow spike frequency adaptation in hippocampal neurons. Neuron 11:1023–1035
Petersen CCH, Hahn TTG, Mehta M, Grinvald A, Sakmann B (2003) Interaction of sensory responses with spontaneous depolarization in layer 2/3 barrel cortex. Proc Natl Acad Sci USA 100:13638–13643
Pinto DJ, Ermentrout GB (2001) Spatially structured activity in synaptically coupled neuronal networks: I. Traveling fronts and pulses. SIAM J Appl Math 62:206–225
Renart A, Moreno-Bote R, Wang XJ, Parga N (2007) Mean-driven and fluctuation-driven persistent activity in recurrent networks. Neural Comput 19:1–46
Rosanova M, Ulrich D (2005) Pattern-specific associative long-term potentiation induced by a sleep spindle-related spike train. J Neurosci 25:9398–9405
Sanchez-Vives MV, McCormick DA (2000) Cellular and network mechanisms of rhythmic recurrent activity in neocortex. Nat Neurosci 3:1027–1034
Shaw KM, Park YM, Chiel HJ, Thomas PJ (2012) Phase resetting in an asymptotically phaseless system: on the phase response of limit cycles verging on a heteroclinic orbit. SIAM J Appl Dyn Syst 11:350–391
Smeal RM, Ermentrout GB, White JA (2010) Phase-response curves and synchronized neural networks. Philos Trans R Soc B Biol Sci 365:2407–2422
Steriade M (2004) Acetylcholine systems and rhythmic activities during the waking-sleep cycle. Prog Brain Res 145:179–196
Steriade M (2006) Grouping of brain rhythms in corticothalamic systems. Neuroscience 137:1087–1106
Steriade M, Nunez A, Amzica F (1993) A novel slow (\(<\)1 Hz) oscillation of neocortical neurons in vivo: depolarizing and hyperpolarizing components. J Neurosci 13:3252–3265
Stocker M, Krause M, Pedarzani P (1999) An apamin-sensitive Ca\(^{2+}\)-activated K\(^+\) current in hippocampal pyramidal neurons. Proc Natl Acad Sci USA 96:4662–4667
Tabak J, Senn W, O’Donovan MJ, Rinzel J (2000) Modeling of spontaneous activity in developing spinal cord using activity-dependent depression in an excitatory network. J Neurosci 20:3041–3056
Takeuchi T, Duszkiewicz AJ, Morris RG (2014) The synaptic plasticity and memory hypothesis: encoding, storage and persistence. Philos Trans R Soc Lond B 369:20130288
Teramae J, Tanaka D (2004) Robustness of the noise-induced phase synchronization in a general class of limit cycle oscillators. Phys Rev Lett 93:204103
Traub RD, Whittington MA, Stanford IM, Jefferys JG (1996) A mechanism for generation of long-range synchronous fast oscillations in the cortex. Nature 383:621–624
Treves A (1993) Mean-field analysis of neuronal spike dynamics. Netw Comput Neural Syst 4:259–284
van Vreeswijk C, Hansel D (2001) Patterns of synchrony in neural networks with spike adaptation. Neural Comput 13:959–992
Vicente R, Gollo LL, Mirasso CR, Fischer I, Pipa G (2008) Dynamical relaying can yield zero time lag neuronal synchrony despite long conduction delays. Proc Natl Acad Sci USA 105:17157–17162
Volgushev M, Chauvette S, Mukovski M, Timofeev I (2006) Precise long-range synchronization of activity and silence in neocortical neurons during slow-wave sleep. J Neurosci 26:5665–5672
Volterra A, Liaudet N, Savtchouk I (2014) Astrocyte Ca\(^{2+}\) signalling: an unexpected complexity. Nat Rev Neurosci 15:327–335
Wang XJ (2001) Synaptic reverberation underlying mnemonic persistent activity. Trends Neurosci 24:455–463
White JA, Rubinstein JT, Kay AR (2000) Channel noise in neurons. Trends Neurosci 23:131–137
Wilson D, Moehlis J (2015) Extending phase reduction to excitable media: theory and applications. SIAM Rev 57:201–222
This work was supported by NSF DMS-1311755 and NSF DMS-1517629.
McCleney, Z.T., Kilpatrick, Z.P. Entrainment in up and down states of neural populations: non-smooth and stochastic models. J. Math. Biol. 73, 1131–1160 (2016). https://doi.org/10.1007/s00285-016-0984-6