
In Sect. 5, we showed how both solitary pulses and oscillatory phase waves could occur in a synaptically coupled network of spiking neurons, where the fundamental element of the network was a single neuron. Hence, whether the network acted as an excitable or an oscillatory medium depended primarily on the intrinsic properties of the individual neurons. In this chapter, we focus on waves in excitable neural fields, where the fundamental network element is a local population of cells (see Sect. 6), and show how many of the PDE methods and results for the analysis of waves in reaction–diffusion equations (see part I) can be extended to the nonlocal integrodifferential equations of neural field theory. We begin by analyzing the existence and stability of solitary traveling fronts in a 1D scalar neural field. (Since there is strong vertical coupling between layers of a cortical column, it is possible to treat a thin vertical cortical slice as an effective 1D medium.) In order to relate the models to experiments on disinhibited cortical slices (Sect.  5.1), we assume that the weight distribution is purely excitatory. This is also motivated by the observation that epileptic seizures are often associated with greatly enhanced levels of recurrent excitation [430] (Sect. 9.4). We then extend the analysis to the case of traveling pulses, which requires the inclusion of some form of local negative feedback mechanism such as synaptic depression or spike frequency adaptation. Next we describe two approaches to analyzing wave propagation failure in inhomogeneous neural media: one based on homogenization theory [79, 332] and the other on interfacial dynamics [132]. This is followed by a discussion of wave propagation in stochastic neural fields.

It is useful to emphasize here that there are two main approaches to analyzing the spatiotemporal dynamics of neural field equations. The first method is based on the original work of Amari [8], in which one establishes the existence of nonlinear traveling wave solutions by explicit construction. This is possible if one takes the firing rate function F to be the Heaviside (1.15). It is also possible to study the linear stability of such solutions by constructing an associated Evans function, whose zeros determine the spectrum of the resulting linear operator [134, 552, 696]. The constructive approach of Amari [8] has been particularly useful in providing explicit insights into how spatiotemporal network dynamics depends on the structure of the synaptic weight kernel as well as various physiological parameters. Moreover, in certain cases, it is possible to use singular perturbation methods [504, 505] or fixed-point theorems [172, 336] to extend results for neural fields with Heaviside nonlinearities to those with more realistic sigmoidal nonlinearities; see also [136]. The second method is based on bifurcation theory, following the original work of Ermentrout and Cowan [169], in which one investigates the emergence of spatially periodic stationary and oscillatory patterns through a combination of linear stability analysis, weakly nonlinear analysis, symmetric bifurcation theory, and numerical simulations, as reviewed in [67, 71, 167]. Rigorous functional analytical techniques combined with numerical bifurcation schemes have also been used to study the existence and (absolute) stability of stationary solutions for a general class of neural field models with smooth firing rate functions [185, 642]. As far as we are aware, these methods have not yet been applied to traveling wave solutions of neural field equations.

1 Traveling Fronts in a Scalar Neural Field

1.1 Propagating Fronts in a Bistable Neural Field

We begin by using Amari’s constructive method [8] to analyze the existence of traveling front solutions in a scalar neural field equation. Similar analyses are found in Refs. [76, 134, 504]. We assume a Heaviside rate function (1.15) and an excitatory weight distribution of the form w(x, y) = w(x − y) with w(x) ≥ 0 and w(−x) = w(x). We also assume that w(x) is a monotonically decreasing function of x for x ≥ 0. A common choice is the exponential weight distribution

$$\displaystyle{ w(x) = \frac{1} {2\sigma }{\text{e}}^{-\vert x\vert /\sigma }, }$$
(7.1)

where σ determines the range of synaptic connections. The latter tends to range from 100 μm to 1 mm. The resulting neural field equation is

$$\displaystyle\begin{array}{rcl} \frac{\partial u(x,t)} {\partial t} & =& -u(x,t) +\int _{ -\infty }^{\infty }w(x - x^{\prime})F(u(x^{\prime},t))dx^{\prime},{}\end{array}$$
(7.2)

with F(u) = H(u − κ). We have fixed the units of time by setting τ = 1. If τ is interpreted as a membrane time constant, then τ ∼ 10 msec. In order to construct a traveling front solution of (7.2), we introduce the traveling wave coordinate ξ = x − ct, where c denotes the wave speed, and set u(x, t) = U(ξ) with \(\lim _{\xi \rightarrow -\infty }U(\xi ) = U_{+} > 0\) and \(\lim _{\xi \rightarrow \infty }U(\xi ) = 0\) such that U(ξ) only crosses the threshold κ once. Here \(U_{+} =\int _{ -\infty }^{\infty }w(y)dy\) is a spatially uniform fixed-point solution of (7.2). Since Eq. (7.2) is equivariant with respect to uniform translations, we are free to take the threshold crossing point to be at the origin, U(0) = κ, so that U(ξ) < κ for ξ > 0 and U(ξ) > κ for ξ < 0. Substituting this traveling front solution into Eq. (7.2) then gives

$$\displaystyle\begin{array}{rcl} - cU^{\prime}(\xi ) + U(\xi ) =\int _{ -\infty }^{0}w(\xi -\xi ^{\prime})d\xi ^{\prime} =\int _{ \xi }^{\infty }w(x)dx \equiv W(\xi ),& &{}\end{array}$$
(7.3)

where U′(ξ) = dU∕dξ. Multiplying both sides of the above equation by \({\text{e}}^{-\xi /c}\) and integrating with respect to ξ leads to the solution

$$\displaystyle{ U(\xi ) ={ \text{e}}^{\xi /c}\left [\kappa -\frac{1} {c}\int _{0}^{\xi }{\text{e}}^{-y/c}W(y)dy\right ]. }$$
(7.4)

Finally, requiring the solution to remain bounded as ξ → ∞ (ξ → −∞) for c > 0 (for c < 0) implies that κ must satisfy the condition

$$\displaystyle{ \kappa = \frac{1} {\vert c\vert }\int _{0}^{\infty }{\text{e}}^{-y/\vert c\vert }W(\mbox{ sign}(c)y)dy. }$$
(7.5)

Thus, one of the useful aspects of the constructive method is that it allows us to derive an explicit expression for the wave speed as a function of physiological parameters such as firing threshold and range of synaptic connections. In the case of the exponential weight distribution (7.1), the relationship between wave speed c and threshold κ is

$$\displaystyle{ c = \frac{\sigma } {2\kappa }[1 - 2\kappa ]\ (\mbox{ for}\ \kappa < 0.5),\quad c = \frac{\sigma } {2} \frac{1 - 2\kappa } {1-\kappa } \ (\mbox{ for}\ 0.5 <\kappa < 1). }$$
(7.6)

This establishes the existence of a unique front solution for fixed κ, which travels to the right (c > 0) when κ < 0.5 and travels to the left (c < 0) when κ > 0.5. As we will show below, the traveling front is stable.
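As a quick numerical illustration, the following Python sketch evaluates the speed–threshold relation (7.6) for the exponential kernel (7.1) and checks it against the implicit condition (7.5) by quadrature. The sample thresholds and the choice σ = 1 are arbitrary, and the function names are mine.

```python
import numpy as np
from scipy.integrate import quad

sigma = 1.0  # range of synaptic connections

def W(xi):
    """W(xi) = int_xi^infty w(x) dx for the exponential kernel (7.1)."""
    return 0.5 * np.exp(-xi / sigma) if xi >= 0 else 1.0 - 0.5 * np.exp(xi / sigma)

def speed(kappa):
    """Front speed c(kappa) from Eq. (7.6)."""
    if kappa < 0.5:
        return (sigma / (2 * kappa)) * (1 - 2 * kappa)
    return (sigma / 2) * (1 - 2 * kappa) / (1 - kappa)

def threshold_of_speed(c):
    """Right-hand side of the implicit condition (7.5); should return kappa."""
    integrand = lambda y: np.exp(-y / abs(c)) * W(np.sign(c) * y)
    return quad(integrand, 0, np.inf)[0] / abs(c)

for kappa in [0.2, 0.3, 0.45, 0.6, 0.8]:
    c = speed(kappa)
    print(f"kappa = {kappa:4.2f}:  c = {c:+.4f},  kappa recovered from (7.5) = {threshold_of_speed(c):.4f}")
```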

Given the existence of a traveling front solution for a Heaviside rate function, it is possible to extend the analysis to a smooth sigmoid nonlinearity using a continuation method [172]. We briefly summarize the main result. Consider the scalar neural field equation (7.2) with F given by the sigmoid function (1.14) and w(x) nonnegative and symmetric with normalization \(\int _{-\infty }^{\infty }w(x)dx = 1\). Suppose that the function \(\tilde{F}(u) = -u + F(u)\) has precisely three zeros at \(u = U_{\pm },U_{0}\) with \(U_{-} < U_{0} < U_{+}\) and \(\tilde{F}^{\prime}(U_{\pm }) < 0\). It can then be shown that (modulo uniform translations) there exists a unique traveling front solution u(x, t) = U(ξ), ξ = x − ct, with

$$\displaystyle\begin{array}{rcl} - cU^{\prime}(\xi ) + U(\xi ) =\int _{ -\infty }^{\infty }w(\xi -\xi ^{\prime})F(U(\xi ^{\prime}))d\xi ^{\prime},& &{}\end{array}$$
(7.7)

and U(ξ) → U ± as \(\xi \rightarrow \mp \infty \) [172]. Moreover, the speed of the wave satisfies

$$\displaystyle{ c = \frac{\varGamma } {\int _{-\infty }^{\infty }{U^{\prime}(\xi )}^{2}F^{\prime}(U(\xi ))d\xi }, }$$
(7.8)

where F′(U) = dF∕dU and

$$\displaystyle{ \varGamma =\int _{ U_{-}}^{U_{+} }\tilde{F}(U)dU. }$$
(7.9)

Since the denominator of Eq. (7.8) is positive definite, the sign of c is determined by the sign of the coefficient Γ. In particular, if the threshold κ = 0.5 and the gain of the sigmoid η > 4 (see Eq. (1.14)), then there exists a pair of stable homogeneous fixed points placed symmetrically about the unstable fixed point U 0 = 0.5, which in turn implies that Γ = 0 and the front solution is stationary. Note that this analysis has been extended to a more general form of nonlocal equations by Chen [108].
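The sign condition on Γ is easy to check numerically. The sketch below is illustrative only: it takes the sigmoid F(u) = 1/(1 + e^{−η(u−κ)}) with an arbitrary gain (chosen large enough that there are three fixed points), locates the zeros of F̃(u) = −u + F(u), and evaluates Γ from Eq. (7.9); for κ = 0.5 the integral vanishes by symmetry and the front is stationary.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

eta = 8.0                                       # sigmoid gain (arbitrary illustrative value)

def front_direction(kappa):
    F = lambda u: 1.0 / (1.0 + np.exp(-eta * (u - kappa)))
    Ftilde = lambda u: -u + F(u)                # zeros U_-, U_0, U_+ are the homogeneous fixed points

    # bracket the three zeros of Ftilde on [0, 1] by scanning for sign changes
    us = np.linspace(0.0, 1.0, 2001)
    gs = Ftilde(us)
    zeros = []
    for i in range(len(us) - 1):
        if gs[i] == 0.0:
            zeros.append(us[i])
        elif gs[i] * gs[i + 1] < 0:
            zeros.append(brentq(Ftilde, us[i], us[i + 1]))
    U_minus, U_0, U_plus = zeros

    Gamma = quad(Ftilde, U_minus, U_plus)[0]    # Eq. (7.9); its sign is the sign of c
    return U_minus, U_plus, Gamma

for kappa in [0.4, 0.5, 0.6]:
    U_minus, U_plus, Gamma = front_direction(kappa)
    print(f"kappa = {kappa}:  U_- = {U_minus:.3f}, U_+ = {U_plus:.3f}, Gamma = {Gamma:+.4f}")
```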

1.2 Wave Stability and Evans Functions

Suppose that the scalar neural field equation (7.2) has a traveling wave solution u(x, t) = U(ξ), ξ = x − ct with c > 0. Following Coombes and Owen [134], it is convenient to rewrite the neural field equation in the integral form

$$\displaystyle\begin{array}{rcl} u(x,t) =\int _{ -\infty }^{\infty }\int _{ 0}^{\infty }w(y)\varPhi (s)F(u(x - y,t - s))dsdy,& &{}\end{array}$$
(7.10)

with \(\varPhi (t) ={ \text{e}}^{-t}H(t)\). For this representation, the front solution satisfies

$$\displaystyle\begin{array}{rcl} U(\xi ) =\int _{ -\infty }^{\infty }\int _{ 0}^{\infty }w(y)\varPhi (s)F(U(\xi -y + cs))dsdy.& &{}\end{array}$$
(7.11)

In order to determine the stability of the front solutions, we transform to traveling wave coordinates by setting \(u(x,t) = U(\xi,t) = U(\xi ) +\varphi (\xi,t)\), and Taylor expand to first order in \(\varphi\). This leads to the linear integral equation

$$\displaystyle\begin{array}{rcl} \varphi (\xi,t) =\int _{ -\infty }^{\infty }\int _{ 0}^{\infty }\ w(y)\varPhi (s)F^{\prime}(U(\xi -y + cs))\varphi (\xi -y + cs,t - s)dsdy.& &{}\end{array}$$
(7.12)

We now seek solutions of Eq. (7.12) of the form \(\varphi (\xi,t) =\varphi (\xi ){\text{e}}^{\lambda t}\), \(\lambda \in \mathbb{C}\), which leads to the eigenvalue equation \(\varphi = \mathbb{L}(\lambda )\varphi\). That is,

$$\displaystyle\begin{array}{rcl} \varphi (\xi ) =\int _{ -\infty }^{\infty }\int _{ \xi -y}^{\infty }w(y)\varPhi ((s + y-\xi )/c){\text{e}}^{-\lambda (s+y-\xi )/c}F^{\prime}(U(s))\varphi (s)\frac{ds} {c} dy,& &{}\end{array}$$
(7.13)

where we have performed the change of variables cs + ξ − y → s. The linear stability of the traveling front can then be determined in terms of the spectrum \(\sigma({\mathbb L}(\lambda))\).

Following appendix section 2.7, we assume that the eigenfunctions \(\varphi \in {L}^{2}(\mathbb{R})\) and introduce the resolvent operator \(\mathcal{R}(\lambda ) = {[\mathbb{L}(\lambda ) - I]}^{-1}\), where I denotes the identity operator. We can then decompose the spectrum \(\sigma (\mathcal{L})\) into the disjoint sum of the discrete spectrum and the essential spectrum. Given the spectrum of the linear operator defined by Eq. (7.13), the traveling wave is said to be linearly stable if [551]

$$\displaystyle{ \max \{\mbox{ Re}(\lambda )\,:\,\lambda \in \sigma (\mathcal{L}),\,\lambda \neq 0\} \leq -K }$$
(7.14)

for some K > 0, and λ = 0 is a simple eigenvalue of \(\mathcal{L}\). The existence of at least one zero eigenvalue is a consequence of translation invariance. Indeed, differentiating equation (7.11) with respect to ξ shows that \(\varphi (\xi ) = U^{\prime}(\xi )\) is an eigenfunction solution of Eq. (7.13) with λ = 0. As in the case of PDEs (see Sect. 2.4), the discrete spectrum may be associated with the zeros of an Evans function. A number of authors have applied the Evans function construction to neural field equations [134, 198, 506, 536, 552, 696], as well as more general nonlocal problems [314]. Moreover, for neural fields with Heaviside firing rate functions, the Evans function can be calculated explicitly. This was first carried out by Zhang [696], who applied the method of variation of parameters to the linearized version of the integrodifferential Eq. (7.2), and was subsequently extended using a more direct integral formulation by Coombes and Owen [134].

Construction of Evans function. Setting F(U) = H(U − κ) in Eq. (7.12) and using the identity

$$\displaystyle{ H^{\prime}(U(\xi )-\kappa ) =\delta (U(\xi )-\kappa ) = \frac{\delta (\xi )} {\vert U^{\prime}(0)\vert } }$$
(7.15)

gives

$$\displaystyle\begin{array}{rcl} \varphi (\xi ) = \frac{\varphi (0)} {c\vert U^{\prime}(0)\vert }\int _{-\infty }^{\infty }w(y)\varPhi ((y-\xi )/c){\text{e}}^{-\lambda (y-\xi )/c}dy.& & {}\end{array}$$
(7.16)

In order to obtain a self-consistent solution at ξ = 0, we require that

$$\displaystyle\begin{array}{rcl} \varphi (0) = \frac{\varphi (0)} {c\vert U^{\prime}(0)\vert }\int _{0}^{\infty }w(y)\varPhi (y/c){\text{e}}^{-\lambda y/c}dy,& & {}\end{array}$$
(7.17)

where we have used the fact that Φ(y) = 0 for y < 0, which is a consequence of causality. Hence, a nontrivial solution exists provided that \(\mathcal{E}(\lambda ) = 0\), where

$$\displaystyle{ \mathcal{E}(\lambda ) = 1 - \frac{1} {c\vert U^{\prime}(0)\vert }\int _{0}^{\infty }w(y)\varPhi (y/c){\text{e}}^{-\lambda y/c}dy. }$$
(7.18)

Equation (7.18) can be identified with the Evans function for the traveling front solution of the scalar neural field equation (7.10). It is real valued if λ is real. Furthermore, (i) the complex number λ is an eigenvalue of the operator \(\mathcal{L}\) if and only if \(\mathcal{E}(\lambda ) = 0\), and (ii) the algebraic multiplicity of an eigenvalue is equal to the order of the zero of the Evans function [134, 552, 696]. We briefly indicate the proof of (i) for \(\varPhi (t) ={ \text{e}}^{-t}H(t)\). Equation (7.16) becomes

$$\displaystyle\begin{array}{rcl} \varphi (\xi )& =& \frac{\varphi (0)} {c\vert U^{\prime}(0)\vert }{\text{e}}^{(\lambda +1)\xi /c}\int _{\xi }^{\infty }w(y){\text{e}}^{-(\lambda +1)y/c}dy, {}\\ & =& \varphi (0)\left [1 - \frac{1} {c\vert U^{\prime}(0)\vert }\int _{0}^{\xi }w(y){\text{e}}^{-(\lambda +1)y/c}dy\right ]{\text{e}}^{(\lambda +1)\xi /c}, {}\\ \end{array}$$

which in the limit ξ → ∞ gives

$$\displaystyle{ \lim _{\xi \rightarrow \infty }\varphi (\xi ) =\varphi (0)\mathcal{E}(\lambda )\lim _{\xi \rightarrow \infty }{\text{e}}^{(\lambda +1)\xi /c}. }$$

Assuming that Reλ > −1 (which turns out to be to the right of the essential spectrum), then \(\varphi (\xi )\) will be unbounded as ξ → ∞ unless \(\mathcal{E}(\lambda ) = 0\). That is, if \(\mathcal{E}(\lambda ) = 0\), then \(\varphi (\xi )\) is normalizable, the resolvent operator is not invertible and λ is an eigenvalue.

It is also straightforward to show that \(\mathcal{E}(0) = 0\), which we expect from translation invariance. First, setting F(U) = H(U − κ) in Eq. (7.11) and differentiating with respect to ξ shows that

$$\displaystyle{ U^{\prime}(\xi ) = -\frac{1} {c}\int _{-\infty }^{\infty }w(y)\varPhi ((y-\xi )/c)dy. }$$
(7.19)

Thus, defining

$$\displaystyle{ \mathcal{H}(\lambda ) =\int _{ 0}^{\infty }w(y)\varPhi (y/c){\text{e}}^{-\lambda y/c}dy, }$$
(7.20)

we see that \(c\vert U^{\prime}(0)\vert = \mathcal{H}(0)\) and, hence,

$$\displaystyle{ \mathcal{E}(\lambda ) = 1 - \frac{\mathcal{H}(\lambda )} {\mathcal{H}(0)}. }$$
(7.21)

It immediately follows that \(\mathcal{E}(0) = 0\).

In order to determine the essential spectrum, consider the inhomogeneous equation

$$\displaystyle\begin{array}{rcl} \varphi (\xi ) - \frac{\varphi (0)} {c\vert U^{\prime}(0)\vert }\int _{-\infty }^{\infty }w(y)\varPhi ((y-\xi )/c){\text{e}}^{-\lambda (y-\xi )/c}dy = h(\xi )& & {}\end{array}$$
(7.22)

for some normalizable smooth function h on \(\mathbb{R}\). Assuming that λ does not belong to the discrete spectrum, \(\mathcal{E}(\lambda )\neq 0\), we can express the constant \(\varphi (0)\) in terms of h(0) by setting ξ = 0 in Eq. (7.22): \(\varphi (0) = h(0)/\mathcal{E}(\lambda )\). Thus,

$$\displaystyle{ \varphi (\xi ) = h(\xi ) + \frac{1} {\mathcal{E}(\lambda )} \frac{h(0)} {c\vert U^{\prime}(0)\vert }\int _{-\infty }^{\infty }w(y)\varPhi ((y-\xi )/c){\text{e}}^{-\lambda (y-\xi )/c}dy. }$$
(7.23)

Fourier transforming this equation using the convolution theorem gives

$$\displaystyle{ \hat{\varphi }(k) =\hat{ h}(k) + \frac{1} {\mathcal{E}(\lambda )} \frac{h(0)} {c\vert U^{\prime}(0)\vert }\hat{\omega }(k)\hat{\varPhi }(kc + i\lambda ), }$$
(7.24)

where

$$\displaystyle{ \hat{\varphi }(k) =\int _{ -\infty }^{\infty }\varphi (y){\text{e}}^{iky}dy }$$
(7.25)

etc. Now suppose that for a given value of k, there exists λ =λ(k) for which \({[\hat{\varPhi }(kc + i\lambda (k))]}^{-1} = 0\). It follows that the right-hand side of Eq. (7.24) blows up if λ =λ(k), that is, the dispersion curve belongs to the essential spectrum.

For the sake of illustration, let us calculate the zeros of the Evans function in the case of the exponential weight function (7.1). Substituting \(\varPhi (t) ={ \text{e}}^{-t}\) and \(w(y) ={ \text{e}}^{-\vert y\vert /\sigma }/2\sigma\) in Eq. (7.20) gives

$$\displaystyle{ \mathcal{H}(\lambda ) = \frac{1} {2\sigma }\, \frac{1} {{\sigma }^{-1} + (1 +\lambda )/c} }$$

so that [134]

$$\displaystyle{ \mathcal{E}(\lambda ) = \frac{\lambda } {c/\sigma + 1+\lambda }. }$$
(7.26)

It follows that λ = 0 is the only zero of the Evans function and it is a simple root (since \(\mathcal{E}^{\prime}(0) > 0\)). Furthermore, in the particular case \(\varPhi (t) ={ \text{e}}^{-t}\), we have \({[\hat{\varPhi }(kc + i\lambda )]}^{-1} = 1 - ikc+\lambda\) so that the essential spectrum is λ(k) = −1 + ikc, that is, a vertical line in the complex plane at Reλ = −1. It follows that the corresponding traveling front (if it exists) is stable. This example illustrates one of the powerful features of the constructive method based on Heavisides. Not only is it possible to construct exact traveling wave solutions and derive formulae for the speed of the wave, but one can also explicitly construct the Evans function that determines wave stability. The method extends to multi-population neural field models, neural fields with axonal propagation delays, and adaptive neural fields [134]. (Although taking the high-gain limit of a smooth firing rate function is not very realistic from a biological perspective, one finds that many of the basic features of traveling waves persist for finite gain.) In the particular case of axonal delays, it can be shown that delays reduce the speed of a wave but do not affect its stability properties. For example, given a right-moving traveling front solution of the scalar neural field equation (6.118) with τ = 1 and exponential weights, one finds that the speed of the wave is [134, 139]

$$\displaystyle{ c =\sigma \frac{1 - 2\kappa } {2\kappa +\sigma (1 - 2\kappa )/v}, }$$

where v is the propagation speed along an axon, and the Evans function is

$$\displaystyle{ \mathcal{E}(\lambda ) = \frac{\lambda } {c/\sigma + (1 - c/v)+\lambda }. }$$
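Returning to the delay-free exponential example, the Evans function construction is easy to verify numerically. The following sketch (illustrative only; κ and σ are arbitrary sample values) evaluates \(\mathcal{H}(\lambda )\) from Eq. (7.20) by quadrature for real λ, forms \(\mathcal{E}(\lambda )\) via Eq. (7.21), and compares with the closed-form expression (7.26); in particular \(\mathcal{E}(0) = 0\) up to quadrature error.

```python
import numpy as np
from scipy.integrate import quad

sigma, kappa = 1.0, 0.3
c = (sigma / (2 * kappa)) * (1 - 2 * kappa)            # front speed from Eq. (7.6)

w = lambda y: np.exp(-abs(y) / sigma) / (2 * sigma)    # exponential kernel (7.1)
Phi = lambda t: np.exp(-t)                             # synaptic filter Phi(t) = e^{-t} H(t)

def H(lam):
    """Eq. (7.20): H(lambda) = int_0^infty w(y) Phi(y/c) e^{-lambda y/c} dy (real lambda)."""
    return quad(lambda y: w(y) * Phi(y / c) * np.exp(-lam * y / c), 0, np.inf)[0]

def Evans(lam):
    """Eq. (7.21): E(lambda) = 1 - H(lambda)/H(0)."""
    return 1.0 - H(lam) / H(0.0)

Evans_closed = lambda lam: lam / (c / sigma + 1.0 + lam)   # closed form (7.26)

for lam in [0.0, 0.5, 1.0, 2.0]:
    print(f"lambda = {lam:3.1f}:  quadrature E = {Evans(lam):+.6f},  closed form = {Evans_closed(lam):+.6f}")
```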

1.3 Pulled Fronts

So far we have assumed that the scalar neural field operates in a bistable regime analogous to the FitzHugh–Nagumo equations; see Sect. 2. However, as we explored within the context of CaMKII translocation waves (Sect. 3.2), Fisher-like reaction–diffusion equations support traveling waves propagating into unstable states, resulting in pulled fronts (Sect. 3.3). It turns out that it is also possible to observe pulled fronts in an activity-based version of a scalar neural field equation [70, 132]:

$$\displaystyle\begin{array}{rcl} \tau \frac{\partial a(x,t)} {\partial t} & =& -a(x,t) + F\left (\int _{-\infty }^{\infty }w(x - x^{\prime})a(x^{\prime},t)dx^{\prime}\right ).{}\end{array}$$
(7.27)

with a(x, t) ≥ 0 for all (x, t). Note that the restriction to positive values of a is a feature shared with population models in ecology or evolutionary biology, for example, where the corresponding dependent variables represent number densities. Indeed, Eq. (7.27) has certain similarities with a nonlocal version of the Fisher–KPP equation, which takes the form [238]

$$\displaystyle\begin{array}{rcl} \tau \frac{\partial p(x,t)} {\partial t} & =& D\frac{{\partial }^{2}p(x,t)} {\partial {x}^{2}} +\mu p(x,t)\left (1 -\int _{-\infty }^{\infty }K(x - x^{\prime})p(x^{\prime},t)dx^{\prime}\right ).{}\end{array}$$
(7.28)

One major difference from a mathematical perspective is that Eq. (7.28) supports traveling fronts even when the range of the interaction kernel K goes to zero, that is, K(x) →δ(x), since we recover the standard local Fisher–KPP equation (3.44) [191, 345]. In particular, as the nonlocal interactions appear nonlinearly in Eq. (7.28), they do not contribute to the linear spreading velocity in the leading edge of the front. On the other hand, nonlocal interactions play a necessary role in the generation of fronts in the neural field equation (7.27).

Fig. 7.1

Plots of firing rate function. Intercepts of y = F(W 0 a) with the straight line y = a determine homogeneous fixed points. (a) Piecewise linear rate function (7.29) showing the existence of an unstable fixed point at a = 0 and a stable fixed point at a =κ. (b) Sigmoidal rate function \(F(a) = 2/(1 +{ \text{e}}^{-2[a-\kappa ]})\) showing the existence of two stable fixed points separated by an unstable fixed point

Suppose that F(a) in Eq. (7.27) is a positive, bounded, monotonically increasing function of a with F(0) = 0, \(\lim _{a\rightarrow {0}^{+}}F^{\prime}(a) = 1\) and \(\lim _{a\rightarrow \infty }F(a) =\kappa\) for some positive constant κ. For concreteness, we take

$$\displaystyle\begin{array}{rcl} F(a) = \left \{\begin{array}{cc} 0,& a \leq 0 \\ a,&0 < a \leq \kappa \\ \kappa, & a >\kappa. \end{array} \right.& &{}\end{array}$$
(7.29)

A homogeneous fixed-point solution a∗ of Eq. (7.27) satisfies

$$\displaystyle{ {a}^{{\ast}} = F(W_{ 0}{a}^{{\ast}}),\quad W_{ 0} =\int _{ -\infty }^{\infty }w(y)dy. }$$
(7.30)

In the case of the given piecewise linear firing rate function, we find that if W 0 > 1, then there exists an unstable fixed point at a∗ = 0 (absorbing state) and a stable fixed point at a∗ = κ; see Fig. 7.1(a). The construction of a front solution linking the stable and unstable fixed points differs considerably from that considered in neural fields with sigmoidal or Heaviside nonlinearities [8, 167], where the front propagates into a metastable state; see Fig. 7.1(b). Following the PDE theory of fronts propagating into unstable states [544] (see Sect. 3.3), we expect there to be a continuum of front velocities and associated traveling wave solutions.

Recall that a conceptual framework for studying such solutions is the linear spreading velocity c∗, which is the asymptotic rate with which an initial localized perturbation spreads into an unstable state based on the linear equations obtained by linearizing the full nonlinear equations about the unstable state. Therefore, linearizing equation (7.27) about a = 0 gives

$$\displaystyle\begin{array}{rcl} \frac{\partial a(x,t)} {\partial t} & =& -a(x,t) +\int _{ -\infty }^{\infty }w(x - x^{\prime})a(x^{\prime},t)dx^{\prime}.{}\end{array}$$
(7.31)

Note that if a(x, 0) ≥ 0 for all x, then Eq. (7.31) ensures that a(x, t) ≥ 0 for all x and t > 0. One way to see this is to note from Eq. (7.31) that \(a(x,t +\varDelta t) = (1 -\varDelta t)a(x,t) +\varDelta t\int _{-\infty }^{\infty }w(x - x^{\prime})a(x^{\prime},t)dx^{\prime}\). Assuming positivity of the solution at time t and using the fact that the neural field is purely excitatory (w(x) ≥ 0 for all x), it follows that a(x, t +Δ t) is also positive. An arbitrary initial condition a(x, 0) will evolve under Eq. (7.31) as

$$\displaystyle{ a(x,t) =\int _{ -\infty }^{\infty }G(x - y,t)a(y,0)dy, }$$
(7.32)

where G(x, t) is the Green’s function

$$\displaystyle{ G(x,t) =\int _{ -\infty }^{\infty }{\text{e}}^{ikx-i\omega (k)t}\frac{dk} {2\pi },\quad \omega (k) = i[\tilde{w}(k) - 1], }$$
(7.33)

and \(\tilde{w}(k)\) is the Fourier transform of the weight distribution w(x). Hence, the solution can be written in the form of Eq. (3.102):

$$\displaystyle\begin{array}{rcl} a(x,t) =\int _{ -\infty }^{\infty }\tilde{a}_{ 0}(k){\text{e}}^{i[kx-\omega (k)t]}\frac{dk} {2\pi }.& &{}\end{array}$$
(7.34)

with \(\tilde{a}_{0}\) the Fourier transform of the initial condition a(x, 0).

Given a sufficiently steep initial condition, for which the Fourier transform \(\tilde{a}_{0}(k)\) is analytic, the asymptotic behavior of a(x, t) can be obtained from the large-time asymptotics of G(x, t) based on steepest descents. It immediately follows from the analysis of the Fisher equation in Sect. 3.3 that the linear spreading velocity c∗ is given by c∗ = c(λ∗), where

$$\displaystyle{ c(\lambda ) = \frac{\text{Im}(\omega (i\lambda ))} {\lambda },\quad \left.\frac{dc(\lambda )} {d\lambda } \right \vert _{\lambda {=\lambda }^{{\ast}}} = 0. }$$
(7.35)

Using the fact that the Fourier transform of the weight distribution is real valued, we find that

$$\displaystyle{ c(\lambda ) = \frac{1} {\lambda } \left [\mathcal{W}(\lambda ) - 1\right ], }$$
(7.36)

where \(\mathcal{W}(\lambda ) =\hat{ W}(\lambda ) +\hat{ W}(-\lambda )\) and \(\hat{W}(\lambda )\) is the Laplace transform of w(x):

$$\displaystyle{ \hat{W}(\lambda ) =\int _{ 0}^{\infty }w(y){\text{e}}^{-\lambda y}dy. }$$
(7.37)

We are assuming that w(y) decays sufficiently fast as |y|→ so that the Laplace transform \(\hat{W}(\lambda )\) exists for bounded, negative values of λ. This holds in the case of a Gaussian weight distribution

$$\displaystyle{ w(x) = \frac{W_{0}} {\sqrt{{2\pi \sigma }^{2}}}{\text{e}}^{-{x}^{2}/{2\sigma }^{2} }, }$$
(7.38)

since

$$\displaystyle\begin{array}{rcl} \mathcal{W}(\lambda )& =& \int _{-\infty }^{\infty }w(y){\text{e}}^{-\lambda y}dy = \frac{W_{0}} {\sqrt{{2\pi \sigma }^{2}}}\int _{-\infty }^{\infty }{\text{e}}^{-{y}^{2}/{2\sigma }^{2} }{\text{e}}^{-\lambda y}dy = W_{ 0}{\text{e}}^{{\lambda {}^{2}\sigma }^{2}/2 }. {}\\ \end{array}$$

Hence,

$$\displaystyle{ c(\lambda ) = \frac{W_{0}{\text{e}}^{{\lambda {}^{2}\sigma }^{2}/2 } - 1} {\lambda }. }$$
(7.39)

If W 0 > 1 (necessary for the zero activity state to be unstable), then c(λ) is a positive unimodal function with c(λ) → ∞ as λ → 0 or λ → ∞ and a unique minimum at λ = λ 0, with λ 0 the solution to the implicit equation

$$\displaystyle{ \lambda _{0}^{2} = \frac{W_{0} -{\text{e}}^{-\lambda _{0}^{2}{\sigma }^{2}/2 }} {{\sigma }^{2}W_{0}}. }$$
(7.40)

Example dispersion curves are shown in Fig. 7.2(a) for various values of the Gaussian weight amplitude W 0. Combining Eqs. (7.39) and (7.40) shows that

$$\displaystyle{ \frac{c_{0}} {\lambda _{0}} {=\sigma }^{2}W_{ 0}{\text{e}}^{{\lambda _{0}{ }^{2} \sigma }^{2}/2 } {=\sigma }^{2}(\lambda _{ 0}c_{0} + 1), }$$
(7.41)

so that

$$\displaystyle{ \lambda _{0} = \frac{1} {2}\left [-\frac{1} {c_{0}} + \sqrt{ \frac{1} {c_{0}^{2}} + \frac{4} {{\sigma }^{2}}} \right ]. }$$
(7.42)

Assuming that the full nonlinear system supports a pulled front (see Sect. 3.3), then a sufficiently localized initial perturbation (one that decays faster than \({\text{e}}^{-\lambda _{0}x}\)) will asymptotically approach the traveling front solution with the minimum wave speed \(c_{0} = c(\lambda _{0})\). Note that c 0 ∼ σ and \(\lambda _{0} {\sim \sigma }^{-1}\). In Fig. 7.2(b), we show an asymptotic front profile obtained by numerically solving the neural field equation (7.27) when W 0 = 1.2. The corresponding displacement of the front is a linear function of time with a slope consistent with the minimal wave speed c 0 ≈ 0.7 of the corresponding dispersion curve shown in Fig. 7.2(a). This wave speed is independent of κ.
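For the Gaussian kernel, the minimal wave speed and decay rate can be reproduced directly from the dispersion curve. The sketch below is a minimal illustration, using the parameter values of Fig. 7.2: it minimizes c(λ) in Eq. (7.39) numerically and checks the result against the implicit relation (7.40) and the closed form (7.42).

```python
import numpy as np
from scipy.optimize import minimize_scalar

sigma = 1.0

def pulled_speed(W0):
    """Minimize the dispersion curve (7.39) to obtain (lambda_0, c_0) for a Gaussian kernel."""
    c = lambda lam: (W0 * np.exp(0.5 * (lam * sigma)**2) - 1.0) / lam
    res = minimize_scalar(c, bounds=(1e-3, 10.0), method='bounded')
    lam0, c0 = res.x, res.fun
    # consistency checks: the implicit relation (7.40) and the closed form (7.42)
    resid_40 = abs(lam0**2 - (W0 - np.exp(-0.5 * (lam0 * sigma)**2)) / (sigma**2 * W0))
    lam0_closed = 0.5 * (-1.0 / c0 + np.sqrt(1.0 / c0**2 + 4.0 / sigma**2))
    return lam0, c0, resid_40, abs(lam0 - lam0_closed)

for W0 in [1.2, 1.5, 2.0, 2.5, 3.0]:
    lam0, c0, err40, err42 = pulled_speed(W0)
    print(f"W0 = {W0:3.1f}:  lambda_0 = {lam0:.3f},  c_0 = {c0:.3f}   "
          f"(residual of (7.40): {err40:.1e}, of (7.42): {err42:.1e})")
```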

Fig. 7.2

(a) Velocity dispersion curves c = c(λ) for a pulled front solution of the neural field equation (7.27) with piecewise linear firing rate function (7.29) and a Gaussian weight distribution with amplitude W 0 and width σ. Here σ = 1.0, κ = 0.4 and W 0 = 1.2, 1.5, 2.0, 2.5, 3.0. Black dots indicate minimum wave speed c 0 for each value of W 0. (b) Snapshots of the front profile evolving from an initial condition consisting of a steep sigmoid function of unit amplitude (gray curve). Here W 0 = 1.2
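A direct simulation of Eq. (7.27) provides a further check on the linear spreading velocity. The sketch below is a minimal illustration: it uses an explicit Euler step, an FFT-based evaluation of the convolution on a periodic domain, the piecewise linear rate (7.29), and a localized Gaussian bump rather than the sigmoidal initial condition of Fig. 7.2(b); the grid, time step, and measurement details are arbitrary choices. Because the velocity of a pulled front relaxes slowly to its asymptotic value, the measured speed approaches c 0 ≈ 0.7 from below.

```python
import numpy as np

# parameters as in Fig. 7.2: Gaussian kernel amplitude W0 and width sigma, piecewise linear F
W0, sigma, kappa, tau = 1.2, 1.0, 0.4, 1.0
L, N, dt, T = 120.0, 2048, 0.05, 60.0

dx = L / N
x = (np.arange(N) - N // 2) * dx
w = W0 * np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
w_hat = np.fft.fft(np.fft.ifftshift(w))          # kernel transform for the circular convolution

F = lambda a: np.clip(a, 0.0, kappa)             # piecewise linear rate function (7.29)
a = kappa * np.exp(-x**2)                        # localized initial bump

times, fronts = [], []
for n in range(int(T / dt)):
    conv = dx * np.real(np.fft.ifft(w_hat * np.fft.fft(a)))   # int w(x - x') a(x') dx'
    a = a + (dt / tau) * (-a + F(conv))
    if (n + 1) % 100 == 0:
        idx = np.where(a > 0.5 * kappa)[0]       # track the level set a = kappa/2
        times.append((n + 1) * dt)
        fronts.append(x[idx[-1]])

# estimate the front speed from the later part of the trajectory
speed = np.polyfit(times[len(times) // 2:], fronts[len(fronts) // 2:], 1)[0]
print(f"measured front speed ~ {speed:.2f} (linear spreading velocity c0 ~ 0.72 for W0 = 1.2)")
```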

The asymptotic analysis of the linear equation (7.31) also shows that, given a sufficiently localized initial condition, \(\vert a(x,t)\vert \sim {\text{e}}^{{-\lambda }^{{\ast}}\xi }\psi (\xi,t)\) as t → ∞, where ξ = x − c∗t and the leading-edge variable ψ(ξ, t) is given by

$$\displaystyle{ \psi (\xi,t) \approx \frac{{\text{e}}^{{-\xi }^{2}/(4\mathcal{D}t) }} {\sqrt{4\pi \mathcal{D}t}} }$$
(7.43)

with

$$\displaystyle{ \mathcal{D} = -\frac{\omega _{i}^{\prime\prime}({i\lambda }^{{\ast}})} {2} = \frac{{\lambda }^{{\ast}}} {2}\left.\frac{{d}^{2}c(\lambda )} {{\mathrm{d}\lambda }^{2}} \right \vert _{{\lambda }^{{\ast}}}. }$$
(7.44)

Positivity of \(\mathcal{D}\) follows from the fact that λ is a minimum of c(λ). However, as shown by Ebert and van Saarloos [162], although the spreading of the leading edge under linearization gives the right qualitative behavior, it fails to match correctly the traveling front solution of the full nonlinear system. In particular, the asymptotic front profile takes the form \(\mathcal{A}(\xi ) \sim \xi {\text{e}}^{{-\lambda }^{{\ast}}\xi }\) for \(\xi \gg 1\). The factor of ξ reflects the fact that at the saddle point the two branches of the velocity dispersion curve c(λ) meet, indicating a degeneracy. In order to match the \(\xi {\text{e}}^{{-\lambda }^{{\ast}}\xi }\) asymptotics of the front solution with the leading-edge solution, it is necessary to take the leading-edge function ψ(x, t) to be the so-called dipole solution of the diffusion equation [162]:

$$\displaystyle\begin{array}{rcl} \psi (x,t) = -\partial _{\xi }\frac{{\text{e}}^{{-\xi }^{2}/(4\mathcal{D}t) }} {\sqrt{4\pi \mathcal{D}t}} =\xi \frac{{\text{e}}^{{-\xi }^{2}/(4\mathcal{D}t) }} {\sqrt{2\pi }{(2\mathcal{D}t)}^{3/2}}.& &{}\end{array}$$
(7.45)

Putting all of this together, if the neural field equation supports a pulled front, then the leading edge should relax asymptotically as

$$\displaystyle{ \vert a\vert \sim \xi {\text{e}}^{{-\lambda }^{{\ast}}\xi }{\text{e}}^{{-\xi }^{2}/(4\mathcal{D}t) }{t}^{-3/2} }$$
(7.46)

with ξ = x − c∗t. Finally, writing

$$\displaystyle{{ \text{e}}^{{-\lambda }^{{\ast}}\xi }{t}^{-3/2} ={ \text{e}}^{{-\lambda }^{{\ast}}[x-{c}^{{\ast}}t-X(t)] },\quad X(t) = -\frac{3} {{2\lambda }^{{\ast}}}\ln t }$$
(7.47)

suggests that to leading order, the velocity relaxes to the pulled velocity c according to (see also [162])

$$\displaystyle{ v(t) = {c}^{{\ast}} +\dot{ X}(t) = {c}^{{\ast}}- \frac{3} {{2\lambda }^{{\ast}}t} + h.o.t. }$$
(7.48)

2 Traveling Pulses in Adaptive Neural Fields

Traveling fronts are not particularly realistic, since populations of cells do not stay in the excited state forever. Hence, rather than a traveling front, propagating activity in cortex is usually better described as a traveling pulse. (One example where fronts rather than pulses occur is wave propagation during binocular rivalry [83, 312, 369, 678]; see Sect. 8.) One way to generate a traveling pulse is to include some form of synaptic inhibition, provided that it is not too strong [8]. However, even in the absence of synaptic inhibition, most neurons possess intrinsic negative feedback mechanisms that slowly bring the cell back to resting voltages after periods of high activity. Possible nonlinear mechanisms include synaptic depression or spike frequency adaptation as discussed in Sect. 6.1. However, most analytical studies of traveling pulses in neural field models have been based on a simpler linear form of adaptation introduced by Pinto and Ermentrout [504]. (For an analysis of waves in neural fields with nonlinear adaptation, see, e.g., [135, 329].) The linear adaptation model is given by

$$\displaystyle\begin{array}{rcl} \frac{\partial u(x,t)} {\partial t} & =& -u(x,t) +\int _{ -\infty }^{\infty }w(x - x^{\prime})F(u(x^{\prime},t))dx^{\prime} -\beta q(x,t){}\end{array}$$
(7.49a)
$$\displaystyle\begin{array}{rcl} \frac{1} {\epsilon } \frac{\partial q(x,t)} {\partial t} & =& -q(x,t) + u(x,t),{}\end{array}$$
(7.49b)

where ε and β determine the rate and amplitude of linear adaptation. We first show how to construct a traveling pulse solution of Eq. (7.49) in the case of a Heaviside rate function F(u) = H(u − κ), following the particular formulation of [198, 696]. We then indicate how singular perturbation methods can be used to construct a traveling pulse for smooth F, as carried out by Pinto and Ermentrout [504]. The introduction of adaptation means that the neural field can support fronts or pulses, depending on whether there exist one or two stable homogeneous fixed points; see Fig. 7.3. We will focus on the latter here. Note, however, that linear (or nonlinear) adaptation can have a nontrivial effect on the propagation of traveling fronts [76, 80]. This is due to the occurrence of a symmetry breaking front bifurcation analogous to that found in reaction–diffusion systems [251, 252, 524, 561]. That is, a stationary front can undergo a supercritical pitchfork bifurcation at a critical rate of adaptation, leading to bidirectional front propagation. As in the case of reaction–diffusion systems, the front bifurcation acts as an organizing center for a variety of nontrivial dynamics including the formation of oscillatory fronts or breathers. The latter can occur, for example, through a Hopf bifurcation from a stationary front in the presence of a weak stationary input inhomogeneity [76].

Fig. 7.3

Plot of nullclines for the space-clamped planar system \(\dot{u} = -u + F(u) -\beta q\), \({\epsilon }^{-1}\dot{q} = -q + u\) with \(F(u) = 1/(1 +{ \text{e}}^{-\eta (u-\kappa )})\). The nullcline q = [−u + F(u)]∕β for β = 1.0 (β = 2.5) intercepts the straight nullcline q = u at three fixed points (one fixed point) and the corresponding spatially extended network acts as a bistable (excitable) medium. Other parameters are η = 20, κ = 0.25
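The bistable/excitable distinction in Fig. 7.3 can be confirmed with a few lines of code. The sketch below (illustrative only; function names are mine) locates the intersections of the two nullclines of the space-clamped system, i.e., solutions of F(u) = (1 + β)u with q = u, using the parameter values quoted in the caption.

```python
import numpy as np
from scipy.optimize import brentq

eta, kappa = 20.0, 0.25                      # parameters from the caption of Fig. 7.3
F = lambda u: 1.0 / (1.0 + np.exp(-eta * (u - kappa)))

def fixed_points(beta):
    """Fixed points of the space-clamped system: q = u together with -u + F(u) - beta*u = 0."""
    g = lambda u: F(u) - (1.0 + beta) * u
    us = np.linspace(0.0, 1.0, 4001)
    gs = g(us)
    roots = []
    for i in range(len(us) - 1):
        if gs[i] == 0.0:
            roots.append(us[i])
        elif gs[i] * gs[i + 1] < 0:
            roots.append(brentq(g, us[i], us[i + 1]))
    return roots

for beta in [1.0, 2.5]:
    fps = fixed_points(beta)
    regime = "bistable" if len(fps) == 3 else "excitable"
    print(f"beta = {beta}: fixed points at u = {[round(u, 3) for u in fps]}  ->  {regime} network")
```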

2.1 Exact Traveling Pulse Solution

Without loss of generality, let us consider a right-moving traveling pulse solution of the form (u(x, t), q(x, t)) = (U(x − ct), Q(x − ct)) with U(±∞), Q(±∞) = 0 and U(−Δ) = U(0) = κ; see Fig. 2.1(b). Here c, Δ denote the speed and width of the wave, respectively. We also assume that U(ξ) > κ for ξ ∈ (−Δ, 0) and U(ξ) < κ for ξ < −Δ and ξ > 0. Substituting this solution into Eq. (7.49) with ξ = x − ct then gives

$$\displaystyle\begin{array}{rcl} -cU^{\prime}(\xi ) + U(\xi ) +\beta Q(\xi )& =& \int _{-\varDelta }^{0}w(\xi -\xi ^{\prime})d\xi ^{\prime} \\ -cQ^{\prime}(\xi ) +\epsilon [Q(\xi ) - U(\xi )]& =& 0. {}\end{array}$$
(7.50)

It is useful to rewrite Eq. (7.50) in the matrix form

$$\displaystyle\begin{array}{rcl} \left (\begin{array}{cc} 1 &\beta \\ -\epsilon &\epsilon \end{array} \right )\left (\begin{array}{c} U\\ Q \end{array} \right ) - c\partial _{\xi }\left (\begin{array}{c} U\\ Q \end{array} \right ) = [W(\xi ) - W(\xi +\varDelta )]\left (\begin{array}{c} 1\\ 0 \end{array} \right )& &{}\end{array}$$
(7.51)

with \(W(\xi ) =\int _{ \xi }^{\infty }w(x)dx\). We proceed by diagonalizing the left-hand side of Eq. (7.51) using the right eigenvectors v of the matrix

$$\displaystyle{ \mathbf{M} = \left (\begin{array}{cc} 1 &\beta \\ -\epsilon &\epsilon \end{array} \right ). }$$
(7.52)

These are given by \(\mathbf{v}_{\pm } = {(\epsilon -\lambda _{\pm },\epsilon )}^{T}\) with corresponding eigenvalues

$$\displaystyle{ \lambda _{\pm } = \frac{1} {2}\left [1 +\epsilon \pm \sqrt{{(1+\epsilon )}^{2 } - 4\epsilon (1+\beta )}\right ]. }$$
(7.53)

We will assume that ε is sufficiently small so that β < (1 − ε)²∕(4ε) and consequently λ ± are real. (For a discussion of the effects of complex eigenvalues λ ± see [580].) Note that \(\mathbf{v}_{\pm }{\text{e}}^{\lambda _{\pm }\xi /c}\) are the corresponding null vectors of the linear operator on the left-hand side of Eq. (7.51).

$$\displaystyle{ \left (\begin{array}{c} \tilde{U}\\ \tilde{Q} \end{array} \right ) ={ \mathbf{T}}^{-1}\left (\begin{array}{c} U \\ Q \end{array} \right ),\quad \mathbf{T} = \left (\begin{array}{cc} \mathbf{v}_{+} & \mathbf{v}_{-} \end{array} \right ), }$$
(7.54)

then gives the pair of equations

$$\displaystyle\begin{array}{rcl} -c\partial _{\xi }\tilde{U} +\lambda _{+}\tilde{U}& =& \eta _{+}[W(\xi ) - W(\xi +\varDelta )] {}\\ -c\partial _{\xi }\tilde{Q} +\lambda _{-}\tilde{Q}& =& \eta _{-}[W(\xi ) - W(\xi +\varDelta )] {}\\ \end{array}$$

with \(\eta _{\pm } = \mp 1/(\lambda _{+} -\lambda _{-})\). Integrating the equation for \(\tilde{U}\) from −Δ to ξ, we have

$$\displaystyle\begin{array}{rcl} \tilde{U}(\xi ) ={ \text{e}}^{\lambda _{+}\xi /c}\left [\tilde{U}(-\varDelta ){\text{e}}^{\varDelta \lambda _{+}/c} -\frac{\eta _{+}} {c} \int _{-\varDelta }^{\xi }{\text{e}}^{-\lambda _{+}\xi ^{\prime}/c}[W(\xi ^{\prime}) - W(\xi ^{\prime}+\varDelta )]d\xi ^{\prime}\right ].& & {}\\ \end{array}$$

Finiteness of \(\tilde{U}\) in the limit ξ → ∞ requires the term in square brackets to vanish. Hence, we can eliminate \(\tilde{U}(-\varDelta )\) to obtain the result

$$\displaystyle{ \tilde{U}(\xi ) = \frac{\eta _{+}} {c} \int _{0}^{\infty }{\text{e}}^{-\lambda _{+}\xi ^{\prime}/c}[W(\xi ^{\prime}+\xi ) - W(\xi ^{\prime} +\xi +\varDelta )]d\xi ^{\prime}. }$$
(7.55)

Similarly,

$$\displaystyle{ \tilde{Q}(\xi ) = \frac{\eta _{-}} {c} \int _{0}^{\infty }{\text{e}}^{-\lambda _{-}\xi ^{\prime}/c}[W(\xi ^{\prime}+\xi ) - W(\xi ^{\prime} +\xi +\varDelta )]d\xi ^{\prime}. }$$
(7.56)

Performing the inverse transformation \(U = (\epsilon -\lambda _{+})\tilde{U} + (\epsilon -\lambda _{-})\tilde{Q}\) we have

$$\displaystyle\begin{array}{rcl} U(\xi )& =& \frac{1} {c}\int _{0}^{\infty }\left [\chi _{ +}{\text{e}}^{-\lambda _{+}\xi ^{\prime}/c} +\chi _{ -}{\text{e}}^{-\lambda _{-}\xi ^{\prime}/c}\right ][W(\xi ^{\prime}+\xi ) - W(\xi ^{\prime} +\xi +\varDelta )]d\xi ^{\prime},{}\end{array}$$
(7.57)

with \(\chi _{\pm } = (\epsilon -\lambda _{\pm })\eta _{\pm }\). The threshold conditions U(−Δ) =κ and U(0) =κ then yield a pair of equations whose solutions determine existence curves relating the speed c and width Δ of a pulse to the threshold κ [134, 198, 504].

For the sake of illustration, let w be given by the exponential function (7.1). In the domain ξ > 0, there is a common factor of \({\text{e}}^{-\xi /\sigma }\) in the integrand of Eq. (7.57) so that \(U(\xi ) =\kappa { \text{e}}^{-\xi /\sigma }\) for ξ > 0 provided that

$$\displaystyle{ \kappa = \frac{1} {2} \frac{\sigma (c+\epsilon \sigma )(1 -{\text{e}}^{-\varDelta /\sigma })} {{c}^{2} + c\sigma (1+\epsilon ) {+\sigma }^{2}\epsilon (1+\beta )}. }$$
(7.58)

(Note that for zero negative feedback (β = 0), Eq. (7.58) reduces to the formula for the wave speed of a front in the limit Δ → ∞.) On the other hand, when ξ < 0, one has to partition the integral of Eq. (7.57) into the separate domains ξ′ > |ξ|, |ξ|−Δ <ξ′ < |ξ| and ξ′ < |ξ|−Δ. This then determines the second threshold condition as well as the asymptotic behavior of U(ξ) in the limit ξ → −∞:

$$\displaystyle{ U(\xi ) = A_{+}{\text{e}}^{\lambda _{+}\xi /c} + A_{-}{\text{e}}^{\lambda _{-}\xi /c} + A_{0}{\text{e}}^{\xi /\sigma }. }$$
(7.59)
Fig. 7.4

Existence of right-moving traveling pulses in the case of the excitatory network (7.49) with linear adaptation for an exponential weight distribution (7.1). Here σ = 1, ε = 0.01 and β = 2.5. (a) Plot of pulse width Δ against threshold κ. (b) Plot of wave speed c against threshold κ. Stable (unstable) branches indicated by black (gray) curves

Fig. 7.5

(a) Rat cortical slices are bathed in picrotoxin (a GABA A blocker) and a stimulation electrode (SE) is placed in layers 5–6 to initiate epileptiform bursts. An electric field is applied globally or locally across the slice using Ag/AgCl electrodes (FE1,FE2). Layer five neurons have long apical dendrites and are easily polarizable by an electric field, which controls the effective firing threshold of the neuron. (b) The time for an activity pulse to travel between two recording electrodes R1 and R2 depends on the applied electric field, reflecting the dependence of wave speed on the effective firing threshold. [Adapted from Richardson, Schiff and Gluckman [521]]

where the amplitudes A ± and A 0 can be determined from matching conditions at the threshold crossing points [198, 504]. Note that the leading edge of the pulse is positive, whereas the trailing edge is negative due to the effects of adaptation. One finds that for sufficiently slow negative feedback (small ε) and large β there exist two pulse solutions: one narrow and slow and the other wide and fast. This is illustrated in Fig. 7.4. Note that a numerical value of c ∼ 1 in dimensionless units (σ =τ = 1) translates into a physical speed of 60–90 mm/s if the membrane time constant τ = 10 msec and the range of synaptic connections is σ = 600–900 μm. Numerically, the fast solution is found to be stable [504], and this can be confirmed analytically using an Evans function construction [134, 198, 507]; see below. Finally, note that one of the predictions of the neural field model is that the speed of wave propagation should increase as the threshold decreases [504]. Interestingly, this has been confirmed experimentally by applying electric fields to a disinhibited rat cortical slice [521]. The experimental setup is shown in Fig. 7.5. A positive (negative) electric field increases (decreases) the speed of wave propagation by altering the effective excitability of layer V pyramidal neurons. Such neurons have long apical dendrites and are easily polarizable by the electric field.
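As a modest numerical check (not from the original text), the following sketch evaluates the pulse profile (7.57) by direct quadrature for the exponential kernel and confirms that U(0) reproduces the closed-form leading-edge condition (7.58); imposing the pair of conditions U(0) = U(−Δ) = κ on top of this is what traces out the existence curves plotted in Fig. 7.4. The sample values of c and Δ are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

sigma, eps, beta = 1.0, 0.01, 2.5            # parameters as in Fig. 7.4

# eigenvalues (7.53) and the coefficients chi_± entering Eq. (7.57)
disc = np.sqrt((1 + eps)**2 - 4 * eps * (1 + beta))
lam_p, lam_m = 0.5 * (1 + eps + disc), 0.5 * (1 + eps - disc)
chi_p, chi_m = (lam_p - eps) / disc, (eps - lam_m) / disc    # chi_± = (eps - lam_±) * eta_±

def W(xi):
    """W(xi) = int_xi^infty w(x) dx for the exponential kernel (7.1) with sigma = 1."""
    return 0.5 * np.exp(-xi) if xi >= 0 else 1.0 - 0.5 * np.exp(xi)

def U(xi, c, Delta):
    """Pulse profile (7.57), evaluated by numerical quadrature."""
    f = lambda y: ((chi_p * np.exp(-lam_p * y / c) + chi_m * np.exp(-lam_m * y / c))
                   * (W(y + xi) - W(y + xi + Delta)))
    return quad(f, 0, np.inf, limit=200)[0] / c

def kappa_rhs(c, Delta):
    """Closed-form right-hand side of the leading-edge condition (7.58)."""
    return 0.5 * sigma * (c + eps * sigma) * (1 - np.exp(-Delta / sigma)) \
           / (c**2 + c * sigma * (1 + eps) + sigma**2 * eps * (1 + beta))

for c, Delta in [(0.5, 5.0), (1.0, 10.0), (1.5, 20.0)]:
    print(f"c = {c:3.1f}, Delta = {Delta:4.1f}:  U(0) = {U(0.0, c, Delta):.4f},"
          f"  Eq. (7.58) gives kappa = {kappa_rhs(c, Delta):.4f}")
```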

Construction of Evans function. Rewrite the neural field equation (7.49) in the integral form

$$\displaystyle\begin{array}{rcl} u(x,t) =\int _{ -\infty }^{\infty }\int _{ 0}^{\infty }w(y)\varPhi (s)F(u(x - y,t - s))dsdy -\beta \int _{ 0}^{\infty }\varPsi (s)u(x,t - s)ds,& & {}\end{array}$$
(7.60)

with \(\varPhi (t) ={ \text{e}}^{-t}H(t)\) and \(\varPsi (t) =\int _{ 0}^{t}\varPhi (s){\text{e}}^{-\epsilon (t-s)}ds\). Linearizing about the pulse solution by setting \(u(x,t) = U(\xi ) +\varphi (\xi ){\text{e}}^{\lambda t}\) gives

$$\displaystyle\begin{array}{rcl} \varphi (\xi )& =& \int _{-\infty }^{\infty }\int _{ \xi -y}^{\infty }w(y)\varPhi ((s + y-\xi )/c){\text{e}}^{-\lambda (s+y-\xi )/c}F^{\prime}(U(s))\varphi (s)\frac{ds} {c} dy \\ & & -\beta \int _{\xi }^{\infty }\varPsi ((s-\xi )/c){\text{e}}^{-\lambda (s-\xi )/c}\varphi (s)\frac{ds} {c}. {}\end{array}$$
(7.61)

Proceeding along similar lines to the analysis of front stability in Sect. 7.1, we set F(U) = H(U − κ) and use the identity

$$\displaystyle{ H^{\prime}(U(\xi )-\kappa ) =\delta (U(\xi )-\kappa ) = \frac{\delta (\xi )} {\vert U^{\prime}(0)\vert } + \frac{\delta (\xi +\varDelta )} {\vert U^{\prime}(-\varDelta )\vert }. }$$
(7.62)

This gives

$$\displaystyle\begin{array}{rcl} & & \varphi (\xi ) +\beta \int _{ \xi }^{\infty }\varPsi ((s-\xi )/c){\text{e}}^{-\lambda (s-\xi )/c}\varphi (s)\frac{ds} {c} \\ & & = \frac{\varphi (0)} {c\vert U^{\prime}(0)\vert }\mathcal{H}(\lambda,\xi ) + \frac{\varphi (-\varDelta )} {c\vert U^{\prime}(-\varDelta )\vert }\mathcal{H}(\lambda,\xi +\varDelta ) {}\end{array}$$
(7.63)

where

$$\displaystyle{ \mathcal{H}(\lambda,\xi ) =\int _{ \xi }^{\infty }w(y)\varPhi ((y-\xi )/c){\text{e}}^{-\lambda (y-\xi )/c}dy. }$$
(7.64)

Let \(\hat{\mathcal{H}}(\lambda,k)\) denote the Fourier transform of \(\mathcal{H}(\lambda,\xi )\) and \(\hat{\mathcal{G}}(\lambda,k)\) denote the Fourier transform of \(\varPsi (\xi /c){\text{e}}^{-\lambda \xi /c}\). Using Fourier transforms and the convolution theorem, Eq. (7.63) can then be rewritten as

$$\displaystyle\begin{array}{rcl} \varphi (\xi ) = \frac{\varphi (0)} {c\vert U^{\prime}(0)\vert }\mathcal{B}(\lambda,\xi ) + \frac{\varphi (-\varDelta )} {c\vert U^{\prime}(-\varDelta )\vert }\mathcal{B}(\lambda,\xi +\varDelta ),& & {}\end{array}$$
(7.65)

with \(\mathcal{B}(\lambda,\xi )\) the inverse transform of

$$\displaystyle{ \hat{\mathcal{B}}(\lambda,k) = \frac{\hat{\mathcal{H}}(\lambda,k)} {[1 +\beta \hat{ \mathcal{G}}(\lambda,-k)/c]}. }$$
(7.66)

Finally, the eigenvalues λ are determined by setting ξ = 0, −Δ and solving the resulting matrix equation \(\mathbf{f} = \mathcal{M}(\lambda )\mathbf{f}\) with \(\mathbf{f} = (\varphi (0),\varphi (-\varDelta ))\) and

$$\displaystyle\begin{array}{rcl} \mathcal{M}(\lambda ) = \frac{1} {c}\left (\begin{array}{cc} \frac{\mathcal{B}(\lambda,0)} {\vert U^{\prime}(0)\vert }& \frac{\mathcal{B}(\lambda,\varDelta )} {\vert U^{\prime}(-\varDelta )\vert } \\ \frac{\mathcal{B}(\lambda,-\varDelta )} {\vert U^{\prime}(0)\vert }& \frac{\mathcal{B}(\lambda,0)} {\vert U^{\prime}(-\varDelta )\vert } \end{array} \right ).& & {}\end{array}$$
(7.67)

It follows that the eigenvalues λ are zeros of the Evans function

$$\displaystyle{ \mathcal{E}(\lambda ) = \mbox{ Det}[\mathbf{1} -\mathcal{M}(\lambda )], }$$
(7.68)

where 1 denotes the identity matrix.

2.2 Singularly Perturbed Pulse Solution

In the case of slow adaptation (ε ≪ 1), Pinto and Ermentrout [504] showed how to construct a traveling pulse solution of Eq. (7.49) for a smooth firing rate function F by exploiting the existence of traveling front solutions of the corresponding scalar equation (7.2). The method is analogous to the construction of traveling pulses in the FitzHugh–Nagumo equation [316]; see Sect. 2.3. The basic idea is to analyze separately the fast and slow time behavior of solutions to Eq. (7.49) expressed in traveling wave coordinates:

$$\displaystyle\begin{array}{rcl} -c\,\frac{dU(\xi )} {d\xi } & =& -U(\xi ) -\beta Q(\xi ) +\int _{ -\infty }^{\infty }w(\xi -\xi ^{\prime})F(U(\xi ^{\prime}))d\xi ^{\prime},{}\end{array}$$
(7.69)
$$\displaystyle\begin{array}{rcl} -c\frac{dQ(\xi )} {d\xi } & =& \epsilon [-Q(\xi ) + U(\xi )].{}\end{array}$$
(7.70)

We will assume the normalization \(\int _{-\infty }^{\infty }w(y)dy = 1\). In the case of fast time, the slow adaptation is taken to be constant by setting ε = 0 so that we have the inner layer equations

$$\displaystyle\begin{array}{rcl} -c\,\frac{dU(\xi )} {d\xi } & =& -U -\beta Q_{0} +\int _{ -\infty }^{\infty }w(\xi -\xi ^{\prime})F(U(\xi ^{\prime}))d\xi ^{\prime},{}\end{array}$$
(7.71)
$$\displaystyle\begin{array}{rcl} -c\frac{dQ(\xi )} {d\xi } & =& 0.{}\end{array}$$
(7.72)

Since Q(ξ) = Q 0 is a constant, the term β Q 0 can be absorbed into the threshold of the firing rate function F by making the shift \(U(\xi ) \rightarrow U(\xi ) +\beta Q_{0}\). Hence Eq. (7.71) is equivalent to the scalar equation (7.7), which supports the propagation of traveling fronts. In the case of slow time, we introduce the compressed variable ζ = εξ so that

$$\displaystyle\begin{array}{rcl} -c\epsilon \,\frac{dU(\zeta )} {d\zeta } & =& -U(\zeta ) -\beta Q(\zeta ) + \frac{1} {\epsilon } \int _{-\infty }^{\infty }w([\zeta -\zeta ^{\prime}]/\epsilon )F(U(\zeta ^{\prime}))d\zeta ^{\prime},{}\end{array}$$
(7.73)
$$\displaystyle\begin{array}{rcl} -c\frac{dQ(\zeta )} {d\zeta } & =& -Q(\zeta ) + U(\zeta ).{}\end{array}$$
(7.74)

In the limit ε → 0, we have

$$\displaystyle{ \frac{1} {\epsilon } w([\zeta -\zeta ^{\prime}]/\epsilon ) \rightarrow \delta (\zeta -\zeta ^{\prime}) }$$

so that the first equation becomes

$$\displaystyle\begin{array}{rcl} \beta Q(\zeta ) = -U(\zeta ) + F(U(\zeta ))& &{}\end{array}$$
(7.75)

Inverting this equation yields two branches U = g ±(Q). Hence we obtain a slow time or outer layer equation on each branch (see Fig. 7.6):

$$\displaystyle\begin{array}{rcl} \frac{dQ} {d\zeta } = \frac{1} {c}[Q - g_{\pm }(Q)]& &{}\end{array}$$
(7.76)

The construction of the traveling pulse now proceeds by matching inner and outer solutions [504]. This can be visualized by considering the nullclines of the space-clamped version of Eq. (7.49); see Fig. 7.6. We assume that the gain of F and the strength β of adaptation are such that there is only a single fixed point of the space-clamped system:

  I. Starting at the unique fixed point, use the fast inner equations and the existence results of [172] to construct a leading front solution at Q = Q 0 with speed c 0 and matching conditions \(\lim _{\xi \rightarrow \pm \infty }U(\xi ) = g_{\pm }(Q_{0})\).

  II. Use the slow outer equations to determine the dynamics of Q along the upper branch U = g +(Q).

  III. The solution leaves the upper branch at some point Q 1. Once again use the fast inner equations and [172] to construct a trailing front solution with speed c 1 and matching conditions \(\lim _{\xi \rightarrow \pm \infty }U(\xi ) = g_{\mp }(Q_{1})\).

  IV. Finally, use the slow outer equations to determine the return to the fixed point along the lower branch.

In order to establish the existence of a traveling pulse solution, it remains to find a value Q 1 for which c 1 = −c 0 so that the leading and trailing edges of the pulse move at the same speed and thus the pulse maintains its shape as it propagates. (Since Q 0 is known, so is c 0.) Adapting the formula for the wave speed obtained in [172], we have

$$\displaystyle{ c_{1} = - \frac{\varGamma } {\int _{-\infty }^{\infty }{U^{\prime}}^{2}(\xi )F^{\prime}(U(\xi ))d\xi },\qquad \varGamma =\int _{ g_{-}(Q_{1})}^{g_{+}(Q_{1})}[-U - Q_{ 1} + F(U)]dU. }$$
(7.77)

Unfortunately, it is not possible to derive a closed form expression for the wave speed. However, the existence of a matching speed can be established provided that certain additional assumptions are made regarding the shape of the firing rate function; see [504] for more details.

Fig. 7.6

Singular perturbation construction of a traveling pulse in (a) the phase plane and (b) traveling wave coordinates. See text for details

3 Wave Propagation in Heterogeneous Neural Fields

Most studies of neural field theory assume that the synaptic weight distribution only depends upon the distance between interacting populations, that is, w(x, y) = w(|xy|). This implies translation symmetry of the underlying integrodifferential equations (in an unbounded or periodic domain) and an excitatory network can support the propagation of solitary traveling waves. However, if one looks more closely at the anatomy of cortex, it is clear that its detailed microstructure is far from homogeneous. For example, to a first approximation, primary visual cortex (V1) has a periodic-like microstructure on the millimeter length scale, reflecting the existence of various stimulus feature maps; see Sect. 8.1. This has motivated a number of studies concerned with the effects of a periodically modulated weight distribution on wave propagation in neural fields [64, 132, 332].

We first consider the voltage-based neural field equation (6.115) with periodically modulated weight distribution

$$\displaystyle{ w(x,y) = w(x - y)[1 +\rho K(y/\varepsilon )], }$$
(7.78)

where ρ is the amplitude of the periodic modulation and \(\varepsilon\) is the period with K(x) = K(x + 1) for all x. It will also be assumed that if ρ = 0 (no periodic modulation), then the resulting homogeneous network supports a traveling front solution of speed c 0 as analyzed in Sect. 7.1.1. We will describe two alternative methods for analyzing the effects of periodic wave modulation: one based on averaging theory for small \(\varepsilon\) [79], which adapts the method used to study propagation failure in myelinated axons (Sect. 2.5), and the other based on analyzing interfacial dynamics [132]. Both approaches make use of the observation that for sufficiently small ρ, numerical simulations of the inhomogeneous network show a front-like wave separating high and low activity states. However, the wave does not propagate with constant speed, but oscillates periodically in an appropriately chosen moving frame. This pulsating front solution satisfies the periodicity condition \(u(x,t) = u(x+\varepsilon,t + T)\) so that we can define the mean speed of the wave to be \(c =\varepsilon /T\). We will then consider the effects of periodically modulated weights on the propagation of pulled fronts in an activity-based neural field equation, extending the Hamilton–Jacobi method used in the analysis of CaMKII waves in Sect. 3.2.

3.1 Averaging Theory

Suppose that the period \(\varepsilon\) of weight modulations is much smaller than the range of synaptic interactions \(\varepsilon \ll \sigma\). (We fix the length scales by setting σ = 1.) Following the analysis of saltatory waves along myelinated axons (Sect. 2.5) and [318, 319], we want any inhomogeneous terms to be \(\mathcal{O}(\epsilon )\). Therefore, after substituting Eq. (7.78) into (6.115), we integrate by parts to obtain the equation

$$\displaystyle\begin{array}{rcl} \frac{\partial u(x,t)} {\partial t} & =& -u(x,t) +\int _{ -\infty }^{\infty }w(x - x^{\prime})F(u(x^{\prime},t))dx^{\prime} \\ & & +\varepsilon \int _{-\infty }^{\infty }\mathcal{K}(x^{\prime}/\varepsilon )\left [w^{\prime}(x - x^{\prime})F(u(x^{\prime},t)) - w(x - x^{\prime})\frac{\partial F(u(x^{\prime},t))} {\partial x^{\prime}} \right ]dx^{\prime}.{}\end{array}$$
(7.79)

Here \(\mathcal{K}^{\prime}(x) =\rho K(x)\) with \(\mathcal{K}\) only having to be defined up to an arbitrary constant. Motivated by the existence of pulsating front solutions, we perform the change of variables ξ = xϕ(t) and τ = t. Equation (7.79) becomes

$$\displaystyle\begin{array}{rcl} \frac{\partial u} {\partial \tau } & =& -u(\xi,\tau )+\int _{-\infty }^{\infty }w(\xi -\xi ^{\prime})F(u(\xi ^{\prime},\tau ))d\xi ^{\prime}+\phi ^{\prime}\frac{\partial u(\xi,\tau )} {\partial \xi } \\ & & +\varepsilon \int _{-\infty }^{\infty }\mathcal{K}\left (\frac{\xi ^{\prime}+\phi } {\varepsilon } \right )\left [w^{\prime}(\xi -\xi ^{\prime})F(u(\xi ^{\prime},\tau ))-w(\xi -\xi ^{\prime})\frac{\partial F(u(\xi ^{\prime},\tau ))} {\partial \xi ^{\prime}} \right ]d\xi ^{\prime}.{}\end{array}$$
(7.80)

Next perform the perturbation expansions

$$\displaystyle\begin{array}{rcl} u(\xi,\tau ) = U(\xi ) +\varepsilon u_{1}(\xi,\tau ) {+\varepsilon }^{2}u_{ 2}(\xi,\tau )+\ldots,& &{}\end{array}$$
(7.81)
$$\displaystyle\begin{array}{rcl} \phi ^{\prime}(\tau ) = c_{0} +\varepsilon \phi _{1}^{\prime}(\tau )& &{}\end{array}$$
(7.82)

where U(ξ) is the unique traveling wave solution of the corresponding homogeneous equation (7.7) with unperturbed wave speed c = c 0. The first-order term u 1 satisfies the inhomogeneous linear equation

$$\displaystyle{ -\frac{\partial u_{1}(\xi,\tau )} {\partial \tau } + \mathbb{L}u_{1}(\xi,\tau ) = -\phi _{1}^{\prime}(\tau )U^{\prime}(\xi ) + h_{1}(\xi,\phi /\varepsilon ) }$$
(7.83)

where

$$\displaystyle{ \mathbb{L}u(\xi ) = c_{0}\frac{du(\xi )} {d\xi } - u(\xi ) +\int _{ -\infty }^{\infty }w(\xi -\xi ^{\prime})F^{\prime}(U(\xi ^{\prime}))u(\xi ^{\prime})d\xi ^{\prime} }$$
(7.84)

and

$$\displaystyle\begin{array}{rcl} h_{1} =\int _{ -\infty }^{\infty }\mathcal{K}\left (\frac{\xi ^{\prime}+\phi } {\varepsilon } \right )\left [-w^{\prime}(\xi -\xi ^{\prime})F(U(\xi ^{\prime})) + w(\xi -\xi ^{\prime})\frac{dF(U(\xi ^{\prime}))} {d\xi ^{\prime}} \right ]d\xi ^{\prime}.& &{}\end{array}$$
(7.85)

The linear operator \(\mathbb{L}\) has a one-dimensional null-space spanned by U′. The existence of U′ as a null vector follows immediately from differentiating both sides of Eq. (7.7) with respect to ξ, whereas its uniqueness can be shown using properties of positive linear operators [172]. Therefore, a bounded solution of Eq. (7.83) with respect to ξ and τ will only exist if the right-hand side of Eq. (7.83) is orthogonal to all elements of the null-space of the adjoint operator \({\mathbb{L}}^{\dag }\). The latter is defined with respect to the inner product

$$\displaystyle{ \int _{-\infty }^{\infty }u(\xi )\mathbb{L}v(\xi )d\xi =\int _{ -\infty }^{\infty }\left [{\mathbb{L}}^{\dag }u(\xi )\right ]v(\xi )d\xi }$$
(7.86)

where u(ξ) and v(ξ) are arbitrary integrable functions. Hence,

$$\displaystyle{ {\mathbb{L}}^{\dag }u(\xi ) = -c\frac{du(\xi )} {d\xi } - u(\xi ) + F^{\prime}(U(\xi ))\int _{-\infty }^{\infty }w(\xi -\xi ^{\prime})u(\xi ^{\prime})d\xi ^{\prime}. }$$
(7.87)

It can be proven that \({\mathbb{L}}^{\dag }\) also has a one-dimensional null-space [172], that is, it is spanned by some function V (ξ). Equation (7.83) thus has a bounded solution if and only if

$$\displaystyle{ B_{0}\phi _{1}^{\prime}(\tau ) =\int _{ -\infty }^{\infty }V (\xi )h_{ 1}(\xi,\phi /\varepsilon )d\xi }$$
(7.88)

where

$$\displaystyle{ B_{0} =\int _{ -\infty }^{\infty }V (\xi )U^{\prime}(\xi )d\xi. }$$
(7.89)

Note that B 0 is strictly positive since V and U′ can be chosen to have the same sign [172]. Substituting for h 1 using Eqs. (7.85) and (7.82) and performing an integration by parts leads to a differential equation for the phase ϕ:

$$\displaystyle\begin{array}{rcl} \frac{d\phi } {d\tau } = c +\varepsilon \varPhi _{1}\left (\frac{\phi } {\varepsilon }\right ),& &{}\end{array}$$
(7.90)

where

$$\displaystyle\begin{array}{rcl} \varPhi _{1}\left (\frac{\phi } {\varepsilon }\right )& =& \frac{1} {B_{0}}\int _{-\infty }^{\infty }\int _{ -\infty }^{\infty }w(\xi -\xi ^{\prime})\mathcal{K}\left (\frac{\xi ^{\prime}+\phi } {\varepsilon } \right ) \\ & & \quad \times \left [V ^{\prime}(\xi )F(U(\xi ^{\prime})) + V (\xi )\frac{dF(U(\xi ^{\prime}))} {d\xi ^{\prime}} \right ]d\xi ^{\prime}d\xi.{}\end{array}$$
(7.91)

The phase equation (7.90) is identical in form to the one derived in Sect. 2.5 for wave propagation along myelinated axons; see Eq. (2.72). It implies that there are two distinct types of behavior. If the right-hand side of Eq. (7.90) is strictly positive, then there exists a pulsating front of the approximate form U(x −ϕ(t)) and the average speed of propagation is \(c = \varepsilon /T\) with

$$\displaystyle{ T =\int _{ 0}^{\varepsilon } \frac{d\phi } {c +\varepsilon \varPhi _{1}\left (\frac{\phi }{\varepsilon }\right )}. }$$
(7.92)

On the other hand, if the right-hand side of Eq. (7.90) vanishes for some ϕ, then there is wave propagation failure.

In the case of a Heaviside firing rate function F(u) = H(u −κ), it is possible to derive an explicit expression for the wave speed c [64]. The solution for the unperturbed wave front U(ξ) was derived in Sect. 7.1, so it is only necessary to determine the solution V (ξ) of the adjoint equation (7.87), which becomes

$$\displaystyle{ cV ^{\prime}(\xi ) + V (\xi ) = - \frac{\delta (\xi )} {U^{\prime}(0)}\int _{-\infty }^{\infty }w(\xi ^{\prime})V (\xi ^{\prime})d\xi ^{\prime}. }$$
(7.93)

This can be integrated to give

$$\displaystyle{ V (\xi ) = -H(\xi ){\text{e}}^{-\xi /c}. }$$
(7.94)

Given the solutions for U(ξ) and V (ξ), it can then be shown that (7.91) reduces to the form

$$\displaystyle\begin{array}{rcl} B_{0}\varPhi _{1}\left (\frac{\phi } {\varepsilon }\right ) = W(0)\mathcal{K}\left (\frac{\phi } {\varepsilon }\right ) +\int _{ 0}^{\infty }\mathcal{K}\left (\frac{\phi -\xi } {\varepsilon } \right )\left [\frac{W(\xi )} {c} - w(\xi )\right ]d\xi,& &{}\end{array}$$
(7.95)

where

$$\displaystyle{ W(\xi ) =\int _{ 0}^{\infty }{\text{e}}^{-y/c_{0} }w(y+\xi )dy \equiv -cU^{\prime}(\xi ), }$$
(7.96)

and

$$\displaystyle{ B_{0} = \frac{1} {c_{0}}\int _{0}^{\infty }{\text{e}}^{-\xi /c_{0} }W(\xi )d\xi. }$$
(7.97)

Keeping only the lowest-order contribution to Φ 1, Eq. (7.92) reduces to

$$\displaystyle{ T =\int _{ 0}^{\varepsilon } \frac{d\phi } {c_{0} +\varepsilon \varGamma (c_{0})A\left (\frac{\phi }{\varepsilon }\right )} }$$
(7.98)

with Γ(c 0) = W(0)∕B 0. For the sake of illustration, suppose that the periodic modulation functions K and A are pure sinusoids. Setting A(x) = ρ sin(2π x)∕(2π) in Eq. (7.98), we find that

$$\displaystyle{ T = \frac{\varepsilon } {\sqrt{c_{0}^{2} -{\varepsilon }^{2}{\rho }^{2}\varGamma (c_{0})^{2}/{(2\pi )}^{2}}} }$$
(7.99)

and, hence,

$$\displaystyle{ c = \sqrt{c_{0}^{2} -{\varepsilon }^{2}{\rho }^{2}\varGamma (c_{0})^{2}/{(2\pi )}^{2}}. }$$
(7.100)

This establishes that a sinusoidally varying heterogeneous neural medium only supports a propagating wave if the velocity c 0 of the (unique) solution of the corresponding homogeneous medium satisfies the inequality

$$\displaystyle{ c_{0} \geq \frac{\varepsilon \rho \varGamma (c_{0})} {2\pi }. }$$
(7.101)

For the particular example of an exponential distribution (7.1) with σ = 1, we have c 0 = (1 − 2κ)∕(2κ) and Γ(c 0) = 1 + c 0 so that

$$\displaystyle{ c = c_{0}\sqrt{1 -\gamma _{0}^{2}{\rho }^{2}{\varepsilon }^{2}},\quad \gamma _{0} = \frac{1} {2\pi (2\kappa - 1)}. }$$
(7.102)
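As a quick numerical check of Eqs. (7.100)–(7.102), the following Python snippet (a minimal sketch; the values of κ, ρ, and ε are purely illustrative) evaluates the averaged wave speed for the exponential weight distribution with σ = 1 and confirms that the two expressions coincide:

import numpy as np

# Averaged wave speed for a sinusoidally modulated medium, Eqs. (7.100)-(7.102),
# for the exponential weight distribution with sigma = 1 (illustrative values).
kappa, rho, eps = 0.3, 0.5, 0.1                 # threshold, modulation amplitude, period
c0 = (1 - 2*kappa)/(2*kappa)                    # unperturbed speed
Gamma = 1 + c0                                  # Gamma(c0) = W(0)/B0 for this kernel

arg = c0**2 - (eps*rho*Gamma/(2*np.pi))**2      # Eq. (7.100); negative => propagation failure
c_mean = np.sqrt(arg) if arg > 0 else 0.0

gamma0 = 1.0/(2*np.pi*(2*kappa - 1))            # Eq. (7.102)
c_check = c0*np.sqrt(1 - (gamma0*rho*eps)**2)

print(c_mean, c_check)                          # the two expressions agree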
Fig. 7.7

Pulsating pulse solutions in a 1D excitatory neural field with linear adaptation and Heaviside firing rate function; see Eq. (7.49). The threshold is κ = 0. 2, strength of adaptation is β = 2. 0, and adaptation rate constant is ε = 0. 04. The weight distribution is given by \(w(x,y) =\rho w(x - y)\sin (2\pi x/\varepsilon )\) with \(2\pi \varepsilon = 0.3\) and w(x) an exponential weight function. (a) Single-bump solution for ρ = 0. 3. The interior of the pulse consists of non-propagating, transient ripples. (b) Multi-bump solution for ρ = 0. 8. The solitary pulse corresponds to the envelope of a multiple bump solution, in which individual bumps are non-propagating and transient. The disappearance of bumps at one end and the emergence of new bumps at the other end generate the propagation of activity [332]

The above averaging method can also be extended to the case of periodically modulated traveling pulses (pulsating pulses) (see [332]), in which there are two threshold crossing points. One simplifying assumption of the analysis is that, in the presence of periodically modulated weights, additional threshold crossing points do not occur. However, numerical solutions of a neural field equation with linear adaptation have shown that in the case of large amplitude modulations, a pulsating pulse can develop multiple threshold crossing points [332]. That is, the traveling wave represents the envelope of a multi-bump solution, in which individual bumps are non-propagating and transient; see Fig. 7.7. The appearance (disappearance) of bumps at the leading (trailing) edge of the pulse generates the coherent propagation of the pulse. Wave propagation failure occurs when activity is insufficient to maintain bumps at the leading edge.

3.2 Interfacial Dynamics

The averaging method provides a reasonable estimate for the mean wave speed and the critical amplitude ρ for wave propagation failure, provided that the spatial period satisfies \(\varepsilon \ll 1\). As shown by Coombes and Laing [132] in the case of a Heaviside firing rate function, a more accurate estimate of the wave speed for larger values of \(\varepsilon\) can be obtained by analyzing the dynamics of the interface between high and low activity states, provided that the amplitude of the periodic modulation is not too large. The basic idea is to change to the co-moving frame of the unperturbed system, u = u(ξ, t) with ξ = x − c 0 t, such that Eq. (6.115) becomes

$$\displaystyle{ -c_{0}u_{\xi } + u_{t} = -u +\int _{ -\infty }^{\infty }w(\xi +c_{ 0}t,y)F(u(y - c_{0}t,t))dy, }$$
(7.103)

with w given by Eq. (7.78) and F(u) = H(u −κ). The moving interface (level set) is then defined according to the threshold condition

$$\displaystyle{ u(\xi _{0}(t),t) =\kappa. }$$
(7.104)

Differentiating with respect to t then determines the velocity of the interface in the co-moving frame according to

$$\displaystyle{ \frac{d\xi _{0}} {dt} = -\frac{u_{t}(\xi _{0}(t),t)} {u_{\xi }(\xi _{0}(t),t)}. }$$
(7.105)

As in the previous averaging method, suppose that for ρ = 0, there exists a traveling front solution U(ξ) of the homogeneous equation (7.7) with speed c 0. Now make the approximation \(u_{\xi }(\xi _{0}(t),t) = U^{\prime}(0)\), which is based on the assumption that for small amplitudes ρ, the slope of the traveling front varies sufficiently slowly. Setting ξ =ξ 0(t) in Eq. (7.103) and using Eq. (7.3), it is then straightforward to show that [132]

$$\displaystyle{ \frac{d\xi _{0}} {dt} =\rho c_{0}\frac{\int _{0}^{\infty }w(y)K(\xi _{ 0} + c_{0}t - y)dy} {\kappa -\int _{0}^{\infty }w(y)dy}. }$$
(7.106)

In order to match up with the previous method, let \(K(x) =\sin (2\pi x/\varepsilon )\) and \(w(x) ={ \text{e}}^{-\vert x\vert }/2\). Then c 0 = (1 − 2κ)∕(2κ) and [132]

$$\displaystyle{ \frac{d\xi _{0}} {dt} = c_{0}\rho \gamma (\varepsilon )\sin \left [\frac{2\pi } {\varepsilon } (\xi _{0}(t) + c_{0}t) +\phi _{0}(\varepsilon )\right ], }$$
(7.107)

with

$$\displaystyle{ \gamma (\varepsilon ) = \frac{1} {2\kappa - 1} \frac{1} {\sqrt{1 + {(2\pi /\varepsilon )}^{2}}},\quad \tan \phi _{0}(\varepsilon ) = \frac{2\pi } {\varepsilon }. }$$
(7.108)

The final step is to look for a T-periodic solution of Eq. (7.107) such that \(\xi _{0}(t) =\xi _{0}(t + T)\). Setting \(x_{0} =\xi _{0} + c_{0}t\) with \(x_{0} \in [0,\varepsilon ]\) and integrating gives

$$\displaystyle{ \int _{0}^{x_{0} } \frac{dx} {1 +\rho \gamma \sin (2\pi x/\varepsilon +\phi _{0})} = c_{0}t. }$$
(7.109)

This may be evaluated using a half-angle substitution,

$$\displaystyle{ c_{0}t = \frac{\varepsilon } {\pi } \frac{1} {\sqrt{1 -{\rho }^{2}{\gamma }^{2}}}\left .{\tan }^{-1} \frac{z} {\sqrt{1 -{\rho }^{2}{\gamma }^{2}}}\right \vert _{z_{0}(0)+\rho \gamma }^{z_{0}(t)+\rho \gamma }, }$$
(7.110)

where \(z_{0}(t) =\tan [(2\pi x_{0}(t)/\varepsilon +\phi _{0})/2]\) and x 0(0) = 0. A self-consistent pulsating front solution is then obtained by imposing the condition \(\varepsilon = x_{0}(T)\), which then determines the effective speed \(c =\varepsilon /T\) to be

$$\displaystyle{ c = c_{0}\sqrt{1 -{\rho }^{2}\gamma {(\varepsilon )}^{2}}. }$$
(7.111)

Note that on Taylor expanding \(\gamma (\varepsilon )\) to first order in \(\varepsilon\), Eq. (7.111) recovers the corresponding result (7.102) obtained using averaging theory. However, the expression derived using interfacial dynamics remains accurate at larger values of the period \(\varepsilon\), provided that the amplitude ρ does not become too large.
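The following sketch compares the two estimates over a range of periods ε, assuming the exponential weight w(x) = e^{−|x|}∕2 and illustrative values of κ and ρ; for small ε the two expressions agree, while they separate as ε increases:

import numpy as np

# Compare the interfacial estimate (7.111) with the averaging result (7.102).
kappa, rho = 0.3, 0.5
c0 = (1 - 2*kappa)/(2*kappa)
gamma0 = 1.0/(2*np.pi*(1 - 2*kappa))            # |gamma_0|; only its square enters

for eps in [0.1, 0.3, 0.5, 1.0]:
    gamma_eps = 1.0/((1 - 2*kappa)*np.sqrt(1 + (2*np.pi/eps)**2))  # |gamma(eps)|, Eq. (7.108)
    c_interface = c0*np.sqrt(max(1 - (rho*gamma_eps)**2, 0.0))     # Eq. (7.111)
    c_average   = c0*np.sqrt(max(1 - (gamma0*rho*eps)**2, 0.0))    # Eq. (7.102)
    print(eps, c_interface, c_average)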

3.3 Hamilton–Jacobi Dynamics and Slow Spatial Heterogeneities

We now turn to the effects of periodically modulated weights on the propagation of pulled front solutions of the activity-based neural field equation (7.27). In the case of high-frequency modulation, Coombes and Laing [132] adapted previous work by Shigesada et al. on pulsating fronts in reaction–diffusion models of the spatial spread of invading species into heterogeneous environments [574, 575]. (In Sect. 3.2.2 we applied the theory of pulsating fronts to CaMKII translocation waves along spiny dendrites.) We briefly sketch the basic steps in the analysis. First, substitute the periodically modulated weight distribution (7.78) into Eq. (7.27) and linearize about the leading edge of the wave where a(x, t) ∼ 0:

$$\displaystyle\begin{array}{rcl} \frac{\partial a(x,t)} {\partial t} & =& -a(x,t) +\int _{ -\infty }^{\infty }w(x - y)[1 + K(y/\varepsilon )]a(y,t)dy.{}\end{array}$$
(7.112)

Now assume a solution of the form a(x, t) = A(ξ)P(x), ξ = x − ct, with A(ξ) → 0 as ξ → ∞ and \(P(x + 2\pi \varepsilon ) = P(x)\). Substitution into Eq. (7.112) then gives

$$\displaystyle\begin{array}{rcl} -cP(x)A^{\prime}(\xi ) = -P(x)A(\xi ) +\int _{ -\infty }^{\infty }w(x - y)[1 + K(y/\varepsilon )]P(y)A(\xi -[x - y])dy.& &{}\end{array}$$
(7.113)

Taking \(A(\xi ) \sim {\text{e}}^{-\lambda \xi }\) and substituting into the above equation yields a nonlocal version of the Hill equation:

$$\displaystyle{ (1 + c\lambda )P(x) =\int _{ -\infty }^{\infty }{\text{e}}^{\lambda [x-y]}w(x - y)[1 + K(y/\varepsilon )]P(y)dy. }$$
(7.114)

In order to determine the minimal wave speed, it is necessary to find a bounded periodic solution P(x) of Eq. (7.114), which yields a corresponding dispersion relation c = c(λ), whose minimum with respect to λ can then be determined (assuming it exists). One way to obtain an approximate solution to Eq. (7.114) is to use Fourier methods to derive an infinite matrix equation for the Fourier coefficients of the periodic function P(x) and then to numerically solve a finite truncated version of the matrix equation. This is the approach followed in [132]. The matrix equation takes the form

$$\displaystyle{ (1 + c\lambda )P_{m} = \mathcal{W}(\lambda -im/\varepsilon )P_{m} + \mathcal{W}(\lambda -im/\varepsilon )\sum _{l}K_{l}P_{m-l}, }$$
(7.115)

where \(K(x/\varepsilon ) =\sum _{n}K_{n}{\text{e}}^{inx/\varepsilon }\), \(P(x) =\sum _{n}P_{n}{\text{e}}^{inx/\varepsilon }\), and \(\mathcal{W}(p) =\hat{ W}(p) +\hat{ W}(-p)\), with \(\hat{W}(p)\) the Laplace transform of w(x). One finds that the mean velocity of a pulsating front increases with the period \(2\pi \varepsilon\) of the synaptic modulations [132]. This is illustrated in Fig. 7.8, which shows space–time plots of a pulsating front for \(\varepsilon = 0.5\) and \(\varepsilon = 0.8\).
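A minimal numerical sketch of the truncated matrix problem is given below. It assumes a Gaussian kernel whose bilateral Laplace transform is \(\mathcal{W}(p) = W_{0}\exp ({\sigma }^{2}{p}^{2}/2)\), the modulation K(x) = cos(x∕ε) (so that only K ±1 = 1∕2 are nonzero), and a simple heuristic for selecting the physically relevant, approximately real eigenvalue; it is intended only to illustrate how c(λ) and its minimum can be estimated.

import numpy as np

# Truncated version of the matrix equation (7.115): (1 + c*lam)P = M(lam)P,
# with M_{mn} = W(lam - i*m/eps)*(delta_{mn} + K_{m-n}) and K_{+-1} = 1/2.
W0, sigma, eps, Mmax = 1.2, 1.0, 0.5, 20        # truncation |m| <= Mmax (assumed values)

def calW(p):                                    # assumed Gaussian transform
    return W0*np.exp(0.5*(sigma*p)**2)

def c_of_lambda(lam):
    ms = np.arange(-Mmax, Mmax + 1)
    A = np.zeros((ms.size, ms.size), dtype=complex)
    for i, m in enumerate(ms):
        wm = calW(lam - 1j*m/eps)
        A[i, i] = wm
        if i > 0:           A[i, i - 1] += 0.5*wm   # K_{+1} coupling
        if i < ms.size - 1: A[i, i + 1] += 0.5*wm   # K_{-1} coupling
    mu = np.linalg.eigvals(A)
    mu0 = mu[np.argmax(mu.real)]                # heuristic: branch with largest real part
    return (mu0.real - 1)/lam

lams = np.linspace(0.2, 2.0, 60)
print(min(c_of_lambda(l) for l in lams))        # estimate of the minimal wave speed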

Fig. 7.8

Space–time contour plots of a pulsating front solution of the neural field equation (7.112) with piecewise linear firing rate function (7.29), Gaussian weight distribution (7.38), and a \(2\pi \varepsilon\)-periodic modulation of the synaptic weights, \(K(x) =\cos (x/\varepsilon )\). (a) \(\varepsilon = 0.5\) and (b) \(\varepsilon = 0.8\). Other parameters are W 0 = 1. 2, σ = 1. 0, and κ = 0. 4

Now suppose that there is a slowly varying spatial modulation of the synaptic weight distribution (relative to the range of synaptic interactions). (Although we do not have a specific example of long-wavelength modulations in mind, we conjecture that these might be associated with inter-area cortical connections. For example, it has been shown that heterogeneities arise as one approaches the V1/V2 border in visual cortex, which has a number of effects including the generation of reflected waves [688].) In the case of slow modulations, one can extend the Hamilton–Jacobi theory of sharp interfaces developed originally for PDEs (see [178, 203, 204, 212, 421] and Sect. 3.3.2) to the case of neural fields [70]. In order to illustrate this, consider a heterogeneous version of the activity-based neural field equation (7.27) of the form

$$\displaystyle\begin{array}{rcl} \frac{\partial a(x,t)} {\partial t} & =& -a(x,t) + F\left (\int _{-\infty }^{\infty }w(x - x^{\prime})J(\varepsilon x^{\prime})a(x^{\prime},t)dx^{\prime}\right ),{}\end{array}$$
(7.116)

in which there is a slow (nonperiodic) spatial modulation \(J(\varepsilon x)\) of the synaptic weight distribution with \(\varepsilon \ll 1\).

Recall from Sect. 3.3.2 that the first step in the Hamilton–Jacobi method is to rescale space and time in Eq. (7.116) according to \(t \rightarrow t/\varepsilon\) and \(x \rightarrow x/\varepsilon\) [178, 204, 421]:

$$\displaystyle\begin{array}{rcl} \varepsilon \frac{\partial a(x,t)} {\partial t} & =& -a(x,t) + F\left (\frac{1} {\varepsilon } \int _{-\infty }^{\infty }w([x - x^{\prime}]/\varepsilon )J(x^{\prime})a(x^{\prime},t)dx^{\prime}\right ).{}\end{array}$$
(7.117)

Under this hyperbolic rescaling, the front region where the activity a(x, t) rapidly increases as x decreases from infinity becomes a step as \(\varepsilon \rightarrow 0\); see Fig. 7.2(b). This motivates introducing the WKB approximation

$$\displaystyle{ a(x,t) \sim {\text{e}}^{-G(x,t)/\varepsilon } }$$
(7.118)

with G(x, t) > 0 for all x > x(t) and G(x(t), t) = 0. The point x(t) determines the location of the front and \(c =\dot{ x}\). Substituting (7.118) into Eq. (7.117) gives

$$\displaystyle\begin{array}{rcl} -\partial _{t}G(x,t) = -1 + \frac{1} {\varepsilon } \int _{-\infty }^{\infty }w([x - x^{\prime}]/\varepsilon )J(x^{\prime}){\text{e}}^{-[G(x^{\prime},t)-G(x,t)]/\varepsilon }dx^{\prime}.& &{}\end{array}$$
(7.119)

We have used the fact that for x > x(t) and \(\varepsilon \ll 1\), the solution is in the leading edge of the front so that F can be linearized. Equation (7.119) can be simplified using the method of steepest descents [70]; see below. This yields the equation

$$\displaystyle\begin{array}{rcl} -\partial _{t}G(x,t) = -1 +\tilde{ w}(i\partial _{x}G(x,t))J(x),& &{}\end{array}$$
(7.120)

where \(\tilde{w}(k)\) is the Fourier transform of w(x):

$$\displaystyle{ w(x) =\int _{ -\infty }^{\infty }\tilde{w}(k){\text{e}}^{ikx}\frac{dk} {2\pi }. }$$
(7.121)

Equation (7.120) is formally equivalent to the Hamilton–Jacobi equation

$$\displaystyle{ \partial _{t}G + H(\partial _{x}G,x) = 0 }$$
(7.122)

with corresponding Hamiltonian

$$\displaystyle{ H(p,x) = -1 +\tilde{ w}(ip)J(x) }$$
(7.123)

where p = x G is interpreted as the conjugate momentum of x, and \(\tilde{w}(ip) = \mathcal{W}(p)\). It follows that the Hamilton–Jacobi equation (7.122) can be solved in terms of the Hamilton equations

$$\displaystyle\begin{array}{rcl} \frac{dx} {ds} = \frac{\partial H} {\partial p} = J(x)\mathcal{W}^{\prime}(p) = J(x)[\hat{W}^{\prime}(p) -\hat{ W}^{\prime}(-p)]& &{}\end{array}$$
(7.124)
$$\displaystyle\begin{array}{rcl} \frac{dp} {ds} = -\frac{\partial H} {\partial x} = -J^{\prime}(x)\mathcal{W}(p).& &{}\end{array}$$
(7.125)

Let X(s; x, t), P(s; x, t) denote the solution with x(0) = 0 and x(t) = x. We can then determine G(x, t) according to

$$\displaystyle{ G(x,t) = -E(x,t)t +\int _{ 0}^{t}P(s;x,t)\dot{X}(s;x,t)ds. }$$
(7.126)

Here

$$\displaystyle{ E(x,t) = H(P(s;x,t),X(s;x,t)), }$$
(7.127)

which is independent of s due to conservation of “energy,” that is, the Hamiltonian is not an explicit function of time.
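For a concrete illustration, Hamilton's equations (7.124) and (7.125) can be integrated numerically along a single characteristic. The sketch below assumes a Gaussian kernel with \(\mathcal{W}(p) = W_{0}\exp ({\sigma }^{2}{p}^{2}/2)\), a linear modulation J(x) = 1 + βx, and launches the ray from x(0) = 0 with an illustrative initial momentum λ 0; it is a forward integration of the characteristics only, not a solution of the full two-point boundary-value problem for G(x, t).

import numpy as np
from scipy.integrate import solve_ivp

W0, sigma, beta, lam0 = 1.2, 1.0, 0.01, 1.0              # illustrative parameter values

calW  = lambda p: W0*np.exp(0.5*(sigma*p)**2)            # W(p) = hat W(p) + hat W(-p)
dcalW = lambda p: W0*sigma**2*p*np.exp(0.5*(sigma*p)**2) # W'(p)
J  = lambda x: 1.0 + beta*x                              # assumed slow modulation
dJ = lambda x: beta

def hamilton(s, y):
    x, p = y
    return [J(x)*dcalW(p), -dJ(x)*calW(p)]               # Eqs. (7.124)-(7.125)

sol = solve_ivp(hamilton, [0.0, 20.0], [0.0, lam0], max_step=0.05)
print(sol.y[0, -1], sol.y[1, -1])                        # position and momentum at s = 20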

Steepest descent calculation of G. The derivation of Eq. (7.120) using steepest descents proceeds as follows. First, substituting the Fourier transform (7.121) into (7.119) and reversing the order of integration gives

$$\displaystyle\begin{array}{rcl} -\partial _{t}G(x,t) = -1 + \frac{1} {\varepsilon } \int _{-\infty }^{\infty }\int _{ -\infty }^{\infty }\tilde{w}(k)J(x^{\prime}){\text{e}}^{-S(k,x^{\prime};x,t)/\varepsilon }dx^{\prime}\frac{dk} {2\pi },& & {}\end{array}$$
(7.128)

where

$$\displaystyle{ S(k,x^{\prime}; x,t) = ik(x^{\prime} - x) + G(x^{\prime},t) - G(x,t). }$$
(7.129)

Exploiting the fact that \(\varepsilon\) is small, we perform steepest descents with respect to the x′ variable with (k, x, t) fixed. Let x′ = z(k, t) denote the stationary point for which ∂ S∕∂ x′ = 0, which is given by the solution to the implicit equation

$$\displaystyle{ ik + \partial _{x}G(x^{\prime},t) = 0. }$$
(7.130)

Taylor expanding S about this point (assuming it is unique) gives to second order

$$\displaystyle\begin{array}{rcl} S(k,x^{\prime}; x,t)& \approx & S(k,z(k,t); x,t) + \left.\frac{1} {2} \frac{{\partial }^{2}S} {\partial {x^{\prime}}^{2}} \right \vert _{x^{\prime}=z(k,t)}{(x^{\prime} - z(k,t))}^{2} \\ & =& ik[z(k,t) - x] + G(z(k,t),t) - G(x,t) \\ & & \qquad -\frac{1} {2}\partial _{xx}G(z(k,t),t){(x^{\prime} - z(k,t))}^{2}. {}\end{array}$$
(7.131)

Substituting into Eq. (7.128) and performing the resulting Gaussian integral with respect to x′ yields the result

$$\displaystyle\begin{array}{rcl} -\partial _{t}G(x,t)& =& -1 + \frac{1} {\varepsilon } \int _{-\infty }^{\infty }\sqrt{ \frac{2\pi \varepsilon } {\partial _{xx}G(z(k,t),t)}}\tilde{w}(k)J(z(k,t)) \\ & & \qquad \times {\text{e}}^{-\left (ik[z(k,t) - x] + G(z(k,t),t) - G(x,t)\right )/\varepsilon }\frac{dk} {2\pi }. {}\end{array}$$
(7.132)

This can be rewritten in the form

$$\displaystyle\begin{array}{rcl} -\partial _{t}G(x,t)& =& -1 + \frac{1} {\sqrt{2\pi \varepsilon }}\int _{-\infty }^{\infty }\tilde{w}(k)J(z(k,t)){\text{e}}^{-\hat{S}(k;x,t)/\varepsilon }dk, {}\end{array}$$
(7.133)

where

$$\displaystyle{ \hat{S}(k; x,t) = ik[z(k,t) - x] + G(z(k,t),t) - G(x,t) + \frac{\varepsilon } {2}\ln \partial _{xx}G(z(k,t),t). }$$
(7.134)

The integral over k can also be evaluated using steepest descents. Thus, Taylor expand \(\hat{S}\) to second order about the stationary point k = k(x, t), which is the solution to the equation

$$\displaystyle\begin{array}{rcl} 0 = \frac{\partial \hat{S}} {\partial k} = i[z(k,t) - x] + \frac{\partial z(k,t)} {\partial k} \left [ik + \partial _{x}G(z(k,t),t) + \frac{\varepsilon } {2} \frac{\partial _{xxx}G(z(k,t),t)} {\partial _{xx}G(z(k,t),t)} \right ].& & {}\end{array}$$
(7.135)

It follows from Eqs. (7.130) and (7.135) that \(z(k(x,t),t) = x + \mathcal{O}(\varepsilon )\) and so

$$\displaystyle{ k(x,t) = i\partial _{x}G(x,t) + \mathcal{O}(\varepsilon ). }$$
(7.136)

Moreover,

$$\displaystyle{ \hat{S}(k; x,t) \approx \frac{1} {2}\left.\frac{{\partial }^{2}\hat{S}} {\partial {k}^{2}} \right \vert _{k=k(x,t)}{(k - k(x,t))}^{2}. }$$
(7.137)

Substituting into Eq. (7.133) and performing the Gaussian integral with respect to k gives to leading order in \(\varepsilon\)

$$\displaystyle\begin{array}{rcl} -\partial _{t}G(x,t) = -1 + \frac{1} {\sqrt{i\partial _{xx } G(x,t)\partial _{k } z(k(x,t),t)}}\tilde{w}(k(x,t))J(x).& & {}\end{array}$$
(7.138)

Finally, setting x′ = z(k, t) in Eq. (7.130) and differentiating with respect to k shows that \(\partial _{xx}G(z(k,t),t)\partial _{k}z(k,t) = -i\), and we obtain Eq. (7.120).

Given G(x, t), the location x(t) of the front at time t is determined by the equation G(x(t), t) = 0. Differentiating with respect to t shows that \(\dot{x}\partial _{x}G + \partial _{t}G = 0\). Let us begin by rederiving the wave speed for a homogeneous neural field by setting J(x) ≡ 1. In this case, dp∕ds = 0 so that p =λ 0 independently of s. Hence, x(s) = xs∕t, which implies that

$$\displaystyle{ \dot{x} = \frac{dx} {ds} = \mathcal{W}^{\prime}(\lambda _{0}). }$$
(7.139)

On the other hand,

$$\displaystyle{ \dot{x} = -\frac{\partial _{t}G} {\partial _{x}G} = \frac{-1 + \mathcal{W}(\lambda _{0})} {\lambda _{0}}. }$$
(7.140)

Combining these two results means that λ 0 is given by the minimum of the function

$$\displaystyle{ c(\lambda ) = \frac{-1 + \mathcal{W}(\lambda )} {\lambda } }$$
(7.141)

and c 0 = c(λ 0). This recovers the result of Sect. 7.1.3. Thus, in the case of a Gaussian weight distribution, λ 0 is related to c 0 according to Eq. (7.42). Now suppose that there exists a small-amplitude, slow modulation of the synaptic weights J(x) = 1 +β f(x) with β ≪ 1. We can then obtain an approximate solution of Hamilton’s Eqs. (7.124) and (7.125) and the corresponding wave speed using regular perturbation theory along analogous lines to a previous study of the F–KPP equation [421]. We find (see below) that

$$\displaystyle{ x(t) = c_{0}t + \frac{\beta \mathcal{W}(\lambda _{0})} {c_{0}\lambda _{0}} \int _{0}^{c_{0}t}f(y)dy + \mathcal{O}{(\beta }^{2}). }$$
(7.142)

Here c 0 is the wave speed of the homogeneous neural field (β = 0), which is given by c 0 = c(λ 0) with λ 0 obtained by minimizing the function c(λ) defined by Eq. (7.141); see Eq. (7.42). Finally, differentiating both sides with respect to t and inverting the hyperbolic scaling yields

$$\displaystyle{ c \equiv \dot{ x}(t) = c_{0} + \frac{\beta \mathcal{W}(\lambda _{0})} {\lambda _{0}} f(\varepsilon c_{0}t) + \mathcal{O}{(\beta }^{2}). }$$
(7.143)
Fig. 7.9

(a) Propagating front in a homogeneous network with Gaussian weights (7.38) and piecewise linear rate function (7.29). Parameter values are W 0 = 1. 2, σ = 1, κ = 0. 4. The initial condition is taken to be a steep sigmoid a(x, 0) = 0. 5∕(1 + exp(−η(xl))) with η = 5 and l = 10. (a) Snapshots of wave profile at time intervals of width Δ t = 5 from t = 10 to t = 40. (b) Space–time contour plot. Wave speed asymptotically approaches the minimum c 0 of the velocity dispersion curve given by Eq. (7.39). (c,d) Propagating front in a network with a linear heterogeneity in the synaptic weights, \(J(x) = 1 +\varepsilon (x - l)\), l = 10, and \({\varepsilon }^{2} = 0.005\). Other parameters as in (a,b). (c) Snapshots of wave profile at time intervals of width Δ t = 5 from t = 10 to t = 40. (d) Space–time contour plot. Wave speed increases approximately linearly with time, so the position x(t) of front evolves according to a downward parabola. Theoretical curve based on the perturbation calculation is shown by the solid curve. The trajectory of the front in the corresponding homogeneous case is indicated by the dashed curve

The analytical results agree reasonably well with numerical simulations, provided that \(\varepsilon\) is sufficiently small [70]. In Fig. 7.9(a) we show snapshots of a pulled front in the case of a homogeneous network with Gaussian weights (7.38) and piecewise linear firing rate function (7.29). Space and time units are fixed by setting the range of synaptic weights σ = 1 and the time constant τ = 1. A corresponding space–time plot is given in Fig. 7.9(b), which illustrates that the speed of the front asymptotically approaches the calculated minimal wave speed c 0. (Note that pulled fronts take an extremely long time to approach the minimal wave speed at high levels of numerical accuracy, since the asymptotics are algebraic rather than exponential in time [162].) In Figs. 7.9(c,d) we plot the corresponding results in the case of an inhomogeneous network. For the sake of illustration, the synaptic heterogeneity is taken to be a linear function of displacement, that is, \(J(x) = 1 +\varepsilon (x - l)\), and \(\beta =\varepsilon\). Equation (7.142) implies that

$$\displaystyle\begin{array}{rcl} x(t)& =& l + c_{0}t + \frac{{\varepsilon }^{2}\mathcal{W}(\lambda _{0})} {2c_{0}\lambda _{0}} [{(c_{0}t)}^{2} - 2c_{ 0}lt] \\ & =& l + \left [c_{0} -\frac{{\varepsilon }^{2}l(c_{0}\lambda _{0} + 1)} {\lambda _{0}} \right ]t + \frac{{\varepsilon }^{2}c_{0}(c_{0}\lambda _{0} + 1)} {2\lambda _{0}} {t}^{2},{}\end{array}$$
(7.144)

where we have used Eq. (7.141) and assumed that the initial position of the front is x(0) = l. Hence, the perturbation theory predicts that a linearly increasing modulation in synaptic weights results in the leading edge of the front tracing out a downward parabola in a space–time plot for times \(t \ll \mathcal{O}(1{/\varepsilon }^{2})\). This is consistent with numerical simulations for \({\varepsilon }^{2} = 0.005\), as can be seen in the space–time plot of Fig. 7.9(d).
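The quadratic trajectory (7.144) is straightforward to evaluate directly. The following sketch compares the predicted front position with the homogeneous trajectory l + c 0 t, using placeholder values for c 0 and λ 0 (in practice these are obtained from Eq. (7.42)):

import numpy as np

# Predicted front trajectory, Eq. (7.144), for J(x) = 1 + eps*(x - l).
c0, lam0, l, eps2 = 2.0, 1.0, 10.0, 0.005       # placeholder values; eps2 = eps^2

def x_front(t):
    return (l + (c0 - eps2*l*(c0*lam0 + 1)/lam0)*t
              + 0.5*eps2*c0*(c0*lam0 + 1)/lam0*t**2)

for t in [0, 10, 20, 30, 40]:
    print(t, x_front(t), l + c0*t)              # heterogeneous vs homogeneous trajectory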

Perturbation calculation of wave speed. Introduce the perturbation expansions

$$\displaystyle{ x(s) = x_{0}(s) +\beta x_{1}(s) + \mathcal{O}{(\beta }^{2}),\quad p(s) = p_{ 0}(s) +\beta p_{1}(s) + \mathcal{O}{(\beta }^{2}) }$$
(7.145)

and substitute into Eqs. (7.124) and (7.125). Taylor expanding the nonlinear function f(x) about x 0 and \(\mathcal{W}(p) =\hat{ W}(p) +\hat{ W}(-p)\) about p 0 then leads to a hierarchy of equations, the first two of which are

$$\displaystyle{ \dot{p}_{0}(s) = 0,\quad \dot{x_{0}}(s) = \mathcal{W}^{\prime}(p_{0}), }$$
(7.146)

and

$$\displaystyle{ \dot{p}_{1}(s) = -f^{\prime}(x_{0})\mathcal{W}(p_{0}),\quad \dot{x_{1}}(s) = \mathcal{W}^{\prime\prime}(p_{0})p_{1}(s) + f(x_{0})\mathcal{W}^{\prime}(p_{0}), }$$
(7.147)

These are supplemented by the Cauchy conditions x 0(0) = 0, x 0(t) = x and x n (0) = x n (t) = 0 for all integers n ≥ 1. Equations (7.146) have solutions of the form

$$\displaystyle{ p_{0}(s) =\lambda,\quad x_{0}(s) = \mathcal{W}^{\prime}(\lambda )s + B_{0} }$$
(7.148)

with λ, B 0 independent of s. Imposing the Cauchy data then implies that B 0 = 0 and λ satisfies the equation

$$\displaystyle{ \mathcal{W}^{\prime}(\lambda ) = x/t. }$$
(7.149)

At the next order,

$$\displaystyle\begin{array}{rcl} p_{1}(s) = -\mathcal{W}(\lambda ) \frac{t} {x}f(xs/t) + A_{1},& & {}\end{array}$$
(7.150)
$$\displaystyle\begin{array}{rcl} x_{1}(s)& =& -\mathcal{W}^{\prime\prime}(\lambda )\mathcal{W}(\lambda ) \frac{{t}^{2}} {{x}^{2}}\int _{0}^{xs/t}f(y)dy \\ & & \qquad +\int _{ 0}^{xs/t}f(y)dy + \mathcal{W}^{\prime\prime}(\lambda )A_{ 1}s + B_{1}, {}\end{array}$$
(7.151)

with A 1, B 1 independent of s. Imposing the Cauchy data then implies that B 1 = 0 and

$$\displaystyle{ A_{1} = A_{1}(x,t) = \mathcal{W}(\lambda ) \frac{t} {{x}^{2}}\int _{0}^{x}f(y)dy - \frac{1} {t\mathcal{W}^{\prime\prime}(\lambda )}\int _{0}^{x}f(y)dy. }$$
(7.152)

Given these solutions, the energy function E(x, t) is

$$\displaystyle\begin{array}{rcl} E(x,t)& =& -1 + [1 +\beta f(x_{0} +\beta x_{1}+\ldots )]\mathcal{W}(\lambda +\beta p_{1}+\ldots ) \\ & =& -1 + \mathcal{W}(\lambda ) +\beta [\mathcal{W}^{\prime}(\lambda )p_{1}(s) + f(x_{0}(s))\mathcal{W}(\lambda )] + \mathcal{O}{(\beta }^{2}). {}\end{array}$$
(7.153)

Substituting for x 0(s) and p 1(s) and using the condition \(\mathcal{W}^{\prime}(\lambda ) = x/t\), we find that

$$\displaystyle{ E(x,t) = -1 + \mathcal{W}(\lambda ) +\beta \frac{x} {t} A_{1}(x,t) + \mathcal{O}{(\beta }^{2}), }$$
(7.154)

which is independent of s as expected. Similarly,

$$\displaystyle\begin{array}{rcl} \int _{0}^{t}p(s)\dot{x}(s)ds& =& \lambda x +\beta \mathcal{W}^{\prime}(\lambda )\int _{ 0}^{t}p_{ 1}(s)ds + \mathcal{O}{(\beta }^{2}) \\ & =& \lambda x +\beta \frac{\mathcal{W}^{\prime}(\lambda )} {\mathcal{W}^{\prime\prime}(\lambda )}\int _{0}^{t}\left [\dot{x}_{ 1}(s) -\mathcal{W}^{\prime}(\lambda )f(\mathcal{W}^{\prime}(\lambda )s)\right ]ds + \mathcal{O}{(\beta }^{2}) \\ & =& \lambda x -\beta \frac{\mathcal{W}^{\prime}(\lambda )} {\mathcal{W}^{\prime\prime}(\lambda )}\int _{0}^{x}f(y)dy + \mathcal{O}{(\beta }^{2}). {}\end{array}$$
(7.155)

Hence, to first order in β,

$$\displaystyle{ G(x,t) = t -\mathcal{W}(\lambda )t +\lambda x -\beta \mathcal{W}(\lambda ) \frac{t} {x}\int _{0}^{x}f(y)dy. }$$
(7.156)

We can now determine the wave speed c by imposing the condition G(x(t), t) = 0 and performing the perturbation expansions \(x(t) = x_{0}(t) +\beta x_{1}(t) + \mathcal{O}{(\beta }^{2})\) and \(\lambda =\lambda _{0} +\beta \lambda _{1} + \mathcal{O}{(\beta }^{2})\). Substituting into Eq. (7.156) and collecting terms at \(\mathcal{O}(1)\) and \(\mathcal{O}(\beta )\) leads to Eq. (7.142).

4 Wave Propagation in Stochastic Neural Fields

In Sect. 6.4 we constructed stochastic neuronal population models based on a master equation formulation. However, continuum versions of these models are difficult to analyze even under a diffusion approximation, due to the nonlocal nature of the multiplicative noise terms; see Eq. (6.71). Therefore, in this section, we analyze the effects of noise on wave propagation in stochastic neural fields with local multiplicative noise, extending the PDE methods outlined in Sect. 2.6. This form of noise can also be interpreted in terms of parametric fluctuations in the firing threshold [58].

4.1 Spontaneous Front Propagation

Consider the following stochastic neural field equation for U(x, t):

$$\displaystyle{ dU(x,t) = \left [-U(x,t) +\int _{ -\infty }^{\infty }w(x - y)F(U(y,t))dy\right ]dt{+\varepsilon }^{1/2}g(U(x,t))dW(x,t). }$$
(7.157)

We assume that dW(x, t) represents an independent Wiener process such that

$$\displaystyle{ \langle dW(x,t)\rangle = 0,\quad \langle dW(x,t)dW(x^{\prime},t^{\prime})\rangle = 2C([x - x^{\prime}]/\lambda )\delta (t - t^{\prime})dtdt^{\prime}, }$$
(7.158)

where ⟨⋅ ⟩ denotes averaging with respect to the Wiener process. Here λ is the spatial correlation length of the noise such that C(x∕λ) →δ(x) in the limit λ → 0, and ε determines the strength of the noise, which is assumed to be weak. Moreover, the multiplicative noise term is taken to be of Stratonovich form. The analysis of Eq. (7.157) proceeds along similar lines to the analysis of the stochastic bistable equation in Sect. 2.6; see also [70]. First, using Novikov’s theorem, we rewrite Eq. (7.157) so that the fluctuating term has zero mean:

$$\displaystyle{ dU(x,t) = \left [h(U(x,t)) +\int _{ -\infty }^{\infty }w(x - y)F(U(y,t))dy\right ]dt {+\varepsilon }^{1/2}dR(U,x,t), }$$
(7.159)

where

$$\displaystyle{ h(U) = -U +\varepsilon C(0)g^{\prime}(U)g(U) }$$
(7.160)

and

$$\displaystyle{ dR(U,x,t) = g(U)dW(x,t) {-\varepsilon }^{1/2}C(0)g^{\prime}(U)g(U)dt. }$$
(7.161)

The stochastic process R has the variance

$$\displaystyle{ \langle dR(U,x,t)dR(U,x^{\prime},t)\rangle =\langle g(U(x,t))dW(x,t)g(U(x^{\prime},t))dW(x^{\prime},t)\rangle + \mathcal{O}({\varepsilon }^{1/2}). }$$
(7.162)

The next step in the analysis is to express the solution U of Eq. (7.159) as a combination of a fixed wave profile U 0 that is displaced by an amount Δ(t) from its uniformly translating position \(\xi = x - c_{\varepsilon }t\) and a time-dependent fluctuation Φ in the front shape about the instantaneous position of the front:

$$\displaystyle{ U(x,t) = U_{0}(\xi -\varDelta (t)) {+\varepsilon }^{1/2}\varPhi (\xi -\varDelta (t),t). }$$
(7.163)

The wave profile U 0 and associated wave speed \(c_{\varepsilon }\) are obtained by solving the modified deterministic equation

$$\displaystyle{ -c_{\varepsilon }\frac{dU_{0}} {d\xi } - h(U_{0}(\xi )) =\int _{ -\infty }^{\infty }w(\xi -\xi ^{\prime})F(U_{ 0}(\xi ^{\prime}))d\xi ^{\prime}, }$$
(7.164)

As in Sect. 2.6, Eq. (7.164) is chosen so that, to leading order, the stochastic variable Δ(t) undergoes unbiased Brownian motion with

$$\displaystyle{ \langle \varDelta (t)\rangle = 0,\quad \langle \varDelta {(t)}^{2}\rangle = 2D(\varepsilon )t }$$
(7.165)

where the diffusion coefficient \(D(\varepsilon ) = \mathcal{O}(\varepsilon )\) can be calculated using perturbation analysis (see below).

Perturbation calculation of diffusivity \(D(\varepsilon )\) . Substitute the decomposition (7.163) into Eq. (7.159) and expand to first order in \(\mathcal{O}{(\varepsilon }^{1/2})\):

$$\displaystyle\begin{array}{rcl} & & -c_{\varepsilon }U_{0}^{\prime}(\xi -\varDelta (t))dt - U_{0}^{\prime}(\xi -\varDelta (t))d\varDelta (t) {+\varepsilon }^{1/2}\left [d\varPhi (\xi -\varDelta (t),t) - c_{\varepsilon }\varPhi ^{\prime}(\xi -\varDelta (t),t)dt\right ] {}\\ & & \quad {-\varepsilon }^{1/2}\varPhi ^{\prime}(\xi -\varDelta (t),t)d\varDelta (t) {}\\ & & \qquad = h(U_{0}(\xi -\varDelta (t)))dt + h^{\prime}{(U_{0}(\xi -\varDelta (t)))\varepsilon }^{1/2}\varPhi (\xi -\varDelta (t),t)dt {}\\ & & \quad +\int _{ -\infty }^{\infty }w(\xi -\xi ^{\prime})\left (F(U_{ 0}(\xi ^{\prime} -\varDelta (t))) + F^{\prime}{(U_{0}(\xi ^{\prime} -\varDelta (t)))\varepsilon }^{1/2}\varPhi (\xi ^{\prime} -\varDelta (t),t)\right )d\xi ^{\prime}dt {}\\ & & \quad {+\varepsilon }^{1/2}dR(U_{ 0}(\xi -\varDelta (t)),\xi,t) + \mathcal{O}(\varepsilon ). {}\\ \end{array}$$

Imposing Eq. (7.164), after shifting \(\xi \rightarrow \xi -\varDelta (t)\), and dividing through by \({\varepsilon }^{1/2}\) then gives

$$\displaystyle\begin{array}{rcl} d\varPhi (\xi -\varDelta (t),t)& =& \mathbb{L} \circ \varPhi (\xi -\varDelta (t),t)dt {+\varepsilon }^{-1/2}U_{ 0}^{\prime}(\xi -\varDelta (t))d\varDelta (t) \\ & & +dR(U_{0}(\xi -\varDelta (t)),\xi,t) + \mathcal{O}{(\varepsilon }^{1/2}), {}\end{array}$$
(7.166)

where \(\mathbb{L}\) is the non-self-adjoint linear operator

$$\displaystyle\begin{array}{rcl} \mathbb{L} \circ A(\xi ) = c_{\varepsilon }\frac{dA(\xi )} {d\xi } + h^{\prime}(U_{0}(\xi ))A(\xi ) +\int _{ -\infty }^{\infty }w(\xi -\xi ^{\prime})F^{\prime}(U_{ 0}(\xi ^{\prime}))A(\xi ^{\prime})d\xi ^{\prime}& & {}\end{array}$$
(7.167)

for any function \(A(\xi ) \in {L}^{2}(\mathbb{R})\). Finally, for all terms in Eq. (7.166) to be of the same order, we require that \(\varDelta (t) = \mathcal{O}{(\varepsilon }^{1/2})\). It then follows that \(U_{0}(\xi -\varDelta (t)) = U_{0}(\xi ) + \mathcal{O}{(\varepsilon }^{1/2})\) and Eq. (7.166) reduces to

$$\displaystyle\begin{array}{rcl} d\varPhi (\xi,t)& =& \mathbb{L} \circ \varPhi (\xi,t)dt {+\varepsilon }^{-1/2}U_{ 0}^{\prime}(\xi )d\varDelta (t) + dR(U_{0}(\xi ),\xi,t) + \mathcal{O}{(\varepsilon }^{1/2}). {}\end{array}$$
(7.168)

It can be shown that for a sigmoid firing rate function and exponential weight distribution, the operator \(\mathbb{L}\) has a 1D null-space spanned by U 0′(ξ) [172]. (The fact that U 0′(ξ) belongs to the null-space follows immediately from differentiating equation (7.164) with respect to ξ.) We then have the solvability condition for the existence of a nontrivial solution of Eq. (7.168), namely, that the inhomogeneous part is orthogonal to all elements of the null-space of the adjoint operator \({\mathbb{L}}^{\dag }\). The latter is almost identical in form to Eq. (7.87):

$$\displaystyle{ {\mathbb{L}}^{\dag }B(\xi ) = -c_{\varepsilon }\frac{dB(\xi )} {d\xi } + h^{\prime}(U_{0}(\xi ))B(\xi ) + F^{\prime}(U_{0}(\xi ))\int _{-\infty }^{\infty }w(\xi -\xi ^{\prime})B(\xi ^{\prime})d\xi ^{\prime}. }$$
(7.169)

Hence, \({\mathbb{L}}^{\dag }\) has a one-dimensional null-space that is spanned by some function \(\mathcal{V}(\xi )\). Taking the inner product of both sides of Eq. (7.168) with respect to \(\mathcal{V}(\xi )\) then leads to the solvability condition

$$\displaystyle{ \int _{-\infty }^{\infty }\mathcal{V}(\xi )\left [U_{ 0}^{\prime}(\xi )d\varDelta (t) {+\varepsilon }^{1/2}dR(U_{ 0},\xi,t)\right ]d\xi = 0. }$$
(7.170)

Thus Δ(t) satisfies the stochastic differential equation (SDE)

$$\displaystyle{ d\varDelta (t) = {-\varepsilon }^{1/2}\frac{\int _{-\infty }^{\infty }\mathcal{V}(\xi )dR(U_{ 0},\xi,t)d\xi } {\int _{-\infty }^{\infty }\mathcal{V}(\xi )U_{ 0}^{\prime}(\xi )d\xi }. }$$
(7.171)

Using the lowest-order approximation \(dR(U_{0},\xi,t) = g(U_{0}(\xi ))dW(\xi,t)\), we deduce that Δ(t) is a Wiener process with effective diffusion coefficient

$$\displaystyle\begin{array}{rcl} D(\varepsilon )& =& \varepsilon \frac{\int _{-\infty }^{\infty }\int _{ -\infty }^{\infty }\mathcal{V}(\xi )\mathcal{V}(\xi ^{\prime})g(U_{ 0}(\xi ))g(U_{0}(\xi ^{\prime}))\langle dW(\xi,t)dW(\xi ^{\prime},t)\rangle d\xi d\xi ^{\prime}} {{\left [\int _{-\infty }^{\infty }\mathcal{V}(\xi )U_{ 0}^{\prime}(\xi )d\xi \right ]}^{2}} {}\end{array}$$
(7.172)

In the case of a Heaviside rate function F(U) = H(U −κ) and multiplicative noise g(U) = g 0 U, the effective speed \(c_{\varepsilon }\) and diffusion coefficient \(D(\varepsilon )\) can be calculated explicitly [70]. (The constant g 0 has units of \(\sqrt{\mathrm{length/time}}\).) The deterministic equation (7.164) for the fixed profile U 0 then reduces to

$$\displaystyle{ -c_{\varepsilon }\frac{dU_{0}} {d\xi } + U_{0}(\xi )\gamma (\varepsilon ) =\int _{ -\infty }^{\infty }w(\xi -\xi ^{\prime})H(U_{ 0}(\xi ^{\prime})-\kappa )d\xi ^{\prime}, }$$
(7.173)

with

$$\displaystyle{ \quad \gamma (\varepsilon ) = (1 -\varepsilon g_{0}^{2}C(0)), }$$
(7.174)

This is identical in structure to Eq. (7.3) for the deterministic neural field modulo the rescaling of the decay term. The analysis of the wave speeds proceeds along similar lines to Sect. 7.1. Thus, multiplying both sides of Eq. (7.173) by \({\text{e}}^{-\xi \gamma (\varepsilon )/c_{\varepsilon }}\) and integrating with respect to ξ gives

$$\displaystyle{ U_{0}(\xi ) ={ \text{e}}^{\xi \gamma (\varepsilon )/c_{\varepsilon }}\left [\kappa -\frac{1} {c_{\varepsilon }}\int _{0}^{\xi }{\text{e}}^{-y\gamma (\varepsilon )/c_{\varepsilon }}W(y)dy\right ]. }$$
(7.175)

Finally, requiring the solution to remain bounded as ξ → ∞ (ξ → −∞) for \(c_{\varepsilon } > 0\) (for \(c_{\varepsilon } < 0\)) implies that κ must satisfy the condition

$$\displaystyle{ \kappa = \frac{1} {\vert c_{\varepsilon }\vert }\int _{0}^{\infty }{\text{e}}^{-y\gamma (\varepsilon )/\vert c_{\varepsilon }\vert }W(\mbox{ sign}(c_{\varepsilon })y)dy. }$$
(7.176)

Hence, in the case of the exponential weight distribution (7.1), we have

$$\displaystyle{ c_{\varepsilon } = \frac{\sigma } {2\kappa }[1 - 2\kappa \gamma (\varepsilon )] }$$
(7.177)

for \(c_{\varepsilon } > 0\), and

$$\displaystyle{ c_{\varepsilon } = \frac{\sigma \gamma (\varepsilon )} {2} \frac{1 - 2\kappa \gamma (\varepsilon )} {1 -\kappa \gamma (\varepsilon )}, }$$
(7.178)

for \(c_{\varepsilon } < 0\). Assuming that \(0 \leq \gamma (\varepsilon ) \leq 1\), we see that multiplicative noise shifts the effective velocity of front propagation in the positive ξ direction.
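The size of this shift is easy to quantify. A minimal sketch, using the parameter values of Fig. 7.10, evaluates γ(ε) and the effective speed from Eqs. (7.174) and (7.177) and compares it with the noise-free speed (γ = 1):

sigma, kappa, eps, g0, C0 = 2.0, 0.35, 0.005, 1.0, 10.0

gamma_eps = 1 - eps*g0**2*C0                        # Eq. (7.174)
c_eps = (sigma/(2*kappa))*(1 - 2*kappa*gamma_eps)   # Eq. (7.177), c_eps > 0 branch
c_det = (sigma/(2*kappa))*(1 - 2*kappa)             # deterministic speed (gamma = 1)
print(gamma_eps, c_eps, c_det)                      # noise increases the speed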

Fig. 7.10

Numerical simulation showing the propagation of a front solution of the stochastic neural field equation (7.157) for a Heaviside firing rate function F(U) = H(U −κ) with κ = 0.35, exponential weight function (7.1) with σ = 2, and multiplicative noise g(U) = U. Noise strength ε = 0.005 and C(0) = 10. The wave profile is shown at successive times (a) t = 0 (b) t = 12 (c) t = 18 and (d) t = 24, with the initial profile at t = 0 given by the solution to Eq. (7.164). In numerical simulations we take the discrete space and time steps Δ x = 0.1, Δ t = 0.01. The deterministic part U 0 of the stochastic wave is shown by the solid gray curves and the corresponding solution in the absence of noise \((\varepsilon = 0\)) is shown by the dashed gray curves

In order to calculate the diffusion coefficient, it is first necessary to determine the null vector \(\mathcal{V}(\xi )\) of the adjoint linear operator \({\mathbb{L}}^{\dag }\) defined by Eq. (7.169). Setting F(U) = H(Uκ) and g(U) = g 0 U, we obtain an adjoint equation almost identical to (7.93):

$$\displaystyle{ c_{\varepsilon }\mathcal{V}^{\prime}(\xi ) +\gamma (\varepsilon )\mathcal{V}(\xi ) = - \frac{\delta (\xi )} {U_{0}^{\prime}(0)}\int _{-\infty }^{\infty }w(\xi ^{\prime})\mathcal{V}(\xi ^{\prime})d\xi ^{\prime}. }$$
(7.179)

Hence, this has the solution

$$\displaystyle{ \mathcal{V}(\xi ) = -H(\xi )\exp \left (-\varGamma (\varepsilon )\xi \right ),\quad \varGamma (\varepsilon ) = \frac{\gamma (\varepsilon )} {c_{\varepsilon }}, }$$
(7.180)

and Eq. (7.172) reduces to the form

$$\displaystyle\begin{array}{rcl} D(\varepsilon ) =\varepsilon \frac{\int _{0}^{\infty }{\text{e}}^{-2\varGamma (\varepsilon )\xi }U_{ 0}{(\xi )}^{2}d\xi } {{\left [\int _{0}^{\infty }{\text{e}}^{-\varGamma (\varepsilon )\xi }U_{ 0}^{\prime}(\xi )d\xi \right ]}^{2}}.& &{}\end{array}$$
(7.181)

In the case of an exponential weight distribution, U 0(ξ) has the explicit form

$$\displaystyle{ U_{0}(\xi ) = \left \{\begin{array}{cc} \frac{1} {2c_{\varepsilon }} \frac{\sigma {e}^{-\xi /\sigma }} {1 +\sigma \varGamma (\varepsilon )} & \xi \geq 0 \\ \\ \frac{1} {2c_{\varepsilon }}\left [ \frac{2{e}^{\xi \varGamma (\varepsilon )}} {\varGamma (\varepsilon )\left (-1 {+\sigma }^{2}\varGamma {(\varepsilon )}^{2}\right )} + \frac{2} {\varGamma (\varepsilon )} + \frac{\sigma {\text{e}}^{\xi /\sigma }} {1 -\sigma \varGamma (\varepsilon )}\right ]&\xi < 0, \end{array} \right. }$$
(7.182)

and the integrals in (7.181) can be evaluated explicitly to give

$$\displaystyle{ D(\varepsilon ) = \frac{1} {2}\varepsilon \sigma g_{0}^{2}(1 +\sigma \varGamma (\varepsilon )). }$$
(7.183)
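Continuing with the same illustrative parameter values as above, the corresponding diffusivity follows directly from Eqs. (7.180) and (7.183):

sigma, kappa, eps, g0, C0 = 2.0, 0.35, 0.005, 1.0, 10.0
gamma_eps = 1 - eps*g0**2*C0                        # Eq. (7.174)
c_eps = (sigma/(2*kappa))*(1 - 2*kappa*gamma_eps)   # Eq. (7.177)
Gamma_eps = gamma_eps/c_eps                         # Eq. (7.180)
D_eps = 0.5*eps*sigma*g0**2*(1 + sigma*Gamma_eps)   # Eq. (7.183)
print(D_eps)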
Fig. 7.11

Plot of (a) mean \(\overline{X}(t)\) and (b) variance \(\sigma _{X}^{2}(t)\) of front position as a function of time, averaged over N = 4096 trials. Same parameter values as Fig. 7.10

Fig. 7.12

Plot of (a) wave speed \(c_{\varepsilon }\) and (b) diffusion coefficient \(D(\varepsilon )\) as a function of threshold κ. Numerical results (solid dots) are obtained by averaging over N = 4096 trials starting from the initial condition given by Eq. (7.182). Corresponding theoretical predictions (solid curves) for \(c_{\varepsilon }\) and \(D(\varepsilon )\) are based on Eqs. (7.177) and (7.181), respectively. Other parameters as in Fig. 7.10

In Fig. 7.10 we show the temporal evolution of a single stochastic wave front, which is obtained by numerically solving the stochastic neural field equation (7.157) for F(U) = H(U −κ), g(U) = U and an exponential weight distribution w. In order to numerically calculate the mean location of the front as a function of time, we carry out a large number of level set position measurements. That is, we determine the positions X a (t) such that U(X a (t), t) = a, for various level set values a ∈ (0.5κ, 1.3κ) and then define the mean location to be \(\overline{X}(t) = \mathbb{E}[X_{a}(t)]\), where the expectation is first taken with respect to the sampled values a and then averaged over N trials. The corresponding variance is given by \(\sigma _{X}^{2}(t) = \mathbb{E}[{(X_{a}(t) -\bar{ X}(t))}^{2}]\). In Fig. 7.11 we plot \(\overline{X}(t)\) and σ X 2(t) as a function of t. It can be seen that both vary linearly with t, consistent with the assumption that there is a diffusive-like displacement of the front from its uniformly translating position at long time scales. The slopes of these curves then determine the effective wave speed and diffusion coefficient according to \(\overline{X}(t) \sim c_{\varepsilon }t\) and \(\sigma _{X}^{2}(t) \sim 2D(\varepsilon )t\). In Fig. 7.12 we plot the numerically estimated speed and diffusion coefficient for various values of the threshold κ and compare these to the corresponding theoretical curves obtained using the above analysis. It can be seen that there is excellent agreement with our theoretical predictions provided that κ is not too large. As κ → 0.5, the wave speed decreases towards zero so that the assumption of relatively slow diffusion breaks down.
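For readers who wish to reproduce this kind of simulation, the following Euler–Maruyama sketch integrates Eq. (7.157) on a finite grid and records a crude estimate of the front position at a single level set. It makes several simplifying assumptions: the spatial correlations in Eq. (7.158) are ignored (independent noise at each grid point), the Stratonovich correction is neglected, and only one realization and one level set are tracked, whereas the results in Figs. 7.11 and 7.12 are averaged over many trials and level sets.

import numpy as np

rng = np.random.default_rng(0)
sigma, kappa, eps, g0 = 2.0, 0.35, 0.005, 1.0
L, N, dt, T = 80.0, 801, 0.01, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
w = np.exp(-np.abs(x)/sigma)/(2*sigma)              # exponential kernel, centred on the grid

U = 1.0/(1.0 + np.exp(x))                           # smooth front-like initial condition

def front_position(U, level):
    above = np.where(U > level)[0]
    return x[above[-1]] if above.size else np.nan   # rightmost suprathreshold grid point

positions = []
for _ in range(int(T/dt)):
    conv = dx*np.convolve(w, (U > kappa).astype(float), mode='same')
    noise = np.sqrt(eps*dt)*g0*U*rng.standard_normal(N)
    U = U + dt*(-U + conv) + noise                  # Euler-Maruyama step for Eq. (7.157)
    positions.append(front_position(U, kappa))

print(positions[-1])                                # front location at the final time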

4.2 Stimulus-Locked Fronts

So far we have assumed that the underlying deterministic neural field equation is homogeneous in space so that there exists a family of traveling front solutions related by a uniform shift. Now suppose that there exists an external front-like input that propagates at a uniform speed v, so that the deterministic equation (7.2) becomes

$$\displaystyle{ \frac{\partial u(x,t)} {\partial t} = -u(x,t) +\int _{ -\infty }^{\infty }w(x - x^{\prime})F(u(x^{\prime},t))dx^{\prime}\, +\, I(x - vt), }$$
(7.184)

where the input is taken to be a positive, bounded, monotonically decreasing function of amplitude \(I_{0} = I(-\infty ) - I(\infty )\). The resulting inhomogeneous neural field equation can support a traveling front that locks to the stimulus, provided that the amplitude of the stimulus is sufficiently large [198]. Consider, in particular, the case of a Heaviside firing rate function F(u) = H(u −κ). (See [174] for an extension to the case of a smooth sigmoid function F.) We seek a traveling wave solution \(u(x,t) = \mathcal{U}(\xi )\), where ξ = x − vt and \(\mathcal{U}(\xi _{0}) =\kappa\) at a single threshold crossing point \(\xi _{0} \in \mathbb{R}\). The front is assumed to travel at the same speed as the input (stimulus-locked front). If I 0 = 0, then we recover the homogeneous equation (7.2) and ξ 0 becomes a free parameter, whereas the wave propagates at the natural speed c(κ) given by Eq. (7.6). Substituting the front solution into Eq. (7.184) yields

$$\displaystyle{ -v\frac{d\mathcal{U}(\xi )} {d\xi } = -\mathcal{U}(\xi ) +\int _{ -\infty }^{\xi _{0} }w(\xi -\xi ^{\prime})d\xi ^{\prime} + I(\xi ). }$$
(7.185)

This can be solved for v > 0 by multiplying both sides by the integrating factor \({v}^{-1}{\text{e}}^{-\xi /v}\) and integrating over the interval [ξ, ∞) with \(\mathcal{U}(\xi ) \rightarrow 0\) as ξ → ∞ to give

$$\displaystyle{\mathcal{U}(\xi ) = \frac{1} {v}\int _{\xi }^{\infty }{e}^{(\xi -\xi ^{\prime})/v}[W(\xi ^{\prime} -\xi _{ 0}) + I(\xi ^{\prime})]d\xi ^{\prime},}$$

with W(ξ) defined according to Eq. (7.3). Similarly, for v < 0, we multiply by the same integrating factor and then integrate over (−, ξ] with U(ξ) → W 0 as ξ →− to find

$$\displaystyle{\mathcal{U}(\xi ) = -\frac{1} {v}\int _{-\infty }^{\xi }{e}^{(\xi -\xi ^{\prime})/v}[W(\xi ^{\prime} -\xi _{ 0}) + I(\xi ^{\prime})]d\xi ^{\prime}.}$$

The threshold crossing condition \(\mathcal{U}(\xi _{0}) =\kappa\) then determines the position ξ 0 of the front relative to the input as a function of speed v, input amplitude I 0, and threshold κ.
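For a step input I(ξ) = I 0 H(−ξ) and the exponential weight distribution (7.1), the threshold condition can be solved numerically by quadrature and root finding. The sketch below (with illustrative parameter values loosely following Fig. 7.14) evaluates \(\mathcal{U}(\xi _{0})\) directly from the integral representation for v > 0 and locates the crossing point:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

sigma, kappa, I0, v = 2.0, 0.35, 0.4, 1.5           # illustrative values

def Wc(z):                                          # int_z^infty w(y)dy for the exponential kernel
    return 0.5*np.exp(-z/sigma) if z >= 0 else 1.0 - 0.5*np.exp(z/sigma)

def U(xi0):                                         # front profile evaluated at the crossing point
    f = lambda xp: np.exp((xi0 - xp)/v)*(Wc(xp - xi0) + (I0 if xp < 0 else 0.0))
    b = max(xi0, 0.0)                               # split the integral at the input step
    return (quad(f, xi0, b)[0] + quad(f, b, b + 60.0)[0])/v

xi0 = brentq(lambda s: U(s) - kappa, -20.0, 20.0)
print(xi0)                                          # front position relative to the input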

One of the interesting features of stimulus-locked fronts is that they are much more robust to noise [70]. In order to show this, consider the following stochastic version of Eq. (7.184):

$$\displaystyle\begin{array}{rcl} dU(x,t)& =& \left [-U(x,t) +\int _{ -\infty }^{\infty }w(x - y)F(U(y,t))dy + I(x - vt)\right ]dt \\ & & \qquad {+\varepsilon }^{1/2}g(U(x,t))dW(x,t). {}\end{array}$$
(7.186)

Proceeding along identical lines to the case of freely propagating fronts, Eq. (7.186) is first rewritten so that the fluctuating term has zero mean:

$$\displaystyle\begin{array}{rcl} dU(x,t) = \left [h(U(x,t))+\int _{-\infty }^{\infty }w(x-y)F(U(y,t))dy+I(x-vt)\right ]dt{+\varepsilon }^{1/2}dR(U,x,t),& &{}\end{array}$$
(7.187)

where h and R are given by Eqs. (7.160) and (7.161), respectively. The stochastic field U(x, t) is then decomposed according to Eq. (7.163) with U 0 a front solution of

$$\displaystyle{ -v\frac{dU_{0}} {d\xi } - h(U_{0}(\xi )) - I(\xi ) =\int _{ -\infty }^{\infty }w(\xi -\xi ^{\prime})F(U_{ 0}(\xi ^{\prime}))d\xi ^{\prime}. }$$
(7.188)

It is assumed that the fixed profile U 0 is locked to the stimulus (has speed v). However, multiplicative noise still has an effect on U 0 by generating an \(\varepsilon\)-dependent threshold crossing point \(\xi _{\varepsilon }\) such that \(U_{0}(\xi _{\varepsilon }) =\kappa\).

Proceeding to the next order and imposing equation (7.188), we find that \(\varDelta (t) = \mathcal{O}{(\varepsilon }^{1/2})\) and

$$\displaystyle{ d\varPhi (\xi,t) = \mathbb{L} \circ \varPhi (\xi,t)dt {+\varepsilon }^{-1/2}U_{ 0}^{\prime}(\xi )d\varDelta (t) + dR(U_{0},\xi,t) {+\varepsilon }^{-1/2}I^{\prime}(\xi )\varDelta (t)dt }$$
(7.189)

where \(\mathbb{L}\) is the non-self-adjoint linear operator (7.167) with \(c_{\varepsilon } \rightarrow v\). The last term on the right-hand side of Eq. (7.189) arises from the fact that in Eq. (7.163), U 0 and Φ are expressed as functions of ξ −Δ(t) and \(I(\xi ) = I(\xi -\varDelta (t) +\varDelta (t)) \approx I(\xi -\varDelta (t)) + I^{\prime}(\xi -\varDelta (t))\varDelta (t)\). A nontrivial solution of Eq. (7.189) exists if and only if the inhomogeneous part is orthogonal to the null vector \(\mathcal{V}(\xi )\) of the adjoint operator \({\mathbb{L}}^{\dag }\) defined by Eq. (7.169) with \(c_{\varepsilon } \rightarrow v\). Taking the inner product of both sides of Eq. (7.189) with respect to \(\mathcal{V}(\xi )\) thus leads to the solvability condition

$$\displaystyle{ \int _{-\infty }^{\infty }\mathcal{V}(\xi )\left [U_{ 0}^{\prime}(\xi )d\varDelta (t) + I^{\prime}(\xi )\varDelta (t)dt {+\varepsilon }^{1/2}dR(U_{ 0},\xi,t)\right ]d\xi = 0. }$$
(7.190)

It follows that, to leading order, Δ(t) satisfies the Ornstein–Uhlenbeck equation

$$\displaystyle{ d\varDelta (t) + A\varDelta (t)dt = d\hat{W}(t), }$$
(7.191)

where

$$\displaystyle{ A = \frac{\int _{-\infty }^{\infty }\mathcal{V}(\xi )I^{\prime}(\xi )d\xi } {\int _{-\infty }^{\infty }\mathcal{V}(\xi )U_{ 0}^{\prime}(\xi )d\xi }, }$$
(7.192)

and

$$\displaystyle{ \hat{W}(t) = {-\varepsilon }^{1/2}\frac{\int _{-\infty }^{\infty }\mathcal{V}(\xi )g(U_{ 0}(\xi ))W(\xi,t)d\xi } {\int _{-\infty }^{\infty }\mathcal{V}(\xi )U_{ 0}^{\prime}(\xi )d\xi }. }$$
(7.193)

Note that A > 0 for I 0 > 0, since both U 0(ξ) and I(ξ) are monotonically decreasing functions of ξ. Moreover

$$\displaystyle{ \langle d\hat{W}(t)\rangle = 0,\quad \langle d\hat{W}(t)d\hat{W}(t)\rangle = 2D(\varepsilon )dt }$$
(7.194)

with D(ε) given by Eq. (7.172). Using standard properties of an Ornstein–Uhlenbeck process [210], we conclude that

$$\displaystyle{ \langle \varDelta (t)\rangle =\varDelta (0){\text{e}}^{-At},\quad \langle \varDelta {(t)}^{2}\rangle -\langle \varDelta {(t)\rangle }^{2} = \frac{D(\varepsilon )} {A} \left [1 -{\text{e}}^{-2At}\right ]. }$$
(7.195)

In particular, the variance approaches a constant \(D(\varepsilon )/A\) in the large t limit, rather than increasing linearly with time as found for freely propagating fronts.
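This saturation is straightforward to check with a direct simulation of the Ornstein–Uhlenbeck equation (7.191). In the sketch below the values of A and D are placeholders (in practice they follow from Eqs. (7.192) and (7.172)); the sample variance over many realizations should approach D∕A:

import numpy as np

rng = np.random.default_rng(1)
A, D, dt, T, trials = 0.5, 0.01, 0.01, 40.0, 2000   # placeholder values

Delta = np.zeros(trials)
for _ in range(int(T/dt)):
    Delta += -A*Delta*dt + np.sqrt(2*D*dt)*rng.standard_normal(trials)

print(Delta.var(), D/A)                             # sample variance vs. predicted plateau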

Fig. 7.13

Plot of existence regions of a stimulus-locked front without noise (γ = 1, dark gray) and in the presence of noise (γ = 0. 9, light gray) with overlapping regions indicated by medium gray. Stimulus taken to be of the form \(I(x,t) = I_{0}H(-\xi ),\xi = x - vt\) with amplitude I 0 and speed v. Other parameter values as in Fig. 7.10. (a) κ = 0. 95: spontaneous fronts exist in the absence of a stimulus (I 0 = 0). (b) κ = 1. 25: there are no spontaneous fronts

Fig. 7.14

Numerical simulation showing the propagation of a stimulus-locked wave-front solution (black curves) of the stochastic neural field equation (7.186) for a Heaviside firing rate function F(U) = H(U −κ) with κ = 0.35, exponential weight function (7.1) with σ = 2, and multiplicative noise g(U) = U. The external input (gray curves) is taken to be of the form I(x, t) = I 0Erfc[x − vt] with amplitude I 0 = 0.4 and speed v = 1.5. Noise strength ε = 0.005 and C(0) = 10. The wave profile is shown at successive times (a) t = 0 (b) t = 6 (c) t = 12 and (d) t = 24, with the initial profile at t = 0 given by the solution U 0 of Eq. (7.188). In numerical simulations we take the discrete space and time steps Δ x = 0.1, Δ t = 0.01

In order to illustrate the above analysis, take g(U) = g 0 U for the multiplicative noise term and set F(U) = H(Uκ). The deterministic Eq. (7.188) for the profile U 0 then reduces to

$$\displaystyle{ -v\frac{dU_{0}} {d\xi } + U_{0}(\xi )[1 -\varepsilon g_{0}^{2}C(0)] + I(\xi ) =\int _{ -\infty }^{\infty }w(\xi -\xi ^{\prime})H(U_{ 0}(\xi ^{\prime})-\kappa )d\xi ^{\prime}. }$$
(7.196)

Existence of a front solution proceeds along identical lines to Sect. 7.1, except now the speed v is fixed, whereas the threshold crossing point ξ 0, say, is no longer arbitrary due to the breaking of translation symmetry. The point ξ 0 is determined by the threshold condition U 0(ξ 0) =κ and will depend on the noise strength \(\varepsilon\). In Fig. 7.13 we show existence regions in the (v, I 0)-plane for stimulus-locked fronts when I(ξ) = I 0 H(−ξ), that is, for a step function input of speed v and amplitude I 0. This illustrates the fact that multiplicative noise leads to an \(\varepsilon\)-dependent shift in the existence regions. In Fig. 7.14 we show the temporal evolution of a single stimulus-locked front, which is obtained by numerically solving the Langevin equation (7.186) for F(U) = H(U −κ), g(U) = U and an exponential weight distribution w. Numerically speaking, it is convenient to avoid discontinuities in the input by taking I(x, t) = I 0Erfc[x − vt] rather than a Heaviside step. The corresponding mean \(\overline{X}(t)\) and variance σ X 2(t) of the position of the front, which are obtained by averaging over level sets as outlined in Sect. 7.4.1, are shown in Fig. 7.15. It can be seen that, as predicted by the analysis, \(\overline{X}(t)\) varies linearly with t with a slope equal to the stimulus speed v = 1.5. Moreover, the variance σ X 2(t) approaches a constant value as t → ∞, which is comparable to the theoretical value \(D(\varepsilon )/A\) evaluated for the given input. Thus, we find that stimulus-locked fronts are much more robust to noise than freely propagating fronts, since the variance of the front position saturates as t → ∞. Consequently, stimulus locking persists in the presence of noise over most of the parameter range for which it is predicted to occur.

Fig. 7.15

Plot of (a) mean \(\overline{X}(t)\) and (b) variance \(\sigma _{X}^{2}(t)\) of the position of a stimulus-locked front as a function of time, averaged over N = 4096 trials. Smooth gray curve in (b) indicates theoretical prediction of the variance. Stimulus taken to be of the form \(I(x,t) = I(x - vt) = I_{0}\mbox{ Erfc}[x - vt]\) with amplitude I 0 = 0.4 and speed v = 1.5. Other parameter values as in Fig. 7.10

4.3 Stochastic Pulled Fronts

In the case of the F–KPP equation with multiplicative noise, one finds that the stochastic wandering of a pulled front about its mean position is subdiffusive with varΔ(t) ∼ t 1∕2, in contrast to the diffusive wandering of a front propagating into a metastable state for which varΔ(t) ∼ t [530]. Such scaling is a consequence of the asymptotic relaxation of the leading edge of the deterministic pulled front. Since pulled front solutions of the neural field equation (7.27) exhibit similar asymptotic dynamics (see Eq. (7.46)), it suggests that there will also be subdiffusive wandering of these fronts in the presence of multiplicative noise. In order to illustrate this, consider the stochastic neural field equation

$$\displaystyle\begin{array}{rcl} dA(x,t) = \left [-A(x,t)+F\left (\int _{-\infty }^{\infty }w(x-y)A(y,t)dy\right )\right ]dt{+\varepsilon }^{1/2}g_{ 0}A(x,t)dW(x,t)\qquad & &{}\end{array}$$
(7.197)

with W(x, t) a Wiener process satisfying Eq. (2.84). Note that the noise term has to vanish when A(x, t) = 0, since the firing rate A is restricted to be positive. Hence, the noise has to be multiplicative. Formally speaking, one can carry over the analysis of the Langevin equation (7.157). First, decompose the solution along similar lines to Eq. (2.96):

$$\displaystyle{ A(x,t) = A_{0}(\xi -\varDelta (t)) {+\varepsilon }^{1/2}\varPhi (\xi -\varDelta (t),t) }$$
(7.198)

with \(\xi = x - c_{\varepsilon }t\) and the fixed front profile A 0 satisfying the deterministic equation

$$\displaystyle{ -c_{\varepsilon }\frac{dA_{0}} {d\xi } + A_{0}(\xi )[1 -\varepsilon g_{0}^{2}C(0)] = F\left (\int _{ -\infty }^{\infty }w(\xi -\xi ^{\prime})A_{ 0}(\xi ^{\prime})d\xi ^{\prime}\right ). }$$
(7.199)

The effective velocity \(c_{\varepsilon }\) of the front is given by the minimum of the dispersion curve

Fig. 7.16

(a) Plot of variance σ X 2(t) of the position of a stochastic pulled front as a function of time. (b) Log-log plot of variance σ X 2(t) as a function of time t. Noise amplitude \(\varepsilon = 0.005\) and κ = 0. 8. Other parameter values as in Fig. 7.2

$$\displaystyle{ c_{\varepsilon }(\lambda ) = \frac{1} {\lambda } \left [\hat{W}(\lambda ) +\hat{ W}(-\lambda ) - [1 -\varepsilon g_{0}^{2}C(0)]\right ]. }$$
(7.200)

Fluctuations thus shift the dispersion curve to higher velocities. However, it is no longer possible to derive an expression for the diffusion coefficient \(D(\varepsilon )\) along the lines of Eq. (7.172), since both numerator and denominator would diverge for a pulled front. This reflects the asymptotic behavior of the leading edge of the front. It is also a consequence of the fact that there is no characteristic time scale for the convergence of the front velocity to its asymptotic value, which means that it is not possible to separate the fluctuations into a slow wandering of the front position and fast fluctuations of the front shape [162, 486]. Nevertheless, numerical simulations of Eq. (7.197) with F given by the piecewise linear firing rate (7.29) are consistent with subdiffusive wandering of the front. In Fig. 7.16(a), we plot the variance σ X 2(t) of the position of a pulled front solution of Eq. (7.197), which is obtained by averaging over level sets along identical lines to Sect. 7.4.1. It can be seen that the variance appears to exhibit subdiffusive behavior over longer time scales. This is further illustrated by the log–log plot of σ X 2(t) against time t in Fig. 7.16(b). At intermediate time scales, the slope of the curve is approximately equal to one, consistent with normal diffusion, but at later times the slope decreases, indicating subdiffusive behavior.
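Although the diffusivity is not available in closed form, the noise-induced shift of the velocity itself is easy to evaluate. The sketch below minimizes the dispersion curve (7.200) for an assumed Gaussian kernel with \(\hat{W}(\lambda ) +\hat{ W}(-\lambda ) = W_{0}\exp ({\sigma }^{2}{\lambda }^{2}/2)\); as ε → 0 it reduces to the deterministic minimal speed.

import numpy as np
from scipy.optimize import minimize_scalar

W0, sigma, g0, C0 = 1.2, 1.0, 1.0, 10.0             # assumed parameter values

def c_eps(lam, eps):                                # dispersion curve, Eq. (7.200)
    return (W0*np.exp(0.5*(sigma*lam)**2) - (1 - eps*g0**2*C0))/lam

for eps in [0.0, 0.005, 0.01]:
    res = minimize_scalar(lambda l: c_eps(l, eps), bounds=(0.05, 3.0), method='bounded')
    print(eps, res.x, res.fun)                      # selected decay rate and minimal speed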

5 Traveling Waves in 2D Oscillatory Neural Fields

Troy and Shusterman [580, 634] have shown how a neural field model with strong linear adaptation (see Eq. (7.49)) can act as an oscillatory network that supports 2D target patterns and spiral waves consistent with experimental studies of tangential cortical slices [292]. (For the analysis of spiral waves in the corresponding excitable regime, see [354].) However, since the linear form of adaptation used in these studies is not directly related to physiological models of adaptation, it is difficult to ascertain whether or not the strength of adaptation required is biologically reasonable. This motivated a more recent study of spiral waves in a 2D neural medium involving a nonlinear, physiologically based form of adaptation, namely, synaptic depression [330]. The latter model is given by

$$\displaystyle\begin{array}{rcl} \frac{\partial u(\mathbf{r},t)}{\partial t} & =& -u(\mathbf{r},t) + \int w(\vert\mathbf{r} - \mathbf{r}^{\prime}\vert)\,q(\mathbf{r}^{\prime},t)F(u(\mathbf{r}^{\prime},t))\,d\mathbf{r}^{\prime} \\ \frac{\partial q(\mathbf{r},t)}{\partial t} & =& \frac{1 - q(\mathbf{r},t)}{\tau_{q}} - \beta q(\mathbf{r},t)F(u(\mathbf{r},t)). \end{array}$$
(7.201)

The radially symmetric excitatory weight distribution is taken to be an exponential, \(w(r) = \text{e}^{-r}/2\pi\). It can be shown that the space-clamped model

$$\displaystyle\begin{array}{rcl} \dot{u}(t) = -u(t) + q(t)F(u(t)),\quad \dot{q}(t) = \frac{1 - q(t)} {\tau _{q}} -\beta q(t)F(u(t)),& &{}\end{array}$$
(7.202)
Fig. 7.17 Limit cycle oscillations in the space-clamped system (7.202) for a piecewise linear firing rate function (1.16) with threshold κ = 0.01 and gain η = 4. (a) Bifurcation diagram showing fixed points u of the system as a function of β for τ_q = 80. (b) Corresponding phase-plane plot of q versus u (gray curve) for β = 4, showing that the system supports a stable limit cycle [329]

Fig. 7.18 Target patterns in a 2D neural field with synaptic depression, induced by an initial condition specified by Eq. (7.203) at t = 0, where χ = 1 and ζ = 25. Initially, an activated state spreads radially outward across the entire medium as a traveling front. Then, the localized oscillating core of activity emits a target wave with each oscillation cycle. Eventually, these target waves fill the domain. Each target wave can be considered as a phase shift in space of the oscillation throughout the medium; they travel with the same speed as the initial front. Parameters are τ_q = 80, β = 4, η = 4, and κ = 0.01 [330]

supports limit cycle oscillations provided that the firing rate function has finite gain. For example, in the case of the piecewise linear firing rate function (1.16), oscillations arise via a subcritical Hopf bifurcation of a high activity fixed point; see Fig. 7.17. One then finds that the full network model (7.201) supports a spatially localized oscillating core that periodically emits traveling pulses [330]. Such dynamics can be induced by taking an initial condition of the form

Fig. 7.19 Spiral wave generated by shifting the phase of the top and bottom halves of the target pattern shown in Fig. 7.18. The period of the spiral wave oscillation is roughly the same as the period of the oscillation in the space-clamped system. All patches of neurons are oscillating at the same frequency, but phase-shifted as coordinates are rotated about the central phase singularity [330]

$$\displaystyle{ (u(\mathbf{r},0),q(\mathbf{r},0)) = \left(\chi\,\text{e}^{-(x^{2}+y^{2})/\zeta^{2}},1\right), }$$
(7.203)

where χ and ζ parameterize the amplitude and spatial constant of the initial state. An example of a pulse-emitting core is shown in Fig. 7.18, which oscillates at a frequency of roughly 3 Hz. Pulses are emitted each cycle and travel at a speed of roughly 30 cm/s, which is determined by the period of the oscillations; the latter is set by the time constant of synaptic depression. The initial emission of spreading activity appears as a traveling front which propagates from the region activated by the input current into the surrounding region of zero activity; it travels at the same speed as the subsequent target waves. The front converts each region of the network into an oscillatory state that is phase-shifted relative to the core, resulting in the appearance of a radially symmetric target pattern. Spiral waves can also be induced by breaking the rotational symmetry of pulse emitter solutions [330]. More specifically, if the target pattern produced by the emitter has the top and bottom halves of its domain phase-shifted, then the dynamics evolves into two counterrotating spirals on the left and right halves of the domain. Closer inspection of one of these spirals reveals that it has a fixed center about which activity rotates indefinitely as shown in Fig. 7.19.
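The pulse-emitting core and target waves described above can be explored by direct simulation of Eqs. (7.201) with the initial condition (7.203). The following sketch uses forward Euler stepping and an FFT-based evaluation of the convolution on a periodic grid; the saturating form assumed for the piecewise linear rate (1.16), the domain size, and the time step are illustrative choices, so the results should be compared against [330] rather than taken as definitive.

```python
# Minimal sketch: 2D neural field with synaptic depression, Eqs. (7.201),
# started from the Gaussian initial condition (7.203). Grid, boundary
# conditions, and the saturating rate function are illustrative assumptions.
import numpy as np

Lx, N = 150.0, 512                  # periodic square domain, N x N grid
dx = Lx / N
xs = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(xs, xs)
R = np.sqrt(X**2 + Y**2)

tau_q, beta = 80.0, 4.0
eta, kappa = 4.0, 0.01              # gain and threshold of the rate function
chi, zeta = 1.0, 25.0               # amplitude and width of the initial bump
dt, T = 0.1, 400.0

# radially symmetric exponential kernel w(r) = exp(-r)/(2*pi)
w = np.exp(-R) / (2.0 * np.pi)
w_hat = np.fft.fft2(np.fft.ifftshift(w)) * dx**2

def F(u):
    # assumed saturating form of the piecewise linear rate (1.16)
    return np.clip(eta * (u - kappa), 0.0, 1.0)

# initial condition (7.203): localized Gaussian in u, undepressed synapses q = 1
u = chi * np.exp(-(X**2 + Y**2) / zeta**2)
q = np.ones_like(u)

snapshots = []
for n in range(int(T / dt)):
    conv = np.real(np.fft.ifft2(w_hat * np.fft.fft2(q * F(u))))
    u = u + dt * (-u + conv)
    q = q + dt * ((1.0 - q) / tau_q - beta * q * F(u))
    if n % 500 == 0:
        snapshots.append(u.copy())   # successive frames reveal the emitted target waves
```

Phase-shifting the top and bottom halves of the resulting target pattern, as described above, is then a matter of swapping the corresponding halves of u and q between two snapshots separated by half an oscillation period.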

A very different mechanism for generating periodic waves in a 1D or 2D neural field model is through the combination of adaptation and a spatially localized input [196, 197]. Recall from Sect. 7.2 that a 1D excitatory neural field with adaptation supports the propagation of a solitary traveling pulse, which can be induced by perturbing the system with a transient localized pulse. (In contrast to the previous example, we are assuming that the neural field operates in an excitable regime.) In the case of a 2D network with radially symmetric weights, such a pulse will produce a single expanding circular wave. Now suppose that a 2D localized pulse persists in the form of a radially symmetric Gaussian input \(I(\mathbf{r}) = I_{0}\text{e}^{-r^{2}/\sigma^{2}}\), which could represent either an external stimulus or a localized region of depolarization. As one might expect, for sufficiently large input amplitude \(I_{0}\), the neural field supports a radially symmetric stationary pulse or bump centered about the input. Such a bump is not self-sustaining, however, since if the input is removed, then the bump disappears as well. This then raises the question as to what happens to the stability of the bump as the input amplitude is slowly decreased. One finds that the bump first undergoes a Hopf instability as \(I_{0}\) is decreased, leading to the formation of a spatially localized oscillating pulse or breather [196]. Interestingly, as the input amplitude is further reduced, the breather can undergo a secondary instability such that it now acts as an oscillating core that emits circular target waves. Thus, a spatially localized stationary input provides a mechanism for the formation of a network pacemaker oscillator. A linear stability analysis establishes that the primary instability is due to the growth of radially symmetric eigenmodes. A similar bifurcation scenario also occurs in a neural field with lateral inhibition, except that now the Hopf bifurcation typically involves the growth of nonradially symmetric eigenmodes, resulting in asymmetric breathers and rotating waves [197].
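A crude way to explore the bump-to-breather transition numerically is to integrate a 1D excitable neural field with linear adaptation and a persistent Gaussian input, and to monitor the width of the superthreshold region as the input amplitude \(I_0\) is lowered. The adaptation form, the sigmoidal rate, and all parameter values in the sketch below are generic assumptions rather than the specific model analyzed in [196, 197]; a constant width signals a stable bump, sustained width oscillations signal a breather, and at still smaller \(I_0\) one looks for periodic emission of traveling pulses.

```python
# Minimal sketch: 1D neural field with linear adaptation and a persistent
# Gaussian input; the bump width is monitored over time for a given input
# amplitude I0. All model choices here are assumptions.
import numpy as np

L, N = 60.0, 1024
dx = L / N
x = (np.arange(N) - N // 2) * dx
dt, T = 0.01, 200.0

beta_a, tau_v = 2.0, 10.0            # adaptation strength and time constant (assumed)
theta, gain = 0.3, 20.0              # steep sigmoid approximating a Heaviside rate
sigma_I = 2.0                        # width of the Gaussian input

w = 0.5 * np.exp(-np.abs(x))         # excitatory exponential kernel
w_hat = np.fft.fft(np.fft.ifftshift(w)) * dx

def F(u):
    return 1.0 / (1.0 + np.exp(-gain * (u - theta)))

def run(I0):
    """Integrate the field for input amplitude I0; return the bump width over time."""
    I = I0 * np.exp(-x**2 / sigma_I**2)
    u = np.zeros(N)
    v = np.zeros(N)
    widths = np.empty(int(T / dt))
    for n in range(widths.size):
        conv = np.real(np.fft.ifft(w_hat * np.fft.fft(F(u))))
        u = u + dt * (-u + conv - beta_a * v + I)
        v = v + dt * (u - v) / tau_v
        widths[n] = dx * np.count_nonzero(u > theta)   # crude bump-width measure
    return widths

# Example: sweep the input amplitude downwards and inspect the late-time width
for I0 in (1.2, 0.8, 0.5):
    w_t = run(I0)
    print(I0, w_t[-2000:].std())     # a nonzero late-time std suggests a breather
```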