
1 Introduction

Neural field models are generally considered to date back to the 1970s [1, 41], although several earlier papers consider similar equations [4, 25]. These types of equations were originally formulated as models for the dynamics of macroscopic activity patterns in the cortex, on a much larger spatial scale than that of a single neuron. They have been used to model phenomena such as short-term memory [36], the head direction system [43], visual hallucinations [19, 20], and EEG rhythms [7].

Perhaps the simplest formulation of such a model in one spatial dimension is

$$\displaystyle{ \frac{\partial u(x,t)} {\partial t} = -u(x,t) +\int _{ -\infty }^{\infty }w(x - y)f(u(y,t))\mathit{dy} }$$
(5.1)

where

  • w is symmetric, i.e. w(−x) = w(x),

  • \(\lim _{x\rightarrow \infty }w(x) = 0\),

  • \(\int _{-\infty }^{\infty }w(x)\mathit{dx} < \infty \),

  • w(x) is continuous,

and f is a non-decreasing function with \(\lim _{u\rightarrow -\infty }f(u) = 0\) and \(\lim _{u\rightarrow \infty }f(u) = 1\) [12, 36]. The physical interpretation of this type of model is that u(x, t) is the average voltage of a large group of neurons at position \(x \in \mathbb{R}\) and time t, and f(u(x, t)) is their firing rate, normalised to have a maximum of 1. The function w(x) describes how neurons a distance x apart affect one another. Typical forms of this function are purely positive [6], “Mexican hat” [19, 26] (positive for small x and negative for large x) and decaying oscillatory [18, 36]. To find the influence of neurons at position y on those at position x we evaluate f(u(y, t)) and weight it by w(x − y). The influence of all neurons is thus the integral over y of w(x − y)f(u(y, t)). In the absence of inputs from other parts of the network, u decays exponentially to a steady state, which we define to be zero. Equation (5.1) is a nonlocal differential equation, with the nonlocal term arising from the biological reality that we are modelling. Typically, researchers are interested in either “bump” solutions of (5.1), for which f(u(x)) > 0 only on a finite number of finite, disjoint intervals, or front solutions which connect a region of high activity to one of zero activity [12] (see Chap. 7). Note that this type of model is invariant with respect to spatial translations, which is reflected in the fact that w appears as a function of relative position only (i.e. x − y), not the actual values of x and y.
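Equation (5.1) can also be explored numerically without any of the machinery developed below. The following sketch simply truncates the domain, approximates the integral by a Riemann sum and time-steps with Euler's method; the coupling function, firing rate, grid and initial condition are illustrative choices of ours, not taken from this chapter.

```python
import numpy as np

# Illustrative choices: Mexican-hat coupling and a steep sigmoid firing rate.
L, N, dt, nsteps = 40.0, 401, 0.05, 1000
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

w = lambda d: np.exp(-np.abs(d)) - 0.5*np.exp(-np.abs(d)/2)   # Mexican hat
f = lambda u: 1.0/(1.0 + np.exp(-20.0*(u - 0.2)))             # steep sigmoid

# Quadrature-weight matrix: (W @ f(u))[i] approximates int w(x_i - y) f(u(y)) dy.
W = w(x[:, None] - x[None, :])*dx
u = 0.5*np.exp(-x**2)                      # small localised initial condition

for _ in range(nsteps):                    # forward Euler on u_t = -u + w * f(u)
    u = u + dt*(-u + W @ f(u))

print(u.max())                             # a localised bump may (or may not) persist
```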

The function f is normally thought of as a sigmoid (although other functions are sometimes considered [26]), and in the limit of infinite steepness it becomes the Heaviside step function [12, 36]. In this case stationary solutions are easily constructed since to evaluate the integral in (5.1) we just integrate w(x − y) over the interval(s) of y where f(u(y, t)) = 1. The stability of these solutions can be determined by linearising (5.1) about them and using the fact that the derivative of the Heaviside function is the Dirac delta function [6, 40].
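As a concrete illustration of this construction, suppose a stationary bump is active (i.e. above threshold) exactly on an interval of width Δ. Then u(x) = W(x) − W(x − Δ), where W(z) = ∫₀ᶻ w(s) ds, and since w is even both threshold conditions u(0) = u(Δ) = h reduce to W(Δ) = h. The sketch below carries this out for an illustrative Mexican-hat w and threshold h (our choices, not the chapter's); for such coupling there are typically two roots, a narrow bump and a wide one.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative Mexican-hat coupling and threshold (our choices).
w = lambda s: np.exp(-np.abs(s)) - 0.5*np.exp(-np.abs(s)/2)
h = 0.1

W = lambda z: quad(w, 0.0, z)[0]              # W(z) = int_0^z w(s) ds

# Threshold condition W(Delta) = h for a bump active on (0, Delta).
# For this w there are two roots; the wider bump is typically the stable one.
D_narrow = brentq(lambda z: W(z) - h, 1e-3, 1.3)
D_wide = brentq(lambda z: W(z) - h, 1.5, 10.0)

u = lambda x, D: W(x) - W(x - D)              # the corresponding bump profile
print(D_narrow, D_wide, u(D_wide/2, D_wide))  # widths and the wide bump's peak value
```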

When f is not a Heaviside, constructing stationary solutions becomes more difficult and we generally have to do so numerically. A stationary solution of (5.1) satisfies

$$\displaystyle{ u(x) =\int _{ -\infty }^{\infty }w(x - y)f(u(y))\mathit{dy}. }$$
(5.2)

In all but Sect. 5.4 of this chapter we consider only spatially-localised solutions, i.e. ones for which u and all of its relevant spatial derivatives decay to zero as |x| → ∞. Generally speaking, integral equations such as (5.2) are not studied in as much detail as differential equations. As a result, more methods of analysis—and more software packages for numerical solution—exist for differential equations than for integral equations. For these reasons we consider rewriting (5.2) as a differential equation for the function u(x). The key to doing so is to recognise that the integral in (5.2) is a convolution. This observation provides several equivalent ways of converting (5.2) into a differential equation.

The first method involves recalling that the Fourier transform of the convolution of two functions is the product of their Fourier transforms. Thus, denoting by F[u](k) the Fourier transform of u(x), where k is the transform variable, Fourier transforming (5.2) gives

$$\displaystyle{ F[u](k) = F[w](k) \times F[f(u)](k) }$$
(5.3)

where “×” indicates normal multiplication. Suppose that the Fourier transform of w is a rational function of k², i.e.

$$\displaystyle{ F[w](k) = \frac{P(k^{2})} {Q(k^{2})} }$$
(5.4)

where P and Q are polynomials. Multiplying both sides of (5.3) by Q(k²) we obtain

$$\displaystyle{ Q(k^{2}) \times F[u](k) = P(k^{2}) \times F[f(u)](k) }$$
(5.5)

Recalling that if the Fourier transform of u(x) is F[u](k), then the Fourier transform of u′′(x) is −k²F[u](k), the Fourier transform of u′′′′(x) is k⁴F[u](k) and so on, where the primes indicate spatial derivatives, taking the inverse Fourier transform of (5.5) gives

$$\displaystyle{ D_{1}u(x) = D_{2}f(u(x)) }$$
(5.6)

where D₁ and D₂ are linear differential operators, involving only even derivatives, associated with Q and P respectively [32]. As an example, consider the decaying oscillatory coupling function

$$\displaystyle{ w(x) = e^{-b\vert x\vert }(b\sin \vert x\vert +\cos x) }$$
(5.7)

where b is a parameter (plotted in Fig. 5.1 (left) for b = 0.5), which has the Fourier transform

$$\displaystyle{ \frac{4b(b^{2} + 1)} {k^{4} + 2(b^{2} - 1)k^{2} + (b^{2} + 1)^{2}}. }$$
(5.8)

For this example D₂ is just the constant 4b(b² + 1) and

$$\displaystyle{ D_{1} = \frac{d^{4}} {\mathit{dx}^{4}} - 2(b^{2} - 1) \frac{d^{2}} {\mathit{dx}^{2}} + (b^{2} + 1)^{2} }$$
(5.9)

and thus (for this choice of w) Eq. (5.2) can be written

$$\displaystyle{ \frac{d^{4}u} {\mathit{dx}^{4}} - 2(b^{2} - 1)\frac{d^{2}u} {\mathit{dx}^{2}} + (b^{2} + 1)^{2}u = 4b(b^{2} + 1)f(u(x)). }$$
(5.10)
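As a check on this example, note that since w in (5.7) is even its Fourier transform equals 2∫₀^∞ w(x)cos(kx) dx, which can be compared with (5.8) symbolically. A minimal sketch (with b fixed at 1/2, the value used in Fig. 5.1):

```python
import sympy as sp

x, k = sp.symbols('x k', real=True)
b = sp.Rational(1, 2)                       # any fixed b > 0; 1/2 matches Fig. 5.1

w = sp.exp(-b*x)*(b*sp.sin(x) + sp.cos(x))  # (5.7) restricted to x >= 0 (w is even)
Fw = 2*sp.integrate(w*sp.cos(k*x), (x, 0, sp.oo))
target = 4*b*(b**2 + 1)/(k**4 + 2*(b**2 - 1)*k**2 + (b**2 + 1)**2)   # (5.8)

print(sp.simplify(Fw - target))             # expect 0
```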

Our decision to consider only spatially-localised solutions validates the use of Fourier transforms and gives the boundary conditions for (5.10), namely

$$\displaystyle{ \lim _{x\rightarrow \pm \infty }(u,u^{{\prime}},u^{{\prime\prime}},u^{{\prime\prime\prime}}) = (0,0,0,0). }$$
(5.11)
Fig. 5.1

Left: w(x) given by (5.7) when b = 0.5. Right: f(u) given by (5.21) for κ = 0.1, h = 1

The other method for converting (5.2) into a differential equation is to recall that the solution of an inhomogeneous linear differential equation can be formally written as the convolution of the Green’s function of the linear differential operator (together with the appropriate boundary conditions) and the function on the right hand side (RHS) of the differential equation. Thus if w were such a Green’s function, we could recognise (5.2) as being the solution of a linear differential equation with f(u) as its RHS.

Using the example above one can show that the Green’s function of the operator (5.9) with boundary conditions (5.11), i.e. the solution of

$$\displaystyle{ \frac{d^{4}w} {\mathit{dx}^{4}} - 2(b^{2} - 1)\frac{d^{2}w} {\mathit{dx}^{2}} + (b^{2} + 1)^{2}w =\delta (x) }$$
(5.12)

satisfying (5.11), where δ is the Dirac delta function, is

$$\displaystyle{ w(x) = \frac{e^{-b\vert x\vert }(b\sin \vert x\vert +\cos x)} {4b(b^{2} + 1)} }$$
(5.13)

and thus the solution of (5.10)–(5.11) is (5.2). This second method, of recognising that the coupling function w is the Green’s function of a linear differential operator, is perhaps less easy to generalise, so we concentrate mostly on the first method in this chapter. An important point to note is that the Fourier transform method applies equally well to (5.1), i.e. the full time-dependent problem. Using the function (5.7) and keeping the time derivative one can convert (5.1) to

$$\displaystyle{ \left [ \frac{\partial ^{4}} {\partial x^{4}} - 2(b^{2} - 1) \frac{\partial ^{2}} {\partial x^{2}} + (b^{2} + 1)^{2}\right ]\left (u(x,t)\,+\,\frac{\partial u(x,t)} {\partial t} \right ) = 4b(b^{2} + 1)f(u(x,t)) }$$
(5.14)

Clearly stationary solutions of (5.14) satisfy (5.10), but keeping the time dependence in (5.14) enables us to determine the stability of these stationary solutions via linearisation about them.

Note that the Fourier transform of (1/2)e^{-|x|} is 1/(1 + k²), and thus for this coupling function (5.2) is equivalent to

$$\displaystyle{ \left (1 - \frac{\partial ^{2}} {\partial x^{2}}\right )u = f(u) }$$
(5.15)

Also, the Fourier transform of the “wizard hat” w(x) = (1/4)(1 − |x|)e^{-|x|} is \(k^{2}/(1 + k^{2})^{2}\), giving the differential equation [12]

$$\displaystyle{ \left (1 - \frac{\partial ^{2}} {\partial x^{2}}\right )^{2}u = - \frac{\partial ^{2}} {\partial x^{2}}f(u) }$$
(5.16)

and thus a variety of commonly used connectivity functions are amenable to this type of transformation. (See also [26] for another example.)
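Both transform pairs quoted in the last two paragraphs can be checked in the same way (each coupling function is even, so the transform reduces to a cosine integral). A minimal symbolic sketch:

```python
import sympy as sp

x, k = sp.symbols('x k', real=True)
ft = lambda w: sp.simplify(2*sp.integrate(w*sp.cos(k*x), (x, 0, sp.oo)))

print(ft(sp.exp(-x)/2))               # (1/2) e^{-|x|}        -> 1/(k**2 + 1)
print(ft((1 - x)*sp.exp(-x)/4))       # (1/4)(1 - |x|) e^{-|x|} -> k**2/(k**2 + 1)**2
```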

The model (5.1) assumes that information about activity at position y propagates instantaneously to position x, but a more realistic model could include a distance-dependent delay:

$$\displaystyle{ \frac{\partial u(x,t)} {\partial t} = -u(x,t) +\int _{ -\infty }^{\infty }w(x - y)f\left (u\left (y,t -\frac{\vert x - y\vert } {v} \right )\right )\mathit{dy} }$$
(5.17)

where v > 0 is the velocity of propagation of information. Equation (5.17) can be written

$$\displaystyle{ \frac{\partial u(x,t)} {\partial t} = -u(x,t) +\psi (x,t) }$$
(5.18)

where

$$\displaystyle{ \psi (x,t) \equiv \int _{-\infty }^{\infty }\int _{ -\infty }^{\infty }K(x - y,t - s)f(u(y,s))\mathit{dy}\ \mathit{ds} }$$
(5.19)

and K(x, t) = w(x)δ(t − |x|/v) [12, 33]. Recognising that both integrals in (5.19) are convolutions, and making the choice w(x) = (1/2)e^{-|x|}, one can take Fourier transforms in both space and time and convert (5.19) to

$$\displaystyle{ \left ( \frac{\partial ^{2}} {\partial t^{2}} + 2v \frac{\partial } {\partial t} + v^{2} - v^{2} \frac{\partial ^{2}} {\partial x^{2}}\right )\psi (x,t) = \left (v^{2} + v \frac{\partial } {\partial t}\right )f(u(x,t)) }$$
(5.20)

This equation was first derived in [30], whose authors may well have been the first to use Fourier transforms to convert neural field models to PDEs. We will not consider delays here, but see [15] for a recent approach in two spatial dimensions.

2 Results in One Spatial Dimension

We now present some results of the analysis of (5.14), similar to those in [36]. From now on we make the specific choice of the firing rate function

$$\displaystyle{ f(u) = e^{-\kappa /(u-h)^{2} }H(u - h) }$$
(5.21)

where κ > 0 and \(h \in \mathbb{R}\) are parameters, and H is the Heaviside step function. The function (5.21) for typical parameter values is shown in Fig. 5.1 (right). Note that if h > 0 then f(0) = 0.

We start with a few comments regarding Eqs. (5.10) and (5.11). Firstly, Eq. (5.10) is reversible under the involution \((u,u^{{\prime}},u^{{\prime\prime}},u^{{\prime\prime\prime}})\mapsto (u,-u^{{\prime}},u^{{\prime\prime}},-u^{{\prime\prime\prime}})\) [18]. Secondly, spatially-localised solutions of (5.10) can be regarded as homoclinic orbits to the origin, i.e. orbits for which u and all of its derivatives tend to zero as x → ±∞. Linearising (5.10) about the origin one finds that it has eigenvalues b ± i and −b ± i, i.e. the fixed point at the origin is a bifocus [34], and thus the homoclinic orbits spiral into and out of the origin. Thirdly, Eq. (5.10) is Hamiltonian, and homoclinic orbits to the origin satisfy the first integral

$$\displaystyle{ u^{{\prime}}u^{{\prime\prime\prime}}-\frac{(u^{{\prime\prime}})^{2}} {2} - (b^{2} - 1)(u^{{\prime}})^{2} + (b^{2} + 1)^{2}Q(u) = 0 }$$
(5.22)

where

$$\displaystyle{ Q(u) \equiv \int _{0}^{u}\left (s -\frac{4bf(s)} {b^{2} + 1} \right )\mathit{ds} }$$
(5.23)

This Hamiltonian nature can be exploited to understand the solutions of (5.10)–(5.11) and the bifurcations they undergo as parameters are varied [18]. See for example [11] for more details on homoclinic orbits in reversible systems.
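The claim that (5.22) is conserved can be checked symbolically: differentiating it with respect to x and eliminating u′′′′ using (5.10) should give zero identically, for an arbitrary firing rate f. A minimal sketch:

```python
import sympy as sp

x, b, s = sp.symbols('x b s', real=True)
u, f = sp.Function('u'), sp.Function('f')

U = u(x)
up, upp, uppp, u4 = [U.diff(x, n) for n in (1, 2, 3, 4)]

# Q(u) from (5.23), left as an unevaluated integral so that f stays arbitrary.
Q = sp.Integral(s - 4*b*f(s)/(b**2 + 1), (s, 0, U))
H = up*uppp - upp**2/2 - (b**2 - 1)*up**2 + (b**2 + 1)**2*Q   # (5.22)

# On solutions of (5.10): u'''' = 2(b^2-1)u'' - (b^2+1)^2 u + 4b(b^2+1) f(u)
ode = 2*(b**2 - 1)*upp - (b**2 + 1)**2*U + 4*b*(b**2 + 1)*f(U)

print(sp.simplify(H.diff(x).doit().subs(u4, ode)))            # expect 0
```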

We are interested in stationary spatially-localised solutions of (5.14), and how they vary as parameters are varied. Figure 5.2 shows the result of following such solutions as the parameter h (firing threshold) is varied. As was found in [18, 35] the family of solutions forms a “snake” with successively more large amplitude oscillations added to the solution as one moves from one branch of the snake to the next in the direction of increasing max(u). (Note that b, not h, was varied in [18, 35].) Similar snakes of homoclinic orbits have been found in other reversible systems of fourth-order differential equations [10, 42], and Faye et al. [21] very recently analysed snaking behaviour in a model of the form (5.1).
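To give a feel for how such a branch can be traced, the sketch below discretises (5.10) by Fourier collocation on a large periodic interval (so that D₁ acts diagonally in k-space), solves the resulting algebraic system with a Newton–Krylov method, steps the parameter h using each converged solution as the next initial guess, and estimates stability from the eigenvalues of the linearisation of (5.14), which for a perturbation v reads v_t = −v + w ∗ (f′(ū)v). The domain size, resolution, initial guess and parameter range are our own choices; this is a sketch of the procedure rather than the computation behind Fig. 5.2, and it is not guaranteed to stay on any particular branch.

```python
import numpy as np
from scipy.optimize import root

b, kappa = 0.25, 0.1
L, N = 80.0, 512
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
D1hat = k**4 + 2*(b**2 - 1)*k**2 + (b**2 + 1)**2     # symbol of D1, Eq. (5.9)
What = 4*b*(b**2 + 1)/D1hat                          # F[w], Eq. (5.8)

def f(u, h):
    """Firing rate (5.21)."""
    out = np.zeros_like(u)
    idx = u > h
    out[idx] = np.exp(-kappa/(u[idx] - h)**2)
    return out

def fp(u, h):
    """Derivative of (5.21) for u > h."""
    out = np.zeros_like(u)
    idx = u > h
    d = u[idx] - h
    out[idx] = np.exp(-kappa/d**2)*2*kappa/d**3
    return out

def residual(u, h):
    """D1 u - 4 b (b^2 + 1) f(u), evaluated spectrally; zero at steady states."""
    return np.real(np.fft.ifft(D1hat*np.fft.fft(u))) - 4*b*(b**2 + 1)*f(u, h)

# Convolution with w as a (circulant) matrix, used for the stability estimate.
C = np.real(np.fft.ifft(What[:, None]*np.fft.fft(np.eye(N), axis=0), axis=0))

u = 0.8*np.exp(-x**2)                     # crude 1-bump initial guess
for h in np.linspace(0.35, 0.65, 13):     # natural-parameter continuation in h
    sol = root(residual, u, args=(h,), method='krylov', tol=1e-8)
    if not sol.success:
        break
    u = sol.x
    # Linearisation of (5.14): v_t = -v + w * (f'(u) v)
    lam = np.linalg.eigvals(-np.eye(N) + C @ np.diag(fp(u, h)))
    print(f"h = {h:.3f}  max(u) = {u.max():.3f}  max Re(lambda) = {lam.real.max():.3f}")
```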

Fig. 5.2

Spatially-localised steady states of (5.14) as a function of h. The vertical axis is the maximum over the domain of u(x). Solid curves indicate stable solutions and dashed curves unstable ones. The solutions at points A, B and C are shown in Fig. 5.3. Other parameters are b = 0.25, κ = 0.1

Fig. 5.3

Spatially-localised steady states of (5.14) at the three points marked A, B and C in Fig. 5.2. Other parameters are b = 0.25, κ = 0.1

Figure 5.3 shows three solutions from the family shown in Fig. 5.2, all at h = 0.5. The solutions at A and C are stable, and are referred to as “1-bump” and “3-bump” solutions, respectively, since they have one and three regions, respectively, for which u > h. The solution at B is an unstable 3-bump solution. Stability of solutions was determined by linearising (5.14) about them. The curve in Fig. 5.2 shows N-bump solutions which are symmetric about the origin, where N is odd. A similar curve exists for N even (not shown), and asymmetric solutions also exist [17]. In summary, spatially-localised solutions of (5.10) are generic and form families which are connected in a snake-like fashion that can be uncovered as parameters are varied. For more details on (5.10)–(5.11) the reader is referred to [36]. We next consider the generalisation of neural field models to two spatial dimensions and again investigate spatially-localised solutions.

3 Two Dimensional Bumps and Rings

Neural field equations are easily generalised to two spatial dimensions, and the simplest are of the form

$$\displaystyle{ \frac{\partial u(\mathbf{x},t)} {\partial t} = -u(\mathbf{x},t) +\int _{\mathbb{R}^{2}}w(\vert \mathbf{x} -\mathbf{y}\vert )f(u(\mathbf{y},t))d\mathbf{y} }$$
(5.24)

where \(\mathbf{x} \in \mathbb{R}^{2}\) and w and f have their previous meanings. Note that w is a function of the scalar distance between points x and y. Spatially-localised solutions of equations of the form (5.24) have only recently been analysed in any depth [9, 16, 22–24, 29, 35, 38]. The study of such solutions is harder than in one spatial dimension for the following reasons:

  • Their analytical construction involves integrals over subsets of the plane rather than over intervals.

  • The determination of the stability of, say, a circular stationary solution is more difficult because perturbations which break the rotational symmetry must be considered.

  • Numerical studies require vastly more mesh points in a discretisation of the domain.

However, the use of the techniques presented in Sect. 5.1 has been fruitful for the construction and analysis of such solutions. One important point to note is that the techniques cannot be applied directly when the function w is one of the commonly used ones mentioned above. For example, if \(w(x) = e^{-x} -\mathit{Me}^{-\mathit{mx}}\) (of Mexican-hat type when 0 < M < 1 and 0 < m < 1) then its Fourier transform is

$$\displaystyle{ F[w](\vert \mathbf{k}\vert ) = \frac{1} {(1 + \vert \mathbf{k}\vert ^{2})^{3/2}} - \frac{\mathit{Mm}} {(m^{2} + \vert \mathbf{k}\vert ^{2})^{3/2}} }$$
(5.25)

where \(\mathbf{k} \in \mathbb{R}^{2}\) is the transform variable. Rearranging and then taking the inverse Fourier transform one faces the question as to what a differential equation containing an operator like \((1 -\nabla ^{2})^{3/2}\) actually means [15]. One way around this is to expand a term like \((1 + \vert \mathbf{k}\vert ^{2})^{3/2}\) around |k| = 0 as \(1 + (3/2)\vert \mathbf{k}\vert ^{2} + O(\vert \mathbf{k}\vert ^{4})\) and keep only the first few terms, thus (after inverse transforming) giving one a PDE. This is known as the long wavelength approximation [37]; see [15] for a discussion.
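The expansion just quoted is easily verified symbolically:

```python
import sympy as sp

k = sp.symbols('k')
print(sp.series((1 + k**2)**sp.Rational(3, 2), k, 0, 6))
# 1 + 3*k**2/2 + 3*k**4/8 + O(k**6)
```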

A more fruitful approach is to realise that neural field models are qualitative only, and we can gain insight from models in which the functions w and f are qualitatively correct. Thus we have some freedom in our choice of these functions. The approach of Laing and co-workers [32, 35, 36] was to use this freedom to choose not w, but its Fourier transform. If the Fourier transform of w is chosen so that the Fourier transform of (5.24) can be rearranged and then inverse transformed to give a simple differential equation, and the resulting function w is qualitatively correct (i.e. has the same general properties as connectivity functions of interest) then one can make much progress.

As an example, consider the case when

$$\displaystyle{ F[w](\vert \mathbf{k}\vert ) = \frac{A} {B + (\vert \mathbf{k}\vert ^{2} - M)^{2}} }$$
(5.26)

where A, B and M are parameters [35]. Taking the Fourier transform of (5.24), using (5.26), and rearranging, one obtains

$$\displaystyle{ \left \{\vert \mathbf{k}\vert ^{4} - 2M\vert \mathbf{k}\vert ^{2} + B + M^{2}\right \}F\left [u + \frac{\partial u} {\partial t} \right ](\mathbf{k}) = \mathit{AF}[f(u)](\mathbf{k}) }$$
(5.27)

and upon taking the inverse Fourier transform one obtains the differential equation

$$\displaystyle{ \left [\nabla ^{4} + 2M\nabla ^{2} + B + M^{2}\right ]\left (u + \frac{\partial u} {\partial t} \right ) = \mathit{Af }(u) }$$
(5.28)

The function w is then defined as the inverse Fourier transform of its Fourier transform, i.e.

$$\displaystyle{ w(x) = A\int _{0}^{\infty } \frac{sJ_{0}(\mathit{xs})} {B + (s^{2} - M)^{2}}\mathit{ds} }$$
(5.29)

where J₀ is the Bessel function of the first kind of order 0 [35]. (w(x) is the Hankel transform of order 0 of F[w].) Figure 5.4 shows a plot of w(x) for parameter values M = 1, A = 0.4, B = 0.1. We see that it is of a physiologically-plausible form, qualitatively similar to that shown in Fig. 5.1 (left). We have thus formally transformed (5.24) into the PDE (5.28).
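The profile in Fig. 5.4 can be reproduced by evaluating (5.29) by quadrature; a minimal sketch (the truncation of the semi-infinite integral and the grid of x values are our choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

M, A, B = 1.0, 0.4, 0.1                      # values used in Fig. 5.4

def w(x, smax=60.0):
    """Evaluate (5.29); the integrand decays like 1/s^3, so truncating at smax
    introduces only a small error."""
    val, _ = quad(lambda s: A*s*j0(x*s)/(B + (s**2 - M)**2), 0.0, smax, limit=500)
    return val

xs = np.linspace(0.0, 15.0, 151)
ws = np.array([w(xi) for xi in xs])
print(ws[0], ws.min())      # positive at the origin; negative at intermediate x
```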

Fig. 5.4

The function w(x) defined by (5.29) for parameter values M = 1, A = 0.4, B = 0.1

As a start we consider spatially-localised and rotationally-invariant solutions of (5.28), which satisfy

$$\displaystyle\begin{array}{rcl} & & \left [ \frac{\partial ^{4}} {\partial r^{4}} + \frac{2} {r} \frac{\partial ^{3}} {\partial r^{3}} - \frac{1} {r^{2}} \frac{\partial ^{2}} {\partial r^{2}} + \frac{1} {r^{3}} \frac{\partial } {\partial r} + 2M\left ( \frac{\partial ^{2}} {\partial r^{2}} + \frac{1} {r} \frac{\partial } {\partial r}\right ) + (B + M^{2})\right ]\left (u + \frac{\partial u} {\partial t} \right ) \\ & & = \mathit{Af }(u) {}\end{array}$$
(5.30)

with

$$\displaystyle{ \left.\frac{\partial u} {\partial r}\right \vert _{r=0} = \left.\frac{\partial ^{3}u} {\partial r^{3}} \right \vert _{r=0} = 0\quad \mbox{ and }\quad \lim _{r\rightarrow \infty }\left (u, \frac{\partial u} {\partial r}, \frac{\partial ^{2}u} {\partial r^{2}}, \frac{\partial ^{3}u} {\partial r^{3}} \right ) = (0,0,0,0) }$$
(5.31)

where u is now a function of radius r and time t only. We can numerically find and then follow stationary solutions of (5.30)–(5.31) as parameters are varied. For example, Fig. 5.5 shows the effects of varying h for solutions with u(0) > 0 and u′′(0) < 0. We see a snaking curve similar to that in Fig. 5.2, and as we move up the snake, on each successive branch the solution gains one more large amplitude oscillation.

Fig. 5.5

Solutions of (5.30)–(5.31) with u(0) > 0 and u′′(0) < 0 as a function of h. Other parameter values: κ = 0.05, M = 1, A = 0.4, B = 0.1. The solution \(\overline{u}(r)\) at the point indicated by the circle is shown in Fig. 5.6 (left)

For any particular solution \(\overline{u}(r)\) on the curve in Fig. 5.5, one can find its stability by linearising (5.28) about it. To do this we write

$$\displaystyle{ u(r,\theta,t) = \overline{u}(r) +\epsilon \nu (t,r)\cos (m\theta ) }$$
(5.32)

where 0 < ε ≪ 1 and m ≥ 0 is an integer, the azimuthal index. We choose this form of perturbation in order to find solutions which break the circular symmetry of the system. Substituting (5.32) into (5.28) and keeping only first order terms in ε we obtain

$$\displaystyle\begin{array}{rcl} \left [ \frac{\partial ^{4}} {\partial r^{4}} + \frac{2} {r} \frac{\partial ^{3}} {\partial r^{3}} + \left (\frac{2\mathit{Mr}^{2} - 2m^{2} - 1} {r^{2}} \right ) \frac{\partial ^{2}} {\partial r^{2}} + \left (\frac{2m^{2} + 1 + 2\mathit{Mr}^{2}} {r^{3}} \right ) \frac{\partial } {\partial r}\right.& & \\ \left.+\frac{m^{4} - 4m^{2} + (B + M^{2})r^{4} - 2\mathit{Mm}^{2}r^{2}} {r^{4}} \right ]\left (\nu + \frac{\partial \nu } {\partial t}\right ) = \mathit{Af }^{{\prime}}(\overline{u})\nu & & \qquad {}\end{array}$$
(5.33)

Since this equation is linear in ν we expect solutions of the form \(\nu (r,t) \sim \mu (r)e^{\lambda t}\) as t → ∞, where λ is the most positive eigenvalue associated with the stability of \(\overline{u}\) (which we assume to be real) and μ(r) is the corresponding eigenfunction.

Thus to determine the stability of a circularly-symmetric solution with radial profile \(\overline{u}(r)\), we solve (5.33) for integer m ≥ 0 and determine λ(m). If N is the integer for which λ(N) is largest, and λ(N) > 0, then this circularly-symmetric solution will be unstable with respect to perturbations with D_N symmetry, and the radial location of the growing perturbation will be given by μ(r).

For example, consider the solution shown solid in the left panel of Fig. 5.6. This solution exists at h = 0.42, so in terms of active regions (where u > h) it corresponds to a central circular bump with a ring surrounding it. Calculating λ(m) for this solution we obtain the curve in Fig. 5.6 (right). (We do not need to be restricted to integer m for the calculation.) We see that for this solution N = 6, and thus we expect a circularly-symmetric solution of (5.28) with radial profile given by \(\overline{u}(r)\) to be unstable at these parameter values, and most unstable with respect to perturbations with D₆ symmetry. The eigenfunction μ(r) corresponding to λ(6) is shown dashed in Fig. 5.6 (left). It is spatially-localised around the ring at r ≈ 7, so we expect the instability to appear there.

Fig. 5.6

Left: the solid curve shows \(\overline{u}(r)\) at the point indicated by the circle in Fig. 5.5. The dashed curve shows the eigenfunction μ(r) corresponding to λ(6). Right: λ(m) for the solution shown solid in the left panel. The integer with largest λ is N = 6

Figure 5.7 shows the result of simulating (5.28) with an initial condition formed by rotating the radial profile in Fig. 5.6 (left) through a full circle in the angular direction, and then adding a small random perturbation to u at each grid point. The initial condition is shown in the left panel and the final state (which is stable) is shown in the right panel. We see the formation of six bumps at the location of the first ring, as expected. This analysis has thus successfully predicted the appearance of a stable “7-bump” solution from the initial condition shown in Fig. 5.7 (left). (We used a regular grid in polar coordinates, with domain radius 30, using 200 points in the radial direction and 140 in the angular. The spatial derivatives in (5.28) were approximated using second-order accurate finite differences.)
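As a rough alternative to the polar finite-difference grid described above, (5.24) can be time-stepped pseudo-spectrally on a large periodic square, evaluating the convolution with FFTs and using (5.26) for F[w] directly. The sketch below does this; the domain, grid, time step and the noisy-ring initial condition are our own choices, so the final pattern need not reproduce Fig. 5.7 exactly.

```python
import numpy as np

M, A, B = 1.0, 0.4, 0.1                      # connectivity parameters, Eq. (5.26)
kappa, h = 0.05, 0.42                        # firing rate parameters, Eq. (5.21)
L, N, dt, nsteps = 60.0, 256, 0.1, 2000      # our own grid and time-stepping choices

def f(u):
    """Firing rate (5.21)."""
    out = np.zeros_like(u)
    idx = u > h
    out[idx] = np.exp(-kappa/(u[idx] - h)**2)
    return out

k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
KX, KY = np.meshgrid(k, k)
What = A/(B + (KX**2 + KY**2 - M)**2)        # F[w](|k|), Eq. (5.26)

xg = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(xg, xg)
R = np.sqrt(X**2 + Y**2)
rng = np.random.default_rng(0)
u = 0.6*np.exp(-0.2*(R - 7.0)**2) + 0.01*rng.standard_normal((N, N))  # noisy ring

for _ in range(nsteps):                      # forward Euler on u_t = -u + w * f(u)
    conv = np.real(np.fft.ifft2(What*np.fft.fft2(f(u))))
    u += dt*(-u + conv)

print(u.max(), int((u > h).sum()))           # crude summary of the final pattern
```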

Fig. 5.7

A simulation of (5.28) with initial condition corresponding to \(\overline{u}(r)\) in Fig. 5.6. Left: initial condition. Right: stable final state. u(r, θ) is plotted vertically

Fig. 5.8

Solutions of (5.30)–(5.31) with u(0) < 0 and u′′(0) > 0 as a function of h. Other parameter values: κ = 0.05, M = 1, A = 0.4, B = 0.1. The solutions \(\overline{u}(r)\) at the points A and B are shown in Fig. 5.9 (left) and Fig. 5.11 (left), respectively

Fig. 5.9

Left: the solid curve shows \(\overline{u}(r)\) at the point indicated by the point A in Fig. 5.8. The dashed curve shows the eigenfunction μ(r) corresponding to λ(3). Right: λ(m) for the solution shown solid in the left panel. The integer with largest λ is N = 3

Fig. 5.10

A simulation of (5.28) with initial condition corresponding to \(\overline{u}(r)\) in Fig. 5.9. Left: initial condition. Right: stable final state. u(r, θ) is plotted vertically

We can also consider stationary solutions of (5.30)–(5.31) for which u(0) < 0 and u′′(0) > 0, i.e. which have a “hole” in the centre. Following these solutions as h is varied we obtain Fig. 5.8. As in Fig. 5.5 we see a snake of solutions, with successive branches having one more large amplitude oscillation. We will consider the stability of two solutions on the curve in Fig. 5.8; first, the solution at point A, shown in the left panel of Fig. 5.9. This solution corresponds to one with just a single ring of active neurons. Calculating λ(m) for this solution we obtain the curve in Fig. 5.9 (right), and we see that a circularly-symmetric solution of (5.28) with radial profile given by this \(\overline{u}(r)\) will be most unstable with respect to perturbations with D₃ symmetry. The eigenfunction μ(r) corresponding to N = 3 is shown dashed in Fig. 5.9 (left), and it is localised at the first maximum of \(\overline{u}(r)\).

Figure 5.10 shows the result of simulating (5.28) with an initial condition formed by rotating the radial profile in Fig. 5.9 (left) through a full circle in the angular direction, and then adding a small random perturbation to u at each grid point. The initial condition is shown in the left panel and the final state (which is stable) is shown in the right panel. We see the formation of three bumps at the first ring, as expected.

Now consider the solution at point B in Fig. 5.8. This solution, shown in Fig. 5.11 (left), corresponds to one with two active rings. An analysis of its stability is shown in Fig. 5.11 (right) and we see that it is most unstable with respect to perturbations with D₉ symmetry, and that these should appear at the outer ring. Figure 5.12 shows the result of simulating (5.28) with an initial condition formed by rotating the radial profile in Fig. 5.11 (left) through a full circle in the angular direction, and then adding a small random perturbation to u at each grid point. The initial condition is shown in the left panel and the final state (which is stable) is shown in the right panel. We see the formation of nine bumps at the second ring, as expected.

Fig. 5.11

Left: the solid curve shows \(\overline{u}(r)\) at the point indicated by the point B in Fig. 5.8. The dashed curve shows the eigenfunction μ(r) corresponding to λ(9). Right: λ(m) for the solution shown solid in the left panel

Fig. 5.12

A simulation of (5.28) with initial condition corresponding to \(\overline{u}(r)\) in Fig. 5.11. Left: initial condition. Right: stable final state. u(r, θ) is plotted vertically

In summary we have shown how to analyse the stability of rotationally-symmetric solutions of the neural field equation (5.24), where w is given by (5.29), via transformation to a PDE. Notice that for all functions \(\overline{u}\) shown in the left panels of Figs. 5.6, 5.9 and 5.11, λ(0) < 0, i.e. these are stable solutions of (5.30). However, they are unstable with respect to some perturbations which break their rotational invariance. The stable states for all three examples considered consist of a small number of spatially-localised active regions.

Similar results to those presented in this section were obtained subsequently by [38] using a Heaviside firing rate function, which allowed for the construction of an Evans function to determine the stability of localised patterns. These authors also showed that the presence of a second, slow variable could cause a rotational instability of a pattern like that in Fig. 5.10 (right), resulting in it rotating at a constant speed. Very recently, instabilities of rotationally-symmetric solutions were addressed by considering the dynamics of the interface dividing regions of high activity from those of low activity, again using the Heaviside firing rate function [14] (see also the chapter by Coombes). Several other authors have also recently investigated symmetry breaking bifurcations of spatially-localised bumps [9, 16]. We now consider solutions of two-dimensional neural field equations which are not spatially-localised, specifically, spiral waves.

4 Spiral Waves

The function w used in the previous section was of the decaying oscillatory type (Fig. 5.4). Another form of coupling of interest is purely positive, i.e. excitatory. However, without some form of negative feedback, activity in a neural system with purely excitatory coupling will typically spread over the whole domain. With the inclusion of some form of slow negative feedback such as spike frequency adaptation [13] or synaptic depression [31], travelling pulses of activity are possible [1, 12, 19]. In two spatial dimensions the analogue of a travelling pulse is a spiral wave [2, 3], which we now study. Let us consider the system

$$\displaystyle\begin{array}{rcl} \frac{\partial u(\mathbf{x},t)} {\partial t} = -u(\mathbf{x},t) + B\int _{\varOmega }w(\vert \mathbf{x} -\mathbf{y}\vert )F(u(\mathbf{y},t))d\mathbf{y} - a(\mathbf{x},t)& &{}\end{array}$$
(5.34)
$$\displaystyle\begin{array}{rcl} \tau \frac{\partial a(\mathbf{x},t)} {\partial t} = \mathit{Au}(\mathbf{x},t) - a(\mathbf{x},t)& &{}\end{array}$$
(5.35)

where \(\varOmega \subset \mathbb{R}^{2}\) is a domain which, in practice, we choose to be a disk, and the firing rate function is

$$\displaystyle{ F(u) = \frac{1} {1 + e^{-\beta (u-h)}}. }$$
(5.36)

where h and β are parameters. This system is very similar to that in [24] and is the two-dimensional version of that considered in [23, 39]. If we choose the coupling function to be

$$\displaystyle{ w(r) =\int _{ 0}^{\infty } \frac{\mathit{sJ}_{0}(\mathit{rs})} {s^{4} + s^{2} + 1}\mathit{ds} }$$
(5.37)

then, using the same ideas as above (and ignoring the fact that we are not dealing with spatially-localised solutions) (5.34) is equivalent to

$$\displaystyle{ \left [\nabla ^{4} -\nabla ^{2} + 1\right ]\left (\frac{\partial u(\mathbf{x},t)} {\partial t} + u(\mathbf{x},t) + a(\mathbf{x},t)\right ) = \mathit{BF}(u(\mathbf{x},t)) }$$
(5.38)

We choose boundary conditions

$$\displaystyle{ u(R,\theta,t) = \left.\frac{\partial ^{2}u(r,\theta,t)} {\partial r^{2}} \right \vert _{r=R} = 0 }$$
(5.39)

for all θ and t, where R is the radius of the circular domain and we have written u in polar coordinates. The two differences between the system considered here and that in [32] are that here we use the firing rate function F (Eq. (5.36)), which is non-zero everywhere (the function f (Eq. (5.21)) was used in [32]), and that the boundary conditions given in (5.39) are different from those in [32].

Fig. 5.13

The function w(r) defined by (5.37)

The function w(r) defined by (5.37) is shown in Fig. 5.13 and we see that it is positive and decays monotonically as r → ∞. For a variety of parameters, the system (5.34)–(5.35) supports a rigidly-rotating spiral wave on a circular domain. To find and study such a wave we recognise that rigidly-rotating patterns on a circular domain can be “frozen” by moving to a coordinate frame rotating at the same speed as the pattern [2, 3, 5]. These rigidly rotating patterns satisfy the time-independent equations

$$\displaystyle\begin{array}{rcl} \left [\nabla ^{4} -\nabla ^{2} + 1\right ]\left (-\omega \frac{\partial u} {\partial \theta } + u + a\right ) = \mathit{BF}(u)& &{}\end{array}$$
(5.40)
$$\displaystyle\begin{array}{rcl} -\omega \tau \frac{\partial a} {\partial \theta } = \mathit{Au} - a& &{}\end{array}$$
(5.41)

where ω is the rotation speed of the pattern and θ is the angular variable in polar coordinates. Rigidly rotating spiral waves are then solutions of (5.40)–(5.41), together with a scalar “pinning” equation [2, 32] which allows us to determine ω as well as u and a. In practice, one solves (5.41) to obtain a as a function of u and substitutes into (5.40), giving the single equation for u

$$\displaystyle{ \left [\nabla ^{4} -\nabla ^{2} + 1\right ]\left (1 -\omega \frac{\partial } {\partial \theta } + A\left [1 -\omega \tau \frac{\partial } {\partial \theta }\right ]^{-1}\right )u = \mathit{BF}(u) }$$
(5.42)

Having found a solution \(\overline{u}\) of (5.42) its stability can be determined by linearising (5.34)–(5.35) about \((\overline{u},\overline{a})\), where

$$\displaystyle{ \left (1 -\omega \tau \frac{\partial } {\partial \theta }\right )\overline{a} = A\overline{u} }$$
(5.43)

As we have done in previous sections, we can numerically follow solutions of (5.42) as parameters are varied, determining their stability.
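A complementary (and much cruder) check is simply to time-step (5.34)–(5.35) and look for a rotating pattern. The sketch below does this pseudo-spectrally on a periodic square, rather than on the disc with boundary conditions (5.39) used here, evaluating the convolution with FFTs via the transform 1/(|k|⁴ + |k|² + 1) implied by (5.37). The domain, grid, time step and the broken-front initial condition are our own choices, so a spiral may or may not emerge for these settings.

```python
import numpy as np

# Parameter values taken from Figs. 5.14 and 5.16; everything else is our choice.
Afb, Bsyn, tau, h, beta = 2.0, 3.5, 3.0, 0.6, 20.0   # A, B, tau, h, beta
L, N, dt, nsteps = 70.0, 256, 0.05, 6000

F = lambda u: 1.0/(1.0 + np.exp(-beta*(u - h)))      # firing rate (5.36)

k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
KX, KY = np.meshgrid(k, k)
K2 = KX**2 + KY**2
What = 1.0/(K2**2 + K2 + 1.0)                        # F[w] implied by (5.37)

xg = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(xg, xg)
# Broken-front initial condition: active upper half-plane, adaptation offset so
# that the free end can curl up (a standard trick for seeding spirals).
u = np.where(Y > 0, 1.0, 0.0)*np.where(X > 0, 1.0, 0.2)
a = np.where(Y < 0, 0.5, 0.0)

for _ in range(nsteps):                              # forward Euler on (5.34)-(5.35)
    conv = np.real(np.fft.ifft2(What*np.fft.fft2(F(u))))
    u, a = u + dt*(-u + Bsyn*conv - a), a + dt*(Afb*u - a)/tau

print(u.min(), u.max())              # inspect/plot u to look for a rotating spiral
```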

Fig. 5.14

ω as a function of A for spiral wave solutions of (5.40)–(5.41). Solid curves are stable, dashed unstable. The spiral waves at the points marked “a”, “b” and “c” are shown in Fig. 5.15. Other parameters are h = 0.6, β = 20, τ = 3, B = 3.5. The domain has radius 35

In Fig. 5.14 we show ω as a function of A and also indicate the stability of solutions. Interestingly, there is a region of bistability for moderate values of A. Typical solutions (of both u and a) at three different points on the curve are shown in Fig. 5.15. In agreement with the results in [32] we see that as A (the strength of the negative feedback) is decreased, more of the domain becomes active, and as A is increased, less of the domain is active. The results of varying h (the threshold of the firing rate function) are shown in Fig. 5.16. We obtain results quite similar to those in Fig. 5.14—as h is decreased, more of the domain becomes active, and vice versa, and we also have a region of bistability. Figure 5.17 shows the result of varying τ: for large τ the spiral is unstable. The bifurcations seen in Figs. 5.14, 5.16 and 5.17 are all generic saddle-node bifurcations. In principle they could be followed as two parameters are varied, thus mapping out regions of parameter space in which stable spiral waves exist.

Fig. 5.15

Solutions of (5.40)–(5.41) at the three points marked in Fig. 5.14. The left column shows u and the right column shows a. The top, middle and bottom rows correspond to points “a”, “b” and “c”, respectively

Fig. 5.16

ω as a function of h for spiral wave solutions of (5.40)–(5.41). Solid curves are stable, dashed unstable. Other parameters are A = 2, β = 20, τ = 3, B = 3.5. The domain has radius 35

Fig. 5.17

ω as a function of τ for spiral wave solutions of (5.40)–(5.41). Solid curves are stable, dashed unstable. The right panel is an enlargement of the left. Other parameters are A = 2, β = 20, B = 3.5, h = 0.6. The domain has radius 35

We conclude this section by noting that spiral waves have been observed in simulations which include synaptic depression rather than spike frequency adaptation  [8, 31], and also seen experimentally in brain slice preparations [27, 28].

5 Conclusion

This chapter has summarised some of the results from [32, 35, 36], in which neural field equations in one and two spatial dimensions were studied by being converted into PDEs via a Fourier transform in space. In two spatial dimensions we showed how to investigate the instabilities of spatially-localised “bumps” and rings of activity, and also how to study spiral waves. An important technique used was the numerical continuation of solutions of large systems of coupled, nonlinear, algebraic equations defined by the discretisation of PDEs. Since the work summarised here was first published, a number of other authors have used some of these techniques to further investigate neural field models [9, 15, 21, 26, 31, 33].