1 Introduction

Hans Berger’s 1924 discovery of the human electroencephalogram (EEG) [11] led to a tremendous research enterprise in clinical, cognitive and computational neurosciences [58]. However, one of the yet unresolved problems in the biophysics of neural systems is understanding the proper coupling of complex neural network dynamics to the electromagnetic field, which is macroscopically measurable by means of neural mass potentials, such as the local field potential (LFP) or the EEG. One requirement for this understanding is a forward model that links the ‘hidden’ activities of billions of neurons in mammalian brains and their propagation through neural networks to experimentally accessible quantities such as LFP and EEG. Utilizing terminology from theoretical physics, we call the operationally accessible quantities observables and an integrative forward model an observation model. Yet, there is an ongoing debate in the literature whether field effects, i.e. the feedback from mass potentials to neural activity, play a functional role in the self-organization of cortical activity (e.g. [40]). Such field effects have recently been demonstrated via experiments on ephaptic interaction [31]. Thus a theoretical framework for observation forward and feedback models is mandatory in order to describe the coupling between neural network activity and the propagation of extracellular electromagnetic fields in clinical, computational and cognitive neurosciences, e.g. for the treatment of epilepsy [55] or for modeling cognition-related brain potentials [8, 46].

Currently, there is ample evidence that the generators of neural field potentials, such as cortical LFP and EEG, are the cortical pyramidal cells (sketched in Fig. 10.1). They exhibit a long dendritic trunk separating mainly excitatory synapses at the apical dendritic tree from mainly inhibitory synapses at the perisomatic basal dendritic tree [23, 60]. When both kinds of synapses are simultaneously active, inhibitory synapses generate current sources and excitatory synapses current sinks in extracellular space, causing the pyramidal cell to behave as a microscopic dipole surrounded by its characteristic electrical field. This dendritic dipole field is conveniently described by its associated electrodynamic potential, the dendritic field potential (DFP). Dendritic fields superimpose to the field of a cortical dipole layer, which is measurable as cortical LFP, due to the geometric arrangement of pyramidal cells in a cortical column. There, pyramidal cells exhibit an axial symmetry and are aligned in parallel to each other, perpendicular to the cortical surface, thus forming a palisade of cell bodies and dendritic trunks. Eventually, cortical LFP gives rise to the EEG measurable at the human scalp [27, 53, 58].

Fig. 10.1

Sketch of a cortical pyramidal neuron with extracellular current dipole between spatially separated excitatory (open bullet) and inhibitory synapses (filled bullet). Neural in- and outputs are indicated by the jagged arrows. The z-axis points toward the skull. Current density j is given by dendritic current I_1 through cross section area A as described in the text

Weaving the above phenomena into a mathematically and biophysically plausible observation model that correctly represents the multi-spatiotemporal characteristics of the LFP is a non-trivial task. The difficulty results from the complexity of brain processes that operate at several spatial and temporal scales. On the one hand, the organization of the brain, from the scale of single neurons to that of whole brain regions, changes its connectivity from almost random to highly structured, as discussed above in the case of the cortical columns. On the other hand, temporal dynamics spans time scales ranging from milliseconds for discrete events like spikes to hours and even longer for synaptic plasticity and learning. Hence, there is strong interaction between the different spatiotemporal scales [12, 45], which directly contributes to complex oscillatory dynamics, e.g., to mixed-mode oscillations [25, 29]. Thus it is not clear how and when to break down complex brain processes into simpler ‘building blocks’ amenable to analysis. Despite these peculiarities, various mathematical and computational approaches have been proposed in order to establish coarse-graining techniques and ways to move from one scale to another.

Most studies aiming at realistic simulation of the LFP, typically in the extracellular fluid in the vicinity of a neuron, have employed compartmental models [2, 48, 54, 57] where every compartment contributes a portion of extracellular current to the DFP that is given by Coulomb’s equation in conductive media [5, 7, 53]. However, because compartmental models are computationally expensive, large-scale neural network simulations preferentially employ point models, based either on conductance models [36, 50] or on population models [39, 56, 64, 65], where neural mass potentials are estimated either through sums (or rather differences) of postsynaptic potentials [24] or of postsynaptic currents [50]. In particular, the model of Mazzoni et al. [50] led to a series of recent follow-up studies [51, 52] that address the correlations between numerically simulated or experimentally measured LFP/EEG and spike rates by means of statistical modeling and information theoretic measures.

To adequately explain field potentials measured around the dendritic tree of an individual cortical pyramidal cell (DFP), in the extracellular space of a cortical module (LFP), or at the human scalp (EEG), Maxwell’s electromagnetic field equations, specifically the continuity equation describing conservation of charge, have to be taken into account. However, coupling the activity of discrete neural networks to the continuous electromagnetic field is difficult since neural network topology is not embedded into physical space as an underlying metric manifold. This can be circumvented by employing continuous neural networks as investigated in neural field theory (NFT) [1, 15, 18, 38, 41, 66]. In fact, previous studies [42, 47] gave the first reasonable accounts for such couplings in NFT population models that are motivated by the corresponding assumptions for neural mass models (cf. Chap. 17 in this volume). Jirsa et al. [42] relate the impressed current density in extracellular space to neural field activity. On the other hand, Liley et al. [47] consider the LFP as the average somatic membrane potential, taken to be proportional to the neural field. Their model found a number of successful applications [13, 14, 22] (see also Chap. 14). However, neither approach [42, 47] is concerned with the microscopic geometry around the field generators, the cortical pyramidal cells. Therefore, they do not take pyramidal dipole currents into account.

Another problem with the aforementioned neural field approaches is that the extracellular space has either been completely neglected, or only implicitly been taken into account by assuming that the cortical LFP is proportional to either membrane potentials or synaptic currents, as would result from a purely resistive medium. That is, dipole currents in the extracellular space have been abandoned entirely. However, recent studies indicate that at least the resistive property of the extracellular space is crucial [49]; more interestingly, it has been revealed that diffusion currents, represented by their corresponding Warburg impedances [59], cannot be neglected in extracellular space as they may substantially contribute to the characteristic power spectra of neural mass potentials [3, 4, 6, 27].

In this chapter, we outline a theoretical framework for the microscopic coupling of continuous neural networks, i.e. neural fields, to the electromagnetic field, properly described by dipole currents of cortical pyramidal neurons and diffusion effects in extracellular space. As a starting point we use a three-compartment model for a single pyramidal cell [7, 26, 63] and derive the evolution law for the activity of a neural network. These derivations additionally include observation equations for the extracellular dipole currents, which explicitly incorporate extracellular resistivity and diffusion. Subsequently, we demonstrate that our approach can be related to previous modeling strategies by considering reasonable simplifications. Herein, we intentionally simplify our approach to a leaky integrate-and-fire (LIF) model for the dynamics of a neural network, which reveals the links that previous modeling approaches failed to incorporate for a proper dipole LFP observation model. In particular, we compare our results with the related model by Mazzoni et al. [50] by means of numerical simulations. Moreover, performing the continuum limit (yet à la physique) for the network yields an Amari-type neural field equation [1] coupled to the Maxwell equations in extracellular fluid, while an observation model for electric field potentials is obtained from the interaction of cortical dipole currents with the charge density in non-resistive extracellular space as described by the Nernst-Planck equation. Thereby, our work provides for the first time a biophysically plausible observation model for the Amari-type neural field equations and, crucially, it gives estimates for the local field potentials that satisfy the widespread dipole assumption discussed in the neuroscientific literature.

2 Pyramidal Neuron Model

Fig. 10.2

Equivalent circuit for a three-compartment pyramidal neuron model

Inspired by earlier attempts to approximate the complex shape of cortical pyramidal neurons by essentially three passively coupled compartments [7, 26, 63], we reproduce in Fig. 10.2 the electronic equivalent circuit of beim Graben [7] for the ith pyramidal cell (Fig. 10.1) in a population of P pyramidal neurons. This parsimonious circuit allows us to derive our observation model. It comprises one compartment for the apical dendritic tree where p_i excitatory synapses are situated (for the sake of simplicity, we only show one synapse here), another one for the soma and perisomatic basal dendritic tree, populated with q_i mainly inhibitory synapses (again, only one synapse is shown here), and a third one for the axon hillock where the membrane potential is converted into spike trains by means of an integrate-and-fire mechanism. Note that nonlinear firing mechanisms of Hodgkin-Huxley type can be incorporated as well. In total we consider N populations of neurons, arranged in two-dimensional layers \(\varGamma _{n} \subset \mathbb{R}^{2}\) (\(n = 1,\ldots,N\)). Neurons in layers 1 to M are excitatory, neurons in layers M + 1 to N are inhibitory, and layer 1 exclusively contains the P cortical pyramidal cells in our simplified treatment. The total number of neurons is K.

Excitatory synapses are schematically represented by the left-most branch of Fig. 10.2 as ‘phototransistors’ [7] in order to indicate that they comprise quanta-gated resistors, namely ion channels whose resistance depends on the concentration of ligand molecules which are either extracellular neurotransmitters or intracellular metabolites [44]. There, the excitatory postsynaptic current (EPSC) at a synapse between a neuron j from layers 1 to M and neuron i is given as

$$\displaystyle{ I_{\mathit{ij}}^{\mathrm{E}}(t) = \frac{\gamma _{\mathit{ij}}^{\mathrm{E}}(t)} {R_{\mathit{ij}}^{\mathrm{E}}} (V _{i1}(t) - E_{\mathit{ij}}^{\mathrm{E}})\:. }$$
(10.1)

Here, the time-dependent function γ_ij^E(t) reflects the neurotransmitter-gated opening of postsynaptic ion channels. Usually, this function is given as a sum of characteristic excitatory impulse response functions η^E(t), each elicited by one presynaptic spike, i.e.

$$\displaystyle{ \gamma (t) =\sum _{\nu }\eta (t - t_{\nu }) }$$
(10.2)

where t_ν denote the ordered spike arrival times. Moreover, \(R_{\mathit{ij}}^{\mathrm{E}}\) comprises the maximum synaptic conductivity as well as the electrotonic distance between the synapse connecting neurons j and i and neuron i’s trigger zone, both expressed as a resistance. V_i1(t) is the membrane potential of neuron i’s compartment 1, i.e. the apical dendritic tree, and E_ij^E is the excitatory reversal potential of the synapse j → i. We can conveniently express γ(t) through the spike rate [10, 18]

$$\displaystyle{ a(t) =\sum _{\nu }\delta (t - t_{\nu }) }$$
(10.3)

by means of a temporal convolution (‘∗’ denotes convolution product)

$$\displaystyle{ \gamma (t) =\int _{ -\infty }^{t}\eta (t - t^{{\prime}})a(t^{{\prime}})\,\mathrm{d}t^{{\prime}} = (\eta {\ast}a)(t)\:. }$$
(10.4)
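As a concrete illustration of Eqs. (10.2)–(10.4), the following minimal Python sketch computes the channel-opening function γ(t) as the convolution of an assumed alpha-shaped impulse response η(t) with a spike train a(t) represented by delta pulses; the kernel shape, time constant, and spike times are illustrative assumptions rather than fitted parameters.

```python
import numpy as np

dt = 1e-4                                   # time step [s]
t = np.arange(0.0, 0.5, dt)                 # 500 ms of simulated time

tau_syn = 5e-3                              # assumed synaptic time constant [s]
eta = (t / tau_syn) * np.exp(1.0 - t / tau_syn)   # alpha-shaped impulse response eta(t)

# hypothetical presynaptic spike times t_nu, turned into a delta train a(t)
spike_times = np.array([0.05, 0.12, 0.13, 0.30])
a = np.zeros_like(t)
a[np.searchsorted(t, spike_times)] = 1.0 / dt     # unit-area bins approximate deltas

gamma = np.convolve(eta, a)[: len(t)] * dt        # gamma(t) = (eta * a)(t), Eq. (10.4)
print(f"peak channel opening: {gamma.max():.2f}")
```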

Furthermore, the apical dendritic compartment 1 is characterized by a particular leakage resistance R_1 and by a capacitance C_1, reflecting the temporary charge storage capacity of the membrane. Both R_1 and C_1 are correlated with the compartment’s membrane area [26]. The battery E^M denotes the Nernst resting potential [43, 62].

The middle branch of Fig. 10.2 describes the inhibitory synapses (also displayed as ‘phototransistors’ [7]) between a neuron k from layers M + 1 to N and neuron i. Here, inhibitory postsynaptic currents (IPSC)

$$\displaystyle{ I_{\mathit{ik}}^{\mathrm{I}}(t) = \frac{\gamma _{\mathit{ik}}^{\mathrm{I}}(t)} {R_{\mathit{ik}}^{\mathrm{I}}} (V _{i2}(t) - E_{\mathit{ik}}^{\mathrm{I}})\:, }$$
(10.5)

described by a similar channel opening function γ^I(t), shunt the excitatory branch with the trigger zone when compartment 2’s membrane potential V_i2(t) is large due to previous excitation. Equations (10.2) and (10.3) also hold for another postsynaptic impulse response function η^I(t), characteristic of inhibitory synapses. The resistance of the current paths along the cell plasma is given by R_ik^I. Finally, E_ik^I denotes the inhibitory reversal potential of the synapse k → i. The somatic and perisomatic dendritic compartment 2 likewise possesses its specific leakage resistance R_2 and capacitance C_2.

The cell membrane at the axon hillock [36] itself is represented by the branch at the right hand side, described by another RC-element consisting of R_3 and C_3. Action potentials, δ(t − t_ν), are generated by a leaky integrate-and-fire mechanism [50], as indicated by a ‘black box’, when the membrane potential U_i(t) crosses a certain threshold θ_i > 0 from below at time t_ν, i.e.

$$\displaystyle{ U_{i}(t_{\nu }) \geq \theta _{i}\:. }$$
(10.6)

Afterwards, the membrane potential is reset to some steady-state potential [50]

$$\displaystyle{ U_{i}(t_{\nu +1}) \leftarrow E\:. }$$
(10.7)

The integration of the differential equations is then restarted at time \(t = t_{\nu +1} +\tau _{rp}\), i.e. after interrupting the dynamics for a refractory period τ_rp.
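The fire-and-reset mechanism of Eqs. (10.6) and (10.7), including the refractory interruption, can be sketched in a few lines of Python; the membrane parameters and the constant drive below are illustrative assumptions, not the electrotonic values of the full circuit.

```python
import numpy as np

dt, T = 1e-4, 0.2                        # time step and duration [s]
tau, theta, E, tau_rp = 20e-3, -50e-3, -70e-3, 2e-3   # assumed membrane parameters
I_drive = 25e-3                          # constant depolarizing drive [V], assumed

U, t_last_spike = E, -np.inf
spike_times = []
for step in range(int(T / dt)):
    t = step * dt
    if t - t_last_spike < tau_rp:        # dynamics interrupted during refractory period
        continue
    U += dt / tau * (-(U - E) + I_drive) # leaky integration toward E + I_drive
    if U >= theta:                       # threshold crossing from below, Eq. (10.6)
        spike_times.append(t)
        U = E                            # reset to the steady-state potential, Eq. (10.7)
        t_last_spike = t

print(f"{len(spike_times)} spikes in {T*1e3:.0f} ms")
```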

The three compartments are coupled through longitudinal resistors R^A, R^B, R^C, R^D, where R^A, R^B denote the resistivity of the cell plasma and R^C, R^D that of extracellular space [37]. Yet, in extracellular space not only Ohmic but also diffusion currents are present [3, 4, 6, 32–34, 61]. These are taken into account by the current source J_i connected in parallel to R^D. However, for convenience, diffusion currents in the extracellular space between compartments 2 and 3 are disregarded following an adiabatic approximation.

Finally, Fig. 10.2 displays the membrane potentials V_i1, V_i2 and U_i at compartments 1, 2 and 3, with U_i serving as the dynamical state variable, as well as the DFP Φ_i, which drops along the extracellular resistor R^D. For the purpose of calculation, the mesh currents I_i1 (current in the apical compartment 1 of neuron i), I_i2 (current in the somatic and perisomatic compartment 2 of neuron i) and I_i3 (the leaky integrate-and-fire (LIF) current in compartment 3 of neuron i) are indicated.

The circuit in Fig. 10.2 obeys Kirchhoff’s laws; first for currents:

$$\displaystyle\begin{array}{rcl} I_{i1} = C_{1}\frac{\mathrm{d}V _{i1}} {\mathrm{d}t} + \frac{V _{i1} - E^{\mathrm{M}}} {R_{1}} +\sum _{ j=1}^{p_{i} }I_{\mathit{ij}}^{\mathrm{E}}& &{}\end{array}$$
(10.8)
$$\displaystyle\begin{array}{rcl} I_{i2} = C_{2}\frac{\mathrm{d}V _{i2}} {\mathrm{d}t} + \frac{V _{i2} - E^{\mathrm{M}}} {R_{2}} +\sum _{ k=1}^{q_{i} }I_{\mathit{ik}}^{\mathrm{I}}& &{}\end{array}$$
(10.9)
$$\displaystyle\begin{array}{rcl} I_{i3} = C_{3}\frac{\mathrm{d}U_{i}} {\mathrm{d}t} + \frac{U_{i} - E^{\mathrm{M}}} {R_{3}} & &{}\end{array}$$
(10.10)
$$\displaystyle\begin{array}{rcl} I_{i3} = I_{i1} - I_{i2}\:,& &{}\end{array}$$
(10.11)

and second, for voltages:

$$\displaystyle\begin{array}{rcl} V _{i1} = (R^{\mathrm{A}} + R^{\mathrm{D}})I_{ i1} + (R^{\mathrm{B}} + R^{\mathrm{C}})I_{ i3} + U_{i} - R^{\mathrm{D}}J_{ i}& &{}\end{array}$$
(10.12)
$$\displaystyle\begin{array}{rcl} V _{i2} = (R^{\mathrm{B}} + R^{\mathrm{C}})I_{ i3} + U_{i}& &{}\end{array}$$
(10.13)
$$\displaystyle\begin{array}{rcl} \varPhi _{i} = R^{\mathrm{D}}(I_{ i1} - J_{i})\:,& &{}\end{array}$$
(10.14)

where the postsynaptic currents \(I_{\mathit{ij}}^{\mathrm{E}}\) and \(I_{\mathit{ik}}^{\mathrm{I}}\) are given through (10.1) and (10.5). Here, p_i is the number of excitatory and q_i the number of inhibitory synapses connected to neuron i, respectively.

Subtracting (10.13) from (10.12) yields the current along the pyramidal cell’s dendritic trunk

$$\displaystyle{ I_{i1} = \frac{V _{i1} - V _{i2} + R^{\mathrm{D}}J_{i}} {R^{\mathrm{A}} + R^{\mathrm{D}}} \:. }$$
(10.15)

The circuit described by Eqs. (10.8)–(10.14) shows that neuron i is likely to fire when the excitatory synapses are activated. Then, the LIF current I_i3 equals the dendritic current I_i1. If, by contrast, the inhibitory synapses are also active, the dendritic current I_i1 follows the shortcut between the apical and perisomatic dendritic trees and only a portion can evoke spikes at the trigger zone (Eq. (10.10)). On the other hand, the large dendritic current I_i1, diminished by some diffusion current J_i, flowing through the extracellular space of resistance R^D, gives rise to a large DFP Φ_i.
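A small numerical example, under assumed resistance and voltage values, illustrates how Eqs. (10.14) and (10.15) translate a voltage difference between the apical and perisomatic compartments into a dendritic trunk current and a DFP:

```python
R_A, R_D = 50e6, 5e6          # longitudinal plasma and extracellular resistances [Ohm], assumed
V_1, V_2 = -55e-3, -68e-3     # compartment potentials [V]: apical tree depolarized, assumed
J = 0.0                       # diffusion current neglected in this example

I_1 = (V_1 - V_2 + R_D * J) / (R_A + R_D)   # dendritic trunk current, Eq. (10.15)
Phi = R_D * (I_1 - J)                        # dendritic field potential, Eq. (10.14)
print(f"I_1 = {I_1 * 1e12:.0f} pA, Phi = {Phi * 1e6:.0f} uV")
```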

In order to simplify the following derivations, we first gauge the resting potential to E^M = 0. Then, excitatory synapses are characterized by \(E_{\mathit{ij}}^{\mathrm{E}}> 0\), while inhibitory synapses obey E_ik^I < 0. Combining Eqs. (10.8)–(10.13) entails

$$\displaystyle\begin{array}{rcl} C_{1}\frac{\mathrm{d}V _{i1}} {\mathrm{d}t} + \frac{V _{i1}} {R_{1}} +\sum _{ j=1}^{p_{i} }I_{\mathit{ij}}^{\mathrm{E}} = \frac{V _{i1} - V _{i2} + R^{\mathrm{D}}J_{ i}} {R^{\mathrm{A}} + R^{\mathrm{D}}} & &{}\end{array}$$
(10.16)
$$\displaystyle\begin{array}{rcl} C_{2}\frac{\mathrm{d}V _{i2}} {\mathrm{d}t} + \frac{V _{i2}} {R_{2}} +\sum _{ k=1}^{q_{i} }I_{\mathit{ik}}^{\mathrm{I}} = \frac{V _{i1} - V _{i2} + R^{\mathrm{D}}J_{ i}} {R^{\mathrm{A}} + R^{\mathrm{D}}} - \frac{V _{i2} - U_{i}} {R^{\mathrm{B}} + R^{\mathrm{C}}}& &{}\end{array}$$
(10.17)
$$\displaystyle\begin{array}{rcl} C_{3}\frac{\mathrm{d}U_{i}} {\mathrm{d}t} + \frac{U_{i}} {R_{3}} = \frac{V _{i2} - U_{i}} {R^{\mathrm{B}} + R^{\mathrm{C}}}\:.& &{}\end{array}$$
(10.18)

2.1 General Solution of the Circuit Equations

Next we follow Bressloff’s [16] argumentation and regard the compartmental voltages as auxiliary variables that are merged into a two-dimensional vector \(\mathbf{V }_{i} = (V _{i1},V _{i2})^{T}\) which is subject to elimination. We only keep Eq. (10.18) as the evolution law of the entire state vector \(\mathbf{U} = (U_{i})_{i=1,\ldots,K}\) of the neural network. Inserting the postsynaptic currents from (10.1) and (10.5) into Eqs. (10.1610.17) and temporarily assuming an arbitrary time-dependence for the functions γ(t) from Eq. (10.2) (in fact, the γ(t) are given through the presynaptic spike rates and are thus nonlinear functions of the entire state \(\mathbf{U}\)), we obtain a system of two inhomogeneous linear differential equations that can be compactly written in matrix form as

$$\displaystyle{ \frac{\mathrm{d}} {\mathrm{d}t}\mathbf{V }_{i}(t) =\mathbf{ H}_{i}(t) \cdot \mathbf{ V }_{i}(t) +\mathbf{ G}_{i}(t)\:, }$$
(10.19)

with

$$\displaystyle\begin{array}{rcl} \mathbf{H}_{i}(t) = \left (\begin{array}{*{10}c} \frac{1} {C_{1}} \left (- \frac{1} {R_{1}} + \frac{1} {R^{\mathrm{A}}+R^{\mathrm{D}}} -\sum _{j}\frac{\gamma _{\mathit{ij}}^{\mathrm{E}}(t)} {R_{\mathit{ij}}^{\mathrm{E}}} \right )& - \frac{1} {C_{1}(R^{\mathrm{A}}+R^{\mathrm{D}})} \\ \frac{1} {C_{2}(R^{\mathrm{A}}+R^{\mathrm{D}})} & \frac{1} {C_{2}} \left (- \frac{1} {R_{2}} - \frac{1} {R^{\mathrm{A}}+R^{\mathrm{D}}} - \frac{1} {R^{\mathrm{B}}+R^{\mathrm{C}}} -\sum _{k}\frac{\gamma _{\mathit{ik}}^{\mathrm{I}}(t)} {R_{\mathit{ik}}^{\mathrm{I}}} \right ) \end{array} \right )& &{}\end{array}$$
(10.20)

and

$$\displaystyle{ \mathbf{G}_{i}(t) = \left (\begin{array}{*{10}c} \sum _{j}\frac{\gamma _{\mathit{ij}}^{\mathrm{E}}(t)E_{\mathit{ ij}}^{\mathrm{E}}} {C_{1}R_{\mathit{ij}}^{\mathrm{E}}} + \frac{R^{\mathrm{D}}} {C_{1}(R^{\mathrm{A}}+R^{\mathrm{D}})}J_{i}(t) \\ \sum _{k}\frac{\gamma _{\mathit{ik}}^{\mathrm{I}}(t)E_{\mathit{ ik}}^{\mathrm{I}}} {C_{2}R_{\mathit{ik}}^{\mathrm{I}}} + \frac{R^{\mathrm{D}}} {C_{2}(R^{\mathrm{A}}+R^{\mathrm{D}})}J_{i}(t) + \frac{1} {C_{2}(R^{\mathrm{B}}+R^{\mathrm{C}})}U_{i}(t) \end{array} \right )\:. }$$
(10.21)

For the sake of convenience, the initial condition is \(\mathbf{V } = 0\) in the infinite past \(t = -\infty\).

Obviously, the time-dependence of the transition matrix \(\mathbf{H}(t)\) is due to the shunting terms γ(t). In order to solve (10.19) one first considers the associated homogeneous differential equation

$$\displaystyle{ \frac{\mathrm{d}} {\mathrm{d}t}\mathbf{W}_{i}(t) =\mathbf{ H}_{i}(t) \cdot \mathbf{ W}_{i}(t) }$$
(10.22)

whose general solutions are given as the columns of

$$\displaystyle{ \mathbf{W}_{i}(t) = \mathbb{T}\mathrm{e}^{t\mathbf{H}_{i}(t)}\:, }$$
(10.23)

where \(\mathbb{T}\) denotes the time-ordering operator [20, 21]. Using the fundamental matrix (10.23), a particular solution of the inhomogeneous equation (10.19) is then obtained by the variation of parameters method as

$$\displaystyle{ \mathbf{V }_{i}(t) =\int _{ -\infty }^{t}\mathbf{X}_{ i}(t,t^{{\prime}}) \cdot \mathbf{ G}_{ i}(t^{{\prime}})\,\mathrm{d}t^{{\prime}} }$$
(10.24)

with matrix-valued Green’s function

$$\displaystyle{ \mathbf{X}_{i}(t,t^{{\prime}}) =\mathbf{ W}_{ i}(t) \cdot \mathbf{ W}_{i}(t^{{\prime}})^{-1}\:. }$$
(10.25)

Therefore, the compartmental voltages are obtained as

$$\displaystyle{ V _{i\alpha }(t) =\sum _{ \beta =1}^{2}\int _{ -\infty }^{t}\chi _{ i\alpha \beta }(t,t^{{\prime}})g_{ i\beta }(t^{{\prime}})\,\mathrm{d}t^{{\prime}} =\sum _{ \beta =1}^{2}\chi _{ i\alpha \beta } {\ast} g_{i\beta } }$$
(10.26)

with components \(\mathbf{X}_{i}(t,t^{{\prime}}) = (\chi _{i\alpha \beta }(t,t^{{\prime}}))_{\alpha \beta }\) and \(\mathbf{G}_{i}(t^{{\prime}}) = (g_{i\beta }(t^{{\prime}}))_{\beta }\), α, β = 1, 2.
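Numerically, the Green's-function representation (10.24)–(10.26) is equivalent to integrating the inhomogeneous system (10.19) forward in time. The following Python sketch does so with an explicit Euler scheme for placeholder choices of H_i(t) and G_i(t), which merely stand in for the shunting-modulated transition matrix and the synaptic drive of the circuit.

```python
import numpy as np

def H(t):
    """Placeholder transition matrix with a time-dependent shunting modulation."""
    shunt = 50.0 * (1.0 + np.sin(2.0 * np.pi * 5.0 * t))
    return np.array([[-100.0 - shunt, -20.0],
                     [20.0, -150.0]])

def G(t):
    """Placeholder inhomogeneity: a transient synaptic drive plus somatic feedback."""
    return np.array([5.0 * np.exp(-((t - 0.1) / 0.01) ** 2), 1.0])

dt = 1e-4
V = np.zeros(2)                          # V = 0 in the (here finite) past
trace = []
for step in range(int(0.5 / dt)):
    t = step * dt
    V = V + dt * (H(t) @ V + G(t))       # explicit Euler step of Eq. (10.19)
    trace.append(V.copy())

print("maximal compartmental voltages:", np.max(np.abs(trace), axis=0))
```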

2.2 Observation Model

In order to derive the general observation equations for the DFP of the three-compartment model, we insert the formal solutions (10.26) and the inhomogeneity (10.21) into Eq. (10.15) and obtain

$$\displaystyle\begin{array}{rcl} & & I_{i1}(t) = \frac{1} {R^{\mathrm{A}} + R^{\mathrm{D}}}\int _{-\infty }^{t}(\chi _{ i11}(t,t^{{\prime}}) -\chi _{ i21}(t,t^{{\prime}}))\Bigg[\sum _{ j} \frac{E_{\mathit{ij}}^{\mathrm{E}}} {C_{1}R_{\mathit{ij}}^{\mathrm{E}}}\gamma _{\mathit{ij}}^{\mathrm{E}}(t) + \frac{R^{\mathrm{D}}} {C_{1}(R^{\mathrm{A}} + R^{\mathrm{D}})}J_{i}(t)\Bigg] + \\ & & (\chi _{i12}(t,t^{{\prime}}) -\chi _{ i22}(t,t^{{\prime}}))\Bigg[\sum _{ k} \frac{E_{\mathit{ik}}^{\mathrm{I}}} {C_{2}R_{\mathit{ik}}^{\mathrm{I}}}\gamma _{\mathit{ik}}^{\mathrm{I}}(t) + \frac{R^{\mathrm{D}}} {C_{2}(R^{\mathrm{A}} + R^{\mathrm{D}})}J_{i}(t) + \frac{1} {C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})}U_{i}(t)\Bigg]\,\mathrm{d}t^{{\prime}} + \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \frac{R^{\mathrm{D}}} {R^{\mathrm{A}} + R^{\mathrm{D}}}J_{i}(t)\:, {}\end{array}$$
(10.27)

which can be reshaped by virtue of the convolutions (10.4) to

$$\displaystyle\begin{array}{rcl} & & \!\!\!\!\!\!I_{i1} = \frac{1} {R^{\mathrm{A}} + R^{\mathrm{D}}}\Bigg[\!\!\sum _{j} \frac{E_{\mathit{ij}}^{\mathrm{E}}} {C_{1}R_{\mathit{ij}}^{\mathrm{E}}}(\chi _{i11} -\chi _{i21}) {\ast}\eta ^{\mathrm{E}} {\ast} a_{ j} + \frac{R^{\mathrm{D}}} {C_{1}(R^{\mathrm{A}} + R^{\mathrm{D}})}(\chi _{i11} -\chi _{i21}) {\ast} J_{i}\!\Bigg] + \\ & & \frac{1} {R^{\mathrm{A}} + R^{\mathrm{D}}}\Bigg[\sum _{k} \frac{E_{\mathit{ik}}^{\mathrm{I}}} {C_{2}R_{\mathit{ik}}^{\mathrm{I}}}(\chi _{i12} -\chi _{i22}) {\ast}\eta ^{\mathrm{I}} {\ast} a_{ k} + \frac{R^{\mathrm{D}}} {C_{2}(R^{\mathrm{A}} + R^{\mathrm{D}})}(\chi _{i12} -\chi _{i22}) {\ast} J_{i} + \\ & & \qquad \qquad \qquad \qquad \qquad \frac{1} {C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})}(\chi _{i12} -\chi _{i22}) {\ast} U_{i}\Bigg] + \frac{R^{\mathrm{D}}} {R^{\mathrm{A}} + R^{\mathrm{D}}}J_{i}\:. {}\end{array}$$
(10.28)

Introducing new impulse response functions that simultaneously account for synaptic transmission (η) and dendritic propagation (χ) by

$$\displaystyle\begin{array}{rcl} \psi _{i\alpha 1} =\chi _{i\alpha 1} {\ast}\eta ^{\mathrm{E}}& &{}\end{array}$$
(10.29)
$$\displaystyle\begin{array}{rcl} \psi _{i\alpha 2} =\chi _{i\alpha 2} {\ast}\eta ^{\mathrm{I}}& &{}\end{array}$$
(10.30)

yields

$$\displaystyle\begin{array}{rcl} & & I_{i1}= \frac{1} {R^{\mathrm{A}}+R^{\mathrm{D}}}\Bigg[\sum _{j} \frac{E_{\mathit{ij}}^{\mathrm{E}}} {C_{1}R_{\mathit{ij}}^{\mathrm{E}}}(\psi _{i11} -\psi _{i21}) {\ast} a_{j} + \frac{R^{\mathrm{D}}} {C_{1}(R^{\mathrm{A}} + R^{\mathrm{D}})}(\chi _{i11} -\chi _{i21}) {\ast} J_{i}\Bigg] + \\ & & \frac{1} {R^{\mathrm{A}} + R^{\mathrm{D}}}\Bigg[\sum _{k} \frac{E_{\mathit{ik}}^{\mathrm{I}}} {C_{2}R_{\mathit{ik}}^{\mathrm{I}}}(\psi _{i12} -\psi _{i22}) {\ast} a_{k} + \frac{R^{\mathrm{D}}} {C_{2}(R^{\mathrm{A}} + R^{\mathrm{D}})}(\chi _{i12} -\chi _{i22}) {\ast} J_{i} + \\ & & \qquad \qquad \qquad \qquad \qquad \frac{1} {C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})}(\chi _{i12} -\chi _{i22}) {\ast} U_{i}\Bigg] + \frac{R^{\mathrm{D}}} {R^{\mathrm{A}} + R^{\mathrm{D}}}J_{i}\:. {}\end{array}$$
(10.31)

Eventually we obtain the DFP of neuron i as the potential dropping along the resistor R D caused by the current through it (Eq. (10.14)), i.e.

$$\displaystyle\begin{array}{rcl} & & \varPhi _{i} = \frac{R^{\mathrm{D}}} {R^{\mathrm{A}} + R^{\mathrm{D}}}\Bigg\{\sum _{j} \frac{E_{\mathit{ij}}^{\mathrm{E}}} {C_{1}R_{\mathit{ij}}^{\mathrm{E}}}(\psi _{i11} -\psi _{i21}) {\ast} a_{j} +\sum _{k} \frac{E_{\mathit{ik}}^{\mathrm{I}}} {C_{2}R_{\mathit{ik}}^{\mathrm{I}}}(\psi _{i12} -\psi _{i22}) {\ast} a_{k} + \\ & & \Bigg[ \frac{R^{\mathrm{D}}} {C_{1}(R^{\mathrm{A}} + R^{\mathrm{D}})}(\chi _{i11} -\chi _{i21}) + \frac{R^{\mathrm{D}}} {C_{2}(R^{\mathrm{A}} + R^{\mathrm{D}})}(\chi _{i12} -\chi _{i22}) - R^{\mathrm{A}}\delta \Bigg] {\ast} J_{ i} + \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \frac{1} {C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})}(\chi _{i12} -\chi _{i22}) {\ast} U_{i}\Bigg\} {}\end{array}$$
(10.32)

2.3 Neurodynamics

Equation (10.32) reveals that the DFP is driven by the neuron’s state variable U_i, by the entirety of postsynaptic potentials caused by the spike trains a_j and a_k, and by the diffusion current J_i. The state variables and the spike trains are given by the network’s evolution equation, which is straightforwardly derived along the lines of Bressloff [16] again. To this end, we insert V_i2(t) as the solution of (10.26) into the remaining Eq. (10.18) to get

$$\displaystyle\begin{array}{rcl} C_{3}(R^{\mathrm{B}} + R^{\mathrm{C}})\frac{\mathrm{d}U_{i}} {\mathrm{d}t} + \left (1 + \frac{R^{\mathrm{B}} + R^{\mathrm{C}}} {R_{3}} \right )U_{i} =\chi _{i21} {\ast} g_{i1} +\chi _{i22} {\ast} g_{i2}\:.& &{}\end{array}$$
(10.33)

Next, we insert the inhomogeneity (10.21) again and obtain

$$\displaystyle\begin{array}{rcl} & & C_{3}(R^{\mathrm{B}} + R^{\mathrm{C}})\frac{\mathrm{d}U_{i}} {\mathrm{d}t} + \left (1 + \frac{R^{\mathrm{B}} + R^{\mathrm{C}}} {R_{3}} \right )U_{i} =\sum _{j} \frac{E_{\mathit{ij}}^{\mathrm{E}}} {C_{1}R_{\mathit{ij}}^{\mathrm{E}}}\chi _{i21} {\ast}\gamma _{\mathit{ij}}^{\mathrm{E}} + \\ & & \frac{R^{\mathrm{D}}} {C_{1}(R^{\mathrm{A}} + R^{\mathrm{D}})}\chi _{i21} {\ast} J_{i} +\sum _{k} \frac{E_{\mathit{ik}}^{\mathrm{I}}} {C_{2}R_{\mathit{ik}}^{\mathrm{I}}}\chi _{i22} {\ast}\gamma _{\mathit{ik}}^{\mathrm{I}} + \frac{R^{\mathrm{D}}} {C_{2}(R^{\mathrm{A}} + R^{\mathrm{D}})}\chi _{i22} {\ast} J_{i} + \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \frac{1} {C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})}\chi _{i22} {\ast} U_{i}\:. {}\end{array}$$
(10.34)

Utilizing the convolutions (10.4) once more yields

$$\displaystyle\begin{array}{rcl} & & C_{3}(R^{\mathrm{B}} + R^{\mathrm{C}})\frac{\mathrm{d}U_{i}} {\mathrm{d}t} + \left (1 + \frac{R^{\mathrm{B}} + R^{\mathrm{C}}} {R_{3}} \right )U_{i} =\sum _{j} \frac{E_{\mathit{ij}}^{\mathrm{E}}} {C_{1}R_{\mathit{ij}}^{\mathrm{E}}}\chi _{i21} {\ast}\eta ^{\mathrm{E}} {\ast} a_{ j} + \\ & & \sum _{k} \frac{E_{\mathit{ik}}^{\mathrm{I}}} {C_{2}R_{\mathit{ik}}^{\mathrm{I}}}\chi _{i22} {\ast}\eta ^{\mathrm{I}} {\ast} a_{ k} + \frac{R^{\mathrm{D}}} {C_{1}(R^{\mathrm{A}} + R^{\mathrm{D}})}\chi _{i21} {\ast} J_{i} + \frac{R^{\mathrm{D}}} {C_{2}(R^{\mathrm{A}} + R^{\mathrm{D}})}\chi _{i22} {\ast} J_{i} + \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \frac{1} {C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})}\chi _{i22} {\ast} U_{i}\:. {}\end{array}$$
(10.35)

which becomes

$$\displaystyle\begin{array}{rcl} & & C_{3}(R^{\mathrm{B}} + R^{\mathrm{C}})\frac{\mathrm{d}U_{i}} {\mathrm{d}t} + \left (1 + \frac{R^{\mathrm{B}} + R^{\mathrm{C}}} {R_{3}} \right )U_{i} - \frac{1} {C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})}\chi _{i22} {\ast} U_{i} = \\ & & \sum _{j} \frac{E_{\mathit{ij}}^{\mathrm{E}}} {C_{1}R_{\mathit{ij}}^{\mathrm{E}}}\psi _{i21} {\ast} a_{j} +\sum _{k} \frac{E_{\mathit{ik}}^{\mathrm{I}}} {C_{2}R_{\mathit{ik}}^{\mathrm{I}}}\psi _{i22} {\ast} a_{k} + \frac{R^{\mathrm{D}}} {R^{\mathrm{A}} + R^{\mathrm{D}}}\left [ \frac{1} {C_{1}}\chi _{i21} + \frac{1} {C_{2}}\chi _{i22}\right ] {\ast} J_{i}{}\end{array}$$
(10.36)

after inserting the Green’s functions (10.29) and (10.30) again. Equation (10.36), together with (10.3), (10.6) and (10.7), determines the dynamics of a network of three-compartment pyramidal neurons.

3 Leaky Integrate-and-Fire Model

The most serious difficulty in dealing with the neurodynamical evolution equations (10.36), (10.3), (10.6), (10.7) and the DFP observation equation (10.32) is the inhomogeneity of the matrix Green’s function \(\mathbf{X}_{i}(t,t^{{\prime}})\), involved through the time-ordering operator, and the time-dependence of \(\mathbf{H}_{i}(t)\).

3.1 Simplification

In a first approximation \(\mathbf{H}_{i}\) becomes time-independent by neglecting the shunting terms [20, 21]. Then, the matrix Green’s function \(\mathbf{X}_{i}(t,t^{{\prime}})\) becomes

$$\displaystyle{ \mathbf{X}(t,t^{{\prime}}) =\mathbf{ X}(t - t^{{\prime}}) =\mathrm{ e}^{(t-t^{{\prime}})\mathbf{H} } =\mathbf{ Q}^{(t-t^{{\prime}}) } }$$
(10.37)

with

$$\displaystyle{ \mathbf{Q} =\mathrm{ e}^{\mathbf{H}} }$$
(10.38)

and

$$\displaystyle{ \mathbf{H} = \left (\begin{array}{*{10}c} \frac{1} {C_{1}} \left (- \frac{1} {R_{1}} + \frac{1} {R^{\mathrm{A}}+R^{\mathrm{D}}} \right )& - \frac{1} {C_{1}(R^{\mathrm{A}}+R^{\mathrm{D}})} \\ \frac{1} {C_{2}(R^{\mathrm{A}}+R^{\mathrm{D}})} & \frac{1} {C_{2}} \left (- \frac{1} {R_{2}} - \frac{1} {R^{\mathrm{A}}+R^{\mathrm{D}}} - \frac{1} {R^{\mathrm{B}}+R^{\mathrm{C}}} \right ) \end{array} \right )\:, }$$
(10.39)

i.e. the transition matrix \(\mathbf{H}\), and consequently also the Green’s function, do not depend on the actual neuron index i any more. In this case, analytical methods can be employed [16].
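Under this simplification the Green's function can be evaluated directly as a matrix exponential; the sketch below assembles H according to Eq. (10.39) for assumed electrotonic values (chosen only to give a stable, well-conditioned example) and computes X(t − t′) with scipy.linalg.expm.

```python
import numpy as np
from scipy.linalg import expm

# assumed electrotonic values, not fitted to data
C1, C2 = 0.3e-9, 0.6e-9        # compartment capacitances [F]
R1, R2 = 30e6, 50e6            # leak resistances [Ohm]
R_AD, R_BC = 55e6, 30e6        # R^A + R^D and R^B + R^C [Ohm]

# constant transition matrix H of Eq. (10.39), shunting terms neglected
H = np.array([
    [(-1.0 / R1 + 1.0 / R_AD) / C1, -1.0 / (C1 * R_AD)],
    [1.0 / (C2 * R_AD), (-1.0 / R2 - 1.0 / R_AD - 1.0 / R_BC) / C2],
])

def X(lag):
    """Matrix-valued Green's function X(t - t') = exp((t - t') H), Eq. (10.37)."""
    return expm(lag * H)

print(X(5e-3))   # dendritic transfer after a lag of 5 ms
```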

However, for the present purpose, we employ another crucial simplification by choosing the electrotonic parameters in such a way that χ 22(t) ≈ δ(t). By virtue of this choice the dendritic filtering of compartment 2 is completely neglected. Then, the neural evolution equation (10.36) turns into

$$\displaystyle\begin{array}{rcl} & & C_{3}(R^{\mathrm{B}} + R^{\mathrm{C}})\frac{\mathrm{d}U_{i}} {\mathrm{d}t} + \left (1 + \frac{R^{\mathrm{B}} + R^{\mathrm{C}}} {R_{3}} - \frac{\hat{\tau }} {C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})}\right )U_{i} = \\ & & \quad \sum _{j} \frac{E_{\mathit{ij}}^{\mathrm{E}}} {C_{1}R_{\mathit{ij}}^{\mathrm{E}}}\psi _{21} {\ast} a_{j} +\sum _{k} \frac{E_{\mathit{ik}}^{\mathrm{I}}} {C_{2}R_{\mathit{ik}}^{\mathrm{I}}}\psi _{22} {\ast} a_{k} + \frac{R^{\mathrm{D}}} {R^{\mathrm{A}} + R^{\mathrm{D}}}\left [ \frac{1} {C_{1}}\chi _{21} + \frac{1} {C_{2}}\delta \right ] {\ast} J_{i} \\ & & \quad C_{3}(R^{\mathrm{B}} + R^{\mathrm{C}})\frac{\mathrm{d}U_{i}} {\mathrm{d}t} + \frac{C_{2}R_{3}(R^{\mathrm{B}} + R^{\mathrm{C}}) + C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})^{2} -\hat{\tau } R_{3}} {C_{2}R_{3}(R^{\mathrm{B}} + R^{\mathrm{C}})} U_{i} = \\ & & \sum _{j} \frac{E_{\mathit{ij}}^{\mathrm{E}}} {C_{1}R_{\mathit{ij}}^{\mathrm{E}}}\psi _{21} {\ast} a_{j} +\sum _{k} \frac{E_{\mathit{ik}}^{\mathrm{I}}} {C_{2}R_{\mathit{ik}}^{\mathrm{I}}}\psi _{22} {\ast} a_{k} + \frac{R^{\mathrm{D}}} {R^{\mathrm{A}} + R^{\mathrm{D}}}\left [ \frac{1} {C_{1}}\chi _{21} + \frac{1} {C_{2}}\delta \right ] {\ast} J_{i}{}\end{array}$$
(10.40)

where all kernels have lost their neuron index i. Additionally, a time constant \(\hat{\tau }\) results from the temporal convolution. Multiplying next by

$$\displaystyle{ r = \frac{C_{2}R_{3}(R^{\mathrm{B}} + R^{\mathrm{C}})} {C_{2}R_{3}(R^{\mathrm{B}} + R^{\mathrm{C}}) + C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})^{2} -\hat{\tau } R_{3}} }$$
(10.41)

yields a leaky integrate-and-fire (LIF ) model

$$\displaystyle{ \tau \frac{\mathrm{d}U_{i}} {\mathrm{d}t} + U_{i} =\sum _{j}w_{\mathit{ij}}^{\mathrm{E}}\,\psi _{ 21} {\ast} a_{j} +\sum _{k}w_{\mathit{ik}}^{\mathrm{I}}\,\psi _{ 22} {\ast} a_{k} +\kappa \left [ \frac{1} {C_{1}}\chi _{21} + \frac{1} {C_{2}}\delta \right ] {\ast} J_{i} }$$
(10.42)

where we have introduced the following parameters:

  • Time constant

    $$\displaystyle{ \tau = rC_{3}(R^{\mathrm{B}} + R^{\mathrm{C}}) }$$
    (10.43)
  • Excitatory synaptic weights

    $$\displaystyle{ w_{\mathit{ij}}^{\mathrm{E}} = r \frac{E_{\mathit{ij}}^{\mathrm{E}}} {C_{1}R_{\mathit{ij}}^{\mathrm{E}}}> 0 }$$
    (10.44)
  • Inhibitory synaptic weights

    $$\displaystyle{ w_{\mathit{ik}}^{\mathrm{I}} = r \frac{E_{\mathit{ik}}^{\mathrm{I}}} {C_{2}R_{\mathit{ik}}^{\mathrm{I}}} <0 }$$
    (10.45)
  • Diffusion coefficient

    $$\displaystyle{ \kappa = r \frac{R^{\mathrm{D}}} {R^{\mathrm{A}} + R^{\mathrm{D}}}\:. }$$
    (10.46)

Moreover, we make the same approximation for the DFP (10.32) and obtain

$$\displaystyle\begin{array}{rcl} & & \varPhi _{i} = \frac{R^{\mathrm{D}}} {R^{\mathrm{A}} + R^{\mathrm{D}}}\Bigg\{\sum _{j} \frac{E_{\mathit{ij}}^{\mathrm{E}}} {C_{1}R_{\mathit{ij}}^{\mathrm{E}}}(\psi _{11} -\psi _{21}) {\ast} a_{j} +\sum _{k} \frac{E_{\mathit{ik}}^{\mathrm{I}}} {C_{2}R_{\mathit{ik}}^{\mathrm{I}}}(\psi _{12}-\delta ) {\ast} a_{k} + \\ & & \Bigg[ \frac{R^{\mathrm{D}}} {C_{1}(R^{\mathrm{A}} + R^{\mathrm{D}})}(\chi _{11} -\chi _{21}) + \frac{R^{\mathrm{D}}} {C_{2}(R^{\mathrm{A}} + R^{\mathrm{D}})}(\chi _{12}-\delta ) - R^{\mathrm{A}}\delta \Bigg] {\ast} J_{ i} + \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \frac{1} {C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})}(\chi _{12}-\delta ) {\ast} U_{i}\Bigg\} {}\end{array}$$
(10.47)

as the observation equation of the LIF model.

Equations (10.42, 10.47) still exhibit some redundancy, seeing that the kernel ψ 21 always relates to excitatory synapses while the kernel ψ 22 refers to inhibitory synapses. We could thus absorb these kernel indices into the presynaptic neuron indices by introducing new kernels

$$\displaystyle\begin{array}{rcl} \psi _{j}& =& \left \{\begin{array}{@{}l@{\quad }l@{}} \psi _{21}\quad &\quad: \quad j\text{ excitatory} \\ \psi _{22}\quad &\quad: \quad j\text{ inhibitory} \end{array} \right.{}\end{array}$$
(10.48)
$$\displaystyle\begin{array}{rcl} \zeta _{j}& =& \left \{\begin{array}{@{}l@{\quad }l@{}} \psi _{11} -\psi _{21}\quad &\quad: \quad j\text{ excitatory} \\ \psi _{12}-\delta \quad &\quad: \quad j\text{ inhibitory}\:. \end{array} \right.{}\end{array}$$
(10.49)

These kernels entail

$$\displaystyle\begin{array}{rcl} \tau \frac{\mathrm{d}U_{i}} {\mathrm{d}t} + U_{i}& =& \sum _{j}w_{\mathit{ij}}\,\psi _{j} {\ast} a_{j} +\kappa \left [ \frac{1} {C_{1}}\chi _{21} + \frac{1} {C_{2}}\delta \right ] {\ast} J_{i}{}\end{array}$$
(10.50)
$$\displaystyle\begin{array}{rcl} \varPhi _{i}& =& \frac{R^{\mathrm{D}}} {r(R^{\mathrm{A}} + R^{\mathrm{D}})}\Bigg\{\sum _{j}w_{\mathit{ij}}\,\zeta _{j} {\ast} a_{j} + \\ & & \Bigg[ \frac{R^{\mathrm{D}}} {C_{1}(R^{\mathrm{A}}+R^{\mathrm{D}})}(\chi _{11} -\chi _{21})+ \frac{R^{\mathrm{D}}} {C_{2}(R^{\mathrm{A}}+R^{\mathrm{D}})}(\chi _{12}-\delta ) - R^{\mathrm{A}}\delta \Bigg] {\ast} J_{ i} + \\ & & \frac{1} {C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})}(\chi _{12}-\delta ) {\ast} U_{i}\Bigg\} {}\end{array}$$
(10.51)

after also dropping the redundant excitatory/inhibitory superscripts. Thus, the indices i, j now extend over the entire network of K units.

Because the relevance of diffusion currents is controversially discussed in the literature [3, 6, 32–34], we provisionally neglect them for further simplification, i.e. J_i = 0, which leads to

$$\displaystyle\begin{array}{rcl} \tau \frac{\mathrm{d}U_{i}} {\mathrm{d}t} + U_{i} =\sum _{j}w_{\mathit{ij}}\,\psi _{j} {\ast} a_{j}& &{}\end{array}$$
(10.52)
$$\displaystyle\begin{array}{rcl} \varPhi _{i} = \frac{R^{\mathrm{D}}} {r(R^{\mathrm{A}} + R^{\mathrm{D}})}\Bigg\{\sum _{j}w_{\mathit{ij}}\,\zeta _{j} {\ast} a_{j} + \frac{1} {C_{2}(R^{\mathrm{B}} + R^{\mathrm{C}})}(\chi _{12}-\delta ) {\ast} U_{i}\Bigg\}\:.& &{}\end{array}$$
(10.53)
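The simplified system (10.52)–(10.53) can be prototyped for a toy network as follows; here the kernels ψ_j and ζ_j are crudely approximated by a single exponential trace, χ_12 = δ is assumed as in [9], and all weights, time constants, and the prefactor standing in for R^D/(r(R^A + R^D)) are illustrative assumptions rather than the electrotonic values of the full model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 10, 1.0, 1e-3                 # toy network size, duration [s], time step [s]
tau, theta, U_reset = 20e-3, 1.0, 0.0    # membrane time constant, threshold, reset (assumed)
tau_psi = 5e-3                           # exponential kernel time constant (assumed)
w = rng.normal(0.0, 0.5, (N, N))         # mixed excitatory/inhibitory weights (assumed)
prefactor = 0.1                          # stands in for R^D / (r (R^A + R^D))

U = np.zeros(N)
s = np.zeros(N)                          # exponential trace approximating psi_j * a_j
lfp = []
for _ in range(int(T / dt)):
    ext = rng.poisson(0.4, N)            # external Poisson drive, spikes per bin
    s += -dt / tau_psi * s + ext         # leaky trace: filtered spike trains
    U += dt / tau * (-U + w @ s)         # membrane update, Eq. (10.52) with J_i = 0
    spikes = U >= theta
    U[spikes] = U_reset                  # fire and reset
    s = s + spikes                       # recurrent spikes enter the same trace
    phi = prefactor * (w @ s)            # per-neuron DFP readout, Eq. (10.53) with chi_12 = delta
    lfp.append(phi.sum())

print(f"mean summed DFP proxy: {np.mean(lfp):.3f}")
```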

3.2 Simulation

We have extensively discussed the system (10.52, 10.53) under the further assumption χ_12 = δ in a recent paper [9]; herein we present further numerical simulations under different external stimulation input. In particular, we simulate a cortical tissue as a LIF network (10.52) comprising 1,000 interneurons and 4,000 pyramidal neurons interconnected randomly via an Erdős-Rényi graph with connection probability 0.2. We refer the reader to [9] for how the somatic, dendritic and extracellular electrotonic parameters are estimated and how these are related to the phenomenological parameters of Mazzoni et al. [50]. All other parameters such as steady state voltages, refractory period, synaptic latencies, thresholds and others can also be found therein. Thalamic inputs are the only source of noise; they account for both cortical heterogeneity and spontaneous activity. This is achieved by modeling a two-level noise, where the first level is an Ornstein-Uhlenbeck process superimposed with a constant or periodic signal and the second level is a time-varying inhomogeneous Poisson process. Thus, we have the following time-varying rate, λ(t), that feeds into an inhomogeneous Poisson process:

$$\displaystyle\begin{array}{rcl} \tau _{n}\frac{\mathrm{d}n(t)} {\mathrm{d}t} = -n(t) +\sigma _{n}\sqrt{\frac{2} {\tau _{n}}}\,\eta (t)& &{}\end{array}$$
(10.54)
$$\displaystyle\begin{array}{rcl} \lambda (t) = [c_{0} + n(t)]_{+}& &{}\end{array}$$
(10.55)

where η(t) represents Gaussian white noise, c_0 represents a constant signal (but could equally be periodic or otherwise), and the operation [⋅]_+ is the threshold-linear function, \([x]_{+} = x\) if x > 0, \([x]_{+} = 0\) otherwise, which circumvents negative rates. The constant signal c_0 can range between 1.2 and 2.6 spikes/ms. Note also that periodic or more complex ‘naturalistic’ signals can be applied, but we have kept it simple herein for illustrative purposes. The parameters of the Ornstein-Uhlenbeck process are τ_n = 16 ms and the standard deviation σ_n = 0.4 spikes/ms; also refer to [50] for these parameters.
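A minimal Python sketch of this two-level noise model, Eqs. (10.54)–(10.55), is given below; the Ornstein-Uhlenbeck parameters follow the text, while the bin width and the random seed are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
dt, T = 1e-3, 2.0                      # 1 ms bins (assumed) over a 2 s simulation
tau_n, sigma_n, c0 = 16e-3, 0.4, 1.6   # tau_n = 16 ms, sigma_n = 0.4 spikes/ms, c0 in spikes/ms

n = 0.0
rates, counts = [], []
for _ in range(int(T / dt)):
    # Euler-Maruyama step of the Ornstein-Uhlenbeck process, Eq. (10.54)
    n += -dt / tau_n * n + sigma_n * np.sqrt(2.0 * dt / tau_n) * rng.standard_normal()
    lam = max(c0 + n, 0.0)             # threshold-linear rectification, Eq. (10.55)
    rates.append(lam)
    counts.append(rng.poisson(lam))    # spike count per 1 ms bin (rate given in spikes/ms)

print(f"mean rate: {np.mean(rates):.2f} spikes/ms, total thalamic spikes: {sum(counts)}")
```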

The network simulations were run under the Brian simulator, a Python-based environment [35]. We focus on the resistive extracellular case and compare our DFP measure Φ_i (10.53) with the Mazzoni LFP measure (MPLB), defined herein as the sum of the moduli of excitatory and inhibitory synaptic currents:

$$\displaystyle\begin{array}{rcl} V _{i}^{\mathrm{MPLB}} =\sum _{ j}\vert w_{\mathit{ij}}^{\mathrm{E}}\,\psi _{ 21} {\ast} a_{j}\vert +\sum _{k}\vert w_{\mathit{ik}}^{\mathrm{I}}\,\psi _{ 22} {\ast} a_{k}\vert & &{}\end{array}$$
(10.56)

In addition, Mazzoni et al. [50] take the sum of V_i^MPLB across all pyramidal neurons. To provide a comparison, we consider both the sum of our proposed DFP measure (10.53) and its average over the pyramidal population. Thus we compare the following models of local field potentials:

$$\displaystyle\begin{array}{rcl} L_{1} =\sum _{i}V _{i}^{\mathrm{MPLB}}& &{}\end{array}$$
(10.57)
$$\displaystyle\begin{array}{rcl} L_{2} =\sum _{i}\varPhi _{i}& &{}\end{array}$$
(10.58)
$$\displaystyle\begin{array}{rcl} L_{3} = \frac{1} {P}\sum _{i}\varPhi _{i}\:,& &{}\end{array}$$
(10.59)

where P is the number of pyramidal neurons. Subsequently, we run the network for 2 s with three different noise levels, specifically, receiving constant signals with rates 1.2, 1.6 and 2.4 spikes/ms, as depicted in Fig. 10.3. We report two main striking differences between the LFP measures (10.57), (10.58) and (10.59), namely in frequency and in amplitude. L_1 responds instantaneously to the spiking network activity by means of high frequency oscillations. Moreover, L_1 exhibits a large amplitude, overestimating experimental LFP/EEG measurements that typically vary between 0.5 and 2 mV [45, 58]. In contrast, L_2 and L_3 respond more smoothly to population activity, and it is noticeable that our LFP estimates represent the low-pass filtered thalamic input. Clearly, both L_2 and L_3 have the same time profile; however, L_3 compares better with experimental LFP, that is, it is of the order of millivolts (although it is not contained within the experimental range 0.5–2 mV). Thus we concede that further work is required to improve our estimates. A minor improvement could be attained by applying a weighted average to mimic the distance of an electrode to a particular neuron. However, more biophysical modeling work is needed, as other neural effects, such as neuron-glia interaction, could be required to bring these estimates down to experimental values.
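For illustration, the three readouts (10.56)–(10.59) can be computed from per-neuron synaptic drives as in the following sketch; the drive arrays and the DFP prefactor are placeholders standing in for the filtered spike trains of the full simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
P = 4000                                 # number of pyramidal neurons
exc = rng.gamma(2.0, 0.5, P)             # placeholder for sum_j w_ij^E psi_21 * a_j
inh = -rng.gamma(2.0, 0.4, P)            # placeholder for sum_k w_ik^I psi_22 * a_k
phi = 0.05 * (exc + inh)                 # placeholder per-neuron DFP, cf. Eq. (10.53)

V_mplb = np.abs(exc) + np.abs(inh)       # sum of moduli of synaptic currents, Eq. (10.56)
L1 = V_mplb.sum()                        # Eq. (10.57)
L2 = phi.sum()                           # Eq. (10.58)
L3 = phi.mean()                          # Eq. (10.59): average over the P pyramidal cells
print(f"L1 = {L1:.1f}, L2 = {L2:.1f}, L3 = {L3:.4f}")
```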

Fig. 10.3

Dynamics of the network and LFP comparisons: the three columns represent different runs of the network for three different rates, 1.2, 1.6 and 2.4 spikes/ms. In each column, all panels show the same 250 ms (extracted from 2 s simulations). The top panels (a–c) represent thalamic inputs with the different rates. The second top panels (d–f) correspond to a raster plot of the activity of 200 pyramidal neurons. Panels (g–i) depict the average instantaneous firing rate (computed on a 1 ms bin) of interneurons and panels (j–l) correspond to the average instantaneous firing rate of pyramidal neurons. Panels (m–o) show the LFP L_1 (Eq. (10.57)), which is the Mazzoni et al. measure [50]. Panels (p–r) and (s–u) depict our proposed LFP measures L_2 (Eq. (10.58)) and L_3 (Eq. (10.59)), respectively. Note and compare the different orders of magnitude between the three LFP measures

4 Continuum Neural Field Model

So far we have considered the electrical properties of neural networks containing cortical pyramidal cells by means of equivalent circuits of a three-compartment model. In order to link these properties to the electromagnetic field in extracellular space, we need an embedding of the network topology into physical metric space \(\mathbb{R}^{3}\). This is most easily achieved in the continuum limit of neural field theory.

4.1 Rate Model

Starting with the approximation from Sect. 10.3, we first transform our LIF approach into a rate model. According to Eq. (10.3), a spike train is represented by a sum over delta functions. In order to obtain the number of spikes in a time interval [0, t], one has to integrate Eq. (10.3), yielding

$$\displaystyle{n(t) =\int _{ 0}^{t}a(t^{{\prime}})\,\mathrm{d}t^{{\prime}}\:.}$$

Then, the instantaneous spike rate is formally regained as the original signal of Eq. (10.3) through

$$\displaystyle{ \frac{\mathrm{d}} {\mathrm{d}t}n(t) = a(t)\:. }$$
(10.60)

A spike train a(t) arriving at the presynaptic terminal of an axon causes changes in the conductivity of voltage-gated calcium channels. Therefore, calcium current flows into the synaptic bouton, evoking the release of neurotransmitter into the synaptic cleft, which is essentially a stochastic Bernoulli process [7, 44], where the probability P(k) for releasing k transmitter vesicles obeys a binomial distribution

$$\displaystyle{ P(k) ={ Y \choose k}\,p^{k}(1 - p)^{Y -k}\:, }$$
(10.61)

with Y the number of allocated vesicles in the bouton and p the elementary probability that an arriving action potential releases one vesicle.

In the limit of large numbers, the binomial distribution can be replaced by a normal distribution

$$\displaystyle{ P(k) = \frac{1} {\sqrt{2\pi y(1 - p)}}\,\exp \left [-\frac{(k - y)^{2}} {2y(1 - p)}\right ]\:, }$$
(10.62)

where y = Yp is the average number of allocated transmitter vesicles. Due to this stochasticity of synaptic transmission, even the dynamics of a single neuron should be treated in terms of statistical ensembles. Hence, we describe the state variables U_i(t) by a normal distribution density ρ(u, t) with mean \(\bar{U}(t)\) and variance σ², and determine the firing probability as

$$\displaystyle{ r(t) =\Pr (U(t) \geq \theta ) =\int _{ \theta }^{\infty }\rho (u,t)\,\mathrm{d}u = \frac{1} {2}\,\mathrm{erfc}\left (\frac{\theta -\bar{U}} {\sqrt{2}\sigma } \right )\:, }$$
(10.63)

with ‘erfc’ as the complementary error function accounting for the cumulative probability distribution. Thereby, the stochastic threshold dynamics is characterized by the typical sigmoidal activation functions. In computational neuroscience, the complementary error function is often approximated by the logistic function

$$\displaystyle{f(u) = \frac{1} {1 +\mathrm{ e}^{-\gamma (u-\theta )}}}$$

with parameters gain γ and threshold θ. Using f as nonlinear activation function, the firing probability r(t) = f(U(t)) for mean membrane potential U is closely related to the instantaneous spike rate a(t) (Eq. (10.60)) via

$$\displaystyle{ a(t) = a_{\max }\,r(t) = a_{\max }\,f(U) }$$
(10.64)

with maximal spike rate a max which can be absorbed by f:

$$\displaystyle{ f(u) = \frac{a_{\max }} {1 +\mathrm{ e}^{-\gamma (u-\theta )}}\:. }$$
(10.65)
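The following sketch compares the erfc-based firing probability (10.63) with its logistic approximation (10.65) for assumed values of the threshold, the membrane-potential spread, the gain, and the maximal rate.

```python
import numpy as np
from scipy.special import erfc

theta, sigma = -50e-3, 5e-3              # threshold and membrane-potential spread [V], assumed
a_max, gain = 100.0, 300.0               # maximal rate [spikes/s] and logistic gain [1/V], assumed

u = np.linspace(-80e-3, -20e-3, 7)       # mean membrane potentials
r_erfc = 0.5 * erfc((theta - u) / (np.sqrt(2.0) * sigma))     # firing probability, Eq. (10.63)
a_logistic = a_max / (1.0 + np.exp(-gain * (u - theta)))      # logistic rate, Eq. (10.65)

for ui, r, a in zip(u, r_erfc, a_logistic):
    print(f"U = {ui * 1e3:5.1f} mV   r = {r:.3f}   a = {a:6.1f} spikes/s")
```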

Inserting (10.64) and (10.65) into our LIF model (10.50) yields a leaky integrator rate (LIR ) model [10, 19]

$$\displaystyle{ \tau \frac{\mathrm{d}U_{i}} {\mathrm{d}t} + U_{i} =\sum _{j}w_{\mathit{ij}}\,\psi _{j} {\ast} f(U_{j}) +\kappa \left [ \frac{1} {C_{1}}\chi _{21} + \frac{1} {C_{2}}\delta \right ] {\ast} J_{i}\:. }$$
(10.66)

4.2 Neuroelectrodynamics

Next we perform the continuum limit \(U_{i}(t) \rightarrow u_{n}(\mathbf{x},t)\) à la physique under the assumption of identical neural properties within each population. The sum in Eq. (10.66) may converge under suitable smoothness assumptions upon the synaptic weight matrix w_ij and the convolution kernels. Then a continuous two-dimensional vector \(\mathbf{x} \in \varGamma _{n}\) replaces the neuron index i, while n becomes a population index. The population layers Γ_n become two-dimensional manifolds embedded in three-dimensional physical space such that \(\mathbf{x} \in \varGamma _{n}\) is a two-dimensional projection of a vector \(\mathbf{r} \in C \subset \mathbb{R}^{3}\) (C denoting cortex). Or, likewise, \(\mathbf{r} = (\mathbf{x},z)\), as indicated in Fig. 10.1.

As a result, Eq. (10.66) passes into the Amari equation [1]

$$\displaystyle\begin{array}{rcl} \tau \frac{\partial } {\partial t}u_{i}(\mathbf{x},t) + u_{i}(\mathbf{x},t) =\sum _{k}\int _{\varGamma _{k}}w_{\mathit{ik}}(\mathbf{x},\mathbf{x}^{\prime})\,\psi (\mathbf{x}^{\prime},t) {\ast} f(u_{k}(\mathbf{x}^{\prime},t))\,\mathrm{d}\mathbf{x}^{\prime} + h_{i}(\mathbf{x},t)& &{}\end{array}$$
(10.67)

with input current \(h_{i}(\mathbf{x},t)\) delivered to neuron layer i at site \(\mathbf{x} \in \varGamma _{i}\). The synaptic weight kernels \(w_{\mathit{ik}}(x,x^{{\prime}})\) and the synaptic-dendritic impulse response \(\psi (\mathbf{x}^{\prime},t)\) are obtained from the synaptic weight matrix, and from the Green’s functions ψ j (t), respectively.
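A minimal numerical sketch of an Amari-type field of the form (10.67) is given below for a single one-dimensional layer with periodic boundaries; the Mexican-hat lateral kernel, the instantaneous synaptic-dendritic kernel ψ = δ, and the localized input are illustrative assumptions.

```python
import numpy as np

nx, L, dt, tau = 200, 10.0, 1e-3, 10e-3
x = np.linspace(0.0, L, nx, endpoint=False)
dx = x[1] - x[0]

def mexican_hat(d):
    """Assumed lateral interaction: local excitation, broader inhibition."""
    return 2.0 * np.exp(-d**2 / 0.5) - np.exp(-d**2 / 2.0)

d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)                 # periodic distances
w = mexican_hat(d) * dx                  # discretized weight kernel w(x, x')

f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))   # logistic rate function
h = 1.5 * np.exp(-((x - L / 2.0) ** 2) / 0.2)          # localized input h(x)

u = np.zeros(nx)
for _ in range(int(0.5 / dt)):
    u += dt / tau * (-u + w @ f(u) + h)  # explicit Euler step of Eq. (10.67)

print(f"peak activity {u.max():.2f} at x = {x[np.argmax(u)]:.2f}")
```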

This neural field equation is complemented by the continuum limit of the extracellular dendritic dipole current density through cross section area A with normal vector \(\mathbf{n}_{A}\), shown in Fig. 10.1, which is obtained from (10.31), i.e.

$$\displaystyle\begin{array}{rcl} & & \mathbf{j}(\mathbf{r},t) =\lim _{i\rightarrow \mathbf{x}}\frac{\mathbf{n}_{A}} {A} I_{i1} = \\ & & =\sum _{k}\int _{\varGamma _{k}}\tilde{w}_{1k}(\mathbf{r},\mathbf{r}^{\prime})\,\zeta (\mathbf{r}^{\prime},t) {\ast} f(u_{k}(\mathbf{r}^{\prime},t)) +\xi _{1}(t) {\ast}\mathbf{ j}^{\mathrm{D}}(\mathbf{r},t) +\xi _{ 2}(t) {\ast} u_{1}(\mathbf{r},t)\:,{}\end{array}$$
(10.68)

where we have introduced a modified synaptic weight kernel \(\tilde{w}\) and two new convolution kernels ξ j that are related to the electrotonic parameters of the discrete model (10.31). The proper diffusion current \(\mathbf{j}^{\mathrm{D}}(\mathbf{r},t)\) must be related to the gradient of the charge density \(\rho (\mathbf{r},t)\) according to Fick’s law

$$\displaystyle{ \mathbf{j}^{\mathrm{D}}(\mathbf{r},t) = -\mathbf{D}^{\mathrm{D}}(\mathbf{r},t)\nabla \rho (\mathbf{r},t)\:, }$$
(10.69)

where the diffusion tensor \(\mathbf{D}^{\mathrm{D}}(\mathbf{r},t)\) accounts for inhomogeneities and anisotropies of the extracellular fluid, as reflected by diffusion tensor imaging (DTI) [4, 5]. For layer 1 of pyramidal neurons the input is then given by the diffusion current \(\mathbf{j}^{\mathrm{D}}(\mathbf{r},t)\). Therefore, the input to the Amari equation (10.67) becomes

$$\displaystyle{ h_{i}(\mathbf{x},t) = -\delta _{i1}\kappa A\,\mathbf{D}^{\mathrm{D}}(\mathbf{r},t) \cdot \nabla \rho (\mathbf{r},t) }$$
(10.70)

with Kronecker’s δ ik and appropriately redefined κ.

For further treatment of the electrodynamics of neural fields in linear but inhomogeneous and anisotropic media, we need Ohm’s law

$$\displaystyle{ \mathbf{j}^{\Omega }(\mathbf{r},t) =\boldsymbol{\sigma } (\mathbf{r},t) \cdot \mathbf{ E}(\mathbf{r},t)\:, }$$
(10.71)

where \(\boldsymbol{\sigma }(\mathbf{r},t)\) is the conductivity tensor and \(\mathbf{E}\) the electric field strength. In case of negligible magnetic fields, we can introduce the dendritic field potential \(\varphi\) via

$$\displaystyle{ \mathbf{E} = -\nabla \varphi \:. }$$
(10.72)

The diffusion current (10.69) and Ohmic current (10.71) together obey the Nernst-Planck equation [43, 62]

$$\displaystyle{ \mathbf{j} = -\mathbf{D}^{\mathrm{D}} \cdot \nabla \rho +\boldsymbol{\sigma } \cdot \mathbf{E}\:. }$$
(10.73)

In the diffusive and conductive extracellular fluid, we additionally have

  • Einstein’s relation [28]

    $$\displaystyle{ \mathbf{D}^{\mathrm{D}} = k_{\mathrm{ B}}Tq\boldsymbol{\mu } }$$
    (10.74)
  • Conductivity

    $$\displaystyle{ \boldsymbol{\sigma }=\boldsymbol{\mu }\rho \:, }$$
    (10.75)

where k_B denotes the Boltzmann constant, T the temperature, q the ion charge, and \(\boldsymbol{\mu }\) the ion mobility tensor related to the fluid’s viscosity [43, 62]. Inserting (10.74) and (10.75) into (10.73) yields

$$\displaystyle{ \mathbf{j} = -k_{\mathrm{B}}Tq\boldsymbol{\mu } \cdot \nabla \rho +\boldsymbol{\mu } \cdot \mathbf{E}\rho \:. }$$
(10.76)

This form of the Nernst-Planck equation has to be augmented by a continuity equation

$$\displaystyle{ \nabla \cdot \mathbf{ j} + \frac{\partial \rho } {\partial t} = 0 }$$
(10.77)

reflecting the conservation of charge as a consequence of Maxwell’s equations , and by the first Maxwell equation

$$\displaystyle{ \nabla \cdot \mathbf{ D} =\rho }$$
(10.78)

where

$$\displaystyle{ \mathbf{D} =\boldsymbol{\epsilon } \cdot \mathbf{E} }$$
(10.79)

introduces the electrical permittivity tensor \(\boldsymbol{\epsilon }\).

Inserting (10.79) into (10.78) first gives

$$\displaystyle{ \boldsymbol{\epsilon }\cdot (\nabla \cdot \mathbf{ E}) =\rho -(\nabla \cdot \boldsymbol{\epsilon }) \cdot \mathbf{ E}\:. }$$
(10.80)

Next, we take the divergence of the Nernst-Planck equation (10.76), which yields after consideration of the continuity equation (10.77)

$$\displaystyle\begin{array}{rcl} \nabla \cdot \mathbf{ j}& =& -k_{\mathrm{B}}Tq\nabla \cdot (\boldsymbol{\mu }\cdot \nabla \rho ) + \nabla \cdot (\boldsymbol{\mu }\cdot \mathbf{E}\rho ) {}\\ -\boldsymbol{\epsilon }\frac{\partial \rho } {\partial t}& =& -k_{\mathrm{B}}Tq\boldsymbol{\epsilon } \cdot [(\nabla \cdot \boldsymbol{\mu }) \cdot (\nabla \rho )+\boldsymbol{\mu }\varDelta \rho ] + {}\\ & & \boldsymbol{\epsilon }\cdot (\nabla \cdot \boldsymbol{\mu }) \cdot \mathbf{ E}\rho +\boldsymbol{\epsilon } \cdot \boldsymbol{\mu }\cdot (\nabla \cdot \mathbf{ E})\rho +\boldsymbol{\epsilon } \cdot \boldsymbol{\mu }\cdot \mathbf{ E} \cdot \nabla \rho \:. {}\\ \end{array}$$

Introducing the commutator \([\boldsymbol{\epsilon },\boldsymbol{\mu }] =\boldsymbol{\epsilon } \cdot \boldsymbol{\mu }-\boldsymbol{\mu }\cdot \boldsymbol{\epsilon }\), we can write

$$\displaystyle\begin{array}{rcl} -\boldsymbol{\epsilon }\frac{\partial \rho } {\partial t}& =& -k_{\mathrm{B}}Tq\boldsymbol{\epsilon } \cdot [(\nabla \cdot \boldsymbol{\mu }) \cdot (\nabla \rho )+\boldsymbol{\mu }\varDelta \rho ] +\boldsymbol{\epsilon } \cdot (\nabla \cdot \boldsymbol{\mu }) \cdot \mathbf{ E}\rho + {}\\ & & [\boldsymbol{\epsilon },\boldsymbol{\mu }] \cdot (\nabla \cdot \mathbf{ E})\rho +\boldsymbol{\mu } \cdot \rho ^{2} -\boldsymbol{\mu }\cdot (\nabla \cdot \boldsymbol{\epsilon }) \cdot \mathbf{ E}\rho +\boldsymbol{\epsilon } \cdot \boldsymbol{\mu }\cdot \mathbf{ E} \cdot \nabla \rho \:, {}\\ \end{array}$$

where we have also utilized (10.80).

Using the Nernst-Planck equation  (10.76) once more, we eliminate the electric field

$$\displaystyle{ \mathbf{E} =\boldsymbol{\mu } ^{-1} \cdot \left (\frac{\mathbf{j} + k_{\mathrm{B}}Tq\boldsymbol{\mu } \cdot \nabla \rho } {\rho } \right ) }$$
(10.81)

thus

$$\displaystyle\begin{array}{rcl} & & \quad -\boldsymbol{\epsilon } \frac{\partial \rho } {\partial t} = -k_{\mathrm{B}}Tq\boldsymbol{\epsilon } \cdot [(\nabla \cdot \boldsymbol{\mu }) \cdot (\nabla \rho )+\boldsymbol{\mu }\varDelta \rho ] +\boldsymbol{\epsilon } \cdot \nabla (\ln \boldsymbol{\mu }) \cdot (\mathbf{j} + k_{\mathrm{B}}Tq\boldsymbol{\mu } \cdot \nabla \rho ) + \\ & & [\boldsymbol{\epsilon },\boldsymbol{\mu }] \cdot \nabla \cdot \left \{\boldsymbol{\mu }^{-1} \cdot \left (\frac{\mathbf{j} + k_{\mathrm{B}}Tq\boldsymbol{\mu } \cdot \nabla \rho } {\rho } \right )\right \}\rho +\boldsymbol{\mu } \cdot \rho ^{2} -\boldsymbol{\mu }\cdot (\nabla \cdot \boldsymbol{\epsilon }) \cdot \boldsymbol{\mu }^{-1} \cdot (\mathbf{j} + k_{\mathrm{ B}}Tq\boldsymbol{\mu } \cdot \nabla \rho ) + \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \boldsymbol{\epsilon } \cdot \left (\frac{\mathbf{j} + k_{\mathrm{B}}Tq\boldsymbol{\mu } \cdot \nabla \rho } {\rho } \right ) \cdot \nabla \rho \:. {}\end{array}$$
(10.82)

The solution of (10.82) provides the extracellular charge density \(\rho (\mathbf{r},t)\) dependent on the extracellular driving currents \(\mathbf{j}\) that are delivered by the neural field equations (10.67, 10.68, 10.70). Inserting both \(\rho (\mathbf{r},t)\) and \(\mathbf{j}\) into (10.81) yields the DFP

$$\displaystyle{ -\nabla \varphi =\boldsymbol{\mu } ^{-1} \cdot \left (\frac{\mathbf{j} + k_{\mathrm{B}}Tq\boldsymbol{\mu } \cdot \nabla \rho } {\rho } \right ) }$$
(10.83)

via (10.72).

These equations of neuroelectrodynamics can be considerably simplified by assuming a homogeneous and isotropic medium. In that case (10.82) reduces to

$$\displaystyle{ -\boldsymbol{\epsilon }\frac{\partial \rho } {\partial t} = -k_{\mathrm{B}}Tq\boldsymbol{\epsilon } \cdot \boldsymbol{\mu }\varDelta \rho +\boldsymbol{\mu } \cdot \rho ^{2} +\boldsymbol{\epsilon } \cdot \left (\frac{\mathbf{j} + k_{\mathrm{B}}Tq\boldsymbol{\mu } \cdot \nabla \rho } {\rho } \right ) \cdot \nabla \rho \:, }$$
(10.84)

which is a kind of Fokker-Planck equation for the charge density. Taking only the first term of the r.h.s. into account, we obtain a diffusion equation whose stationary solution gives rise to the Warburg impedance of extracellular space [3, 4, 6, 59].
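Keeping only that leading term, the charge density obeys a plain diffusion equation, ∂ρ/∂t = D Δρ; the following sketch integrates this limit on a one-dimensional grid with an explicit finite-difference scheme, using an assumed effective diffusion constant and an assumed initial charge distribution.

```python
import numpy as np

nx, dx, dt = 200, 1e-6, 1e-4             # grid spacing [m] and time step [s], assumed
D = 1e-9                                 # effective diffusion constant [m^2/s], assumed
assert D * dt / dx**2 <= 0.5             # stability condition of the explicit scheme

x = np.arange(nx) * dx
rho = np.exp(-((x - x.mean()) / (5 * dx)) ** 2)   # initial localized charge density

for _ in range(200):
    lap = (rho[2:] - 2.0 * rho[1:-1] + rho[:-2]) / dx**2   # discrete Laplacian
    rho[1:-1] += dt * D * lap            # d rho / dt = D * Laplacian(rho)

print(f"peak charge density after diffusion: {rho.max():.3f}")
```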

5 Discussion

In this contribution we outlined a biophysical theory for the coupling of microscopic neural activity to the electromagnetic field as described by the Maxwell equations, in order to adequately explain neural field potentials such as DFP, LFP, and EEG. To that aim we started from the widely accepted assumption that cortical LFP/EEG mostly reflect extracellular dipole currents of pyramidal cells [53, 58]. This assumption has led us to recent work suggesting that both Ohmic and diffusion currents contribute to LFP/EEG generation [3, 6, 32–34]. In addition, the assumption has posed a further challenge in that the geometry of the cortical pyramidal cells should be incorporated. Accounting for the geometry of the cell seemed to imply that one loses the computational efficiency of point models and has to resort to compartmental models. However, herein we have proposed a framework showing how to circumvent these apparent difficulties and finally derive a biophysically plausible observation model for the Amari neural field equation [1], with additional dipole currents coupled to Maxwell’s equations.

We have first proposed a full-fledged three-compartment model of a single pyramidal cell, decomposed into the apical dendritic tree hosting the majority of excitatory synapses, the soma and the perisomatic dendritic tree harboring mainly the inhibitory synapses, and the axon hillock exhibiting the neural spiking mechanism. In addition, the extracellular space was represented by incorporating both Ohmic and diffusive impedances, thus assuming that the total current through the extracellular fluid is governed by the Nernst-Planck equation. This has enabled us to account for the Warburg impedance. From this starting point and successive simplifications we have derived the evolution law of the circuit, represented as an integro-differential equation. In the continuum limit this evolution law passed into the Amari neural field equation, augmented by an observation equation for dendritic dipole currents that are coupled to Maxwell’s equations for the electromagnetic field in extracellular fluid.

Moreover, we have demonstrated how to simplify our proposed three-compartment model into a standard LIF network, which then enabled us to compare our LFP measure with other LFP measures found in the literature. Herein, we specifically chose to compare with the work of Mazzoni et al. [50], who proposed the LFP to be the sum of the moduli of inhibitory and excitatory currents. Thus, we proceeded by mapping our biophysical electrotonic parameters to the phenomenological parameters implemented in Mazzoni’s LFP model [50], however now with the advantage that our LFP measure accounts for the extracellular currents and the geometry of the cell. Subsequently, we compared different simulation runs between our LIF network model and that of Mazzoni et al. [50]. This comparison indicates that the Mazzoni et al. model systematically overestimates the LFP amplitude by almost one order of magnitude and also systematically overestimates LFP frequencies. For a more detailed discussion we refer the interested reader to [9].

At the present stage, we note that there is still a long way to go to fully explain the spatiotemporal characteristics of LFP and EEG. For example, the polarity reversals observed in experimental LFP/EEG as an electrode traverses different cortical layers are not accounted for in our current model. This is explained by the direction of the dipole currents, which is constrained in the sense that current sources are situated at the perisomatic dendritic tree and current sinks at the apical dendritic tree. Taking this polarity as positive also entails positive DFP and LFP that could only change in strength. However, it is straightforward to adapt our model by fully incorporating cortical layers III and VI, for example. Yet another aspect that was not looked at in the present work was that of ephaptic interactions [31, 37, 40, 55] between neurons and the LFP, which could act via a mean-field coupling as an order parameter, thereby entraining the local populations to synchronized activity. A possible biophysical basis for this phenomenon could be the polarization of neurons induced by electric fields that are generated by ionic charges. As a consequence, this could alter the voltage-dependent conductances, triggering changes of the neuronal dynamics, such as spiking and the activity of glial cells. We have not accounted for this effect in a biophysical sense yet; however, we could phenomenologically describe this mean-field coupling through a modulation of firing thresholds, as outlined in [7].