1 Introduction

Determining the effects of a network’s structure on its dynamics is an issue of great interest, particularly in the case of a network of neurons [15, 20, 26, 27]. Since neurons form directed synaptic connections, a neuron has both an in-degree—the number of neurons connected to it, and an out-degree—the number of neurons it connects to. In this paper, we present a framework for investigating the effects of correlations, both positive and negative, between these two quantities. To isolate the effects of these correlations, we assume no other structure in the networks, i.e. random connectivity based on the neurons’ degrees.

A number of other authors have considered this issue, and we now summarise relevant aspects of their results. LaMar and Smith [13] considered directed networks of identical pulse-coupled phase oscillators and mostly concentrated on the probability that the network would fully synchronise, and the time taken to do so. Vasquez et al. [30] considered binary neurons whose states were updated at discrete times, and found that negative degree correlations stabilised a low firing rate state, for excitatory coupling. A later paper [15] considered more realistic spiking neurons, had a mix of excitatory and inhibitory neurons, and concentrated more on the network’s response to transient stimuli, as well as analysis of network properties such as mean shortest path. Several authors have considered networks for which the in- and out-degrees of a neuron are equal, thereby inducing positive correlations between them [9, 27].

Vegué and Roxin [32] considered large networks of both excitatory and inhibitory leaky integrate-and-fire neurons and used a mean-field formalism to determine steady-state distributions of firing rates within neural populations. They considered the effects of within-neuron degree correlations for the excitatory-to-excitatory connections, and sometimes varied the probability of inhibitory-to-excitatory connections in order to create a “balanced state”. Nykamp et al. [20] also considered large networks of both excitatory and inhibitory neurons and used a Wilson–Cowan-type firing rate model to investigate the effects of within-neuron degree correlations. They showed that once correlations were included, the dynamics are effectively four-dimensional, in contrast to the two-dimensional dynamics expected from a standard rate-based excitatory/inhibitory network. They also related the degree distributions to cortical motifs. Experimental evidence for within-neuron degree correlations is given in [31].

The structure of the paper is as follows. In Sect. 2, we present the model network and summarise the analysis of [1] showing that under certain assumptions, the network can be described by a coupled set of ordinary differential equations, one for the dynamics associated with each distinct in-degree. In Sect. 3, we discuss how to generate correlated in- and out-degrees using a Gaussian copula. Our model involves sums over all distinct in-degrees, and in Sect. 4 we present a computationally efficient method for evaluating these sums, in analogy with Gaussian quadrature. Our main results are in Sect. 5, and we show in Sect. 6 that they also occur in networks of more realistic Morris–Lecar spiking neurons. We discuss motifs in Sect. 7 and conclude in Sect. 8.

2 Model

We consider the same model of pulse-coupled theta neurons as in [1]. The governing equations are

$$\begin{aligned} \frac{\hbox {d}\theta _i}{\hbox {d}t}=1-\cos {\theta _i}+(1+\cos {\theta _i})(\eta _i+I_i) \end{aligned}$$
(1)

for \(i=1,2\ldots N\), where the phase angle \(\theta _i\) characterises the state of neuron i, which fires an action potential as \(\theta _i\) increases through \(\pi \),

$$\begin{aligned} I_i=\frac{K}{\langle k\rangle }\sum _{j=1}^N A_{ij}P_n(\theta _j), \end{aligned}$$
(2)

K is the strength of connections within the network, \(A_{ij}=1\) if there is a connection from neuron j to neuron i and \(A_{ij}=0\) otherwise, \(\langle k\rangle \) is the average degree, \(\sum _{i,j}A_{ij}/N\), and \(P_n(\theta )=a_n(1-\cos {\theta })^n\) where \(a_n\) is chosen such that \(\int _0^{2\pi }P_n(\theta )\hbox {d}\theta =1\). The function \(P_n(\theta _j)\) models the pulse of current emitted by neuron j when it fires and can be made arbitrarily “spike-like” and localised around \(\theta _j=\pi \) by increasing n. The parameter \(\eta _i\) is the input current to neuron i in the absence of coupling, and the \(\eta _i\) are independently and randomly chosen from a Lorentzian distribution:

$$\begin{aligned} g(\eta )=\frac{\Delta /\pi }{(\eta -\eta _0)^2+\Delta ^2} \end{aligned}$$
(3)
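
As a concrete illustration, the following minimal sketch (our code, not that of [1]) integrates (1)–(3) directly for a given adjacency matrix A. The forward Euler scheme, time step and initial conditions are our own choices; the closed form for \(a_n\) follows from \(\int _0^{2\pi }(1-\cos \theta )^n\hbox {d}\theta =2\pi \binom{2n}{n}/2^n\).

```python
import numpy as np
from math import comb

def simulate_theta_network(A, K, eta0=0.5, Delta=0.05, n=2, T=100.0, dt=0.01, seed=0):
    """Forward Euler integration of Eqs. (1)-(2), with eta_i drawn from Eq. (3)."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    mean_k = A.sum() / N                                        # <k> = sum_ij A_ij / N
    eta = eta0 + Delta * np.tan(np.pi * (rng.random(N) - 0.5))  # Lorentzian (Cauchy) samples
    a_n = 2 ** n / (2.0 * np.pi * comb(2 * n, n))               # makes P_n integrate to 1
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    for _ in range(int(T / dt)):
        Pn = a_n * (1.0 - np.cos(theta)) ** n                   # pulses emitted by each neuron
        I = (K / mean_k) * (A @ Pn)                             # input currents, Eq. (2)
        theta += dt * (1.0 - np.cos(theta) + (1.0 + np.cos(theta)) * (eta + I))
    return np.mod(theta, 2.0 * np.pi)
```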

Chandra et al. [1] considered the limit of large N and assumed that the network can be characterised by two functions. Firstly, a degree distribution \(P(\mathbf {k})\), normalised so that \(\sum _\mathbf {k}P(\mathbf {k})=N\), where \(\mathbf {k}=(k_\mathrm{in},k_\mathrm{out})\) and \(k_\mathrm{in}\) and \(k_\mathrm{out}\) are the in- and out-degrees, respectively, of a neuron with degree \(\mathbf {k}\). Secondly, an assortativity function \(a(\mathbf {k}' \rightarrow \mathbf {k})\) giving the probability of a connection from a neuron with degree \(\mathbf {k}'\) to one with degree \(\mathbf {k}\), given that such neurons exist. Whereas [1] investigated the effects of varying \(a(\mathbf {k}' \rightarrow \mathbf {k})\), here we consider the default value for this function [i.e. its value expected by chance, see (11)] and investigate the effects of varying correlations between \(k_\mathrm{in}\) and \(k_\mathrm{out}\) as specified by the degree distribution \(P(\mathbf {k})\). We emphasise that we are only considering within-neuron degree correlations and are not considering degree assortativity, which refers to the probability of neurons with specified degrees being connected to one another [1, 25].

In the limit \(N\rightarrow \infty \), the network can be described by a probability distribution \(f(\theta ,\eta |\mathbf {k},t)\), where \(f(\theta ,\eta |\mathbf {k},t)\hbox {d}\theta \ \hbox {d}\eta \) is the probability that a neuron with degree \(\mathbf {k}\) has phase angle in \([\theta ,\theta +\hbox {d}\theta ]\) and value of \(\eta \) in \([\eta ,\eta +\hbox {d}\eta ]\) at time t. This distribution satisfies the continuity equation

$$\begin{aligned} \frac{\partial f}{\partial t}+\frac{\partial }{\partial \theta }(vf)=0 \end{aligned}$$
(4)

where v is the continuum version of the right-hand side of (1):

$$\begin{aligned} v(\theta ,\mathbf {k},\eta ,t)&= 1-\cos {\theta }+(1+\cos {\theta }) \nonumber \\&\quad \left[ \eta +\frac{K}{\langle k\rangle }\sum _{\mathbf {k}'}P(\mathbf {k}')a(\mathbf {k}' \rightarrow \mathbf {k}) \right. \nonumber \\&\quad \left. \times \int _{-\infty }^\infty \int _0^{2\pi } f(\theta ',\eta '|\mathbf {k}',t) P_n(\theta ')\hbox {d}\theta '\ \hbox {d}\eta '\right] \end{aligned}$$
(5)

System (4)–(5) is amenable to the use of the Ott/Antonsen ansatz [22, 23], and using standard techniques [3, 10, 12, 14] one can show that the long-time dynamics of the system is described by

$$\begin{aligned} \frac{\partial b(\mathbf{k},t)}{\partial t}&=\frac{-i(b(\mathbf{k},t)-1)^2}{2}+\frac{(b(\mathbf{k},t)+1)^2}{2}\Bigg [-\Delta \nonumber \\&\quad +i\eta _0+\frac{iK}{\langle k\rangle } \sum _{\mathbf{k}'}P(\mathbf{k}') a(\mathbf{k}'\rightarrow \mathbf{k})G(\mathbf {k}',t) \Bigg ] \end{aligned}$$
(6)

where (having chosen \(n=2\))

$$\begin{aligned}&G(\mathbf {k}',t) \nonumber \\&\quad =1-\frac{2(b(\mathbf{k}',t)+\bar{b}(\mathbf{k}',t))}{3}+\frac{b(\mathbf{k}',t)^2+\bar{b}(\mathbf{k}',t)^2}{6}.\nonumber \\ \end{aligned}$$
(7)

The quantity

$$\begin{aligned} b(\mathbf{k},t)=\int _{-\infty }^\infty \int _0^{2\pi } f(\theta ,\eta |\mathbf {k},t)e^{i\theta }\hbox {d}\theta \ \hbox {d}\eta \end{aligned}$$
(8)

can be regarded as a complex-valued “order parameter” for neurons with degree \(\mathbf {k}\) at time t. The function \(G(\mathbf {k}',t)\) can be regarded as the output current from neurons with degree \(\mathbf {k}'\), and its form results from rewriting the pulse function \(P_n(\theta )\) in terms of \(b(\mathbf {k}',t)\). [For general n, \(G(\mathbf {k}',t)\) is the sum of a degree-n polynomial in \(b(\mathbf {k}',t)\) and one in \(\bar{b}(\mathbf {k}',t)\) (the conjugate of \(b(\mathbf {k}',t)\)) [10, 14]. One can take the limit \(n\rightarrow \infty \) and obtain \(G(\mathbf {k}',t)=(1-|b(\mathbf {k}',t)|^2)/(1+b(\mathbf {k}',t)+\bar{b}(\mathbf {k}',t)+|b(\mathbf {k}',t)|^2)\).] Note that the parameters of Lorentzian (3) appear in (6) as a result of evaluating the integral over \(\eta '\) in (5). Equation (6) only describes the long-time asymptotic behaviour of network (1), on the “Ott/Antonsen manifold”, and thus may not fully describe transients from arbitrary initial conditions, nor the effects of stimuli which move the network off this manifold.
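
Since G appears repeatedly below, a small sketch (our code) implementing (7) and the quoted \(n\rightarrow \infty \) limit may be useful; it uses \(b+\bar{b}=2\,\text{Re}(b)\) and \(b^2+\bar{b}^2=2\,\text{Re}(b^2)\):

```python
import numpy as np

def G_n2(b):
    """Output current of Eq. (7), for n = 2."""
    return 1.0 - 4.0 * b.real / 3.0 + (b ** 2).real / 3.0

def G_inf(b):
    """The n -> infinity limit quoted in the text."""
    return (1.0 - np.abs(b) ** 2) / (1.0 + 2.0 * b.real + np.abs(b) ** 2)
```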

One can also marginalise \(f(\theta ,\eta |\mathbf {k},t)\) over \(\eta \) to obtain the distribution of \(\theta \) for each \(\mathbf {k}\) and t:

$$\begin{aligned}&p_{\theta }(\theta |\mathbf {k},t) \nonumber \\&\quad =\frac{1-|b(\mathbf{k},t)|^2}{2\pi \{1-2|b(\mathbf{k},t)|\cos {[\theta -\arg (b(\mathbf{k},t))]}+|b(\mathbf{k},t)|^2\}}\nonumber \\ \end{aligned}$$
(9)

a unimodal function with maximum at \(\theta =\arg (b(\mathbf{k},t))\). The firing rate of neurons with degree \(\mathbf {k}\) is equal to the flux through \(\theta =\pi \), i.e.

$$\begin{aligned} f(\mathbf{k},t)&=2p_{\theta }(\pi |\mathbf {k},t) \nonumber \\&=\frac{1-|b(\mathbf{k},t)|^2}{\pi \{1+2|b(\mathbf{k},t)|\cos {[\arg (b(\mathbf{k},t))]}+|b(\mathbf{k},t)|^2\}} \nonumber \\&= \frac{1}{\pi }\text{ Re }\left( \frac{1-\bar{b}(\mathbf{k},t)}{1+\bar{b}(\mathbf{k},t)}\right) \end{aligned}$$
(10)

where we have used the fact that \(\hbox {d}\theta /\hbox {d}t=2\) when \(\theta =\pi \).

Suppose our network has neutral assortativity, i.e. neurons are randomly connected with the probability of connection being determined by just their relevant degrees. Then [1, 25]

$$\begin{aligned} a(\mathbf{k}'\rightarrow \mathbf{k})=\frac{k_\mathrm{out}'k_\mathrm{in}}{N\langle k \rangle } \end{aligned}$$
(11)

and [writing \(P(k_\mathrm{in}',k_\mathrm{out}',\hat{\rho })\) instead of \(P(\mathbf {k}')\) from now on, where \(\hat{\rho }\) is a parameter used to calibrate the desired correlation between \(k_\mathrm{in}'\) and \(k_\mathrm{out}'\), defined below in (17)]

$$\begin{aligned}&\sum _{k_\mathrm{in}'}\sum _{k_\mathrm{out}'}P(k_\mathrm{in}',k_\mathrm{out}',\hat{\rho })a(\mathbf{k}'\rightarrow \mathbf{k})G(k_\mathrm{in}',k_\mathrm{out}',t) \nonumber \\&\quad = \frac{k_\mathrm{in}}{N\langle k\rangle }\sum _{k_\mathrm{in}'}\sum _{k_\mathrm{out}'}P(k_\mathrm{in}',k_\mathrm{out}',\hat{\rho })k_\mathrm{out}'G(k_\mathrm{in}',k_\mathrm{out}',t) \end{aligned}$$
(12)

This quantity is proportional to the input to a neuron with degree \((k_\mathrm{in},k_\mathrm{out})\) from other neurons within the network, but it is clearly independent of \(k_\mathrm{out}\), so the state of a neuron with degree \((k_\mathrm{in},k_\mathrm{out})\) must also be independent of \(k_\mathrm{out}\), and thus G must be independent of \(k_\mathrm{out}'\). So the expression in (12) can be written

$$\begin{aligned} \frac{k_\mathrm{in}}{N\langle k\rangle }\sum _{k_\mathrm{in}'}Q(k_\mathrm{in}',\hat{\rho })G(k_\mathrm{in}',t) \end{aligned}$$
(13)

where

$$\begin{aligned} Q(k_\mathrm{in}',\hat{\rho })\equiv \sum _{k_\mathrm{out}'}P(k_\mathrm{in}',k_\mathrm{out}',\hat{\rho })k_\mathrm{out}' \end{aligned}$$
(14)

The function Q can be thought of as a \(k_\mathrm{in}'\)-dependent mean of \(k_\mathrm{out}'\) which is also dependent on the correlations between \(k_\mathrm{in}'\) and \(k_\mathrm{out}'\).

Our model equations are thus

$$\begin{aligned} \frac{\partial b(k_\mathrm{in},t)}{\partial t}= & {} \frac{-i(b(k_\mathrm{in},t)-1)^2}{2}+\frac{(b(k_\mathrm{in},t)+1)^2}{2} \nonumber \\&\times \left[ -\Delta +i\eta _0+\frac{iK k_\mathrm{in}}{N\langle k\rangle ^2} \sum _{k_\mathrm{in}'}Q(k_\mathrm{in}',\hat{\rho })G(k_\mathrm{in}',t) \right] \nonumber \\ \end{aligned}$$
(15)

where \(k_\mathrm{in}\) takes on integer values between the minimum and maximum in-degrees. The correlation between in- and out-degrees of a neuron is controlled by \(\hat{\rho }\), as explained below, and this appears as a parameter in (14).

It is interesting to compare (14)–(15) with the heuristic rate equation in [20]. These authors characterised a neuron by its “fI curve”—a nonlinear function transforming input current into a firing rate. They concluded that the input current to a neuron is proportional to two quantities: (i) its in-degree, and (ii) the sum over in- and out-degrees of presynaptic neurons of the product of the joint degree distribution, the out-degree of the presynaptic neuron, and the “output” of presynaptic neurons. We also find this form of equation.

We note that the transformation \(V=\tan {(\theta /2)}\) maps a theta neuron to a quadratic integrate-and-fire (QIF) neuron with threshold and resets of \(\pm \infty \), and that for the special case \(n=\infty \) one could derive an equivalent pair of real equations rather than the single Eq. (15) where the two real variables are the mean voltage and firing rate of the QIF neurons with a specific in-degree [17].

3 Generating correlated in- and out-degrees

We now turn to the problem of deriving \(P(k_\mathrm{in}',k_\mathrm{out}',\hat{\rho })\) and thus \(Q(k_\mathrm{in}',\hat{\rho })\). For simplicity, we choose the distributions of both the in- and out-degrees to be the same, namely power law distributions with exponent \(-3\), truncated below and above at degrees a and b, respectively. (Evidence for power law distributions in the human brain is given in [4], for example.) So the probability distribution function of either in- or out-degree k is

$$\begin{aligned} p(k)={\left\{ \begin{array}{ll} \left( \frac{2a^2b^2}{b^2-a^2}\right) k^{-3} &{} a\le k\le b \\ 0 &{} \text{ otherwise } \end{array}\right. } \end{aligned}$$
(16)

where the normalisation factor results from approximating the sum from a to b by an integral. (The approximation improves as a and b are both increased.) We want to introduce correlations between the in- and out-degrees of a neuron while retaining these marginal distributions. We do this using a Gaussian copula [18]. The correlated bivariate normal distribution with zero mean is

$$\begin{aligned} f(x,y,\hat{\rho })&=\frac{1}{2\pi \sqrt{|\varSigma |}}e^{-(\mathbf {x}^T\varSigma ^{-1}\mathbf {x})/2} \nonumber \\&=\frac{1}{2\pi \sqrt{1-\hat{\rho }^2}}e^{-(x^2-2\hat{\rho } xy+y^2)/[2(1-\hat{\rho }^2)]} \end{aligned}$$
(17)

where

$$\begin{aligned} \mathbf {x}\equiv \begin{pmatrix} x \\ y \end{pmatrix} \qquad \varSigma =\begin{pmatrix} 1 &{} \hat{\rho } \\ \hat{\rho } &{} 1 \end{pmatrix} \end{aligned}$$
(18)

and \(\hat{\rho }\in (-1,1)\) is the correlation between x and y. The variables x and y have no physical meaning, and we use the copula just as a way of deriving an analytic expression for \(P(k_\mathrm{in}',k_\mathrm{out}',\hat{\rho })\) for which the correlations between \(k_\mathrm{in}'\) and \(k_\mathrm{out}'\) can be varied systematically.

The marginal distributions for x and y are the same:

$$\begin{aligned} \tilde{p}(x)=\frac{1}{\sqrt{2\pi }}e^{-x^2/2} \end{aligned}$$
(19)

as are their cumulative distribution functions:

$$\begin{aligned} C(x)=[1+\text{ erf }(x/\sqrt{2})]/2 \end{aligned}$$
(20)

We define the cumulative distribution function of f:

$$\begin{aligned} F(X,Y,\hat{\rho })=\int _{-\infty }^Y\int _{-\infty }^X f(x,y,\hat{\rho })\hbox {d}x\ \hbox {d}y \end{aligned}$$
(21)

and also have the cumulative distribution function for a degree k:

$$\begin{aligned} C_k(k)= \int _a^k \left( \frac{2a^2b^2}{b^2-a^2}\right) s^{-3}\hbox {d}s=\frac{b^2(k^2-a^2)}{k^2(b^2-a^2)} \end{aligned}$$
(22)

where we have treated k as a continuous variable and again approximated a sum by an integral.

We thus have the joint cumulative distribution function for \(k_\mathrm{in}\) and \(k_\mathrm{out}\)

$$\begin{aligned} \widehat{C}(k_\mathrm{in},k_\mathrm{out},\hat{\rho })= & {} F(C^{-1}(C_k(k_\mathrm{in})),C^{-1}(C_k(k_\mathrm{out})),\hat{\rho }) \nonumber \\= & {} \int _{-\infty }^{C^{-1}(C_k(k_\mathrm{out}))}\int _{-\infty }^{C^{-1}(C_k(k_\mathrm{in}))} f(x,y,\hat{\rho })\hbox {d}x\ \hbox {d}y\nonumber \\ \end{aligned}$$
(23)

The joint degree distribution for \(k_\mathrm{in}\) and \(k_\mathrm{out}\) is then

$$\begin{aligned} P(k_\mathrm{in},k_\mathrm{out},\hat{\rho })= & {} \frac{\partial ^2}{\partial k_\mathrm{in}\partial k_\mathrm{out}}\widehat{C}(k_\mathrm{in},k_\mathrm{out},\hat{\rho }) \nonumber \\= & {} \{C^{-1}[C_k(k_\mathrm{in})]\}'\{C^{-1}[C_k(k_\mathrm{out})]\}'\nonumber \\&\times f\{C^{-1}[C_k(k_\mathrm{in})],C^{-1}[C_k(k_\mathrm{out})],\hat{\rho }\} \nonumber \\ \end{aligned}$$
(24)

where the primes indicate differentiation with respect to the relevant k. Now

$$\begin{aligned} C^{-1}(x)=\sqrt{2} \text{ erf }^{-1}(2x-1) \end{aligned}$$
(25)

so

$$\begin{aligned} C^{-1}[C_k(k)]= \sqrt{2} \text{ erf }^{-1}\left( \frac{2b^2(k^2-a^2)}{k^2(b^2-a^2)}-1\right) \end{aligned}$$
(26)

and

$$\begin{aligned}&\{C^{-1}[C_k(k)]\}' \nonumber \\&\quad =\sqrt{\frac{\pi }{2}}\exp {\left[ \left\{ \text{ erf }^{-1}\left( \frac{2b^2(k^2-a^2)}{k^2(b^2-a^2)}-1\right) \right\} ^2\right] }\frac{4a^2b^2}{(b^2-a^2)k^3}\nonumber \\ \end{aligned}$$
(27)

Substituting these into (24) and simplifying, we find

$$\begin{aligned}&P(k_\mathrm{in},k_\mathrm{out},\hat{\rho })\nonumber \\&\quad =\frac{4a^4b^4}{\sqrt{1-\hat{\rho }^2}(b^2-a^2)^2k_\mathrm{in}^3k_\mathrm{out}^3} \nonumber \\&\qquad \times \exp {\left\{ \frac{\hat{\rho }C^{-1}[C_k(k_\mathrm{in})]C^{-1}[C_k(k_\mathrm{out})]}{1-\hat{\rho }^2}\right\} } \nonumber \\&\qquad \times \exp {\left[ \frac{-\hat{\rho }^2\left( \{C^{-1}[C_k(k_\mathrm{in})]\}^2+\left\{ C^{-1}[C_k(k_\mathrm{out})]\right\} ^2\right) }{2(1-\hat{\rho }^2)}\right] }\nonumber \\ \end{aligned}$$
(28)
$$\begin{aligned}= & {} \frac{p(k_\mathrm{in})p(k_\mathrm{out})}{\sqrt{1-\hat{\rho }^2}}\exp {\left\{ \frac{\hat{\rho }C^{-1}[C_k(k_\mathrm{in})]C^{-1}[C_k(k_\mathrm{out})]}{1-\hat{\rho }^2}\right\} } \nonumber \\&\times \exp {\left[ \frac{-\hat{\rho }^2\left( \{C^{-1}[C_k(k_\mathrm{in})]\}^2+\left\{ C^{-1}[C_k(k_\mathrm{out})]\right\} ^2\right) }{2(1-\hat{\rho }^2)}\right] } \nonumber \\ \end{aligned}$$
(29)

Note that for \(\hat{\rho }=0\), this simplifies to \(p(k_\mathrm{in})p(k_\mathrm{out})\), as expected. Examples of \(P(k_\mathrm{in},k_\mathrm{out},\hat{\rho })\) for different \(\hat{\rho }\) are shown in Fig. 1. Both Zhao et al. [33] and LaMar and Smith [13] used Gaussian copulas to create networks with correlated in- and out-degrees as done here, but did not derive an expression of form (29).
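
Equation (29) is straightforward to evaluate numerically. A sketch (our code), assuming the truncated power law marginals (16) and returning the density, i.e. before the normalisation to N:

```python
import numpy as np
from scipy.special import erfinv

def gauss_score(k, a, b):
    """C^{-1}(C_k(k)), Eq. (26); finite only for a < k < b."""
    return np.sqrt(2.0) * erfinv(2.0 * b**2 * (k**2 - a**2) / (k**2 * (b**2 - a**2)) - 1.0)

def p_marginal(k, a, b):
    """Truncated power law marginal, Eq. (16)."""
    return 2.0 * a**2 * b**2 / (b**2 - a**2) * k**(-3.0)

def joint_P(kin, kout, rho_hat, a=100, b=400):
    """Joint degree density, Eq. (29); reduces to p(kin) p(kout) when rho_hat = 0."""
    x, y = gauss_score(kin, a, b), gauss_score(kout, a, b)
    expo = (rho_hat * x * y - 0.5 * rho_hat**2 * (x**2 + y**2)) / (1.0 - rho_hat**2)
    return p_marginal(kin, a, b) * p_marginal(kout, a, b) * np.exp(expo) / np.sqrt(1.0 - rho_hat**2)
```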

Fig. 1

Log of \(P(k_\mathrm{in},k_\mathrm{out},\hat{\rho })\) is shown for three different values of \(\hat{\rho }\) (red: larger P, blue: smaller P). \(a=100,b=400\) (colour figure online)

We need to relate \(\hat{\rho }\), a parameter in (29), to \(\rho \), the Pearson’s correlation coefficient between in- and out-degrees of a neuron (note: not between two connected neurons). We have

$$\begin{aligned} \rho =\frac{\tilde{\varSigma }P(k_\mathrm{in},k_\mathrm{out},\hat{\rho })(k_\mathrm{in}-\langle k\rangle )(k_\mathrm{out}-\langle k\rangle )}{\sqrt{\tilde{\varSigma }P(k_\mathrm{in},k_\mathrm{out},\hat{\rho })(k_\mathrm{in}-\langle k\rangle )^2}\sqrt{\tilde{\varSigma }P(k_\mathrm{in},k_\mathrm{out},\hat{\rho })(k_\mathrm{out}-\langle k\rangle )^2}} \end{aligned}$$
(30)

where \(\tilde{\varSigma }\) indicates a sum over all \(k_\mathrm{in}\) and \(k_\mathrm{out}\). \(\rho \) as a function of \(\hat{\rho }\) is shown in Fig. 2. We see that the relationship is monotonic, and while it is possible to obtain values of \(\rho \) close to 1, the lower limit is approximately \(-0.6\). By varying \(\hat{\rho }\) in (15), we can thus investigate the effects of varying the correlation coefficient between in- and out-degrees of a neuron (\(\rho \)) on the dynamics of a network. Note that for the distributions used here, treating k as a continuous variable, \(\langle k \rangle =2ab/(b+a)\).
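
The map from \(\hat{\rho }\) to \(\rho \) can be reproduced by evaluating (30) on the integer degree grid; a sketch (our code) using joint_P from the previous sketch, with grid means in place of the continuum \(\langle k\rangle \) (a negligible difference here):

```python
import numpy as np

def pearson_rho(rho_hat, a=100, b=400):
    """Eq. (30): Pearson correlation of (k_in, k_out) under joint_P."""
    k = np.arange(a + 1, b)            # interior degrees; endpoints map to infinite scores
    kin, kout = np.meshgrid(k, k, indexing="ij")
    P = joint_P(kin, kout, rho_hat, a, b)
    P /= P.sum()                       # normalise over the discrete grid
    m_in, m_out = (P * kin).sum(), (P * kout).sum()
    cov = (P * (kin - m_in) * (kout - m_out)).sum()
    sd = np.sqrt((P * (kin - m_in) ** 2).sum() * (P * (kout - m_out) ** 2).sum())
    return cov / sd
```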

Fig. 2

Correlation coefficient between in- and out-degrees, \(\rho \), as a function of the correlation coefficient in the Gaussian copula, \(\hat{\rho }\). Parameters: \(a=100,b=400\)

Keeping in mind the normalisation \(\sum _\mathbf {k}P(\mathbf {k})=N\), we write \(Q(k_\mathrm{in}',\hat{\rho })\) as

$$\begin{aligned} Q(k_\mathrm{in}',\hat{\rho })=N\sum _{k_\mathrm{out}'=a}^b P(k_\mathrm{in}',k_\mathrm{out}',\hat{\rho })k_\mathrm{out}' \end{aligned}$$
(31)

Note that the factor of N here cancels with that in the last term in (15), giving equations which do not explicitly depend on N. Examples of \(Q(k_\mathrm{in}',\hat{\rho })\) for different \(\hat{\rho }\) are shown in Fig. 3. We see that increasing \(\hat{\rho }\) gives more weight to high in-degree nodes and less to low in-degree nodes, and that decreasing \(\hat{\rho }\) has the opposite effect.

Fig. 3

The function \(Q(k_\mathrm{in},\hat{\rho })\) [Eq. (31)] for different \(\hat{\rho }\). The right panel is a zoom of the left panel. Parameters: \(a=100,b=400,N=2000\)

4 Reduced model

We now turn to the issue of evaluating the sums over degrees in both (31) and (15). Although such sums are typically over only several hundred terms, it is possible to accurately evaluate them using many fewer terms, in analogy with Gaussian quadrature [5].

Defining an inner product as the sum

$$\begin{aligned} (f,g)=\sum _{k=a}^b f(k)g(k) \end{aligned}$$
(32)

we assume that there is a corresponding set of orthogonal polynomials \(\{q_n(k)\}_{0\le n}\) associated with this product. These polynomials satisfy the three-term recurrence relationship

$$\begin{aligned} q_{n+1}(k)=(k-\alpha _n)q_n(k)-\beta _n q_{n-1}(k) \end{aligned}$$
(33)

where

$$\begin{aligned} \alpha _n\equiv & {} \frac{(kq_n,q_n)}{(q_n,q_n)}; \qquad 0\le n \end{aligned}$$
(34)
$$\begin{aligned} \beta _n\equiv & {} \frac{(q_n,q_n)}{(q_{n-1},q_{n-1})}; \qquad 1\le n \end{aligned}$$
(35)

\(q_0(k)=1\) and \(q_{-1}(k)=0\). Then for a given positive integer n, assuming that f is 2n times continuously differentiable, we have the Gaussian summation formula

$$\begin{aligned} \sum _{k=a}^b f(k)=\sum _{i=1}^n w_i f(x_i)+R_n \end{aligned}$$
(36)

with error

$$\begin{aligned} R_n=\frac{f^{(2n)}(\xi )}{(2n)!}(q_n,q_n) \end{aligned}$$
(37)

where \(x_i\) are the n roots of \(q_n\), \(\xi \in [a,b]\), and the weights \(w_i\) are discussed below. Note that the roots of \(q_n(k)\) are typically not integers, but this does not matter if the function f(k) can be evaluated for arbitrary k.

In practice, to find the roots of \(q_n\) we use the Golub–Welsch algorithm. Form the tridiagonal matrix

$$\begin{aligned} J=\begin{pmatrix} \alpha _0 &{} \quad \sqrt{\beta _1} &{} \quad 0 &{} \quad \dots &{} \quad \dots &{} \quad \dots \\ \sqrt{\beta _1} &{} \quad \alpha _1 &{} \quad \sqrt{\beta _2} &{} \quad 0 &{} \quad \dots &{} \quad \dots \\ 0 &{} \quad \sqrt{\beta _2} &{} \quad \alpha _2 &{} \quad \sqrt{\beta _3} &{} \quad 0 &{} \quad \dots \\ 0 &{} \quad \dots &{} \quad \dots &{} \quad \dots &{} \quad \dots &{} \quad 0 \\ \dots &{} \quad \dots &{} \quad 0 &{} \quad \sqrt{\beta _{n-2}} &{} \quad \alpha _{n-2} &{} \quad \sqrt{\beta _{n-1}} \\ \dots &{} \quad \dots &{} \quad \dots &{} \quad 0 &{} \quad \sqrt{\beta _{n-1}} &{} \quad \alpha _{n-1} \end{pmatrix} \end{aligned}$$
(38)

The eigenvalues of J are the \(\{x_i\}\), and if all eigenvectors \(v_i\) are scaled to have norm 1, then \(w_i=(b-a+1)\left( v_i^{(1)}\right) ^2\), where \(v_i^{(1)}\) is the first component of \(v_i\); the factor \(b-a+1=(q_0,q_0)\) is the total mass of the counting measure on \(\{a,\dots ,b\}\), so that (36) is exact for constant f.
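
The whole construction fits in a few lines. A sketch (our code) which builds the \(\alpha _n\) and \(\beta _n\) from (33)–(35) on the grid \(k=a,\dots ,b\) and then applies the Golub–Welsch step:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def discrete_gauss(a, b, n):
    """Nodes and weights for the discrete 'Gaussian summation' formula (36)."""
    k = np.arange(a, b + 1, dtype=float)
    alpha, beta = np.zeros(n), np.zeros(n)           # beta[0] is never used in J
    q_prev, q = np.zeros_like(k), np.ones_like(k)    # q_{-1} = 0, q_0 = 1 on the grid
    norm_prev = 1.0
    for m in range(n):                               # (for large n, rescale q for stability)
        norm = np.dot(q, q)                          # (q_m, q_m)
        alpha[m] = np.dot(k * q, q) / norm           # Eq. (34)
        if m > 0:
            beta[m] = norm / norm_prev               # Eq. (35)
        q_prev, q = q, (k - alpha[m]) * q - beta[m] * q_prev   # recurrence, Eq. (33)
        norm_prev = norm
    x, v = eigh_tridiagonal(alpha, np.sqrt(beta[1:]))          # Golub-Welsch, Eq. (38)
    w = (b - a + 1) * v[0, :] ** 2                   # mu_0 = (q_0, q_0) = b - a + 1
    return x, w
```

As a check, the weights returned sum to \(b-a+1\), the exact value of \(\sum _{k=a}^b 1\).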

We will use the approximation

$$\begin{aligned} \sum _{k=a}^b f(k)\approx \sum _{i=1}^n w_i f(x_i) \end{aligned}$$
(39)

where \(n\ll b-a+1\), the number of terms in the original sum. Given the resemblance of the sum on the left in (39) to the integral of f(k) between \(k=a\) and \(k=b\), it is not surprising that the roots of \(q_n\), when translated from the interval [a, b] to \([-\,1,1]\), are close to the roots of the nth-order Legendre polynomial, as would be used in Gaussian quadrature. (The same is true for the corresponding weights.)

We thus choose n and write

$$\begin{aligned} Q(k_\mathrm{in}',\hat{\rho })=N\sum _{j=1}^n w_j P(k_\mathrm{in}',k_j,\hat{\rho })k_j \end{aligned}$$
(40)

where \(k_j\) and \(w_j\) are, respectively, the roots and weights associated with \(q_n(k)\). In order to use the same approximation for the sum in (15), we consider only values of \(k_\mathrm{in}\) equal to the \(k_j\). As mentioned, these are typically not integers. We refer to them as “virtual degrees”. Thus, our model equations are

$$\begin{aligned} \frac{\partial b(k_j,t)}{\partial t}&=\frac{-i(b(k_j,t)-1)^2}{2}+\frac{(b(k_j,t)+1)^2}{2} \nonumber \\&\quad \times \Bigg [-\Delta +i\eta _0+\frac{iK k_j}{N\langle k\rangle ^2} \sum _{i=1}^n w_i Q(k_i,\hat{\rho })G(k_i,t) \Bigg ] \end{aligned}$$
(41)

for \(j=1,\dots n\). We are interested in fixed points of these equations, and how these fixed points and their stabilities change as parameters such as \(\eta _0\) and \(\hat{\rho }\) are varied. We use pseudo-arclength continuation [7, 11] to investigate this.
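
For readers who wish to reproduce the stable branches without a continuation code, a sketch (our code) which integrates (41) forward in time to a steady state and then evaluates (42)–(43). It reuses discrete_gauss and joint_P from the earlier sketches; the forward Euler scheme and integration time are our own choices, and unlike pseudo-arclength continuation this finds only stable states:

```python
import numpy as np

def mean_frequency(K, eta0, rho_hat, a=100, b=400, N=2000, Delta=0.05,
                   nq=15, T=2000.0, dt=0.05):
    kj, wj = discrete_gauss(a, b, nq)                # virtual degrees and weights
    mean_k = 2.0 * a * b / (a + b)                   # continuum <k> for these marginals
    Q = N * np.array([np.sum(wj * joint_P(ki, kj, rho_hat, a, b) * kj)
                      for ki in kj])                 # Eq. (40)
    bb = np.zeros(nq, dtype=complex)                 # order parameters b(k_j, t)
    for _ in range(int(T / dt)):                     # forward Euler to a fixed point
        G = 1.0 - 4.0 * bb.real / 3.0 + (bb ** 2).real / 3.0   # Eq. (7), n = 2
        coupling = np.sum(wj * Q * G)
        rhs = (-1j * (bb - 1.0) ** 2 / 2.0 + (bb + 1.0) ** 2 / 2.0
               * (-Delta + 1j * eta0 + 1j * K * kj / (N * mean_k ** 2) * coupling))
        bb += dt * rhs                               # Eq. (41)
    fk = ((1.0 - np.conj(bb)) / (1.0 + np.conj(bb))).real / np.pi   # Eq. (42)
    W = wj[:, None] * wj[None, :] * joint_P(kj[:, None], kj[None, :], rho_hat, a, b)
    return np.sum(W.sum(axis=1) * fk) / W.sum()      # Eq. (43)
```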

Fig. 4

Mean frequency, f, as a function of n, the number of virtual degrees used. (a) \(\hat{\rho }=-\,0.2,K=1,\eta _0=0.5\). (b) \(\hat{\rho }=0.3,K=-\,0.1,\eta _0=-\,0.5\). Other parameters: \(a=100,b=400,\Delta =0.05,N=2000\)

In order to calculate the mean frequency of the network, we use the result that the frequency for neurons with in-degree k is [17]

$$\begin{aligned} f(k)=\frac{1}{\pi }\text{ Re }\left( \frac{1-\bar{b}(k)}{1+\bar{b}(k)}\right) , \end{aligned}$$
(42)

where overline indicates complex conjugate, and then average over the network to obtain the mean frequency

$$\begin{aligned} f&=\frac{\sum _{k_\mathrm{in}}\sum _{k_\mathrm{out}}P(k_\mathrm{in},k_\mathrm{out},\hat{\rho })f(k_\mathrm{in})}{\sum _{k_\mathrm{in}}\sum _{k_\mathrm{out}}P(k_\mathrm{in},k_\mathrm{out},\hat{\rho })} \nonumber \\&=\frac{\sum _{i=1}^n\sum _{j=1}^n w_i w_j P(k_i,k_j,\hat{\rho }) f(k_i)}{\sum _{i=1}^n\sum _{j=1}^n w_i w_j P(k_i,k_j,\hat{\rho })} \end{aligned}$$
(43)

(The normalisation is needed because even though the integral of the joint degree distribution over \([a,b]^2\) equals 1, the sum over the corresponding discrete grid does not.)

Typical convergence of a calculation of f with increasing n is shown in Fig. 4 for several sets of parameter values. We see rapid convergence and choose \(n=15\) for future calculations. (Calculations of the form shown in Figs. 5 and 7 were repeated using the full degree sequence from a to b, with essentially identical results.)

5 Results

5.1 Excitatory coupling

We first consider the case of excitatory coupling, i.e. \(K>0\). We expect a region of bistability for negative \(\eta _0\), as seen in Fig. 5. We see that decreasing \(\rho \) moves the curve to the right and vice versa. (\(\hat{\rho }\) was chosen to give these particular values of \(\rho \).) Following the saddle-node bifurcations as \(\rho \) is varied, we obtain Fig. 6.

Fig. 5

Mean frequency, f, versus \(\eta _0\) for (left to right) \(\rho =0.5,0\) and \(-0.5\). Solid: stable, dashed: unstable. Parameters: \(a=100,b=400,K=1.5,\Delta =0.05\)

Fig. 6

Continuation of the saddle-node bifurcations shown in Fig. 5. The network is bistable in the region between the curves. Parameters as in Fig. 5

Given the influence of \(\hat{\rho }\) (and thus \(\rho \)) on Q (see Fig. 3), this result is easy to understand. Neurons with high in-degree fire faster than those with low in-degree, and for positive \(\rho \), high in-degree neurons contribute more to the sum in (41) than for negative \(\rho \). Thus, the total amount of “output” from neurons is higher for positive \(\rho \) and lower for negative \(\rho \). Put another way, with positive \(\rho \), neurons with high firing rate (due to high in-degree) are more likely to have a high out-degree, thus exciting more neurons than would otherwise be the case. Increasing \(\rho \) has the same qualitative effect as increasing the coupling strength K, as observed by [20].

5.2 Inhibitory coupling

Next, we consider inhibitory coupling, with \(K=-\,1\). Average network frequency versus \(\eta _0\) is shown in Fig. 7 for three different values of \(\rho \). We see that increasing \(\rho \) slightly increases the frequency and vice versa. We can also understand this behaviour in a qualitative sense. For inhibitory coupling, neurons with high in-degree are not likely to be firing, so can be ignored. When \(\rho <0\), neurons with low in-degree will have high out-degree, and thus the amount of inhibitory “output” in the network is increased. For positive \(\rho \), neurons with low in-degree will have low out-degree, and thus they will inhibit fewer neurons than in the case of negative \(\rho \), leading to a higher average firing rate.

We performed calculations corresponding to the results shown in Figs. 5 and 7 for networks of theta neurons and found qualitatively, and to a large extent quantitatively, the same behaviour as in those figures (results not shown).

Fig. 7

Mean frequency, f, versus \(\eta _0\) for \(\rho =-\,0.5,0\) and 0.5; same colour code as in Fig. 5. All branches are stable. Parameters: \(a=100,b=400,K=-\,1,\Delta =0.05\)

6 More realistic network

To verify the behaviour seen above in a network of theta neurons, we investigated a more realistic network of spiking neurons, in this case Morris–Lecar neurons. For the case of excitatory coupling, the network equations are [29]

$$\begin{aligned} C\frac{\hbox {d}V_i}{\hbox {d}t}&= g_L(V_L-V_i)+g_\mathrm{Ca}m_\infty (V_i)(V_\mathrm{Ca}-V_i)+g_Kn_i(V_K-V_i) \nonumber \\&\quad +I_0+I_i+(V_\mathrm{ex}-V_i)\frac{\epsilon }{N}\sum _{j=1}^N A_{ij}s_j \end{aligned}$$
(44)
$$\begin{aligned} \frac{\hbox {d}n_i}{\hbox {d}t}&= \frac{\lambda _0(w_\infty (V_i)-n_i)}{\tau _n(V_i)} \end{aligned}$$
(45)
$$\begin{aligned} \tau \frac{\hbox {d}s_i}{\hbox {d}t}&= m_\infty (V_i)-s_i \end{aligned}$$
(46)

where

$$\begin{aligned} m_\infty (V)&=0.5(1+\tanh {[(V-V_1)/V_2]}) \end{aligned}$$
(47)
$$\begin{aligned} w_\infty (V)&=0.5(1+\tanh {[(V-V_3)/V_4]}) \end{aligned}$$
(48)
$$\begin{aligned} \tau _n(V)&= \frac{1}{\cosh {[(V-V_3)/(2V_4)]}} \end{aligned}$$
(49)

Parameters are \(V_1=-\,1.2,V_2=18,V_3=12,V_4=17.4,\lambda _0=1/15\,\hbox {ms}^{-1},g_L=2, g_K=8, g_\mathrm{Ca}=4, V_L=-\,60, V_\mathrm{Ca}=120, V_K=-\,80, C=20\,\upmu \hbox {F}/\hbox {cm}^2,\tau =100,V_\mathrm{ex}=120,\epsilon =5\,\hbox {mS}/\hbox {cm}^2\). Voltages are in mV, conductances are in mS/cm\(^2\), time is measured in milliseconds, and currents in \(\upmu \hbox {A}/\hbox {cm}^2\). In the absence of coupling and heterogeneity, a neuron undergoes a SNIC bifurcation as \(I_0\) is increased through \(\sim 40\). We have used synaptic coupling of the form in [6], but on a timescale \(\tau \) rather than instantaneous as in that paper. The \(I_i\) are randomly chosen from a Lorentzian distribution with mean zero and half-width at half-maximum 0.05.
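
As an illustration, a minimal sketch (our code, not the authors') of direct Euler simulation of (44)–(49); the initial conditions and the spike-detection threshold \(V=0\) are our own choices, and the adjacency matrix A can be built as described next:

```python
import numpy as np

def simulate_ML(A, I0, eps=5.0, tau=100.0, T=3000.0, dt=0.05, seed=0):
    """Euler integration of the excitatory Morris-Lecar network (44)-(49); T in ms."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    V1, V2, V3, V4 = -1.2, 18.0, 12.0, 17.4
    gL, gK, gCa = 2.0, 8.0, 4.0
    VL, VCa, VK, Vex, C, lam0 = -60.0, 120.0, -80.0, 120.0, 20.0, 1.0 / 15.0
    Ii = 0.05 * np.tan(np.pi * (rng.random(N) - 0.5))    # Lorentzian, mean 0, HWHM 0.05
    m_inf = lambda V: 0.5 * (1.0 + np.tanh((V - V1) / V2))   # Eq. (47)
    w_inf = lambda V: 0.5 * (1.0 + np.tanh((V - V3) / V4))   # Eq. (48)
    tau_n = lambda V: 1.0 / np.cosh((V - V3) / (2.0 * V4))   # Eq. (49)
    V = VL + 5.0 * rng.standard_normal(N)                # arbitrary initial conditions
    nn, s = w_inf(V), np.zeros(N)
    spikes = 0
    for _ in range(int(T / dt)):
        syn = (Vex - V) * (eps / N) * (A @ s)            # synaptic input, Eq. (44)
        dV = (gL * (VL - V) + gCa * m_inf(V) * (VCa - V)
              + gK * nn * (VK - V) + I0 + Ii + syn) / C
        dn = lam0 * (w_inf(V) - nn) / tau_n(V)           # Eq. (45)
        ds = (m_inf(V) - s) / tau                        # Eq. (46)
        Vnew = V + dt * dV
        spikes += np.count_nonzero((V < 0.0) & (Vnew >= 0.0))   # upward crossings of 0 mV
        V, nn, s = Vnew, nn + dt * dn, s + dt * ds
    return 1000.0 * spikes / (N * T)                     # mean rate in spikes/s
```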

The network is created as follows, using the Gaussian copula of Sect. 3. For each \(i\in \{1,\dots N\}\), let \(x_1\) and \(x_2\) be independently chosen from a unit normal distribution. Then \(x_1\) and \(y_1=\hat{\rho } x_1+\sqrt{1-\hat{\rho }^2}x_2\) both have unit normal distributions and covariance \(\hat{\rho }\), i.e. are realisations of x and y in (17). We then set \(k_\mathrm{in}^i=C_k^{-1}(C(x_1))\) and \(k_\mathrm{out}^i=C_k^{-1}(C(y_1))\). These degrees each have distribution p(k) but have correlation coefficient \(\rho \), where \(\rho \) is determined by the value of \(\hat{\rho }\), as shown in Fig. 2. We then create the connection from neuron j to neuron i (i.e. set \(A_{ij}=1\)) with probability

$$\begin{aligned} \frac{k_\mathrm{in}^ik_\mathrm{out}^j}{N\langle k\rangle } \end{aligned}$$
(50)

where \(\langle k\rangle \) is the mean of the degrees, and \(A_{ij}=0\) otherwise (the Chung–Lu model [2]). Typical results for the network generation are shown in Fig. 8, and the measured correlations are given in the figure. The distributions of the resulting degrees no longer match the distributions of the \(k_\mathrm{in}^i\) and \(k_\mathrm{out}^i\), but are close. We could have used the configuration model to avoid this problem [19], but here we are only interested in qualitative results. Quasi-statically sweeping through \(I_0\) for networks with three different values of \(\rho \), we obtain Fig. 9, in qualitative agreement with Fig. 5. In Fig. 5, there is a region of bistability for each value of \(\rho \), and the region moves to lower average drive as \(\rho \) is increased. Since we cannot detect unstable states through simulation of (44)–(46), this bistability is manifested as jumps from low-frequency to high-frequency branches as \(I_0\) is varied, as seen in Fig. 9.
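
A sketch (our code) of this generation procedure: correlated Gaussian scores are pushed through the copula to obtain degrees, which are then used in the Chung–Lu wiring probability (50). Excluding self-connections is our choice:

```python
import numpy as np
from scipy.special import erf

def make_network(N=2000, a=100, b=400, rho_hat=0.9, seed=0):
    rng = np.random.default_rng(seed)
    x1 = rng.standard_normal(N)
    y1 = rho_hat * x1 + np.sqrt(1.0 - rho_hat ** 2) * rng.standard_normal(N)
    Cdf = lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0)))                 # Eq. (20)
    Ck_inv = lambda u: a * b / np.sqrt(b ** 2 - (b ** 2 - a ** 2) * u)  # inverse of Eq. (22)
    kin, kout = Ck_inv(Cdf(x1)), Ck_inv(Cdf(y1))         # correlated target degrees
    mean_k = 0.5 * (kin.mean() + kout.mean())
    prob = np.minimum(np.outer(kin, kout) / (N * mean_k), 1.0)   # Eq. (50): P(A[i, j] = 1)
    A = (rng.random((N, N)) < prob).astype(int)          # A[i, j] = 1 means j -> i
    np.fill_diagonal(A, 0)                               # no self-connections (our choice)
    return A, kin, kout
```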

Fig. 8

Degrees for a network whose generation is described in Sect. 6 for \(\hat{\rho }=0.9\) (left) and \(\hat{\rho }=-\,0.9\) (right). Parameters: \(N=2000,a=100,b=400\)

Fig. 9

Mean frequency versus \(I_0\) for a network of Morris–Lecar neurons. \(N=2000\). Blue crosses: \(\rho =-\,0.57\); black diamonds: \(\rho =0\); red circles: \(\rho =0.85\). \(I_0\) is quasi-statically increased and then decreased in all cases (colour figure online)

For inhibitory coupling, we replace \(m_\infty (V_i)\) in (46) by \(w_\infty (V_i)\), replace \(V_\mathrm{ex}-V_i\) in (44) by \(V_K-V_i\), and choose \(\epsilon =10\,\hbox {mS}/\hbox {cm}^2\). Sweeping through \(I_0\) for three different values of \(\rho \), we obtain Fig. 10, in qualitative agreement with Fig. 7.

Fig. 10

Mean frequency versus \(I_0\) for networks of Morris–Lecar neurons with inhibitory coupling. \(N=2000\)

7 Motifs

A number of authors have found that “motifs” (small sets of neurons connected in a specific way) do not occur in cortical networks in the proportions one would expect by chance [24, 28]. Some theoretical results relating the presence or absence of certain motifs to network dynamics have been obtained [8, 21, 33]. For networks whose generation is described in Sect. 6, we counted the number of order-2 and order-3 motifs (involving two or three neurons, respectively) for negative, zero and positive values of \(\rho \). We compute the frequencies of order-2 motifs by counting the numbers of 0s, 1s and 2s in the upper triangular part of \(A+A^T\), where A is the adjacency matrix and \(A^T\) its transpose; these correspond to unconnected, unidirectionally connected and reciprocally connected pairs of neurons, respectively (see the sketch below). For all 13 connected order-3 motifs, we used the software “acc-motif” [16]. The remaining three, unconnected, motifs were counted by our own algorithm: looping over all neurons, we create for each a list of the neurons it is not connected to, and count order-2 motifs among those. The results are shown in Figs. 11 and 12, where counts are shown relative to the numbers found for \(\rho =0\).
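
A sketch (our code) of the order-2 count just described; the order-3 counts used “acc-motif” [16] as stated:

```python
import numpy as np

def order2_motifs(A):
    """Counts of pair motifs from the strictly upper triangle of A + A^T."""
    vals = (A + A.T)[np.triu_indices(A.shape[0], k=1)]
    return {"unconnected": int(np.sum(vals == 0)),      # no edge in either direction
            "unidirectional": int(np.sum(vals == 1)),   # exactly one direction
            "reciprocal": int(np.sum(vals == 2))}       # both directions
```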

Fig. 11

Relative counts of order-2 motifs. We generate three networks at a time, with \(\hat{\rho }\in \{-\,0.9, 0, 0.9\}\), to compute motif frequencies and repeat this process 100 times. Error bars indicate the standard deviation. Parameters are chosen as in Fig. 8

In all motifs with at least one reciprocal connection between two neurons, we see that the number of motifs goes up with positive \(\rho \) and down with negative \(\rho \). This can be understood in an intuitive way: suppose \(0<\rho \) and consider a neuron with a high out-degree. It is likely to connect to a neuron with a high in-degree. But this second neuron will also have a high out-degree and is therefore more likely to connect to the first neuron, which also has a high in-degree, forming a reciprocal connection. Similarly, suppose \(\rho <0\) and consider a neuron with high out-degree. It is likely to connect to a neuron with high in-degree but low out-degree. Thus, it is unlikely that this second neuron will connect back to the first, which has a low in-degree.

Fig. 12

Relative counts of order-3 motifs

8 Conclusion

We have investigated the effects of correlating the in- and out-degrees of spiking neurons in a structured network. We considered a large network of theta neurons, allowing us to exploit the analytical results previously derived in [1], which give the dynamics of complex-valued order parameters, one for each set of neurons with the same degrees. The states of interest are steady states of these dynamics, and by using a Gaussian copula we were able to analytically incorporate a parameter which controls the correlations between in- and out-degrees. Numerical continuation was then used to determine the effects of varying parameters, particularly the degree correlation. In order to reduce the computational cost, we introduced the concept of “virtual degrees”, allowing us to efficiently approximate sums with many terms by sums with fewer terms.

For an excitatory network, we found that increasing degree correlations had a similar effect as increasing the overall strength of coupling between neurons, consistent with the findings of [20, 32]. Our results are also consistent with those of [30], who found that negative correlations stabilised the low firing rate state, as shown in Fig. 5. For inhibitory coupling, we found that increasing degree correlations slightly increased the mean firing rate of the network. Both of these effects were reproduced in a more realistic network of Morris–Lecar spiking neurons.

We also measured the relative frequency of occurrence of order-2 and order-3 motifs as within-degree correlations were varied and found that in all motifs with at least one reciprocal connection between two neurons, the number of motifs is positively correlated with \(\rho \). Several authors have linked motif statistics to synchrony within a network [8, 33]; however, a link between motif statistics and firing rate, as observed here, seems yet to be developed.

We chose a Lorentzian distribution of the \(\eta _i\) in (1), as many others have done [22], in order to analytically evaluate an integral and derive (6). However, we repeated the calculations shown in Figs. 5, 7, 9 and 10 using a Gaussian distribution of the \(\eta _i\) and found the same qualitative behaviour (not shown). Regarding the parameter n governing the sharpness of the function \(P_n(\theta )\), we repeated the calculations shown in Figs. 5 and 7 for \(n=5,\infty \) and obtained qualitatively the same results (not shown). We used a Gaussian copula to correlate in- and out-degrees due to its analytical form, but numerically investigated the scenarios shown in Figs. 5 and 7 for t copulas and for the Archimedean Clayton, Frank and Gumbel copulas, and found the same qualitative behaviour (also not shown).

For simplicity, we used the same truncated power law distribution for both in- and out-degrees. However, the use of a Gaussian copula for inducing correlations between degrees does not require them to be the same, so one could use the framework presented here to investigate the effects of varying degree distributions [26], correlated or not.

We also only considered either excitatory or inhibitory networks, but it would be straightforward to generalise the techniques used here to the case of both types of neuron, with within-neuron degree correlations for either or both populations, though at the expense of increasing the number of parameters to investigate.