1 Introduction

One of the most fundamental questions in neuroscience is neural coding, which is key to understanding the brain. A neural code is a system of rules and mechanisms by which a signal carries information [1]. To our knowledge, two types of neural coding are mainly considered: rate coding [2,3,4,5] and temporal coding [6,7,8,9]. Rate coding represents neuronal information by the mean firing rate, while temporal coding (synchrony coding) represents it by precise spike times. Though the two coding strategies are often treated as alternatives [9], some works try to reconcile the two mechanisms, for example by showing that mean firing rate propagation can be based on synchrony [10], or that both types of encoded information coexist and propagate in the same neuronal networks [11,12,13,14,15]. Up to now, how neuronal information is coded in the brain cortex is still under debate.

According to classical rate coding, neurons encode information by representing it in the number of spikes per time window (the firing rate) [16, 17]. Neurons, however, need a short integration time to estimate the signal [18]. If every synaptic stage were required to average over time to calculate the firing rate, rate coding would be too slow for the brain [9]. Such slow rate coding is therefore highly unlikely to underlie information coding in the brain, since the brain is a highly efficient structure and its mechanism of information coding ought to be fast. How can the efficiency problem of classical rate coding be solved? Population rate coding was introduced, which calculates the population firing rate instead of the mean firing rate of a single neuron, improving the efficiency of rate coding. Population rate coding can process information faster and more accurately because averaging is performed across many fast-responding neurons [19,20,21,22]. It is a mechanism based on the experimental observation that the firing rates of neurons are related to the intensity of external stimuli [21, 23]. Recently, population rate coding has been widely investigated in many works [12, 24,25,26,27,28].

In theoretical studies, for simplicity, neurons are classified into excitatory and inhibitory neurons. The activity patterns and spiking dynamics of excitatory–inhibitory (EI) neuronal networks have been investigated in many works [29,30,31,32,33]. Moreover, signal coding in recurrent EI networks has also attracted much attention [13, 14, 26,27,28, 31, 34,35,36,37,38,39,40]. In most of these works, neurons are classified into excitatory and inhibitory neurons, and whether a synaptic connection is excitatory or inhibitory is determined mainly by the type of presynaptic neuron. In those models, each neuron has only one type of output synapse. Here, we call this kind of neuronal network model a single excitatory–inhibitory network (SEIN). However, recent physiological evidence points to another case. Root et al. [41, 42] report that a large fraction of rodent ventral tegmental area (VTA) neurons projecting to the lateral habenula (LHb) co-release glutamate (a main excitatory neurotransmitter) and GABA (a main inhibitory fast neurotransmitter) from single axon terminals. Other studies also suggest the co-release of both excitatory and inhibitory neurotransmitters or the co-expression of both receptor types [43,44,45,46,47,48,49]. These findings demonstrate that neurons can potentially have both types of output synapses. Inspired by these experimental studies, we therefore construct a novel neuronal network in which neurons have mixed excitatory–inhibitory output synapses; we call this model a mixed excitatory–inhibitory network (MEIN). In the following, we study in detail how the population firing rate represents signals in such a recurrent neuronal network.

In this paper, we construct a neuronal network of neurons with mixed excitatory–inhibitory synapses to study population rate coding. The contents are arranged as follows. In the ‘Model and Method’ section, we introduce the computational models of neurons, synapses, and networks, as well as the measures of neuronal information coding. In the ‘Results’ section, the effects of some critical parameters on population rate coding are illustrated and discussed, and we compare the performance of population rate coding in the traditional model (SEIN) with that in the novel model (MEIN) proposed here. In the ‘Conclusion and Discussion’ section, we summarize this work and discuss future directions.

2 Model and method

According to recent physiological evidence [41, 42], neurons potentially have both types of output synapses.

Therefore, in our model, a single neuron has different types of output synapses: some of them are excitatory, while others are inhibitory. We set only one type of synapse between any two coupled neurons. Note that excitatory connections can grow from a neuron that also grows inhibitory connections, which is why we refer to such neurons as neurons with mixed excitatory–inhibitory synapses. We construct a recurrent network of N neurons coupled through excitatory and inhibitory synaptic connections, as shown in Fig. 1. Neurons are connected with recurrent probability \(P_{\mathrm{rc}}\). We keep the excitatory-to-inhibitory connection ratio in the whole network at 4:1; therefore, among those connections, \(80\%\) are modeled as excitatory synaptic connections, while \(20\%\) are inhibitory. Namely, the excitatory probability is \(P_{\mathrm{rc}}^E = 0.8\) and the inhibitory probability is \(P_{\mathrm{rc}}^I = 0.2\). Generally, \(N=100\) and \(P_{\mathrm{rc}}=0.12\) if not specified otherwise.

Fig. 1
figure 1

Illustration of the recurrent network. Light blue solid circles denote neurons, and the total number of neurons in the considered network is N. Red lines with an arrow denote excitatory synaptic connections, while blue lines with a solid circle denote inhibitory synaptic connections. The arrowheads and circles point toward the postsynaptic neurons. The ratio between the numbers of excitatory and inhibitory synaptic connections in the network is about 4:1. (Color figure online)
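To make the wiring rule concrete, the following minimal sketch (Python/NumPy; all function and variable names are ours, not taken from the authors' code) draws a directed connectivity matrix under the stated assumptions: each ordered pair of neurons is connected with probability \(P_{\mathrm{rc}}\), an existing connection is excitatory with probability 0.8 and inhibitory otherwise, and self-connections are excluded (an assumption, as the text does not state this explicitly).

```python
import numpy as np

def build_mein_connectivity(N=100, p_rc=0.12, p_exc=0.8, seed=0):
    """Sketch of the MEIN wiring: each directed pair j -> i (j != i) is connected
    with probability p_rc; an existing connection is excitatory with probability
    p_exc and inhibitory otherwise, so one presynaptic neuron can carry both
    excitatory and inhibitory output synapses."""
    rng = np.random.default_rng(seed)
    connected = rng.random((N, N)) < p_rc
    np.fill_diagonal(connected, False)           # no self-connections (assumption)
    is_exc = rng.random((N, N)) < p_exc
    # adjacency[i, j] = +1 (excitatory), -1 (inhibitory), 0 (no synapse j -> i)
    return np.where(connected, np.where(is_exc, 1, -1), 0)

adj = build_mein_connectivity()
n_exc, n_inh = np.sum(adj == 1), np.sum(adj == -1)
print(f"excitatory: {n_exc}, inhibitory: {n_inh}, ratio ~ {n_exc / n_inh:.2f}")
```

With the parameters above, the realized excitatory-to-inhibitory connection ratio fluctuates around the nominal 4:1.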

The neuronal model used in this paper is the Hodgkin–Huxley model [50] whose equations are shown as follows.

$$\begin{aligned} \left\{ \begin{array}{l} C\frac{\displaystyle \mathrm {d}V_{i}}{\displaystyle \mathrm {d}t} =I^{\mathrm{ext}}+ I_i^{\mathrm{ion}} + I_{i}^{\mathrm{syn}}+D\xi , \\ \frac{\displaystyle \mathrm {d}m_{i}}{\displaystyle \mathrm {d}t}=a_m(V_i)(1-m_i)-b_m(V_i)m_i, \\ \frac{\displaystyle \mathrm {d}h_{i}}{\displaystyle \mathrm {d}t}=a_h(V_i)(1-h_i)-b_h(V_i)h_i, \\ \frac{\displaystyle \mathrm {d}n_{i}}{\displaystyle \mathrm {d}t}=a_n(V_i)(1-n_i)-b_n(V_i)n_i, \end{array} \right. \end{aligned}$$
(1)

with

$$\begin{aligned} I_i^{\mathrm{ion}} = -g_{Na}m_{i}^{3}h_{i}(V_{i}-V_{Na}) - g_{K}n_{i}^{4}(V_{i}-V_{K}) - g_{l}(V_{i}-V_{l}), \end{aligned}$$
(2)

where \(V_{i}\) denotes the membrane potential of neuron i \((i=1,\ldots ,N)\). \(C = 1\, \mathrm{\mu F/cm^2}\) is the membrane capacitance. \(I^{\mathrm{ext}}\) denotes the external stimulus injected into all neurons in the network. \(I_i^{\mathrm{ion}}\) denotes the ionic current, where m and h are the activation and inactivation gating variables of the sodium channel and n is the activation gating variable of the potassium channel. The related parameters are listed in Table 1. \(D\xi \) is the background noise, with D denoting the noise intensity (unit: pA) and \(\xi \) being white Gaussian noise with zero mean and unit standard deviation. The functions \(a_x(V)\) and \(b_x(V)\) with \(x = m,n,h\) are given by \(a_m=0.1(V+40)/(1-\exp [(-V-40)/10])\), \(b_m=4\exp [(-V-65)/18]\), \(a_h=0.07\exp [(-V-65)/20]\), \(b_h=1/(1+\exp [(-V-35)/10])\), \(a_n=0.01(V+55)/(1-\exp [(-V-55)/10])\), \(b_n=0.125\exp [(-V-65)/80]\), following Hansel et al. [51]. All numerical simulations are performed with the Euler–Maruyama method at a temporal resolution fixed at 0.05 ms. The system starts from \(t=0\) ms with initial conditions \(V(0)=-61\) mV, \(m(0)=0.08\), \(h(0)=0.46\), \(n(0)=0.37\).
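A minimal single-neuron integration sketch in Python follows, assuming the standard Hodgkin–Huxley conductances and reversal potentials (\(g_{Na}=120\), \(g_K=36\), \(g_l=0.3\,\mathrm{mS/cm^2}\), \(V_{Na}=50\), \(V_K=-77\), \(V_l=-54.4\,\mathrm{mV}\)); Table 1 of the paper is the authoritative source for these values, and the scaling of the noise increment by \(\sqrt{\Delta t}\) reflects one common Euler–Maruyama convention rather than the authors' exact implementation.

```python
import numpy as np

# Assumed standard H-H parameters; Table 1 of the paper is authoritative
C = 1.0                               # membrane capacitance, uF/cm^2
g_Na, g_K, g_l = 120.0, 36.0, 0.3     # maximal conductances, mS/cm^2
V_Na, V_K, V_l = 50.0, -77.0, -54.4   # reversal potentials, mV
dt = 0.05                             # temporal resolution, ms

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

def hh_step(V, m, h, n, I_ext, I_syn, D, rng):
    """One Euler-Maruyama step of Eqs. (1)-(2) for a single neuron."""
    I_ion = (-g_Na * m**3 * h * (V - V_Na)
             - g_K * n**4 * (V - V_K)
             - g_l * (V - V_l))
    noise = D * np.sqrt(dt) * rng.standard_normal()   # white Gaussian background noise
    V_new = V + (dt * (I_ext + I_ion + I_syn) + noise) / C
    m_new = m + dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h_new = h + dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n_new = n + dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    return V_new, m_new, h_new, n_new

# initial conditions from the text, driven here by a constant test current
rng = np.random.default_rng(1)
V, m, h, n = -61.0, 0.08, 0.46, 0.37
for _ in range(int(100.0 / dt)):       # 100 ms
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0, I_syn=0.0, D=1.0, rng=rng)
```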

To examine the accuracy of signal representation, we use a time-varying input with Gaussian statistics, although stimuli in neural systems are often modeled as Poisson processes. Such external stimuli \(I^{\mathrm{ext}}\), used widely in many papers [25,26,27,28], are constructed from low-pass filtered and half-wave rectified Gaussian white noise. The external stimulus can be described by

$$\begin{aligned} I^{\mathrm{ext}}(t)=\left\{ \begin{array}{lcl} K \eta (t), &{} &{} \mathrm{if\; \eta \ge 0}\\ 0, &{} &{} \mathrm{if\; \eta <0}\\ \end{array}, \right. \end{aligned}$$
(3)

where \(\eta (t)\) is an Ornstein–Uhlenbeck process

$$\begin{aligned} \tau _c\frac{\mathrm{d}\eta (t)}{\mathrm{d}t}=-\eta (t)+\sqrt{2A}\xi (t), \end{aligned}$$
(4)

where \(\xi (t)\) is Gaussian white noise and \(K=15\) denotes the modulation strength. The correlation time \(\tau _c\) is set to 80 ms, and the external input current intensity is set to \(A = 200\).
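The stimulus of Eqs. (3)–(4) can be generated with the short sketch below (Python/NumPy; the function name and the Euler–Maruyama discretization of the Ornstein–Uhlenbeck process are our illustrative choices, and the exact noise normalization used by the authors may differ).

```python
import numpy as np

def generate_stimulus(T_ms, dt=0.05, tau_c=80.0, A=200.0, K=15.0, seed=2):
    """Eqs. (3)-(4): an Ornstein-Uhlenbeck process eta(t), half-wave rectified
    and scaled by the modulation strength K."""
    rng = np.random.default_rng(seed)
    n_steps = int(T_ms / dt)
    eta = np.zeros(n_steps)
    for t in range(1, n_steps):
        # Euler-Maruyama update of tau_c * d(eta)/dt = -eta + sqrt(2A) * xi(t)
        eta[t] = (eta[t - 1]
                  - dt * eta[t - 1] / tau_c
                  + np.sqrt(2.0 * A) * np.sqrt(dt) * rng.standard_normal() / tau_c)
    return K * np.clip(eta, 0.0, None)   # half-wave rectification: negative eta -> 0

I_ext = generate_stimulus(T_ms=1000.0)   # 1 s of stimulus at 0.05 ms resolution
```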

Table 1 Parameters of H–H neurons

\(I_{i}^{\mathrm{syn}}\) is the synaptic current of each neuron from the coupled presynaptic neurons and can be described by

$$\begin{aligned} I_{i}^{\mathrm{syn}}=\sum _{k=1}^{N_{\mathrm{pre}}}G_k^x(t)(V_i(t)-V_{\mathrm{syn}}), \quad (i=1,2,\ldots ,N) \end{aligned}$$
(5)

and

$$\begin{aligned} G_k^x(t)=\left\{ \begin{array}{lcl} g^x\alpha ^x(t-t_{\mathrm{spk}}), &{} &{} {t-t_{\mathrm{spk}}>0}\\ 0, &{} &{} \mathrm {else}\\ \end{array}, \right. \end{aligned}$$
(6)

with \(\alpha ^x(t)=(t/\tau ^x)e^{(-t/\tau ^x)}\). \(N_{\mathrm{pre}}\) denotes the number of presynaptic neurons coupled to neuron i. \(V_{\mathrm{syn}}\) denotes the reversal potential of the synapse, with \(V_{\mathrm{syn}} = 0\) for excitatory connections and \(V_{\mathrm{syn}} = -75\, \mathrm{mV}\) for inhibitory connections. The function \(G_k^x(t)\) denotes the synaptic conductance from the kth presynaptic neuron, based on alpha-function time courses. The superscript x denotes the type of synapse, either excitatory (\(x=\mathrm{exc}\)) or inhibitory (\(x=\mathrm{inh}\)). \(t_{\mathrm{spk}}\) denotes the spike times of the presynaptic neurons. \(\tau ^x\) is the synaptic time constant, with \(\tau ^{\mathrm{exc}}=0.3\, \mathrm{ms}\) for excitatory connections and \(\tau ^{\mathrm{inh}}=0.6\, \mathrm{ms}\) for inhibitory connections. If not specified below, the excitatory synaptic strength is set to \(g^{\mathrm{exc}}=0.102\). The inhibitory synaptic strength \(g^{\mathrm{inh}}\) is derived from the excitatory one via

$$\begin{aligned} R = \frac{g^{\mathrm{inh}}}{g^{\mathrm{exc}}}, \end{aligned}$$
(7)

in which R denotes the inhibitory-to-excitatory synaptic strength ratio (I/E ratio). Note that \(g^{\mathrm{inh}}\) and \(g^{\mathrm{exc}}\) are two bifurcation parameters of the model that are summarized by the parameter R.
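The following sketch (Python/NumPy; names are ours) evaluates Eqs. (5)–(7) for a single postsynaptic neuron, keeping the sign convention exactly as printed in Eq. (5).

```python
import numpy as np

g_exc, tau_exc, tau_inh = 0.102, 0.3, 0.6   # synaptic strengths / time constants (ms)
R = 25.0                                    # example I/E ratio (Eq. 7)
g_inh = R * g_exc

def alpha_conductance(t_since_spike, g_x, tau_x):
    """Eq. (6): g^x * (t/tau^x) * exp(-t/tau^x) for t > 0 after a spike, else 0."""
    t = np.asarray(t_since_spike, dtype=float)
    return np.where(t > 0.0, g_x * (t / tau_x) * np.exp(-t / tau_x), 0.0)

def synaptic_current(V_post, t_since_spikes, syn_types):
    """Eq. (5): sum over presynaptic spikes of conductance times (V_i - V_syn),
    with the sign convention taken literally from the printed equation."""
    V_syn = {'exc': 0.0, 'inh': -75.0}      # reversal potentials, mV
    g     = {'exc': g_exc, 'inh': g_inh}
    tau   = {'exc': tau_exc, 'inh': tau_inh}
    I = 0.0
    for t_s, x in zip(t_since_spikes, syn_types):
        I += alpha_conductance(t_s, g[x], tau[x]) * (V_post - V_syn[x])
    return float(I)

# example: one excitatory spike 1 ms ago and one inhibitory spike 2 ms ago
print(synaptic_current(V_post=-61.0, t_since_spikes=[1.0, 2.0], syn_types=['exc', 'inh']))
```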

In order to study population rate coding, some parameters and measures are introduced as follows. The population firing rate is used to represent the signal, which is calculated as

$$\begin{aligned} p(t)=\frac{n(\varDelta t)}{N \varDelta t}, \end{aligned}$$
(8)

where \(\varDelta t\) is a given short time interval centered at time t (here \(\varDelta t\) is taken as 10 \(\mathrm{ms}\)), and \(n(\varDelta t)\) is the total number of spikes fired by the neuronal population during this time interval.
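A direct implementation of Eq. (8) could look like the following sketch (Python/NumPy; names and the synthetic spike train are illustrative only).

```python
import numpy as np

def population_rate(spike_times, N, t_grid, window=10.0):
    """Eq. (8): p(t) = n(Delta t) / (N * Delta t), counting all population spikes
    in a window of width `window` (ms) centered at each time in t_grid."""
    spike_times = np.asarray(spike_times, dtype=float)
    p = np.empty(len(t_grid))
    for k, t in enumerate(t_grid):
        n_spikes = np.sum(np.abs(spike_times - t) <= window / 2.0)
        p[k] = n_spikes / (N * window)        # spikes per ms per neuron
    return p

# synthetic pooled spike times (ms) stand in for the network simulation output
rng = np.random.default_rng(3)
spike_times = np.sort(rng.uniform(0.0, 1000.0, 500))
t_grid = np.arange(0.0, 1000.0, 1.0)          # evaluate every 1 ms, as used in Eq. (9)
p = population_rate(spike_times, N=100, t_grid=t_grid)
```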

The correlation coefficient \(C(\tau )\) between the input s(t) and the population rate p(t) of the network is introduced to quantify how well the input is encoded by the network. \(C(\tau )\) is calculated as [26]

$$\begin{aligned} C(\tau )=\frac{\left\langle [s(t)-{\bar{s}}][p(t+\tau )-{\bar{p}}]\right\rangle _{t}}{\sqrt{\left\langle [s(t)-{\bar{s}}]^{2} \right\rangle _{t}\left\langle [p(t+\tau )-{\bar{p}}]^{2} \right\rangle _{t}}}, \end{aligned}$$
(9)

where p(t) is the population firing rate in a 10 \(\mathrm ms\) time window sliding with a step of 1 \(\mathrm ms\). Here, we term the maximum of the correlation coefficients as encoding fidelity,

$$\begin{aligned} Q=\max _{\tau }\{C(\tau )\}, \end{aligned}$$
(10)

through which we can assess how much of the input information is captured by the population firing rate.
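Eqs. (9)–(10) can be computed as in the sketch below (Python/NumPy; the maximum lag of 100 ms and the synthetic test signals are our illustrative assumptions).

```python
import numpy as np

def encoding_fidelity(s, p, max_lag_ms=100, dt_ms=1.0):
    """Eqs. (9)-(10): lagged Pearson correlation C(tau) between input s(t) and
    population rate p(t+tau), both sampled every dt_ms; Q is the maximum over tau."""
    s = np.asarray(s, dtype=float)
    p = np.asarray(p, dtype=float)
    max_lag = int(max_lag_ms / dt_ms)
    C = []
    for lag in range(max_lag + 1):
        s_seg = s[: len(s) - lag] if lag > 0 else s
        p_seg = p[lag:] if lag > 0 else p
        C.append(np.corrcoef(s_seg, p_seg)[0, 1])
    return max(C)

# toy check: a rate signal lagging the input by 5 ms should give Q close to 1
rng = np.random.default_rng(4)
s = rng.standard_normal(1000)
p = np.roll(s, 5) + 0.1 * rng.standard_normal(1000)
print(encoding_fidelity(s, p))
```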

3 Results

In this section, simulated results and relevant analyses are presented. We mainly focus on how stochastic stimuli can be represented by the population firing rate in the studied neuronal network and on how to improve the encoding quality. The key parameters we investigate include the I/E strength ratio R, the recurrent probability \(P_{\mathrm{rc}}\), the noise intensity D, the excitatory strength \(g^{\mathrm{exc}}\), and the synaptic time constants \(\tau ^{\mathrm{exc}},\tau ^{\mathrm{inh}}\). Finally, we compare the encoding quality of our proposed neuronal network (mixed excitatory–inhibitory network, MEIN) with that of the neuronal networks in previous works, which consist of neurons with one type of output synapse (single excitatory–inhibitory network, SEIN).

Fig. 2
figure 2

a Dependence of coding fidelity Q on the I/E synaptic strength ratio R. \(P_{\mathrm{rc}}=0.1\), \(D=1\), \(g^{\mathrm{exc}}=0.102\), \(\tau ^{\mathrm{exc}}=0.3, \tau ^{\mathrm{inh}}=0.6\). Population firing rate encoding of the same stimulus injected into the recurrent network is presented for three typical values of R: b \(R = 5\), c \(R = 25\), and d \(R = 60\). From top to bottom: external input, spike raster of the studied neuronal network, and the corresponding population firing rate

3.1 I/E strength ratio and recurrent probability

The network state has a direct effect on information encoding, and how it affects the encoding fidelity of the population rate is an important problem. In the studied network, the ratio between inhibitory and excitatory synaptic strengths directly determines the network state: if the ratio is small, excitation dominates; otherwise, inhibition dominates. Meanwhile, the recurrent probability \(P_{\mathrm{rc}}\) determines how densely neurons in the network connect with each other, which directly influences the total number of excitatory and inhibitory connections. The I/E strength ratio R and the recurrent probability \(P_{\mathrm{rc}}\) therefore have great influence on the network's state. In this subsection, we first discuss the effects of R and \(P_{\mathrm{rc}}\) on the encoding quality Q.

The dependence of Q on R under fixed parameters is shown in Fig. 2a. From this figure, we see that there exists an optimal range of R (\(15<R<40\)) for better encoding performance, and for some values of R the encoding quality Q reaches values as large as \(92.03\%\). This indicates that the studied neuronal network can encode up to \(92.03\%\) of the external stimulus in the form of the population firing rate. To gain an intuitive picture of the dependence of Q on R, three typical results are presented in Fig. 2b–d, where R takes the values 5, 25, and 60, respectively. In these three panels, the stochastic external input, the network spiking raster, and the corresponding population firing rate are shown from top to bottom. Consistent with the results in Fig. 2a, Q takes a larger value for \(R=25\) than for \(R=5\) and \(R=60\). Comparing the panels in Fig. 2b–d, the population firing rate for \(R=25\) is much more similar to the stochastic external input. Thus, the case \(R=25\) performs better at representing the input with the population firing rate, preserving the input more completely, whereas in the other two cases the signal is weakened or amplified.

Fig. 3
figure 3

Dependence of encoding fidelity Q on R and \(P_{\mathrm{rc}}\). Dark blue color represents high Q, while light orange color represents small Q. In the parameter space of R and \(P_{\mathrm{rc}}\), it seems that Q takes higher values below the black dashed curve (noted as blue area), while it takes smaller values above the dashed curve (noted as orange area). In the blue area, especially around \(P_{\mathrm{rc}}=0.1\), the quality Q has a clear tendency to increase firstly and then decrease, similar to the curve shown in Fig. 2a. The encoding quality has a better performance when the recurrent probability is set around 0.1. \(D=1\), \(g^{\mathrm{exc}}=0.102\), \(\tau ^{\mathrm{exc}}=0.3, \tau ^{\mathrm{inh}}=0.6\) (averaged in 10 trials). (Color figure online)

So far, we have observed that the I/E strength ratio has an important effect on the encoding fidelity of the neuronal system when \(P_{\mathrm{rc}}\) is fixed at 0.1. As stated above, \(P_{\mathrm{rc}}\) also influences the network state. Can the results obtained above also be observed for other values of \(P_{\mathrm{rc}}\)? What happens if \(P_{\mathrm{rc}}\) is altered? To answer these questions, we test the dependence of Q on the parameters \(P_{\mathrm{rc}}\) and R. Results are shown in Fig. 3. In the parameter space of R and \(P_{\mathrm{rc}}\), Q takes higher values below the black dashed curve (denoted the blue area), while it takes smaller values above the dashed curve (denoted the orange area). Interestingly, for most \(P_{\mathrm{rc}}\) below the black dashed curve, e.g., \(0.06<P_{\mathrm{rc}}<0.3\), there exists an optimal range of R in which Q takes higher values, while Q takes smaller values if R falls outside this range. Meanwhile, the optimal range of R widens as \(P_{\mathrm{rc}}\) decreases toward approximately 0.06.

From the statistical characteristics shown in Fig. 4a, however, the mean values of Q present a decaying trend as R increases, and the large error bars are mostly due to the lower encoding quality in the orange area. Therefore, we focus on the blue area with high encoding performance. The green curve in Fig. 4b shows that Q presents a decaying trend with increasing \(P_{\mathrm{rc}}\) and takes optimal values around \(P_{\mathrm{rc}}\approx 0.1\). With these two statistical results and the corresponding analysis, we now turn to Fig. 3 again. Most of the optimal ranges of R are located in the blue area, though the range becomes narrower as \(P_{\mathrm{rc}}\) increases. In the blue area, especially around \(P_{\mathrm{rc}}=0.1\), the quality Q has a clear tendency to first increase and then decrease, similar to the curve shown in Fig. 2a. Thus, the encoding quality performs better when the recurrent probability is set around 0.1.

To determine the relationship of Q versus R around \(P_{\mathrm{rc}}\approx 0.1\), we show some curves selected from the results around \(P_{\mathrm{rc}}\approx 0.1\) in Fig. 4c. As shown in this figure, the other four curves have a variation tendency of Q with respect to R similar to that of the curve with \(P_{\mathrm{rc}}=0.1\) (green line with upper triangles). In detail, however, the curve tends to saturate as R increases if \(P_{\mathrm{rc}}>0.12\), whereas it decays rapidly as R increases if \(P_{\mathrm{rc}}<0.12\). In other words, the optimal range tends to be wider for \(P_{\mathrm{rc}}>0.12\) and narrower for \(P_{\mathrm{rc}}<0.12\). Thus, in the following simulations we choose \(P_{\mathrm{rc}}=0.12\), for which there is neither obvious saturation nor sharp decay but a suitable optimal range of R.

Fig. 4
figure 4

Statistical results and selected samples from the data on the effects of \(P_{\mathrm{rc}}\) and R on population rate coding. a The purple line denotes the mean values of Q versus R averaged over different \(P_{\mathrm{rc}}\); the purple bars denote the corresponding SD. b The green line denotes the mean values of Q versus \(P_{\mathrm{rc}}\) averaged over different R; the green bars denote the corresponding SD. c Effects of R on population rate coding for selected \(P_{\mathrm{rc}}\). Red circles: \(P_{\mathrm{rc}}=0.08\), blue squares: \(P_{\mathrm{rc}}=0.10\), green upper triangles: \(P_{\mathrm{rc}}=0.12\), purple lower triangles: \(P_{\mathrm{rc}}=0.14\), orange diamonds: \(P_{\mathrm{rc}}=0.16\). The inset shows the overlapping curves located in the gray area where R takes values in \(12<R<30\). The encoding quality performs worse for low and high values of R and better for moderate values of R. (Color figure online)

3.2 Effects of considered parameters on population rate coding

3.2.1 Noise intensity

Neurons in the biological environment constantly receive background noise. In this work, we use Gaussian noise to mimic the background noise. Previous studies have reported that noise intensity is a key parameter in information coding [10, 25, 52]. Here, we also discuss the effects of the noise intensity D on the encoding fidelity Q. Figure 5a presents the dependence of encoding fidelity Q on noise intensity D and I/E ratio R. Red represents better performance of information coding, while blue represents the opposite. Obviously, there exists an optimal area of R and D where Q takes high values (the red area in Fig. 5a).

Fig. 5
figure 5

Effect of noise intensity on encoding fidelity. a Dependence of encoding fidelity on D and R. Red denotes higher Q and indicates higher coding fidelity. b Samples of the dependence of Q on R for noise intensities \(D=0.7\) (blue triangles), 1 (purple squares), and 1.2 (green circles). c Statistics of the data on the effects of noise intensity on population rate coding. The orange line denotes the mean values of Q vs. R averaged over different D; the orange bars denote the corresponding SD. The curve shows the same trend of lower Q for small and large R and higher Q for intermediate R. d The blue line denotes the mean values of Q vs. D averaged over different R; the blue bars denote the corresponding SD. The curve shows that D around 1 is better for information encoding fidelity. \(P_{\mathrm{rc}}=0.12\), \(g^{\mathrm{exc}}=0.102\), \(\tau ^{\mathrm{exc}}=0.3, \tau ^{\mathrm{inh}}=0.6\). (Color figure online)

For \(0.6<D<1.2\), the network encodes the input information very well within an optimal range of R, where Q can exceed 0.9. Meanwhile, both the lower and upper limits of the optimal range of R decrease as D increases from 0.6 to 1.2. To gain a deeper understanding of the relationship between Q, D, and R, we perform some statistical analysis of the results presented in Fig. 5a. The statistical results are shown in Fig. 5c, d.

In Fig. 5c, the solid orange line indicates the mean values of Q vs. R averaged with respect to D, and the orange bar denotes the corresponding standard deviation. Similarly, the blue line (Fig. 5d) shows the mean values of Q vs. D averaged with respect to R, and the blue bar denotes the corresponding standard deviation too. As shown in these two figures, it can be observed that Q takes higher values for moderate R and an optimal range of D is found where encoding fidelity is better.

The optimal performance of certain intermediate noise strengths on population coding can be well explained by the stochastic resonance (SR, or stochastic facilitation) phenomenon [53, 54]. The SR phenomenon [55, 56] refers to the finding that there exist optimal noise strengths that promote the neuronal representation of input signals. In other words, by tuning the strength of additive noise, the input information (such as a weak signal) can be amplified. As Fig. 5a shows, the optimal range of noise strength D also changes with R.

To set a more reasonable value of the noise intensity D for the following discussions, variations of Q with respect to R for three different values of D (\(D=0.7,1,1.2\)) are plotted in Fig. 5b. As discussed above, all three cases have an optimal range of R: the case \(D=0.7\) has a wider optimal range of R with larger lower and upper limits, whereas the case \(D=1.2\) has a narrower optimal range with smaller limits. Compared to the other two cases, the case \(D=1\) has a moderate optimal range of R that maintains a high level of Q, and Q does not decay rapidly if R exceeds the corresponding optimal range. This makes \(D=1\) a compromise between the other two cases and justifies the choice of \(D=1\) made above. Of course, very high encoding quality can be obtained with some noise intensities such as \(D=0.7\); we nevertheless select \(D=1\) in the later simulations, because this choice does not affect the conclusions drawn when investigating the effects of the other parameters on the encoding performance.

3.2.2 Excitatory synaptic strength

The excitatory synaptic strength is a key parameter that directly determines the strength of excitatory synapses and, together with R, indirectly determines the inhibitory synaptic strength. Therefore, the excitatory synaptic strength \(g^{\mathrm{exc}}\) together with the I/E ratio R can greatly influence the network state and the performance of population rate coding. The dependence of encoding fidelity on \(g^{\mathrm{exc}}\) and R is shown in Fig. 6.

Fig. 6
figure 6

a Dependence of encoding fidelity on \(g^{\mathrm{exc}}\) and R. Orange color denotes higher Q. Q takes higher values in the area between the two black dashed parallel lines, but lower values out of the area especially in the upper right corner. \(P_{\mathrm{rc}}=0.12\), \(D=1\), \(\tau ^{\mathrm{exc}}=0.3\), \(\tau ^{\mathrm{inh}}=0.6\). b The product of \(g^{\mathrm{exc}}\) and R, which is also the inhibitory synaptic strength \(g^{\mathrm{inh}}\). Yellow color, red color, and blue color denote small, medium, and large values of \(g^{\mathrm{inh}}\), respectively. Then, it can be seen that medium values of \(g^{\mathrm{inh}}\) almost correspond to the orange area with the better performance of information encoding. (Color figure online)

As shown in this figure, there is an orange area located between two parallel black dashed lines. Within this area, the studied neuronal network has a better performance of population rate coding, with Q almost everywhere above 0.9, while outside this area the performance of population rate coding is worse, with smaller Q. To describe the graph in detail, we separate the 2-D parameter space into three parts: Q takes the highest values in the middle area (orange), the second highest values in the lower left corner (white), and low values in the upper right corner (purple). Meanwhile, we note that the product of R and \(g^{\mathrm{exc}}\) takes low values in the lower left corner, medium values in the middle area, and high values in the upper right corner, as shown in Fig. 6b. It thus seems that the encoding quality is closely tied to the product of R and \(g^{\mathrm{exc}}\). Since this product is exactly the inhibitory synaptic strength \(g^{\mathrm{inh}}\), the inhibitory synaptic strength might have a great influence on the encoding quality of the neuronal system.

Now, we try to give some explanation of the observed effects of the inhibitory synaptic strength on encoding fidelity. For large \(g^{\mathrm{inh}}\), the encoding quality is worse, which suggests that overly strong inhibitory synapses depress the encoding ability of the network and thus lead to smaller Q. For small \(g^{\mathrm{inh}}\), Q is larger than for large \(g^{\mathrm{inh}}\), but still smaller than for intermediate \(g^{\mathrm{inh}}\). This suggests that decreasing the inhibitory strength can facilitate the encoding quality of the network, while too weak inhibition indirectly makes the neuronal network much more excited and then reduces the accuracy of information coding. For intermediate \(g^{\mathrm{inh}}\), the network can stay in an intermediate state, neither too excitatory nor too inhibitory, so that it responds to the signals very well and achieves better rate coding performance.

Therefore, with the results obtained in this subsection, it can be observed that intermediate values of \(g^{\mathrm{inh}}\) are needed for the neuronal networks to have better encoding abilities. Since \(g^{\mathrm{inh}}\) is determined by the excitatory synaptic strength \(g^{\mathrm{exc}}\) and I/E ratio R, it means that the excitatory synaptic strength and I/E ratio influence the encoding quality together. Then, the existence of an optimal range of R for some fixed \(g^{\mathrm{exc}}\) (not too large) observed here could be understood easily. Hence, we can also see that \(g^{\mathrm{exc}}=0.102\) as fixed in the above discussions is suitable.

Fig. 7
figure 7

Effects of the synaptic time constant on population rate coding for \(0<R<80\). The color encodes the coding quality, with blue indicating higher Q. \(P_{\mathrm{rc}}=0.12\), \(D=1\), \(g^{\mathrm{exc}}=0.102\). (Color figure online)

3.2.3 Synaptic time constant

In this subsection, we investigate the effects of the synaptic time constant on the encoding performance. In the simulations, we fix the ratio between the inhibitory and excitatory synaptic time constants at 2, i.e., \(\tau ^{\mathrm{inh}} = 2 \,\tau ^{\mathrm{exc}} \). Then, \(\tau ^{\mathrm{exc}}\) is varied to investigate the effects of the synaptic time constant on the encoding performance. As shown in Fig. 7, the results indicate that only small synaptic time constants are good for the encoding performance. This is easy to understand: only a synapse with a small time constant, i.e., fast dynamics, can respond to incoming synaptic inputs quickly enough for the coding to be accurate.

3.3 Comparison with single excitatory–inhibitory network

We have studied the effects of multiple parameters on the population rate coding quality of our neuronal network, in which presynaptic neurons have mixed excitatory and inhibitory output synapses. This differs from previously studied neuronal networks, where each neuron has only one type of output synapse. To elucidate the differences, we compare the population rate coding fidelity between the previous networks (single excitatory–inhibitory network, SEIN) and our studied network (mixed excitatory–inhibitory network, MEIN). The synaptic strength and noise intensity are mainly considered in this subsection.

First, let us introduce the setup of the SEIN model. The network consists of 100 neurons, with 80 excitatory neurons and 20 inhibitory neurons, named the excitatory subpopulation E and the inhibitory subpopulation I, respectively. The connection probabilities from excitatory to excitatory neurons \(P_{\mathrm{EE}}\), excitatory to inhibitory neurons \(P_{\mathrm{EI}}\), inhibitory to excitatory neurons \(P_{\mathrm{IE}}\), and inhibitory to inhibitory neurons \(P_{\mathrm{II}}\) are set as \(P_{\mathrm{EE}}=P_{\mathrm{EI}}=P_{\mathrm{IE}}=P_{\mathrm{II}}=0.12\). The values of the other parameters are the same as in our network described above.
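For comparison with the MEIN wiring sketched earlier, the SEIN connectivity described here can be drawn as follows (Python/NumPy; names are ours, and the exclusion of self-connections is again an assumption).

```python
import numpy as np

def build_sein_connectivity(N=100, n_exc=80, p=0.12, seed=5):
    """SEIN wiring: the first n_exc neurons are excitatory and the rest inhibitory;
    P_EE = P_EI = P_IE = P_II = p, and the synapse type of every outgoing
    connection is fixed by the identity of the presynaptic neuron."""
    rng = np.random.default_rng(seed)
    connected = rng.random((N, N)) < p            # connected[i, j]: synapse j -> i exists
    np.fill_diagonal(connected, False)            # no self-connections (assumption)
    sign = np.where(np.arange(N) < n_exc, 1, -1)  # +1 excitatory, -1 inhibitory column
    return np.where(connected, sign[np.newaxis, :], 0)

adj_sein = build_sein_connectivity()
```

The only structural difference from the MEIN sketch is that here the sign of a connection is tied to the presynaptic neuron's identity rather than drawn independently per connection.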

Fig. 8
figure 8

a Dependence of encoding fidelity on noise intensity D and I/E strength ratio R in the SEIN model with \(g^{\mathrm{exc}}=0.08\). b Dependence of encoding fidelity on excitatory synaptic strength \(g^{\mathrm{exc}}\) and I/E strength ratio R in the SEIN model with \(D=0.7\). c, d Comparison of the encoding fidelity of the SEIN and MEIN models. c The red color denotes the SEIN model, while the blue color denotes our MEIN model. Q takes higher values in our model than in the traditional one. \(g^{\mathrm{exc}}=0.1\). d The green color denotes the SEIN model, while the purple color denotes our model. Similar to (c), the optimal Q takes higher values in our model than in the SEIN one. \(D=0.7\). (Color figure online)

Results of the SEIN model are shown in Fig. 8a, b. As shown in this figure, noise intensity and synaptic strength have effects on the encoding quality similar to those in our model. Namely, too weak and too strong noise depress population rate coding, while intermediate noise facilitates it, and intermediate values of the product \(R \cdot g^{\mathrm{exc}}\) (the inhibitory synaptic strength \(g^{\mathrm{inh}}\)) promote the population rate coding fidelity. However, compared with the results shown in Figs. 5 and 6a, Q clearly takes smaller values in the SEIN model. To make this clearer, comparisons for some parameters are presented in Fig. 8c, d. From this figure, it is clearly observed that Q takes smaller values in the SEIN model than in the MEIN model when R falls into the optimal range. This indicates that the encoding fidelity of neuronal networks with one-type output synapse neurons is worse than that of networks with mixed-type output synapse neurons. In other words, the model proposed in our work achieves better population rate coding performance than the traditional model used in former works.

4 Conclusion and discussion

Information coding in cortical networks is one of the critical problems for understanding the brain and has attracted extensive attention. In the past decades, many potential strategies of information coding have been developed [2, 8, 25, 26, 57,58,59,60,61], among which there are two main coding paradigms: temporal coding [6, 8, 9] and rate coding [2, 25]. Among these neural code hypotheses, population rate coding has been widely studied in many works [12, 19,20,21,22, 24, 25, 27, 28].

Researchers have constructed recurrent networks consisting of excitatory and inhibitory neurons to model cortical neuronal networks. In these works, different types of neurons have different corresponding synapses. Considering the physiological findings [41, 42], however, neurons potentially have both types of output synapses. Inspired by these experimental results, we construct a recurrent neuronal network in which one presynaptic neuron has two types of output connections, including excitatory and inhibitory synapses.

In this paper, we mainly discuss the population rate coding fidelity of the considered neuronal networks. The I/E strength ratio R is a key parameter that determines how different the excitatory and inhibitory synaptic strengths are, which indirectly influences the network state. It is found that there exist intermediate values of R at which the neuronal network has better population rate coding performance, and the recurrent probability has a similar optimal intermediate range in which population rate coding performs well. After determining these two key parameters, the effects of the noise intensity, synaptic strength, and synaptic time constant on population rate coding are investigated in subsequent simulations. For the noise intensity, it is revealed that intermediate noise intensity facilitates the encoding quality, while large and small noise intensities depress it. For the synaptic strength, it is found that the population rate coding quality can be optimized by intermediate values of R if the excitatory synaptic strength is not too large. Finally, since the synaptic time constant determines the speed of the neurons' response to signals, small synaptic time constants are found to promote the accuracy of population rate coding. Together, these results show that intermediate values of these parameters (except the synaptic time constant) make the neuronal network perform population rate coding better. Thus, with a suitable combination of these parameters, the studied neuronal network can encode signal information in the population rate very well.

Furthermore, we compare the population rate coding performance of our network (MEIN model) with the previous networks (SEIN model). It is found that, similar to the MEIN model, intermediate values of the controlled parameters also enhance the encoding fidelity in the SEIN model. However, with the same parameter settings, the population rate coding performance of the SEIN model is worse than ours. To sum up, the simulation results suggest that neuronal networks with one-type output synapse neurons show a parameter dependence of the population rate coding fidelity similar to that of networks with mixed-type output synapse neurons, but our model achieves much better population rate coding performance and has, in some sense, a more reasonable architecture.

In this paper, the change of the I/E strength ratio is found to play an important role in information coding. The excitability of a neuronal network is altered as the I/E strength ratio changes, which in turn affects the information processing ability of the network. In the field of psychiatric disorders, studies have found that EI balance has an important effect on neural disorders such as autism (ASD) and schizophrenia [62]. Some forms of ASD might be caused by changes in the circuits' EI balance [63]. It has been hypothesized that severe behavioral deficits in psychiatric diseases might arise from elevations in the cellular EI balance within neural microcircuitry [63,64,65]. Yizhar et al. showed that a compensatory elevation of inhibitory cell excitability partially rescued social deficits caused by E/I balance elevation [66]. Selimbeyoglu et al. also showed that real-time modulation of EI balance in the mouse prefrontal cortex can rescue social behavior deficits reminiscent of autism phenotypes [67]. These two studies suggest that the excitation–inhibition balance can be used as a framework for investigating mechanisms in neuropsychiatric disorders [62]. It has been shown that increasing the E/I ratio (either by increasing excitation or by impairing inhibition) within mPFC tends to cause social deficits, whereas increasing inhibition decreases the E/I ratio and can rescue some social deficits [66, 67]. The role of alterations of the inhibitory system as a cause of psychiatric disorders has also been studied [68,69,70]. Research shows that specific modulations of the E/I balance lead to the interruption of specific functions in the network, which affects signal processing in a specific way and leads to a specific psychiatric phenotype [68]. Studying the effects of modulations of the excitatory–inhibitory strength ratio on the information processing ability of neuronal networks might therefore contribute to the study of psychiatric disorders.

We use numerical simulation to study population rate coding in neural systems. Studying neuroscience through computational and physical methods is a field known as theoretical neuroscience or computational neurodynamics [71]. In this field, mathematical and physical methods together with reasonable simplifications are usually applied to model neural systems. In this paper, although we construct a network of neurons with mixed excitatory–inhibitory synapses according to the co-release of both types of neurotransmitters and receptors, the model is still not fully biologically realistic. In fact, many unknown issues about the co-release of excitatory and inhibitory synaptic receptors remain. Meanwhile, neuronal noise in this paper is modeled as Gaussian noise, which is a simplification. Recent studies show that neuronal fluctuations are produced by the network topology and internal neuronal parameters [72,73,74,75,76,77,78]. In the future, more complex and realistic synaptic models can be considered, such as introducing stochastic processes for neurotransmitters and receptors. Apart from population rate coding, other kinds of neuronal information coding based on different coding mechanisms are also worth studying.