1 Introduction

The topological structure of neuronal networks has received considerable attention in recent years because of its crucial role in the dynamics of neural systems, such as network synchronization and resonance [26, 32]. Experimental findings suggest that neuronal networks possess small-world and scale-free structures [14, 16, 36, 39, 43]. In addition, globally coupled, random and nearest-neighbor coupled networks have been proposed mathematically to investigate the dynamical behaviors of neuronal networks [13, 24, 25, 29, 47, 50, 51]. Recently, the self-organized structure was found to characterize real neuronal networks more realistically [45], and this type of structure, with both small-world and scale-free properties, has a great effect on neuronal network dynamics [26, 39, 43]. For example, self-organized neuronal networks exhibit stronger coherence resonance, stochastic resonance and higher efficiency of information transmission than globally coupled and random networks [26]. Therefore, exploring self-organized neuronal networks contributes to a more realistic understanding of the structure and dynamics of neural systems.

A well-known feature of self-organized neuronal networks is that the synaptic connections among neurons are controlled by the repeated firing activities of the neurons [38, 55, 56]. This biological process is governed by the spike-timing-dependent plasticity (STDP) rule, which has been observed in various in vivo and in vitro experiments, including neocortical slices [33], rat hippocampal neurons [7] and the human motor system [6, 10, 54], among others [5, 37, 44]. The STDP rule updates the synaptic connections among neurons according to the relative timing between pre- and postsynaptic action potentials on a millisecond timescale. If the presynaptic neuron fires before the postsynaptic neuron, the synaptic connection is strengthened; otherwise, it is weakened [34]. The updated synaptic connection in turn modulates the firing behavior of the postsynaptic neuron and further affects the synaptic connection from the postsynaptic neuron back to the presynaptic neuron. The modulation of synaptic connections thus adaptively depends on the output of the neurons, which in turn shapes the neural responses. This feedback process between chemical synapses and neurons is believed to be closely related to the mechanisms of learning and memory in the brain [1, 20, 34]. Furthermore, the STDP rule has a great effect on the dynamics of neural systems [1, 26, 55, 57]. For example, in the absence of information transmission delay, the STDP rule can slightly depress the efficiency of network stochastic resonance [55] and induce transitions of spike propagation in neuronal networks [57]. Thus, investigating the effect of the STDP rule on neural systems helps to reveal the mechanisms of neural information transmission in the brain.

To date, two types of STDP rules have been studied [17, 18, 42, 61]: an additive rule and a multiplicative rule. For the additive STDP rule, the amount by which a synaptic weight is modified is decided only by the difference between the firing times of the pre- and postsynaptic neurons and is independent of the present synaptic weight [17]. For the multiplicative STDP rule, in contrast, the modification depends on both the firing times and the current synaptic weight [17]. Although both rules capture the biological process, the multiplicative STDP rule is generally regarded as an improved version of the additive rule. More importantly, the multiplicative STDP rule yields a self-organized neuronal network with a stronger small-world property than the additive rule, but it decreases the rate at which the network evolves into a complex small-world structure [18]. Consequently, the multiplicative STDP rule produces a network with higher efficiency of information transfer than the additive rule, but it also introduces redundant feedback and decreases the modulating accuracy during the evolutionary process, which is accompanied by a higher energy cost for the biological system. Therefore, overcoming the slow convergence of the multiplicative STDP rule may reduce the energy cost of the STDP process.

In this paper, we improve the multiplicative STDP rule with the aim of speeding up the convergence process and increasing the evolutionary accuracy of the neuronal network. We first study the effect of the improved STDP rule on the evolutionary process of the self-organized neuronal network, then use complex network methods to explore its topological properties. Finally, we investigate the dynamical behaviors of the self-organized neuronal network from the perspectives of network synchronization, efficiency of information transmission, and functional segregation and integration. We find that the improved STDP rule has a great effect on the evolutionary process, structure and dynamics of the self-organized neuronal network. In particular, the improved STDP rule with a specific momentum factor results in an optimal neuronal network.

This paper is organized as follows. Section 2.1 provides a brief review of the neuronal network model, Sect. 2.2 theoretically analyzes the advantages of the improved STDP rule over the multiplicative STDP rule, Sect. 3 presents the results, and Sect. 4 gives the discussion and conclusion.

2 Model

2.1 Neuronal network model

The neuronal network consisting of FitzHugh–Nagumo neurons is described as [9, 11, 26, 49, 60]:

$$\begin{aligned} \varepsilon \frac{\mathrm{d}V_i}{\mathrm{d}t}&=V_i-V_i^3/3-W_i+I_\mathrm{ext}+I_i^\mathrm{syn},\\ \frac{\mathrm{d}W_i}{\mathrm{d}t}&=V_i+a+b_i W_i, \end{aligned}$$
(1)

where \(V_i\) is the membrane potential of the ith neuron, \(W_i\) is the corresponding slow variable, and a, \(b_i\) and \(\varepsilon \) are dimensionless parameters. \(I_\mathrm{ext}\) is the applied current, and \(I_i^\mathrm{syn}\) is the total chemical synaptic current received by the ith neuron, which obeys the following equation:

$$\begin{aligned} I_i^\mathrm{syn}=-\sum _{j=1}^{N} g_{ij} s_j (V_i-V_\mathrm{syn}). \end{aligned}$$
(2)

Here, \(g_{ij}\) is the synaptic weight from the jth neuron to the ith neuron, which is updated by the STDP rule, and \(s_j\) is the synaptic variable, which satisfies:

$$\begin{aligned} \frac{\mathrm{d}s_j}{\mathrm{d}t}&=\alpha (V_j )(1-s_j )-\beta s_j,\\ \alpha (V_j)&=\alpha _0/(1+e^{-V_j/V_\mathrm{shp}}), \end{aligned}$$
(3)

where \(\alpha (V_j )\) is the synaptic recovery function and \(\beta \) is the rate of synaptic decay.

In Eq. (2), \(V_\mathrm{syn}\) is the reversal potential of the chemical synapse, which determines whether the synapse is excitatory or inhibitory. Here, the reversal potential is set to 0 mV for excitatory synapses and to 2 mV for inhibitory synapses [26].
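
For concreteness, the following is a minimal sketch (in Python, with NumPy) of one explicit Euler step of Eqs. (1)–(3); the function name, array layout and default values are our own illustrative choices rather than part of the model specification.

```python
import numpy as np

def fhn_network_step(V, W, s, g, V_syn, b, dt=0.005,
                     eps=0.08, a=0.7, I_ext=0.1,
                     alpha0=2.0, beta=1.0, V_shp=0.05):
    """One explicit Euler step of Eqs. (1)-(3) for N coupled FitzHugh-Nagumo neurons.

    V, W, s : length-N arrays (membrane potential, slow variable, synaptic variable)
    g       : N x N matrix, g[i, j] = synaptic weight from neuron j to neuron i
    V_syn   : length-N array of reversal potentials of the presynaptic neurons
    b       : length-N array of the parameters b_i
    """
    # Eq. (2): total chemical synaptic current received by each neuron i
    I_syn = -(g * s[None, :] * (V[:, None] - V_syn[None, :])).sum(axis=1)

    # Eq. (1): FitzHugh-Nagumo dynamics
    dV = (V - V**3 / 3.0 - W + I_ext + I_syn) / eps
    dW = V + a + b * W

    # Eq. (3): synaptic gating variable
    alpha = alpha0 / (1.0 + np.exp(-V / V_shp))
    ds = alpha * (1.0 - s) - beta * s

    return V + dt * dV, W + dt * dW, s + dt * ds
```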

2.2 STDP rule

When the postsynaptic neuron i fires at time \(t_i\) and the presynaptic neuron j fires at \(t_j\), the STDP updating function \(F(\Delta t)\) is defined as [26]:

$$\begin{aligned} F(\Delta t)={\left\{ \begin{array}{ll} A_+ \exp (-\Delta t/\tau _+ ) & \mathrm {if} \ \Delta t>0\\ -A_- \exp (\Delta t/\tau _- ) & \mathrm {if} \ \Delta t<0\\ 0 & \mathrm {if} \ \Delta t=0\\ \end{array}\right. } \end{aligned}$$
(4)

where \(\Delta t=t_i-t_j\) is the firing time lag, \(\tau _+\) and \(\tau _-\) are the temporal windows for synaptic refinement, and \(A_+\) and \(A_-\) determine the maximum amount of synaptic update.
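
As an illustration, the STDP window of Eq. (4) can be written as a small Python function; the parameter values are those listed at the end of Sect. 2.2, and the function name is our own.

```python
import numpy as np

def stdp_window(dt_lag, A_plus=0.05, A_minus=0.0525, tau_plus=2.0, tau_minus=2.0):
    """STDP updating function F(Delta t) of Eq. (4), with Delta t = t_i - t_j."""
    if dt_lag > 0:    # presynaptic neuron j fired before postsynaptic neuron i: potentiation
        return A_plus * np.exp(-dt_lag / tau_plus)
    if dt_lag < 0:    # postsynaptic neuron i fired first: depression
        return -A_minus * np.exp(dt_lag / tau_minus)
    return 0.0
```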

For the multiplicative STDP rule, the firing time lag \(\Delta t\) is measured within the temporal window for synaptic refinement, and the synaptic update is performed once the temporal window has elapsed. Let T denote the updating time, at which the synaptic weight is \(g_{ij}(T)\) and the firing time lag is \(\Delta t(T)\). The updating of the synaptic weight \(g_{ij}(T)\) is then modeled as:

$$\begin{aligned} \Delta g_{ij}(T)=g_{ij}(T+1)-g_{ij}(T)=g_{ij}(T)F(\Delta t(T)), \end{aligned}$$
(5)

where \(g_{ij}(T)\) is always restricted to the region \([0,g_\mathrm{max}]\). If the jth neuron fires first within the temporal window for synaptic refinement, then \(\Delta t(T)>0\) and the corresponding modification \(\Delta g_{ij}(T)\) is positive, so the synapse from the jth neuron to the ith neuron is reinforced; otherwise, it is weakened.

Apparently, the multiplicative STDP rule ensures that \(\Delta g_{ij}(T)\) varies according to the neuronal firing activities and the synaptic weight \(g_{ij}(T)\). However, the feedback of neural signals may be affected by previous activity, for instance through the refractory period [4], and the amount of synaptic update may likewise be related to the previous update. We therefore assume that \(\Delta g_{ij}(T)\) is affected by the previous update \(\Delta g_{ij}(T-1)\) and improve the multiplicative STDP rule as:

$$\begin{aligned} \Delta g_{ij}(T)=g_{ij}(T)F(\Delta t(T))+\sigma \Delta g_{ij}(T-1), \end{aligned}$$
(6)

where \(\sigma \Delta g_{ij}(T-1)\) is called the momentum term, and \(\sigma \) is the momentum factor.
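
The following is a minimal sketch of one update under the improved rule of Eq. (6), reusing the stdp_window function above; the clipping to \([0, g_\mathrm{max}]\) follows the constraint stated after Eq. (5), and the function name and return convention are our own.

```python
def improved_stdp_update(g_ij, dt_lag, prev_delta, sigma, g_max=0.1):
    """One update of the improved multiplicative STDP rule, Eq. (6).

    g_ij       : current synaptic weight g_ij(T)
    dt_lag     : firing time lag Delta t(T) = t_i - t_j
    prev_delta : previous update Delta g_ij(T-1), the momentum memory
    sigma      : momentum factor; sigma = 0 recovers the multiplicative rule, Eq. (5)
    Returns g_ij(T+1), clipped to [0, g_max], and the update Delta g_ij(T).
    """
    delta = g_ij * stdp_window(dt_lag) + sigma * prev_delta   # Eq. (6)
    g_new = min(max(g_ij + delta, 0.0), g_max)                # restrict weight to [0, g_max]
    return g_new, delta
```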

Fig. 1

Schematic comparison between the multiplicative STDP rule and the improved STDP rule. The horizontal axis represents the synaptic weight \(g_{ij}(T)\), which increases along the direction of the arrow. The solid curve indicates how the synaptic weight varies with the updating time T under the multiplicative STDP rule, and the dotted curve reflects the idealized updating process under the improved STDP rule. With the added momentum term, the improved STDP rule is more likely to drive the synaptic weight \(g_{ij}(T)\) to converge to the stable value \(g_{ij}^*\)

We now analyze the advantages of the improved STDP rule. When the neuronal network eventually reaches a stable state, \(\Delta t(T) \rightarrow 0\) and \(g_{ij}(T)\approx g_{ij}^{*}\), where \(g_{ij}^{*}\) is the final stable value of the synaptic weight from neuron j to neuron i. The process by which the synaptic weight \(g_{ij}(T)\) converges to the stable value \(g_{ij}^*\) involves the following four cases:

First, assume that \(\Delta t(T-1)>0\) at updating time \(T-1\), so that \(\Delta g_{ij}(T-1)>0\):

Case 1. If \(\Delta t(T)>0\) and \(\Delta g_{ij}(T)>0\), the synaptic weight \(g_{ij}(T)\) is much smaller than \(g_{ij}^{*}\) and needs to be further strengthened. The added momentum term \(\sigma \Delta g_{ij}(T-1)>0\) contributes to the further increase in the synaptic weight, so the convergence is sped up (see Fig. 1).

Case 2. If \(\Delta t(T)<0\) and \(\Delta g_{ij}(T)<0\), the synaptic weight \(g_{ij}(T)\) has just crossed \(g_{ij}^{*}\) and needs to be decreased. At this moment, \(\Delta t(T)\) is very close to zero and the modification \(\Delta g_{ij}(T)\) is small, so \(g_{ij}(T)\) would oscillate slightly around \(g_{ij}^*\) and \(g_{ij}(T+1)\) would fall below \(g_{ij}^*\). The momentum term \(\sigma \Delta g_{ij}(T-1)>0\) slows down the decrease so that \(g_{ij}(T+1)\approx g_{ij}^*\), and thus the evolutionary accuracy is enhanced (see Fig. 1).

Second, assume that \(\Delta t(T-1)<0\) at updating time \(T-1\), so that \(\Delta g_{ij}(T-1)<0\):

Case 3. If \(\Delta t(T)<0\) and \(\Delta g_{ij}(T)<0\), the synaptic weight \(g_{ij}(T)\) is far greater than \(g_{ij}^{*}\) and needs to be further decreased. The momentum term \(\sigma \Delta g_{ij}(T-1)<0\) accelerates the decrease, so \(g_{ij}(T+1)\) approaches \(g_{ij}^*\) faster (see Fig. 1).

Case 4. If \(\Delta t(T)>0\) and \(\Delta g_{ij}(T)>0\), the synaptic weight \(g_{ij}(T)\) is slightly smaller than \(g_{ij}^{*}\) and needs to be strengthened. By the same mechanism as in Case 2, the momentum term \(\sigma \Delta g_{ij}(T-1)<0\) damps the increase in the synaptic weight, so \(g_{ij}(T+1)\) converges to \(g_{ij}^*\) more accurately (see Fig. 1).

It should be stressed that the momentum factor must not be negative, in order to guarantee the advantages of the improved STDP rule, and it must not be too large, in order to ensure that \(g_{ij}(T)\) converges accurately to \(g_{ij}^*\). The momentum factor is therefore constrained to the region (0, 1], following the improved back propagation (BP) algorithm in artificial neural networks (ANNs), where a momentum term is used to speed up the learning process [41]. It should also be noted that the improved STDP rule is equivalent to the multiplicative STDP rule for \(\sigma =0\), and that the improved STDP rule drives the synaptic weight \(g_{ij}(T)\) rapidly and accurately to the stable value \(g_{ij}^*\) only for suitable momentum factors.

The remaining parameter values used in the above models are \(\varepsilon =0.08\), \(I_\mathrm{ext}=0.1\), \(a=0.7\), \(\alpha _0=2\), \(\beta =1\), \(V_\mathrm{shp}=0.05\), \(A_+=0.05\), \(A_-=0.0525\), \(\tau _+=\tau _-=2\) and \(g_{\max }=0.1\), and b is uniformly distributed in [0.45, 0.75]. Furthermore, the initial conditions are identical for all neurons: \(V=-1.2729\), \(W=-0.5307\) and \(s=0.1172\).
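
For reference, these parameter values and initial conditions can be collected in a single configuration block; the dictionary layout, random seed and variable names are illustrative (the network size of 60 neurons is taken from Sect. 3).

```python
import numpy as np

PARAMS = dict(
    eps=0.08, I_ext=0.1, a=0.7,        # FitzHugh-Nagumo parameters, Eq. (1)
    alpha0=2.0, beta=1.0, V_shp=0.05,  # synaptic kinetics, Eq. (3)
    A_plus=0.05, A_minus=0.0525,       # STDP amplitudes, Eq. (4)
    tau_plus=2.0, tau_minus=2.0,       # STDP temporal windows
    g_max=0.1,                         # upper bound of the synaptic weight
)

N = 60                                  # 50 excitatory + 10 inhibitory neurons (Sect. 3)
rng = np.random.default_rng(0)          # arbitrary seed, for reproducibility only
b = rng.uniform(0.45, 0.75, size=N)     # b_i uniformly distributed in [0.45, 0.75]

# identical initial conditions for all neurons
V0 = np.full(N, -1.2729)
W0 = np.full(N, -0.5307)
s0 = np.full(N, 0.1172)
```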

3 Results

In this paper, the Euler–Maruyama algorithm is used to integrate Eqs. (1)–(6) with a time step of 0.005 ms and a total time of 60 ms. At the beginning, the 50 excitatory neurons and 10 inhibitory neurons of the network are globally coupled by chemical synapses, without autapses [26, 30, 31]. The synaptic weight is initially set to \(g_{\max }/2\) for excitatory synapses and to \(3g_{\max }/2\) for inhibitory synapses. During the STDP modulating process, the weights of excitatory synapses are updated by the STDP rule, while the inhibitory synaptic weights remain constant. Consequently, the globally coupled neuronal network eventually evolves into a sparse, directed, weighted topological structure. For simplicity, \(P_0\) denotes the proportion of synapses with weight in the range \([0,0.1g_{\max }]\) (weak coupling), \(P_1\) the proportion of synapses with weight in \([0.9g_{\max },g_{\max }]\) (strong coupling), and \(P_2\) the proportion of the remaining synapses.
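
The proportions \(P_0\), \(P_1\) and \(P_2\) can be obtained from the excitatory part of the weight matrix as sketched below; the function name and the boolean-mask interface are our own choices.

```python
import numpy as np

def weight_proportions(g, exc_mask, g_max=0.1):
    """Proportions of weak (P0), strong (P1) and intermediate (P2) excitatory synapses.

    g        : N x N weight matrix, g[i, j] = weight from neuron j to neuron i
    exc_mask : boolean N x N mask selecting the plastic excitatory synapses
               (off-diagonal entries whose presynaptic neuron j is excitatory)
    """
    w = g[exc_mask]
    p0 = np.mean(w <= 0.1 * g_max)   # weak coupling, weight in [0, 0.1 g_max]
    p1 = np.mean(w >= 0.9 * g_max)   # strong coupling, weight in [0.9 g_max, g_max]
    p2 = 1.0 - p0 - p1               # remaining (intermediate) couplings
    return p0, p1, p2
```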

Fig. 2

The evolutionary process of \(P_0\), \(P_1\) and \(P_2\) for a \(\sigma =0.0\), b \(\sigma =0.3\), c \(\sigma =0.6\) and d \(\sigma =0.9\). Note that for simulation times beyond about 40 ms, \(P_0\), \(P_1\) and \(P_2\), and hence the neuronal network structure, remain essentially stable

Fig. 3

The evolutionary process of the standard deviation (std) between \(P_0\), \(P_1\) and \(P_2\) for different momentum factors. The horizontal yellow band indicates the relatively stable value of the standard deviation for \(\sigma \) = 0.0, and the vertical yellow band marks the time at which the corresponding standard deviation reaches this stable value. The less time the standard deviation for a given momentum factor needs to reach the relatively stable value, the faster the corresponding neuronal network evolves into the self-organized structure. (Color figure online)

3.1 Evolutionary process of neuronal network structure

In this part, we first investigate the evolutionary process of the neuronal network structure under the modulation of the multiplicative STDP rule (\(\sigma =0.0\)) and the improved STDP rule (\(\sigma >0\)). Figure 2 shows the evolutionary process of \(P_0\), \(P_1\) and \(P_2\) for different momentum factors. As the evolutionary time increases, \(P_2\) decreases while \(P_0\) and \(P_1\) increase. This indicates that during the evolutionary process, the competition between neurons strengthens some chemical synapses and weakens others, which is accompanied by changes in the topological structure of the neuronal network. Meanwhile, \(P_0\), \(P_1\) and \(P_2\) gradually reach stable values, reflecting that the structure of the neuronal network eventually achieves a stable state. More importantly, \(P_0\), \(P_1\) and \(P_2\) vary in an exponential fashion for all momentum factors, revealing that the improved STDP rule has little effect on the evolutionary mode of the neuronal network structure.

As discussed in Sect. 2.2, the improved STDP rule is expected to speed up the evolution of the synaptic weights for suitable momentum factors. To verify this theoretical analysis, the standard deviation (std) between \(P_0\), \(P_1\) and \(P_2\) is calculated to characterize the evolutionary rate of the neuronal network. Figure 3 shows the evolutionary process of the standard deviation for different momentum factors. As the evolutionary time increases, the standard deviations first decrease rapidly to their minimum values, at which the \(P_0\), \(P_1\) and \(P_2\) curves intersect (see Fig. 2). Then, the standard deviations increase and eventually reach relatively stable values, reflecting the stable topological structure of the self-organized neuronal network. However, during the evolutionary process, the standard deviations for \(\sigma =\) 0.0–0.4 reach the stable values later than those for the remaining momentum factors, indicating that the improved STDP rule indeed speeds up the evolutionary process of the neuronal network for suitable momentum factors such as \(\sigma =\) 0.6, 0.9, 1.0.
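
The quantity plotted in Fig. 3 can be computed as follows; the stabilization criterion (a tolerance band around the final value) is an illustrative choice of ours rather than the paper's exact procedure.

```python
import numpy as np

def p_std_trace(P0_t, P1_t, P2_t):
    """Standard deviation between P0, P1 and P2 at every sampled time (cf. Fig. 3)."""
    return np.std(np.vstack([P0_t, P1_t, P2_t]), axis=0)

def stabilization_time(std_trace, times, tol=0.01):
    """First time after which the std trace stays within `tol` of its final value."""
    outside = np.where(np.abs(std_trace - std_trace[-1]) > tol)[0]
    if outside.size == 0:
        return times[0]
    return times[min(outside[-1] + 1, len(times) - 1)]
```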

Fig. 4

The evolutionary process of a \(\Delta P_0=P_0^x-P_0^{0.0}\), b \(\Delta P_1=P_1^x-P_1^{0.0}\) and c \(\Delta P_2=P_2^x-P_2^{0.0}\), where x = 0.3, 0.6, 0.9, respectively. d The average values of \(\Delta P_0\), \(\Delta P_1\) and \(\Delta P_2\) from 50 to 60 ms for different momentum factors

3.2 Topological structure of self-organized neuronal network

Due to the STDP modulation of the synaptic connections, the globally coupled neuronal network eventually evolves into a sparse network in which many neurons cluster together to form modules. The synaptic weight between neurons within the same module is strong, whereas it is relatively weak between neurons in different modules (see Fig. 6a, d and g). Consequently, a very large proportion of weak and strong couplings exists in the self-organized neuronal network (see Fig. 2).

To investigate the differences between the structures of the self-organized neuronal networks for different momentum factors, we calculated the differences \(\Delta P_0=P_0^\sigma -P_0^{0.0}\), \(\Delta P_1=P_1^\sigma -P_1^{0.0}\) and \(\Delta P_2=P_2^\sigma -P_2^{0.0}\) (\(\sigma \) = 0.1, 0.2, \(\ldots \), 1.0) from 50 to 60 ms, during which the network structures have reached their stable states (see Fig. 2). Figure 4a shows the evolutionary process of \(\Delta P_0\) for \(\sigma \) = 0.3, 0.6, 0.9. The values of \(P_0^{0.3}-P_0^{0.0}\) and \(P_0^{0.6}-P_0^{0.0}\) are always positive, but \(P_0^{0.9}-P_0^{0.0}\) is not, reflecting that the improved STDP rule has dual effects on \(P_0\). Meanwhile, the values of \(\Delta P_1\) for \(\sigma \) = 0.3, 0.6, 0.9 are frequently negative, and \(P_1^{0.3}-P_1^{0.0}\) has the smallest absolute value (see Fig. 4b), indicating that the improved STDP rule decreases \(P_1\) and that this effect is relatively small for \(\sigma =0.3\). Furthermore, the values of \(\Delta P_2\) for \(\sigma \) = 0.6, 0.9 are generally positive, whereas \(P_2^{0.3}-P_2^{0.0}\) always fluctuates around zero (see Fig. 4c). This result reveals that the improved STDP rule increases \(P_2\) and that this influence is relatively weak for \(\sigma \) = 0.3.

To further confirm the above results, we calculated the average values of \(\Delta P_0\), \(\Delta P_1\) and \(\Delta P_2\) from 50 to 60 ms; the results are shown in Fig. 4d. As the momentum factor increases, the mean \(\Delta P_0\) fluctuates around zero, the mean \(\Delta P_1\) is frequently negative and the mean \(\Delta P_2\) is always positive. Furthermore, the differences between \(P_0\), \(P_1\) and \(P_2\) for the improved STDP rule and the multiplicative STDP rule are statistically significant (\(p<0.05\) for all values of \(\sigma \), two-sample T test). These results indicate that compared to the multiplicative STDP rule, the improved STDP rule significantly decreases \(P_1\) (strong coupling), increases \(P_2\) and has dual effects on \(P_0\) (weak coupling).
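
The statistical comparison used throughout this section, a two-sample T test between the quantities measured on the networks obtained with \(\sigma >0\) and those obtained with \(\sigma =0\), can be sketched with SciPy; the function name and usage are our own.

```python
from scipy import stats

def compare_to_baseline(values_sigma, values_baseline, alpha=0.05):
    """Two-sample T test between a quantity measured on the networks obtained
    with the improved rule (sigma > 0) and with the multiplicative rule (sigma = 0)."""
    t_stat, p_value = stats.ttest_ind(values_sigma, values_baseline)
    return t_stat, p_value, p_value < alpha

# e.g. test whether P1 differs between sigma = 0.3 and sigma = 0.0:
# t, p, significant = compare_to_baseline(P1_samples_sigma03, P1_samples_sigma00)
```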

The above results show that the improved STDP rule has a great effect on the synaptic weights and therefore affects the topological structure of the self-organized neuronal network. Hence, we used complex network methods to further investigate the effect of the improved STDP rule on the topological structure. The definitions of the complex network indices, including the node degree, clustering coefficient and modularity of a directed weighted network, are given in “Appendix 1”. We first calculated the complex network indices of the self-organized neuronal network at times \(T_k\), where \(T_k\) varies from 50 to 60 ms with a step of 0.05 ms, and then averaged the indices over the resulting 201 neuronal networks to obtain the mean value for each momentum factor.
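
As a rough stand-in for the indices of Appendix 1 (whose exact directed weighted definitions are not reproduced here), the following sketch computes a mean degree, a weighted clustering coefficient and a modularity with networkx; the edge threshold and the community detection step are our own illustrative choices and may differ from the paper's formulas.

```python
import numpy as np
import networkx as nx

def structural_indices(g, threshold=1e-3):
    """Illustrative structural indices of the directed weighted network defined by g."""
    N = g.shape[0]
    G = nx.DiGraph()
    G.add_nodes_from(range(N))
    for i in range(N):
        for j in range(N):
            if i != j and g[i, j] > threshold:    # keep only surviving synapses
                G.add_edge(j, i, weight=g[i, j])  # edge j -> i carries weight g_ij

    mean_degree = np.mean([d for _, d in G.degree()])       # sparseness of the network
    mean_clust = nx.average_clustering(G, weight="weight")  # weighted clustering coefficient
    communities = nx.community.louvain_communities(G, weight="weight", seed=0)
    Q = nx.community.modularity(G, communities, weight="weight")
    return mean_degree, mean_clust, Q
```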

Fig. 5

a The node degree, b clustering coefficient and c modularity of self-organized neuronal network for different momentum factors. The red fitting lines reflect the changing trends of node degree, clustering coefficient and modularity with the momentum factor increasing

The node degree measures the sparseness of a complex network; a larger node degree corresponds to a denser network. Figure 5a shows the mean node degree of the self-organized neuronal network. As the momentum factor increases, the node degree gradually decreases, and the differences between the node degrees for the improved STDP rule and the multiplicative STDP rule are statistically significant (\(p<0.05\) for all values of \(\sigma \), two-sample T test). This result indicates that the improved STDP rule produces a sparser self-organized neuronal network than the multiplicative STDP rule.

The clustering coefficient measures the degree to which nodes tend to cluster together [28]. Figure 5b shows the mean clustering coefficient for different momentum factors. The clustering coefficient of the self-organized neuronal network for \(\sigma \) = 0.3 is larger than that for \(\sigma \) = 0.0, but it is clearly decreased for the remaining values of \(\sigma \), with the smallest clustering coefficient occurring at \(\sigma \) = 0.6. More importantly, the differences between the clustering coefficients for the improved STDP rule and the multiplicative STDP rule are statistically significant (\(p<0.05\) for all values of \(\sigma \), two-sample T test). These results suggest that the improved STDP rule mostly decreases the mean clustering coefficient of the self-organized neuronal network, and that this decrease is strongest for \(\sigma \) = 0.6.

The self-organized neuronal network exhibits an apparent modular structure (see Fig. 6a, d and g), the degree of which is measured by the modularity. As can be seen from Fig. 5c, the mean modularity of the self-organized neuronal network for the improved STDP rule is significantly smaller than that for the multiplicative STDP rule, with the smallest modularity occurring at \(\sigma \) = 0.5. Furthermore, the differences between the modularity for the improved STDP rule and the multiplicative STDP rule are statistically significant (\(p<0.05\) for all values of \(\sigma \), two-sample T test). These results reveal that the improved STDP rule decreases the modular degree of the self-organized neuronal network compared to the multiplicative STDP rule, and that this effect is strongest for \(\sigma \) = 0.5.

Taken together, depending on the momentum factor, the improved STDP rule significantly changes the proportions of synaptic weights, so that it generally results in a sparser, less clustered and less modular topological structure of the self-organized neuronal network.

3.3 Dynamics of self-organized neuronal network

The dynamics of a complex network is closely related to its topological structure and can even be predicted from it [58]. Since the improved STDP rule has a great effect on the topological structure of the self-organized neuronal network, the networks for different momentum factors are expected to exhibit markedly different dynamical behaviors. Therefore, we now investigate the dynamics of the self-organized neuronal network. We first selected the neuronal network at time \(T_k\) as the baseline network structure, where \(T_k\) varies from 50 to 60 ms with a step of 0.05 ms. Consequently, 201 neuronal networks with essentially the same topological properties were obtained for each momentum factor. Based on these baseline network structures, we then again integrated Eqs. (1)–(6) with a time step of 0.05 ms and a total time of 60 ms. Furthermore, we sampled the time series of the firing activities of the neurons from 30 to 60 ms and calculated the correlation coefficients between the time series of the neurons according to “Appendix 2”. The resulting correlation matrices were regarded as the functional networks of the self-organized neuronal network. Finally, based on the 201 functional networks for each momentum factor, we investigate the dynamics of the self-organized neuronal network, including the network synchronization, information efficiency, and functional integration and segregation.
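
A minimal sketch of the functional network construction from the sampled time series is given below; we use NumPy's Pearson correlation as a stand-in for the correlation coefficient defined in Appendix 2, and the variable names are ours.

```python
import numpy as np

def functional_network(traces):
    """Absolute correlation matrix of the sampled neuronal time series.

    traces : array of shape (N, T), one sampled time series per neuron
    """
    rho = np.corrcoef(traces)     # N x N Pearson correlation matrix
    np.fill_diagonal(rho, 0.0)    # discard self-correlations
    return np.abs(rho)            # absolute value, used to quantify synchronization

def mean_synchronization(traces):
    """Mean absolute correlation over all neuron pairs (cf. Fig. 7)."""
    C = functional_network(traces)
    N = C.shape[0]
    return C[np.triu_indices(N, k=1)].mean()
```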

Fig. 6

The synaptic connection matrices of the self-organized neuronal networks at 60 ms (upper panels), the corresponding functional networks (middle panels) and the probability density distributions of the absolute correlation coefficient (lower panels) for a–c \(\sigma =0.0\), d–f \(\sigma =0.3\) and g–i \(\sigma =0.6\)

3.3.1 Network synchronization

The correlation coefficient (\(\rho \)) measures the phase synchronization between the firing activities of neurons, where \(\rho =1\) indicates complete phase synchronization and \(\rho =-1\) complete anti-phase synchronization. Here we adopt the absolute correlation coefficient to characterize the firing synchronization among neurons. Figure 6c, f and i show the probability density distributions of the correlation coefficient for \(\sigma \) = 0.0, 0.3, 0.6. The probability density of the correlation coefficient is largest for \(\rho \rightarrow 1\) and smallest for \(\rho \rightarrow 0\), reflecting strong synchronization among the neurons in the self-organized networks. Moreover, the probability density for \(\sigma \) = 0.0 differs clearly from those for \(\sigma \) = 0.3, 0.6, with a smaller probability density for \(\rho \rightarrow 1\), indicating that the improved STDP rule may induce weaker firing synchronization among neurons. Furthermore, we calculated the mean correlation coefficient to verify this result. As seen in Fig. 7, the mean correlation coefficient exhibits a bimodal structure as the momentum factor increases. For some momentum factors, such as \(\sigma \) = 0.2, 0.9, the mean correlation coefficient is significantly greater than that for \(\sigma =0.0\), whereas for others, such as \(\sigma =0.6\), it is clearly smaller. In addition, the mean correlation coefficient is greatest for \(\sigma =0.2\) and smallest for \(\sigma =0.6\). More importantly, the differences between the mean correlation coefficients for the improved STDP rule and the multiplicative STDP rule are statistically significant except for \(\sigma =0.3\) (\(p>0.05\) for \(\sigma =0.3\) and \(p<0.05\) for the remaining values of \(\sigma \), two-sample T test). These results indicate that compared to the multiplicative STDP rule, the improved STDP rule either strengthens or weakens the network synchronization, with the strengthening being strongest for \(\sigma =0.2\) and the weakening strongest for \(\sigma =0.6\).

3.3.2 Network efficiency

Neurons couple together in order to communicate neural information, which is apparently related to the firing synchronization among neurons. Given its great effect on the network synchronization, the improved STDP rule is expected to affect the neural information transmission across the network. To investigate the information transmission efficiency, we calculated the local and global information efficiency of the functional networks, whose definitions are given in “Network efficiency” of Appendix 1.
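
Below is a sketch of a weighted global efficiency computed on the functional network, with edge lengths taken as inverse absolute correlations so that strongly synchronized pairs are "close"; the exact formulas of "Network efficiency" in Appendix 1 may differ, so this is only an illustration. The local efficiency is obtained analogously by averaging, over nodes, the global efficiency of each node's neighbourhood subnetwork.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def global_efficiency(corr, eps=1e-12):
    """Weighted global efficiency E = mean(1 / d_ij) over all pairs i != j,
    with shortest-path distances d_ij computed on edge lengths 1 / |rho_ij|."""
    w = np.abs(np.asarray(corr, dtype=float))
    np.fill_diagonal(w, 0.0)
    length = np.where(w > eps, 1.0 / (w + eps), np.inf)  # distance = inverse coupling
    d = shortest_path(length, directed=False)            # all-pairs shortest paths
    N = w.shape[0]
    off_diagonal = ~np.eye(N, dtype=bool)
    return np.mean(1.0 / d[off_diagonal])                # disconnected pairs contribute 0
```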

Fig. 7

The average value of absolute correlation coefficient of functional networks for different momentum factors. The red fitting line indicates the changing trend of average correlation coefficient with the momentum factor increasing

The local efficiency measures the information transmission within sub-networks. As can be seen from Fig. 8a, the mean local efficiency of the functional network for some momentum factors, such as \(\sigma \) = 0.2, 0.9, is larger than that for \(\sigma =0.0\), but it is decreased for other momentum factors such as \(\sigma \) = 0.4, 0.6. Meanwhile, the differences between the local efficiencies of the functional networks for the improved STDP rule and the multiplicative STDP rule are statistically significant except for \(\sigma \) = 0.3, 1.0 (\(p>0.05\) for \(\sigma \) = 0.3, 1.0 and \(p<0.05\) for the remaining values of \(\sigma \), two-sample T test). These results reveal that, depending on the momentum factor, the improved STDP rule either strengthens or weakens the local neural information transmission. More importantly, the mean local efficiency of the functional networks is largest for \(\sigma =0.2\), indicating the strongest strengthening effect of the improved STDP rule on the local efficiency, and smallest for \(\sigma =0.6\), revealing the strongest weakening effect. Furthermore, it should be noted that the average correlation and the local efficiency evolve similarly as the momentum factor increases, reflecting that the local efficiency of information transmission is closely related to the synchronization within the local networks.

In contrast, the global efficiency characterizes the information flow across the entire network. From Fig. 8b, the mean global efficiency of the functional networks for \(\sigma \) = 0.4, 0.6 is larger than that for \(\sigma =0.0\), while the functional networks for the remaining momentum factors have decreased global efficiency. In addition, the differences between the global efficiencies for the improved STDP rule and the multiplicative STDP rule are statistically significant except for \(\sigma \) = 0.4, 0.7 (\(p>0.05\) for \(\sigma \) = 0.4, 0.7 and \(p<0.05\) for the remaining values of \(\sigma \), two-sample T test). These results reveal that the improved STDP rule has dual effects on the global efficiency, either strengthening or decreasing it. More importantly, the global efficiency is largest for \(\sigma =0.6\), indicating the strongest promotion of global neural information transmission, and smallest for \(\sigma =0.2\), revealing the strongest weakening effect.

Fig. 8

a The local efficiency and b global efficiency of functional networks for different momentum factors. The red fitting lines reflect the changing trends of local efficiency and global efficiency as the momentum factor increases

3.3.3 Network functional integration and segregation

As seen from Fig. 6b, e and h, the functional networks exhibit an apparent modular structure. The strong synchronization within a module underlies specific functions in subsystems and results in functional segregation, while functional integration, which is in charge of integrating information in the global system, requires strong synchronization between modules and would wipe out the modular structure. The balance between functional segregation and integration is believed to be crucial for the normal functioning of a complex system; for example, brain disorders often result from an imbalance between functional segregation and integration [59]. Furthermore, the network shows different synchronization patterns for different momentum factors (see Fig. 6c, f and i), which must affect the balance between functional segregation and integration. Therefore, we further investigate the functional integration and segregation of the functional networks using the modularity and the network complexity.

The modularity is a natural measure of the degree of modular organization, and its definition is given in “Modularity” of Appendix 1. The larger the modularity, the more modular the network and the stronger the functional segregation [27]. Figure 9a shows the modularity of the functional network for different momentum factors. The modularity is increased relative to \(\sigma =0.0\) for some momentum factors, such as \(\sigma =0.6\), but decreased for others, such as \(\sigma \) = 0.2, 0.9. However, whether the difference in modularity between the improved STDP rule and the multiplicative STDP rule is statistically significant depends on the momentum factor (\(p>0.05\) for \(\sigma \) = 0.3, 0.7, 0.8, 1.0 and \(p<0.05\) for the remaining values of \(\sigma \), two-sample T test). These results reveal that the improved STDP rule has dual effects on the modularity: it strengthens the functional segregation for \(\sigma \) = 0.4–0.6 and enhances the functional integration for \(\sigma \) = 0.1, 0.2, 0.9. Furthermore, the modularity is largest for \(\sigma =0.6\), reflecting the strongest effect of the improved STDP rule on functional segregation, and smallest for \(\sigma =0.2\), indicating the strongest influence on functional integration.

Fig. 9

a The modularity and b complexity of functional networks for different momentum factors. The red fitting lines indicate the changing trends of modularity and complexity as the momentum factor increases

The network complexity S characterizes the balance between functional segregation and integration [46, 59], and its definition is provided in “Appendix 3”. S is close to zero for both the nonsynchronous and the fully synchronous state, and maximal complexity reflects a combination of functional segregation and integration [59]. From Fig. 9b, the network complexity varies nonlinearly as the momentum factor increases: the functional networks for some momentum factors, such as \(\sigma \) = 0.6, have larger complexity than that for \(\sigma \) = 0.0, while they have decreased complexity for other momentum factors such as \(\sigma \) = 0.2, 0.9. In addition, the two-sample T test shows a \(\sigma \)-dependent difference between the complexity for the improved STDP rule and the multiplicative STDP rule (\(p>0.05\) for \(\sigma \) = 0.3, 0.5, 1.0 and \(p<0.05\) for the remaining values of \(\sigma \)). It is therefore obvious that the improved STDP rule has dual effects on the network complexity, decreasing it for \(\sigma \) = 0.1, 0.2, 0.7–0.9 and increasing it for \(\sigma \) = 0.4, 0.6. Furthermore, the complexity is smallest for \(\sigma \) = 0.2, indicating the strongest decreasing effect of the improved STDP rule, and largest for \(\sigma \) = 0.6, reflecting the strongest increasing effect. It is important to note that the modularity and the complexity evolve similarly as the momentum factor increases, revealing that the decreased synchronization within the modules yields more modules of smaller size and thereby produces higher complexity.

In summary, depending on the momentum factor, the improved STDP rule has different effects on the dynamics of the self-organized neuronal network. Compared to the multiplicative STDP rule, the improved STDP rule with \(\sigma \) = 0.1, 0.2, 0.9 results in higher network synchronization and local efficiency but lower global efficiency, modularity and complexity. These changes in the dynamics of the self-organized neuronal networks are reversed for \(\sigma \) = 0.6.

4 Discussion and conclusion

In this paper, we improved the multiplicative STDP rule by adding a momentum term and then studied the evolutionary rate, topological structure and dynamics of the self-organized neuronal network, with the aim of understanding whether the improved STDP rule optimizes the convergence process and the network properties. We first found that the improved STDP rule with suitable momentum factors speeds up the rate at which the globally coupled neuronal network evolves into the sparse self-organized structure. Then, using complex network methods, we observed that the improved STDP rule mostly results in a sparser, less clustered and less modular self-organized neuronal network by affecting the proportions of synaptic weights. Finally, we showed that the improved STDP rule with \(\sigma \) = 0.1, 0.2, 0.9 strengthens the network synchronization and local efficiency and weakens the global information transmission, modularity and network complexity; however, it has the opposite effect on these dynamical behaviors for \(\sigma \) = 0.6.

The modulation of chemical synapses is crucial to natural properties of the neural system such as learning and memory, growth and aging [19, 23, 35, 48]. Previous studies have also suggested that the cellular networks in the adult brain are continually remodeled [2, 3, 12]. The rate at which the synaptic connections change in response to neural signals strongly influences this modulation and is also important for efficient information transmission in the neural system. Compared to the multiplicative STDP rule, the improved STDP rule with suitable momentum factors significantly increases the evolutionary rate of the neuronal networks, thereby optimizing the feedback between neural signals and chemical synapses. Therefore, the improved STDP rule may support a higher efficiency of information transmission during the biological feedback process.

Brain networks at all scales share the common properties of sparsity and clustering [3, 15]. The improved STDP rule induces a sparser self-organized neuronal network than the multiplicative STDP rule, which, on the one hand, prunes redundant chemical synaptic connections and, on the other hand, increases the long-distance connections between neurons and thus the energy cost of information transmission. Combined with the dual effects of the improved STDP rule on the clustering coefficient, it is hard to say whether the improved STDP rule results in a more economical and beneficial neuronal network than the multiplicative STDP rule. Therefore, the dynamics of the self-organized neuronal network should also be considered. It is clear that, depending on the momentum factor, the improved STDP rule either strengthens or weakens the network synchronization. High network synchronization not only increases the efficiency of information transmission in local loops or across the global network, but is also related to brain disorders such as epileptic seizures [21, 62]. Optimal synchronization ensures a high efficiency of information transmission between neurons at low connection cost. Apparently, the improved STDP rule with \(\sigma \) = 0.6 results in the highest global efficiency (see Fig. 8b), reflecting that the improved STDP rule contributes to a higher efficiency of information transfer across the stable self-organized neuronal network.

A real complex network in the neural system should possess an optimal balance between functional segregation and integration, which yields highly complex dynamics [3, 59]. We observe that the improved STDP rule with \(\sigma =0.6\) not only induces the highest modularity but also produces the largest network complexity, reflecting stronger functional segregation together with an optimal balance between functional segregation and integration. Combined with the largest global efficiency of neural information transmission, it is reasonable to regard the self-organized neuronal network resulting from the improved STDP rule with \(\sigma =0.6\) as having an optimal structure.

It should be noted that compared to the multiplicative STDP rule, the improved STDP rule with \(\sigma \) = 0.2, 0.6 induces the most significant changes in the dynamics of the neuronal network (Figs. 7, 8, 9). It is thus reasonable to suspect that the differences between the topological structures for \(\sigma \) = 0.2, 0.6 and \(\sigma \) = 0.0 should also be the most significant. However, the complex network indices provide insufficient evidence to support this suspicion (Fig. 5). In fact, the relationship between the structure and dynamics of complex networks has been a crucial challenge in neuroscience [53], and the complex network method based on nodes and edges has shown shortcomings in characterizing the intrinsic relationship between network structure and dynamics. In the future, more quantitative methods are required to investigate intrinsically how the network structure determines the dynamics.

In conclusion, the improved STDP rule not only helps to decrease the energy cost during the modulation of chemical synapses by neural signals, but also contributes to an optimal neuronal network structure with the highest global efficiency and the best combination of functional segregation and integration.