
1 Introduction

Synchronization is thought to play a key role in communication among neurons. With the development of complex network theory, increasing attention has been paid to the synchronization of complex neuronal networks, especially small-world ones. Most existing research focuses on unweighted or constant-weight neuronal networks; however, the weights of synapses among neurons keep changing as neurons grow and during learning and memory processes. In other words, synapses can learn, a property known as synaptic plasticity. In this paper, the synchronization of small-world neuronal networks with synaptic plasticity is explored numerically.

2 The Mathematical Model

Using the Hindmarsh-Rose neuronal model [1] and the Newman-Watts small-world strategy [2], we set up an electrically coupled neuronal network. The number of neurons is set to \(N=100\).
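A minimal sketch of how such a connectivity matrix could be generated is given below; only \(N=100\) and the adding probability \(p\) are specified in the text, so the neighbourhood size \(k\), the random seed and the exact shortcut-adding rule are assumptions.

```python
import numpy as np

def newman_watts_adjacency(N=100, k=2, p=0.1, seed=0):
    """Symmetric 0/1 connectivity matrix G = {g_ij} for a Newman-Watts small-world network."""
    rng = np.random.default_rng(seed)
    G = np.zeros((N, N), dtype=int)
    # Regular ring lattice: connect each node to its k nearest neighbours on each side.
    for i in range(N):
        for d in range(1, k + 1):
            j = (i + d) % N
            G[i, j] = G[j, i] = 1
    # Newman-Watts step: add shortcuts between non-adjacent pairs with probability p,
    # keeping all original lattice edges (no rewiring).
    for i in range(N):
        for j in range(i + 1, N):
            if G[i, j] == 0 and rng.random() < p:
                G[i, j] = G[j, i] = 1
    np.fill_diagonal(G, 0)
    return G

G = newman_watts_adjacency()
```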

As for synaptic plasticity, we define the variation of the weight \(w_{ij}\) between neuron \(i\) and neuron \(j\) as follows [3]:

$$\Delta w_{ij}=L\arctan [x_i(x_j-x_i w_{ij})] \tag{1}$$

where \(L\) is a positive synaptic learning coefficient and \(x_i\) is the membrane potential of neuron \(i\).
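As an illustration, Eq. (1) can be evaluated for all neuron pairs at once. The sketch below assumes that the increment \(\Delta w_{ij}\) is applied additively at every integration step and that weights of non-existent links are kept at zero by masking with the connectivity matrix; these numerical details are not stated explicitly in the text.

```python
import numpy as np

def update_weights(w, x, L=2.0, G=None):
    """One application of Eq. (1): dw_ij = L * arctan[x_i (x_j - x_i w_ij)].

    w : (N, N) weight matrix, x : (N,) membrane potentials,
    G : optional 0/1 connectivity mask so that absent links keep w_ij = 0.
    """
    xi = x[:, None]          # x_i broadcast along rows
    xj = x[None, :]          # x_j broadcast along columns
    dw = L * np.arctan(xi * (xj - xi * w))
    w_new = w + dw           # assumed additive update per integration step
    if G is not None:
        w_new = w_new * G    # no weight on non-existent connections
    np.fill_diagonal(w_new, 0.0)   # no self-connections
    return w_new
```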

Then the neuronal network with synaptic plasticity can be expressed as:

$$\begin{array}{lll}\dot x_{i}&=&y_{i}-ax_{i}^3+bx_{i}^2-z_{i}+I+\sigma \sum\limits_{j=1,\;j \ne i}^N g_{ij}w_{ij}(x_j-x_i), \\ \dot y_{i}&=&c-dx_{i}^2-y_{i}, \\ \dot z_{i}&=&r[s(x_{i}+ \chi)-z_{i}],\\ \Delta w_{ij}&=&L \arctan [x_i(x_j-x_iw_{ij})], \qquad i=1,2,\cdots,N,\end{array} \tag{2}$$

where \(\sigma\) is the coupling strength, \(\mathbf G=\{g_{ij}\}_{N\times N}\) is the connectivity matrix of the network, and \(I\) is the external current. The parameters of the system are set as \(a=1, b=3, c=1, d=5, s=4, r=0.006, \chi=1.6, I=3\). Since ions are transmitted through the channels in both directions, bidirectional connections with unequal weights are adopted here. That is, if there is a connection between neuron \(i\) and neuron \(j\,(i \neq j)\), then there exist weights \(w_{ij}\neq w_{ji}\); otherwise \(w_{ij}=w_{ji}=0\). We also set \(w_{ii}=0\), which means that no neuron in the network has a self-connection.
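A minimal simulation sketch of system (2) is given below, under the following assumptions (none of which are specified in the text): simple Euler integration with step size dt, weights initialised to 1 on every existing link, and the weight update of Eq. (1) applied once per step.

```python
import numpy as np

# Hindmarsh-Rose parameters from the text.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
s, r, chi, I = 4.0, 0.006, 1.6, 3.0

def simulate(G, sigma=0.05, L=8.0, T=200.0, dt=0.01, seed=1):
    """Euler integration of system (2); returns the (x, y, z) trajectories and the final weights."""
    N = G.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, N)
    y = rng.uniform(-1.0, 1.0, N)
    z = rng.uniform(0.0, 1.0, N)
    w = G.astype(float)                   # assumed initial weights: 1 on every existing link
    steps = int(T / dt)
    traj = np.empty((steps, N, 3))
    for t in range(steps):
        # Electrical coupling term: sigma * sum_j g_ij w_ij (x_j - x_i).
        coupling = sigma * np.sum(G * w * (x[None, :] - x[:, None]), axis=1)
        dx = y - a * x**3 + b * x**2 - z + I + coupling
        dy = c - d * x**2 - y
        dz = r * (s * (x + chi) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        # Synaptic plasticity, Eq. (1), applied once per step on existing links only.
        w = w + L * np.arctan(x[:, None] * (x[None, :] - x[:, None] * w))
        w *= G
        np.fill_diagonal(w, 0.0)
        traj[t] = np.stack([x, y, z], axis=1)
    return traj, w
```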

3 Simulation Results

The variation of the synapse weights under the influence of the synaptic learning coefficient is studied first. Figure 1 shows the variations of the membrane potentials, connection weights and connection strengths of two arbitrarily chosen connected neurons in the network. The connection strength of neuron \(i\) is defined as \(s_i(t)=\sum_{j=1}^N w_{ij}(t)\).
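In code, this connection strength is simply a row sum of the weight matrix; a small hypothetical helper is shown below.

```python
import numpy as np

def connection_strengths(w: np.ndarray) -> np.ndarray:
    """s_i(t) = sum_j w_ij(t), computed from the weight matrix w at time t."""
    return w.sum(axis=1)
```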

Fig. 1

The variations of membrane potentials, connection weights and connection strengths of two arbitrarily chosen connected neurons \((L=8, \sigma=0.05, p=0.1)\): a \(x_i\) and \(x_j\) are the membrane potentials of neuron \(i\) and neuron \(j\), respectively; b \(w_{ij}\) and \(w_{ji}\) are the two directed connection weights between neuron \(i\) and neuron \(j\); c \(s_i\) and \(s_j\) are the connection strengths of neuron \(i\) and neuron \(j\), respectively

It can be seen from Fig. 1 that if both neurons are at rest, the connection weights and connection strengths between them do not change; if neuron \(i\) is excited while neuron \(j\) is at rest, the connection weight from neuron \(i\) to neuron \(j\) is strengthened and the connection weight from neuron \(j\) to neuron \(i\) is weakened. In other words, when an ion current is conducted from one neuron to another through the channel connecting them, the weight in the direction of transmission is strengthened while the weight in the opposite direction is weakened. These phenomena agree with the actual variation of synapses during the transmission of excitation among real neurons.

We then discuss the synchronization properties of the network under the influence of synaptic plasticity. The average synchronization error is used as the characteristic measure, defined as

$$e_{\textrm{mean}}={\textrm{mean}}(\langle e_i \rangle), \langle e_i \rangle=\langle \sqrt {(x_i-x_1)^2+(y_i-y_1)^2+(z_i-z_1)^2}\rangle$$

where \(\langle \cdot \rangle\) denotes the average over time and \(i=2,3,\cdots,N\).
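A possible implementation of this measure is sketched below, assuming traj is the (steps, N, 3) array of \((x,y,z)\) trajectories returned by the earlier simulation sketch; in practice a transient would normally be discarded before averaging.

```python
import numpy as np

def average_sync_error(traj: np.ndarray) -> float:
    """e_mean: time-averaged distance of neurons 2..N from neuron 1, then averaged over i."""
    ref = traj[:, 0:1, :]                                       # neuron 1 as the reference
    e_i = np.sqrt(((traj[:, 1:, :] - ref) ** 2).sum(axis=2))    # e_i(t) for i = 2,...,N
    return float(e_i.mean(axis=0).mean())                       # <e_i> over time, then mean over i
```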

By numerical simulation, the relationship between the average synchronization error and the synaptic learning coefficient is obtained, as shown in Fig. 2.
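For illustration, a curve like the one in Fig. 2 could be obtained by the sweep sketched below, which reuses the hypothetical helpers newman_watts_adjacency, simulate and average_sync_error from the earlier sketches; the range of \(L\) is an assumption, while \(\sigma=0.05\) and \(p=0.1\) follow the caption of Fig. 2.

```python
import numpy as np

# sigma = 0.05 and p = 0.1 follow the caption of Fig. 2; the range of L is assumed.
G = newman_watts_adjacency(N=100, k=2, p=0.1)
L_values = np.arange(0.0, 16.0, 1.0)
errors = [average_sync_error(simulate(G, sigma=0.05, L=L_val)[0]) for L_val in L_values]
```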

Fig. 2

The relationship between the average synchronization error \(e_{\textrm{mean}}\) and the learning coefficient \(L\) \((\sigma=0.05, p=0.1)\)

It can be seen that the average synchronization error first decreases sharply and then oscillates around 0.5, which means that increasing the synaptic learning coefficient helps synchronization but cannot by itself make the network achieve complete synchronization. This implies that in neuronal networks the learning function of electrical synapses should be neither too strong nor too weak, but appropriate, in order to maintain synchronizability.

The variations of the average synchronization error with increasing coupling strength and with increasing adding probability are shown in Fig. 3a, b, respectively.

Fig. 3

a The relationship between the average synchronization error \(e_{\textrm{mean}}\) and the coupling strength \(\sigma\) \((L=2, p=0.1)\); b The relationship between the average synchronization error \(e_{\textrm{mean}}\) and the adding probability \(p\) \((L=2, \sigma=0.01)\)

It can be seen from Fig. 3a that when the coupling strength increases to \(\sigma=0.06\), the average synchronization error decreases to zero, which means that the network achieves complete synchronization. Hence, increasing the coupling strengths among neurons can eventually make the network synchronize completely. It can also be seen from Fig. 3b that when the adding probability increases to \(p\approx 0.7\), the average synchronization error decreases to zero. Therefore, in neuronal networks with synaptic plasticity, the introduction of shortcuts can also improve the synchronizability of the network.

4 Conclusion

Motivated by the facts that actual biological neuronal networks have small-world connectivity and that the connection strengths among neurons change dynamically, we study the synchronization of electrically coupled neuronal networks with synaptic plasticity. The variation of the connection weights is studied first. Then the effects of the synaptic learning coefficient, the coupling strength and the adding probability on synchronization are studied respectively. It is shown that an appropriate learning coefficient helps improve the synchronizability of the network, and that increasing the coupling strength or the adding probability can eventually make the neuronal network synchronize completely.