Background

Functional organisation in neural networks is believed to arise from synaptic plasticity. Spike-timing-dependent plasticity (STDP) is a candidate mechanism for such plasticity that has received considerable experimental support; it has been studied in feed-forward network architectures [1] and, more recently, in recurrently connected networks [2], which correspond to more biologically realistic situations.

Models and methods

We investigate the dynamics of recurrently connected linear Poisson neurons excited by steady external pulse trains with a given correlation structure [1, 2]. The input synaptic weights are modified in accordance with "additive" STDP, which induces both rate-based and pairwise-correlation-based changes [1, 2]; the recurrent weights are kept fixed. The network dynamics (the evolution of the firing rates, the correlations, and the weights) is described by a dynamical system [2]. We focus here on symmetry breaking among the input connections in the case of two input pools with identical characteristics (spikes are correlated within each pool but not between pools), with particular attention to the influence of the recurrent connections.
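
The full model is specified in [2]. As a rough illustration of the setting only, the following Python sketch simulates discrete-time linear (Hawkes-like) Poisson neurons driven by two pools of within-pool-correlated inputs, with trace-based additive STDP on the input weights and fixed recurrent weights. All numerical values (N, M, rates, correlation level, learning rate, etc.) are illustrative assumptions, not the parameters of the study.

import numpy as np

rng = np.random.default_rng(0)

# --- illustrative parameters (assumed, not taken from the paper) ---
N, M = 30, 50              # recurrent neurons; inputs per pool (2 pools)
dt = 1e-3                  # time step (s)
steps = 200_000            # 200 s of simulated time
nu_in, nu0 = 20.0, 5.0     # input rate and background drive (Hz)
c = 0.3                    # spike correlation within each pool, none between
eps = 0.02                 # synaptic efficacy scaling (dimensionless)
tau = 20e-3                # STDP trace time constant (s)
eta = 5e-5                 # learning rate
A_pot, A_dep = 1.0, 1.05   # additive STDP with a slight depression bias
w_max = 1.0                # hard bounds, as required by additive STDP

J = 0.05 / N * (np.ones((N, N)) - np.eye(N))   # fixed recurrent weights
W = rng.uniform(0.45, 0.55, size=(N, 2 * M))   # plastic input weights

x_tr = np.zeros(2 * M)     # presynaptic spike traces
y_tr = np.zeros(N)         # postsynaptic spike traces
y = np.zeros(N)            # postsynaptic spikes from the previous step
decay = np.exp(-dt / tau)

for _ in range(steps):
    # correlated Poisson inputs: each pool thins its own common source,
    # giving rate nu_in per input and pairwise correlation ~c within a pool
    x = np.zeros(2 * M)
    for p in range(2):
        if rng.random() < nu_in * dt / c:             # shared-source spike
            x[p * M:(p + 1) * M] = rng.random(M) < c  # independent thinning

    # linear Poisson neurons: the spike probability per bin is a linear
    # function of the current input spikes and the previous network spikes
    p_spk = np.clip(nu0 * dt + eps * (W @ x + J @ y), 0.0, 1.0)
    y = (rng.random(N) < p_spk).astype(float)

    # additive STDP via exponential traces (all-to-all pairing):
    # pre-before-post potentiates, post-before-pre depresses
    W += eta * (A_pot * np.outer(y, x_tr) - A_dep * np.outer(y_tr, x))
    np.clip(W, 0.0, w_max, out=W)

    x_tr = x_tr * decay + x
    y_tr = y_tr * decay + y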

Results

When there are no recurrent connections, symmetry breaking occurs among the input connections with 50% probability (Fig. 1; each horizontal line represents the input weights onto one neuron, from input pool #1 in the left half and from input pool #2 in the right half; bright colours indicate potentiated weights and dark colours silent weights). With full recurrent connectivity, the neurons tend to specialise completely in one or the other input pool (Fig. 2). In the case of two distinct recurrently connected groups of neurons, each group may specialise in a distinct input pool.
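
A simple way to quantify the specialisation visible in Figs. 1 and 2 (a hypothetical diagnostic, not taken from the paper) is a per-neuron selectivity index computed from the final weight matrix W of the sketch above:

def selectivity(W, M):
    """Index in [-1, 1]: +1 means fully specialised in pool #1,
    -1 fully specialised in pool #2, and 0 means no symmetry breaking."""
    w1 = W[:, :M].mean(axis=1)    # mean weight from input pool #1
    w2 = W[:, M:].mean(axis=1)    # mean weight from input pool #2
    return (w1 - w2) / (w1 + w2 + 1e-12)

s = selectivity(W, M)
print("fraction of specialised neurons:", np.mean(np.abs(s) > 0.8))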

Conclusion

The presence of recurrent connections affects the learning of the input connections. This phenomenon relates to the emergence of self-organised maps in neural networks, in which distinct areas become sensitive to only some of the input pathways. Further investigation is needed to better understand unsupervised learning, in particular with plasticity acting on both the input and the recurrent connections.

Figure 1

Case of no recurrent connections.

Figure 2

Case of full recurrent connectivity.