Introduction

Cognitive aging seems to be a process of global degradation. Performance in psychological tests of fluid intelligence, such as Raven's Advanced Progressive Matrices, tends to decrease with age [1]. These results stand in strong contrast to performance improvements in everyday situations [2]. We therefore hypothesize that the observed aging deficits are partly caused by the optimization of cognitive functions through learning.

Model

To provide evidence for this hypothesis, we consider a neural memory model that allows for associative recall by pattern matching as well as for "fluid" recombination of memorized patterns by dynamical activation. In networks with dynamical synapses, critical behavior is a generic phenomenon [3]; it may provide an optimal trade-off between speed and completeness in the exploration of a large set of feature combinations, such as required in Raven's test. The model also captures the life-long improvement of crystallized intelligence through Hebbian learning of the network connectivity during exposure to a set of neural activity patterns.
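To make the two ingredients concrete, the following minimal Python/numpy sketch (not the actual model of the paper) stores a few sparse random patterns with a Hebbian covariance rule and performs associative recall by thresholded pattern completion. The network size, the sparseness f, the number of patterns, and the firing threshold are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200   # number of units (illustrative size)
P = 10    # number of stored activity patterns
f = 0.1   # fraction of active units per pattern (sparse coding)

# Random sparse binary patterns standing in for the learned activity patterns.
patterns = (rng.random((P, N)) < f).astype(float)

# Hebbian learning of the connectivity: a covariance rule strengthens links
# between units that are co-active in a stored pattern.
W = np.zeros((N, N))
for xi in patterns:
    W += np.outer(xi - f, xi - f)
W /= N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20, theta=0.03):
    """Associative recall by pattern matching: asynchronous threshold updates.

    theta is an illustrative firing threshold chosen to separate the signal
    from the crosstalk at this sparseness.
    """
    s = cue.copy()
    for _ in range(steps):
        for i in rng.permutation(N):
            s[i] = 1.0 if W[i] @ s > theta else 0.0
    return s

# Cue the network with a corrupted version of the first pattern.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] = 1.0 - cue[flip]

out = recall(cue)
recovered = (out @ patterns[0]) / max(patterns[0].sum(), 1.0)
spurious = int(out.sum() - out @ patterns[0])
print(f"fraction of pattern recovered: {recovered:.2f}, spurious active units: {spurious}")
```

The "fluid" recombination aspect is not part of this sketch; it corresponds to the free-running avalanche dynamics illustrated after the Results section.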

Results

Synaptic adaptation is shown to cause a breakdown of the initial critical state, which can be explained by the formation of densely connected clusters within the network that correspond to the learned patterns. Avalanche-like activity waves increasingly tend to remain inside a single cluster, reducing the exploratory effect of the network dynamics, while retrieval of patterns stored in the early phase of learning remains possible. Mimicking Raven's test, we presented the model with new combinations of previously learned subpatterns at various stages of learning. "Young" networks with comparatively low memory load achieve more stable activations of the new feature combinations than "old" networks. This agrees with the behavior of the two network types in the free-association mode, where only the "young" networks are close to a self-organized critical state. The speed and extent of the loss of criticality depend on properties of the connectivity structure the network evolves toward during learning.
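The confinement of avalanches to learned clusters can be illustrated with a deliberately simplified Python/numpy sketch, again not the paper's model: cascades are seeded on a homogeneous "young" network and on a clustered "old" network, with a fixed transmission probability tuned near the critical value in place of the dynamical synapses used in the actual simulations. Network size, cluster count, and connection probabilities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200  # number of units (illustrative)

def homogeneous_net(k_mean=10.0):
    """Unstructured random connectivity, standing in for a 'young' network."""
    A = (rng.random((N, N)) < k_mean / N).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def clustered_net(n_clusters=10, p_in=0.45, p_out=0.005):
    """Densely connected clusters, standing in for a Hebbian-trained 'old' network."""
    labels = rng.integers(0, n_clusters, size=N)
    same = labels[:, None] == labels[None, :]
    A = (rng.random((N, N)) < np.where(same, p_in, p_out)).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def avalanche_sizes(A, trials=2000):
    """Seed one unit, propagate activity probabilistically, record cascade sizes.

    The transmission probability is fixed at 1 / (mean out-degree) so that the
    expected branching ratio is close to one; the paper's model reaches this
    regime through synaptic dynamics instead.
    """
    p_trans = 1.0 / A.sum(axis=1).mean()
    sizes = []
    for _ in range(trials):
        active = np.zeros(N, dtype=bool)
        frontier = np.zeros(N, dtype=bool)
        frontier[rng.integers(N)] = True
        active |= frontier
        size = 1
        while frontier.any():
            drive = frontier.astype(float) @ A            # spikes arriving at each unit
            prob = 1.0 - (1.0 - p_trans) ** drive         # chance of being recruited
            new = (rng.random(N) < prob) & ~active        # each unit fires at most once
            active |= new
            frontier = new
            size += int(new.sum())
        sizes.append(size)
    return np.array(sizes)

for name, A in [("young (homogeneous)", homogeneous_net()),
                ("old (clustered)", clustered_net())]:
    s = avalanche_sizes(A)
    print(f"{name}: mean avalanche size {s.mean():.1f}, largest {s.max()}, "
          f"fraction spanning more than half the network {np.mean(s > N / 2):.3f}")
```

In this toy setting, cascades on the clustered graph rarely grow beyond the size of a single cluster, reproducing in caricature the confinement of avalanches described above.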

Conclusion

While learning thus leads to impaired performance in unusual situations, it may at the same time compensate for the decline in fluid intelligence, because the lifelong optimization of memory patterns makes experience-based guesses possible even in complex situations.