Abstract
The hippocampal formation is critical for episodic memory, with area Cornu Ammonis 3 (CA3) a necessary substrate for auto-associative pattern completion. Recent theoretical and experimental evidence suggests that the formation and retrieval of cell assemblies enable these functions. Yet, how cell assemblies are formed and retrieved in a full-scale spiking neural network (SNN) of CA3 that incorporates the observed diversity of neurons and connections within this circuit is not well understood. Here, we demonstrate that a data-driven SNN model quantitatively reflecting the neuron type-specific population sizes, intrinsic electrophysiology, connectivity statistics, synaptic signaling, and long-term plasticity of the mouse CA3 is capable of robust auto-association and pattern completion via cell assemblies. Our results show that a broad range of assembly sizes could successfully and systematically retrieve patterns from heavily incomplete or corrupted cues after a limited number of presentations. Furthermore, performance was robust with respect to partial overlap of assemblies through shared cells, substantially enhancing memory capacity. These novel findings provide computational evidence that the specific biological properties of the CA3 circuit produce an effective neural substrate for associative learning in the mammalian brain.
1 Introduction
Episodic memory is a fundamental cognitive operation that links together the contents of a present experience (spatial, temporal, sensory, and emotional) for future recall (Eichenbaum, 2004; Pfeiffer, 2020; Dragoi & Tonegawa, 2011; Stachenfeld et al., 2017). The hippocampal formation (HPF) is a critical substrate for episodic memory formation and retrieval, with area Cornu Ammonis 3 (CA3) crucial for auto-associative memories (Rolls, 2018). Auto-association and pattern completion are two circuit functions that involve the storage of individual experiences and their recall from a partial cue, respectively (Rebola et al., 2017). Neurophysiological studies highlight that these experiences are represented by the concurrent firing of a group or groups of excitatory pyramidal cells (PCs), known as neuronal ensembles or cell assemblies (Buzsáki, 2010; Farooq et al., 2019). Additionally, empirical evidence reveals a synaptic basis for these experiences, where the order and timing of spikes via long-term spike timing-dependent plasticity (STDP) is a key factor in strengthening synaptic conductance at PC-PC synapses (Feldman, 2012).
Open questions about CA3 as a substrate for memory concern the quality of the experiences remembered and the number of experiences stored: how well does CA3 recall experiences, and what is the memory capacity of CA3? Recall quality may be based on how the learned experience is encoded by cell assemblies and their corresponding connections, where changes can occur in the amplitude of excitatory postsynaptic potentials (Perez-Rosello et al., 2011) and in the number of AMPA receptors at postsynaptic sites on PCs (Feldman, 2012; Debanne et al., 1998; Mishra et al., 2016; Kakegawa et al., 2004). Additionally, the amount of information provided in the form of a cue to these cells can lead to graded re-activation of the memory through pattern completion (weak, moderate, or strong) (Neunuebel & Knierim, 2014). From a dynamical systems perspective, this may involve the CA3 network exhibiting attractor dynamics in response to a pertinent cue (Treves & Rolls, 1994; Hasselmo et al., 1995; Menschik & Finkel, 1998). These mechanisms depend not only on the specific input-output properties of CA3 PCs (Lazarewicz et al., 2002; Hemond et al., 2008, 2009), but also on considerably diverse inhibitory interneurons (Ascoli et al., 2009).
Concerning the network memory capacity, theoretical and empirical evidence suggests that there are four key factors in determining the number of memories stored in CA3: the number of PCs, the probability of connection between PCs, the size of cell assemblies, and the amount of overlap between cell assemblies. Estimates for the number of neurons, the PC-PC connection probability, and the size of cell assemblies have been offered based on various assumptions (Almeida et al., 2007; Guzman et al., 2016; Treves & Rolls, 1991). Additionally, estimates have been provided for the percentage of cells shared between cell assemblies, and the shared cells between assemblies provide a neural substrate for associations that enable representations of specific episodic memories (Gastaldi et al., 2021; Quian Quiroga, 2023). Estimates for the memory capacity of CA3 have been offered based on these factors in rats, though, to our knowledge, not in mice. However, these estimates relied on network models that did not reflect the neural and connection type diversity of the CA3 circuit (Almeida et al., 2007; Guzman et al., 2016; Treves & Rolls, 1991).
Hippocampome.org is an open access knowledge base of distinct neuron types in the rodent HPF (Wheeler et al., 2015, 2024). This resource identifies neuron types based on their primary neurotransmitter (glutamate or GABA) and the presence of axons and dendrites across distinct layers of each cytoarchitectonic area of the HPF: entorhinal cortex, dentate gyrus, CA3, CA2, CA1, and subiculum. Hippocampome.org provides for each neuron type experimental data regarding the expression of specific molecules (White et al., 2020), biophysical membrane properties (Ascoli & Wheeler, 2016), electrophysiological firing patterns in vitro and in vivo (Komendantov et al., 2019; Sanchez-Aguilera et al., 2021) and population size (Attili et al., 2019, 2022). Additionally, Hippocampome.org quantifies the connection probability and synaptic signals of directional pairs formed between a pre- and post-synaptic neuron type, known as potential connections, which are based on their axonal and dendritic distributions (Rees et al., 2017; Moradi & Ascoli, 2018, 2020; Tecuatl et al., 2021a, b). Also available on this web portal are computational models of neuronal excitability (Venkadesh et al., 2019) and short-term synaptic plasticity (Moradi et al., 2022) using the Izhikevich and Tsodyks-Markram formalisms, respectively.
Utilizing Hippocampome.org, we previously created a computational circuit model of the mouse CA3 that featured a selection of neuron types and potential connections chosen to represent the neural diversity of this area (Kopsick et al., 2023). Additionally, the in silico implementation of this model as a spiking neural network (SNN) in the GPU-based simulation environment CARLsim6 can capture the individual spike times of every neuron, and can track changes in synaptic weight at each connection (Niedermeier et al., 2022). This makes the Hippocampome.org-derived CA3 SNN particularly useful for elucidating mechanisms for auto-association and pattern completion.
The present work investigates whether a SNN that reflects the scale, diversity, and biological properties of the mouse CA3 can form and retrieve patterns via cell assemblies. We demonstrate that this SNN has activity consistent with what has been observed in vivo, and that patterns are auto-associated and completed robustly with minimally informative cues that stem from cell assembly formation and retrieval, respectively. Additionally, we report that a range of assembly sizes can support pattern completion after a limited number of repeated presentations. Furthermore, when cells are shared between assemblies, auto-association and pattern completion remain nearly unaltered, suggesting that individual representations can be strongly retrieved while still providing a basis for overlapping experiences. Moreover, this finding offers a potential mechanism supporting a substantial expansion of memory capacity in the CA3 circuit.
2 Results
2.1 Can a full-scale CA3 SNN store and retrieve patterns via cell assemblies?
To answer this first research question, we utilized our full-scale SNN of the mouse CA3, which exhibited rhythmic network activity that was stable and robust in response to synchronous or asynchronous transient inputs, reflecting resting-state behaviors (Kopsick et al., 2023). This model consisted of 8 neuron types and 51 connection types and was instantiated with 84,053 neurons and 176 million connections (Fig. 1A; Tables 1 and 2). Starting from this architecture, we sought to understand how CA3 could embed experiences occurring during wakefulness via cell assemblies for later recall. To create cell assemblies, a symmetric STDP learning rule was implemented in the SNN (Mishra et al., 2016): \(\Delta w = A e^{-|\Delta t|/\tau}\), where \(A\) determines the peak amplitude of the weight change, \(\tau\) is the decay time constant, and \(\Delta t\) is the time difference between the post- and pre-synaptic spikes. Values for each parameter were set to best approximate the symmetric exponential decay curve observed experimentally (Mishra et al., 2016) (Materials and Methods; Fig. 1B).
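The symmetric STDP rule above can be sketched in a few lines of Python; the values of A and τ below are placeholders for illustration, not the fitted parameters reported in the paper:

```python
import numpy as np

def symmetric_stdp(delta_t_ms, A=0.01, tau_ms=20.0):
    """Symmetric STDP weight change: dw = A * exp(-|dt| / tau).

    Potentiation depends only on the magnitude of the spike-time
    difference, not its sign, matching the symmetric kernel observed
    by Mishra et al. (2016). A and tau_ms are placeholder values.
    """
    return A * np.exp(-np.abs(delta_t_ms) / tau_ms)
```

Because the kernel is even in \(\Delta t\), pre-before-post and post-before-pre pairings at the same interval produce identical potentiation.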
We presented input patterns during a training phase that elicited concomitant firing in distinct subsets of PCs. This approach was inspired by a recent study (Guzman et al., 2016) which demonstrated through functional connectivity analysis and network modeling that cell assemblies formed within CA3 from the application of different input patterns to subsets of CA3 PCs. In this work, each pattern lasted the length of a gamma cycle (20 ms) and was activated within an overarching theta cycle (200 ms), inspired by how cell assemblies are theorized to form in vivo according to a theta-gamma neural code (Buzsáki, 2010; Lisman & Jensen, 2013) (Fig. 2a, c). After training, a degraded form of each input pattern was provided during a testing phase to evaluate the pattern completion capability of the SNN. Pattern degradation consisted of eliciting concomitant firing in a smaller subset of PCs than the subset used during training; the test consisted of ascertaining whether this subset could retrieve the full pattern during the second half of the gamma cycle through activation of recurrent PC connections (Fig. 2b, d).
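The theta-nested gamma presentation protocol can be illustrated with a simple scheduler. The slot offsets within each theta cycle are an assumption for illustration; the paper does not specify exact placements:

```python
def presentation_schedule(n_theta_cycles, n_patterns=3,
                          theta_ms=200, gamma_ms=20):
    """Assign each pattern a 20 ms gamma slot inside each 200 ms theta
    cycle. Returns a list of (pattern_id, start_ms, end_ms) tuples;
    consecutive slot placement is an illustrative assumption."""
    schedule = []
    for cycle in range(n_theta_cycles):
        t0 = cycle * theta_ms
        for p in range(n_patterns):
            start = t0 + p * gamma_ms
            schedule.append((p, start, start + gamma_ms))
    return schedule
```

For example, `presentation_schedule(2)` yields six 20 ms slots, three per theta cycle, one per pattern.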
The full-scale network exhibited asynchronous population activity while patterns were not presented, with each neuron type firing at rates consistent with those observed for these types in vivo (Table 3). When patterns were presented, sparse firing of PCs was relegated primarily to assembly members, while the activity of each interneuron type remained similar to non-presentation periods (Fig. 3a, c). Between training and testing, all PC-PC synaptic weights were re-normalized via synaptic divisive downscaling based on the synaptic homeostasis hypothesis (Tononi & Cirelli, 2003, 2006, 2014). In order to test the specificity of auto-association and pattern completion, we trained the network with three distinct input patterns. Training (with 65 repetitions in this example) induced strong auto-association through the synaptic weights of PCs within the subset of PCs stimulated by each input pattern, thereby forming three cell assemblies. Synaptic weights between members of different assemblies and between PCs that did not belong to any assembly were similar to the synaptic weights before training had commenced (Fig. 3b). Strong auto-association within a subset of PCs stimulated by a given input pattern is indeed consistent with and expected from the cell assembly theory (Hebb, 1949; Supplementary Fig. 1). Stimulation of 50% of the input patterns provided during training (50% pattern degradation) led to robust activation of each assembly (Fig. 3d). Importantly, utilizing a normal distribution of starting PC-PC synaptic weights (consistent with the Hippocampome.org ranges; Moradi et al., 2022), as opposed to a fixed distribution, did not alter these results (Supplementary Fig. 2). Furthermore, our CA3 model exhibited fixed point attractor dynamics, as visualized by Principal Component Analysis (PCA; see Materials and Methods) in response to each of the three input patterns (Supplementary Fig. 3), consistent with previously theorized roles of CA3 as an attractor network (Rolls & Treves, 2024).
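The divisive downscaling step applied between training and testing can be sketched as follows, under the minimal assumption that downscaling restores the pre-training mean weight while preserving the learned relative differences:

```python
import numpy as np

def divisive_downscale(weights, target_mean):
    """Rescale all PC-PC weights by a common factor so their mean
    returns to the pre-training value. Relative (learned) weight
    differences are preserved, unlike subtractive normalization."""
    w = np.asarray(weights, dtype=float)
    return w * (target_mean / w.mean())
```

Because the operation is purely multiplicative, the ratio between any two weights is unchanged, which is why assembly structure survives the renormalization.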
In summary, we extended a previous data-driven, full-scale SNN of the mouse CA3 with experimentally-derived STDP and showed that (1) the network could store patterns via cell assemblies when trained with a biologically realistic stimulation protocol; and (2) cell assemblies retrieved their activity patterns when only provided a halved cue. This result allowed us to investigate the robustness of cell assembly retrieval across a variety of scenarios.
2.2 Can robust cell assembly retrieval occur across learning and with increasingly degraded cues?
A CA3 SNN capable of pattern storage and retrieval allows the characterization of two central aspects of auto-associative memory: the amount of repetition (learning) required for an experience to be stored and appropriately recalled, and the impact on performance when cues are degraded. Addressing these issues requires a metric to quantify the extent of pattern recall. To this aim, we defined pattern reconstruction based on a previously developed approach (Guzman et al., 2021) relying on Pearson correlation coefficients (PCCs): if the output pattern PCCs were greater than the input pattern PCCs, then pattern completion occurred (Materials and Methods). Our pattern reconstruction metric adapts this index to capture the degree of pattern completion by scaling the PCCs relative to the maximum value of 1, and converting the result to a percentage to obtain an intuitive expression of performance accuracy (Supplementary Fig. 4).
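One plausible reading of this metric can be sketched as follows; the exact definition, including how input and output PCCs are compared, is given in Materials and Methods:

```python
import numpy as np

def reconstruction_accuracy(stored, retrieved):
    """Pattern reconstruction as a percentage: the Pearson correlation
    coefficient between the stored pattern and the retrieved activity,
    scaled relative to the maximum value of 1 and clamped at zero.
    A sketch of one plausible reading of the paper's metric."""
    stored = np.asarray(stored, float)
    retrieved = np.asarray(retrieved, float)
    pcc = np.corrcoef(stored, retrieved)[0, 1]
    return max(pcc, 0.0) * 100.0
```

Perfect retrieval yields 100%, while uncorrelated or anti-correlated output yields 0%.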
With pattern reconstruction defined, we turn to the first question. We trained the CA3 SNN in sets of 5 presentations of, again, three distinct input patterns, which in the prior example created three corresponding cell assemblies. After each set of 5 presentations, we stored the synaptic weight matrices of the network to enable separate testing with 50% degraded input patterns. Interestingly, non-zero pattern reconstruction occurred with as few as 15 presentations of input patterns (Fig. 4a). Based on the second derivative of the reconstruction accuracy, 40 pattern presentations corresponded to the inflection point of most effective learning. Furthermore, a pattern completion plateau emerged at 55 presentations, with 65 and 95 presentations providing the strongest reconstruction accuracies, indicating the best pattern retrieval.
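The identification of the most effective learning point from the reconstruction curve can be illustrated on a synthetic sigmoid; as a simple stand-in for the second-derivative analysis, the sketch below locates the steepest point of the curve (the logistic parameters in the test are assumptions chosen to place that point at 40 presentations):

```python
import numpy as np

def inflection_presentation(presentations, accuracy):
    """Return the presentation count at the steepest point of the
    learning curve (maximum discrete first derivative), which for a
    sigmoidal curve coincides with its inflection point."""
    p = np.asarray(presentations, float)
    acc = np.asarray(accuracy, float)
    d1 = np.gradient(acc, p)
    return p[int(np.argmax(d1))]
```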
Turning to the second question, we utilized network structures trained on 40, 65, and 95 pattern presentation sets to assess how increased pattern degradation (i.e., increasingly diminished pattern cues) impacted pattern retrieval. Remarkably, pattern reconstruction remained substantial until a steep drop-off at 70% pattern degradation, and only weak pattern reconstruction occurred with 95% pattern degradation for each of the three network structures (Fig. 4b). Additionally, the similar performance of networks trained on 65 and 95 input repetitions highlighted that training beyond the initial plateau does not improve performance at more extreme pattern degradations.
Taken together, these results show that the CA3 SNN reliably encoded and retrieved patterns after as few as 40 presentation sets and upon reactivation of only a minority of PCs belonging to a cell assembly.
2.3 What assembly sizes can support pattern completion?
Another fundamental question is that of memory capacity: how many experiences can the network store and recall without interference? To address this question, we first consider the simple scenario in which all cell assemblies are fully segregated, that is, no neuron belongs to more than one assembly. In this case, the number of cell assemblies supported by the CA3 network is given by the total number of CA3 pyramidal cells divided by the assembly size, i.e., the number of CA3 pyramidal cells constituting each assembly. This factor is related to the sparseness ratio (γ), defined as the percentage of cells activated during an experience (Almeida et al., 2007). Theoretical insights and experimental evidence from humans and rats offered constraints for γ; using these constraints as a guide, we tested cell assembly sizes between 50 and 600 (0.067% ≤ γ ≤ 0.8%; Almeida et al., 2007; Guzman et al., 2016; Treves & Rolls, 1991; Waydo et al., 2006; Bennett et al., 1997; Materials and Methods).
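The relationship between assembly size, γ, and the number of fully segregated assemblies can be computed directly. The PC count below is an assumption consistent with the quoted γ range; the exact value appears in the paper's tables:

```python
N_PC = 74366  # assumed CA3 PC count, consistent with the quoted gamma range

def sparseness(assembly_size, n_pc=N_PC):
    """Sparseness ratio gamma = assembly size / number of PCs."""
    return assembly_size / n_pc

def max_segregated_assemblies(assembly_size, n_pc=N_PC):
    """Capacity when assemblies share no cells: floor(N_PC / size)."""
    return n_pc // assembly_size
```

With these values, assembly sizes of 50 and 600 give γ ≈ 0.067% and ≈ 0.8%, matching the tested range, and an assembly size of 275 yields 270 fully segregated assemblies.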
We trained networks on 40, 65, and 95 presentation sets to create assemblies of variable size and tested on patterns degraded by 50%. Interestingly, smaller sized assemblies performed best with fewer presentations (40 sets), while larger sized assemblies performed best with more presentations (65 and 95 sets) (Fig. 5a). Additionally, there was a stable range of assembly sizes between 150 and 600 where reconstruction accuracy improved with more training; the best performance occurred for an assembly size of 275 (0.33% of the total network size). Assembly sizes smaller than 150 with additional training performed worse due to pattern interference (Supplementary Fig. 5). Furthermore, as observed in the previous section, the choice of either 65 or 95 presentation sets within this range conferred similar pattern reconstruction accuracy. Moreover, application of a different synaptic downscaling method (subtractive normalization) or not normalizing synaptic weights at all led to comparable reconstruction accuracies; however, without downscaling, the stable range of assembly sizes was narrower, as accuracy decreased with additional training (Supplementary Fig. 6).
Utilizing SNNs trained on 40 and 65 presentation sets, we further tested pattern completion for the range of assembly sizes with increased pattern degradation percentages of 70 and 97.5% (Fig. 5b). Notably, assembly sizes of 100 and 150 displayed the best pattern completion in response to these highly degraded input patterns and exhibited weak pattern completion even when only 2.5% of an input pattern was provided. Therefore, in the presence of severely degraded input patterns, smaller assembly sizes (100 and 150) performed best in the SNN, whereas across moderate to high degradation levels an assembly size of 275 offered the best performance.
2.4 Can a full-scale CA3 SNN store and recall overlapping cell assemblies?
Our analysis so far assumed that no neuron could belong to more than a single cell assembly, but this is not necessarily the case in biological circuits. In fact, the extent of assembly overlap constitutes another key factor in determining memory capacity, because sharing neurons between cell assemblies can increase the number of experiences the network can encode (Quian Quiroga, 2023). Moreover, neurons shared between cell assemblies may facilitate hetero-association between episodic memories in CA3 (Gastaldi et al., 2021). Therefore, we investigated the storage and retrieval of patterns in the CA3 SNN when cell assemblies shared a subset of neurons (Fig. 6a).
To create (initially modest) overlaps between the three cell assemblies, we randomly selected 5% of neurons as shared between each pair of assemblies before training commenced (Materials and Methods). Following the usual procedure for storing cell assemblies and degrading input patterns by 50% during testing, overlapping cell assemblies retrieved patterns comparably to cell assemblies without overlaps (Fig. 6b). In particular, pattern reconstruction accuracy followed a similar trajectory with overlapping cell assemblies and had the same optimal point for learning of patterns and highest accuracy, which occurred at 40 and 65 presentations, respectively. Additionally, testing the overlapping cell assemblies in the presence of increased pattern degradation after training with 40 and 65 pattern presentation sets yielded reconstruction accuracies similar to the no-overlap case (Fig. 6c). Furthermore, in the presence of 5% overlap, cell assembly sizes between 200 and 600 supported strong pattern completion, again consistent with the range found for assemblies without shared cells (Fig. 6d); notably, however, overlap reduced the performance of smaller cell assemblies in the 50–150 range.
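The random assignment of shared cells before training can be sketched as follows; this is an illustrative procedure, and the paper's exact sampling may differ:

```python
import random

def overlapping_assemblies(n_pc, size, overlap_frac=0.05, seed=0):
    """Draw three cell assemblies in which each pair shares
    overlap_frac * size randomly selected cells, chosen before
    training. All draws come from a shuffled pool, so cells are
    never reused outside their designated shared pair."""
    rng = random.Random(seed)
    cells = list(range(n_pc))
    rng.shuffle(cells)
    it = iter(cells)

    def take(k):
        return [next(it) for _ in range(k)]

    n_shared = int(round(size * overlap_frac))
    shared = {pair: take(n_shared) for pair in [(0, 1), (0, 2), (1, 2)]}
    assemblies = []
    for a in range(3):
        members = [c for pair, cs in shared.items() if a in pair for c in cs]
        members += take(size - len(members))
        assemblies.append(set(members))
    return assemblies
```

Each assembly thus contains its pairwise shared cells plus enough unique cells to reach the target size.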
Next, we varied the overlap percentage from 1 to 50% for an assembly size of 275 trained on 40 presentation sets and tested with 50% degraded input patterns. Remarkably, pattern reconstruction accuracy changed only minimally up to 20% overlap and remained above 30% even at 50% overlap. However, pattern specificity, which we defined as the percentage of activated neurons belonging to the designated assembly relative to those activated outside of it (see Materials and Methods), substantially decreased with overlap (Fig. 7).
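Under one plausible reading, the pattern specificity measure can be sketched as follows; the exact definition is in Materials and Methods:

```python
def pattern_specificity(active, assembly):
    """Percentage of active cells that belong to the cued (designated)
    assembly. A sketch of one plausible reading of the specificity
    measure; returns 0 when no cells are active."""
    active, assembly = set(active), set(assembly)
    if not active:
        return 0.0
    return 100.0 * len(active & assembly) / len(active)
```

As overlap grows, more cells outside the designated assembly fire during recall, driving this percentage down even while reconstruction accuracy stays high.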
Auto-association and pattern completion of cell assemblies reflect the structural and functional components of memory formation and recall within the CA3 circuit, respectively, and SNNs can help reveal the underlying link between structure and function (Buzsáki, 2010; Lisman, 1999). We investigated this relationship by tracking two characteristics of PC-PC synapses throughout training: the auto-association signal-to-noise ratio (SNR) and the percentage of assembly synapses that had reached the maximum weight (Materials and Methods). It is especially interesting to analyze if and how these characteristics relate to the observed pattern completion performance. In this regard, we observed an auto-association SNR plateau occurring in assemblies trained both with and without overlap: further improvements in reconstruction accuracy became inconsequential above 94% SNR (Fig. 8a). This is consistent with the influence of the number of presentations on pattern completion, where training beyond 60 presentation sets did not significantly improve retrieval (cf. Figure 6b). Furthermore, at both 50% and 70% pattern degradation, reconstruction accuracy reached values close to optimal performance when only 10% of assembly synapses had reached their maximum weight with or without overlap (Fig. 8b). This indicates that effective learning in the CA3 SNN does not require synaptic saturation.
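The two synaptic characteristics tracked during training can be sketched as below. The SNR formula is one plausible reading (signal expressed as a fraction of signal plus noise, so it ranges from 0 to 100%), not necessarily the paper's exact definition:

```python
import numpy as np

def auto_association_snr(w_within, w_outside):
    """Auto-association SNR as a percentage: mean within-assembly
    weight relative to the sum of within- and outside-assembly means.
    An illustrative reading; see Materials and Methods for the exact
    formula."""
    s = np.mean(w_within)
    n = np.mean(w_outside)
    return 100.0 * s / (s + n)

def frac_saturated(weights, w_max, tol=1e-9):
    """Percentage of assembly synapses at the maximum weight."""
    w = np.asarray(weights, float)
    return 100.0 * np.mean(w >= w_max - tol)
```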
Taken together, these results highlight that strong retrieval occurs at moderate SNRs and when most assembly synapses are below their maximum weight. Moreover, overlapping cell assemblies retrieve patterns comparably to non-overlapping assemblies, supporting the use of shared neurons to enhance auto-associative memory capacity.
3 Discussion
The present work demonstrates that a biologically realistic SNN of the mouse CA3, with cell type-specific parameters of neuronal excitability, connection probabilities, and synaptic signaling all extracted from experimental measurements, can store and recall auto-associative memories via cell assemblies. Notably, cell assembly formation and retrieval relies on a training and testing paradigm grounded by in vivo neurophysiology (Lisman & Jensen, 2013). In particular, strong pattern reconstruction reliably occurs in the SNN in response to heavily incomplete or degraded input cues. Furthermore, auto-associative pattern completion in our model is robust across a broad range of assembly sizes and in the presence of assembly overlap, two critical factors to determine auto-associative memory capacity in CA3.
Training our CA3 SNN to an optimum point enabling strong pattern completion, yet well before most assembly synapses reach their maximum weights, may reflect how the real CA3 stores and recalls memories. Rather than maximizing post-synaptic conductance in pyramidal cells, a tradeoff with synaptic downscaling (possibly during slow-wave sleep) could support the storage of many patterns with minimal pattern interference. Additionally, these insights may be useful in training artificial neural networks, where training a network on multiple tasks in parallel with reasonable performance, instead of optimizing accuracy on a single task, could prevent catastrophic forgetting (Kemker et al., 2018; Kumaran et al., 2016).
The hippocampus may facilitate “one-shot” learning, i.e. rapid memory encoding from just a single experience (Moser & Moser, 2003). In a previous study, training a rat CA3 network model to store patterns with a clipped Hebbian plasticity rule enabled encoding of these memories in a one-shot manner (Guzman et al., 2016). However, one-shot encoding may not be prominent in the real rodent hippocampus, as animals typically spend weeks to learn a spatial or novel object location memory task before testing begins, and even then strong performance often requires multiple trials (Pfeiffer, 2020; Nakazawa et al., 2002; Montgomery & Buzsáki, 2007; Neves et al., 2022; Siegle & Wilson, 2014). During this time the hippocampus goes through many encoding, consolidation, and retrieval phases, when theta, gamma, and sharp-wave ripples contribute to cell assembly formation, refinement, and recall (Buzsáki, 2019). Therefore, our simulation design subjected the CA3 SNN to a training phase representative of encoding during experience through theta-nested gamma oscillations (Lisman & Jensen, 2013). Moreover, modification of synaptic weights within assemblies between training and testing reflected synaptic downscaling during slow-wave sleep (González-Rueda et al., 2018). With this protocol, heavily degraded or incomplete cues reliably triggered strong pattern completion-mediated recall of experiences, in line with the expected role of CA3 in auto-associative memory.
Our results of robust pattern completion using circuit parameters measured from anatomical and physiological experiments complement and extend previous modeling work. A network model consisting of CA3 PCs and two interneuron types receiving inputs from the entorhinal cortex and dentate gyrus showed that, when patterns were strongly degraded, pattern completion could still occur within one recall cycle, known as simple recall (Bennett et al., 1997; Hummos et al., 2014). Neither the network model with PCs and two interneuron types nor the rat CA3 network model mentioned above (Guzman et al., 2016), however, constrained the simulation based on both the size and the diversity of the CA3 circuit. Another recent model of pattern completion in CA3 reflected the mouse network size, but again not the neuronal and synaptic diversity (Sammons et al., 2024). Therefore, to our knowledge, this work provides clear evidence of robust pattern completion in the most realistic full-scale SNN model of the mouse CA3 to date, including data-driven, cell type-specific parameters of neuronal excitability, connection probabilities, and synaptic signaling.
The cell assemblies formed and retrieved in this work involved zero to 50% shared cells between them. It is likely that cell assemblies have at least some level of overlap, as assemblies created randomly with coding sparseness ratio γ would be expected to share γ²N cells in common (Gastaldi et al., 2021). Our results with substantial overlap demonstrate that the neuronal and synaptic physiology of the CA3 circuit are well suited to support pattern completion even if non-zero overlap exists between assemblies in the mouse hippocampus in vivo, as recent empirical evidence demonstrates in the mouse primary visual cortex (Carrillo-Reid et al., 2019). Additionally, our results highlighting weak pattern specificity with high (≥ 40%) overlap might be improved by cholinergic presynaptic inhibition to prevent interference during encoding, as demonstrated in previous studies of pattern completion (Hasselmo et al., 1995; Hasselmo & Wyble, 1997).
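The expected overlap between randomly drawn assemblies follows directly from γ:

```python
def expected_shared_cells(gamma, n_cells):
    """Expected number of cells shared by two independently drawn
    assemblies with sparseness gamma in a population of n_cells:
    gamma**2 * N (Gastaldi et al., 2021)."""
    return gamma ** 2 * n_cells
```

For instance, two assemblies each activating 10% of a 1,000-cell population would be expected to share 10 cells by chance alone.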
The strong retrieval by smaller assembly sizes below 40 presentations, and their poor retrieval beyond 65 presentations, was due to the interplay of cell assembly dynamics with the STDP learning rule. Specifically, in our model, smaller assemblies have a higher learning rate (parameter A in Table 4), so as to balance out the within-assembly maximum synaptic weights after 100 presentations (cf. ‘Long-term synaptic plasticity’ in Materials and Methods). The STDP rule, however, is applied to all PC-PC synapses, including not only those between assembly members, but also those between non-assembly members. Thus, all PC-PC synapses are more strongly potentiated in the training of smaller assemblies than of their larger counterparts. As a consequence, assemblies with fewer than 100 cells reach a critical point during training with the emergence of large network activity due to strongly increasing weights between non-assembly member synapses. This activity in turn causes a further increase in the weights of all PC-PC synapses. For assembly size 75, this happens at 95 presentation sets, while for size 50, it happens at 65 sets, consistent with Fig. 5a. However, this never happens at the lower learning rates associated with larger cell assemblies. With the synaptic weights excessively strengthened throughout the network, application of downscaling, which brings the mean of all PC-PC synapses back to the before-training value, leaves the smaller-sized cell assemblies without sufficient excitatory drive to complete the cued pattern, effectively preventing storage and retrieval. Notice that this mechanism is not binary, but rather continuous. For example, even after 65 presentations with assembly size 75, the synaptic weights between assembly and cross-assembly members (using the terminology of Fig. 3b and Supplementary Fig. 2a, b) became larger, leading to more noise, reflected in partial interference between assemblies when recalling the patterns (Supplementary Fig. 5).
The best performance across the range of assembly sizes examined in this study, when considering varying levels of cue degradation, lengths of training, and overlap, occurred with an assembly size between 250 and 300 neurons. Intriguingly, 275 is approximately the square root of the number of PCs in the mouse CA3 network. It is tempting to speculate that hippocampal cell assemblies in vivo optimally form in accordance with the square root of the number of PCs, at least in rodents: based on the values of γ reported in previous studies, the square root relation would hold for rats, but not for humans. However, these estimates for γ are based on indirect evidence, including the number of hippocampal place cells active in each environment and the number of concept cells active when presenting a concept, based on simultaneous single- and few-neuron recordings (Almeida et al., 2007; Quiroga, 2012).
Estimation of memory capacity in CA3 has previously involved the use of the connection probability, c, between CA3 PCs and γ. Utilizing Willshaw’s formula, which estimates capacity for both non-overlapping and overlapping assemblies as \(P = c/\gamma^2\), the capacity of the mouse CA3 would be on the order of 2,000 patterns (Almeida et al., 2007). Another formula was proposed by Treves and Rolls, which considers the number of recurrent collateral (RC) connections onto each PC, \(C^{\mathrm{RC}}\), a scaling factor reflecting the total amount of information that can be stored and retrieved from the RCs, k, and γ (Rolls, 2018; Treves & Rolls, 1991). Estimating capacity with their formula, \(P = \frac{C^{\mathrm{RC}}}{\gamma \ln(1/\gamma)}k\), the mouse CA3 could store on the order of 18,000 patterns. However, these formulas do not consider overlap directly as a variable, which may in principle allow a substantial increase in storage capacity. Collaborating with a different group of colleagues, one of the authors recently discovered an exact, closed-form solution for how memory capacity in our model varies as a function of network size, cell assembly size, and cell assembly overlap. The expression, derivation, proof, and analysis of such a mathematical formula will be the subject of a separate, forthcoming manuscript.
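Both capacity formulas are straightforward to evaluate; the parameter values in the usage below are illustrative, not the paper's mouse CA3 estimates:

```python
import math

def willshaw_capacity(c, gamma):
    """Willshaw estimate: P = c / gamma**2."""
    return c / gamma ** 2

def treves_rolls_capacity(c_rc, gamma, k):
    """Treves-Rolls estimate: P = C_RC / (gamma * ln(1/gamma)) * k."""
    return c_rc / (gamma * math.log(1.0 / gamma)) * k
```

Note that both estimates grow rapidly as γ shrinks, which is why sparse coding is central to capacity arguments for CA3.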
The sigmoidal relationship between training amount and pattern reconstruction accuracy provides a means for comparison with how a real rodent learns and retrieves meaningful representations during memory-related tasks. During both goal-directed navigation tasks in an open field and in a four-arm maze, four task trials enabled performance above chance (Pfeiffer, 2022; Belchior et al., 2014). With 5 theta cycles usually occurring when a place cell occupies its preferred place (Lisman & Jensen, 2013), and given the equivalence in our model between one presentation and one theta cycle, performing above chance level at 4 task trials would correspond to 20 presentations, which intriguingly is the case in our model (Fig. 4a). A rodent reaching its best performance likely depends on task complexity, as best performance for the open field and four-arm maze occurs within 10 and 50 trials, respectively. Our result of best performance at 65 presentations (13 trials) may be more representative of open field foraging as opposed to navigation in a four-arm or an H-maze (Siegle & Wilson, 2014). Furthermore, as mentioned above, sharp-wave ripples also assist in strengthening representations (Malerba et al., 2019), and these occur predominantly between trials during quiet wakefulness and during sleep (Buzsáki, 2015). Thus, the time between trials must not be discounted when making future predictions of rodent memory task performance.
While our CA3 SNN supported pattern completion via strong cell assembly retrieval, it did so with simple recall, i.e., recall from a degraded cue within a single gamma cycle (Bennett et al., 1997). The degraded patterns we presented to the network thus served as cues that consisted solely of the activation of assembly members and could be successfully recalled within a single cycle. Our work could be expanded to a different type of pattern degradation through progressive recall, i.e., the activation of both assembly and non-assembly member PCs in the testing cue, which involves retrieval beyond a single gamma cycle. Testing pattern degradation in this manner in future work would allow for more direct comparison with other CA3 models (Guzman et al., 2016).
The advent of large-scale recording technologies, including two-photon calcium imaging, Neuropixel probes, and hundred Stimulation Targets Across Regions (HectoSTAR), enabling the simultaneous monitoring of thousands of neurons, may soon make it feasible to measure more directly the size of hippocampal assemblies (Steinmetz et al., 2021; Zong et al., 2022; Vöröslakos et al., 2022). Such evidence might show that the size of assemblies in vivo could vary depending on the represented cognitive content, providing further guidance for how to extend our SNN model. Additionally, these recordings during cue mismatch tasks would pinpoint how many neurons are typically reactivated in response to degraded cues (Neunuebel & Knierim, 2014; Knierim, 2002), allowing a quantitative comparison with our results. Last but not least, large-scale recordings will likely highlight the variation in neuronal overlap between assemblies, facilitating the estimation of key factors determining the memory capacity of the CA3 circuit.
4 Materials and methods
4.1 Full-scale CA3 SNN
The selection of the neuron types constituting the CA3 SNN and the model parameters, including neuron type-specific excitability, population size, connection probabilities, and synaptic signaling, were developed and validated in prior work (Kopsick et al., 2023). Briefly, the SNN consists of PCs and seven interneuron types: Axo-axonic, Basket, Basket CCK+, Bistratified, Ivy, Mossy Fiber-Associated ORDEN (MFA-ORDEN), and QuadD-LM cells. The perisomatic targeting and axonal-dendritic overlaps between these eight neuron types give rise to 51 directional connections (Fig. 1a).
For each neuron type, we utilized experimentally-derived parameters from Hippocampome.org for both the neuronal input-output function, i.e., the spiking pattern produced in response to a given stimulation, and the neuron count. In particular, to balance biological realism with computational efficiency, we chose the Izhikevich 9-parameter, single-compartment dynamical systems framework (Izhikevich, 2007). The parameters reflect the following neuron type-specific properties: membrane capacitance (C), a constant that reflects conductance during spike generation (k), resting membrane potential (vr), instantaneous threshold potential (vt), a recovery time constant (a), a constant that reflects conductance during repolarization (b), spike cutoff value (vpeak), reset membrane potential (vmin), and a constant that reflects the currents activated during a spike (d). Hippocampome.org reports the parameter values that best fit the firing patterns reported in the literature for the corresponding neuron types (Venkadesh et al., 2019).
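As an illustration of this framework, the sketch below integrates the 9-parameter Izhikevich equations with a simple forward-Euler scheme (the published model uses CARLsim6's 4th-order Runge-Kutta integrator). The parameter values are the generic regular-spiking example from Izhikevich (2007), used here as an assumption rather than the Hippocampome.org CA3 fits.

```python
def izhikevich_spikes(C, k, vr, vt, a, b, vpeak, vmin, d, I, T=1000.0, dt=0.5):
    """Forward-Euler integration of the Izhikevich 9-parameter model for a
    constant input current I (pA); returns spike times in ms."""
    v, u, t, spikes = vr, 0.0, 0.0, []
    while t < T:
        dv = (k * (v - vr) * (v - vt) - u + I) / C
        du = a * (b * (v - vr) - u)
        v += dt * dv
        u += dt * du
        if v >= vpeak:      # spike cutoff reached
            spikes.append(t)
            v = vmin        # reset membrane potential
            u += d          # after-spike recovery increment
        t += dt
    return spikes

# Generic regular-spiking parameters (Izhikevich, 2007), an illustrative
# assumption, not the Hippocampome.org CA3 PC fit:
spikes = izhikevich_spikes(C=100, k=0.7, vr=-60, vt=-40, a=0.03, b=-2,
                           vpeak=35, vmin=-50, d=100, I=100)
```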
For neuron counts, we considered each neuron type in our network as a representative of its supertype family (hippocampome.org/morphology). Thus, the population size of each neuron type in the SNN is the sum of all neuron types of the given supertype. For example, the number of instantiated CA3 Axo-axonic cells in the model (i.e., the population size parameter value for this particular neuron type) consisted of the sum of Axo-axonic proper and Horizontal Axo-axonic cells (two variants of Axo-axonic neurons in CA3), which Hippocampome.org reports as 1,482 for the mouse. The population sizes and the 9 Izhikevich parameters for each of the 8 CA3 neuron types are shown in Fig. 1a and listed in Table 1, respectively.
Modeling neuron type-specific communication involves a description of the postsynaptic signal caused by a presynaptic spike and related short-term plasticity (STP), as well as the connection probability and delay between the presynaptic and the postsynaptic neuron types. We modeled synaptic dynamics with the 5-parameter Tsodyks-Markram framework (Tsodyks et al., 1998), for which Hippocampome.org reports experimentally-derived pre- and post-synaptic neuron type-specific values (Table 2): synaptic conductance (g), decay time constant (τd), resource recovery time constant (τr), resource utilization reduction time constant (τf), and portion of available resources utilized on each synaptic event (U). Note that this formalism captures unitary synaptic communication. As such, it reflects the total somatic effect of all synapses corresponding to connected neuron pairs. In the simulations involving a normal (as opposed to constant) distribution of initial weights, we derived the mean (0.5531 nS), standard deviation (0.1201 nS), minimum (0.3372 nS), and maximum (0.8971 nS) conductance values (g) for the PC-PC synapses from the ranges provided by Hippocampome.org (Moradi et al., 2022). Given the local scope of the CA3 circuit, all connections were modeled with a synaptic delay of 1 ms. Hippocampome.org also provides morphologically derived connection probabilities for each directional pair of rat neuron types (Tecuatl et al., 2021a, b), which we scaled for the mouse according to a fixed anatomical sizing ratio (Tecuatl et al., 2021a, b). The probabilities for all 51 connection types in the circuit are reported in Fig. 1a.
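The event-driven form of the Tsodyks-Markram model can be sketched as follows; the parameter values below are illustrative (a strongly depressing regime), not the Hippocampome.org estimates in Table 2.

```python
import math

def tm_amplitudes(spike_times, g, U, tau_r, tau_f):
    """Event-driven Tsodyks-Markram synapse: returns the peak conductance
    contributed by each presynaptic spike (same units as g)."""
    x, u = 1.0, 0.0          # available resources, utilization
    last_t, amps = None, []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u *= math.exp(-dt / tau_f)                    # utilization decays
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_r)   # resources recover
        u += U * (1.0 - u)   # utilization jumps at each spike
        amps.append(g * u * x)
        x -= u * x           # resources consumed by release
        last_t = t
    return amps

# Depressing regime (illustrative): amplitudes shrink across a 50 Hz train
# because resources recover slowly relative to the inter-spike interval.
amps = tm_amplitudes([0.0, 20.0, 40.0, 60.0], g=0.55, U=0.5,
                     tau_r=500.0, tau_f=20.0)
```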
Every instantiation of the simulation thus contained 84,053 neurons and 176 million synaptic connections on average. To elicit activity in the SNN, each neuron received a lognormal background current to model the upstream inputs CA3 receives from the dentate gyrus and entorhinal cortex (Mizuseki et al., 2013; Buzsáki & Mizuseki, 2014). The inputs were constrained to match the mean firing rates of each neuron type in the model with those observed in vivo (Table 3).
4.2 Range of assembly sizes
In order to define a range of assembly sizes to evaluate auto-association and pattern completion, we first considered the sparseness ratio of neural coding, γ, which is the average fraction of cells activated during an experience (Almeida et al., 2007). Available estimates for γ in humans and rats varied only slightly, from 0.1% (Guzman et al., 2016), through 0.23% (Waydo et al., 2006), to 0.3% (Almeida et al., 2007). This would correspond, for the number of PCs in mouse CA3, to a range of sizes between 75 and 225. The authors of the latter cited study, however, accompanied their estimate for assembly size (225) with a wider range (150–300) as well as cautionary lower and upper bounds of a factor of 2 in either direction (Neunuebel & Knierim, 2014). Furthermore, in the absence of precise experimental determinations, smaller values of γ could allow for larger storage capacity as long as recall from partial input could be maintained (Bennett et al., 1997). Based on these lines of reasoning, we set bounds of 0.067% ≤ γ ≤ 0.8%, corresponding to a range of assembly sizes between 50 and 600.
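The correspondence between γ and assembly size can be checked directly. The PC count of 75,000 below is an assumption inferred from the sizes quoted above (the full network of 84,053 neurons also includes interneurons).

```python
def assembly_size(gamma, n_pc=75_000):
    """Expected assembly size at sparseness gamma; n_pc = 75,000 is an
    assumed mouse CA3 PC count implied by the sizes quoted in the text."""
    return round(gamma * n_pc)

# Sparseness values from the text mapped to assembly sizes:
sizes = {g: assembly_size(g) for g in (0.00067, 0.001, 0.003, 0.008)}
```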
4.3 Long-term synaptic plasticity
In line with the notion that cell assemblies form via long-term plasticity (Miles et al., 2014), we adopted a symmetric (Hebbian) spike-timing dependent plasticity (STDP) learning rule between PCs (Mishra et al., 2016). Importantly, this symmetric STDP rule was observed between CA3 PCs in hippocampal CA3 slices of adult rodents, as opposed to a previous study of STDP that reported anti-symmetric STDP in cultured hippocampal neurons (Bi & Poo, 1998). The symmetric STDP was implemented as \(\Delta w = A e^{-|\Delta t|/\tau}\). Here, \(\Delta w\) is the change in synaptic weight, \(A\) determines the weight change when the pre- and post-synaptic neurons fire at exactly the same time, \(\tau\) is the plasticity decay time constant, and \(\Delta t\) is the temporal difference between the post- and pre-synaptic spikes. The value for \(\tau\) was set to 20 ms, which best approximated the symmetric exponential decay curve observed experimentally for CA3 PCs (Mishra et al., 2016) (Fig. 1B). The values for \(A\) varied based on the maximum CARLsim6 synaptic weight \(w_{max}^{*}\) between PCs, which in our model depended on cell assembly size. Specifically, since the firing of each PC is triggered by the convergent integration of all activated presynaptic PCs, we reasoned that the maximum synaptic weight of each synapse should be inversely proportional to the number of PCs in an assembly.
In initial pilot testing with an assembly size of 300, we found that a value \(w_{max}^{*}=20\) induced strong auto-association after 100 input pattern presentations. Therefore, we anchored the scaling of the maximum synaptic weight with assembly size to this value: for instance, SNNs with assembly sizes of 150 or 600 would have a \(w_{max}^{*}\) of 40 or 10, respectively. We then derived \(A\) so as to allow the synaptic weight to increase from the initial value before training (\(w_{init}^{*}=0.625\) in all our simulations) to \(w_{max}^{*}\) if all pre- and post-synaptic spikes were exactly coincident during training in the initial pilot settings. Since each of the 100 randomized spike trains during training contains 4 spikes on average, the resulting formula was \(A=\frac{w_{max}^{*}-0.625}{400}\), where \(w_{max}^{*}=\frac{6000}{size}\). Table 4 reports the maximum total synaptic conductance \((g_{max}=w_{max}^{*}\times g)\), \(w_{max}^{*}\) between PCs, and \(A\) for each assembly size used.
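A minimal sketch of this parameterization, reproducing the pilot anchoring at assembly size 300:

```python
import math

def stdp_params(size, w_init=0.625):
    """Maximum CARLsim weight and STDP amplitude for a given assembly size,
    following w*_max = 6000/size and A = (w*_max - w*_init)/400."""
    w_max = 6000.0 / size
    A = (w_max - w_init) / 400.0
    return w_max, A

def stdp_dw(delta_t_ms, A, tau=20.0):
    """Symmetric weight change for a post-pre spike lag delta_t (ms)."""
    return A * math.exp(-abs(delta_t_ms) / tau)

w_max, A = stdp_params(300)   # w_max = 20.0, the pilot setting
```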
4.4 Network training and testing protocol
Formation and retrieval of cell assemblies occurred in the CA3 SNN through dedicated training and testing phases. During the training phase, the SNN was presented with three input patterns, each consisting of the injected current required to activate firing in a specific subset of PCs whose size matched the set assembly size. The current injections triggered in each PC a randomized train of four spikes during a 20 ms (gamma) time window, with 200 ms (theta) time windows separating the presentations of subsequent input patterns. This protocol of patterns presented at 50 Hz within an encompassing 5 Hz rhythm (the “theta-gamma neural code” (Buzsáki, 2010; Lisman & Jensen, 2013; Bezaire et al., 2016)) resulted in the formation of three unique cell assemblies. After the initial randomization of spike trains in the first presentation of each input pattern, the same pattern was provided to each subset of PCs in every subsequent presentation, i.e., each stimulation pattern provided to a given subset of PCs was identical across all presentations.
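A possible implementation of this stimulus construction (a hypothetical helper, not the published training code) is:

```python
import random

def training_input(assembly, n_presentations, gamma_ms=20.0, theta_ms=200.0,
                   spikes_per_window=4, seed=0):
    """Spike times for one input pattern: each assembly PC receives the same
    randomized 4-spike train within a 20 ms gamma window, repeated once per
    200 ms theta cycle (identical across all presentations)."""
    rng = random.Random(seed)
    # Randomize each PC's 4-spike template once (first presentation)...
    template = {pc: sorted(rng.uniform(0.0, gamma_ms)
                           for _ in range(spikes_per_window))
                for pc in assembly}
    # ...then repeat it verbatim at every theta cycle.
    return {pc: [p * theta_ms + s
                 for p in range(n_presentations) for s in spikes]
            for pc, spikes in template.items()}

trains = training_input(range(3), n_presentations=2)
```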
Between training and testing, for network structures trained on successive blocks of five pattern presentations (i.e., structures trained on 5, 10, …, 100 presentations of a pattern set), each synaptic weight \(w^{*}\) between PCs was divided by a common factor such that the average \(w^{*}\) across all PC-PC synapses returned to \(w_{init}^{*}\). Rescaling synaptic weights in this manner is theorized to occur during slow-wave sleep, preserving synaptic weight distributions without eliminating the auto-association between assembly member PCs (Tononi & Cirelli, 2003, 2006, 2014). Additionally, re-scaling the network after every 5th pattern presentation was consistent with evidence that 5 theta cycles (corresponding to 5 pattern presentations) elapse while a mouse occupies its preferred place field when performing a spatial memory task (e.g., a T-maze, Y-maze, or open field foraging task) (Lisman & Jensen, 2013). This design ensures a proper training interval equivalent to one task trial between periods of rescaling.
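The rescaling step amounts to a single multiplicative normalization; a minimal sketch:

```python
def rescale_weights(weights, w_init=0.625):
    """Divide every PC-PC weight by a common factor so that the mean weight
    returns to the pre-training value w_init (homeostatic downscaling that
    preserves the shape of the weight distribution)."""
    factor = (sum(weights) / len(weights)) / w_init
    return [w / factor for w in weights]

scaled = rescale_weights([0.625, 1.25, 1.875])   # mean 1.25 -> mean 0.625
```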
Testing pattern completion involved providing degraded input patterns to the SNN during gamma and theta time windows as performed during training. Degradation of input patterns consisted of decreasing the percentage of assembly PCs firing together within the designated 20 ms period. The percentage of pattern degradation in this work ranged from 25 to 97.5%.
To visualize the network attractor dynamics during the testing phase (Supplementary Fig. 3), we first divided the spikes of each neuron in the network into successive, nonoverlapping 10 ms bins, yielding an 84,053 (number of neurons) × 100 (number of time steps) activity matrix. We then performed Principal Component Analysis (PCA) on this matrix to project the high-dimensional activity onto a 3D principal component space (Mazor & Laurent, 2005; Cunningham & Yu, 2014).
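A minimal NumPy sketch of this binning-plus-PCA procedure, applied to hypothetical (neuron id, spike time) pairs:

```python
import numpy as np

def activity_pcs(spike_times, n_neurons, t_max=1000.0, bin_ms=10.0, n_pc=3):
    """Bin (neuron id, spike time in ms) pairs into nonoverlapping windows
    and project the population activity onto its first n_pc principal
    components, yielding a low-dimensional trajectory over time."""
    n_bins = int(t_max / bin_ms)
    X = np.zeros((n_neurons, n_bins))
    for nid, t in spike_times:
        b = int(t // bin_ms)
        if b < n_bins:
            X[nid, b] += 1.0
    Xc = X - X.mean(axis=1, keepdims=True)       # center each neuron's trace
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
    return U[:, :n_pc].T @ Xc                    # n_pc x n_bins trajectory

# Toy example with 3 spikes among 5 neurons over 1 s:
traj = activity_pcs([(0, 5.0), (1, 15.0), (2, 995.0)], n_neurons=5)
```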
4.5 Quantification of auto-association and pattern completion
The capability of the SNN to form cell assemblies was investigated by quantifying two features of PC-PC synapses. The first one was the auto-association signal-to-noise ratio (SNR), defined as the mean synaptic weight between assembly member PCs divided by the mean synaptic weight between non-assembly member PCs. Thus, the higher the ratio, the stronger the auto-association of the formed cell assemblies relative to the rest of the CA3 network. The maximum auto-association SNR for each network structure investigated would occur if all assembly and non-assembly members had reached the downscaled maximum and minimum synaptic weights, respectively. The second quantified feature was the percentage of all assembly member synapses that had reached the maximum synaptic weight.
Pattern completion via cell assembly retrieval was assessed with a metric we called pattern reconstruction. First, Pearson correlation coefficients (PCCs) were computed from the training and testing input and training and testing output as described previously (Guzman et al., 2021). Pattern reconstruction accuracy was then computed as the difference in the output and input PCCs divided by the difference of the maximum PCC (1) and the input PCC, multiplied by 100 to obtain a percentage (Supplementary Fig. 4). Therefore, a non-zero pattern reconstruction accuracy would mean a cell assembly was retrieved, with 100% accuracy meaning perfect assembly retrieval. For the extended analysis of overlap (Fig. 7) we introduce a second metric, pattern specificity, meant to capture the activity in the cued assembly (Ac) relative to the activity in the non-cued assemblies (An). Specifically, we define pattern specificity (expressed as a percentage) as \(100\times\frac{\#PC(A_c)-\sum \#PC(A_n)/\#A_n}{\#PC(A_c)}\), where \(\#PC(A_c)\) is the number of pyramidal cells firing in the cued assembly, \(\#PC(A_n)\) is the number of pyramidal cells firing in the non-cued assemblies, and \(\#A_n\) is the number of non-cued assemblies.
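Both metrics reduce to one-line computations; a sketch under the definitions above:

```python
def reconstruction_accuracy(pcc_in, pcc_out):
    """Pattern reconstruction accuracy (%): fraction of the distance from
    the input PCC to perfect correlation (PCC = 1) covered by the output."""
    return 100.0 * (pcc_out - pcc_in) / (1.0 - pcc_in)

def pattern_specificity(n_cued, n_noncued):
    """Pattern specificity (%): firing in the cued assembly relative to the
    mean firing across the non-cued assemblies."""
    mean_noncued = sum(n_noncued) / len(n_noncued)
    return 100.0 * (n_cued - mean_noncued) / n_cued
```

For example, a cue whose input PCC is 0.5 and whose output PCC is 0.9 yields 80% reconstruction accuracy, and 200 cued-assembly cells firing against an average of 15 in the non-cued assemblies yields 92.5% specificity.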
4.6 Overlap of cell assemblies
Associations between episodic memories in CA3 may be encoded by neurons shared between cell assemblies (Gastaldi et al., 2021; Quian Quiroga, 2023). We selected an overlap of 5% based on the finding that an overlap of 4–5% is suitable for recall of both individual and overlapping assemblies (Gastaldi et al., 2021), and on the observation that pattern reconstruction accuracy at 97.5% pattern degradation was close to zero for assembly size 275 (1.68%), meaning that overlapping memories would not interfere with one another. Shared cells between each of the three assemblies were randomly selected before training commenced, and the same training and synaptic downscaling procedures used in the absence of shared cells (0% overlap) were applied to obtain and normalize, respectively, the weights between overlapping cell assemblies. For testing pattern completion with overlaps, an equal proportion of overlapping and non-overlapping cell assembly members was selected for stimulation; e.g., for an assembly size of 300 tested with a degraded pattern of 50%, 135 non-overlapping members and 15 overlapping members were randomly selected to activate each of the three assemblies.
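A sketch of the cue composition for overlapping assemblies. The assumption that the 5% overlap applies per pair with each of the two other assemblies is our reading, chosen to be consistent with the 135/15 example above.

```python
def cue_composition(size, overlap, n_other, cue_fraction):
    """Numbers of non-overlapping and overlapping assembly members to
    stimulate, sampling each pool with the same proportion (cue_fraction).
    Assumes the overlap fraction applies per pair with each of the n_other
    assemblies -- an assumption consistent with the 135/15 example."""
    shared = round(size * overlap * n_other)      # cells shared with others
    non_shared = size - shared                    # cells unique to assembly
    return round(cue_fraction * non_shared), round(cue_fraction * shared)

# Assembly of 300, 5% overlap with each of 2 other assemblies, 50% cue:
non_ov, ov = cue_composition(size=300, overlap=0.05, n_other=2,
                             cue_fraction=0.5)
```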
4.7 Model implementation and execution
The CA3 model was implemented in CARLsim6 (Niedermeier et al., 2022), which utilized the 4th order Runge-Kutta numerical integration method with a fixed time step of 0.2 ms (Butcher, 1996). The durations of the simulations that trained and tested the networks were 70 s and 1 s, respectively. Instantiation and execution of the network model were performed on single 40 and 80 GB VRAM Tesla A100 GPUs on the George Mason University High Performance Computing Cluster (Hopper). Hopper, which contained more than one hundred such GPUs, allowed for efficient and flexible simulation that greatly reduced the time needed to test different training and testing paradigms. Simulation results were loaded and visualized in MATLAB with CARLsim6’s Offline Analysis Toolbox (OAT). Additional custom-built functions for data analysis were written in Python and MATLAB. All scripts developed are available open source at github.com/jkopsick/cell_assembly_formation_retrieval.
Data availability
All data used in this study is publicly available at https://doi.org/10.5281/zenodo.10870586.
References
Ascoli, G. A., & Wheeler, D. W. (2016). In search of a periodic table of the neurons: Axonal-dendritic circuitry as the organizing principle. Bioessays, 38(10), 969–976.
Ascoli, G. A., Brown, K. M., Calixto, E., Card, J. P., Galvan, E. J., Perez-Rosello, T., et al. (2009). Quantitative morphometry of Electrophysiologically identified CA3b interneurons reveals robust local geometry and distinct cell classes. The Journal of Comparative Neurology, 515(6), 677–695.
Attili, S. M., Silva, M. F. M., Nguyen, T. V., & Ascoli, G. A. (2019). Cell numbers, distribution, shape, and regional variation throughout the murine hippocampal formation from the adult brain Allen Reference Atlas. Brain Struct Funct, 224(8), 2883–2897.
Attili, S. M., Moradi, K., Wheeler, D. W., & Ascoli, G. A. (2022). Quantification of neuron types in the rodent hippocampal formation by data mining and numerical optimization. European Journal of Neuroscience, 55(7), 1724–1741.
Belchior, H., Lopes-dos-Santos, V., Tort, A. B., & Ribeiro, S. (2014). Increase in hippocampal theta oscillations during spatial decision making. Hippocampus, 24(6), 693–702.
Bennett, M. R., Gibson, W. G., & Robinson, J. (1997). Dynamics of the CA3 pyramidal neuron autoassociative memory network in the hippocampus. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences, 343(1304), 167–187.
Bezaire, M. J., Raikov, I., Burk, K., Vyas, D., & Soltesz, I. (2016). Interneuronal mechanisms of hippocampal theta oscillations in a full-scale model of the rodent CA1 circuit. eLife, 5, e18566.
Bi, G., & Poo, M. (1998). Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of Neuroscience, 18(24), 10464–10472.
Butcher, J. C. (1996). A history of Runge-Kutta methods. Applied Numerical Mathematics, 20(3), 247–260.
Buzsáki, G. (2010). Neural syntax: Cell assemblies, synapsembles and readers. Neuron, 68(3), 362–385.
Buzsáki, G. (2015). Hippocampal sharp wave-ripple: A cognitive biomarker for episodic memory and planning. Hippocampus, 25(10), 1073–1188.
Buzsáki, G. (2019). The brain from Inside Out (p. 464). Oxford University Press.
Buzsáki, G., & Mizuseki, K. (2014). The log-dynamic brain: How skewed distributions affect network operations. Nature Reviews Neuroscience, 15(4), 264–278.
Carrillo-Reid, L., Han, S., Yang, W., Akrouh, A., & Yuste, R. (2019). Controlling visually guided Behavior by Holographic Recalling of Cortical Ensembles. Cell, 178(2), 447–457e5.
Cunningham, J. P., & Yu, B. M. (2014). Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 17(11), 1500–1509.
de Almeida, L., Idiart, M., & Lisman, J. E. (2007). Memory retrieval time and memory capacity of the CA3 network: Role of gamma frequency oscillations. Learning & Memory, 14(11), 795–806.
Debanne, D., Gähwiler, B. H., & Thompson, S. M. (1998). Long-term synaptic plasticity between pairs of individual CA3 pyramidal cells in rat hippocampal slice cultures. Journal of Physiology, 507(Pt 1), 237–247.
Ding, L., Chen, H., Diamantaki, M., Coletta, S., Preston-Ferrer, P., & Burgalossi, A. (2020). Structural correlates of CA2 and CA3 pyramidal cell activity in freely-moving mice. Journal of Neuroscience, 40(30), 5797–5806.
Dragoi, G., & Tonegawa, S. (2011). Preplay of future place cell sequences by hippocampal cellular assemblies. Nature, 469(7330), 397–401.
Eichenbaum, H. (2004). Hippocampus: Cognitive processes and neural representations that underlie declarative memory. Neuron, 44(1), 109–120.
Farooq, U., Sibille, J., Liu, K., & Dragoi, G. (2019). Strengthened temporal coordination within pre-existing sequential cell assemblies supports Trajectory Replay. Neuron, 103(4), 719–733e7.
Feldman, D. E. (2012). The spike timing dependence of plasticity. Neuron, 75(4), 556–571.
Fuentealba, P., Begum, R., Capogna, M., Jinno, S., Márton, L. F., Csicsvari, J., et al. (2008). Ivy cells: A Population of nitric-Oxide-Producing, slow-spiking GABAergic neurons and their involvement in hippocampal network activity. Neuron, 57(6), 917–929.
Gastaldi, C., Schwalger, T., Falco, E. D., Quiroga, R. Q., & Gerstner, W. (2021). When shared concept cells support associations: Theory of overlapping memory engrams. PLOS Computational Biology, 17(12), e1009691.
González-Rueda, A., Pedrosa, V., Feord, R. C., Clopath, C., & Paulsen, O. (2018). Activity-dependent downscaling of Subthreshold Synaptic Inputs during slow-Wave-Sleep-like activity in vivo. Neuron, 97(6), 1244–1252e5.
Guzman, S. J., Schlögl, A., Frotscher, M., & Jonas, P. (2016). Synaptic mechanisms of pattern completion in the hippocampal CA3 network. Science, 353(6304), 1117–1123.
Guzman, S. J., Schlögl, A., Espinoza, C., Zhang, X., Suter, B. A., & Jonas, P. (2021). How connectivity rules and synaptic properties shape the efficacy of pattern separation in the entorhinal cortex–dentate gyrus–CA3 network. Nat Comput Sci, 1(12), 830–842.
Hasselmo, M. E., & Wyble, B. P. (1997). Free recall and recognition in a network model of the hippocampus: Simulating effects of scopolamine on human memory function. Behavioural Brain Research, 89(1), 1–34.
Hasselmo, M., Schnell, E., & Barkai, E. (1995). Dynamics of learning and recall at excitatory recurrent synapses and cholinergic modulation in rat hippocampal region CA3. Journal of Neuroscience, 15(7), 5249–5262.
Hebb, D. O. (1949). The Organization of Behavior. Wiley.
Hemond, P., Epstein, D., Boley, A., Migliore, M., Ascoli, G. A., & Jaffe, D. B. (2008). Distinct classes of pyramidal cells exhibit mutually exclusive firing patterns in hippocampal area CA3b. Hippocampus, 18(4), 411–424.
Hemond, P., Migliore, M., Ascoli, G. A., & Jaffe, D. B. (2009). The membrane response of hippocampal CA3b pyramidal neurons near rest: Heterogeneity of passive properties and the contribution of hyperpolarization-activated currents. Neuroscience, 160(2), 359–370.
Hummos, A., Franklin, C. C., & Nair, S. S. (2014). Intrinsic mechanisms stabilize encoding and retrieval circuits differentially in a hippocampal network model. Hippocampus, 24(12), 1430–1448.
Izhikevich, E. M. (2007). Dynamical systems in Neuroscience (p. 522). MIT Press.
Kakegawa, W., Tsuzuki, K., Yoshida, Y., Kameyama, K., & Ozawa, S. (2004). Input- and subunit-specific AMPA receptor trafficking underlying long-term potentiation at hippocampal CA3 synapses. European Journal of Neuroscience, 20(1), 101–110.
Katona, L., Lapray, D., Viney, T. J., Oulhaj, A., Borhegyi, Z., Micklem, B. R., et al. (2014). Sleep and Movement differentiates actions of two types of somatostatin-expressing GABAergic Interneuron in Rat Hippocampus. Neuron, 82(4), 872–886.
Kay, K., Sosa, M., Chung, J. E., Karlsson, M. P., Larkin, M. C., & Frank, L. M. (2016). A hippocampal network for spatial coding during immobility and sleep. Nature, 531(7593), 185–190.
Kemker, R., McClure, M., Abitino, A., Hayes, T., & Kanan, C. (2018). Measuring catastrophic forgetting in neural networks. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
Klausberger, T., Márton, L. F., Baude, A., Roberts, J. D. B., Magill, P. J., & Somogyi, P. (2004). Spike timing of dendrite-targeting bistratified cells during hippocampal network oscillations in vivo. Nature Neuroscience, 7(1), 41–47.
Knierim, J. J. (2002). Dynamic interactions between local surface cues, distal landmarks, and intrinsic circuitry in hippocampal place cells. Journal of Neuroscience, 22(14), 6254–6264.
Komendantov, A. O., Venkadesh, S., Rees, C. L., Wheeler, D. W., Hamilton, D. J., & Ascoli, G. A. (2019). Quantitative firing pattern phenotyping of hippocampal neuron types. Scientific Reports, 9.
Kopsick, J. D., Tecuatl, C., Moradi, K., Attili, S. M., Kashyap, H. J., Xing, J., et al. (2023). Robust resting-state dynamics in a large-scale spiking neural Network Model of Area CA3 in the mouse Hippocampus. Cogn Comput, 15(4), 1190–1210.
Kumaran, D., Hassabis, D., & McClelland, J. L. (2016). What Learning systems do Intelligent agents need? Complementary Learning systems Theory updated. Trends in Cognitive Sciences, 20(7), 512–534.
Lapray, D., Lasztoczi, B., Lagler, M., Viney, T. J., Katona, L., Valenti, O., et al. (2012). Behavior-dependent specialization of identified hippocampal interneurons. Nature Neuroscience, 15(9), 1265–1271.
Lasztóczi, B., Tukker, J. J., Somogyi, P., & Klausberger, T. (2011). Terminal field and firing selectivity of cholecystokinin-expressing interneurons in the hippocampal CA3 area. Journal of Neuroscience, 31(49), 18073–18093.
Lazarewicz, M. T., Migliore, M., & Ascoli, G. A. (2002). A new bursting model of CA3 pyramidal cell physiology suggests multiple locations for spike initiation. Biosystems, 67(1), 129–137.
Lisman, J. E. (1999). Relating hippocampal circuitry to function: Recall of memory sequences by reciprocal Dentate–CA3 interactions. Neuron, 22(2), 233–242.
Lisman, J. E., & Jensen, O. (2013). The Theta-Gamma neural code. Neuron, 77(6), 1002–1016.
Malerba, P., Rulkov, N. F., & Bazhenov, M. (2019). Large time step discrete-time modeling of sharp wave activity in hippocampal area CA3. Communications in Nonlinear Science and Numerical Simulation, 72, 162–175.
Mazor, O., & Laurent, G. (2005). Transient dynamics versus fixed points in odor representations by Locust Antennal lobe projection neurons. Neuron, 48(4), 661–673.
Menschik, E. D., & Finkel, L. H. (1998). Neuromodulatory control of hippocampal function: Towards a model of Alzheimer’s disease. Artificial Intelligence in Medicine, 13(1), 99–121.
Miles, R., Le Duigou, C., Simonnet, J., Telenczuk, M., & Fricker, D. (2014). Recurrent synapses and circuits in the CA3 region of the hippocampus: An associative network. Frontiers in Cellular Neuroscience, 7.
Mishra, R. K., Kim, S., Guzman, S. J., & Jonas, P. (2016). Symmetric spike timing-dependent plasticity at CA3–CA3 synapses optimizes storage and recall in autoassociative networks. Nature Communications, 7(1), 11552.
Mizuseki, K., & Buzsáki, G. (2013). Preconfigured, skewed distribution of firing rates in the Hippocampus and Entorhinal Cortex. Cell Reports, 4(5), 1010–1021.
Montgomery, S. M., & Buzsáki, G. (2007). Gamma oscillations dynamically couple hippocampal CA3 and CA1 regions during memory task performance. Proc Natl Acad Sci U S A, 104(36), 14495–14500.
Moradi, K., & Ascoli, G. A. (2018). Systematic data mining of hippocampal synaptic Properties. In V. Cutsuridis, B. P. Graham, S. Cobb, & I. Vida (Eds.), Hippocampal microcircuits: A computational modeler’s Resource Book (pp. 441–471). Springer International Publishing. (Springer Series in Computational Neuroscience).
Moradi, K., & Ascoli, G. A. (2020). A comprehensive knowledge base of synaptic electrophysiology in the rodent hippocampal formation. Hippocampus, 30(4), 314–331.
Moradi, K., Aldarraji, Z., Luthra, M., Madison, G. P., & Ascoli, G. A. (2022). Normalized unitary synaptic signaling of the hippocampus and entorhinal cortex predicted by deep learning of experimental recordings. Commun Biol, 5(1), 1–19.
Moser, E. I., & Moser, M. B. (2003). One-shot memory in hippocampal CA3 networks. Neuron, 38(2), 147–148.
Nakazawa, K., Quirk, M. C., Chitwood, R. A., Watanabe, M., Yeckel, M. F., Sun, L. D., et al. (2002). Requirement for hippocampal CA3 NMDA receptors in Associative Memory Recall. Science, 297(5579), 211–218.
Neunuebel, J. P., & Knierim, J. J. (2014). CA3 retrieves coherent representations from degraded input: Direct evidence for CA3 pattern completion and dentate gyrus pattern separation. Neuron, 81(2), 416–427.
Neves, L., Lobão-Soares, B., Araujo, A. P., de Furtunato, C., Paiva, A. M. B., & Souza, I. (2022). Theta and gamma oscillations in the rat hippocampus support the discrimination of object displacement in a recognition memory task. Frontiers in Behavioral Neuroscience, 16, 970083.
Niedermeier, L., Chen, K., Xing, J., Das, A., Kopsick, J. D., Scott, E. O. (2022). CARLsim 6: An Open Source Library for Large-Scale, Biologically Detailed Spiking Neural Network Simulation. In: International Joint Conference on Neural Networks (IJCNN).
Oliva, A., Fernández-Ruiz, A., Buzsáki, G., & Berényi, A. (2016). Spatial coding and physiological properties of hippocampal neurons in the Cornu Ammonis subregions. Hippocampus, 26(12), 1593–1607.
Perez-Rosello, T., Baker, J. L., Ferrante, M., Iyengar, S., Ascoli, G. A., & Barrionuevo, G. (2011). Passive and active shaping of unitary responses from associational/commissural and perforant path synapses in hippocampal CA3 pyramidal cells. Journal of Computational Neuroscience, 31(2), 159–182.
Pfeiffer, B. E. (2020). The content of hippocampal replay. Hippocampus, 30(1), 6–18.
Pfeiffer, B. E. (2022). Spatial Learning drives Rapid goal representation in hippocampal ripples without Place Field Accumulation or goal-oriented Theta sequences. Journal of Neuroscience, 42(19), 3975–3988.
Quian Quiroga, R. (2023). An integrative view of human hippocampal function: Differences with other species and capacity considerations. Hippocampus, 33(5), 616–634.
Quiroga, R. Q. (2012). Concept cells: The building blocks of declarative memory functions. Nature Reviews Neuroscience, 13(8), 587–597.
Rebola, N., Carta, M., & Mulle, C. (2017). Operation and plasticity of hippocampal CA3 circuits: Implications for memory encoding. Nature Reviews Neuroscience, 18(4), 208–220.
Rees, C. L., Moradi, K., & Ascoli, G. A. (2017). Weighing the evidence in Peters’ rule: Does neuronal morphology Predict Connectivity? Trends in Neurosciences, 40(2), 63–71.
Rolls, E. T. (2018). The storage and recall of memories in the hippocampo-cortical system. Cell and Tissue Research, 373(3), 577–604.
Rolls, E. T., & Treves, A. (2024). A theory of hippocampal function: New developments. Progress in Neurobiology, 238, 102636.
Sammons, R. P., Vezir, M., Moreno-Velasquez, L., Cano, G., Orlando, M., Sievers, M., et al. (2024). Structure and function of the hippocampal CA3 module. Proceedings of the National Academy of Sciences, 121(6), e2312281120.
Sanchez-Aguilera, A., Wheeler, D. W., Jurado-Parras, T., Valero, M., Nokia, M. S., Cid, E., et al. (2021). An update to Hippocampome.org by integrating single-cell phenotypes with circuit function in vivo. PLOS Biology, 19(5), e3001213.
Siegle, J. H., & Wilson, M. A. (2014). Enhancement of encoding and retrieval functions through theta phase-specific manipulation of hippocampus. eLife, 3, e03061.
Stachenfeld, K. L., Botvinick, M. M., & Gershman, S. J. (2017). The hippocampus as a predictive map. Nature Neuroscience, 20(11), 1643–1653.
Steinmetz, N. A., Aydin, C., Lebedeva, A., Okun, M., Pachitariu, M., Bauza, M., et al. (2021). Neuropixels 2.0: A miniaturized high-density probe for stable, long-term brain recordings. Science, 372(6539), eabf4588.
Tecuatl, C., Wheeler, D. W., Sutton, N., & Ascoli, G. A. (2021a). Comprehensive estimates of potential synaptic connections in local circuits of the rodent hippocampal formation by axonal-dendritic overlap. Journal of Neuroscience, 41(8), 1665–1683.
Tecuatl, C., Wheeler, D. W., & Ascoli, G. A. (2021b). A method for estimating the potential synaptic connections between axons and dendrites from 2D neuronal images. Bio-protocol, 11(13), e4073.
Tononi, G., & Cirelli, C. (2003). Sleep and synaptic homeostasis: A hypothesis. Brain Research Bulletin, 62(2), 143–150.
Tononi, G., & Cirelli, C. (2006). Sleep function and synaptic homeostasis. Sleep Medicine Reviews, 10(1), 49–62.
Tononi, G., & Cirelli, C. (2014). Sleep and the price of plasticity: From synaptic and cellular homeostasis to memory consolidation and integration. Neuron, 81(1), 12–34.
Treves, A., & Rolls, E. T. (1991). What determines the capacity of autoassociative memories in the brain? Network, 2(4), 371.
Treves, A., & Rolls, E. T. (1994). Computational analysis of the role of the hippocampus in memory. Hippocampus, 4(3), 374–391.
Tsodyks, M., Pawelzik, K., & Markram, H. (1998). Neural networks with dynamic synapses. Neural Computation, 10(4), 821–835.
Tukker, J. J., Lasztóczi, B., Katona, L., Roberts, J. D. B., Pissadaki, E. K., Dalezios, Y., et al. (2013). Distinct dendritic arborization and in vivo firing patterns of parvalbumin-expressing basket cells in the hippocampal area CA3. Journal of Neuroscience, 33(16), 6809–6825.
Varga, C., Golshani, P., & Soltesz, I. (2012). Frequency-invariant temporal ordering of interneuronal discharges during hippocampal oscillations in awake mice. Proceedings of the National Academy of Sciences, 109(40), E2726–E2734.
Venkadesh, S., Komendantov, A. O., Wheeler, D. W., Hamilton, D. J., & Ascoli, G. A. (2019). Simple models of quantitative firing phenotypes in hippocampal neurons: Comprehensive coverage of intrinsic diversity. PLOS Computational Biology, 15(10), e1007462.
Viney, T. J., Lasztoczi, B., Katona, L., Crump, M. G., Tukker, J. J., Klausberger, T., et al. (2013). Network state-dependent inhibition of identified hippocampal CA3 axo-axonic cells in vivo. Nature Neuroscience, 16(12), 1802–1811.
Vöröslakos, M., Kim, K., Slager, N., Ko, E., Oh, S., Parizi, S. S., et al. (2022). HectoSTAR µLED optoelectrodes for large-scale, high-precision in vivo opto-electrophysiology. Advanced Science, 2105414.
Waydo, S., Kraskov, A., Quiroga, R. Q., Fried, I., & Koch, C. (2006). Sparse representation in the human medial temporal lobe. Journal of Neuroscience, 26(40), 10232–10234.
Wheeler, D. W., White, C. M., Rees, C. L., Komendantov, A. O., Hamilton, D. J., & Ascoli, G. A. (2015). Hippocampome.org: A knowledge base of neuron types in the rodent hippocampus. eLife, 4.
Wheeler, D. W., Kopsick, J. D., Sutton, N., Tecuatl, C., Komendantov, A. O., Nadella, K., et al. (2024). Hippocampome.org 2.0 is a knowledge base enabling data-driven spiking neural network simulations of rodent hippocampal circuits. eLife, 12, RP90597.
White, C. M., Rees, C. L., Wheeler, D. W., Hamilton, D. J., & Ascoli, G. A. (2020). Molecular expression profiles of morphologically defined hippocampal neuron types: Empirical evidence and relational inferences. Hippocampus, 30(5), 472–487.
Zong, W., Obenhaus, H. A., Skytøen, E. R., Eneqvist, H., de Jong, N. L., Vale, R., et al. (2022). Large-scale two-photon calcium imaging in freely moving mice. Cell, 185(7), 1240–1256.e30.
Acknowledgements
The authors are grateful to Drs. Carolina Tecuatl, Diek Wheeler, Rebecca Goldin, Nate Sutton, and Jonah Ascoli for helpful discussions.
Funding
This work was supported by Department of Energy (DOE) grants DE-SC0022998 and DE-SC0023000, and National Institutes of Health (NIH) grant R01NS39600. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Author information
Contributions
Conceptualization: Jeffrey D. Kopsick, Giorgio A. Ascoli; Data curation: Jeffrey D. Kopsick, Joseph A. Kilgore; Formal analysis: Jeffrey D. Kopsick, Giorgio A. Ascoli; Funding Acquisition: Gina C. Adam, Giorgio A. Ascoli; Investigation: Jeffrey D. Kopsick, Giorgio A. Ascoli; Methodology: Jeffrey D. Kopsick, Giorgio A. Ascoli; Project Administration: Giorgio A. Ascoli; Resources: Gina C. Adam, Giorgio A. Ascoli; Software: Jeffrey D. Kopsick, Joseph A. Kilgore; Supervision: Gina C. Adam, Giorgio A. Ascoli; Validation: Jeffrey D. Kopsick, Giorgio A. Ascoli; Visualization: Jeffrey D. Kopsick; Writing – original draft: Jeffrey D. Kopsick; Writing – review & editing: Jeffrey D. Kopsick, Joseph A. Kilgore, Gina C. Adam, Giorgio A. Ascoli.
Ethics declarations
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Competing interests
The authors declare no competing interests.
Additional information
Action Editor: Michael Hasselmo.
Cite this article
Kopsick, J.D., Kilgore, J.A., Adam, G.C. et al. Formation and retrieval of cell assemblies in a biologically realistic spiking neural network model of area CA3 in the mouse hippocampus. J Comput Neurosci (2024). https://doi.org/10.1007/s10827-024-00881-3