1 Introduction

The advent of optogenetic technology (Deisseroth et al., 2006) has opened new doors toward the investigation of the brain. It is now possible to trace simple cognitive processes down to the activation of a group of neuronal cells. Following the seminal work of Tonegawa and colleagues (Liu et al., 2012; Ramirez et al., 2013; Ryan et al., 2015; Roy et al., 2017), new ideas about the formation and use of long-term memory are emerging (Tonegawa, 2015; Trettenbrein, 2016). Briefly, the hypothesis is that memory storage and retrieval involve two different circuits and mechanisms, i.e., the retention of specific patterns of connectivity between engram cells required for the storage of information, on the one hand, and the synaptic strengthening needed for its consolidation and retrieval, on the other. These hypotheses are supported by the observation that various expressions of memory can be obtained by leaving connectivity patterns untouched and by acting on synaptic strengths only. These various expressions can be controlled, observed and measured through optogenetic manipulations, which in turn allow for the experimental induction of retrograde amnesia, the direct activation of memory engrams and the creation of false memories. Silent memory engrams (Roy et al., 2017), defined as memory traces whose access can be temporarily blocked and then restored at will, stand as the key concept of this new theory.

These findings are addressed here from a computational point of view, i.e., toward the goal of defining a model of such a dual memory that could lead to simulated experiments. Brain simulations using either artificial neural networks (Hopfield, 1982) or analytical methods (Izhikevich, 2006; Markram et al., 2015) (i.e., mainly differential equations modeling electrical currents (Hodgkin & Huxley, 1952)), as customarily performed today in computational neuroscience, have not been used so far to model such memories. Symbolic neural dynamics (Bonzon, 2017), abstracting the mechanisms of synaptic plasticity, has been proposed as a new approach to modeling brain cognitive capabilities and used to define the basic mechanisms of an associative memory with dual store and retrieval processes. This formalism is extended here to reproduce optogenetic manipulations, thus defining a computational model of memory engrams.

2 Materials and Methods

This section introduces a formalism that has been previously published (Bonzon, 2017, 2019).

2.1 A New Approach to Modeling Brain Cognitive Functionalities

In this new formalism, brain processes representing synaptic plasticity are abstracted through asynchronous communication protocols and implemented as virtual microcircuits. The basic units of these microcircuits are constituted by threads, which correspond either to a single neuron or to a cluster of connected neurons. Contrary to traditional neuron models in which incoming signals are summed into some integrated value, thread inputs can be processed individually, thus allowing threads to maintain parallel asynchronous communications. Threads can be grouped into disjoint sets, or fibers, to model neural assemblies, and discrete weights (e.g., integer numbers) can be attached to pairs of threads that communicate within the same fiber. A fiber containing at least one active thread constitutes a stream. Mesoscale virtual circuits linking perceptions and actions are built out of microcircuits. Circuits can be represented either graphically or by sets of symbolic expressions. These expressions can be compiled into virtual code implications that are used just in time to deduce instructions to be finally interpreted by a virtual machine performing contextual deductions (Bonzon, 1997).

To introduce this formalism, let us consider a simple case of synaptic transmission between any two threads P and Q (NB throughout this text, identifiers starting with a capital letter stand for variable parameters). This can be represented by the circuit fragment (or wiring diagram) contained in the simple stream given in Fig. 1, where the symbol ->=>- represents a synapse.

Fig. 1

Circuit fragment implementing a synaptic transmission. Reproduced from Bonzon (2019)

This circuit fragment can be represented by two symbolic expressions involving a pair of send/receive processes as shown in Fig. 2.

Fig. 2

Thread patterns for a synaptic transmission. Reproduced from Bonzon (2019)

In Fig. 2, the thread P (e.g., a sensor thread sense(us) with us representing an external stimulus as in Fig. 4) will fire in reaction to the capture of an external stimulus, with the send process corresponding to the signal, or spike train, carried by a presynaptic neuron’s axon. In the thread Q (e.g., an effector thread motor(X), where the variable X becomes instantiated as the result of the stimulus), the receive process represents the possible reception of this signal by a postsynaptic neuron. The compilation of these expressions will give rise to the execution of virtual code instructions implementing the communication protocol given in Fig. 3.

Fig. 3

Communication protocol for an asynchronous communication. Reproduced from Bonzon (2019)

This protocol corresponds to an asynchronous blocking communication subject to a threshold. It involves a predefined weight between the sender P and the receiver Q that can be either incremented or decremented. On one side, thread P fires thread Q if necessary and sends it a signal. On the other side, thread Q waits for the reception of a signal from thread P and proceeds only if the weight between P and Q stands above a given threshold. The overall process amounts to opening a temporary pathway between P and Q and allows for passing data by instantiating variable parameters (see example below).
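This threshold-gated protocol can be sketched in ordinary code. The following Python fragment is our illustration only (the platform itself is a Prolog logic program, and all names here are hypothetical): a signal passes from a sender P to a blocked receiver Q only if the discrete weight attached to the pair stands above the threshold.

```python
import queue
import threading

THRESHOLD = 1
weights = {("P", "Q"): 1}          # discrete weight attached to the thread pair
channel = queue.Queue()            # asynchronous link between sender and receiver
received = []

def sender():
    channel.put(("P", "signal"))   # P fires and posts its signal (spike train)

def receiver():
    src, data = channel.get()      # Q blocks until a signal arrives
    if weights[(src, "Q")] >= THRESHOLD:
        received.append(data)      # Q proceeds only above threshold

p = threading.Thread(target=sender)
q = threading.Thread(target=receiver)
q.start(); p.start()
p.join(); q.join()
print(received)                    # ['signal']
```

Setting the weight below the threshold would leave `received` empty, which is the sketch's counterpart of a blocked synaptic transmission.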

Example

As a simple example, let us consider the classical conditioning of Aplysia californica (Kandel & Tauc, 1965). In this experiment, a light tactile conditioned stimulus cs elicits a weak defensive reflex, and a strong noxious unconditioned stimulus us produces a massive withdrawal reflex. After a few pairings of cs and us, where cs slightly precedes us, cs alone triggers a significantly enhanced withdrawal reflex. The corresponding circuit, adapted from a previous similar schema (Carew et al., 1981), is represented in Fig. 4. In this circuit, the symbol /|\ represents the modulation of a synaptic transmission, the sign * used in the upper branch indicates the conjunction of converging signals, and the sign + indicates either the splitting of a diverging signal, as used in the lower branch, or a choice between converging signals, as used in the right branch instantiating the thread motor(X), where X is a variable parameter to be instantiated into either cs or us.

Fig. 4

A circuit implementing classical conditioning. Reproduced from Bonzon (2019)

In Fig. 4, the thread ltp (standing for long-term potentiation) acts as a facilitatory interneuron reinforcing the pathway between sense(cs) and motor(cs). Classical conditioning then follows from the application of Hebbian learning (Hebb, 1949), i.e., “neurons that fire together wire together.” Though it is admitted today that classical conditioning in Aplysia is mediated by multiple neuronal mechanisms including a postsynaptic retroaction on a presynaptic site (Antonov et al., 2003), the important issue is that this activity depends on the temporal pairing of the conditioned and unconditioned stimuli, which leads to implementing the thread ltp as a coincidence detector, as done in the protocol given in Fig. 5.

Fig. 5

Microcircuit and communication protocol for ltp. Reproduced from Bonzon (2019)

The generic microcircuit abstracting the mechanism of long-term potentiation is reproduced in Fig. 5 with its communication protocol. In order to detect the coincidence of P and Q, thread P fires an ltp thread that in turn calls on process join to wait for a signal from thread Q. In parallel, thread Q calls on process merge to post a signal for ltp and then executes a send(R) command to establish a link with thread R. After its synchronization with thread Q, thread ltp increments the weight between Q and R.
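The join/merge coincidence detection can likewise be sketched in Python (again our illustration, with hypothetical names, not the platform's implementation): the ltp thread, fired by P, waits for a signal posted by Q and, upon coincidence, increments the weight between Q and R.

```python
import threading

weights = {("Q", "R"): 0}
q_signal = threading.Event()       # merge: Q posts a signal for ltp

def ltp():                         # fired by P
    if q_signal.wait(timeout=1.0): # join: wait for coincident firing of Q
        weights[("Q", "R")] += 1   # Hebbian strengthening of the Q -> R pathway

def thread_q():
    q_signal.set()                 # Q fires: post the signal, then send(R)

t_ltp = threading.Thread(target=ltp)
t_q = threading.Thread(target=thread_q)
t_ltp.start(); t_q.start()
t_ltp.join(); t_q.join()
print(weights[("Q", "R")])         # 1
```

If Q never fired within the waiting window, the weight would stay at 0, mirroring the absence of potentiation without temporal pairing.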

The circuit in Fig. 4 can be represented by the fiber, or set of symbolic expressions, given in Fig. 6.

Fig. 6

Fiber corresponding to a circuit of classical conditioning

2.2 A Computational Model of an Associative Long-Term Memory

The concept of an associative memory has been studied from various perspectives (Palm, 1980). In our framework, an associative memory extends the mechanism of long-term potentiation by allowing for two threads P and Q attached to separate streams (and thus also possibly active at different times) to be associated in order to trigger a recall thread R. These two streams are linked together through a double communication protocol applied to a long-term memory ltm(P) thread, this construct being depicted by the symbol -{P}- meaning that P is both stored and retrievable through the thread ltm(P). This new protocol involves two complementary long-term storage/retrieval (lts/ltr) threads that allow for the building of a storage trace and later retrieval of a previously active thread. This is in line with results by Rubin and Fusi (Rubin & Fusi, 2007) demonstrating that if the initial memory trace in neurons is below a certain threshold, then it cannot be retrieved immediately after the occurrence of the experience that created the memory. The corresponding microcircuit is given in Fig. 7 together with its communication protocol.

Fig. 7

Microcircuit and communication protocol for a long-term associative memory

As a distinctive difference from an ltp(Q,R) thread (which gets fired by P and waits for a signal from Q in order to relate Q and R), an ltr(P,Q,R) thread is fired by Q and waits for a path from ltm(P) in order to relate Q and R, thus defining the basic mechanisms of an associative memory.
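Under the same caveat (a Python sketch with hypothetical names, not the platform's code), the lts/ltr pair can be illustrated together with the point borrowed from Rubin and Fusi: a freshly built storage trace below threshold cannot be retrieved, while repeated storage makes the memory retrievable.

```python
RETRIEVAL_THRESHOLD = 2
ltm = {}                                  # long-term store: thread -> trace strength

def lts(p):
    ltm[p] = ltm.get(p, 0) + 1            # each storage episode deepens the trace

def ltr(p, q, r, links):
    if ltm.get(p, 0) >= RETRIEVAL_THRESHOLD:
        links[(q, r)] = "open"            # open the recall pathway from Q to R
        return r                          # the recall thread fires
    return None                           # trace below threshold: no retrieval

links = {}
lts("sense(a,fear)")                      # first trace: below threshold
assert ltr("sense(a,fear)", "sense(a,_)", "recall", links) is None
lts("sense(a,fear)")                      # further storage pushes it over threshold
assert ltr("sense(a,fear)", "sense(a,_)", "recall", links) == "recall"
```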

2.3 Simulation Platform

A simulation platform has been designed to implement the formalism described above. Defined by a logic program of about 300 lines, this platform can be run on any PC equipped with a Prolog compiler, which thus allows for an easy reproduction of results. It relies on three fundamental concepts, i.e., the formal notions of

  • an object in context represented by symbolic expressions in a logical language

  • communicating processes between concurrent threads that are used to model a network of interactive objects

  • a virtual machine interpreting virtual code that differs from a processor’s native code and thus constitutes the key mechanism allowing for interfacing high-level abstract objects, e.g., software, with their low-level physical support, e.g., hardware.

At its top level, this virtual machine executes a “sense-act” cycle of embodied cognition as defined in Fig. 8 (see the Supplementary information for its complete operational specifications).

Fig. 8

High-level definition of a virtual machine run. Reproduced from Bonzon (2019)

As a key point, let us point out the ist predicate, standing for “is true” and implementing contextual deduction (Bonzon, 1997). Clock register values T are used to deduce, for each active thread, a possible next instruction. As postulated independently (Zeki, 2015), there is no central clock, thus leading to a model of the brain as a massively asynchronous, parallel organ. Whenever an instruction is executed successfully, the thread clock is advanced and the next instruction is deduced and executed; whenever it fails, the current instruction is attempted again until it eventually succeeds. Before being executed, virtual machine instructions are deduced “just in time” from circuits which have been compiled into virtual code implications. The execution of virtual instructions leads to a wiring/unwiring process that produces model configurations akin to plastic brain states. This procedure matches a fundamental principle in circuit neuroscience according to which inhibition in neuronal networks during baseline conditions allows in turn for disinhibition, which then stands as a key mechanism for circuit plasticity, learning and memory retrieval (Letzkus et al., 2015). This framework thus represents a computing device that greatly departs from a traditional von Neumann computer architecture, and could prove to be close to that of a real brain.
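The run cycle can be caricatured as follows, in a deliberately loose Python sketch (the actual platform deduces instructions by contextual deduction in Prolog; all names here are illustrative): each thread keeps its own clock, successful instructions advance it, and failed ones are simply retried on a later cycle.

```python
def run(threads, program, max_cycles=10):
    clocks = {t: 0 for t in threads}              # one clock per thread, no central clock
    log = []
    for _ in range(max_cycles):
        for t in threads:
            instr = program.get((t, clocks[t]))   # deduce next instruction for (thread, T)
            if instr is None:
                continue                          # thread has no further instruction
            ok, action = instr()
            if ok:
                log.append((t, clocks[t], action))
                clocks[t] += 1                    # success: advance this thread's clock
            # failure: leave the clock in place and retry on a later cycle
    return log

flag = {"set": False}
program = {
    ("P", 0): lambda: (True, flag.update(set=True) or "send"),
    ("Q", 0): lambda: (flag["set"], "receive"),   # keeps failing until P has sent
}
log = run(["Q", "P"], program)
print(log)                                        # [('P', 0, 'send'), ('Q', 0, 'receive')]
```

Even though Q is scheduled first, its instruction fails until P has fired, so the log records P's send before Q's receive, which is the retry semantics described above.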

3 Results

3.1 A Mesoscale Circuit Representing a Memory Engram of Contextual Fear Conditioning

Contextual fear conditioning can be viewed as a case of classical conditioning and modeled in our framework as represented in Fig. 9 with two parameters, i.e., a first parameter designating a context (e.g., a or b) that recruits cells for a second parameter designating a percept such as fear, where _ stands for the absence of a perception.

Fig. 9

Microcircuit implementing contextual fear conditioning

Memory engrams have been defined as connected neuronal ensembles that allow for the recall of information through various types of activations (Roy et al., 2017). In our framework, this can be achieved by extending the circuit of Fig. 9 into an associative memory as represented in Fig. 7. Replacing threads P, Q and R in Fig. 7 with, respectively, sense(X,F), sense(Y,_) and recall(sense(Y,F)) gives rise to the circuit given in Fig. 10. Depending on the application, X and Y can represent either the same (e.g., in the case of retrograde amnesia) or different (e.g., in the case of conditioning false memories) contexts. As for the possible projection of this circuit into actual brain regions, it is suggested that the left part of the circuit, i.e., the formation of {sense(X,F)} via lts, be identified with the upstream connections between the medial entorhinal cortex (MEC) and the dentate gyrus (DG) engram cells, on one side, and the right part, i.e., the possible recall via ltr, with the downstream connections of the DG with the hippocampal CA3 and basolateral amygdala (BLA) engram cells, on the other side.

Fig. 10

Microcircuit representing a memory engram of contextual fear conditioning

3.2 Experimental Schedule Implementation

Using optogenetic technology (Liu et al., 2012; Ramirez et al., 2013; Ryan et al., 2015; Roy et al., 2017), memory engram cells can first be labeled by injection and activated during a training session. They can then be silenced and reactivated at will through various injections according to specific experimental schedules. Simulations performed with the circuit of Fig. 10 have reproduced some of the original behavioral schedules of actual experiments (Liu et al., 2012; Ramirez et al., 2013; Ryan et al., 2015; Roy et al., 2017). These schedules have been assembled from the set of fibers given in Fig. 11 (for the interpretation of these instructions, see the Supplementary information section).

Fig. 11

Fibers for the implementation of behavioral schedules

Each schedule starts with a targeting phase in which the dentate gyrus of transgenic mice is injected with viruses and implanted with optic fibers. Active cells are then labeled through a contextual fear conditioning training session that results in the retention of a specific pattern of connectivity between engram cells required for the storage of information. A testing phase allows for its retrieval through natural cues. Memory consolidation can be blocked by an injection that inhibits protein synthesis and thus closes the pathway to memory consolidation, and subsequently restored. Finally, directly activating labeled cells demonstrates silent engrams (Ramirez et al., 2013) and the creation of false memories (Roy et al., 2017).
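The logic common to these schedules can be summarized by a toy state model (ours, not the paper's Prolog code; all names are hypothetical): connectivity between engram cells persists independently of synaptic strength, and strength alone gates natural recall, while direct activation bypasses it.

```python
class Engram:
    def __init__(self, context):
        self.context = context
        self.connected = False     # pattern of connectivity between engram cells
        self.strength = 0          # downstream synaptic strength

    def train(self):               # fear conditioning: store and consolidate
        self.connected = True
        self.strength = 1

    def inhibit_protein_synthesis(self):
        self.strength = 0          # blocks consolidation: the engram goes silent

    def natural_recall(self, cue):
        return self.connected and cue == self.context and self.strength > 0

    def light_activation(self):    # direct optogenetic activation of labeled cells
        return self.connected      # works even when the engram is silent

e = Engram("b")
e.train()
assert e.natural_recall("b")       # consolidated memory: recall through natural cues
e.inhibit_protein_synthesis()      # retrograde amnesia
assert not e.natural_recall("b")   # the cue no longer triggers freezing
assert e.light_activation()        # the silent engram is still directly activatable
e.strength = 1                     # reversing the inhibition restores consolidation
assert e.natural_recall("b")
```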

3.3 Simulation Results

Simulations based on the circuit of Fig. 10 and assembled from the fibers of Fig. 11 have reproduced the results of actual experiments (Liu et al., 2012; Ramirez et al., 2013; Ryan et al., 2015; Roy et al., 2017). Inputs to sensors and injectors are preceded by a prompt |:, and outputs from effectors are reported in the execution logs. Additional outputs report on the successive states of the engram.

Contextual fear conditioning control experiment

In the run of Fig. 12, cells involved in fear conditioning are first labeled for any context, then activated for context b through a training session ending with an increased upstream synaptic strength and an open pathway (meaning that protein synthesis for memory consolidation is active). The consolidated memory with increased downstream synaptic strength then allows for a memory recall through natural cues.

Fig. 12

Execution log of a control simulation run

Reversible retrograde amnesia

Under amnesia, impaired synaptic strengthening prevents the activation of engram cells by natural recall cues. To reproduce this, the training session is directly followed by an injection that causes a protein synthesis inhibition, resulting in a retrograde amnesia without memory consolidation. This inhibition can then be reversed to allow for the reactivation of protein synthesis, the consolidation of memory and a freezing reaction due to memory recall through natural cues (Fig. 13).

Fig. 13

Simulation run of a reversible retrograde amnesia

Silent engram direct activation

Following retrograde amnesia in context b, the resulting silent engram for context a gets directly activated with light stimulation, leading to a freezing behavior without memory recall (Fig. 14).

Fig. 14

Simulation run of a light-induced direct silent engram activation

Creation of a false memory

In this experiment (Ramirez et al., 2013), the cells labeled in context a without shock exposure serve as a conditioned stimulus. They then get artificially stimulated by light during the delivery of an unconditioned fear stimulus in context b and subsequently express a false fear memory by freezing in context a, but not in a novel context c (see the Supplementary information for the definition of the virtual instruction allowing for a displaced memory consolidation) (Fig. 15).

Fig. 15

Simulation run of a false memory

Artificial association of independent memories

In this last example, we reproduce an experiment (Ohkawa et al., 2015) in which coincident firing of distinct neural assemblies generates an artificial link between distinct memory episodes. This is similar to the creation of a false memory modeled above, except that in this case the conditioning occurs through exposure to light stimulation, i.e., activating([injector( ,c,_)]), in a third context c (Fig. 16).

Fig. 16

Simulation run of the artificial association of independent memories

4 Discussion

The simulations that have been presented provide an illustration of how impaired synaptic strengthening caused by the injection of a protein synthesis inhibitor immediately after contextual fear conditioning prevents the effective activation of engram cells by natural recall cues, thus leading to retrograde amnesia. The information stored in engram cell ensemble connectivity can nevertheless be retrieved by light-induced direct activation of labeled nodes. Altogether, these results support the hypothesis that separate processes are involved in long-term memory, i.e., the retention of specific patterns of connectivity between engram cells required for the storage of information, on the one hand, and the synaptic strengthening needed for its consolidation on the other (Tonegawa, 2015; Trettenbrein, 2016). In other words, synaptic connectivity could provide a substrate for memory storage whereas the potentiation of synapses would be required for its retrieval.

It is acknowledged today that individual fear memories require engram cells from multiple brain regions (Tonegawa, 2015). In our simulations, non-instantiated expressions are attached to cell populations whose elements can be indifferently recruited for labeling various contexts such as a and b. As our framework readily accommodates instantiated tags that could be used for recruiting specific cells for different contexts or tasks, it can be used to design and predict the results of finer grain experiments involving multiple brain regions (Abdou et al., 2018; Oishi et al., 2019).