1.1 Introduction

1.1.1 Background: Information Processing and Storage in the Brain

For clarity, we should begin by going back to the definition of the word “information” as outlined in the introduction to this book. Depending on the context, there are many different definitions of the word “information.” For example, in psychology and perhaps also in everyday parlance, information may be defined as the ensemble of “facts provided or learned about something or someone” [20]. In computing, on the other hand, information may be defined as “data that are processed, transmitted and/or stored by technical devices” (loc. cit.). Information theory, finally, has an even wider definition of information, for example, “as a mathematical quantity expressing the probability of occurrence of a particular sequence of symbols, impulses, etc., as against that of alternative sequences” (loc. cit.).

Here, I want to outline how information is processed and stored in the brains of primates, especially humans. How did we come to believe that information is mainly stored and processed in the brain? There were times in history when some Greek philosophers thought of the brain as a mere cooling mechanism, with a large blood supply and a large surface that could produce a lot of sweat on the head’s surface. But even then, other ancient Greek philosophers disagreed with this view and regarded the brain, rather than the heart, as the organ of thoughts and emotions. In the end, this view prevailed.

For a time, people thought of brain function in terms of hydraulics. This was in the age of the steam engine, and it is true that the brain receives a rich blood supply. Later, in the age of fine clockwork, some people drew parallels between mechanical devices and brain function. There is, for example, a somewhat strange story by E.T.A. Hoffmann about a mechanical woman made from wood (“Olimpia” in “Der Sandmann”). The question of how exactly the brain collects information about the world and how it learns received no particular attention, apart from visual perception: some Greek philosophers such as Euclid (300 BC) suggested that the eyes project light out into the world, “sensing” it in a way similar to what fingers do.

Now we know that the brain relies on electrical activity but is not a computer, an analogy that was popular with some people about 50 years ago. Today, we feel safe to say that the brain consists of a number of adaptive, recurrent networks that are associative and rely on electrical activity. These networks enable humans, for example, to store different types of information on different time scales, and the storage time of information is one way of classifying information storage in the brain. Another way to classify information processing is by discriminating between information that can be coded in words, the so-called declarative information about facts and events (last Christmas, the French Revolution), and non-declarative information, as in procedural skills (riding a bike, playing the piano) or associative and operant conditioning (cf. Pavlov’s dog).

1.1.2 Some Problems That Have to Be Solved: From Outside Objects to Meaningful Internal Representations

The ancient Greeks already raised the question of how information about the outer world can enter the brain. The atomists resolved the problem by postulating that all objects send out miniature (“atomic”) copies of themselves that enter the eye and from there reach the brain. But already Socrates (470–399 BC) showed this view to be wrong, for example, by pointing out that objects outside the central visual field are only rudimentarily perceived. In this chapter, we will outline our contemporary views on how information enters the nervous system through specialized peripheral receptor organs, is encoded as electrical activity in nerve cells, and is transmitted to the central nervous system. The cellular and biochemical mechanisms that underlie the storage of information in the central nervous system are similarly well, if not completely, understood.

But there are other questions that are even more difficult to answer. For example, how does the brain decide between important and unimportant information? The information content in bits per second supplied by the optic nerve alone is far higher than what the human brain can process. Hence, attention selects the portion of information that seems to be most important. But what are the mechanisms of attention? Moreover, selection by attention alone could never do the trick: already at a rather low level, there are mechanisms that compress the incoming information and remove redundancies. Attention, whatever it might be, employs knowledge in a top-down way to filter the incoming information, while at the same time being triggered by abrupt changes in the input from external sources. On the next level, the question arises as to which information to store and for how long. In order to make information about the outer world useful, it is necessary to compare inputs with stored templates, hence to generalize and to categorize inputs. An important question is: how does the brain categorize and generalize? How can it infer the future from the past?

For some types of information, brains are able to process many parallel sources of input, while other types of information have to be processed serially. Stimuli from the visual domain are good examples. We are able to analyze, at one glance, the orientation of line elements over the entire visual field; that is, we analyze this feature in parallel (see Fig. 1.1a). This we achieve by hundreds of thousands of “detectors,” each devoted to one visual field position. The same is true for color (Fig. 1.1b). This extraction of contours and their orientation removes quite a bit of redundant information, a little like a JPEG reduces the storage required for an image (we reduce images a bit towards “cartoons”). However, for small differences in orientation (Fig. 1.1d), for a combination of color and orientation (the oblique green stimulus in Fig. 1.1c), for a combination of features (vertical green and horizontal red; Fig. 1.1e), and for conjunctions (Fig. 1.1f), “the odd man out” has to be searched for by means of a serial process that is governed by visual attention, which “binds” together the different features of each element (e.g., color and orientation). It is also clear that a fair amount of information is processed subconsciously or completely unconsciously. This is certainly true for most information regarding posture and walking. But in contrast to what you may read in the newspapers, the exact interactions within the neuronal networks underlying these different types of information processing, from attention through object recognition, space perception, and language-related processes to decision making and motor planning, are far from clear.

Fig. 1.1
figure 1

(a) A red line stimulus among green stimuli (distractors) pops out, that is, it is immediately found, irrespective of the number of distractors (parallel search). (b) A target defined by color (e.g., red) and orientation (e.g., vertical) has to be searched for among red vertical and green horizontal distractors (serial search). (Figure reproduced from [14] Hochstein & Ahissar (2002): View from the Top: Hierarchies and Reverse Hierarchies in the Visual System. Neuron 36, p. 793, with permission from © Elsevier, 2018. All Rights Reserved)
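The redundancy-removing effect of contour extraction described above can be illustrated with a toy sketch (purely illustrative: a hypothetical one-dimensional “image” of uniform regions is reduced to its luminance transitions; nothing here models the retina):

```python
# Illustrative sketch: extracting contours (luminance transitions) discards
# redundant information, much as image compression does. A hypothetical
# 1-D "image" of uniform regions is reduced to the positions of its edges.

def extract_edges(row):
    """Return (position, step) pairs where the luminance changes."""
    return [(i, row[i] - row[i - 1])
            for i in range(1, len(row)) if row[i] != row[i - 1]]

def reconstruct(edges, first_value, length):
    """Rebuild the original row from its first value and the edge list."""
    edge_dict = dict(edges)
    row, value = [], first_value
    for i in range(length):
        if i in edge_dict:
            value += edge_dict[i]
        row.append(value)
    return row

image_row = [5] * 40 + [9] * 30 + [2] * 30   # 100 samples, mostly redundant
edges = extract_edges(image_row)

print(len(image_row), "samples reduced to", len(edges), "edges:", edges)
assert reconstruct(edges, image_row[0], len(image_row)) == image_row
```

The 100 redundant samples shrink to two edge records, and the original row can still be reconstructed exactly; this is the sense in which contour coding, like image compression, removes redundancy without necessarily losing information.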

We will look at the hardware, or should we say wetware, that stores information in the brain. Interesting questions here concern not only the mechanisms and mechanics of learning but also the many different types of data storage in the brain: the difference between short-term and long-term storage, the role of forgetting, and the types of information that have to be stored, since there is a big difference between information about objects, on the one hand, and about space, on the other. (We want to recognize objects irrespective of their position in space, hence generalize over space, while for interacting with objects, spatial information is crucial.) Moreover, one has to discriminate between data that can be stored and described by language and other “knowledge” that cannot be expressed in words. In the spatial domain, there are important differences between storage in self-centered versus object-centered coordinate systems and, it seems, also between near and far space. A highly important point we will address is how the system is able to improve its performance, a process usually called learning.

In my view, it is always important to scrutinize research results to find out by which methods they were obtained. When somebody tells me my future, I am highly interested to learn how this person came by that knowledge. If this person tells me that my future can be read from the positions of the stars, I am personally skeptical that the forecast will come true. Hence, we will look at the methods that brain research employs these days to study brain function in primates and especially humans.

1.1.3 Methods to Investigate the Brain: From Behavior and Patient Studies to Anatomy, Histology, EEG, and Single Cells

As outlined above, I believe that all results can only be as good as the methods that have been used to obtain them. Therefore, I will not only present results on the processing and storage of information in the brain but also briefly indicate the main methods employed to gain these results. For reasons of space, however, a short overview must suffice. For more details, the reader is referred to textbooks such as the excellent one by [16].

Probably the earliest method to infer the processing of information and its storage in brains was to observe behavior, both in animals and in humans. Even in everyday life, we tend to infer from the behavior of others whether or not information has been collected and stored by these individuals. Elaborate methods exist to examine the behavior of both animals and humans with high precision. For animals, operant conditioning (where one trains an animal, for example, to pull a lever in response to a stimulus) and associative conditioning (as with Pavlov’s dog, which learned to associate the ring of a bell with food) are used especially. The methods employed with humans range from psychophysics (the quantification of sensory impressions) to personality psychology (classifying personality traits) and even psychoanalysis, which also deals with the subconscious level of information processing. These investigations into normal brain function are complemented by studies in patients, where deficits in function can be correlated with defects in brain structure, and by modeling, which helps to understand complex experimental results. This so-called black-box analysis (the name refers to the fact that all we know are the inputs into the system and its outputs, but not what happens in between, i.e., in the brain) is supplemented by methods able to give us some hints about what is actually going on in this black box. A relatively well-established example of these methods is the recording of EEG potentials (electroencephalography). These can be recorded from the scalp and reflect the activity of large ensembles of neurons. EEG recordings are complemented by the recording of magnetic fields (magnetoencephalography, MEG). A relatively new method relies on magnetic resonance imaging, called functional MRI, which is able to track the changes in blood flow in an intact brain when defined stimuli are applied.

On the next level are recordings of field potentials and/or single cells, usually in animals, but also in cell cultures and, in rare instances, even in humans. Newer methods are able to optically record the activity of large portions of the cortex, either by applying specific dyes that change their color depending on the activity level of the underlying cortex or by genetically modifying animals so that defined cell types in their brains emit light when activated. On an even finer scale of observation, it is possible to study and to modify the transmitters by which nerve cells communicate with each other: the field of neuropharmacology.

Fig. 1.2
figure 2

Histological examples from different parts of the human cortex, clearly demonstrating a structural specialization that mirrors the functional specialization of the cortex. As can be seen, the thickness of the cortex varies between different parts, as does the pattern of distribution of neurons. On the basis of such differences, Brodmann segmented the human cortex. The result is shown in Fig. 1.13. (Figure reproduced from [6, 7] Brodal (1969): Neurological Anatomy. In Relation to Clinical Medicine, p. 475, by permission of © Oxford University Press, 2018. All Rights Reserved)

The methods used to study the structure of brains complement the functional methods outlined above: anatomy and histology study not only the gross structure of brains but also their fine structure at the level of single cells (see Fig. 1.2). Additional methods, such as electron microscopy and biochemistry, investigate both function and structure on the level below single cells, that is, on the level of communication between cells: how information in nervous systems is stored by modifications in the way in which cells communicate with each other, and what the underlying biochemical and intracellular processes might be.

It should have become clear from this short overview that the study of information processing and storage in brains is a highly interdisciplinary endeavor, encompassing disciplines such as biophysics, theoretical physics, biochemistry, biology, neuroinformatics, psychology, and even philosophy.

1.2 Behavioral Level

1.2.1 Collecting and Selecting Information About the Outer World

We will look at questions such as the ones outlined above on different levels, according to the methods used to obtain the results, as indicated earlier. Let us start with the level of the black box, or of phenomenology; that is, we will look at what can be inferred from investigating the relation between the inputs to the brain and the outputs produced by the brain, be it verbal responses or other types of behavior. It is customary to divide the black box into hypothetical stages, partly guided by introspection and partly by model assumptions. It sounds reasonable to postulate, as a first step, the conversion of an external stimulus by a sense organ into the type of information used by the brain. Obviously, sense organs are able to register a number of quite different carriers of information, for example, mechanical pressure on the skin, odor molecules streaming through the air from certain parts of our surroundings, electromagnetic waves of a certain range of wavelengths (“colors”), and quite small changes in air pressure (usually called sounds). (We will not consider here information about the internal state of the body; this is a field by itself.)

The information collected by the sense organs has to be transmitted to the brain via nerve fibers, something we know from anatomy but that can be inferred even from black-box analysis: there must be some type of connection between the sense organs and the brain. We can also infer, from a black-box analysis, that there is a constant selection of information. The clearest external sign of this selection of information is human eye movements. We can measure the decrease of resolution from the center of gaze toward the periphery and find that already at 10° eccentricity, visual resolution is only about 20% of what it is at the center of gaze (Fig. 1.3). Clearly, that is the reason why we humans move our eyes a few times per second to gaze at the most “interesting” parts of a visual scene, guided by attentional processes. This change of gaze direction implies a constant selection of visual information, insofar as only the objects at the center of gaze are resolved in any detail, while all the rest is not clearly perceived.

Fig. 1.3
figure 3

Visual resolution of humans as a function of distance from the center of gaze. The dotted line indicates the effect of the blind spot, that is, the start of the optic nerve. Since only cones reside at the center of the retina and these do not function in the dark, night-time acuity is best at about 10° eccentricity, where the maximum resolution by rods exists. (Figure adapted from [10] Fahle (2003): Gesichtssinn und Okulomotorik. Schmidt, R.F. & Unsicker, K. (Eds.) Lehrbuch Vorklinik Teil B: Anatomie, Biochemie und Physiologie des Nervensystems, der Sinnesorgane und des Bewegungsapparats. Kommunikation, Wahrnehmung und Handlung, p. 273)
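The roughly hyperbolic fall-off of resolution with eccentricity can be captured by a simple textbook-style approximation; the constant E2 below is an assumed value, chosen so that acuity at 10° eccentricity comes out at about 20% of foveal acuity, matching the number quoted in the text:

```python
# Illustrative model of visual acuity as a function of eccentricity.
# Relative acuity is approximated as E2 / (E2 + e), a common textbook
# form; E2 = 2.5 deg is an assumed constant, not a fitted value.

E2 = 2.5  # eccentricity (deg) at which acuity has dropped to half

def relative_acuity(ecc_deg):
    """Acuity relative to the center of gaze (1.0 at the fovea)."""
    return E2 / (E2 + ecc_deg)

for e in (0, 2.5, 10, 30):
    print(f"{e:5.1f} deg -> {relative_acuity(e):.2f} of foveal acuity")
# At 10 deg the model gives 0.20, i.e., about 20% of foveal resolution,
# in line with the value quoted in the text.
```

This steep fall-off is exactly why frequent gaze shifts are needed: only a tiny region around the current fixation is seen at full resolution.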

Through behavioral experiments, we can also find out some of the criteria that govern the selection of information, for example, by testing which of the information supplied during an experiment has been stored and can be retrieved. We can also find out, through behavioral experiments, which types of features the brain can extract in parallel, that is, simultaneously, for a number of criteria, frequencies, or positions in outer space. A clear example is shown in Fig. 1.1, from [14]. A stimulus defined by a difference in color relative to that of its neighbors can easily be detected. But it is quite a difficult task to find a stimulus that is defined by a combination of a specific color and a specific orientation when there are stimuli around that have either the same orientation (but a different color) or the same color (but a different orientation). For the latter task, we need time, since we have to analyze, if not each single item, then at least groups of items sequentially by focused attention before we find the one stimulus that is unique.

Another example is from my own work. Finding a Vernier stimulus amidst stimuli that are aligned, as in Fig. 1.4a1, b1, is quite easy, while it is time-consuming to find the Vernier that is offset to the right among Verniers offset to the left (Fig. 1.4a2, b2). So we can conclude that we do have detectors in the nervous system that can analyze the entire visual field regarding color, line orientation, and collinearity, enabling parallel, that is, simultaneous, processing, while we need attention to “bind” color and orientation together, as indicated by the expression “spotlight of attention,” and to discriminate between an offset to the left and an offset to the right (serial processing). Figure 1.4c shows the increase of reaction times as a function of the number of stimuli presented simultaneously. This is an important difference between the brain and standard computers, which usually have a rather limited number of processors. The use of graphics cards for some computational tasks somewhat decreases this difference between brains and computers: computers are beginning to have many information-processing units operating in parallel, as the brain has, up to the point of in-memory computing (see Chap. 3).

Fig. 1.4
figure 4

Left (a1 and b1) A target that has a small lateral offset, such as a Vernier target, pops out among distractors that are aligned. Left (a2 and b2) A target offset to the left has to be searched for among distractors that are offset to the right. (Figure reproduced from [9] Fahle (1991): Parallel perception of vernier offsets, curvature, and chevrons in humans. Vision Research 31, p. 2151 (top), with permission from © Elsevier, 2018. All Rights Reserved) Right (c) The time required to find a target increases linearly with the number of distractors for serial search, but not for a target defined by a difference in color or in orientation (shape). POS target present in display, NEG target not present in display. (Figure reproduced from [26] Treisman & Gelade (1980): A Feature-Integration Theory of Attention. Cognitive Psychology 12, p. 104, with permission from © Elsevier, 2018. All Rights Reserved)
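The contrast between the two search modes can be summarized in a toy reaction-time model in the spirit of feature-integration theory; the base time and per-item cost below are made-up placeholders, not measured values:

```python
# Sketch of parallel vs. serial visual search reaction times (RT).
# Parallel ("pop-out") search: RT is independent of the number of distractors.
# Serial search: RT grows linearly; on target-absent trials every item must
# be inspected, on target-present trials about half of them on average.
# All numeric constants are illustrative placeholders.

BASE_MS = 450        # assumed base reaction time
MS_PER_ITEM = 40     # assumed cost of inspecting one item serially

def rt_parallel(n_items):
    return BASE_MS

def rt_serial(n_items, target_present=True):
    inspected = n_items / 2 if target_present else n_items
    return BASE_MS + MS_PER_ITEM * inspected

for n in (4, 8, 16, 32):
    print(f"{n:2d} items: pop-out {rt_parallel(n):.0f} ms, "
          f"serial/present {rt_serial(n):.0f} ms, "
          f"serial/absent {rt_serial(n, False):.0f} ms")
```

The model reproduces the qualitative pattern of Fig. 1.4c: flat functions for pop-out targets and linearly rising functions for serial search, with the target-absent slope roughly twice the target-present slope.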

1.2.2 Processing Times in Brains Versus Computers

We can also measure how long it takes before observers detect and/or identify a stimulus, for example, a visual stimulus. Detecting a stimulus requires less time (around 280 ms) than categorizing one (above 300 ms) [3]. Still, identification and categorization can be quite fast, as we just saw; they may not take much longer than the mere detection of, and reaction to, a stimulus. This is even true for such complex discriminations as that between men and women, and even for aesthetic judgments and evaluations of other persons as likable versus not likable, the latter requiring around 1 s (e.g., [18]). For such discriminations, an association is required to evaluate quite complex features of the presented image.

Also from behavioral reactions, we find that different modalities are combined in the brain. When watching a ventriloquist, we hear the voice as coming from his or her puppet, while our acoustic sense by itself would tell us that this is not the case and that the sound arises from the ventriloquist’s mouth. Our visual system modifies the input from the acoustic system in this case. The processing times for even such easy tasks as detecting that a stimulus has appeared are, of course, quite long if we compare them with the processing speed of computers. The number of serial processing steps a brain can achieve per second is probably at most about 10, that is, 10 Hz. You may contrast this with the clock rate of 2,000,000,000 Hz (2 GHz) achieved by a standard processor to understand why scanners are used at the supermarket checkout, and why self-driving cars will need a shorter safety distance to other cars than human drivers do, once the software is perfected.
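The disparity between these serial rates can be made concrete with a back-of-the-envelope calculation (10 Hz is the rough estimate from the text; 2 GHz is a typical processor clock rate):

```python
# Back-of-the-envelope comparison of serial processing rates.
brain_steps_per_s = 10            # rough upper estimate from the text
cpu_cycles_per_s = 2_000_000_000  # a typical 2 GHz processor clock

ratio = cpu_cycles_per_s / brain_steps_per_s
print(f"A 2 GHz processor runs ~{ratio:.0e} serial steps "
      "for every processing step of the brain.")
# The brain compensates with massive parallelism: billions of neurons
# operate simultaneously, whereas a CPU has only a few cores.
```

The eight-orders-of-magnitude gap in serial speed is the reason the brain must rely on parallelism and aggressive early data reduction rather than on raw clock rate.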

1.2.3 Information Storage in the Brain as Studied Behaviorally

As already mentioned, information can be stored in the brain in different ways; a black-box analysis leads to the distinction between declarative and nonverbal types of storage. Moreover, trying to retrieve stored information does not always succeed, as we all know from introspection. A synopsis of the different types of declarative versus non-declarative memory comes from the textbook by [15] (Fig. 1.5). It discriminates between clearly different types of storage that can be inferred from a purely behavioral standpoint and, moreover, tentatively indicates, in the lower part of the figure, the brain structures that might underlie the storage of these different types of information.

Fig. 1.5
figure 5

A relatively large number of different types of learning, and hence of information storage, exist in the human brain. Some information can be expressed in words (declarative); a large amount of learned content, however, is nonverbal (non-declarative). This textbook diagram also presents some ideas on where in the brain these different types of information are stored. (Figure reproduced from [15] Kandel et al. (2000): Principles of Neural Science, p. 1231, with permission from © McGraw Hill Companies, 2018 All Rights Reserved)

Fig. 1.6
figure 6

The Rey-Osterrieth Complex Figure. Subjects are asked to copy the figure while it is presented to them. Immediately after the original is withdrawn (and in some cases additionally after a defined delay), subjects have to draw the figure from memory. The more elements they draw correctly, the higher their score. (Figure adapted from: [23] Rey, A. (1941): L’examen psychologique dans les cas d’encéphalopathie traumatique. (Les problèmes.) Archives de Psychologie, 28)

As mentioned above, a second way to classify information storage on the behavioral level is by the time interval over which information can be recalled. One type of storage is extremely short and can be used, for example, to hold a phone number until we find something to write it down with. This is called the articulatory loop, and as soon as we have to reply to somebody’s question, we have lost the phone number. Another extremely short-term memory, that is, information storage, is the so-called iconic scratch-pad (photographic memory). It allows most of us to memorize an image briefly and is usually tested in patients by having them copy the figure displayed in Fig. 1.6, then removing the figure and having them draw it by heart (of course, it should be called “by brain” rather than “by heart”). On a medium-long time scale, we have the so-called short-term memory, which may last for hours or days and can be erased, for example, by a concussion of the brain or by the application of electroconvulsive therapy. Finally, there is long-term memory, which seems to be most stable and can last for a lifetime. We will come back to the mechanisms underlying these different types of memory when we look at the next level, that of physiology and biochemistry. But first let us have a look at the results of patient studies, as a mixture of black-box analysis and a macroscopic view of the brain (see also Chap. 2).

1.2.4 Patient Studies: Structural (Brain) Defects Leading to Behavioral Disabilities

Patient studies have greatly enhanced our knowledge about information processing and storage in human brains. Early on it was discovered that some people, especially men, suffer from deficits in color vision. Studying these people, it became clear that there are at least three different photoreceptor types in the normal human retina and that each of these photoreceptor types can be disturbed selectively (see Fig. 1.7).

Fig. 1.7
figure 7

The mosaic of photoreceptor-types in the human retina: Long-wavelength (L, red), medium-wavelength (M, green), and short-wavelength (S, blue) cones are shown as red, green, and blue dots, respectively. (Figure reproduced from [24] Roorda et al. (2001): Packing arrangement of the three cone classes in primate retina. Vision Research 41, p. 1295 (part d), with permission from © Elsevier, 2018. All Rights Reserved)

Another early insight was that the part of the brain that understands language is separate from the part that produces speech. It even became clear that after strokes, patients may suffer not only from motor disturbances or blindness, but also from more subtle disturbances of information processing, for example, from the inability to discriminate colors, to perceive motion, to judge distances, or to discriminate between faces. From postmortem analyses, researchers were even able to localize the relevant parts of the cortex whose damage caused the symptoms mentioned.

Some patients may have normal visual resolution and normal low-level visual abilities but suffer from disturbances of more complex visual functions [12]. One example was mentioned above: the inability to discriminate between faces. A more general disturbance is the so-called visual apperceptive agnosia. Patients suffering from this deficit have normal visual resolution and normal visual fields but are unable to copy even simple line drawings, as illustrated in Fig. 1.8. Other patients are able to copy drawings but, when asked which objects they just copied, are unable to give an answer. These patients, suffering from associative agnosia, are also impaired at visually recognizing everyday objects, while they might be able to name the same objects when these emit a typical sound. Hence, one can discriminate between two groups of patients: those who are unable to name a visually presented object because they cannot match the visual impression with their visual memory, that is, with the stored representations of previously seen objects; and those who can still do so but lack the connection to the corresponding word, and who may be able to signal the meaning or function of an object by pantomime. Taken together, these deficits provide the basis for a model of the sequence of steps in the processing of visual input, the end result of years of learning from visual input during childhood: visual categories evolve that are stored in the visual association areas of the brain and are connected to words that can be produced when the corresponding object is presented visually, acoustically, or even haptically. Figure 1.9 shows a synopsis of these stages for the visual system.

Fig. 1.8
figure 8

Right part: Apperceptive agnosia. Patients suffering from this rare disorder can more or less clearly see individual dots and lines but are unable to combine these line elements perceptually into forms, let alone objects. Hence, patients suffering from an apperceptive agnosia are unable to copy even simple line drawings. Left part: Patients suffering from associative agnosia are able to copy line drawings but are unable to identify the objects they copied. (Figure reproduced from [1] Álvarez & Masjuan (2016): Visual agnosia. Rev. Clin. Esp. 216(2), p. 87; with permission from © Elsevier, 2018. All Rights Reserved)

Fig. 1.9
figure 9

Schematic overview of the levels of cortical information processing and the symptoms arising from disturbances at the different levels. On the first level beyond the retina, boundaries are detected based on transitions in luminance, hue, (stereoscopic) depth, motion direction or speed, texture, or other cues, for example, second-order features in these submodalities. Defects on this level lead to symptoms that are called indiscriminations. On the next level, contours or boundaries are combined and bound together to form objects. Defects on this level lead to apperceptive agnosias. On a third level, the objects are compared with stored representations of objects encountered earlier; that is, the present object is categorized and hence recognized. Defects on this level lead to associative agnosias, for example, to prosopagnosia if only the recognition of faces is defective, or alexia if the defect concerns words. The recognized object is usually linked to a noun (semantic storage). Absence of this link leads to optic aphasia. (Left figure and legend from [11] Fahle & Greenlee (2003): The Neuropsychology of Vision, p. 196; by permission of © Oxford University Press 2018. All Rights Reserved)

A final example is provided by two patients described by [13]. One suffered from a pronounced deficit in object recognition, similar to the one described in Fig. 1.8, but had no problems finding her way in space (her “what”-pathway was defective). The other, with a structural defect in another part of the brain, was able to identify objects easily but suffered from massive problems in moving through space and in identifying the orientation of objects in space (her defect was in the “where”-pathway; see Sect. 1.3.6 for a description of the “what” versus “where” cortical pathways).

Another insight from patient studies is that information stored in the brain might not be readily retrievable. We all know from everyday experience that we sometimes have problems retrieving information that we know is somewhere in our brain. Patients with strokes of the brain often suffer, if only temporarily, from so-called neglect. These patients have reduced or no access at all to information regarding, usually, the left side of their bodies and/or of outer space. A well-known example of the symptoms produced by such patients is that a patient complains of not having enough food because he or she ate only the right half of the food present on the plate. Rotating the plate by 180° will satisfy such a patient.

In an elegant experiment, the Italian neuropsychologist Edoardo Bisiach [4] demonstrated that the information about outer space is still present in these patients. He asked patients suffering from neglect to imagine standing in front of the cathedral on the central square in Milan and to describe which buildings they would see (Fig. 1.10). Surprisingly, these patients mentioned only those buildings that were on the right side of their imagined visual field. Later, he asked them to imagine standing on the other side of the same square, now looking towards the cathedral. Again, they were asked to report all the buildings in their imaginary field of view, and again they mentioned only the buildings on their imagined right side. This result shows that the information about all the buildings on this square was present in their brains, but the patients could access only part of it.

Fig. 1.10
figure 10

An experiment by the Italian neuropsychologist Edoardo Bisiach clearly shows that information stored in the brain may not always be accessible: his neglect patients remembered only part of the buildings on the central square in Milan, depending on where they imagined themselves to be standing. (Left image from: https://pxhere.com/en/photo/744007. Right figure reproduced from [16] Kandel et al. (2013): Principles of Neural Science, p. 384, with permission from © McGraw Hill Companies, 2018 All Rights Reserved. Adapted from [4] Bisiach & Luzzatti (1978): Unilateral neglect of representational space. Cortex 14, 129–133, with permission from © Elsevier, 2018 All Rights Reserved)

A somewhat related phenomenon in normal subjects is so-called priming. When a list of, say, 20 words is read aloud and people are then asked to write down as many words from this list as they can remember, the result is that, depending on the length of the list, perhaps only half of the words are written down. If, on the other hand, people are asked to complete three-letter sequences (such as abs …), then they tend to produce the words they had heard as part of the list that was read aloud (for example, “absurd”), even those they did not consciously remember: hence “absurd” rather than “absent,” “absolve,” “absorb,” or “absinth.” This effect is most pronounced in patients suffering from the so-called amnesic syndrome, who will actively remember only very few of the words but will achieve about the same number of hits as normal subjects if the task is not to remember the words but to complete three-letter strings.

1.3 A Probe into the Brain: Anatomy and Electrophysiology

We now leave the level of black-box analysis and enter the level of looking inside the brain. The two most important techniques here are anatomy and histology, which investigate the structure of the brain, and electrophysiology and functional imaging (fMRI), which examine the function of cells and neuronal assemblies.

Fig. 1.11
figure 11

Bottom: Lateral view of a human brain (that of the author) representing the surface between cortical grey and underlying white matter. Blue, frontal lobe; green, parietal lobe; yellow, temporal lobe; red, occipital lobe. Top: Medial view of the same brain. (Figure and legend after [11] Fahle & Greenlee (2003): The Neuropsychology of Vision (Plate 3), by permission of © Oxford University Press, 2018. All Rights Reserved)

1.3.1 Anatomy: Segmentation and Specialization of Human Cortex

Let us start with anatomy. Figure 1.11 shows a lateral view of a human brain where different colors indicate different parts of the cerebral cortex. It can be seen that the posterior part of the brain is almost exclusively devoted to processing inputs, be they visual, acoustic, or somatosensory. The anterior part on the other hand is almost exclusively devoted to planning and producing motor acts.

1.3.2 Histology: The Elements of Brains—Neurones

The brain consists of about 10 billion neurones, each with up to 1000 connections to other neurones. Figure 1.2 shows all the cell bodies in thin slices of brain tissue, while Fig. 1.12 shows about 1% of the neurons, including their arborizations (dendrites and axons), in a somewhat thicker slice. If all cells had been impregnated, the slice would be solid black. The great thing about these silver stains is that not only do they stain just a limited number of neurons, but they moreover stain the neurons they do stain in their entirety, that is, with all their axonal and dendritic arborizations. The image in Fig. 1.12 gives a good impression of what the elements of information processing in the brain actually look like.

Of course, there are quite a number of different types of neurons. But they all have in common, first, an input part with so-called dendrites, where they receive (mostly) chemical activations from other cells; second, a cell body with the cell nucleus and the genetic information; and third, an axon that may have many arborizations and up to 1000 or even more connections to other cells. On the basis of differences in the fine structure of cortical architectures, as are evident in Fig. 1.2, Brodmann [7] segmented the human brain into around 50 different areas. Only much later did it become clear that these differences in fine structure are the basis of different functional specializations, such as seeing, hearing, or moving (see Fig. 1.13).

Fig. 1.12
figure 12

The Golgi silver stain (named after its inventor, Camillo Golgi) stains only about 1% of neurons in a slice, but these neurons are stained completely, giving a clear impression of the structure of the cortical machinery. (Figure reproduced from [5] Braitenberg (1977): On the Texture of Brains. An Introduction to Neuroanatomy for the Cybernetically Minded, p. 107, with permission from © Springer Nature, 2018. All Rights Reserved)

Fig. 1.13
figure 13

Segmentation of the human cortex on the basis of cytoarchitecture (from [7] Brodmann 1909). The numbers indicated were assigned by Brodmann himself; unfortunately, his system of assigning numbers is difficult to understand. (Figure and legend after [11] Fahle & Greenlee (2003): The Neuropsychology of Vision, p. 185, by permission of © Oxford University Press 2018. All Rights Reserved)

1.3.3 Electrophysiology: From Generator Potentials to Orderly Cortical Representations

Electrophysiology can tell us some of the properties of these neurons. By recording from receptors in the eye and ear, electrophysiologists found that these receptors react to adequate stimulation by producing a so-called generator potential. The stronger the stimulus, the stronger the generator potential becomes. This is to say that the transduction from an external signal to body-inherent information produces, as a first step, an analogue signal. This analogue signal is then transformed into a digital one, namely, the action potential or spike in the axon that leads to the next station toward the brain. In the axon, the stimulus strength is no longer coded in an analogue fashion but by the number of spikes per second, all of which have the same form and amplitude. These spikes are then integrated at the next synapse via smaller or larger amounts of transmitter that is released into the synaptic cleft between the axon of the “transmitting” neuron and the dendrite of the receiving one. A few action potentials per second may not be able to excite the next neuron and will vanish without any consequences, because integration in the postsynaptic neuron may require a large number of incoming spikes, for example, more or less simultaneous inputs from several neurons, before that neuron is activated.
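This chain, from analogue generator potential, to rate-coded spikes, to leaky integration at the next neuron, can be sketched in a few lines of code. The following is a deliberately simplified toy model; the functions and all parameter values are illustrative assumptions, not physiological measurements from the text.

```python
def generator_potential(stimulus_strength):
    """Analogue first step: the receptor potential grows with stimulus strength."""
    return max(0.0, stimulus_strength)   # toy linear transduction

def spike_rate(potential, gain=10.0):
    """Digital second step: strength is recoded as spikes per second."""
    return gain * potential

def output_spikes(rate, dt=0.001, duration=1.0,
                  leak=5.0, weight=0.02, threshold=1.0):
    """Leaky integration at the next neuron: the membrane potential decays
    between incoming spikes, so a low input rate never reaches threshold."""
    v, spikes, t = 0.0, 0, 0.0
    interval = 1.0 / rate if rate > 0 else float("inf")
    next_input = interval
    for _ in range(int(duration / dt)):
        v -= leak * v * dt            # passive decay (the "leak")
        t += dt
        while t >= next_input:        # each arriving spike adds a small EPSP
            v += weight
            next_input += interval
        if v >= threshold:            # enough near-coincident input: fire, reset
            spikes += 1
            v = 0.0
    return spikes

weak = output_spikes(spike_rate(generator_potential(0.5)))    # 5 spikes/s in
strong = output_spikes(spike_rate(generator_potential(50.0))) # 500 spikes/s in
```

With these assumed parameters, the weak input decays away between spikes and never drives the postsynaptic neuron, while the strong input does, which is exactly the "few action potentials per second vanish without consequence" point made above.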

Fig. 1.14
figure 14

Top: Visual field defects (scotomata) caused by lesions at different levels of the visual system. Retinal lesions produce unilateral, incongruent scotomata in the affected eyes (and hemifields), and so do optic nerve defects. Lesions of the chiasm (A), the optic tract (B), the lateral geniculate nucleus (LGN) (C), and the optic radiation (D, E, F) produce scotomata only in the contralesional hemifield; these are roughly congruent, that is, they cover the same area when tested monocularly. The same is true for the primary, or striate, visual cortex, area 17. Lesions of extrastriate cortical areas often create blindness in one quadrant while the other is spared, due to the fact that the representations of the upper and the lower visual field are separated in extrastriate cortical areas. (Figure and legend from [11] Fahle & Greenlee (2003): The Neuropsychology of Vision (Plate 1), by permission of © Oxford University Press 2018. All Rights Reserved.) Bottom: (a) Projection from both halves of both retinae to the LGN and primary visual cortex. (b, c) Topography of the projection of the visual world onto the primary visual cortex. The center of gaze, corresponding to the fovea, is represented at the occipital pole ((c), lower part). Visualization of the projection to the primary visual cortex: the stimulus displayed in the upper part of the graph produces the cortical activation displayed in the lower part. (Figure and legend from [11] Fahle & Greenlee (2003): The Neuropsychology of Vision (Plate 2), by permission of © Oxford University Press 2018. All Rights Reserved)

Fig. 1.15
figure 15

An overview of the macaque visual system, as seen from lateral and medial views of the right hemisphere and from unfolded representations of the entire cerebral cortex and major subcortical visual structures. The cortical map contains several artificial discontinuities (e.g., between V1 and V2). Minor retinal outputs (~10% of ganglion cells) go to the superior colliculus (SC), which projects to the pulvinar complex, a cluster of nuclei having reciprocal connections with many cortical visual areas. All structures (except the much thinner retina) are ~1–3 mm thick. (Figure and legend from [27] van Essen et al. (1992): Information Processing in the Primate Visual System: An Integrated Perspective. Science, 255, p. 420, reprinted with permission from © AAAS, 2018. All Rights Reserved)

Fig. 1.16
figure 16

Schematic view of the cortical areas involved in visual analysis in the monkey brain, based on the pattern of axonal connections between areas, as well as on differences in cytoarchitectonic and electrophysiological properties of single neurons. (Figure from [27] van Essen et al. (1992): Information Processing in the Primate Visual System: An Integrated Perspective. Science, 255, p. 421, reprinted with permission from © AAAS, 2018. All Rights Reserved)

In the acoustic system, stimuli are already separated on the basis of their frequency in the inner ear, while in the visual system this separation is primarily on the basis of spatial position and, of course, of color or, more precisely, wavelength. From the receptor organs, signals are conducted to specialized parts of the cortex, as indicated in Fig. 1.14. The visual system, certainly the best studied of the sensory systems, encompasses about 50 different cortical areas that can be distinguished from each other on the basis of histology and/or the response characteristics of their neurons. Figures 1.15 and 1.16 show that together these areas make up about a third of the macaque monkey’s cortex. This means that the processing of visual information is very important for mammals and especially primates. In line with the results of patient studies, some of these areas are specialized to analyze motion, color, depth, faces, or places.

1.3.4 Functional Magnetic Resonance Imaging: A Window into the Functioning Human Brain

Brain tissue that is activated, for example, by visual or acoustic input or by planning motor acts, requires more oxygen and glucose than tissue at rest. By means of functional magnetic resonance imaging (fMRI), the concentration of (deoxy)hemoglobin and its changes in the brain can be measured with a spatial resolution of about 1 mm, that is, with a far finer spatial resolution than is possible with the EEG (see Fig. 1.17). The EEG, on the other hand, has a far higher temporal resolution. For further information about the underlying principles of these and similar methods, such as PET (positron emission tomography) and infrared cortical imaging, the reader is referred to the textbook by [16].
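The limited temporal resolution of fMRI comes from the sluggishness of the blood-flow response itself, which can be illustrated with a small sketch. It assumes the double-gamma haemodynamic response shape conventionally used in fMRI analysis; the parameter values below are common defaults, not values from this chapter.

```python
import math

def hrf(t, a1=6.0, a2=16.0, undershoot_ratio=6.0):
    """Double-gamma haemodynamic response: a peak roughly 5 s after the
    event, followed by a shallow, longer-lasting undershoot."""
    if t <= 0:
        return 0.0
    peak = t ** (a1 - 1) * math.exp(-t) / math.gamma(a1)
    undershoot = t ** (a2 - 1) * math.exp(-t) / math.gamma(a2)
    return peak - undershoot / undershoot_ratio

dt, n = 0.1, 300                                               # 30 s at 10 Hz
stimulus = [1.0 if i * dt < 1.0 else 0.0 for i in range(n)]    # 1 s of input
# The predicted BOLD signal is the stimulus convolved with the response:
bold = [dt * sum(stimulus[j] * hrf((i - j) * dt) for j in range(i + 1))
        for i in range(n)]
peak_time = max(range(n), key=lambda i: bold[i]) * dt
```

Even though the neuronal event lasts only one second, the simulated BOLD signal peaks several seconds later and then dips below baseline, which is why fMRI localizes activity well in space but poorly in time.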

Fig. 1.17
figure 17

Functional magnetic resonance imaging. An increase in neuronal activity results in an increased supply of oxygenated blood. This decreases the deoxyhemoglobin concentration. The result is an fMRI image of the locations of metabolic activity as revealed by the changes in deoxyhemoglobin concentration. Colors in the image indicate regions of visual cortex that responded to visual stimuli placed at particular locations in the visual field. (Left figure reproduced from [16] Kandel et al. (2013): Principles of Neural Science, p. 432, with permission from © McGraw Hill Companies, 2018 All Rights Reserved. Legend after Kandel et al., 2013, p. 432. Right figure from Stephanie Rosemann, with permission from © Rosemann, 2018. All rights reserved)

1.3.5 Imaging Studies on Top-Down Influences: Attention and Expectation

Electrophysiology has also shown that the properties of neurons can be changed not only by sensory input but also by top-down influences, which may be the mechanics of so-called attention processes. This is to say that the processing and filtering characteristics of even rather early cortical neurons can be modified under top-down influences, that is, under the influence of higher and more complex cortical areas that have stored information about the external world. These influences may represent the impact of learned content and experience on early sensory information processing; they lead us to expect certain inputs under some circumstances and different inputs under others, for example, depending on whether we are at home or in a jungle, to name two quite different settings. The results on perceptual learning outlined below indicate that through learning, that is, through the modification and storage of information, performance at these early levels of information processing can be changed. In a way, we are able to concentrate on specific features of sensory inputs depending on the task at hand.

1.3.6 Different Processing Pathways for Object Recognition Versus Space Processing as Defined by Electrophysiology and Imaging Studies

In order to interact with the outer world, the brain has to solve two quite different problems. The first is to identify visual objects, and the second is to analyze the spatial structure of the surroundings in order to interact with objects, as mentioned above. To solve the first task, the identification of objects, it is desirable to generalize across space so that an object can be identified irrespective of its location in space. The temporal or ventral pathway of visual information processing is specialized for this task. To navigate through space and to interact with objects of the outer world, on the other hand, the detailed properties of objects are less important than their exact position. The parietal or dorsal pathway of the mammalian cortex (see Fig. 1.18) is specialized for this task.
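The computational trade-off between the two tasks can be caricatured in a few lines of code. This is my own illustration, not a model from the literature: pooling a spatial map of feature responses preserves identity while discarding position (“what”), whereas reading out the position of the strongest response preserves position (“where”).

```python
def what_pathway(feature_map):
    """Identity: the strongest feature response, wherever it occurred."""
    return max(max(row) for row in feature_map)

def where_pathway(feature_map):
    """Location: the (row, column) of the strongest response."""
    _, r, c = max((v, r, c) for r, row in enumerate(feature_map)
                  for c, v in enumerate(row))
    return r, c

# The same "object" (a response of strength 9) at two different positions:
scene_a = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
scene_b = [[9, 0, 0], [0, 0, 0], [0, 0, 0]]
```

For both scenes the “what” readout is identical (the object is recognized wherever it lies), while the “where” readout differs, mirroring the division of labor between the ventral and dorsal pathways.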

Fig. 1.18
figure 18

The so-called ventral or “what” and dorsal or “where” pathways. Since the visual system has to solve two tasks, namely identifying objects irrespective of their location and exactly localizing these same objects, two pathways of information processing evolved. The so-called dorsal pathway localizes objects (as a first step toward interacting with them), while the ventral pathway identifies and categorizes these objects irrespective of their position in space. (Figure from Cathleen Grimsen, with permission from © Grimsen, 2018. All Rights Reserved)

1.3.7 Stimulating the Cortex

It is possible to stimulate the cortex and to elicit specific responses. For example, applying a strong, short magnetic impulse through the intact skull over the motor cortex will produce a movement in the body part represented underneath this part of the skull. Stimulation over the posterior (visual) part of cortex will produce short flashes, called phosphenes. In patients, direct stimulation of cortex can even elicit memories of events and produce emotional responses [22].

In animals, recording from single cells can identify neurons whose discrimination of a stimulus moving slightly to the left versus slightly to the right is more precise (if this response is fed into a computer) than that of the monkey itself [19]. The explanation is that a whole population of neurons always responds to such stimuli, and the decision stage may not know which neuron is the best one to follow, while the experimenter has carefully chosen a neuron with a (near) optimal signal-to-noise ratio. If a neuron responding to, say, movement to the right (or rather a small population of such neurons) is now stimulated electrically while a stimulus is actually moving slightly to the left, the monkey may signal that it perceived movement to the right [25].
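The logic of such microstimulation experiments can be reduced to a toy comparison of two pools of neurons. The function and numbers below are my own illustrative assumptions: the decision stage simply compares the summed activity of a “left” and a “right” pool, and injected stimulation adds activity to one pool.

```python
def reported_direction(left_rate, right_rate, stim_right=0.0):
    """Toy decision stage: compare the summed activity of the two pools."""
    return "right" if right_rate + stim_right > left_rate else "left"

# A stimulus moving slightly leftward drives the left pool a bit harder:
without_stim = reported_direction(left_rate=12.0, right_rate=10.0)

# Microstimulation of the rightward pool overrides the visual evidence:
with_stim = reported_direction(left_rate=12.0, right_rate=10.0, stim_right=5.0)
```

The extra current tips the population vote, so the animal reports rightward motion although the stimulus moved left, in the spirit of the result described above.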

1.4 Single-Cell Studies in Animals and Humans and Biochemistry/Neuropharmacology

1.4.1 Elementary Properties of Single Neurons

Single-cell studies have greatly enhanced our knowledge regarding the properties of single neurons and small neuronal populations in the brain. Hence, we know that some cells in the primary visual cortex are specialized to detect (short) line segments of different orientations, with the entire visual field being represented by millions of neurons arranged in an orderly fashion (see Figs. 1.19 and 1.20). From the primary visual cortex (V1) with its six layers, information regarding color, motion, orientation, depth, and complex form is relayed to different cortical areas and further analyzed separately in these areas, such as areas V2, V4, and MT.

Fig. 1.19
figure 19

Parallel processing in visual pathways. The ventral stream is primarily concerned with object identification, carrying information about form and color. The dorsal pathway is dedicated to visually guided movement, with cells selective for direction of movement. These pathways are not strictly segregated, however, and there is substantial interconnection between them even in the primary visual cortex. (LGN, lateral geniculate nucleus; MT, middle temporal area.) (Figure reproduced from [16] Kandel et al., 2013, p. 571, with permission by © McGraw Hill Companies, 2018. All Rights Reserved. Legend after Kandel et al., 2013, p. 571)

Fig. 1.20
figure 20

Topography of the distribution of orientation preferences in the primary visual cortex of the cat. The centers of pinwheels are marked by stars. (Figure from [8] Crair et al., 1997, p. 3383, downloaded from: https://fr.wikipedia.org/wiki/Fichier:Pinwheel_orientation.jpg, accessed July 4, 2018)

Moreover, there are parts of the visual cortex that are specialized for discriminating between different colors or between different object classes, and for detecting faces, body parts, objects, or places (see Fig. 1.21). In the auditory cortex, we find several tonotopic representations (see Fig. 1.22), and both the surface of the body and the muscles of the body are represented in an orderly fashion (see Fig. 1.23).

Fig. 1.21
figure 21

This schematic diagram indicates the approximate size and location of regions in the human brain that are engaged specifically during perception of faces (blue), places (pink), bodies (green), and visually presented words (orange), as well as a region that is selectively engaged when thinking about another person’s thoughts (yellow). Each of these regions can be found in a short functional scan in essentially all normal subjects. (Figure and legend reproduced from [17] Kanwisher (2010): Functional specificity in the human brain: A window into the functional architecture of the mind, PNAS 107 (25), p. 11164, with permission by PNAS, 2018)

Fig. 1.22
figure 22

The auditory cortex of primates is tonotopically organized. This means that there are stripes of cortex that are activated preferentially by the same frequencies and that these preferred frequencies change in a monotonic order over the cortical surface. The primary auditory cortex is surrounded by a number of higher-order areas that no longer process single frequencies but more complex features of sound, up to areas devoted to understanding speech or analyzing music. Other parameters mirrored in the functional organization of the primary auditory cortex are response latency and loudness. The auditory map is only roughly present at birth (as is the case with the retinotopic map in the primary visual cortex) but evolves as a result of experience and learning

Fig. 1.23
figure 23

A homunculus illustrates the relative amounts of cortical area dedicated to individual parts of the body. (Adapted, with permission, from [22] Penfield and Rasmussen 1950.) (a) The entire body surface is represented in an orderly array of somatosensory inputs in the cortex. The area of cortex dedicated to processing information from a particular part of the body is not proportional to the mass of the body part but instead reflects the density of sensory receptors in that part. (b) Output from the motor cortex is organized in a similar fashion. The amount of cortical surface dedicated to a part of the body is related to the degree of motor control of that part. (Figure reproduced from [16] Kandel et al. (2013): Principles of Neural Science, p. 364, with permission from © McGraw Hill Companies, 2018 All Rights Reserved. Legend after Kandel et al., 2013, p. 364)

1.4.2 Storage of Information in the Brain: The Cellular Level

Here we will categorize information storage based primarily on the duration of storage. Present knowledge indicates that information is stored in the brain in two different ways. Initially, activation in recurrent neuronal circuits, for example, in the frontal cortex and the hippocampus, stores information over relatively short time intervals. Later on, and maybe in parallel, information is stored through longer-term changes in the wiring pattern of the neuronal networks, that is, by changing the strength of signal transmission between individual nerve cells.

1.4.2.1 Ultra-Short-Term Storage: Tetanic Stimulation and the Prefrontal, Parietal, and Inferior Temporal Cortex

Let us start with the first type of information storage. Humans are able to store acoustic, visual, and abstract information over short time intervals, for example, by means of the phonological loop or the iconic scratchpad, as mentioned above. This short-term storage relies on changes in the excitability and firing patterns of (cortical) neurons. One prominent cellular mechanism underlying this change in synaptic excitation is an increase in the amount of transmitter release. It has been shown that repetitive activation of a synapse produces a relatively persistent increase in transmitter release (see Fig. 1.24a), leading to post-tetanic potentiation. However, it should be noted that prolonged tetanic stimulation will, on the contrary, decrease the amount of excitation transmitted through a synapse, a phenomenon called synaptic depression (Fig. 1.24b). This synaptic depression is thought to result from the temporary depletion of the store of releasable synaptic vesicles.

Fig. 1.24
figure 24

A burst of presynaptic action potentials leads to changes in the resulting postsynaptic membrane potentials. (a) Upper part: A short but high-frequency stimulation leads to a gradually increasing transmitter release and hence to increasing postsynaptic membrane potentials, an effect called “potentiation.” Note that the postsynaptic potentials remain enlarged even after the end of the presynaptic burst (lower part). (b) Upper part: A longer-lasting high-frequency stimulation may, however, decrease the amplitudes of the postsynaptic potentials (synaptic depression), probably due to a depletion of transmitter in the presynaptic terminal (lower part)
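Both effects, a growing response during a short burst and a collapsing response during a prolonged one, fall out of a minimal vesicle-pool sketch in the spirit of standard short-term-plasticity models. All parameter values here are assumptions chosen for illustration, not measured quantities.

```python
def psp_amplitudes(n_spikes, u0=0.1, facilitation=0.15):
    """Each presynaptic spike releases a fraction u of the remaining
    vesicle pool; u grows with every spike (residual calcium), while
    the pool itself is depleted by each release."""
    u, pool, psps = u0, 1.0, []
    for _ in range(n_spikes):
        release = u * pool
        psps.append(release)            # PSP size tracks released transmitter
        pool -= release                 # depletion of releasable vesicles
        u = min(1.0, u + facilitation)  # facilitation of release probability
    return psps

short_burst = psp_amplitudes(3)    # amplitudes grow: potentiation
long_burst = psp_amplitudes(40)    # pool runs dry: synaptic depression
```

During the short burst the rising release fraction dominates and the PSPs grow; during the long burst the pool is exhausted and the PSPs shrink toward zero, matching panels (a) and (b) of Fig. 1.24.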

The prefrontal cortex plays a crucial role in such ultra-short memories, such as remembering a phone number until you can write it down. When a monkey performs a task that it cannot execute immediately after receiving the instruction (requiring working memory, just as when we remember a phone number until we find a pencil and a piece of paper), specific neurons in the monkey’s prefrontal cortex strongly increase their firing rate until the monkey can perform the task (see Figure 67-1A from [16, p. 1489], which could not be reproduced here for copyright reasons). The underlying cellular mechanism probably relies on an influx of calcium ions (Ca++) through voltage-gated Ca++ channels and the subsequent opening of Ca++-activated nonselective cation channels (see Figure 67-1 from [16, p. 1489]). Recurrent networks of synaptically connected neurons in the prefrontal, parietal, and inferior temporal cortices have been described that are able to create persistent reverberatory activity. Based on these findings, models have been developed that describe the function of such reverberating networks through a combination of excitatory and inhibitory interactions between neurons (see Figure 67-1B from [16, p. 1489]). This type of memory can be extinguished, for example, by a concussion of the brain.
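The core idea of such reverberating models, activity that outlives its cause because the population re-excites itself, can be shown with a single recurrently connected rate unit. This is a minimal sketch with assumed parameters, not one of the published models.

```python
import math

def run(recurrent_weight=1.0, cue_steps=20, total_steps=400, dt=0.1):
    """One excitatory population with a sigmoidal gain; the cue is a
    transient external input switched off after cue_steps."""
    r, trace = 0.0, []
    for step in range(total_steps):
        cue = 1.0 if step < cue_steps else 0.0
        drive = recurrent_weight * r + cue
        gain = 1.0 / (1.0 + math.exp(-10.0 * (drive - 0.5)))
        r += dt * (-r + gain)          # leaky rate dynamics
        trace.append(r)
    return trace

with_cue = run()[-1]               # activity long after the cue has ended
without_cue = run(cue_steps=0)[-1] # no cue: the network stays silent
```

The brief cue pushes the population into a self-sustaining high-activity state that persists until something resets it, a toy version of the elevated prefrontal firing that bridges the delay between instruction and action.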

1.4.2.2 Medium-Term Information Storage: The Hippocampus

Information that has to be remembered for somewhat longer times is, to the best of our present knowledge, stored especially in the hippocampus. There, information can be stored for longer periods, ranging from days to years. To be more precise, it is not just the hippocampus but in addition a part of phylogenetically old cortex that lies near the hippocampus that plays an important role.

A first indication that this part of the brain stores information over somewhat extended times and is crucial for the transfer of short-term to long-term memory of explicit knowledge was obtained from patient Henry Molaison (HM). This patient underwent an operation in 1953 to cure epileptic seizures that could not otherwise be prevented. But after an operation that removed the hippocampus and adjacent cortical tissue from both sides of his brain, he was no longer able to produce any lasting memory traces. His memory span shrank to less than 1 h and he continued to live subjectively in the year of his operation, not recognizing himself in a mirror after he had grown older.

We now know and understand relatively well the neuronal mechanisms involved in this type of memory formation, namely, so-called long-term potentiation (LTP) in hippocampal networks (see Figure 67-2 from [16, p. 1491], which could not be reproduced here for copyright reasons). The hippocampal network consists of a neuronal feedback loop that links the so-called fornix with the hippocampus. The main output of the hippocampus is the pyramidal neurons in hippocampal area CA1. These neurons receive two inputs, as is detailed in Figure 67-2 from [16, p. 1491]. The diagram is meant to show that the structures in the mammalian brain that store information were not designed in a straightforward and “clean” way by an engineer but emerged through evolution. The best analogy from artificial information-processing systems is therefore probably that of an old, complex software system (as in the banking industry) that has seen many programmers add features without really understanding all the other features already incorporated in the program.

Fig. 1.25
figure 25

A model for the induction of long-term potentiation. (a) During normal, low-frequency synaptic transmission, glutamate acts on both NMDA and AMPA receptors in the postsynaptic membrane of dendritic spines (the site of excitatory input). Sodium and K+ flow through the AMPA receptors but not through the NMDA receptors because their pore is blocked by Mg2+ at negative membrane potentials. (b) During a high-frequency tetanus the large depolarization of the postsynaptic membrane (caused by strong activation of the AMPA receptors) relieves the Mg2+ blockade of the NMDA receptors, allowing Ca2+, Na+, and K+ to flow through these channels. The resulting increase of Ca2+ in the dendritic spine triggers calcium-dependent kinases, leading to induction of LTP. (c) Second-messenger cascades activated during induction of LTP have two main effects on synaptic transmission. Phosphorylation through activation of protein kinases, including PKC, enhances current through the AMPA receptors, in part by causing insertion of new receptors into the spine synapses. In addition, the postsynaptic cell releases retrograde messengers that activate protein kinases in the presynaptic terminal to enhance subsequent transmitter release. One such retrograde messenger may be nitric oxide (NO), produced by the enzyme NO synthase (shown in part b). (Reproduced from [16] Kandel et al., 2013, p. 1494, with permission by © McGraw Hill Companies, 2018. All Rights Reserved. Legend after Kandel et al., 2013, p. 1495)

The biochemical mechanisms underlying this medium-term storage of information (via long-term potentiation, LTP) are outlined in Fig. 1.25. Synapses involved in LTP can store information for durations between hours and weeks, maybe even longer. In these synapses, a strong, high-frequency stimulus through the input neuron (Fig. 1.25b) depolarizes the postsynaptic membrane more strongly than a single input as in Fig. 1.25a does. This leads to a de-blocking of the NMDA receptor, which is usually blocked by Mg++ ions, as can be seen by comparing Fig. 1.25a, b. Then, Ca++ ions enter the postsynaptic spine, resulting in four effects, as indicated (by arrows) in Fig. 1.25b. Again, it is not a single straightforward effect but a combination of mechanisms on the cellular level that leads to the storage of information. These effects can be complemented by still another one: the production, in the cell body, of additional receptors that are incorporated into the postsynaptic membrane (Fig. 1.25c).
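The functional essence of the NMDA receptor in this account is coincidence detection: the synapse strengthens only when transmitter is present and the postsynaptic membrane is already depolarized enough to expel the Mg++ block. The following is my simplified sketch of that logic; the threshold voltage and step size are illustrative assumptions.

```python
def ltp_step(weight, transmitter_released, postsynaptic_mv,
             mg_unblock_mv=-40.0, step=0.1):
    """The weight grows only when glutamate is present AND the membrane
    is depolarized enough to relieve the Mg++ block of the NMDA receptor."""
    if transmitter_released and postsynaptic_mv > mg_unblock_mv:
        weight += step   # Ca++ influx -> kinase cascades -> stronger synapse
    return weight

w0 = 1.0
# Low-frequency input: transmitter is released, but the membrane stays
# near rest, the Mg++ block remains, and the weight does not change:
w_weak = ltp_step(w0, transmitter_released=True, postsynaptic_mv=-70.0)

# Tetanus: strong AMPA activation depolarizes the spine, the block is
# relieved, Ca++ enters, and the synapse is potentiated:
w_tetanus = ltp_step(w0, transmitter_released=True, postsynaptic_mv=-20.0)
```

Only the conjunction of presynaptic and postsynaptic activity changes the weight, which is why LTP is often described as a cellular implementation of Hebbian learning.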

As yet another cellular mechanism for storing information in the brain, LTP can activate so-called silent synapses (Fig. 1.26). Simultaneous electrophysiological (intracellular) stimulation of adjacent neurons in the hippocampus (see Fig. 1.26a) can activate these silent synapses. Before LTP has been induced, neuron “a” cannot activate neuron “b,” since the NMDA receptor of neuron “b” is blocked (Fig. 1.26b, left). But when the membrane potential of neuron “b” is depolarized by means of the intracellular electrode, the postsynaptic membrane is excited by a spike coming from neuron “a” (Fig. 1.26b, right). After LTP, when the Mg++ blockade of the postsynaptic membrane of neuron “b” has been relieved, AMPA receptors can be incorporated into the membrane of neuron “b,” and these receptors are by themselves able to react to presynaptic excitation, transforming a “silent” synapse into a “functioning” one. In this way, transmission between neurons is modified. Another way to modify transmission is to interfere with the release of transmitters and/or to influence the enzymes that deactivate transmitters in the synaptic cleft. Most painkillers and especially neuropharmacological agents work in this way.

Fig. 1.26
figure 26

Long-term potentiation can activate previously “silent”, that is, nonactive synapses and in this way change processing in the network without formation of new synapses. (A) Stimulating neuron (a) connected to neuron (b) via a silent synapse does not produce an effect in neuron (b) before long-term potentiation (LTP), but does so after LTP. (B) The underlying mechanism is a deblocking of the NMDA receptor plus an activation of the AMPA-receptor

Some of the rules underlying changes in neuronal circuitry are outlined in Fig. 1.27. The underlying intracellular mechanisms take several distinct forms. One is an increase in the amount of transmitter released by each presynaptic action potential. This increase relies both on a larger number of synaptic vesicles and on existing vesicles being located closer to the synaptic cleft. Moreover, additional transmitter can be synthesized in the cell body (Fig. 1.28).

Fig. 1.27
figure 27

Different types of cooperative behavior between neurons. A single, relatively weak input producing a small postsynaptic potential cannot evoke long-term potentiation (LTP). Cooperation between several relatively weak inputs, however, is able to produce LTP; the same is true for the association of a strong with a weak input. To prevent a “spreading” of LTP, an unstimulated synapse neighboring stimulated synapses will not show LTP
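The four rules of Fig. 1.27 can be captured in one assumed threshold model: LTP is induced only if the summed depolarization of all inputs crosses a threshold, and then only at those synapses that were themselves active. The weights, threshold, and step size below are illustrative choices, not measured values.

```python
LTP_THRESHOLD = 1.0   # assumed depolarization needed to induce LTP

def after_ltp(inputs, weights, step=0.2):
    """Return updated weights: LTP only if total depolarization crosses
    threshold, and then only at synapses that were themselves active."""
    total = sum(i * w for i, w in zip(inputs, weights))
    if total < LTP_THRESHOLD:
        return list(weights)                     # no LTP anywhere
    return [w + step if i else w for i, w in zip(inputs, weights)]

weights = [0.5, 0.5, 1.5, 0.5]   # weak, weak, strong, unstimulated neighbor

alone = after_ltp([1, 0, 0, 0], weights)   # weak alone: 0.5 < threshold
coop  = after_ltp([1, 1, 0, 0], weights)   # two weak cooperate: 0.5 + 0.5
assoc = after_ltp([1, 0, 1, 0], weights)   # weak + strong associate: 2.0
```

A weak input alone changes nothing; two weak inputs cooperate to cross threshold; a weak input associated with a strong one is potentiated along with it; and the unstimulated neighbor (last synapse) never changes, so LTP does not “spread.”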

Fig. 1.28
figure 28

A model for the molecular mechanisms of early and late phases of long-term potentiation. A single tetanus induces early LTP by activating NMDA receptors, triggering Ca++ influx into the postsynaptic cell and the activation of a set of second messengers. With repeated tetani, the Ca++ influx also recruits an adenylyl cyclase, which generates cAMP that activates PKA. This leads to the activation of MAP kinase, which translocates to the nucleus where it phosphorylates CREB-1. CREB-1 in turn activates transcription of targets (containing the CRE promoter) that are thought to lead to the growth of new synaptic connections. Repeated stimulation also activates translation in the dendrites of mRNA encoding PKMζ, a constitutively active isoform of PKC. This leads to a long-lasting increase in the number of AMPA receptors in the postsynaptic membrane. A retrograde signal, perhaps NO, is thought to diffuse from the postsynaptic cell to the presynaptic terminal to enhance transmitter release. (Figure and legend reproduced from [16] Kandel et al., 2013, p. 1502, with permission by © McGraw Hill Companies, 2018. All Rights Reserved)

For many years it was postulated on theoretical grounds that the presynaptic side should be influenced by whether or not its action, that is, its release of transmitter, succeeded in activating the postsynaptic side. On the basis of Hebb’s postulate that “neurons that fire together will wire together,” this knowledge on the presynaptic side would be necessary to achieve the wiring together. It is still not completely clear how this feedback is achieved. Presently, we assume that NO acts as a retrograde messenger, signaling to the presynaptic side if and when the postsynaptic side has been successfully activated.
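Hebb’s postulate can be stated compactly as a weight-update rule: the strength of a synapse grows in proportion to the correlated activity of the pre- and postsynaptic cells. The following minimal sketch illustrates this idea; the function name, the learning rate, and the toy activity values are illustrative assumptions, not part of the biological account given above.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.1):
    """Hebb's rule: strengthen w[i, j] only when presynaptic unit j
    and postsynaptic unit i are active at the same time."""
    return w + eta * np.outer(post, pre)

# Two presynaptic inputs converging on one postsynaptic cell.
w = np.zeros((1, 2))
pre = np.array([1.0, 0.0])   # only the first input fires
post = np.array([1.0])       # the postsynaptic cell fires as well
w = hebbian_update(w, pre, post)
# Only the co-active synapse is strengthened: w == [[0.1, 0.0]]
```

Note that the rule as written contains no feedback channel: the presynaptic term `pre` must somehow “know” the value of `post`, which is exactly the role attributed to a retrograde messenger such as NO.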

It should also be noted that the hippocampus is crucial for spatial memory: it contains so-called place cells that fire when an animal is located at a specific place within its habitat and remain silent when it is elsewhere.

1.4.3 Long-Term Storage of Information: Building and Destroying Synapses

It should be clear to the reader by now that information storage in the brain relies on changes in the way neurons interact, that is, in their wiring together. We have seen that the strength of coupling between neurons via chemical transmitters can be modified at synapses (we have omitted the quite rare electrical couplings between neurons), and we have just learned that “silent” synapses can be awakened. But synapses can also be built “from scratch” and can be dismantled (see Fig. 1.29). The latter is most important during early childhood: it seems that the genetic program produces quite a few synapses that turn out to be of no use.

Fig. 1.29
figure 29

(a) We assume two synaptic connections between the sensory and the motor neuron in the “control” case. Long-term habituation leads to a loss of synaptic connections; long-term sensitization (e.g., by associativity or cooperativity; see Fig. 1.27) leads to higher numbers of synaptic connections. (b) Repeated stimulation (at sufficient temporal intervals, hence no LTP) leads to decreased numbers of synaptic connections and hence smaller responses (see Fig. 1.24b), while sensitization (see a) leads to increased numbers of synaptic connections and hence larger responses. (Figure b from [2] Bailey & Chen (1983): Morphological basis of long-term habituation and sensitization in Aplysia, Science 220, p. 92, with permission from © AAAS, 2018. All Rights Reserved)

Fig. 1.30
figure 30

Top: (a) A schematic neuronal realization. a axon; d dendrite; m modifiable synapse. (b) Shorthand for a. Bottom: (a) A schematic neuronal realization of a recurrent associative neuronal network. a, c axons; d dendrite; m modifiable synapse. (b) Shorthand for a. (Figures reproduced from [21] Palm (1982): Neural Assemblies. An Alternative Approach to Artificial Intelligence, p. 55, with permission from © Springer Nature, 2018. All Rights Reserved)

1.5 Modeling

A large body of theoretical groundwork has been laid over the last decades regarding the storage of information in the brain. As indicated above, we now consider the brain as an assembly of interconnected neuronal networks that work in an associative way and are plastic, that is, can change as a result of prior experience. One early example comes from the work of Palm [21] on associative memories (see Fig. 1.30). While we do have detailed knowledge of the cellular and subcellular mechanisms underlying information storage in nervous systems, the exact coding of information in these networks is not completely known. That is to say, we understand the workings of the brain quite well at the cellular and subcellular levels. We can also be sure that these well-understood elements are organized in neuronal networks, and we know relatively well which part of the cortical machinery does what (vision/hearing/somatosensing/motor). But we still do not know the consequences for network behavior of changes in the firing pattern of a single neuron or a small group of neurons. Work on “deep learning” in multilayered networks has led to great progress in artificial intelligence (AI). There seems to be a close relationship between the functionality of these multilayered networks and the function of the brain. Unfortunately, it is not really clear what exactly changes in these networks during learning, that is, which elements change and why.
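The flavor of an associative memory of the kind studied by Palm can be conveyed in a few lines: input–output pattern pairs are stored by setting a binary “synapse” wherever pre- and postsynaptic units were ever active together, and a stored pattern is retrieved by thresholding the summed inputs. The sketch below is a generic binary (clipped-Hebbian) associative memory in this spirit; the pattern values, function names, and threshold choice are illustrative assumptions, not taken from Palm’s book.

```python
import numpy as np

def store(patterns_in, patterns_out):
    """Clipped Hebbian storage: a binary weight is set to 1 if the
    pre- and postsynaptic units were ever active in the same pair."""
    W = np.zeros((patterns_out.shape[1], patterns_in.shape[1]), dtype=int)
    for x, y in zip(patterns_in, patterns_out):
        W |= np.outer(y, x)
    return W

def recall(W, x, threshold):
    """Retrieve an output pattern by thresholding the summed input
    each output unit receives through its active synapses."""
    return (W @ x >= threshold).astype(int)

# Two illustrative input/output pattern pairs.
x1 = np.array([1, 1, 0, 0]); y1 = np.array([1, 0, 1])
x2 = np.array([0, 0, 1, 1]); y2 = np.array([0, 1, 1])
W = store(np.array([x1, x2]), np.array([y1, y2]))

# Presenting a stored input with threshold = number of active input
# units recovers the associated output pattern.
out1 = recall(W, x1, threshold=x1.sum())  # equals y1
out2 = recall(W, x2, threshold=x2.sum())  # equals y2
```

Even this toy version shows the point made above: we can specify the storage rule precisely, yet the “code” that a given weight matrix implements is only readable with knowledge of the thresholding scheme and the pattern statistics.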

1.6 A Short Conclusion

We know much about the function and limitations of our brain in terms of information intake, processing, and storage on both the behavioral and the (sub)cellular levels. But we still lack a detailed concept of how exactly these neuronal networks produce behavior, that is, how sensory input is analyzed, categorized, and evaluated to generate appropriate actions. Hence, brain research remains a wide-open field.