
1 Memory of the Future

The evolution of greater complexity in primate brains has required elaborate representations of future events. This includes encoding of future actions that the organism intends to perform, both the actions themselves and the times they are to be performed. It includes encoding of the anticipated consequences of those actions, as well as anticipations of other future stimuli the organism expects to experience. Those consequences and future stimuli could include rewards or punishments, or they could be emotionally neutral. Also, there can be either certainty or uncertainty about which future events will occur, and if there is uncertainty, a rough estimate of the probability of occurrence may or may not be available.

Hence the anticipatory system in the brain has to include interactions between representations of intentions, projected time, expectations, emotions, and probabilities. We do not yet have a systematic neural network theory that links all these elements and the known functions of brain regions. Yet a combination of animal single-cell and human brain imaging studies enables us to develop partial hypotheses about each part of the system. Clearly the human development of language gives our species a much more elaborate schema for anticipation than is available to non-human animals. Yet some of the key elements of anticipation seem to be present at least in monkeys and apes, as shown by the development of the prefrontal part of their association cortices and the “expectancy waves” of prefrontal origin that have been measured.

One of the first seminal articles dealing with anticipatory brain representations was written in 1985 by the neuroscientist David Ingvar, who coined for these processes the widely used term memory of the future [1]. What Ingvar meant was “the action programs or plans for future behaviour and cognition. As these programs can be retained and recalled, they might be termed ‘memories of the future’.” (p. 127) Later investigators (e.g., [2–4]) have given such stored memories of planned future actions the more prosaic term prospective memory.

Ingvar located these action programs in the prefrontal cortex, based on the work of Joaquin Fuster and several other investigators, much of it summarized by Fuster [5–7]. This work included studies of deficits shown by prefrontally damaged patients and monkeys on delay tasks that required linking events across time. It also included observations of a slow negative EEG wave that appears when a human or monkey has experienced one stimulus and is waiting for a second stimulus; this expectancy wave, called the contingent negative variation (CNV), was first observed in humans [8] and later in monkeys [9].

Fuster [10] listed three overarching cognitive functions of prefrontal cortex that are important for linking events across time. These functions are “short-term memory, preparatory set, and control of interference” (p. 169). At the time he wrote, there had been only a few studies of blood flow in the brain during cognitive tasks, and the fMRI methods commonplace today had just barely been developed, so it was hard to link each task definitively with a particular frontal lobe region. Yet Fuster was able to assign short-term memory largely to the dorsolateral prefrontal cortex (DLPFC; Brodmann areas 9 and 46 of the cortex) and control of interference to the orbital and medial prefrontal cortex (OMPFC; areas 11 and 12), and these assignments have stood the test of time. More recent imaging studies (e.g., [4, 11]) have suggested a role in preparatory set for the furthest forward part of the frontal lobes, the rostral prefrontal cortex (rPFC; area 10), also known as the anterior prefrontal cortex or the frontal pole. Fuster [10] saw these three functions as interrelated parts of the overarching function of “mediating cross-temporal contingencies.” Since that article was written, there has been considerable work on the specific functions of prefrontal subregions, but less work on integrating all these functions into an overall theory of prefrontal involvement in anticipation. In part this reflects that fMRI tends to say more about separate regions than it does about their interactions. In part it also reflects the reward structure of the scientific research community, whereby short-term measurable results yield both grants and publications at a faster rate than do broad theoretical studies. This paper will review some of the literature on all these segments of anticipation and make a few tentative suggestions about their interrelationships.

2 Hierarchies in the Prefrontal Cortex

Fuster [10] proposed that processing of intended motor actions in the cerebral cortex is roughly parallel to the processing of sensory stimuli but reversed in time, with high-order association areas that form the intention to act projecting to motor areas that become progressively more specific about which actions to perform. Again, recent studies have largely supported this theoretical proposal (see [12] for a partial review).

A considerable number of recent fMRI studies (e.g., [11, 13–15]) have shown that different parts of the prefrontal cortex encode different levels of abstraction, both of stimuli and of intended behaviors. Roughly, the hierarchy runs from less to more abstract as one moves further forward in the lateral portions of the PFC.

Clearly the prefrontal machinery that has evolved in humans, and to a lesser degree in other primates, enables far more abstraction, and thereby anticipation, than is available to other animals. Christoff and Gabrieli [11] note that the rostral (particularly rostrolateral) PFC (RLPFC) is the final terminus of this abstraction, so there are limits to the amount of abstraction that can occur even in humans. Yet we have not yet discovered what those limits are, and fMRI studies have uncovered neural substrates for a great many of our complex mental capabilities. For example, Christoff and Gabrieli found evidence that the DLPFC deals with working memories of external events while the RLPFC deals with working memories of internal events, such as intentions. Schubotz [16] notes that the rPFC connects to other prefrontal areas involved in monitoring actions [17] and in retrieval from both semantic and episodic memory [18]. She concludes that the rPFC is of major importance in prospective memory due to its ability to link together two or more different cognitive processes operating simultaneously (see also [19]).

A further advance was the fMRI study of Momennejad and Haynes [4], which uncovered evidence that the rostral PFC encoding of intentions could be decomposed into the What and When of future actions. These researchers gave their subjects a task that involved classifying a displayed number by color (red or green), parity (even or odd), and value (larger or smaller than 5). The subjects were told to perform the color classification but also told that after a certain time (either 15, 20, or 25 s) they had to switch to another classification task (either parity or value). Momennejad and Haynes used a general linear model to fit activation of different brain areas to the possible time delays (which the subjects had to track internally, as they were not given feedback about when that time had passed) and to the possible second tasks. This model showed that different parts of rPFC stored “what” and “when” during the color task, and that the pattern of activations changed during the second task that followed.
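For readers less familiar with this kind of analysis, the sketch below shows in schematic form how “what” and “when” regressors can enter a general linear model of a single voxel’s activation. The trial structure, regressor coding, and simulated response are illustrative assumptions for exposition only, not the authors’ actual analysis pipeline.

```python
# Illustrative GLM sketch: model a voxel's activation with "when" (delay) and
# "what" (identity of the second task) regressors. All values are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 120

delays = rng.choice([15, 20, 25], size=n_trials)      # "when": delay in seconds
second_task = rng.choice([0, 1], size=n_trials)       # "what": 0 = parity, 1 = value

# Design matrix: intercept, mean-centered delay, task indicator
X = np.column_stack([
    np.ones(n_trials),
    delays - delays.mean(),
    second_task,
])

# Simulated voxel response that carries "when" but not "what" information
y = 0.05 * (delays - delays.mean()) + rng.normal(0.0, 1.0, n_trials)

# Ordinary least-squares estimate of the regression weights
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated betas (intercept, when, what):", np.round(betas, 3))
```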

Another brain region that has long been associated with memory is the hippocampus. Buckner [20] reviews evidence that in addition to consolidating new memories of past events, the hippocampus and its connections to prefrontal and other areas of cortex are active when people envision future events. The hippocampus seems to replay event sequences, including novel combinations of events, and to facilitate predictions about future events. Moreover, amnesic patients with hippocampal damage show an impoverished ability to imagine the future.

3 Anticipation and Emotion

Anticipation often includes an expectation of a potential reward or punishment, combined with the degree of certainty that this reward or punishment will ensue. To date there has been a considerable literature on the anticipation of reward or punishment, but this literature has been largely separate from the literature on anticipation involved in cognitive tasks. This reflects in part the continuing culturally based tendency to regard the emotional and cognitive realms as separate, or even mutually antagonistic, in spite of overwhelming evidence that emotion and cognition are deeply interlinked in the brain. Specifically, Pessoa [21] notes that the lateral prefrontal cortex, sometimes considered “cognitive” by contrast with the “emotional” orbital PFC, is actually an important area for cognitive-emotional interactions, if only because emotional stimuli have a selective advantage in the competition with other stimuli for working memory storage.

The neurotransmitter dopamine is known to have a strong connection with reward, not so much in the actual feelings related to receiving or expecting a reward as in the energizing of behaviors and actions that lead to reward [22]. A series of seminal single-cell studies on monkeys by Wolfram Schultz and his colleagues (e.g., [23, 24]) showed that dopamine also plays a major role in the anticipation of reward. Schultz and his colleagues found that in the course of a typical conditioning experiment in which a previously neutral stimulus is paired with reward, dopamine neurons in the midbrain initially fire in a burst when the reward occurs. After a period of training, however, these same dopamine cells burst in response to the conditioned stimulus and not to the reward itself. Also, in extinction trials when the conditioned stimulus is presented and not followed by reward, these dopamine neurons show a dip in firing when the expected reward fails to arrive.
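The firing pattern just described is commonly interpreted in computational terms as a temporal-difference (TD) reward prediction error. The following minimal sketch uses illustrative parameters rather than values fit to the Schultz data, and assumes the reward is unpredictable before the conditioned stimulus (CS); it reproduces the qualitative pattern of a large error at reward delivery early in training, a shift of the error to the CS after training, and a negative dip when an expected reward is omitted.

```python
# Minimal TD(0) sketch of the dopamine prediction-error pattern (illustrative).
import numpy as np

d = 10                  # reward arrives d time steps after CS onset
alpha, gamma = 0.3, 1.0
V = np.zeros(d + 1)     # V[k]: learned value of being k steps past CS onset

def run_trial(reward_delivered=True, learn=True):
    """Run one trial; return the TD errors at CS onset and at the usual reward time."""
    # Error at CS onset: transition from the pre-CS state (value fixed at 0,
    # since reward is assumed unpredictable before the cue) into the CS state.
    delta_cs = gamma * V[0] - 0.0
    deltas = np.zeros(d + 1)
    for k in range(d + 1):
        r = 1.0 if (reward_delivered and k == d) else 0.0
        v_next = V[k + 1] if k < d else 0.0      # the trial ends after the reward time
        deltas[k] = r + gamma * v_next - V[k]
        if learn:
            V[k] += alpha * deltas[k]
    return delta_cs, deltas[d]

early_cs, early_rew = run_trial()                 # first conditioning trial
for _ in range(300):                              # training
    run_trial()
trained_cs, trained_rew = run_trial(learn=False)
omit_cs, omit_rew = run_trial(reward_delivered=False, learn=False)

print(f"early:    error at CS {early_cs:+.2f}, at reward {early_rew:+.2f}")
print(f"trained:  error at CS {trained_cs:+.2f}, at reward {trained_rew:+.2f}")
print(f"omission: error at CS {omit_cs:+.2f}, at reward {omit_rew:+.2f}")
```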

The subcortical region that particularly receives reward-related dopamine inputs is the ventral striatum of the basal ganglia, in particular the nucleus accumbens. This region figured prominently in a combined EEG and fMRI study of reward anticipation by Plichta et al. [25]. Plichta and his colleagues found a network of interrelationships between the ventral striatum, thalamus, and supplementary motor area of the cortex in humans anticipating a monetary reward. These interactions included the contingent negative variation (CNV) EEG wave, which could be predicted by a combination of the thalamic fMRI response and top-down regulation from the supplementary motor area to the two subcortical areas.

Also, many studies show that the DLPFC (area 46), sometimes carelessly regarded as a more “cognitive” than “emotional” area, is key to the anticipation of rewards or punishments. Leon and Shadlen [26] gave monkeys a working memory task in which they received a water reward for eye movements toward the location of a previously lit target stimulus, with the color of the fixation point indicating the size of the reward they would receive. A significant number of DLPFC neurons responded more strongly to a larger reward, as long as the target was in the neuron’s visual receptive field. Watanabe and Sakagami [27] found that neurons in the lateral prefrontal cortex (the specific subregion was not reported) responded to both the cognitive and the motivational context of stimuli.

One possible motivational role for the DLPFC stems from its influence on dopaminergic nuclei in the midbrain [28]. But more importantly, the DLPFC as a working memory region needs to integrate motivational and emotional information with other forms of information, and it performs that function in large part through inputs from the anterior cingulate cortex (ACC; Brodmann area 32), another key area for integrating cognitive and emotional information [29, 30]. Medalla and Barbas [29] also found strong excitatory connections from the ACC to the rPFC (area 10), which could play a role in deciding between the demands of multiple tasks.

This integrated system enables emotional information, and information about the anticipated reward or punishment value of different classes of stimuli or actions, to be represented at each of the levels of abstraction indicated by different parts of the lateral PFC. Dias, Robbins, and Roberts [31] previously found that OMPFC and DLPFC both make affective discriminations but DLPFC does so at a higher level of abstraction. The distinction of “affective and attentional shifts” from the title of their article is misleading because their attentional shifts are about which attribute of a stimulus is relevant for learned reward expectation, and therefore are also affectively significant. Based on the work we have cited by Christoff and her colleagues and other work reviewed in [16], RLPFC should be expected to make affectively relevant discriminations at a still higher level of abstraction.

4 Certainty and Uncertainty in the Future

In addition to their positive or negative affective value, expectations about significant future events should also be influenced by the degree of certainty or confidence that the events will in fact occur. Certainty or its absence has a strong influence on emotions. For example, the classic studies of Ellsberg [32] showed that people tend to avoid situations where probabilities are unknown if there are available alternatives with known numerical risks.

There have been many recent behavioral studies of the effects on decision making of inducing emotions that engender feelings of certainty or uncertainty; some of that literature is reviewed in [33]. Tiedens and Linton [34] induced their participants to feel one of the four emotions of contentment, anger, worry, and surprise. They found that participants induced to feel emotions of certainty (contentment or anger), but not those induced to feel emotions of uncertainty (surprise or worry), tended toward heuristic processing that led them to examine evidence less carefully. Specifically, they were more likely to agree with an argument about education if they believed it had been made by a professor rather than by a student, and were less influenced than other participants by the content of the argument itself. There was no significant difference between the positive and negative emotions at each certainty level.

Yet with some more challenging cognitive tasks, certainty emotions can have the reverse effect, making people more confident in their own cognitive processes and thereby more deliberate. Inbar and Gilovich [35] gave participants general knowledge questions with quantitative answers (e.g., the boiling point of water at the top of Mt. Everest), which they were expected to guess by “anchoring” on values they were likely to know already (e.g., the boiling point of water at sea level) and then adjusting upward or downward as appropriate. The amount they adjusted from these anchor values was taken as an indication of how deeply they engaged their cognitive processes. When participants generated their own anchor values, they adjusted more from those anchors if they had seen film clips inducing anger or disgust (certainty emotions) than if they had seen clips inducing sadness or fear (uncertainty emotions). Inbar and Gilovich’s explanation was that “the appraisals of certainty associated with some emotions can lead individuals to feel confident and in control, and thus to engage in more energetic cognitive processing” (p. 567). The same effect did not occur if the anchor was experimenter-generated.

If the task instructions cue cognitive passivity, certainty emotions (positive or negative) can lead participants to feel confident in answers they have already arrived at, engendering heuristic processing. But if the task instructions cue a high level of cognitive activity, the same certainty emotions can lead participants to feel confident in their own mental acuity, engendering careful processing. Another example of certainty emotions leading to more careful processing arose in a study of ratio bias from my laboratory [36]. In the ratio bias task, participants are asked to decide which of two small probabilities is larger; with incongruent pairs, in which the larger numerator and denominator correspond to the smaller probability (e.g., 9/100 versus 1/10), many choose the larger numbers despite the worse odds (e.g., [37]). Liu [36] induced in different groups of participants, through cuing recall of emotion-appropriate life experiences, the four emotions of happiness (positive and certain), hope (positive and uncertain), disgust (negative and certain), and fear (negative and uncertain), and then gave them problems requiring them to judge which of two probabilities expressed as ratios is larger. She found that certainty-inducing emotions, especially happiness, led to both greater confidence and greater accuracy on this numerical judgment task.
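As a small illustration of the task structure (with hypothetical stimulus values, not those used in [36] or [37]), the snippet below classifies a ratio pair as congruent or incongruent and reports the normatively correct choice, assuming one member of the pair has both the larger numerator and the larger denominator.

```python
# Classify ratio-bias stimuli: a pair is "incongruent" when the ratio with the
# larger numerator and denominator is actually the smaller probability.
from fractions import Fraction

def classify_pair(n1, d1, n2, d2):
    """Assumes one ratio has both the larger numerator and larger denominator."""
    p1, p2 = Fraction(n1, d1), Fraction(n2, d2)
    bigger_numbers_first = n1 > n2 and d1 > d2
    bigger_prob_first = p1 > p2
    kind = "congruent" if bigger_numbers_first == bigger_prob_first else "incongruent"
    correct = f"{n1}/{d1}" if bigger_prob_first else f"{n2}/{d2}"
    return kind, correct

print(classify_pair(9, 100, 1, 10))    # ('incongruent', '1/10'): bigger numbers, worse odds
print(classify_pair(20, 100, 1, 10))   # ('congruent', '20/100'): bigger numbers, better odds
```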

There have been a few studies of brain correlates of certain and uncertain emotions; so far they have not been conclusive and have focused predominantly on the negative effects of uncertainty. Sarinopoulos et al. [38] found that fMRI responses in two brain regions sensitive to aversive pictures, namely the amygdala and insula, were larger after visual cues that were uncertain (i.e., on different trials those cues preceded either an aversive or a neutral picture) than after cues that always preceded an aversive picture. The magnitude of the difference in responses of those regions to these two situations correlated negatively with uncertainty-related activity in the ACC, suggesting that the ACC plays some kind of anticipatory or preparatory role in the perception of uncertainty. This observation is consistent with previous theories linking dorsal ACC activation to situations that are high in error likelihood [39] or simply in uncertainty [40]. Stern, Gonzalez, Welsh, and Taylor [41] gave their participants a task in which they were told the distribution of red and blue cards in two different decks and then, upon being presented with a sequence of cards drawn from one of the two decks, had to decide which deck the cards came from and state how confident they were in their answers. These researchers distinguished objective uncertainty (based on the stated composition of the decks, and largest when the ratio of red to blue was closest to 1:1) from subjective uncertainty, which varied between individuals. They found that subjective uncertainty correlated with activity in the OMPFC, presumably reflecting emotional arousal.
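One simple way to make the objective/subjective distinction concrete (an illustration only, not the formalization used in [41]) is to index a deck’s objective uncertainty by the entropy of its stated red/blue composition, which peaks at a 1:1 ratio, and to track the accumulating evidence about which deck is in play as a Bayesian posterior:

```python
# Entropy of a deck's composition as an index of objective uncertainty, and a
# Bayesian posterior over decks given observed cards. Deck compositions are
# hypothetical examples.
import math

def entropy(p_red):
    """Entropy (bits) of a deck's red/blue distribution; maximal at p_red = 0.5."""
    if p_red in (0.0, 1.0):
        return 0.0
    return -(p_red * math.log2(p_red) + (1 - p_red) * math.log2(1 - p_red))

def posterior_deck_a(p_red_a, p_red_b, cards, prior_a=0.5):
    """P(deck A | observed cards), with cards given as a string like 'RRB'."""
    like_a = like_b = 1.0
    for c in cards:
        like_a *= p_red_a if c == "R" else (1 - p_red_a)
        like_b *= p_red_b if c == "R" else (1 - p_red_b)
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)

print("objective uncertainty of a 60/40 deck:", round(entropy(0.6), 3), "bits")
print("objective uncertainty of a 90/10 deck:", round(entropy(0.9), 3), "bits")
print("P(deck A | 'RRB') when A is 60% red and B is 40% red:",
      round(posterior_deck_a(0.6, 0.4, "RRB"), 3))
```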

Yet uncertainty has an attractive as well as an aversive aspect. Many researchers have noted that decision making in complex environments involves a trade-off between exploitation and exploration; a gambler, for example,

balances the desire to select what seems, on the basis of accumulated experience, the richest option, against the desire to choose a less familiar option that might turn out more advantageous (and thereby provide information for improving future decisions) [42, p. 876].

Daw et al. [42] conducted an fMRI study of humans performing a gambling task and defined as exploitative any decision based on what previous experience had indicated would provide the best payoff, calling all other decisions exploratory. The area that was more active during exploratory than exploitative decisions was the rostral prefrontal cortex (rPFC). Given the role we have discussed for the rPFC in high-level abstraction and prospective memory, this brain region could also be a site for integrating information about the potential benefits of options with unknown consequences.
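One standard way to operationalize this distinction (in the spirit of [42], though not their exact computational model) is a softmax choice rule over running payoff estimates, with a choice counted as exploratory whenever it is not the option with the highest current estimate:

```python
# Softmax bandit sketch: label choices as exploratory vs. exploitative.
# Payoff means, learning rate, and temperature are hypothetical values.
import numpy as np

rng = np.random.default_rng(1)
true_payoffs = np.array([1.0, 0.6, 0.3, 0.1])   # mean payoffs of four gambles
Q = np.zeros(4)                                  # running payoff estimates
alpha, temperature = 0.1, 0.25
n_exploratory = 0

for trial in range(500):
    probs = np.exp(Q / temperature)
    probs /= probs.sum()
    choice = rng.choice(4, p=probs)
    if choice != np.argmax(Q):                   # not the current best estimate
        n_exploratory += 1
    reward = true_payoffs[choice] + rng.normal(0.0, 0.1)
    Q[choice] += alpha * (reward - Q[choice])    # incremental estimate update

print("exploratory choices:", n_exploratory, "out of 500")
print("final payoff estimates:", np.round(Q, 2))
```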

5 General Discussion

The human ability to envision the future in detail is clearly facilitated by the development of our capacity for language, as noted by Ingvar [1]. The neural network modeler Leonid Perlovsky has described mental development throughout one’s lifetime as involving parallel streams of cognition and language that have separate and merging dynamics (see, e.g., [43, 44]).

Yet there are considerable data indicating that non-human primates, at least, still anticipate upcoming events despite lacking our language capacity. These data include the anticipatory EEG waves [9] and the dopamine cell responses to expected rewards [23, 24].

Principles are emerging from sophisticated neural network models that promise to unify all these results, in both monkeys and humans [45–47]. Some of the principles are stated in [47] as follows:

In summary, perceptual/cognitive processes often use excitatory matching and match-based learning to create stable predictive representations of objects and events in the world. Complementary spatial/motor processes often use inhibitory matching and mismatch-based learning to continually update spatial maps and sensory-motor gains. Together these complementary predictive and learning mechanisms create a self-stabilizing perceptual/cognitive front end for activating the more labile spatial/motor processes that control our changing bodies as they act upon objects in the world (p. 226).

Various models of interacting cognition and emotion [45, 46, 48] have been successful at reproducing data on emotional valuation of these sensory and motor processes in neural networks that include analogs of several cortical and subcortical brain regions. Future extensions of some of these networks should be able to incorporate the data on sensory-motor, affective, and probabilistic anticipation described in this chapter.