10.1 A Primer on Biopsychology and Its Methods

As a discipline, biopsychology aims to explain experience and behavior based on how the brain and the rest of the central nervous system work. Biopsychological approaches to motivation, then, seek to explain motivational phenomena based on an understanding of specific functions of the brain. Most research in this area uses mammalian animal models, such as rats, mice, and sometimes primates, on the assumption that the way motivational processes and functions are carried out by the brain is highly similar across related species and that findings obtained in other mammals will therefore also hold for humans.

When studying motivational processes, biopsychologists often use lesioning (i.e., selective damaging) techniques to explore the contributions of specific brain areas or endocrine glands to motivational behavior, reasoning that if destroying a specific brain area or gland alters a motivational function, then the lesioned substrate must be involved in that function. Other techniques often utilized in this type of research include direct recordings from neuron assemblies in the behaving animal to determine, for instance, which brain cells fire in response to a reward, and brain dialysis, which allows the researcher to examine how much of a neurotransmitter is released in a behaving animal in response to motivationally relevant stimuli. Finally, biopsychologists frequently use pharmacological techniques, for instance, to increase synaptic activity associated with a specific neurotransmitter by administering a transmitter agonist (which mimics the action of the neurotransmitter) or to decrease synaptic activity by administering a transmitter antagonist (which blocks neurotransmitter activity). This is often done locally in the brain, allowing the researcher to determine the contribution of specific neurotransmitter systems to a function subserved by a circumscribed brain area. These methods are often combined with one another, and they are almost always used in combination with behavioral or learning paradigms designed to reveal the contribution of a brain area, neurotransmitter, or hormone to specific aspects of motivation (e.g., instrumental learning, responding to reward).

One major advantage of the biopsychological approach to motivation is that it can go beyond the circular explanations of motivation that often arise when only behavioral measures are used to infer the causal effects of motivation. For instance, the observation of aggressive behavior (the explanandum) might be explained by the presumed existence of an underlying aggression drive (the explanans), which is in turn inferred from the observation of aggressive behavior. As long as there is no independent means of assessing the presumed aggression drive, the explanation for aggressive behavior will remain circular (e.g., “Why is he shouting at Mary?” “Because he has a strong aggressive disposition.” “How do you know that?” “Because he’s shouting at Mary.”). In contrast to purely behavioral accounts of motivation, biopsychologists would argue that the activity in certain brain regions or the release of certain transmitters and hormones, in interaction with environmental cues, precedes or causes aggressive behavior, thus separating the explanandum from the explanans. One very successful account of aggressive behavior, Wingfield’s challenge hypothesis (Wingfield, Hegner, Dufty, & Ball, 1990), holds that increased levels of testosterone predispose animals to assert their dominance but only if their dominance is challenged by competitors and in certain situational contexts, such as breeding seasons. Clearly, the explanans here (testosterone) is not only more specific and concrete than a postulated “aggression drive,” it is also distinct from the explanandum (aggressive or dominant behavior), and its causal relationship to the explanandum can be studied empirically by, for instance, removing the animal’s gonads, administering testosterone, or a combination thereof.

What animal models of motivated behavior cannot reveal, however, is the relationship between the brain and the subjective states that accompany and characterize some aspects of motivation. Animal research is therefore increasingly complemented by studies on humans that allow researchers to relate measures of brain activity or physiological changes to both behavior and subjective states. With the advent of sophisticated brain-imaging methods, such as functional magnetic resonance imaging (fMRI), which provide relatively high temporal and spatial resolution in assessments of the active human brain, biopsychological research on motivational and emotional processes has both experienced an unprecedented growth spurt and undergone a remarkable transformation, resulting in the new and burgeoning field of affective neuroscience (Panksepp, 1998).

In the present chapter, we will review the current status of biopsychological research, focusing on the key brain systems and processes that have been found to mediate motivational phenomena in studies on animals and humans. Our aim is to provide the reader with an overview of the key substrates of motivation and emotion and to highlight some important recent findings and developments in the field. For more comprehensive and detailed accounts of the biopsychology of motivation, we refer the reader to the excellent books by LeDoux (2002), Panksepp and Biven (2012), Rolls (2005a), and Toates (1986).

10.2 Hallmarks of Motivation

To make sense of biopsychology’s contributions to the understanding of motivation, we feel it is important to first provide an overview of the core phenomena and processes of motivation on which biopsychologists tend to focus. This will equip us with the proper conceptual framework to understand biopsychological contributions to the science of motivation. We will therefore outline what biopsychologists consider to be the hallmarks of motivation in this section, before moving on to describe the key brain structures and processes involved in motivation in Sect. 3.

10.2.1 Motivation’s Affective Core

One common thread in the rest of this chapter is that motivation entails emotions and affective responses to stimuli, and this is actually the backbone on which virtually all biopsychological research on motivation is built. Motivation is, at its very core, about affect. We do some things because they feel good; we shun others because they would make us feel bad; and we are indifferent about many things, because we have neither a positive nor a negative affective response to them. But why is affect so central for motivated regulation of behavior? Physiologist Michel Cabanac (1971, p. 1104) gave the following answer:

PLEASANT = USEFUL

Things that we experience as pleasant were the ones that aided our survival in our evolutionary past and frequently continue to do so. The flip side is that things or events we experience as unpleasant are detrimental to survival, or were at some point during our evolutionary past. Thus, according to Cabanac (1971, 1992, 2014), pleasure/displeasure codes for the survival value of the stimuli and events that can happen to an organism and provides a common currency to weigh the many different options for action against each other and come up with a decision about what to do next. Imagine yourself on a hot day. Should you have an ice cream? Jump into a cold pool? Or sit in the sun? Rake the leaves from the lawn? If you take only the anticipated (immediate) pleasure/displeasure of each option into account, you will go with the one that maximizes your pleasure (but see also Sect. 3.4 for how long-term goals can override the impulse to act based on short-term pleasure and displeasure alone). So regardless of how different your options are and what kinds of different stimuli, contexts, and events they would make you encounter, (dis)pleasure brings it all into one shared currency according to which an action’s potential value can be judged and ranked.

Note, however, that hedonic value is not a fixed property of things but depends on the current needs of the individual. Think about the previously described options for action from the perspective of a day with freezing temperatures and a corresponding greater need for the body to generate warmth. Suddenly options that promised pleasure on a hot summer day do not appear attractive anymore (e.g., jumping into a cold pool), because they would further decrease your body temperature, which would be bad for survival. In contrast, actions that would have been unpleasant in the summer show an increase in predicted hedonic value (e.g., raking the leaves), because they would help you get warm and thus increase your chance of survival.
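
To make this idea concrete, here is a minimal sketch of the common-currency decision rule, in which the options, their anticipated hedonic values, and the way a hot versus a freezing day shifts those values are all hypothetical numbers chosen purely for illustration:

```python
# Minimal sketch of Cabanac's "common currency" idea: each option's anticipated
# pleasure/displeasure is a single number, modulated by the current need state,
# and the option with the highest value is chosen. All values are hypothetical.

def choose_action(anticipated_hedonic_value):
    """Return the option whose anticipated (dis)pleasure is highest."""
    return max(anticipated_hedonic_value, key=anticipated_hedonic_value.get)

# Hypothetical hedonic predictions on a hot day vs. a freezing day:
hot_day = {"eat ice cream": +3, "jump into cold pool": +4,
           "sit in the sun": -1, "rake leaves": -2}
cold_day = {"eat ice cream": -1, "jump into cold pool": -5,
            "sit in the sun": +2, "rake leaves": +3}

print(choose_action(hot_day))   # -> "jump into cold pool"
print(choose_action(cold_day))  # -> "rake leaves"
```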

Study

The Role of Pleasure in Motivation

In one of his many studies of the role of pleasure in motivation and decision-making, Cabanac (2014) had two hedonically relevant factors – playing a pleasant computer game and sitting in an unpleasantly cold room – “compete” against each other. Research participants were seated in a climate-controlled chamber in which they were allowed to play a computer game. As time progressed, they repeatedly rated the pleasantness of this activity on a scale. Meanwhile, the temperature in the chamber was continually lowered, and participants also repeatedly rated the unpleasantness of the ambient temperature on another scale. Figure 10.1 shows the ratings of two participants from this study (note that the originally negative unpleasantness scale ratings were flipped such that higher numerical values on the combined evaluation scale represent both higher ratings on unpleasantness and on pleasantness). In both cases, participants left the chamber shortly after the unpleasantness of the cold ambient temperature exceeded, in absolute value, the pleasantness of playing the computer game. The same effect was found for all participants tested. Here, too, pleasure served as the common currency for weighing two very different things – playing a computer game and sitting in a cold chamber – against each other and deciding what to do next.

Fig. 10.1

Plots of two research participants, continuously rating the pleasantness of playing a computer game and the unpleasantness of doing this in a room whose temperature keeps going down (for the sake of comparison, both ratings are scaled in the same direction). The arrow marks the time when participants decided to stop playing and leave the room. Across the entire sample, participants quit approximately 5 min after the displeasure associated with the dropping temperature exceeded the pleasure associated with playing the computer game (Adapted with permission from Cabanac (2014))

It is important to keep in mind that pleasure can be experienced both as an evaluation of a currently encountered stimulus/situation and as an expectation of a future situational outcome, based on remembered affective responses to similar situations in the past. For instance, your prediction of how tasty your next ice cream will be is based on your remembered pleasure in response to past ice creams eaten. This prediction is what motivates you to buy the next ice cream, and the higher the predicted pleasure, the stronger the motivation. But of course, you may find out that your prediction was flawed and that the next ice cream is dramatically more unpleasant (or pleasant) than predicted. Such an outcome should have consequences for your future behavior. And that is a key reason why motivation has different phases, an issue to which we turn next.

10.2.2 Motivation Consists of Two Distinct Phases

Biopsychological studies strongly support the view that motivation consists of relatively distinct segments or phases that serve different functions. Most theorists agree that the motivational process features at least two consecutive elements: a motivation phase, during which the organism works to attain a reward or to avoid a punishment, and a consummation phase, during which the outcome is evaluated – i.e., during which the organism consummates the act and determines the actual pleasantness of the reward or assesses whether a danger or punishment has been successfully avoided (e.g., Berridge, 1996; Craig, 1918). Thus, an animal may become motivated to eat either because it sees a tasty morsel or because its hunger indicates a state of nutrient depletion (or a combination of the two) and start working toward the goal of obtaining food. The motivation phase can be as simple as taking a few steps toward a food trough and starting to eat or as complex as hunting down an elusive prey in the jungle. Note also that the motivation phase is characterized by observable behaviors (instrumental activity to attain a reward or avoid a punishment) and an affective-motivational state, which in humans can be characterized subjectively by such terms as craving, longing, or being attracted to (or repelled by) the goal object but in animals can only be inferred from behavior. Berridge (1996) has labeled this phase of the motivational sequence wanting and differentiates it from liking, that is, the evaluation of the hedonic qualities of the reward (or punishment) accompanying the consummation of an incentive (see Fig. 10.2). From the perspective of regulating adaptive behavior, it is absolutely necessary to have an evaluation phase that is separate from the motivation phase. This ensures that individuals will calibrate their future motivated behavior to their most recent experience with the hedonic value (usefulness; Cabanac, 1971) of the goal state or object. If the goal is less pleasant – and hence less useful – than predicted, future motivational responses to predictive cues are reduced. If it is more pleasant – and hence more useful – than predicted, future motivational responses will be enhanced. This fundamental point was already made some time ago by Rescorla and Wagner (1972) in their theoretical analysis of Pavlovian conditioning, that is, the process by which cues that reliably predict rewards and punishments become imbued with affective-motivational properties.
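
The core of the Rescorla-Wagner account is that the predictive strength of a cue is adjusted in proportion to the prediction error, that is, the difference between the outcome actually obtained and the outcome predicted. The following is a minimal sketch of such an update rule; the learning rate and outcome values are illustrative assumptions, not the parameters of the original model:

```python
# Minimal sketch of a Rescorla-Wagner-style update: the associative (predictive)
# strength V of a cue is adjusted in proportion to the prediction error, i.e., the
# difference between the obtained outcome value and the predicted value.
# Parameter values are purely illustrative.

def update_association(V, outcome_value, learning_rate=0.2):
    prediction_error = outcome_value - V   # better than expected: > 0; worse: < 0
    return V + learning_rate * prediction_error

V = 0.0                                    # cue initially predicts nothing
for trial in range(10):
    V = update_association(V, outcome_value=1.0)   # reward better than predicted
print(round(V, 2))                         # V has grown toward 1.0 -> stronger wanting

V = update_association(V, outcome_value=0.0)       # reward omitted: worse than predicted
print(round(V, 2))                         # V decreases -> weaker future motivational response
```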

Fig. 10.2

Overview of the two main phases of the motivational process, the functions and anatomical substrates associated with them and the functional connections between them (see Sect. 3 “Brain Structures Generally Involved in Motivation” for further details)

While most people intuitively assume that you want what you like and vice versa, research indicates that the two phases of motivation are in fact dissociable. For instance, drug addicts feel compelled to take “their” drug, even though there is no longer any pleasure in taking it (wanting without liking; cf. Robinson & Berridge, 2000). Conversely, people subjectively and objectively respond to tasty food with signs of liking, irrespective of whether they are hungry or have just eaten a big meal – thus, liking can remain constant despite strong differences in wanting (Epstein, Truesdale, Wojcik, Paluch, & Raynor, 2003). As we will see later, the two phases of motivation are also associated with distinct brain systems.

10.2.3 Motivated Behavior Comes in Two Basic Flavors: Approach and Avoidance Motivation

A key characteristic of motivated behavior is that it can be aimed either at attaining a pleasurable incentive (reward) or at avoiding an aversive disincentive (punishment). This hallmark of motivation has assumed a central role in the conceptual frameworks proposed by major motivation theorists (e.g., Atkinson, 1957; Carver & Scheier, 1998; Craig, 1918; Gray, 1971; Mowrer, 1960; Schneirla, 1959) and is today an important and active area of research in biopsychology and the affective neurosciences. While an organism in the approach motivation mode works to decrease the distance from a desired goal object (e.g., prey, a food pellet, or a good exam grade) until that object is attained, an organism in the avoidance motivation mode seeks to increase the distance from an aversive goal object or state (e.g., a predator, starvation, or a bad exam grade). Avoidance of a disincentive may take two fundamentally different forms: active avoidance or passive avoidance.

Active avoidance characterizes the behavioral strategy of actively executing behavior that is instrumental in distancing the individual from the disincentive. This behavior can be as simple as fleeing from a dangerous object or as complex as spending a great deal of time studying for a biochemistry exam in order to avoid a bad grade. Some theorists have posited that avoidance motivation is a particularly inefficient form of motivation, because the individual can never be quite sure how far is far enough (Carver & Scheier, 1998). Approach motivation terminates upon contact with the goal object or state, but when does avoidance motivation stop? When a predator is 100 yards away? When it is out of sight? But if the predator is out of sight, how can the organism be sure that it is far enough away? In other words, it could be argued that avoidance motivation is problematic: first, because it requires the presence of the disincentive as a reference point, enabling the organism to gauge its spatial or psychological distance from the aversive object or state, and, second, because there is no clear-cut criterion for when that distance is great enough for the organism to terminate behavior aimed at avoiding the feared goal object or state.

Based on earlier work, Mowrer (1960) and Gray (1971) proposed that one way out of the active avoidance dilemma would be to conceive of objects or places that have been associated with nonpunishment during past learning episodes as safety signals with actual reward value. In other words, instead of running away from a feared object, the individual reframes the situation and, in a sense, switches from avoidance to approach motivation by reorienting his or her behavior with reference to a safe and thus rewarding object or place. This also solves the problem of how far away the individual needs to be from the aversive object in order to feel safe: as soon as the safety object or place is reached, the motivational episode ends.

Study

Switch From Avoidance of Danger to Approach to Safety

A classic study by Solomon and Wynne (1953) illustrates this switch from avoidance of danger to approach to safety. Solomon and Wynne trained dogs to jump from one compartment of a box to another as soon as a stimulus signaling impending foot shock appeared. Remarkably, most dogs not only learned to avoid the shock by jumping to the safe compartment within very few trials; they were also amazingly resistant to extinction: some continued to jump to the safe compartment upon presentation of the warning signal for more than 600 trials! Equally remarkably, they soon ceased to show any sign of fear once they had learned how to cope with the threat of shock.

The other mode of avoidance motivation is passive avoidance. The following are all examples of this behavioral manifestation of motivation: an animal ceasing all foraging behavior and keeping very still when it spots a predator; a rat that learns to stop bar-pressing in the presence of specific discriminative stimuli, because bar-pressing then reliably produces foot shock; and a student refraining from participating in a class discussion in order not to be ridiculed for saying something stupid. The fundamental difference between passive avoidance, on the one hand, and active avoidance and approach, on the other, is that the former involves the inhibition of behavior in order to avoid a certain goal state or object, whereas the latter entails the execution of behavior in order to avoid or attain something. Thus, active and passive avoidance represent behaviorally very different solutions for dealing with the same problem, namely, avoiding a punishment.

10.2.4 Many Qualitatively Different Types of Rewards Can Stimulate Motivation

Many different types of rewards (or punishments) can stimulate motivated behavior, and what motivates behavior can vary both across individuals and within an individual across time. Learning psychologists often conceive of rewards as unconditioned stimuli toward which all Pavlovian and instrumental learning is ultimately directed. The types of reward and the associated motivational systems that have enjoyed a long history of research in biopsychology include food in the case of feeding and hunger motivation, water in the case of thirst, orgasm in the case of sexual motivation, social closeness in the case of affiliation motivation, and being on top of the social hierarchy in the case of dominance motivation. Social and personality psychologists, who study humans rather than animals, would add achievement motivation, in which mastery experiences are rewarding; intimacy, in which deepening one’s relationship to a specific other is rewarding; and power motivation, in which having impact on others is experienced as rewarding (similar to, albeit more subtle than, the dominance motivation studied in animals). Another fundamental motivational system, curiosity or exploration, does not seem to be associated with a specific reward, with the possible exception of the discovery of any kind of pleasurable unconditioned stimulus that was hitherto unpredicted. Some of these rewards can be differentiated into several kinds of specific rewards. For instance, research on hunger and feeding reveals that the amounts of protein, fat, or carbohydrates contained in food all represent distinct kinds of rewards to which organisms are differentially sensitive, depending on the kind of nutrient they most urgently need.

While these are all very different kinds of rewards, fulfilling a variety of functions related to the organism’s individual and genetic survival, they are also similar in the sense that animals (including humans) want them, feel compelled to attain them repeatedly, and will show invigorated responding in situations in which their behavior could lead to the attainment of a reward. Whether an individual feels more or less wanting for a given reward depends, of course, on his or her need state (e.g., how long has it been since he or she last ate?), as well as on his or her liking of that reward or, in the parlance of human motivational psychology, on whether the individual has a motive for attaining a given reward (McClelland, 1987; Schultheiss, 2008). The more he or she responds with pleasure to obtaining the reward, the stronger the motive to seek it out in the future.

10.2.5 Motivation Is Dynamic

Another key feature of motivation emerges from the interplay of wanting and liking, namely, that motivation is a dynamic process. For instance, even the most dedicated glutton will not spend all available time eating but will switch to the pursuit of a different kind of reward once he or she has eaten to satiety. However, because the glutton enjoys food so much (high liking for the reward), he or she will sooner become motivated to eat again and will thus eat with greater frequency or intensity than a person who takes little pleasure in the reward of tasty food. Moreover, the degree of liking for one and the same reward can change as a function of how much of that reward an individual has already consumed. One piece of chocolate can be quite tasty and rewarding. But even a chocoholic is likely to experience nausea and disgust if forced to eat 2 lb of the stuff at once. Cabanac (1971) termed this changing subjective evaluation of the same reward over time alliesthesia. This phenomenon is assumed to track the usefulness of a given reward as a function of the changing needs of the organism. Clearly, food is highly useful, and thus very pleasant, for a semistarved individual but becomes less useful, and thus less pleasant, for someone who has already eaten to satiety.

Thus, motivation for a particular type of reward waxes and wanes, depending on the recency of reward consummation, on the degree to which the reward is experienced as pleasurable, and on other factors, such as the presence or absence of cues in the environment that predict the availability of a particular reward or the strength of competing motivational tendencies. The dynamic nature of motivation, which can even be modeled mathematically (cf. Atkinson & Birch, 1970), is clear to anyone who studies motivation through observation in humans and other animals but has frequently been overlooked by personality trait researchers, who emphasize the consistency of behavior over time (for a discussion of this issue, see Atkinson, 1981).

10.2.6 Motivation Can Be Need-Driven, Incentive-Driven, or Both

Obviously, motivation is often triggered by the physiological needs of the organism. Falling nutrient levels induce hunger; a rising blood salt concentration induces thirst. As a consequence, we seek food or drink to satisfy the need. Somewhat less obviously, however, motivation can also be triggered solely by cues in the environment. These motivation-arousing cues are called incentives, and a good illustration of incentive motivation is the salted-peanut phenomenon (Berridge, 2001). Imagine you are sitting in front of the TV after a good, filling dinner. Next to you, there is a bowl of salted peanuts. You are actually full, but why not try one? After you have eaten one and found it quite tasty, your hand goes back to the bowl for more, and half an hour later, you have eaten the entire contents of the bowl, even though you were not at all hungry! In this case, it was something rewarding about the peanuts themselves that made you eat them, rather than an unsatisfied physiological need for nutrients. Thus, how pleasurable a reward is depends not only on our need state but also on the nature or quality of the reward itself. An enticing reward can sometimes motivate us, even when we are not experiencing any need at all.

Study

Independent Effects of Incentive and Need

This principle is illustrated by an experiment investigating the independent effects of incentive and need factors on food intake behavior (Panksepp, 1998; see Fig. 10.3). Animals’ need state was manipulated by allowing them to eat regular lab chow whenever they wanted (ad-lib group; low need state) or by starving them for 24 h (high need state). Half of the animals were then offered regular lab chow (low incentive value), and half were offered a hamburger (high incentive value). Among the animals offered chow, there was a clear effect of need state: hungry, food-deprived rats ate more than did rats that had had constant access to chow. However, the results also document a clear incentive effect on motivation to eat: regardless of need state, all animals gorged themselves on the hamburger treat. These findings illustrate that motivation sometimes reflects differences in need state (in the chow condition) and sometimes reflects differences in the incentive value of a goal object (in the hamburger condition).

Fig. 10.3

Effects of incentive (hamburger vs. chow) and need factors (food deprivation vs. ad-lib feeding) on food intake (Adapted with permission from Panksepp (1998))

Of course, need- and incentive-driven motivations frequently go hand in hand. Incentives can be more attractive, rewarding, or pleasurable when a person is in a high-need state and less so when he or she is in a low-need state. For instance, a hungry person may perceive and experience a bland piece of bread as deliciously tasty but consider that same piece of bread to be considerably less attractive when in a state of satiety.

10.2.7 Motivation Is Characterized by Flexibility of Cue-Reward and Means-End Relationships

Motivation drives, and in turn is influenced by, Pavlovian and instrumental learning processes. Hungry rats are quicker than satiated rats to learn that a certain sound (the conditioned stimulus or CS) reliably predicts the presentation of a food pellet (the unconditioned stimulus or US), and anxious people (i.e., individuals who are particularly motivated to avoid punishments) are quicker to learn that a particular face (CS) presented on the computer screen predicts an aversive noise (US) presented over their headphones (Pavlovian conditioning, e.g., Morris, Öhman, & Dolan, 1998). Similarly, hungry rats show better learning of bar-pressing behavior if the bar-pressing produces a food pellet. Anxious people are better at learning to respond to a complex stimulus sequence presented on the computer screen if a speedy response to the stimuli prevents the loss of points or money (instrumental learning; e.g., Corr, Pickering, & Gray, 1997). Finally, power-motivated individuals show enhanced implicit learning of a visuomotor sequence if its execution leads to the presentation of a face with a low-dominance expression and impaired learning if the sequence is followed by a face with a high-dominance expression (Schultheiss, Pang, Torges, Wirth, & Treynor, 2005).

Learned cues can, in turn, trigger motivation. This phenomenon is powerfully demonstrated in the case of post-traumatic stress disorder (PTSD; Brewin, Dalgleish, & Joseph, 1996). PTSD is typically acquired during a traumatic episode of life. One key characteristic of the disorder is that any stimulus that happened to be present in the original, PTSD-inducing situation can trigger a stressful reliving of the traumatic event. For instance, a sudden loud noise can elicit a powerful panic response in someone who has been in combat and has learned to associate this noise with the imminent danger of enemy fire, whereas the same noise will only lead to a slight startle response in a person without PTSD. Thus, for the PTSD patient, sudden loud noises are conditioned danger signals that trigger a strong fear response. On the brighter side, mice and rats that have learned to associate a particular place in their environment with access to a sexual partner will show hormonal changes characteristic of sexual motivation whenever they revisit this place (Graham & Desjardins, 1980). Here, the place is the conditioned cue that elicits the motivational state.

In a sense, Pavlovian and instrumental learning processes make motivation possible in the first place, because they free individuals from fixed, instinctual responses to built-in trigger stimuli, allowing them to become motivationally aroused by a wide variety of stimuli that predict the availability of a reward and to develop an adaptive repertoire of behaviors that are useful for obtaining that reward. Although these learning processes are not entirely unconstrained in many species and domains of behavior (e.g., Seligman, 1970), they nevertheless make goal-directed behavior enormously flexible and adaptive.

10.2.8 Motivation Has Conscious and Nonconscious Aspects

Traditionally, biopsychology has not dealt with the issue of consciousness in the study of motivation, because most research in this field has been carried out in animals that lack the capacity for symbolic language and introspection. Almost by default, then, the majority of biopsychological accounts of motivation assume that consciousness is not a necessary prerequisite for goal-directed, reward-seeking behavior. Researchers working at the intersection of biopsychology, neuropsychology, psychopharmacology, and social psychology have examined the issue more closely but have come to essentially the same conclusion. For instance, Berridge (1996) reviewed evidence suggesting that, even for as fundamental a motivational system as feeding, humans rarely have accurate insight into what drives their appetites or what makes them start or stop eating – self-reports of motivation often contradict behavioral data. Similarly, Rolls (1999) has suggested that most of the brain’s considerable power for stimulus analysis, cognitive processing, and motor output primarily serves implicit (i.e., nonconscious) motivational processes representing the organism’s various needs for physical and genetic survival. Conscious, explicit motivation, by contrast, is the exception to the rule in the brain; it is language dependent and serves primarily to override implicit processes.

Berridge and Robinson (2003) have pointed out that implicit/explicit dissociations exist not only in the domain of motivation but can also be documented for emotion and learning. For instance, learning and memory can be divided into declarative (conscious, explicit) and nondeclarative (nonconscious, implicit) processes, with the former including memory for events and facts and the latter including Pavlovian conditioning and instrumental learning (Squire & Zola, 1996). In this context, it is worth noting that much of the human brain’s evolution took place in the absence of symbolic language, that is, without the ability to report on mental states. Accordingly, it is perhaps not surprising that language-based functions are relatively new in an otherwise highly developed and adaptive brain and that many motivational, emotional, and cognitive functions, which ensured our prelinguistic ancestors’ survival, do not depend on or require conscious introspection.

On the other hand, humans are able to formulate goals and to pursue them in their daily lives. If we were governed exclusively by phylogenetically shaped motivational needs, it would be almost inconceivable that any human would ever return to the dentist after experiencing the pain of a root canal procedure. Of course, conscious regulation of motivational processes is not restricted to overriding raw motivational impulses and needs but also extends to the formulation of short- and long-term goals and the elaboration of plans to attain them. Traditionally, the brain’s contributions to these uniquely human faculties have been studied by neuropsychologists and neurologists, who examined the effects of frontal lobe lesions on higher-order brain functions in humans. Presently, it remains unclear to what extent brain structures subserving conscious self-regulation and goal pursuit are integrated with, dissociated from, or interact with brain structures subserving implicit motivational processes and systems. It is also unclear to what extent behavior executed in the pursuit of explicit, language-based goals represents motivation proper or a different type of behavioral regulation, because the successful implementation of explicit goals does not per se elicit pleasure (Schultheiss & Köllner, 2014). The elucidation of these issues will be an important task for affective neuroscience in the coming years.

Excursus

Aims of Biopsychological Research

Biopsychological research focuses on a set of intersecting properties of motivation. Motivated behavior is set in motion by the anticipation of rewards or punishments (that is, incentives and disincentives) whose (un)pleasantness signals the usefulness or harmfulness of such outcomes. The motivational process consists of two phases, one that involves decreasing or increasing the distance from a reward or punishment, respectively (wanting), and one that involves evaluating the hedonic qualities of the reward or punishment (liking) once it has been attained or (not) avoided, respectively. Motivation can be directed toward a positive incentive (approach motivation) or away from a negative incentive, through either behavioral approach toward a safe place (active avoidance) or suppression of behavior until the danger is over (passive avoidance). Different types of incentives (e.g., novelty, food, water, sex, affiliation, dominance) can give rise to motivated behavior. Motivated behavior changes its direction dynamically, depending on how recently a given need has been satisfied and what kinds of incentives are available in a given situation. Motivation can reflect the presence of a strong need state (e.g., energy depletion); it can be triggered solely by strong incentives, even in the absence of a profound need (pure incentive motivation); or it can be the product of the confluence of a need state and the presence of suitable incentives. Motivation is characterized by flexibility of cue-incentive and means-end relationships; it drives, and is in turn influenced by, Pavlovian and instrumental learning processes. Finally, biopsychological approaches to motivation do not assume that motivation requires conscious awareness but acknowledge that, in humans, specialized brain systems support the conscious setting and execution of explicit, language-based goals.

10.3 Brain Structures Generally Involved in Motivation

While different motivational needs engage different networks of brain areas and transmitter systems, some systems fulfill such general, fundamental motivational functions that they are recruited by almost all motivational needs. This is particularly true of the amygdala, the striatum, and the orbitofrontal cortex (OFC) (cf. Cardinal, Parkinson, Hall, & Everitt, 2002). We will also examine the lateral prefrontal cortex (LPFC), one of several brain structures involved in the regulation of motivational impulses. Figure 10.4 provides an overview of the location of these structures in the human brain.

Fig. 10.4

Sagittal cut of the brain at the midline, with approximate locations of key structures of the motivational brain. Closed circles represent structures fully or partly visible in a sagittal cut; dashed circles represent structures hidden from view in a sagittal cut. The amygdala is hidden inside the frontal pole of the temporal lobe; the lateral prefrontal cortex is located on the outer side of the prefrontal cortex; the striatum is situated at the front of the subcortical forebrain. The ventral tegmental area and substantia nigra modulate activity in the striatum via dopaminergic axons (arrow)

10.3.1 Amygdala: Recognizing Rewards and Punishments at a Distance

The amygdala is an almond-shaped structure located in the temporal lobes of the brain. Its critical role in motivational processes was first documented by Klüver and Bucy (1937, 1939), who observed a phenomenon that they termed “psychic blindness” in monkeys whose temporal lobes had been lesioned. Klüver and Bucy (1939, p. 984) described what they observed in one monkey as follows: “The […] monkey shows a strong tendency to approach animate and inanimate objects without hesitation. This tendency appears even in the presence of objects which previously called forth avoidance reactions, extreme excitement and other forms of emotional response.” Thus, loss of the amygdala leads to an inability to assess the motivational value of an object from afar (“psychic blindness”); the monkey needs to establish direct contact with the object to determine its significance. Also notable is the loss of fear accompanying amygdala lesioning.

Research over the last 60 years has led to a much more nuanced understanding of the “psychic blindness” phenomenon observed by Klüver and Bucy. Specifically, the amygdala has been identified as a key brain structure in Pavlovian conditioning. It helps to establish associations between stimuli that do not initially carry any motivational meaning and unconditioned rewards or punishers, provided that the former reliably predict the latter (LeDoux, 1996). Thus, an intact amygdala enables an individual to learn that the sight of a banana (conditioned visual cue) predicts a pleasant taste when the banana is eaten (food reward), whereas the sight of a rubber ball does not predict a rewarding taste if the ball is taken into the mouth. Similarly, the amygdala is necessary for rats or humans to learn that a visual stimulus like a blue light predicts a shock and thus to express fear upon presentation of the blue light. With an intact amygdala, CS-US associations can be learned within a few trials and sometimes even on the basis of a single trial; with a lesioned amygdala, humans and animals need hundreds of trials to learn such associations and may even fail to acquire them altogether.

The amygdala consists of several highly interconnected nuclei (i.e., groups of neuronal cell bodies that serve similar purposes), two of which are particularly important in emotional and motivated responses to CSs and USs (cf. Fig. 10.5; LeDoux, 1996, 2002). Through its central nucleus, the amygdala primarily influences emotional reactions mediated by hypothalamic and brainstem structures. For instance, the central nucleus triggers the release of stress hormones (e.g., cortisol) through its effect on the endocrine command centers in the hypothalamus; it increases arousal, vigilance, and activation through its projections to major neurotransmitter systems (e.g., dopamine); and it activates various autonomic nervous system responses (e.g., galvanic skin response, pupil dilation, blood pressure). Through its basolateral nucleus, the amygdala influences motivated action via its projections to the striatum, a key structure of the brain’s incentive motivation system (see below). If the central nucleus is lesioned, animals are still able to show motivated responses (e.g., bar-pressing for food) in response to a CS, but preparatory emotional responses are impaired (e.g., salivation is lacking). Conversely, if the basolateral amygdala is lesioned, animals will still show an emotional response to a CS but fail to learn instrumental responses to elicit (or avoid) the presentation of affectively charged stimuli (Killcross, Robbins, & Everitt, 1997).

Fig. 10.5

A schematic overview of the amygdala and some of its nuclei (LA, lateral nucleus; BLA, basolateral nucleus; CE, central nucleus) and the emotional-motivational functions they mediate (After LeDoux (2002))

Another important feature of the amygdala is that it receives input from virtually all stages of sensory processing of a stimulus (LeDoux, 1996). This starts at the earliest stages of stimulus analysis at the level of the thalamus, which can elicit a “knee-jerk” amygdala response to crude stimulus representations (e.g., something that roughly looks like a snake), and extends all the way to highly elaborated multimodal representations from cortical areas that can trigger or further amplify amygdala responses (“It really is a venomous cobra slithering toward me!”) or dampen down amygdala responses (“Oh, it was just an old bicycle tire lying on the ground.”). The amygdala in turn sends information back to stimulus-processing areas such as the visual areas in the occipital lobe, thus influencing stimulus processing and potentially prompting various forms of motivated cognition, such as an enhanced focus on emotionally arousing features of the environment (Vuilleumier, Richardson, Armony, Driver, & Dolan, 2004). The amygdala also influences memory for emotional events (Cahill, 2000).

The involvement of the amygdala in emotion and motivation has frequently been studied using procedures that involve punishments, such as foot shock, because many noxious stimuli are universally aversive, making it relatively easy to elicit fear-related amygdala activation and learning with such procedures (LeDoux, 1996). Despite this research focus on fear and other negative emotional states, it should not be overlooked that the amygdala also plays a critical role in approach motivation and reward (Murray, 2007; Wassum & Izquierdo, 2015). For instance, Pavlov’s famous dogs would have had a hard time learning to salivate in response to the bell sound (CS) predicting food (US) if their amygdalae had been damaged. Other research shows that an intact amygdala is crucial for second-order reinforcement learning in animals (i.e., learning to bar-press in order to switch on a light that has previously been paired with the presentation of food or a sexual partner, e.g., Everitt, 1990) and that humans depend on the amygdala to generate affective “hunches” that guide their decision-making and behavior (Bechara, Damasio, Tranel, & Damasio, 1997).

In summary, the amygdala can be characterized as a motivational “homing-in” device whose activity is influenced by sensory information at all stages of cognitive processing and that allows individuals to adjust their physiological states and overt behavior in response to cues predicting the occurrence of unconditioned rewards and punishers. In the case of rewards, an intact amygdala allows the individual to learn about cues that signal proximity to a desired event or object and to navigate the environment in order to approach the reward, moving from more distal to more proximal reward-predictive cues until the reward itself can be obtained. In the case of punishers, the amygdala enables individuals to respond to punishment-predictive “warning signals,” either by freezing and an increase in vigilant attention or by active avoidance behavior that removes the individual from a potentially harmful situation.

10.3.2 Dopamine and the Striatum: Response Invigoration and Selection

The striatum, consisting of the caudate and putamen, is a comet-shaped subcortical structure, with a bulbous anterior head and a thinning posterior tail (see Fig. 10.4). It is part of the basal ganglia, brain structures that are critical for movement. However, the striatum is particularly important for the wanting phase of motivation, because this brain structure is responsible for the selection and invigoration of behaviors aimed at incentives or away from disincentives. So it’s not just about movement – it’s about motivated movement!

To support these functions, the striatum depends on the neurotransmitter dopamine (DA), which is released by axons projecting from a relatively small number of cells located in regions in the upper brain stem called the ventral tegmental area (VTA) and the substantia nigra (Bromberg-Martin, Matsumoto, & Hikosaka, 2010; see Fig. 10.4). These cells do a couple of remarkable things (Schultz, Dayan, & Montague, 1997). First off, they respond with a brief burst in firing rate when the organism encounters an unexpected reward (see Fig. 10.6, upper panel). This observation might lead you to think, as it has led some researchers, that DA is a reward transmitter. However, after several trials of learning, DA neurons stop responding to the actual reward and instead show a burst in response to a predictive cue (a CS) (see Fig. 10.6, middle panel). And if one extends this by adding another, second-order CS that predicts the original CS, one observes that the DA neurons increase their firing as soon as the second-order CS is presented, but no longer when the original CS subsequently appears, and so on. In short, DA neurons respond with a brief burst of firing activity to the first unpredicted stimulus that is associated with an incentive.

Fig. 10.6

Recordings from a dopamine (DA) cell of a monkey that received rewarding drops of fruit juice (R), which it learned to associate with a predictive visual or auditory cue (CS). The histogram on top of each panel shows when the cell fired most frequently; single lines of dots below the histogram represent repeated recordings of the time before, during, and after the reward or cue was administered. Each dot indicates when the neuron was firing (Adapted with permission from Schultz et al. (1997))

But what if the CS no longer predicts a reward? When that happens, DA neurons initially still show the increased firing rate in response to the CS. But when the time comes for the US to appear and it does not, DA neurons, which normally have a baseline, “idle” firing rate, suppress even this baseline activity for a little while, thus demarcating the absence of the predicted US (see Fig. 10.6, lower panel). These observations have prompted researchers to think of DA neurons as coding for “reward prediction error”; that is, if the state of affairs is better than expected, DA neurons mark this with increased firing, and if it is worse than expected, they mark this with decreased firing (Schultz et al., 1997). If everything is exactly as predicted (including actual rewards), they retain their baseline firing pattern. In a sense, these DA neurons code for motivational value, because they show differential responses to rewards and punishments (here: the absence of a reward).
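
This pattern – a burst that migrates from the reward to its earliest reliable predictor and a dip when a predicted reward is omitted – is what a temporal-difference (TD) prediction-error signal looks like. The following is a minimal sketch of that idea; the trial timeline, learning rate, and reward size are illustrative assumptions rather than the specifics of the Schultz et al. (1997) model:

```python
# Minimal temporal-difference (TD) sketch of a "reward prediction error" signal.
# Because the CS arrives at unpredictable times, the value of the pre-CS baseline
# is fixed at 0, so the CS is the earliest reliable predictor of the reward.
# Timeline, learning rate, and reward size are illustrative assumptions.

alpha = 0.3                                   # learning rate (illustrative)
V = {"CS": 0.0, "delay": 0.0}                 # learned value of each trial state

def run_trial(reward=1.0):
    """Simulate one CS -> delay -> reward trial; return prediction errors."""
    deltas = {}

    # CS onset: error = value of the newly signaled state minus baseline (0)
    deltas["CS onset"] = V["CS"] - 0.0

    # CS -> delay transition (no reward yet)
    delta = V["delay"] - V["CS"]
    V["CS"] += alpha * delta

    # delay -> reward delivery (trial ends afterwards, so future value = 0)
    delta = reward - V["delay"]
    deltas["reward"] = delta
    V["delay"] += alpha * delta

    return {k: round(v, 2) for k, v in deltas.items()}

print(run_trial())       # first trial: burst at reward, nothing at CS onset
for _ in range(100):
    run_trial()
print(run_trial())       # after learning: burst at CS onset, ~0 at reward
print(run_trial(0.0))    # reward omitted: negative error ("dip") at reward time
```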

Complicating matters somewhat, there are also DA neurons that increase firing whenever a reward OR a punisher is encountered. Clearly, these neurons are not exclusively dedicated to reward prediction but instead fulfill a function that has been termed motivational salience (or incentive salience) attribution (Berridge & Robinson, 1998; Bromberg-Martin et al., 2010; Matsumoto & Hikosaka, 2009): They imbue any type of stimulus that is relevant for survival, be it pleasant or aversive, with neuronal significance, turning it into something that the organism feels strongly compelled to deal with in an active manner (note that passive avoidance is not supported by DA).

DA neurons project to two different portions of the striatum: the dorsal part (i.e., the top) and the ventral part (i.e., the bottom), which includes an area called the nucleus accumbens. In the latter structure, DA neurons, particularly those that code for motivational salience, appear to fulfill a primarily invigorating function, prompting strong behavioral urges to deal with incentives, be they positive or negative. This function is illustrated by a study with rats in which the function of DA neurons projecting to the nucleus accumbens was experimentally manipulated (Ikemoto & Panksepp, 1999). Rats were trained to run down a runway to a goal box filled with a tasty sucrose reward. On each trial, they received either varying amounts of a DA antagonist dissolved in a fluid (vehicle) and injected into the nucleus accumbens or just the vehicle as the control condition. The DA antagonist was intended to block the effects of natural DA release on synaptic transmission in the accumbens; treatment with the vehicle was not expected to interfere with the effects of DA release. After the first trial, rats that had received the highest dose of the DA antagonist differed from all other groups in that they traversed the runway to the goal box much more slowly than any other group (left panel of Fig. 10.7). This difference persisted in subsequent trials. Notably, these rats’ consumption of the sweet sucrose solution was just as high as that of all the other rats once they reached the goal box (right panel of Fig. 10.7).

Fig. 10.7

An illustration of the dissociation between wanting (running speed to goal box, left panel) and liking (intake of sweet solution, right panel) for different degrees of dopamine suppression via the administration of an antagonist (Adapted with permission from Ikemoto & Panksepp (1999))

These findings illustrate that DA transmission in the accumbens is required for the invigoration of goal-directed behavior (i.e., running toward the goal box) but does not have an impact on the hedonic response to the incentive itself (i.e., consumption of the sucrose solution). In other words, DA in the nucleus accumbens is highly relevant to wanting a reward but does not mediate its liking (Berridge & Robinson, 1998). In a sense, then, the ventral striatum DA system functions like an internal magnet, pulling the organism closer to a desired goal or object.

Brain-imaging studies have shown that synaptic activity in the accumbens is also related to incentive seeking in humans. In these studies, accumbens (and sometimes VTA) activation has been observed in response to such varied incentives as social approval and social punishment, beautiful opposite-sex faces, chill-inducing music, or computer games (Aharon et al., 2001; Blood & Zatorre, 2001; Koepp et al., 1998; Kohls et al., 2013). It is notable in this context that the human trait of extraversion seems to be related to the sensitivity of the DA system (see the excursus below).

10.3.2.1 Extraversion: An Incentive Motivation Trait?

Extraversion is perhaps the most salient personality trait. As early as the second century AD, the Greek physician Galen proposed that individual differences on the continuum from introversion (low extraversion) to high extraversion have a biological basis. The first modern biopsychological account of extraversion was formulated by Hans Eysenck (1967), who mapped individual differences in extraversion onto differences in brainstem arousal systems. Eysenck argued that extraverts suffer from low levels of arousal and engage in vigorous social and physical activities to achieve a comfortable level of brain arousal at which they can function properly. Introverts, in contrast, have high baseline arousal levels and appear withdrawn because they avoid vigorous activities that would push their arousal level “over the edge” and thus impair their overall functioning.

Although there is evidence supporting the validity of Eysenck’s arousal theory of extraversion, it does not seem to tell the whole story. For one thing, as Gray (1981) pointed out, high levels of extraversion resemble a disposition to impulsively seek rewards, whereas high levels of introversion are linked to the avoidance of punishments. Gray’s reinterpretation of the extraversion-introversion continuum, which is supported by considerable evidence from animal and human studies, suggests that this trait has less to do with differences in arousal than with differences in motivation (cf. Matthews & Gilliland, 1999). A second criticism that can be leveled against Eysenck’s theory is that the construct of arousal itself is too undifferentiated. Eysenck developed his theory based on pioneering studies conducted in the 1940s on the role of the brainstem in cortical arousal. However, later research indicated that the brain houses several arousal mechanisms that serve a variety of different functions, some supporting sensory processes, others supporting attention and memory, and yet others being involved in motor arousal or activation (e.g., Tucker & Williamson, 1984).

Both criticisms were taken into account in a new theory of the biological basis of extraversion formulated by Depue and Collins (1999). According to these authors, individual differences in extraversion levels are based on variations in the degree to which DA neurons, which can be viewed as representing a motor arousal system, respond to signals of reward with an increase in synaptic transmission. People high in extraversion respond to incentives with greater activation of the DA system and thus stronger wanting than people low in extraversion. As a consequence, their behavioral surface appears more activated, lively, and invigorated than that of introverts. To test this theory, Depue et al. (1994) administered DA agonists or a placebo (i.e., a substance lacking any neurochemically active compounds) to extraverts and introverts and measured hormonal and behavioral indicators of increased DA-dependent synaptic signal transmission, such as the suppression of the lactation hormone prolactin and an increased eye-blink rate. As expected, after administration of the DA agonist but not of the placebo, extraverts showed more prolactin suppression (Fig. 10.8) and a greater increase in eye-blink rate than introverts. These findings suggest that extraverts have a greater capacity for DA-neuron activation, both naturally stimulated by incentive signals and artificially induced by DA agonists, than introverts.

Fig. 10.8

Relationship between responses to a DA agonist as assessed by the amount of prolactin suppression relative to placebo (higher levels = greater suppression) and scale scores on positive emotionality, a measure of extraversion. Greater DA activation is associated with higher levels of positive emotionality (Adapted with permission from Depue et al. (1994))

Depue, Luciana, Arbisi, Collins, and Leon’s (1994) findings also suggest that people do seem to have some insight into the functioning of their motivational brain. Individuals who endorse many extraversion items on personality questionnaires (i.e., extraverts) may have an accurate perception that they are behaviorally engaged by many more things than people who do not endorse such items (i.e., introverts). Yet this does not mean that they can introspectively access the operating characteristics of their DA system; rather, they may perceive in themselves and in their behavior the same things that people who know them well perceive: namely, that they tend to be outgoing, active, and full of energy. However, they seem to be largely unaware of what exactly it is that engages their incentive motivation system in the first place. As Schultheiss and Brunstein (2001) have shown, people’s implicit motives, which reflect the incentives they like and will work for, do not correlate with measures of extraversion. In other words, although people do not have introspective access to what is particularly rewarding for them (determined by their implicit motives), they do seem to have a relatively accurate perception of how strongly they respond to reward-predictive cues when they encounter them (represented by their self-reported extraversion level).

In contrast to the invigorating functions of DA in the ventral striatum, DA in the dorsal striatum is involved in the selection of behaviors that are instrumental for obtaining rewards or avoiding punishments (Balleine, Delgado, & Hikosaka, 2007; Bromberg-Martin et al., 2010). Here, the reward-prediction-error function of DA neurons promotes actions that have resulted in better-than-predicted outcomes (i.e., reward) and suppresses actions that have resulted in worse-than-predicted outcomes (i.e., punishment) – the neuronal basis of Thorndike’s (1927) law of effect.

Study

Key Role of Dopamine for Instrumental Behavior

Research by Robinson et al. (2007) illustrates the key role of DA in the dorsal striatum for instrumental behavior. These authors used DA-deficient mice and trained them on a two-lever task. Pressing one lever, with blinking cue lights above it, led to food reward; pressing the other, without blinking lights, did not. Prior to training, one group of mice was injected into the dorsal striatum with a virus that infected the nonfunctional DA cells projecting there and restored their ability to produce DA and hence to function as DA cells. Thus, mice treated in this way had restored DA function in the dorsal striatum only, but not in the ventral striatum or other brain regions. Across a series of experiments, Robinson and colleagues were able to show that the untreated DA-deficient mice never learned to press the food-reward lever preferentially. But once their dorsal-striatum DA levels were virally restored, their learning curve was steep, clearly favored the food-producing (reward) lever over the inactive (no-reward) lever, and was indistinguishable from that of control mice with normal DA function.

This research demonstrates that learning of action-outcome contingencies – like lever pressing → food – relies on DA in the dorsal striatum. It may also be helpful to highlight a key difference between this research and the Ikemoto and Panksepp (1999) study described previously: in that earlier study, lowered DA in the ventral striatum (nucleus accumbens) only reduced running speed. It did not abolish this motor behavior entirely, nor did it entail a choice between two different behaviors. Thus, it was about a change in general motivation, in invigoration, and in wanting proper. In contrast, the research by Robinson and colleagues (2007) documents a selective increase of behavior followed by a reward (pressing a lever resulting in food) and an equally selective decrease of behavior followed by non-reward (pressing a lever resulting in no food). There was no evidence of a general increase in vigorous behavior, only of a selective, instrumental learning effect.
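The selection principle at work here can be made concrete with a small simulation. The following Python sketch is purely illustrative – it is not the model used by Robinson and colleagues – and its learning rate, choice temperature, and trial numbers are arbitrary assumptions: a simulated learner chooses between two levers, updates the value of the chosen lever with a prediction error (obtained reward minus expected reward), and, in line with the law of effect, comes to prefer the lever that produces food.

```python
# Minimal sketch of prediction-error-driven lever selection (illustrative only).
import math
import random

def softmax_choice(values, temperature=0.2):
    """Pick a lever probabilistically, favoring higher-valued actions."""
    exps = [math.exp(v / temperature) for v in values]
    total = sum(exps)
    r = random.random() * total
    cumulative = 0.0
    for action, e in enumerate(exps):
        cumulative += e
        if r <= cumulative:
            return action
    return len(values) - 1

def simulate(trials=200, learning_rate=0.1):
    values = [0.0, 0.0]   # value estimates for lever 0 (food) and lever 1 (no food)
    choices = []
    for _ in range(trials):
        action = softmax_choice(values)
        reward = 1.0 if action == 0 else 0.0            # only lever 0 delivers food
        prediction_error = reward - values[action]       # better than expected -> positive
        values[action] += learning_rate * prediction_error
        choices.append(action)
    return values, choices

random.seed(1)
values, choices = simulate()
print("final values:", [round(v, 2) for v in values])
print("share of food-lever presses in last 50 trials:",
      sum(1 for a in choices[-50:] if a == 0) / 50)
```

In this toy simulation, the learner ends up pressing the food lever on nearly every late trial, loosely mirroring the steep learning curve of the DA-restored mice.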

10.3.3 The Orbitofrontal Cortex: Evaluating Rewards and Punishments

The OFC is situated directly above the eye orbits, on the ventral (i.e., downward facing) side of the frontal cortex. It receives highly processed olfactory, visual, auditory, and somatosensory information. It is interconnected with both the amygdala and the striatal DA system, making it one of three major players in the brain’s incentive motivation network. The OFC plays a key role in scaling the hedonic value of a broad array of primary and conditioned reinforcers, including perceived facial expressions, various nutritional components of food, monetary gains and losses, and pleasant touch (Kringelbach, 2005; Rolls, 2000).

Two notable features characterize the OFC. First, different types of reinforcers are represented by anatomically distinct areas of the OFC (see Fig. 10.9). Second, each area’s activity changes with the motivational value of a given reinforcer. Evidence for the existence of anatomically distinct reward areas comes from studies conducted by Rolls and colleagues (reviewed in Rolls, 2000, 2004). These studies showed that different subregions of the OFC respond to the degree to which a given foodstuff contains glucose, fat, salt, or protein (e.g., de Araujo, Kringelbach, Rolls, & Hobden, 2003). Similarly, brain-imaging studies conducted with human subjects show that specific OFC regions are activated in response to monetary gains and losses (O’Doherty, Kringelbach, Rolls, Hornak, & Andrews, 2001). Monetary punishment was associated with activation of the lateral OFC (i.e., toward the side), whereas monetary reward was associated with activation of the medial OFC (i.e., toward the body’s midline).

Fig. 10.9
figure 9

The OFC, viewed from below, with results of a meta-analysis superimposed. Dots represent activation maxima from single brain-imaging studies with human participants. The orange (middle) area on each side of the OFC appears to be most strongly related to acute subjective pleasure responses to diverse rewards, such as food or sex. The green area toward the midline appears to be more involved in memory and learning of rewards. The blue areas toward the outer rim of the OFC are active in response to punishers (Adapted with permission from Berridge & Kringelbach (2015))

The OFC ’s response to a specific reward is not fixed but changes dynamically with exposure to or consummation of a given reward and with changes in reward contingencies. Data from responses of single neurons recorded through hair-thin electrodes in primates provide a powerful illustration of the dynamic representation of reward value in the OFC (Rolls, 2000, 2004). If a monkey is given a single drop of glucose syrup (a highly rewarding, energy-rich food substance), glucose-specific cells in the OFC show a strong burst of activity. If the monkey is fed more and more glucose over time, however, the firing rate in these neurons decreases in a fashion that is closely correlated with the monkey’s acceptance of further glucose administrations, up to a point at which the OFC neurons stop firing and the animal completely rejects the glucose syrup (cf. Fig. 10.10). If the animal is given sufficient time after it has gorged itself on glucose syrup, however, it will eventually accept more syrup again, and its glucose-specific OFC neurons will resume their vigorous firing in response to the sweet taste. Findings such as these suggest that OFC neurons encode the individual’s hedonic response to reinforcers and that as the individual becomes “satiated” on a given reinforcer, neural responding dies down – a neurobiological manifestation of the alliesthesia effect.

Fig. 10.10
figure 10

An illustration of need-dependent reward evaluation in a monkey’s OFC. In both panels, the x-axis displays amount of glucose solution fed (in ml). Upper panel: the y-axis displays the firing rate of sweet-responsive neurons in response to glucose, relative to responses to drops of saline (SA) or blackcurrant juice (BJ). Lower panel: behavioral acceptance of glucose solution (Adapted with permission from Rolls (2005b))

Findings from brain-stimulation reward studies are consistent with this interpretation of OFC functioning (Rolls, 1999). In this type of research, an electrode is implanted in the brain, and the animal can activate the flow of current at the electrode tip by pressing a lever. Depending on where in the brain the electrode is located, the animal is sometimes observed to press the lever frantically, as if that stimulation triggers a pleasurable sensation, and this increase in lever pressing is taken as an indication that a brain reward site has been located. Brain-stimulation reward effects have been documented for many OFC sites, suggesting that pleasurable emotions are indeed experienced when these sites are activated. Notably, for food-related OFC reward sites, it has been observed that lever pressing varies with the need state of the organism: hungry animals display vigorous lever pressing at this site, but lever pressing ceases when they have eaten (Rolls, 1999). This suggests that OFC reward sites are sensitive to the degree of satiation that an organism has reached with regard to a specific reward and must therefore integrate information about the reward’s incentive value with the organismic need states.
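The need-dependent evaluation described in the preceding paragraphs can be captured in a deliberately simple sketch. The following Python snippet is not a model of OFC physiology; it merely assumes, for illustration, that a hypothetical glucose-responsive signal declines linearly with the amount of glucose already consumed and that further portions are accepted only while that signal stays above a threshold. All numbers are invented.

```python
# Illustrative sketch of satiety-dependent reward evaluation (alliesthesia).
def ofc_response(ml_consumed, initial_rate=20.0, decline_per_ml=0.4):
    """Hypothetical firing rate (spikes/s) as a function of glucose already consumed."""
    return max(0.0, initial_rate - decline_per_ml * ml_consumed)

def feeding_bout(portion_ml=5.0, acceptance_threshold=2.0):
    """Keep 'accepting' portions while the modeled response exceeds the threshold."""
    consumed = 0.0
    while ofc_response(consumed) > acceptance_threshold:
        consumed += portion_ml
    return consumed

for ml in (0, 10, 25, 40, 50):
    print(f"{ml:3d} ml consumed -> modeled response {ofc_response(ml):5.1f} spikes/s")
print("glucose accepted before rejection:", feeding_bout(), "ml")
```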

OFC reward areas can also become activated by conditioned incentives (e.g., sights or sounds that predict food; Rolls, 2000, 2004). For instance, an area that responds strongly to the taste of food can, through learning, also become activated by the sight of that type of food. Together with the findings on the pleasurable properties of OFC activation, this observation suggests that conditioned incentives can feel just as pleasurable as the “real thing,” that is, the actual reward. This idea is at the core of many modern theories of incentive motivation (e.g., Bindra, 1978). Interestingly, the OFC is also able to break or even reverse learned CS-reward associations very rapidly (Rolls, 2000, 2004). For instance, through learning, OFC neurons will respond to a triangle shape that reliably precedes food reward but not to a square shape that is not associated with food. As soon as the relationship is reversed and the triangle no longer predicts food but the square does, the same OFC neurons will cease responding to the triangle and start responding to the square. Thus, the OFC encodes not only the reinforcement value of rewards but also that of the stimuli associated with them, and it can rapidly change its evaluations as soon as the reward value of a conditioned incentive changes. Not surprisingly, lesions to the OFC abolish the individual’s ability to represent changing CS-reward contingencies, and emotional responses may become “unhinged” and persist for long periods (Damasio, 1994; Rolls, 1999).
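The rapid reversal just described can be illustrated with a simple prediction-error update of the two CS values. The sketch below is an abstract illustration of reversal learning, not a reconstruction of OFC circuitry or of Rolls’ procedure; the learning rate, trial counts, and reversal point are arbitrary assumptions.

```python
# Illustrative sketch of CS-value reversal learning.
def run_reversal(trials=40, reversal_at=20, learning_rate=0.5):
    cs_value = {"triangle": 0.0, "square": 0.0}
    history = []
    for t in range(trials):
        rewarded_cs = "triangle" if t < reversal_at else "square"  # contingency reverses halfway
        for cs in cs_value:
            outcome = 1.0 if cs == rewarded_cs else 0.0
            cs_value[cs] += learning_rate * (outcome - cs_value[cs])  # prediction-error update
        history.append((t, round(cs_value["triangle"], 2), round(cs_value["square"], 2)))
    return history

for t, tri, sq in run_reversal()[::5]:
    print(f"trial {t:2d}: V(triangle)={tri:4.2f}  V(square)={sq:4.2f}")
```

Within a few trials after the reversal, the value assigned to the square overtakes that of the triangle, echoing the rapid switch in neuronal responding described above.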

The OFC is not the only site of the “incentive motivation network” that codes for the pleasantness of a reward. Some research suggests that portions of the nucleus accumbens and of the ventral pallidum (both parts of the basal ganglia, a subcortical brain structure involved in motor control and instrumental conditioning) code the pleasantness of food reward (Berridge & Kringelbach, 2015). Conversely, the OFC is not only involved in reward evaluation but also plays a role in response inhibition and the regulation of emotion (Bechara, Damasio, & Damasio, 2000).

10.3.4 The Lateral Prefrontal Cortex: Motivational Regulation and Override

The lateral prefrontal cortex (LPFC) is the portion of the frontal cortex just behind the forehead, extending to the temples. Along with the OFC and the medial PFC, it is one of the last parts of the cortex to appear phylogenetically and is the last to come to maturation, not reaching its full functional capacity until early adulthood (Fuster, 2001). The LPFC supports a host of important mental functions, including speech (Broca’s area in left LPFC), working memory, memory encoding and retrieval, and motor control. From a motivational perspective, however, two specific functions of the LPFC are the most important. First, the LPFC is the place in the brain where goals and complex plans to enact them are represented. Second, and related to the first function, the LPFC can regulate the activation of core motivational structures of the brain, such as the amygdala.

Evidence for the key role of the LPFC in goal-directed action comes from neurological case studies (Luria, 1973; Luria & Homskaya, 1964). It is perhaps not surprising that individuals with LPFC lesions that destroy language capability and working memory find it difficult to initiate and execute voluntary behavior, particularly if that behavior is complex. They lack the ability to instruct themselves and to pace themselves verbally through complex action sequences (language center lesion) and may not be able to retain all elements of a complex plan in memory for long enough to execute the plan in its entirety (working memory lesion). More subtle forms of volitional deficits are observed when LPFC lesions do not affect either working memory or speech centers. Neuropsychologist Alexander Luria (1973; Luria & Homskaya, 1964) described people with this type of lesion who were perfectly able to understand and remember a verbal action command, such as “Please take the pencil and put it on the table,” and could repeat it to the experimenter, but were unable to use it to guide their behavior. Thus, an intact LPFC is critical for the execution of complex plans that rely on working memory and language for the representation and updating of their elements and to feed these plans to the motor output. Note that the key role of language in the pursuit of complex goals and plans also makes the LPFC a critical point of entry for the social regulation of behavior. Specifically, although people with LPFC lesions may be relatively unimpaired in their ability to respond motivationally to innate or learned nonverbal social cues (e.g., facial expressions, the prosody of spoken language, or gestures), they lose their ability to coordinate flexibly their behavior with that of others through the pursuit of verbally shared goals or to adapt their behavior to the changing demands and expectations of their sociocultural environment.

The LPFC’s capacity to represent and enact complex, verbally “programmed” goals implies an ability to regulate and override ongoing motivational needs and impulses and to resolve conflict between competing behavioral tendencies. Anyone who has ever had to study for an exam on a beautiful sunny day knows that it takes some effort and self-control, often mediated through verbal commands directed at oneself, to focus on one’s books rather than jumping up and running outside. The LPFC seems to achieve this feat through its inhibiting effects on activity in structures related to incentive motivation, such as the amygdala. Studies show that nonverbal stimuli with strong incentive properties, such as facial expressions of emotion or pictures with negative affective content (such as depictions of mutilated bodies; Adolphs & Tranel, 2000), cause activation of the amygdala in humans. However, these findings are usually obtained under conditions of passive viewing that do not require LPFC participation in the task. As soon as participants are asked to verbally label the expression of a face or to reappraise a negative scene such that it becomes subjectively less aversive, the LPFC becomes activated and amygdala activation decreases (Lieberman et al., 2007; Ochsner, Bunge, Gross, & Gabrieli, 2002). This dampening effect of LPFC activation on amygdala activity may enable people to refrain from impulsive responses, for example, to remain seated at their desk to study for an exam instead of giving in to their impulse to engage in motivationally more exciting activities. These findings suggest that engagement of the LPFC’s verbal-symbolic functions to deal with an emotionally arousing stimulus dampens activity in emotion generators such as the amygdala (cf. Lieberman, 2003).

In summary, the LPFC supports the planning and implementation of complex behavior through its ability to adopt or formulate explicit (i.e., verbally represented) goals and keep them activated in working memory, and by controlling activation in the brain’s incentive motivation network, thereby inhibiting impulsive responses to motivational cues.

The Brain’s Incentive Motivation Network

Many motivational processes make use of what we have termed the brain’s incentive motivation network, consisting of the amygdala, the mesolimbic dopamine system, and the orbitofrontal cortex. The amygdala is involved in learning which environmental cues predict the occurrence of a reward or punishment, thereby guiding the organism toward pleasant and away from noxious outcomes. The striatal dopamine system, which receives information about conditioned cues from the amygdala, regulates how vigorously the individual engages in reward seeking, but also in active avoidance of punishments. It is also involved in the selection of behaviors that maximize pleasurable outcomes. The orbitofrontal cortex evaluates the “goodness” of primary and learned rewards, based on the individual’s current need state and learning experiences. Motivational processes rely on these three structures to act in concert, such that cues that predict (amygdala) stimuli that have been experienced as pleasant (orbitofrontal cortex) elicit behavioral selection and invigoration (striatal dopamine system) directed at reward attainment. Behavioral impulses generated by this incentive motivation system are influenced by other functional structures, such as the lateral prefrontal cortex. The lateral prefrontal cortex guides behavior through the formulation of complex, verbally represented goals and plans for their implementation and can shield explicit goals from the interference of incentive-driven motivational impulses by regulating the output of the brain’s incentive motivation network.

We should emphasize at this point that the preceding sections have selectively discussed just some of the most important brain areas involved in motivation and its regulation and omitted other key structures such as the hippocampus (involved in context-dependent modulation of emotional and motivational states) and the medial prefrontal cortex including the anterior cingulate cortex (involved in the regulation of attention, response conflict resolution, and movement initiation). Instead, we will dedicate the remainder of the chapter to the discussion of specific motivational systems that are rooted in hypothalamic structures (Schultheiss, 2013; see Fig. 10.4 for the location of the hypothalamus in the human brain) and that harness the brain’s incentive motivation network to guide behavior.

10.4 Specific Motivational Systems

Certain tasks and goals in an organism’s life are recurrent. All animals need to find food and eat regularly to get energy; they need to drink so as not to dehydrate; they are driven to find a mate to pass their genes on to their offspring. The attainment of these recurring needs and goals involves challenges such as competing with and dominating other same-sex members of the species. Of course, the tasks and challenges facing currently living beings also occupied their ancestors, reaching back millions of years in evolutionary history. Hence, it is hardly surprising to find that evolution has equipped brains (and bodies) with special systems that ensure that the recurring needs for day-to-day individual survival and the need for genomic generation-to-generation survival are met adaptively and efficiently (LeDoux, 2012). Such specialized systems that coordinate and support the attainment of specific classes of incentives have been identified and described in considerable detail for drinking, feeding, affiliation, dominance, and sex. In the following, we take a closer look at how evolution has shaped four of these motivational systems.

How Many Specific Motivational Systems Are There?

As many other chapters in this book document, the question of how many fundamental motivational systems exist is a consequential one in motivation science. If research focuses on motivational phenomena that lack any specific and identifiable basis in our mammalian brains, or if it fails to uncover such biologically based systems, the study of motivation will rest on a very weak foundation.

Jaak Panksepp (1998; Panksepp & Biven, 2012) has taken a distinctly biopsychological approach toward determining which motivational systems are truly fundamental. Combining causal analysis with an evolutionary approach, he contends that when electrical stimulation of specific brain sites gives rise to the same affectively charged instinctual behavioral patterns in several mammalian species, a fundamental emotional-motivational system has been identified. “Affectively charged” means that the stimulation elicits intrinsically positive or negative affective states that animals will strive for or avoid. Learning psychologists would call the overall pattern of affective and behavioral responses to such stimulation an unconditioned response (UR). Because such responses are normally elicited not by brain stimulation but by stimuli that, over the course of evolutionary history, have been recurring and critical for the survival of a species, each system must have suitable natural elicitors. Learning psychologists would call such natural elicitors unconditioned stimuli (USs). For instance, Panksepp and Biven (2012) argue that natural elicitors activating the FEAR system are pain, startling stimuli, and, in some species such as rats and mice, the scent of predators. And the FEAR system responds with an affective state, ranging from mild anxiety to full-blown terror, depending on the kind and intensity of the elicitor. It also orchestrates instinctual, hard-wired physiological and behavioral responses, such as pupil dilation, heart rate changes, freezing, or panicky flight.

With this approach toward identifying fundamental motivations, Panksepp has outlined seven distinct systems, which he calls SEEKING, LUST, CARE, PLAY, PANIC/GRIEF, FEAR, and RAGE. Distinct positive affective states are at the core of the first four systems, whereas distinct negative affective states are critical for the latter three. For all systems, Panksepp has located the affective “hot spots” in subcortical brain areas. Each system consists of a complex network of subcortical brain sites and neurotransmitters, and these sites and transmitters partially overlap between systems, often reflecting shared evolved functionality. Table 10.1 provides a brief sketch of Panksepp’s seven systems (based on Panksepp, 1998, 2006; Panksepp & Biven, 2012).

Table 10.1 Panksepp’s seven emotional-motivational systems

Panksepp’s model converges with the approach presented in this chapter when it comes to characterizing a general-purpose system that energizes behavior aimed at incentives. His SEEKING system largely overlaps with the striatal dopamine system we have described as being critical for response selection and invigoration. It also converges with our approach by drawing attention to the fact that the phylogenetically evolved, fundamental US-UR connections at the core of each system can be elaborated and extended in an individual’s development through conditioning processes – a feature that in his and our approach critically depends on the amygdala. However, Panksepp’s model departs from our approach, which assigns a critical role to the OFC as the neuronal basis of pleasant and unpleasant affective responses to incentives, in that he argues that specific affective states are rooted in subcortical brain sites, with the periaqueductal gray (PAG) in particular representing an epicenter of raw affects. We suggest that this apparent contradiction can be resolved, however, if one realizes that the affects generated by Panksepp’s motivational systems are frequently associated with the first phase of motivation (motivation proper) and may represent what individuals experience when they feel compelled to go after certain incentives (e.g., greed, lust) or avoid certain disincentives (e.g., fear, sadness). In contrast, the affects generated by the OFC appear to be related more to the second, consummatory phase of motivation, evaluating the quality of the outcome brought about by the preceding motivational episode on a fundamental hedonic pleasure-displeasure continuum. Finally, Panksepp’s model also diverges from the ideas presented in this chapter in another, subtler way. When looking at the overview of the seven systems he proposes, you may note that not all of the special-purpose systems we present toward the end of this chapter are listed here. While affiliation and attachment can be roughly mapped onto either CARE or PANIC/GRIEF or both and sex can be matched to LUST, feeding and dominance do not appear on Panksepp’s list. Panksepp (1998) clearly acknowledges feeding as a fundamental system, but categorizes it as a homeostatic system (i.e., as being dedicated to restoring and maintaining vital balances in our bodies’ nutrient levels) and thus not quite on par with the motivational-emotional systems described in the list presented above. The absence of dominance from Panksepp’s list reflects the fact that Panksepp sees no strong evidence for the existence of such a brain system (see Panksepp & Biven, 2012). He contends that what many researchers characterize as dominance or power motivation is merely a by-product of either the LUST or the RAGE system or their combined functions (see van der Westhuizen & Solms, 2015 for further discussion of this issue).

So how many motivational systems are there? From the discussion of Panksepp’s approach, we think it is safe to draw three conclusions. First, the final list will not be long. Over the course of evolutionary history, only a handful of problems have recurred for our ancestors so frequently and consequentially that they exerted persistent selective pressure for the development of brain systems dedicated to dealing with them efficiently (LeDoux, 2012). Panksepp’s seven systems may provide a good approximation. Second, we think that Panksepp’s criterion of electrical stimulation eliciting specific affective-instinctual patterns across individuals and species is sensible and hard-nosed at the same time. It may help to separate the wheat from the chaff in theorizing about the nature and number of motivational systems. Our third conclusion is that despite this, more research is needed to parse the biopsychological systems supporting different kinds of motivation with sufficient precision and differentiation and to reconcile apparent contradictions between approaches (e.g., is dominance motivation supported by a distinct, separate motivation system or is it an emergent property of other systems?).

10.4.1 Feeding

The primary reason to eat is to provide energy for the body to function. Hunger reflects the need to replenish nutrients. In the modern, developed world, however, where food is overabundant, there are many other factors that motivate us to eat. These include routine (e.g., “It’s noon – it’s lunchtime!”), stress, pleasure, and social factors (e.g., when other people are eating). The physiological mechanisms that control the regulation of eating involve an interplay between the brain (especially the hypothalamus, a key brain area in the regulation of basic physiological needs) and other organs, such as the liver, stomach, and fat stores. In this section, we will cover some of the neurobiological signals that activate and deactivate the drive to ingest food: the need for energy as well as the desire for the pleasures of taste.

10.4.1.1 Energy Needs

All organisms need nutrients to provide the energy necessary to sustain the chemical processes of life. Our cells use glucose as their primary energy source. Glucose can be stored as glycogen in the liver, and fat is used for the longer-term storage of energy. The body has multiple ways of sensing when more energy might be needed; e.g., when glucose levels drop, fat stores decline, or intestinal motility changes. These conditions trigger activity in brain circuitry that generates a feeling of hunger or motivation to eat.

Many of the body’s systems for sensing energy needs begin in the digestive tract. The stomach contains stretch receptors that send signals of fullness to the brain. The gut also produces many neurohormones that act on the brain to let it know how recently and how much food has been consumed. One such neurohormone is cholecystokinin (CCK). The more food enters the gut, the more CCK is released. CCK acts on the vagus nerve, which sends a satiety (i.e., fullness) signal to the brain. Thus, CCK helps to inhibit motivation to eat. High levels of CCK actually induce nausea – a “warning signal” that tells us to stop eating (Greenough, Cole, Lewis, Lockton, & Blundell, 1998) (Table 10.2).

Table 10.2 Neuropeptides that affect hunger and feeding

Another satiety signal comes from fat. Fat cells produce a hormone called leptin (see the excursus below), which travels through the blood and acts at the hypothalamus to inhibit food intake. The more fat there is on the body, the more leptin is produced. When leptin levels are low, we feel hungry and eat more; when they are high, we eat less. Leptin thus serves as a signal to the brain, indicating the amount of fat stored in the body, and helps to regulate body weight in the long term. Leptin also acts as a short-term signal: leptin levels in the blood increase at the end of a meal, promoting satiety, and decrease some hours post-meal, promoting hunger (Friedman & Halaas, 1998).
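The feedback logic of leptin as a long-term signal can be illustrated with a deliberately crude simulation: modeled leptin rises with fat mass, daily intake falls as leptin rises, and fat mass therefore drifts toward a stable level. All constants below are invented for illustration and have no physiological meaning.

```python
# Toy negative-feedback loop: fat mass -> leptin -> reduced intake -> stable fat mass.
def simulate_weight(days=200, fat_mass=5.0):
    for _ in range(days):
        leptin = 0.2 * fat_mass                    # more fat -> more leptin
        intake = max(0.0, 30.0 - 8.0 * leptin)     # more leptin -> less eating
        expenditure = 25.0                         # constant energy use
        fat_mass += 0.01 * (intake - expenditure)  # surplus stored, deficit drawn from fat
        fat_mass = max(0.0, fat_mass)
    return fat_mass, leptin, intake

fat, leptin, intake = simulate_weight()
print(f"steady-state fat mass ~{fat:.2f}, leptin ~{leptin:.2f}, daily intake ~{intake:.1f}")
```

Starting the simulation from a higher or lower fat mass still ends near the same equilibrium, which is the sense in which leptin is said to regulate body weight in the long term.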

10.4.1.2 Genes and Obesity

Researchers discovered leptin via a mutant mouse strain that overeats and becomes very obese (cf. Fig. 10.11). This strain has a defective gene, which scientists termed the ob gene (for obesity). Later, it was found that, in normal mice, the ob gene codes for the hormone now known as leptin. Without a functioning ob gene, the mutant mice cannot produce leptin. Their brains respond as if their bodies contained no fat: the animals act as if they were starving and eat voraciously. Injections of leptin return the mice’s body weight and food intake to normal (Friedman & Halaas, 1998).

Fig. 10.11
figure 11

The mouse on the left lacks the ob gene, which codes for the protein leptin. Without leptin, this mouse overeats and becomes obese. The mouse on the right is genetically “normal” (Photo copyright Amgen Inc., used with permission)

Melanocortins were known to affect skin pigmentation in rodents, but their role in food intake was likewise discovered via a mutant mouse strain. This strain also overeats despite extreme obesity, and it has yellow fur – hence its name, the agouti mouse. Researchers found that this mouse strain has a defective gene for a particular melanocortin receptor. The lack of this receptor means that melanocortins like alpha-melanocyte-stimulating hormone (α-MSH) cannot act in the brain or on the skin, resulting in obesity and altered pigmentation (Carroll, Voisey, & van Daal, 2004).

Do genetic mutations cause obesity in humans? For most obese people, the answer is no. A melanocortin precursor defect that leads to obesity, a pale complexion, and red hair has been discovered in humans, but this mutation is very rare. A complex confluence of genetic predispositions certainly influences the propensity to gain weight, but diet and exercise are the most important factors in human obesity (Martinez, 2000).

The brain also contains specialized neurons that monitor levels of glucose in the blood. These “glucostat” neurons, located in the hypothalamus, react when glucose levels drop and send a signal to other regions of the hypothalamus to trigger feeding (e.g., Stricker & Verbalis, 2002).

Which brain systems do CCK, leptin, and glucostat neurons communicate with? They are numerous but include neurons in a subregion of the hypothalamus called the arcuate nucleus that produce neuropeptide Y (NPY), a potent hunger-inducing molecule. Minuscule amounts of NPY injected into the brains of laboratory animals cause them to eat voraciously. One of the ways that leptin acts in the brain is by inhibiting the neurons that produce NPY and thus staunching hunger. Similarly, CCK inhibits NPY production in the hypothalamus (Levine & Billington, 1997; Billington & Levine, 1992).

Neurons producing and responding to a class of neuropeptides called melanocortins are also active in the hypothalamus. Peptides that activate melanocortin receptors, such as alpha-melanocyte-stimulating hormone (α-MSH), lead to satiety, whereas peptides that block these receptors, such as agouti-related protein (AGRP), stimulate hunger (Irani & Haskell-Luevano, 2005; Stutz, Morrison, & Argyropoulos, 2005). In addition to deactivating NPY neurons, leptin and CCK cause α-MSH neurons to increase their firing rate, releasing more α-MSH and thus promoting satiety.

Gonadal steroids, which play a role in regulating fertility and sexual motivation (see Sect. 4.4), also have an impact on feeding. In female animals, estrogen has a significant restraining effect on food intake. After ovariectomy, which stops the production of estrogen in the ovaries, female rats increase their food intake and gain about 25% in body weight. Progesterone counteracts the effects of estrogen. High levels of progesterone lead to increased food intake and body mass, an effect that is consistent with progesterone’s role as a hormone that promotes and safeguards pregnancy, which is characterized by steeply increasing energy needs.

10.4.1.3 Reward

The need for energy is obviously not the only reason we eat. Eating is pleasurable and, like other pleasurable activities (sex, addictive drugs, etc.), causes release of dopamine (DA) in the nucleus accumbens, part of the brain’s reward learning system (see Sect. 3.2, “Dopamine and the Striatum: Response Invigoration and Selection”). In particular, sweet and/or fatty foods are naturally rewarding to humans, rats, and other omnivores. In rats, it has been shown that diets containing extra fat or sugar lead to greater activity in brain structures involved in pleasure and reward (Levine, Kotz, & Gosnell, 2003).

The body’s natural opioids contribute to the pleasurable experience of eating. Opioids are released in the brain during intake of sweet or fatty foods, in particular. Injecting laboratory rats with opioids causes them to eat somewhat more regular lab chow but a great deal more of a palatable sweet or high-fat chow. Whereas NPY seems to be involved in hunger driven by energy needs, opioids are more involved in the rewarding aspects of motivation for food. This was seen in a study that showed that injecting NPY into the brain increased animals’ intake of bland yet energy-rich chow, but not of tasty but energy-dilute sugar-sweetened water. On the other hand, injecting opioids caused a marked increase in sugar-water intake, without having much effect on chow intake (Levine & Billington, 2004).

Sweet and fatty foods are not the only foodstuffs we seek out. A flavor called umami, present in meats, seafoods, and soy, is very rewarding to humans and laboratory animals, possibly because it serves as a good indication that the food is rich in protein (Yamaguchi & Ninomiya, 2000). The food additive monosodium glutamate (MSG) powerfully activates umami taste receptors on the tongue, which is why foods containing MSG taste so good to us.

Finally, we are naturally motivated to seek out a variety of foods. Humans and laboratory animals exposed repeatedly to a single flavor, even one that is highly rewarding at the start, will rapidly tire of it and consume less of it. However, if they are then exposed to a different flavor, the rewarding nature of the first one will be renewed (Swithers & Martinson, 1998). Because of this phenomenon (alliesthesia), the best way to make a lab rat gain weight is to put it on a “cafeteria diet”: a choice of multiple foods (e.g., Gianotti, Roca, & Palou, 1988). That rat will gain considerably more weight than rats offered just one highly tasty food. This phenomenon is anecdotally observable in humans, as well.

Recently, researchers have found that different flavors activate different parts of the OFC in humans (O’Doherty, Rolls, Francis, Bowtell, & McGlone, 2001). Thus, different tasty flavors seem to be registered by distinct parts of this brain structure as different kinds of pleasurable reward. This finding seems to point to the neurobiological basis of the phenomenon that we crave a variety of flavors, rather than just one (Rolls, 2005b).

Hormonal signals from the organs, such as leptin (from fat) and cholecystokinin (from the digestive tract), enter the brain and act on neurons in the hypothalamus to affect hunger and satiety. In the hypothalamus, neuropeptide Y and agouti-related protein stimulate hunger, whereas alpha-melanocyte-stimulating hormone reduces hunger. Opioids play a role in the pleasurable aspects of eating.

10.4.2 Affiliation and Attachment

While almost all organisms have social interactions with others of the same species, attachments formed between parents and young or between mates are only common in mammals and birds. Parent-offspring attachments, which can be thought of as motivations to be near the parent or the offspring, probably evolved in mammals and birds because these animals require extended parental care, including warmth and nourishment, during immaturity. Mating-pair bonds, which give rise to a long-term motivation to be near the mate, exist in species that cooperate in rearing their offspring. Interestingly, the majority of bird species form mating-pair bonds, but very few mammalian species do – humans being a notable exception.

In this section, we will cover the basic biopsychology of the parent-offspring bond and the mating-pair bond. We will also briefly discuss neurobiological aspects of other kinds of attachments, such as friendships.

10.4.2.1 Parent-Offspring Attachments

Maternal-offspring attachments have been extensively studied in the rat and the sheep. In these species, there is little or no paternal involvement in brood care – in fact, paternal involvement tends to be restricted to those mammals that form mating-pair bonds.

Rat pups cannot regulate their body temperature in infancy, so the dam (mother) spends much time huddled over them to provide warmth. She also nurses the young and retrieves pups that get separated from the rest of the litter. Male rats and nulliparous females (females that have not borne offspring) do not display these behaviors upon initial contact with pups. In fact, nulliparous females find the odor of rat pups aversive and avoid them.

How, then, do females develop the motivation to care for their young? Estrogen and progesterone levels are very high during pregnancy and set the stage for maternal behavior. As the levels of these hormones drop at the end of pregnancy, levels of prolactin and oxytocin rise – these two hormones released by the pituitary gland are necessary for lactation. The oxytocin surge at the end of pregnancy also induces the uterine contractions of labor. All of these hormones are needed for full expression of maternal behavior (Mann & Bridges, 2001). Nulliparous female rats or castrated male rats treated with progesterone and estrogen followed by prolactin and a jolt of oxytocin – mimicking the hormonal status of the end of pregnancy – engage in maternal behaviors toward pups as frequently as a dam that has just given birth. A major site of action for these hormones is the medial preoptic area (MPOA), a brain region in the hypothalamus that is also important for sexual behavior (Young & Insel, 2002; see Sect. 4.4 for more on the MPOA and sexual behavior). The hormones also influence the brain’s olfactory system (which handles perception of odor) such that the dams do not mind the odor of pups. There is evidence that hormones also affect the olfactory system in humans at the end of pregnancy: new mothers rate smells associated with human babies as less unpleasant than do nulliparous women or men (Fleming et al., 1993).

The same hormones are also necessary for maternal behavior in sheep, where oxytocin has an important function in early recognition of young. Sheep live in large herds, and a lactating ewe must allow her own lambs to nurse while keeping other lambs away. Without a sufficient oxytocin surge at the end of pregnancy, however, ewes will reject their own lambs as well. It turns out that oxytocin is needed for the ewe to learn to recognize the smell, sight, and sound of her lambs as distinct from others. Once this learning process is complete, oxytocin is no longer required for offspring recognition (Keverne & Kendrick, 1994; Kendrick, 2004).

In species where fathers help take care of the young, such as Siberian hamsters, tamarin monkeys, and humans, male animals undergo hormonal changes that facilitate paternal behavior toward the end of their mate’s pregnancy. Prolactin appears to be important for paternal behavior in many species, including humans, with both mothers’ and fathers’ prolactin levels increasing at the end of pregnancy. In male wolves, prolactin fluctuates seasonally, increasing in the season in which pups are born. Other hormonal changes also tend to echo those of females in pregnancy. For example, testosterone levels increase in both mothers and fathers in species that need to defend their pups against hostile intruders (Wynne-Edwards, 2001).

Hormones may serve to initiate parental behavior, but the hormones of pregnancy quickly subside, whereas the behavior, once learned, continues. Hormones like oxytocin may cause long-term changes in the nervous system that support attachment to one’s young and the motivation to care for them. Rats that have already had litters in the past provide better, faster maternal care than new mothers. In primates, learning may be even more important. Monkeys that have not grown up in a normal social environment show severely deficient maternal behavior in adulthood (Harlow & Harlow, 1966). One famed female chimpanzee raised in captivity had to be trained by humans to provide her infant with proper nursing and care (Matsuzawa, 2003). Clearly, in this species and most likely in humans, hormones alone do not suffice to produce maternal behavior or a bond to one’s offspring.

What about the bond of the infant to its parent(s)? When rat pups are separated from their dams, they show signs of distress, including ultrasonic vocalizations that alert the dam to the fact that the pup has become separated from the litter. Applying warmth to the pups calms them and makes them cease vocalizing. Injections of opioid peptides – brain chemicals involved in pleasure and suppression of pain – achieve the same effect. Similar effects have been seen in young dogs, chickens, and primates: opioid drugs reduce separation distress, even at doses too low to cause sedation or other effects (Nelson & Panksepp, 1998). More evidence for opioid involvement in affiliation and attachment is presented in Sect. 4.2.3, “Other Attachments.”

In many of the species studied, opioids and warmth are not the whole story. Rat pups prefer to huddle close to a warm object that smells of their particular dam, indicating that they can recognize their dam by smell (e.g., Sullivan, Wilson, Wong, Correa, & Leon, 1990). In other species, too, the young seem to form a particular attachment to their primary caregiver. For example, young dogs prefer their mother to other dogs, even in adulthood and even when they have not had contact with her for 2 years (Hepper, 1994). In primates, including humans, infants quickly learn to recognize and prefer to be with their primary caregiver(s) (e.g., Porter, 1998). Again, it is thought that hormones like oxytocin may play a role in the formation of these bonds by facilitating long-term changes in the nervous system, which persist (along with the bond) after the hormones have subsided.

10.4.2.2 Mating-Pair Bonds

The best-studied neurobiological animal model of pair bonding is the prairie vole. When these small rodents mate for the first time, the pair forms an attachment that lasts until one of the animals dies. They live in a nest together, both participate in rearing their young, and they continue to mate with each other and to produce young in subsequent seasons. When separated, the voles exhibit considerable distress, similar to that experienced by infants of many mammalian species during separation from the mother.

Oxytocin and a closely related hormone, vasopressin, are crucial for the formation of this pair bond. Oxytocin and vasopressin levels surge during mating. As in the case of mother sheep learning to recognize their young, these hormones establish an attachment to the mate, which persists – represented in long-term changes in the brain – long after hormone levels have returned to normal. Experimentally blocking oxytocin/vasopressin effects in the brains of voles before their first mating prevents the formation of a pair bond. Conversely, pair bonds can be formed without mating by injecting these hormones into the brains of a pair of animals. Oxytocin seems to be the key hormone in females and vasopressin in males (Insel 1997; Insel, Winslow, Wang, & Young, 1998), although more recent research implicates oxytocin in pair bonding in both sexes.

While prairie voles form pair bonds, a closely related species, montane voles, do not. Like many other mammals, montane voles mate with multiple partners, and only the females care for the young. The difference between these two species lies in the pattern of oxytocin and vasopressin receptors in the brain. Pair-bonding prairie voles have many oxytocin and vasopressin receptors in the nucleus accumbens and ventral pallidum, areas of the brain involved in reward. The oxytocin and vasopressin released when two animals mate for the first time act at these brain sites, permanently changing the dopamine (reward learning) system such that being with the mate becomes rewarding. In a sense, after mating, the brain develops an “addiction” to the mate (Keverne & Curley, 2004).

Does oxytocin underlie pair bonding in other species, such as humans? Although some researchers have speculated this to be the case (e.g., Taylor et al., 2000), conclusive evidence is still lacking. It is clear that humans do not form attachments in the same way as prairie voles: in our species, a single sex act does not lead to a life-long commitment! Nonetheless, oxytocin may play a role in the formation of bonds or attachments in humans. As in other mammals, oxytocin levels rise during sex (in particular, at orgasm) and during massage or other soothing tactile contact (Uvnas-Moberg, 1998). This oxytocin increase may facilitate bonding. Moreover, brain-imaging studies have revealed comparatively greater activity in the ventral striatum – a region encompassing reward-related circuitry, such as the nucleus accumbens  – when people view photos of their significant other or own children than when they are shown photos of acquaintances or of other children (Bartels & Zeki, 2000, 2004). Thus, the reward circuitry that is crucial for vole pair bonding also seems to play a role in human attachment.

10.4.2.3 Other Attachments

Mating bonds and parent-offspring bonds are not the only attachments that animals form. Individuals of many species show signs of stress and pathology if isolated. Rodents, canines, and primates, for example, tend to live in close-knit groups and have strong motivations for contact and interaction with others in their group. In primates, in particular, attachments can form between unrelated, non-kin individuals. These are often supported by mutual grooming, which serves to strengthen ties and to soothe distressed apes. Motivation to be groomed seems to involve beta-endorphin, a naturally occurring opioid. Levels of this opioid in the nervous system rise during grooming, and individuals seek out grooming when opioid levels are low (Keverne, Martensz, & Tuite, 1989; see also Taira & Rolls, 1996).

Some studies suggest that opioids are involved in human affiliation, as well. After viewing an affiliation-related movie, people high in a “social closeness” trait felt more affiliative and had higher tolerance to heat-induced pain (opioids help to reduce pain). Both of these effects were blocked by naltrexone, an opioid antagonist (Depue & Morrone-Strupinsky, 2005). These findings suggest that the affiliation-related movie caused an increase in opioid release in this group of people.

Oxytocin has social functions beyond parent-infant and pair bonds, including an important role in social memory. When mice lacking the gene for oxytocin encounter a familiar mouse, they behave in the same way as they would with a stranger. When the missing oxytocin is replaced in their brains, they learn who is who in the same way as normal mice (Winslow & Insel, 2002).

Study

Oxytocin Associated with Trust Toward Strangers

Some intriguing studies suggest that oxytocin also plays a role in the trust that humans show toward strangers. Participants in one experiment played an economic game in which Player 1 was given a sum of money, some of which he or she could entrust to Player 2, in whose hands the money would triple. Player 2 then returned an amount of his or her choice (which might be nothing at all) to Player 1. It emerged that Player 2s who received higher sums of money from Player 1s had higher blood levels of oxytocin; likewise, oxytocin levels were related to how much money Player 2s returned to Player 1s (Zak, Kurzban, & Matzner, 2005). In a follow-up study, one group was given a dose of oxytocin intranasally (some small molecules like oxytocin are able to enter parts of the brain, such as the hypothalamus, via the nose), and another group received a placebo. In the oxytocin group, Player 1s entrusted more money to Player 2s (Kosfeld, Heinrichs, Zak, Fischbacher, & Fehr, 2005). In both studies, when people played the game with a computer that allocated money at random, oxytocin had no relationship to money received or given. This suggests that oxytocin actually increases the ability of humans to trust others.
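For readers unfamiliar with this paradigm, the sketch below spells out the payoff structure of the trust game as described above. The tripling of the transferred amount follows the study description; the endowment size and the example decisions are arbitrary illustrations, not data from Zak et al. (2005) or Kosfeld et al. (2005).

```python
# Payoff structure of the trust game (illustrative parameters).
def trust_game(endowment, amount_sent, fraction_returned):
    """Return (player1_payoff, player2_payoff) for one round of the trust game."""
    assert 0 <= amount_sent <= endowment
    assert 0.0 <= fraction_returned <= 1.0
    tripled = 3 * amount_sent                 # money grows in Player 2's hands
    returned = fraction_returned * tripled    # Player 2 chooses how much to give back
    player1 = endowment - amount_sent + returned
    player2 = tripled - returned
    return player1, player2

# A trusting Player 1 paired with a reciprocating Player 2 ...
print(trust_game(endowment=10, amount_sent=10, fraction_returned=0.5))   # (15.0, 15.0)
# ... versus a distrustful Player 1 who keeps everything.
print(trust_game(endowment=10, amount_sent=0, fraction_returned=0.0))    # (10.0, 0.0)
```

The example makes the dilemma explicit: trusting can leave both players better off, but only if Player 2 reciprocates, which is exactly the behavior the oxytocin measures and manipulations were related to.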

The hormones estrogen, progesterone, prolactin, and oxytocin are involved in the initiation of maternal behavior. Similar hormones are also involved in paternal behavior. In mothers, oxytocin facilitates early recognition of and bonding with offspring. Oxytocin and vasopressin are also necessary for the formation of pair bonds. Once an attachment has been formed, these hormones are no longer needed to sustain the bond. Opioids are involved in the attachment of an infant to its parent, as well as in affiliation in primates.

10.4.3 Dominance

Most animals not only have to evade predators, find sustenance, and gain access to a mate to survive as individuals and as sets of genes; they also have to compete with members of their own species to secure resources necessary for survival. Behaviors directed at defeating others in resource competitions are called dominance behaviors, and they often give rise to relatively stable dominance hierarchies within a group.

10.4.3.1 Mechanisms and Benefits of Dominance

Dominance issues are most obviously at stake when the males of a species compete with each other for a mate. The competition can be carried out intrasexually, with the aim of defeating other males and keeping them away from females, and/or intersexually, with the aim of attracting the attention of a female by advertising genetic fitness. In Darwin’s (1871) own words, this is the difference between “the power to conquer other males in battle” and “the power to charm females.” The two often go hand in hand, e.g., when a male’s large body size makes him more likely to win fights with other males and more attractive to females (Wilson, 1980).

Dominance extends beyond assertiveness and success in the mating game, however, and often involves privileged access to other resources, such as food or protected nest sites. In some species, including many birds, dominance is a relevant attribute only during mating and has to be renegotiated every mating season; in others, particularly animals living in social groups, dominance rank is a more stable individual attribute, determined and changed in occasional violent fights and reinforced frequently by nonviolent signals of dominance (e.g., a warning stare, bared teeth) and submission (e.g., exposure of the throat area in dogs and wolves).

The establishment of stable dominance hierarchies within a social group benefits both the “top dog,” the alpha animal at the top of the hierarchy, and the lower-ranking animals (Wilson, 1980). A stable dominance hierarchy means that all group members can save energy by adhering to a pecking order at the food trough – there is no need to fight over who gets the first pick at each feeding occasion. In many species, the dominant animal actively enforces peace among subordinate group members by breaking up fights. Although dominant animals are usually more successful at procreating, subordinate members also get to promote their genes, either by “sneak copulations” or by helping dominant animals with whom they share genetic ties to raise their offspring.

In humans, of course, things are more difficult, because it is much harder to pinpoint one specific dominance hierarchy that is binding for all. A student in a course may be subordinate to the high-expertise professor. Yet that professor may rank rather low among his or her colleagues in the department, whereas the student may be an undefeated ace on the tennis court and excel in the college debating society. Thus, humans’ dominance ranks are much more fluid than other animals’, reflecting the fact that each of us is a member of many different groups, not just one.

10.4.3.2 Brain Correlates of Dominance

The biopsychological roots and correlates of dominance have been extensively studied in the rat, biopsychology’s favorite animal model (Albert, Jonik, & Walsh, 1992). A male rat tries to establish or maintain dominance by launching an attack that involves pushing an intruder with his hind legs or flank and then chasing him away. He also shows piloerection; i.e., the hair on his body rises to make him look bigger and more intimidating. This pattern of lateral attack and piloerection is also observed in rat mothers trying to protect their pups. A network centered on the anterior nucleus (AN) of the hypothalamus plays a critical role in lateral attack and piloerection and thereby in rats’ dominance behavior (Albert et al., 1992; see also Delville, DeVries, & Ferris, 2000). If the AN is lesioned, lateral attack is no longer displayed against intruders; if it is stimulated, lateral attack can be elicited much more quickly and is more intense. This effect is particularly strong in the presence of high levels of testosterone in males or testosterone and estradiol in females. The hypothalamus interacts with other brain areas involved in incentive motivation and reward learning to regulate dominance behavior. For instance, lesions of the nucleus accumbens decrease rats’ inclination to attack intruders (Albert, Petrovic, Walsh, & Jonik, 1989). Conversely, elevated levels of gonadal steroids like testosterone and estradiol facilitate motivation to attack intruders in nonlesioned rats by binding to steroid receptors and thereby increasing transmission at dopaminergic synapses in the accumbens (Packard, Cornell, & Alexander, 1997). Some more recent work has also started to examine dominance motivation in the human brain. For instance, one study has shown that viewing facial expressions that signal a dominance challenge (anger), relative to non-challenging expressions, is associated with activation of the striatum and the insula, a part of the cortex that is involved in the affective processing of somatic responses (Craig, 2009), in individuals with a strong need for power (Schultheiss et al., 2008; Hall, Stanton, & Schultheiss, 2010). This suggests that individuals with a strong disposition to seek dominance respond to dominance challenges with an activation of their incentive motivation system, whereas individuals lacking this need do not.

10.4.3.3 Dominance and Aggression

At this point, a word of caution is in order about the relationship between dominance and aggression. First, aggression is just one way of attaining and securing dominance in many species, a fact that may be obscured by a narrow focus on the rat as an animal model of dominance. Aggressive and violent behavior as a means of attaining dominance often backfires in primate groups and is almost universally outlawed in humans. Work on primates suggests that high levels of the neurotransmitter serotonin, which has a restraining effect on impulsive aggression, promote the attainment of high social rank (Westergaard, Suomi, Higley, & Mehlman, 1999). Thus, considerable social finesse is required to become dominant, and in humans more than in most other species, nonaggressive means of achieving dominance have become critical for social success.

Second, not all forms of aggression are related to dominance (Panksepp, 1998). Besides the type of offensive aggression associated with dominance in many species, there is also defensive aggression elicited by threat and predatory attack directed against prey. The latter two are mediated by brain systems other than those we have described for offensive aggression; they serve very different functions, and they are not influenced by hormone levels.

Thus, it would be a mistake to equate dominance with aggression, because many forms of dominant behavior (particularly in higher mammals) are not overtly violent or aggressive, and some forms of aggression have nothing to do with dominance.

10.4.3.4 Hormonal Factors in Dominance Behavior

As indicated by the facilitating effect of gonadal steroids on AN-mediated offensive aggression, hormones play a key role in dominance interactions. In many species, including humans, high levels of testosterone facilitate aggressive and nonaggressive dominance behaviors (Nelson, 2011). For instance, seasonal variations in testosterone levels are strongly associated with seasonal changes in aggression and territorial behavior in many species: when testosterone is high, aggression is high. As testosterone production increases in male mammals and birds around puberty, there is a concomitant increase in aggression; castration abolishes both increases. In humans, male and female prisoners with high testosterone levels have been observed to engage in more aggressive behavior and rule infractions, although cause and effect are not clear here, since aggressive behavior can itself boost testosterone (see below) (Dabbs, Frady, Carr, & Besch, 1987; Dabbs & Hargrove, 1997). In most species, individuals high in testosterone are more likely to engage in battles for dominance.

However, a recent study in which testosterone or a placebo was administered to research participants underscores our caveat that dominance and aggression should not be equated (Eisenegger, Naef, Snozzi, Heinrichs, & Fehr, 2010). Participants played a game in which they were given money and could pass a share of this money on to another player. It was up to them how big a share they wanted to give. The other player could only accept the share or reject it. If the latter happened, neither player retained any money. Thus, the second player had a “veto” over the decision of the first player, and second players typically exercise this veto if they perceive an offer as unfair. Contrary to the folk wisdom that testosterone equals aggression, testosterone-treated players offered fairer shares (i.e., closer to 50%) than placebo-treated players. After ruling out other explanations for this finding, the authors argued that this behavior protects the elevated dominance status of the money-giving player over the receiving player, because the latter could turn the tables by rejecting an offer. By making offers that are less likely to be rejected, the money-giving player remains the decision-maker.
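
The incentive structure of this game (a version of the classic ultimatum game) can be made concrete with the minimal Python sketch below; the function name, the endowment of 10 units, and the example offers are illustrative assumptions on our part and are not taken from Eisenegger et al. (2010).

    # Illustrative payoff rule for a single round of the bargaining game described
    # above (an ultimatum game). All amounts are hypothetical.
    def payoffs(endowment, offer, accepted):
        """Return (proposer_payoff, responder_payoff) for one round."""
        if not 0 <= offer <= endowment:
            raise ValueError("offer must lie between 0 and the endowment")
        if accepted:
            # The proposer keeps the remainder; the responder gets the offered share.
            return endowment - offer, offer
        # A rejection ("veto") leaves both players with nothing.
        return 0, 0

    # A fair, accepted offer benefits both players; a rejected offer wipes out both payoffs.
    print(payoffs(10, 5, accepted=True))    # -> (5, 5)
    print(payoffs(10, 2, accepted=False))   # -> (0, 0)

The sketch also makes plain why a fairer offer can be the dominance-preserving move: a rejected offer costs the proposer everything, including the role of decision-maker.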

Success or defeat in dominance contests in turn leads to increased or decreased levels of testosterone. Elevated levels of testosterone have been observed, for instance, in winners of sports competitions, chess matches, and even simple games of chance, whereas losers’ testosterone typically decreases (Mazur & Booth, 1998). These differences in testosterone responses to contest situations even extend to contests that are merely observed: research has shown that after a democratic election, supporters of the winning candidate have stable or increased testosterone, whereas supporters of the losing candidate have decreased testosterone (Stanton, Beehner, Saini, Kuhn, & LaBar, 2009). Thus, the relationship between testosterone levels and dominance outcomes is a two-way street, in which testosterone levels influence dominance seeking and the results of this behavior in turn affect testosterone levels (Mazur, 1985; Oyegbile & Marler, 2005).

Although basal levels of gonadal steroids like testosterone are usually under hypothalamic control (the hypothalamus regulates the release of hormones from the pituitary, which in turn regulates the release of hormones such as testosterone from glands in the body), this mechanism is relatively sluggish, and changes can take an hour or more. The testosterone increases and decreases typically observed in winners and losers of dominance contests, however, occur within 10–20 minutes, much faster than hypothalamic control would permit. So what drives these rapid changes in testosterone levels?

Robert Sapolsky (1987) solved this riddle in a series of elegant field experiments with wild-living baboons in Kenya. He exposed both high-ranking and low-ranking male baboons to stress by darting and immobilizing them (baboons, like many other mammals, experience immobilization as stressful). Sapolsky observed that, within minutes, low-ranking animals showed a drop in testosterone, whereas high-ranking animals’ testosterone surged. To find out what explained these differences in testosterone response to a stressor, he next applied a variety of hormone agonists and antagonists and studied their effect on testosterone release. Sapolsky observed a greater increase in the stress hormone cortisol in low-ranking than in high-ranking baboons; moreover, administration of dexamethasone (a cortisol-like substance) suppressed testosterone release in all animals by making the testosterone-producing cells in the testicles less sensitive to signals from the pituitary. In contrast, administration of a substance that inhibited the release of the sympathetic catecholamines epinephrine and norepinephrine (also called adrenaline and noradrenaline) abolished the post-stress testosterone increase in high-ranking baboons, which suggests that these hormones normally have a stimulating effect on testicular testosterone release. Sapolsky concluded from these findings that the balance between cortisol, which is more likely to be released in response to overwhelming stressors, and sympathetic catecholamines, which are released very quickly in response to stressors that are perceived as manageable, has a rapid and direct effect on testosterone. If the cortisol response to a stressor outweighs the catecholamine response, testosterone levels dip quickly – an outcome that is more likely in low-ranking, powerless animals. If the catecholamine response to a stressor outweighs the cortisol response, testosterone increases – a typical outcome for dominant animals that are used to calling the shots.
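
As a rough qualitative illustration of the balance Sapolsky proposed, the Python sketch below returns the expected direction of the rapid testosterone change from the relative strength of the two stress responses; the function, its inputs, and its threshold logic are a simplification introduced here for illustration, not a quantitative model from Sapolsky’s work.

    # Toy qualitative model of the mechanism described above (illustrative only):
    # the direction of the rapid testosterone change depends on whether the
    # sympathetic catecholamine response or the cortisol response to a stressor
    # prevails. Variable names and values are hypothetical.
    def rapid_testosterone_change(catecholamine_response, cortisol_response):
        if catecholamine_response > cortisol_response:
            return "increase"        # stimulatory drive on the testes prevails
        if cortisol_response > catecholamine_response:
            return "decrease"        # inhibitory drive on the testes prevails
        return "no clear change"

    # Hypothetical values: a dominant animal mounting a brisk catecholamine response
    # and only a modest cortisol response would be expected to show a testosterone surge.
    print(rapid_testosterone_change(catecholamine_response=0.8, cortisol_response=0.3))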

These findings from a relatively unusual darting-and-immobilization procedure mirror exactly what Sapolsky and others have observed in many mammalian species. Often, dominant and nondominant animals do not differ substantially in their basal testosterone levels (Sapolsky, 1987; Wingfield et al., 1990). When they are challenged, however, dominant animals respond with a rapid increase in testosterone, which increases muscle energy and aggressiveness and thus makes them more likely to win the fight, whereas nondominant animals respond with a testosterone decrease, lowering their pugnacity and thus their likelihood of getting hurt in a fight. In humans, high levels of implicit power motivation may be the equivalent of dominant status in animals (Schultheiss, 2007; Stanton & Schultheiss, 2009). Power-motivated people respond to dominance challenges in which they can keep the upper hand with increased sympathetic catecholamines and decreased cortisol (Wiemers, Schultheiss, & Wolf, 2015; Wirth, Welsh, & Schultheiss, 2006). The net result is a testosterone increase within 15 minutes of the challenge. In contrast, low-power individuals respond to dominance challenges with increased cortisol levels and low catecholamine levels, suggesting that, even when they are able to keep the upper hand, they feel stressed and uncomfortable in the situation. The result is a drop in testosterone (Schultheiss, Wirth, Torges, Pang, Villacorta, & Welsh, 2005).

Excursus

Dominance

Dominance behaviors are aimed at gaining privileged access to resources that ensure the individual’s personal and genetic survival. Established dominance hierarchies benefit both dominant and subordinate members of a group by lowering the incidence of energetically costly fights over resources. Dominance is not synonymous with aggression: while offensive, hormone-dependent forms of aggression clearly play a role in the establishment of dominant status, dominance also encompasses nonaggressive behaviors, and predatory and defensive aggression are typically unrelated to dominance. Dominance motivation is supported by the anterior nucleus of the hypothalamus and its interconnections with brain substrates of incentive motivation, and by high levels of gonadal steroids such as testosterone and estradiol, which facilitate signal transmission in brain structures related to dominance motivation. In many species, high testosterone facilitates dominance and aggression, and the outcomes of dominance encounters cause rapid changes in testosterone, particularly in males, with winners registering an increase and losers a decrease. These testosterone changes are triggered by the effects of stress hormones on the gonads: elevated cortisol levels inhibit testosterone release, whereas elevated levels of sympathetic catecholamines stimulate it. In humans, high levels of implicit power motivation predispose individuals to respond to dominance challenges with low cortisol, elevated sympathetic catecholamines, and increased testosterone, whereas low-power individuals respond with increased cortisol, low sympathetic catecholamines, and decreased testosterone.

10.4.4 Sex

The need for sex gives rise to one of the most potent and most peculiar of all motivational systems. One does not have to be a Freudian to recognize that much of what goes on in the lives of humans and other animals revolves around sexual reproduction. At the same time, not having sex does not threaten our survival as individuals in the same way as not having food, water, or social protection does. But given that the transmission of genes to offspring is the ultimate and perhaps most magnificent goal of all sexually reproducing animals, extending an unbroken, billion-year-old chain of life by another generation, it makes sense that evolution ensured that no living being would forget about procreating by making the sexual urge an extremely powerful one. In the following, we review how sexual motivation is shaped by the interaction of biological factors and experience.

10.4.4.1 Developmental Origins of Sex and Gender

Although, for birds and mammals, biological sex initially resides in the genes, the gonads take over fairly early in fetal development. For the rest of our lives, the gonads govern sexual behavior to a large extent, partly through their permanent (organizational) effects on the developing brain and partly through their temporary (activational) effects on the adult brain (Nelson, 2011). If a gene that sits on the Y chromosome, and is therefore present only in males, is expressed early in fetal development, testes develop and start producing testosterone and other androgenic hormones, leading to male body morphology (e.g., development of male genitals) and brain organization. If the gene is not expressed, as is the case in females, who do not carry the Y chromosome, ovaries develop. Because ovaries release almost no hormones during fetal development, the brain and the body develop in the female mode. It should be noted that sexual development is not all or none, either male or female. Rather, different parts of the body and of the brain are influenced by the interplay of hormones, hormone-metabolizing enzymes, and the expression of hormone receptors at different times during intra- and extrauterine development, which can lead to variations in the fit between “brain sex” (sexual identity, sexual preferences) and body sex. Thus, although in many cases male body sex is associated with a male sexual identity and a preference for female partners, and female body sex is associated with a female sexual identity and a preference for male sexual partners, this is by no means a certain outcome, and variations (e.g., transsexuality, homosexuality) do occur (LeVay & Hamer, 1994; Panksepp & Biven, 2012).

10.4.4.2 Hypothalamic Command Centers of Sexual Behavior

The differential “marinating” of the brain in gonadal hormones during fetal development leads to differences in the organization of hypothalamic control of sexual behavior. These differences, and their effects on sexual motivation and behavior, have been most thoroughly studied in rats (Nelson, 2011; Panksepp, 1998). In female rats, the key command center of sexual behavior is the ventromedial nucleus (VMN) of the hypothalamus. If this nucleus is lesioned, female rats will not show any interest in mating with a male, as reflected in the absence of proceptivity (the active solicitation of male sexual interest) and receptivity (the readiness to allow males to mate with them). In rats, receptivity is easily observable as a behavior called lordosis, which consists of the female arching her back and deflecting her tail to allow the male to copulate with her. Electrical stimulation of the VMN, on the other hand, can trigger both proceptivity and receptivity, but only in the presence of the gonadal steroids estrogen and progesterone, which bind to steroid receptors in the VMN and are released during the fertile phase (estrus) of the rat’s estrous cycle. Of course, the central coordinating function of the VMN is functionally integrated with the operation of brain structures that support incentive motivation more generally. For instance, female rats in estrus show increased dopamine (DA) release in the nucleus accumbens at the sight of a male rat, and this increased DA release reflects increased motivation to approach the male (Pfaus, Damsma, Wenkstern, & Fibiger, 1995).

The key command center of male sexual behavior is the medial preoptic area (MPOA) of the hypothalamus, which, as a result of organizational effects of gonadal steroids, is larger in males than in females. MPOA lesions in males lead to an inability to copulate, whereas electrical stimulation of the MPOA makes male rats ejaculate earlier than normal. Testosterone treatment in castrated male rats restores normal levels of neuronal firing in the MPOA. As in females, the hypothalamic control of sexual behavior in males is integrated with general-purpose motivational brain systems and hormonal factors. In a series of elegant studies, Everitt (1990) showed that MPOA lesions led to a loss of copulatory ability, while sexual motivation remained intact (e.g., animals continued to bar-press for access to females). Conversely, if the basolateral amygdala was lesioned and the MPOA was spared, animals were no longer motivated to gain access to a female in estrus but were able to copulate with her once placed on top of her. Likewise, a reduction of DA transmission in the mesolimbic DA system led to a decrease in sexual motivation but did not affect copulatory ability. Notably, castration, which leads to an almost complete loss of testosterone, impaired both sexual motivation and copulatory ability.

10.4.4.3 Hormonal Factors in Sexual Motivation

This last finding suggests that hormones, which bring about the differential organization of the hypothalamus in males and females in the first place, later play a key role in sexual motivation. Even with a fully functional brain, sexual behavior in mammals and other species is strongly dependent on sufficient levels of gonadal steroids (i.e., testosterone, estrogen, and progesterone; Nelson, 2011). In females of many species, including our own, the initiation of sexual activity coincides with the high-estrogen phase of the reproductive cycle (Wallen, 2001; note, however, that in most other species, females not in estrus show no sexual interest at all). Removal of the ovaries leads to a loss of sexual appetite, which can be restored through the administration of estrogen (Zehr, Maestripieri, & Wallen, 1998). Similarly, male sexual motivation in humans and other species depends on sufficiently high levels of testosterone (Nelson, 2011). Notably, in many parts of the brain, testosterone needs to be converted to estrogen before it can have an effect on behavior, and studies have shown that male sexual motivation requires both testosterone itself and estrogen derived from testosterone in the brain (Baum, 1992).

The release of gonadal steroids does not just fuel sexual motivation but can itself be the outcome of a motivational process. For instance, research on rats has shown that conditioned sexual cues can trigger the release of testosterone in males (Graham & Desjardins, 1980). By the same token, a study with human subjects revealed that heterosexual men experience a transient testosterone surge when they meet an attractive woman (Roney, Lukaszewski, & Simmons, 2007). Conversely, being committed to a romantic partner is associated with reduced testosterone in men, perhaps as a safeguard against aggression within the relationship and against the lure of potential partners outside it (Gray et al., 2004).

10.4.4.4 Learned Sexuality

Findings about the roles of the hypothalamus and hormone levels in sexual motivation may be taken to suggest that sexual motivation is a purely biological phenomenon that is not influenced by environmental factors. However, biopsychologists have collected ample evidence that sexual behavior is strongly dependent on social learning processes, to the extent that some researchers even speak of “learned sexuality” (Woodson, 2002).

The conditioned hormone release effect described above is one example of learned sexuality. Moreover, rats reared in social isolation show clear deficits in sexual motivation and copulatory performance later in adulthood, and even animals that were reared socially need to learn, through Pavlovian and instrumental conditioning processes, how to tell male from female, what types of signals a potentially willing partner sends, and how to copulate appropriately. Even something as “biological” as male sperm production is amenable to learning: male Japanese quails release more spermatozoa and a greater overall volume of semen during copulation if they have previously been exposed to a Pavlovian-conditioned sexual cue that stimulates sperm production in the gonads in a preparatory fashion before copulation (Domjan, Blesbois, & Williams, 1998). This dependence of sexual behavior on learning may also explain why, in species whose behavior is particularly open to learning, such as humans, sexual motivation and performance can remain intact for a long time even after a sudden loss of gonadal function. It may also explain why the females of our species and of some other primates (e.g., the bonobo) show sexual motivation and behavior even during low-estrogen, nonfertile phases of the reproductive cycle.

Hormonal factors play a critical role in the organization of gendered body morphology and brain structures during development. After maturation, sexual motivation and performance depend on the activational effects of gonadal steroids. The ventromedial nucleus and the medial preoptic area are the hypothalamic control centers for sexual behavior (particularly copulation) in females and males, respectively, and are functionally integrated with the brain’s incentive motivation network (i.e., amygdala, striatal dopamine system). Adaptive sexual behavior also depends on learning processes that allow organisms to learn about and discriminate sexual cues and to acquire behaviors that are instrumental for successful mating.

10. Conclusion

In this chapter, we have sought to provide an overview of the biopsychology of motivation, an incredibly vast, multifaceted, fascinating, and lively field of study that is often overlooked by social-cognitive motivation psychologists, who tend to rely primarily on self-report measures and experimental studies with humans. As a consequence, with relatively few exceptions, the biopsychological and social-cognitive approaches to the study of motivation have long pursued quite separate research agendas, with the former exploring the brain correlates of basal needs such as hunger, sex, or affiliation and the latter examining people’s goals, self-views, attributions, and information-processing biases. However, the fact that we were able to weave numerous studies involving human subjects into this chapter suggests that the divide between the two fields of motivation research is gradually vanishing. It is our hope that, as biopsychologists become more interested in the way that fundamental motivational needs play out in the human brain, human motivation researchers will become more interested in how motivational processes and constructs that are uniquely human are “embrained” and embodied.

Review Questions

  1. Describe three research strategies that are frequently used in the biopsychology of motivation. What are these strategies almost always combined with?

    Biopsychological research on motivation often uses (1) lesioning techniques to study the contributions of specific brain areas to a behavior, (2) recording techniques (e.g., single-cell recording, in vivo dialysis) to study the behavior of specific neurons, and (3) pharmacological manipulations of synaptic signal transmission to study the role of specific transmitter systems. These strategies are almost always combined with behavioral methods (e.g., Pavlovian or instrumental learning procedures) to illuminate the contributions of specific brain areas or transmitter systems to specific cognitive or behavioral functions

  2. What are the hallmarks of motivation from the perspective of biopsychology?

    Motivation is based on the (anticipated) experience of pleasure or displeasure upon encountering an incentive or a disincentive as a common currency for prioritizing possible courses of action. Motivated behavior can be directed toward the attainment of rewards (approach motivation) or away from punishers (avoidance motivation). Motivation consists of two distinct phases: a motivational phase proper, during which the individual engages in the pursuit of a reward (or avoidance of a punisher), and an evaluation phase, during which the individual consummates the reward and evaluates its “goodness.” Although there are many different classes of reward (e.g., food, sex, dominance), they can all engage similar motivational processes (e.g., response invigoration, learning). Motivated behavior changes its goals dynamically, depending on how recently a given need has been satisfied and what kinds of incentives are available in a given situation. Motivation can be induced through a physiological need, the presence of incentive stimuli, or both. Motivation makes use of, and shapes, learning of stimulus-stimulus (Pavlovian conditioning) and means-end (instrumental conditioning) relationships. Biopsychological approaches to motivation do not assume that motivation requires conscious awareness but acknowledge that specialized brain systems support the conscious setting and execution of goals in humans

  3. What is a key function of the amygdala in motivation?

    The amygdala forges associations between affectively neutral stimuli (CS) and the affectively charged events or stimuli (US) that they reliably predict. In the process, the predictive stimuli take on affective meaning themselves and can induce motivational states. The amygdala thus acts as a motivational “homing-in” device that allows individuals to adjust their physiological states and overt behavior to cues that predict the occurrence of unconditioned rewards and punishers and bring them closer to the former or distance them from the latter

  4. What is the key function of the striatum in motivation?

    The striatum has two main functions in motivation, both mediated by the neurotransmitter dopamine: the ventral striatum is critical for reward-driven invigoration of behavior, whereas the dorsal striatum plays a key role in learning about action-outcome contingencies and selecting behaviors that are instrumental for obtaining rewards (or avoiding punishers)

  5. What is the key function of the orbitofrontal cortex (OFC) in motivation?

    The OFC evaluates the “goodness” of primary and secondary (i.e., learned) rewards based on the individual’s current need state, learning experiences, and previous exposure to the reward.

  6. What is the key function of the lateral prefrontal cortex (LPFC) in motivation?

    The LPFC guides behavior through the formulation of complex, verbally represented goals and plans for their implementation. It also influences behavior by regulating the output of the brain’s incentive motivation network and can shield explicit goals from interference by incentive-driven motivational impulses

  7. What is the difference between active and passive avoidance? Which structure of the motivational brain plays a critical role in the former but not in the latter?

    The difference between passive avoidance and active avoidance is that in the former, behavior is inhibited in order to avoid a punisher, whereas in the latter, behavior is executed in order to attain safety. Functions of the mesolimbic dopamine system play a critical role in active but not passive avoidance

  8. What is alliesthesia? Give an example.

    Alliesthesia is the changing subjective evaluation of a reward over repeated exposures or across changing stimulus contexts. For instance, most people experience one piece of chocolate as quite tasty and pleasant but would respond with nausea and aversion after eating a pound of it

  9. Imagine you have just finished a large meal. Describe the signals sent to your hypothalamus to indicate that you are full and how neuropeptide systems in the hypothalamus would respond.

    Leptin levels increase in the bloodstream; levels of CCK from the gut also rise. CCK sends signals to the vagus nerve. Leptin and the CCK signal from the vagus nerve act on the hypothalamus to increase the activity of α-MSH neurons and decrease the activity of NPY neurons

  10. How do opioids and NPY differ in their control of food intake/motivation to eat?

    NPY is involved in hunger driven by energy needs. NPY causes animals to prefer the most calorically dense food available, even at the expense of taste. Opioids are involved in motivation to eat for pleasure. Opioids drive animals to choose the tastier option, at the expense of calories/energy

  11. Describe one role of opioids in affiliation or attachment.

    Any of the following: (a) Opioids reduce distress in infant mammals separated from their mothers, implicating opioid systems in infant-to-parent attachment. (b) In primates, opioids are involved in motivation to engage in mutual grooming. (c) In humans, opioid systems may be involved in feelings of affiliation, as evidenced by higher pain tolerance in people high in a “social closeness” trait after they watched an affiliation-related movie, an effect that was blocked by an opioid antagonist

  12. Describe the role of oxytocin in parent-offspring attachments and pair bonds. Is oxytocin necessary for the initiation of attachment? For the maintenance of the attachment? Is it sufficient?

    High oxytocin levels in the bloodstream are necessary for the formation of parent-offspring attachments and pair bonds. However, oxytocin is not sufficient – other hormones and learning factors are also necessary. Oxytocin is not necessary for the maintenance of the attachment once it has been formed

  13. What is the difference between intrasexual and intersexual competition?

    Intrasexual competition occurs when members of one gender fight or compete with each other to establish who will be allowed access to members of the other gender, whereas intersexual competition occurs when members of one gender vie, as potential mates, for the attention and acceptance of members of the other gender

  14. What is the relationship between dominance and aggression?

    Aggression is one form of dominance behavior. However, not all forms of aggression serve dominance functions (e.g., predatory or defensive aggression are not aimed at dominance), and dominance also encompasses nonaggressive behaviors, which are particularly critical for success in primate species

  15. Which hypothalamic structure plays a critical role in dominance, and how can this be demonstrated?

    The anterior nucleus (AN) of the hypothalamus plays a critical role in dominance, as assessed by piloerection and lateral attack. If the AN is lesioned, dominance behavior ceases; if the AN is stimulated, dominance behavior is facilitated

  16. What is the relationship between dominance and gonadal steroid hormones?

    High levels of gonadal steroids (primarily testosterone but also estradiol) facilitate dominant and aggressive behavior, and success in dominance interactions can in turn increase gonadal steroid levels. Thus, the relationship between dominance and gonadal steroids is reciprocal

  17. Which mechanism drives the rapid testosterone changes observed in the context of male dominance challenges?

    In males, rapid changes in testosterone release are governed by the stimulatory effects of sympathetic catecholamines (norepinephrine and epinephrine) and the inhibitory effects of cortisol on the testes. In dominant individuals, the effect of sympathetic catecholamines outweighs that of cortisol, producing a net increase in testosterone. In nondominant individuals, the effect of cortisol outweighs that of the sympathetic catecholamines, leading to a net decrease in testosterone

  18. Which hypothalamic centers regulate male and female sexual behavior, and which specific aspects of sexual behavior are particularly dependent on these centers?

    The ventromedial nucleus (VMN) and the medial preoptic area (MPOA) are the hypothalamic control centers for sexual behavior in females and males, respectively. In females, both proceptivity (active solicitation of male sexual interest) and receptivity (readiness to allow males to mate with them) depend on an intact VMN and sufficiently high levels of estradiol and progesterone. In males, copulatory ability depends on an intact MPOA and sufficiently high levels of testosterone, whereas sexual motivation does not depend on the MPOA

  19. What evidence is there to suggest that hypothalamic control centers of sexual behavior are functionally integrated with other structures of the brain’s incentive motivation network in sexual motivation?

    Female rats in estrus show increased dopamine (DA) release in the nucleus accumbens at the sight of a male rat, and this increased DA release reflects increased motivation to approach the male. In males, a reduction of DA transmission in the mesolimbic DA system leads to a decrease in sexual motivation but does not affect copulatory ability. Moreover, MPOA lesions lead to a loss of copulatory ability in males, while sexual motivation remains intact. Conversely, if the amygdala is lesioned and the MPOA is spared, male rats are no longer motivated to gain access to an estrous female but are able to copulate with her once placed on top of her. These findings suggest that sexual motivation depends not just on the hypothalamus for copulatory ability but also on the amygdala and the mesolimbic DA system for guiding and invigorating an animal’s behavior to gain access to a mate