5.1 Introduction

The human face has traditionally been the primary way humans communicate emotion and inner feelings. Facial expressions of emotion are universal among human populations (Ekman 1999) and have provided a critical method of nonverbal communication that has served as an evolutionarily adaptive behavior (Waller et al. 2008). These behaviors include the facilitation of social interaction, group bonding, and appropriate responses to others, such as mates, predators, and caregivers (e.g., Plutchik 2000; Waller et al. 2008). The human face is rich in communicative potential. Among mammals, humans have the most extensively developed facial musculature (e.g., Roberts 1966). As such, in many cases, facial expressions of emotion are relatively easy to comprehend. It has been well documented that the basic facial emotions of sadness, happiness, surprise, anger, disgust, and fear are universally understood and expressed by all humankind. This evidence comes from cross-cultural examinations (e.g., Ekman 1999; Ekman et al. 1969; Izard 1977), infant displays of facial expressions of emotion (e.g., Hauser 1996; Plutchik 2000), and facial expressions displayed by congenitally blind individuals (Cole et al. 1989). The term emotion has been described as a reaction to appropriately evocative stimuli that encompasses cognitive appraisal, subjective experience, expressive behavior, physiological arousal, and goal-directed behavior (Borod 1993b; Borod et al. 2000, 2001; Plutchik 1984).

Asymmetry in facial emotional expression has been documented for over a century and has been interpreted as evidence of brain laterality since the late 1970s (e.g., Borod and Caron 1979). Facial asymmetry is defined as greater expression intensity or muscular involvement on one side of the face (i.e., “hemiface”) as compared to the other side (e.g., Borod et al. 1997). As described in Borod and Koff (1984) and in Borod et al. (1997), the first mention of facial asymmetry during emotional expression appears to date back to Darwin (1890) who, in his 1872 discussion “Sneering and Defiance,” noted that snarls (i.e., baring one’s teeth or the canine tooth) and sneers (i.e., insincere [half] smiles indicative of defiance) seemed only to occur on one side of the face. In an attempt to understand this asymmetry in expression, Darwin asked four Australian natives to produce a sneer, in the absence of any eliciting stimuli. Two individuals could only sneer on the left side, and one individual could only sneer on the right side, while the fourth individual could not voluntarily produce a sneer.

Over 65 years later, researchers performed the first detailed study of facial asymmetry during emotional expression tasks (Lynn and Lynn 1938). In this seminal study, Lynn and Lynn (1938) introduced the term “facedness” to indicate which side of the face is dominant during facial expressions of emotion. For example, a person with left facedness is a person whose left hemiface is more expressive and intense during emotional expression. Interestingly, Lynn and Lynn (1938) coined the term “facedness” to correspond to the term for dominant hand use, “handedness.” This corresponded well with their aim of examining facedness/handedness concordance or divergence and how it related to personality traits. Although research focusing on the relationship between facedness and personality traits has not continued over the years, this body of work paved the way for future research on facial emotional asymmetry and brain laterality. Facial asymmetry received less research attention during the 1950s and 1960s but attracted renewed interest from researchers in the 1970s, owing to advances in technology and medicine.

The modern era of systematic examination of facial expression in patients with lateralized brain damage (Buck and Duffy 1980; Ross and Mesulam 1979; see, also, Gainotti 1972; for a review of the early brain lesion literature, see Borod and Koff 1989) and healthy individuals began in the mid-to-late 1970s and very early 1980s (Borod and Caron 1979, 1980; Campbell 1978; Chaurasia and Goswami 1975; Ekman et al. 1981; Heller and Levy 1981; Sackeim and Gur 1978; Strauss and Kaplan 1980). The seemingly qualitative behavior of facial emotion can now be studied in the laboratory using quantitative measures, such as the Facial Action Coding System (FACS) developed by Ekman and Friesen (1978). To date, however, the study of lateralization of emotional facial expression has been dominated by three approaches: observation of patients with lateralized brain damage, assessment of asymmetry in the whole faces of normal adults either by direct observation or through video recordings, and evaluation of unaltered photographs and of composite or chimeric faces.

This chapter covers much of the research to date on facial emotional asymmetry in terms of the prevailing theories (e.g., the right hemisphere hypothesis and the valence hypothesis). Special attention is paid to addressing consensus or discrepancies in the literature with regard to elicitation condition (i.e., posed vs. spontaneous), emotional valence (e.g., positive vs. negative), clinical populations (e.g., split-brain patients and stroke patients), age, gender, and methodological considerations. The chapter concludes with suggestions for future research.

This chapter draws on literature reviews of facial asymmetry studies and on papers addressing neuropsychological approaches to theories of emotional processing written over the past 35 years by Dr. Joan Borod and her colleagues.

5.2 How Are Emotional Facial Expressions Captured and Studied?

There are several ways to elicit facial expressions of emotion that can then be quantified by researchers or naïve raters. Studies of induced facial expression in normal adults have generally fallen into two categories: spontaneous expression and posed expression. A concern with posed expressions is eliciting an expression that is valid and reliable without capturing one that does not actually portray the emotion of interest. Therefore, many researchers rely on validated sets of posed facial emotions that have been created using a standardized system. The most frequently used set of posed facial emotions is Ekman’s Pictures of Facial Affect (Ekman and Friesen 1976). Alternatively, researchers can give oral commands indicating a specific emotion to be displayed. Although this method sacrifices some ecological validity, there are notable exceptions discussed in the review of the literature below. For a detailed description of elicitation procedures for producing facial emotional expressions, and for a methodological perspective on studying facial emotional expression and brain laterality in humans, see Borod and Koff (1990).

Spontaneous emotions have greater ecological validity, as they occur naturally and are brought about by an eliciting stimulus (e.g., an emotional film, emotionally provocative slides, or a comic strip) or by recalling previously experienced emotional events (e.g., “Tell me about the saddest day of your life.”). For procedures designed to elicit spontaneous facial emotional expression, see Borod et al. (1992), Malatesta and Izard (1984), and Montreys and Borod (1998). Studies investigating emotional facial expression rely on both posed and spontaneous expressions, and, in some cases, the two approaches can yield different experimental findings. Viewing a posed picture of a sad face can be very different from viewing a spontaneous emotional face in motion, either in person or via a video recording. Studies utilizing spontaneous facial emotions, as described above, need multiple, extensively trained raters to make facial ratings or comparisons and require high interrater agreement (Borod and Caron 1980; Borod et al. 1983).
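As a purely illustrative sketch of the kind of interrater check such studies depend on (the rating categories, the data, and the choice of Cohen’s kappa below are assumptions for demonstration, not taken from the cited papers), agreement between two raters’ judgments of which hemiface was more expressive might be quantified as follows:

```python
# Hypothetical example: chance-corrected agreement between two raters who each
# judged, for a set of video clips, which hemiface was more expressive.
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same set of categorical ratings."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(rater_a, rater_b)
    observed = np.mean(rater_a == rater_b)                       # raw agreement
    expected = sum(np.mean(rater_a == c) * np.mean(rater_b == c)
                   for c in categories)                          # chance agreement
    return (observed - expected) / (1 - expected)

# Assumed labels: "L" = left hemiface more expressive, "R" = right, "N" = no difference.
rater_1 = ["L", "L", "N", "L", "R", "L", "L", "N"]
rater_2 = ["L", "L", "N", "L", "L", "L", "L", "N"]
print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")
```

In practice, published studies report agreement statistics appropriate to their rating scales (e.g., correlations or intraclass coefficients for continuous intensity ratings); the categorical example above is only meant to make the requirement concrete.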

Facial composite photographs may, in some ways, be viewed as a successor to the posed expression approach. This approach originated with chimeric photographs presented to split-brain (i.e., commissurotomy) patients (Levy et al. 1972); see, also, work by Heller and Levy (1981). For some studies of facial emotional expression, chimeric faces are created from photographs of posers demonstrating specific facial emotions; each photograph is divided vertically down the middle of the face. Each hemiface is then reproduced as a mirror image and combined with the original hemiface to form a full and perfectly symmetrical face. Variations on this include reversing the hemiface (i.e., creating a mirror image of the original photograph). Greater lateralized expressiveness may then be judged by having raters evaluate overall expressiveness for each doubled hemiface (e.g., original face vs. left–left face vs. right–right face). In numerous studies, left–left facial composites have been found to be more emotionally expressive for both positive and negative emotions as judged by naïve raters.
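As a minimal sketch of how such composites can be constructed (the file names are hypothetical, Pillow is used purely for illustration, and the facial midline is assumed to lie at the horizontal center of a frontal photograph; published studies used photographic or specialized digital methods and located the midline from facial landmarks):

```python
from PIL import Image, ImageOps

def make_composites(photo_path):
    """Return (left-left, right-right) composite faces from one frontal photograph."""
    face = Image.open(photo_path).convert("RGB")
    w, h = face.size
    mid = w // 2

    # In an un-mirrored frontal photograph, the poser's LEFT hemiface appears
    # on the VIEWER'S right side of the image, and vice versa.
    posers_right = face.crop((0, 0, mid, h))
    posers_left = face.crop((mid, 0, w, h))

    # Pair each hemiface with its own mirror image to form a symmetrical face.
    left_left = Image.new("RGB", (2 * (w - mid), h))
    left_left.paste(ImageOps.mirror(posers_left), (0, 0))
    left_left.paste(posers_left, (w - mid, 0))

    right_right = Image.new("RGB", (2 * mid, h))
    right_right.paste(posers_right, (0, 0))
    right_right.paste(ImageOps.mirror(posers_right), (mid, 0))

    return left_left, right_right

# Hypothetical usage: raters would then judge the expressiveness of each composite.
ll, rr = make_composites("posed_sadness.jpg")
ll.save("sadness_left_left.jpg")
rr.save("sadness_right_right.jpg")
```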

A major advantage of this technique is that it helps eliminate perceiver bias generated by the right hemisphere’s well-established involvement in the perception of emotion (for a review of lateralization for emotion perception in healthy adults, see Borod et al. 2001). Specifically, the right hemisphere’s preferential processing of facial emotion means that an observer would be more sensitive to emotional expression in his or her left hemispace (Borod et al. 1990; Levy et al. 1983; Moreno et al. 1990), which would be occupied by the subject’s right hemiface, leading to greater sensitivity for expressions generated by the left hemisphere of the subject being observed. It also obviates the need for extensive training of the naïve raters generally used in these studies.

5.3 The Right Hemisphere Hypothesis of Facial Emotional Expression

The past four decades have given rise to multiple theories of facial emotional expression. Two of the major hypotheses concerning the lateralization of emotion are the right hemisphere hypothesis and the valence hypothesis. The right hemisphere hypothesis proposes that the right hemisphere (RH) is specialized for the production and perception of emotion, regardless of valence (for reviews, see Borod 1992, 1996; Borod et al. 1998). Much of the current research supports the right hemisphere hypothesis, finding the left side of the face to be more emotionally expressive than the right side (e.g., Borod et al. 1988, 1997; Campbell 1978; Sackeim and Gur 1978; Sackeim et al. 1978).

In some of the earlier research utilizing tachistoscopic methodology, several studies demonstrated a left visual-field (right hemisphere) superiority for discriminating emotional faces in healthy individuals (Landis et al. 1979; Ley and Bryden 1979; McKeever and Dixon 1981). Studies using posed faces, however, have yielded conflicting results. When raters judged the lower face in posed facial emotional expressions, several studies found that the lower left hemiface was perceived as more expressive for negative emotions than the lower right hemiface (Borod and Caron 1980; Koff et al. 1983; Moreno et al. 1990). While these results do support the RH hypothesis, the evidence is mixed. For example, using both posed and spontaneous expressions, Wylie and Goodale (1988) found that the left side of the mouth moved more during spontaneous than during posed expression. This finding may suggest that spontaneous emotions are more realistic or genuine and therefore show a stronger RH bias than posed emotions do.

However, more recently, some interesting techniques have been used to capture and analyze the posed face. In a study by Nicholls et al. (2004), researchers digitally captured “posers” (i.e., the individuals producing the facial expressions) while they posed intense facial expressions of happiness and sadness and also produced a neutral expression. No eliciting stimulus was given to drive the emotion the posers were asked to express; they were simply told to produce the most intense expressions they could. The researchers found that both the sad and happy expressions had greater movement in the left hemiface. These judgments were made not by raters but by specialized computer software that captured and digitized the face and head in three dimensions (3-D). The program could then detect which hemiface displayed greater movement.
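A schematic sketch of the kind of computation involved appears below; the data layout, the midline-split rule, and the landmark coordinates are assumptions for illustration, not the actual software or measurements used by Nicholls et al. (2004):

```python
# Given tracked 3-D facial landmark coordinates for a neutral frame and an
# expression frame, sum per-landmark displacement separately for each hemiface.
import numpy as np

def hemiface_movement(neutral, expressive, x_midline=0.0):
    """neutral, expressive: (n_landmarks, 3) arrays of x, y, z coordinates.
    Landmarks with x > x_midline are treated here as the poser's left hemiface
    and x < x_midline as the right hemiface (a simplifying assumption that
    depends entirely on the coordinate convention of the capture system)."""
    displacement = np.linalg.norm(expressive - neutral, axis=1)
    left_total = displacement[neutral[:, 0] > x_midline].sum()
    right_total = displacement[neutral[:, 0] < x_midline].sum()
    return left_total, right_total

# Hypothetical landmark data: six landmarks mirrored about x = 0, with the
# "left" landmarks displaced more in the expressive frame.
neutral = np.array([[-2.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [-0.5, -1.0, 0.0],
                    [ 2.0, 1.0, 0.0], [ 1.0, 0.0, 0.0], [ 0.5, -1.0, 0.0]])
expressive = neutral + np.array([[0.1, 0.0, 0.0]] * 3 + [[0.4, 0.2, 0.1]] * 3)
left, right = hemiface_movement(neutral, expressive)
print(f"left hemiface movement: {left:.2f}, right hemiface movement: {right:.2f}")
```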

5.4 The Right Hemisphere Hypothesis: Studying Spontaneous Facial Expressions of Emotion in Healthy Adults

For spontaneous facial expressions of emotion in the lower face, Brockmeier and Ulrich (1993) and Borod et al. (1983) found that the lower left hemiface, compared to the lower right hemiface, exhibited greater expressiveness for negative emotions. In addition, positive facial emotions were found to be consistently more expressive on the left side of the lower face in four different studies (Borod et al. 1983 [for male subjects]; Chaurasia and Goswami 1975; Wyler et al. 1987; Wylie and Goodale 1988). In contrast, for positive emotions, Brockmeier and Ulrich (1993) found the lower right hemiface to be more expressive than the lower left hemiface. However, no differences in lower face expressivity were observed in two of the studies reviewed (Ekman et al. 1981; Remillard et al. 1977).

Whole-face examinations of spontaneous emotional expression yield the most mixed results. In a literature review by Borod et al. (1997), whereas negative emotions were found to be lateralized to the left hemiface in four studies (Dopson 1984; Moscovitch and Olds 1982; Schiff and MacDonald 1990; Wemple et al. 1986), no lateralized difference was found in three other studies reviewed (Cacioppo and Petty 1981; Ekman et al. 1981; Monserrat 1985). Positive emotions were lateralized to the left hemiface in three studies (Dopson 1984; Monserrat 1985; Moscovitch and Olds 1982), whereas no differences in facial expressivity for positive emotions were found by Hager and Ekman (1985), Lynn and Lynn (1938), and Sackeim and Gur (1978). Also, using spontaneous emotional expressions, Schiff and MacDonald (1990) found that the right hemiface was significantly more expressive than the left hemiface.

5.5 The Right Hemisphere Hypothesis: Evidence from Composite Faces in Healthy Adults

Results from composite face studies more consistently support the RH hypothesis than the valence hypothesis. In a review by Borod et al. (2001), six of seven studies that examined both positive and negative emotions reported greater emotional expressivity for facial composites of left–left than right–right hemifaces (Asthana and Mandal 1997, 1998; Braun et al. 1987; Mandal et al. 1993, 1995; Moreno et al. 1990). Heller and Levy (1981) reported a similar finding but only examined one positive emotion. By contrast, one study found greater expressivity for positive emotions in the right–right composite faces (Brockmeier and Ulrich 1993). Of note, this study used only a single rater, whereas the other studies utilized multiple raters. Also, Brockmeier and Ulrich (1993) used “mouth deviation” as their outcome measure of facial asymmetry.

Interesting results have been found when three-dimensional faces are viewed. Indersmitten and Gur (2003) found that 3-D chimeric left–left faces were viewed as more emotionally intense than 3-D right–right chimeric faces. Another study (Bourne 2011) investigated the effect of hemispheric lateralization for inverted chimeric faces. It is commonly accepted that the right hemisphere is specialized for gestalt processing, or recognizing an image as a whole, whereas the left hemisphere processes information in a more componential manner. In the study by Bourne (2011), chimeric faces expressing anger, disgust, fear, happiness, sadness, or surprise were presented in either an upright or an inverted orientation. When presented upright, a significant RH bias was found for all six emotions. However, when inverted, a significant left hemisphere bias was found for the processing of happiness and surprise, but not for the processing of negative emotions. These findings support the right hemisphere hypothesis and further indicate that the two hemispheres process emotional faces differently.

For comprehensive reviews of facial asymmetry literature in healthy adults, see Borod and Koff (1984, 1989), Borod (1993a), Borod et al. (1997, 1998, 2001), and Assuras et al. (2005).

5.6 The Right Hemisphere Hypothesis: Evidence from Brain-Damaged Individuals

Some of the most compelling support for the RH hypothesis, in terms of facial emotional expression, has come from studies using unilateral brain-damaged populations. In studies using right brain-damaged (RBD), left brain-damaged (LBD), and healthy control participants, researchers found greater expressiveness for both positive and negative emotions in the LBD participants (Borod et al. 1988; Buck and Duffy 1980). This demonstrates that facial expressions of emotion, regardless of valence, were more intense when the RH was spared. Two studies examining the role of positive emotions in facial expression and lateral dominance (Blonder et al. 1993, 2005) found greater expressiveness among individuals with left hemisphere (LH) brain damage. It is important to note that the studies mentioned above asked posers to recount previously experienced emotional events or emotional monologues. Trained raters, naïve to the study conditions, then rated the video segments for emotional expressivity in the face. For a detailed description of elicitation and rating procedures for emotional, as well as nonemotional, monologues, see the New York Emotion Battery (Borod et al. 1992).

However, not all studies support a right-hemisphere advantage for facial emotional expression. For example, in a study by Mammucari et al. (1988), researchers did not find differences between RBD and LBD groups in facial expressiveness; however, when facial expressions were evaluated using FACS (Facial Action Coding System; Ekman and Friesen 1978), they found both lesion groups to be less expressive than normal controls for negative emotions only. It should be noted that in this study, posers expressed emotions alone in a room while being video-recorded, whereas in the studies mentioned in the previous paragraph, posers were in a room with another individual and a camera during emotion recollection. The discrepant findings between these two sets of studies could therefore be due to the posers being alone in the Mammucari et al. (1988) study. Expressing emotion alone is less externally valid than recounting an emotional event to another person if one is interested in the social display of emotion, but it may elicit private emotion more effectively, which is considered closer to genuine emotion.

Earlier studies have also shown that asymmetries are more likely to occur in the presence of an observer or when the subject knows that he or she is being watched (e.g., Buck 1984; Hager and Ekman 1985). Nearly all of the above-mentioned studies used right-handed individuals, but this may not be true of the Mammucari et al. (1988) study, for which handedness information is not provided. In another study using FACS (Ekman and Friesen 1978) to quantify the muscle movements of the face, Weddell et al. (1988) found that both RBD and LBD patients were less facially expressive than healthy controls while performing a neuropsychological card sorting task.

In a similar vein, studies that have examined the accuracy of identifying facial emotions (e.g., Borod et al. 1986; Mandal et al. 1999; for a review, see Borod et al. 2002) found patients with RBD to be less accurate in identifying facial emotional expressions than LBD patients or healthy controls. This supports the idea that facial emotional processing, both expressive and receptive, is lateralized to the RH.

In epileptic populations, the Wada test (intracarotid sodium amobarbital procedure) has provided researchers a unique window into the lateralization of emotion and mood. By inactivating each hemisphere of the brain individually, researchers can observe each participant’s emotional state or expression while the function of the left or right hemisphere is selectively “knocked out” during a variety of experimental tasks. Kolb and Milner (1981) took advantage of this procedure and compared facial expressions resulting from RH versus LH injections. Using the FACS method of measurement (Ekman and Friesen 1978), Kolb and Milner (1981) did not find a difference in the degree of facial expressivity between RH and LH inactivations. More recently, in a review of the literature on the Wada test and emotion laterality (Trimble 2010), a majority of the studies indicated that inactivating the RH frequently leads to a feeling of euphoria (among other behaviors); however, there is no clear pattern of emotion experienced with LH inactivation. Whereas some studies mentioned in that review found that LH inactivation led to feelings of depression and despair, this phenomenon was not seen in the majority of studies. This area of research is relatively rare, in part, because the Wada test is typically performed on presurgical epilepsy patients. As such, generalization to the broader population is difficult, given the disease state (i.e., epilepsy) and the inherent mood-altering nature of sodium amobarbital.

Experimental outcomes and anecdotal observations from Wada testing are seemingly inconsistent with other lesion research, as the euphoria often seen with right hemisphere inactivation contrasts with the relative lack of positive emotional expression seen in patients with RH lesions. One possible explanation is disorientation, given that the patient has received a powerful dose of an intoxicating, anesthetic drug. Further, when the right hemisphere is injected, certain perceptual distortions are likely to occur, such as unilateral visual neglect (Ahern et al. 1998).

5.7 The Valence Hypothesis of Facial Expression of Emotion

The valence hypothesis pertaining to emotional expression has undergone minor conceptual changes over the years as new research has emerged. Early clinical observations of brain-damaged patients (Jackson 1880; Mills 1912) noted differences in emotional dysfunction dependent on the side of the lesion. Later case studies (Goldstein 1952; Hecaen 1962) reinforced the idea that emotional function was related to the right cerebral hemisphere. Gainotti (1972), however, noted that patients with RH damage could often be indifferent, euphoric, or anosognosic, whereas those with LH damage might catastrophize or be depressed.

One of the earliest descriptions of the valence hypothesis (Silberman and Weingartner 1986) considered the LH to be specialized for the perception and expression of positive emotions and the RH for negative emotions. A variation of that hypothesis, according to Borod (1992, 1996), posited that the RH is specialized for the perception of emotions of both valences, whereas both hemispheres are responsible for experiencing and expressing emotion as a function of valence (Bryden 1982; Davidson 1984; Ehrlichman 1987; Hirschman and Safer 1982; Sackeim et al. 1982). Another conceptualization is that both hemispheres process emotion but that each hemisphere is specialized for particular types of emotion, particularly in the anterior cerebral cortex (Davidson et al. 1990). A majority of the literature on the valence hypothesis suggests that the LH is dominant for positive emotions and that the RH is dominant for negative emotions (Davidson 1992; Gur et al. 1994; Starkstein and Robinson 1988; Sackeim et al. 1978). For a discussion of potential mechanisms, both psychological and neuroanatomical, underlying the valence and right hemisphere hypotheses, see Borod (1992, 1996, 2000) and Borod et al. (1998).

Using EEG, Davidson and colleagues (1990) found that the traditional dichotomy of the valence hypothesis (i.e., the LH is dominant for positive emotions and the RH is dominant for negative emotions) did not hold true when analyzing neural activations during the experience of various emotions. Based on these data, Davidson et al. (1990) conceptualized the LH as involved in approach emotions (e.g., happiness) and the RH in withdrawal emotions (e.g., disgust). Although this conceptualization overlaps substantially with the idea that positive emotions are processed in the LH and negative emotions in the RH, Davidson’s theory (1990, 1992) considers the emotion of “anger” (i.e., a negative emotion) to be LH dominant. Other EEG experiments have provided additional support that positive and negative emotions are differentially lateralized, especially in the frontal cortex (e.g., Davidson and Fox 1982; Tucker et al. 1981). Somewhat later, Davidson and colleagues (Davidson 1993, 1998; Davidson and Sutton 1995) proposed that lateralization, particularly in the anterior frontal cortex, may depend on either personality traits or transient mood (e.g., Tomarken et al. 1992). As stated, both valence and approach/withdrawal dimensions have been used to conceptualize the valence hypothesis. The aforementioned studies tend to support the approach/withdrawal conceptualization of valence-dependent laterality, although the evidence supporting the valence lateralization hypothesis is still debated. For example, in another set of EEG studies, several groups of investigators failed to demonstrate valence-dependent lateralization (e.g., Collet and Duclaux 1987; Gotlib et al. 1998; Hagemann et al. 1998; Reid et al. 1998). Additionally, lesion data have not always supported the hypothesis (for a review, see Borod et al. 2002).
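To make concrete how EEG lateralization is typically quantified in this literature, the sketch below computes a frontal alpha-asymmetry index from one left and one right frontal channel. The channel names, sampling rate, and synthetic data are assumptions for illustration, and the cited studies do not all use identical metrics; because alpha power is inversely related to cortical activation, a positive index (relatively more right-sided alpha) is read as relatively greater left frontal activation.

```python
# Illustrative frontal alpha-asymmetry computation (assumed methodology).
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Alpha-band power of a single EEG channel estimated with Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * int(fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum() * (freqs[1] - freqs[0])   # integrate PSD over the band

def frontal_asymmetry(left_channel, right_channel, fs):
    """ln(right alpha) - ln(left alpha); positive => greater relative LEFT activation."""
    return np.log(alpha_power(right_channel, fs)) - np.log(alpha_power(left_channel, fs))

# Hypothetical data: 60 s of noise standing in for left (e.g., F3) and right
# (e.g., F4) frontal channels recorded at 256 Hz.
fs = 256
rng = np.random.default_rng(0)
f3, f4 = rng.standard_normal(60 * fs), rng.standard_normal(60 * fs)
print(f"frontal alpha asymmetry: {frontal_asymmetry(f3, f4, fs):.3f}")
```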

In a review of the literature by Borod et al. (1997), only two studies were found that used EMG recordings in place of visual observations for measuring facial activity (Schwartz et al. 1979; Sirota and Schwartz 1982). Both studies recorded activity in the zygomatic muscle on each hemiface, and neither found lateral differences for either positive or negative emotions in posed expression. Both found greater right hemiface activity for spontaneous positive emotions, and one (Schwartz et al. 1979) found greater left than right hemiface activity for spontaneous negative emotions. A more recent study (Zhou and Hu 2004) found higher activation in the left than the right facial musculature during negative emotions and greater activity in the corrugator than the zygomatic muscle. In a follow-up study that examined positive emotion produced by posing happiness, the mean EMG activity in the left zygomatic muscle region was the highest, followed by the right zygomatic, left corrugator, and right corrugator muscle regions (Zhou and Hu 2006).

One study by Smith et al. (2006) used the approach of applying intracranial stimulation to specific cerebral locations. Presurgical epilepsy patients were stimulated with subdural electrodes across the cortex. Along with changes in facial expression, motor responses and patient reports of subjective feelings were recorded. Although the investigators did not report facial expressivity findings separately from dysphoria or motor responses, they found negative emotional responses of some type when stimulation was applied to the right mesial frontal, insular, and orbitofrontal areas. Positive emotional responses to stimulation at any site were extremely infrequent, as were responses of any type to left hemisphere stimulation.

Using the region of interest (ROI) method with fMRI, Beraha et al. (2012) compared left and right hemispheric functioning in terms of emotional face processing. They found LH, but not RH, region-specific lateralization during passive viewing of stimuli from the International Affective Picture System (IAPS; Lang et al. 1997). Specifically, their data showed that asymmetry was left-lateralized for negative stimulus processing in subcortical brain areas, in particular, the amygdala and uncus; however, activation to positive stimuli was bilateral in differing brain regions.

Further, Schiff and Lamon (1989) had subjects perform muscle contractions on each side of the mouth that replicated positive and negative emotions (e.g., smiling and frowning); subjects then reported what emotions they felt. In two of three conditions, they found that contractions on the right side of the face, reflecting predominantly left hemisphere innervation, led to reports of positive emotion. This finding, however, was not supported in two later studies by other investigators (Fogel and Harris 2001; Kop et al. 1991). In an interesting study by Nicholls and colleagues (2004), researchers found that when posed expressions were rotated by 35° so that the left hemiface was featured more than the right hemiface, the rotated left hemifaces were evaluated by human raters as more expressive of negative emotion (i.e., sadness), whereas the rotated right hemifaces were seen as more expressive of positive emotion (i.e., happiness), supporting the valence hypothesis. Of note, when the same hemifaces were analyzed for movement using computerized measurement, the left hemiface had significantly greater movement than the right hemiface, which actually provides support for the RH hypothesis. On the other hand, although the face side by emotion-type interaction was not significant (p = 0.11), post hoc analyses (on a theoretical basis) showed that the left hemiface moved significantly more than the right hemiface for the sadness emotion, whereas there were no differences between the left and right face sides for the happiness emotion—findings providing partial support for the valence hypothesis.

5.8 The Upper–Lower Facial Axis Theory of Emotional Expression

The left versus right hemiface distinction is not the only facial delineation noted in the literature. In the 1940s, the nature of facial emotion production was studied by comparing the upper versus the lower face (e.g., Coleman 1949; Hanawalt 1944). More recently, Ross et al. (2007) have argued that emotional displays in the upper hemiface are preferentially processed by the right hemisphere, whereas lower hemiface displays are processed by the left. See, also, Ross et al. (2013). This argument is related to the theory that the left hemisphere preferentially processes voluntary, social emotional displays, which are enacted by the lower hemiface (see Ross et al. 1994, 2007). Studies in which observations were restricted to the lower face were primarily intended to test the right hemisphere hypothesis, as the efferent nerves to the lower face are predominantly contralateral, whereas the muscles of the upper face are bilaterally innervated (for reviews, see Borod and Koff 1984; Morecraft et al. 2004).

One part of the argument is that social displays of emotion are mediated by the left hemisphere. The idea that facial expressions may be mixed or in conflict goes back to Darwin. Research has supported the existence of a social (also called “voluntary,” “false,” or “non-Duchenne”) smile occurring simultaneously with an unemotional upper face (e.g., Ekman 2003). Early lesion research (Buck and Duffy 1980) found that social display rules are impaired by LH lesions but not by RH lesions.

Research on split-brain patients by Gazzaniga and Smylie (1990) found that when patients were commanded to smile, the left side of the face lagged the right by 90–180 ms, implying that smile simultaneity (i.e., the lack of a lag between hemifaces) in healthy individuals is mediated by transmission across the corpus callosum from the hemisphere driving the right hemiface to the hemisphere controlling the left hemiface.

Asthana and Mandal (1997) asked healthy subjects to observe blended composites of upper and lower faces (i.e., each composite combined a doubled left lower face with a doubled right upper face, or the reverse), which were compared to unchanged, reversed, and symmetrical faces. Using the emotions happiness and sadness, they found that symmetrical left lower faces were rated as the most expressive, supporting the RH hypothesis.

The literature relevant to this hypothesis is somewhat limited. The theory that the LH may be involved in social displays seems uncontroversial but may reflect the navigation of social situations more than the expression of true emotion. Studies using clinical neurological populations have addressed left hemiface versus right hemiface lateralization (e.g., Borod and Koff 1991) but, to our knowledge, have yet to address the upper face versus lower face distinction in terms of brain lateralization.

5.9 Description of Table

Table 5.1 summarizes much of the research mentioned in this chapter on facial asymmetry and/or hemispheric laterality during the expression of facial emotion. For each study, the table lists poser information, the valence expressed, elicitation and evaluation procedures, and hemispheric advantage.

Table 5.1 Summary of facial emotional expression studies

5.10 Gender Differences in Facial Expressions of Emotion

There is little consensus in the literature regarding gender differences in facial asymmetry. Some studies have found no significant differences in facial asymmetry or brain lateralization between men and women, whereas others have reported significant gender differences with respect to emotional valence (Borod et al. 1986; Burton and Levy 1989; Bowers and LaBarba 1988; Crucian 1996; Hines et al. 1992; Russo 2000; Steele 1998; Witelson and Kigar 1988). Some studies have shown that female and male subjects process emotions differently. Women have been found to be more emotionally expressive than men (Grunwald et al. 1999; for a review, see Borod and Madigan 2000). Grossman and Wood (1993) note that this may be due to societal factors, whereas others support a more biological account, namely that women show stronger activations than men in limbic structures during tasks related to emotional expression (Wager et al. 2003). Levenson et al. (1991) studied emotional expression in old age and found no significant sex differences in facial expression, although elderly women reported more intense emotional experiences during this study than elderly men. Borod and Caron (1980) found that women were more lateralized for positive emotions and men more lateralized for negative emotions. By contrast, another study found that women showed greater facial asymmetry (i.e., greater lateralization) during sad expressions than did men (Asthana and Mandal 1998). However, many studies have reported that men show more lateralization of brain function than women (Bowers and LaBarba 1988; Crucian 1996; Hines et al. 1992; Russo et al. 2000; Steele 1998).

In a comprehensive review of 33 studies, Borod et al. (1998) found no significant gender differences in 23 of the 33 (≈70 %) studies reviewed. Six studies showed that men were more left-faced (i.e., RH dominant) than women, and four studies showed that women displayed greater left-faced emotion than men. The authors concluded that there were no significant gender differences with regard to facial emotional expression. In another review, by Borod et al. (1997), 14 experiments did not show significant differences in facial asymmetry with regard to gender, and 7 experiments showed significant overall gender differences related to facial asymmetry; however, there were no systematic patterns. When gender and laterality have been assessed in infant populations, the same lack of a pattern has been found: Schuetze and Reid (2005) examined lateralization in 12-, 18-, and 24-month-old infants and did not find any gender differences in facial asymmetry for positive or negative emotional expressions.

In a study of 37 right-handed men and women (Borod et al. 1983), positive and negative emotions were elicited through two posed conditions (i.e., verbal and visual command) and one spontaneous condition (i.e., viewing emotionally provocative slides). The researchers found that the left hemiface moved significantly more than the right hemiface regardless of condition or gender.

In summary, the research on gender differences in facial asymmetry has not reached a solid consensus but seems to suggest no reliable sex differences.

5.11 Age and Facial Expressions of Emotion

At the present time, there is not much concurrence in the literature regarding aging and facial expression of emotion. Schuetze and Reid (2005) examined oral asymmetry in positive and negative facial expressions for 12-, 18- and 24-month-old full-term infants. Their results indicated that 24-month-old infants showed stronger left-faced oral (mouth movement) asymmetry during negative facial expressions than the 12- or 18-month-old infants. Although 12- and 18-month-old infants displayed distinct left-sided oral asymmetry for negative facial expressions, these asymmetries were significantly stronger by 24 months of age. No oral asymmetry patterns were detected for positive facial expressions for any of the infants. These results, although limited, provide some support for the right hemisphere hypothesis, indicating that these asymmetries may be present very early in life (Schuetze and Reid 2005). In order to interpret these results within the context of the valence hypothesis, which claims that the RH is associated with negative emotions and that the LH is dominant for positive emotions, one can speculate that lateralization of positive emotions, or left hemisphere emotional development, is delayed until after 2 years of age. The researchers noted that children begin developing complex negative emotions, such as shame and guilt, between 18 and 24 months of age, the same point where they found a significant increase in lateralization for negative expressions (Schuetze and Reid 2005).

In contrast, two studies have found greater LH (i.e., right hemiface) activation during emotional expression among infants within their first year of life (Best and Queen 1989; Rothbart et al. 1989). Moscovitch, Strauss, and Olds (1980) found an inconsistent right hemiface bias in 2–3-year-old children, and suggested that this age is likely a transitional period for facial emotional expression hemispheric specialization. This discrepancy suggests that emotional expression patterns and lateralization may change as the cortex matures.

Research shows that there is a decline in many RH-mediated functions as we age (Albert and Kaplan 1980; Borod and Goodglass 1980a; Borod et al. 2004; Brown and Jaffe 1975; Ellis et al. 1989; for a review, see Borod and Goodglass 1980b). According to the RH aging hypothesis, RH-related functions (e.g., facial asymmetry and expression) decline faster than activities mediated by the left hemisphere (Albert and Kaplan 1980). Moreno et al. (1990) tested the RH aging hypothesis by examining whether there were age-related changes in facial asymmetry in 30 young (ages 21–39), 30 middle-aged (ages 40–59), and 30 elderly (ages 60–81) adult women. The researchers used trained raters to evaluate photographs of positive and negative posed facial expressions and found that all participants demonstrated left-sided facial asymmetry; thus, there were no significant lateralization differences as a function of age.

A more recent study by Magai et al. (2006) examined the intensity and duration of emotional expressions and found that young, middle-aged, and older adults did not differ in the intensity of spontaneous prompted facial expressions of surprise, joy, anger, sadness, contempt, disgust, fear, shame, or guilt. Older adults in this study reported that they experienced the emotion of interest with greater intensity than did middle-aged and young adults. Facial expression duration differed between the age groups for shame, contempt, and joy, with younger adults displaying these emotions for longer durations during their monologues.

The research in this area has not reached a firm or consistent conclusion about how facial emotional expression changes across the life span, with some studies reporting significant age-related changes and others finding no differences, or differences only as a function of valence.

5.12 Nonhuman Primates

Primates have some of the most complex facial musculature of all mammals and make the most intricate facial displays (Burrows 2008). A large body of research suggests that baboons, macaques, vervet monkeys, and chimpanzees routinely use facial expressions as a means of communication within their complex social environments, much as humans do. In fact, some facial displays of nonhuman primates may be homologous to facial expressions in humans, such as laughing and smiling (Burrows 2008). As discussed in this chapter, human emotional displays of the face are, at least in large part, due to RH specialization. New research indicates that chimpanzees may also have an RH bias when it comes to facial displays of emotion. Fernández-Carriba et al. (2002) found that the chimpanzee facial expressions of play, silent bared-teeth, the scream face, and pant-hooting all show greater mouth expression on the left, as compared to the right, hemiface. These expressions are of both positive and negative valence and tend to accompany vocalizations as well. It thus appears that the right hemisphere plays a part in chimpanzee facial emotion, at least for the lower half of the face. In a second study by the same researchers in the same year, they extended their investigation by not only having naïve raters view chimeric chimpanzee faces (as was done in the study above) but also by measuring and quantifying the distance of expressions from the midline of the face. These two variables could then be examined separately or together. They found that humans judging the chimeric faces performed as well as the objective measurement techniques for “play” and “silent bared-teeth.” Again, the authors were able to demonstrate that silent bared-teeth and play were consistently asymmetrical toward the left on all rater judgments and on all measurements taken. The lack of asymmetry for the other facial expressions was thought to be due to a small sample size.

Further, in a study investigating the vocal and expressive characteristics of the rhesus monkey, Hauser and Akre (2001) found an RH bias in facial expressions in this species as well. The left side of the mouth and face is the first to display a facial expression when adult rhesus monkeys produce copulation grimaces, fear grimaces, lip smacks, and open-mouth threats. This study implicates the RH as possibly dominant for facial expression in the rhesus monkey. The authors pointed out that this asymmetry was not valence-specific, as both negative and positive expressions presented with left-sided activity before the right side of the face began to move. Of interest, this left-side bias has also been reported for screeching in infant and adult baboons (Lindell 2013).

In sum, it appears that there is RH dominance for facial emotional expression in nonhuman primates; however, it remains to be seen, pending further investigation, whether a valence-specific model can be applied. These findings suggest that the right hemisphere’s specialization for the control of emotional expression emerged early in primate evolution. So far, the evidence is consistent with the human literature in suggesting that functional lateralization of emotional facial displays may not be unique to humans but shared across primate species.

5.13 Conclusions and Future Directions

Understanding the relationship between asymmetry of facial expressions and the lateralized brain is critical because it can inform neuropsychological theory and resolve discrepancies that remain in the emotion literature. There are some aspects we must first consider. Based on what we have learned from studies of brain-damaged individuals and from clinical observations, the relationship between the expression of positive emotions and the right hemiface/left hemisphere has not been found as consistently as that between negative emotions and the left hemiface/right hemisphere. One problem may be that emotional expression in the face does not occur in isolation. More studies need to focus on facial emotion together with body position (e.g., posture and gesture). An attempt should be made to maintain an atmosphere that mimics real-life social interactions in order to truly understand expressions of facial emotion. We live and interact in a three-dimensional social world, and more studies should focus on replicating a more natural environment for obtaining and recording the emotional expressions of both genders and all age groups. Three-dimensional computerized facial imagery, such as that pioneered by Cohn and Kanade (e.g., Cohn et al. 1999), might be used to minimize human bias and error while capturing the face in motion. There is another factor to be considered in evaluating these studies. It has been pointed out (Etcoff 1986) that smiling is the easiest expression to produce consciously and is the one most commonly invoked for social communication. This would suggest that the left hemisphere may have some involvement in the intentional control of emotional expression, rather than in positive emotions per se. This would be consonant with the high emotional reactivity seen in patients with Broca’s aphasia (and left hemisphere damage in general; Gainotti 1972).

We are just beginning to understand the complexity of emotional functioning in the brain and how it relates to facial expressions of emotion. Despite the enormous number of imaging studies of lateralized brain activation in response to emotional stimuli, we are unaware of any such studies that measure activation as it relates to true (or genuine) facial emotional expression. More sophisticated imaging techniques and creative paradigms would help elucidate the underlying functional connectivity and neural networks that mediate the expression of facial emotion.