Abstract
Moral sense is defined as a feeling of the rightness or wrongness of an action that knowingly causes harm to people other than the agent. The large amount of data collected over the past decade allows drawing some definite conclusions about the neurobiological foundations of moral reasoning, as well as a systematic investigation of methodological variables in fMRI studies. Here, we verified the existence of converging and consistent evidence in the current literature by means of an activation likelihood estimation (ALE) meta-analysis of fMRI studies of moral reasoning. We also tested for a possible neural segregation as a function of the perspective used during moral reasoning, i.e., first person (1PP) or third person (3PP) perspective. Results demonstrate the existence of a wide network of areas underpinning moral reasoning, including the orbitofrontal cortex, insula, amygdala and anterior cingulate cortex, as well as the precuneus and posterior cingulate cortex. Within this network we found a neural segregation as a function of the personal perspective, with 1PP eliciting higher activation in the bilateral insula and superior temporal gyrus, as well as in the anterior cingulate cortex, lingual and fusiform gyri, middle temporal gyrus and precentral gyrus in the left hemisphere, and 3PP eliciting higher activation in the bilateral amygdala, in the posterior cingulate cortex, insula and supramarginal gyrus in the left hemisphere, and in the medial and ventromedial prefrontal cortex in the right hemisphere. These results shed further light on the contribution of these areas to moral reasoning, strongly supporting a functional specialization as a function of the perspective adopted during moral reasoning.
Introduction
Moral sense has been widely recognized as an ensemble of psychological mechanisms that enable otherwise selfish individuals to reap the benefits of cooperation (Greene 2013). Theories about moral reasoning have their roots in the middle of the last century: Piaget (1932) and Kohlberg (1964) were pioneers in proposing pivotal theories of moral reasoning. According to Piaget, the individual acquires moral rules through invariant stages as part of his or her sociocognitive development. These stages are reached by means of the individual's reciprocal interactions with the environment, which improve perspective-taking ability and diminish the egocentrism typical of childhood (Piaget 1965). Later, Kohlberg suggested a six-level progression of moral development, through which perspective-taking ability increases (Kohlberg et al. 1983). When children, around the second year of life, start to comprehend their role as causal agents, they also start to understand the intentionality of actions (Kagan 1984; Kochanska et al. 1995). Understanding when moral rules are violated, as well as the consequences of such violations, seems to be influenced by intentionality and agency. Indeed, our judgment of a wrongdoing depends on whether the act was performed by ourselves or by somebody else (Sokol et al. 2004). The affective evaluation of a moral violation (self-conscious emotions such as guilt or shame, arising from the role the individual has had in the action) is strictly related to the intentional or accidental nature of the transgression, even if the physical consequences are the same (Berthoz et al. 2006). Specifically, accidental harm is not considered morally wrong, while intentional harm is; yet intentional harm incurred as an unavoidable side effect is at times considered acceptable, as is intended harm that produces a further benefit for the recipient (Turiel et al. 1987).
Indeed, the ascription of intentionality to an agent and its relation to moral judgment is one of the traditional issues in philosophy and moral psychology (Manfrinati et al. 2013). Only in the last two decades has the advent of new neuroimaging techniques, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), made it possible to investigate the neural foundations of moral reasoning. Both lesion and functional neuroimaging studies (i.e., fMRI, PET) have contributed to shedding light on the neural correlates of the human ability to judge the rightness or wrongness of an action (Moll et al. 2008a, b).
The correlation between brain states and moral reasoning was initially grounded in neurological case studies. One of the most representative cases is surely that of Phineas Gage, a railroad construction foreman who, in 1848, suffered an extensive lesion of the left frontal lobe as a consequence of a rock-blasting accident. He was one of the earliest documented cases providing evidence that some areas of the prefrontal lobes are linked to judgment, decision-making, social conduct, and personality (Bechara and Damasio 2005). Such an observation predated the systematic neuroimaging and neuropsychological investigations of the last two decades, which have allowed the behavioral deficit to be systematically associated with the lesion site (Damasio et al. 1994).
As stated above, since the first neuropsychological investigations, the association between moral reasoning and the prefrontal cortex has been evident (Anderson et al. 1999). In the past decade, systematic neuroimaging investigations in brain-damaged patients have allowed a better characterization of the brain areas underpinning moral reasoning. Koenigs et al. (2007) found that patients with bilateral damage to the ventromedial prefrontal cortex showed increased utilitarian responses when confronted with moral dilemmas (i.e., sacrificing one person's life to save a number of other lives). Furthermore, patients with damage to the ventromedial prefrontal cortex were found to judge attempted harms (actions undertaken with the intent to cause harm, even if they fail) as more permissible, and accidental harms (actions undertaken with innocent intentions that accidentally result in harm) as less permissible, than the control group (Ciaramelli et al. 2012). Ciaramelli and colleagues (2012) suggested that, since the ventromedial prefrontal cortex is necessary for integrating outcome and belief information during moral reasoning, its damage induces a selective impairment in processing belief information, thus resulting in an increased weight of outcome information. Interestingly, the ventromedial prefrontal cortex was also found to show significantly reduced activation in criminal psychopathic males, compared with control participants, during a moral judgment task (Pujol et al. 2012). Additionally, temporary disruption of the activity of the right temporoparietal junction by transcranial magnetic stimulation induces participants to judge attempted harms as more morally permissible (Young et al. 2010). This finding suggests a possible role of the temporoparietal junction in processing mental states during moral judgment tasks.
Evidence from fMRI studies in the past decade has helped clarify the neural network underpinning moral reasoning in healthy individuals (Fumagalli and Priori 2012). The most frequent behavioral paradigm in fMRI studies requires individuals to make a moral judgment about the appropriateness of an action one might perform in a presented scenario (i.e., a moral dilemma; see, for example, Greene et al. 2001, 2004). Scenarios are usually short written stories, and the action to be morally judged frequently involves the sacrifice of one person's life to save a number of other lives. Brief moral statements have also been used in fMRI studies (see, for example, Avram et al. 2013 and Schaich Borg et al. 2008), as have visual scenes (Yoder and Decety 2014) and pictures (Harenski et al. 2012). Evidence from these studies suggests that a wide network of brain areas is involved in moral reasoning. This network encompasses the orbitofrontal and ventromedial prefrontal cortices, the dorsolateral prefrontal cortex, the anterior cingulate cortex, the lateral portion of the parietal lobe as well as the medial parietal lobe (including the posterior cingulate cortex and the precuneus), the temporoparietal junction, limbic areas such as the amygdala and the insula, and the temporal pole (Greene and Haidt 2002; Pascual et al. 2013). It has been proposed that these brain areas make important contributions to moral reasoning, although none is devoted specifically to it (Greene and Haidt 2002; Pascual et al. 2013).
Specifically, the ventromedial prefrontal cortex has been found to be crucial in processing emotional aspects as well as social norms and values during moral reasoning (Pascual et al. 2013). The orbitofrontal cortex has been associated with the online representation of reward and punishment and has been found to be engaged in simple moral judgments as well as in processing moral pictures compared with non-moral ones (Greene and Haidt 2002; Pascual et al. 2013). While the orbitofrontal and ventromedial prefrontal cortices have also been proposed to be involved in emotionally driven moral judgment (Greene et al. 2004), the dorsolateral prefrontal cortex is involved in cognitive control and problem-solving, and has been proposed to interact with emotional processing in the orbitofrontal and ventromedial prefrontal cortices, probably mitigating their responses (Greene et al. 2004). Furthermore, the anterior cingulate cortex has been proposed to monitor the moral conflict between a strong, emotionally driven response (processed by the ventromedial prefrontal cortex) and a more cognitively controlled response, supported by abstract reasoning (processed by the dorsolateral prefrontal cortex) (Greene et al. 2004).
The temporoparietal junction has been found to be crucial in belief attribution during moral judgment (Pascual et al. 2013) and in empathy-related processes (Bzdok et al. 2012). The posterior cingulate cortex and the precuneus, whose activation has been repeatedly reported in fMRI studies of moral reasoning (Pascual et al. 2013), have been proposed to be involved in affective imagery (Greene and Haidt 2002). In contrast, the lateral portion of the parietal lobe (mainly the inferior parietal lobule) has been hypothesized to contribute to cognitive control during moral reasoning (Pascual et al. 2013).
Additionally, the amygdala has been hypothesized to play a crucial role in processing empathic sadness during moral reasoning (Pascual et al. 2013) and the affective response to one's own moral violations (Berthoz et al. 2006), whereas the insula probably processes visceral sensations associated with anger or indignation (Wicker et al. 2003; Moll et al. 2005) and with perceiving painful situations in others (Jackson et al. 2005) during moral decisions.
As previously noted (Greene and Haidt 2002), most of the regions found to be involved in moral reasoning, such as the precuneus, posterior cingulate cortex, ventromedial prefrontal and orbitofrontal cortices, as well as the superior temporal gyrus and inferior parietal lobe, belong to the so-called default mode network (DMN) (Yeo et al. 2011). As is widely known, these regions show increased activation during resting state and reduced activation during goal-directed tasks (Gusnard and Raichle 2001). Thus, these areas have been proposed to be components of a tonically active neural system that continuously gathers information about the self and the world (Gusnard and Raichle 2001). However, the DMN has also been found to be activated during tasks that require processing of internally generated information, including self-referential information (Kelley et al. 2002). Interestingly, Nakao et al. (2012) found that the DMN was activated during internally guided decision-making. Specifically, internally guided decision-making occurs when individuals cannot rely on an externally determined, objectively correct behavior and have to rely on internally generated preferences, rather than on externally presented criteria, to regulate their own behavior (Nakao et al. 2012). This is often the case in moral judgment, where there is no definitively correct response and individuals have to rely on an internal representation of their value system. Nakao et al. (2012) proposed that internally guided decision-making is based largely on intrinsic brain activity within the DMN, since it depends on the individual's own criteria rather than on circumstantial criteria. It must also be considered that neural activity within the anterior cingulate cortex and insula (i.e., the so-called salience network; Chiong et al. 2013) has been found to affect DMN activity during moral reasoning in healthy participants (Chiong et al. 2013).
This modulation is reduced in patients with the behavioral variant of frontotemporal dementia, who were found to give abnormally utilitarian responses to moral dilemmas (Chiong et al. 2013). Chiong and colleagues (2013) suggested that damage to the salience network may produce a dramatic alteration of moral behavior, owing to an alteration in the causal influence of the salience network on the default mode network.
Overall, a large amount of data has been collected over the past decade about the neurobiological foundations of moral reasoning and decision-making. However, the use of markedly different paradigms prevents drawing firm conclusions about the neural underpinnings of moral reasoning. One of the most questioned and debated issues is the use of different perspectives (i.e., first and third person perspectives) in moral dilemmas and moral reaction tasks (Avram et al. 2014). Indeed, a moral dilemma can implicitly or explicitly impose a perspective on the individual, which may affect the judgment. In some studies the individual is asked to take the perspective of a protagonist (i.e., first person perspective, 1PP), whereas in others the dilemma is described in the third person (3PP), producing a difference in the individual's answer (Royzman and Baron 2002). At least two sources of evidence are relevant to this debate. On one hand, neuroimaging studies have found different patterns of neural activity during observation of stimuli presented in either a first or a third person perspective, both in non-moral visuospatial tasks (Vogeley and Fink 2003) and in social non-moral tasks (Ruby and Decety 2001). On the other hand, social psychological studies have repeatedly provided evidence for the so-called "actor-observer bias" (Jones and Nisbett 1971; Nadelhoffer and Feltz 2008), which consists of the individual's tendency to attribute one's own behavior to external factors and other people's behavior to internal factors in negative situations. Nadelhoffer and Feltz (2008) demonstrated the existence of this effect also during moral reasoning: interestingly, they found that participants judged moral violations more permissible when they were observers rather than the actual actors. Accordingly, Avram et al.
(2014) found a significant difference in the acceptance of moral violations when a 1PP or a 3PP was required, with lower acceptability of moral violations in the 1PP. These authors also found that largely different neural networks subserve 1PP and 3PP moral reasoning. Furthermore, considering moral emotions, when individuals have to judge their own morally right or wrong actions, emotions such as guilt or shame, or even fear of negative social evaluation, guide their moral judgment (Berthoz et al. 2006; Finger et al. 2006; Takahashi et al. 2004). Conversely, when they are required to judge another person's actions, emotions such as indignation and anger prevail in moral judgment (Moll et al. 2008a, b; Shaver 1985). As a consequence, neural activity in emotion-processing regions is likely to differ between the two perspectives. Specifically, Zahn et al. (2009) found differences in neural activity when participants judged their own and others' moral and immoral actions, attributing this difference to the different moral emotions elicited by 1PP and 3PP. The authors found that shame, guilt and pride elicited by 1PP enhanced neural activity in the prefrontal cortex and anterior temporal lobe, whereas indignation, anger and praise elicited by 3PP enhanced neural activity in the lateral orbitofrontal cortex, insula and dorsolateral prefrontal cortex.
The meta-analytic approach offers a good tool to overcome methodological issues such as differences in the adopted paradigm, small sample sizes, low reliability and subtraction logic. Bzdok et al. (2012) made a first effort to find convergence between fMRI studies of moral reasoning by means of a quantitative meta-analysis, hypothesizing that moral reasoning is "implemented in brain areas engaged in theory of mind and empathy" (Bzdok et al. 2012, p. 789). These authors found that the brain areas consistently activated during moral reasoning largely overlapped with those supporting theory of mind. More recently, Sevinc and Spreng (2014) used a quantitative meta-analysis to test whether different experimental paradigms used in fMRI studies of moral reasoning (i.e., those requiring active deliberation versus passive observation) led to the activation of different brain regions. These authors found that both paradigms led to activation of the DMN, and that active tasks specifically recruited the temporoparietal junction, angular gyrus and temporal pole, whereas passive tasks recruited the lingual gyrus and amygdala.
Due to the paucity of studies directly testing the effect of perspective during moral reasoning (Avram et al. 2014; Berthoz et al. 2006), it remains unclear whether person perspective plays a crucial role in determining moral reasoning and whether different brain regions are recruited during moral reasoning from a first versus a third person perspective. Nevertheless, the large amount of data collected with different paradigms (that is, those using either 1PP or 3PP) allows this issue to be assessed by means of a quantitative meta-analysis.
In the present study we tested the hypothesis that different brain networks contribute to moral reasoning as a function of the person perspective (i.e., 1PP or 3PP), through a meta-analytic approach based on activation likelihood estimation (ALE). This method allows coordinate-based meta-analyses of fMRI data, condensing the wealth of neuroimaging findings into meaningful patterns. We applied a general ALE meta-analysis to identify the neural substrates underpinning general aspects of moral reasoning, as well as two separate ALE analyses for the two perspectives, namely first person perspective (1PP) and third person perspective (3PP). Based on previous observations (Avram et al. 2014; Berthoz et al. 2006), we hypothesized that different areas of the neural network involved in moral reasoning (and identified in the general ALE analysis) are engaged when different person perspectives are required. Such a study will also clarify whether previously observed differences between 1PP and 3PP in moral judgment (Agerström et al. 2013) may be due to the fact that when people adopt a 3PP they use a more abstract mental construction of actions and events.
Meta-analysis
Inclusion criteria for papers
The database search on PubMed was performed using the following string: "((((((((moral reasoning) OR moral decision making) OR moral judgement) OR moral judgment) AND fMRI) NOT vascular) NOT depression) NOT schizophrenia) NOT obsessive compulsive disorder". Our a priori inclusion criteria for papers were: 1) inclusion of a whole-brain analysis performed using functional magnetic resonance imaging (fMRI); thus, we excluded positron emission tomography (PET) studies, electrophysiology studies and papers reporting only results from ROI (region of interest) analyses; 2) provision of coordinates of activation foci, in either Montreal Neurological Institute (MNI) or Talairach reference space; 3) all participants in the studies had to be healthy adults; studies including healthy elderly adults or children were excluded to avoid the effects of aging on moral reasoning; 4) all neuroimaging studies had to include a control condition, to exclude activations not directly connected to moral reasoning; 5) the experimental tasks required participants to make a moral decision or to judge the moral acceptability of the proposed actions; studies that did not focus on moral reasoning were excluded from the meta-analysis; 6) only group studies were included; 7) no pharmacological manipulation could be involved.
Using these criteria, we selected 31 papers. The meta-analysis was carried out on 91 neuroimaging experiments (described in the 31 published studies) using activation likelihood estimation (ALE) (Fox et al. 2014). A total of 799 individuals participated in these experiments. The studies are summarized in Table 1.
Activation likelihood estimation
Coordinates from selected studies were analyzed for topographic convergence using Activation likelihood estimation (ALE) analyses. ALE meta-analysis determines if the clustering of activation foci across experiments, tapping on the same cognitive domain or function, is significantly higher than expected under the null distribution of a random spatial association of results from these experiments. In other words, ALE assesses the convergence between foci of different experiments by modeling the probability distributions centered at the coordinates of each one (Eickhoff et al. 2009).
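The kernel-modeling step described above can be sketched in a few lines of Python. This is a simplified illustration rather than GingerALE's actual implementation: it assumes a fixed-width isotropic Gaussian kernel (the real algorithm derives the kernel width empirically from each study's sample size; Eickhoff et al. 2009) and a toy voxel grid, and all function names are our own.

```python
import numpy as np

def modeled_activation_map(foci_mm, grid_shape, voxel_size=2.0, fwhm=10.0):
    """Modeled activation (MA) map for one experiment: at each voxel, the
    maximum over Gaussian kernels centered at that experiment's foci.
    NOTE: fixed FWHM is an illustrative assumption."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    # Physical (mm) coordinates of every voxel in the toy grid.
    coords = np.stack(
        np.meshgrid(*[np.arange(n) * voxel_size for n in grid_shape],
                    indexing="ij"),
        axis=-1)
    ma = np.zeros(grid_shape)
    for focus in foci_mm:
        d2 = np.sum((coords - np.asarray(focus, dtype=float)) ** 2, axis=-1)
        kernel = np.exp(-d2 / (2.0 * sigma ** 2))
        ma = np.maximum(ma, kernel)  # take the max, so nearby foci do not sum
    return ma

def ale_map(experiment_foci, grid_shape):
    """Combine per-experiment MA maps as a probabilistic union:
    ALE = 1 - prod_i(1 - MA_i)."""
    ale = np.zeros(grid_shape)
    for foci in experiment_foci:
        ma = modeled_activation_map(foci, grid_shape)
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)
    return ale
```

The resulting ALE value at a voxel can be read as the probability that at least one experiment truly activated there, which is then tested against a null distribution of randomly placed foci.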
A general ALE meta-analysis was performed on the foci derived from the selected studies on moral reasoning (Table 1). The coordinates of the foci were taken from the original papers. A total of 634 foci were reported in the 91 experiments.
We also performed two separate ALE analyses for the different perspectives used in the moral reasoning experiments (i.e., 1PP or 3PP). The experiments categorized as 1PP included those using a first person perspective in moral dilemmas (Greene et al. 2001, 2004), those directly stressing self-agency (Berthoz et al. 2006) and those actually requiring a moral decision (Greene and Paxton 2010). Conversely, experiments classified as 3PP included those in which participants were required to solve moral dilemmas presented from a third person perspective (Cáceda et al. 2011) or viewed moral violations (Yoder and Decety 2014). Experiments including mixed perspectives were excluded from these analyses; the data from these experiments were included in the general analysis but not in the perspective-specific analyses. Separate ALE analyses were performed on (1) 44 experiments using 1PP (348 participants) and (2) 41 experiments using 3PP (444 participants).
Finally, we performed two contrast analyses to directly compare the effects of perspective [(1PP > 3PP) and (3PP > 1PP)]. These contrast analyses highlight voxels whose signal was greater in the first condition than in the second. We also carried out a conjunction analysis of perspective [1PP ∧ 3PP] to identify voxels engaged by both conditions.
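In ALE terms, contrast and conjunction analyses reduce to operations on the per-experiment modeled activation (MA) maps. The sketch below is a simplified illustration, not GingerALE's exact procedure: it follows the common approach of permuting experiment labels to build a null distribution for the ALE difference, and uses the minimum statistic for the conjunction; function and variable names are our own.

```python
import numpy as np

def ale_from_ma(ma_maps):
    """Union of per-experiment MA maps: ALE = 1 - prod_i(1 - MA_i)."""
    ale = np.zeros_like(ma_maps[0])
    for ma in ma_maps:
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)
    return ale

def contrast_p_map(ma_a, ma_b, n_perm=1000, seed=0):
    """Permutation test for the contrast [A > B]: shuffle experiment
    labels between the two groups and recompute the ALE difference to
    build a voxelwise null distribution. Returns voxelwise p-values."""
    rng = np.random.default_rng(seed)
    observed = ale_from_ma(ma_a) - ale_from_ma(ma_b)
    pooled = list(ma_a) + list(ma_b)
    n_a = len(ma_a)
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        perm_a = [pooled[i] for i in idx[:n_a]]
        perm_b = [pooled[i] for i in idx[n_a:]]
        diff = ale_from_ma(perm_a) - ale_from_ma(perm_b)
        exceed += (diff >= observed)      # null diffs at least as large
    return (exceed + 1) / (n_perm + 1)    # small p => A reliably > B

def conjunction(ale_a, ale_b):
    """Minimum-statistic conjunction [A ∧ B]: a voxel is only as
    significant as its weaker condition."""
    return np.minimum(ale_a, ale_b)
```

Voxels with small p-values in `contrast_p_map` correspond to those reported as more activated by one perspective than the other; the conjunction map corresponds to the shared network.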
The ALE meta-analysis was performed using GingerALE 2.3.1 (brainmap.org) with MNI coordinates (Talairach coordinates were automatically converted into MNI coordinates by GingerALE). Following Eickhoff et al.'s (2009) modified procedure, an ALE activation map was computed for each experiment and a random effects analysis across all experiments was then performed. The resulting ALE map was thresholded at p < 0.05, using false discovery rate (FDR) correction for multiple comparisons, with a cluster size > 200 mm3.
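The two thresholding steps just described (voxelwise FDR at p < 0.05 followed by a 200 mm³ cluster-extent filter) can be illustrated with generic implementations. These are a standard Benjamini-Hochberg procedure and a connected-component filter, assumed to be comparable in spirit, though not identical, to GingerALE's internals; the 8 mm³ voxel volume corresponds to an assumed 2-mm isotropic grid.

```python
import numpy as np
from scipy import ndimage

def fdr_cutoff(p_values, q=0.05):
    """Benjamini-Hochberg FDR: the largest sorted p-value p(k) with
    p(k) <= (k/m)*q is the cutoff; voxels with p <= cutoff survive."""
    p = np.sort(np.ravel(p_values))
    m = p.size
    below = p <= (np.arange(1, m + 1) / m) * q
    return p[np.nonzero(below)[0][-1]] if below.any() else 0.0

def cluster_extent_filter(mask, min_mm3=200.0, voxel_mm3=8.0):
    """Keep only suprathreshold clusters whose volume exceeds min_mm3
    (voxel_mm3 = 8.0 assumes 2-mm isotropic voxels)."""
    labels, n_clusters = ndimage.label(mask)  # 26-/8-connectivity default
    keep = np.zeros_like(mask, dtype=bool)
    for i in range(1, n_clusters + 1):
        cluster = labels == i
        if cluster.sum() * voxel_mm3 > min_mm3:
            keep |= cluster
    return keep
```

Applied in sequence, `fdr_cutoff` controls the expected proportion of false-positive voxels, and `cluster_extent_filter` then removes surviving clusters too small to be anatomically meaningful.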
The ALE results were rendered on an MNI-normalized template using MRIcro (http://www.mccauslandcenter.sc.edu/mricro/index.html).
Results
General ALE
The general ALE meta-analysis showed a wide network of activation, spanning from the occipital to the frontal lobe (Fig. 1). Specifically, on the medial brain surface we found clusters of activation in the medial and ventromedial prefrontal cortices and in the anterior and posterior cingulate cortices. We also found activation in the bilateral amygdala and insula. On the lateral brain surface we found bilateral activation spanning from the parietal to the temporal lobe, including the temporoparietal junction and lingual gyrus. We also found activation in the dorsolateral prefrontal cortex, extending to the orbitofrontal cortex (Table 2).
Single ALE
To test whether 1PP and 3PP during moral reasoning lead to the activation of different brain areas, we conducted two separate ALE analyses. The ALE analysis of fMRI experiments using a 1PP during moral reasoning (Fig. 2) revealed clusters of activation in the bilateral anterior and posterior cingulate cortices, insula, superior and middle temporal gyri and lingual gyrus, as well as in the right supramarginal and angular gyri (see Table 3).
The ALE analysis of fMRI experiments using a 3PP during moral reasoning (Fig. 2) revealed clusters of activation in the bilateral amygdala, the medial and ventromedial prefrontal cortices and the dorsal prefrontal cortex, as well as in the bilateral posterior cingulate cortex and the middle and superior temporal gyri (Table 4).
Contrast analyses
Results of the T contrast [1PP > 3PP] showed clusters of voxels that were more activated by a 1PP (Fig. 3) in the bilateral insula and superior temporal gyrus as well as in the anterior cingulate cortex, lingual and fusiform gyri, middle temporal gyrus and precentral gyrus in the left hemisphere (Table 5).
Results of the opposite T contrast [3PP > 1PP] showed clusters of voxels that were more activated by a 3PP (Fig. 3) in the bilateral amygdala, the posterior cingulate cortex, insula and supramarginal gyrus in the left hemisphere as well as the medial and ventromedial prefrontal cortex in the right hemisphere (Table 5).
Conjunction analyses
Conjunction analysis [1PP ∧ 3PP] showed that the two perspectives partially share a neural network consisting of the bilateral posterior cingulate cortices and anterior part of the middle temporal gyri as well as the left posterior part of the superior temporal gyrus (Fig. 4).
Discussion
In the present study we tested the hypothesis that different brain networks contribute to moral reasoning as a function of the person perspective (i.e., 1PP or 3PP) used during the experimental task. To pursue this aim we used ALE meta-analyses to identify (i) the neural underpinnings of general aspects of moral reasoning, (ii) the specific neural substrates of moral reasoning from either 1PP or 3PP, and (iii) the brain regions involved in both 1PP and 3PP.
Consistent with previous meta-analyses (Sevinc and Spreng 2014; Bzdok et al. 2012), we found that a wide network of areas on both the medial and lateral brain surfaces is correlated with moral reasoning (Fig. 1). This network is consistent with that identified by previous lesion and functional neuroimaging studies (see Introduction) and compatible with the functional specializations proposed previously (Greene and Haidt 2002; Pascual et al. 2013). Indeed, this network includes the ventromedial prefrontal cortex, anterior and posterior cingulate cortices and precuneus on the medial brain surface, and the supramarginal gyrus, inferior parietal lobule and superior and middle temporal gyri on the lateral brain surface. In addition, we found activations in the orbitofrontal and ventromedial prefrontal cortices, the insula and the amygdala.
More interestingly we found that several regions within this network were more activated by 1PP or 3PP (Fig. 3). Specifically, 1PP elicited higher activation in the bilateral insula and superior temporal gyrus as well as in the anterior cingulate cortex, lingual and fusiform gyri, middle temporal gyrus and precentral gyrus in the left hemisphere, whereas 3PP elicited higher activation in the bilateral amygdala, the posterior cingulate cortex, insula and supramarginal gyrus in the left hemisphere as well as the medial and ventromedial prefrontal cortex in the right hemisphere. Both 1PP and 3PP shared the activation of the posterior cingulate cortices and anterior part of the middle temporal gyri as well as the left posterior part of the superior temporal gyrus.
Globally, the results of the general ALE strongly confirm the importance of the ventromedial prefrontal cortex and the anterior cingulate cortex for human moral reasoning. This is consistent with previous functional neuroimaging (Greene et al. 2001, 2004; Yoder and Decety 2014; Harenski et al. 2012) and lesion studies (Damasio et al. 1994; Anderson et al. 1999; Koenigs et al. 2007). Interestingly, we found that these regions exhibited differentially higher activation during 1PP or 3PP: the left anterior cingulate cortex was more strongly activated during moral reasoning when a 1PP was required, whereas the ventromedial prefrontal cortex was more strongly activated during moral reasoning from a 3PP. These regions, together with the dorsolateral prefrontal cortex, have repeatedly been hypothesized to make pivotal contributions to moral reasoning. As reported above, the ventromedial prefrontal cortex has been hypothesized to emotionally drive moral decisions, whereas the dorsolateral prefrontal cortex has been hypothesized to act as a rational filter (Fumagalli and Priori 2012). The anterior cingulate cortex has been hypothesized to mediate the conflict between the emotional components of moral reasoning (processed in the ventromedial prefrontal cortex) and its rational components (processed in the dorsolateral prefrontal cortex) (Fumagalli and Priori 2012). Our results shed further light on the contribution of these areas to moral reasoning, strongly supporting a functional specialization as a function of the perspective adopted during moral reasoning.
Furthermore, the present results confirm the involvement of structures such as the amygdala and insula, previously hypothesized to play specific roles in processing empathic sadness (Pascual et al. 2013) and visceral sensations associated with anger or indignation (Wicker et al. 2003; Moll et al. 2005) during moral reasoning. However, the results of the present study (see contrast analyses) suggest that these regions make different contributions to moral reasoning as a function of the personal perspective adopted: 1PP yields higher activation of the bilateral insula, whereas 3PP yields higher activation of the bilateral amygdala.
The consistent activation of the superior temporal gyrus, as well as the activation of the medial part of the parietal lobe at the level of the precuneus, supports the idea that regions involved in encoding beliefs and integrating them with intentions (Young and Saxe 2008; Allison et al. 2000) are crucial in determining moral behavior (Pascual et al. 2013). Furthermore, neuroimaging findings in healthy subjects suggest a central role for the precuneus in a wide spectrum of highly integrated tasks, including self-processing operations, namely first person perspective taking and the experience of agency (for a review see Cavanna and Trimble 2006).
The present results suggest that the neural network consistently activated in fMRI studies of moral reasoning roughly corresponds to the DMN, which is engaged in internally guided decision-making (Nakao et al. 2012). Furthermore, the activity of this network is affected by activity in the salience network (i.e., anterior cingulate cortex and insula) (Chiong et al. 2013). This may explain the dramatic consequences for an individual's moral behavior of brain damage at the level of the ventromedial prefrontal cortex (Chiong et al. 2013; Anderson et al. 1999; Koenigs et al. 2007): a lesion located in the ventromedial prefrontal cortex may disrupt the normal functioning of the neural network of moral reasoning.
Taken together with the findings of previous studies (Bzdok et al. 2012), the present results strongly suggest that a wide network of brain areas, rather than a single brain area, underpins moral reasoning. This is also consistent with the previous observation that, among the brain areas found to be involved in moral reasoning, none is specifically devoted to it (Greene and Haidt 2002; Pascual et al. 2013). Furthermore, we hypothesize that the differences in the neural network are due to the fact that 3PP implies a more abstract level of reasoning, while 1PP implies a higher level of emotional involvement. Therefore, the moral implications of an action are more salient in 1PP, as shown by previous research (Libby et al. 2007). We can suppose a mechanism similar to the dual-process model of morality described by Greene (2009): the third person perspective involves a shift towards the higher cognitive functions of moral reasoning, while the first person perspective is more sensitive to automatic emotional responses.
References
Agerström, J., Björklund, F., & Carlsson, R. (2013). Look at yourself! Visual perspective influences moral judgment by level of mental construal. Social Psychology, 44(1), 42–46. doi:10.1027/1864-9335/a000100.
Allison, T., Puce, A., & McCarthy, G. (2000). Social perception from visual cues: role of the STS region. Trends in Cognitive Sciences, 4, 267–278. doi:10.1016/S1364-6613(00)01501-1.
Anderson, S. W., Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1999). Impairment of social and moral behavior related to early damage in human prefrontal cortex. Nature Neuroscience, 2, 1032–1037. doi:10.1038/14833.
Avram, M., Gutyrchik, E., Bao, Y., Pöppel, E., Reiser, M., & Blautzik, J. (2013). Neurofunctional correlates of esthetic and moral judgments. Neuroscience Letters, 534(1), 128–132. doi:10.1016/j.neulet.2012.11.053.
Avram, M., Hennig-Fast, K., Bao, Y., Pöppel, E., Reiser, M., Blautzik, J., ... Gutyrchik, E. (2014). Neural correlates of moral judgments in first- and third-person perspectives: implications for neuroethics and beyond. BMC Neuroscience, 15, 39. doi:10.1186/1471-2202-15-39.
Bahnemann, M., Dziobek, I., Prehn, K., Wolf, I., & Heekeren, H. R. (2009). Sociotopy in the temporoparietal cortex: common versus distinct processes. Social Cognitive and Affective Neuroscience, 5(1), 48–58. doi:10.1093/scan/nsp045.
Bechara, A., & Damasio, A. R. (2005). The somatic marker hypothesis: a neural theory of economic decision. Games and Economic Behavior, 52, 336–372.
Berthoz, S., Grèzes, J., Armony, J. L., Passingham, R. E., & Dolan, R. J. (2006). Affective response to one’s own moral violations. NeuroImage, 31(2), 945–950. doi:10.1016/j.neuroimage.2005.12.039.
Buckholtz, J. W., Asplund, C. L., Dux, P. E., Zald, D. H., Gore, J. C., Jones, O. D., & Marois, R. (2008). The neural correlates of third-party punishment. Neuron, 60(5), 930–940. doi:10.1016/j.neuron.2008.10.016.
Bzdok, D., Schilbach, L., Vogeley, K., Schneider, K., Laird, A. R., Langner, R., & Eickhoff, S. B. (2012). Parsing the neural correlates of moral cognition: ALE meta-analysis on morality, theory of mind, and empathy. Brain Structure & Function, 217(4), 783–796.
Cáceda, R., James, G. A., Ely, T. D., Snarey, J., & Kilts, C. D. (2011). Mode of effective connectivity within a putative neural network differentiates moral cognitions related to care and justice ethics. PloS One, 6(2), e14730. doi:10.1371/journal.pone.0014730.
Cavanna, A. E., & Trimble, M. R. (2006). The precuneus: a review of its functional anatomy and behavioural correlates. Brain, 129, 564–583.
Chiong, W., Wilson, S. M., D’Esposito, M., Kayser, A. S., Grossman, S. N., Poorzand, P., Seeley, W. W., Miller, B. L., & Rankin, K. P. (2013). The salience network causally influences default mode network activity during moral reasoning. Brain, 136(6), 1929–1941. doi:10.1093/brain/awt066.
Ciaramelli, E., Braghittoni, D., & Di Pellegrino, G. (2012). It is the outcome that counts! Damage to the ventromedial prefrontal cortex disrupts the integration of outcome and belief information for moral judgment. Journal of the International Neuropsychological Society, 18, 962–971.
Cikara, M., Farnsworth, R. A., Harris, L. T., & Fiske, S. T. (2010). On the wrong side of the trolley track: neural correlates of relative social valuation. Social Cognitive and Affective Neuroscience, 5(4), 404–413. doi:10.1093/scan/nsq011.
Cushman, F., Murray, D., Gordon-mckeon, S., Wharton, S., & Greene, J. D. (2012). Judgment before principle: engagement of the frontoparietal control network in condemning harms of omission. Social Cognitive and Affective Neuroscience, 7(8), 888–895. doi:10.1093/scan/nsr072.
Damasio, H., Grabowski, T., Frank, R., Galaburda, A. M., & Damasio, A. R. (1994). The return of Phineas Gage: clues about the brain from the skull of a famous patient. Science, 264, 1102–1105.
Eickhoff, S. B., Laird, A. R., Grefkes, C., Wang, L. E., Zilles, K., & Fox, P. T. (2009). Coordinate-based activation likelihood estimation meta-analysis of neuroimaging data: a random-effects approach based on empirical estimates of spatial uncertainty. Human Brain Mapping, 30, 2907–2926.
Finger, E. C., Marsh, A. A., Kamel, N., Mitchell, D. G. V., & Blair, J. R. (2006). Caught in the act: the impact of audience on the neural response to morally and socially inappropriate behavior. NeuroImage, 33(1), 414–421.
Fox, P. T., Lancaster, J. L., Laird, A. R., & Eickhoff, S. B. (2014). Meta-analysis in human neuroimaging: computational modeling of large-scale databases. Annual Review of Neuroscience, 37, 409–434.
Fumagalli, M., & Priori, A. (2012). Functional and clinical neuroanatomy of morality. Brain, 135(Pt 7), 2006–2021. doi:10.1093/brain/awr334.
Greene, J. D. (2009). Dual-process morality and the personal/impersonal distinction: a reply to McGuire, Langdon, Coltheart and Mackenzie. Journal of Experimental Social Psychology, 45, 581–584.
Greene, J. (2013). Moral tribes: Emotion, reason, and the gap between us and them. New York: The Penguin Press.
Greene, J., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6(12), 517–523.
Greene, J. D., & Paxton, J. M. (2010). Correction for Greene et al., patterns of neural activity associated with honest and dishonest moral decisions. Proceedings of the National Academy of Sciences, 107(9), 4486–4486. doi:10.1073/pnas.1000505107.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105–2108. doi:10.1126/science.1062872.
Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44(2), 389–400.
Gusnard, D. A., & Raichle, M. E. (2001). Searching for a baseline: functional imaging and the resting human brain. Nature Reviews. Neuroscience, 2, 685–694.
Han, H., Glover, G. H., & Jeong, C. (2014). Cultural influences on the neural correlate of moral decision making processes. Behavioural Brain Research, 259, 215–228. doi:10.1016/j.bbr.2013.11.012.
Harada, T., Itakura, S., Xu, F., Lee, K., Nakashita, S., Saito, D. N., & Sadato, N. (2009). Neural correlates of the judgment of lying: a functional magnetic resonance imaging study. Neuroscience Research, 63(1), 24–34. doi:10.1016/j.neures.2008.09.010.
Harenski, C. L., Harenski, K. A., Shane, M. S., & Kiehl, K. A. (2012). Neural development of mentalizing in moral judgment from adolescence to adulthood. Developmental Cognitive Neuroscience, 2(1), 162–173. doi:10.1016/j.dcn.2011.09.002.
Hayashi, A., Abe, N., Fujii, T., Ito, A., Ueno, A., Koseki, Y., & Mori, E. (2014). Dissociable neural systems for moral judgment of anti- and pro-social lying. Brain Research, 1556, 46–56. doi:10.1016/j.brainres.2014.02.011.
Heekeren, H. R., Wartenburger, I., Schmidt, H., Schwintowski, H., & Villringer, A. (2003). An fMRI study of simple ethical decision-making. NeuroReport, 14(9), 1215–1219.
Heekeren, H. R., Wartenburger, I., Schmidt, H., Prehn, K., Schwintowski, H. P., & Villringer, A. (2005). Influence of bodily harm on neural correlates of semantic and moral decision-making. NeuroImage, 24(3), 887–897. doi:10.1016/j.neuroimage.2004.09.026.
Jackson, P. L., Meltzoff, A. N., & Decety, J. (2005). How do we perceive the pain of others? A window into the neural processes involved in empathy. NeuroImage, 24, 771–779.
Jones, E. E., & Nisbett, R. E. (1971). The actor and the observer: Divergent perceptions of the causes of behavior. New York: General Learning Press.
Kagan, J. (1984). The nature of the child. New York: Basic Books.
Kahane, G., Wiech, K., Shackel, N., Farias, M., Savulescu, J., & Tracey, I. (2012). The neural basis of intuitive and counterintuitive moral judgment. Social Cognitive and Affective Neuroscience, 7(4), 393–402. doi:10.1093/scan/nsr005.
Kelley, W. M., Macrae, C. N., Wyland, C. L., Caglar, S., Inati, S., & Heatherton, T. F. (2002). Finding the self? An event-related fMRI study. Journal of Cognitive Neuroscience, 14, 785–794.
Kochanska, G., Aksan, N., & Koenig, A. L. (1995). A longitudinal study of the roots of preschoolers’ conscience: committed compliance and emerging internalization. Child Development, 66, 1752–1759.
Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to the prefrontal cortex increases utilitarian moral judgments. Nature, 446, 908–911.
Kohlberg, L. (1964). Development of moral character and moral ideology. In M. L. Hoffman, & L. W. Hoffman (Eds.), Review of child development research. New York: Russell Sage Foundation.
Kohlberg, L., Levine, C., & Hewer, A. (1983). Moral stages: A current formulation and a response to critics. Basel: Karger.
Libby, L. K., Shaeffer, E. M., Eibach, R. P., & Slemmer, J. A. (2007). Picture yourself at the polls: visual perspective in mental imagery affects self-perception and behavior. Psychological Science, 18, 199–203. doi:10.1111/j.1467-9280.2007.01872.x.
Majdandžić, J., Bauer, H., Windischberger, C., Moser, E., Engl, E., & Lamm, C. (2012). The human factor: behavioral and neural correlates of humanized perception in moral decision making. PloS One, 7(10), e47698. doi:10.1371/journal.pone.0047698.
Manfrinati, A., Lotto, L., Sarlo, M., Palomba, D., & Rumiati, R. (2013). Moral dilemmas and moral principles: when emotion and cognition unite. Cognition and Emotion, 27, 1276–1291.
Moll, J., de Oliveira-Souza, R., Bramati, I. E., & Grafman, J. (2002). Functional networks in emotional moral and nonmoral social judgments. NeuroImage, 16(3 Pt 1), 696–703. doi:10.1006/nimg.2002.1118.
Moll, J., Zahn, R., de Oliveira-Souza, R., Krueger, F., & Grafman, J. (2005). Opinion: the neural basis of human moral cognition. Nature Reviews. Neuroscience, 6, 799–809. doi:10.1038/nrn1768.
Moll, J., De Oliveira-Souza, R., & Zahn, R. (2008a). The neural basis of moral cognition: sentiments, concepts, and values. Annals of the New York Academy of Sciences, 1124, 161–180. doi:10.1196/annals.1440.005.
Moll, J., Oliveira-Souza, R., Zahn, R., & Grafman, J. (2008b). The cognitive neuroscience of moral emotions. In Sinnott-Armstrong (Ed.), Moral psychology (vol. 3, pp. 1–17). Cambridge: MIT Press.
Nadelhoffer, T., & Feltz, A. (2008). The actor–observer bias and moral intuitions: adding fuel to Sinnott-Armstrong’s fire. Neuroethics, 1(2), 133–144.
Nakao, T., Ohira, H., & Northoff, G. (2012). Distinction between externally vs. internally guided decision-making: operational differences, meta-analytical comparisons and their theoretical implications. Frontiers in Neuroscience, 6(31). doi:10.3389/fnins.2012.00031.
Parkinson, C., Sinnott-Armstrong, W., Koralus, P. E., Mendelovici, A., McGeer, V., & Wheatley, T. (2011). Is morality unified? Evidence that distinct neural systems underlie moral judgments of harm, dishonesty, and disgust. Journal of Cognitive Neuroscience, 23(10), 3162–3180.
Pascual, L., Gallardo-Pujol, D., & Rodrigues, P. (2013). How does morality work in the brain? A functional and structural perspective of moral behavior. Frontiers in Integrative Neuroscience, 7(65). doi:10.3389/fnint.2013.00065.
Piaget, J. (1932). The moral judgment of the child. New York: Harcourt Brace Jovanovich.
Piaget, J. (1965). The moral judgment of the child. London: Routledge and Kegan Paul.
Prehn, K., Wartenburger, I., Mériau, K., Scheibe, C., Goodenough, O. R., Villringer, A., & Heekeren, H. R. (2008). Individual differences in moral judgment competence influence neural correlates of socio-normative judgments. Social Cognitive and Affective Neuroscience, 3(1), 33–46. doi:10.1093/scan/nsm037.
Pujol, J., Batalla, I., Contreras-Rodríguez, O., Harrison, B. J., Pera, V., Hernández-Ribas, R., Real, E., Bosa, L., Soriano-Mas, C., Deus, J., López-Solà, M., Pifarré, J., Menchón, J. M., & Cardoner, N. (2012). Breakdown in the brain network subserving moral judgment in criminal psychopathy. Social Cognitive and Affective Neuroscience, 7(8), 917–923. doi:10.1093/scan/nsr075.
Reniers, R. L. E. P., Corcoran, R., Völlm, B. A., Mashru, A., Howard, R., & Liddle, P. F. (2012). Moral decision-making, ToM, empathy and the default mode network. Biological Psychology, 90(3), 202–210. doi:10.1016/j.biopsycho.2012.03.009.
Robertson, D., Snarey, J., Ousley, O., Harenski, K., Bowman, F. D., Gilkey, R., & Kilts, C. (2007). The neural processing of moral sensitivity to issues of justice and care. Neuropsychologia, 45, 755–766. doi:10.1016/j.neuropsychologia.2006.08.014.
Royzman, E., & Baron, J. (2002). The preference of indirect harm. Social Justice Research, 15(2), 165–184.
Ruby, P., & Decety, J. (2001). Effect of subjective perspective taking during simulation of action: a PET investigation of agency. Nature Neuroscience, 4(5), 546–550. doi:10.1038/87510.
Schaich Borg, J., Hynes, C., Van Horn, J., Grafton, S., & Sinnott-Armstrong, W. (2006). Consequences, action, and intention as factors in moral judgments: an FMRI investigation. Journal of Cognitive Neuroscience, 18(5), 803–817.
Schaich Borg, J., Lieberman, D., & Kiehl, K. A. (2008). Infection, incest, and iniquity: investigating the neural correlates of disgust and morality. Journal of Cognitive Neuroscience, 20(9), 1529–1546.
Schleim, S., Spranger, T. M., Erk, S., & Walter, H. (2011). From moral to legal judgment: the influence of normative context in lawyers and other academics. Social Cognitive and Affective Neuroscience, 6(1), 48–57. doi:10.1093/scan/nsq010.
Sevinc, G., & Spreng, R. N. (2014). Contextual and perceptual brain processes underlying moral cognition: a quantitative meta-analysis of moral reasoning and moral emotions. PloS One, 9(2), e87427. doi:10.1371/journal.pone.0087427.
Shaver, K. G. (Ed.) (1985). The attribution of blame: Causality, responsibility, and blameworthiness. New York: Springer-Verlag.
Shenhav, A., & Greene, J. D. (2010). Moral judgments recruit domain-general valuation mechanisms to integrate representations of probability and magnitude. Neuron, 67(4), 667–677. doi:10.1016/j.neuron.2010.07.020.
Shenhav, A., & Greene, J. D. (2014). Integrative moral judgment: dissociating the roles of the amygdala and ventromedial prefrontal cortex. The Journal of Neuroscience, 34(13), 4741–4749. doi:10.1523/JNEUROSCI.3390-13.2014.
Sokol, B. W., Chandler, M. J., & Jones, C. (2004). From mechanical to autonomous agency: the relationship between children’s moral judgments and their developing theories of mind. New Directions for Child and Adolescent Development, 103, 19–36.
Sommer, M., Rothmayr, C., Döhnel, K., Meinhardt, J., Schwerdtner, J., Sodian, B., & Hajak, G. (2010). How should I decide? The neural correlates of everyday moral reasoning. Neuropsychologia, 48(7), 2018–2026. doi:10.1016/j.neuropsychologia.2010.03.023.
Takahashi, H., Yahata, N., Matsuda, T., Asai, K., & Okubo, Y. (2004). Brain activation associated with evaluative processes of guilt and embarrassment: an fMRI study. NeuroImage, 23, 967–974.
Turiel, E., Killen, M., & Helwig, C. (1987). Morality: Its structure, functions, and vagaries. In J. Kagan, & S. Lamb (Eds.), The emergence of morality in young children (pp. 155–243). Chicago: University of Chicago Press.
Vogeley, K., & Fink, G. R. (2003). Neural correlates of the first-person-perspective. Trends in Cognitive Sciences, 7(1), 38–42.
Wicker, B., Keysers, C., Plailly, J., Royet, J. P., Gallese, V., & Rizzolatti, G. (2003). Both of us disgusted in my insula: the common neural basis of seeing and feeling disgust. Neuron, 40, 655–664. doi:10.1016/S0896-6273(03)00679-2.
Yeo, B. T., Krienen, F. M., Sepulcre, J., Sabuncu, M. R., Lashkari, D., Hollinshead, M., Roffman, J. L., Smoller, J. W., Zöllei, L., Polimeni, J. R., Fischl, B., Liu, H., & Buckner, R. L. (2011). The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology, 106(3), 1125–1165. doi:10.1152/jn.00338.2011.
Yoder, K. J., & Decety, J. (2014). The good, the bad, and the just: justice sensitivity predicts neural response during moral evaluation of actions performed by others. The Journal of Neuroscience, 34(12), 4161–4166. doi:10.1523/JNEUROSCI.4648-13.2014.
Young, L., & Saxe, R. (2008). The neural basis of belief encoding and integration in moral judgment. NeuroImage, 40, 1912–1920. doi:10.1016/j.neuroimage.2008.01.057.
Young, L., Camprodon, J. A., Hauser, M., Pascual-Leone, A., & Saxe, R. (2010). Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgments. Proceedings of the National Academy of Sciences, 107(15), 6753–6758. doi:10.1073/pnas.0914826107.
Zahn, R., Moll, J., Paiva, M., Garrido, G., Krueger, F., Huey, E. D., et al. (2009). The neural basis of human social values: evidence from functional MRI. Cerebral Cortex, 19(2), 276–283.
Ethics declarations
Human and animal rights and informed consent
No animal or human studies were carried out by the authors for this article and data from previous studies were collected using PubMed database.
Conflict of interest
Maddalena Boccia, Claudia Dacquino, Laura Piccardi, Pierluigi Cordellieri, Cecilia Guariglia, Fabio Ferlazzo, Stefano Ferracuti and Anna Maria Giannini declare that they have no conflict of interest.
Cite this article
Boccia, M., Dacquino, C., Piccardi, L. et al. Neural foundation of human moral reasoning: an ALE meta-analysis about the role of personal perspective. Brain Imaging and Behavior 11, 278–292 (2017). https://doi.org/10.1007/s11682-016-9505-x