Introduction

Moral sense has been widely recognized as an ensemble of psychological mechanisms that enable otherwise selfish individuals to reap the benefits of cooperation (Greene 2013). Theories about moral reasoning have their roots in the middle of the last century, with Piaget (1932) and Kohlberg (1964) as pioneers in proposing pivotal accounts of moral reasoning. According to Piaget, individuals acquire the rules of morality through invariant stages as part of their sociocognitive development. These stages are reached by means of the individual’s reciprocal interactions with the environment, which improve perspective-taking ability and diminish the egocentrism typical of childhood (Piaget 1965). Kohlberg later proposed a six-stage progression of moral development, across which perspective-taking ability increases (Kohlberg et al. 1983). When children, around the second year of life, start to comprehend their role as causal agents, they also start to understand the intentionality of actions (Kagan 1984; Kochanska et al. 1995). Understanding when moral rules are violated, as well as the consequences of such violations, seems to be influenced by intentionality and agency. Certainly, our judgment of a wrongdoing depends on whether the act was performed by ourselves or by somebody else (Sokol et al. 2004). The affective evaluation of a violated moral rule (self-directed emotions such as guilt or shame arising from the role the individual has had in the action) is closely related to the intentional or accidental nature of the transgression, even if the physical consequences may be the same (Berthoz et al. 2006). Specifically, accidental harm is not considered morally wrong, while intentional harm is; yet harm that is intentional but incurred as an unavoidable side effect is at times considered acceptable, as is intended harm that produces a further benefit to the recipient (Turiel et al. 1987). Indeed, the ascription of intentionality to an agent and its relation to moral judgment is one of the traditional issues in philosophy and moral psychology (Manfrinati et al. 2013). Only in the last two decades has the advent of neuroimaging techniques, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), made it possible to investigate the neural foundations of moral reasoning. Both lesion and functional neuroimaging studies (i.e., fMRI, PET) have contributed to shedding light on the neural correlates of the human ability to judge the rightness or wrongness of an action (Moll et al. 2008a, b).

The correlation between brain states and moral reasoning was initially grounded in neurological case studies. One of the most representative cases is surely that of Phineas Gage, a railroad construction foreman who, in 1848, sustained an extensive lesion of the left frontal lobe as a consequence of a rock-blasting accident. He was one of the earliest documented cases providing evidence that some areas of the prefrontal lobes are linked to judgment, decision-making, social conduct, and personality (Bechara and Damasio 2005). This observation predated the systematic neuroimaging and neuropsychological investigations of the last two decades, which have made it possible to associate behavioral deficits with lesion sites in a systematic way (Damasio et al. 1994).

As stated above, since the first neuropsychological investigations, the association between moral reasoning and the prefrontal cortex has been evident (Anderson et al. 1999). In the past decade, systematic neuroimaging investigations in brain-damaged patients have allowed a better characterization of the brain areas underpinning moral reasoning. Koenigs et al. (2007) found that patients with bilateral damage to the ventromedial prefrontal cortex gave more utilitarian responses when confronted with moral dilemmas (i.e., sacrificing one person’s life to save a number of other lives). Furthermore, patients with damage to the ventromedial prefrontal cortex judged attempted harms (actions undertaken with the intent to cause harm, even if they fail) as more permissible, and accidental harms (actions undertaken with innocent intentions that accidentally result in harm) as less permissible, than did a control group (Ciaramelli et al. 2012). Ciaramelli and colleagues (2012) suggested that, because the ventromedial prefrontal cortex is necessary for integrating outcome and belief information during moral reasoning, its damage induces a selective impairment in processing belief information, resulting in an increased weight of outcome information. Interestingly, the ventromedial prefrontal cortex also showed significantly reduced activation in criminal psychopathic males, compared with control participants, during a moral judgment task (Pujol et al. 2012). Additionally, temporary disruption of the activity of the right temporoparietal junction by transcranial magnetic stimulation induces participants to judge attempted harms as more morally permissible (Young et al. 2010). This finding suggests a possible role of the temporoparietal junction in processing mental states during moral judgment tasks.

Evidence from fMRI studies in the past decade has helped to clarify the neural network underpinning moral reasoning in healthy individuals (Fumagalli and Priori 2012). The most frequent behavioral paradigm in fMRI studies requires individuals to make a moral judgment about the appropriateness of an action one might perform in a presented scenario (i.e., a moral dilemma; see, for example, Greene et al. 2001, 2004). Scenarios are usually short written stories, and the action to be morally judged frequently involves the sacrifice of one person’s life to save a number of other lives. Brief moral statements have also been used in fMRI studies (see, for example, Avram et al. 2013 and Schaich Borg et al. 2008), as well as visual scenes (Yoder and Decety 2014) and pictures (Harenski et al. 2012). Evidence from these studies suggests that a wide network of brain areas is involved in moral reasoning. This network encompasses the orbitofrontal and ventromedial prefrontal cortices, the dorsolateral prefrontal cortex, the anterior cingulate cortex, the lateral portion of the parietal lobe as well as the medial parietal lobe (including the posterior cingulate cortex and the precuneus), the temporoparietal junction, limbic areas such as the amygdala and the insula, and the temporal pole (Greene and Haidt 2002; Pascual et al. 2013). It has been proposed that these brain areas make important contributions to moral reasoning, although none is devoted specifically to it (Greene and Haidt 2002; Pascual et al. 2013).

Specifically, the ventromedial prefrontal cortex has been found to be crucial in processing emotional aspects as well as social norms and values during moral reasoning (Pascual et al. 2013). The orbitofrontal cortex has been associated with the online representation of reward and punishment and has been found to be engaged in simple moral judgments as well as in processing moral pictures compared with non-moral ones (Greene and Haidt 2002; Pascual et al. 2013). While the orbitofrontal and ventromedial prefrontal cortices have also been proposed to be involved in emotionally driven moral judgment (Greene et al. 2004), the dorsolateral prefrontal cortex is involved in cognitive control and problem-solving, and it has been proposed to modulate emotional processing in the orbitofrontal and ventromedial prefrontal cortices, probably mitigating their responses (Greene et al. 2004). Furthermore, the anterior cingulate cortex has been proposed to monitor the moral conflict between a strong emotionally driven response (processed by the ventromedial prefrontal cortex) and a more cognitively controlled response (processed by the dorsolateral prefrontal cortex), supported by abstract reasoning (Greene et al. 2004).

The temporoparietal junction has been found to be crucial in belief attribution during moral judgment (Pascual et al. 2013) and in empathy-related processes (Bzdok et al. 2012). The posterior cingulate cortex and the precuneus, whose activation has been repeatedly found in fMRI studies of moral reasoning (Pascual et al. 2013), have been proposed to be involved in affective imagery (Greene and Haidt 2002). In contrast, the lateral portion of the parietal lobe (mainly the inferior parietal lobule) has been hypothesized to contribute to cognitive control during moral reasoning (Pascual et al. 2013).

Additionally, the amygdala has been hypothesized to play a crucial role in processing empathic sadness during moral reasoning (Pascual et al. 2013) and the affective response to one’s own moral violations (Berthoz et al. 2006), whereas the insula probably processes visceral sensations associated with anger or indignation (Wicker et al. 2003; Moll et al. 2005) and with perceiving painful situations in others (Jackson et al. 2005) during moral decisions.

As has been previously noted (Greene and Haidt 2002), most of the regions found to be involved in moral reasoning, such as the precuneus, posterior cingulate cortex, ventromedial and orbitofrontal cortices, as well as the superior temporal gyrus and inferior parietal lobe, belong to the so-called default mode network (DMN) (Yeo et al. 2011). As is widely known, these regions show increased activation during resting state and reduced activation during goal-directed tasks (Gusnard and Raichle 2001). Thus, these areas have been proposed to be components of a tonically active neural system that continuously gathers information about the self and the world (Gusnard and Raichle 2001). However, the DMN has also been found to be activated during tasks that require processing of internally generated information, including self-referential information (Kelley et al. 2002). Interestingly, Nakao et al. (2012) found that the DMN was activated during internally guided decision-making. Specifically, internally guided decision-making occurs when individuals cannot rely on an externally determined, objectively correct behavior and have to rely on internally generated preferences rather than on externally presented criteria to regulate their own behavior (Nakao et al. 2012). This is often the case in moral judgment, where there is no definitively correct response and individuals have to rely on an internal representation of their value system. Nakao et al. (2012) proposed that internally guided decision-making is based largely on intrinsic brain activity within the DMN, since it depends on the individual’s own criteria rather than on circumstantial criteria. It should also be considered that neural activity within the anterior cingulate cortex and insula (i.e., the so-called salience network; Chiong et al. 2013) has been found to affect DMN activity during moral reasoning in healthy participants (Chiong et al. 2013). This modulation is reduced in patients with the behavioral variant of frontotemporal dementia, who were found to give abnormally utilitarian responses to moral dilemmas (Chiong et al. 2013). Chiong and colleagues (2013) suggested that brain damage at the level of the salience network may produce a dramatic alteration of moral behavior due to the altered causal influence of the salience network on the default mode network.

Overall, a large amount of data has been collected over the past decade on the neurobiological foundations of moral reasoning and decision-making. However, the use of markedly different paradigms prevents firm conclusions from being drawn about the neural underpinnings of moral reasoning. One of the most debated issues is the use of different perspectives (i.e., first and third person perspectives) in moral dilemmas and moral reaction tasks (Avram et al. 2014). Indeed, a moral dilemma can implicitly or explicitly impose a perspective on the individual, thereby affecting the judgment. In some studies the individual is asked to take the perspective of the protagonist (first person perspective, 1PP), whereas in others the dilemma is described in the third person (3PP), producing a difference in the individual’s answer (Royzman and Baron 2002). At least two sources of evidence bear on this debate. On the one hand, neuroimaging studies have shown different patterns of neural activity when stimuli are presented from a first or a third person perspective, both in non-moral visuospatial tasks (Vogeley and Fink 2003) and in social non-moral tasks (Ruby and Decety 2001). On the other hand, social psychological studies have repeatedly provided evidence for the so-called “actor-observer bias” (Jones and Nisbett 1971; Nadelhoffer and Feltz 2008), which consists of the individual’s tendency to attribute one’s own behavior to external factors and other people’s behavior to internal factors in negative situations. Nadelhoffer and Feltz (2008) demonstrated the existence of this effect also during moral reasoning: participants found moral violations more permissible when they were observers than when they were the actual actors. Accordingly, Avram et al. (2014) found a significant difference in accepting moral violations depending on whether a 1PP or a 3PP was required, with lower acceptability of moral violations for the 1PP. These authors also found that largely distinct neural networks subserve 1PP and 3PP moral reasoning. Furthermore, considering moral emotions, when individuals have to judge their own morally right or wrong actions, emotions such as guilt, shame, or even fear of negative social evaluation will guide their moral judgment (Berthoz et al. 2006; Finger et al. 2006; Takahashi et al. 2004). In contrast, when they are required to judge another person’s actions, emotions such as indignation and anger will prevail in moral judgment (Moll et al. 2008a, b; Shaver 1985). As a consequence, neural activity in emotion-processing regions is likely to differ between the two perspectives. Specifically, Zahn et al. (2009) found differences in neural activity when participants judged their own and others’ moral and immoral actions, interpreting this difference as due to the different moral emotions elicited by 1PP and 3PP. The authors found that shame, guilt, and pride elicited by 1PP enhanced neural activity in the prefrontal cortex and anterior temporal lobe, whereas indignation, anger, and praise elicited by 3PP enhanced neural activity in the lateral orbitofrontal cortex, insula, and dorsolateral prefrontal cortex.

The meta-analytic approach offers a useful tool to overcome methodological issues such as differences in the adopted paradigms, small sample sizes, low reliability, and the subtraction logic. Bzdok et al. (2012) made a first effort to find convergence among fMRI studies on moral reasoning by means of a quantitative meta-analysis, hypothesizing that moral reasoning is “implemented in brain areas engaged in theory of mind and empathy” (Bzdok et al. 2012, p. 789). These authors found that the brain areas consistently activated during moral reasoning largely overlapped with those supporting theory of mind. More recently, Sevinc and Spreng (2014) used a quantitative meta-analysis to test whether different experimental paradigms used in fMRI studies of moral reasoning (i.e., those requiring active deliberation or passive observation) led to activation of different brain regions. These authors found that both types of paradigm activated the DMN, that active tasks specifically recruited the temporoparietal junction, angular gyrus, and temporal pole, and that passive tasks specifically recruited the lingual gyrus and amygdala.

Due to the paucity of studies directly testing the effect of perspective during moral reasoning (Avram et al. 2014; Berthoz et al. 2006), it remains unclear whether person perspective plays a crucial role in shaping moral reasoning and whether different brain regions are recruited during moral reasoning from a first versus a third person perspective. Nevertheless, the large amount of data collected with different paradigms (that is, those using either 1PP or 3PP) allows this issue to be assessed with a quantitative meta-analysis.

In the present study we tested the hypothesis that different brain networks contribute to moral reasoning as a function of the person perspective (i.e., 1PP or 3PP), through a meta-analytic approach based on activation likelihood estimation (ALE). This method allows coordinate-based meta-analyses of fMRI data that condense the wealth of neuroimaging findings into meaningful patterns. We applied a general ALE meta-analysis to identify the neural substrates underpinning general aspects of moral reasoning, as well as two separate ALE analyses restricted to the two perspectives, namely first person perspective (1PP) and third person perspective (3PP). Based on previous observations (Avram et al. 2014; Berthoz et al. 2006), we hypothesized that different brain areas of the neural network involved in moral reasoning (and identified in the general ALE analysis) are engaged when different person perspectives are required. Such a study will also clarify whether previously observed differences between 1PP and 3PP in moral judgment (Agerström et al. 2013) may be due to the fact that when people adopt a 3PP they use a more abstract mental construction of actions and events.

Meta-analysis

Inclusion criteria for papers

The database search on PubMed was performed using the following string: “((((((((moral reasoning) OR moral decision making) OR moral judgement) OR moral judgment) AND fMRI) NOT vascular) NOT depression) NOT schizophrenia) NOT obsessive compulsive disorder” (an illustrative programmatic version of this query is sketched after the list below). Our a-priori inclusion criteria for papers were:

1) Whole-brain analysis performed using functional magnetic resonance imaging (fMRI); thus, we excluded positron emission tomography (PET) studies, electrophysiology studies, and papers that reported only results from ROI (region of interest) analyses.
2) Provision of coordinates of activation foci, in either Montreal Neurological Institute (MNI) or Talairach reference space.
3) All participants in the studies had to be healthy adults. Studies that included healthy elderly adults or children were excluded to avoid the effects of aging on moral reasoning.
4) All neuroimaging studies had to include a control condition, so as to exclude activations not directly related to moral reasoning.
5) The experimental tasks required participants to make a moral decision or to judge the moral acceptability of the proposed actions. Studies that did not focus on moral reasoning were excluded from the meta-analysis.
6) Only group studies were included.
7) No pharmacological manipulation.
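
As an illustration of the retrieval step only, the search string above could be submitted to PubMed programmatically, for instance through Biopython’s Entrez interface. The sketch below assumes that route (the e-mail address and result limit are placeholders); the actual selection was performed by screening the retrieved records against the criteria listed above.

```python
# Minimal sketch: submitting the PubMed search string used for study selection.
# Assumes Biopython is installed; the email and retmax values are placeholders.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # required by NCBI; placeholder address

query = (
    "((((((((moral reasoning) OR moral decision making) OR moral judgement) "
    "OR moral judgment) AND fMRI) NOT vascular) NOT depression) "
    "NOT schizophrenia) NOT obsessive compulsive disorder"
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first ten PubMed IDs, to be screened manually
```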

Using these criteria, we selected 31 papers. The meta-analysis was carried out on 91 neuroimaging experiments (described in the 31 published studies) using activation likelihood estimation (ALE) analysis (Fox et al. 2014). A total of 799 individuals participated in these experiments. The studies are summarized in Table 1.

Table 1 fMRI studies included in the ALE meta-analysis

Activation likelihood estimation

Coordinates from the selected studies were analyzed for topographic convergence using activation likelihood estimation (ALE) analyses. ALE meta-analysis determines whether the clustering of activation foci across experiments tapping the same cognitive domain or function is significantly higher than expected under the null distribution of a random spatial association of results from these experiments. In other words, ALE assesses the convergence between foci of different experiments by modeling a probability distribution centered at the coordinates of each focus (Eickhoff et al. 2009).
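
As a minimal sketch of the underlying computation (not the GingerALE implementation itself, which additionally scales each kernel’s width by the experiment’s sample size), each focus can be modeled as an isotropic 3D Gaussian, each experiment summarized by a modeled activation (MA) map taking the voxelwise maximum over its foci, and the ALE score computed as the voxelwise union of MA maps across experiments. Grid size, kernel width, and the coordinates below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel_map(shape, focus, sigma_mm, voxel_size=2.0):
    """Probability map of a single focus modeled as an isotropic 3D Gaussian."""
    grid = np.indices(shape).astype(float) * voxel_size        # voxel indices -> mm
    dist2 = sum((grid[d] - focus[d]) ** 2 for d in range(3))   # squared distance to the focus
    kernel = np.exp(-dist2 / (2.0 * sigma_mm ** 2))
    return kernel / kernel.sum()                               # normalize to a probability map

def modeled_activation(shape, foci, sigma_mm):
    """Per-experiment MA map: voxelwise maximum over the experiment's focus maps."""
    maps = [gaussian_kernel_map(shape, f, sigma_mm) for f in foci]
    return np.max(maps, axis=0)

def ale_map(experiments, shape, sigma_mm=6.0):
    """ALE score: voxelwise union, 1 - prod(1 - MA_i), across experiments."""
    ale = np.zeros(shape)
    for foci in experiments:
        ma = modeled_activation(shape, foci, sigma_mm)
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)
    return ale

# Toy example: two experiments reporting foci (in mm) on a small grid.
experiments = [
    [(20.0, 20.0, 20.0), (22.0, 18.0, 20.0)],   # experiment 1
    [(21.0, 20.0, 19.0)],                       # experiment 2
]
scores = ale_map(experiments, shape=(30, 30, 30))
print("peak ALE value:", scores.max())
```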

A general ALE meta-analysis was performed on the foci derived from the selected studies on moral reasoning (Table 1). The coordinates of the foci were taken from the original papers. A total of 634 foci were reported in the 91 experiments.

We also performed two separate ALE analyses on the different perspectives used in the moral reasoning experiments (i.e., 1PP or 3PP). The experiments categorized as 1PP included those using a first person perspective in moral dilemmas (Greene et al. 2004, 2001), those directly stressing self-agency (Berthoz et al. 2006), and those actually requiring a moral decision (Greene and Paxton 2010). Conversely, experiments classified as 3PP included those using a third person perspective, in which participants were required to solve moral dilemmas (Cáceda et al. 2011) or viewed moral violations (Yoder and Decety 2014). Experiments including different types of perspective were excluded from these analyses; their data were included in the general analysis but not in the perspective-specific analyses. Separate ALE analyses were performed on (1) 44 experiments using 1PP (348 participants) and (2) 41 experiments using 3PP (444 participants).
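
The categorization step is essentially bookkeeping over the coordinate table: each experiment is tagged with a perspective label, and experiments with mixed perspectives are dropped from both perspective-specific subsets. The sketch below illustrates this with a hypothetical table layout; the field names and example entries are assumptions, not the actual file format used.

```python
# Sketch of splitting the coordinate table into 1PP and 3PP subsets.
# The field names and entries below are hypothetical placeholders.
experiments = [
    {"study": "example 1PP study",   "perspective": "1PP",   "n_subjects": 20, "foci": [(0, 52, 6), (-4, 44, 12)]},
    {"study": "example 3PP study",   "perspective": "3PP",   "n_subjects": 18, "foci": [(-2, 50, 10)]},
    {"study": "example mixed study", "perspective": "mixed", "n_subjects": 15, "foci": [(4, 48, 8)]},
]

def subset(exps, label):
    """Keep only experiments tagged with a single, matching perspective label."""
    return [e for e in exps if e["perspective"] == label]

first_person = subset(experiments, "1PP")   # mixed-perspective experiments fall into
third_person = subset(experiments, "3PP")   # neither perspective-specific subset

for name, group in (("1PP", first_person), ("3PP", third_person)):
    n_subj = sum(e["n_subjects"] for e in group)
    n_foci = sum(len(e["foci"]) for e in group)
    print(f"{name}: {len(group)} experiments, {n_subj} participants, {n_foci} foci")
```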

Finally, we performed two contrast analyses to directly compare the effects of perspective [(1PP > 3PP) and (3PP > 1PP)]. These contrast analyses highlight voxels whose convergence was greater in the first than in the second condition. We also carried out a conjunction analysis of perspective [1PP ∧ 3PP] to identify voxels common to both conditions.
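
A minimal sketch of these two comparisons, assuming the ALE map of each condition has already been computed (e.g., with the ale_map function sketched above): the contrast is the voxelwise difference between the two ALE maps, assessed against a null distribution obtained by randomly reassigning experiments to the two groups, and the conjunction is the voxelwise minimum of the two maps. The permutation count is an illustrative choice, not the GingerALE default.

```python
import numpy as np

rng = np.random.default_rng(0)

def contrast_null(ale_fn, experiments_a, experiments_b, n_perm=1000, **kwargs):
    """Null distribution of voxelwise ALE differences obtained by randomly
    reassigning experiments to the two groups (label permutation)."""
    pooled = list(experiments_a) + list(experiments_b)
    n_a = len(experiments_a)
    diffs = []
    for _ in range(n_perm):
        order = rng.permutation(len(pooled))
        group_a = [pooled[i] for i in order[:n_a]]
        group_b = [pooled[i] for i in order[n_a:]]
        diffs.append(ale_fn(group_a, **kwargs) - ale_fn(group_b, **kwargs))
    return np.stack(diffs)                       # shape: (n_perm, x, y, z)

def contrast_p_map(observed_diff, null_diffs):
    """Voxelwise one-sided p-values for the contrast A > B."""
    return (null_diffs >= observed_diff).mean(axis=0)

def conjunction(ale_a, ale_b):
    """Conjunction map: voxelwise minimum of the two ALE maps."""
    return np.minimum(ale_a, ale_b)

# Example usage (with ale_map and the toy experiments from the sketch above):
# diff = ale_map(exps_1pp, shape=(30, 30, 30)) - ale_map(exps_3pp, shape=(30, 30, 30))
# null = contrast_null(ale_map, exps_1pp, exps_3pp, n_perm=200, shape=(30, 30, 30))
# p_map = contrast_p_map(diff, null)
```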

The ALE meta-analysis was performed using GingerALE 2.3.1 (brainmap.org) with MNI coordinates (Talairach coordinates were automatically converted into MNI coordinates by GingerALE). Following Eickhoff et al.’s (2009) modified procedure, ALE activation maps were computed for each experiment and a random-effects analysis was then performed across all experiments. The resulting ALE map was thresholded at p < 0.05 using false discovery rate (FDR) correction for multiple comparisons and a cluster size >200 mm3.
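
For illustration, the thresholding step combines a voxelwise false discovery rate correction with a minimum cluster extent. The sketch below shows the generic form of that procedure, assuming a map of voxelwise p-values is already available: Benjamini-Hochberg selection of significant voxels, followed by removal of suprathreshold clusters smaller than 200 mm3 (25 voxels at an assumed 2 mm isotropic resolution). The actual computation was performed by GingerALE.

```python
import numpy as np
from scipy import ndimage

def fdr_mask(p_map, q=0.05):
    """Benjamini-Hochberg FDR: boolean mask of significant voxels."""
    p = p_map.ravel()
    order = np.argsort(p)
    ranks = np.arange(1, p.size + 1)
    below = p[order] <= q * ranks / p.size
    if not below.any():
        return np.zeros_like(p_map, dtype=bool)
    threshold = p[order][below].max()            # largest p-value passing the BH criterion
    return p_map <= threshold

def cluster_filter(mask, min_volume_mm3=200.0, voxel_size_mm=2.0):
    """Discard suprathreshold clusters smaller than the minimum volume."""
    min_voxels = int(np.ceil(min_volume_mm3 / voxel_size_mm ** 3))
    labels, n_clusters = ndimage.label(mask)
    keep = np.zeros_like(mask)
    for k in range(1, n_clusters + 1):
        cluster = labels == k
        if cluster.sum() >= min_voxels:
            keep |= cluster
    return keep

# thresholded = cluster_filter(fdr_mask(p_map, q=0.05))
```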

The ALE results were overlaid onto an MNI-normalized template using MRIcro (http://www.mccauslandcenter.sc.edu/mricro/index.html).

Results

General ALE

The ALE meta-analysis showed a wide network of activation, spanning from the occipital to the frontal lobe (Fig. 1). Specifically, on the medial brain surface we found clusters of activation in the medial and ventromedial prefrontal cortices and in the anterior and posterior cingulate cortices. We also found activation in the bilateral amygdala and insula. On the lateral brain surface we found bilateral activation spanning from the parietal to the temporal lobe and including the temporoparietal junction and lingual gyrus. We also found activation in the dorsolateral prefrontal cortex, extending to the orbitofrontal cortex (Table 2).

Fig. 1

Results of the general ALE meta-analysis. Notes LH = Left hemisphere, RH = Right hemisphere

Table 2 Regions showing consistent suprathreshold activations in fMRI studies on moral reasoning, as it results from ALE meta-analysis

Single ALE

To test whether 1PP and 3PP during moral reasoning led to activation of different brain areas, we performed two separate ALE analyses. The ALE analysis of fMRI experiments using a 1PP during moral reasoning (Fig. 2) revealed clusters of activation in the bilateral anterior and posterior cingulate cortices, insula, superior and middle temporal gyri, and lingual gyrus, as well as in the right supramarginal and angular gyri (see Table 3).

Fig. 2

Results of the separate ALE analyses. Yellow-to-red patches show consistent activations across fMRI studies using 1PP, whereas blue-to-light blue patches show consistent activations across fMRI studies using 3PP. Notes LH = Left hemisphere, RH = Right hemisphere

Table 3 Regions showing consistent suprathreshold activations in fMRI studies on first person perspective (1PP) in moral reasoning, as it results from ALE meta-analysis

The ALE analysis of fMRI experiments using a 3PP during moral reasoning (Fig. 2) revealed clusters of activation in the bilateral amygdala, medial and ventromedial prefrontal cortices, and dorsal prefrontal cortex, as well as in the bilateral posterior cingulate cortex and middle and superior temporal gyri (Table 4).

Table 4 Regions showing consistent suprathreshold activations in fMRI studies on third person perspective (3PP) in moral reasoning, as it results from ALE meta-analysis

Contrast analyses

Results of the T contrast [1PP > 3PP] showed clusters of voxels that were more activated by a 1PP (Fig. 3) in the bilateral insula and superior temporal gyrus as well as in the anterior cingulate cortex, lingual and fusiform gyri, middle temporal gyrus and precentral gyrus in the left hemisphere (Table 5).

Fig. 3

Results of the contrast analysis. Yellow-to-red patches show areas with higher activation for 1PP as compared to 3PP, whereas blue-to-light blue patches show areas with higher activation for 3PP as compared to 1PP. Notes LH = Left hemisphere, RH = Right hemisphere

Table 5 Regions showing contrasts between first person perspective (1PP) and third person perspective (3PP) activations in fMRI studies on moral reasoning, as it results from ALE meta-analysis

Results of the opposite T contrast [3PP > 1PP] showed clusters of voxels that were more activated by a 3PP (Fig. 3) in the bilateral amygdala, the posterior cingulate cortex, insula and supramarginal gyrus in the left hemisphere as well as the medial and ventromedial prefrontal cortex in the right hemisphere (Table 5).

Conjunction analyses

Conjunction analysis [1PP ∧ 3PP] showed that the two perspectives partially share a neural network consisting of the bilateral posterior cingulate cortices and anterior part of the middle temporal gyri as well as the left posterior part of the superior temporal gyrus (Fig. 4).

Fig. 4

Results of the conjunction analysis between 1PP and 3PP. Notes LH = Left hemisphere, RH = Right hemisphere

Discussion

In the present study we tested the hypothesis that different brain networks contribute to moral reasoning as a function of the person perspective (i.e., 1PP or 3PP) used during the experimental task. To this aim we used an ALE meta-analysis to identify (i) the neural underpinnings of general aspects of moral reasoning, (ii) the specific neural substrates of moral reasoning from either a 1PP or a 3PP, and (iii) the brain regions involved in both 1PP and 3PP.

Consistent with previous meta-analyses (Sevinc and Spreng 2014; Bzdok et al. 2012), we found that a wide network of areas on both the medial and lateral brain surfaces is associated with moral reasoning (Fig. 1). This network is consistent with that identified by previous lesion and functional neuroimaging studies (see Introduction) and compatible with the functional specializations proposed previously (Greene and Haidt 2002; Pascual et al. 2013). Indeed, this network includes the ventromedial prefrontal cortex, anterior and posterior cingulate cortices, and precuneus on the medial brain surface, and the supramarginal gyrus, inferior parietal lobule, and superior and middle temporal gyri on the lateral brain surface. In addition, we found activations in the orbitofrontal and ventromedial prefrontal cortices, in the insula, and in the amygdala.

More interestingly, we found that several regions within this network were more activated by 1PP or by 3PP (Fig. 3). Specifically, 1PP elicited higher activation in the bilateral insula and superior temporal gyrus as well as in the anterior cingulate cortex, lingual and fusiform gyri, middle temporal gyrus, and precentral gyrus in the left hemisphere, whereas 3PP elicited higher activation in the bilateral amygdala, in the posterior cingulate cortex, insula, and supramarginal gyrus in the left hemisphere, and in the medial and ventromedial prefrontal cortex in the right hemisphere. The two perspectives shared activation of the posterior cingulate cortices and the anterior part of the middle temporal gyri, as well as the left posterior part of the superior temporal gyrus.

Globally, the results from the general ALE strongly confirm the importance of the ventromedial prefrontal cortex and the anterior cingulate cortex for human moral reasoning. This is consistent with previous functional neuroimaging (Greene et al. 2001, 2004; Yoder and Decety 2014; Harenski et al. 2012) and lesion studies (Damasio et al. 1994; Anderson et al. 1999; Koenigs et al. 2007). Interestingly, we found that these regions exhibited higher activation as a function of perspective: the left anterior cingulate cortex was more activated during moral reasoning when a 1PP was required, whereas the ventromedial prefrontal cortex was more activated during moral reasoning from a 3PP. These regions, together with the dorsolateral prefrontal cortex, have repeatedly been hypothesized to make pivotal contributions to moral reasoning. As reported above, the ventromedial prefrontal cortex has been hypothesized to emotionally drive moral decisions, whereas the dorsolateral prefrontal cortex has been hypothesized to act as a rational filter (Fumagalli and Priori 2012). The anterior cingulate cortex has been hypothesized to mediate the conflict between the emotional components of moral reasoning (processed in the ventromedial prefrontal cortex) and its rational components (processed in the dorsolateral prefrontal cortex) (Fumagalli and Priori 2012). Our results shed further light on the contribution of these areas to moral reasoning, strongly supporting a functional specialization according to the perspective used during moral reasoning.

Furthermore, the present results confirm the involvement of structures such as the amygdala and insula, previously hypothesized to play specific roles in processing empathic sadness (Pascual et al. 2013) and visceral sensations associated with anger or indignation (Wicker et al. 2003; Moll et al. 2005) during moral reasoning. However, the results of the present study (see contrast analyses) suggest that these regions make different contributions to moral reasoning as a function of the personal perspective adopted: 1PP yields higher activation of the bilateral insula, whereas 3PP yields higher activation of the bilateral amygdala.

The consistent activation in the superior temporal gyrus, as well as the activation of the medial part of the parietal lobe at the level of the precuneus, supports the idea that regions involved in encoding beliefs and integrating them with intentions (Young and Saxe 2008; Allison et al. 2000) are crucial in determining moral behavior (Pascual et al. 2013). Furthermore, neuroimaging findings in healthy subjects suggest a central role for the precuneus in a wide spectrum of highly integrated tasks, including self-processing operations, namely first person perspective (1PP) taking and the experience of agency (for a review see Cavanna and Trimble 2006).

The present results suggest that the neural network consistently activated in fMRI studies of moral reasoning roughly corresponds to the DMN, which is required in internally guided decision-making (Nakao et al. 2012). Furthermore, the activity of this network is affected by activity in the salience network (i.e., anterior cingulate cortex and insula) (Chiong et al. 2013). This may explain the dramatic consequences for the individual’s moral behavior following brain damage at the level of the ventromedial prefrontal cortex (Chiong et al. 2013; Anderson et al. 1999; Koenigs et al. 2007): a lesion of the ventromedial prefrontal cortex may disrupt the normal functioning of the neural network supporting moral reasoning.

Taken together with the findings of previous studies (Bzdok et al. 2012), the present results strongly suggest that a wide network of brain areas, rather than a single brain area, underpins moral reasoning. This is also consistent with the previous observation that, among the brain areas found to be involved in moral reasoning, none is specifically devoted to it (Greene and Haidt 2002; Pascual et al. 2013). Furthermore, we hypothesize that the differences in the neural network are due to the fact that the 3PP implies a more abstract level of reasoning, while the 1PP implies a higher level of emotional involvement. Therefore, the moral implications of an action considered from the 1PP are more salient, as shown by previous research (Libby et al. 2007). We can suppose a system similar to the dual-process morality described by Greene (2009): in our case, the third person perspective involves a shift towards the higher functions of moral reasoning, while the first person perspective is more sensitive to automatic emotional responses.