Abstract
In this article, I argue for four theses. First, libertarian and compatibilist accounts of moral responsibility agree that the capability of practical reason is the central feature of moral responsibility. Second, this viewpoint leads to a reasons-focused account of human behavior. Examples of human action discussed in debates about moral responsibility suggest that typical human actions are driven primarily by the agent’s subjective reasons and are sufficiently transparent to the agent. Third, this conception of self-transparent action is a questionable idealization. As shown by psychological research on self-assessment, motivated reasoning, and terror management theory, humans oftentimes have only a limited understanding of their conduct. Self-deception is the rule rather than the exception. Fourth, taking the limited self-transparency of practical reason seriously leads to a socially contextualized conception of moral responsibility.
1 Introduction
Moral responsibility is surrounded by intricate philosophical questions: Why are humans morally responsible at all? Can there be moral responsibility in a deterministic universe? Philosophical debates about responsibility-related questions are far from settled, and there is a vast and growing variety of competing positions. However, despite these persistent disagreements, there is one central concept underlying many prominent accounts of moral responsibility: human rationality. Several scholars have pointed out that both compatibilist and libertarian accounts of moral responsibility converge in their emphasis on human rationality (Clarke, 2003a, pp. 15–16; Kane, 2005, p. 60; Keil, 2009, p. 60; Levy, 2011, pp. 44–45; Martin, 2014, pp. 23–24; Talbert, 2016, p. 29; Vargas, 2013, p. 66; Vierkant et al., 2013, p. 3). For many philosophers, the capability of practical reason is a necessary condition for moral responsibility. The crucial ability that separates humans from other animals is the ability to understand reasons and to act in accordance with them. Because humans are capable of evaluating reasons and transforming these reasons into actions, they can justifiably become the targets of ethical criticism.
In this article, I argue for four claims. First, despite the numerous disagreements between compatibilists and libertarians, most responsibility accounts from both sides identify practical reason as the central human feature that guarantees moral responsibility. Second, this rationalistic conception of moral responsibility implies a one-sided account of human action. Defining human rationality as the core of moral responsibility suggests that subjective reasons are the main drivers of human behavior and that actions usually are sufficiently transparent to the agent. Third, this picture of human conduct is unrealistic. As suggested by different branches of research in psychology, sufficiently self-transparent actions are much rarer than we think. Especially in cases in which moral norms and values are at stake, self-deception is the rule and not the exception. Fourth, since the social context shapes people’s abilities for self-understanding, moral responsibility is to be conceptualized as a socially contextualized phenomenon.
2 The Common Denominator between Compatibilism and Libertarianism
Compatibilists and libertarians usually disagree about at least two questions (Griffith, 2013; Kane, 2005, 2011; Keil, 2009, 2013; Talbert, 2016; Tiberius, 2015): First, is determinism true? Second, are determinism and moral responsibility compatible? Compatibilists hold that we can be morally responsible even in a deterministic universe, and they claim either that our universe is deterministic or that it does not matter whether it is. Libertarians deny that moral responsibility and determinism can be reconciled, and they argue that determinism is false.
However, both sides of the debate agree that the human capacity of practical reason is the central feature of moral responsibility, as aptly pointed out by Randolph Clarke:
Several compatibilist accounts identify free will with a capacity to direct one’s behavior by reflective practical reasoning. […] Acting with such a capacity is, I believe, a necessary condition of acting with free will, and an adequate libertarian account will need to affirm this. (Clarke, 2003b, 295, fn. 24, emphasis in original).
There is ample evidence proving Clarke right in his claim that many compatibilist accounts of moral responsibility (and free will) revolve around the capacity of practical reason. For example, according to Fischer and Ravizza, for an agent to be morally responsible, actions have to be the result of the agent’s “own, moderately reasons-responsive mechanism” (Fischer & Ravizza, 1998, p. 1992). Details of Fischer and Ravizza’s account aside, their fundamental point is that a morally responsible agent has to be able to understand reasons and to act in accordance with them. Other compatibilists have brought forward similar thoughts using differing terminology. Kadri Vihvelin identifies “some bundle of abilities in virtue of which we think and deliberate and make decisions and choices and form intentions about what to do” (Vihvelin, 2017, 58) as the guarantors of free will and moral responsibility. Susan Wolf calls the central features of moral responsibility “the ability to know what is in accordance with the True and the Good” and “the ability to convert one’s knowledge into action” (Wolf, 1990, p. 87), while R. Jay Wallace talks about “the ability to grasp and apply moral reasons, and to govern one’s behavior by the light of such reasons” (Wallace, 1994, p. 1). Even the compatibilist position of Harry Frankfurt, according to which human will is crucial for moral responsibility, acknowledges the central role of practical reason in moral responsibility:
In maintaining that the essence of being a person lies not in reason but in will, I am far from suggesting that a creature without reason may be a person. For it is only in virtue of these rational capacities that a person is capable of becoming critically aware of his own will and of forming volitions of the second order. The structure of a person’s will presupposes, accordingly, that he is a rational being. (Frankfurt, 1988, p. 17).
Of course, all these compatibilist positions differ in various ways. Nonetheless, all are committed to the notion that practical reason is the central condition of moral responsibility.
On the libertarian side of the fence, characterizations of moral responsibility are very similar, inasmuch as reasons-talk is ubiquitous in libertarian philosophies. For example, according to Timothy O’Connor, “the prior presence of consciously considered reasons” (O’Connor, 2003, 276) is a necessary condition of libertarian free and responsible actions. Robert Kane lists the following conditions that ensure that choices are sufficiently under the control of an agent: “the choices were willed either way, were done for reasons and the agents endorsed them” (Kane, 2007, 29). For Laura Ekstrom, a “decision or other act is directly free just in case it is caused non-deviantly and indeterministically by reasons” (Ekstrom, 2019, p. 137). David Wiggins writes that “patterns of practical deliberation” (Wiggins, 2003, 114) are needed for free and responsible actions. Finally, according to Geert Keil, freedom of will and moral responsibility are based on the “complex capability of practical reasoning, of examining one’s own wishes and suspending them if necessary, and of letting the outcome of this process result in action” (Keil, 2009, p. 27, translated from German).
I do not want to blur the lines between compatibilism and libertarianism. Libertarian accounts of moral responsibility presuppose metaphysical alternatives in human reasoning and action, whereas compatibilist accounts do not. Nevertheless, both sides converge on one point: The human capacity of practical reason is the central feature of moral responsibility.
3 Subjective Reasons Govern Human Action
The fact that many philosophers of moral responsibility focus on the human capacity of practical reason leads to an understanding of human action that is heavily reasons-focused. From this perspective, human action is governed primarily by the reasons the agent cites for their action. Before humans act, they reflect on reasons for and against different actions and subsequently they act because of one or several of these reasons. If the agent does not consciously consider reasons before acting, they can reliably give reasons after the action. Robert Nozick paradigmatically expresses this reasons-focused perspective:
Making some choices feels like this. There are various reasons for and against doing each of the alternative actions or courses of action one is considering, and it seems and feels as if one could do any one of them. In considering the reasons, mulling them over, one arrives at a view of which reasons are more important, which ones have more weight. One decides which reasons to act on; or one may decide to act on none of them but to seek instead a new alternative since none previously considered was satisfactory. (Nozick, 1995, 101).
What is exemplary here for many analyses of human action in debates about moral responsibility is the fact that Nozick describes human action as exclusively driven by the agent’s subjective reasons. Nozick and many other philosophers of moral responsibility characterize human action by sufficient self-transparency. At least under normal circumstances, humans can reliably explain why they behave in a certain way and the reasons they cite for their behavior are the main (or the sole) grounds behind their actions. Cases of self-deception, in which people tell a false or a highly incomplete story about their own conduct, are rare occasions. Usually, humans know why they are doing what they do.
The debate about moral responsibility is pervaded by examples of self-transparent actions that are guided exclusively by the agent’s reasons. No matter if you take Robert Kane’s case of a woman who is torn between doing the morally right thing and acting selfishly (Kane, 1998, p. 126; Kane, 2007, 28), Fischer and Ravizza’s numerous examples of different actions by various individuals (Fischer & Ravizza, 1998, 59 ff., 65, 83, 85, 87 f., 99, 215 ff., 232, 234, 241 ff., 243), or even Harry Frankfurt’s much discussed case of a drug addict (Frankfurt, 1988, 17 ff., 22, 24 ff., 96, 164), in all these and other examples that philosophers use to illustrate core features of their theories, agents are self-transparent beings.
4 The Case for Omnipresent Self-Deception
Undoubtedly, there are cases of self-transparent action. If I go to the bakery to buy some bread, my subjective reasons (e.g., being hungry and craving bread) fully justify and explain the action. At least, there are no grounds for assuming that my reasons are a mere delusion and that there are other factors actually causing me to go to the bakery. Nevertheless, many cases of human action are much more complicated. Oftentimes, the agent’s subjective reasons are only one of many factors that drive the action. However, people often do not recognize these other factors. Instead, they tend to rationalize their behavior.
As aptly put by Schwitzgebel and Ellis, “rationalization is post-hoc reasoning toward a favored conclusion, where both the preference for the conclusion and the search for justifications are shaped by some epistemically non-probative distorting factor that isn’t explicitly appealed to in those justifications.” (Schwitzgebel & Ellis, 2017, p. 172) In cases of rationalization, people do not realize that their subjective reasons are only a part of the story and that there are other factors exerting decisive influence on their conduct. People often disregard or misunderstand the influence of their self-image, identity, self-esteem, and their basic worldview on their reasoning and behavior. Simultaneously, they are convinced that their subjective reasons provide a comprehensive explanation of their action. The point is not that people believe themselves to be purely rational and self-transparent in general, but that they identify concrete examples of self-deception primarily in other people.
[W]e tend to treat our own introspections as something of a gold standard in assessing why we have responded in a particular manner and whether our judgments have been tainted by bias. By contrast, we treat the introspections of other actors as merely another source of plausible hypotheses – to be accepted or rejected as a function of their plausibility in light of what we know about the particular actor and about human behavior in general. (Pronin et al., 2004, 784).
While we have no problem pointing out cases of limited self-understanding in other people, we believe our own action explanations to be valid under normal circumstances. However, as suggested by psychological evidence, we underestimate the complexity of our actions much more often than we think. Self-deception is a ubiquitous phenomenon.
4.1 Failures of Self-Assessment
Oftentimes, people have difficulties arriving at a sound evaluation of their own capabilities (Dunning et al., 2004; Pronin et al., 2004; Zell & Krizan, 2014). In many realms, people believe themselves to be better than average (Zell et al., 2020). For example, people think that they are above average in driving a car (Svenson, 1981), in handling guns safely (Stark & Sachau, 2016), and in numerous moral qualities and intellectual abilities (Brown, 2012; Friedrich, 1996; Kemmelmeier & Malanchuk, 2016; Kruger & Dunning, 1999; Pronin et al., 2002). Concurrently, people believe themselves to be less suggestible than other people (Andsager & White, 2013; Davison, 1983; Eisend, 2017; Paul et al., 2000). For instance, people judge themselves to be less influenced than other people by political polls (Wei et al., 2011), fake news (Jang & Kim, 2018), deceptive advertising (Xie & Johnson, 2015), and sexual content in movies and on the internet (Rosenthal et al., 2018; Shen et al., 2015). Since it is impossible that the average person actually has above-average skills or is less suggestible than average, these research results suggest robust overconfidence in people.
In addition, people are highly skilled in explaining away evidence that contradicts this inflated self-image. They identify the causes of failures and mistakes in external circumstances or in temporary conditions, while simultaneously attributing the causes of success to internal and stable factors (Allen et al., 2020; Mezulis et al., 2004). If your sports team loses, it was the referee’s fault, but if you win, it was due to your team’s excellent physical fitness.
In sum, people tend to have an overly optimistic opinion of themselves. One central explanation for this phenomenon is motivational (Gaertner et al., 2002; Mezulis et al., 2004; Sun et al., 2008; Zell et al., 2020). We want to see ourselves in a positive light. Thinking of ourselves in overly positive terms is an easy way to boost our self-esteem. People do not want to appear unintelligent, incapable, or immoral, and their inflated self-image helps them satisfy this desire. However, we do not justify our self-assessments in this way. We do not claim that we believe ourselves to be better than other people because this protects our self-esteem. We simply believe that we assess our qualities reliably. Self-evaluations frequently are cases of self-deception. Beyond that, because people’s self-evaluations can determine which kinds of actions they engage in (e.g., driving extremely fast, handling loaded guns, etc.), flaws in self-insight can contribute to unethical behavior.
4.2 Selective Information Processing and Motivated Reasoning
Human information processing is heavily distorted in favor of people’s basic identities and worldviews. People tend to pay less attention to information that contradicts their fundamental convictions (W. Hart et al., 2009), to process this information rather superficially (Richter & Maier, 2017), and to remember it inaccurately (Hennes et al., 2016; Shao & Goidel, 2016). Beyond that, human reasoning is oftentimes highly biased in a similar manner. People accept evidence that supports their fundamental attitudes and subject contrary evidence to extensive criticism. Simultaneously, they are convinced that they are just giving an objective assessment of the evidence (Hornsey & Fielding, 2017; Kunda, 1990; Nickerson, 1998; Rothmund et al., 2017). For example, with regard to various contested topics (e.g., climate change, the death penalty, gaming and aggression, etc.), people judge scientific and journalistic evidence to be valid and credible if it supports their attitudes, and they doubt contrary information by citing sophisticated counter-arguments. Likewise, scientific experts are judged to be reliable sources only as long as they support people’s opinions (Bender et al., 2016; Ditto et al., 2019; Greitemeyer, 2014; Kahan, 2013; Kahan et al., 2011; Lord et al., 1979; Nauroth et al., 2014, 2015). Confronting people who hold false opinions with contrary scientific evidence can in some cases even backfire and lead to a strengthening of these wrong attitudes (Chapman & Lickel, 2016; P. S. Hart & Nisbet, 2012; Nyhan et al., 2014).
One main factor behind such motivated reasoning is the fact that people’s attitudes are usually a central part of core identities and global worldviews. For example, attitudes about the nonexistence of climate change or about the dangerousness of vaccines are connected to various moral convictions, personal values, political ideologies, spiritual beliefs, and religious creeds (Browne et al., 2015; Hornsey et al., 2016; Hornsey et al., 2018; Rutjens et al., 2018). Having these worldviews questioned is very unpleasant and, consequently, people try to get out of this unpleasant state of mind (Cooper, 2007; Festinger, 1957). Because, in most cases, it is much easier to doubt counter-attitudinal evidence in various ways than to question and change one’s identity, people selectively believe in arguments that help them to protect their worldview.
It is easy to see how distorted information processing can contribute to morally problematic behavior. Just imagine a mother whose belief in conspiracy theories about vaccination is the product of distorted information processing and worldview defense. Since the mother is strongly convinced that she simply has discovered the truth about vaccination based on an objective evaluation of information, she refuses vaccinations for her child. Due to the lack of immunization, the child gets severely ill and suffers from long-term effects for life.
Motivated information processing is a striking case of self-deception. People do not claim to hold on to their opinions in the face of contrary evidence because this is the easiest way to protect their identity. On the contrary, they claim that they simply have the better arguments on their side. While people think that they believe in their fundamental convictions and basic worldviews because of the reasons they cite, psychological research suggests that it is oftentimes the other way round. People find reasons appealing because these reasons help them justify their identity.
4.3 Dealing with Existential Terror
Humans are the only known animals that are aware of their mortality. Humans know that their existence will inevitably end one day. According to terror management theory, much of human activity is devoted to suppressing existential terror that comes along with awareness of the finiteness of life (Pyszczynski et al., 1999; Solomon et al., 2000, 2015). Humans create culture to fight their fear of death. By adopting cultural worldviews (e.g., moral norms, political beliefs, religious convictions, etc.), humans take part in supra-individual entities that will outlast their own existence. Cultural worldviews provide symbolic immortality. By taking part in a culturally shaped lifestyle, humans are not mere biological organisms but rather unique beings that participate in phenomena transcending their individual existence. When humans live in accordance with norms, values, and expectations of their cultural worldview, their self-esteem is protected or increased, whereby existential terror can be kept at a minimum.
From the perspective of terror management theory, human culture is a coping mechanism for dealing with existential terror. The downside of this is that competing cultures can be threatening. Since different cultures value different behaviors, beliefs, and traditions, other cultures call one’s own cultural worldview into question. Thereby, other cultures challenge one’s own way of coping with mortality and have the potential to trigger existential fear. For this reason, people engage in worldview defense. When people are reminded of their own mortality, they strengthen their political beliefs (Burke et al., 2013; Jost et al., 2017). Furthermore, making people aware of their mortality can trigger devaluations of other cultures, intolerance, stereotypes, prejudice, aggression, and propensity to violence (Burke et al., 2010; Greenberg & Kosloff, 2008; Martens et al., 2011; Niesta et al., 2008; Vail et al., 2019). Consequently, research on terror management theory suggests that existential terror is one driver of conflicts between cultures.
Human coping with existential terror is a further case of self-deception. Admittedly, people sometimes claim to create things (e.g., art or science) in order to leave something meaningful behind after they have died. However, people do not claim that they believe in certain moral norms or political attitudes because this helps them deal with their fear of death. People do not justify the devaluation of different religious creeds by referring to the possible threat to their own terror management. They do not cite their existential fear as a reason for fighting another culture. Instead, people are convinced that they have the right cultural beliefs simply for the right reasons.
4.4 Summing up
There is much more psychological evidence for decisive impacts on human conduct, which we oftentimes do not cite as reasons, such as the influence of implicit just world beliefs on the tendency to blame victims (Furnham, 2003; Hafer & Bègue, 2005; Lerner, 1980), the influence of existential and epistemic motives on political opinions (Jost, 2017; Jost et al., 2017; Jost, 2018; Jost et al., 2003), or the influence of people’s striving for a positive self-image on their justification of meat consumption (Bastian & Loughnan, 2017; Timm, 2016). However, the basic pattern should be clear by now. In many instances, we do not realize the marked influence of our core desires, motives, and fears on our attitudes, our behavior, and on the reasons that we cite for our attitudes and behavior. While humans are much worse at gaining valid self-insight than they think, they are extremely good at rationalizing their conduct.
5 A Realistic Conception of Human Action
Examples of human actions discussed in the debate about moral responsibility usually follow a typical pattern. Either an agent A performs action X for reason R, or the agent A decides between doing X because of reason R and doing Y because of reason S. However, as suggested by psychological research, a large number of human actions do not follow this simplistic pattern. In light of the highly limited self-transparency of practical reason outlined above, a realistic conception of human action looks more like the following: A is convinced that they have performed an action because of reason R. However, A has also performed the action because of S, T, and Q without realizing the influence of these factors. Furthermore, if S, T, and Q had not influenced A’s action, or if A were aware of their influence, A would probably not cite R as a reason.
Let me illustrate this with one example: Peter attacks peaceful demonstrators of the climate justice movement and hits one of the protesters in the face. He says that he punched the activist because he knows that climate change is a hoax and because he wanted to protect the rights and liberties of the citizens, which the climate activists are planning to undermine. He further claims that his belief in the nonexistence of climate change is based on thorough internet research of the relevant facts and arguments. While Peter feels confident that he has provided a comprehensive explanation of his action, matters are much more complicated. Peter does not realize that he is a climate skeptic because denying climate change simply fits nicely into his broader worldview. He has always been a strong advocate of individual liberties and an opponent of state interventions. In addition, Peter believes that there is no intrinsic value to nature. It is much easier to integrate the conviction that climate change is a hoax into this belief system than to realize that fundamental changes to society are needed in order to preserve the natural bases of human existence. Moreover, Peter is a convinced Republican and he thinks that the U.S. is the greatest country on earth. Therefore, his climate change denial serves the preservation of his political and national identity. Beyond that, all his normative convictions and identity-related attitudes have heavily biased Peter’s internet research in favor of climate skepticism. However, since Peter thinks that he is a fairly intelligent, clever, rational, and well-educated citizen, he is sure that he merely sorted out the good arguments from the bad. Finally, the fundamental attitudes and worldviews that lie beneath his climate skepticism serve his personal terror management. However, if you asked him whether his political and moral convictions served the purpose of suppressing existential terror, he would think that you were joking.
This case illustrates that the subjective reasons of the agent are oftentimes only one of numerous decisive factors contributing to human actions. By identifying the faculty of practical reason as the core feature of moral responsibility and by concentrating on self-transparent instances of action, philosophers of moral responsibility present a highly idealized, simplistic, and overly intellectualized picture of human behavior. Of course, humans do possess practical reason and, admittedly, psychological studies cannot disprove this anthropological fact (Brink, 2013, 140 ff.; Schlosser, 2013, 217–222 and 229 ff.; Tiberius, 2015, 144 ff.). Nevertheless, concentrating on the capacity of practical reason bears the risk of detaching philosophical discussions of human action from reality. While the agent’s reasons are an important aspect of action, they are far from the only one and, in many cases, subjective reasons are mere rationalizations. A conception of moral responsibility that wants to do justice to the complexity of human behavior has to take into account the limited self-transparency of practical reason. Consequently, identifying the capacity of practical reason as the core guarantor of moral responsibility is not wrong, but it is unsatisfactory.
6 Moral Responsibility as a Socially Contextualized Phenomenon
While we oftentimes have considerable difficulties gaining valid self-insight, the social contexts we live in have a major influence on how far we come on our journey to self-understanding. The amount and the kind of education we receive, the institutions we are a part of, the way political messages are communicated to us, the social diversity of the city we live in, and many more aspects of our social environment can promote self-critical, reliable, and thorough self-understanding, or they can impede it. Because social contexts provide the basis for a more or less self-transparent functioning of practical reason, moral responsibility itself has to be socially contextualized.
Conceptualizing moral responsibility as a socially contextualized phenomenon means that there is responsibility for social surroundings. Moral responsibility is not limited to individual actions, attitudes, or character traits. We are also responsible for the creation of social contexts and for the impact that these contexts have on ourselves and on other people. One dramatic illustration of this point is provided by the Milgram experiments (Blass, 1999, 2012; Lüttke, 2004; Milgram, 1974). These experiments have shown that an alarmingly high number of people have extreme difficulties standing up to authority figures. Even mild pressure exercised by an authority can nudge people into doing horrible things. Furthermore, people are not very good at anticipating this devastating influence of authority on their own behavior (Blass, 1999; Grzyb & Doliński, 2017; Milgram, 1974). People tend to underestimate to a considerable degree how many other people will commit immoral deeds under authority pressure, and they believe themselves to be even less suggestible by authority than other people are.
On my account, responsibility for immoral deeds done under authority pressure has to be expanded beyond the individual perpetrators. Those people who put others in a position in which authority pressure makes immoral behavior likely are co-responsible for the misdeeds. Take the example of war crimes. It is highly probable that the hierarchy-based and authority-driven structure of the military can contribute to such wrongdoings. Soldiers are trained to follow orders, not their conscience. Consequently, training people in this manner and sending them into highly stressful battle situations means creating a social environment that can be detrimental to self-transparent reasoning in the soldiers and, thereby, make immoral deeds likely. That is why the soldiers are not the only ones responsible for their crimes. Generals and politicians who put soldiers in this position must bear some share of the blame. In addition, since it is a societal decision to fund the military and to send soldiers into battle, at least those parts of society that approve of this decision are partly to blame for military wrongdoings, too.
Moral responsibility for social surroundings, however, is not limited to detrimental influences on moral conduct. More important is responsibility for enabling societal conditions. Since societal conditions can have a huge influence on how reliably people understand their own reasoning processes, it is possible to shape these conditions in a way that facilitates reflective self-understanding. Therefore, we are responsible for how effectively we support each other in understanding ourselves.
At a very basic level, this includes, among other things, the way we communicate with each other. Let us assume that the activist in the aforementioned example openly ridiculed Peter’s beliefs about climate change before Peter punched him. This ridicule challenged Peter’s fundamental identity and self-image and, thereby, triggered several biased reasoning processes, whereby Peter came to believe in the nonexistence of climate change even more strongly than before. Consequently, while the activist is not responsible for the fact that Peter punched him, he bears at least some responsibility for communicating in a way that strengthened Peter’s problematic attitudes.
At a higher level, responsibility for enabling societal conditions entails the responsibility for the general opportunities that people have to acquire reliable self-insight. Through socialization processes, societies can provide individuals with the possibilities to gain a deeper understanding of their own limits of reasoning and self-understanding. For example, had Peter grown up in a society in which every citizen is routinely educated thoroughly about psychological biases, he might have been able to reflect more critically on his own reasoning. However, since the society Peter lives in has failed to provide him with enough opportunities to gain skills of critical self-reflection, society is partly co-responsible for his distorted opinions about climate change. More specifically, the political decision makers that shape the educational system in Peter’s society are co-responsible for the fact that citizens of their society lack the opportunity to gain better self-insight and, thereby, have false or problematic opinions.
As a final note, I want to stress that by emphasizing the influence of social contexts on the human capability of practical reasoning, I am not taking sides with sceptics of moral responsibility. Some philosophers have interpreted psychological research as showing that unconscious processes, triggered automatically by external stimuli, control human behavior, and they have advanced highly skeptical claims about agency, personhood, and moral responsibility (Caruso, 2012, 2015; Doris, 2009, 2015; Taylor, 2010). In my opinion, such skeptical views could be defended only by explaining away the fact that research on the effects these philosophers cite, such as priming or implicit biases, is characterized by fundamental inconsistencies, by several methodological and interpretive problems, and, above all, by effects far too small to explain human conduct exclusively through unconscious processes (Carlsson & Agerström, 2016; Greenwald et al., 2015; Greenwald et al., 2009; Jones et al., 2017; Kidder et al., 2018; Lodder et al., 2019; Oswald et al., 2013; Weingarten et al., 2016). To my knowledge, no such explanation is currently available. My own account is much less skeptical, since it does not deny or marginalize the influence of conscious reasoning on human behavior. On the contrary, my emphasis on responsibility for enabling societal conditions implies that humans are capable of improving their self-insight under favorable circumstances. What I do question, however, is that human actions are adequately characterized by a primary focus on the agent’s subjective reasons. Human behavior is extremely complex and our self-understanding is limited. Contextualizing moral responsibility is therefore not a skeptical account, but rather an attempt to develop a more differentiated and comprehensive picture of moral responsibility.
References
Allen, M. S., Robson, D. A., Martin, L. J., & Laborde, S. (2020). Systematic review and meta-analysis of self-serving attribution biases in the competitive context of organized sport. Personality & Social Psychology Bulletin, 46(7), 1027–1043. https://doi.org/10.1177/0146167219893995
Andsager, J. L., & White, H. A. (2013). Self versus others: Media, messages, and the third-person effect. Routledge.
Bastian, B., & Loughnan, S. (2017). Resolving the meat-paradox: A motivational account of morally troublesome behavior and its maintenance. Personality and Social Psychology Review, 21(3), 278–299. https://doi.org/10.1177/1088868316647562
Bender, J., Rothmund, T., Nauroth, P., & Gollwitzer, M. (2016). How moral threat shapes laypersons’ engagement with science. Personality and Social Psychology Bulletin, 42(12), 1723–1735. https://doi.org/10.1177/0146167216671518
Blass, T. (1999). The Milgram paradigm after 35 years: Some things we now know about obedience to authority. Journal of Applied Social Psychology, 29(5), 955–978. https://doi.org/10.1111/j.1559-1816.1999.tb00134.x
Blass, T. (2012). A cross-cultural comparison of studies of obedience using the Milgram paradigm: A review. Social and Personality Psychology Compass, 6(2), 196–205. https://doi.org/10.1111/j.1751-9004.2011.00417.x
Brink, D. O. (2013). Situationism, responsibility and fair opportunity. Social Philosophy and Policy, 30(1-2), 121–149. https://doi.org/10.1017/S026505251300006X
Brown, J. D. (2012). Understanding the better than average effect: Motives (still) matter. Personality and Social Psychology Bulletin, 38(2), 209–219. https://doi.org/10.1177/0146167211432763
Browne, M., Thomson, P., Rockloff, M. J., & Pennycook, G. (2015). Going against the herd: Psychological and cultural factors underlying the ‘vaccination confidence gap’. PLoS One, 10(9), 1–14. https://doi.org/10.1371/journal.pone.0132562
Burke, B., Martens, A., & Faucher, E. (2010). Two decades of terror management theory: A meta-analysis of mortality salience research. Personality and Social Psychology Review, 14(2), 155–195. https://doi.org/10.1177/1088868309352321
Burke, B., Kosloff, S., & Landau, M. J. (2013). Death goes to the polls: A meta-analysis of mortality salience effects on political attitudes. Political Psychology, 34(2), 183–200. https://doi.org/10.1111/pops.12005
Carlsson, R., & Agerström, J. (2016). A closer look at the discrimination outcomes in the IAT literature. Scandinavian Journal of Psychology, 57(4), 278–287. https://doi.org/10.1111/sjop.12288
Caruso, G. (2012). Free will and consciousness: A determinist account of the illusion of free will. Lexington Books.
Caruso, G. (2015). If consciousness is necessary for moral responsibility, then people are less responsible than we think. Journal of Consciousness Studies, 22(7-8), 49–60.
Chapman, D. A., & Lickel, B. (2016). Climate change and disasters: How framing affects justifications for giving or withholding aid to disaster victims. Social Psychological and Personality Science, 7(1), 13–20. https://doi.org/10.1177/1948550615590448
Clarke, R. (2003a). Libertarian accounts of free will. Oxford University Press.
Clarke, R. (2003b). Toward a credible agent-causal account of free will. In G. Watson (Ed.), Free will (2nd ed., pp. 285–298). Oxford University Press.
Cooper, J. (2007). Cognitive dissonance: Fifty years of a classic theory. SAGE Publications.
Davison, W. P. (1983). The third-person effect in communication. Public Opinion Quarterly, 47, 1–15. https://doi.org/10.1086/268763
Ditto, P. H., Liu, B. S., Clark, C. J., Wojcik, S. P., Chen, E. E., Grady, R. H., et al. (2019). At least bias is bipartisan: A meta-analytic comparison of partisan bias in liberals and conservatives. Perspectives on Psychological Science, 14(2), 273–291. https://doi.org/10.1177/1745691617746796
Doris, J. (2009). Skepticism about persons. Philosophical Issues, 19(1), 57–91.
Doris, J. (2015). Talking to our selves: Reflection, ignorance, and agency. Oxford University Press.
Dunning, D., Heath, C., & Suls, J. M. (2004). Flawed self-assessment: Implications for health, education, and the workplace. Psychological Science in the Public Interest, 5(3), 69–106. https://doi.org/10.1111/j.1529-1006.2004.00018.x
Eisend, M. (2017). The third-person effect in advertising: A meta-analysis. Journal of Advertising, 46(3), 377–394. https://doi.org/10.1080/00913367.2017.1292481
Ekstrom, L. W. (2019). Toward a plausible event-causal indeterminist account of free will. Synthese, 196(1), 127–144. https://doi.org/10.1007/s11229-016-1143-8
Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.
Fischer, J., & Ravizza, M. (1998). Responsibility and control: A theory of moral responsibility. Cambridge University Press.
Frankfurt, H. G. (1988). The importance of what we care about: Philosophical essays. Cambridge University Press.
Friedrich, J. (1996). On seeing oneself as less self-serving than others: The ultimate self-serving bias? Teaching of Psychology, 23(2), 107–109. https://doi.org/10.1207/s15328023top2302_9
Furnham, A. (2003). Belief in a just world: Research progress over the past decade. Personality and Individual Differences, 34(5), 795–817. https://doi.org/10.1016/S0191-8869(02)00072-7
Gaertner, L., Sedikides, C., Vevea, J. L., & Iuzzini, J. (2002). The “I,” the “we,” and the “when”: A meta-analysis of motivational primacy in self-definition. Journal of Personality and Social Psychology, 83(3), 574–591. https://doi.org/10.1037/0022-3514.83.3.574
Greenberg, J., & Kosloff, S. (2008). Terror management theory: Implications for understanding prejudice, stereotyping, intergroup conflict, and political attitudes. Social and Personality Psychology Compass, 2(5), 1881–1894. https://doi.org/10.1111/j.1751-9004.2008.00144.x
Greenwald, A., Poehlman, T., Uhlmann, E., & Banaji, M. (2009). Understanding and using the implicit association test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97(1), 17–41. https://doi.org/10.1037/a0015575
Greenwald, A., Banaji, M., & Nosek, B. (2015). Statistically small effects of the implicit association test can have societally large effects. Journal of Personality and Social Psychology, 108(4), 553–561. https://doi.org/10.1037/pspa0000016
Greitemeyer, T. (2014). I am right, you are wrong: How biased assimilation increases the perceived gap between believers and skeptics of violent video game effects. PLoS One, 9(4), 1–7. https://doi.org/10.1371/journal.pone.0093440
Griffith, M. (2013). Free will: The basics. Routledge.
Grzyb, T., & Doliński, D. (2017). Beliefs about obedience levels in studies conducted within the Milgram paradigm: Better than average effect and comparisons of typical behaviors by residents of various nations. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.01632
Hafer, C. L., & Bègue, L. (2005). Experimental research on just-world theory: Problems, developments, and future challenges. Psychological Bulletin, 131(1), 128–167. https://doi.org/10.1037/0033-2909.131.1.128
Hart, P. S., & Nisbet, E. C. (2012). Boomerang effects in science communication: How motivated reasoning and identity cues amplify opinion polarization about climate mitigation policies. Communication Research, 39(6), 701–723. https://doi.org/10.1177/0093650211416646
Hart, W., Albarracín, D., Eagly, A. H., Brechan, I., Lindberg, M. J., & Merrill, L. (2009). Feeling validated versus being correct: A meta-analysis of selective exposure to information. Psychological Bulletin, 135(4), 555–588. https://doi.org/10.1037/a0015701
Hennes, E. P., Ruisch, B. C., Feygina, I., Monteiro, C. A., & Jost, J. T. (2016). Motivated recall in the service of the economic system: The case of anthropogenic climate change. Journal of Experimental Psychology: General, 145(6), 755–771. https://doi.org/10.1037/xge0000148
Hornsey, M. J., & Fielding, K. S. (2017). Attitude roots and Jiu Jitsu persuasion: Understanding and overcoming the motivated rejection of science. American Psychologist, 72(5), 459–473. https://doi.org/10.1037/a0040437
Hornsey, M. J., Harris, E. A., Bain, P. G., & Fielding, K. S. (2016). Meta-analyses of the determinants and outcomes of belief in climate change. Nature Climate Change, 6, 622–626. https://doi.org/10.1038/NCLIMATE2943
Hornsey, M. J., Harris, E. A., & Fielding, K. S. (2018). The psychological roots of anti-vaccination attitudes: A 24-nation investigation. Health Psychology, 37(4), 307–315. https://doi.org/10.1037/hea0000586
Jang, S. M., & Kim, J. K. (2018). Third person effects of fake news: Fake news regulation and media literacy interventions. Computers in Human Behavior, 80, 295–302. https://doi.org/10.1016/j.chb.2017.11.034
Jones, K. P., Sabat, I. E., King, E. B., Ahmad, A., McCausland, T. C., & Chen, T. (2017). Isms and schisms: A meta-analysis of the prejudice-discrimination relationship across racism, sexism, and ageism. Journal of Organizational Behavior, 38(7), 1076–1110. https://doi.org/10.1002/job.2187
Jost, J. T. (2017). Ideological asymmetries and the essence of political psychology. Political Psychology, 38(2), 167–208. https://doi.org/10.1111/pops.12407
Jost, J. T. (2018). A quarter century of system justification theory: Questions, answers, criticisms, and societal applications. British Journal of Social Psychology, 58(2), 263–314. https://doi.org/10.1111/bjso.12297
Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. (2003). Political conservatism as motivated social cognition. Psychological Bulletin, 129(3), 339–375. https://doi.org/10.1037/0033-2909.129.3.339
Jost, J. T., Stern, C., Rule, N. O., & Sterling, J. (2017). The politics of fear: Is there an ideological asymmetry in existential motivation? Social Cognition, 35(4), 324–353. https://doi.org/10.1521/soco.2017.35.4.324
Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision making, 8(4), 407–424.
Kahan, D. M., Jenkins-Smith, H., & Braman, D. (2011). Cultural cognition of scientific consensus. Journal of Risk Research, 14(2), 147–174. https://doi.org/10.1080/13669877.2010.511246
Kane, R. (1998). The significance of free will. Oxford University Press.
Kane, R. (2005). A contemporary introduction to free will. Oxford University Press.
Kane, R. (2007). Libertarianism. In J. M. Fischer, R. Kane, D. Pereboom, & M. Vargas (Eds.), Four views on free will (pp. 5–43). Blackwell Publishing.
Kane, R. (2011). Introduction: The contours of contemporary free will debates (part 2). In R. Kane (Ed.), The Oxford handbook of free will (2nd ed., pp. 3–35). Oxford University Press.
Keil, G. (2009). Willensfreiheit und Determinismus [Free will and determinism]. Reclam.
Keil, G. (2013). Willensfreiheit [freedom of will] (2nd ed.). DeGruyter.
Kemmelmeier, M., & Malanchuk, O. (2016). Greater self-enhancement in Western than eastern Ukraine, but failure to replicate the Muhammad Ali effect. International Journal of Psychology, 51(1), 78–82. https://doi.org/10.1002/ijop.12151
Kidder, C. K., White, K. R., Hinojos, M. R., Sandoval, M., & Crites, S. L. (2018). Sequential stereotype priming: A meta-analysis. Personality and Social Psychology Review, 22(3), 199–227. https://doi.org/10.1177/1088868317723532
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480
Lerner, M. J. (1980). The belief in a just world: A fundamental delusion. Springer Science+Business Media.
Levy, N. (2011). Hard luck: How luck undermines free will and moral responsibility. Oxford University Press.
Lodder, P., Ong, H. H., Grasman, R. P. P. P., & Wicherts, J. M. (2019). A comprehensive meta-analysis of money priming. Journal of Experimental Psychology: General, 148(4), 688–712. https://doi.org/10.1037/xge0000570
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. https://doi.org/10.1037/0022-3514.37.11.2098
Lüttke, H. B. (2004). Experimente unter dem Milgram-Paradigma [Experiments under the Milgram paradigm]. Gruppendynamik und Organisationsberatung, 35(4), 431–464. https://doi.org/10.1007/s11612-004-0040-7
Martens, A., Burke, B. L., Schimel, J., & Faucher, E. H. (2011). Same but different: Meta-analytically examining the uniqueness of mortality salience effects. European Journal of Social Psychology, 41(1), 6–10. https://doi.org/10.1002/ejsp.767
Martin, Z. T. (2014). Are we free? Psychology’s challenges to free will. Doctoral dissertation, Florida State University. Retrieved from http://diginole.lib.fsu.edu/islandora/object/fsu%3A254465 on 4 May 2016.
Mezulis, A. H., Abramson, L. Y., Hyde, J. S., & Hankin, B. L. (2004). Is there a universal positivity bias in attributions? A meta-analytic review of individual, developmental, and cultural differences in the self-serving attributional bias. Psychological Bulletin, 130(5), 711–747. https://doi.org/10.1037/0033-2909.130.5.711
Milgram, S. (1974). Obedience to authority: An experimental view. Tavistock.
Nauroth, P., Gollwitzer, M., Bender, J., & Rothmund, T. (2014). Gamers against science: The case of the violent video games debate. European Journal of Social Psychology, 44(2), 104–116. https://doi.org/10.1002/ejsp.1998
Nauroth, P., Gollwitzer, M., Bender, J., & Rothmund, T. (2015). Social identity threat motivates science-discrediting online comments. PLoS One, 10(2), 1–26. https://doi.org/10.1371/journal.pone.0117476
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175
Niesta, D., Fritsche, I., & Jonas, E. (2008). Mortality salience and its effects on peace processes. Social Psychology, 39(1), 48–58. https://doi.org/10.1027/1864-9335.39.1.48
Nozick, R. (1995). Choice and indeterminism. In T. O’Connor (Ed.), Agents, causes, events: Essays on indeterminism and free will (pp. 101–114). Oxford University Press.
Nyhan, B., Reifler, J., Richey, J., & Freed, G. L. (2014). Effective messages in vaccine promotion: A randomized trial. Pediatrics, 133(4), 1–8. https://doi.org/10.1542/peds.2013-2365
O’Connor, T. (2003). Agent Causation. In G. Watson (Ed.), Free will (2nd ed., pp. 257–284). Oxford University Press.
Oswald, F., Mitchell, G., Blanton, H., Jaccard, J., & Tetlock, P. (2013). Predicting ethnic and racial discrimination: A meta-analysis of IAT criterion studies. Journal of Personality and Social Psychology, 105(2), 171–192. https://doi.org/10.1037/a0032734
Paul, N., Salwen, M. B., & Dupagne, M. (2000). The third-person effect: A meta-analysis of the perceptual hypothesis. Mass Communication and Society, 3(1), 57–85.
Pronin, E., Lin, D. Y., & Ross, L. (2002). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28(3), 369–381. https://doi.org/10.1177/0146167202286008
Pronin, E., Gilovich, T., & Ross, L. (2004). Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review, 111(3), 781–799. https://doi.org/10.1037/0033-295X.111.3.781
Pyszczynski, T., Greenberg, J., & Solomon, S. (1999). A dual-process model of defense against conscious and unconscious death-related thoughts: An extension of terror management theory. Psychological Review, 106(4), 835–845. https://doi.org/10.1037/0033-295X.106.4.835
Richter, T., & Maier, J. (2017). Comprehension of multiple documents with conflicting information: A two-step model of validation. Educational Psychologist, 52(3), 148–166. https://doi.org/10.1080/00461520.2017.1322968
Rosenthal, S., Detenber, B. H., & Rojas, H. (2018). Efficacy beliefs in third-person effects. Communication Research, 45(4), 554–576. https://doi.org/10.1177/0093650215570657
Rothmund, T., Gollwitzer, M., Nauroth, P., & Bender, J. (2017). Motivierte Wissenschaftsrezeption [Motivated reception of science]. Psychologische Rundschau, 68(3), 193–197. https://doi.org/10.1026/0033-3042/a000364
Rutjens, B. T., Sutton, R. M., & van der Lee, R. (2018). Not all skepticism is equal: Exploring the ideological antecedents of science acceptance and rejection. Personality and Social Psychology Bulletin, 44(3), 384–405. https://doi.org/10.1177/0146167217741314
Schlosser, M. (2013). Conscious will, reason-responsiveness, and moral responsibility. The Journal of Ethics, 17(3), 205–232. https://doi.org/10.1007/s10892-013-9143-0
Schwitzgebel, E., & Ellis, J. (2017). Rationalization in moral and philosophical thought. In J.-F. Bonnefon & B. Trémolière (Eds.), Moral inferences (pp. 170–190). Routledge.
Shao, W., & Goidel, K. (2016). Seeing is believing? An examination of perceptions of local weather conditions and climate change among residents in the U.S. Gulf Coast. Risk Analysis, 36(11), 2136–2157. https://doi.org/10.1111/risa.12571
Shen, L., Palmer, J., Kollar, L. M. M., & Comer, S. (2015). A social comparison explanation for the third-person perception. Communication Research, 42(2), 260–280. https://doi.org/10.1177/0093650212467644
Solomon, S., Greenberg, J., & Pyszczynski, T. (2000). Pride and prejudice: Fear of death and social behavior. Current Directions in Psychological Science, 9(6), 200–204. https://doi.org/10.1111/1467-8721.00094
Solomon, S., Greenberg, J., & Pyszczynski, T. (2015). The worm at the Core: On the role of death in life. Random House.
Stark, E., & Sachau, D. (2016). Lake Wobegon’s guns: Overestimating our gun-related competences. Journal of Social and Political Psychology, 4(1), 8–23. https://doi.org/10.5964/jspp.v4i1.464
Sun, Y., Pan, Z., & Shen, L. (2008). Understanding the third-person perception: Evidence from a meta-analysis. Journal of Communication, 58(2), 280–300. https://doi.org/10.1111/j.1460-2466.2008.00385.x
Svenson, O. (1981). Are we all less risky and more skillful than our fellow drivers? Acta Psychologica, 47(2), 143–148. https://doi.org/10.1016/0001-6918(81)90005-6
Talbert, M. (2016). Moral responsibility. Polity Press.
Taylor, A. (2010). Moral responsibility and subverting causes. Doctoral dissertation, University of Reading. Retrieved from http://philpapers.org/rec/TAYMRA.
Tiberius, V. (2015). Moral psychology: A contemporary introduction. Routledge.
Timm, S. C. (2016). Moral intuition or moral disengagement? Cognitive science weighs in on the animal ethics debate. Neuroethics, 9(3), 225–234. https://doi.org/10.1007/s12152-016-9271-x
Vail, K. E., Courtney, E., & Arndt, J. (2019). The influence of existential threat and tolerance salience on anti-Islamic attitudes in American politics. Political Psychology. Advance online publication. https://doi.org/10.1111/pops.12579
Vargas, M. (2013). Building better beings: A theory of moral responsibility. Oxford University Press.
Vierkant, T., Kiverstein, J., & Clark, A. (2013). Decomposing the will: Meeting the zombie challenge. In A. Clark, J. Kiverstein, & T. Vierkant (Eds.), Decomposing the will (pp. 1–30). Oxford University Press.
Vihvelin, K. (2017). Dispositional compatibilism. In K. Timpe, M. Griffith, & N. Levy (Eds.), The Routledge companion to free will (pp. 52–61). Routledge.
Wallace, R. J. (1994). Responsibility and the moral sentiments. Harvard University Press.
Wei, R., Chia, S. C., & Lo, V. (2011). Third-person effect and hostile media perception influences on voter attitudes toward polls in the 2008 U.S. presidential election. International Journal of Public Opinion Research, 23(2), 169–190. https://doi.org/10.1093/ijpor/edq044
Weingarten, E., Chen, Q., McAdams, M., Yi, J., Hepler, J., & Albarracín, D. (2016). From primed concepts to action: A meta-analysis of the behavioral effects of incidentally presented words. Psychological Bulletin, 142(5), 472–497. https://doi.org/10.1037/bul0000030
Wiggins, D. (2003). Towards a reasonable libertarianism. In G. Watson (Ed.), Free will (2nd ed., pp. 94–121). Oxford University Press.
Wolf, S. (1990). Freedom within reason. Oxford University Press.
Xie, G., & Johnson, J. M. Q. (2015). Examining the third-person effect of baseline omission in numerical comparison: The role of consumer persuasion knowledge. Psychology & Marketing, 32(4), 438–449. https://doi.org/10.1002/mar.20790
Zell, E., & Krizan, Z. (2014). Do people have insight into their abilities? A metasynthesis. Perspectives on Psychological Science, 9(2), 111–125. https://doi.org/10.1177/1745691613518075
Zell, E., Strickhouser, J. E., Sedikides, C., & Alicke, M. D. (2020). The better-than-average effect in comparative self-evaluation: A comprehensive review and meta-analysis. Psychological Bulletin, 146(2), 118–149. https://doi.org/10.1037/bul0000218
Acknowledgments
I would like to thank Manuel Klein for very helpful comments on an earlier draft of this paper.
Availability of Data and Material
Not applicable.
Code Availability
Not applicable.
Funding
Open Access funding enabled and organized by Projekt DEAL. Parts of the research were possible due to a scholarship from the Studienstiftung des deutschen Volkes.
Contributions
Not applicable.
Ethics declarations
Conflicts of interest/Competing interests
I have no conflicts of interest to disclose.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Franz, D.J. Moral Responsibility for Self-Deluding Beings. Philosophia 50, 1791–1807 (2022). https://doi.org/10.1007/s11406-022-00469-0