Many draw a distinction between epistemic and non-epistemic norms. On this approach, a belief can be epistemically good, or required, or justified, even if, in some other sense, it is bad, or forbidden, or unjustified. To see why, consider two cases:

Bribe for Belief A demon will donate $1 million to a good charity only if you form the belief that the number of stars in the universe is even.

Bribe for Withholding A demon will donate $1 million to a good charity only if you withhold belief regarding the proposition that 2 + 2 = 4.

Suppose that you somehow earn the demon’s bribe for withholding; you withhold belief as to whether 2 + 2 = 4. It is morally good that you do so.Footnote 1 Nevertheless, by withholding, you violate an important norm. Intuitively, that norm is more deeply concerned with truth and knowledge than with the other good-making features of doxastic states—including, for instance, the consequences of having those states. Call norms like this epistemic norms.

Cases like the ones above provide evidence that there is a distinction to be drawn between epistemic norms and non-epistemic norms. But how, precisely, should we draw that distinction? One initially tempting approach is straightforward: epistemic norms are sensitive only to considerations that have to do with the truth or likelihood of belief, and are therefore never sensitive to practical or moral considerations.

Despite the initial appeal of this simple approach, moral considerations have been creeping back into epistemology. Recent years have seen several defenses of the view that there is moral encroachment in epistemology: the view, that is, that whether a person has knowledge of p sometimes depends on moral considerations, including even moral considerations that do not bear on the truth or likelihood of p. What’s more, these defenses seek to retain the distinction between epistemic and non-epistemic reasons. They hold, for instance, that Bribe for Belief does not make it epistemically rational for you to believe that the number of stars in the universe is even, and that Bribe for Withholding does not make it epistemically rational for you to withhold belief about whether 2 + 2 = 4.

But, if moral bribes make no difference to epistemic norms, which moral considerations do? Defenders of moral encroachment call attention to cases that suggest a subtler connection between morality and knowledge. Consider, for instance:

Parked Car Low Stakes Ava parked her car four hours ago, and she cannot currently see it. Ava’s friend Emil points out that, if her car is parked illegally, she might get a written warning. Ava has the opportunity to check on her car and, if need be, to move it. Ava thinks back, and she seems to remember (although not too vividly) that she parked it legally. She forms the belief that her car is currently parked legally, and she remains sitting in her easy chair.

Parked Car High Stakes César parked his car four hours ago, and he cannot currently see it. César’s friend Maryam informs him that there is a maniacal traffic officer on the loose, and if the officer sees César’s car parked illegally, he will fly into a homicidal rage and kill five innocents. César has the opportunity to check on his car and, if need be, to move it. César thinks back, and he seems to remember (although not too vividly) that he parked it legally. He forms the belief that his car is currently parked legally, and he remains sitting in his easy chair.

Moral encroachment makes room for the possibility that, although Ava and César have the same sort of evidence, and their beliefs are based on that evidence in the same way, there is a difference in the epistemic status of their beliefs. Perhaps, although Ava knows that her car is parked legally, César does not. If so, then knowledge that p is sensitive to some moral considerations that do not bear on the truth of p. After all, the only important difference between the two cases above involves the moral risks at play in the believer’s environment. And the relevant moral risks do not make a difference to the truth or likelihood that Ava’s or César’s car is parked legally.

The above cases closely resemble much-discussed cases from the literature on pragmatic encroachment. Indeed, one prominent approach to moral encroachment defends it as an extension, or an underappreciated implication, of traditional pragmatic-encroachment views of the sort found in Stanley (2005), Hawthorne and Stanley (2008), and Fantl and McGrath (2009).Footnote 2 But another approach to moral encroachment radically departs from the pragmatic-encroachment literature. To see how, consider the following cases:

Birdwatching Stereotype Fatima’s friend tells her that a canary is in the next room. Fatima has strong, but not flawless, inductive evidence supporting the prediction that any given canary in her country will be yellow. She forms the belief that the canary in the next room is yellow.Footnote 3

Racial Stereotype Aidan is a waiter at a restaurant. As he leaves work for the night, he crosses paths with a Black family entering the restaurant. He has strong, but not flawless, inductive evidence supporting the prediction that any given set of Black diners at his restaurant will give their waiters tips lower than 20%. On the basis of the family’s race, he forms the belief that they will leave one of his colleagues a tip lower than 20%.Footnote 4

Fatima and Aidan base their beliefs on similar bodies of inductive evidence. But there seems to be an important moral difference between the two cases: while Aidan’s belief will strike many as an instance of morally problematic racist reasoning, Fatima’s seems entirely morally unproblematic. Several philosophers have recently argued that the moral problems with Aidan’s reasoning can explain why his belief is also epistemically problematic.

The four cases we’ve just seen raise a key question for defenders of moral encroachment: which moral considerations make a difference for epistemic rationality? The Parked Car cases raise moral questions about action; Ava’s action is morally acceptable, but César’s is not. Racial Stereotype, on the other hand, does not obviously raise any questions about action. To the extent that we think that there is a moral problem with Aidan, it is not with his action, but with his character, his belief-forming practices, or his belief itself.

Some defenders of moral encroachment (including Renée Bolinger, Sarah Moss, and myself) claim that epistemic norms are sensitive to moral features of actions and options. I’ll call this sort of sensitivity moderate moral encroachment. Others (including Rima Basu, Michael Pace, and Mark Schroeder) claim that epistemic norms are sensitive to moral features of beliefs themselves. I’ll call this sort of sensitivity radical moral encroachment.

The goal of this paper is to argue against radical moral encroachment while defending moderate moral encroachment. In Sect. 1, I raise a challenge for all defenders of moral encroachment: they must explain why the moral considerations they cite are not reasons of the wrong kind within epistemology. In Sect. 2, I show that defenders of moderate moral encroachment are well-positioned to meet this challenge. In Sect. 3, I show that defenders of radical moral encroachment are not. In Sect. 4, I explain how we can approach cases like Racial Stereotype without taking on the unattractive commitments of radical moral encroachment.

1 Reasons of the wrong kind

This section introduces the distinction between reasons of the right kind (RKRs) and reasons of the wrong kind (WKRs). I’ll argue that we can use this distinction to make headway in answering the core question of this paper: which moral considerations, if any, make a difference to epistemic norms?

What does it mean to say that a reason is “of the right kind” or “of the wrong kind”? We first grasp this distinction through examples—usually, examples involving incentives for having a mental state. The fact that there is a poisonous snake next to me is an RKR to fear the snake. The fact that someone will pay me if I fear a teddy bear, by contrast, is a WKR to fear the teddy bear. The fact that a flight would bring me to an exciting destination is an RKR to desire to buy a plane ticket. The fact that Donna will punch someone in the face unless I desire to buy a plane ticket, by contrast, is a WKR to desire to buy a plane ticket.

Many have noted that there is a unified phenomenon here—a single distinction that applies to a host of mental states (including, for instance, fear and desire). And it’s striking that the cases with which I began this paper, Bribe for Belief and Bribe for Withholding, seem to be paradigmatic instances of the phenomenon: more specifically, they seem to involve paradigmatic WKRs. There are good grounds for thinking, then, that the difference between bribes for belief and paradigmatically epistemic reasons for belief is one instance of a general pattern: the difference between RKRs and WKRs.Footnote 5

By focusing on RKRs and WKRs, we can reframe the debate about moral encroachment in epistemology.Footnote 6 The defender of moral encroachment claims that certain moral features bear not only on the desirability but also on the epistemic rationality of belief. She must explain why the moral features she cites, unlike moral bribes, are reasons of the right kind.

How can a theorist justify claims of this sort? How, in other words, can we determine whether a consideration is a RKR or a WKR? Broadly speaking, there are two methods. The first is the method of analogy. In order to determine whether some consideration is a WKR for belief, we can ask whether a consideration of that sort would be a WKR for a different mental state—including, for instance, emotion, desire, or intention. Of course, we should not erase important differences between types of mental states. Nevertheless, I’ll show in Sect. 3 that certain analogies provide powerful evidence about the scope of epistemic rationality.

The second method for answering questions about WKRs and RKRs involves appealing to a theory of the RKR/WKR distinction. We can gain evidence that a moral consideration is a WKR by showing that a promising theory classifies it as a WKR. Now, there are many existing theories of the RKR/WKR distinction, and it is not possible to discuss all of them in a paper of this size. So, in what follows, I will not rely on any particular theory; instead, I’ll appeal to the two most promising general approaches to the RKR/WKR distinction.Footnote 7 My arguments will show that, on either of these general approaches, we should reject radical moral encroachment.

The first promising approach to the RKR/WKR distinction is a constitutivist one. On a constitutivist approach, we can explain the difference between RKRs and WKRs for a given mental state by appealing to facts about what it is to be in that mental state. Take an example: fear seems connected, by its very nature, to the question of whether something is threatening or dangerous. And RKRs for fear seem, in a systematic way, to be considerations regarding danger. WKRs for fear, like bribes, are not connected in the same way to considerations regarding danger. Constitutivist approaches to the RKR/WKR distinction can be found in D’Arms and Jacobson (2000), Schroeder (2010), and Sharadin (2016).

The second promising approach to the RKR/WKR distinction emphasizes a putative asymmetry in efficacy. Generally speaking, it seems easier to form a mental state (or, perhaps, to directly form it) on the basis of an RKR than on the basis of a WKR. For example, it is easier to fear a snake on the grounds that it is poisonous than it is to fear a teddy bear on the grounds that one has been bribed to do so. Perhaps this asymmetry in efficacy points the way toward the correct general explanation of the RKR/WKR distinction. Proponents of this efficacy-based approach include Persson (2007), Raz (2009), and Rowland (2015).Footnote 8

This second approach is often paired with a commitment to WKR skepticism: the view that there are, strictly speaking, no reasons of the wrong kind at all.Footnote 9 On this view, apparent wrong-kind reasons against a mental state are, at most, reasons for wanting to be in the mental state, or for bringing the mental state about. In what follows, I’ll refer to certain considerations as ‘reasons of the wrong kind,’ but WKR skeptics should feel free to interpret these as references to, e.g., reasons for bringing a mental state about.

I do not aim, in this paper, to settle the question of how we should theorize the RKR/WKR distinction. I aim, instead, to reach conclusions that are compatible with either of the most plausible approaches to that distinction. So, in what follows, I’ll treat facts about what it is to believe (and to withhold belief) as potential evidence about the shape of the RKR/WKR distinction, and I’ll also treat facts about efficacy as evidence. Section 2 shows that both of these approaches are nicely compatible with moderate moral encroachment. Section 3, however, shows that both approaches raise serious problems for radical moral encroachment.

2 Moderate moral encroachment and WKRs

Defenders of moral encroachment hold that some moral considerations, like bribes for belief, are WKRs within epistemology, but that some other moral considerations are RKRs within epistemology. But should we believe that any moral reasons really are RKRs within epistemology? And if so, which ones? In this section, I’ll show that defenders of moderate moral encroachment are well-placed to answer these questions successfully.

Recall the contrast between moderate and radical moral encroachment: defenders of moderate moral encroachment hold that norms of epistemic rationality are sensitive to facts about the moral status of one’s actions and options. Defenders of radical moral encroachment go farther: they argue that norms of epistemic rationality are sensitive to facts about the moral status of one’s beliefs themselves. Some defenders of radical moral encroachment also defend moderate moral encroachment.Footnote 10 But, for now, let’s consider moderate encroachment alone.

Defenders of moderate moral encroachment are interested in choice scenarios like the ones illustrated by Parked Car Low Stakes and Parked Car High Stakes. They hold that, while being offered a bribe to believe (or withhold) does not make a difference to epistemic rationality, facing certain choice scenarios (like the one César faces in Parked Car High Stakes) can. To make this claim plausible, they must argue that a case like César’s involves an RKR for withholding belief (or, put differently, for adopting higher evidential standards). I’ll now argue that, on either of the most plausible approaches to the RKR/WKR distinction, the defender of moderate moral encroachment is in a good position to make this argument.

On a constitutivist approach, RKRs for a mental state bear some important connection to facts about what it is to be in that mental state. Certain mental states, on this view, simply “bring with them” an evaluative standard or presentation.Footnote 11 Fear, for example, is constitutively concerned with danger, so RKRs for fear are considerations that have to do with danger. WKRs, like bribes to be afraid or amused, are notably disconnected from the core evaluative concerns of the mental states they favor.

At first, the constitutivist approach may seem to present a problem for moral encroachment. It’s tempting to think that belief is constitutively concerned solely with truth.Footnote 12 This suggests a simple picture, on which evidence of truth or falsehood, and nothing else, is an RKR in epistemology. If this simple picture is right, it’s bad news for moderate moral encroachment: the fact that I face a certain choice is not (generally) evidence for the truth or falsehood of my beliefs.

A point familiar from the pragmatic encroachment literature defuses this worry. Though it may be initially plausible that belief is constitutively concerned with truth alone, there’s no initial plausibility to the notion that the mental state of withholding belief is constitutively connected to truth in such a straightforward way.Footnote 13 Just what would it mean for a state of withheld belief to meet its constitutive standard for correctness? At a first pass, withheld belief as to p seems to “bring with it” a concern for whether one has enough epistemic support for p.Footnote 14 But this first pass does not seem to rule out practical or moral considerations; in fact, some have suggested that practical and moral considerations are the only ones that could possibly give an informative answer to the question of how much epistemic support is enough.Footnote 15 This line of thought shows that there is room in epistemology for constitutive standards that are sensitive to practical and moral considerations. I’ll now sketch a positive story about the constitutive concerns of belief and withheld belief—one that vindicates the presence of some, but not all, moral considerations in epistemology.

Many have observed that coarse-grained doxastic states (like belief, disbelief, and withheld belief) seem fit to play a role that finer-grained doxastic states (like credences or “degrees of belief”) cannot.Footnote 16 When I believe that p, I settle the matter as to whether p—at least provisionally, I commit myself to treating it as true. When I withhold belief that p, by contrast, I actively leave my view of p unsettled. By adopting coarse-grained doxastic states, in other words, I adopt a policy about how to treat a proposition in future reasoning.

This can teach us something about the constitutive standard for correctness for coarse-grained doxastic states. Take, for instance, withheld belief. On this story, we evaluate withheld belief qua withheld belief, at least in part, by assessing whether it is apt to play its distinctive role in future episodes of theoretical or practical reasoning. In other words, the question of whether it’s correct to withhold belief is intimately connected to the question of whether, by doing so, one takes up a mental state that will facilitate the projects of representing and navigating the world.

A story of this sort makes room for the moderate encroacher to explain why it’s correct to withhold belief in Parked Car High Stakes, but incorrect to withhold belief in Bribe for Withholding. In the latter case, withholding belief will have attractive downstream effects, but they have nothing to do with future episodes of practical or theoretical reasoning. In the former, by contrast, withholding belief is correct precisely because it’s part of a mental scheme that is apt to play a particular role in helping César to reason well—specifically, it ensures that he will not inappropriately assume that his car is parked legally.

I’ve now sketched, in broad outline, a story on which coarse-grained doxastic states are constitutively concerned with practical and moral matters. The outline could be filled out in a number of ways; the crucial point is that moderate moral encroachment seems entirely compatible with a constitutivist approach to the RKR/WKR distinction. In Sect. 3, we’ll see that the same cannot be said for radical moral encroachment.

Let’s move on to the second promising general approach to the RKR/WKR distinction. This general approach emphasizes the asymmetry in efficacy between RKRs and WKRs; it distinguishes between RKRs and WKRs by noting the difficulty of adopting (or, perhaps, directly adopting) a mental state on the basis of a WKR. If, as moral encroachment suggests, some moral considerations are WKRs and others are RKRs in epistemology, then this approach suggests that we should see a noteworthy gap in the difficulty of responding to those considerations by forming new doxastic states.

Interestingly, we find just such an asymmetry between Parked Car High Stakes and Bribe for Withholding. To see this, imagine yourself in the former case. It would be very natural for you to respond to the news of the maniacal traffic officer by thinking, “Probably, my car is parked legally. But what if it’s not? What if I’m misremembering, and because of my illegal parking, innocent people will be murdered?” This reasoning seems apt to naturally, and directly, facilitate withheld belief.Footnote 17

Contrast this with a modified version of the case. In the modified version, you do not learn about the maniacal traffic officer; instead, you learn that a benefactor will give money to charity if you withhold belief about whether your car is parked legally. In this modified version, it would not be nearly as natural to focus on the possibility that your belief is false. It would be more natural to focus on your belief itself, and on possible ways to change it. You might think, for instance, “Wow, it sure would be good if I stopped believing that my car is parked legally!” This reasoning seems less likely to directly facilitate withholding belief.

In short, being in a situation like Parked Car High Stakes tends to bring one to focus on the possibility that one’s belief is false. Being in a situation like Bribe for Withholding, by contrast, only makes salient the benefits of withholding. It’s very plausible that the former psychological state tends to facilitate withholding belief in a different way—a more natural way, and perhaps a more direct way—than the latter does.Footnote 18 Now, perhaps this difference in salience is not the fundamental explanation of the asymmetry between the cases. But, regardless of the precise nature of that asymmetry, these two cases seem to involve an asymmetry of just the sort that many theorists take to be the core difference separating WKRs from RKRs. If an efficacy-based theory of the RKR/WKR distinction is on the right track, then, the defender of moderate moral encroachment will be in a strong dialectical position. She has evidence that, while a moral bribe for withholding is a WKR, certain choice situations (like César’s) provide RKRs in favor of withholding.

As the next section will show, the same cannot be said for defenders of radical moral encroachment.

3 Radical moral encroachment

3.1 Against radical moral encroachment

In this section, I’ll turn from moderate moral encroachment to radical moral encroachment. There is radical moral encroachment in epistemology just in case norms of epistemic rationality are sensitive to moral features of belief itself. My discussion will focus on a recently popular proposal, one that has been defended by both Rima Basu and Mark Schroeder.Footnote 19 Basu and Schroeder both claim that the moral badness of a belief itself can make a difference to the epistemic rationality of that belief. I’ll argue against this approach, on the grounds that it cannot adequately distinguish between RKRs and WKRs.

Why think that belief itself can be morally bad? Defenders of radical moral encroachment use a variety of examples to make this notion plausible. Some have to do with beliefs that undermine personal relationships; Basu and Schroeder (2019), for instance, describe a person who believes on inconclusive evidence that her spouse has started drinking again. But the examples that are most frequently used to motivate radical moral encroachment involve beliefs based on inferences from statistics about demographic groups. In the Racial Stereotype case from the introduction, Aidan forms such a belief; he judges that the people entering his restaurant will leave a tip below 20%, solely on the basis of their race. Gendler (2011) offers a similar case involving racial profiling, and Schroeder (2018a) offers a similar case involving sexist profiling.

Defenders of radical moral encroachment make two distinctive claims about their cases. First, these cases involve beliefs that are morally bad in a non-derivative way; the beliefs’ moral badness does not depend, for instance, on the beliefs’ downstream consequences, or on the believer’s character.Footnote 20 Second, epistemic norms are sensitive to the non-derivative badness of such beliefs. Armed with these claims, the defender of radical moral encroachment can use the morally problematic nature of a belief to explain its epistemic irrationality.

The first of these two claims is quite controversial, but I’ll grant it for the sake of argument. I’ll argue that, even if some beliefs are non-derivatively morally bad, their moral badness does not make a difference to norms of epistemic rationality.

The easiest way to see this point is to consider an analogy with mental states other than belief. The fact that having a mental state would be non-derivatively morally wrong is, generally, a paradigmatic WKR. Consider two examples. First: some jokes are morally bad jokes, in the sense that there are moral reasons that count against anyone’s being amused by them. Second: it’s very tempting to think that there are often powerful moral reasons against envy. Further, these moral reasons need not arise solely in cases where there’s nothing at all funny about a joke, or when the envied party has nothing worth desiring. In at least some cases, it’s morally bad to be amused or envious even though, in some sense, amusement or envy is clearly appropriate. On the grounds of cases like these, it’s widely believed that the mere fact that amusement would be morally bad is a WKR against amusement, and the mere fact that envy would be morally bad is a WKR against envy.Footnote 21

Why? Recall the cases that inspire the RKR/WKR distinction in the first place: cases like Bribe for Belief. These cases cry out for a distinction between two ways of evaluating a mental state: we can evaluate a mental state for whether it is all-things-considered good to have, but we can also evaluate a mental state for whether it is fitting (or correct, or rational) in a narrower sense. Cases in which moral reasons count against emotions also cry out to be evaluated along two distinct lines. Even if we agree that it would be best, for moral reasons, if no one were amused by a joke, there is a second evaluative question that we have not addressed: is the joke funny?

In short, a mental state’s moral badness is typically a WKR. This provides evidence that the moral badness of a belief is, likewise, a WKR against having that belief. In other words, the moral badness of a belief does not bear on its epistemic rationality. So radical moral encroachment goes too far.Footnote 22

We don’t have to rely on analogy alone to see this point. On either of the most promising approaches to theorizing the RKR/WKR distinction, the moral badness of belief is a strong candidate to be a WKR. Consider, first, the constitutivist approach. As we saw in Sect. 2, there is a promising way to explain why high-stakes choice scenarios are relevant to the constitutive standard of correctness for withheld belief. Withheld belief is, by its nature, a sort of policy for future episodes of practical or theoretical reasoning. It brings with it, then, a concern for the degree of epistemic support necessary to support future reasoning. The defender of radical moral encroachment cannot tell a story of this sort. It’s just not plausible that the core standard of correctness for belief—the one intimately connected to what it is to believe—emphasizes avoidance of morally bad mental states. Even if we can sometimes take up a morally objectionable stance toward others by believing, belief is not by its nature concerned with being a morally acceptable stance toward others, any more than envy is by its nature concerned with being a morally acceptable stance toward others. This provides excellent evidence that, on a constitutivist approach to the RKR/WKR distinction, the moral badness of belief is a WKR.

Move on, now, to the efficacy-based approach to the RKR/WKR distinction. Here, again, the defender of radical moral encroachment is on shaky ground; noting that a belief is morally bad does not seem to facilitate withholding in a direct, straightforward way. This becomes particularly vivid when we compare a situation like Racial Stereotype with a situation like Parked Car High Stakes. As Sect. 2 noted, being placed in the latter sort of situation naturally calls attention to the high-risk possibility that one’s belief is false. It would be highly natural for César to wonder, “but what if my car isn’t parked legally? Then five innocent lives would be in danger!” Reactions of this sort, I’ve argued, naturally facilitate withholding belief. Attending to the possibility that one’s belief is morally wrong, on the other hand, does not seem to do so in the same way—perhaps, in part, because it does not tend to bring to mind the possibility that the belief is false. When I note that my belief is morally bad, I am apt to react in just the same way I would react if faced with a bribe for withholding: by thinking something like, “wow, it sure is important that I get rid of this belief!” The defender of radical moral encroachment, then, cannot lay claim to even a prima facie asymmetry in efficacy between cases of morally bad belief and cases like Bribe for Withholding.Footnote 23 This is evidence that, if an efficacy-based treatment of the RKR/WKR distinction is on the right track, the moral badness of belief is a WKR against it.

Taking stock: the method of analogy suggests that the moral badness of a belief is a WKR. And the evidence regarding what it is to withhold belief, along with the evidence regarding efficacy in withholding, also suggests that the moral badness of a belief is a WKR. This amounts to a powerful case against the notion that a belief’s moral badness makes a difference to epistemic rationality.

3.2 Interlude: why go radical?

The debate over radical moral encroachment is not over; in Sect. 3.3, I’ll consider a way in which radical moral encroachers can avoid the problems I’ve raised so far. But, before we move on to consider that revision, it’s worth pausing to ask what motivates the view in the first place. Why bother sticking with the radical moral encroachment hypothesis?

As we’ve already seen, defenders of radical moral encroachment are interested in cases where a belief seems both well-supported by evidence and also morally bad. They aim to make room for the claim that such beliefs are epistemically irrational. In the relevant set of cases, the thought is, it would be unacceptable for a person’s belief to be both morally bad and also epistemically rational.Footnote 24 Radical moral encroachment, then, is primarily motivated by an interest in precluding the possibility of tension between a doxastic state’s epistemic status and its moral status.

But this is a bad motivation. The defenders of moral encroachment have excellent reason to think that tension between a doxastic state’s epistemic status and its moral status is not merely possible, but also actual. To see this, consider a revised version of Aidan’s case:

Racial Stereotype 2 Aidan is a waiter at a restaurant. As he leaves work for the night, he crosses paths with a Black family entering the restaurant. He has evidence that suggests, to degree 0.8, that any given Black diner at his restaurant will give her waiter a tip lower than 20%. On the basis of the family’s race, he adopts credence 0.8 that they will leave one of his colleagues a tip lower than 20%.

Racial Stereotype 2 is morally worrisome in just the same way that the original Racial Stereotype case is. Aidan’s updated credence constitutes a racist judgment, and a problematic one; if a Black diner became aware of Aidan’s high credence, she could rightly complain, and she could rightly demand an apology. These points about blame and apology are just the considerations that defenders of radical moral encroachment tend to cite as evidence that beliefs can be non-derivatively morally bad. To the extent that we have reason to think that beliefs can be non-derivatively morally bad, then, we also have reason to think that credences alone can be non-derivatively morally bad.Footnote 25

Importantly, however, all parties should agree that Aidan’s updated credence, in Racial Stereotype 2, is epistemically rational. The case simply stipulates that his evidence makes it likely to degree 0.8 that any given Black diner at his restaurant will leave a tip lower than 20%. If he refuses to bring his credences about individual Black diners in line with his evidence, he will be epistemically irrational. The defenders of radical moral encroachment, rightly, tend to grant this point: they suggest that cases like Racial Stereotype make increased confidence (albeit not outright belief) epistemically rational.Footnote 26

If this is right, however, the defenders of radical moral encroachment are committed to acknowledging a tension regarding Racial Stereotype 2: in that case, Aidan’s credence could be both epistemically rational and morally problematic. And, as we’ve seen, there are good reasons for them to take on this commitment. But once we acknowledge that an epistemically rational credence can be morally problematic, we should be much less worried about the prospect that a belief might display just the same sort of tension.Footnote 27

There are also independent reasons for thinking that beliefs can be both morally bad and epistemically rational: the tension between RKRs in favor of a mental state and moral reasons against it is an entirely general one. Sometimes, it’s morally bad to envy someone else’s possession, but the possession is nevertheless enviable. Sometimes, it’s morally bad to have a positive aesthetic reaction to a work of art, but the artwork is nevertheless aesthetically impressive. Mature moral agents have to learn to navigate situations like this: situations in which the moral reasons against an attitude are both powerful and reasons of the wrong kind.

We’ll now move on to consider a way of revising radical moral encroachment to address the WKR problem. I’ll argue that this revision is unsuccessful on its own merits. But we should also worry about whether it is well-motivated. The primary motivation for refining a theory of radical moral encroachment is to avoid tension between the epistemic status and the moral status of a doxastic state. But, since defenders of radical moral encroachment are already committed to accepting that tension regarding credences, this is weak motivation indeed.

3.3 Radical moral encroachment redux

There is a way to develop radical moral encroachment that avoids the problems raised in Sect. 3.1. The development involves two key moves. First, the defender of radical moral encroachment accepts that, when a moral reason against belief has nothing to do with that belief’s truth or falsehood, it is a WKR. Second, she posits a class of moral reasons against belief that are intimately connected to the belief’s truth or falsehood. Within some range of cases, she must argue, believing that p would be morally bad only if p were false.

Schroeder (2018a) defends a radical view of moral encroachment with just this shape. On Schroeder’s view, the fact that a belief would wrong someone is a moral reason against holding it—but only a false belief can wrong someone.

Schroeder reaches his conclusion by appealing to three other commitments:

(1) There is a set of cases, S, in which belief would be irrational, and the only viable explanation for the irrationality of belief appeals to the fact that the belief might morally wrong someone.

(2) The fact that forming a belief that p might morally wrong someone does not, in the cases in S, provide evidence for or against p.

(3) There is a significant but underappreciated class of non-evidential epistemic reasons against belief: reasons having to do with the cost of error.Footnote 28

On the grounds of these commitments, Schroeder infers that the fact that a belief might wrong someone is (at least in the cases in S) closely associated with the costs of error—in other words, the costs of believing falsely. He then suggests a general explanation for the required connection between beliefs that wrong and false beliefs: only a false belief can wrong someone.

The claim that a belief’s moral badness depends on its falsehood is counterintuitive. Insofar as we are tempted by the thesis that beliefs can wrong others, we generally do not think that the question of whether they do so hinges on their truth or falsehood. We can think that Aidan wrongs the family entering his restaurant by forming his racist belief about their tipping practices, for instance, without our judgment being sensitive in any way to the question of whether his belief is false.

Since Schroeder motivates his counterintuitive conclusion through several controversial assumptions about the ethics of belief, it’s tempting to apply a Moorean shift here, using the implausibility of Schroeder’s conclusion to reject one of the commitments with which he supports it. Schroeder is sensitive to this, and he therefore attempts to debunk the intuition that his conclusion is false. He does so by drawing a distinction between two ways in which we can morally evaluate a person’s belief: we can ask whether the belief is objectively bad, or on the other hand, whether it is subjectively bad. People whose beliefs are true, Schroeder suggests, have not wronged anyone, and their beliefs are therefore guaranteed not to be morally bad in an objective sense. But this does not mean that every true belief is morally acceptable in a subjective sense. Perhaps, just as it is subjectively morally bad to feed an innocent person a meal that you reasonably think is poisoned, even if (by good fortune) the meal is actually not poisoned, it is subjectively morally bad to form certain beliefs on the basis of racial stereotypes, even if those beliefs (by good fortune) end up being true. By leaning on this distinction, Schroeder makes room for the claim that there is something morally bad about a belief like Aidan’s, even though only false beliefs wrong.

At first, the distinction between subjective and objective moral evaluation might seem to give Schroeder all the argumentative fuel he needs to push back against the Moorean shift. If his conclusion follows from an otherwise attractive picture of moral encroachment, and there’s a viable approach to the ethics of belief on which his conclusion is not so counterintuitive, then perhaps his argument should persuade us to endorse that approach to the ethics of belief.

But there are reasons to worry about the way that Schroeder applies the distinction between subjective and objective moral evaluation. To bring this out, I’ll note a general feature of objective moral evaluation: even when she knows that one of her past actions, A, was subjectively bad, a virtuous person will have a disposition to feel relief upon learning that A was not also objectively morally bad.Footnote 29

Consider an example:

Deathbed Promise As a benighted youth, Duane was inadequately attentive to his grandmother. After she passed away, he was not sure whether he had made her a deathbed promise: to put flowers on her grave on October 1st, 1992. But when October 1st, 1992, rolled around, rather than trying to determine whether he really had made the promise, Duane decided to stay home and play video games instead of putting flowers on her grave.

Duane has now grown up, and he has become a virtuous person. He learns that he did not actually make his grandmother this deathbed promise.

The moral badness of young Duane’s action has more to do with the way he acted given his evidence than with the way he acted given all the facts. In other words, his action is easier to criticize as subjectively morally bad than as objectively morally bad. Had he actually made the deathbed promise, his action would have been morally bad in an objective sense as well. In this case, I suggest, Duane might well be disposed to feel relief when he learns that he never actually broke a deathbed promise. Perhaps that disposition would not be activated; perhaps, for instance, it would be overwhelmed by his sense that his action was subjectively morally bad. Nevertheless, it would surely be sensible for Duane to feel that he had escaped doing something that was morally bad in an importantly different way.

The problem is this: in the range of cases that motivate radical moral encroachment in the first place, a virtuous person would not be disposed to feel relief if her belief turned out to be true. Return to Aidan’s case: suppose that, after forming his racist belief about the diners entering his restaurant, he becomes a virtuous person, and he also learns that his racist belief was true. In this case, I suggest, Aidan would not have any disposition to be relieved. He would regard the diners’ actual tipping as irrelevant to his moral self-assessment.Footnote 30

This provides evidence that Schroeder’s debunking maneuver falls flat. If his application of the subjective/objective distinction were apt, we would regard true racist beliefs, roughly, like we regard actions that narrowly avoid breaking promises. But, morally speaking, forming a true racist belief is more like actually breaking a promise than like narrowly avoiding breaking a promise. So, even in the face of Schroeder’s debunking story, there are good reasons to be suspicious of the claim that the moral badness of racist beliefs like Aidan’s has something to do with the possibility that they are false.Footnote 31

To sum up: by positing a connection between a belief’s moral badness and its falsehood, the radical moral encroacher makes it more plausible that a belief’s moral badness is an RKR. But she also signs up to implausible claims about the source of moral badness in beliefs. Of course, if radical moral encroachment were well-motivated on independent grounds, this cost might be bearable. But in Sect. 3.2, we saw that the primary motivation for radical moral encroachment is no motivation at all. So it makes sense to respond to the many challenges that face radical moral encroachment not by further refining the theory, but by looking for the best available alternative. In this paper’s final section, I’ll do just that.

4 Bad beliefs without radical moral encroachment

This paper aims to show that, although we can safely accept moderate moral encroachment, we should not accept radical moral encroachment. So far, I’ve been making the latter point by showing that radical moral encroachment commits us to an unattractive normative theory: either it draws the RKR/WKR distinction poorly, or it locates the moral problem with bad beliefs in the wrong place. In this final section, I’ll take a different approach: I’ll note some alternate treatments of the cases that motivate radical moral encroachment. If these cases do not require us to adopt radical moral encroachment, and radical moral encroachment is also both ill-motivated and beset with problems, we can comfortably reject it.

As I mentioned in Sect. 3, the cases that are most frequently cited by defenders of radical moral encroachment are structurally similar to Racial Stereotype. They involve beliefs about particular individuals that are based on information about statistical regularities. Many such cases seem morally problematic, and many also seem to involve epistemic irrationality. Can we explain the irrationality of beliefs like these without appealing to radical moral encroachment?

In the vast majority of cases, I think that we can. Most regularities that hold within demographic groups in modern societies, especially the ones that are most likely to be cited by bigoted thinkers, are remarkably weak. What’s more, most people have plenty of evidence to this effect. When a person sincerely avows the belief that some enormous percentage of a demographic group shares a trait of any importance, we should suspect that she’s approaching her evidence in a flawed way. So, in most real-life cases of beliefs based on putative statistical regularities, there’s no puzzle as to why the beliefs are epistemically irrational; they are based on assumptions that are ill-founded, irrational, or wildly inaccurate.Footnote 32

What should we say, though, about the rare cases in which there really is strong evidence of a demographic regularity? Even in cases of this sort, there does often seem to be pressure against forming judgments about particular individuals based on these regularities. I’ll now survey two ways in which we could interpret this pressure without taking on the worrisome costs of radical moral encroachment. On the first approach, the pressure is both moral and epistemic. On the second, the pressure is moral alone. Throughout, I’ll illustrate the views at hand by discussing Racial Stereotype, and simply stipulating that Aidan’s evidence genuinely does make it very likely that any given Black diner will leave a tip below 20%.

First, perhaps appeals to moderate moral encroachment are sufficient to explain why Aidan’s belief is epistemically irrational. Recall that, on moderate approaches to moral encroachment, a belief’s rationality depends on certain moral facts having to do with actions or options. Moss (2018a, sec. 4) and Bolinger (forthcoming, sec. 4) have both applied this view to cases like Racial Stereotype. Both suggest that, when we adopt certain beliefs based on statistical generalizations about demographic groups, we immorally risk relying on those beliefs in action, and thereby contributing to pernicious shared social practices.

This approach faces two initial problems. One has been noted by the proponents of radical moral encroachment: in some cases like Aidan’s, there does not seem to be any risk that the relevant belief will inform any future action.Footnote 33 Aidan forms his belief while leaving work, and even if he bumps into the family of diners again, he will surely not remember them. Why think that, by forming his belief, he imposes on them a risk of any kind?

The second problem for this approach is similar to the problem that I posed for radical moral encroachment in Sect. 3. Even if we grant that a belief like Aidan’s may dispose him to act badly, this possibility doesn’t seem closely connected to the truth or falsehood of that belief. To see this, suppose that Aidan reasons as follows: “It’s very likely that this family will leave a tip below 20%. But what if I act on the expectation that they are low tippers, but they turn out to be high tippers? Then my action would be morally problematic!” Here, Aidan seems to be assuming that it is morally acceptable for him to act in a certain way toward the family, unless they will actually leave a tip of 20% or higher. But this is a bad assumption: the moral status of his action does not depend on the family’s actual tipping practices.

The point generalizes to a great many of the cases that are discussed in conjunction with moral encroachment. In general, the most serious moral problems with actions taken on the basis of racial profiling do not depend on whether the profiling in question is accurate. It’s morally important that we put a stop to certain patterns of behavior based on expectations about members of oppressed groups. But, usually, it’s no less important to do so when our expectations turn out to be accurate than when they turn out to be inaccurate. For instance: people who have never spent time in jail deserve not to be treated as felons solely on the basis of their race. But felons also deserve not to be treated as felons solely on the basis of their race.

This is a problem for the claim that Aidan has an RKR for withholding belief. As we saw in Sects. 2 and 3, we should prefer a view on which the moral reasons that bear on epistemic rationality are intimately tied to the risk of falsehood. This is the most promising way to distinguish between cases like Parked Car High Stakes and Bribe for Withholding. But the most noteworthy moral problems in cases like Racial Stereotype do not stem from the risk of acting on stereotypes when they fail to hold; rather, they stem from the risk of acting on those stereotypes at all.

So there are reasons to think that, even if there is moderate moral encroachment in epistemology, it does not extend to Racial Stereotype. Now, perhaps this initial challenge can be handled. Moss (2018a) briefly suggests that the moral badness of acting as if someone has a statistically prevalent trait is indeed distinctively serious when she lacks that trait. Perhaps this is right. But note that, for this proposal to be made good, the distinctive badness of acting on the basis of false racial profiling must not simply be swamped by the moral badness that comes from acting on the basis of objectionable racial profiling in the first place. If the latter moral badness settles all questions of how to act, after all, the risk of error makes no difference to the policies it’s best to adopt for future episodes of practical reasoning.

There are reasons to worry, then, that moderate moral encroachment cannot establish that all cases like Aidan’s involve epistemic irrationality. In light of those reasons for worry, I want to offer an alternative approach—a second position that does not require us to take on the unattractive commitments of radical moral encroachment. On this second approach, the vast majority of beliefs like Aidan’s are epistemically irrational for banal reasons: they are based on spurious evidence, bad theory, projection errors, or irresponsible motivated reasoning. This approach also grants that, in some cases, questions about how to treat a person might hang on whether she actually fits a particular demographic trend; in those cases, moderate moral encroachment can be used to explain why outright belief is epistemically irrational.

In the rare cases where neither of these explanations is available, however, this second approach simply grants that the belief could be epistemically rational. Importantly, this is not to say that the belief is morally kosher. To the contrary, this second approach explicitly embraces the possibility of a tension between the moral status of a belief and its epistemic rationality. As I argued in Sect. 3.2, this is no cost to the theory. There can be tension between epistemic rationality and moral norms when it comes to credences, and there can be tension between the RKRs that favor an emotion and the moral reasons against it. It should be no surprise that this tension afflicts belief as well.

I’ll close by considering an objection: doesn’t this view let believers like Aidan off the hook?Footnote 34 One way to make this objection more precise is to lean on the notion that WKRs are comparatively inefficacious. When we accept that a belief’s moral badness is a WKR, we may thereby imply that the belief is difficult to abandon. And the difficulty of meeting a moral demand sometimes mitigates blame. So it may seem that, by allowing that beliefs like Aidan’s might be epistemically rational, I wrongly imply that Aidan might deserve little blame.

I’ll make two points in response to this worry. First, some sorts of moral badness (say, perhaps, viciousness) do not presuppose aptness for blame. My discussion leaves open the possibility that, though Aidan cannot be blamed for his belief, his belief is still very seriously morally bad in some other sense.

Second, those who are inclined to make room for blaming Aidan can certainly do so. Though withholding belief on the basis of a moral consideration is indeed distinctively psychologically difficult, getting oneself to withhold belief regarding an uncertain proposition is often not difficult at all. Getting oneself to withhold is, generally, nowhere near as difficult as getting oneself to believe against the evidence. If Aidan claimed, “I’m trying to abandon the belief that this diner will leave a tip below 20%, but I’m just having such a hard time keeping an open mind,” we would generally not accept his claim as an excuse.

Throughout this paper, I’ve argued that the moral badness of a belief does not make a difference to its epistemic rationality. Some have taken cases like Racial Stereotype to provide evidence to the contrary. In this final section, I’ve cast doubt on the evidential force of those cases by noting other available ways to interpret them.

In conclusion, we need not embrace radical moral encroachment; what’s more, by rejecting it, we can avoid a host of problems. The problems I’ve raised for radical moral encroachment, however, are not shared by moderate moral encroachment. Certain moral facts, then, may indeed play a surprising and important role in setting epistemic standards. But the fact that a belief would be morally bad to hold is not among them.