1 Introduction

It is commonly observed that our milieu and upbringing can influence what we believe. John Stuart Mill wrote in On Liberty, “the same causes which make [someone] a Churchman in London, would have made him a Buddhist or Confucian in Pekin” (2008, p. 23). In a similar vein, the late Cohen (2000) noted that Oxford graduates (like himself) tended to believe in the analytic/synthetic distinction, while Harvard graduates tended to deny it. While it has proven hard to pin down precisely, such observations raise an important skeptical worry and suggest a regulative principle: to the extent that we find ourselves having beliefs that are shaped by contingent features of our milieu or upbringing, we have reason to reduce confidence in them. This is part of Mill’s point in On Liberty—he wants would-be religious censors to take their beliefs with a grain of salt and admit their fallibility, given how easily they could have had radically different religious beliefs, had they been born elsewhere.

The problem is not merely one of having certain beliefs as a contingent matter, however. There are all sorts of beliefs we hold for which the causal explanation of why we hold them does not play a justificatory role with regard to those beliefs (White, 2010). But such etiological information need not undermine our justification for holding them. Suppose you agree to meet a friend for coffee at an establishment that also houses a collection of used books. Your friend is running a few minutes late, and so you decide to browse the non-fiction section. There, you find a book on medieval history and learn some new information about a particular city’s economic trajectory in the mid-fourteenth century. Now, the beliefs you acquire in this way are highly contingent in one sense—they depend on your friend being late on that day and on your deciding to pick up that particular book out of all the others on that particular shelf. Yet this contingency is not troubling at all, especially if the source is reliable.

Nonetheless, there is certainly something behind the worry expressed by Mill and Cohen. This paper draws on recent work in psychology and social epistemology to develop a particular way of cashing out the skeptical challenge based on one’s upbringing or milieu. The basic idea is this. Primarily, beliefs play a representational role in our cognitive economy; their task is to help us navigate the world. However, some have recently argued that beliefs can also play a signaling role (Funkhouser, 2017, 2021, 2022)—they can serve to assure members of the communities important to our success that they can count on us for certain types of cooperation. In other words, by having certain types of beliefs, we signal to fellow members of our “tribe” that we are part of the same team. Such “socially adaptive beliefs” (Williams, 2021b) are sensitive to social punishments and rewards in addition to (or in extreme cases, in lieu of) the total available evidence. The upshot of the paper is simple: where there is good reason to suspect that a particular class of someone’s beliefs is socially adaptive in this way, it is rationally incumbent on that person to reduce confidence in those beliefs. Evidence of socially adaptive belief formation with respect to P furnishes at least a partial debunker for one’s belief that P.

Now, it is notoriously difficult to investigate our own biases. Even highly intelligent people are prone to a “bias blind spot” (Pronin & Kugler, 2007; West et al., 2012), so that the propositions which form the contents of biased, prejudiced, or otherwise irrational beliefs nonetheless present themselves as true and supported by the evidence we have. However, it is plausible that socially adaptive beliefs are likely to occur when three conditions are present: (i) the costs to the individual of being wrong are negligible, (ii) the beliefs fall under sufficiently intense social scrutiny, and (iii) the evidential landscape relevant to the beliefs is sufficiently complex so as to make easy verification difficult to come by. Rational utility maximizers will then be led to form beliefs that line up with their social incentives, independent of whether they are true (Bénabou & Tirole, 2016). While this may well be optimal from a practical (particularly prudential) perspective, the epistemic status of such beliefs is compromised. In particular, the presence of belief-forming mechanisms responsive to such “off-track” social influences facilitates a debunking argument. Put more pithily, when conditions (i)–(iii) are present, creatures like us will find it difficult to form justified beliefs or attain knowledge. This yields a rough heuristic for socially situated knowers like us, one that goes beyond the general and unhelpful suggestion that we correct for our biases: namely, try to determine whether conditions (i)–(iii) hold in your case with respect to some domain of beliefs S. To the extent they do, downgrade your confidence in your commitments regarding S, insofar as you can.
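The incentive structure at work here can be made explicit with a toy formalization (the notation is mine and purely illustrative; it is not drawn from Bénabou and Tirole's own model). Let $s$ be the social reward for convincingly holding the group-approved belief that P, let $s'$ be the social penalty for visible doubt or dissent, let $\pi$ be the probability that P is false, and let $c$ be the personal cost incurred by acting on P if P is false. Then, roughly,

$$EU(\text{believe } P) = s - \pi c, \qquad EU(\text{suspend or deny } P) = -s'.$$

Condition (i) makes $\pi c$ negligible, condition (ii) makes $s$ and $s'$ large, and condition (iii) ensures that a belief formed on this basis can still be presented, to oneself and to others, as an ordinary response to the evidence. Under such circumstances, believing P maximizes expected payoff whether or not P is true.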

Conditions (i)–(iii) help us home in on particular sets of beliefs that are susceptible to the challenge presented here, and thus offer some practical guidance. Analogously, consider the fact that under specific conditions, pilots are liable to make certain errors. Without further details as to what these conditions are and how one might identify them, it is difficult to give practical guidance to pilots. “Believe that which your evidence supports” is about as useful as “Do what you ought to do,” which is to say, not very much. However, as the case of hypoxia, often discussed in the higher-order evidence literature, shows, we can sometimes do better.Footnote 1 Hypoxia, or oxygen deprivation, can occur at high altitudes and can impair judgment. Nevertheless, from the perspective of the decision-maker suffering from hypoxia, their beliefs and decisions will look fine. Yet, recognizing these features of hypoxia can help us produce useful guidance—namely, when one is above a certain altitude, one should not put as much weight on one’s first-order judgments about what is possible at the time with the aircraft. Rather, perhaps, one should defer to antecedently determined protocols, until a lower altitude is reached. Conditions (i)–(iii) are intended in a similar vein.

What kinds of beliefs are susceptible to debunking in the way described above? I will argue that creedal beliefs—moral, political, religious, sectarian, or ideological assumptions that serve to bind communities together—are particularly susceptible. It is here that conditions (i)–(iii) are most likely to obtain. The paper thus advances the fairly radical claim that on such matters it will prove very difficult for creatures like us to achieve knowledge. Insofar as we care about getting things right epistemically—i.e. having those doxastic attitudes best supported by our evidence—we ought to become more “doxastically open” (Ballantyne, 2019) in these domains. In this regard, Socrates might have been right when he said in the Apology:

So I withdrew and thought to myself: “I am wiser than this man; it is likely that neither of us knows anything worthwhile, but he thinks he knows something when he does not, whereas when I do not know, neither do I think I know; so I am likely to be wiser than he to this small extent, that I do not think I know what I do not know.” (Plato, 1997, p. 21d)

2 Politics, voters, and ignorance

Since Downs’s (1957) classic decision-theoretic analysis of democracy, it has been commonplace to point out that voters can be rationally ignorant of policy matters relevant to good political decision-making.Footnote 2 The main idea is that collecting such information involves significant costs—namely, the opportunity costs of the time spent reading bills, learning about economics, foreign policy, and the like. The benefits of being well-informed, on the other hand, are very small, even if one has an altruistic utility function. Thus, suppose two candidates, A and B, stand for election. In modern democracies, the policies the candidates will actually succeed in implementing once in power are often not very different from each other. But suppose that in a particular year and place the difference is stark. Even then, however, cost–benefit analysis advises against becoming well-informed. This is because one vote is extremely unlikely to alter the outcome of any major election (Lomasky & Brennan, 2000). Thus, some argue that it is advisable on consequentialist grounds to spend that time doing other things (Freiman, 2020)—say, helping the needy (from a utilitarian perspective) or doing something enjoyable (from a prudential perspective).
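The arithmetic behind this verdict can be made explicit with a toy calculation (the formalization and the numbers are purely illustrative; they are not drawn from Downs or from Lomasky and Brennan). Let $p$ be the probability that one's vote is decisive, $B$ the value to the voter of the better candidate winning, and $C$ the opportunity cost of becoming well-informed. Becoming informed pays only if

$$p \cdot B > C.$$

Even if one values the better outcome at \$100,000 and puts the probability of casting the decisive vote at one in ten million, the expected benefit of being informed is a single cent, while the cost of seriously studying policy runs to many hours. The verdict of the cost–benefit analysis is therefore insensitive to quibbles over the exact figures.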

The opportunity costs involved in gathering relevant information are only part of the story, however. For they do not explain why political partisans are prone to selectively lazy or biased processing of evidence. Partisans are more likely to correctly spot mistakes in arguments that yield conclusions they antecedently disagree with, but less likely to spot similar mistakes in arguments that support a favored conclusion. For example, Kahan et al. (2017) observe errors in statistical reasoning in cases where the presented data conflict with partisan ideology, even when the subjects are otherwise high in numeracy (i.e. good enough at math). Similar effects have been noted in the case of evaluating the logical validity of arguments (Gampa et al., 2019). Another study suggests that subjects tend to (mistakenly) judge co-partisans as more reliable at getting things right even in non-political matters; in this case, an incentivized shape recognition task (Marks et al., 2019). Importantly, being more knowledgeable about politics in general does not mitigate such effects—in fact, it typically exacerbates them (Hannon, 2022).

To the extent that this empirical work points to biased processing of evidence—over and above simply ignorance of relevant facts—we need to move beyond the opportunity cost model. It’s not merely that partisans are ignorant of the relevant facts because gaining such knowledge is time-intensive. Rather, it’s plausible that they actively avoid certain types of knowledge (cf. Golman et al., 2017).

Why would creatures like us do this? In general, having an inaccurate map of the world reduces our chances of reproductive success. Our ancestors in the Pleistocene who mistakenly believed that they could fly by flapping their hands or that tigers and bears were not dangerous would have been prone to falling off cliffs or getting eaten. They would have been unlikely to leave many surviving offspring. In modern times, you will not leave many surviving offspring if you think that eating a spoonful of arsenic every day is healthy for you or that you can swim across the Atlantic if and only if nobody is looking. The capacity for forming and maintaining accurate beliefs is thus robustly adaptive.

However, notice that cost–benefit analysis in such cases firmly comes down in favor of having accurate beliefs. Hold false beliefs and you risk life and limb. But this need not always be the case. When it comes to certain types of political beliefs in particular, cost–benefit analysis might recommend having inaccurate beliefs. In some cases, possessing knowledge might be detrimental to our prudential interests, and we might succeed in such scenarios in maintaining motivated ignorance.Footnote 3

Now, the idea that the dictates of prudence can conflict with the dictates of epistemic rationality is familiar in the philosophical literature. Most famously, Blaise Pascal (1910) argued that we can have good reason to believe in God even if such belief is not rationally called for by the available evidence. In general, we can have prudential or instrumental reason to believe P even if our available evidence suggests not-P—if, for example, an evil demon supplies us with the needed incentives.Footnote 4

In the case of politics, it’s not gods and demons but our communities—parents, teachers, colleagues, neighbors, co-worshippers, etc.—who supply us with the relevant incentives. Suppose you live in a context where your social and professional success depends on affirming P. If you deny P, or say that you’re agnostic about it, you will lose friends and job opportunities. Or alternatively, family members or attendees of the religious congregation where you worship will admonish you or distance themselves from you. Since we are social creatures who depend on others for our material and psychological needs, and who are often motivated by the desire for social status (Anderson et al., 2015), such conditions give us fairly strong practical reason to affirm P.

This is particularly the case in contexts marked by intense affective polarization—i.e. where in-group members view out-group members with a strong negative valence and are often disposed to punish or distance themselves from them where they can. The point is of special relevance nowadays, since it has been widely noted that the modern United States in particular is marked by relatively high affective polarization (Iyengar & Westwood, 2015; Talisse, 2019). Of course, the phenomenon is hardly new in human history—witness the Spanish Civil War or the religious conflicts of the seventeenth century, for instance.

So, it pays to believe P when the group important to your success incentivizes you to believe P. This isn’t enough, however—we must also look at the costs of being wrong if P is false. Specifically, we need to look at the costs of having an inaccurate map of the world, apart from the costs imposed by the social group. As the cliché goes, you wouldn’t jump off a bridge just because your friend group thinks you should. Indeed, the best thing to do here is to find new friends. The costs of holding the false belief—viz., that, all things considered, one should jump off a bridge—are extremely high.

In modern democratic contexts, however, the costs of being wrong about politics and policy are negligible. On the face of it this might sound ludicrous. Consider the matter of anthropogenic climate change. Scientific consensus and the best available evidence support the idea that global temperatures are rising and that this will have dire effects if not dealt with urgently. Yet, opinion on this matter is polarized, and downplaying the risks can be a badge of tribal identity (Greco, 2021; Kahan, 2012). Now consider Susan, who believes that the idea of anthropogenic climate change is bunk. Isn’t it obvious that the costs of being wrong here, both to Susan and to the rest of society, are extremely high? Similar points may be made about having false beliefs about a range of things: economic policy, foreign military intervention, etc.

However, to say this would be to run two things together that ought to be kept separate. On the one hand, there are the costs to society as a whole of taking the wrong policy approach to climate or economics or foreign relations. On the other hand, there are the costs to an individual of that individual being wrong about these matters. These latter costs are negligible even if the costs to that individual of society making the wrong decision are dire.

There are two important features of such cases. First, the individual has negligible impact on whether a certain policy is implemented. If Susan is a typical voter, she’s unlikely to affect the outcome of any major election, let alone the particular policies implemented by legislatures and executives on the issue of climate change. Second, the costs of Susan’s being wrong, minuscule as they already are, are largely externalized: they are spread out across the nearly 8 billion people living on earth. In this way, good policy-making is itself a public good (a good that is, at least approximately, non-excludable and non-rivalrous), and as such classical decision theory predicts it will be undersupplied (Freiman, 2017).
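A back-of-the-envelope calculation (the figures are stipulated purely for illustration) brings out how small the individual stake is. Suppose the wrong climate policy would impose a total social cost of \$10 trillion, spread roughly evenly across 8 billion people, so that Susan's personal share is about \$1,250. If her chance of casting a decisive vote is on the order of one in ten million, her expected personal cost from voting on the basis of a mistaken belief is roughly

$$10^{-7} \times \$1{,}250 \approx \$0.0001,$$

a hundredth of a cent. By contrast, the social costs of visibly dissenting from her community's creed fall on her alone, and in full.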

Contrast this with the costs of being wrong about private goods and other personal decisions. If you make the wrong choice about where to live, what career to choose, how much to pay for a car, where to walk alone at night, which restaurants to eat at, etc., you will have to bear (most of) the bad consequences. In such cases, the practically rational decision-maker will have strong incentives to have largely accurate beliefs.

So, when it comes to policy-making, we pay a negligible cost for being wrong. However, we pay a high cost for holding beliefs contrary to those our community holds and in which it has a strong enough affective investment. Now on the face of it, this doesn’t seem quite enough to incentivize sharing our community’s false beliefs. After all, what matters is what we say, not what we think—our beliefs may be transparent to us, but not to others.

However, this underestimates two things. First, we have evolved mechanisms to be able to tell, in typical cases, whether someone genuinely believes something. Perception can be smart in this way (Gallagher, 2008). In general, the ability to detect lying or deception would have been greatly advantageous for creatures like us, living in small tribal groups dependent on sustained internal cooperation, for most of our history. Furthermore, our beliefs often spill over into our words and actions, which are then observable by others. Second, there are psychological costs involved in convincingly saying one thing and believing another. Cognitive dissonance is uncomfortable, and we typically do what we can to avoid it (Festinger, 1962). As Funkhouser (2017, p. 825) puts it, “other things being equal, it is healthier for a person to be single-minded and non-duplicitous.” There is, in addition, the cost of constantly being on the lookout for who is around so that one can adjust what one says. The way to avoid all these costs, of course, is to simply believe what one’s community thinks it’s important to believe.

In this way, if the conditions are right, we can be incentivized to believe P even if the evidence does not support P, or supports the opposite conclusion. However, because of our capacity for self-deception, which is adaptive in a range of settings (Trivers, 2011), we will not see our belief that P as being causally influenced by social, as opposed to rational, considerations. The best way to convince others that one believes P is to believe it oneself and, furthermore, to take there to be good evidence that justifies believing P. Thus, simple introspection will lead us to reaffirm that we believe P, along with our group, because it is the conclusion best supported by the available evidence—and not because of the social costs we’d face if we didn’t believe P.

3 Political beliefs: basic and complex

There is plausibly a limit to what most people can be incentivized to believe. If you’re promised one billion U.S. dollars to actually come to believe that the earth is flat or that 2 + 2 = 5 (assume a machine able to accurately scan and verify mental content), you will find it difficult to do so despite the enormous reward. With regard to these beliefs we (or most of us) can’t help but be evidence-responsive. Furthermore, our beliefs are typically not within our intentional control in the way our actions are. Thus, it might seem implausible on its face that our communities can incentivize us to believe certain claims independently of where our available evidence points.

The key point to note by way of response here is that when our reasoning is biased by social incentives, this need not be transparent to us. In other words, we do not for the most part consciously decide to hold irrational beliefs when we do. Consider again Susan, the climate change skeptic. In the usual case, Susan will not have consciously decided to become a skeptic so as to fit in with her community. Rather, the incentives she’s faced with will guide her processes of reasoning, inference, evidence gathering, and so on in a biased way that’s not transparent to her. These influences will operate in the background.Footnote 5

Thus, drawing on a large psychological literature, Ziva Kunda (1990, p. 498) writes: “People do not seem to be at liberty to conclude whatever they want to conclude merely because they want to. Rather, I propose that people motivated to arrive at a particular conclusion attempt to be rational and to construct a justification of their desired conclusion that would persuade a dispassionate observer.” Moreover, in such cases people typically “do not realize that the [reasoning] process is biased by their goals” (Kunda, 1990, p. 486). Recently, Mercier and Sperber (2017) have argued that our reasoning processes serve to rationalize our beliefs and actions in a way that we see as justifiable to others. However, our rationalizations do not appear to us as such; rather they are facilitated by subtle mechanisms such as selective memory search for supporting evidence (Kunda, 1990).

If the preceding discussion is right, we can expect that easily verifiable claims, or obvious a priori propositions, usually will not be the subject of such motivated ignorance. In the political realm, we can distinguish between basic political facts and more complex political claims (Gibbons, 2021). Basic political facts would include propositions like: the U.S. Congress is composed of the House of Representatives and the Senate; Tony Blair was the Prime Minister of the United Kingdom in 2000; India gained independence from Britain in 1947; and so on. Such claims are easily confirmed and supported by large bodies of evidence, and it is difficult to produce rationalizations which deny them.Footnote 6

Complex political claims, however, are ripe for motivated rationalization. Consider, for example, a claim like: E1 is a better economic policy than E2. Suppose E1 is actually better, all things considered. Nonetheless, like many policy proposals, there are some things to be said in favor of E2. Someone invested in defending E2 will then have ample resources to rationalize his view, to himself and others. For one, he might overemphasize the benefits of E2 and underemphasize the drawbacks. That is, he will be prone to weighting considerations that count in favor of E2 more heavily than is merited. Likewise, he will weight considerations that count in favor of E1 less heavily than he ought. Furthermore, he might selectively recall all the evidence that suggests E1 could cause layoffs, while ignoring stronger evidence that E2 is likely to cause more layoffs. He might largely consume newspapers, podcasts, TV shows, etc. that come down in favor of E2. That is, he will be a customer of the vendors at the “rationalization markets” (Williams, 2022) which cater to his (unconscious) goal of believing E2. Importantly, because the evidential landscape is relatively complex, and there is a large body of relevant evidence, some of which points one way and some the other, it will be difficult to establish easily, to his satisfaction, that E1 is in fact better all things considered. Anyone who has entered into debates about such matters will be familiar with this sort of impasse.

Contrast this with the case of basic political claims. Imagine that your friend mistakenly believes that Theresa May was the U.K. Prime Minister in 2000. In this case, you can simply tell them that it was Blair, and they will likely take your word. And if they don’t, a simple internet search would then convince them. Such disagreement is unlikely to persist.

Complex political beliefs, particularly those relevant to tribal identity, are highly susceptible to the debunking argument presented in this paper. For such beliefs are likely to be shaped by social influences—they are likely to be what Williams (2021b) calls socially adaptive beliefs. They meet the conditions outlined in Sect. 1, namely: (i) the costs to the individual of being wrong are negligible, (ii) the beliefs fall under sufficiently intense social scrutiny, and (iii) the evidential landscape relevant to the beliefs is complex enough so as to make easy verification difficult to come by.

4 Debunking and skepticism

This section aims to clarify the sense in which creedal beliefs satisfying conditions (i)–(iii) are susceptible to debunking. Furthermore, I argue that the scope of the debunking challenge does not problematically over-extend so as to invite more general skeptical worries.

I am claiming that certain facts about the causal influence of our social groups on our beliefs are incompatible with our being fully justified in holding those beliefs. Now there are two senses in which we can understand the debunking import of such etiological explanations of belief, helpfully distinguished in White (2010). The first proposal would be that one is antecedently justified in believing P, but upon learning about and reflecting on the nature of social influence on our beliefs, reading the relevant psychological literature, and so on, one loses that justification for holding the belief that P. Thus, to go back to Cohen’s example, the idea would be that before he notices the pattern that Harvard graduates tend to deny the analytic/synthetic distinction while Oxford graduates tend to affirm it, Cohen is justified in believing in the distinction. It’s only after noticing the suspicious correlation that he loses justification. To use a simpler example, imagine you walk into a dimly lit room and notice a piece of paper that looks red. You then form the belief that the paper in front of you is red. However, you soon learn that the room is illuminated by a red light. While initially you were justified in believing that the paper is red, after learning this information you lose that justification.

On the second construal, the way in which we form certain beliefs is incompatible with their being justified in the first place. Here there is what may be called a blocking debunker, where “facts about your causal predicament block you from ever being justified, whether you realize it or not” (White, 2010, p. 575). It seems to me that the debunking challenge posed by socially adaptive belief is of this latter sort. Beliefs formed with the goal (even if an unconscious one) of reaping social rewards and avoiding social costs are presumably not justified to begin with. Importantly for the debunking story, beliefs formed in this way are causally influenced by processes that are not robustly truth-tracking.Footnote 7 However, the causal information we might learn through the psychological literature on motivated reasoning and reward-sensitive belief formation does not, after the learning process, defeat our justification. At best, it helps us see that we weren’t justified in the first place, and insofar as we care about being epistemically rational (as opposed to prudentially rational) we ought to become more agnostic about the relevant claims.

None of this is to deny that we can learn much from the testimony and guidance of others. Indeed, this is how we acquire most of our scientific knowledge (Hardwig, 1991). Trust in the appropriate epistemic authorities is crucial to acquiring knowledge in a world marked by an intense division of cognitive labor. However, these observations are compatible with thinking that beliefs with respect to which conditions (i)–(iii) hold are susceptible to the debunking challenge presented here.

Note further that the challenge is fairly circumscribed, in that it only affects beliefs for which the three conditions hold. Many of our beliefs simply do not meet these conditions: either they are not incentivized and scrutinized by our communities, or they’re a priori obvious or easily verifiable, or the costs of being misinformed about the subject matter are sufficiently high. Hence, for instance, I believe there is a computer in front of me, that 2 + 2 = 4, that if I get a bacterial infection I should take the antibiotics my doctor prescribes, that Tokyo is the capital of Japan, that the Sun generates energy by a process of nuclear fusion, and so on. None of these beliefs fall prey to this particular skeptical challenge, and thus no global skepticism of the implausible kind is threatened by the argument.

Furthermore, the debunking challenge presented here has a different structure than arguments leading to more global skepticism. These latter arguments start by noticing that we lack independent reason to think that our beliefs are not systematically misguided. Thus, for example, the Cartesian skeptic claims that we lack evidence, independent of our perception, that our perceptions are not mass delusions. However, the argument presented here takes a significantly different route: it marshals positive evidence for thinking that our belief-forming mechanisms, under certain social conditions, are likely to be unreliable because they are inappropriately responsive to the relevant evidence. The normative upshot, insofar as one cares about epistemic rationality, is thus consistent with what Katia Vavova calls the Good Independent Reason Principle, which says, “to the extent that you have good independent reason to think that you are mistaken with respect to p, you must revise your confidence in p accordingly—insofar as you can” (2018, p. 145). This principle offers a way to respond to (epistemically) irrelevant influences on our beliefs of the problematic sort, without thereby threatening a more global skepticism, given the contingency of many of our beliefs.

It bears mentioning that the heuristic is intended as a rough guideline, and there are bound to be vague and indeterminate cases. Nevertheless, one might worry that the heuristic problematically overgeneralizes. Thus, suppose one comes to believe that aliens, not humans, constructed the Egyptian pyramids. Saying this out loud will likely have some adverse consequences—people will wonder whether the person is okay or maintains a basic grip on reality. And it’s not obvious that the claim is easily verifiable in the way that claims like ‘grass is green’ are. It might further be contended that being wrong about the history of the pyramids imposes no direct costs on someone living in the modern context. Yet, it’s implausible that this puts pressure on us to reduce credence in the proposition that humans built the pyramids.

There are three main points to note in response. First, it’s reasonable to surmise that being wrong about facts like these, within the modern context, is not likely to be an isolated mistake. Rather, it suggests that the person’s basic reasoning capacities, to put it bluntly, are misaligned in a way that is likely to invite costs as a general matter. Second, claims of this kind regarding the pyramids are not nowadays matters of tribal identity; they are not creedal in the same sense. Hence, they’re not subject to social scrutiny of the sort discussed here—we do not scrutinize each other’s beliefs about pyramids in the same way we might scrutinize beliefs about politics or religion. Hearing someone say that aliens built the pyramids invites puzzlement and a kind of humorous curiosity, rather than, say, anger. Relatedly, for most of us, and presumably the reader, affirming the claim that humans built the pyramids carries no benefits, in the way that affirming in-group beliefs often can. Third, with regard to the ancient history of the pyramids, we can be fairly confident that the epistemic practices and processes of verification and dissemination of information are not significantly affected by creedal interests. Thus, there will be easily identifiable and reliable sources that give us a relatively unbiased and complete picture, given what evidence is available to us on the whole. Scholars working on the history of the pyramids will not plausibly face non-truth-tracking social forces as they conduct their work. Similar points might apply to a range of other subjects of inquiry—ornithology, photonics, metallurgy, and so on. This is less likely to be the case when it comes to characterizations of social causes and interpretations of recent history that are relevant to complex political claims.

Finally, it is noteworthy that the debunking import of socially adaptive beliefs is separate from the epistemic significance of disagreement. Given the way Cohen and Mill frame their worry, it can seem that the problem is fundamentally one of disagreement—i.e., does the fact that people just like us in the relevant ways (equally intelligent, careful, etc.) disagree with us, either actually or possibly, put pressure on the justification we have for our beliefs? In most cases of interest, however, mere disagreement should not be troubling, even if one takes the conciliationist (Elga, 2007) rather than the steadfast (Kelly, 2005) route. For one, members of different “tribes” will typically have different sets of evidence, and thus will not meet the key condition of epistemic peerhood. Second, political disagreements about a particular issue often occur against a backdrop of disagreement on a range of other issues, such that ex ante one will appropriately not consider the other an epistemic peer. To put it simply: if disagreement is all that’s going on, there’s no debunking challenge.

So why have authors like Cohen and Mill framed the worry in the context of disagreement? Why does it matter what people in Pekin (from the perspective of nineteenth-century British readers) or graduates of Harvard (from the perspective of later twentieth-century Oxford graduates) believe? One plausible interpretation is that reflecting on disagreement, especially disagreement with ourselves in certain other possible worlds, might help us see that our beliefs in the actual world are shaped by non-truth-tracking social forces. Had the social forces operating on us been different enough, we’d have believed differently, despite having access to the evidence that, by our lights now, is sufficient to defeat those beliefs. This, I think, is part of why Mill in particular emphasizes that historically, many cultures have had beliefs (and related practices) that we now consider abhorrent.

Now of course, the mere fact that people in history had different beliefs need not bother us epistemically. Such disagreement is not troubling simply by virtue of being disagreement. For example, at different times in history people believed that the sun revolved around the earth and that bloodletting was a sound medical intervention. These observations put no pressure on our modern beliefs about the solar system or the latest medical research. However, the case of moral and political beliefs is presumably different, and many will feel the intuitive pull of Mill’s worry regarding beliefs in those domains when he writes:

Yet it is as evident in itself, as any amount of argument can make it, that ages are no more infallible than individuals; every age having held many opinions which subsequent ages have deemed not only false but absurd; and it is as certain that many opinions, now general, will be rejected by future ages, as it is that many, once general, are rejected by the present. (Mill, 2008, p. 23)

One possible diagnosis of the difference is the following. On various (though not necessarily all) empirical matters, past ages were in a vastly more impoverished evidential situation than we are. Because of this, their disagreement puts no rational pressure on our present beliefs. For illustration, imagine a possible world where you are a brain in a vat, being fed a simulated virtual reality that presents a radically different picture from the actual world. Here you have lots of disagreeing beliefs, but this disagreement is not troubling because your evidential situation in that possible world is dramatically impoverished. However, it’s not obvious that past ages were in so impoverished an evidential situation regarding moral and political matters. It’s implausible, for example, by our own present lights, that one needs the latest research in moral and political philosophy to have enough evidence to ascertain the impermissibility of slavery, human sacrifice, female foot-binding, and so on.

But if that’s right, then notice that our psychological makeup presumably could not have changed so much over such a small time-horizon (from an evolutionary perspective) as to make us immune to the sorts of forces that led past generations to have false beliefs about morality and politics. Hence, the Millian point is that we too should be on the lookout for these same factors distorting our belief-forming processes here and now, insofar as we care about getting things right. And again, the challenge is circumscribed—to see disagreement from the past as having a general debunking force would prove too much. It would implausibly entail that most of our scientific beliefs are susceptible to debunking. When we focus on creedal matters, however, where conditions (i)–(iii) hold, the skeptical challenge is much more powerful and relevant.

5 Beyond politics: creedal beliefs in general

Sections 2 and 3 focused on the case of complex political beliefs that are the subject of partisan polarization. These are especially relevant to the modern context, particularly in many liberal democracies, due to the rising prevalence of affective polarization. However, complex political beliefs are not special in the sense that only they are likely to meet conditions (i)–(iii). Rather, any set of beliefs or commitments that serve to bind communities and distinguish in-group from out-group members can meet those conditions. As such, these beliefs are susceptible to the debunking challenge as well.

As the anthropologist John Tooby (2017) has noted, some of these beliefs can in fact be absurd, and this is a feature, not a bug, when we consider their function—which is to sustain coalitions. He writes:

[O]ptimal weighting of beliefs and communications in the individual mind will make it feel good to think and express content conforming to and flattering to one’s group’s shared beliefs and to attack and misrepresent rival groups. The more biased away from neutral truth, the better the communication functions to affirm coalitional identity, generating polarization in excess of actual policy disagreements. Communications of practical and functional truths are generally useless as differential signals, because any honest person might say them regardless of coalitional loyalty. In contrast, unusual, exaggerated beliefs—such as supernatural beliefs (e.g., god is three persons but also one person), alarmism, conspiracies, or hyperbolic comparisons—are unlikely to be said except as expressive of identity, because there is no external reality to motivate nonmembers to speak absurdities.

Thus, some kinds of religious beliefs, for example, are particularly susceptible to the challenge (cf. Funkhouser, 2017). In many places and times, they have met conditions (i)–(iii). First, the cost of being wrong about them is negligible in a range of settings. This is especially so where adhering to the religion’s belief-system does not involve significant costs that go beyond those that are the norm in the society. Consider for example the practice of worshipping at certain times of the day or week. It might be thought that if the core religious beliefs are false, this presents an opportunity cost—that time could be spent on entertainment or career advancement, say. However, this is not likely to be the case where the social norm involves worshipping at those times. The social costs of being seen not worshipping can be higher than the relevant opportunity cost. Furthermore, there can be a range of social and communal benefits to interacting with fellow worshippers during those times. And in addition, there is likely not going to be much that is enjoyable or productive to do if everyone else is worshipping.

Second, the beliefs in question are likely to be the subject of intense social scrutiny. In many religious communities, being a non-believer can mean ostracism or can even invite a high risk of violence. There is thus a huge incentive to at least appear to others as if one is a true believer. And finally, especially in the pre-modern context, it would have been difficult to easily falsify the religious claims in question. At the very least, even if particular claims are false in this domain, they are capable of being rationalized. Indeed, this is the function of apologetics, for example. Moreover, an explicitly distinct epistemology is often invoked, such that certain matters are to be taken by faith or scriptural authority.

An interesting, and perhaps counterintuitive, upshot here is that religious beliefs are less susceptible to the debunking challenge presented in this paper within the setting of liberal democracies with a sufficiently robust secular public culture. In modern times, a young professional pays little social cost, in most cases, for being an atheist in New York or Paris or Melbourne, for example. On the contrary, in some social and professional circles one may pay a cost in the other direction, viz., for being a devout enough adherent of some religion. This is of course not to say that religious beliefs may not be debunked on some other grounds in this context. It is just to say that the debunking challenge presented here, which is a very specific and circumscribed one, is less applicable in that context.

The broader point of interest here is that which of our beliefs are susceptible to the challenge is a highly context-sensitive matter. It is crucial to note what constitutes the relevant “reference network” (Bicchieri, 2006)—i.e. it matters hugely which group(s) are important for our social and professional success. The reference networks of an Iranian nurse, a New York based literary agent, and a Kenyan farmer are bound to be significantly different. Insofar as we care about being epistemically rational, we must reflect on which group(s) constitute our reference networks and what sorts of beliefs those groups incentivize and scrutinize.

Apart from religious beliefs, I want to briefly consider two other relevant kinds: ideological and sectarian. Ideological beliefs are related to complex political beliefs, and ideologies can offer a way for political parties to create a coherent platform. On Downs’s (1957, p. 141) analysis, they function to help voters “focus attention on the differences between parties,” and thus party ideologies emerge as a result of politicians, parties, and voters interacting strategically with one another. Ideologies can, however, also be construed in a more general sense, as offering a unified way to see the social and political world. They typically also offer a theory of the unfolding of history. As such, they can be somewhat orthogonal to particular party platforms within a given democratic setting. Various communities—religious, professional, geographical—have plausibly subscribed to particular ideologies in this latter sense, and moreover, scrutinized the behavior of their members so as to incentivize genuine belief. In modern times, though, few would label their own worldview an “ideology,” given the negative connotations the word often carries.

Ideological beliefs are especially prone to meet conditions (i)–(iii): communities sharing a particular ideology will often see non-believers in Manichean terms, as standing between them and the achievement of a just or virtuous society.Footnote 8 Individuals, however, will pay little cost for being wrong with respect to the beliefs that constitute the ideology. And finally, the evidential landscape relevant to ideologies is notoriously complex. In particular, almost any ideology makes claims about historical causes—certain events, systems, or processes playing the key role in causing certain outcomes. However, even where a claim of this type is false, it is often hard to verify it as such, given the many causal forces operative throughout history. Ideological claims about historical causes, then, are ripe for rationalization.

Sectarian beliefs have similar features. Thus, imagine two sects—they can be nations, sub-religions, ethnic groups, linguistically differentiated communities, and so on—that are historical rivals in a particular place and time. Each group, generally speaking, believes that it has been wronged by the other, and that its own claims are in the right. Hence, each group emphasizes those actions of the other that it sees as illegitimate, and will, moreover, make claims about historical causes, as in the case of ideology. Again, it’s straightforward to see how conditions (i)–(iii) will hold with respect to these kinds of beliefs.

6 How to escape the debunking challenge

Socially adaptive beliefs are prone to occur where our communities put pressures on us such that the demands of practical and epistemic rationality come apart. That is, it pays to be epistemically irrational in such cases, even though as a general matter it pays to have accurate beliefs about the world. Hence, regarding these particular matters—the specifics of which will be highly context-sensitive—being epistemically rational can be detrimental to our social or professional success. But most of us care about such success, and this gives us a powerful practical reason to be epistemically irrational. Prima facie, it’s hard to say why we should even care about epistemic rationality in such cases. As Nietzsche (1966, p. 1) puts it provocatively at the beginning of Beyond Good and Evil, “Suppose we want truth: why not rather untruth? and uncertainty? even ignorance?”

Let me briefly sketch two reasons why one might aim for epistemic rationality despite the contrary incentives. First, one might simply want to have an accurate picture of the world, even on matters where society influences us to have an inaccurate map. Second, there is plausibly a moral reason not to have an inaccurate map of the world, especially on matters where our collective decisions are important. Large scale policies driven by motivated ignorance can be socially catastrophic (Williams, 2021a). Thus, consider again Susan the climate change skeptic. Though her vote is unlikely to make a policy difference, it can be argued that by participating in a collectively harmful activity (i.e., voting for politicians who promote policies harmful to the environment), she is complicit in wrongdoing (Brennan, 2009; Nefsky, 2017). If this argument is right, there is a moral reason to avoid motivated ignorance, and the resulting actions it makes likely, on such matters.

The argument of this paper yields a simple but challenging recipe for avoiding the debunking worry. The first step is to identify the group(s) important for one’s social and professional success. These will naturally be different for different readers. A helpful heuristic, however, is to reflect on which group(s) are most capable of giving us the social and professional rewards we care about, and of inflicting corresponding costs in those domains. If the reader is a graduate student or professor in California, say, their reference network is likely to include their colleagues in the discipline and, depending on the particulars, friends and family members. It is unlikely to include the plumber in Minnesota or the schoolteacher in Thailand.

The next step is to reflect on those claims the convincing affirmation of which is rewarded and the denial of which would invite severe enough social and professional costs. If, furthermore, the costs to the reader as an individual of being wrong about these matters are negligible, and the claims are sufficiently complex to enable competing rationalizations and believable narratives, the relevant beliefs are ripe for debunking. In other words, if one notices that one’s belief that P satisfies conditions (i)–(iii), one should thereby reduce confidence in P.

This is not to say, of course, that denying P is sufficient to ensure that one’s belief that not-P is justified. Simply being a contrarian doesn’t automatically guarantee well-formed beliefs. Indeed, we can imagine a contrarian who merely wants to deny his group’s beliefs for whatever reason, and holds the contrary attitudes for all sorts of misguided reasons. His resulting worldview may thus be inaccurate and unjustified. Nonetheless, I want to claim, even this contrarian avoids the specific debunking challenge presented in this paper for the simple reason that his beliefs are not socially adaptive.

7 Conclusion

Sometimes the demands of prudence come apart from the demands of epistemic rationality. One particular way in which this can occur is the case of socially adaptive beliefs—i.e. where our social group incentivizes us to hold certain beliefs irrespective of whether they are true. This gives rise to an underexplored debunking challenge. The challenge applies to beliefs where: (i) the costs to the individual of being wrong are negligible, (ii) the beliefs fall under sufficiently intense social scrutiny, and (iii) the evidential landscape relevant to the beliefs is sufficiently complex so as to make easy verification difficult to come by. Insofar as we care about being epistemically rational about such matters, then, we ought to hold these beliefs with reduced confidence, and attempt to cultivate greater doxastic openness.