Evidentialism as an account of epistemic justification is the position that,

(EJ) Doxastic attitude, D, towards a proposition, p, is justified for an intentional agent, S, at a time, t, iff having D towards p fits S’s evidence at tFootnote 1

where the fittingness of a doxastic attitude on one’s evidence is typically analyzed in terms of evidential support for the propositional contents of the attitude. For instance, belief in a proposition best fits one’s evidence, and is thus the justified attitude to take according to evidentialism, “[w]hen the evidence better supports [the] proposition than its negation” (Feldman & Conee, 2005, p. 97).Footnote 2

Evidentialism is a popular and well-defended account of justification.Footnote 3 Some even take the theory to be platitudinous—at least when the notions of evidence, evidential possession, and fittingness are properly understood.Footnote 4 In this paper, I raise a problem for evidentialism—as the position is standardly formulated in EJ—on the grounds that there can be epistemic circumstances in which a proposition is manifestly and non-misleadingly supported by an agent’s total evidence, and yet believing the proposition is not justified for the agent.Footnote 5 The problem I raise is indicative of a broader constraint on any plausible account of epistemic justification: in order for an agent to have justification to believe a proposition, it needs to be the case that the belief as possessed by the agent could exhibit certain epistemic good making features, e.g., the propositional content of the belief as possessed by the agent would be supported by the agent’s evidence. As I demonstrate, the fact that a proposition, p, is supported by an agent’s total evidence at a time, t, doesn’t guarantee that a belief in p as possessed by the agent at t could exhibit any epistemic good making features, including having propositional contents (i.e., p) that would be supported by the agent’s evidence. Thus, the fact that a proposition is supported by an agent’s evidence doesn’t guarantee that the agent has justification to believe the proposition.

1 The Argument

Take a proposition that is (named by) an instance of the following schema,

(Occurrent) I am not entertaining an occurrent thought about xFootnote 6

where we substitute for “x” the name of some entity (e.g., some object, event, class, etc.) of which one is not occurrently entertaining a thought but about which one can think; that is, one exhibits the requisite relation to x, whatever it happens to be, that allows one to have intentional mental states about x. (Note: Strictly speaking, there are no I-propositions, only sentences involving “I”. My use of “I” in Occurrent is to indicate that in believing a proposition named by an instance of the schema one would believe of oneself that the proposition holds, i.e., believe de se.) For simplicity, I will discuss the following instance of Occurrent,

(U) I am not entertaining an occurrent thought about the novel, Ulysses.Footnote 7

Imagine an ideally rational agent (IRA) who is a fan of Ulysses but isn’t entertaining an occurrent thought about the novel. An IRA is an intentional agent who, at any given time, t, possesses all and only the justified doxastic attitudes at t.Footnote 8 As Declan Smithies (2012, p. 280) writes, “the propositions that one has justification to believe are just those propositions that one would believe if one were to be idealized in relevant respects.” Considering what an IRA would believe given a body of evidence, E, serves as a useful heuristic in determining what one is justified to believe given one’s evidence is E.

I take it to be constitutive of occurrent thought that it is introspectively and immediately accessible.Footnote 9 Therefore, we are in a position to determine whether we are occurrently entertaining a thought about Ulysses. Because U is true of the IRA—that is, the IRA is occurrently entertaining some set of thoughts, S, none of the members of which is about UlyssesFootnote 10—and because occurrent thought is introspectively and immediately accessible, the IRA has extremely good evidence for U, regardless of whether we take the IRA’s evidence to consist in (believed or known) propositions about the contents of the members of S (Dougherty, 2011b; Neta, 2008; Williamson, 2000a), facts about the contents of the members of S (Dancy, 2000), or the IRA’s introspective awareness of those contents (Conee & Feldman, 2004, 2008). Even on restrictive or strong forms of evidentialism (Feldman, 2004; Moon, 2012) according to which one’s evidence consists solely of one’s occurrent mental states (or the propositional contents thereof), the IRA’s evidence strongly supports U. Thus, evidentialism is committed to the claim that belief in U is justified for the IRA. Seeing that the IRA is stipulated to possess all and only the justified doxastic attitudes, she will believe U according to evidentialism. Of course, the IRA does not occupy some unique epistemic circumstance. If U is true of anyone (assuming the individual has the relevant standing to entertain a thought about Ulysses), belief in U is justified according to evidentialism.

If the IRA believes U, she must either do so (i) occurrently, (ii) dispositionally, or (iii) in some manner in which the belief isn’t accessible, doesn’t display the dispositional profile characteristic of belief, and yet the IRA still qualifies as believing the proposition (I flesh out this possibility in Sect. 1.3).Footnote 11 I take (i), (ii), and (iii) to be exhaustive of the ways one might believe a proposition. If the IRA does not believe U in one of these three ways, she does not believe U, simpliciter. In the following, I discuss (i), (ii), and (iii) in turn. As I demonstrate, the IRA would not believe U, despite the fact that the proposition is strongly supported by her evidence. If U is true of the IRA, she occupies a circumstance in which her evidence strongly supports U, and yet belief in U isn’t the justified attitude to take.

1.1 Occurrent Belief

If the IRA were to occurrently believe U, then U would be false for the simple reason that the IRA would actually be entertaining an occurrent thought about Ulysses. In addition, the falsity of U would be manifest from the IRA’s perspective. Given that occurrently believing U would be blatantly irrational, the IRA wouldn’t occurrently believe U. So, insofar as we take the IRA to have very strong evidence for U and, thereby, believe U, she must do so non-occurrently.

One might object that it takes time for intentional agents to assess their evidence and update their attitudes (cf. Fantl, 2019; Podgorski, 2016). U is the proposition that at a certain time, t, (the present) one is not occurrently entertaining a thought about Ulysses. If the IRA has sufficient evidence to believe U at t, and she updates her beliefs in light of this evidence, she would only end up believing U at some later time, t1, or so the objection goes. It can certainly be the case that U is true of an IRA at t and that the IRA rationally and occurrently believes U at t1. However, evidentialism, as traditionally conceived, is a time-slice epistemology (Hedden, 2015; Moss, 2015); in other words, evidentialism is committed to the claim that the doxastic attitudes it is rational for an agent to have at a time, t, are determined solely by features of the agent’s epistemic circumstance at t, namely, their evidence at t.Footnote 12 Insofar as the IRA is stipulated to possess the doxastic attitudes evidentialism deems justified, the IRA will take the doxastic attitudes at a time that are appropriately apportioned to her evidence at that time. So, given that we’ve stipulated that the IRA is not occurrently thinking about Ulysses and, thereby, has extremely good evidence for U, she will presently believe U. However, she won’t believe U occurrently for the reasons already given.

An evidentialist might argue that in epistemic circumstances in which you aren’t entertaining any thoughts about Ulysses, an occurrent belief in U is justified in those circumstances, qua circumstances in which you aren’t thinking about Ulysses. However, if you were to occurrently believe U, you would be in relevantly different circumstances in which occurrent belief in U would no longer be justified. Evidentialism indicates what doxastic attitudes are justified given one’s body of total evidence at a time, not given what one’s evidence would be if one were to possess those attitudes.

To illustrate the point further, imagine the following circumstance,

(Omissive) An agent, S’s, total evidence, E, strongly supports a proposition, p, at a time, t. However, S doesn’t believe p at t. In addition, S is directly aware of the fact that she lacks a belief in p at t. Therefore, E also strongly supports the proposition that S doesn’t believe p at t.

In Omissive, S’s total evidence supports what is known as an omissive Moorean conjunction, that is, a proposition of the form ‘p, but I don’t believe p.’ It’s widely accepted that it is irrational to believe omissive Moorean conjunctions (Chan, 2010; Kriegel, 2004; Williams, 2006). Nonetheless, evidentialism deems belief in an omissive Moorean conjunction justified for S, as S’s evidence strongly supports the conjunction. However, Omissive is a circumstance in which the following key features hold: (i) According to evidentialism, some doxastic attitude, D (i.e., belief in the omissive Moorean conjunction), is justified for an agent, S, on a total body of evidence, E, at a time, t, (ii) S doesn’t possess D at t, (iii) if S had possessed D at t, S would have possessed some different total body of evidence, E+, and (iv) D is not justified according to evidentialism on E+. The evidentialist might argue that she can consistently accept (i)-(iv). Evidentialism claims that D is justified given S’s total evidence is E, not E+. So, a belief in the omissive Moorean conjunction is justified in the epistemic circumstances described in Omissive, qua circumstances in which S doesn’t actually believe the conjunction or the first conjunct. However, if S were to believe the omissive Moorean conjunction, S would be in a different epistemic circumstance, and in that circumstance the belief would no longer be justified.Footnote 13

Similarly, one might argue that (i)-(iv) hold in the case of the IRA. An occurrent belief in U is justified for the IRA in her epistemic circumstance, qua circumstance in which she isn’t occurrently thinking about Ulysses. However, if the IRA were to occurrently believe U, she would be in a relevantly different epistemic circumstance, and in that circumstance an occurrent belief in U would no longer be justified.Footnote 14

The problem for evidentialism is that U will be supported by the IRA’s evidence only if the IRA doesn’t occurrently believe U. Therefore, evidentialism will deem occurrent belief in U to be justified for the IRA only if the IRA doesn’t occurrently believe U.Footnote 15 However, beliefs do not exist as entities to be epistemically evaluated untethered to intentional agents and possible circumstances in which they are possessed. The very notion of belief (and of other doxastic attitudes as well) involves its possession as a mental state by an intentional agent. In order for belief to be the justified attitude to take for an agent it needs to be the case that the agent has justification to possess that belief. The crux of the problem for evidentialism involves the general conditions on an agent, S, having justification (whether that justification be epistemic, prudential, etc.) to do something, x (e.g., occurrently believe some proposition, possess some emotion, take some action, etc.), in an evaluative circumstance, C.Footnote 16 I take the following to be a straightforward truism,

(Having Justification) In order for S to have justification to x in C, it must be the case that x as done by S in C could exhibit certain relevant good making features (where the good making features will be determined by the state or event type to which S’s xing belongs).

In other words, it can’t jointly be the case that (i) S has justification to x in C, and yet (ii) there is nothing relevant to be said in favor of a possible token instance of x as done by S in C. S’s justification to x in C must be grounded in the (possible) good making features of S’s xing in C. In claiming that S’s xing in C must (possibly) have certain relevant good making features in order for S to have justification to x, I make no assumptions about the good making features that S’s xing must (possibly) possess, given the state or event type to which S’s xing belongs. However, I take it that in order for an agent, S, to have epistemic justification to occurrently believe a proposition in a circumstance, it needs to be the case that an occurrent belief in the proposition as possessed by S in that circumstance could exhibit certain epistemic good making features, e.g., the propositional contents of the attitude would be supported by S’s evidence.Footnote 17 The problem for the evidentialist is that an occurrent belief in U as possessed by the IRA could have no relevant epistemic good making features (including the very epistemic features the evidentialist cares about, i.e., strong evidential support for the propositional contents of the belief); the IRA’s occurrent belief would be manifestly false and irrational. Given that there is nothing to be said in favor of an occurrent belief in U as possessed by the IRA, the IRA lacks justification to possess an occurrent belief in the proposition.

In response, the evidentialist might insist that (i) epistemic evaluative circumstances are exhausted by an agent’s total evidence at a time, and (ii) we have to hold the epistemic evaluative circumstances fixed in determining whether an agent, S, has justification to possess a doxastic attitude, D, towards a proposition, p, in those circumstances. Therefore, we have to hold S’s evidence fixed in evaluating whether S has justification to possess D(p). So, in evaluating whether an occurrent belief in U is justified for the IRA in her epistemic circumstance, C, we have to hold fixed the fact that her evidence supports U in C. Therefore,

(Counterpossible) If the IRA were to occurrently believe U in C—where C is exhausted by the IRA’s total evidence, E, which we are holding fixed supports U—then her belief would be adequately apportioned to her evidence and, thus, justified.

However, Counterpossible has an impossible antecedent, as it can’t be the case that the IRA occurrently believes U while still possessing total evidence that, on balance, supports U. Again, occurrently believing U constitutively involves possessing evidence that would strongly support the negation of U. The evidentialist could claim that Counterpossible is vacuously true, but this in no way advances the position of the evidentialist.Footnote 18 In determining whether an agent, S, has justification to do something, x, in a circumstance, C, we have to consider whether S’s xing in C could have any relevant good making features, where we have to make the changes necessary to C that are constitutively involved in S’s xing. We can’t accurately evaluate the justificatory status of xing for S independently of what would be constitutively involved in S’s xing. Holding the IRA’s evidence fixed in evaluating whether she has justification to occurrently believe U would be, absurdly, to consider whether she has justification to do something independently of what would be constitutively involved in doing that very thing.Footnote 19

The IRA does not have justification to possess an occurrent belief in U for the simple fact that her occurrent belief in U would exhibit no epistemic good making features, e.g., the propositional contents of the belief wouldn’t be supported by her evidence. However, clearly, all is not lost for the evidentialist, as occurrently believing U is not the only way to believe the proposition. We discuss dispositional belief in the following section.

1.2 Dispositional Belief

Roughly, one believes some proposition, p, dispositionally—that is, one possesses a standing belief in p—only if one possesses a sufficient number of the dispositions characteristic of belief in p.Footnote 20 Dispositions that are characteristic of belief in p include (i) asserting that p in relevant circumstances (e.g., if one were asked whether p is the case in circumstances where one desires to reveal one’s true attitudes), (ii) occurrently judging that p when considering whether p is the case, (iii) using p as a premise in conscious deliberation when relevant, etc. In order for an IRA to possess a dispositional belief in U, she would need to manifest a sufficient number of the dispositions characteristic of believing U in their associated stimulus conditions, e.g., if asked whether U is the case in circumstances where the IRA desires to reveal her true attitudes, the IRA would need to assert that U. However, the IRA would manifest none of (or, at least, not a sufficient number of) the dispositions characteristic of belief in U. The IRA would not be disposed to consciously judge U to be true, assert that U is the case, use U in conscious deliberation, etc., because consciously accessing the content of a standing belief in U would render the belief manifestly false and irrational. Therefore, the IRA would not dispositionally believe U.

As discussed in the previous section, U is the proposition that at a certain time, t, (the present) one is not occurrently entertaining a thought about Ulysses. The IRA could consciously and rationally affirm U, use U as a premise in deliberation, etc., at times later than t. However, the IRA’s future activity of affirming U, using U as a premise in conscious deliberation, etc., isn’t sufficient to ground the fact that she believes U dispositionally at t. Given that an IRA’s beliefs could change after t, and given the IRA would manifest none of (or, at least, not a sufficient number of) the dispositions indicative of a standing belief in U at t, it’s safe to conclude the IRA does not dispositionally believe U at t.

Problems arise for evidentialism because the theory treats an agent’s having justification to believe a proposition to be solely a function of the agent’s total evidence at a time. However, doxastic attitudes are not isolated representational states (insofar as we take doxastic attitudes to be representational) that can be characterized or possessed independently of their relations to other attitudes and the computational/functional role they play in the cognitive architecture of an intentional agent. Even representationalists who take beliefs to be propositional representations encoded in the language of thought (à la Fodor, 1987) agree that beliefs need to play certain roles in the agent’s cognitive architecture to count as beliefs as opposed to propositional attitudes of another type, e.g., suppositions, imaginings, desires, etc.

Given our principle, Having Justification, in order for a belief to be justified for an agent, S, in an epistemic circumstance, C, it needs to be the case that the belief, as possessed by S in C, along with the activity constitutive of possessing the belief, could have certain epistemic good making features. Again, we can’t assess whether an agent, S, has justification to do something, x (e.g., possess a standing belief in a proposition), in an evaluative circumstance, C, without considering what would be constitutively involved in S’s xing in C. Given that manifesting the dispositions characteristic of a standing belief in U would render it evidently false and irrational, the belief is irrational, simpliciter.

1.3 Inaccessible Belief

In the previous section, the dispositions I identified as characteristic of possessing a belief involved consciously accessing (or tokening an occurrent thought with the content of) the belief. However, it may be objected that it is possible for the IRA to have a standing belief in U while being unable to consciously access it. In other words, it may be possible for the IRA’s standing belief in U to play the right type of roles in the IRA’s cognitive architecture to count as a belief, despite the fact that the IRA can’t access it. Because the IRA would be unable to bring the content of the belief to consciousness, her evidence would continue to support U. Thus, in claiming that the IRA is justified in believing U, the evidentialist wouldn’t run afoul of our principle, Having Justification; there is at least one way in which the IRA can believe U (namely, believe it inaccessibly) and it still be the case that her belief would exhibit relevant epistemic good making features, i.e., the propositional content of the belief would be supported by her evidence.

There are plausible circumstances in which a belief can be attributed to an agent, as opposed to, say, a module operating subpersonally in the agent, despite the fact that the agent cannot seem to access the belief. For instance, it’s been argued that in cases of anosognosia in which a patient overtly denies that they suffer from a clear condition, e.g., partial paralysis, cortical blindness, aphasia, etc., the patient may nonetheless believe they possess the condition while being unable to access (at least for a time) the belief in conscious processing (Levy, 2008). For example, in anosognosia for hemiplegia, a patient may earnestly deny that the left side of their body is paralyzed and yet display behavior, like staying in bed, indicative of believing that they are partially paralyzed (Vocat et al., 2010). When asked why they are staying in bed, the patient may confabulate an answer, e.g., they just don’t feel like getting up or are too tired, demonstrating that they don’t have access to their reasons for (in)action. Plausibly, the patient believes they have hemiplegia (although likely not under that description), and this belief is playing a non-conscious yet person-level role in driving the patient’s behavior, yet the patient cannot consciously access the belief. Patients suffering from anosognosia for hemiplegia will certainly not display the typical dispositional profile associated with believing that they are partially paralyzed; however, other behaviors of theirs bespeak their possession of the belief in a manner inaccessible to conscious processing.

If the evidentialist is to argue that there is a way in which the IRA could believe U and yet be unable to consciously access the content of U, the evidentialist owes us a psychological story about the IRA. However, appealing to cases of delusion like anosognosia for hemiplegia won’t help in explaining the IRA’s psychology. Clearly, those suffering from a delusion in virtue of brain damage are, through no fault of their own, not functioning in an ideally rational manner. It’s far from clear that we can make psychological sense of the IRA such that she counts as believing U, lacks access to the belief, and still, at any given time, possesses all and only the rational doxastic attitudes at that time. Countless propositions evidently follow from U and the content of the IRA’s other beliefs. If the IRA is to possess all and only the rational doxastic attitudes (which she does by stipulation) she would have to believe these propositions. For example, for any proposition, p, the IRA believes, she would believe the conjunction, U & p, as this conjunction would trivially follow from her evidence.Footnote 21 But none of these conjunctions could themselves be used in conscious processing for reasons already discussed. It’s hard to see how the IRA could believe U inaccessibly while remaining ideally rational without the aid of a full-fledged intentional homunculus who has access to all of the IRA’s beliefs and can thus update the IRA’s attitudes in light of her believing U. However, belief in U now seems more appropriately attributed to the homunculus as opposed to the IRA.

It might be objected that the evidentialist owes us no story about how an IRA has the beliefs she does. The evidentialist can merely stipulate that an IRA has attitudes that are properly apportioned to her evidence at a time. But, again, beliefs are not possessed in isolation of their relation to other attitudes or the functional role they play in an agent’s cognitive architecture. Given our principle, Having Justification, in order for one to have justification to do something, x, it must be possible that one’s xing would display certain relevant good making features. We can’t merely stipulate that the IRA believes U non-consciously and inaccessibly without providing a credible story about the IRA’s mental life and why her believing U would be of some epistemic merit. The problem for the evidentialist is that there appears to be no plausible way for the IRA (as opposed to a mental module or homunculus operating in the IRA) to possess a belief in U non-consciously and inaccessibly while remaining ideally rational.

If the IRA does not believe U occurrently, dispositionally, or in some non-conscious and inaccessible manner, she does not believe U, simpliciter. We are now left with a contradiction,

(Master Argument)

1. Doxastic attitude, D, towards a proposition, p, is justified for an intentional agent, S, at a time, t, iff having D towards p fits S’s evidence at t. (Definition of Evidentialism)

2. An IRA is an intentional agent who, at any given time, t, possesses all and only the justified doxastic attitudes at t. (Definition of IRA)

3. Some IRA (let’s call her “Trish”) isn’t occurrently entertaining a thought about Ulysses. (Assumption)

4. Trish has access to her occurrent thoughts. (Constitutive of occurrent thought)

5. Trish believes U. (From 1, 2, 3, and 4)

6. It’s not the case that Trish believes U. (In virtue of the arguments given in Sects. 1.1, 1.2 and 1.3)

7. Trish believes U, and it’s not the case that Trish believes U. (Contradiction from 5 and 6)
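The propositional skeleton of Master Argument can be sketched in Lean. This is my own illustrative formalization, not part of the original argument: the hypothetical atom `JustifiedU` abbreviates "belief in U is justified for Trish" (which premises 1, 3, and 4 jointly secure), and `BelievesU` abbreviates "Trish believes U".

```lean
-- A minimal propositional sketch of Master Argument (illustration only).
example (JustifiedU BelievesU : Prop)
    (h1 : JustifiedU)               -- from premises 1, 3, and 4
    (h2 : JustifiedU → BelievesU)   -- premise 2: an IRA holds every justified attitude
    (h6 : ¬BelievesU)               -- from Sects. 1.1, 1.2 and 1.3
    : False :=
  h6 (h2 h1)
```

The sketch makes vivid that the contradiction at step 7 is not an artifact of the informal presentation: granting the premises, the inconsistency is derivable in two inference steps.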

Assuming I’ve adequately motivated 6, at least one of 1–4 must be false. I suggest we reject 1. Given that a belief in U as possessed by the IRA would exhibit no epistemic good making features, the IRA is not justified to believe the proposition. Therefore, the fact that U is extremely well supported on the IRA’s evidence isn’t sufficient for the IRA to have justification to believe U, pace evidentialism. We cannot assess whether an agent has justification to possess a belief independently of considering what would be constitutively involved in the agent possessing that very attitude. By focusing solely on an agent’s evidence at a time, the evidentialist fails to account for what would be constitutively involved in an agent’s possessing an attitude, D, in determining whether the agent has justification to possess D.

2 Objections

2.1 Reject 3 of Master Argument

One might respond that my argument actually demonstrates that an IRA must always be occurrently entertaining a thought about Ulysses. We should reject 3 of Master Argument. We can’t assume that an IRA isn’t entertaining a thought about Ulysses without engendering a contradiction (or so the objection goes).

Of course, U is just one instance of the schema, Occurrent, mentioned in the first section. More broadly, this objection would require the IRA to entertain an occurrent thought about every entity of which she can think. This is absurd and ad hoc. It would be ludicrous for a theory to make it a condition on (ideal) rationality that one occurrently entertain a thought about everything of which one can think. Evidentialism treats rationality as, centrally, a function of apportioning one’s attitudes to the evidence. Occurrently entertaining a thought about everything of which one can think has nothing to do with one’s evidence.

One might argue that entertaining a thought about every entity of which one can think is an enabling condition for ideal rationality. Similarly, it is not a rational requirement that one possess limitless computational power; however, if one is to take all and only the justified doxastic attitudes at a time (thus being ideally rational), which includes believing all (evident) implications of the contents of one’s rational beliefs, one needs to possess limitless computational power as an enabling condition. The disanalogy is that limitless computational power is necessary for one to exhaustively and instantaneously determine what one’s evidence supports at a time. Stipulating that an IRA must occurrently entertain a thought about everything of which she can think, on the other hand, has nothing to do with the IRA apportioning her attitudes to the evidence and is merely an ad hoc means of avoiding the worry I raise.

2.2 Dispositional Belief Redux

One could argue that, pace my claims in Sect. 1.2, the IRA would dispositionally believe U. According to this objection, the IRA would manifest the dispositions characteristic of her standing belief in U, e.g., in appropriate circumstances the IRA would assert that U, use U in conscious deliberation, etc. However, in manifesting the dispositions characteristic of her standing belief, the IRA would render her belief unjustified and, thus, cease to be an IRA. In other words, the IRA’s status as an IRA is contingent on her not encountering any of the relevant stimulus conditions and manifesting the dispositions that are characteristic of her standing belief in U. It’s not impossible to possess a justified standing belief in U; you just have to be lucky enough to avoid manifesting any of the dispositions that are characteristic of the belief.

This objection serves to further highlight the problem I raise for evidentialism. If an agent, S, is to have justification to possess a belief, B, in an epistemic circumstance, C, and certain activity is constitutive of possessing B, then it needs to be the case that S’s engaging in that activity is compatible with the justificatory status of B. Again, beliefs do not exist to be epistemically evaluated in isolation of the agents who (possibly) possess them and the activity constitutive of their possession.

2.3 Anti-Expertise

One might argue that the problem I raise for evidentialism is merely an instance of the problem of anti-expertise. If this is the case, extant solutions to the anti-expertise problem could be used to preempt the worries I raise.

In a case of anti-expertise, one gains compelling evidence that one is an anti-expert with respect to some proposition (or class of propositions), p, where an anti-expert, AE, with respect to p is one for whom the following holds,

p iff it’s not the case that AE believes (or judges that) p.

Let’s call an instance of the above an “anti-expertise proposition.” The following case is an oft-cited example of anti-expertise introduced by Earl Conee (1982) (I’ve altered the case in several non-essential ways for ease of discussion),

After repeated and flawless trials using the best in brain-scanning technology with a massive and diverse sample of people, a thirtieth century brain physiologist, Dave, discovers that a person’s N-fibers fire iff it’s not the case that the person believes they are all firing (let’s call this the “N-fiber biconditional”). Dave then begins to wonder about the following proposition: (F) All of Dave’s N-fibers are firing.

Dave is in a rational bind (assuming he has access to his propositional attitudes) in virtue of his justified belief in the N-fiber biconditional. Regardless of the propositional attitude (or lack thereof) Dave takes towards F, Dave will fail to possess the attitude towards F that evidentialism deems justified.
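Dave’s bind can be checked mechanically. The following is a toy model of my own construction (not from Conee’s original discussion), which assumes that Dave’s accessible evidence settles F’s truth value, so that his evidence supports belief when F is true and disbelief when F is false:

```python
# A toy enumeration (illustration only) of Dave's bind in Conee's case.
# The N-fiber biconditional: F is true iff Dave does not believe F.

def justified_attitude(f_true):
    # Assumption: Dave's evidence settles F's truth value, so it supports
    # belief when F is true and disbelief when F is false.
    return "believe" if f_true else "disbelieve"

bind = {}
for attitude in ("believe", "disbelieve", "suspend judgment"):
    f_true = attitude != "believe"  # the N-fiber biconditional
    bind[attitude] = justified_attitude(f_true)

# Whatever attitude Dave takes, it differs from the one deemed justified.
assert all(attitude != bind[attitude] for attitude in bind)
```

Each of Dave’s three possible attitudes towards F fixes F’s truth value via the biconditional, and in every case the attitude his evidence then supports is a different one; there is no stable resting point.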

The evidentialist might argue that the problem I pose is merely one of anti-expertise. One will inevitably be an anti-expert with respect to U such that the following biconditional holds,

U iff it’s not the case that I believe (or judge that) U.

However, the above biconditional is clearly false: if I occurrently believe that U is false (while not also believing that U is true), then the right-hand side of the biconditional is true, yet the left-hand side is false. The problem I raise for evidentialism doesn’t require that one have evidence that one is an anti-expert with respect to U. The problem is generated merely by U’s being true of an individual and the individual’s having reflective access to her occurrent thoughts.

Additionally, extant responses to the anti-expertise problem are of little help in solving the worries I raise for evidentialism. The two main responses to the anti-expertise problem consist in (i) denying that one will have sufficient evidence to support an anti-expertise proposition and (ii) allowing that in certain circumstances one is not rationally required to believe the evident logical implications of one’s rational beliefs. Roy Sorensen (1987) and Andy Egan and Adam Elga (2005) opt for the first solution. Sorensen, for example, argues that an anti-expertise proposition will always be overshadowed (as Sorensen puts it) by an alternative hypothesis on one’s total evidence. Sorensen reasons that because the cost of believing an anti-expertise proposition is incoherence (that is, one will inevitably have an incoherent belief set if one believes an anti-expertise proposition), we are always more warranted in revising background beliefs or offering some alternative hypothesis to explain our evidence for the anti-expertise proposition than in accepting the proposition. However, in the problem I raise there is no relevant proposition, p, akin to an anti-expertise proposition, such that if one believes p then one is thereby mired in incoherence. Therefore, there is no easy way to generate an analogue of Sorensen’s response to address my worry for evidentialism.Footnote 22
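The incoherence Sorensen appeals to can be sketched as follows (my reconstruction, not Sorensen’s own formalism). Suppose an agent believes an anti-expertise proposition p together with the relevant biconditional, has introspective access to her beliefs, and her beliefs are closed under evident logical consequence:

```latex
% Reconstruction (mine): $B(\cdot)$ is the agent's belief operator.
% Assume: (i) $B(p)$; (ii) $B(p \leftrightarrow \lnot B(p))$;
% (iii) introspection: if $B(p)$ then $B(B(p))$;
% (iv) belief is closed under evident logical consequence.
\begin{align*}
&B(B(p))              && \text{from (i) by (iii)}\\
&B(B(p) \to \lnot p)  && \text{from (ii) by (iv), contraposing } p \to \lnot B(p)\\
&B(\lnot p)           && \text{from the two lines above by (iv)}
\end{align*}
```

Given (i), the agent thus believes both p and ¬p, i.e., has an incoherent belief set. No parallel derivation is available in the case I raise, since believing U (or its negation) commits one to no such incoherence.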

Conee (1982) and Reed Richter (1990) adopt the second solution and allow that an IRA may (i) believe she is an anti-expert with respect to a proposition, p, (ii) suspend judgment as to whether p, yet (iii) rationally refuse to make the clear inference to p. If the IRA refuses to infer p, she can (according to Conee and Richter) rationally maintain her suspension of judgment as to whether p. Richter argues that by rationally permitting an agent to do (i)–(iii) we allow the agent to maximize the amount of true information she knows in a situation, which, Richter claims, is “the main goal of epistemic rationality” (ibid., p. 154). But, again, this solution depends on it being the case that one has ample reason to believe an anti-expertise proposition and, therefore, has little application to the problem I raise.

2.4 Propositional and Doxastic Justification

Finally, one might argue that we can solve the problem for evidentialism by invoking the distinction between propositional and doxastic justification. According to this objection, the proper conclusion to draw from my argument is that the IRA is propositionally justified to believe U, despite the IRA being unable to possess a doxastically justified belief in U.

It’s commonly accepted that evidentialism is a theory of propositional—as opposed to doxastic—justification. Roughly, propositional justification is a function of having reason to believe a proposition—regardless of whether one actually believes it—whereas doxastic justification is a function of holding the belief on adequate grounds or properly basing the belief on one’s reasons (Silva & Oliveira, forthcoming). Traditionally, propositional justification is taken to be (conceptually/theoretically/metaphysically) primary; one is doxastically justified in believing a proposition, p, only if (i) one has propositional justification to believe p, and (ii) one epistemically bases one’s belief in p on adequate reason (cf. Feldman & Conee, 1985; Korcz, 1997, 2000; Pollock & Cruz, 1999).Footnote 23 Using the distinction between propositional and doxastic justification, one could argue that all I’ve demonstrated in Sect. 1 is that the IRA cannot possess a doxastically justified belief in U. However, believing U is still propositionally justified for the IRA in virtue of the IRA having reason to believe the proposition (or so the objection goes). To present a problem for evidentialism I would need to argue that a belief can be propositionally justified for an agent only if it’s possible for the agent to doxastically justifiably possess the belief, yet I’ve made no such argument.

As noted, those writing on propositional justification characterize it in terms of having reason to believe a proposition, regardless of whether one believes it (cf. Volpe, 2017).Footnote 24 Altering the wording of our principle, Having Justification, so that it is explicitly about the conditions on an agent’s having reason to do something yields the following,

(Having Reason) In order for an agent, S, to have reason to do something, x, (e.g., believe some proposition, take some action) in a circumstance, C, (regardless of whether S actually xs in C) it must be the case that x as done by S in C could have certain relevant good making features (where the good making features will be determined by the state or event type to which S’s xing belongs).

I also take Having Reason to be a simple and straightforward truism. The fact that S has reason to x in C must be grounded in the (possible) good making features of S’s xing in C; therefore, it can’t jointly be the case that (i) S has reason to x in C, and yet (ii) x as done by S in C could have nothing relevant to be said in its favor. Restricting Having Reason to epistemic reasons for possessing beliefs yields the following truism,

(Having Epistemic Reason) In order for an agent, S, to have epistemic reason—and, thus, propositional justification—to possess a belief, B, (or any doxastic attitude for that matter) in an epistemic circumstance, C, (regardless of whether S actually possesses B in C) it needs to be the case that B as possessed by S in C could exhibit certain relevant epistemic good making features.

Having Epistemic Reason is a simple consequence of our broader principle, Having Reason, and expresses a condition on possessing propositional justification for a belief. Problems arise for evidentialism because the position jointly,

  1. Equates epistemic reasons with evidence, thus adhering to what Clayton Littlejohn calls the Reason-Evidence Identification Thesis (REI),

     (REI) x is an epistemic reason (i.e., something that bears on whether to believe a proposition, p) iff x is a piece of evidence (for p) (Littlejohn, 2018, p. 531),

and

  2. Restricts the relevant evidence bearing on whether an agent has reason to believe a proposition at a time, t, to the evidence the agent possesses at t.

What I’ve demonstrated is that, in virtue of 1 and 2, evidentialism runs afoul of the truism, Having (Epistemic) Reason. The IRA has very good evidence for U; thus, in virtue of 1 and 2, the evidentialist must claim that the IRA has good epistemic reason to possess a belief in U. However, the IRA lacks sufficient epistemic reason to possess a belief in U for the simple fact that a belief in U as possessed by the IRA would exhibit no relevant epistemic good making features, including the very features the evidentialist cares about (i.e., the propositional contents of the belief wouldn’t be supported by the IRA’s evidence). If the evidentialist insists that the IRA has good epistemic reason to possess a belief in U then the evidentialist is stuck claiming that (i) the IRA has sufficient reason to possess a belief in U in her epistemic circumstances, C, and yet (ii) a belief in U as possessed by the IRA in C (taking into account what is constitutive of possessing that attitude) could have nothing relevant to be said in its favor, including having propositional contents that are supported by the IRA’s evidence. It can’t jointly be the case that (i) and (ii) are true.

Nowhere in my argument have I suggested that in order for a belief, B, to be propositionally justified for an agent, S, it must be possible for S to properly base B in the evidence or form B on the basis of appreciating the evidence. I’ve made no claims about properly basing or forming attitudes or, more generally, doxastic justification. Nothing I’ve said rules out the possibility of epistemic circumstances in which an agent, S, is propositionally justified to possess a belief, B, (e.g., in virtue of the fact that B as possessed by S would have propositional contents that are sufficiently supported by S’s evidence) despite S being unable to doxastically justifiably possess B. Therefore, my argument does not require that I reverse the traditional (conceptual/theoretical/metaphysical) priority of propositional over doxastic justification or claim that a belief can be propositionally justified for an agent only if it’s possible for the agent to doxastically justifiably possess the belief.

It should also be noted that I am not committed to the claim that propositional justification for an agent, S, must comport with the contingent psychology of S. As several theorists have argued (e.g., Ichikawa & Jarvis, 2013; Smithies, 2015), it is possible for an agent, S, to have propositional justification to believe a proposition, p, even if believing p is beyond S’s ken, given S’s contingent psychology. When assessing whether S has reason to believe p, we may have to consider the epistemic good making features relevant to propositional justification (if any) of the belief as possessed by a sufficiently idealized version of S (cf. Turri, 2010; Volpe, 2017), which comports nicely with the quote from Smithies (2012, p. 280) given in Sect. 1, namely, “the propositions that one has justification to believe are just those propositions that one would believe if one were to be idealized in relevant respects.”Footnote 25

Drawing the distinction between propositional and doxastic justification does not provide the evidentialist with the resources to rebut my arguments. In order to respond to my arguments, the evidentialist needs to drive a wedge between the epistemic conditions in which it is propositionally justified for an agent, S, to possess a belief and the epistemic conditions in which the belief as possessed by S could exhibit the epistemic good making features we take to be relevant for propositional justification. The problem for the evidentialist is that there is no room to drive the wedge.

3 Conclusion

As I stressed throughout the paper, doxastic attitudes are not isolated representational states that can be characterized or possessed independently of their relations to other attitudes and the (computational/) functional role they play in the cognitive architecture of an intentional agent. Therefore, we cannot assess whether an agent has justification to possess a doxastic attitude independently of considering what would be constitutively involved in the agent possessing that very attitude. In order not to run afoul of our principles, Having Justification and Having (Epistemic) Reason, the evidentialist ought to make the justificatory status of a doxastic attitude, D, for an agent, S, a function of S’s actual evidence at a time and the evidence S would possess at that time were she to possess D and were she to engage in the cognitive activity constitutive of possessing D. Although it is beyond the scope of this paper to explore the details of how we might improve evidentialism, the updated theory would still ground the justificatory status of a doxastic attitude in evidence (thus maintaining REI), although not just the evidence the agent actually possesses.Footnote 26 Beyond exploring how evidentialism ought to be updated, further research should also examine whether other theories of justification, e.g., process reliabilist accounts, violate Having Justification and Having (Epistemic) Reason as well. For the sake of space, I’ve only been able to examine evidentialism (as formulated in EJ) in this paper.