Abstract
Evidentialism is the view that facts about whether or not an agent is justified in having a particular belief are entirely determined by facts about the agent’s evidence; the agent’s practical needs and interests are irrelevant. I examine an array of arguments against evidentialism (by Jeremy Fantl, Matthew McGrath, David Owens, and others), and demonstrate how their force is affected when we take into account the relation between degrees of belief and outright belief. Once we are sensitive to one of the factors that secure thresholds for outright believing (namely, outright believing that p in a given circumstance requires, at the minimum, that one’s degree of belief that p is high enough for one to be willing to act as if p in the circumstances), we see how pragmatic considerations can be relevant to facts about whether or not an agent is justified in believing that p—but largely as a consequence of the pragmatic constraints on outright believing.
1 Introduction
Ascribing particular beliefs to an agent serves two principal functions: to characterize the agent’s conception of how things are in the world, and to account for her actions.Footnote 1 Under ordinary circumstances we are happy to appeal to the notion of outright belief. We say that Jane believes in God, she believes her shoes are in the closet..., even if we might, under the right sort of prodding, admit that, strictly speaking, she believes things to various degrees. Between those beliefs which she holds with the greatest degree of confidence, such as her belief that 1 + 1 = 2, and those potential commitments which she outright rejects with the greatest degree of confidence (e.g. that all cats are triangles), are all those beliefs about which she is less than absolutely certain—some far less so than others. It is helpful in many philosophical settings to introduce a measure which reflects this relative degree of confidence and uncertainty, namely, subjective probability, a numerical assignment which ranges from 0 to 1, 0 being the probability the agent assigns to propositions which she regards as impossible or maximally implausible, and 1 being the probability she assigns to propositions she regards as certain or maximally plausible. Though many explorations of subjective probability and decision theory begin with the assumption that the agent’s assigning a particular subjective probability consists in her exhibiting certain betting behaviors or having certain preferences under various idealized betting situations (e.g. that she values only money, that she values every dollar equally no matter how much money she already has...), for now we shall suppose that it makes sense to speak of a measure of confidence, of degree of belief, independent of any direct relation to specific sorts of action.Footnote 2
Our practical purposes in daily life are frequently adequately served by ignoring the fact that the having of belief comes in degrees, just as they are when we make other commonplace ascriptions that in principle admit of degrees, such as those involving temperature, size, or states of emotion (the pizza is simply hot or cold, the child is thin, the father is angry...). The boundary is vague and gauged, perhaps, by untold contextual and pragmatic factors, but once a degree clearly exceeds or falls below a certain threshold, someone counts as having or failing to have a particular belief. What this threshold is like, whether it is variant or invariant across circumstances, what kinds of factors enter into establishing it—these are all challenging issues one would have to address in order to have a clearer picture of the difference between having belief to a certain degree and having outright belief.
In epistemological inquiry there are different ways to sidestep the puzzle of the relation between degree of belief and outright belief. Many Bayesians will insist that degree of belief or credence is the more refined and accurate notion to use when it comes to describing constraints on how a rational person’s state of opinion evolves over time in light of incoming evidence. There is no need to appeal to the cruder terms of art from our folk psychology: outright belief, disbelief, or suspension of belief (agnosticism). Much work in traditional epistemology, on the other hand, simply sticks with the conventions of ordinary parlance, and inquires into the conditions under which belief is justified, rational, or constitutive of knowledge. Degree of belief never enters into the equation, and perhaps there are good reasons why it does not in many cases.
One emerging debate in epistemology which is generally couched in the familiar terms of outright belief concerns whether or not pragmatic factors are ever relevant to epistemic evaluation in general, and epistemic justification in particular. At issue is the tenability of evidentialism, a view that takes differences with respect to practical needs and interests to be irrelevant to epistemic assessment: if two subjects S and S′ have the same evidence for and against p, then S and S′ cannot differ with respect to whether they are epistemically justified in believing p.Footnote 3 Such pragmatic factors as the severity of harm involved in believing p if p is false have no bearing on the kind of standards a belief (or the processes that give rise to it) have to meet in order for the belief to count as epistemically justified.Footnote 4 We shall look at recent arguments which try to show that variances in pragmatic factors can make a difference with respect to epistemic evaluation, especially when it comes to epistemic justification.
Strict adherence to the language of belief and disbelief becomes a disadvantage for these anti-evidentialist arguments, which, taken on their own, are unable to defeat a revisionist, Bayesian form of evidentialism. There are independent reasons, however, for trying to avoid Bayesian evidentialism. Further reflection on the significance of our talk of outright belief and on the relation between degree of belief and outright belief ultimately reveals why pragmatic considerations can be relevant to whether or not an agent is justified in outright believing. In order to count as outright believing that p, an agent must believe that p to a high enough degree such that she is willing to act as if p in the circumstances. When we combine this necessary constraint on outright belief with an intuitive normative principle governing degrees of belief (in order for an agent’s degree of belief that p to be justified it must be well-calibrated to her evidence for and against p), we arrive at a pragmatic constraint on the justification of outright believing: an agent is justified in outright believing that p only if her available evidence supports a degree of belief high enough for her to be willing to act as if p. Of two subjects with the same amount and quality of evidence, one could meet, and the other could fail to meet this requirement on account of pragmatic features of their respective situations, e.g. the costs if p should turn out to be false.
Once we are sensitive to the relation between degree of belief and outright belief, we see that pragmatic considerations do turn out to be potentially relevant to epistemic evaluation in certain limited ways—but largely as a consequence of the pragmatic constraints on outright believing.
2 Arguments against evidentialism
2.1 Intuitions about cases
Though none of the authors defending a role for pragmatic factors in epistemic justification would regard the following as decisive or unanswerable, our untutored intuitions about certain cases seem to speak in favor of regarding pragmatic considerations as potentially relevant to epistemic justification. These cases, usually involving the schedules of trains, planes, and banks, illustrate how the threshold of evidence required for epistemic justification seems to increase with the degree of harm involved in false belief. Such examples are similar to those which appear in defenses of contextualist accounts of knowledge:Footnote 5 one of the many contextual factors that can heighten the epistemic standards involved in having knowledge is the seriousness of error. Here is a pair of cases derived from Fantl and McGrath (2002) and Cohen (1999):
Train Case 1: You are about to board a train in order to go on vacation. You hope that the incoming train, bound for Geneva, is express, though it doesn’t much matter to you. You ask the Swiss businessman next to you whether the train is express, since he looks like a commuter in the know, and he answers “yes” without hesitation. You take his word for it, and believe that the incoming train is express.
Train Case 2: You need to get to Geneva on extremely urgent business at the UN, but are running late. If you miss the chance to give your presentation, the funding for an important refugee relief fund could be in jeopardy. You ask the Swiss businessman next to you whether the train is express, and he answers “yes” without hesitation. Since it is very important that you board the express train—it is the only one which will get you to the UN on time—you decide to seek out additional information in order to be more confident that this is indeed the right train.
Our intuitions seem to support the view that, even though the evidence is the same in case 1 and case 2, only in case 1 is such evidence sufficient for the belief that the incoming train is express to be epistemically justified. Testimony of the sort in question can be reliable enough to make belief that p epistemically justified when not that much is at stake in believing p falsely. It is unclear at this point, however, whether the degree of harm involved in false belief affects

(a) how much we should increase our degree of belief in light of such evidence, or

(b) what degree of belief is sufficient for outright belief,

or both (a) and (b).
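The difference between the two cases can be made vivid with a toy expected-utility calculation. All of the numbers below (the credence conferred by the testimony, the payoffs) are hypothetical, chosen only to illustrate how the same evidence can make acting as if p best in one case but not in the other:

```python
# Illustrative decision-theoretic sketch of the two train cases.
# Credence and payoff values are assumptions, not drawn from the text.

def expected_utility(credence, utility_if_p, utility_if_not_p):
    """Expected utility of an act, given one's degree of belief that p."""
    return credence * utility_if_p + (1 - credence) * utility_if_not_p

# Shared evidence: the businessman's testimony supports, say, credence 0.9
# that the incoming train is express.
credence = 0.9

# Case 1 (vacation): boarding a non-express train is a mild inconvenience.
board_case1 = expected_utility(credence, utility_if_p=10, utility_if_not_p=-5)
check_case1 = 2  # seeking further information has a modest fixed payoff

# Case 2 (urgent UN business): boarding the wrong train is catastrophic.
board_case2 = expected_utility(credence, utility_if_p=10, utility_if_not_p=-1000)
check_case2 = 2

print(board_case1 > check_case1)  # True: boarding straight away is best
print(board_case2 > check_case2)  # False: better to verify first
```

On these assumed numbers, the same credence of 0.9 clears the bar for boarding straight away in Case 1 but not in Case 2, which is just what the intuitions above report.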
2.2 Owens’ view
In Reason without Freedom Owens concedes that evidential considerations alone may determine whether we should believe p or not p when the time to draw a definite conclusion has arrived. Pragmatic considerations, however, are relevant to determining whether believing p or refraining from believing p is the most appropriate stance in light of the given level of evidence for p. There must be something, Owens insists, that establishes when the available evidence for p is sufficient for belief in p to be justified—something that determines when belief, rather than continued agnosticism or suspension of belief, is warranted. And yet further evidential considerations appear irrelevant or powerless to decide the issue. Whether we should believe p given our available evidence, or withhold belief and inquire further will depend on a host of pragmatic constraints, none of which have any bearing on the likelihood of the truth of p: the importance or urgency of having a settled view about p, the consequences of having a determinate view, the availability of resources for further inquiry, the costs of further inquiry, the difficulty of resolving conflicts, the potential harm of error...
While admitting a role for pragmatic constraints in epistemic evaluation, Owens is careful to distinguish his view from a full-scale pragmatism, which he takes to involve the notion that believing p can be justified by appeal to the desirable consequences of believing p, and that reflection on the desirability of believing p can motivate the rational agent to believe p. According to Owens, no amount of reflection on the extent to which believing p satisfies one’s weighted needs and interests will motivate a rational subject to believe p. (In this respect believing is different from acting.) No matter how beneficial upholding a particular belief might be, a rational agent will not be moved to accept it if there is only meager evidence in its favor. Furthermore, the pragmatic constraints which Owens regards as relevant to setting what level of evidence is sufficient for justification do not concern the desirability of believing p: they might sanction an extremely unpleasant belief with countless undesirable consequences (a sense of hopelessness, despair, diminished self-esteem...).
2.3 Fantl and McGrath
Fantl and McGrath (2002) agree with Owens’ point that pragmatic considerations are relevant to determining when evidence is sufficient for belief to be justified, but they fault his account for its incompleteness. What is needed to secure Owens’ case against evidentialism is a disclosure of the underlying principles which determine how much evidence is required for justification given particular sets of needs and interests on the part of the believer. Otherwise, we are really no better off than the evidentialist, still struggling to answer the questions: how much evidence is enough, and what exactly determines how much is enough? Fantl and McGrath adopt an alternative tactic against evidentialism by presenting an intricate and innovative defense of a pragmatic necessary condition on justification:Footnote 6
(PCA) S is justified in believing that p only if S is rational to act as if p.
This condition holds under what Fantl and McGrath take to be a standard interpretation of justified, namely, as having sufficient evidence to know (so if someone is justified in believing that p but nonetheless fails to know that p, it is not for lack of evidence). The actions which are best for S (taking into account S’s needs and interests, the costs and benefits of the various choices, the likelihood of various outcomes) must turn out to be the same as those actions which are best for S under the assumption that p is true; otherwise, S fails to be justified in believing p. S must be in a position where she may reasonably act as though p were true. If she is not—say, because the risks of acting as though p were true should p turn out to be false are very grave—then she is not justified in believing p. The seriousness of the harm involved in acting as though p were true, should p turn out to be false, can raise or lower the standards of evidence and the degree of confidence required for believing p to be justified (graver harm requires more conclusive evidence). With PCA, Fantl and McGrath secure a role for pragmatic considerations in epistemic justification on a principled basis, account for intuitions in the train case, and explain why evidentialism is false. Two subjects with the same evidence for p can differ with respect to the reasonableness of acting as if p (because, for instance, there are differences in the costs involved should p turn out to be false), so one could satisfy and the other fail to satisfy the conditions required for a belief that p to be justified.
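PCA’s “rational to act as if p” test admits a simple decision-theoretic sketch: the act that maximizes expected utility given S’s actual credence must coincide with the act that would be best on the supposition that p is true. The acts and payoffs below are hypothetical:

```python
# Minimal sketch of the "rational to act as if p" test behind PCA.
# Act names and payoffs are illustrative assumptions.

def best_act(acts, credence):
    """Return the act with highest expected utility at a given credence."""
    return max(acts, key=lambda a: credence * a["if_p"] + (1 - credence) * a["if_not_p"])

acts = [
    {"name": "board",  "if_p": 10, "if_not_p": -1000},
    {"name": "verify", "if_p": 2,  "if_not_p": 2},
]

def rational_to_act_as_if(acts, credence):
    # Acting as if p is rational when the act that is best in fact
    # coincides with the act that is best given p (i.e. at credence 1).
    return best_act(acts, credence)["name"] == best_act(acts, 1.0)["name"]

print(rational_to_act_as_if(acts, 0.999))  # True: high confidence
print(rational_to_act_as_if(acts, 0.9))    # False: same acts, graver risk dominates
```

At the higher credence the best act in fact (boarding) matches the best act given p, so PCA is satisfiable; at the lower credence it is not, even though the menu of acts is unchanged.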
2.4 James’ two intellectual duties
A further, more general reason for thinking that practical interests can make a difference to epistemic evaluation derives from William James’ famous observation in “The Will to Believe” that we have two intellectual duties: to know the truth and to avoid error. There is no single best way to honor these duties, particularly since they can pull us in opposite directions: total agnosticism guarantees that we will not believe any falsehoods, but at the cost of our never attaining knowledge; believing everything for which there is some measure of positive evidence will ensure that we have some true beliefs, but plenty of false ones as well. For our intellectual lives to proceed, we must make choices about how to weigh our multiple, sometimes conflicting cognitive goals, and we have no basis for such a choice if we fail to take our practical needs and interests into account.
So when we look at our beliefs and belief-forming-processes with an eye towards evaluating how well or how poorly they advance our cognitive goals, we find that such an assessment is futile when undertaken without a consideration of our practical interests. A belief might be justified in the face of positive, yet inconclusive evidence when the cost of not having a settled opinion is very high; the same belief might fail to be justified on the basis of such evidence when there is nothing to be gained, and much to be lost, by a rush to judgment. In the first situation, the duty to know the truth is accorded a stronger weight; in the second, the duty to avoid error.
3 The Bayesian evidentialist reply
An evidentialist who is a Bayesian at heart could attempt to counter the anti-evidentialist claims in Sects. 2.1–2.4 by insisting that outright belief is a vague and imprecise term for describing our cognitive states and by reformulating her position in a way which is amenable to a strict, Bayesian perspective. Consider the following intuitive, Bayesian-friendly normative principles, which will readily be granted by the conventional evidentialist:
BR: A rational agent’s degree of belief that p should be well-calibrated to the evidence for and against p which is available to her.
BJ: In order to be epistemically justified in her degree of belief that p, an agent’s degree of belief that p must be well-calibrated to the evidence for and against p available to her.
In other words, in order to be epistemically justified, an agent’s degree of belief that p must be properly responsive to the amount and quality of her evidence for and against p. That these normative principles are indeed among the norms shaping our epistemic evaluations is not so controversial, but the Bayesian evidentialist will go one step further by insisting on a stronger connection between epistemic justification and well-calibration with respect to degrees of belief:
BJ′: An agent is epistemically justified in her degree of belief that p when and only when her degree of belief that p is well-calibrated to the evidence for and against p available to her.
For the Bayesian evidentialist, whether or not an agent is justified in having a particular degree of belief that p is entirely determined by facts about her evidence (e.g. the amount and quality of evidence for and against p). Two agents with the same evidence for and against p, and the same degree of belief q that p cannot differ with respect to whether they are epistemically justified in their degree of confidence q that p.
Once we realize that, strictly speaking, we always have beliefs held with varying degrees of confidence, the challenges posed by Sects. 2.1–2.4 seem less serious. In the train case, we can allow that both subjects assign the same prior probability for p (the train is express), assign the same likelihood for p in light of the evidence (the evidence in question being the testimony of the Swiss businessman), and hence update their subjective probability for p to the same extent. The two subjects are equally justified in their shared degree of belief assignment for p, though they differ in whether such a degree of belief is sufficient for acting as if p to be advisable given the costs and benefits of so acting should p turn out to be true and should p turn out to be false. For the Bayesian evidentialist, there is no further question: do the subjects differ with respect to whether the belief that p is justified? What can be justified or fail to be justified is a particular degree of belief that p, or, within the realm of practical rationality, acting as if p, given one’s degree of belief that p, the costs and benefits of acting as if p, and the seriousness of the consequences of so acting should it turn out that p is false.Footnote 7 Pragmatic considerations are relevant only to the question of whether acting as if p is reasonable.
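The Bayesian evidentialist’s point can be put in terms of Bayes’ theorem: since the two subjects share the same prior and the same likelihoods for the businessman’s testimony e, conditionalization yields the same posterior for both, and the stakes appear nowhere in the formula:

```latex
P_{\mathrm{new}}(p) \;=\; P(p \mid e)
\;=\; \frac{P(e \mid p)\,P(p)}{P(e \mid p)\,P(p) + P(e \mid \neg p)\,P(\neg p)}
```

Identical inputs on the right-hand side force identical outputs on the left; any difference between the two subjects must therefore lie elsewhere than in their updated degrees of belief.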
The same strategy helps to defuse the impact of Owens’ observations. The proper object of epistemic appraisal is degree of belief, so it really makes no sense to ask how much evidence is sufficient for outright belief to be justified. Evidence comes in, and we update our degree of belief accordingly. The resulting boost in our degree of confidence in p can count as reasonable or unreasonable, justified or unjustified, depending on the quality and quantity of our evidence. The kinds of pragmatic factors Owens appeals to, such as the importance of having a settled view and the harms of error, cannot rationally have an impact on a subject’s degree of belief, as Owens would be the first to admit. They only have relevance to assessing the practical rationality of various courses of action (investigating further or ending inquiry, acting or refraining from acting as if p...).
As for Fantl and McGrath’s point that there is a pragmatic necessary constraint on justification, namely, that an agent must be rational to act as if p in order to be justified in believing that p, the Bayesian evidentialist would again be inclined to think that such a conclusion is the product of an epistemological project which has been cast in the wrong terms. Only degrees of belief can be justified or unjustified in the epistemic sense, and whether a particular degree of belief that p is warranted in light of the evidence does not at all depend on whether or not it is rational to act as if p. For any case where S is epistemically justified in assigning a particular degree of belief to p, short of absolute certainty, there will be circumstances where, given the stakes, it is rational for S to act as if p, and others where it will not be rational for S to act as if p. Indeed, the train case can be used to illustrate this point.
A combination of the above Bayesian responses could help deflect the Jamesian argument that pragmatic factors are necessarily involved in epistemic evaluation since they determine how we weigh our sometimes conflicting cognitive goals. First, the central goals, as James conceives them, have to be recast in order to accommodate the notion that we never have beliefs, full stop, but only degrees of belief, and actions undertaken. For the Bayesian evidentialist, it is too crude to speak of duties to believe the true and to avoid believing the false. We generally hope to assign a high degree of probability to claims which are in fact true, and assign a low degree of probability to claims which are in fact false, but ultimately our primary epistemic aim is to have degrees of belief which are well calibrated to the evidence. What kind of impact a bit of evidence has on our degree of belief should not, if we are rational, turn on the costs and benefits of boldness or caution, but simply on the quantity and quality of our evidence.Footnote 8 Whether the resulting degree of belief is strong enough for us to refrain from gathering further evidence, to stop considering alternative hypotheses, to assert confidently that we have finally resolved our question, to promulgate our viewpoint in the classroom, to build a bridge for heavy trucks, to move to China and spread the Word..., these are matters of decision-theoretic calculation, to the extent that they are amenable to rational resolution at all. What appears to be a weighing of cognitive goals—whether we should embrace the possibility that we have finally achieved understanding, or whether we should be more conservative and suspend judgment—is really a deliberation about what courses of action we should take in light of our degree of uncertainty.
4 Pragmatic constraints on outright belief
A potentially worrisome feature of the Bayesian evidentialist strategy, however, is that it is so radically dismissive of our ordinary ways of speaking, as well as of the linguistic habits of centuries of traditional epistemological inquiry. Even if there is a sense in which belief always comes in degrees, surely there are some important purposes served by counting people as outright believers, disbelievers, or agnostics concerning certain matters (think of how crucial these categories are for securing membership in various social groups, such as religious and political ones). How strongly someone has to believe a proposition in order to count as a believer in a certain setting may be quite a messy affair, with no clear-cut boundaries, but we cannot at the outset simply dismiss the possibility that there are various contextual factors which contribute to establishing some kind of threshold for belief.
Whatever such threshold requirements are like, it is clear that they are not always so strong as to demand absolute certainty, or so weak that merely thinking a proposition is more likely to be true than not will be enough for belief. We readily concede that most, if not all, of our empirical beliefs are upheld with less than mathematical certainty, and grant that, while we think the odds are slightly in favor of Lucky the horse winning the race, it would be inappropriate to describe us as believing that Lucky will win the race. At best, we can say that believing p with the maximal degree of confidence is sufficient for outright belief that p, and that having a higher degree of belief in p than in -p is necessary for outright belief that p.
Since a central function of positing beliefs is to explain actions, a satisfactory account of thresholds for outright belief could appeal to facts about our dispositions to act. Consider
C: believing that p to a degree which is high enough to ensure that one is willing to act as if p is true,
where one’s being willing to act as if p means that what one is in fact willing to do is the same as what one would be willing to do, given p. Perhaps condition C is necessary and/or sufficient for counting as outright believing that p. Such a condition could play an important role in a contextual or a non-contextual account of outright belief: being willing to act as if p in the given circumstances is necessary and/or sufficient for counting as outright believing that p in those circumstances; being willing to act as if p in all or most circumstances (where one’s evidence for p is unchanged) is necessary and/or sufficient for counting as outright believing that p.Footnote 9
C can plausibly be taken as a sufficient condition for outright belief, provided we adopt the context-insensitive interpretation: we count as believing those propositions which we would not hesitate to act on under all or most circumstances where our evidence for p remains unchanged. Taking C as sufficient, but context-sensitive, would be too permissive: there are plenty of cognitive attitudes towards p which involve some degree of willingness to act as if p were true in particular settings, yet which fall short of believing (accepting, assuming, imagining, supposing...).Footnote 10
C can plausibly be taken as a necessary condition for outright belief, provided we are not so demanding as to require of the outright believer that she be willing to act as if p in all circumstances where her evidence for p is unchanged. Being so demanding would deprive us of most of our beliefs. For any belief that p which we hold to an extremely high degree of confidence, short of the maximum, we can envision plenty of situations (of the contrived sort which philosophers are so good at manufacturing) where the costs of acting as if p are so high in the case of error and the benefits so minimal in the case of correctness that we would refrain from acting as if p. To preserve the tenability of taking C as a necessary condition for outright belief, we could adopt a context-sensitive interpretation. In order to count as believing p in a range of circumstances, one must be willing to act as if p in those circumstances: one’s degree of belief that p has to be high enough that one is willing to act as if p under those circumstances.Footnote 11
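The context-sensitive reading of C can be sketched as follows. Willingness to act as if p is modeled here, purely for illustration, as a non-negative expected gain from so acting; nothing in the argument fixes this particular form, and C is claimed only as a necessary condition, not a sufficient one:

```python
# Sketch of the context-sensitive reading of condition C. The expected-gain
# model of "willingness to act as if p" is an illustrative assumption.

def willing_to_act_as_if(credence, cost_of_error, benefit_if_right):
    """Willing to act as if p when the expected gain of doing so is non-negative."""
    return credence * benefit_if_right - (1 - credence) * cost_of_error >= 0

def meets_condition_C(credence, cost_of_error, benefit_if_right):
    # C (necessary, context-sensitive): outright believing p in these
    # circumstances requires being willing to act as if p in them.
    return willing_to_act_as_if(credence, cost_of_error, benefit_if_right)

# The same degree of belief meets C in low-stakes circumstances
# but fails it when the cost of error is grave:
print(meets_condition_C(0.95, cost_of_error=5, benefit_if_right=10))     # True
print(meets_condition_C(0.95, cost_of_error=1000, benefit_if_right=10))  # False
```

The second call models the contrived high-stakes situations mentioned above: a credence of 0.95, ample for everyday action, no longer underwrites willingness to act as if p once the cost of error is severe.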
The idea that C is a necessary, pragmatic constraint on outright belief has a certain intuitive appeal: we do seem to expect of people that, if they genuinely uphold a belief, they will not shirk from acting on it.Footnote 12 Hesitancy or failure to act in accordance with what we say we believe reflects indecision, or less than complete conviction—a view reinforced whenever we are urged to put our money where our mouth is, or to have the courage of our convictions.Footnote 13
5 The renewed case against conventional evidentialism
The original, non-Bayesian version of evidentialism is placed under threat when we view the Bayesian-friendly normative principle from earlier in our discussion, BJ, in light of the idea that condition C is a pragmatic, necessary constraint on outright believing.
BJ: In order to be epistemically justified in her degree of belief that p, an agent’s degree of belief that p must be well-calibrated to the evidence for and against p available to her.
C as a necessary condition on outright believing: In order to count as outright believing that p in the circumstances, an agent must believe that p to a high enough degree such that she is willing to act as if p in the circumstances.
Taken together, these two claims lead to a compelling argument against conventional evidentialism. Since having a degree of belief sufficiently high to underwrite a willingness to act as if p is partly constitutive of outright believing that p, the epistemic status of outright believing that p will, at least in part, depend on the epistemic status of having a degree of belief sufficiently high to underwrite a willingness to act as if p. In order to be justified in outright believing that p, the agent, at the very least, has to be justified in having a degree of belief which is high enough for her to be willing to act as if p. By BJ, such a degree of belief must be well-calibrated to/well-supported by the agent’s available evidence in order to be justified. Hence, in order for an agent’s outright believing that p to be justified, her available evidence must support a degree of belief high enough for her to be willing to act as if p. Given two agents S and S′ with the same amount and quality of evidence, one could have, and the other fail to have, enough evidence to support a degree of belief which is high enough to underwrite a willingness to act as if p—say, on account of differences in the costs if -p in their respective circumstances, as in the train examples. So one could satisfy, and the other could fail to satisfy, a necessary requirement for being justified in outright believing that p (even if they assign the same degree of belief to p in light of the evidence).
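The argument of this paragraph can be set out schematically. Writing cr(p) for the agent’s degree of belief and t for the stakes-sensitive threshold at which she is willing to act as if p (notation introduced here only for compactness):

```latex
\begin{align*}
&\text{(C)}  && \mathrm{Bel}(p) \rightarrow \mathrm{cr}(p) \ge t\\
&\text{(BJ)} && \mathrm{Justified}(\mathrm{cr}(p)) \rightarrow \mathrm{cr}(p)\ \text{is well-calibrated to the evidence}\ E\\
&\text{(PJ)} && \therefore\ \mathrm{Justified}(\mathrm{Bel}(p)) \rightarrow E\ \text{supports a}\ \mathrm{cr}(p) \ge t
\end{align*}
```

Since t varies with the costs of error, two agents with identical evidence E can differ over whether E supports a credence at or above their respective thresholds, which is exactly the failure of conventional evidentialism described above.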
Pragmatic features, such as the cost if p should turn out to be false, have a potential bearing on whether or not outright believing that p can be justified for an agent. Not, according to the above derivation, because such features have any bearing on what degree of belief is justified in light of the evidence (we can agree with the Bayesian evidentialist on this point), but simply because such features are relevant to determining the threshold for outright belief. Higher thresholds will demand stronger, more conclusive, or more plentiful evidence than lower thresholds.Footnote 14
We have seen how the relevance of pragmatic factors to epistemic evaluation in the train examples can be interpreted as a product of the relevance of such factors to settling thresholds for outright belief. Returning to Owens’ argument, we can again concede to the Bayesian evidentialist that purely non-pragmatic considerations determine whether or not a subject’s resulting degree of credence after evidence comes in is epistemically justified. As for Owens’ point that pragmatic considerations are relevant to the question of whether or not an inquirer has sufficient evidence to justify belief, we can now recast the question as concerning whether or not a degree of belief that p which is high enough for the inquirer to be willing to act as if p is epistemically justified, i.e. is well-calibrated to the evidence. The same holds for Jamesian deliberations about striking an appropriate balance between theoretical boldness and caution. Does the evidence warrant a degree of belief which is high enough, given the stakes, for the inquirer to stop the search for more favorable evidence, to end deliberation about whether or not p, and to be willing to act under the assumption that p is true? Or does the evidence warrant only degrees of belief which fall under the threshold, making it more reasonable, given the stakes, to inquire further, to choose a course of action which seems a safer bet, given one’s degree of uncertainty about p... Again, the role for pragmatic constraints in epistemic evaluation can be interpreted as secondary, relevant only to setting the threshold for outright believing that p. Whether a degree of belief at or above that threshold is itself epistemically justified could then be entirely a matter of the quantity and quality of the subject’s evidence.
The above argument against conventional evidentialism derives a constraint on the justification of outright believing that p from an intuitive normative principle BJ (a principle which should be unobjectionable to the conventional evidentialist) and the claim that C is a necessary constraint on outright believing that p:
PJ: An agent is justified in outright believing that p only if her available evidence supports a degree of belief high enough for her to be willing to act as if p.
We can at this juncture wonder about the relation between PJ, and Fantl and McGrath’s PCA.
PCA: S is justified in believing that p only if S is rational to act as if p.
First consider practically rational agents in the robust sense (the sense which Fantl and McGrath presuppose): agents who prefer actions to the extent to which they are likely to best promote their ends and who make good, reflective judgments about such likelihoods. For such a practically rational agent S, when PJ is satisfied, PCA is also automatically satisfied; when PJ is not satisfied, PCA also fails to be satisfied. The acts which S is willing to do are those which she is rational to choose. So if the available evidence supports a degree of belief high enough for S to be willing to act as if p, then S is rational to act as if p. Correspondingly, if the evidence does not support a degree of belief high enough for S to be willing to act as if p, then S will not be rational to act as if p. (What S is rational to do, given p, is not the same as what S is rational to do, in fact).Footnote 15
Interestingly, for practically irrational agents, PCA is indeed an independent constraint, and one that places a stronger requirement on justification than does PJ. Say that the evidence is strong enough to make S epistemically justified in the personal probability she assigns to p, and that S outright believes that p. She is willing to act as if p in the given circumstances. Suppose, however, that acting as if p is a very poor choice for S in light of her weighted interests, her personal probability for p, and the potential costs and benefits of acting as if p in case of p and not-p. (We can imagine S is a compulsive gambler, and often acts akratically, taking great risks whenever there is the promise of a huge payoff, even when doing so is not best in light of her interests and state of uncertainty.) In this case, even though S’s degree of belief that p is sufficiently warranted by the evidence, we might nonetheless be inclined to deny that she is justified in outright believing that p. Outright believing that p entails being willing to act as if p, and in this case we judge S’s outright belief as unreasonable: not because the degree of confidence she requires to be willing to act as if p is not sufficiently warranted in light of the evidence, but rather because acting as if p is not rational in light of her interests.Footnote 16
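A hypothetical numerical rendering of the gambler case (the figures are mine, not the author's) shows how PJ can be met while PCA fails: S is in fact willing to act as if p, yet doing so does not maximize expected utility by her own credences:

```python
# Hypothetical numbers for the akratic gambler: S's credence in p is
# epistemically justified, we suppose, yet acting as if p fails to maximize
# expected utility given her own credence and stakes.
credence_p = 0.6          # justified by S's evidence, by stipulation
payoff_if_p = 100         # the "huge payoff" that tempts her
loss_if_not_p = 200       # the cost of acting as if p when p is false

eu_act_as_if_p = credence_p * payoff_if_p - (1 - credence_p) * loss_if_not_p
eu_refrain = 0            # the safe alternative

# PJ is satisfied (her credence is high enough that she is in fact willing
# to act as if p), but PCA fails: acting as if p is not the rational choice.
assert eu_act_as_if_p < eu_refrain
```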
We may also be inclined to judge that S’s outright belief that p is unjustified in some cases where S is perfectly willing to act as if p (what she actually is willing to do is the same as what she would be willing to do, given p) and the evidence supports a degree of belief high enough for S to be willing to act as if p, but she has irrational beliefs which lead her to think that acting as if p is maximizing under the circumstances. A more reflective person would realize that acting as if p is extremely ill-advised, given S’s situation. PJ is met, but PCA will not be.Footnote 17 However, if we suppose that there is nothing wrong with S’s degree of belief that p from an epistemic standpoint, while sustaining the intuition that her outright belief is nonetheless unjustified, then we may be casting criticism on the reasonableness of her willingness to act as if p—an evaluation which cannot be purely epistemic.
PCA strikes me as highly intuitive, and I find Fantl and McGrath’s argument for it compelling, though I am still inclined to think that the pragmatic constraint on justification which PCA introduces is itself a legacy of the pragmatic element inherent in our concept of outright belief (which a hard-core Bayesian evidentialist could bypass). Because it is constitutive of S’s outright believing that p in the circumstances that S is willing to act as if p, when we evaluate whether or not such believing is in order, it is hardly surprising that our answer would, in turn, reflect our assessment of the reasonableness of being willing to act as if p—that is, whether or not S would be rational to act as if p.
There is another consideration which may further temper the anti-evidentialist implications of Fantl and McGrath’s defense of PCA. Fantl and McGrath’s argument crucially depends on interpreting justified as having sufficient evidence to know, that is, with the satisfaction of the justification condition on knowledge. But, as Conee and Feldman (2004) note,Footnote 18 such a notion of justification is fairly technical and narrow, and not one to which they would insist evidentialism applies. The strength of justification required for knowledge might very well depend in part on pragmatic features such as the cost of error, but such a conclusion does nothing to undermine the idea that the epistemic justification one has for a belief is entirely determined by the evidence one has.
The epistemic evaluation of outright beliefs as justified or unjustified is an exercise that ultimately cannot be accomplished without taking our practical interests and concerns into account. This anti-evidentialist conclusion holds, however, largely in virtue of the mechanisms of thresholds for outright belief. Outright believing that p requires, at the very least, being willing to act as if p, and whether our evidence warrants a degree of belief strong enough for us to be willing to act as if p in a given situation, naturally, cannot be determined in abstraction from our practical needs and interests. This modest anti-evidentialism can be resisted by a variety of Bayesianism which takes degrees of credence as the proper object of epistemic evaluation, but such a revisionist stance faces the complaint of being too unfaithful to our ordinary practices of epistemic assessment, the true focus of our philosophical concern.Footnote 19
Notes
Given the agent’s beliefs about her circumstances, her desires, interests and purposes, their relative strength and importance, the range of actions available to her, and the likelihood of various outcomes given certain actions, we can offer a teleological explanation of an action by noting that the action is one the agent regards as the most effective means to the relevant ends.
Here I follow Howson and Urbach (1993, p. 75), who make a good case for resisting the temptation to try to define degrees of belief in terms of dispositions for various sorts of behavior (generally involving betting) under certain conditions.
This formulation of evidentialism is offered by Fantl and McGrath (2002), who attribute the view to Conee and Feldman (1985). Conee and Feldman’s evidentialism—understood broadly as the view that facts about whether or not a person’s doxastic attitude is epistemically justified depend entirely on that person’s evidence—is typically contrasted with reliabilist accounts of justification. In this discussion, as in Fantl and McGrath (2002) and Weatherson (2005), the relevant contrast position is the view that pragmatic, non-evidential considerations can have a bearing on whether or not a person’s believing a proposition is epistemically justified.
We should construe evidence quite broadly here to include a wide variety of epistemic reasons for believing p, as well as any relevant background knowledge (or at least all the evidence supporting the background beliefs which are needed to fulfill the justification). Otherwise, evidentialism would be subject to the quick objection that two subjects faced with the same evidence for p, but possessing different degrees of background knowledge (e.g. a scientist and a child), could differ with respect to being justified in believing that p.
Naturally, the debate over evidentialism is premised on the idea that a legitimate distinction between pragmatic/practical and epistemic/evidential reasons can be drawn. If the distinction were not genuine, as some pragmatists claim, then there would be no real distinction between practical and epistemic justification, and hence no distinctively epistemic sense of justification which pragmatic concerns could or couldn’t bear upon. Practical reasons for belief would always be relevant to justification, call it “epistemic” or what you will, because such reasons would be the only sort we ever have anyway. Intuitively, there seems to be a clear difference between epistemic and practical reasons for belief: Aren’t the former simply those reasons which advance the truth-oriented or cognitive ends we have in believing, whereas the latter are those which help secure our purely practical ends? This natural suggestion falls short because adopting a false, evidentially baseless belief might sometimes be the best way to advance our epistemic ends overall. For example, by falsely believing that he is very smart on the basis of his feeling a tickle, Harry will have enhanced self-confidence and discipline, motivating him to study more and learn more truths in college. Despite the role taking the tickle as evidence of smartness plays in furthering Harry’s cognitive goals overall, we are hardly inclined to count Harry’s reason for thinking he is smart as epistemic. A simple reply seems to do the trick: since an epistemic reason for believing p ought to make p more likely to be true, an epistemic reason for believing p has to advance the epistemic goal of believing p when and only when p is true and not just one’s other epistemic goals. But such a response seems to saddle Harry with having the epistemic goal of believing that he is smart when and only when he is smart. Does Harry really have such a goal? 
Why should he—particularly when, given his personality structure, having it might make some of his more cherished epistemic goals inaccessible? The matter is a complicated one, and well deserving of greater attention. For purposes of this paper, however, we will suppose, as others do who are engaged in the debate over evidentialism, that the distinction between practical and epistemic reasons for belief, and pragmatic and epistemic justification is perfectly in order.
See, for example, DeRose (1992, p. 913–916).
Fantl and McGrath’s argument proceeds through a series of stages, beginning with a strengthening of an intuitive closure argument concerning knowledge. We cannot include every detail of their lengthy defense here, though hopefully the following rough and somewhat liberal reconstruction will highlight and elucidate the most essential points. We begin at a slightly advanced stage of their discussion, with an enhanced version of their original intuitive closure argument. (Note that rational preferences for states of affairs simply form a broader category which encompasses preferences concerning actions on the part of the agent.)
-
(1′)
S knows that p.
-
(2′)
S knows that if p, A is better for S than B (given her needs and interests).
Therefore,
-
(3′)
S is rational to prefer A to B.
To conclude that S wouldn’t be rational to prefer state of affairs A, even though she knows A is preferable to B, given p, would be to concede that S doesn’t genuinely know that p. (Note that here, as elsewhere in the argument, the relevant sense of rational isn’t purely practical. What the authors seem to mean by “S is rational to prefer A to B” is that S thinks A will satisfy her needs and interests to a greater extent than B, and has good grounds for so thinking.) A parallel point can be made even when S is simply rational, i.e. has good reason, to think (yet might fail to know) that A is better for her than B, given p.
-
(1′′)
S knows that p.
-
(2′′)
S is rational to prefer A to B, given p. [S has good reason to think that if p, A is better for her than B]
Therefore,
-
(3′′)
S is rational to prefer A to B in fact.
To conclude that S wouldn’t be rational to prefer A, even though she is rational to think that A is preferable to B, given p, would be to concede that S doesn’t genuinely know that p. If you know that p, and are in a position to make reasonable judgments about what’s best given p, there shouldn’t be any problem in preferring as if p.
Fantl and McGrath operate with what they take to be a standard way of understanding ‘S is justified in believing that p,’ namely, as S has good enough evidence to know that p (if S fails to know, it’s not on account of S’s lack of evidence). Presupposing this interpretation, they argue that (1′′)–(3′′) can be modified to apply to a subject who is merely justified in believing that p.
-
(1′′′)
S is justified in believing that p.
-
(2′′′)
S is rational to prefer A to B, given p.
Therefore,
-
(3′′′)
S is rational to prefer A to B in fact.
If S fails to know that p, it isn’t because S fails to have enough evidence to know that p, so we can consider a subject S′ who has exactly the same evidence, needs and interests as S, but who knows that p. Since the rationality of a preference is entirely the product of one’s evidence, needs, and interests, and S and S′ have the same evidence, needs, and interests, whatever is a rational preference for S′, who knows that p, is also a rational preference for S, who is merely justified in believing that p. So if both are rational to prefer A to B, given p, both are rational to prefer A to B in fact.
Switching (2′′) and (3′′) produces yet another valid argument:
-
(1′′′′)
S is justified in believing that p.
-
(2′′′′)
S is rational to prefer A to B in fact.
Therefore,
-
(3′′′′)
S is rational to prefer A to B, given p.
Given (1′′′′) and (2′′′′), could (3′′′′) turn out to be false? The authors argue against this possibility. Say (3′′′′) is false. Then either (I) S is rational to prefer B to A, given p, or (II) S is rational to be indifferent between A and B, given p. Appealing to the previous argument, (1′′′′) and (I) imply that S is rational to prefer B to A in fact, which contradicts our stipulation (2′′′′). Parallel reasoning suggests that (1′′′′) and (II) also lead to a conclusion which contradicts (2′′′′). If S is justified in believing that p, and S is rational to be indifferent between A and B, given p, then S is rational to be indifferent between A and B—a conclusion at odds with our stipulation that S is rational to prefer A to B in fact. (1′′′)–(3′′′) and (1′′′′)–(3′′′′) can be converted into a principle articulating a pragmatic necessary condition on justification:
-
(NC) S is justified in believing that p only if, for any states of affairs A and B, S is rational to prefer A to B, given p, iff S is rational to prefer A to B in fact.
This principle can be reworded as:
-
(PC) S is justified in believing that p only if S is rational to prefer as if p.
Preferences for states of affairs encompass preferences for actions, so we see that the principle with which we began, (PCA), is simply a special case of (PC):
-
(PCA) S is justified in believing that p only if S is rational to act as if p.
Note that the Bayesian evidentialist won’t be committed to the view that two subjects S and S′ with the same evidence e for p cannot differ with respect to whether or not their degree of belief that p is epistemically justified: S and S′ might assign radically different likelihoods for p in light of e, only one of which is rationally acceptable. But S and S′, if they have the same evidence, cannot differ with respect to whether or not having a particular degree of belief that p is epistemically justified.
An anti-evidentialist might be tempted to argue that the degree of harmfulness of error could have a bearing on what degree of credence boost is warranted: when more is at stake, a more conservative updating policy is rationally required; when less is at stake, a more generous credence boost is rationally acceptable. For this approach to target evidentialism, however, the relevant notions of warrant and rationality would have to be purely epistemic—a weak point of the strategy. Troubling, too, are the effects of moving from one situation to another which presents a higher or lower level of risk involved should p be false: the proposal in question implies that one would be rationally required to raise or lower one’s subjective probability for p, despite the lack of new evidence. That the anti-evidentialists Fantl and McGrath would themselves resist this option is supported by the following passage: “But it ought to be common ground between theories of evidence that having a lot at stake in whether p is true does not, by itself, provide evidence for or against p. Evidence for p ought to raise the probability of p’s truth (in some appropriate sense of ‘probability’). But having a lot at stake in whether p is true doesn’t affect its probability, except in rare cases in which one possesses special background information.” (Fantl and McGrath 2002, p. 69).
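The footnote's point—that stakes are not evidence—can be illustrated with a minimal Bayesian sketch (the example is mine): strict conditionalization takes only the prior and the likelihoods as inputs, so no stakes parameter can affect the posterior.

```python
# Minimal Bayes-update sketch (example mine, not from the text): the
# posterior is fixed by prior and likelihoods alone; nothing about the
# cost of error appears anywhere in the computation.
def posterior(prior, likelihood_e_given_p, likelihood_e_given_not_p):
    """P(p | e) by Bayes' theorem; no stakes parameter exists to vary."""
    num = prior * likelihood_e_given_p
    denom = num + (1 - prior) * likelihood_e_given_not_p
    return num / denom

# Same evidence, same prior => same warranted credence, whether p is a
# trivial matter or a life-and-death one.
p_post = posterior(0.5, 0.8, 0.2)
```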
In his celebrated Knowledge and Its Limits, Timothy Williamson makes a proposal along these lines (though he transcends the contextual/non-contextual dualism by introducing the idea that outright belief itself comes in degrees): “What is the difference between believing p outright and assigning p a high subjective probability? Intuitively, one believes p outright when one is willing to use p as a premise in practical reasoning. Thus one may assign p a high subjective probability without believing p outright, if the corresponding premise in one’s practical reasoning is just that p is highly probable on one’s evidence, not p itself. Outright belief still comes in degrees, for one may be willing to use p as a premise in practical reasoning only when the stakes are sufficiently low.” (Williamson 2000, p. 99)
Saying that we have degrees of belief (subjective probability) and degrees of outright belief appears to introduce an unnecessary complication. Our degree of willingness to use p as a premise in practical reasoning is a direct product of our degree of belief that p and our fundamental preferences, so why not simply stick with degrees of belief? It seems simpler just to say that, in some circumstances, our degree of confidence is high enough so that we are inclined to use p as a premise in practical reasoning (i.e., to act as if p). In those circumstances, we can count as outright believing that p. For those circumstances where our degree of belief isn’t high enough for us to use p as a premise in practical reasoning, we simply fail to count as outright believing that p in those circumstances (not: we still have some measure of degree of outright belief that p, as distinguished from our degree of belief that p, because of our willingness to act as if p in some other range of circumstances).
Though I admit a pragmatist element to outright believing, I do not favor a purely pragmatist account of outright belief. What is constitutive of outright believing that p in a given circumstance is (i) that one’s degree of belief is high enough for one to be willing to act as if p is true and (ii) that one aims to have one’s degree of preference for believing that p (i.e. one’s degree of belief that p) answerable to the extent to which p seems likely to be true. One aims to have one’s degree of belief well-calibrated to (what one takes to be) one’s evidence.
In case some readers are wary of the notion that ascriptions of outright belief are context-sensitive, we might try to salvage a context-insensitive interpretation of C as necessary for outright belief by requiring only that the outright believer possess a willingness to act as if p in most (rather than all) circumstances where the evidence for p is unchanged. Exactly how the “most” should be understood is unclear: it probably shouldn’t be understood as most relative to all logically possible scenarios—that would be too demanding; on the other hand, if we mean most normal circumstances, or most of those circumstances sufficiently like the believer’s current circumstances in the relevant respects, we face the further difficulty of specifying what circumstances count as normal, or sufficiently like the believer’s current circumstances. Some ways of resolving these further questions could make the proposal indistinguishable from the context-sensitive interpretation. Furthermore, it’s unclear how we should understand the train case under the context-insensitive interpretation. In some sense, both subjects possess the same proclivities to act, relative to various circumstances: they have the same fundamental preferences, assign the same degree of belief to p, and differ only with respect to which circumstances are actual. Do both or neither or only one count as outright believing that p?
Extraordinary circumstances with unusually severe consequences in the case of error might incline an otherwise confident believer to refrain from acting as if p. Does ordinary practice dictate that such a subject still counts as believing that p, on account of her general proclivities under more normal circumstances? Or must we say that, while she counts as a believer in more normal settings, strictly speaking, she doesn’t outright believe that p in the extraordinary setting? Since the matter seems under-determined by the data of our everyday experience, we may as well avoid the complications inherent in the context-insensitive interpretation, and stick with the context-sensitive one.
A possible exception to condition C could arise for a rather peculiar sort of irrational agent. Say S thinks it would be best for her to take the shortest road to Elyria, and she has a very high degree of confidence q that taking path A is the shortest road to Elyria, but she has a bizarre phobia which makes it psychologically impossible for her to choose to take path A when she has degree of belief q that path A is the shortest road to Elyria (we can imagine only absolute certainty would bypass the problem). Even though her degree of belief that p is not high enough for her to be willing to act as if p, it still seems as though she could count as believing that p. Such situations are so atypical—at such great remove from the usual kinds of circumstances where our ordinary concept of belief is at play—that it should come as no surprise that the concept is put under some strain here.
Two authors who, in addition to Williamson, admit an even tighter relation between believing that p and being disposed to act as if p are Stalnaker (1987) and Weatherson (2005), both of whom accept a pragmatist, functionalist view of belief. Stalnaker writes: “To say that an agent believes that P is to say something like this: the actions that are appropriate for that agent—those that he is disposed to perform—are those that will tend to serve his interests and desires in situations in which P is true.” (Stalnaker 1987, p. 82) And Weatherson notes: “A better move is to start with the functionalist idea that to believe that p is to treat p as true for the purposes of practical reasoning. To believe p is to have preferences that make sense, by your own lights, in a world where p is true.” (Weatherson 2005, p. 421). Weatherson expands upon this basic pragmatist insight in a way which places special emphasis on conditional preferences—“an agent believes that p iff conditionalizing on p doesn’t change any conditional preferences over things that matter” (Weatherson 2005, p. 422)—a move which may introduce some difficulties (see note 14 below).
Both Stalnaker and Weatherson develop accounts of belief which respect an intuitive, yet somewhat controversial closure principle (given certain interpretations of the lottery and preface paradoxes): if an agent believes p and believes q, then she also believes p ∧ q. While typical threshold accounts of the relation between degrees of credence and outright belief might fail to accommodate this principle, the modest view I put forward here allows for it. The principle could be regarded as an additional necessary constraint on outright belief.
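The tension between simple threshold views and the closure principle can be checked with elementary arithmetic (the particular numbers are mine):

```python
# Illustrative arithmetic (example mine): a probabilistically coherent agent
# can clear a credence threshold for p and for q separately while falling
# below it for p-and-q, so simple threshold views violate closure.
threshold = 0.95
cr_p = 0.96
cr_q = 0.96
# Coherence only requires cr(p and q) >= cr_p + cr_q - 1 = 0.92.
cr_p_and_q = 0.92

def believes(cr):
    return cr > threshold

assert believes(cr_p) and believes(cr_q)
assert not believes(cr_p_and_q)   # closure fails on the threshold view
```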
Since independently developing the views expressed in this paper, I have found a kindred spirit in Weatherson (2005), who also explores the possibility that the pragmatic sensitivity expressed in certain normative epistemic principles may be derivable from pragmatic conditions on belief. Weatherson as well challenges Fantl and McGrath’s interpretation of the train example by suggesting that while two agents with the same evidence in favor of a proposition and same degree of credence cannot differ in whether or not that degree of credence is justified, one could count and the other could fail to count as believing the given proposition on the basis of practical differences in their situations. Practical interests matter to philosophy of mind (insofar as they are relevant to determining whether a person’s doxastic state counts as belief), but not really to epistemology per se. While there are some points of agreement between Weatherson and myself, I am inclined to reject the theory of belief which he relies on, as well as the further criticisms he raises against counting Fantl and McGrath’s PCA (S is justified in believing that p only if S is rational to act as if p) as a pragmatic necessary condition on justification (see note 16).
Weatherson’s theory is far too complex to survey with any detail here, though his guiding insight is roughly the idea that the propositions you believe are those which leave all your conditional preferences unchanged (relative to all genuinely possible options and propositions you are disposed to take seriously). He expresses his central thesis as follows, where “A ≥_q B” means the agent regards action A as at least as good as action B, given q. The first two quantifiers below range over all live, salient actions, and the third ranges over all non-far-fetched propositions q compatible with p whose truth makes a practical difference—i.e. propositions q where conditionalizing on q changes the agent’s preferences with respect to some live, salient actions.

Bel(p) ↔ ∀A∀B∀q (A ≥_q B ↔ A ≥_{p∧q} B)

Weatherson supplements this thesis with some additional constraints in order to deflect counterexamples involving propositions you believe whose truth or falsehood makes no practical difference. Weatherson is unhappy with what he designates as “threshold” views about the relation between belief and degree of belief, the view that “S believes that p iff S’s credence in p is greater than some salient number r, where r is made salient either by the context of belief ascription, or the context that S is in” (Weatherson 2005, p. 420). By an intuitive closure principle, a probabilistically coherent agent who believes p and believes q should also count as believing p ∧ q—a principle which threshold views fail to accommodate. Weatherson’s alternative theory avoids this potential pitfall, but it does so at the cost of issuing a counter-intuitive verdict for a certain class of cases where an agent believes p, yet would no longer do so should strongly countervailing (even if not decisive) evidence q against p arise—a possibility which the agent thinks is unlikely.
Consider, for instance, a fair-minded juror who wants to do the best job she can bringing the guilty to justice while protecting the innocent. The juror believes that a defendant is innocent on the basis of strong evidence: a different person has recently confessed to the crime, the defendant’s footprints fail to match those at the crime scene, he has a strong alibi, etc. DNA test results have been delayed, and may not be forthcoming, but the juror is fairly confident that the defendant’s DNA will not be a match if testing is completed. If the test does turn out positive for a match, the juror would change her mind about the defendant’s innocence and issue a “guilty” verdict (a positive match speaks strongly, even if not decisively, against innocence). The thought of sending an innocent person to jail is pretty horrific for her, though, so she concedes that in the unlikely scenario where the defendant is in fact innocent, and the DNA test is positive, she would prefer to recommend the verdict “innocent.” In this case:

p: the defendant is innocent
q: the DNA test is completed and is positive for a match
A: recommend the verdict “guilty”
B: recommend the verdict “innocent”

A < B, A >_q B, A <_{q∧p} B

The juror believes that p, even though conditionalizing on p does change some of her conditional preferences, contrary to what Weatherson’s theory requires.
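The structure of the juror case can be encoded as a small consistency check against Weatherson's biconditional (the encoding is my sketch, not Weatherson's formalism):

```python
# Toy encoding (structure mine) of the juror case: on Weatherson's test,
# S believes p only if conditionalizing on p changes no conditional
# preference over live options. We record which of A ("guilty") and
# B ("innocent") the juror prefers under each relevant supposition.
prefers_A = {
    "unconditional": False,  # A < B: she favors "innocent" outright
    "q": True,               # A > B, given a positive DNA match
    "q_and_p": False,        # A < B, given a match AND actual innocence
}

# Conditionalizing on p flips the preference conditional on q (compare the
# "q" row with the "q_and_p" row), so Weatherson's biconditional rules out
# Bel(p) -- even though, intuitively, the juror believes the defendant innocent.
weatherson_believes_p = (prefers_A["q"] == prefers_A["q_and_p"])
assert not weatherson_believes_p
```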
While much of my discussion is most naturally interpreted as concerning ex post justification, when ex ante justification is at issue a concern can be raised about my claim that PCA is not an independent source of a pragmatic constraint for practically rational agents. Say S is practically rational, but epistemically irrational: she does not believe p even though she should, because there is ample evidence (believing p is ex ante justified for S). S is rational to act as if p, in Fantl and McGrath’s sense, and PCA is satisfied. But S does not have a high enough degree of belief to be willing to act as if p—not because of practical irrationality, but simply because her degree of belief that p is so low (too low, indeed, to be epistemically rational). PCA and PJ appear to come apart: either because PJ does not apply in cases of ex ante justification, or perhaps because PJ simply fails. I think, however, that PJ can be understood in ways that would make it potentially relevant to accounts of ex post and ex ante justification. We can ask, of an agent who actually outright believes that p, whether or not her available evidence supports her degree of belief, which is high enough for her to be willing to act as if p. We can ask, of an agent who may not actually outright believe that p, whether or not her available evidence supports a degree of belief high enough such that the agent would be willing to act as if p were she to have that degree of belief. When the latter, ex ante reading is adopted, I believe that the above concern can be addressed. Since S is practically rational, we can take it that the available evidence supports a degree of belief high enough such that S would be willing to act as if p were she to have that degree of belief (which, alas, she does not). PJ is satisfied, just like PCA. I thank an anonymous reviewer for pointing out the need to address this issue.
Weatherson presents an objection to Fantl and McGrath’s PCA (S is justified in believing that p only if S is rational to act as if p) by way of a complicated counterexample. I have a difficult time seeing how Weatherson’s example poses a potential challenge to Fantl and McGrath’s principle, unless we take him to be construing PCA in a way which, however natural, is ultimately at odds with the authors’ intentions. He presents an example where two agents are intuitively justified in believing that p (p is well supported by their evidence), and they do act as if p—what they actually prefer to do is what they would prefer to do, given the truth of p—but they are not rational in their choice of action in so far as their conception of which action is best is dependent on their beliefs in some other claims which are countered by their evidence. A different choice would have struck them as the best, as utility maximizing, had they responded to their evidence appropriately. The agents are, then, in some sense not rational to act as they do—a sense which Weatherson spells out in the following quote: “If we take rational decisions to be those that maximize utility given a rational response to the evidence, then the decisions are clearly not rational.” (Weatherson 2005, p. 439) This looks, superficially, like a case which counters PCA (the antecedent is true, and the consequent appears to be false), but not when we take into account that “S is rational to act as if p” is taken by Fantl and McGrath as equivalent to, or shorthand for “for all acts A, S is rational to do A, given p iff S is rational to do A in fact.” (Fantl and McGrath 2002, p. 77) The agents in question count as being rational to prefer as if p (in Fantl and McGrath’s sense) because both flanks of the biconditional are false. 
It remains true in Weatherson’s example that what is rational for the agents to do (utility maximizing, given a well grounded response to the evidence), is the same as what is rational for the agents to do, given p.
I thank an anonymous reviewer for bringing this kind of case to my attention.
See pp. 103–104.
Thanks to Peter McInerney, Jim Bell, Tim Hall, Kate Thomson-Jones, and Martin Thomson-Jones for helpful discussion of an earlier draft of this paper. Special thanks are due to Todd Ganson for his considerable feedback and support, as well as to an anonymous reviewer for insightful comments and questions.
References
Cohen S. (1999). Contextualism, skepticism, and the structure of reasons. Philosophical Perspectives, 13, 57–89.
Conee, E., & Feldman, R. (2004). Evidentialism. Oxford: Clarendon Press.
DeRose, K. (1992). Contextualism and knowledge attributions. Philosophy and Phenomenological Research, 52, 913–923.
Fantl, J., & McGrath, M. (2002). Evidence, pragmatics, and justification. The Philosophical Review, 111(1), 67–94.
Howson, C., & Urbach, P. (1993). Scientific reasoning: The Bayesian approach. Chicago: Open Court.
James, W. (1896). The will to believe. New York: Longmans, Green & Co.
Owens, D. (2000). Reason without freedom. London: Routledge.
Stalnaker, R. (1987). Inquiry. Cambridge, MA: MIT Press.
Weatherson, B. (2005). Pragmatic encroachment? Philosophical Perspectives, 19, 417–443.
Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.
Ganson, D. Evidentialism and pragmatic constraints on outright belief. Philos Stud 139, 441–458 (2008). https://doi.org/10.1007/s11098-007-9133-9