Let’s start by making a familiar distinction. Borrowing a case from Turri (2011), consider two jurors in a murder trial, Miss Knowit and Miss Not. Both listened closely to the trial, and have good reasons to believe that the defendant is guilty. Miss Knowit is convinced by the evidence and forms her belief that the defendant is guilty. Miss Not hears and understands the evidence, but it doesn’t move her. Rather, she forms the belief that the defendant is guilty because he looks suspicious. Since they both have good reason to believe that the defendant is guilty, they both believe the right thing. However, Miss Not is in a worse epistemic position than Miss Knowit.

The two jurors case shows us that we need to mark an epistemic distinction: the distinction between an agent who has the right belief for the right reasons, and one who merely has the right belief. Even though Miss Not formed the right belief, her failure to form the belief in the right way puts her in a worse epistemic position. On the other hand, Miss Knowit is in a better epistemic position by forming her belief in the right way. We can mark the epistemic distinction between the two jurors by looking at how the jurors’ beliefs are based. Miss Knowit formed her belief in the right way by basing it on the evidence. By contrast, Miss Not failed to form her belief in the right way by basing it on the defendant’s appearance.

Let’s make use of some standard terminology, that of propositional and doxastic justification.Footnote 1 A doxastic state is propositionally justified just in case it is the state one ought to have; the state that an agent has good reason to have. That doxastic state is doxastically justified just in case it is also formed in the right way; that is, if it is based in the right way.Footnote 2 Applying this to the two jurors case: both jurors’ beliefs are propositionally justified, but only Miss Knowit’s belief is doxastically justified.

Any adequate epistemology needs to be able to accommodate this distinction. However, we face a puzzle when we consider how to apply the basing relation to the logical coherence of an agent’s beliefs. Many hold that whether or not an agent’s beliefs are coherent plays a role in the justification of those beliefs. Further, as I’ll later argue, a belief ought to be based on whatever plays a role in justifying that belief. Yet, it is puzzling how a belief could be based on the fact that the agent’s overall belief state is coherent.Footnote 3 Any epistemologist is going to need to answer this question. Of course, the answer could be as simple as denying that coherence plays a role in the justification of one’s beliefs. For example, in the face of these sorts of challenges some epistemologists, otherwise inclined to take coherence to play a role in justification, weaken the requirement so that the coherence of the entire doxastic state is not required; instead the coherence of a limited subset of that state is required.Footnote 4

This question is particularly pressing for the Bayesian. For the Bayesian, probabilistic coherence plays a key role in whether an agent’s doxastic state is the right one to have. So the Bayesian cannot reply that coherence doesn’t play a role in the justification of one’s credences, or otherwise weaken the requirement, without simply giving up on Bayesianism.Footnote 5 Furthermore, when we apply standard approaches to the basing relation to Bayesian epistemology, we run into serious problems. The general result we get is that no actual agents base their credences in the right way; indeed, they fail to even come close to forming them in the right way. This implies that actual agents’ credences are not even reasonably doxastically justified. Holding fixed the standard approaches to the basing relation, we have a problem for Bayesianism.

However, while the Bayesian can’t deny that coherence plays a role in the justification of one’s doxastic states, there are other options available to the Bayesian. Bayesian epistemology has largely been discussed in the context of formal epistemology, and less so in mainstream epistemology where phenomena like doxastic justification and the basing relation are studied. This puzzle gives us an excellent opportunity to consider how the Bayesian ought to think about these notions. I’ll argue that the Bayesian can handle the problem of allowing that coherence plays a role in doxastic justification by rejecting the standard approaches to the basing relation. By drawing on recent work on the basing relation we’ll see that we can develop an account of the relation that allows for agents to form their credences in the right way. Think of this as an investigation into how the Bayesian should think about the basing relation, as well as the related notions of propositional and doxastic justification. My aim is to look at a natural worry that the Bayesian cannot accommodate these notions, and then argue to the contrary that the Bayesian can.

In what follows, I first introduce background on Bayesian epistemology. In Sect. 2, I consider how to understand the two jurors case and doxastic justification in a Bayesian context. In Sect. 3, I argue that when we apply the standard approaches to the basing relation to Bayesian epistemology we get the result that no actual agent bases her credences in the right way. In Sect. 4, I argue that the results of the previous section constitute a problem for Bayesian epistemology. In Sect. 5, I show how the Bayesian can avoid these problematic results by drawing on recent work on the basing relation.

1 Bayesian epistemology

So far we have largely considered doxastic states like my belief that it will rain tomorrow. These are called binary beliefs because these states are either ‘on’ or ‘off’: one is either in the state or not. These can be contrasted with degrees of belief or credences. These states correspond to how confident I am in a proposition, rather than being merely ‘on’ or ‘off.’ For example, my degree of belief that the sun will rise tomorrow is much stronger than my degree of belief that it will rain tomorrow.

In a Bayesian context, we investigate credences rather than binary beliefs. The target isn’t my belief that P; rather, it is my credence c in P, where c is a real number. More globally, our target isn’t my set of beliefs but my credence function: a function that maps propositions to numbers. These numbers measure how confident the agent is in the proposition.Footnote 6 Since my degree of belief that the sun will rise tomorrow is stronger than my degree of belief that it will rain tomorrow, the number that measures my former degree of belief is larger than the number that measures the latter.Footnote 7

Bayesianism consists of two key claims.Footnote 8 The first claim, (Probabilism), is about what a subject’s credence function should be like at a time. It says that an agent’s credences ought to be probabilistically coherent. More fully, (Probabilism) states that a credence function should be a probability function, where a function Pr is a probability function just in case it satisfies the following three conditions:Footnote 9

(1) For any proposition P, \(\Pr(P) \ge 0\).

(2) For any necessary proposition P, \(\Pr(P) = 1\).Footnote 10

(3) For any mutually exclusive propositions P and Q, \(\Pr(P \text{ or } Q) = \Pr(P) + \Pr(Q)\).
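To fix ideas, here is a minimal sketch, in Python, of what checking these three conditions could look like when propositions are modeled as sets of possible worlds and a credence function as a mapping from propositions to numbers. The representation and all names are illustrative choices of mine, not part of any standard formalism, and the check only ranges over the propositions the function explicitly assigns values to.

```python
from itertools import combinations

# Toy representation: a proposition is a frozenset of possible worlds,
# and a credence function is a dict from propositions to real numbers.
WORLDS = frozenset({"w1", "w2", "w3"})

def is_probability_function(cr, tol=1e-9):
    """Check conditions (1)-(3) over the propositions `cr` assigns values to."""
    # (1) Every credence is non-negative.
    if any(c < 0 for c in cr.values()):
        return False
    # (2) A necessary proposition (true in every world) gets credence 1.
    if WORLDS in cr and abs(cr[WORLDS] - 1) > tol:
        return False
    # (3) For mutually exclusive P and Q, Pr(P or Q) = Pr(P) + Pr(Q).
    for p, q in combinations(cr, 2):
        if p & q:              # the propositions overlap: not mutually exclusive
            continue
        if (p | q) in cr and abs(cr[p | q] - (cr[p] + cr[q])) > tol:
            return False
    return True

cr = {frozenset({"w1"}): 0.5, frozenset({"w2"}): 0.5,
      frozenset({"w1", "w2"}): 1.0, WORLDS: 1.0}
print(is_probability_function(cr))   # True
```

Note that the check for condition (3) already has to range over pairs drawn from the entire function, a point that will matter when we turn to holism below.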

The second claim that characterizes Bayesianism pertains to how a subject’s credence function ought to change upon receiving evidence E. First we need the notion of a conditional probability. This corresponds to how probable a proposition is, given that another proposition is true. It is customary to define the conditional probability of P given E as follows:

(4) \(\Pr(P \mid E) =_{\mathrm{df}} \Pr(P \,\&\, E)/\Pr(E)\), where \(\Pr(E) > 0\).Footnote 11

We can now say precisely how, according to the Bayesian, my credence function should change. If I have just learned E, then in order to find out what my new credence in P should be, the Bayesian says to look to my conditional probability of P given E. When I learn E, my new credence function \(\Pr_{E}\) should relate to my prior credence function \(\Pr\) in the following way: for all propositions P, \(\Pr_{E}(P) = \Pr(P \mid E)\). Call this constraint (Conditionalization).
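As a sketch of (Conditionalization) at work, continuing in the same illustrative vein: here credences are assigned to individual worlds, so that the credence in a proposition is the sum of the credences in the worlds where it is true. The worlds and numbers are invented for the example.

```python
def pr(cr, p):
    """Pr(P) for a proposition P (a set of worlds), given world-level credences."""
    return sum(c for w, c in cr.items() if w in p)

def conditionalize(cr, e):
    """Shift all credence onto the worlds where evidence E holds and renormalize,
    so that the new function satisfies Pr_E(P) = Pr(P & E) / Pr(E)."""
    pr_e = pr(cr, e)
    if pr_e <= 0:
        raise ValueError("Conditionalization is undefined when Pr(E) = 0.")
    return {w: (c / pr_e if w in e else 0.0) for w, c in cr.items()}

# Four worlds, cross-classified by the defendant's guilt (G) and the evidence (E).
cr = {"G&E": 0.4, "G&~E": 0.1, "~G&E": 0.1, "~G&~E": 0.4}
guilty, evidence = {"G&E", "G&~E"}, {"G&E", "~G&E"}

new_cr = conditionalize(cr, evidence)
print(pr(new_cr, guilty))   # 0.8, i.e. the prior Pr(G | E)
```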

These two constraints characterize a minimal version of Bayesianism. Whether or not there should be more constraints is controversial.Footnote 12 However, I’ll set aside these other proposals. (Probabilism) and (Conditionalization) are agreed upon by all Bayesians, and these two constraints are enough to get the problem going. Further, it is straightforward to apply my positive proposal for how the Bayesian should understand the basing relation to these other potential constraints.

The constraints (Probabilism) and (Conditionalization) characterize what an agent’s credence function ought to be; more precisely, they indicate which credence functions are permissible and which are impermissible for a given agent to have. So the Bayesian constraints say that a given agent ought to have one among several permissible credence functions. Earlier I understood propositional justification in terms of which beliefs an agent ought to have. Generalizing to credences, a credence in P is propositionally justified for an agent when it is the credence an agent ought to have in P. Now, on a view where one can permissibly have different credences in a proposition, there is no single credence that an agent ought to have; rather, there are many credences such that the agent ought to have one of them. On a view that allows for this sort of permissiveness regarding which credences an agent ought to have, I think we should say that each of the credences is propositionally justified for a given agent. The minimal version of Bayesianism is just such a view. The two Bayesian constraints don’t uniquely characterize a credence function for a given agent. Rather, they narrow down the range of permissible credence functions; that is, they tell us which credence states are propositionally justified for a given agent.

It’s worth noting that the Bayesian gives an irreducibly holistic account in the sense that, in general, whether or not a credence is permissible for an agent can only be determined by looking at the agent’s entire credence function. For example, while a range of credences in proposition P may be permissible for an agent, each of those credences can only be permissibly held if other particular credences are held. As a simple case, it is only permissible to have a credence of .5 in P if one also has a credence of .5 in \(\sim\!P\). Now, in some cases we can determine whether an agent has a permissible credence without needing to consider the entire credence function. For example, if P is necessary, then that is enough to determine that an agent ought to have a credence of 1 in P. But this doesn’t hold generally. For most credences, the Bayesian can only say whether a credence is permissibly held by looking at whether the entire credence function is permissibly held. This is mainly a consequence of axiom (3) of (Probabilism). Due to this axiom, whether a credence c in P is permissible for an agent depends on whether it is the case that for any proposition Q that is inconsistent with P, the agent’s credence in (P or Q) equals the sum of the agent’s credences in P and in Q. Since propositional justification is a holistic matter for the Bayesian, the Bayesian can only say that an individual credence is justified for an agent in virtue of belonging to an entire credence function that is propositionally justified.

Given this holism, it’s tempting to think of Bayesianism as only applying to entire credence functions rather than to individual credences, so let me briefly defend my practice of applying Bayesianism to individual credences. First, failing to apply Bayesianism to individual credences unduly limits Bayesian epistemology. We make many common-sense judgments about when an agent has the right individual credence or forms that credence in the right way, and I take it that the Bayesian will want to accommodate these judgments. Moreover, the puzzle guiding this paper still arises at the level of entire credence functions. We can distinguish between an agent who just so happens to have a credence function that satisfies the Bayesian constraints, and an agent who forms her credence function because it satisfies the Bayesian constraints. Refusing to apply Bayesian epistemology to individual credences doesn’t avoid the puzzle. Finally, nothing that I say here will turn on whether or not we apply Bayesianism to individual credences. The problems and solution that I present can be understood in terms of either individual credences or entire credence functions. Since it’s easier, and a bit more intuitive, to think about forming individual credences rather than forming entire credence functions, I’ll put things in terms of individual credences. But the reader is welcome to translate what I say in terms of entire credence functions.Footnote 13

I’ll only focus on mainstream versions of Bayesian epistemology, which take (Probabilism) to play an important role in the justification of one’s credences. There are other views that arguably deserve the name “Bayesian”, according to which this isn’t the case. For example, there is a Bayesian version of Kolodny’s (2007) view where the only feature that matters for justification is one’s evidence, and it is simply a knock-on effect of the evidential relations that if one correctly aligns one’s credences with the evidence, then one’s credence function will satisfy (Probabilism).Footnote 14 On such a view, the fact that one’s credences satisfy (Probabilism) plays no role in the justification of one’s credences. For the purposes of this paper, I set these views aside.

However, proponents of the views I’m setting aside may still find the results of this paper interesting. Even if satisfying (Probabilism) isn’t important for perfectly rational agents, consider imperfect agents. While they may fail to perfectly align their credences with the evidence, surely they do better by having a prior credence function that satisfies (Probabilism). So (Probabilism) does play a role in distinguishing how an imperfect agent could do better or worse. Imperfect agents play an important role later in the paper.

To sum up, the Bayesian that I’m focusing on holds that both (Probabilism) and (Conditionalization) characterize how an agent should arrange her credences. These constraints are what make a particular credence function the right one to have. In other words, a credence function is propositionally justified in virtue of satisfying (Probabilism) and (Conditionalization).

2 The basing relation in a Bayesian context

Let’s now consider the basing relation in the Bayesian context. It will be useful to go through the two jurors case using degrees of belief. When the two jurors hear the evidence, they learn a particular evidence proposition E. Suppose both jurors’ credence in E changes to 1. Further, both jurors recognize that what was presented at the trial was evidence in favor of the proposition that the defendant is guilty. So prior to hearing the evidence, both jurors’ conditional probability of the defendant being guilty, G, given E is high. After hearing the evidence, both jurors come to have the same high credence in proposition G. But suppose one juror changed her credence in G because she took the evidence E to be a good reason to do so, while the other juror changed her credence in G because of the suspicious appearance of the defendant. From the Bayesian point of view, both jurors are doing a good job of fulfilling their epistemic duties; they updated their credences in the way that Bayesianism requires them to. But even though both jurors have the right doxastic state, a high credence in G, one of them is in a worse epistemic state than the other. Miss Not’s failure to form her credence in the right way puts her in a worse epistemic position. Putting this in the terminology of propositional and doxastic justification, both jurors’ credence in the defendant’s guilt is propositionally justified, but only Miss Knowit’s credence is doxastically justified.

For the Bayesian, the norms that an agent’s credences ought to follow are the Bayesian constraints, (Probabilism) and (Conditionalization). This codifies what an agent’s credences ought to be; i.e. when they’re propositionally justified. What should we say about how an agent’s credences ought to be formed? What should they be based on in order to be doxastically justified?

The simplest answer is that the credence must be based on the evidence in order to be doxastically justified. This fits with the two jurors case, for Miss Knowit based her credence on the evidence, but Miss Not did not. However, this answer is too limited. We need to invoke the basing relation in other cases as well. Consider a case involving axiom (3) of (Probabilism). Suppose P and Q are logically incompatible, so (Probabilism) requires that an agent’s credence in (P or Q) equal the sum of the agent’s credences in P and Q. Compare two agents. One agent recognizes that P is logically incompatible with Q, and recognizes that the probability of (P or Q) must be the same as the sum of the probability of P and the probability of Q, and sets her credences accordingly. The other agent doesn’t recognize these facts; rather, she just so happens to set her credence in (P or Q) equal to the sum of her credence in P and her credence in Q because the tea leaves she read said to. We need to be able to say that there’s some sense in which the first agent is doing better, from an epistemic point of view, than the second agent, even though they both have the same credence. The way to do that is by invoking the basing relation. The first agent has done a better job at forming her credence in the right way than the second agent, due to the basis of the first agent’s credence. However, we can’t say that the first agent has based her credence on her evidence, whereas the second has not, for conditionalizing on evidence will only guarantee that one’s credence function satisfies axiom (3) of (Probabilism) if one’s prior credence function satisfies this axiom. The second agent may base her credences on the evidence, and yet fail to form her probabilistically coherent prior credence function in the right way. The agent’s epistemic deficiency does not consist in failing to base her credence on the evidence; rather, it consists in failing to base her credences on a feature of (Probabilism).Footnote 15

So we need to look elsewhere for a better answer. We would be better served by considering the standard take on the relationship between propositional justification and doxastic justification. Many epistemologists hold that an agent’s doxastic states ought to be formed on the basis of whatever it is that makes those states the right ones to have. That is, in order for a doxastic state to be doxastically justified, that state must be based on whatever propositionally justifies that state.Footnote 16 Thus in the two jurors case, Miss Knowit forms her credence in the right way in virtue of basing that credence on what makes it the right one to have. And Miss Not fails to form her credence in the right way because her credence isn’t based on what makes it the right one to have.

Applying this to Bayesianism, in order for an agent to form her credences in the right way, she must base those credences on the Bayesian constraints. They must be formed because having those credences satisfies both (Probabilism) and (Conditionalization). But basing a credence on the fact that it satisfies (Probabilism) is where we get our puzzle. Prima facie it is hard to see how a credence could be based on the fact that it satisfies (Probabilism). We can make this worry more acute by considering the standard approaches to the basing relation. As we’ll see in the next section, when we consider the standard approaches to the basing relation, we’ll find that no normal agent bases her credences on the fact that they satisfy (Probabilism). More exactly, no agent bases her credence on every instance of the axioms of (Probabilism). This allows that an agent can base her credence on some of the instances of the axioms of (Probabilism), like the agent in the case above who sets her credence in (P or Q) equal to the sum of her credences in P and Q because she sees that P and Q are mutually exclusive and that the credence in the disjunction of two mutually exclusive propositions ought to equal the sum of the disjuncts’ credences. She bases her credence on an instance of axiom (3) of (Probabilism). But I’ll argue that no agent bases her credence on every instance of the axioms of (Probabilism), and so no agent bases her credence on the fact that her credences satisfy (Probabilism). Since having doxastically justified credences requires basing those credences on the fact that one’s credences satisfy both Bayesian constraints, it follows that no agent’s credences are doxastically justified. In a later section, I’ll argue that this result poses a problem for the Bayesian.

3 Problems for the Bayesian

I now turn to apply the standard approaches to the basing relation to Bayesian epistemology. Perhaps the most prominent account of the basing relation is the causal account.Footnote 17 The basic idea is that a belief is based on something just in case it is a cause of the belief. Applied to credences the idea is that an agent’s credence is based on something just in case it is a cause of the credence:

(Causal) Agent S’s credence c in P is based on R iff R causes S’s credence c in P.

As noted above, in order for an agent to form a credence in the right way, she must base that credence on the fact that the credence satisfies the Bayesian constraints. Given (Causal), this requires the fact that the credence satisfies the Bayesian constraints to cause the agent to have that credence.

I’m going to argue that given (Causal), Bayesianism implies that no agent bases her credences in the right way. Now, it is well known that (Causal) faces a variety of objections, and one might worry that I’m setting up a strawman in order to make my case. However, I’ll argue that formulating a causal account that avoids these objections won’t avoid my problem, so we can stick with the simpler formulation. But first, let’s see why (Causal) and Bayesianism lead to trouble.

The first problem is that it can’t capture the distinction we want. In the two jurors case Miss Knowit’s credence in the defendant’s guilt doesn’t seem caused by the fact that the credence satisfies the Bayesian constraints; in particular, it doesn’t seem caused by the fact that her credences satisfy (Probabilism). Presumably it’s caused by hearing and understanding the evidence, but not by the fact that her credences are probabilistically coherent. So on this account Miss Knowit’s credence isn’t doxastically justified and we can’t distinguish between the two jurors.

However, the account faces deeper worries. For it will never be the case that the fact that a credence function satisfies (Probabilism) is a cause of the agent having a credence that belongs to that credence function. This requires that a feature of an agent’s doxastic state at a time be a cause of a part of the agent’s doxastic state at that time. But this is impossible. There’s a kind of objectionable circularity here: in order for the credence to be caused, the doxastic state must be a probability function, but in order for the doxastic state to be a probability function, the credence must exist. So given (Causal), no agent ever forms her credences in the right way.Footnote 18

Of course, causal accounts of the basing relation face a variety of objections. One might suspect that if the causal account could be fixed up in such a way that it avoids these objections, then my problem will be handled as well. However, this suspicion does not hold up to scrutiny. The standard objections to causal accounts claim that the theory overgenerates instances of the basing relation. That is, the theory implies that doxastic states have more bases than they actually do. For example, the well-known objection from deviant causal chains purports to show that the causal account overgenerates bases for doxastic states by exploiting bizarre ways that something could cause the state.Footnote 19 But fixing the problem of overgeneration will do nothing to fix my problem. For that fix will keep the account from overgenerating bases, but my problem is that the account, together with Bayesianism, undergenerates bases.

More generally, proposed modifications of causal accounts usually add further constraints that a causal relation needs to meet in order to get basing. But this makes it harder to get basing, not easier. My problem is that the simple causal account already makes it too hard to get basing in a Bayesian context.

Perhaps, instead of requiring that the basis of the credence be the fact that the credence function to which the credence belongs satisfies the Bayesian constraints, we should require that the basis be a mental state that represents that the credence function satisfies the Bayesian constraints. This approach is in the vein of a doxastic account of the basing relation, so let’s turn to that account.Footnote 20 The basic idea behind the account is that an agent’s belief is based on something just in case the agent takes it to be a reason to have the belief. In the case of credences, an agent’s credence c in P is based on something just in case the agent takes it to be a reason to have that credence. More precisely:

(Doxastic) Agent S’s credence c in P is based on R iff S takes R to be a reason to have c in P.Footnote 21

Given (Doxastic), an agent forms a credence in the right way just in case she takes the fact that her credences satisfy the Bayesian constraints to be a reason to have that credence. (Doxastic) requires that an agent have appropriate higher-order beliefs or credences about her own credences and take those higher-order beliefs to be reasons to have those credences. But this leads to serious problems with basing on (Probabilism).

The problem is that in order for an agent to base her credence on what propositionally justifies that credence she will need to base it on the fact that the credence is a part of a probabilistic credence function. (Doxastic) requires that the agent take the fact that her credences are probabilistically coherent to be a reason to have the credence. Here is where we get our problem. For it isn’t plausible that actual agents are able to grasp all of their credences at the same time in order to determine whether they are probabilistically coherent.Footnote 22 As a result, this account can’t capture the distinctions we want. Consider the two jurors case and suppose that Miss Knowit cannot grasp all of her credences. Then on this account, Miss Knowit lacks doxastic justification and we can’t distinguish between Miss Knowit and Miss Not.

But even if we waived the previous point, as with (Causal), there is a deeper problem for (Doxastic). (Doxastic) combined with Bayesianism requires an agent to have powerful cognitive resources in order to determine whether a particular credence satisfies the Bayesian constraints; particularly due to (Probabilism). This is so because (Doxastic) requires that the agent check for probabilistic coherence among all of her credences. However, there are strong arguments against views requiring rational agents to have such resources.Footnote 23 These arguments are familiar in the case of the coherence of binary beliefs, and it is straightforward to apply them to probabilistic coherence.

(Probabilism) requires that one’s credence function obey axiom (3) from above, which says that if two propositions are mutually exclusive, then the probability of their disjunction is the sum of their probabilities. Let us restrict our attention to cases where propositions are logically incompatible. By (Doxastic), in order for an agent’s credence c in P to be based on the fact that the credence satisfies the Bayesian constraints, the agent must believe that having c in P satisfies axiom (3). That is, the agent must believe that for any proposition Q that the agent has a credence in and that is logically incompatible with P, the sum of the agent’s credences in P and Q ought to equal the agent’s credence in the disjunction of these propositions. Not only must the agent have this belief; she must also take it to be a reason to have that credence. Among other things, this will require that the agent assess whether P is logically incompatible with every other proposition that she has a credence in. Further, when a proposition is incompatible with P, the agent must take that fact to be a reason to set her credences accordingly. But given the massive number of propositions that most agents have credences in, this task is too difficult for any actual agent to perform.Footnote 24 Hence, actual agents never form their credences in the right way.Footnote 25
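To give a rough sense of the scale involved, the following sketch simply counts the pairwise assessments such an agent would face; the proposition counts are invented, and each assessment is itself a non-trivial logical task, so these numbers understate the difficulty.

```python
from math import comb

# With credences in n propositions, there are C(n, 2) pairs to assess for
# logical incompatibility before one can even check that the relevant sums
# line up as axiom (3) demands.
for n in (10, 1_000, 1_000_000):
    print(f"{n} propositions -> {comb(n, 2)} pairwise assessments")

# 10 propositions -> 45 pairwise assessments
# 1000 propositions -> 499500 pairwise assessments
# 1000000 propositions -> 499999500000 pairwise assessments
```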

A natural thought is to understand the doxastic account in a weaker way. Suppose that an agent’s credence function satisfies the Bayesian constraints, the agent believes that her credence function satisfies the Bayesian constraints, and she takes this belief to be a reason to have the credences that are part of that credence function. Is this enough for her to have formed her credences in the right way? Only if the belief that her credence function satisfies the Bayesian constraints is itself formed in the right way. For if the belief that one’s credences satisfy the Bayesian constraints isn’t formed in the right way, then it isn’t doxastically justified. And if it isn’t doxastically justified, then it can’t be used to doxastically justify one’s credences. Now, it might be tempting to hold that all the agent needs is to be appropriately sensitive to whether the credence satisfies the Bayesian constraints, perhaps by understanding appropriate sensitivity in terms of reliability. But in keeping with our concern with the basing relation, we also need to worry about the basis of the belief that the credence satisfies the Bayesian constraints. Here we need to apply a doxastic account of the basing relation, for we need a unified account of the basing relation. We don’t want one account for credences and a different one for beliefs. Since the belief that one’s credences are probabilistically coherent doesn’t seem to be a basic belief, in order for the belief that one’s credences satisfy the Bayesian constraints to be formed in the right way, the agent must have good reasons for this belief. And having good reasons for this belief, in accordance with a doxastic account of the basing relation, will ultimately require that the agent check whether her credences satisfy the Bayesian constraints. So we are back to our original problem, for this task is too difficult for any normal agent to perform.Footnote 26

So we run into trouble with both the causal account and the doxastic account. The trouble generalizes too, because other prominent accounts of the basing relation make use of notions from these accounts. For example, Swain (1981) has developed a causal account that makes use of counterfactual causes in order to avoid problems facing standard causal accounts. Roughly, R is the basis of a doxastic state just in case either R is the actual cause of that state, or if the actual cause had not caused the state, then R would have caused the state. However, this account will face the same problems as the causal account. Above, I argued that it’s impossible for the fact that a credence function satisfies the Bayesian constraints to be the cause of an agent’s credence that is part of that credence function. This implies that it cannot be a counterfactual cause.Footnote 27

We can even say something a bit more general. Standard approaches to the basing relation are guided either by causal factors, or by doxastic factors, or a mix of the two.Footnote 28 But we’ve seen that being guided by either causal factors or doxastic factors leads to the result that normal agents do not form their credences in the right way, since they do not allow normal agents to base their credences on the fact that they satisfy (Probabilism). Since standard approaches are guided by these two factors we get serious trouble when Bayesianism is combined with the standard ways of thinking about the basing relation.

4 Is this a problem for the Bayesian?

We have seen that when we apply standard ways of thinking about the basing relation to Bayesianism we get the result that actual agents fail to form their credences in the right way; that is, they fail to have doxastically justified credences. In this section, I argue that the results from the previous section constitute a serious problem for Bayesianism.

The result that actual agents fail to form their credences in the right way is a problem to the extent that we think that actual agents do form their credences in the right way. We might make the case by analogy with binary belief. We take it that actual agents often form their beliefs in the right way; that is, that actual agents often have doxastically justified beliefs. To the extent that a theory implies that agents do not have doxastically justified beliefs, that is a mark against the theory. Likewise, if we ordinarily take it that actual agents have doxastically justified credences, then to the extent that a theory implies that actual agents do not have doxastically justified credences, that is a mark against the theory.

However, things are not so simple, for actual agents often don’t form their credences in the right way. Actual agents often fail to satisfy the Bayesian constraints. For example, I don’t always perfectly conditionalize on my evidence. Moreover, there are many complicated logical truths that I don’t have a credence of 1 in. And I’m not unique in this respect. If actual agents don’t satisfy the Bayesian constraints, then they don’t have the right credences. And if they don’t have the right credences, then they don’t form their credences in the right way; for in order for a doxastic state to be formed in the right way, it must be based on what makes that doxastic state the right one to have. If that doxastic state isn’t the right one to have, then nothing makes it the case that it is the right one to have. So we already have reason to think that actual agents often don’t form their credences in the right way. If this is right, then the result of the last section is no problem for the Bayesian, since agents don’t have doxastically justified credences. The fact that applying the standard approaches to the basing relation to Bayesianism implies this is not a bad result.

So there is reason to think that this isn’t a bad result. However, I’ll argue that the results from the previous section still lead to trouble for the Bayesian. The Bayesian needs to be able to say that there is some sense in which normal, everyday agents are doing better than agents who are, intuitively, extremely irrational. Or that there is some sense in which, say, I’m doing better epistemically than I was doing ten years ago. The Bayesian needs some way of evaluating agents that is more fine-grained than whether they satisfy the Bayesian constraints.

While actual agents don’t meet the Bayesian constraints, they can do better or worse at satisfying those constraints. And if we focus on a particular subset of our credences, perhaps ones that are logically/mathematically simple, then that subset can fare reasonably well when it comes to the Bayesian constraints. While Bayesianism is holistic, it can be extended to yield evaluations of individual credences. For example, the first two axioms of (Probabilism) can be individually evaluated. Of my current credence c in P, we can ask whether \(c \ge 0\). We can also ask whether \(c = 1\), if P is necessary. (Conditionalization) also suggests a way of evaluating individual credence assignments. We can ask if my credence in P is equal to the number that (Conditionalization) says it should be. Here we can clearly make sense of an agent doing better or worse at changing her credence in response to the evidence. A credence that is close to, though not exactly, the result of conditionalizing on one’s evidence is much better than one that is wildly off. The most holistic part comes in with respect to axiom (3) of (Probabilism). For this axiom involves the relationships between credences. However, there are still things we can say about individual credences. A particular credence can do better or worse based on how many violations of (3) it is involved in.
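As a sketch of how such counting might go, reusing the earlier toy representation of propositions as sets of worlds (the counting scheme is my own illustrative choice, not something Bayesianism dictates):

```python
from itertools import combinations

def axiom3_violations(cr, prop):
    """Count the instances of axiom (3) that the credence in `prop` is
    involved in violating; propositions are frozensets of worlds."""
    violations = 0
    for p, q in combinations(cr, 2):
        if p & q or (p | q) not in cr:
            continue   # not mutually exclusive, or disjunction unassigned
        if prop not in (p, q, p | q):
            continue   # only count instances this credence figures in
        if abs(cr[p | q] - (cr[p] + cr[q])) > 1e-9:
            violations += 1
    return violations

cr = {frozenset({"w1"}): 0.5, frozenset({"w2"}): 0.5,
      frozenset({"w1", "w2"}): 0.7}   # the disjunction falls short of the sum
print(axiom3_violations(cr, frozenset({"w1", "w2"})))   # 1
```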

By way of example, suppose we have two agents whose credences don’t completely satisfy the Bayesian constraints, on the grounds that they haven’t perfectly conditionalized on their evidence. However, one agent’s credences are much closer to where they should be compared with the other agent’s credences. There is a clear sense in which the first agent is doing much better than the second, and a natural way in which we can evaluate individual credences is by considering how far away they are from where (Conditionalization) says they ought to be.
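A minimal sketch of such an evaluation, with an arbitrarily chosen distance measure (the average absolute gap); nothing in Bayesianism privileges this particular measure, and the credences are invented for the example:

```python
def distance_from_conditionalization(actual, target):
    """Average absolute gap between an agent's credences and the credences
    that (Conditionalization) dictates, over the propositions both assign."""
    shared = actual.keys() & target.keys()
    return sum(abs(actual[p] - target[p]) for p in shared) / len(shared)

target = {"G": 0.9, "~G": 0.1}           # what conditionalizing dictates
close_agent = {"G": 0.85, "~G": 0.15}    # nearly where she should be
wild_agent = {"G": 0.2, "~G": 0.8}       # wildly off

print(distance_from_conditionalization(close_agent, target))   # ~0.05
print(distance_from_conditionalization(wild_agent, target))    # ~0.7
```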

So we can make sense of particular credences being more or less the right ones to have, and we can make sense of an agent’s credence function being more or less the right one to have. With this in mind, the door is open to hold that actual agents do a reasonably good job at having the right credences, at least with respect to logically and mathematically simple propositions. Further, it is open to hold that actual agents do a reasonably good job at forming their credences. This seems quite plausible. We will at least want to hold that some actual agents form their credences well enough that we can distinguish them from agents who form their credences in clearly bad ways. The two jurors case is a good example of this. Even if Miss Knowit doesn’t do a perfect job at forming her credences in the right way, we still want to say that there is a substantial way in which she’s doing better than Miss Not.

With this in mind, we can adopt a degreed notion of justification, corresponding to doing better or worse at forming one’s credences.Footnote 29 The better the agent does at forming her credences, the greater the doxastic justification. While actual agents don’t have perfectly doxastically justified credences they do have a reasonable degree of doxastic justification for those credences.

The result of the previous section shows that Bayesianism and the standard accounts of the basing relation imply that actual agents never form their credences in the right way. As we’ve seen, this isn’t by itself a problematic result, for actual agents often don’t form their credences in the right way. But actual agents do manage to do a reasonably good job at forming their credences in the right way. So it is a problem if Bayesianism entails that they don’t even come close. And the results from the previous section generalize to show that, given the standard approaches to the basing relation, Bayesianism does imply that actual agents don’t even come close. This is straightforward on the causal account of the basing relation, for the required causal relations aren’t possible. It’s only slightly less straightforward on the doxastic account. Given the amount of computing power required, and how far beyond our reach it is, actual agents do not even get close. Since the standard ways of thinking about the basing relation are guided by either causal or doxastic factors, they imply that actual agents cannot even get close to forming their credences in the right way.

Given the standard ways of thinking about the basing relation, the Bayesian cannot countenance an epistemic distinction between myself and someone with the same credences who forms them in seemingly bizarre ways. The Bayesian can’t make an epistemic distinction between the two jurors who end up with the right credence.Footnote 30 Yet there is an epistemic distinction between the two jurors. Even worse, given the connection between a doxastic state being formed in the right way and that state being doxastically justified, Bayesianism implies that we always lack doxastic justification for our credences. Yet, actual agents do sometimes have credences that are doxastically justified to a reasonable degree. This gives us a serious problem for Bayesianism.

I think that this problem gets at a common complaint about Bayesianism. It is not uncommon to hear a complaint along the following lines: “Bayesianism is a mathematically elegant theory, but it has little to do with epistemic norms that apply to human beings”. We can admit that actual agents can do reasonably well at having the right credences, though not perfectly well. But, upon considering the basing relation, we seem to get the result that actual agents never come close to forming their credences in the right way.Footnote 31

5 Basing for the Bayesian

The preceding results lead to serious trouble for Bayesianism. I will now argue, however, that by re-thinking the standard approaches to the basing relation, we avoid these problems. The Bayesian can adopt an alternative account of the basing relation that allows that agents can base their credences on the fact that they satisfy (Probabilism). Or in the case of agents that don’t perfectly satisfy the Bayesian constraints, they can base their credences on the elements of (Probabilism) that they do satisfy.

To achieve that end, let’s take a step back and think about what kind of picture of the basing relation would work better. The standard approaches require that the basis of a doxastic state directly explain why the agent has that doxastic state. But the natural ways in which a basis explains a doxastic state are by causing that state, or by being something that the agent takes to be a reason to have the state. As we’ve seen, this leads to problems in a Bayesian context.

A better approach allows that the basis is less directly explanatorily relevant to the doxastic state. For example, if we favored a causal way of thinking about the basing relation we would only require that the basis is somehow causally relevant to the doxastic state without actually being a cause of the state. If we can find a way to spell out this picture into a full account of the basing relation, then there’s promise that the Bayesian can avoid the problems of the previous sections.

As it turns out, Thomas Kelly (2002) has proposed a general constraint on the basing relation that gives us what we want:

(Conditions) Whether R is a reason on which S’s believing P is based will be reflected in the conditions under which S would (or would not) continue to believe P.Footnote 32

Kelly proposes it in a different context, but it will turn out to be useful for guiding us towards an account of the basing relation that works better in a Bayesian context. But first let’s consider the plausibility of (Conditions).

The plausibility of (Conditions) comes out when we see how natural it is to presuppose it in particular cases. Consider the two jurors case. It’s intuitive to say that Miss Knowit has based her belief that the defendant is guilty on the evidence, because the conditions under which she would continue to believe that the defendant is guilty are sensitive to the evidence. If she later found out that the evidence was fabricated, she would not continue to believe that the defendant is guilty. In the case of Miss Not, it’s intuitive to say that she did not base her belief that the defendant is guilty on the evidence, because the conditions under which she would continue to have that belief are not sensitive to the evidence. If she found out that the evidence was unreliable, she would continue to have the belief. By contrast, if she realized that the defendant was not really suspicious-looking, but he only appeared that way due to the lighting, then she would not continue to believe that the defendant is guilty.

Evans (2013) offers a different case to motivate (Conditions). The case comes from an experiment reported in Haidt et al. (2000). In the experiment, subjects are given a hypothetical case involving incest; by design the instance of incest cannot lead to bad consequences. Many subjects report a strong belief that this instance of incest is wrong. When asked why it is wrong, they offer a variety of reasons involving bad consequences that often result from incest. However, when reminded that these reasons do not apply to this case, they do not give up their belief that this instance of incest is wrong. Rather, they either provide different reasons or provide no reasons at all. Intuitively, the subjects’ belief that the instance of incest is wrong is not based on the reasons they gave, because when they see that their reasons are not good ones, they don’t give up the belief.

Evans takes this case to motivate (Conditions). (Conditions) can explain our intuition that the subjects have not based their belief on the reasons they give. The thought is that since the subjects do not give up their belief that the instance of incest is wrong when the reasons they give are rebutted, the conditions under which they continue to have that belief do not reflect those reasons. By (Conditions), those reasons are not the basis for their belief.

First, let’s translate (Conditions) into the Bayesian’s ideology:

(B-Conditions) Whether R is a reason on which S’s credence c in P is based will be reflected in the conditions under which S would (or would not) continue to have c in P.Footnote 33

(B-Conditions) allows for a looser connection between a credence and its basis than the previous conceptions of the basing relation that we were working with. Consider the doxastic account of the basing relation. As we saw in Haidt’s case involving judgments about incest, an agent can take something to be a reason to have a belief or credence, even though the reason is not reflected in the conditions under which the agent would continue to have that belief or credence.Footnote 34 As for the causal account of the basing relation, (B-Conditions) allows for a more indirect causal connection than does (Causal).

I think (B-Conditions) is plausible, and is motivated by considering how it applies to the two jurors case and Haidt’s case. So I’ll be assuming it from here on out, and showing that the Bayesian can use it to provide a more adequate account of the basing relation. Clearly, this falls short of a full defense of (B-Conditions). But my main goal is just to show that there is a principled and non-ad-hoc conception of the basing relation that the Bayesian can adopt that avoids the problems described above. In other words, I’m interested in how the Bayesian should think about the basing relation.

Now let’s turn to specific accounts of the basing relation that are guided by (B-Conditions). In effect, we are looking for ways of more precisely spelling out what it takes for the conditions under which an agent would continue to have a credence to reflect a basis. The way I was making use of (Conditions) when I considered the two jurors case and Haidt’s incest case was in terms of counterfactuals. Indeed, it is very natural to try to capture this in terms of counterfactuals. The conditions under which an agent holds a belief are sensitive to R just in case the following counterfactual holds:

(5) If the agent were to no longer have R, then she would no longer hold the belief.

Unfortunately, understanding conditions in terms of such counterfactuals is too crude. Suppose an agent’s belief is overdetermined, in that the belief is based on two reasons and she would still have the belief if she lost only one of the reasons, but she would no longer hold the belief if she lost both. In this case, the instance of (5) would be false for both reasons. But that fact doesn’t disqualify the agent from basing her belief on those reasons.

We can do better by appealing to dispositions. We can say that the conditions under which an agent would hold a belief are sensitive to R just in case the agent has the disposition characterized as:

(6) The agent is disposed to lose her belief if she loses R.

This avoids the difficulty of the agent with two reasons. An agent can be disposed to lose her belief if she loses one of the two reasons, even though were she to lose one of the reasons, she wouldn’t lose the belief, since she still has the other reason. She can still have the disposition, even though the corresponding counterfactual is false. The presence of the other reason keeps the disposition’s manifestation from obtaining.

Switching to credences, the thought is that the conditions under which an agent would continue to have a credence reflect a basis just in case the agent is disposed to revise her credence if she loses that basis. The simplest way of turning this into a full account of basing is by taking this disposition to be all there is to the basing relation. This is Evans’ (2013) dispositional account of the basing relation, and when adjusted for the Bayesian’s ideology we get:

(Disposition) S’s credence c in P is based on R iff S has the disposition to revise her credence in P if R does not hold.Footnote 35

Before applying (Disposition), let me clarify what I mean by the expression ‘R does not hold.’ I’m thinking of the sorts of things that are the bases for doxastic states as being fact-like. The fact that something is reliably formed, the fact that it is probabilistically coherent, and so on. So the natural way of interpreting the expression is by saying that the fact does not obtain; alternatively, we could say that the proposition that expresses the fact is not true.

(Disposition) can do the work we want. An agent can have a disposition to revise her credences if they do not satisfy (Probabilism). By adopting (Disposition), we can say that the agent has based her credences on the fact that they satisfy (Probabilism). This allows us to solve our key problem of giving an account of how an agent can base her credences on the fact that they satisfy the Bayesian constraints.

To illustrate, let’s consider the two jurors case. We can say that Miss Knowit has based her credence in the defendant’s guilt on the fact that having the credence satisfies the Bayesian constraints, on the grounds that she is disposed to revise her credence if having that credence does not satisfy the Bayesian constraints. We can consider each constraint in turn. Miss Knowit has the disposition to revise her credence in the defendant’s guilt, if that credence does not satisfy (Conditionalization). Further, Miss Knowit has the disposition to revise her credence, if her credences do not satisfy (Probabilism). However, Miss Not does not have the disposition to revise her credence, if that credence does not satisfy (Conditionalization), for she is not moved by her evidence. So we can distinguish the two jurors on these grounds. We can say that both jurors have the right credence, but only Miss Knowit has formed her credence in the right way, in virtue of the fact that she has the relevant disposition.Footnote 36

The key question to ask is whether (Disposition) avoids the problems that faced the standard causal and doxastic accounts we considered above. It’s clear that (Disposition) avoids the problems that arose in connection with (Causal), since (Disposition) doesn’t require any kind of causal relation between a feature of the juror’s doxastic state at a time and a part of that doxastic state. However, one might object that just as (Doxastic) required that an agent check whether her credences are probabilistically coherent, (Disposition) does as well. While (Disposition) doesn’t require that the agent take the fact that her credences are probabilistically coherent to be a reason to have the credences, surely having a disposition to revise one’s credences if they don’t satisfy (Probabilism) requires checking that the credences are probabilistically coherent. If the agent, or at least her cognitive system, isn’t doing these checks, then it would seem to be by luck or magic that she manages to have these dispositions.Footnote 37

This is a perfectly natural objection; however, there is a strong response. As it turns out, an agent can have a disposition to revise her credences if they don’t satisfy the Bayesian constraints, even if there is no sense in which the agent is computing her credences. Instead, it could be that the structure of the cognitive system is set up so that it is simply a mechanical matter that the credences will be revised if they don’t satisfy the Bayesian constraints. Consider an analogy. A sundial has the disposition to display a shadow on a specific region if it is a specific time of day.Footnote 38 But clearly there’s nothing in the sundial that’s calculating the angle of the sun. Rather, the sundial has the disposition it does because of the structure of the sundial. The same can hold for our cognitive system.

For example, consider a simple, but highly unrealistic, model of how a cognitive system might work. Suppose that credences correspond to a fluid that occupies a container in our brains. There are various regions in the container corresponding to different propositions, and the height of the level of the fluid in a region corresponds to how high the credence is in the proposition that corresponds to that region. Now there might be canals in the container so that whenever the fluid occupies the region for a proposition, some of the fluid will flow from that region to another region. Suppose the canals are so arranged that for any region for a proposition P, some of the fluid will flow to the various regions corresponding to disjunctions of P with propositions that are inconsistent with P. These canals create a tendency for the amount of fluid that flows from the P-region to each of these disjunction regions to match the amount of fluid in the P-region itself.

Let’s focus on one of these disjunction regions. Consider a region that corresponds to the disjunction of P and Q where P and Q are mutually exclusive. The canals create the tendency for the same amount of fluid that occupies the P-region to flow from the P-region to the (P or Q)-region, and likewise for the Q-region. In virtue of these canals, the overall system will be disposed to have the same quantity of fluid in the (P or Q)-region as the sum of the quantities of fluid in the P-region and in the Q-region; that is, the system will have the disposition for the credence in (P or Q) to equal the sum of the credence in P and the credence in Q. But the system can have this disposition without anything actually computing the credences. It simply has the disposition as a result of the structure of the system.
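Here is a minimal simulation of such a system’s dynamics; the numbers and the update rule are invented for the illustration. (Of course, the simulation itself is a computation, but the fluid-and-canal system it models performs none.)

```python
# The "canal" rule mechanically nudges the credence in (P or Q) toward the
# sum of the credences in the mutually exclusive disjuncts P and Q. No step
# represents or checks coherence; the system simply tends toward it in
# virtue of its structure.
cr = {"P": 0.3, "Q": 0.2, "P or Q": 0.9}   # starts out incoherent
FLOW_RATE = 0.1                            # how quickly the levels equilibrate

for _ in range(200):
    gap = (cr["P"] + cr["Q"]) - cr["P or Q"]
    cr["P or Q"] += FLOW_RATE * gap        # fluid flows along the canal

print(round(cr["P or Q"], 6))              # 0.5 = cr["P"] + cr["Q"]
```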

Clearly, it is unrealistic that what’s actually going on in our cognitive systems is anything like my simple model. But the purpose of the model is simply to show how a system could have a disposition to revise one’s credences if they don’t satisfy the Bayesian constraints without any computation going on. Once we see the general idea, then we can consider more realistic models of human cognition that don’t proceed in computational terms. There is a thriving research program devoted to non-computational approaches to different aspects of human cognition. There are various ‘dynamical models’ put forward, such as the well-known connectionist models.Footnote 39

Now, if an agent can have the disposition to revise her credences if they don’t satisfy (Probabilism) without needing to compute those credences, then we avoid the worry that faced (Doxastic), since that worry was based on the fact that computing whether one’s credences satisfy (Probabilism) is an intractable task. But why can’t we use the same strategy to defend (Doxastic) against the worry? Why can’t an agent take the fact that her credences are probabilistically coherent to be a reason to have her credences without checking and computing those credences? We can’t use this strategy to defend (Doxastic) because an agent’s taking something to be a reason for a doxastic state requires that the agent actually represent what it is that she is taking to be a reason for that doxastic state.Footnote 40 Computational approaches are representational, for a computational process can be described, at some level of abstraction, as the manipulation of symbols. For example, a truth table method for determining consistency is clearly a process that proceeds by way of manipulating symbols. But non-computational processes are, in general, not representational in this way.Footnote 41 In the fluid-in-the-container model that I presented above, there isn’t any sense in which the system is manipulating any symbols, so there’s no sense in which the system is calculating or determining the coherence of the credences. But if the process doesn’t represent the reasons for the doxastic state, then an agent can’t take those reasons to be reasons to have that doxastic state. So if what is going on in an agent’s cognitive system is something like the fluid-in-the-container model, and there’s no accompanying computational process, then we can’t say that the agent is taking the coherence of her credences to be a reason to have those credences.

Here’s a concrete example. Suppose an agent has probabilistically coherent credences and forms the belief that her credences are probabilistically coherent. However, suppose the process that produced that belief doesn’t represent any of the reasons for that belief; for example, for each pair of mutually exclusive propositions, the process doesn’t represent the fact that the sum of the credences in those propositions is equal to the credence in their disjunction. It doesn’t seem right to say that the agent is taking these facts about the correlation between the sum of the credences in two exclusive propositions and the credence in their disjunction to be a reason to have the belief that her credences are probabilistically coherent.

Of course, much of this depends on how we understand taking something to be a reason for a doxastic state. We could always choose to talk in such a way that if a non-computational process regulates the coherence of an agent’s credences, then that counts as the agent taking the coherence of her credences to be a reason to have those credences. If we interpreted (Doxastic) in this way, then it would avoid my objection. But this is at odds with how (Doxastic) is traditionally understood, and is in fact much more in line with my own approach, characterized by (Disposition).

Another advantage of (Disposition) is that, although actual agents generally don’t perfectly satisfy the Bayesian constraints, there is no general obstacle to actual agents coming close to the standards set by (Disposition). While we generally don’t satisfy the Bayesian constraints, we can do better or worse at satisfying them, and we can single out individual credences or sets of credences and evaluate those credences. (Disposition) allows us to extend that thought to the bases of credences as well. An agent might not perfectly satisfy the Bayesian constraints, but still come reasonably close. Further, that agent can be disposed to revise her credences if they are not reasonably close. So we can say that the agent did a reasonably, though not perfectly, good job at forming her credences.Footnote 42
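One way to make ‘reasonably close’ precise, offered purely as a hedged sketch (the paper commits to no particular measure, and the function below is my own invention), is to sum an agent’s violations of the additivity requirement over pairs of mutually exclusive propositions: zero for perfect coherence, small positive values for credences that do reasonably well.

```python
# A hypothetical incoherence measure; the paper does not endorse any
# specific way of scoring distance from probabilistic coherence.

def incoherence(cred, exclusive_pairs):
    """cred maps proposition labels to credences; exclusive_pairs lists
    triples (p, q, p_or_q) with p and q mutually exclusive. Returns the
    total absolute violation of additivity: 0 means perfectly coherent
    on these pairs."""
    return sum(abs(cred[p] + cred[q] - cred[p_or_q])
               for p, q, p_or_q in exclusive_pairs)

cred = {"P": 0.3, "Q": 0.4, "P or Q": 0.72}
print(incoherence(cred, [("P", "Q", "P or Q")]))  # ~0.02: close, not perfect
```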

While (Disposition) allows us to secure the basing relations that we want, one might worry that it is too permissive in the basing relations that it allows. Does it allow doxastic states to be based on intuitively bizarre facts? Fortunately, the account avoids this worry. Using dispositions keeps this account more discriminating than a simple counterfactual account. Even though an agent’s doxastic states can be counterfactually dependent on all sorts of facts, the agent will not have dispositions sensitive to all of these facts. For example, if my parents had never met, then I wouldn’t have formed the credences that I did; but I have no disposition to revise my credences should my parents never have met. So we have a principled way, on this account, to deny that the fact that my parents met is a basis of my credences.Footnote 43

(Disposition) is a somewhat minimal account of the basing relation. Some philosophers may prefer accounts that are closer in spirit to the causal or doxastic accounts of the basing relation. Further, I don’t want to stake my case for the Bayesian entirely on the dispositional account; it would be desirable if the Bayesian could remain more neutral with respect to the basing relation. It turns out that we can develop better versions of the causal and doxastic accounts if we develop them with an eye to keeping them more in line with (B-Conditions). The key is to respect the original idea that what marks out the basis of a doxastic state is what causes it, or what the agent takes to be a reason to have that state. However, instead of the basis simply being the cause or what the agent takes to be a reason, we let the basis be something that the cause or the reason is sensitive to. As before, we can use dispositions to spell out the sense in which the cause or the reason is sensitive to the basis.

Let’s first consider the causal account. At a first pass, basing a credence on R requires that the agent be disposed such that the credence would not be caused by its actual cause if R didn’t hold. More precisely:

(Causal-Disposition) S’s credence c in P is based on R iff S has the disposition to be such that the causes of S’s credence would not cause S to have that credence in P, if R does not hold.

This disposition can be manifested in two ways. Supposing R doesn’t hold, the disposition could manifest by the cause of S’s credence causing S to have a different credence in P, or it could instead manifest by not causing S to have any credence in P at all. Suppose that an agent has based her credence in P on the fact that the credence satisfies the Bayesian constraints. In that case, (Causal-Disposition) implies that the agent has the disposition to be such that the causes of her credence would not cause her to have that credence in P if the credence did not satisfy the Bayesian constraints.
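As a toy rendering of the shape of this disposition (again with invented names, and with a crude stand-in for causation), consider a credence-forming process whose operation is itself gated on the constraint R: when R would fail, the would-be cause produces no credence at all, one of the two manifestations just described.

```python
# A caricature of (Causal-Disposition): the 'actual cause' of the
# credence is a process that only produces the credence when R holds.
# All names here (R, form_credence, evidence_strength) are hypothetical.

def R(cred):
    """Stand-in constraint: additivity on one exclusive pair."""
    return abs(cred["P"] + cred["Q"] - cred["P or Q"]) < 1e-9

def form_credence(cred, evidence_strength):
    """The cause of the credence in P. If adopting the new credence
    would leave R satisfied, the cause produces it; otherwise the
    disposition manifests and no credence in P is produced."""
    candidate = dict(cred, P=evidence_strength)
    if R(candidate):
        return candidate   # the cause operates as usual
    return None            # manifestation: the cause fails to cause

cred = {"P": 0.3, "Q": 0.4, "P or Q": 0.7}
print(form_credence(cred, 0.3))  # R holds: credence produced
print(form_credence(cred, 0.5))  # R would fail: nothing produced
```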

We can see how this account works in the two jurors case. Consider the causes of Miss Knowit’s credence in the defendant’s guilt: her hearing of the testimony, and so on. We can plausibly say that Miss Knowit has the disposition to be such that the actual causes of her credence wouldn’t have caused that credence if her credences no longer satisfied (Probabilism) and (Conditionalization).Footnote 44 Miss Not, meanwhile, lacks these dispositions: her credence in the defendant’s guilt would still be caused by the suspicious appearance of the defendant even if the credence didn’t satisfy the Bayesian constraints. So (Causal-Disposition) allows us to mark an epistemic distinction between the two jurors. While both jurors have the right credences, in virtue of satisfying the Bayesian constraints, only Miss Knowit has formed her credence in the right way, in virtue of the relevant dispositions concerning the causes of her credence.

This account allows the Bayesian to avoid the problems that arose when we adopted the standard causal account of the basing relation. It doesn’t require that the fact that an agent’s credences satisfy (Probabilism) cause the agent to have those credences. Instead, we require that the agent be disposed to be such that the actual causes would not cause the credences if having those credences did not satisfy (Probabilism). So we avoid requiring the problematic causal relations from before.

Furthermore, we can apply this account to agents who fail to completely satisfy the Bayesian constraints. Suppose an agent does not completely satisfy the Bayesian constraints, but has a credence c that does reasonably well. Perhaps having c satisfies (Conditionalization) and axioms (1) and (2) of (Probabilism), but is involved in some violations of axiom (3). Suppose further that the agent is disposed to not have c be caused by its actual causes if it’s not the case that c satisfies (1) and (2) and is the result of conditionalizing on the agent’s evidence. We can say that the agent has done a fairly good job of forming c in the right way. This standard is something that an actual agent can meet. So (Causal-Disposition) can be applied to actual agents.
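Here is a worked instance of this sort of case, under the assumption (mine, since the paper’s own numbering appears earlier) that axiom (1) is non-negativity, axiom (2) assigns credence 1 to tautologies, and axiom (3) is finite additivity:

```python
# Hypothetical credences that do 'reasonably well': they satisfy
# axioms (1) and (2) but mildly violate additivity, axiom (3).
# The axiom numbering is an assumption matching the standard
# Kolmogorov-style presentation.

cred = {"P": 0.3, "Q": 0.4, "P or Q": 0.8, "P or not-P": 1.0}

axiom1 = all(v >= 0 for v in cred.values())              # non-negativity: True
axiom2 = cred["P or not-P"] == 1.0                       # tautology gets 1: True
violation = abs(cred["P"] + cred["Q"] - cred["P or Q"])  # additivity gap

print(axiom1, axiom2, violation)  # True True ~0.1: close, but not coherent
```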

Developing a doxastic account proceeds along much the same lines. We can require that whatever the agent takes to be a reason to have a credence be sensitive to the basis of the credence. Again, we will use dispositions to mark out how the agent’s reason is sensitive to the basis. This gives us:

(Doxastic-Disposition) S’s credence c in P is based on R iff S is disposed to not take her actual reasons for her credence to be reasons to have that credence in P, if R does not hold.

As in the case of (Causal-Disposition), the disposition can be manifested in two ways. It could manifest by S taking her reasons for her credence in P to be reasons to have a different credence in P, or it could manifest by S failing to take those reasons to be reasons to have any credence in P at all. If an agent’s credence is based on the fact that it satisfies the Bayesian constraints, then (Doxastic-Disposition) implies that the agent is disposed to not take her reasons for the credence to be reasons to have that credence, should the credence fail to satisfy the Bayesian constraints.

We can better see how the account works by applying it to the two jurors case. If having her credence in the defendant’s guilt no longer satisfied (Probabilism) and (Conditionalization), then Miss Knowit is disposed to not take the evidence presented at the trial to be a reason to have her credence in the defendant’s guilt; perhaps she would instead take that evidence to be a reason to have a different credence. In virtue of that disposition, Miss Knowit has based her credence on the fact that having it satisfies the Bayesian constraints. Miss Not, by contrast, lacks this disposition with respect to what she takes to be a reason to have the credence: she lacks the disposition to not take the defendant’s suspicious appearance to be a reason to have the credence, if having that credence no longer satisfies the Bayesian constraints. Again, we can mark an epistemic distinction between the two jurors.

We can also apply (Doxastic-Disposition) to agents who fail to completely satisfy the Bayesian constraints. Consider an agent who does not completely satisfy the Bayesian constraints, but has a credence c that does reasonably well on its own at being the right credence to have. An actual agent can have the disposition to stop taking whatever she actually takes to be a reason to have c to be such a reason, if having c no longer does reasonably well by Bayesian standards. In such a case we can say that the agent has done a fairly good job of forming c in the right way. So (Doxastic-Disposition) can also be applied to actual agents.

This account avoids the problems that arose for the Bayesian when we adopted the standard doxastic account of the basing relation. It doesn’t require an agent to grasp all of their credences and check whether they satisfy the Bayesian constraints, like the original doxastic account of the basing relation does. This account only requires that the agent be disposed to not take her actual reasons for the credence to be reasons to have that credence, if having that credence doesn’t satisfy the Bayesian constraints.

So we’ve seen that the three accounts I’ve sketched avoid the problematic result that no actual agent forms her credences in the right way; the accounts can do the epistemic work that we want them to do. My aim isn’t to choose between them, and no doubt these accounts will need to be modified in the face of difficulties. Rather, my aim has been to show how alternatives to the standard accounts of the basing relation can avoid the result that no actual agent forms her credences in the right way.

6 Concluding remarks

It’s worth stepping back and briefly considering how the preceding applies to a non-Bayesian epistemological setting. As I mentioned at the beginning, the problem of accommodating coherence in doxastic justification arises for epistemic settings that involve binary beliefs. So if part of what justifies a belief is the fact that it fits coherently into an agent’s overall belief state, then we have trouble on a causal account of the basing relation. Further, considerations of the computational complexity involved in checking the coherence of one’s belief state also raise problems if we adopt a doxastic account of the basing relation, as Kornblith (2002, pp. 122–135) argues. However, things may fare better if we adopt an account of the basing relation like (Disposition). Just as I’ve argued that (Disposition) allows one to base one’s credences on the fact that one’s credences are probabilistically coherent, it seems plausible that (Disposition) allows one to base one’s beliefs on the fact that one’s beliefs are coherent. Of course, now is not the time to pursue the details, and it is unlikely that every epistemic view will fit favourably with my solution to the problem in terms of (Disposition). For example, more internalist epistemic views may be committed to an account of the basing relation more in line with (Doxastic) than with (Disposition). Nevertheless, my approach is a strategy worth pursuing in other settings. I’ll leave that work for another time.

To conclude, I’ve argued that when we apply the standard ways of thinking about the basing relation to Bayesian epistemology, we get the result that actual agents never come close to forming their credences in the right way. If this result holds, then we have a serious problem for Bayesianism, for actual agents do sometimes do a reasonably good job of forming their credences. Furthermore, the result prevents us from making the epistemic distinctions that we need to make. However, I’ve argued that by adopting a different way of thinking about the basing relation, we can develop a more satisfactory account of the relation in a Bayesian context. This account allows that actual agents can do a reasonably good job of forming their credences, and it allows us to mark the epistemic distinctions we need to make.