1 Introduction

Bayesian credences measure the degrees to which agents are confident in various propositions. To have .9 credence in the proposition p is, roughly, to be 90% certain that p is true. To have 1 credence in p is to be completely certain that p is true, and to have 0 credence in p is to be completely certain that p is false. It is usually supposed that an agent’s credences should obey the standard probability axioms.Footnote 1 So for instance, an agent should not have credence .9 in p without also having credence .1 in ¬p.
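The complementation constraint can be made concrete with a minimal Python sketch. The states and numbers below are invented for illustration: a credence function defined by weights on mutually exclusive states automatically assigns p and ¬p credences summing to 1.

```python
# Hypothetical illustration: a credence function over four exhaustive,
# mutually exclusive states. All names and numbers are invented.
states = ["s1", "s2", "s3", "s4"]
cred = {"s1": 0.5, "s2": 0.4, "s3": 0.06, "s4": 0.04}

def pr(prop):
    """Credence in a proposition, modeled as a set of states."""
    return sum(cred[s] for s in prop)

p = {"s1", "s2"}                 # credence .9 in p ...
not_p = set(states) - p          # ... forces credence .1 in its negation
assert abs(pr(p) - 0.9) < 1e-9
assert abs(pr(not_p) - 0.1) < 1e-9
```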

There are several reasons to take credences seriously. As discussed in [7], they can be used to explain when and how an agent should update her beliefs in response to new evidence. Credences also yield an attractive solution to the Preface Paradox.Footnote 2 It has even been argued that beliefs reduce to credences. According to one prominent version of this claim, known as the ‘Lockean threshold’ analysis of belief, agent A believes that p if and only if A has sufficiently high credence in p.Footnote 3

But credences come with a cost. What exactly are credences? Are they dispositions to bet a certain way [1], or systematizations of agential behavior [9], or something else entirely? Why should credences obey the probability axioms, rather than other formal constraints? And given that credences are typically taken to be real numbers, how do the mental states of finite creatures like ourselves exemplify the infinite precision of the continuum?

In order to side-step tricky questions like these, Kenny Easwaran argues that belief—not credence—is the “real” doxastic state.Footnote 4 Credences can be understood as mathematical summaries of agential belief. To interpret credences in this way, first ascribe belief sets to agents. Then find a way to use probability functions to represent those sets. What was taken to be a credence function is, therefore, a probability function that represents a particular set of beliefs.

If this can be done, it promises to yield many of the benefits of credences without incurring the costs. As discussed in [2], for example, it promises to provide an answer to the Preface Paradox without raising tricky questions about what credences are, why they should obey various formal constraints, and how our mental states can exemplify the precision of real numbers. In short, if this account succeeds, then what Bayesians take to be credence functions are really just mathematical representations of belief sets. The idealizations of probability axioms, and the infinite precision of credences, are merely formal tools which can be used to describe belief and the value that agents place on truth and falsity.

All this requires that belief sets be representable by probability functions, however. So what does it take for probability functions to represent belief sets? In partial answer, Easwaran lists several necessary conditions for representability. One is strong coherence: a belief set B is strongly coherent just in case there is no other belief set that is at least as accurate as B in every possible state of the world, but that is strictly more accurate in at least one state. Easwaran shows that if B is representable, then B is strongly coherent (see [2], p. 829).Footnote 5

It would help Easwaran’s project considerably if strong coherence were also a sufficient condition for representability. Then by satisfying strong coherence, an agent’s belief set would conform to many of the constraints that credences impose on rationality, since that belief set would be representable by a probability function that mimics the properties of credences. Moreover, strong coherence itself seems like a plausible rationality constraint. For if an agent is rational, then her belief set had better not be less accurate, overall, than another belief set. But unfortunately, strong coherence is not sufficient for representability: there are subsets of Boolean algebras that are strongly coherent yet unrepresentable (see [2], pp. 846–849).

This raises two questions. First, what conditions are sufficient for a belief set to be representable by a probability function? Second, does any such sufficient condition provide a plausible constraint on rationality, the way strong coherence does?

In this paper, I answer both questions with a new sufficient condition for representability. In Section 2, I review the basic notions which are used to articulate that condition. In Section 3, I derive representability from the sufficient condition. In Section 4, I explain why this condition provides a plausible rationality constraint. Finally, in Section 5, I discuss some paths for future research. I briefly present some of that research in the Appendix.

Aside from explaining how to reduce credence to belief, this paper also contributes some new theorems that might be of general interest. For example, Theorems 3.4 and A.4 establish certain formal connections between belief sets and comparative confidence orderings. These theorems, and the theorems and lemmas leading up to them, may be interesting for all parties concerned with the connection between belief and comparative confidence, not just those who want to reduce credence to belief.

2 Basic Notions

2.1 Belief Sets

Roughly, an agent’s belief set is the set consisting of all propositions which she believes.

Definition 1 (Belief Set)

Let \(\mathcal {X}\) be a finite Boolean algebra. A belief set is a set \(B\subseteq \mathcal {X}\).

The atoms of the algebra—call them ‘state descriptions’—represent mutually exclusive states in which the world may be. The propositions which agent A believes are disjunctive combinations of these atomic states. For example, if the atoms of the algebra are a 1, a 2, and a 3, A’s belief set might contain a 1, or a 2 ∨ a 3, or any other disjunctive combination of zero or more of the a i .Footnote 6 The algebra is assumed to be finite (and non-empty) in order to simplify the forthcoming analysis.
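This setup can be sketched computationally. The following Python fragment (an illustration only; the atom names are invented) models propositions as sets of atoms, so that the full algebra is the power set of the state descriptions and disjunction is set union.

```python
from itertools import chain, combinations

# A minimal sketch of a finite Boolean algebra: propositions are modeled
# as frozensets of atoms (state descriptions); disjunction is set union.
atoms = ["a1", "a2", "a3"]

def algebra(atoms):
    """All disjunctive combinations of zero or more atoms."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(atoms, r)
                                for r in range(len(atoms) + 1))]

X = algebra(atoms)
assert len(X) == 2 ** len(atoms)          # 8 propositions
bottom, top = frozenset(), frozenset(atoms)
assert bottom in X and top in X
# a2 ∨ a3 is the proposition containing exactly the atoms a2 and a3:
assert frozenset({"a2", "a3"}) in X
```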

The following definition states the conditions under which a belief set B is representable by a probability function. The idea, roughly, is that B is representable just in case all the propositions in B are at least as likely as not to be true, and all the propositions not in B are at least as likely as not to be false.

Definition 2 (B-Representability)

Let \(\mathcal {X}\) be a finite Boolean algebra, and let \(B\subseteq \mathcal {X}\) be a belief set. Say that B is b-representable (for ‘belief-representable’) just in case there is a probability function P r such that for all \(p\in \mathcal {X}\),

  1. (i)

    if \(Pr(p)>\frac {1}{2}\) then p ∈ B, and

  2. (ii)

    if \(Pr(p)<\frac {1}{2}\) then p ∉ B.

Note that if P r satisfies (i) and (ii), but \(Pr(p_{0})=\frac {1}{2}\) for some proposition \(p_{0}\in \mathcal {X}\), then P r b-represents B regardless of whether p 0 ∈ B or p 0 ∉ B. In other words, propositions assigned a probability of \(\frac {1}{2}\) do not make a difference to whether or not the given probability function b-represents the given belief set.Footnote 7
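Definition 2 is easy to render as an executable check. The sketch below (assuming, purely for illustration, a two-atom algebra and invented weights) tests clauses (i) and (ii) directly, and confirms that propositions with probability exactly \(\frac {1}{2}\) are unconstrained.

```python
from fractions import Fraction
from itertools import chain, combinations

# A sketch of Definition 2: does a probability function (given by atom
# weights) b-represent a belief set B? Algebra and weights are invented.
atoms = ["a1", "a2"]
X = [frozenset(c) for c in
     chain.from_iterable(combinations(atoms, r) for r in range(3))]

def pr(weights, p):
    return sum(weights[a] for a in p)

def b_represents(weights, B):
    half = Fraction(1, 2)
    for p in X:
        if pr(weights, p) > half and p not in B:
            return False                      # violates clause (i)
        if pr(weights, p) < half and p in B:
            return False                      # violates clause (ii)
    return True

w = {"a1": Fraction(3, 4), "a2": Fraction(1, 4)}
B = {frozenset({"a1"}), frozenset({"a1", "a2"})}
assert b_represents(w, B)
# Propositions with probability exactly 1/2 are unconstrained, so a
# fifty-fifty function b-represents this B as well:
w_half = {"a1": Fraction(1, 2), "a2": Fraction(1, 2)}
assert b_represents(w_half, B)
```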

It is a general fact that for any probability function P r and any \(p\in \mathcal {X}\), \(Pr(p)>\frac {1}{2}\) if and only if \(Pr(\neg p)<\frac {1}{2}\). Read through the light of this fact, Definition 2 says that if a proposition is more likely than its negation then it is believed, and if a proposition is less likely than its negation then it is not believed.

This notion of representability is related to the version of the Lockean threshold analysis of belief which sets the threshold to \(\frac {1}{2}\). For suppose we take the probabilities P r(p) to express the credences of some agent A. Then Definition 2 says that if A’s credence in p is greater than \(\frac {1}{2}\) then A believes that p, and if A’s credence in p is less than \(\frac {1}{2}\) then A does not believe that p.

It is now possible to state, using Definition 2, exactly what needs to be done in order to reduce credence to belief. We must identify a condition such that if a given belief set satisfies that condition, then the belief set is b-representable. That is, we must identify a condition such that if a belief set satisfies that condition, then there exists a probability function that assigns a probability of at least \(\frac {1}{2}\) to the propositions believed – the propositions in the set – and assigns a probability of no more than \(\frac {1}{2}\) to the propositions not believed – the propositions not in the set. This accomplished, we will have a condition which is sufficient for a belief set to be summarizable in terms of a credence function which respects the \(\frac {1}{2}\) threshold. We will have a condition for when the structure of belief and the structure of non-belief can be described by credences.

Note that the particular notion of representability defined in Definition 2 assumes that the threshold for belief is \(\frac {1}{2}\). So the principal results of this paper concern the \(\frac {1}{2}\) threshold case. As discussed in Section 5 and in the Appendix, however, the results of this paper can be extended to the threshold 1, and possibly to other thresholds as well. I focus on the results for the \(\frac {1}{2}\) threshold case (and the 1 threshold case) because they are relatively simple. Similar results for other thresholds may be quite complicated. Moreover, \(\frac {1}{2}\) and 1 are among the most natural thresholds for belief. So it is particularly desirable to have representation results for them.

The following two theorems will be useful later. The first—Theorem 2.1—says that if a proposition and its negation are both in B or both not in B, then they are equiprobable.

Theorem 2.1

Let \(\mathcal {X}\) be a finite Boolean algebra and let \(B\subseteq \mathcal {X}\) be a belief set. If B is b-represented by a probability function P r , then the following conditions hold.

  1. (i)

    For every proposition p ∈ B, if ¬p ∈ B then \(Pr(p)=Pr(\neg p)=\frac {1}{2}\).

  2. (ii)

    For every proposition p ∉ B, if ¬p ∉ B then \(Pr(p)=Pr(\neg p)=\frac {1}{2}\).

Proof

For (i): suppose that p ∈ B and ¬p ∈ B. By Definition 2, \(Pr(p)\geqslant \frac {1}{2}\) and \(Pr(\neg p)\geqslant \frac {1}{2}\). Since P r(p) = 1 − P r(¬p), it follows that \(Pr(p)=Pr(\neg p)=\frac {1}{2}\).

For (ii): suppose that p ∉ B and ¬p ∉ B. By Definition 2, \(Pr(p)\leqslant \frac {1}{2}\) and \(Pr(\neg p)\leqslant \frac {1}{2}\). Again, since P r(p) = 1 − P r(¬p), it follows that \(Pr(p)=Pr(\neg p)=\frac {1}{2}\). □

Roughly put, the second theorem about belief sets—Theorem 2.2—says that if two sets only differ from each other on pairs of propositions and their negations, then those sets are b-represented by exactly the same probability functions.

Theorem 2.2

Let \(\mathcal {X}\) be a finite Boolean algebra and let \(B\subseteq \mathcal {X}\) be a belief set. Suppose B is b-represented by the probability function P r , and suppose \(B^{\prime }\subseteq \mathcal {X}\) is a belief set that satisfies the following two conditions.

  1. (i)

    If p ∈ B∖B′ then ¬p ∈ B.

  2. (ii)

    If p ∈ B′∖B then ¬p ∉ B.

Then P r b-represents B′.

Proof

Let \(p\in \mathcal {X}\) be such that \(Pr(p)>\frac {1}{2}\). Then p ∈ B (since P r b-represents B). Suppose p ∉ B′. Then ¬p ∈ B by condition (i). But then \(Pr(p)=\frac {1}{2}\) by Theorem 2.1, which contradicts the supposition. Therefore, p ∈ B′.

Now let \(p\in \mathcal {X}\) be such that \(Pr(p)<\frac {1}{2}\). Then p ∉ B (since P r b-represents B). Suppose that p ∈ B′. Then ¬p ∉ B by condition (ii). But then \(Pr(p)=\frac {1}{2}\) by Theorem 2.1, which contradicts the supposition. Therefore, p ∉ B′.

So by Definition 2, P r b-represents B′. □

As will become clear in Section 3, this little theorem is quite important. Think of it as showing that if two belief sets are ‘sufficiently similar’ to each other, establishing the b-representability of one suffices to establish the b-representability of the other.
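The theorem can be spot-checked numerically. In the sketch below (a two-atom algebra with invented fifty-fifty weights), the two belief sets differ only on a proposition of probability exactly 1/2 whose negation remains believed, so the theorem's conditions (i) and (ii) hold, and a single probability function b-represents both sets.

```python
from fractions import Fraction
from itertools import chain, combinations

# A numerical spot-check of Theorem 2.2 on a two-atom algebra. The
# fifty-fifty weights and the belief sets are invented; B and B2 differ
# only on a proposition whose probability is exactly 1/2.
atoms = ["a1", "a2"]
X = [frozenset(c) for c in
     chain.from_iterable(combinations(atoms, r) for r in range(3))]
half = Fraction(1, 2)
w = {"a1": half, "a2": half}

def pr(p):
    return sum(w[a] for a in p)

def b_represents(B):
    # Definition 2: Pr(p) > 1/2 forces p into B; Pr(p) < 1/2 keeps it out.
    return all((pr(p) <= half or p in B) and (pr(p) >= half or p not in B)
               for p in X)

TOP, A1, A2 = frozenset(atoms), frozenset({"a1"}), frozenset({"a2"})
B = {TOP, A1, A2}     # believes both halves of a fifty-fifty pair
B2 = {TOP, A1}        # drops a2; its negation a1 remains in B
assert b_represents(B) and b_represents(B2)
```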

2.2 Comparative Confidence Orderings

Comparative confidence orderings—which encode information about whether an agent is more (or less) confident in one proposition than another—play a crucial role in the coming discussion of belief. They act as tools for explicating certain properties of belief sets, properties which are relevant to whether those sets are b-representable. In this section, I define comparative confidence orderings, and I introduce some of the constraints which will be relevant for connecting them to belief sets.

Definition 3 (Comparative Confidence Ordering)

Let \(\mathcal {X}\) be a finite Boolean algebra. A comparative confidence ordering is a set \(\succeq \;\subseteq \mathcal {X}\times \mathcal {X}\).

Intuitively, if A is an agent and \(p,q\in \mathcal {X}\), then p ≽ q just in case A is at least as confident in the truth of p as in the truth of q. Let p ≻ q be shorthand for: p ≽ q and 〈q,p〉∉ ≽.

Comparative confidence orderings, like belief sets, can be represented by probability functions.

Definition 4 (C-Representability)

Let \(\mathcal {X}\) be a finite Boolean algebra, and let \(\succeq \;\subseteq \mathcal {X}\times \mathcal {X}\) be a comparative confidence ordering. Say that ≽ is c-representable (for ‘comparative-representable’) just in case there is a probability function P r such that for every \(p,q\in \mathcal {X}\), pq if and only if \(Pr(p)\geqslant Pr(q)\).

So a comparative confidence ordering is c-representable just in case there is a probability function on the underlying algebra which preserves the ordering.

As discussed in [3], comparative confidence orderings are typically taken to obey five conditions. Of those five, just three will be relevant here. Let \(\mathcal {X}\) be a finite Boolean algebra, and let \(\succeq \;\subseteq \mathcal {X}\times \mathcal {X}\) be a comparative confidence ordering. The three conditions are as follows.

  • (A1) For every \(p, q\in \mathcal {X}\), either pq or qp.

  • (A2) ⊤≻⊥.

  • (A3) For every \(p\in \mathcal {X}\), p ≽⊥.

(A 1) says that the ordering is total: for every pair of propositions, the agent is at least as confident in one as in the other. (A 2) says that the agent is strictly more confident in ⊤, the tautological proposition that contains every atom in \(\mathcal {X}\), than in ⊥, the contradictory proposition that is empty. (A 3) says that for every proposition, the agent is at least as confident in that proposition as in ⊥.
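For concreteness, (A 1)–(A 3) can be rendered as executable checks on an ordering given as a set of ordered pairs. The ordering below, which ranks propositions by the number of atoms they contain, is invented purely for illustration.

```python
# (A1)-(A3) as executable checks on a toy ordering over a two-atom
# algebra; the cardinality-based ordering below is invented.
TOP, BOT = frozenset({"a1", "a2"}), frozenset()
P1, P2 = frozenset({"a1"}), frozenset({"a2"})
X = [BOT, P1, P2, TOP]

def is_total(order):                   # (A1)
    return all((p, q) in order or (q, p) in order for p in X for q in X)

def top_above_bottom(order):           # (A2): ⊤ ≻ ⊥
    return (TOP, BOT) in order and (BOT, TOP) not in order

def nothing_below_bottom(order):       # (A3): p ≽ ⊥ for every p
    return all((p, BOT) in order for p in X)

# Rank propositions by how many atoms they contain:
order = {(p, q) for p in X for q in X if len(p) >= len(q)}
assert is_total(order)
assert top_above_bottom(order)
assert nothing_below_bottom(order)
```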

As I shall use them, (A 1), (A 2), and (A 3) are best understood as ‘belief-to-confidence’ constraints. In Section 3, I show how to construct collections of comparative confidence orderings from belief sets. (A 1), (A 2), and (A 3) are important constraints which the orderings obtained from that construction end up satisfying.

The following condition, which some comparative confidence orderings satisfy, will also play a central role in this paper.

(SA):

(Scott’s Axiom) Let \(\mathcal {X}\) be a finite Boolean algebra and let \(\succeq \;\subseteq \mathcal {X}\times \mathcal {X}\) be a comparative confidence ordering. For all pairs of sequences X = 〈x 1,…,x n 〉 and Y = 〈y 1,…,y n 〉 of length \(n\geqslant 2\) whose elements belong to \(\mathcal {X}\), if

  1. (i)

    X and Y have the same number of truths in every atom of \(\mathcal {X}\),Footnote 8 and

  2. (ii)

    for all i ∈ [1,n), x i ≽ y i ,

then

  1. (iii)

    y n ≽ x n .

The idea of (SA) can be explained via a simple example drawn from a more familiar context: the real numbers under the usual \(\geqslant \) relation. Let x 1, x 2, y 1, and y 2 be real numbers, let X = 〈x 1,x 2〉, and let Y = 〈y 1,y 2〉. Suppose that x 1 + x 2 = y 1 + y 2, which is the analog of condition (i). Also suppose that \(x_{1}\geqslant y_{1}\), which is the analog of condition (ii). It follows that \(y_{2}\geqslant x_{2}\),Footnote 9 which is the analog of (iii). Thus, the \(\geqslant \) relation on the reals satisfies an analog of (SA) for the case n = 2.Footnote 10 One can think of (SA) as encapsulating this fact about \(\geqslant \) in the context of Boolean algebras.
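The real-number analog can be verified directly; the particular numbers below are arbitrary, and only the two suppositions matter.

```python
# The n = 2 real-number analog of (SA), checked numerically. The numbers
# are invented; any values satisfying the two suppositions would do.
x1, x2 = 5.0, 2.0
y1, y2 = 3.0, 4.0
assert x1 + x2 == y1 + y2      # analog of condition (i)
assert x1 >= y1                # analog of condition (ii)
assert y2 >= x2                # the forced conclusion, analog of (iii)
```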

As discussed in more detail in Section 4, despite its complicated appearance, (SA) articulates an intuitively plausible constraint on comparative confidence. For if agent A is at least as confident in each x i as in the corresponding y i for i ∈ [1,n) (condition (ii)), and A is strictly more confident in x n than in y n (the negation of condition (iii)), then A seems to be irrational if she also knows that the X propositions and the Y propositions are equally accurate (condition (i)). To put it roughly: if the X propositions and the Y propositions are known by A to be equally accurate, then A had better not be strictly more confident in the X propositions than in the Y propositions.

Dana Scott used (SA) to prove what is now called “Scott’s Theorem” [10], one version of which is given below.

Theorem 2.3 (Scott’s Theorem)

Let \(\mathcal {X}\) be a finite Boolean algebra, and let \(\succeq \;\subseteq \mathcal {X}\times \mathcal {X}\). Then ≽ satisfies (A 1), (A 2), (A 3), and (SA) if and only if ≽ is c-representable.

Thus, satisfaction of (A 1), (A 2), (A 3), and (SA) is sufficient and necessary for c-representability.

Theorem 2.3 is important for our purposes because it connects (SA) to c-representability. So if c-representability connects to b-representability in the right way, then in virtue of Theorem 2.3, (SA) can be used to derive a sufficient condition for b-representability. In the next section, I show how c-representability and b-representability are so connected, and then derive the sufficient condition.

3 The Sufficient Condition

Before deriving the sufficient condition for b-representability, I discuss a particular way of constructing comparative confidence orderings from belief sets. Think of the construction as producing the comparative confidence orderings that an agent might reasonably exemplify, given what she believes; the orderings that are consonant, say, with her belief set. The construction proceeds in two steps. First, starting from a belief set B, I construct a partial comparative confidence ordering \(\succeq _{B}^{\star }\). Roughly, B induces \(\succeq _{B}^{\star }\) by inducing certain comparisons among the beliefs that B contains. Second, I construct the set of all total extensions of \(\succeq _{B}^{\star }\) that satisfy some relatively minor restrictions.

Also, from now on, I assume that ⊥∉B and that ⊤∈ B. This is a reasonable assumption to adopt because it is implied by strong coherence: it can be shown that if ⊥∈ B or if ⊤∉B, then B is not strongly coherent.Footnote 11

The ordering \(\succeq _{B}^{\star }\) is constructed as follows. Let \(B\subseteq \mathcal {X}\) be the belief set of agent A. Define the following three sets.

  • D 1 = {〈p,¬p〉∣p ∈ B}.

  • D 2 = {〈¬p,p〉∣p ∉ B}.

  • D 3 \(=\{\langle p,\bot \rangle \mid p\in \mathcal {X}\}\).

Then let \(\succeq _{B}^{\star }=D_{1}\cup D_{2}\cup D_{3}\).

D 1 says that if A believes p, then A is at least as confident in p as in ¬p. Since this is extremely plausible, the comparisons in D 1 seem like the sorts of comparisons that A’s belief set should induce. So D 1 belongs in the partial comparative confidence set \(\succeq _{B}^{\star }\) that is induced by A’s belief set. D 3 says that for every \(p\in \mathcal {X}\), A is at least as confident in p as in the empty proposition. Again, since this is extremely plausible, D 3 belongs in \(\succeq _{B}^{\star }\) too.

In the case of D 2, things are a bit more complicated. D 2 says that for every p which A does not believe, A is at least as confident in ¬p as in p. Given that the notion of b-representability at issue here is based on a \(\frac {1}{2}\) threshold for belief (see Definition 2), this is extremely plausible. So given the relevant notion of b-representability, D 2 should be included in the comparative confidence set \(\succeq _{B}^{\star }\). Of course, D 2 is much less plausible for thresholds of belief other than \(\frac {1}{2}\). But for other thresholds, D 2 is no longer required in order to obtain interesting results; see Section 5 and the Appendix.

D 1, D 2, and D 3 make no assumptions about how A’s beliefs ought to be compared with each other. They only concern comparisons among beliefs and non-beliefs.Footnote 12 So they are consistent with a wide variety of comparative confidence orderings that one might want to construct from A’s belief set. Very little is assumed about the ordering that, given belief set B, should be associated with A.
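The first step of the construction can be sketched in code. The fragment below (illustrative only: two atoms and an invented belief set) builds \(\succeq _{B}^{\star }\) as the union D 1 ∪ D 2 ∪ D 3, with propositions modeled as sets of atoms.

```python
from itertools import chain, combinations

# Building the partial ordering ≽*_B = D1 ∪ D2 ∪ D3 on a two-atom
# algebra; the belief set B below is invented for illustration.
atoms = ["a1", "a2"]
X = [frozenset(c) for c in
     chain.from_iterable(combinations(atoms, r) for r in range(3))]
TOP, BOT = frozenset(atoms), frozenset()

def neg(p):
    return TOP - p

def partial_ordering(B):
    d1 = {(p, neg(p)) for p in B}                  # belief over its negation
    d2 = {(neg(p), p) for p in X if p not in B}    # negation over non-belief
    d3 = {(p, BOT) for p in X}                     # everything over ⊥
    return d1 | d2 | d3

B = {TOP, frozenset({"a1"})}
order = partial_ordering(B)
assert (TOP, BOT) in order                         # ⊤ ≽ ⊥
assert (frozenset({"a1"}), frozenset({"a2"})) in order   # a1 ≽ ¬a1
assert (BOT, TOP) not in order
```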

Now for the second step of the construction. Let \(\mathcal {K}\) be the set of total comparative confidence orderings ≽ that contain \(\succeq _{B}^{\star }\) as a subset, and that also satisfy the following two conditions.

  • \((\mathcal {C}_{1})\) 〈⊥,⊤〉∉ ≽.

  • \((\mathcal {C}_{2})\) For all p ∈ B, if ¬p ∉ B then 〈¬p,p〉∉ ≽.Footnote 13

Given that 〈⊤,⊥〉 is automatically in each ordering in \(\mathcal {K}\) (since it is in D 3), \((\mathcal {C}_{1})\) simply says that according to all of those orderings, A is strictly more confident in ⊤ than in ⊥. \((\mathcal {C}_{2})\) says that comparative confidence is strict for any proposition the agent believes without also believing its negation: if agent A believes p and does not believe ¬p, then according to \((\mathcal {C}_{2})\), A is strictly less confident in ¬p than in p. Basically, \((\mathcal {C}_{2})\) implies that D 1 and D 2 exhaust the comparisons between propositions and their negations that orderings in \(\mathcal {K}\) may include. As shown in Theorem 3.3, this ensures that the belief sets which can be ‘read off’ orderings in \(\mathcal {K}\) are ‘sufficiently similar’ to the belief set B, for the purposes of drawing conclusions about b-representability.
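The two restrictions can likewise be rendered as executable filters on candidate total extensions. The orderings below are invented fragments, included only to show one candidate passing both filters and another failing \((\mathcal {C}_{2})\).

```python
# Executable renderings of the two restrictions; the candidate orderings
# below are invented fragments for illustration.
TOP, BOT = frozenset({"a1", "a2"}), frozenset()

def neg(p):
    return TOP - p

def satisfies_c1(order):
    # (C1): the ordering must not rank ⊥ at least as high as ⊤.
    return (BOT, TOP) not in order

def satisfies_c2(order, B):
    # (C2): if p is believed and ¬p is not, then ¬p ≽ p is forbidden.
    return all((neg(p), p) not in order
               for p in B if neg(p) not in B)

B = {TOP, frozenset({"a1"})}
good = {(TOP, BOT), (frozenset({"a1"}), frozenset({"a2"}))}
bad = good | {(frozenset({"a2"}), frozenset({"a1"}))}  # ¬a1 ≽ a1, yet a1 ∈ B
assert satisfies_c1(good) and satisfies_c2(good, B)
assert not satisfies_c2(bad, B)
```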

The following definition provides a succinct way to refer to this construction.

Definition 5 (Constructed from B in the manner of \(\mathcal {C}\))

Let \(\mathcal {X}\) be a finite Boolean algebra, let \(B\subseteq \mathcal {X}\) be a belief set, and let \(\mathcal {K}\) be the set of total comparative confidence orderings that contain \(\succeq _{B}^{\star }\) and that satisfy \((\mathcal {C}_{1})\) and \((\mathcal {C}_{2})\). Then \(\mathcal {K}\) is the set of comparative confidence orderings constructed from B in the manner of \(\boldsymbol {\mathcal {C}}\).

Like (A 1), (A 2), and (A 3), the conditions D 1, D 2, D 3, \(\mathcal {C}_{1}\), and \(\mathcal {C}_{2}\) are best understood as ‘belief-to-confidence’ constraints. They are constraints used to generate the set \(\mathcal {K}\) of comparative confidence orderings that a given belief set induces. Of course, the latter five constraints are explicitly built into the \(\mathcal {C}\) construction. But as the following lemma shows, the orderings constructed in the manner of \(\mathcal {C}\) satisfy (A 1), (A 2), and (A 3) as well.

Lemma 1

Let \(\mathcal {X}\) be a finite Boolean algebra, let \(B\subseteq \mathcal {X}\) be a belief set, and let \(\mathcal {K}\) be the set of comparative confidence orderings constructed from B in the manner of \(\mathcal {C}\). Each \(\succeq \;\in \mathcal {K}\) satisfies (A 1), (A 2), and (A 3).

Proof

For (A 1): by Definition 5, ≽ is total. Thus, ≽ satisfies (A 1).

For (A 2): \(\langle \top ,\bot \rangle \in \;\succeq _{B}^{\star }\) because \(\langle \top ,\bot \rangle \in D_{3}\). Since \(\succeq _{B}^{\star }\;\subseteq \;\succeq \), 〈⊤,⊥〉∈ ≽. By Definition 5, ≽ satisfies \((\mathcal {C}_{1})\), and so 〈⊥,⊤〉∉ ≽. Hence, ⊤≻⊥.

For (A 3): for each \(p\in \mathcal {X}\), \(\langle p,\bot \rangle \in \;\succeq _{B}^{\star }\;\subseteq \;\succeq \). □

The following theorem shows that (SA) is both sufficient and necessary for an ordering in \(\mathcal {K}\) to be c-representable.

Theorem 3.1

Let \(\mathcal {X}\) be a finite Boolean algebra, let \(B\subseteq \mathcal {X}\) be a belief set, and let \(\mathcal {K}\) be the set of comparative confidence orderings constructed from B in the manner of \(\mathcal {C}\). Then for each \(\succeq \;\in \mathcal {K}\), ≽ satisfies (SA) if and only if ≽ is c-representable.

Proof

By Theorem 2.3, ≽ satisfies (A 1), (A 2), (A 3), and (SA) if and only if it is c-representable. By Lemma 1, each \(\succeq \;\in \mathcal {K}\) satisfies (A 1), (A 2), and (A 3). Therefore, for each \(\succeq \;\in \mathcal {K}\), ≽ satisfies (SA) if and only if ≽ is c-representable. □

The remaining theorems connect the c-representability of orderings in \(\mathcal {K}\) to the b-representability of the belief set from which \(\mathcal {K}\) was constructed. They invoke a new notion: that of a belief set induced by a comparative confidence ordering.

Definition 6 (Induced Belief Set)

Let \(\mathcal {X}\) be a finite Boolean algebra, and let \(\succeq \;\subseteq \mathcal {X}\times \mathcal {X}\) be a comparative confidence ordering. Let B be the belief set which consists of all and only the propositions p such that p ≻¬p. Call B the belief set induced by ≽.
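Definition 6 corresponds to a simple procedure: collect exactly those propositions that the ordering strictly prefers to their negations. Here is a sketch with an invented ordering on a two-atom algebra.

```python
# A sketch of Definition 6: reading a belief set off an ordering. The
# ordering below is invented; only pairs of the form (p, ¬p) matter here.
TOP, BOT = frozenset({"a1", "a2"}), frozenset()
A1, A2 = frozenset({"a1"}), frozenset({"a2"})
X = [BOT, A1, A2, TOP]

def neg(p):
    return TOP - p

def induced_belief_set(order):
    # p is believed just in case p ≻ ¬p: p ≽ ¬p holds and ¬p ≽ p does not.
    return {p for p in X
            if (p, neg(p)) in order and (neg(p), p) not in order}

order = {(TOP, BOT), (A1, A2),             # ⊤ ≻ ⊥ and a1 ≻ a2
         (TOP, A1), (A1, BOT), (A2, BOT)}  # further pairs, irrelevant here
assert induced_belief_set(order) == {TOP, A1}
```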

The following theorem connects comparative confidence orderings to the belief sets they induce. It states that if an ordering induces a belief set, then any probability function that c-represents the ordering must b-represent the induced belief set.

Theorem 3.2

Let \(\mathcal {X}\) be a finite Boolean algebra, and let \(\succeq \;\subseteq \mathcal {X}\times \mathcal {X}\) be a comparative confidence ordering that induces the belief set B′. Let P r be a probability function that c-represents ≽. Then P r b-represents B′.

Proof

If \(Pr(p)>\frac {1}{2}\) then 2 ⋅ P r(p) > 1 = P r(p) + P r(¬p), and so P r(p) > P r(¬p). By Definition 4, p ≽¬p. Also by Definition 4, 〈¬p,p〉∉ ≽: for if ¬p ≽ p, then \(Pr(\neg p)\geqslant Pr(p)\), which is a contradiction. Therefore, p ≻¬p. So by Definition 6, p ∈ B′.

If \(Pr(p)<\frac {1}{2}\) then 2 ⋅ P r(p) < 1 = P r(p) + P r(¬p), and so P r(p) < P r(¬p). By Definition 4, ¬p ≽ p, so it is not the case that p ≻¬p. Definition 6 implies that p ∉ B′.

Therefore, by Definition 2, P r b-represents B′. □
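Theorem 3.2 can be spot-checked numerically: build the ordering c-represented by a probability function, induce a belief set from it, and confirm that the same function b-represents that set. The weights below are invented for illustration.

```python
from fractions import Fraction
from itertools import chain, combinations

# A numerical spot-check of Theorem 3.2 on a two-atom algebra, using an
# invented probability function given by weights on the atoms.
atoms = ["a1", "a2"]
X = [frozenset(c) for c in
     chain.from_iterable(combinations(atoms, r) for r in range(3))]
TOP, BOT = frozenset(atoms), frozenset()
w = {"a1": Fraction(2, 3), "a2": Fraction(1, 3)}

def pr(p):
    return sum(w[a] for a in p)

# The ordering c-represented by pr, then the belief set it induces:
order = {(p, q) for p in X for q in X if pr(p) >= pr(q)}
induced = {p for p in X
           if (p, TOP - p) in order and (TOP - p, p) not in order}

half = Fraction(1, 2)
assert all(p in induced for p in X if pr(p) > half)        # clause (i)
assert all(p not in induced for p in X if pr(p) < half)    # clause (ii)
```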

The next theorem shows that if B is used to construct a set \(\mathcal {K}\) of comparative confidence orderings in the manner of \(\mathcal {C}\), then each ordering in \(\mathcal {K}\) induces a belief set that is ‘sufficiently similar’ to B.Footnote 14

Theorem 3.3

Let \(\mathcal {X}\) be a finite Boolean algebra, let \(B\subseteq \mathcal {X}\) be a belief set, and let \(\mathcal {K}\) be the set of comparative confidence orderings constructed from B in the manner of \(\mathcal {C}\). Suppose that some \(\succeq \;\in \mathcal {K}\) induces the belief set B′. Then the following two conditions hold.

  1. (i)

    B′ ⊆ B.

  2. (ii)

    For every p ∈ B∖B′, ¬p ∈ B∖B′.

Proof

For (i): let p ∈ B′. By Definition 6, p ≻¬p, and so 〈¬p,p〉∉ ≽. It follows that p ∈ B, for if p ∉ B, then 〈¬p,p〉∈ D 2. But then 〈¬p,p〉∈ ≽, which is a contradiction.

For (ii): let p ∈ B∖B′. By Definition 6, p ∈ B′ if and only if p ≻¬p. Since p ∉ B′, either 〈p,¬p〉∉ ≽ or 〈¬p,p〉∈ ≽. The former is impossible: since p ∈ B, it follows that 〈p,¬p〉∈ D 1, and thus, 〈p,¬p〉∈ ≽. So 〈¬p,p〉∈ ≽. And if ¬p ∉ B, then ≽ does not satisfy \((\mathcal {C}_{2})\). Therefore, ¬p ∈ B.

By Definition 6, ¬p ∈ B′ if and only if ¬p ≻ p. As was already shown, 〈p,¬p〉∈ ≽. Therefore it is not the case that ¬p ≻ p, and so ¬p ∉ B′.

Thus, ¬p ∈ B∖B′. □

The set B′ is ‘sufficiently similar’ to B in the sense that (i) every proposition in B′ is in B, and (ii) if B includes a proposition that B′ does not, then B (and not B′) also includes that proposition’s negation.Footnote 15

At long last, here is the sufficient condition for b-representability.

Theorem 3.4 (Sufficient Condition for B-Representability)

Let \(\mathcal {X}\) be a finite Boolean algebra, let \(B\subseteq \mathcal {X}\) be a belief set, and let \(\mathcal {K}\) be the set of comparative confidence orderings constructed from B in the manner of \(\mathcal {C}\). Suppose that some \(\succeq \;\in \mathcal {K}\) satisfies (SA). Then B is b-representable.

Proof

By Theorem 3.1, ≽ is c-representable by some probability function P r. Let B′ be the belief set induced by ≽. By Theorem 3.2, P r b-represents B′.

By Theorem 3.3, B′ ⊆ B. Since B′∖B = ∅, condition (i) of Theorem 2.2 holds trivially (where B′ is substituted for B in Theorem 2.2, and B for B′). Since Theorem 3.3 implies that ¬p ∈ B∖B′ for every p ∈ B∖B′, so that ¬p ∉ B′, condition (ii) of Theorem 2.2 holds as well. It therefore follows from Theorem 2.2 that since P r b-represents B′, P r also b-represents B. □

Here is an informal, intuitive account of how this sufficient condition was derived. Start with a belief set B. Use B to construct a set \(\mathcal {K}\) of comparative confidence orderings that satisfy some restrictions. Then use Scott’s theorem to show that if at least one of those orderings satisfies (SA), then it is c-representable. Suppose some such ordering ≽ does indeed satisfy (SA), and let P r be the probability function that c-represents it. Use ≽ to induce a second belief set B′, and show that P r b-represents B′. Finally, show that since P r b-represents B′, P r also b-represents the original belief set B. This establishes Theorem 3.4: if the set of comparative confidence orderings constructed from B in the manner of \(\mathcal {C}\) includes at least one ordering that satisfies Scott’s axiom, then B is b-representable.Footnote 16

4 The Sufficient Condition as a Constraint on Rationality

At the end of Section 1, I raised two questions for the project of representing belief sets by probability functions. First, what conditions are sufficient for representability? Second, does any such condition amount to a plausible constraint on rationality?

Theorem 3.4 provides an answer to the first question: for an agent’s belief set to be representable by a probability function, it is sufficient that one of the comparative confidence orderings constructed from her belief set, in the manner of \(\mathcal {C}\), satisfies (SA). But what about the second question? Does the sufficient condition of Theorem 3.4 amount to a plausible rationality constraint?

There are reasons for thinking that it does. In particular, there are reasons for thinking that every rational agent’s belief set B should conform to the following two criteria. The first is Constructive Consistency: B should be consistent with the \(\mathcal {C}\) construction. That is, B should give rise to at least one total comparative confidence ordering that contains D 1, D 2, and D 3, and that satisfies \((\mathcal {C}_{1})\) and \((\mathcal {C}_{2})\). The second is Scott Satisfiability: at least one of the orderings which B induces—via the \(\mathcal {C}\) construction—should satisfy (SA).

Note that if an agent’s belief set must indeed conform to these criteria, in order for that agent to be rational, then the sufficient condition of Theorem 3.4 is a plausible rationality constraint. For by Constructive Consistency, if agent A is rational, then her belief set should be consistent with the \(\mathcal {C}\) construction: her belief set should generate at least one total comparative confidence ordering that satisfies the requirements of the \(\mathcal {C}\) construction. And by Scott Satisfiability, if A is rational, then one of the orderings generated by the \(\mathcal {C}\) construction should satisfy (SA). Thus, if A is rational, then (SA) must be satisfied by one of the comparative confidence orderings constructed from A’s belief set in the manner of \(\mathcal {C}\).

Let us now see what justifies these two criteria. To start, consider Constructive Consistency. First and foremost, note that Constructive Consistency does not require the agent to actually have a comparative confidence ordering.Footnote 17 Constructive Consistency merely requires that the agent’s belief set induce at least one comparative confidence ordering that satisfies the belief-to-confidence constraints of the \(\mathcal {C}\) construction. The fundamental doxastic state is the belief set; the orderings induced are not included in that fundamental state. Think of the comparative confidence orderings as providing non-fundamental, higher-level descriptions of fundamental facts about belief.Footnote 18

Second, the \(\mathcal {C}\) construction invoked by Constructive Consistency places very few restrictions on the comparative confidence orderings that may suitably describe A’s doxastic state.Footnote 19 In virtue of D 1, it requires that according to the descriptions those orderings provide, A must be at least as confident in her beliefs as in those beliefs’ negations. This seems like a reasonable restriction to impose: if A believes that p, then presumably A should not be strictly less confident in p than in ¬p. In virtue of D 2, it requires that according to the descriptions those orderings provide, A must be no more confident in the beliefs she does not have than in the negations of those non-beliefs. This too seems reasonable: if A does not believe that p, then A should be at least as confident in ¬p as in p.Footnote 20 In virtue of D 3, it requires that according to the descriptions those orderings provide, A must be at least as confident in every proposition as in ⊥. Once again, this seems like a completely reasonable restriction to impose on rationality: A should not be strictly more confident in a contradiction than in some other proposition.

In virtue of \((\mathcal {C}_{1})\), the \(\mathcal {C}\) construction requires that according to each of the orderings induced by A’s belief set, A must be strictly more confident in ⊤ than in ⊥. This seems reasonable: A should not be strictly more confident in a contradiction than in a tautology, and A should not be equally confident in both. Finally, in virtue of \((\mathcal {C}_{2})\), it requires that according to the descriptions those orderings provide, A must be strictly more confident in her beliefs than in the negations of those beliefs, whenever A does not believe those negations. This seems reasonable as well: it would be arbitrary for A to be exactly as confident in p as in ¬p, but to believe one and not the other.

Of course, Constructive Consistency does not imply that A must have a total comparative confidence ordering, in order for A to be rational. For Constructive Consistency does not require that A have a comparative confidence ordering, total or partial, at all. Rather, it only requires that A’s belief set be capable of inducing a total ordering in the manner of the \(\mathcal {C}\) construction. This too seems like a reasonable constraint to impose on rationality.

So each of the constraints that the \(\mathcal {C}\) construction imposes on comparative confidence orderings is quite reasonable. They seem like constraints that descriptions of rational agents—descriptions which invoke comparative confidence, at least—should satisfy. Therefore, Constructive Consistency is a plausible rationality constraint.

Now consider Scott Satisfiability, which states that every rational agent’s belief set should give rise, via the \(\mathcal {C}\) construction, to at least one comparative confidence ordering that satisfies (SA). To see why this is a plausible constraint on rationality, suppose that agent A has a belief set which fails to satisfy Scott Satisfiability. That is, suppose that every comparative confidence ordering induced by A’s belief set—every ordering, consistent with the \(\mathcal {C}\) construction, which may describe A’s doxastic state—fails to satisfy Scott’s axiom. Then intuitively, A’s doxastic state is irrational.

That irrationality could manifest itself in a number of ways. To see it in just one case, assume that the \(\mathcal {C}\) construction generates exactly one ordering ≽1 from A’s belief set,Footnote 21 and suppose that ≽1 violates (SA) because there are two n-tuples of propositions \(\langle x_{1},\ldots ,x_{n}\rangle \) and \(\langle y_{1},\ldots ,y_{n}\rangle \) such that the following three conditions obtain. First, according to the description provided by ≽1, A is at least as confident in \(x_{1}\) as in \(y_{1}\), at least as confident in \(x_{2}\) as in \(y_{2}\), and … and at least as confident in \(x_{n-1}\) as in \(y_{n-1}\). In other words, A satisfies condition (ii) of (SA). Second, according to ≽1, A is strictly more confident in \(x_{n}\) than in \(y_{n}\). That is, A violates condition (iii) of (SA). Third, according to ≽1, A knows that collectively, the \(x_{i}\)s are just as accurate as the \(y_{i}\)s. In other words, (i) of (SA) is true. Then according to the description of A’s doxastic state which ≽1 provides, A is irrational. And that seems right. For A is strictly more confident in the collection of \(x_{i}\)s than in the collection of \(y_{i}\)s, despite the fact that A knows the two collections contain the same number of truths.
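Putting these three conditions together, the instance of (SA) at issue can be displayed schematically as follows; this rendering is reconstructed from the informal glosses above, not quoted from the official statement of the axiom (here w ranges over the atoms of the algebra):

\[ \underbrace{\forall w :\; \#\{i : w \models x_{i}\} = \#\{i : w \models y_{i}\}}_{\text{(i)}} \;\text{ and }\; \underbrace{x_{i} \succcurlyeq y_{i} \text{ for all } i \leq n-1}_{\text{(ii)}} \;\Longrightarrow\; \underbrace{y_{n} \succcurlyeq x_{n}}_{\text{(iii)}} \]

The violation described above is exactly the case in which the antecedent holds but the consequent fails.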

In general, of course, A’s belief set will induce many different comparative confidence orderings that are consistent with the \(\mathcal {C}\) construction. If some of those orderings do not satisfy (SA), but others do, it does not follow that A is irrational. For there are descriptions of A’s doxastic state which do not have the sorts of implications for irrationality that ≽1 has. Indeed, since Theorem 3.4 implies that A’s belief set is b-representable in this case, A is rational. In short, rationality merely requires that there be at least one comparative confidence ordering, at least one such description of A’s belief set, which satisfies both (SA) and the requirements of the \(\mathcal {C}\) construction.

But suppose instead that every single one of the orderings generated by the \(\mathcal {C}\) construction fails to satisfy (SA). Suppose there is no way whatsoever to describe A’s doxastic state, using comparative confidence, in a way that avoids the sort of irrationality exhibited by ≽1. Then A clearly seems irrational.Footnote 22

A concrete example will help motivate these conclusions. Suppose Emily the policewoman is chasing Dick the thief. She sees three directions in which Dick could have run: left, right, or straight. She is more confident that Dick went left than that he went straight. So intuitively, she should be more confident in left ∨ right than in straight ∨ right.Footnote 23

Violating (SA) amounts to violating this extremely plausible intuition. I shall demonstrate why for just one such violation.Footnote 24 Let \(\mathcal {X}\) be the Boolean algebra consisting of three atoms: left, right, and straight. Let X = 〈left, straight ∨ right〉 and let Y = 〈straight, left ∨ right〉. Since X and Y have the same number of truths in each atom of \(\mathcal {X}\),Footnote 25 condition (i) of (SA) is satisfied. Since Emily is more confident that Dick went left than that he went straight—that is, since left ≽ straight, according to Emily’s comparative confidence ordering—condition (ii) is satisfied too. But suppose (SA) fails here. That is, suppose straight ∨ right ≻ left ∨ right; this is just the negation of condition (iii). Then according to ≽, Emily’s belief set is such that she is strictly more confident in straight ∨ right than in left ∨ right. Since she is at least as confident in left as in straight, however, she is irrational.
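The counting claim of footnote 25, and the way additivity secures condition (iii), can both be checked mechanically. The following sketch is purely illustrative—the set-of-atoms encoding and the particular probability assignment are mine, not the paper’s:

```python
# Purely illustrative: propositions are modeled as sets of atoms at which they are true.
atoms = {"left", "right", "straight"}

left = {"left"}
right = {"right"}
straight = {"straight"}

# The two tuples from the text: X = <left, straight-or-right>, Y = <straight, left-or-right>.
X = [left, straight | right]
Y = [straight, left | right]

# Condition (i) of (SA): at every atom, X and Y contain the same number of truths.
same_truth_counts = all(
    sum(w in p for p in X) == sum(w in p for p in Y) for w in atoms
)
print(same_truth_counts)  # True: each tuple contains exactly one truth at every atom

# Why additivity forces condition (iii): an illustrative probability assignment,
# subject only to P(left) >= P(straight).
P_atoms = {"left": 0.5, "right": 0.3, "straight": 0.2}

def prob(p):
    return sum(P_atoms[w] for w in p)

assert prob(left) >= prob(straight)                  # Emily's comparative judgment
print(prob(left | right) >= prob(straight | right))  # True, by additivity
```

Because right contributes equally to both disjunctions, any additive probability function that respects left ≽ straight automatically respects left ∨ right ≽ straight ∨ right, just as the intuition demands.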

Given these reasons for thinking that rational agents should satisfy Constructive Consistency and Scott Satisfiability, the sufficient condition of Theorem 3.4 provides a plausible rationality constraint. Consequently, in virtue of that theorem, it follows that if an agent is rational, then her belief set can be represented by a probability function.

5 Conclusion

Theorem 3.4 states a sufficient condition for b-representability, and there are reasons for thinking that this condition provides a plausible rationality constraint. Moreover, by satisfying that constraint, an agent’s belief set conforms to many of the constraints on rationality that credences impose. For by Theorem 3.4, there exists a probability function that b-represents the belief set.

Though I find the points made in Section 4 compelling, I do not think that they establish, once and for all, that credences are just mathematical representations of agential belief. Perhaps there are countervailing reasons to reject Scott Satisfiability, for instance. (SA) is a rich, complicated axiom, and that might make it seem too strong to be a rationality constraint. Moreover, Definition 2 assumed that the threshold for belief is \(\frac {1}{2}\). So one may be able to reject the conclusion that credences reduce to belief, as argued for here, by rejecting the \(\frac {1}{2}\) threshold.
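To make the threshold talk concrete, here is a toy sketch of how a probability function generates a Lockean belief set at the \(\frac {1}{2}\) threshold, using the three-atom algebra from the Emily example. The strict ‘>’ and the particular numbers are illustrative assumptions of mine, not the official clause of Definition 2:

```python
from itertools import chain, combinations

# Purely illustrative: the three-atom algebra from the Emily example, an arbitrary
# probability assignment, and a strict 1/2 threshold (the strictness is an assumption).
atoms = ("left", "right", "straight")
P_atoms = {"left": 0.5, "right": 0.3, "straight": 0.2}

def propositions():
    # Every subset of atoms, i.e., every proposition in the Boolean algebra.
    return chain.from_iterable(combinations(atoms, k) for k in range(len(atoms) + 1))

def believed(p, threshold=0.5):
    # Lockean clause under the strict-threshold assumption.
    return sum(P_atoms[w] for w in p) > threshold

# The belief set this probability function generates.
belief_set = [set(p) for p in propositions() if believed(p)]
print(len(belief_set))  # 3: left-or-right, left-or-straight, and the tautology
```

On this assignment, changing the strict ‘>’ to ‘≥’ would change the belief set (left itself, at probability exactly \(\frac {1}{2}\), would then be believed), which illustrates why the precise form of the threshold matters to the reduction.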

But in fact, these results look like they will generalize to thresholds other than \(\frac {1}{2}\). By way of illustration, in the Appendix, I explain how to generalize these results to a threshold of 1.Footnote 26 To put it briefly, for the 1 threshold case, I change the definition of b-representability, I alter \(D_{1}\) and \((\mathcal {C}_{2})\), I delete \(D_{2}\), and I change the definition of a comparative confidence ordering inducing a belief set. The definition of b-representability is changed because, of course, the threshold for belief is now 1. \(D_{1}\) and \((\mathcal {C}_{2})\) are altered because they are the key belief-to-confidence constraints in the \(\mathcal {C}\) construction which allow a belief set to be (almost) entirely recovered from the comparative confidence orderings it induces. And when the threshold is 1, different belief sets need to be recovered. \(D_{2}\) is deleted simply because it no longer applies. The agent could fail to believe that p, and yet still be properly described as strictly more confident in p than in ¬p. The definition of an ordering inducing a belief set is changed because, unsurprisingly, the required notion of ‘inducing’ for a threshold of 1 is different from the required notion of ‘inducing’ for a threshold of \(\frac {1}{2}\).

For the sake of reducing credence to belief, of course, it would be great if these results extended to other thresholds. But it would also be quite interesting if they did not. That would suggest that the thresholds \(\frac {1}{2}\) and 1 are rather special: they are the unique thresholds for which credences can be reduced to belief via Scott’s axiom and orderings induced by belief sets. So either way, the present results point to avenues for future research that are worth pursuing. If the results extend to other thresholds, then the reductive project is that much closer to full success. If not, then we have discovered an interesting fact about thresholds \(\frac {1}{2}\) and 1.

In sum, the results of Sections 3 and 4 motivate the view that belief is basic, and that credences are just mathematical tools for summarizing belief states. Moreover, given that Theorem 3.4 holds for the \(\frac {1}{2}\) threshold, and that a similar theorem holds for the threshold 1, there is reason to seek out analogous theorems for other thresholds.Footnote 27 The present results illustrate a couple of ways to reduce credence to belief, and they point to other ways of carrying out that reduction.