1 Introduction: representationalism and degrees of belief

I take representationalism concerning belief to be a familiar doctrine to readers, so will not dwell on summarizing it. It states that a belief is a mental representation which is in a ‘belief box’, in so far as it performs particular roles—in guiding action and reasoning, inter alia—that other mental representations do not. The most influential version of the view is Fodor’s (1975, 1981), according to which mental representations are linguistic in character—are sentences in a language of thought (‘mentalese’)—and feature in several different propositional attitudes. As explained by Fodor and Pylyshyn (1988), these putative mental representations have combinatorial syntax and semantics to which cognitive processes are sensitive.

Fodor’s variant of representationalism is attractive for two key reasons. First, it enables us to understand how thought structurally maps onto the world (and indeed hypothetical or counterfactual scenarios) by analogy with how natural language does the equivalent. Concepts are analogous to words and propositions are analogous to sentences, for example. Second, it provides a means of explaining the productivity and systematicity of thought (and the propositional attitudes involved therein). That’s to say, it explains how an agent can readily entertain infinitely many new propositions—consider the sequence ‘1 + 1 = 2’, ‘2 + 1 = 3’, ‘3 + 1 = 4’, and so on—and how an agent who entertains ‘Sally met Harry’ is easily able to entertain ‘Harry met Sally’.Footnote 1

Due partly to these attractive features and its considerable influence, I will focus on Fodor’s variant of representationalism in this paper. However, the findings plausibly carry over to other versions of representationalism, such as those due to Millikan (1984), Dretske (1988), and Cummins (1996a). Indeed, there is a growing literature in cognitive science positing ‘graded’ mental representations in order to account for uncertainty; see, for instance, Chater and Oaksford (2008). Yet none of this work adopts a Fodorian approach, and hence one might doubt that Fodor’s view remains tenable when beliefs are not treated as merely binary, or ‘on–off’, in character.Footnote 2

This brings us to the idea that beliefs come in degrees. The underlying idea is simple. Each of us is more confident about some things than others. For example, most people who believe both that ‘2 + 3 = 5’ (p) and that ‘A human colony will be established on Mars in the next twenty years’ (q) are more confident in the former than the latter. And we expect this to be reflected in their actions. For instance, we’d expect such people to prefer to bet on p rather than on q, if offered the choice of an even odds bet on only one or the other (in normal circumstances).Footnote 3 It is easy to see how similar reasoning can be extended to ‘disbeliefs’, or beliefs in propositions involving negations, such as ‘2 + 3 ≠ 5’ (~p).

On a representational view of concepts, it is not possible to account for degrees of confidence by noting that agents may have full beliefs such as ‘The probability of p is greater than 0.999’, ‘The probability of q is less than 0.8’, ‘p is more likely than q’, or even merely ‘p is more possible than q’. The primary reason is that the kind of behaviour explicable by appeal to degrees of belief may be observed in agents who do not possess mental representations of probability, and who have not even entertained the notion that possibility comes in degrees. For example, young children act on some of their beliefs more confidently than they do on others, despite not having a firm grasp of modal notions or probability theory. A boy might be prepared to walk but not run over a glass-bottomed bridge, yet content to run across a wooden equivalent without giving safety a second thought.Footnote 4

Degrees of belief feature in most interpretations of probability of an epistemic, or information-based, variety.Footnote 5 Notably, they form the basis for the subjective interpretation of probability, which was devised independently by Ramsey (1926) and De Finetti (1937). The fundamental idea behind this interpretation is that a rational agent’s degrees of belief should obey various rules. For example, one’s degrees of belief involving a proposition and its negation should be apportioned appropriately. Let unity represent certainty, and D(p) represent a degree of belief in p. Then for a rational agent, D(p ∨ ~p) = 1 and D(p) + D(~p) = 1. And so on. It turns out that a rational agent’s degrees of belief obey the axioms of probability. Or so De Finetti and Ramsey claimed. Their original arguments to this effect involved equating preferred betting quotients (or ‘odds’) with degrees of belief in (particular) gambling scenarios, and showing that selecting betting quotients which violate the axioms of probability leaves one susceptible to being Dutch Booked (or losing whatever happens) in a way one would otherwise not be. Dutch Book arguments have subsequently been discussed extensively, and refined in many ways as a result; see Hájek (2009) for an overview. The details need not concern us. The significant matter is simply that the conclusion of the original Dutch Book arguments remains plausible.
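The arithmetic behind the original Dutch Book arguments can be made concrete with a toy sketch. The quotients below are purely illustrative and not drawn from Ramsey’s or De Finetti’s own presentations: an agent whose degrees of belief in p and ~p sum to more than one, and who bets accordingly, loses whatever happens.

```python
# A toy Dutch Book sketch; the quotients are illustrative assumptions only.
# An agent prices a unit-stake bet on a proposition at her degree of belief in it.

def net_payoff(quotient_p, quotient_not_p):
    """Net result of buying unit-stake bets on both p and ~p at the given prices.

    Exactly one of p and ~p is true, so the winnings are 1 whatever happens;
    only the total price paid varies with the agent's quotients.
    """
    return 1.0 - (quotient_p + quotient_not_p)

# Incoherent: D(p) + D(~p) = 1.2 > 1, so a loss is guaranteed either way.
print(round(net_payoff(0.7, 0.5), 2))   # -0.2, whichever of p and ~p is true

# Coherent: D(p) + D(~p) = 1 leaves no guaranteed loss.
print(round(net_payoff(0.7, 0.3), 2))   # 0.0
```

The outcome of the bet never enters the computation, which is precisely the point: when the quotients violate the axioms, the loss is fixed before the world has its say.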

The putative connection between degrees of belief and probabilities is also significant in so far as it requires that the former, like the latter, can be conditional. Any P(p|q) on a subjective interpretation of probability (inter alia) will involve a D(p|q). Moreover, advocates of information-based (or epistemic) interpretations of probability often take conditional probabilities to be fundamental.Footnote 6 Keynes (1921: 6–7) put it as follows:

No proposition is in itself either probable or improbable, just as no place can be intrinsically distant; and the probability of the same statement varies with the evidence presented, which is, as it were, its origin of reference. It is as useless... to say “b is probable” as it would be to say “b is equal,” or “b is greater than,”…

And De Finetti (1990: p. 194) followed suit, in declaring:

[E]very prevision, and, in particular, every evaluation of probability, is conditional; not only on the mentality or psychology of the individual involved, at the time in question, but also, and especially, on the state of information in which he finds himself at that moment.

This is not merely to say that one’s degrees of belief are subject to change in response to changes in information. It is to declare that unconditional degrees of belief do not exist. This is not, however, as puzzling as it might initially appear. First, accepting this is compatible with using mathematical representations such as D(p) and understanding these to represent particular kinds of conditional degrees of belief. Second, as suggested by Popper (1959), the conditional degrees of belief in question may be taken to be of the form D(p|T), where T represents a tautology. And psychologically rather than logically speaking, one might take T to represent a sparse innate information state, which might include the syntax of mentalese. Indeed, it is difficult if not impossible to square Fodor’s view of the mind—especially given how it accounts for productivity, systematicity, and compositionality—with the possibility of an agent having a degree of belief (or a full belief) in a proposition without having any other information whatsoever.

It should be noted, finally, that degrees of belief might be understood to be imprecise in character, just as probabilities might be. Thus it is not necessary to admit that degrees of belief can have values such as 1/1,000,000 or 1/1,000,001, which would be implausible given our psychological inability to distinguish between the betting odds we consider to be fair at such a high degree of resolution.Footnote 7 One may instead understand degrees of belief to be interval-valued, in line with the approach of Kyburg (1983). For simplicity’s sake, however, I will write as if the possible values of degrees of belief lie on a continuous spectrum between zero and unity.

This concludes my overview of Fodor’s representationalism and the idea that belief comes in degrees. In the next section, I will present and refute two arguments that the former is difficult to combine with the latter. Beforehand, I will add only that the compatibility of the two ideas has not been seriously considered by those working in philosophy of probability or formal epistemology, in their discussions of the possible nature of degrees of belief.Footnote 8

2 Two arguments against representationalism concerning degrees of belief

Before we consider the arguments against representationalism concerning degrees of belief, consider the consequences if they are successful. First, representationalism is attractive largely because of its alleged scope: it purports to provide a basis for understanding all propositional attitudes. So if it fails for belief states (or for closely related cognitive states), its appeal diminishes significantly. Second, propositional attitudes other than beliefs also come in degrees. For example, I fear losing an arm considerably more than I fear losing a little finger (while fearing the latter independently). So one might expect (some of) the arguments against the coherence of representationalism concerning degrees of belief to bear on the compatibility of representationalism with other propositional attitudes involving degrees. This expectation is well founded, as will soon become apparent.

2.1 The ‘clutter’ argument

I will tackle two arguments, both of which are presented by Schwitzgebel (2002). The first concerns ontological inflation, or the ‘clutter’ that follows from admitting degrees of belief in addition to beliefs:

[T]he well-known difficulty of how to account on a representational theory for the apparently infinite number of beliefs each of us has (e.g., I believe that the number of planets is less than 10, also that it is less than 11, etc.) becomes even more intractable if propositions about which one has no settled opinion are beliefs of low or intermediate degree of confidence. It may then follow that for every possible proposition, one must have a representation of some sort in mind. (Schwitzgebel, 2002: p. 271)

My response is as follows. First, positing degrees of belief, above and beyond beliefs, need not involve suggesting that any given agent possesses any additional mental representations of propositions (other than those standardly admitted by representationalists). In brief, the reason is that degrees of belief may be construed either as: (1) constituting fine-grained detail of (coarse-grained) states that are standardly associated with believers, such as suspension of belief, which already involve the relevant representations; or (2) as supplements to the aforesaid states (which, to repeat, already involve the relevant representations).

More precisely, independently of thinking in terms of degrees of belief, we allow the possibilities, for any given conceivable proposition p, that a person might believe p, disbelieve p, or suspend belief in p. This only requires appeal to a single representation, of p, to which a person might take three different attitudes.Footnote 9 The only salient alternative is to posit a representation of ~p in addition to a representation of p, and just two attitudes: belief and suspension thereof. For simplicity’s sake—since the present issue is whether appeal to degrees of belief increases the number of propositional representations—I’ll treat these options as interchangeable, and discuss disbelief in p in terms of belief in ~p. Now consider how belief, disbelief and suspension might be accounted for in terms of a subject S’s degree of belief concerning p, Ds(p), such that 1 ≥ Ds(p) ≥ 0. Let Bsp represent ‘S believes p’ (as in standard epistemic logics) and Ssp represent ‘S suspends belief in p’. One simple idea is to introduce two thresholds for degrees of belief, d and b, with 1 > b ≥ d > 0, such that:

Ds(p) > b if and only if Bsp

Ds(p) < d if and only if Bs ~p

b ≥ Ds(p) ≥ d if and only if Ssp

One might set b and d at 0.5, for example, and hold that suspending belief equates to having a degree of belief equal to 0.5, whereas believing and disbelieving equate to having higher or lower degrees of belief respectively. The only propositional mental representation appealed to is p.

This account of the relationship between belief and degrees of belief is too simplistic as it stands, for a variety of technical reasons. For example, there are situations where individuals suspend judgement on p, q, and p&q. But this requires their degrees of belief to violate the axioms of probability on the simple proposal, due to the multiplication law of probability.Footnote 10 As shown by Leitgeb (2014), however, a similar account can work provided that the thresholds involved are context sensitive. Moreover, even if beliefs cannot be reduced to degrees of belief—as Leitgeb (2017) now thinks—it only follows that positing degrees of belief involves positing a propositional attitude other than belief (which comes in degrees, as several other attitudes do).Footnote 11
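The clash with the multiplication law can be made concrete with a toy sketch. The thresholds and values below are purely illustrative: an agent who suspends judgement on two independent propositions p and q cannot, on pain of violating the probability axioms, also suspend judgement on p&q.

```python
# A sketch of the simple threshold account; the thresholds are illustrative only.
BELIEF_THRESHOLD = 0.7      # b
DISBELIEF_THRESHOLD = 0.3   # d, with 1 > b >= d > 0

def attitude(degree):
    """Classify a degree of belief as belief, disbelief, or suspension."""
    if degree > BELIEF_THRESHOLD:
        return "belief"
    if degree < DISBELIEF_THRESHOLD:
        return "disbelief"
    return "suspension"

# Suspension on p and on q individually is compatible with the thresholds...
d_p, d_q = 0.5, 0.5
# ...but, for independent p and q, the multiplication law fixes D(p&q):
d_p_and_q = d_p * d_q    # 0.25, which falls below the disbelief threshold

print(attitude(d_p), attitude(d_q), attitude(d_p_and_q))
# suspension suspension disbelief
```

So, on this toy assignment, probabilistic coherence forces the agent to disbelieve the conjunction even while suspending on both conjuncts, which is just the sort of result Leitgeb’s context-sensitive thresholds are designed to avoid.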

Granted, it might be argued that clutter is caused merely by positing degrees of belief in addition to beliefs. But whether it is genuinely clutter—whether a significant principle of parsimony, such as Ockham’s razor, is violated—depends on whether the positing of such entities does explanatory work (of a psychological variety, inter alia). And it does explanatory work for reasons already adduced. It explains why a person not schooled in probability theory (or anything similar) would strongly (and rationally) prefer one option to another while fully believing that each option would result in success (and the same utility). For example, it’s possible to prefer one investment to another while fully believing that both will issue in the same profit.

There is, however, a more sophisticated clutter-based argument that one might make. This involves pointing to the vast, plausibly infinite, number of conditional degrees of belief that an agent might be required to have on a representationalist picture. Consider, for instance, your degree of confidence that ‘Joe Biden is President of the USA’ (p) conditional on your present background knowledge (b). In addition to this, you might be thought to have an enormous array of degrees of confidence covering potential future evidence items (e, e1, e2, etc.) concerning p: D(p|b&e), D(p|b&e1), D(p|b&e2), and so on. Since some future possible evidence states concerning p instead involve deletion of information in b, moreover, the relevant array is even larger (unless both are infinite and of the same cardinality). So doesn’t this entail that an agent must have a degree of belief box, so to speak, occupied by e, e1, and so forth, in addition to p? Doesn’t it strongly suggest, moreover, that this box must contain representations of the discrete relations between p and each of these items, or representational correlates to p|b&e, p|b&e1, etc.?

Representationalists may answer both questions in the negative. The primary reason is that a conditional degree of belief such as D(p|b&e) may be understood to be the degree of belief that an agent is disposed to have in p upon believing (or assuming/accepting) b and e (and grasping p).Footnote 12 (Not all conditional degrees of belief are active.) Possessing such a disposition does not require believing (or assuming/accepting) b or e, or even having any mental representation of b or e. It merely requires having the mental capacity to entertain (and believe or accept) b and e, and thus the capacity to mentally represent b and e. By way of analogy, consider ‘888,888,888,888/2 = 444,444,444,444’ (q). You believe q to be true. But you almost certainly did not have this belief when you began to read the penultimate sentence. Nor, in all probability, did you have mental representations of the two large numbers involved. Nevertheless, as Audi (1994) argues, you did have the disposition to believe q. Hence you had the ability to form the mental representations necessary to believe q (on a representationalist view), in line with the productivity of thought considerations mentioned in the introduction.
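The dispositional reading has a natural computational analogue: rather than a value being stored in advance for every possible evidence item, the relevant value is computed only when the supposition is actually entertained. The sketch below is merely an analogy, and the Bayesian rule and toy numbers are my own illustrative assumptions, not part of the representationalist proposal itself.

```python
# An analogy only: a conditional degree of belief as a capacity evaluated on
# demand, not an entry stored for every possible evidence item. The Bayesian
# rule and the toy numbers are illustrative assumptions.

def degree_given(prior_p, like_e_given_p, like_e_given_not_p):
    """Return D(p|e) via Bayes' theorem, computed only when e is entertained."""
    numerator = like_e_given_p * prior_p
    return numerator / (numerator + like_e_given_not_p * (1 - prior_p))

# Nothing corresponding to D(p|b&e) is stored in advance; entertaining e
# triggers the disposition to have this degree of belief in p.
print(degree_given(0.5, 0.9, 0.1))   # 0.9
```

The function, like the disposition, covers indefinitely many evidence items without a representation of any of them existing before the question is posed.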

Thus, a representationalist may hold that agents have dispositions to believe propositions to certain degrees or extents in addition to dispositions to believe simpliciter. On this view, you once had a disposition to believe q to a high degree, as well as a disposition to believe q, upon considering q (and forming the mental representations involved therein) in the light of your background information b. This disposition has now manifested, as a result of the consideration you gave q a few moments ago. It is conceivable that this manifestation (or activation of a degree of belief in q) involves the generation of a representation of q|b (or a surrogate involving only a proper subset of your background information). But no such additional representation need be posited. Instead, q might simply ‘have a place in the degree of belief box’ that it did not have before. I will discuss the internal structure of this degree of belief box—and how this box might relate to the belief box—subsequently. For the moment, note that it is plausible that ‘conditional judgements … cannot be interpreted as a belief in any proposition’ (Edgington, 1995: p. 280), and hence that they don’t involve a single sentence in mentalese, on the view of conditional degrees of belief I am using here. This view is summarized by Ramsey (1931: p. 247):

If two people are arguing “If p will q?” and are both in doubt as to p, they are adding p hypothetically to their stock of knowledge and arguing on that basis about q; ... they are fixing their degrees of belief in q given p.

Another concern about clutter, which arises from the previous considerations, should be mentioned at this juncture. This concerns implicit degrees of belief, which easily follow from explicit degrees of belief (or degrees of belief in the appropriate ‘box’). To appreciate the concern, consider an agent with a full belief in p and a full belief in q. Such an agent might be said to have an explicit belief in p & q without attributing to her an additional full belief in a mental representation r with the content ‘p & q’. But what if an agent has a degree of belief in p and a degree of belief in q? Doesn’t it follow that she has an active degree of belief in p & q? It is possible to answer in the negative.Footnote 13

First, there is no pressing reason to think that acting on (or deciding on the basis of) two or more distinct degrees of belief requires forming additional degrees of belief. It will require an operation that takes into account the strengths of the degrees of belief involved, and not merely their existence (as would be the case in the analogical situation involving full beliefs). But the output of that operation need not be applied to a mental representation involving a conjunction, rather than another kind of representation (involving, say, the desirability of an action based on its anticipated utility and the relevant degrees of belief). Such modes of representation are required—from a representationalist perspective—irrespective of whether degrees of belief are posited above and beyond (full) beliefs. Thus mental processes might seem more complex when we introduce degrees of belief, but it is dubious that this introduces any unwarranted ontological clutter. As we will see subsequently, in Sect. 2.2.1, it is unclear that allowing for degrees of belief while rejecting representationalism results in positing less complex processes.
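The point can be illustrated with a toy sketch of such an operation. The utilities and the independence assumption are mine, purely for illustration: the strengths of the two degrees of belief feed a transient computation, but no new degree of belief in the conjunction need be stored as an attitude.

```python
# Illustrative only: the utilities and the independence of p and q are
# assumptions made for the sketch. The strengths of the two degrees of belief
# enter a transient computation; no conjunctive attitude is stored.

def expected_utility(d_p, d_q, utility_success, utility_failure):
    """EU of an action that succeeds just in case both p and q hold."""
    chance = d_p * d_q   # computed inside the operation, then discarded
    return chance * utility_success + (1 - chance) * utility_failure

print(round(expected_utility(0.9, 0.8, 100, -10), 1))   # 69.2
```

The product of the two strengths exists only inside the operation, in the way that an intermediate value exists only inside a computation; nothing corresponding to an attitude towards p&q persists afterwards.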

Second, imagine someone considering an action which she (fully) believes will be successful only if four distinct propositions—p, q, r, and s—are true. We need not think that in order to consider the action, she must proceed stepwise to mentally represent p & q, then p & q & r, and then p & q & r & s (or p & s, then p & q & s, and then p & q & r & s, etc.). So even granting that such an individual must form a representation of p & q & r & s under such circumstances—imagining a satisfactory argument to this effect were to be found—the extra clutter introduced is minimal. Moreover, it is possible to allow that agents have dispositions to form such conjunctive mental representations, which are only triggered when they are needed, in line with the general strategy outlined above. Consider, for example, your current degrees of belief in ‘Acupuncture can temporarily relieve pain’ (p) and ‘Barack Obama was once the President of the USA’ (q). It is implausible that you have formed any active degree of belief involving p and q, provided (as seems likely) the conjunction has not been relevant to any of your considerations. However, it is possible that you would form such a degree of belief under the correct circumstances, in virtue of possessing a disposition to manifest an active D(p&q|b) of a specific value under said circumstances. The kind of circumstances I have in mind will become apparent in the next few sections.

Before I continue, however, I should address one final way in which excess clutter might be a concern. This is in the special case where beliefs and degrees of belief are construed as distinct propositional attitudes, which is a possibility that I discuss in greater depth in Sect. 3. Doesn’t this lead to a ‘double storage’ problem? I don’t think so, for the same reason that I don’t think that a simultaneous fear of p and belief in p generates a double storage problem. In short, an agent might have a single mental representation of p to which they take multiple attitudes simultaneously. I suspect that the inelegant ‘boxing’ analogy is responsible for any impression to the contrary. The analogy makes it natural to think that one would have to put a token representation of type p in the belief box, and another token representation of type p in the degree of belief box, in order to believe p and have a degree of belief that p. However, one could instead imagine ‘boxes’ that expand and contract, and which might overlap, in so far as the analogy is only intended—as all analogies are—to partially hold. It would perhaps be preferable to eschew the analogy, and to appreciate that a single representation might be connected to various attitude formation mechanisms/modules simultaneously. Whether this possibility obtains is ultimately an empirical question.

If beliefs and degrees of belief were distinct and sometimes compresent, a ‘synchronisation’ mechanism, as Weisberg (2020: p. 4) puts it, would also be necessary to ensure their coherence. First, however, it seems independently plausible that we have several mechanisms of this kind governing how our other propositional attitudes interrelate. For example, it does not seem that one can (normally) strongly fear that p while strongly desiring that p. It is a significant possibility that phenomena such as wishful thinking and self-deception occur when such mechanisms aren’t functioning as they should (or result from their limitations under unusual circumstances). In brief, these are cases where our propositional attitudes appear not to fit together as they ought to; our interest in them stems from this apparent irregularity.Footnote 14 Second, moreover, one might deny that beliefs and degrees of belief in the same proposition(s) are ever compresent. This is a view advocated by Weisberg (2020), which I will explain in Sect. 3.

2.2 The argument concerning content formation and role in action

I will now turn my attention to Schwitzgebel’s second argument against—or concern about—representationalism concerning degrees of belief. It runs as follows:

[Representationalists] have struggled to describe naturalistically how a belief gets its content and the role it plays in action. This task is apt to become much more complicated by the introduction of degrees of belief: If (simplifying a bit) the belief that a cow is there is just the belief apt to be caused, in normal circumstances, by a cow’s being there, what are we to say about the belief, with .4 degree of confidence, that a cow is there? (Schwitzgebel, 2002: p. 271)

The treatment here is rather condensed, and requires some unpacking. The two challenges are to describe naturalistically: how a degree of belief gets its content; and the role a degree of belief plays in action. The concluding question—concerning how ‘the belief, with 0.4 degree of confidence, that a cow is there’ is ‘apt to be caused’ on a representationalist view—is intended to pertain to each challenge. So I will answer this question as I tackle each. I will interpret it as concerning generation of a degree of belief of 0.4 that a cow is present, not a ‘belief … that a cow is there’ to which a degree of confidence of 0.4 is attached (which is what the passage suggests when read literally). That’s because the presence of a degree of belief of 0.4 in a proposition is compatible with, and will often be associated with, disbelief in said proposition.

Before I tackle the challenges, however, I should like to address the overarching character of Schwitzgebel’s second argument. His concern is that an existing problem for representationalism becomes much more complicated if the existence of degrees of belief is admitted. But even if this is true, it is unclear that this is a good reason for the representationalist to balk at allowing for degrees of belief. There doesn’t appear to be any general epistemic principle of the form: ceteris paribus, the more complex a problem confronting a theory is, the more likely the theory is to have a serious defect. I say this because I grant that the introduction of degrees of belief introduces extra complexity to existing problems for representationalism, although I do not concede, for reasons I subsequently cover, that this constitutes ‘much more’ complexity. Introducing degrees of belief is also not obviously any less complex or problematic on a dispositionalist approach to belief, such as that favoured by Schwitzgebel (2002), for reasons I explain below.

2.2.1 How degrees of belief get their content

Let’s begin by considering how a degree of belief ‘gets its content’. The phrase is a little vague, but here are two salient ways to understand it: first, as concerning how the mental representations that become involved in degrees of belief arise ab initio; second, concerning how those mental representations come to feature in degrees of belief (rather than other propositional attitudes). I will deal with each in turn. My short answer on the first issue is: degrees of belief acquire their content in the same way that beliefs acquire their content. More precisely, accounting for how and when we are disposed to form a mental representation (or a sentence in mentalese)—such as ‘A cow is there’—is not a new problem. Even assuming that degrees of belief do not exist, for example, it is necessary to account for how one might come to entertain a proposition such as ‘A vampire is there’ despite not having encountered a vampire. One might see a sinister looking shadow after watching a horror movie. Or one might wonder what the world would be like if there were vampires among us, after reading Dracula or playing a roleplaying game involving undead characters. In short, forming a representation doesn’t require deploying it in a belief. And as shown earlier, allowing for degrees of belief doesn’t require allowing for mental representations of a different kind, say of the form q|b; a conditional degree of belief q|b need only involve representations of q and b.

This brings us, second, to the distinct question of how particular mental representations come to feature in degrees of belief (rather than other kinds of propositional attitudes). Due to the intimate connection between degrees of belief and belief, one might begin by suspecting that the following is true:

Belief sufficiency condition (for properly functioning agents): If an agent believes p or ~p, then she has an active degree of belief involving p and her current background knowledge b, D(p|b).

According to this condition, for a properly functioning agent there is no belief in a proposition without a degree of belief in said proposition. ‘Properly functioning agent’ features because cognitive impairments might cause the condition to be violated; for example, sometimes not all of an agent’s (relevant) background knowledge might be involved in the belief formation process. But why think this condition holds in normal psychological circumstances? The reason is that our belief formation processes are sensitive to our (subjective) evidence, and evidential relations are not exhausted by (actual or perceived) entailment relations. We know, for instance, that behaviour may be conditioned on the basis of repeated similar experiences, despite the fact that the presence of a pattern in past experience doesn’t guarantee that said pattern will continue in future experience. As Russell (1959: p. 63) pithily put it: ‘The man who has fed the chicken every day throughout its life at last wrings its neck instead … our instincts cause us to believe that the sun will rise to-morrow, but we may be in no better a position than the chicken’. Similarly, background information like ‘This die landed on six on its last four rolls’ (perhaps in conjunction with ‘Some dice are loaded’) often serves as (subjective) evidence for ‘This die will land on six on its next roll’ (q). Seeing the die land on six again when rolled would typically increase confidence in q, whereas seeing it fail to land on six when rolled would typically decrease confidence in q. Note that I don’t assume, here, that subjective evidence and objective evidence are always in alignment. For example, agents prone to supernatural beliefs may be susceptible to errors in probabilistic reasoning, such as failing to take sample size into account properly (Blackmore & Troscianko, 1985).
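The die case can be given a toy Bayesian rendering. All of the numbers below are illustrative assumptions rather than part of the argument: the point is only that confidence that the next roll is a six rises on seeing a six and falls on seeing a miss, exactly as described.

```python
# A toy model of the die case; every number here is an illustrative assumption.
FAIR, LOADED = 1 / 6, 0.9    # chance of a six under each hypothesis

def update(p_loaded, rolled_six):
    """Bayes-update the probability that the die is loaded on one observed roll."""
    like_loaded = LOADED if rolled_six else 1 - LOADED
    like_fair = FAIR if rolled_six else 1 - FAIR
    numerator = like_loaded * p_loaded
    return numerator / (numerator + like_fair * (1 - p_loaded))

def confidence_in_six(p_loaded):
    """D('the next roll is a six') given the current weight on 'loaded'."""
    return p_loaded * LOADED + (1 - p_loaded) * FAIR

p = 0.05                                     # a modest initial suspicion of loading
before = confidence_in_six(p)
after_six = confidence_in_six(update(p, True))
after_miss = confidence_in_six(update(p, False))
# Seeing a six raises confidence in q; seeing a miss lowers it.
print(before, after_six, after_miss)
```

No representation of probability need be attributed to the agent for her behaviour to fit this pattern; the model merely describes, from the outside, how her confidence shifts with the evidence.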

Paradigm cases in support of the belief sufficiency condition involve conscious deliberation concerning several hypotheses. Imagine you return to where you thought you parked your car, in a multi-storey car park, only to discover that it is not present. Your first reaction is to fearfully consider the possibility that the car has been stolen. But then it occurs to you that it would be difficult to get the car out of the car park without the correct parking ticket, which you still have in your pocket. You then consider the possibility that you remember where the car was parked relative to the level on which it is parked, but are simply on the wrong level. And so on. You have a capacity to rank these possibilities in order of salience. But this is just to say that you’re more confident in some of these hypotheses than you are in others. Initially, you might conclude that you’re on the wrong level. You may already know that if you discover you’re on the correct level, after checking, then you’ll come to believe that you’ve forgotten where you parked the car on that level.

However, one might doubt the belief sufficiency condition on the grounds that some beliefs are formed without conscious deliberation, especially if one supports a dual process theory (and perhaps dual system theory) of belief formation. Take a mundane belief caused by perception, such as your current belief that there’s a page or a screen in front of you. Was this unconsciously mentally ranked as one possibility among many? Perhaps not. I do not have any strong view on the matter.Footnote 15 Hence, I will simply note that the previous condition may be weakened, such that it is more plausible, as follows:

Type-2 belief sufficiency condition (for properly functioning agents): If an agent believes p or ~p as a result of a type-2 mental process, then she has an active degree of belief involving p and her current background knowledge b, D(p|b).Footnote 16

It is therefore a reasonable hypothesis that a degree of belief in a proposition such as ‘a cow is present’ is often generated prior to a belief in ‘a cow is present’ occurring, as part of the online (and perhaps offline) mental process of assessing the salience of various possibilities. But we have also seen that a degree of belief in such a proposition may occur merely when a belief in that proposition—or the possibility of that proposition being true—is being considered. For example, when viewing a seated bovine from afar, one might be unable to see whether it has an udder or a penis. Thus one might look to its other features, such as how muscular it is, in order to reach a determination of its sex. One might come to believe ‘a bull is present or a cow is present’ (p) with a high degree of belief and ‘a bull is present’ (q) with a lower degree of belief (say, relative to the background information that no steers or heifers are present in the area). But one would have a degree of belief in ‘a cow is present’ (r) too, which could be 0.4 in magnitude. Thus, a representationalist may say that a degree of belief of 0.4 that r is apt to be formed in a wide variety of circumstances when a belief in r is being considered. It is impossible to enumerate these circumstances (and there are other situations in which such degrees of belief may be formed, which I come to in a couple of paragraphs). The representationalist is in the same boat as the dispositionalist concerning belief at this juncture. The dispositionalist cannot list all the dispositions associated with a full belief such as ‘a cow is present’, not least because these are conditional, inter alia, on the other propositional attitudes a given agent has. Nor can the dispositionalist specify when the dispositional profile associated with any given belief is liable to come into being, to go out of being, or otherwise to change (such that an ‘in-between belief’ state arises).

A critic could still insist that the answer to ‘When will a degree of belief in r of degree n normally be formed?’ is less clear than that to ‘When will a belief simpliciter in r normally be formed?’ Granted, it might seem so when considering a belief like ‘There is a cow’ and thinking that the formation of this can be accounted for, normally, by the presence of a cow. But what, say, of a belief in Pythagoras’s theorem? It would be curious to say this is normally accounted for by the truth of the theorem, because it’s necessarily true. It would seem, that’s to say, to be so trivial as to be no kind of explanation at all. It seems much more plausible that what normally causes one’s belief in the theorem is one’s possession of appropriately strong evidence that the theorem is true. Now note the vagueness that has been introduced by ‘appropriately strong’. It is dubious that any more vagueness is introduced by declaring that ‘A degree of belief in r of degree n is normally formed when the agent has appropriately strong evidence that r’. It’s just that what’s appropriately strong will vary as n varies.

We have considered how degrees of belief may be involved in belief formation. However, it is plausible that degrees of belief are also formed when an agent merely assesses the extent to which possibilities cohere. That’s to say, the following, or something similar, appears to hold:

Possibility coherence assessment condition: If an agent has assessed the extent to which a particular possibility – the truth of a statement, p – coheres with another possibility – the truth of a different statement, q – then she has a conditional degree of belief involving p and q, D(p|q).Footnote 17

Assessment of this kind without the intention to form a belief whether p (or whether q) occurs in a variety of imaginative circumstances, such as when forming contingency plans or when engaging with fiction. For example, Alexander the Great was reputedly such a superb general, in part, because he anticipated various possible scenarios ahead of many battles, and told his lower-level commanders what to do if they came about. His aim in so doing was to consider which conditions would contribute towards victory in a variety of potential situations. He didn’t need to think the potential situations were highly likely, or even likely, in order to consider them. Similarly, if you consider whether the USA would have been better governed in the event that Hillary Clinton had been elected as its President rather than Donald Trump, you will consider ‘your degree of belief … in a hypothetical-belief distribution … derived from your actual distribution…’ (Edgington, 1995: pp. 263–264).

2.2.2 How degrees of belief play a role in action

This leaves the question of how degrees of belief play a role in action. The answer, which is alluded to in the discussion (and examples) in the previous section, is threefold. In presenting each part, I will assume we are discussing a properly functioning agent (as previously specified).

First, degrees of belief play an indirect role in (guiding) action in so far as they are involved in (at least some processes of) belief generation in their contents. When one entertains several possibilities with a view to forming a belief in one such possibility, such as in the previous car park example, one typically comes to believe eventually in the possibility in which one has the greatest degree of belief (on one’s background information). To the extent that the resultant belief influences action independently of the degree of belief responsible for its formation, no more need be said in the present context. The challenge is not to account for how full beliefs bear on action.

Second, degrees of belief often play a direct role in guiding action when multiple considered possibilities are compatible with one’s evidence. Most obviously, this is when one is betting. One’s degrees of belief help one to determine what’s a fair bet, irrespective of whether or not the pertinent proposition is thought to be true. For instance, I would gladly make a bet of $1 at 1,000,000:1 odds that ‘a human will walk on Mars in the next fifteen years’ (p), although I do not believe that p. That’s because my active degree of belief in p is considerably higher than one in a million, nevertheless. The idea here should be familiar from expected utility theory; in effect, degrees of belief may be construed as (implicit) probability estimates or attempts to form subjective probabilities. By extension, it is evident how degrees of belief may affect actions in other contexts. For example, you might believe that you could safely take a bend at a faster speed when driving a racing car on a track day. You might refrain from taking the bend at that faster speed, however, because of a reasonably high degree of belief that you would not do so safely (coupled with a high degree of belief that an accident would result in serious injury). What counts as a ‘reasonably high degree of belief’ will depend on how risk averse you are, which may in turn depend on other factors, such as your cortisol and testosterone levels, and your other beliefs (e.g., about whether you have any dependents).
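The betting case above can be made concrete with a small illustrative sketch. The function names and the sample degrees of belief are my own (hypothetical) choices; the point, drawn from expected utility theory as invoked in the text, is simply that a $1 bet at 1,000,000:1 odds has positive expected value whenever one’s degree of belief in p exceeds one in 1,000,001, even if that degree of belief falls far short of outright belief.

```python
def expected_value(degree_of_belief, stake, odds):
    """Expected monetary value of staking `stake` at `odds`:1,
    given one's degree of belief that the proposition is true.
    Win `stake * odds` if true; lose `stake` if false."""
    return degree_of_belief * (stake * odds) - (1 - degree_of_belief) * stake

# The bet is fair (zero expected value) exactly when the degree of
# belief equals 1 / (odds + 1) -- here, one in 1,000,001.
fair_point = 1 / 1_000_001
print(expected_value(fair_point, 1, 1_000_000))   # approximately 0

# A degree of belief of, say, 0.001 -- 'considerably higher than one
# in a million' -- makes the $1 Mars bet attractive, despite p not
# being believed outright.
print(expected_value(0.001, 1, 1_000_000))        # positive
```

On this picture, the action (accepting or declining the bet) is sensitive to where one’s degree of belief sits relative to the fair-bet threshold, not to whether p is believed simpliciter.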

Third and finally, degrees of belief also guide action in so far as they help one to plan for various possibilities (in order of salience). We saw this in the example of Alexander the Great, but similar examples are commonplace. For example, the height (and variety) of flood defences put in place in a particular context typically depends on (group) degrees of belief concerning future eventualities. The possibility that such defences will be overcome is normally accepted, provided that this possibility is taken to be sufficiently remote. Often, moreover, precautions are prioritised in order of the degree of belief in their being required. For example, vaccinations for tuberculosis could be given to children in the USA, as well as vaccines for measles, mumps and rubella. But only the latter are normally given, because infection by tuberculosis is typically taken to be a remote possibility.

One might suspect that the cases considered in the previous two paragraphs aren’t significantly different in kind. However, sometimes degrees of belief inform planning even when no direct (non-mental) action is taken on their basis. Consider a chess player who is anticipating her opponent’s next move. She will devote time to considering what would be her best reply to a variety of such moves, and may be prepared to respond in an instant to some eventualities (e.g., if she spots a series of moves forcing checkmate). However, this doesn’t mean that she will have the opportunity to so respond. Her opponent might play an entirely unanticipated move, or a move that she had quickly dismissed as poor in her prior analysis (and therefore not properly considered).

I can now answer Schwitzgebel’s worry about how a degree of belief of 0.4 in ‘A cow is present’ might issue in action. In line with the examples adduced above, this is context dependent. However, such a degree of belief could conceivably play a causal role in an agent intentionally doing the following: accepting a small bet that a cow is not present with a betting quotient of 0.4 (with a fellow farm worker); verbally expressing a concern that there’s a heifer rather than a cow present (when in the process, say, of searching for a cow in a farming context); preferring instead to look somewhere else for a cow rather than in the immediate environment (if desperate to find one with limited time); and so on. One might also make plans that need not issue in physical action; one might plan to milk the potential cow, given the background assumption that ‘If a cow is present, it will be producing milk’, for example.

A difficult related question also deserves consideration, although it is not, strictly speaking, raised by Schwitzgebel. How—if at all—are actions guided by beliefs in combination with degrees of belief? I will address this in the next section. In doing so, I will also say more about how degrees of belief and beliefs interact, with a consideration of their respective functions.

3 Dispositional representationalism

In the course of the previous discussion, I advanced two key ideas concerning our mental representations and our belief-related propositional attitudes involving them. First, we need not be lumbered with numerous mental representations ‘in memory’—representations concerning possible future scenarios that we haven’t yet considered, or past scenarios that are no longer of interest to us, for instance—because we instead possess dispositions to form mental representations in response to appropriate (sensory or other mental) stimuli.

Second, we need not be lumbered with beliefs or degrees of belief concerning all the mental representations we happen to have ‘in memory’—we need not evaluate each and every proposition we happen to have considered—because we may instead possess dispositions to form such attitudes in response to appropriate stimuli. Moreover, having dispositions to degree of believe need not require having—or having dispositions to form—mental representations of evidential relations (of the form q|b). As explained in Sect. 2.1, to be disposed to believe q to degree r in the presence of background information b is not to be disposed to have a representation of q|b.

These two ideas form the basis of dispositional representationalism, for beliefs as well as degrees of belief. On this view, to have an active belief or degree of belief in p requires having a mental representation of p. But to have a disposition to believe or to degree of believe in p does not require having a mental representation of p. This leaves open the question of whether believing or degree of believing (inactively) is sometimes possible without having such a mental representation. Dispositional representationalism is open to this possibility. For example, perhaps it is true that having a stored (degree of) belief in p sometimes involves having a mental representation, and sometimes involves forming a special kind of disposition to form a (specific degree of) belief in p when a recall act is performed. What do I mean by a ‘special kind of disposition’? Contrast the following two cases. First, imagine an agent who has been exposed to new unexpected evidence—evidence that she has not previously considered the possibility of—and who forms a new degree of belief in p as a result. It is clear that she did not have that degree of belief previously (although she did have a disposition to form it). Second, however, imagine an agent who once had an active degree of belief in p, which became inactive without being directly stored. It appears possible, nonetheless, that the inactivation process might have involved a kind of indirect storage. This could involve creating a disposition that was not present beforehand, for her degree of belief in p to take a particular value if p again becomes relevant in her deliberations (assuming, perhaps, her evidence concerning p doesn’t change). And in that case, it might be said that she continues to have a specific inactive degree of belief in p.Footnote 18 Dispositional representationalism doesn’t take a strong stance on this matter, given the empirical state of play; hence, it has different competing variants.

Several other puzzles remain in developing dispositional representationalism (or variants thereof). Mainly, these concern the extent to which the pertinent disposition types and their manifestations are necessarily connected. When I earlier mentioned type-2 processes, for instance, I touched on the following question: is it possible to believe p without having a degree of belief in p? And more generally, one might ask ‘How are the degree of belief formation and belief formation systems connected in properly functioning humans?’ Some other outstanding questions in the vicinity are: is it possible (for such a human) to form a mental representation p without forming a degree of belief in p?; and if so, is it possible to have a disposition to form a mental representation p without having any disposition to degree of believe (or even believe) that p?

To gain some insight into how these questions might be answered, it is helpful to consider (non-human) animal belief generation systems, which tend to be less complex than our own (judging, for example, by the behaviour that animals exhibit). The main idea in considering these animal beliefs is to use them as a comparison class, to help us to understand—or, at least, to form useful hypotheses about—why we can reason in ways that some other animals cannot, and especially the respective roles (or ‘functions’) of beliefs and degrees of belief.

Consider the absence of a kind of uncertainty-indicating behaviour in rats. This is shown by the following experiment, which Smith (2005) reports performing with Schull. The rats were trained to identify a repeating tone of the same pitch and to distinguish this from a repeating tone involving an identical pitch alternating with a tone of any different pitch; a reward was given when the rats correctly indicated a tone as repeating or alternating. The rats also learned how to trigger a change to a new tone (via the same means by which they initiated the tones). Call this the uncertainty option.

The rats were then exposed to tones that became harder to classify correctly as repeating or alternating, in virtue of the two pitches in the alternating tones becoming ever closer. However, the frequency with which the rats chose the uncertainty option did not increase as a result.

Smith (2005) and Carruthers (2008) provide two different explanations of why the uncertainty option is not taken by rats, but is taken by monkeys and dolphins in structurally similar experiments. The former suggests that rats aren’t capable of meta-cognition: he claims that they can’t be aware that they are uncertain, as monkeys and dolphins can be. The latter, criticising the former, instead claims that the behaviour of the monkeys and dolphins can be explained in terms of degrees of belief plus a ‘gate-keeping mechanism’, and that the rats lack this gate-keeping mechanism. So in the case of dolphins, say:

[F]rom one moment to the next, and from one glance to the next, the degrees of belief that result from a given stimulus will fluctuate somewhat. This means that in connection with particularly difficult discriminations the degrees of belief in the presence or absence of a stimulus of a given type … will often reverse themselves…

What is an animal in such circumstances to do? Plainly it would be adaptive, in cases where the animal isn’t forced to act immediately, for it to pause and do things that might resolve the indeterminacy, or for it to take action in pursuit of an alternative goal instead … [I propose] it has in place a mechanism … most likely evolved … which when confronted with conflicting plans that are too close to one another in strength will refrain from acting on the one that happens to be strongest at that moment, and will initiate alternative information-gathering behavior instead. If this issues in changed degrees of belief, and hence in sufficiently changed degrees of desire to perform one action rather than another, then that action will be performed; if not, and there are no alternatives, then one or other is chosen at random (or in accordance with momentary greater strength). (Carruthers, 2008: 66)

However, there is a third possibility for explaining the rats’ behaviour that Smith and Carruthers overlook. This is as follows. Rats lack degrees of belief (and the mechanisms necessary for generating these), although they do have beliefs. They fail to select the uncertainty option because they either fully believe one way or the other, ‘repeating’ or ‘alternating’, in response to any tone stimulus in the experiment. And the reliability of their full belief forming mechanism decreases as the tones become closer together, as a result of their aural limitations.

Carruthers (2008: p. 66) might rejoin by appealing to how perceptual systems operate:

[A]ll perceptual systems are inherently noisy in their operations, in the sense that no two presentations of one and the same stimulus will issue in precisely the same degree of belief. (That this is so is one of the central assumptions of Signal Detection Theory, or SDT.) Hence from one moment to the next, and from one glance to the next, the degrees of belief that result from a given stimulus will fluctuate somewhat.

However, from the fact that all perceptual systems are inherently noisy in their operations, it doesn’t follow that all animals are capable of degrees of belief. The quoted claim about signal detection theory, in this context, is misleading. To see this, one need only note that signal detection theory was originally developed to understand the performance of radars. So consider a very simple perceptual apparatus, which deals with only one kind of signal (and is subject to noise), for illustrative purposes. This perceptual apparatus might be connected to a belief-forming system that is only able to take the outputs to recommend: (i) belief in signal; or (ii) belief in no signal. Note also that even if the belief-forming system could also take the outputs to recommend (iii) suspension of belief in signal, this wouldn’t require it to be capable of ‘ranking’ beliefs. (A degree of belief-forming system might take exactly the same outputs to recommend a wide range of different degrees of belief in the signal being present.)
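The point that noisy perception need not issue in graded outputs can be illustrated with a toy sketch. Everything here is hypothetical and simplified: evidence is modelled in the standard signal detection manner as a Gaussian-noise reading, and the same reading is fed to two downstream systems, one that can only output binary beliefs and one that outputs graded degrees of belief.

```python
import math
import random

def noisy_reading(signal_present, noise_sd=1.0):
    """Perceptual evidence on the SDT picture: mean 1.0 when the
    signal is present, 0.0 when absent, plus Gaussian noise."""
    mean = 1.0 if signal_present else 0.0
    return random.gauss(mean, noise_sd)

def binary_belief(evidence, threshold=0.5):
    """A belief-forming system: the noisy input recommends only
    (i) belief in signal or (ii) belief in no signal."""
    return 'signal' if evidence > threshold else 'no signal'

def degree_of_belief(evidence, slope=2.0, threshold=0.5):
    """A degree-of-belief-forming system: the very same input is
    mapped onto a graded value in (0, 1) via a logistic curve."""
    return 1 / (1 + math.exp(-slope * (evidence - threshold)))

e = noisy_reading(signal_present=True)
print(binary_belief(e), round(degree_of_belief(e), 2))
```

Both systems consume identical noisy evidence; only the second ranks hypotheses. So noise in the perceptual channel, by itself, does not settle whether the downstream attitude-forming system is graded.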

I don’t take this to show that Carruthers is wrong about rats. (To successfully criticise the arguments for a hypothesis is not to show that the hypothesis is wrong.) I take it, rather, to show that there’s a counter hypothesis—that rats have beliefs but not degrees thereof—which should be taken seriously. And even if this is not true of rats, it may be true of other animals. Since degrees of belief are centrally involved in relatively high-level cognitive activities—like planning—they might even have evolved subsequently to beliefs.

We have seen that the two kinds of system under consideration—belief forming and degree of belief forming—may be distinct (even if they’re sometimes compresent and interconnected). So think in terms of mental modules, for illustrative purposes, and now consider three pertinent possibilities for a mind: (i) the presence of only a belief forming module, (ii) the presence of only a degree of belief forming module, and (iii) the presence of both a degree of belief forming module and a belief forming module. And recall that since representationalism is here presumed, these propositional attitude generating modules must involve representations.

We have already considered (i) above, so now consider (ii). It is conceivable, at least prima facie, that a degree of belief forming module could perform all the functions of a belief forming module (perhaps in conjunction with some other kinds of module). We saw this earlier, in the discussion of the idea that beliefs might be reduced to degrees of belief (and especially Leitgeb’s work on this topic). For example, as Frankish (2009: Sect. 4) explains:

[One might] think of flat-out beliefs as behavioural dispositions arising from the agent’s partial beliefs and desires – intentional dispositions, we might call them. Thus, to say that a person has a flat-out belief with content p is to say that they have partial beliefs and desires such that they are disposed to act in a certain way – a way characterized by reference to the proposition p ... I shall refer to this generic position as the behavioural view of flat-out belief.

However, the mere fact that having degrees of belief might obviate the need for having beliefs does not mean that there are any animals for whom option (ii) holds. First, evolution can’t be relied on to result in optimal, or even close-to-optimal, systems. A nice illustration is the recently invented exoskeletal device that ‘consumes no chemical or electrical energy and delivers no net positive mechanical work, yet reduces the metabolic cost of walking by 7.2 ± 2.6% for healthy human users under natural conditions’ (Collins et al., 2015, p. 212). Second, it’s conceivable that having a belief-generating system in addition to a degree of belief generating system would be advantageous, in some circumstances, if the former didn’t always rely on outputs from the latter. I touched on this earlier when I mentioned the possibility of a dual process theory of belief formation, and will say more about this below.

This leaves option (iii), which I think, albeit tentatively, holds for humans. My preferred hypothesis is that (as a matter of evolutionary contingency) humans have separate degree of belief forming and belief forming systems, which are nevertheless connected in significant ways (in properly functioning individuals). As I discussed in Sect. 2.2.1, degree of belief formation in p plausibly precedes belief formation in p (or ~p) in many cases, such as when conscious deliberation concerning several alternatives is involved in reaching the belief. However, there may be some situations—cases where conscious deliberation does not occur—in which beliefs are formed by different pathways. Simple perceptual beliefs are prime candidates, at least in some contexts (such as when the sympathetic nervous system is aroused). For instance, a fencer might come to believe an opponent’s blade is approaching from a particular direction—and respond accordingly by parrying or evading—without forming any corresponding degree of belief that the blade is approaching. It is conceivable, moreover, that such a process of belief-formation might lead to faster (but coarser grained) decision-making, and hence marginally swifter responses, than a process of degree of belief formation coupled to a process of belief formation would.Footnote 19

It wouldn’t do to judge purely on the basis of such phenomenological considerations, however, because it’s highly plausible that one can have degrees of belief without being aware of them. As Ramsey (1926: pp. 170–171) put it:

When we seek to know what is the difference between believing more firmly and believing less firmly, we can no longer regard it as consisting in having more or less of certain observable feelings; at least I personally cannot recognize any such feelings … It will no doubt be objected that we know how strongly we believe things, and that we can only know this if we can measure our belief by introspection. This does not seem to me necessarily true; in many cases, I think, our judgment about the strength of our belief is really about how we should act in hypothetical circumstances … It is possible that what determines how we should act determines us also directly or indirectly to have a correct opinion as to how we should act, without its ever coming into consciousness.

This is a key reason why my suggestion that option (iii) holds for humans is tentative. However, a further argument in its favour, which is advocated by Ross and Schroeder (2014) and Weisberg (2020), is that beliefs and degrees of belief appear to play distinct functional roles for independent reasons. First, we are disposed to assume the objects of our beliefs to be true in our reasoning processes; we are disposed to use them as premises in arguments, and we exhibit reliance on their truth in other related respects, such as when deciding what to asseverate. Second, we tend to be somewhat resistant to reconsidering our beliefs once they are formed. Few nominalists have become Platonists, or vice versa, for example. Perhaps confirmation bias is related to this; once p is believed, it often proves harder to dislodge it with evidence than it might have been to lower a credence in p in the absence of belief. Overall, I would summarise both aspects by saying that having a belief in p appears to involve a commitment that having an active degree of belief (even over a specific threshold) lacks. Naturally, this commitment isn’t indefeasible. When the stakes are high, or strong evidence comes in, beliefs can be shaken.

It should also be noted that it is possible that having a belief in p is incompatible with having an active degree of belief in p. It is possible, that’s to say, that forming a belief that p involves deactivating any degree of belief in p that’s present, and that activating a degree of belief in p involves deactivating any belief in p (or ~p) that’s present. This possibility is explored by Weisberg (2020: p. 31), who suggests that: ‘we do not have an occurrent full belief in P and an occurrent partial belief about P at the same time.’ Weisberg combines this view with the notion, discussed towards the start of this section, that deactivating degrees of belief involves generating new dispositions to degree of believe, rather than storing the degrees of belief in memory. He writes:

[D]egrees of belief needn’t be stored. Occurrent degrees of belief can be constructed on the fly, based on features of the retrieval process like fluency and quantity. And dispositional degrees of belief are only encoded in memory indirectly, insofar as one’s memory is in a state that disposes one to retrieve information with more or less fluency and quantity. (Weisberg, 2020: p. 26)

In any event, one might disagree with how I have here begun to develop dispositional representationalism (as it pertains to humans)—and also with Weisberg’s (2020) suggestions, which I am sympathetic to—while nevertheless thinking that its core components are correct. Dispositional representationalism doesn’t depend on our having interconnected, yet distinct, degree of belief and belief formation processes. It provides a theoretical framework for a variety of different stances on degree of belief formation (and how this connects to belief formation).

In closing this section, I should note that options (i) to (iii) do not exhaust the interesting possibilities concerning animal cognition. For example, some animals might have a mental system for recognising (some) possibilities without ranking them in order of salience. In short, that’s to say, a possibility recognition module might function alongside a belief formation module. The latter might simply select (non-conflicting) options from the former when functioning correctly.

4 Conclusion

Representationalism of a Fodorian variety can readily accommodate the notion that we have degrees of belief by recourse to the idea that we have numerous dispositions to: (i) form mental representations; and (ii) form (conditional) degrees of belief involving said representations. This view, which I call dispositional representationalism, is also compatible with the possibility of our having separate belief and degree of belief forming mechanisms (and dispositions to believe that aren’t entirely dependent on dispositions to degree of believe).