1 Introduction

Will it rain tomorrow? I’m not sure. My credence (subjective probability) that it will rain is 40 %.

Is 40 % the rational credence for me to have that it will rain? I’m not sure about that either. Properly taking all of one’s evidence into account can be tricky. I’m not sure that I’ve done it exactly right.Footnote 1

So: I am uncertain whether it will rain. And I am uncertain about the rational degree of belief for me to have that it will rain.

Is there a principle that links these two sorts of uncertainty? Or can any old beliefs about the weather be rationally combined with any old beliefs about what it is rational to believe about the weather?Footnote 2

2 The puzzle of the unmarked clock

There does seem to be a principle that links the two sorts of uncertainty. Below I will motivate an improved version of just such a principle. But first, a puzzle:

Take a quick look at this picture of an “irritatingly austere” clock, whose minute hand moves in discrete 1-min jumpsFootnote 3:

[Figure a: photograph of the unmarked clock]

If your eyes are like mine, it won’t be clear whether the clock reads 12:17 or some other nearby time. What should you believe about the time that the clock reads?

That, it seems, depends on what the clock really reads. If the clock really reads 12:17, then you should be highly confident that it reads a time near 12:17—99 % confident, say, that the time is within a minute of 12:17. But you should be highly uncertain as between 12:16, 12:17, and 12:18. For definiteness, suppose that given your visual acuity, you should have roughly the same degree of belief in each of these three possibilities.

If the clock had instead indicated a different time—4:03, say—you should have instead been 99 % confident that the time was within a minute of 4:03, but highly uncertain as between 4:02, 4:03, and 4:04. And the corresponding pattern holds for any other time, as well.

But now there is a problem. Suppose that you are 99 % confident in H, the proposition that the time is within a minute of 12:17. You ask yourself: is 99 % the rational level of confidence for you to have in H? You might reason as follows. Either the time (indicated on the clock) is 12:17 or not:

  • If the time is 12:17, then 99 % is the correct level of confidence for you to have in H.

  • If the time is not 12:17, then 99 % is an irrationally high level of confidence for you to have in H. For example, if the time is really 12:18, then you should have less than 99 % confidence in H. For in that case, you should have more than 1 % confidence that the time is 12:19, a possibility incompatible with H.

So on the one hand, you have 99 % confidence in H. On the other hand, you think that 99 % is a level of confidence that is definitely not too low, and is probably too high (since you think that the time is probably not exactly 12:17).Footnote 4 But that looks irrational.

Compare: Pangloss is 99 % confident that the next round of Mideast peace talks will succeed. But he also thinks that he is often irrationally overconfident in good outcomes, and never irrationally underconfident in them. As a result, he thinks that 99 % is a level of confidence that is definitely not too low, and is probably too high.Footnote 5

Pangloss seems to have an irrational combination of attitudes. And, at least at first glance, your attitudes toward the unmarked clock look to be just as irrational. But given your imperfect ability to distinguish nearby times, your attitudes toward the clock seem to be perfectly rational. The puzzle is to resolve this conflict. What degrees of belief should you have about what time the clock displays?

3 Probing the assumptions

A careful treatment of the puzzle would probe some of the assumptions made in the informal presentation above. Is the puzzle an artifact of the idealizing assumption that the setup is completely rotationally symmetric? Or of a questionable assumption that the clock viewer is sure just how the position of the clock hand determines what it is rational for her to believe? Or of an undefended assumption that the viewer has perfect access to her exact degrees of belief? Or of the assumption that it is rational for the clock viewer to be uncertain what it is rational to believe?

These are all fair questions, but we needn’t get caught up in the details. For the puzzle of the clock is an instance of a much more general conflict, a conflict that can’t be avoided by tweaking the details of the clock setup. And laying out and resolving the more general conflict will resolve the puzzle as a side-effect.

4 Rational Reflection

To begin laying out the general conflict, recall the question from the end of Sect. 1: can any old beliefs about the weather be rationally combined with any old beliefs about what it is rational to believe about the weather?

The answer is: no. For example:

Joe is certain just what degrees of belief he ought to have. In particular, he is certain that he ought rationally have degree of belief 99 % that it will rain. But despite this, his degree of belief that it will rain is only 1 %.

I hope you agree that Joe’s combination of attitudes is unreasonable. But if not, imagine chatting with Joe about the weather.

“The evidence strongly supports that it will rain,” he might say. “There are plenty of storm clouds nearby, and the barometric pressure is low. Furthermore, it has rained every day for the last month, and this is the rainy season. Yes, I’m quite certain of exactly what degrees of belief are rational for me, and that it is rational for me to be extremely confident that it will rain tomorrow.”

“So, will it rain tomorrow?” you ask.

“No.”

This dialogue makes dramatic that Joe’s beliefs about the weather do not mesh properly with his beliefs about what he should believe about the weather.

Joe is unreasonable because he violates the following constraintFootnote 6:

  • CERTAIN Whenever a possible rational agent is certain exactly what degrees of belief she ought rationally have, she has those degrees of belief.

This constraint is extremely plausible. But it covers only a very specific case—the case in which one is certain just what one should believe. Can it be generalized to cover cases in which one is uncertain about what degrees of belief one should have?

To see one natural way of generalizing the constraint, modify the case of Joe. Suppose that Joe is not certain that his degree of belief in rain should be exactly 99 %. Instead, suppose that he is just certain that it should be quite high—say, greater than 90 %. But despite this, Joe’s degree of belief that it will rain is only 1 %.

Again, Joe’s combination of attitudes looks unreasonable.

If Joe is rational, it seems, his degree of belief that it will rain should be somewhere in the range of values that he thinks might be rational. Indeed, it seems that his degree of belief that it will rain should be some kind of average of those values.

This suggests a tempting way to generalize CERTAINFootnote 7:

RATIONAL REFLECTION P(H | P′ is ideal) = P′(H)

whenever P is the credence function of a possible rational subject S, H is a proposition, P′ is a credence function, “ideal” means “perfectly rational for S to have in her current situation”, and the conditional probability is well defined.Footnote 8

The statement above is a mouthful. But the guiding idea is simple: When one is rationally certain what credence function one should have, one should have that credence function. But when one is uncertain, then one should have as one’s credence function a weighted average of the functions one thinks it might be rational to have.

For example, suppose that one is 50 % confident that one should have credence function P1 and 50 % confident that one should have credence function P2. Further suppose that P1(rain) = 70 % and P2(rain) = 90 %. Then if one is rational, RATIONAL REFLECTION entails that one will have as one’s degree of belief in rain the average of 70 % and 90 %—i.e., 80 %.Footnote 9
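This averaging is just the law of total probability together with RATIONAL REFLECTION's constraint on the conditional term. A minimal sketch, using the text's numbers (the candidate functions and weights are the illustrative ones above, nothing more):

```python
from fractions import Fraction

# Each candidate ideal function's credence in "rain":
candidates = {"P1": Fraction(7, 10), "P2": Fraction(9, 10)}
# One's credence that each candidate is the ideal function:
weights = {"P1": Fraction(1, 2), "P2": Fraction(1, 2)}

# By the law of total probability, P(rain) is the sum over candidates
# of P(candidate is ideal) * P(rain | candidate is ideal), and
# RATIONAL REFLECTION sets that conditional term to the candidate's
# own credence in rain.
p_rain = sum(weights[c] * candidates[c] for c in candidates)
print(p_rain)  # 4/5, i.e. 80 %
```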

5 Epistemic modesty can be rational

The story so far: I have presented the puzzle of the unmarked clock, and claimed that it is an instance of a more general conflict. As a first step in laying out the general conflict, I gave some motivation for RATIONAL REFLECTION. That principle connects one’s beliefs about, say, the weather, with one’s beliefs about what it is rational to believe about the weather.

To complete my explanation of the general conflict, I will need to address the question: is it ever rational to be uncertain about what it is rational to believe?

The answer is: yes. For exampleFootnote 10:

  • HYPOXIA Bill the perfectly rational airline pilot gets a credible warning from ground control:

Bill, there’s a 99 % chance that in a minute your air will have reduced oxygen levels. If it does, you will suffer from hypoxia (oxygen deprivation), which causes hard-to-detect minor cognitive impairment. In particular, your degrees of belief will be slightly irrational. But watch out—if this happens, everything will still seem fine. In fact, pilots suffering from hypoxia often insist that their reasoning is perfect—partly due to impairment caused by hypoxia!

  • A few minutes later, ground control notices that Bill got lucky—his air stayed normal. They call Bill to tell him. Right before Bill receives the call, should he be uncertain whether his degrees of belief are perfectly rational?

The example invites us to answer “yes”, for the following reason. Before Bill is told that he got lucky, he should be uncertain whether he is suffering from hypoxia, and so should be uncertain whether his degrees of belief are perfectly rational. And he should be uncertain about what degrees of belief it is rational for him to have.

One might reject this analysis. One might claim that Bill should be absolutely certain that he got lucky and avoided hypoxia. But that is implausible. Such certainty would be overconfidence on Bill’s part—a failure to properly take into account ground control’s credible warning.

The case of Bill shows that in some situations, rationality is compatible with uncertainty about what degrees of belief are rational. Indeed, Bill should think that he is in exactly such a situation. Let us record these conclusions:

  • MODESTY In some possible situations, it is rational to be uncertain about what degrees of belief it is rational for one to have. Furthermore, it can be rational to have positive degree of belief that one is in such a situation.

6 Modesty from anti-luminosity

An independent argument supporting MODESTY goes by way of uncertainty about evidence. One way to be uncertain what one should believe is to be uncertain what evidence one has. The conclusions of anti-luminosity arguments from Williamson (2000, 2008) entail that in some situations, rationality is compatible with uncertainty about what evidence one has. And they entail that it can be rational to suspect that one may be in such a situation. So anti-luminosity arguments provide a route to MODESTY available even to those who reject the argument based on HYPOXIA.

7 Modesty conflicts with Rational Reflection

So far I have argued for MODESTY, which entails that it is sometimes rational to be uncertain about what it is rational to believe. And I have given a motivation for RATIONAL REFLECTION, which is a constraint on how one’s opinions of what is rational to believe ought to mesh with the rest of one’s opinions. I hope I’ve convinced you that both of these claims are true.

It was a trap.

It turns out that MODESTY and RATIONAL REFLECTION are inconsistent with each other. That is the more general conflict I promised to explain. And it is the conflict at the root of the puzzle of the clock.

Here is a proof that MODESTY and RATIONAL REFLECTION are inconsistent with each other. (The proof may be skipped without loss of continuity.)

Proof: Suppose that a particular subject has credence function P, and that P′ is any credence function that the subject thinks might be ideal. Then if RATIONAL REFLECTION is true, for any proposition H, P(H | P′ is ideal) = P′(H). In particular, when H is the proposition that P′ is ideal:

$$ P(P'\,\hbox{is ideal} | P'\,\hbox{is ideal}) = P'(P'\,\hbox{is ideal}). $$

By the definition of conditional probability, the left-hand side of this equation equals 1. So P′ is immodest, in the sense that it assigns credence 1 to the claim that it itself is the ideal credence function for the subject to have. And the same is true for every credence function that the subject thinks might be ideal for her. So the subject is certain that rationality requires her to be certain about what credences it is rational to have. This conflicts with (the second sentence of) MODESTY.Footnote 11, Footnote 12
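The key step can be checked numerically. A toy check with made-up numbers (the 1/2 values are arbitrary; any modest P′ would do):

```python
from fractions import Fraction

# Suppose the subject's credence that P' is ideal is 1/2, and that P'
# is "modest": P' itself assigns only 1/2 to its own ideality.
p_prime_is_ideal = Fraction(1, 2)       # P(P' is ideal)
p_prime_self_credence = Fraction(1, 2)  # P'(P' is ideal)

# By the ratio definition of conditional probability, for any event E
# with P(E) > 0, P(E | E) = P(E and E) / P(E) = 1.
lhs = p_prime_is_ideal / p_prime_is_ideal  # P(P' is ideal | P' is ideal)

# RATIONAL REFLECTION demands lhs == P'(P' is ideal); a modest P'
# violates that, so every candidate the subject takes seriously
# must be immodest.
print(lhs == 1, lhs == p_prime_self_credence)  # True False
```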

So we have an apparent paradox: we had initially plausible motivations for believing both MODESTY and RATIONAL REFLECTION, but now have seen that the two claims conflict.Footnote 13

The puzzle of the clock is an instance of this conflict. For recall that the puzzle depended on the assumption that the clock viewer should be uncertain about what it is rational for her to believe about the time. That assumption is an instance of MODESTY. And it depended on the assumption that it is unreasonable for the viewer to be 99 % confident in a proposition, while thinking that 99 % is a level of confidence that is certainly not too low and probably too high. That assumption derives from the same considerations that motivate RATIONAL REFLECTION.

How should the conflict between MODESTY and RATIONAL REFLECTION be resolved? There are a number of optionsFootnote 14:

  • We might reject MODESTY, and claim for example that rationality requires one to be certain just what degrees of belief are rational. This would require us to say that Bill the pilot should be certain that he has avoided hypoxia, and that the clock viewer should be certain exactly what it is rational for her to believe about the clock. That seems desperate.Footnote 15

  • We might reject RATIONAL REFLECTION and similar principles, insisting that beliefs about what it is rational to believe do not impose systematic constraints on one’s other beliefs.Footnote 16 A defender of this line takes on the burden of explaining away the initial appeal of such principles, and the seeming irrationality of the clock viewer’s “99 % confidence is probably too high and definitely not too low” stance.

  • We might say that MODESTY and RATIONAL REFLECTION both express rational ideals, but admit that some rational ideals conflict with others.Footnote 17 This may be defensible in the end,Footnote 18 but there is a cost to admitting that the notion of perfect rationality is itself inconsistent. And this proposal seems not to give a clear answer to the question: what degrees of belief should the clock viewer have about the time?

This brief survey of options is not exhaustive, and the objections I have raised are not conclusive. But I hope to convince you that we can do better. We can resolve the conflict in a way that allows us to consistently hold on to MODESTY, and also to the considerations that motivate RATIONAL REFLECTION.Footnote 19

All we need to do is amend RATIONAL REFLECTION. Let me explain.

8 New Rational Reflection

Think back to how RATIONAL REFLECTION was motivated above (in Sect. 4). The story started with this constraint:

  • CERTAIN Whenever a possible rational agent is certain exactly what degrees of belief she ought rationally have, she has those degrees of belief.

This constraint is extremely plausible and extremely cautious. It doesn’t even rule out that one can be rationally certain that one is irrational. It just rules out that one can be rationally certain exactly what degrees of belief one should have, without having those degrees of belief.

The next step was to generalize this constraint to cover cases in which one is uncertain what degrees of belief one ought to have. It was suggested that when one is uncertain what credence function is rational, one should have a credence function that is a particular weighted average of the ones that one thinks might be rational. That is RATIONAL REFLECTION.

There was nothing wrong with the first step of the story: CERTAIN is correct. And there was nothing wrong with trying to generalize CERTAIN to cover more cases. But RATIONAL REFLECTION is the wrong way to generalize CERTAIN.

A better way of generalizing CERTAIN is brought out by the following line of reasoning.Footnote 20

Suppose that you’re considering what credence function it would be rational for you to have. Consider the candidate functions—the ones that you think might be ideally rational—as a kind of panel of purported experts. In the special case that you are sure what function is ideal, the panel contains just a single member, and you’re sure that she is the true expert. In that case, you should just believe what the expert believes. That corresponds to CERTAIN.

But now suppose that you are uncertain which function is ideal. In that case, it is as if the panel contains a number of purported experts and you are uncertain which one is the true expert.

For concreteness, suppose that the panel consists of credence functions named Cassandra, Merlin, and Sherlock. Conditional on Sherlock being the true expert, what credences should you have?

It is tempting to answer: the ones that Sherlock has. That is how RATIONAL REFLECTION answers the question. But that answer is not in general correct. For Sherlock might himself be uncertain who is the true expert. And conditional on Sherlock being the true expert, you should not be uncertain who the true expert is.

So: conditional on Sherlock being the true expert, you shouldn’t align your credences to Sherlock’s. What should you do?

A warm-up question will point the way to the answer: What should be your credence that it will rain tomorrow, given that Sherlock is the true expert and that many people will use umbrellas tomorrow?

Answer: your credence should be rather high. And it should not in general equal Sherlock’s unconditional credence that it will rain. For the information that many people will use umbrellas tomorrow provides strong evidence that it will rain tomorrow.

This suggests that your conditional credence should not equal Sherlock’s credence that it will rain. Rather, it should equal Sherlock’s credence that it will rain conditional on (at least) the information that many people will use umbrellas tomorrow.

More generally: your credences, conditional on Sherlock being the true expert, should equal Sherlock’s credences conditional on Sherlock being the true expert. Further generalizing this thought yields the following principleFootnote 21:

NEW RATIONAL REFLECTION P(H | P′ is ideal) = P′(H | P′ is ideal)

whenever P is the credence function of a possible rational subject S, H is a proposition, P′ is a credence function, “ideal” means “perfectly rational for S to have in her current situation”, and the conditional probability is well defined.

An example will help illustrate the difference between the new principle and the old. Suppose that you are 50 % confident that you should have credence function P1 and 50 % confident that you should have P2. RATIONAL REFLECTION entails that if you are rational, your probability for a proposition H will be the average of P1(H) and P2(H). In contrast, NEW RATIONAL REFLECTION entails that if you are rational, your probability for H will be the average of P1(H | P1 is ideal) and P2(H | P2 is ideal).
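To see concretely that the two averages can come apart, here is a small worked model. Everything in it is hypothetical: the worlds, the two candidate functions, and their credences are made-up numbers chosen only so that each candidate is modest about its own ideality.

```python
from fractions import Fraction
F = Fraction

# Worlds are pairs (ideal, rain): which of two candidate functions is
# ideal (1 or 2), and whether it rains (1 or 0). Each candidate is
# modest: it spreads credence over which function is ideal.
P1 = {(1, 1): F(5, 10), (1, 0): F(1, 10), (2, 1): F(2, 10), (2, 0): F(2, 10)}
P2 = {(1, 1): F(1, 10), (1, 0): F(3, 10), (2, 1): F(3, 10), (2, 0): F(3, 10)}

def prob(P, event):
    return sum(p for w, p in P.items() if event(w))

def cond(P, event, given):
    return prob(P, lambda w: event(w) and given(w)) / prob(P, given)

rain = lambda w: w[1] == 1
ideal = lambda i: (lambda w: w[0] == i)

# You are 50/50 on which candidate is ideal.
w1 = w2 = F(1, 2)

# RATIONAL REFLECTION: average the candidates' unconditional credences.
old = w1 * prob(P1, rain) + w2 * prob(P2, rain)

# NEW RATIONAL REFLECTION: average each candidate's credence in rain
# conditional on its own ideality.
new = w1 * cond(P1, rain, ideal(1)) + w2 * cond(P2, rain, ideal(2))
print(old, new)  # 11/20 2/3
```

With these numbers the old principle prescribes 11/20 and the new one 2/3: each candidate's opinion about rain shifts once it conditions on its own ideality, so the two weighted averages genuinely differ.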

(There is another strategy to motivate the new principleFootnote 22: Suppose that a subject starts out wondering: what degrees of belief are rational for me? And suppose that she then learns the answer to that question. She will end up certain just what she ought to believe. And so CERTAIN will impose a constraint on her final state of mind. But that will indirectly impose a constraint on her initial state of mind—by way of an assumption about how the subject should update her beliefs when she gets new information. In other words, we can generalize CERTAIN by saying: Rational agents have states of mind that are consistent with CERTAIN, and would remain consistent with CERTAIN were they to learn the truth about what they ought to believe. This strategy yields a derivation of NEW RATIONAL REFLECTION in a special case, and so lends some credence to the truth of the principle in full generality.)Footnote 23

Moral: NEW RATIONAL REFLECTION is the right way to generalize CERTAIN. It expresses the manner in which a subject’s opinions about what it is rational to believe constrain her other opinions. And it is perfectly consistent with MODESTY. So the conflict between MODESTY and RATIONAL REFLECTION has a satisfying resolution: drop RATIONAL REFLECTION and adopt NEW RATIONAL REFLECTION instead.

The end. Except for one final matter: it remains to address the puzzle of the clock.

9 Resolving the puzzle

Recall the setup:

[Figure b: the unmarked clock, as in Sect. 2]

You look at the clock, ending up 99 % confident in H, the proposition that the time is either 12:16, 12:17, or 12:18. You are highly uncertain as between those three possibilities, assigning, say, 33 % of your confidence to each of them.Footnote 24 That pattern of attitudes looks reasonable.

But as we saw in Sect. 2, that pattern of attitudes also entails that you think that 99 % is a level of confidence in H that is definitely not too low for you, and is probably too high. That makes your 99 % confidence in H look unreasonable.

So: the first line of reasoning concludes that your beliefs about the clock are reasonable. The second line concludes that those beliefs are unreasonable. What has gone wrong? That is the puzzle.

The answer is that the second line of reasoning is wrong. For what lies behind that reasoning is the thought that one’s degree of confidence in H should always be a weighted average of the degrees of confidence that one thinks might be rational. That is an initially tempting thought. And it is a thought that follows from RATIONAL REFLECTION. But it is incorrect.

In contrast, this “averaging” thought doesn’t follow from NEW RATIONAL REFLECTION. That is why the state of uncertainty about the clock described above is compatible with NEW RATIONAL REFLECTION. The bottom line is that there just isn’t anything wrong with your state of uncertainty about the clock.

Then why does that state of uncertainty seem unreasonable?Footnote 25 Because it superficially resembles states of uncertainty that are genuinely unreasonable.

For instance, your state of mind superficially resembles the state of mind of Pangloss, the self-aware optimist from Sect. 2:

Pangloss is 99 % confident that the next round of Mideast peace talks will succeed. But he also thinks that he is often irrationally overconfident in good outcomes, and never irrationally underconfident in them. As a result, he thinks that 99 % is a level of confidence that is definitely not too low, and is probably too high.

Pangloss seems to exhibit the same pattern of uncertainty that you do. And Pangloss is unreasonable. That provides additional temptation to think that you, too, are unreasonable.

But the cases are different, and the reason Pangloss is unreasonable does not apply to you as a viewer of the clock. Let me explain what makes Pangloss unreasonable, and why no corresponding consideration applies to the viewer of the clock.

Let S be the proposition that the peace talks will succeed, and let P_G be Pangloss’s credence function. For simplicity, suppose that Pangloss is sure that the ideal credence for him to have in S is either 99 % or 66 %, but has no idea which. It followsFootnote 26 that Pangloss has approximately 99 % credence in S, conditional on the rational credence in S being 66 %:

$$ P_G(S| 66\,\%\,\hbox{is ideal}) \approx 99\,\%. $$

There looks to be a mismatch here. And there is: on any natural understanding of the case, such a conditional credence is totally unreasonable. Conditional on 66 % being the ideal credence to have that the talks will succeed, Pangloss’s credence that the talks succeed should not be approximately 99 %. Rather, it should be approximately 66 %.

Compare: in an ordinary case, one’s credence that it will rain next week, conditional on the rational credence being 66 %, should be approximately 66 %. Only with a very special back-story would it make sense for that conditional credence to be anywhere near 99 %. And the same is true for Pangloss’s conditional credence above.

That is why Pangloss is unreasonable.

At first glance, the situation with the viewer of the clock looks similar. In particular, the viewer of the clock has 100 % credence in H, conditional on the rational credence in H being 66 %:

$$ P(H| 66\,\%\,\hbox{is ideal}) = 100\,\%. $$

That conditional credence looks to involve the same sort of mismatch as Pangloss’s. But in the clock case, the mismatch is only apparent. For the clock case has the following very special feature. Given the setup, the information “66 % is the ideal credence to have in H” is strong evidence that H is true. Indeed, it is conclusive evidence that H is true, since the clock viewer is certain that

66 % is the ideal credence for the viewer to have in H only if the time is either 12:16 or 12:18.

No corresponding claim holds for Pangloss.
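This special feature of the clock case can be verified in a small toy model. The model idealizes the text's 99 % figures to certainty, and uses the illustrative setup from Sect. 2 (uniform credence over three candidate times, ideal credence spread evenly over a three-minute window):

```python
from fractions import Fraction
F = Fraction

times = [16, 17, 18]                 # candidate times, minutes past 12:00
P = {t: F(1, 3) for t in times}      # viewer's credence over the times
H = {16, 17, 18}                     # H: the time is within a minute of 12:17

def ideal_credence_in_H(t):
    # If the true time is t, the ideal credence spreads evenly over
    # {t-1, t, t+1}; count how many of those minutes fall in H.
    return F(len({t - 1, t, t + 1} & H), 3)

# The event "66 % (i.e. 2/3) is the ideal credence in H":
E = [t for t in times if ideal_credence_in_H(t) == F(2, 3)]

p_H_given_E = sum(P[t] for t in E if t in H) / sum(P[t] for t in E)
print(E, p_H_given_E)  # [16, 18] 1
```

Conditional on “2/3 is ideal”, the time is 12:16 or 12:18, and both verify H, so the viewer’s conditional credence of 100 % is exactly right. No analogous entailment links “66 % is ideal” to the success of the peace talks in Pangloss’s case.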

The bottom line is that the clock viewer’s conditional credences exhibit the same apparent mismatch that Pangloss’s do. The difference is that in the viewer’s case, the mismatch is only apparent.