1

Jonathan Vogel (2000) describes a “bootstrapping” counterexample to reliabilism. For our purposes, we can take the target view to be:

Reliabilism: A belief is justified just in case it is formed by a reliable process.

Vogel’s counterexample involves Roxanne, a woman who forms beliefs about the contents of her car’s gas tank by reading its gas gauge. Roxanne has not bothered to ascertain whether the gauge is a reliable indicator of the tank’s contents. In point of fact, the gauge is reliable, but Roxanne has neither justification to believe this is true nor justification to believe it is false.

Over the course of many days, Roxanne reads the gauge repeatedly. On each occasion, she reasons as follows:

  • On this occasion, the gauge reads “F”.

  • On this occasion, the gas tank is full.

  • On this occasion, the reading on the gauge matches the contents of the tank.

The gauge does not read “F” on every occasion; the bullets illustrate a pattern of reasoning that yields the same third proposition each time, whatever the gauge reads. The crucial point is that according to reliabilism, Roxanne is justified in believing each of the above propositions (or their analogues) on each occasion. Because Roxanne’s perception is reliable (we may suppose), the belief that the gauge reads “F” is justified once she looks at the gauge. Because the gauge is reliable, reading the gauge gives Roxanne justification for the belief that the tank is full. From these two justified beliefs, Roxanne may deduce the justified belief that the reading on the gauge matches the tank’s contents. Footnote 1
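
The structure of a single round can be displayed compactly as follows (a sketch of my own rather than Vogel’s notation), writing J for “is justified for Roxanne on this occasion”:

\[
\begin{aligned}
&J(\text{the gauge reads ``F''}) && \text{reliable perception of the gauge}\\
&J(\text{the tank is full}) && \text{reliable gauge}\\
&J(\text{the reading matches the contents of the tank}) && \text{deduced from the two justified beliefs above}
\end{aligned}
\]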

After Roxanne has engaged in this pattern of reasoning many times, she can deduce:

  • I have read the gauge many times, and each time its reading has matched the contents of the tank.

By induction, Roxanne then forms the justified belief that:

  • The reading on the gas gauge always matches the contents of the tank.

From which a quick deduction yields:

  • The gauge is reliable.

Roxanne started out without any justification to believe that her gas gauge is reliable. Yet if reliabilism is correct, Roxanne can gain such justification simply by reading the gauge over and over and working through Vogel’s bootstrapping reasoning. This looks like a serious problem for reliabilism.

One might think that the problem comes from Roxanne’s inductive reasoning step—perhaps the reliabilist can avoid bootstrapping by denying that induction from justified premises always yields justified conclusions. Footnote 2 However, we can easily construct bootstrapping examples with no inductive step. Suppose that for some reason Roxanne knows before ever checking her gas gauge that

  • The gauge is either reliable or anti-reliable.

where being “anti-reliable” means the reading on the gauge never matches the contents of the tank. As soon as Roxanne observes the gauge once and deduces that

  • On this occasion, the reading on the gauge matches the contents of the tank.

she can be justified in believing the gauge is reliable. No inductive step is required. All we need to get the bootstrapping going are the ability of the gauge to produce justified beliefs, the ability of perception to produce justified beliefs, and

Closure: If each premise in a set is justified, any proposition jointly entailed by the set is justified as well.

(In symbols: \([Jp_1\,\&\,Jp_2\,\&\,\ldots\,\&\,Jp_n\,\&\,(\{p_1, p_2, \ldots, p_n\} \vDash q)] \rightarrow Jq\))
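
Spelled out, the deduction in the induction-free variant runs as follows (a sketch of my own, compressing the steps just described); here R abbreviates “the gauge is reliable”, A “the gauge is anti-reliable”, and M “on this occasion the reading on the gauge matches the contents of the tank”:

\[
\begin{aligned}
&J(R \vee A) && \text{known before the gauge is ever checked}\\
&J(A \rightarrow \sim\!M) && \text{anti-reliability means the reading never matches}\\
&J(M) && \text{one reading, via perception, the gauge, and deduction}\\
&\{R \vee A,\; A \rightarrow \sim\!M,\; M\} \vDash R, \text{ so } J(R) && \text{by Closure}
\end{aligned}
\]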

The importance of bootstrapping may seem mitigated by the fact that no one in the literature endorses reliabilism as I have described it. Footnote 3 Goldman (1976) argued briefly for the view, only to reject it for lack of a defeaters condition. But adding a reliabilist-friendly defeaters condition to the reliabilism I’ve described won’t save the view from bootstrapping; it would be contrary to the spirit of reliabilism to claim that Roxanne’s lack of initial evidence that the gauge is reliable counts as a defeater for the proposition that it is. Moreover, Cohen (2002) shows that a variety of epistemological theories besides reliabilism give rise to bootstrapping examples. Cohen concludes that a theory will permit bootstrapping just in case it allows a source to give an agent justification without that agent’s being antecedently justified in believing the source is reliable. Footnote 4 (Reliabilism, for example, allows the gas gauge to justify beliefs about the tank’s contents without Roxanne’s having antecedent justification to believe the gauge is reliable.)

Cohen defends his conclusion by working through a number of epistemological theories with the relevant feature and generating bootstrapping examples for each. Yet this falls short of a general argument that he has identified the correct class of views; for instance, it leaves open the possibility that an epistemology might have bootstrapping problems even if it requires reliability information before an agent can gain justification from a source. Footnote 5 As Cohen admits (p. 321), his efforts are hindered by our lack of a precise characterization of the problem cases—almost no attention has been paid to defining necessary and sufficient conditions for “bootstrapping.”

Insufficient attention has also been paid to the question of why it’s a bad thing for a theory to permit bootstrapping. This question is particularly important in light of Van Cleve’s argument (2003) that any theory immune to bootstrapping will be susceptible to skepticism. If we are forced to choose between bootstrapping and skepticism, we need to know what costs a theory incurs by permitting bootstrapping. The bootstrapping literature largely trusts our intuitive rejection of bootstrapping processes. It is sometimes pointed out Footnote 6 that bootstrapping allows an agent to use a process in establishing its own reliability. But it’s unclear that such circularity is always vicious—doesn’t some of our justification for believing that perception is reliable involve past deliverances of perception? It is also sometimes suggested that bootstrapping allows Roxanne to conclude that her gauge is reliable without performing any independent checks, such as correlating the gauge’s readings with dipstick observations of the tank’s contents. But very little is said about what “independent” comes to here—what conditions must the dipstick observation meet to clear the bar?

Perhaps there are many bad things about bootstrapping; perhaps there is no one thing that is wrong with an epistemological theory that permits it. But let me suggest one problem a theory has if it makes bootstrapping possible. There’s an old idea in epistemology that some risk must attach to any reward: If an investigation can’t undermine a conclusion, it can’t support it either. Now consider Roxanne’s situation. Suppose that the true epistemology allows Vogel’s bootstrapping procedure to yield justification if the gauge is reliable, and suppose that Roxanne (a seasoned epistemologist) knows this. Suppose further that before she makes any observations of the gauge, Roxanne can predict what kinds of observations she’ll make and what kinds of reasoning she’ll use. She doesn’t know precisely what reading the gauge will give on each occasion, but she knows that whatever the reading, she will conclude that it matches the contents of the tank. Finally, suppose Roxanne knows in advance that the particular gas levels indicated at particular times won’t yield any clues as to the gauge’s reliability; perhaps her car will be driven or partially filled up by her brother each day before she checks its gauge so that there will be no pattern in the readings.

Roxanne knows that after she has made many observations, she will believe that the gauge is reliable. Her epistemological knowledge tells her that if the gauge is indeed reliable, her belief that it is will be justified. On the other hand, Roxanne knows that nothing in the course of her observations will give her any justification for believing the gauge is not reliable. Considering the proposition p that the gauge is reliable, Roxanne knows that if p is true, her investigation will justify it for her, but if p is false, the investigation will provide no evidence against it. And Roxanne knows all this before the investigation of p begins, when by stipulation she lacks justification for believing either p or ∼p.

In short, an epistemological theory that allows Roxanne to bootstrap permits a no-lose investigation. But the true theory of justification shouldn’t permit no-lose investigations.

2

This article’s main suggestion is that true epistemological theories do not permit no-lose investigations. Later our goal will be to identify classes of theories that meet this desideratum. First, however, we need to clarify the claim in a number of ways.

Up to this point I haven’t been precise about the notion of justification in play. This is because I think the points in this article work for a number of notions of justification, as well as justification-like notions such as having warrant for, having support for, having evidence for, and even being in a position to know. Anywhere the word “justification” appears, or its abbreviation “J”, you should feel free to substitute any of those notions—subject to a few conditions.

First, I will assume that knowledge entails justification. I will not assume that knowledge is justified true belief. I will also not assume that a proposition’s being justified for a particular agent entails that the agent believes the proposition. That is, I am working with a notion of propositional rather than doxastic justification. We attribute doxastic justification with locutions like “the agent’s believing p is justified;” an agent has doxastic justification for p only if the agent believes p and does so for good reason. Propositional justification is evoked by “p is justified for the agent;” p is propositionally justified for an agent whenever that agent has adequate justificatory resources for p, whether the agent avails herself of those resources or not. Footnote 7

Our notion of justification may be global or local; it may require surpassing some evidential threshold or it may not. Any of the following could be our notion of justification: has some evidence for p, has prima facie justification for p, has pro tanto reason to believe p, has all-things-considered justification for p, is in a position to know p, etc.

Finally, I am going to overlook the distinction between an investigation that removes justification to believe p, an investigation that provides justification to disbelieve p, and an investigation that provides justification to believe ∼p. I will describe an investigation that does any of these as “undermining” p. In general, I will coarse-grain descriptions of an agent’s justificatory situation with respect to a proposition so as to work with just four categories: p is justified for the agent, ∼p is justified for the agent, both, or neither. (The “both” option will be available on only some notions of justification.)

So much for clarifying our notion of justification; what about no-lose investigations? Notice that our main suggestion is a conditional with an existential antecedent: Given a particular epistemological theory, if there exists a situation that meets the conditions for a no-lose investigation, then that theory is false. This suggestion will be plausible only if every case meeting those conditions is epistemologically repugnant.

To that end, we should start by noting that a better name for the class of cases might be “some-win-no-lose investigations” (though we’ll stick with the catchier moniker). There’s nothing troubling about the existence of an “investigation” of p that has no hope of providing justification for p or for ∼p. We should restrict our attention to cases in which the undermining of p is not possible but the justification of p is.

Even then, we might think there are investigations that are “no-lose” in some sense but are perfectly permissible by the true theory of justification. Suppose I’m about to drill for oil below the spot on which I’m standing, and there is indeed oil just beneath the surface. There’s a sense in which this investigation is guaranteed to produce justification for the claim that there’s oil beneath me—it will in fact produce such justification. Yet this is a perfectly good investigative process on any plausible theory of justification.

But this is not a no-lose investigation in our sense. When we say that an investigation of p shouldn’t be guaranteed not to undermine p, the fact that an investigation will justify p in the actual world isn’t strong enough to produce the kind of guarantee in which we’re interested. This suggests that our conditions for a no-lose investigation might be spelled out using counterfactuals—perhaps a no-lose investigation is one that fails to undermine p not only in the actual world but also in close possible worlds. Yet that move would force us to choose among various theories of counterfactuals, and would also bring in questions about how to identify this very investigation across possible worlds. We would quickly find ourselves dealing with the generality problem (about the correct level of description for identifying epistemological processes), which is already a problem for a variety of epistemological views. Footnote 8

A better alternative is to spell out the guarantee in a way that sets aside process descriptions and focuses exclusively on conditions in the actual world. When we require an investigation to have the possibility of undermining p, we should focus not on metaphysical possibility but instead on epistemic possibility for the agent. We should ask whether the agent entertains any epistemically possible worlds in which the investigation undermines p; that is, we should ask whether the agent knows in advance that undermining is ruled out.

Pursuing this line yields a simple list of necessary and sufficient conditions for a no-lose investigation. Suppose an agent knows at t1 that between that time and some specific future time t2 she will investigate a particular proposition (which we’ll call p). Her investigation counts as a no-lose investigation just in case the following three conditions are met:

  (1) p is not justified for the agent at t1. (∼J1p)

  (2) At t1 the agent knows that ∼p will not be justified for her at t2. (K1[∼J2∼p])

  (3) At t1 the agent knows that if p is true, p will be justified for her at t2. (K1[p → J2p])

The second of these conditions captures the sense in which a no-lose investigation is guaranteed to have no justificatory downside. The first and third conditions provide the possible upside: if p is true, the agent will go from p’s not being justified for her at t1 to p’s being justified for her at t2. Our main suggestion is that, setting aside a few small exceptions to be discussed later, a correct epistemological theory will not allow investigations satisfying all three of these conditions. Footnote 9
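
In the notation already in play, the three conditions can be compressed into a single line (merely a restatement of the definition above, not an addition to it): an investigation of p between t1 and t2 is no-lose just in case

\[
\sim\!J_1 p \,\&\, K_1[\sim\!J_2\,\sim\!p] \,\&\, K_1[p \rightarrow J_2 p].
\]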

To illustrate this definition of no-lose investigations, consider the following story:

The Court Jester: Noblemen from Italy have arrived in the King of England’s court, bringing with them the jester Giacomo. The King has heard a rumor that Giacomo is quite the ladies’ man, Footnote 10 but the King knows the rumor’s source is unreliable and so lacks justification to believe it.

To settle the matter, the King orders the jester to regale the court with tales of his amorous conquests. The King’s instructions are very precise: If Giacomo is indeed a ladies’ man, the tales are to be true; if not, the jester is to make up false tales that sound convincingly real. The King knows the jester will obey these orders—to disobey is punishable by death, and nearby Italian nobles who know the truth about Giacomo will be happy to expose any disobedience. So the King knows that whether the jester is a ladies’ man or not, His Highness will hear nothing this evening that convinces him otherwise.

As the King expects, the jester spends a long evening describing broken hearts left littering the landscape. In fact, Giacomo is a ladies’ man and all his tales are true. At the end of the evening, is the King justified in believing this?

An epistemological theory’s answer to this question will depend on whether it holds that the King is justified in believing what the jester says. Footnote 11 If so, then once the King is justified in believing that Giacomo wooed the Lady Gwendolyn, that Giacomo wooed the Maid Jean, etc., the King will (by Closure) have justification to believe that the jester is a ladies’ man.

The point of the story is that this conclusion is absurd. One should not be able to gain justification for a proposition just by ordering up favorable evidence, even if that evidence happens to be true. Any epistemological theory that says one can is incorrect. This is brought out by the fact that the King’s investigation, if capable of providing justification, could be worked up into a no-lose investigation. Suppose that according to the true epistemological theory the King’s procedure provides him with justification if Giacomo is reliable—and suppose the King knows that. Now consider the King just after he decides what orders to give the jester. The King’s source for the rumor is unreliable, so he lacks justification to believe the proposition p that Giacomo is a ladies’ man. This satisfies the first condition for a no-lose investigation. The King also knows that the jester is a seasoned performer who will be under serious duress and so will not give any indication that he is not a ladies’ man. This satisfies the second condition. Finally, the true epistemological theory tells the King that if the jester is indeed a ladies’ man, his reports will provide the King with justification. So the King knows that if p is true, he will wind up with justification to believe p. This satisfies the third condition. The true epistemological theory should not allow the King to gain justification by listening to Giacomo, in part because if the King could do so he could engage in a no-lose investigation.

I will leave it to the reader to verify that if Roxanne is well-informed epistemologically and Vogel’s bootstrapping process goes through, it creates a no-lose investigation for her as well.

3

One might think that the possibility of no-lose investigations could be ruled out immediately by something like Bas van Fraassen’s Reflection principle (van Fraassen 1995). The Reflection principle itself will probably not do the job, because it concerns an agent’s current and future credences and our criteria for no-lose investigations are not obviously about credences. Footnote 12 But there is a principle in the vicinity that might apply to our case:

Epistemic Reflection: Given two times t1 and t2 and a proposition p, if the agent has justification at t1 for the proposition that she will have justification for p at t2, then the agent has justification for p at t1. (J1[J2p] → J1p)

Epistemic Reflection is a highly plausible principle Footnote 13 as long as we make allowance for some well-known kinds of exception to Reflection. Arntzenius (2003) argues that Reflection fails when an agent is subject to memory loss or the threat thereof, and such cases will also create exceptions to Epistemic Reflection. For example, suppose I have evidence at t1 that favors p but also have a defeater for that evidence. Suppose also that I know I will forget the defeater (but not the evidence) between t1 and t2. I have justification at t1 to believe that p will be justified for me at t2, but this does not give me justification for p at t1. One may also argue (via the Sleeping Beauty Problem) that Reflection fails in cases in which p is context-sensitive, or in which some of the evidence relevant to p is context-sensitive even if p itself is not. Footnote 14 Such cases will also create exceptions to Epistemic Reflection.

There are also exceptions to Reflection in which the agent suspects she may be irrational at future times. However, these cases do not create exceptions to Epistemic Reflection because Epistemic Reflection concerns the propositions that are justified for an agent at various times, not what the agent actually believes at those times.

These exceptions to Epistemic Reflection can also provide exceptions to our claim that the correct theory of justification will not allow no-lose investigations. For a silly example, take the proposition p “There are memory-erasers who want belief in their existence to be justified.” Suppose that at t1 I have evidence for p but also have a defeater for that evidence (so that I meet the first condition for a no-lose investigation). Suppose further that I know of some specific future time t2 that I’m not going to get any evidence against p between now and then (so that the second condition is met). Footnote 15 Finally, suppose that if p is true the memory-erasers will remove the defeater from my memory so that I have justification to believe in them at t2 (thereby meeting the third condition). Under our definition, this example involves a no-lose investigation, yet such arrangements will be possible on any epistemological theory that allows for defeated justification.

To avoid memory-loss and context-sensitivity counterexamples, I hereby amend our main suggestion to apply only to cases in which: (1) p and all the agent’s relevant evidence concerning p are context-insensitive; and (2) the agent knows at t1 that every proposition relevant to p that is justified for her at t1 will also be justified for her at t2. Footnote 16 The Roxanne and Giacomo examples either meet these conditions or can be made to do so by slightly rewriting context-sensitive premises.

With the proper caveats in place, Epistemic Reflection should be adopted as part of any correct theory of justification. But that alone won’t make a theory immune to no-lose investigations. Epistemic Reflection concerns cases in which the agent has justification at t1 for the proposition that she will have justification for p at t2. But in a no-lose investigation the agent has justification at t1 only for the proposition that if p is true she will have justification to believe it at t2. This is insufficient to justify p for her at t1 by Epistemic Reflection; so Epistemic Reflection does not put our third no-lose investigation condition in tension with the first.
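
In symbols (my own gloss on the contrast just drawn): what the third no-lose condition gives the agent at t1 is the item on the left, while Epistemic Reflection requires the stronger antecedent on the right, and the former does not yield the latter without further information about p.

\[
K_1[p \rightarrow J_2 p] \quad\text{versus}\quad J_1[J_2 p]
\]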

4

Having laid out precise necessary and sufficient conditions for no-lose investigations, we can now ask what kinds of epistemological theories make such investigations impossible.

First, any theory that says no agent is ever justified in believing anything will clearly avoid no-lose investigations. Footnote 17 But I take this to be an unappealing option.

Second, one could avoid no-lose investigations by denying Closure. For example, one might have a theory on which an agent’s justification for a proposition is always relative to a set of alternatives. If moving from premises to entailed conclusion changes the set of relevant alternatives, justification may not be preserved and Closure may be violated. Footnote 18 For example, when Roxanne forms her initial belief that the tank is full, the only alternatives under consideration are (1) the gauge reads “F” and the tank is full, (2) the gauge reads “1/2” and the tank is half-full, etc. When she later considers whether the gauge is reliable, the set of relevant alternatives includes cases in which the gauge reading mismatches the contents of the tank. While she had justification to believe that the tank was full relative to the initial set of alternatives, she lacks justification for the proposition that the gauge is reliable relative to this expanded set of alternatives. So Roxanne cannot justifiably infer the reliability of the gauge from her individual justified beliefs in the gauge’s reports.

Disavowing Closure can thwart Vogel’s Roxanne example. But can a theory escape all possible no-lose investigations by denying Closure? The answer appears to be “yes.” All of our no-lose investigation examples involve a proposition p that is equivalent (given the agent’s background knowledge) to the proposition that her epistemic process is reliable. (For example, relative to the King’s background knowledge the jester is a ladies’ man just in case his testimony is reliable.) Now suppose some reliabilist denies Closure, and we try to construct an investigation that is no-lose on his view. Footnote 19 The epistemic process in question will have to report on some matter other than its own reliability, and the agent will then have to infer proposition p from that report. But if the reliabilist denies Closure, he can deny that p is justified for the agent after the inference, thereby blocking the no-lose example. Footnote 20

Crispin Wright’s theory (2004) accepts Closure—at least for the justificatory notion he calls “warrant”—but avoids no-lose investigations by another tack. Wright would not grant the reliabilist’s claim that Roxanne can gain warrant for beliefs about the contents of the gas tank just by reading the gauge; for Wright Roxanne would need antecedent evidence that the gauge is reliable. Yet Wright avoids the threat of skeptical regress here by holding that there are some fundamental epistemic processes (such as perception) which we are entitled to accept as reliable without any evidence to that effect. Then why can’t we generate a no-lose investigation whose p concerns the reliability of one of these special epistemic processes? Because Wright believes we are always entitled to accept the proposition that such a process is reliable; that entitlement is not earned by any process we go through. So for a p of this sort we will never be able to construct an example that satisfies the first condition for a no-lose investigation; the agent in question will always have warrant for p at t 1.

Another way to maintain Closure but escape no-lose investigations is to endorse

Negative Self-Intimation: If an agent is not justified in believing a proposition, she is justified in believing she is not so justified. (∼Jp → J[∼Jp])

Negative Self-Intimation is usually accepted as part of a broader position that the justificatory status of a proposition is always accessible to an agent. So adherents of Negative Self-Intimation typically accept Positive Self-Intimation (or the “JJ” principle) as well, according to which an agent has justification to believe she’s justified whenever she is. But in principle Negative Self-Intimation can stand on its own.

One can formally prove that Negative Self-Intimation (in the company of Epistemic Reflection and Closure) bars no-lose investigations, but the proof is complicated because it involves reasoning about an agent’s reasoning about what’s justified for her at various times. So I have left the proof to an appendix. Roughly speaking, though, here’s how it works: Suppose for reductio that Negative Self-Intimation is true and no-lose investigations are possible. Consider an agent who has arranged an investigation meeting our three conditions for some given p, t1, and t2.

At t2, p is either justified for the agent or it isn’t. Suppose for reductio that it isn’t. By Negative Self-Intimation, the agent has justification at t2 to believe that she lacks justification for p. But she also knows that if p is true, she has t2 justification for p. So by Closure the agent has t2 justification for ∼p. But one of our conditions entails that ∼p is not justified for the agent at t2. So we have a contradiction; it must be that p is justified for the agent at t2.

At t1, the agent can run through all the reasoning in the previous paragraph. So at t1 the agent has justification to believe that at t2 she has justification for p. By Epistemic Reflection, the agent then has justification for p at t1 as well. But one of our conditions for a no-lose investigation was that the agent lacks t1 justification for p. So we have another contradiction, and we can conclude that given Closure and Epistemic Reflection, Negative Self-Intimation is inconsistent with the possibility of no-lose investigations.
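
For readers who want the skeleton of the argument, here is my reconstruction of the sketch just given (not the appendix proof itself), suppressing the bookkeeping about the agent’s t1 reasoning about t2:

\[
\begin{aligned}
&1.\ \sim\!J_2 p && \text{assumption for reductio}\\
&2.\ J_2[\sim\!J_2 p] && \text{from 1 by Negative Self-Intimation}\\
&3.\ J_2[p \rightarrow J_2 p] && \text{condition (3): the agent still knows this at } t_2 \text{, and knowledge entails justification}\\
&4.\ J_2[\sim\!p] && \text{from 2 and 3 by Closure, since } \{\sim\!J_2 p,\ p \rightarrow J_2 p\} \vDash \sim\!p\\
&5.\ \sim\!J_2[\sim\!p] && \text{condition (2) plus the factivity of knowledge}\\
&6.\ J_2 p && \text{4 and 5 contradict, discharging the assumption in 1}\\
&7.\ J_1[J_2 p] && \text{at } t_1 \text{ the agent can rehearse steps 1 through 6}\\
&8.\ J_1 p && \text{from 7 by Epistemic Reflection}\\
&9.\ \text{contradiction with condition (1), which says } \sim\!J_1 p
\end{aligned}
\]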

The basic idea of this proof is that if Negative Self-Intimation is true, an agent will always be able to notice when she lacks justification for p. In a no-lose investigation, lacking justification for p at t2 is evidence that p is false. So if Negative Self-Intimation is true the only way to guarantee the agent’s investigation won’t undermine p is to guarantee that that investigation will provide justification for p. But if an investigation is (epistemically) guaranteed to provide justification for p at t2, then by Epistemic Reflection the agent already has that justification at t1, in violation of our first no-lose condition. Footnote 21

To summarize the results of this section, an epistemological theory may avoid no-lose investigations by: denying the possibility of justification, denying Closure, allowing agents “warrant for nothing” that particular epistemic processes are reliable, or adopting Negative Self-Intimation. Combinations of these moves will work as well. A number of views, however, will remain in trouble. A Closure-embracing reliabilist, for instance, will grant the possibility of justification but will not give agents either free warrant to accept that their processes are reliable or the ability to detect when such processes are not. So such a view will allow no-lose investigations, as the Roxanne example reveals.

5

We began by asking what’s wrong with bootstrapping, and it’s not clear we’ve answered that question. Perhaps many things are wrong with bootstrapping. Perhaps one of them is that bootstrapping generates no-lose investigations. But whatever is wrong with bootstrapping in general, I think we have identified something that goes wrong with any epistemological theory that allows Roxanne to gain justification that her gas gauge is reliable: it creates a no-lose investigation. In general, it is a bad thing if an epistemological theory makes no-lose investigations possible. And while we don’t have necessary and sufficient conditions for a case to qualify as bootstrapping, we have provided such conditions for no-lose investigations. This makes it much easier to identify classes of epistemological theories that avoid no-lose investigations, as we did in the previous section.

It may be objected that we have substituted for the question “What’s wrong with bootstrapping?” the question “What’s wrong with no-lose investigations?” That’s an interesting question as well, but there’s an important contrast: while our intuitive aversion to bootstrapping is a recently-recognized phenomenon, the aversion to all-upside epistemology that lies behind our rejection of no-lose investigations is much older and better-entrenched. It appeared in Nozick’s (1981) claim (what he called the “variation condition,” which now sometimes goes by “sensitivity”) that an agent knows a proposition only if she wouldn’t believe it were it false; before that it was recognizable in Popper’s (1961) position that a theory can be tested only if it is falsifiable—an idea which in turn has origins as far back as Bacon’s Novum Organum. Footnote 22 Van Cleve can argue that allowing bootstrapping is the price of avoiding skepticism, and being only mildly invested in bootstrapping-avoidance we may entertain that as an acceptable exchange. But something much deeper (in me at least) objects to a view that allows no-lose investigations. Footnote 23

The possibility of no-lose investigations also has odd consequences. Consider our King again, and imagine that he is someone who values having justified beliefs. He knows that he has no justification for believing the rumor about the jester, but if the true theory of epistemology permits him a no-lose investigation he knows that after talking to the jester he may just (if he gets lucky and the rumor is indeed true) possess such justification. This makes it important to the King to talk to Giacomo, even though he already knows what information he is going to get. Even though the King knows what Giacomo is going to say, his face-to-face interaction with the jester has the potential to change the rumor’s justificatory status and so is something His Highness will seek out. This strikes me as an odd fetishization of the actual employment of a process whose results are entirely anticipated. Footnote 24

It will be noted that while our list at the end of the previous section describes sufficient conditions for a theory’s avoiding no-lose investigations, we haven’t shown that making one of the moves on that list is necessary if one wants to avoid no-lose investigations. Hopefully our precise definition of a no-lose investigation will some day make a proof of necessity and sufficiency possible. For now let me offer a line of thought that at least suggests that making one of the moves listed is necessary to avoid no-lose investigations. Suppose one allows for the possibility of justification, but doesn’t give it away for free in the Wrightian style. Suppose one also accepts Closure. Then it looks like one will also have to accept Negative Self-Intimation to avoid no-lose investigations. If one admits that a lack of justification can be inaccessible to an agent in some cases, then we can use those cases to build an example in which the agent gains justification for p if it’s true, but is unable to notice if that justification is lacking. Telling the agent in advance about this arrangement will not prevent it; knowing that a (justificatory) condition is undetectable doesn’t give the agent the ability to detect it. So if Negative Self-Intimation fails we can construct a no-lose investigation, taking advantage of the fact that at t2 the agent can’t use her lack of justification for p as a tip-off to its truth-value.

If this line of reasoning succeeds in establishing necessary conditions for avoiding no-lose investigations, we have a complete menu of epistemological options meeting our desideratum. What’s interesting about this menu is that its options either are overtly skeptical, deny Closure, or have an internalist flavor (here I’m counting both Wright’s approach and Negative Self-Intimation). Footnote 25 If we want to avoid both overt skepticism and no-lose investigations, we must drop either Closure or epistemological externalism.