1 Introduction

Philosophy has seen a resurgence of interest in the nature of knowledge-how, and in particular regarding whether knowledge-how is ultimately reducible to a sort of knowledge-that or propositional knowledge. The terms of this debate have remained relatively stable since Stanley and Williamson (2001) revived interest in it. Most work has centered on three kinds of arguments: (1) attempts to show that one or another theory is incompatible with either the syntax or semantics of English or other natural languagesFootnote 1; (2) surveys of folk judgmentsFootnote 2; and (3) appeals to intuitions about both real and imagined cases.Footnote 3 Here, we will focus on this third set of arguments. In particular, we are interested in what we call “no-belief” cases, in which agents appear to exhibit knowledge-how even though they seem to fail to believe, or even actively disbelieve, that the way in which they in fact \(\upvarphi \) is a way for them to \(\upvarphi \). These cases present a prima facie challenge to Intellectualism about knowledge-how, or, roughly, the thesis that knowing how to \(\upvarphi \) just is knowing that some way w is a way for one to \(\upvarphi \) (Stanley and Williamson 2001; Stanley 2011a, b). The problem is this: if knowledge-that entails belief, and if knowing how to \(\upvarphi \) just is knowing that w is a way for one to \(\upvarphi \), then an agent cannot both know how to \(\upvarphi \) and fail to believe that w, the way that she actually \(\upvarphi \)s, is a way for her to \(\upvarphi \). And yet, the no-belief cases we will present appear to be precisely cases in which an agent knows how to \(\upvarphi \) while failing to believe that w, the way that she actually \(\upvarphi \)s, is a way for her to \(\upvarphi \).

After discussing background issues, we consider a variety of no-belief cases that have already been introduced to support Anti-Intellectualism, together with some Intellectualist replies. While we take these replies to be mostly successful, they fail to home in on the key issue. We therefore introduce a new set of no-belief cases, involving skilled motor action, which have been extensively investigated in psychology and which resist the Intellectualist replies mentioned above. More importantly, these new cases focus our attention on what we take to be the central question: how ought we attribute belief in situations where an agent explicitly reports believing P, but her behavior suggests that she does not believe P? There has been considerable debate on this question elsewhere in epistemology, and some have argued that cases of apparent belief-behavior conflict can be explained by positing action-guiding “implicit” beliefs—or, roughly, non-conscious beliefs to which agents lack introspective access.Footnote 4 Our aim will be to clarify the relationship between these two debates, that is, the debate about the nature of knowledge-how and the debate about belief-behavior conflict. Until now, these debates have progressed largely in isolation from each other.Footnote 5 We argue, in particular, that ostensible no-belief cases reveal how Intellectualism depends on the plausibility of positing something like implicit beliefs in many cases of apparent knowledge-how. We conclude by offering reasons why positing implicit beliefs in these cases is unlikely to succeed, even if it represents a viable strategy in other cases. Finally, we discuss the ramifications of this suggestion for a broader set of questions about the relationship between practical and theoretical knowledge.

2 Intellectualism, anti-intellectualism, and belief

Ryle (1949) famously argued in favor of the claim that knowledge-how is irreducible to knowledge-that. We call this view Anti-Intellectualism. Ryle’s main argument for Anti-Intellectualism was that to deny it would lead to a vicious regress: suppose that knowing how to \(\upvarphi \) is to be explained in terms of contemplating a proposition \(P_{1}\). Contemplating \(P_{1}\) is itself an act that can be done intelligently or not. Thus, intelligently contemplating \(P_{1}\) requires knowing how to contemplate \(P_{1}\). That, in turn, will require contemplating \(P_{2}\), and so on. While not all contemporary Anti-Intellectualists are motivated by Ryle’s regress, and some have presented alternative interpretations of his arguments, they are united in claiming that knowledge-how is irreducible to knowledge-that.Footnote 6

Most Intellectualists deny the force of Ryle’s regress, thus rejecting what was long taken to be the most powerful argument against Intellectualism.Footnote 7 Several versions of Intellectualism have recently emerged, but a central tenet of virtually any version of Intellectualism is that knowing how to \(\upvarphi \) just is knowing that w—standardly, the way in which one actually \(\upvarphi \)s—is a way for one to \(\upvarphi \).Footnote 8 Intellectualists have argued that endorsing this thesis offers several distinct advantages: it coheres better with contemporary theories of the syntax of “knows how” constructions, it offers a more parsimonious metaphysics of knowledge, and it explains a variety of cases involving agents who know how to \(\upvarphi \) in spite of being unable to \(\upvarphi \) themselves.Footnote 9

One less widely discussed aspect of the Intellectualist thesis is its implications for the relation between knowledge-how and belief. Assuming, as is standard, that knowledge entails belief, a corollary of the central Intellectualist thesis is that knowing how to \(\upvarphi \) entails believing that w is a way for one to \(\upvarphi \).Footnote 10 If, then, there are cases in which an agent both knows how to \(\upvarphi \) and fails to believe that the way she in fact \(\upvarphi \)s is a way for her to \(\upvarphi \), these will pose a serious challenge for Intellectualism. Schematically, Intellectualists will have three options for dealing with such cases: (1) deny that there are such cases by denying that the agents in question know how to \(\upvarphi \); (2) deny that there are such cases by denying that the agents in question fail to believe that the way they in fact \(\upvarphi \) is a way for them to \(\upvarphi \); or (3), accept that there are such cases but deny that knowledge entails belief. Crucially, apparent no-belief cases are not a problem for Anti-Intellectualists. This is because knowledge-how, understood as something irreducible to propositional knowledge-that—an ability (Ryle 1949), a “seeming” (Cath 2011), or perhaps a form of acquaintance—does not entail belief. The pressing question, then, is whether there really are cases in which agents know how to \(\upvarphi \) without believing of the relevant w—the way in which they actually \(\upvarphi \)—that it is a way for them to \(\upvarphi \). In the next two sections, we shall argue that there plausibly are such cases.
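It may help to make the structure of this challenge fully explicit. Writing \(KH(\upvarphi )\) for “the agent knows how to \(\upvarphi \)”, \(K(p)\) for “the agent knows that p”, and \(B(p)\) for “the agent believes that p”, the relevant commitments can be regimented as follows (the regimentation is ours; no particular Intellectualist is committed to this exact formulation):

$$\begin{aligned}&\text {(Intellectualism)}\quad KH(\upvarphi ) \leftrightarrow K(w \text { is a way for one to } \upvarphi ),\ \text {where } w \text { is, standardly, the way one actually } \upvarphi \text {s}\\&\text {(Entailment)}\quad K(p) \rightarrow B(p)\\&\text {(Corollary)}\quad KH(\upvarphi ) \rightarrow B(w \text { is a way for one to } \upvarphi )\end{aligned}$$

A no-belief case is then any case in which \(KH(\upvarphi )\) plausibly holds while \(B(w \text { is a way for one to } \upvarphi )\) plausibly fails; options (1)–(3) above amount, respectively, to denying that the antecedent holds, denying that the consequent fails, and denying (Entailment).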

3 Extant no-belief cases

The relevance of no-belief cases to the knowledge-how debate was first noted (to the best of our knowledge) by Wallis (2008). Much as we do below, Wallis turned to empirical psychology in order to provide cases where agents appear to know how to \(\upvarphi \) while simultaneously failing to believe that any particular w is a way for them to \(\upvarphi \). Wallis offered two such cases: (i) sleepwalkers who competently drive cars while sleeping; and (ii) severe amnesiac patients who can learn to solve certain sorts of puzzles, but who can offer no explanation of how they solve those puzzles (p. 133).Footnote 11 Prima facie, Wallis argues, sleepwalkers manifest no beliefs at all. Thus, while their actions demonstrate their knowledge of how to drive cars, it seems that they don’t believe that the way they drive is, in fact, a way for them to drive. Likewise, amnesiac patients are unable to explicitly remember any of the strategies they have learned to solve the relevant puzzles. Thus, they too seem to know how to \(\upvarphi \)—in this case, know how to solve puzzles—without believing that the way they \(\upvarphi \) is a way for them to \(\upvarphi \). Both (i) and (ii) therefore look to be cases that violate the Intellectualist’s commitment to the claim that knowing how to \(\upvarphi \) requires knowing that some particular w is a way for one to \(\upvarphi \), on the assumption that knowledge entails belief.

As Stanley (2011a) and Glick (2011) have argued, however, Wallis’ analysis of these cases is ultimately unpersuasive. First of all, in order to even make sense of (i) as a challenge to Intellectualism, we had to slide from talking about “belief-possession” to “belief-manifestation.” It is highly unclear that Intellectualists need be committed to anything in particular about the manifestation of relevant beliefs in exhibiting know-how. More importantly, it seems plausible to think that Wallis’ sleepwalkers and amnesiacs do believe that the way they in fact \(\upvarphi \) is a way for them to \(\upvarphi \). It is just that these beliefs are not conscious. Positing that these agents possess myriad unconscious beliefs guiding their actions hardly looks ad hoc; such beliefs would, for instance, also help to explain why sleepwalkers generally search for food in the refrigerator, but not in the washing machine.Footnote 12 Likewise, in exhibiting familiarity with the relevant puzzles and strategies, severe amnesiacs might well be manifesting unconscious beliefs about these things. For Wallis’ cases to prove a genuine challenge to Intellectualists, it must be the case that Intellectualists are committed to the claim that knowing how to \(\upvarphi \) requires the conscious, explicit belief that w is a way to \(\upvarphi \). We are unaware of Intellectualists committing themselves to any such thesis.Footnote 13 It is therefore open to Intellectualists to claim that something like an implicit or unconscious belief that w is a way for one to \(\upvarphi \) will suffice to underwrite knowing how to \(\upvarphi \). In that case, neither of Wallis’ cases presents a genuine challenge to Intellectualism.

A different sort of no-belief case, introduced recently in Cath (2011), constitutes a more serious threat to the Intellectualist position:

The Non-Dogmatic Hallucinator: Jodie occasionally suffers from a peculiar kind of hallucination. On occasion it seems to her that she remembers events of learning how to \(\upvarphi \), when in fact no such event occurred. Furthermore, the way Jodie ‘remembers’ as being the way to \(\upvarphi \) is not a way to \(\upvarphi \) at all. On Saturday, a clown teaches Jodie how to juggle. By the end of the class she knows how to juggle, and is juggling confidently. And so there is a way, call it ‘\(w_{3}\)’, such that Jodie now believes that \(w_{3}\) is a way for her to juggle, namely, the way the clown taught her to juggle. On Sunday, Jodie is about to tell a friend the good news that she knows how to juggle. However, as she begins, the alarm goes off on her false memory detector (or FMD), a remarkable device that is a super-reliable detector of her false memories. This indicates to Jodie that her apparent memory of learning how to juggle is not only a false memory but that it is also misleading with respect to the way to juggle. Normally, Jodie would revise her beliefs accordingly, and this is exactly what Jodie does. So, she no longer believes that she knows how to juggle or that \(w_{3}\) is a way for her to juggle. Of course, Jodie did learn how to juggle yesterday, so her FMD has made an error, albeit one that was highly unlikely. (p. 116)

According to Cath, intuitively, Jodie knows how to juggle (p. 116). And yet, for no w does she believe that w is a way for her to juggle. Thus, according to Cath, there is no w such that Jodie knows of that w that it is a way for her to juggle (p. 117).Footnote 14 That, in turn, entails that the Intellectualist analysis cannot be correct—for Intellectualists claimed that knowing how to juggle just is knowing that some w is a way for one to juggle.Footnote 15

The strategy of appealing to implicit or unconscious belief looks less promising in The Non-Dogmatic Hallucinator than it did above. One problem is that—in contrast to the sleepwalker and amnesiac cases—attributing implicit or unconscious beliefs to Jodie seems ad hoc. The motivation for doing so seems to be that, were Jodie to lack such a belief, Intellectualism would be false. Another problem is that Jodie explicitly rejects that \(w_{3}\) is a way for her to juggle. Thus, if we attribute to Jodie an additional, implicit belief that \(w_{3}\) is a way for her to juggle, we are thereby attributing contradictory beliefs to Jodie. We discuss freestanding arguments for positing contradictory beliefs below (Sect. 5.2), but here we simply note the extra explanatory burden with which Intellectualism is now saddled. In addition to showing that knowing-how reduces to knowing-that, Intellectualists must also now show that agents can hold contradictory beliefs, and that contradictory beliefs represent the best explanation of cases like The Non-Dogmatic Hallucinator. To foreshadow an additional worry: if we posit contradictory beliefs, then whenever we want to appeal to an agent’s belief that P in order to explain her having \(\upvarphi \)-ed, we will also need to ask why her belief that P, rather than her belief that \(\sim P\), was operative in that context.

We are hesitant, however, to draw too much from cases like The Non-Dogmatic Hallucinator. This case involves (i) an imaginary agent who regularly hallucinates vividly enough such that she cannot subsequently distinguish between these hallucinations and reality, yet who seems perfectly healthy in other respects, so much so that she can learn new complex skills like juggling; (ii) a piece of science fiction technology called a “false memory detector” (FMD); and (iii), a highly unusual exception to the super-reliability of the FMD, which Jodie the imaginary yet highly functioning hallucinator fails to notice, such that now she fails to believe that she knows how to do what she does in fact know how to do. This is a long walk to a counterexample to Intellectualism! Moreover, we suspect that some Intellectualists might be willing to bite the bullet and simply deny that Jodie knows how to juggle, despite the fact that it seems very much like Jodie does indeed know how to juggle. Other potential no-belief cases in the literature—such as Bengson and Moffett’s (2007) Salchow—are similarly bizarre.Footnote 16 We note in addition how intuitions about thought experiments like these can be affected by seemingly irrelevant factors like word order, moral valence, context, and even font (Gendler 2007). Intellectualists would not look to be in terrible shape were they to simply deny that cases like these represent serious counterexamples to their view.

Of course, bizarre thought experiments hold an important place in the history of philosophy. Some have urged caution (Williamson 2008; Dennett 2014), while others have insisted that these kinds of cases are useful so long as there is one way of filling out the details such that the case represents a counterexample to some theory (Ichikawa and Jarvis 2009). Thankfully, we needn’t resolve this issue here. For although they have thus far gone unnoticed in philosophy, there are significantly better no-belief cases to be found in the empirical literature than those to which Wallis appealed. Such cases preserve the dialectical merits of Cath’s The Non-Dogmatic Hallucinator without forcing us to stray into the realm of extreme science fiction. What’s more, these cases involve far more pedestrian psychological phenomena than sleepwalking or amnesia. We turn now to the task of introducing these cases, drawn from recent research on skilled motor action.

4 New no-belief cases

In many ball sports, like baseball, tennis, and cricket, athletes are taught to “watch the ball” or to “keep their eye on the ball.” This instruction serves several purposes. Focusing on the ball can help players to pick up nuances in the angle and spin of the incoming serve or pitch; it can help players to keep their head still, which is particularly important in sports like tennis, where one has to meet the ball with one’s racquet at a precise angle while one’s body is in full motion; and it can help players avoid distractions. One thing that attempting to “watch the ball” does not do, however, is cause players to actually visually track the ball from the point of release (in baseball or cricket), or opponent contact (in tennis), to the point of contact with their own bat or racquet. In fact, it is well-established that ball players at any level of skill make anticipatory saccades to shift their gaze ahead of the ball one or more times during the course of its flight towards them. These saccades—the shifting of the eye gaze in front of the ball—occur despite the fact that most players (at least explicitly) believe that they are visually tracking the ball the whole time.

The standard explanation of these saccades is that it is simply impossible to visually track an object moving so quickly toward the origin of one’s gaze. Players must therefore shift their gaze ahead of the ball at various points in its flight path and then wait for it to catch up (Bahill and LaRitz 1984; McLeod 1987; Land and McLeod 2000). Bahill and LaRitz (1984), for instance, claim that in sports such as baseball, tennis, and cricket, where the velocity of the ball can exceed 100 mph, it is physically impossible for players to track the movement of the ball when it is closer than 5 feet from them. In baseball, for example, as the ball approaches the batter, the horizontal angle of the ball—defined as the angle between the line of sight from the batter’s eye to centerfield and the line of sight from the batter’s eye to the ball—increases at a rate far exceeding a human being’s maximum possible gaze velocity (defined as smooth-pursuit eye tracking plus head movement). When the ball is 5.5 feet from the plate, the horizontal angle of a baseball traveling 60 mph changes at \(1{,}100^{\circ }/\hbox {sec}\), yet gaze velocity in a professional baseball player does not appear to exceed \(150^{\circ }/\hbox {sec}\). What good professional baseball players consistently do when batting is track the ball for some period after its release, then shift their eye gaze without tracking to a point partway between themselves and the pitcher and watch the ball as it passes that point. Finally, they shift their gaze once more, again without tracking, to the expected point of contact.Footnote 17 For our purposes, the ultimate explanation of these saccades is largely beside the point; regardless of why exactly they do so, it seems clear that baseball, cricket, and tennis players of all skill levels make anticipatory saccades rather than smoothly tracking the ball from the point of release to the point of contact.
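The geometry behind these figures can be reconstructed with a simple back-of-the-envelope calculation (the reconstruction, and the specific distances plugged in, are our assumptions rather than values reported by Bahill and LaRitz). Treat the pitch as traveling in a straight line at speed \(v\), let \(b\) be the lateral offset of the batter’s eyes from that line, and let \(x\) be the ball’s remaining distance along the line from the point nearest the eyes. The horizontal angle is then \(\theta = \arctan (b/x)\), and its rate of change is

$$\dot{\theta } = \frac{b\,v}{b^{2}+x^{2}}.$$

For a 60 mph pitch (\(v \approx 88\) ft/s), with the eyes offset roughly \(b \approx 2\) ft from the flight path and the ball a few feet (\(x \approx 2\)–3 ft) from its point of closest approach, \(\dot{\theta }\) comes out on the order of \(1{,}000^{\circ }/\hbox {sec}\), in line with the figure cited above and far beyond the \(150^{\circ }/\hbox {sec}\) ceiling on gaze velocity. Smooth pursuit over the final portion of the ball’s flight is thus ruled out by the geometry alone, whatever the batter believes she is doing.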

There are two relevant beliefs that a batter who intends to hit by “watching the ball” might have: (i) a true belief that she predicts when the ball will cross the plate and then makes an anticipatory saccade to that spot; (ii) a false belief that she visually tracks the ball from the pitcher’s point of release until and through the point of contact with the bat. Most batters, we presume, explicitly hold (ii) and reject (i).Footnote 18 What’s more, we take it that (ii)—the belief that one hits by visually tracking the ball until and through the point of contact—contradicts (i)—the belief that one hits by making an anticipatory saccade to the point of contact.Footnote 19 It is also important to note that this false belief (ii) clearly fails to intrude at the level of action. Intending to watch the ball helps one succeed as a batter regardless of whether one is watching the ball in the way that one thinks one is. Furthermore, one watches the ball intentionally. Watching the ball is an intentional action which is itself a component of a more complex action (i.e., batting).

Relatedly, it also seems reasonable to suppose that most batters believe that they “watch the ball” by having a clear visual image of the ball as it comes towards them. Recent work by Mann et al. (2007), however, calls into question whether this is in fact how batters bat. It seems that skilled cricketers suffer little to no loss in their batting abilities even when they are wearing contact lenses that significantly impair their vision. A significant reduction in these cricketers’ batting skill was observed only when the contact lenses were so severely mis-prescribed that the batters’ effective eyesight became equivalent to that of someone on the border of legal blindness. Contrary to what most batters presumably believe, then, one does not even have to be able to see the ball clearly in order to watch the ball in whatever way is relevant to hitting it.Footnote 20

Similar, and perhaps even more vivid, results obtain with regard to catching balls. For example, Reed et al. (2010) have shown that fielders in various ball sports believe that when they are catching a ball their gaze rises and then falls as the ball falls. These fielders not only believe that this is what their gaze is doing, but they also report experiencing their gaze rising and falling. But in fact, unless the ball is caught below eye-level, the fielder’s gaze goes up continuously and does not fall.Footnote 21 Reed et al. report that most participants in controlled studies appear to be unaware of any discrepancy between their reports and their behavior. “Conscious perceptual judgments,” they write, “were not simply incomplete: They were often confidently wrong” (Reed et al. 2010, p. 73). Knowing how to catch a ball, it seems, is perfectly compatible with having a very flawed understanding—even a false phenomenology—of ball-catching.

Cases like these make the problem for Intellectualists explicit.Footnote 22 Professional batters know how to hit baseballs. These batters in fact hit baseballs by shifting their gaze ahead of the ball, or by making anticipatory saccades, rather than by smoothly tracking it from the point of release to the point of contact. However, these batters believe that they hit baseballs by watching the ball in a different manner—specifically, by smoothly tracking it from start to finish. What’s more, these batters would likely deny that they hit balls in part by making anticipatory saccades rather than smoothly tracking the ball. Thus, there appears to be no w that both accurately characterizes how professional batters hit baseballs and is believed by those batters to be the way they hit a baseball. But if Intellectualism is correct, and if knowledge entails belief, then there should be just such a w. This is the threat posed by our version of no-belief cases.
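In the notation introduced at the end of Sect. 2 (again, our regimentation), and letting \(w\) be the anticipatory-saccade way in which skilled batters actually watch the ball, the case has the following shape:

$$\begin{aligned}&(\mathrm {P1})\quad KH(\text {hit by watching the ball})\\&(\mathrm {P2})\quad \lnot B(w \text { is a way for one to hit}) \quad \text {(indeed, batters would sincerely disavow this)}\\&(\mathrm {P3})\quad KH(\text {hit by watching the ball}) \rightarrow B(w \text { is a way for one to hit}) \quad \text {(Intellectualism plus Entailment)}\end{aligned}$$

(P1)–(P3) are jointly inconsistent, so at least one of them must be given up; Sect. 5 canvasses the Intellectualist’s options for doing so.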

Let us take stock of the dialectic so far. We have tried in this section to provide a more compelling version of the argument against Intellectualism from no-belief cases than the versions offered by either Cath or Wallis. In contrast to Cath, the no-belief cases we presented are real, mundane, and common. Ours are in fact cases involving human beings performing at the peak of their abilities, under more or less ideal psychological and physical conditions, rather than cases in which people suffer mysterious cognitive deficits and are subject to bizarre twists of luck. Intellectualists, therefore, cannot simply dismiss these cases as being too far-fetched to productively inform philosophical argument. In contrast to Wallis, our version of the argument from no-belief cases requires making no assumptions about the relationship between knowledge and conscious, explicit belief—assumptions that can plausibly be denied in the sorts of cases Wallis discusses. Instead, we rely on a pair of significantly weaker assumptions: (i) that knowledge entails belief of one sort or another; and (ii), that a sincere, explicit disavowal of P provides at least some evidence to the effect that one does not believe P. We take this shift to represent a significant dialectical advantage. Denying our (i) and (ii) will have significant ramifications for Intellectualism, ramifications that denying Wallis’ assumption that knowledge entails explicit belief does not have. We discuss these ramifications in the next two sections. We also note here that while Wallis’ cases involve merely the absence of the relevant beliefs, the cases we present involve both the absence of the relevant belief and the agent’s explicit disavowal of that belief.Footnote 23

Before moving on, it is worth noting exactly how our version of the argument from no-belief cases relates to the prima facie similar line of argument advanced by Dreyfus (2002a, b, 2005, 2007a, b) and Dreyfus and Kelly (2007). In very rough outline, Dreyfus appeals to the phenomenology of skilled action to argue that knowledge-how does not amount to a type of knowledge-that. For Dreyfus, the phenomenology of skilled action, particularly expert action, is exhausted by mindless responsiveness to “affordances” (Gibson 1979). For example, Dreyfus (2002a) writes,

[C]onsider a tennis swing. If one is a beginner or is off one’s form one might find oneself making an effort to keep one’s eye on the ball, keep the racket perpendicular to the court, hit the ball squarely, etc. But if one is expert at the game, things are going well, and one is absorbed in the game, what one experiences is more like one’s arm going up and its being drawn to the appropriate position, the racket forming the optimal angle with the court—an angle one need not even be aware of—all this so as to complete the gestalt made up of the court, one’s running opponent, and the oncoming ball. One feels that one’s comportment was caused by the perceived conditions in such a way as to reduce a sense of deviation from some satisfactory gestalt. But that final gestalt need not be represented in one’s mind. Indeed, it is not something one could represent. One only senses when one is getting closer or further away from the optimum. (pp. 378–379)

Dreyfus argues that skilled action like this demonstrates a form of “understanding,” akin in his view to Aristotle’s conception of “practical wisdom” or phronesis, and that this form of understanding can be expressed as knowledge-how (Dreyfus 2005, p. 59). Moreover, the form of understanding exhibited in skilled action is fundamentally different from knowledge-that. Dreyfus offers various reasons for this. He claims that the phenomenology of skilled action cannot be mentally represented (op. cit.); he doubts that reasons for action influence agents during performance (Dreyfus 2005, pp. 50–51); he denies that skill is the exercise of “conceptual” capacities (Dreyfus 2007a, b); and he points to the sometimes deleterious effects of reflection upon expert performance (also known as “Steve Blass Disease”).Footnote 24

We are sympathetic to many aspects of Dreyfus’ view, and we find significant conceptual affinity between the conclusion of our argument and Dreyfus’ claim that human beings cannot be “full-time rational animal[s],” assuming this means that knowledge-how is both ineliminable and irreducible to knowledge-that. But it is unclear to us how phenomenology alone can settle the debate about Intellectualism. For one, it is unclear what the phenomenology of an implicit belief is like, if it is like anything at all.Footnote 25 We are also unsure how to settle disagreements about the relevant phenomenology. Montero (2010), for example, describes the phenomenology of expert athletics in terms starkly different from those Dreyfus uses. Finally, while we accept that phenomenology plays an important role in specifying what it is that we are trying to understand—namely, what it is to know how to do something—and while we also agree with Dreyfus that the relevant phenomenology suggests a deep difference between practical and theoretical knowledge, we are also open to the possibility of phenomenology sometimes turning out to be highly misleading about the nature of the mind.Footnote 26 Our arguments, we hope, avoid these challenges.

5 Intellectualist options

The Intellectualist now faces a choice. First, she can claim that the relevant agents do not know how to bat or catch, despite their apparently being skilled batters and catchers. Second, she can claim that the relevant agents really do believe what they explicitly claim not to believe. Or, third, she can deny that knowledge-that entails belief. In our view, the first option is clearly unappealing, and we know of no Intellectualists who have been tempted to pursue it. Professional and even skilled amateur batsmen and fielders surely know how to bat and catch. These are the experts we novices emulate. If they don’t know how to bat and catch, or to bat and catch by watching the ball, then no one does. This leaves the second and third options in contention. We shall address these in reverse order.

5.1 Knowledge without belief

The third option, denying that knowledge-that entails belief, appears more promising than the first. And, in fact, this sort of view has recently been explored in Brogaard (2011). Brogaard suggests that no-belief cases should be considered against the background of a larger set of issues in epistemology, namely, how to deal with the fact that we often attribute knowledge to beings, such as small children and many non-human animals, who plausibly lack full-blooded belief states (p. 151). In response to this wider problem, Brogaard invites us to think of knowledge as follows:

[K]nowledge need not be a belief state that satisfies certain epistemic constraints. Rather, knowledge is a determinable of which other mental states are determinates. Perceptual states, standing belief states, judgments, realizations, recollections, ability states, introspective states, and so on, are all determinates of knowledge, as long as they satisfy certain epistemic constraints. Some of these, for example seeings, are primitive knowledge states, others are standard knowledge states. (Ibid., p. 152)

Essentially, Brogaard suggests that a wide range of problems in epistemology can be solved by thinking of knowledge as a basic kind of mental state, one that can be manifested in an array of different creatures and circumstances by an array of different, more specific mental states: knowledge-beliefs, knowledge-judgments, knowledge-realizations, etc. What unifies these various states is that they exhibit certain further epistemic properties (e.g., safety, reliability) to be specified by means of sustained, detailed epistemic inquiry.

What sort of state does knowledge-how amount to then, on Brogaard’s proposed picture? Presumably, paradigm instances of knowledge-how amount to knowledge-ability states—that is, ability states which themselves count as knowledge in virtue of their exhibiting certain epistemic properties.Footnote 27 While we submit that there is nothing incoherent about this position, we nonetheless find it unsatisfying. This is because it is not at all clear that Brogaard’s suggestion constitutes a genuine alternative to Anti-Intellectualism.Footnote 28 If it doesn’t, then Brogaard’s perhaps true claim that knowledge doesn’t necessarily entail belief can be understood as pertaining specifically to knowledge-how. And, of course, Anti-Intellectualists can and do accept that knowledge-how doesn’t entail belief.

According to Anti-Intellectualists, knowledge-that and knowledge-how are fundamentally different sorts of things. One natural way of translating this into Brogaard’s picture would be to claim that knowledge-beliefs and knowledge-ability states are fundamentally different. According to Brogaard, the fact that knowledge-ability states and knowledge-beliefs are both instances of knowledge suffices to make them fundamentally the same sort of thing. According to the Anti-Intellectualist, though, the fact that these are very different sorts of instances of knowledge suffices to make them fundamentally different sorts of things. The dispute between Brogaard and the Anti-Intellectualist thus threatens to degenerate into a merely verbal dispute.Footnote 29 Put slightly differently: it is not at all clear that the concept fundamentally different is being applied in a consistent manner across these two views. If not, then the real disagreement between Brogaard and Anti-Intellectualists may be about how to apply the concept fundamentally different, with the underlying facts about knowledge-how being agreed upon by all parties.Footnote 30

Unsurprisingly, Brogaard is hardly the only philosopher to have taken seriously the possibility that knowledge might not entail belief.Footnote 31 For instance, Myers-Schulz and Schwitzgebel (2013) have recently resuscitated arguments originally found in Radford (1966) against that assumption. Essentially, Myers-Schulz and Schwitzgebel point to a series of cases in which an agent seemingly knows that P even though she will sincerely and explicitly disavow believing that P. We eschew a full consideration of these arguments in the present context for two reasons: first, this sort of argument looks to rely on some strong assumptions about the luminosity of belief that we lack the space to properly consider.Footnote 32 Second, considered in the present context, the argument is complicated by Schwitzgebel’s more general commitment to “Anti-Intellectualism” about belief, and indeed about attitudes in general. Schwitzgebel’s (2010, 2013) view is that attitudes just are a certain sort of complex disposition. If this thesis is correct, it would upend the entirety of the debate between Intellectualists and Anti-Intellectualists about knowledge-how, at least in its present form.

There will, of course, be many other ways of reconsidering the relationship between knowledge and belief, and some Intellectualists will no doubt be attracted to this sort of position.Footnote 33 Rather than trying to cut off every avenue here, we wish merely to suggest the following: if the Intellectualist is tempted to carve off knowledge-how as a special kind of knowledge that doesn’t entail belief, then she risks collapsing the distinction between her preferred view and Anti-Intellectualism. If, on the other hand, the Intellectualist is tempted to deny that knowledge entails belief more broadly, then she will be forced to take on a fairly revisionary theory of the relation between knowledge and belief. What’s more, if she relies on cases that look anything like Myers-Schulz and Schwitzgebel’s, she will have to offer an independent argument for the general transparency of belief. Either way, it looks to us as though the explanatory burdens of denying that knowledge entails belief, either in general or more narrowly, will prove substantial.

5.2 Implicit belief

As we suggested earlier, we think that the second option—positing that these agents really do believe what they explicitly claim to disbelieve—is the most plausible Intellectualist interpretation of no-belief cases. The viability of this option hinges on the plausibility of attributing implicit or unconscious beliefs that conflict with the agents’ explicitly avowed beliefs.Footnote 34

This brings us face-to-face with a very difficult question, and one that is familiar from other parts of contemporary epistemology: how ought we, in general, attribute beliefs to agents in cases of discord between what an agent avows and how she behaves? In Sect. 3, we offered reasons why Jodie’s apparent knowledge of how to juggle is unlikely to be explained by appeal to implicit or unconscious beliefs. We suggested that positing implicit or unconscious beliefs in this case seems ad hoc. Moreover, we suggested that positing implicit or unconscious beliefs in Jodie’s case means that she is in a state of contradictory belief, and that this state requires further explanation (an explanation which Anti-Intellectualists need not offer). We hesitated to draw conclusions from this kind of case, however. So the question is whether these same worries apply to the idea that implicit or unconscious beliefs provide a plausible explanation of what agents like professional batters and fielders know how to do.

Implicit or unconscious beliefs are often defended as part of a theory of what Egan (2008) calls “fragmented belief”:

Actual human beings don’t have a single coherent system of beliefs—either binary or graded—that guides all of their behavior all of the time. The systems of belief that we in fact have are fragmented or, as Stalnaker (1984) and Lewis (1982) put it, compartmentalized. Rather than having a single system of beliefs that guides all of our behavior all of the time, we have a number of distinct, compartmentalized systems of belief, different ones of which drive different aspects of our behavior in different contexts. (p. 48)

Agents with a fragmented system of belief will often have contradictory beliefs, since on this view it is coherent for an agent A to act upon the belief that P in one context but upon the belief that \(\sim P\) in another context. In other words, instead of understanding the action-guiding role of belief in terms of agents being disposed to act in ways that would satisfy their desires if their beliefs were true, a theory of fragmented belief understands the action-guiding role of belief in terms of dispositions to act in ways that would satisfy an agent’s active desires if her active beliefs were true. Or as Egan puts it: “[a]gents are disposed to act, in a context c and within a domain d, in ways that would satisfy their <c,d> active desires if their <c,d> active beliefs were true” (Egan 2008, p. 52).Footnote 35

On this sort of view, different kinds of fragments are likely to have different properties. Among these different properties are conscious accessibility and availability for verbal report. So a theory of fragmented belief predicts that there will be cases in which agents consciously and genuinely avow \(\sim P\) in one context but manifest a genuine belief that P in another. This seems to apply to the cases we have presented. Batters, for example, might be said to hold two contradictory beliefs with different properties. First, they hold a genuine belief \((P_{1})\) that the way that they watch the ball is by watching it all the way from the point of release to the point of contact; this belief is consciously accessible. Second, they hold another genuine belief \((P_{2})\) that the way that they watch the ball is by anticipating when it will cross the plate and then making an anticipatory saccade to that spot; this belief is not consciously accessible, or, at least, is not consciously accessed.Footnote 36 \(P_{1}\) and \(P_{2}\) might have additional discrepant properties as well. For example, \(P_{1}\) might display “inferential promiscuity” (Stich 1978)—that is, the ability to play a role in a huge set of inferences the agent might make—as well as responsiveness to evidence and reason—that is, \(P_{1}\) might change under the right sort of rational pressure. \(P_{2}\), on the other hand, might fail to display inferential promiscuity and might be unresponsive to rational pressure. Indeed, no matter how much evidence and reason one gave a professional batter, one might not be able to dislodge \(P_{2}\).

Thus far, we hope to have shown how the debate between Intellectualists and Anti-Intellectualists plausibly converges with a separate debate in epistemology about belief attribution. Perhaps this convergence should come as no surprise. After all, Stanley and Williamson (2001) claimed that knowing how to \(\upvarphi \) isn’t just reducible to knowing that w is a way for one to \(\upvarphi \); rather, knowing how to \(\upvarphi \) reduces to knowing that w is a way for one to \(\upvarphi \), where that proposition is presented under a “practical mode of presentation” (p. 429). For Stanley and Williamson (2001), “thinking of a way under a practical mode of presentation . . . entails the possession of certain complex dispositions” (p. 429).Footnote 37 This raises the possibility that Intellectualists always had in mind something like a fragmented picture of knowledge. The complex dispositions entailed by thinking of a way to \(\upvarphi \) under a practical mode of presentation would be “actional,” in the sense that certain knowledge is known in a way relevant to the action system, whereas other knowledge is relevant to explicit, conscious reasoning.Footnote 38 If this is right—and if Intellectualists like Stanley and Williamson were further inclined to posit that the structure of beliefs and knowledge largely parallel each other—then it should be no great leap for them to endorse a theory of fragmented belief alongside their theory of fragmented knowledge. In that case, this second option should strike Intellectualists of this stripe as the natural response to no-belief cases.

Our aim is not to argue for or against the claim that beliefs are fragmented as such. As we noted in Sect. 3, however, we think our arguments show, at a minimum, that Intellectualism is saddled with an extra explanatory burden that Anti-Intellectualists needn’t bear. Intellectualists must first defend a particular theory of fragmented or contradictory belief. Then, Intellectualists must further show that the agents in particular no-belief cases do in fact have unconscious or implicit beliefs that explain their behavior, alongside the beliefs that these agents will actually avow.Footnote 39

Appealing to a theory of fragmented belief in order to offer an Intellectualist-friendly explanation of no-belief cases raises three additional worries. The first is that the Intellectualist interpretation of no-belief cases needs to explain contradictory occurrent beliefs, not just contradictory beliefs that are active in different contexts. The cases of confabulation in sports that we discussed above don’t quite exhibit this structure, since in these cases the athletes do indeed report their beliefs about \(\upvarphi \)-ing in a different context (e.g., in an interview after the game) from the one in which they \(\upvarphi \). However, there is no reason to think that these beliefs couldn’t be (or aren’t) occurrently conflicting. After all, the outfielders in the Reed et al. (2010) study were “confidently wrong” about their perceptual judgments when catching fly balls. Likewise, batters are taught to hit the ball by smoothly tracking it with their gaze, and presumably this is exactly what they are trying to do when batting. We also note an odd consequence of an Intellectualist interpretation of these cases in terms of occurrent contradictory beliefs. Imagine a batter saying, “I know how to watch the ball, but I don’t believe that looking ahead to where I predict the ball will be is a way for me to watch the ball.” It seems to us that a batter can utter this sentence truly. But citing the batter’s contradictory beliefs, Intellectualists will have to claim that this sentence is false, since the second conjunct should prove to be false in virtue of the batter’s purported unconscious belief that looking ahead to where she predicts the ball will be is a way for her to watch the ball. This strikes us as an odd interpretation of a common attitude, namely the attitude of knowing how to \(\upvarphi \) but disbelieving the truth about how one \(\upvarphi \)s.

The second worry is that, as Egan notes, a theory of fragmented belief must explain the processes of integration of an agent’s belief fragments. As before, this is extra explanatory work for Intellectualism. But, more importantly, the explanation of integration must work in cases of ordinary, skilled action. The cases considered in the belief fragmentation literature are typically cases of irrational belief or irrational action. Egan (2008), for example, focuses on cases of inconsistency (i.e., believing P&Q, and that P is inconsistent with Q), failures of closure (i.e., believing that if P then Q and believing P, but failing to believe Q), and differences between recall (e.g., “what was Val Kilmer’s character’s callsign in Top Gun?”) and recognition (“was ‘Iceman’ Val Kilmer’s character’s callsign in Top Gun?”). No-belief cases, like those we have discussed, are different from these in the sense that they are not irrational. The only conflict such cases present is between an agent’s behavior and her explicit beliefs. So, in these cases, the agent’s purported fragmented beliefs really aren’t all that fragmented. In fact, they are well-integrated. While we see no reason why a theory of fragmented belief cannot explain this kind of integration in principle, we note that a more parsimonious explanation would focus on different kinds of mental states—some “action-guiding” and some “truth-taking”—that need not be rationally integrated.

The third, and most significant, worry is that Intellectualists who appeal to a theory of fragmented belief will need to explain the distinct processes according to which different belief fragments update. And, crucially, all of these distinct processes will have to remain doxastic. As we noted before, different fragments will be predicted to have different properties, such as being consciously accessible. We also mentioned properties like inferential promiscuity and responsiveness to evidence and reason. The trouble is that some of these properties are distinctly different from the properties typically ascribed to belief states. At some point it would seem that an unconscious state that is inferentially monogamous (so to speak) and insensitive to evidence and reason must fail to be a belief.Footnote 40 The agent’s action-guiding state might be a mere association, an “alief” (Gendler 2008a, b), or some other yet-to-be-understood subpersonal state. For our purposes, it could be any of these, so long as associations, aliefs, and so on aren’t entailed by knowledge-that.

Indeed, it looks like the agent’s action-guiding states in no-belief cases fail to display many ordinary doxastic properties. States of belief—whether introspectively available or not—are thought to be paradigmatically sensitive to changes in what an agent takes to be all-things-considered evidence (Brownstein and Madva 2012). When the milk carton in the fridge turns out to weigh next to nothing, a normal agent will revise her belief that there is milk in the fridge. But the same does not hold of the beliefs relevant to knowledge-how. This becomes clear in situations in which an agent comes to genuinely believe that she ought to \(\upvarphi \) in \(w_{2}\), but persists in \(\upvarphi \)ing in \(w_{1}\). In principle, this suggests that whatever mental state is associated with \(w_{1}\) is not sensitive to what the agent takes to be all-things-considered evidence. We thus have reason to conclude, in such cases, that the mental state associated with \(w_{1}\) is not a belief.

It is a common occurrence in skill-learning for agents to genuinely believe that they ought to \(\upvarphi \) in \(w_{2}\), but to nonetheless persist in \(\upvarphi \)ing in \(w_{1}\). Consider that, recently, some of the best hitters in modern Major League Baseball (Albert Pujols, Barry Bonds, and Alex Rodriguez) all failed spectacularly to even make contact with softballs pitched to them by Jennie Finch, Team USA’s star softball pitcher. This was despite the fact that Finch throws more slowly than MLB pitchers and softballs are significantly larger than baseballs (Epstein 2013). The cause of this failure was likely that MLB batters don’t know how to watch the ball when it is pitched in a sufficiently novel way (for them). Telling these batters to adjust their anticipatory saccades would presumably have done nothing to help. The batters already possessed overwhelming evidence that watching the ball in way w—the way that they know how to watch it in baseball—was not an effective way of watching a softball. Such evidence did nothing to stop their continued whiffing. This suggests that the players’ putative unconscious belief that w is a way to watch the ball is insensitive to the evidence, made obvious by their continued whiffing, that w is not a way to watch a softball. What Pujols, Bonds, and Rodriguez needed was not a new set of beliefs about the right way to watch the ball. What they needed was the practice required to learn how to hit by watching a softball. Indeed, the fact that these kinds of skills supposedly require about ten thousand hours of rote practice puts additional pressure on the claim that knowing how to hit softballs by watching them in the right way is nothing more than a matter of acquiring the right beliefs.Footnote 41

To avoid this worry, Intellectualists might turn to a revisionary account of belief, such as the theory of “Spinozan Belief Fixation” (Gilbert 1991; Egan 2008, 2011; Huebner 2009; Mandelbaum 2011, 2014).Footnote 42 Spinozan Belief Fixation holds that as soon as an idea is presented to the mind, it is believed. In other words, beliefs are unconscious propositional attitudes that are formed automatically as soon as an agent registers or tokens their content. For example, one cannot entertain or consider or imagine the proposition that “dogs are made out of paper” without immediately and unavoidably believing that dogs are made out of paper (Mandelbaum 2014). We refrain from comment on Spinozan Belief Fixation, but simply note the obvious: it is a radically revisionary view of belief, and we doubt that all Intellectualists will want to be tethered to it. Indeed, it seems to us that Spinozan Belief Fixation is deeply at odds with the frequent claim that Intellectualism does justice to ordinary ways of thinking and talking about knowledge-how (in the form of linguistic analysis and surveys of folk judgments). We cannot see how Intellectualists could maintain this claim if they were to embrace a theory like Spinozan Belief Fixation.

6 Conclusion

We hope to have clarified a key aspect of the current debate about the nature of knowledge-how—namely, the prevalence, plausibility, and import of no-belief cases, or cases in which an agent appears to know how to \(\upvarphi \) without believing that w, the way that agent actually \(\upvarphi \)s, is a way for her to \(\upvarphi \). We have provided what we think are clearer no-belief cases than those in the extant literature by drawing on studies in the psychology of skilled motor action. These cases, we claim, illustrate how the debate over the nature of knowledge-how actually converges with a different debate in epistemology regarding the propriety of attributing beliefs to often conflicted epistemic agents like ourselves. Ultimately, we think that these cases lend limited support to an Anti-Intellectualist account of knowledge-how, according to which knowledge-how is irreducible to knowledge-that. This sort of picture allows for practical knowledge to deviate from theoretical knowledge in an intuitive way. What’s more, this sort of picture of practical knowledge fits nicely with the broader view that skill and expertise are in some way in tension with—perhaps even threatened by—theoretical knowledge.Footnote 43

At the heart of the know-how debate, then, stand two competing pictures of the relationship between theoretical and practical knowledge. On the one hand, Anti-Intellectualists offer a picture on which these two sorts of knowledge very often fail to coincide, on which getting these two sorts of knowledge to align represents a particular, and perhaps unusual, sort of achievement. On the other hand, Intellectualists offer a picture of theoretical and practical knowledge on which the former simply subsumes the latter. Intellectualists have employed a range of different tools to make the case for this subsumption, from analyses of the syntax of natural languages to surveys of folk judgments. Here we have focused on cases in which everyone can at least agree that theoretical and practical knowledge seem to be misaligned, and we have considered those tools Intellectualists have at their disposal for trying to explain (away) this seeming. Focusing on questions about belief-attribution, we have argued that the Intellectualist’s subsumption of practical knowledge under theoretical knowledge requires a theory of implicit belief that both explains the relevant cases and is plausible in its own right. We hope to have shown that it is questionable whether any of the available theories of implicit belief, at least in their current state, will prove to be the right tool for the job.