1 Introduction

How do we understand a scientific event?Footnote 1 Take the refraction of light as it enters water. Some claim that to understand why light bends we only need to know that it obeys Snell’s Law (the sine law of refraction). On this view, to understand why light bends just is to know that it follows a path described by Snell’s equation. This attitude mirrors Hempel’s (1965) Deductive-Nomological account of explanation. Others think laws are not enough, and that to explain this event we need to know the causal story of how light bends. This view claims explanations must include appropriate explanatory causes of the event to be explained; it approximates the causal account of explanation associated with Salmon (1984). A third common opinion is that understanding comes from identifying the event as an instance of a more general argument pattern which unifies a number of similar kinds of events. This is reflected in Kitcher’s (1989) unification account of explanation.

This does not exhaust the possible ways in which something can be explained, of course, but many philosophers believe that if we generalize from these models of explanation we get a universal thesis about what it takes to understand a scientific event. The idea has been summarized nicely by Nounou and Psillos (2012). They call it the ‘Standard Story’ on explanation and understanding:

The Standard Story goes like this: Scientific understanding is constitutively tied to explanation; hence, it is covered by theories of scientific explanation. Bluntly put, the question is this: what kind of information should science offer (and how should it offer it) in order for it to provide understanding of the world? And the standard answer is: it should provide explanatory information. (2012, 72)

Nounou and Psillos claim that explanations provide understanding in virtue of their providing explanatory information—explanatory content. For the Deductive-Nomological, Causal, and Unification accounts of explanation the specific nature of this content is different, but the overarching idea is the same: understanding is achieved by possessing the correct explanation. This is a deflationary account of scientific understanding.

The Standard Story has been challenged recently. New ‘Substantivists’, as we might call them, claim understanding requires more than mere knowledge of explanatory information. Henk de Regt (2009), for instance, has called for a theory which provides a constitutive story of understanding—one that tells us exactly in what understanding consists. Substantivists tend to think such a story requires the addition of some specific abilities exercised by the subject who understands. Stephen Grimm (2010, 340–1), for example, claims that understanding requires that a subject S have the ability to anticipate how changes in one variable in a causal sequence can lead to changes in another variable. He also requires that S be able to apply general expressions of these causal relations to particular cases. Thus, Grimm requires that S be able to make predictions for new variations on previously studied examples. de Regt (2009, 31–2) himself claims S understands p if S knows how to use an intelligible theory to explain p, where intelligibility roughly means ‘usable by S’. de Regt’s requirements have a similar upshot to Grimm’s: S must be able to see the consequences of using a theory in a specific situation. We can refer to this demand that a theory of understanding incorporate a subject’s cognitive skills as the ‘Ability Thesis’.

Khalifa (2012) has recently responded to Substantivists on behalf of the Standard Story. He argues that we do not need substantial theories of understanding because anything they say can also be said adequately with the already existing literature on scientific explanation. He calls this thesis the Explanatory Model of Understanding (EMU). This deflationary view is my target in this paper. I will show how an alternative view on understanding, the Inferential Model, is both independently plausible and avoids EMU’s deflationism. In Section 2 I explain EMU, highlighting its commitment to two ideas: that all understanding-relevant explanatory knowledge is propositional in nature, and that the abilities we use to generate understanding are merely our logical reasoning skills. In Section 3 I provide an epistemic argument against EMU, suggesting that advocates of EMU are mistakenly taking knowledge-how to be knowledge-that. In Section 4 I consider and respond to three objections, the most important of which is the idea that knowledge-how entails knowledge-that. I spend some time arguing that such a view illegitimately assumes an inferential account of concepts. In Section 5 I reveal in greater detail the kinds of abilities we use to differentiate knowledge from understanding. In Section 6 I show how the Inferential Model of Scientific Understanding accommodates those skills. In Section 7 I close by arguing that this new substantivist model provides a satisfying alternative to Khalifa’s deflationary view of scientific understanding.

2 The Explanatory Model of Understanding (EMU)

In “Inaugurating Understanding or Repackaging Explanation?” Kareem Khalifa provides arguments purporting to show that each of three popular accounts of scientific understanding can be reduced to ideas about scientific explanation already in the literature. The focus is on the accounts of Grimm (2010), de Regt (2009), and de Regt and Dieks (2005). Khalifa begins his paper: “I argue that current ideas about scientific understanding can be replaced by earlier ideas about scientific explanation without loss. Indeed, in some cases, such replacements have clear benefits” (p. 16). He uses the following deflationary principle as a foil to undermine these alternative accounts, while leaving open the possibility that new theories of understanding may avoid EMU’s grasp:

(EMU): Any philosophically relevant ideas about scientific understanding can be captured by philosophical ideas about the epistemology of scientific explanation without loss. (2012, 17)

This bold claim is a generalization of comments that come from Hempel, Salmon, and Unificationists, all to the effect that understanding amounts to no more than “adequately representing the information demanded by one’s preferred model of explanation” (2012, 17). He adds, with respect to the new substantivist’s project, “We are welcome to use the word “understanding”, but we are just relabeling the explanation literature” (2012, 17).

These are strong words, claiming to undermine much of what Grimm, de Regt, and Dieks have produced over the last few years. It is therefore an important question whether Khalifa is correct in his criticism. I am not concerned however to evaluate whether Khalifa is successful in that task. Rather I will raise an independent objection regarding EMU’s internal content. If I am correct, then it may be possible for others to make use of Khalifa’s error to defend their own accounts against EMU, but I am more interested in using this mistake to motivate an alternative view on understanding, which I believe avoids EMU’s deflationism.

To get to grips with EMU we should look closely at how Khalifa treats explanatory knowledge. He recognizes that understanding is a mental state, while an explanation is not. So, if EMU is going to connect the two it has to add an epistemological component to explanations: knowledge of an explanation is the relevant mental state—explanatory knowledge. This explanatory knowledge amounts to ‘rich, accurate, and detailed beliefs about an explanation’ (2012, 18). This is propositional knowledge—knowledge that an explanation says such and such. The detailed and accurate beliefs are true beliefs about most of the information characteristic of a philosophical model of explanation. With a Deductive-Nomological explanation, for example, this amounts to knowing the laws, initial conditions, phenomenon, and their inferential relations (2012, 18). The required rich beliefs, on the other hand, are beliefs about good explanations, ones which optimize the virtues of simplicity, power, consistency, fecundity, and fit with data.

If this picture of understanding as explanatory knowledge is correct, as Khalifa thinks, then he is right in also claiming understanding is already covered in the work of philosophers of science like Sellars (1963), Harman (1973), Thagard (1978, 1992), Lycan (1988), and Lipton (2004), to name just a few. One might object to Khalifa that if we take a brief look at this literature we find topics ranging too broadly from Sellars’ semantic views in ‘Truth and Correspondence’ (1963), through Thagard’s account of scientific revolutions (1992), to Lipton’s (2004) theory of inference to the best explanation. This covers a lot of territory in the ‘epistemology of scientific explanation’, much of it falling outside what we would traditionally consider ‘theories of explanation’ proper. Still, let’s put aside worries about how we categorize our theories of explanation: Khalifa has in mind a model of understanding composed of anything philosophical and related to scientific explanation. EMU is therefore extremely general. The important point is that if one adopts this broad view of explanatory knowledge, then understanding can quite adequately be captured by propositional knowledge. If this account is right we don’t require the additional abilities to which Grimm and de Regt (and Dieks) appeal.

This commitment to mere propositional knowledge as adequate for understanding is encapsulated in Khalifa’s positive statement of three criteria for scientific understanding (2012, 26):

  • (a) Knowing that the explanans is true

  • (b) Knowing that the explanandum is true

  • (c) For some l, knowing that l is the correct explanatory link between the explanans and the explanandum.

The last condition (c) is supposed to avoid appeal to more than propositional knowledge, and with this adjustment Khalifa is avoiding the Ability Thesis. He illustrates how the above approach can handle a number of problems for understanding which have popped up in the literature. For instance, our misleading sensations of understanding—those mistaken ‘aha’ moments—are handled nicely; if you have a correct explanation of p then you understand p regardless of how you feel about it. It is knowledge of the explanatory link that does all the epistemic work, not some feeling. EMU also answers de Regt’s call for a ‘constitutive’ account of understanding: “There is nothing further that explanations need to provide…fully understanding a phenomenon would just be having very rich, accurate and detailed beliefs about its explanation” (2012, 20). Third, and most important for us, EMU supposedly disposes of the substantivist’s ‘ability thesis’: Grimm’s account is consumed by Woodward’s (2003) ‘what-if-things-had-been-different’ theory of explanation (knowing how to anticipate changes in a causal variable just is knowing ‘what-if-things-had-been-different’), and de Regt’s appeal to intelligibility is absorbed in Hempel’s account. Specifically, in his (1965, 337) Hempel says “The understanding [that scientific explanation] conveys lies in the insight that the explanandum fits into, or can be subsumed under, a system of uniformities represented by empirical laws or theoretical principles” (quoted in Khalifa (2012, 26), italics are Khalifa’s). Khalifa’s idea here is that according to Hempel, S does not require some special ability, skill, or know-how. The propositional knowledge contained in a correct explanation is enough to generate understanding—provided S uses straightforward deduction. Based on EMU’s solving these problems, Khalifa is compelled to pose the following challenge to theorists of understanding: EMU “urges understanding champions to state what is deficient about this picture” (2012, 21).

Khalifa is not alone in thinking the Ability Thesis redundant. Nounou and Psillos (2012) have pointed out that philosophers often take understanding to reflect a collection of specific abilities. They provide a list of typical examples of these abilities advanced in the recent anthology Scientific Understanding (de Regt et al. (2009)). These include the ability to ‘extract understanding from an explanation’, and the ability to ‘determine what is and what is not relevant in an explanation’. But Nounou and Psillos do not see how these abilities demand a new substantive theory of understanding. On their evaluation these skills are perfectly compatible with the Standard Story: “these [skills] may well be right. Yet, they do not seem to constitute an alternative object for a proper philosophical account…at the end of the day it is fully consistent with the above-noted traits that understanding and explanation go together” (Nounou and Psillos 2012, 72–3).

Nounou and Psillos are reflecting the deflationist thesis of EMU: the abilities we require to extract understanding from our explanations are nothing more than can be captured by the Standard Story. Additional ideas regarding the psychology of understanding for instance may be interesting, but not philosophical. This goes hand in hand with EMU’s explicit rejection of the need for anything more than propositional knowledge.

If EMU and Nounou and Psillos are correct, all we really require for scientific understanding is propositional explanatory knowledge plus the use of our basic logical reasoning abilities. And if this is correct then substantivists have been tilting at windmills.

3 An epistemic argument against EMU

I think an epistemological argument can be made in response to EMU (and to Nounou and Psillos). I will now argue that EMU does a very poor job of handling, in only propositional terms, our abilities to extract understanding of a phenomenon p from an explanation q. This is an obligation it really needs to meet. If EMU fails to account for philosophical ideas relevant to extracting understanding, then it fails to satisfy its boast. My idea is to show that there are important abilities required to acquire understanding of an explanation which are unaccounted for in EMU’s account of propositional knowledge. My focus here is on whether propositional knowledge alone is enough for understanding, and the pressure will be on whether the kinds of abilities required for understanding an explanation fall under EMU’s restriction to only the logical. I argue they do not. This discussion raises the question of whether understanding is really different from knowledge-that, and in Section 4 I will respond to this concern in greater detail.

Looking to EMU’s account of understanding (a)–(c), the first two criteria are not contested here. It is (c) which causes all the trouble. This last requirement is EMU’s alternative to a substantive theory’s skill condition (advocated as the Ability Thesis). Khalifa claims “skills are replaced by propositional knowledge concerning explanatory details” (2012, 26). His idea is to strip away talk of knowing-how to do something, and replace it with knowing-that there is an explanatory connection. The argument to establish (c) was already mentioned above when the skill condition was accused of redundancy in Hempel’s model: “The understanding [that scientific explanation] conveys lies in the insight that the explanandum fits into, or can be subsumed under, a system of uniformities represented by empirical laws or theoretical principles” (Hempel, quoted in Khalifa 2012, 26). The idea here is that any skills involved are merely logical and not in need of further explanation.

It should be noted that EMU’s commitment to only propositional knowledge has previously been challenged. de Regt for instance argues, “A student may have memorized Bernoulli’s principle and have all the background conditions available but still be unable to use this knowledge to account for the fact that jets can fly” (de Regt 2009, 26). Khalifa responds along the same lines as above: it is confused to think we need a full-blown substantive theory of understanding if all that is additionally required are deductive inference skills. EMU can do all the necessary work, and no supplementary story about inference skills is going to explain any hidden philosophical issues here.

I agree that the student requires more than mere propositional knowledge, but it is a mistake to think, as Khalifa does, that investigating the necessary additional inference skills is a philosophical triviality. First, it begs the question to assume that ‘knowing l is the correct explanatory link’ is a form of propositional knowledge. Khalifa thinks knowing this connection is merely propositional knowledge plus some logical abilities. In fact, if we examine in some detail what it takes to extract understanding from an explanation I think there are good reasons to conclude knowing l is not knowing-that, but is instead a form of knowing-how. And this know-how is neither a set of mere logical abilities, nor is it reducible to knowing-that. To make this argument I will consider three cases.

  • Case (i): Imagine I give my 4-year-old daughter the explanation given by Khalifa in his paper: “the shape of an airplane’s wing (curved on top and flat on the bottom) creates a difference in the velocity of air on the top and the bottom of the wing, such that the pressure exerted by the slower moving air along the bottom of the wing is greater than the pressure exerted along the top of the wing. As a result of this difference in pressure, flight is possible” (Khalifa 2012, 22). Now I happen to know that my daughter understands the meaning of the words in this explanation.Footnote 2 She knows what a plane is, what a wing is, etc. She also knows what the sentences say and can memorize each statement. I am not too surprised at her ability; she is pretty good at memorizing children’s stories like Llama Llama Red Pajama and Dora the Explorer Goes to School. She also believes each statement in the explanation because she trusts I am not lying to her. We can reasonably conclude therefore that she has knowledge of the explanation in virtue of having justified true beliefs. But if I ask her how a plane stays aloft, the best she can do is repeat the explanation I have already given. She draws no explanatory inferences about the case. In fact, if I ask her how a related object, like a hang-glider or a helicopter, stays aloft, she just stares back at me with a blank expression. She does not understand the explanation in any explanatorily relevant way; she merely knows it.Footnote 3 She understands its meaning, but cannot do anything with it. She is like de Regt’s student: she lacks precisely the insight to which Hempel refers. We can call this skill to acquire propositional knowledge ‘Semantic Ability’.Footnote 4

  • Case (ii): I give the same explanation to students in my philosophy of science course. Many of them have forgotten any physics they may once have learned but they understand why q explains p. They are tacitly aware that pressure is a force, and forces can cause lift. This is why the pressure difference entails that the plane stays aloft. They are unable to solve further problems, such as what will happen if we double the wing area of the plane, but they know why q entails p. Importantly, they differ from my daughter in that my students have an idea of how it can be that q entails p. Their knowledge-why is a knowledge-how, not a knowledge-that. They know how it is that a wing generates lift. They do see that a hang-glider works by the same principle because they make inferences regarding the lift-inducing properties that are common to both airplanes and hang-gliders. Note that this is a generalization from examples, and that means it is more than a logical inference. Rather than merely using an inference rule, the student is distinguishing properties that are relevant to the explanation from those that are not and identifying their occurrence in examples. They distinguish the properties of objects in the explanation which are responsible for the explanatory connections, or links in the story. We can call this ‘Comprehension Ability’.Footnote 5 This is the ability Hempel speaks of to fit the explanandum into our background theoretical principles—to be able to make enough inferences to follow the explanation, to make it intelligible or comprehensible. This skill is more than an ability to derive a conclusion from a law plus initial conditions because it requires an inductive generalization on the part of the student, one from previously given information about properties to those properties in new examples.

  • Case (iii): There are some students in my class who have taken enough physics not only to follow the explanation above, but to actually figure out what will happen to the plane if we double its wing area. These students know enough theoretical background information to mentally set up a new situation model with new values for the variables involved, and perform accurate calculations. These students have more than mere semantic and comprehension ability; they also have what we can call ‘Problem-Solving Ability’.Footnote 6 They have the skills necessary to synthesize, analyze, compute, evaluate, etc. based on their background theoretical beliefs related to this explanation. These students have cognitive skills that go well beyond those of my daughter or their philosophy classmates. These are the sorts of skills recommended by de Regt and Grimm, though not required by Hempel. These students exceed Hempel’s requirements for understanding an explanation of a phenomenon, p. Perhaps it would be more accurate to say they understand a theory T, which they can use to comprehend p, and much more besides.Footnote 7

To make clear that case (ii) is different from (i) by requiring more than just logical abilities, we can run the same argument for a different example—one from logic. This will help to reinforce the distinctions.

For case (i), a student S1 in my logic course is told that ~(A & B) entails (A ⊃ ~B) in virtue of applying DeMorgan’s and Implication rules:

  1. ~(A & B)

  2. ~A v ~B by DeMorgan’s on line (1)

  3. (A ⊃ ~B) by Implication on line (2)

S1 knows these statements mean respectively ‘not both A and B’, ‘not A or not B’, and ‘if A, then not B’ but has no idea how to infer the latter from the former. S1 merely knows what each means and that it is logically permitted for us to infer the latter from the former (perhaps not even knowing the inferences go the other way too). S1 lacks logical and comprehension abilities, and has only semantic ability. S1 doesn’t really ‘follow’ the explanation; she merely knows it. She has mere propositional knowledge. But in case (ii) another student S2 not only knows the derivation in the sense that she can learn it, she also knows the meaning of the propositions well enough to recognize they are equivalent. Student S2 not only knows but understands how it is that ~(A & B) entails (A ⊃ ~B). S2 recognizes that the properties of these propositions are such that they have the same truth value under every possible truth value assignment for A and B—their truth tables. S2 can see that the logical equivalences of (1) and (2), and of (2) and (3), are encapsulated by the rules. This is more than merely knowing a rule of logic and blindly applying it in a plug-and-chug way. S2 has achieved a level of comprehension of the explanation. This cannot merely amount to S2 having more knowledge-that than S1. There is a difference in kind between knowing-that and knowing-how. The difference here is an equivalence in meaning between ‘not both A and B’ and ‘If A then not B’. S1 fails to see this semantic equivalence, knowing only that they are logically equivalent. S2 not only knows they are equivalent, as if told by an authority and blindly accepting it; she also identifies the equivalence, recognizes it, and sees how it is coherent.
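To make vivid the equivalence S2 grasps, here is a minimal sketch in Python (purely illustrative, and no part of Khalifa’s or Hempel’s apparatus) which enumerates every truth-value assignment for A and B and confirms that steps (1), (2), and (3) share a truth table:

    from itertools import product

    # Each step of the derivation, encoded as a truth function of A and B.
    def step1(a, b):  # (1) ~(A & B): 'not both A and B'
        return not (a and b)

    def step2(a, b):  # (2) ~A v ~B: 'not A or not B'
        return (not a) or (not b)

    def step3(a, b):  # (3) A ⊃ ~B: 'if A then not B'
        return (not b) if a else True

    # Check every possible truth-value assignment (the joint truth table).
    for a, b in product([True, False], repeat=2):
        assert step1(a, b) == step2(a, b) == step3(a, b)

    print("(1), (2), and (3) have identical truth tables")

Running through the four assignments is, in effect, what S2 does when she ‘sees’ the equivalence rather than merely taking the rules on authority.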

On the other hand, S2 does not yet have the problem-solving ability to head off on her own and apply DeMorgan’s or Implication to other problems (unless she starts to spontaneously generalize from this case, which is an inference skill at work of course). For case (iii), a third student S3 reaches the problem-solving ability level: she can correctly infer how to apply these laws elsewhere. As many of us know from experience as logic teachers, some students make this leap to new cases immediately, others require further instruction. But this just indicates that semantic ability, comprehension ability, and problem-solving ability are three very different levels of cognitive processing requiring different skills.

To clarify the cases: Case (i) reflects semantic ability: the ability to make intelligible the meaning of words and sentences in an explanation—knowledge-that. Case (ii) reflects comprehension ability: the ability to make inferences from the literal meaning of propositions to generate explanatory knowledge—knowledge-how. Case (iii) reflects problem-solving ability: this level presumably incorporates (i) and (ii) but also includes the skills necessary to assimilate new information, set up and calculate solutions to new problems using previous principles, and evaluate the results of those calculations.

These cases reflect three distinct, though overlapping levels of cognitive skills. Since EMU permits only propositional knowledge for understanding p (knowledge-that) it must fail to capture case (ii), which requires knowledge-how. Khalifa claims EMU does capture case (ii) by appeal to S’s inferential ability but that cannot be correct because making inferences is not just having more propositional knowledge; it requires our doing something. It requires we infer how-p. This is knowledge-how, not knowledge-that. On the other hand, de Regt and Grimm demand too much from S, requiring she achieve level (iii) ability. This high level of achievement outstrips what we normally mean by comprehending an explanation, reflecting an ability to use a theory about p rather than merely grasp how-p.

This three-level taxonomy of understanding-related abilities suggests a rewrite of EMU’s account of understanding. While (a) and (b) remain the same, we ought to rephrase (c) as:

  • (c): For some l, knowing how l is the correct explanatory link between the explanans and the explanandum.

This last condition reflects an endorsement of the comprehension ability thesis and all that comes with a substantive theory of comprehension. We should now say, given the Hempel ‘insight’ claim, that EMU must be committed to ‘comprehension’, but not to ‘problem-solving’. This fits nicely with the abilities mentioned by Nounou and Psillos. The amendment to EMU is charitable, though one might be tempted to say that EMU has been stretched beyond its legitimate initial characterization. Whatever the nomenclature, this is not in conflict with but rather supports Hempel’s view. EMU’s mistake is to treat inference skills as trivial in accounting for understanding.

I think Khalifa therefore faces a dilemma: either reject ‘comprehension’ and stick with only propositional knowledge (the semantic ability thesis), or embrace ‘comprehension’ by renouncing EMU’s rejection of a theory of skills/abilities for understanding. If EMU includes only the semantic ability thesis then it fails to capture Hempel’s notion of ‘insight’ to which it is committed. If EMU includes the comprehension ability thesis then we have to recognize the need for a substantive theory of understanding to support that commitment—one cannot simply assume that understanding as comprehension is covered by existing theories of explanation.

4 Potential objections to the epistemic argument

I will consider three objections to the above argument. The second is the most important, but the first sets the scene.

First objection

One apparent way out of this dilemma is for the EMU proponent to claim that all this know-how talk is unnecessary and misleading because knowing-how ‘q entails p’ really only requires more knowledge-that. All that know-how requires is that S know that the reason for p is q. Knowing why the plane stays aloft is just to know that the reason is the difference in air pressure on the top and bottom of the wings. This point is made by Khalifa (2012, 26).

But this response makes a mistake. While it is true that to understand how p is entailed by q one must know the reason that q produces p, this propositional knowledge alone is insufficient. One must also be able to infer from the properties of q to the event p. One must infer that the different air pressure causes lift because there is more force up than down. This inference does not require only knowledge-that, but also knowledge-how; an inferential skill to bridge the information gap between merely knowing-that there is a pressure difference and grasping that this difference produces, causes, entails, or results in lift. Similarly for the second example, one must infer that ‘not both A and B’ is analytically equivalent to ‘not A or not B’. Of course this only holds for the inclusive definition of disjunction, so to make this inference does require something more than logical ability; it requires comprehension of the logical operator. This just helps to prove my point: logical plug-and-chug reasoning is not enough for comprehension of a derivation—nor for understanding why something q explains something else p. Knowing-that q entails p is not the same as inferring-that q entails p.

Second objection

Another objection pushes in a similar direction but is subtly different. Instead of claiming knowing-how emerges from amassing knowledge-that, the suggestion is instead that knowing-how is entailed by knowing-that. The idea behind this objection is that if we really know an explanation then we must fully know the meaning of the concepts involved, and this knowledge of the meaning of concepts itself entails a knowing-how. For example, in the airplane case, for a student to really know the explanation it is required that they know what ‘pressure’ means, and if they do know what ‘pressure’ means this entails they know how to infer things like ‘differential air pressure between the top and bottom of a wing causes lift’. If one’s view of concepts brings with it these inferential abilities, which I have used to separate knowledge from understanding, then the gap in my account between knowing an explanation and understanding it evaporates. According to this objection my daughter has only a superficial knowledge of the explanation because she has only an incomplete knowledge of the meaning of the concepts involved. She only appears to know the meaning of ‘pressure’ but really fails because she has no inferential abilities with regard to that concept. Knowing the meaning of a concept on this account requires knowing how to actually do something with it. If my account ignores this possibility then it is not really neutral with regard to one’s view on concepts.

My answer to this criticism is to point to the significant difference between understanding a concept and understanding an explanation. The critic is accusing me of assuming a non-inferential account of concepts, and suggesting this generates my biased view that explanatory knowledge is similarly non-inferential. But one can perfectly well hold an inferential view of concepts (even of propositions) while rejecting the claim that knowledge of a set of propositions in an explanation entails understanding it.

We should spend a little time investigating this important issue since it is central to a clear grasp of the difference between cases (i) and (ii). First of all, let us assume that most words in a natural language get their meaning from the concepts they are used to express. A concept need not be a single word, for there are complex expressions like ‘exploding tail engine’ which are concepts composed of simpler concepts. Yet, we may want to limit the size of a concept, since it seems a stretch to think of entire propositions as single concepts. So, whatever exactly concepts are, they are not to be identified merely with words or with propositions.

Since the criticism I am considering points to concepts as the locus of my mistake, we should note that the structure of concepts is typically thought of as being either one of ‘containment’ or of ‘inferential relations’.Footnote 8 The containment model takes a concept literally to be composed of the proper parts of other concepts. On this account the concept ‘exploding tail engine’ could not be tokened without tokening the concept ‘engine’. The other view, the inferential account, takes concepts to be constituted by inferential dispositions. On this account ‘exploding tail engine’ may implicate other concepts (such as aircraft, crash, passengers, etc.); however, any of these associated concepts need not actually be tokened when the phrase is tokened. At most one may have a disposition to infer ‘aircraft will crash’ from ‘exploding tail engine’.

My critic has the inferential model of the structure of concepts in mind. One might therefore think we can defer the debate over knowledge and understanding to those arguing over the containment versus inferential model of concepts. But notice that even if we adopt the inferential model of concepts there is no reason to think knowing an entire explanation will entail understanding that explanation. The reason for this is that there is a big difference between understanding a concept (or even a proposition) and understanding an explanation composed of a set of propositions.

To illustrate this point, go back to the logic example (1)–(3) above. The first statement ~(A & B) includes a number of concepts: negation, conjunction, and the idea that uppercase letters can act as variables for propositions. The statement means ‘not both A and B’. Let’s be really liberal and assume for the benefit of my critic that the inferential model can apply not only to concepts but to entire propositions. In this case adopting the inferential view of concepts/propositions suggests that ‘really’ knowing what this statement means entails being disposed to make inferences about it. But even with this permission, what kind of inferences are we responsible for making, and how many? Are we expected to be able to infer step (2) ‘not A or not B’? It seems the inferentialist critic we are dealing with requires it, else understanding the derivation of step (2) from (1) requires more than mere propositional knowledge. This would be strained however. It is normal in teaching logic, for instance, to have to show students that (1) entails (2) by illustrating with examples: ‘If it is not the case that I own both a cat and a lizard, then I either don’t own a cat or I don’t own a lizard’. This inference is perhaps not terribly demanding, but it is at least a move that takes some cognitive work—it does not spontaneously occur to everyone.

But even if the inferentialist does implausibly think we all spontaneously project DeMorgan’s law in all tokenings of negated conjunctions, that doesn’t prove his case. The criticism, remember, is that knowing an entire explanation is enough for understanding it. If we stick with the simple logic example, knowing that (1) entails (2), and that (2) entails (3), does not enable my students to ‘see’ that ‘not both A and B’ is logically equivalent to ‘if A then not B’. It takes them some serious cognitive work to get from the meaning of the former proposition to the meaning of the latter, so it is just implausible to think knowing (1) ‘really well’ will reveal to them the content of (3). It doesn’t. Looking at their midterm exams proves it.

The point is that even being liberal with inferentialism and assuming it extends to cover entire propositions, knowing the content of propositions and the explanatory relations between them is not sufficient for understanding an explanation as a whole. (I will illustrate in Section 6 of this paper what I take to be the additional inferential work necessary to achieve understanding).

Third objection

Some will claim it doesn’t make sense to push a general thesis about the relation between explanation and understanding in abstraction from the specific theory of explanation held. One can after all have a theory of explanation which has nothing to do with understanding, such as is the case with Hempel’s; but one can also have a theory of explanation like Achinstein’s (1983) which is tied essentially to understanding. On the latter account explanation is defined in terms of understanding, and in that case the question of whether understanding an explanation requires more than knowing it ceases to arise. This objection seemingly provides an important lesson about both deflationary and substantive accounts of understanding: their relevance to explanation depends entirely on whether the explanatory account at hand takes understanding as primitive.

There are however good reasons not to take understanding as primitive. Grimm (2006) for example has persuasively argued that understanding is a form of knowledge. He argues that, like knowledge, understanding is Gettierable, requires truth, and is not transparent. In fact, Grimm notes that most philosophers of science working in this area, including Lipton, Salmon, Woodward, Kitcher, and Achinstein himself, take understanding to be a form of knowledge. If one is going to claim understanding cannot be further analyzed in terms of knowledge one at least requires strong reasons for doing so. I think we need understanding explained, and unlike Khalifa I don’t think it is enough to simply claim everything we want to say about scientific understanding can be captured with the current epistemology of scientific explanation. I think we need to go beyond Hempel’s dicta and unpack some of the details about how the insight of which he speaks, and to which Khalifa defers, can generate understanding. Such insights are far from trivial. To make clear just how non-trivial they are, and to highlight the need for a substantive theory of understanding, I will next sketch some ideas about the abilities we require for scientific understanding. This is merely a sketch but should give an idea of how to recover a substantive theory of understanding.

5 Which abilities are required for scientific understanding?

Let us return to our previous cases (i) and (ii).Footnote 9 I have argued that case (i) illustrates semantic ability and case (ii) comprehension ability. By again looking at the differences between the cases it should be possible to determine precisely which abilities are required for scientific understanding and show that they go well beyond those condoned by EMU. Knowing these abilities will also help us judge the adequacy of new accounts of scientific understanding. In particular, we want to know why EMU falls short in terms of the inferential abilities required by those who understand. The goal is therefore to distinguish between the inferential abilities permitted under EMU, (which I allow can include those associated with the above-mentioned inferential account of concepts), and those abilities that go beyond EMU’s domain but which are necessary for comprehension. My strategy is to use questions as a tool of inquiry.

It is often quite useful when trying to establish the nature of someone’s cognitive achievement to give them questions and evaluate them on their answers. Similarly I think we can use questions to help evaluate the nature of cases (i) and (ii). What sort of questions do we expect S to be able to answer correctly if S has semantic ability rather than comprehension ability? In other words, assuming S answers correctly, what kinds of questions reflect S’s possessing only propositional knowledge of an explanation, not understanding?

Well, scientists have to memorize a lot of information if they are to think about an issue. They need to know the definition of terms (plane, wing, air, velocity) and facts (gravity pulls massive objects together, pressure is force over an area). Scientists also require knowledge of complex ideas like theoretical procedures, key points in an argument, principles, etc. These more complex ideas we can think of as complex concepts, whereas the simpler ideas require only limited connections between concepts. Both simple and complex ideas can be memorized and constitute S’s declarative knowledge in her cognitive framework—her conceptual network. When professors test their students on declarative knowledge they ask straightforward questions: ‘describe theory X’; ‘explain theory Y’; ‘state Z’s views in your own words’; ‘identify the defining characteristics of theory X’, etc. This is testing for ‘knowledge-that’.

If these are the kinds of questions S needs to be able to answer in order to reflect possessing knowledge, I want to know which skills are required to answer these questions. We should not undervalue the achievement, for it takes a lot of work to read and commit to memory a large number of complicated facts. According to current cognitive psychology the skills themselves are quite limited though, requiring only what we can call ‘referential inference’ (recall) and ‘propositional memorization’ (semantic memory).Footnote 10 The former is the task of activating the appropriate concepts as S hears them; the latter is committing them to memory. To know the explanation for why a plane stays aloft my daughter merely has to make sense of and commit to memory each of the propositions I give her. This requires constructing the meaning of each proposition as it is heard, and keeping them all in sequence. The construction of meaning for a statement is complicated in detail, but the cognitive processes can be thought of as only requiring activation of previously established concepts. (If the ideas are not already available to pull up from memory then the task of memory construction becomes very difficult of course—learning. We are assuming this is not the case for our example). All that my daughter has to do is operate the abilities of ‘referential inference’ and ‘propositional memorization’. This semantic ability, as we have seen, generates only literal knowledge.Footnote 11 As argued above, this is all that is required to possess propositional knowledge.

Some people are very good at rattling off definitions of concepts, reciting principles, and rules, perhaps even summarizing entire theories. Yet many are not so good at identifying an instance of some concept in a given example. Asking them to do so is to pose a cognitive challenge demanding more than mere knowledge alone. If a situation is posed to them, they just cannot recognize which law or principle is in play. My daughter is in this situation when it comes to our plane example. My students on the other hand are not only able to recall the explanation of what keeps a plane aloft, they can also recognize the principle of lift at work here as well as in other examples. So, if I ask whether Bernoulli’s principle is also acting in the case of cars staying on the road they reply ‘no’. Additionally, they can generate their own examples of the principle at work, citing hang-gliders, birds, perhaps even the swimming of fish or penguins. This ability to identify general principles in multiple examples and to generate one’s own examples reflects more than mere knowledge, as argued above, and sits better with the notion of comprehension, or understanding.

Which skills are at work in these students, but lacking in my daughter? Unlike questions that test S’s knowing something, ‘understanding questions’ ask someone to recognize a general principle at work in a specific situation, or ask them to provide a specific situation in which a given principle is at work. Neither of these comes easily. To achieve understanding students typically need to encounter new examples and illustrations.Footnote 12 They begin with only a vague idea of what is being referred to, but with more examples students come to recognize the key properties of an example and relationships which embody the relevant principles, laws, and concepts. Previously shallow, literal knowledge becomes deeper as concepts are related, integrated and elaborated. The cognitive processes involved in making these connections are again inferential in nature but go way beyond simple referential inference. When we encode explanations at this level we make causal, logical, and probabilistic inferences—explanatory inferences.Footnote 13 We connect concepts in our cognitive network: The force of the air under the wing pushes the plane up; the plane maintains its altitude if the upward lift is equal to the downward force of gravity. We can expect a continuum of ‘depth of understanding’ precisely related to the number and importance of these sorts of causal, logical, and probabilistic inferences.

Furthermore, the nature of the inferences at this level is not really mysterious. We establish causal relationships and their logical and probabilistic cousins in well-known ways: through generalization and specialization inferences.Footnote 14 These are familiar friends: we make generalizations about specific properties based on repeated correlation of events (birds tend to fly). We make specialization inferences when exceptions to our generalizations occur—we construct an ‘exception to the rule’ rule (penguins are non-flying birds). There is some controversy over how our minds develop these processes. I do not have space to investigate this here. What I want to do instead is sketch a contribution to how we can think of the difference between knowledge and understanding in terms of cognitive processes, using elementary tools for modeling the mind. This will help drive home the inadequacy of EMU as an account of scientific understanding and also highlight how to accommodate the abilities just determined as necessary for understanding. The philosophy in all this will be in providing a theoretical story about how explanations which are understood rather than just known can be represented in a model, however we may choose to construct that representation.

6 The inferential model of scientific understanding

This section is a highly compressed sketch of the argument given by Newman (forthcoming).

We are concerned with the mind’s ability to represent scientific explanations as propositional knowledge, and how minds extract understanding from that knowledge. There are many ways to model the human mind. Currently, there are broadly speaking four popular approaches: connectionist models; Bayesian models; declarative/logic-based models; dynamic systems models. I am not going to debate which is the most plausible. Let us just assume for the sake of argument that a good way to model human mental representation is through logic-based (rule-based) mental models.Footnote 16

On this approach a mental model is a kind of mental representation that is used to model the properties, relations, and processes we perceive around us.Footnote 17 Rules are adopted as the basic building blocks of all representations, and when they are activated at different levels of generality or specificity they form a hierarchy. A mental model is a specific activation of a complex interrelated hierarchy of condition-action rules, each rule taking the form of an ‘if-then’ conditional. An important idea behind this approach is that the hierarchy undergoes updating of rule-structure and rule-strength with time-step execution cycles—learning.

Some rules in a network are diachronic, some are synchronic. The synchronic rules are useful for identifying (categorizing) what we are modeling, so they can be used to characterize our concepts. For example, ‘if X has wings, fuselage, and engines, then X is a plane’. Synchronic rules also activate associated rules, forming activations of conceptually related rules. For example ‘if X is a plane, activate the ‘vehicle’ concept’ and ‘if X is a plane, activate the ‘flying’ concept’.

Diachronic rules on the other hand are concerned with prediction and action commands, telling us what to expect in future states of the model and what to do in response to a stimulus. These rules therefore make predictions, such as ‘if the plane loses engine power, it will fall to the ground’. They also provide action commands such as ‘if you see a plane falling out of the sky, call the fire and rescue service.’

Although there is much more to be said about mental models, I will cut this description unconscionably short with just one more concept, one that is going to be very important for us: coupling. Two rules are coupled when one activates the other. For example, take R1 to be an ‘if-then’ rule which has a consequent identical to the antecedent of R2. Each time R1 is activated it may activate R2. For instance, R1 and R2 may be coupled: ‘if a plane falls out of the sky, then it explodes when hitting the ground’; ‘if a plane explodes when hitting the ground, then it will be destroyed’. In order for a sequence of rules to represent a system as a mental model they have to be coupled. Coupling therefore plays an essential role in the construction and maintenance of a mental model.
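To fix ideas, here is a minimal sketch in Python of how such condition-action rules and their coupling might be encoded. This is my own illustrative rendering, not a fragment of the mental-models literature; the field names (condition, consequent, kind, strength) are assumptions chosen for readability.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        # A condition-action rule of the form 'if <condition> then <consequent>'.
        condition: str
        consequent: str
        kind: str              # 'synchronic' (categorization/association) or 'diachronic' (prediction/action)
        strength: float = 1.0  # revised by learning over execution cycles

    # Synchronic rules categorize what is being modeled and activate associated concepts.
    categorize = Rule("X has wings, fuselage, and engines", "X is a plane", "synchronic")
    associate = Rule("X is a plane", "activate the 'flying' concept", "synchronic")

    # Diachronic rules predict future states or issue action commands.
    predict = Rule("the plane loses engine power", "it will fall to the ground", "diachronic")

    def coupled(r1: Rule, r2: Rule) -> bool:
        # R1 and R2 are coupled when R1's consequent matches R2's condition,
        # so that activating R1 can activate R2.
        return r1.consequent == r2.condition

    crash = Rule("a plane falls out of the sky", "it explodes when hitting the ground", "diachronic")
    destroy = Rule("it explodes when hitting the ground", "it will be destroyed", "diachronic")
    assert coupled(crash, destroy)  # this coupled pair can figure in a mental model of the crash sequence

Nothing hangs on the string representation; the point is only that coupling is a structural relation between rules, one that must actually be set up by the subject.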

Coupling is generated by two inductive mechanisms, and I think it is precisely these mechanisms that are responsible for our coming to understand a phenomenon rather than merely know it. These mechanisms are inductive rule-generalization and rule-specialization (abduction and analogy are special cases of these).

Rule generalization is of two sorts: condition-simplifying and instance-based. Condition-simplifying inductive generalizations occur when the cognitive system recognizes that a rule carries unnecessary conditions and modifies the rule by cutting them. For example, take the rule ‘if X has wings, engines are attached to X, and X’s wings are not feathered, then X is a plane’. It may turn out, given the system’s experience, that this rule is activated just as well without the final condition (‘X’s wings are not feathered’). If so, then the rule can be simplified by cutting the clause without harm.

Instance-based generalizations are more familiar: a rule is developed or strengthened on the basis of similar conditions co-varying in the environment with similar consequents. For instance, if our system frequently sees flying objects, and they are aircraft, it establishes a rule reflecting that co-variation. This is basically a case of enumerative induction, but is essential for establishing rules that can fire to represent the environment.

The second mechanism responsible for generating coupling between rules is rule specialization. This is modifying a rule in light of counterexamples. This might for instance occur in the situation mentioned above when we find that not all planes behave according to typical linear flying patterns. Instead of throwing out the standard rules we just modify the relevant conditions to include ‘and the plane is not designed for acrobatics’. This mechanism saves the system from discarding useful but overgeneralized rules.
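As a rough illustration (again my own toy encoding, not drawn from any existing cognitive-modeling library), the three revision operations just described, condition-simplifying generalization, instance-based generalization, and specialization, can be written as operations on rules whose conditions are sets of clauses:

    from collections import Counter

    # A rule is modeled here as a pair: (frozenset of condition clauses, consequent string).

    def simplify(rule, redundant_clause):
        # Condition-simplifying generalization: drop a clause experience shows to be unnecessary.
        conditions, consequent = rule
        return (conditions - {redundant_clause}, consequent)

    def specialize(rule, exception_clause):
        # Rule specialization: add a clause that blocks known counterexamples.
        conditions, consequent = rule
        return (conditions | {exception_clause}, consequent)

    def induce(observations, threshold=3):
        # Instance-based generalization: propose a rule once a condition/consequent pairing
        # has co-occurred often enough in experience (enumerative induction).
        counts = Counter(observations)
        return [pair for pair, n in counts.items() if n >= threshold]

    plane_rule = (frozenset({"X has wings", "engines are attached to X", "X's wings are not feathered"}),
                  "X is a plane")
    plane_rule = simplify(plane_rule, "X's wings are not feathered")

    flight_rule = (frozenset({"X is a plane", "X is aloft"}), "X follows a typical linear flight path")
    flight_rule = specialize(flight_rule, "X is not designed for acrobatics")

    # Repeatedly seeing flying objects that turn out to be aircraft induces a new rule.
    sightings = [("X is flying", "X is an aircraft")] * 4
    print(induce(sightings))  # [('X is flying', 'X is an aircraft')]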

How does this appeal to mental models help us understand understanding? My suggestion (the philosophy in all this) is that coupling of rules helps us see why understanding is different from mere ‘knowledge-that’. The Inferential Model of Scientific Understanding makes the simple claim that understanding requires the coupling of declarative knowledge, and this is achieved when S makes the correct generalization and specialization inferences. This appeal to coupling then is my attempt to accommodate the skills and abilities missing from prior accounts. To do so I am appealing to specific inferences. I call these inferences the activation of ‘inference rules’. Inference rules operate on those rules that represent parts of an explanation—what I call ‘ordinary rules’.

For instance, we might use an ordinary rule to represent that ‘air flowing over a wing moves faster than air moving under a wing’. We use inductive ‘inference rules’ to couple one set of ‘ordinary rules’ to other ‘ordinary rules’ which represent, say, the proposition that ‘differences in air flow entail differences in air pressure, which entail differences in force’. The coupling is manifested in the inferences from air flow differences to pressure differences to force differences. Without S executing these inductive inferences all that is acquired from hearing an explanation is declarative knowledge.

For another illustration go back to the logic example, of how a student can understand rather than merely know the derivation of ‘if A then not B’ from ‘not both A and B’. The claim I made was that S1 does not understand the derivation though S2 does, and the difference between them is that S1 only knows literally what is stated in each step, whereas S2 knows how to connect the steps via appropriate inference rules. Translated into the mental model idiom, I claim that though S1 and S2 know the same declarative propositions, S2 couples those propositions together in an inferential pattern. Coupling is the activation of a second ordinary rule by a previous ordinary rule in a cognitive framework. The activation is executed by an inferential rule. So it is the failure to use inferential rules that distinguishes a knower from one who understands.

If this is right and it is the coupling of ordinary representation rules by inference rules that generates understanding, then it is important to know how these new inference rules get generated. I suggest inference rules are created via the two procedures mentioned above: inductive generalization and specialization. We can explain this with an evolutionary learning mechanism adopted by the mental models approach: our inductive inferences, which connect ordinary representation rules together, are driven by repeated confirmations. The rules are rewarded with an increase in strength when they make correct predictions, but penalized with reduction in strength when they get things wrong. Successful rules continue to represent the environment because they keep getting stronger, so patterns of successful inductions that are rewarded are reinforced.

For instance, if looking to the sky reveals an image that elicits the categorization ‘plane’ and this leads via diachronic rule firing to the prediction that the plane will stay aloft, then this association is reinforced just in case the plane does indeed stay aloft. We develop a rule, such as ‘If a plane is aloft, it will continue aloft’. Similarly, with the logic example, if we are rewarded when we infer from ‘not both A and B’ to ‘not A or not B’ to ‘if A then not B’, the inference rules we use to get there will be reinforced. We will default to those rules again and again, until they cease to work, in which case they will lose their authority. We maintain these connections between concepts when they are successful. We reject connections that are predictive failures. In virtue of successful generalization and specialization inferences we connect elements in our cognitive network. The more connections and the more important they are, the more we come to understand the world.
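A bare-bones version of this reward-and-penalty dynamic might look as follows; the update sizes and the cap on strength are illustrative assumptions of mine rather than anything specified by the mental-models approach:

    def update_strength(strength, prediction_correct, reward=0.1, penalty=0.2):
        # Strengthen a rule whose prediction is confirmed, weaken one that fails;
        # strengths stay in [0, 1], so repeatedly failing rules lose their authority.
        if prediction_correct:
            return min(1.0, strength + reward)
        return max(0.0, strength - penalty)

    # The 'if a plane is aloft, it will continue aloft' rule, tested against observations.
    strength = 0.5
    for plane_stayed_aloft in [True, True, True, False, True]:
        strength = update_strength(strength, plane_stayed_aloft)
    print(round(strength, 2))  # 0.7: mostly confirmed, so the rule is reinforced overall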

We can say, therefore, that with inductive inference via coupling, S achieves the insight of which Hempel speaks, but which is missing from the D-N model of explanation. Coupling is what provides the integration of information into a coherent network of belief, and it is this that reflects understanding an explanation.

For another example, take the classic case of the flagpole and its shadow. The height of the flagpole and angle of the sun in the sky explain because they predict the length of the shadow. But of course the shadow also predicts the height of the flagpole, so the D-N model is inadequate. The mental model account can explain why the D-N model fails: it does not capture the synchronic association inferences between concepts, but only the diachronic prediction inferences of the model. We know that to explain the height of a flagpole we need to look to something like people’s intentions, and we know this because the concept ‘flagpole’ when represented in a mental model fires associated concepts like ‘nation’, ‘patriotism’, ‘hegemony’, etc. So, to explain the height of the pole we need to appeal to why people would make it as tall as they did, not how long a shadow it casts.Footnote 18

The Inferential Model of Understanding claims that knowledge of an explanation is merely the activation of ordinary representation rules in a cognitive hierarchy that correctly represents the explanation’s propositional content. In contrast, understanding an explanation is achieved when those activated ordinary rules are coupled by correct inference rules. This is the ‘how’ of understanding for explanations. We can now explicitly state the difference between knowledge and understanding in this idiom:

  • (K): Knowledge of an explanation is the activation of ordinary rules in a cognitive hierarchy that correctly represent the explanation’s propositional content.

  • (U): Understanding an explanation is achieved when those activated ordinary rules are coupled by the correct inference rules.
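Put in the same toy idiom used above (an illustrative sketch of mine, not a formal statement of the model), (K) asks only that the explanation’s ordinary rules be activated, while (U) additionally asks that consecutive rules be coupled by inference rules the subject has actually executed:

    # Ordinary rules representing the airplane explanation, as (condition, consequent) pairs.
    explanation = [
        ("the wing is curved on top and flat on the bottom", "air flows faster over the top than under it"),
        ("air flows faster over the top than under it", "pressure is lower above the wing than below it"),
        ("pressure is lower above the wing than below it", "there is a net upward force (lift) on the wing"),
        ("there is a net upward force (lift) on the wing", "the plane stays aloft"),
    ]

    def knows(activated_rules, explanation):
        # (K): every ordinary rule of the explanation is activated in S's cognitive hierarchy.
        return all(rule in activated_rules for rule in explanation)

    def understands(activated_rules, couplings, explanation):
        # (U): in addition to (K), each consecutive pair of ordinary rules is coupled
        # by an inference rule S has actually made.
        chained = all((r1, r2) in couplings for r1, r2 in zip(explanation, explanation[1:]))
        return knows(activated_rules, explanation) and chained

    daughter = set(explanation)                                 # has memorized every step...
    student = set(explanation)
    student_couplings = set(zip(explanation, explanation[1:]))  # ...but only the student couples the steps

    print(knows(daughter, explanation), understands(daughter, set(), explanation))            # True False
    print(knows(student, explanation), understands(student, student_couplings, explanation))  # True True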

7 Benefits of the inferential model

There are some interesting consequences that follow from my characterization of (K) and (U). First we should note that the inferential model can accommodate those peculiarities associated with understanding which Khalifa uses EMU to solve. Those mistaken ‘aha’ moments Khalifa refers to (where one mistakenly feels like one understands) are explained as cases where the inferential rule being used to associate two or more ordinary rules is incorrect, although it appears to the subject to be the correct inference. Like EMU, the inferential model also asserts that if you have a correct explanation of p then you understand p regardless of how you feel about it, yet the inferential model goes further by explaining how you can come to obtain that feeling in the first place. EMU has no explanation for how that feeling arose, whereas the inferential model locates its origin in the activation of an incorrect inference rule.

Where EMU claimed to have answered de Regt’s call for a ‘constitutive’ account of understanding, we now see that it simply rejects the request. The inferential model on the other hand embraces the problem and locates the constitutive components in the generation of, and relations between, ordinary and inferential rules in our cognitive architecture.

Of course the third nagging problem for understanding theorists which EMU apparently took care of was the temptation toward the ‘ability thesis’. Khalifa claims EMU swallows this thesis whole, either through Hempel’s or Woodward’s accounts of explanation. But as we have seen there is more to be said about inference than Khalifa recognizes. Specifically, what he misses is the difference between cases (i) and (ii).

But aside from answering these problems, the inferential model is capable of providing an explanation of one very important additional issue which EMU has no hope of addressing. The inferential model can explain the relation between the many different types of explanations and scientific understanding. That is, we see a number of different models of explanation (Deductive-Nomological, causal, unifying, etc.), but EMU cannot address why it is that each is correct for some cases, if not for all. The Inferential Model explains the understanding we get for each type of case in terms of the coupling of cognitive rules. Logical explanations are explanatory for us because of the coupling of diachronic rules reflecting logical entailment relations that have been established by generalization and specialization inferences. Causal explanations explain because they reflect the coupling of diachronic rules reflecting causal entailments established by the same mechanisms. Explanations of a unifying nature are slightly different—unification is intelligible in virtue not of diachronic antecedent-consequent relations but because of synchronic association relations. For instance, we understand better the mechanics of projectiles because the concepts in Newton’s kinematics are associated with those in both Galileo’s and Kepler’s theories. That is to say, our concepts in Newton’s theory are informed in virtue of their unifying those in predecessor theories.Footnote 19

Mental representations as conceived under the mental model theory can, I believe, provide a clean description of why S comes to know how-p, and thus how S comes to understand p. What we have with the inferential account is a naturalistic story (because it is working within the constraints of empirical psychology) of why we find some kinds of explanation explanatory. This is a theory which goes well beyond anything currently existing in the philosophy of science literature, and consequently falls outside the scope of EMU. It is however a necessary addition to our study of understanding precisely because without it we run the risk of missing the differences between cases (i) and (ii), and that is to mistake mere propositional knowledge of an explanation for understanding it.