Introduction

The promise of finding what is now known as the “mark of the cognitive” has, at least for some philosophers, a rather strong allure. Such a discovery is thought to be replete with theoretical, if not also practical, benefits. Armed with the mark in question, one is able to explicate what makes cognitive processes, states, or organisms cognitive. But one can do more. Settling on a mark of the cognitive would, presumably, also settle the bounds of cognition. Do we want to know where cognition begins and where it ends? One way to find out—for some, the only way—is to specify first what cognition is.

But is there an available answer to the question “What is cognition?” Adams and Garrison (2013) begin to provide such an answer. They do so by (1) proposing a necessary part of the mark of the cognitive; (2) criticizing an alternative mark of cognition (Rowlands 2010); and (3) applying their proposed answer to show that certain systems (such as bacteria, slime mold, plants, can-collecting robots, and a type of termite mound) are not cognitive. For Adams and Garrison (hereinafter “A&G”), systems are cognitive only if they are capable of acting for reasons; none of the aforesaid systems is capable of acting for reasons and, therefore, none of them is cognitive. In what follows, I critically examine A&G’s proposed necessary condition for the mark of the cognitive. After a brief presentation of their position (“Cognition as Reasoning-Involving” section), I argue not only that their proposal is in need of additional support (“The Role of Reasons Explanations” section), but also that it is too restrictive (“The Case of Visual Agnosia” section).

Cognition as Reasoning-Involving

A&G conclude that cognition involves reasoning and articulate this idea in different ways. For instance, they maintain that:

  • To be cognitive (or intelligent) a system must have beliefs, desires, or intentions. “[P]lants are only cognitive systems if they have beliefs, desires, or intentions of their own” (342).

  • A system is cognitive if its behavior is cognitively produced (cf. ibid.). Better: a system is cognitive if its behavior is the result of certain mechanisms, and the names of these mechanisms, A&G tell us, are “beliefs, desires, plans, intentions, and reasons” (348).

  • A system is cognitive if it acts for “system centered reasons” (347). They write: “We will begin by suggesting that creatures that cognitively process information are capable of doing things for reasons” (346). Or: “Cognitive systems do one thing A in order to do (achieve) another thing B… [and] the goal of doing B and the strategy for accomplishing B by doing A are represented within the system” (347).

  • A system is cognitive if one can explain the behavior of the system by appealing to the representational content of its internal states (346).

  • A system is cognitive if one can explain its behavior in terms of reasons (347).

  • “We propose that only cognitive creatures or systems are capable of acting/behaving for reasons (explainable via reasons at the level of the individual)” (351).

As the above list attests, the requirement that cognition involves reasoning is expressed by A&G in at least two ways. First, cognition involves reasoning insofar as acting for reasons of one’s own (and consequently, having certain attitudes) is a necessary condition for cognition. That is to say, according to A&G, there is no cognitive system or organism that lacks such attitudes or is incapable of acting for reasons.[1] But A&G also understand the requirement that cognition involves reasoning to be an explanatory requirement. To wit, a system S is a cognitive system if one can provide what I will be calling a “reasons explanation” for S’s behavior—viz., an explanation of S’s behavior that cites S’s beliefs, desires, or intentions. These two ways of spelling out the requirement that cognition involves reasoning turn out to be closely related for A&G, for there exists an intimate connection between a reasons explanation, on the one hand, and having reasons, on the other. For instance, when discussing the behavior of certain plants, they write:

The chemical mechanism within the plant that causes it to turn its leaves toward the light doesn’t rise to the level of attributing reasons to the plant itself… The plant is not doing things for reasons (not reasons of its own) (347).

In this passage, A&G move from (1) no reasons explanation can be provided for S’s behavior to (2) S does not act for reasons. The same inference is made when they discuss the behavior of Brooks’ robot, Herbert (see 346–347). For A&G, as I have already stated, it is a necessary condition for being cognitive that the system’s behavior can be explained in terms of reasons.[2] And, presumably, a reasons explanation is indicative of having reasons—i.e., if S’s doing of ϕ is amenable to a reasons explanation, then S does ϕ for reasons of its own. In other words, the practice of providing reasons (or cognitive) explanations tracks the existence of reasons, beliefs, desires, and intentions. A&G’s own claims support this intimate relationship between reasons explanations and having certain attitudes or psychological states:

Only motions with cognitive explanations are truly cases of intelligent behavior. Consider tropisms. Plants will turn their leaves toward the light. There is some chemical mechanism within the plant that causes its behavior to mimic that of an intelligent agent. But the plants are only cognitive systems if they have beliefs, desires, or intentions of their own (342).

The Role of Reasons Explanations

How can A&G argue that a reasons explanation is necessary for cognition? Here is one strategy that will not work: cite examples of systems that do things (or at least, systems that are capable of doing things) for reasons and then demonstrate that such systems are cognitive. For A&G’s purposes, this strategy is inadequate because what they are after is a necessary condition for cognition, not a sufficient one. Showing that certain systems which do things for reasons are cognitive systems (e.g., cats or human beings) would at most show that acting for reasons suffices for cognition; it fails to offer the requisite support for their proposed mark of the cognitive.

What A&G have to do, instead, is argue that a reasons explanation is necessary for cognition by demonstrating that whenever such an explanation is missing, the agent or system whose behavior is being explained is not cognitive. This is indeed the strategy they choose. For instance, when discussing the behavior of bacteria that move towards food and away from toxins, they state:

To an observer this behavior may be seen as cognitive and one may be tempted to attribute beliefs, desires, and goals to the bacterium. But on closer inspection there are complete explanations (none of which constitute cognitive processing) of their “sensing,” of the chemical reactions that produce energy for their movement, and of the stimuli that trigger their movement (341).

Or, in their discussion of the purported cognitive status of slime mold, we find the following passage:

[C]onsider slime mold which has received much attention for behavior that is similar to behavior demonstrating basic intelligence. Slime mold can navigate a maze and find the shortest route to the end of the maze (where a food source is located). What is more it can time its movement through the maze to match the periodicity of on and off infusion of cold (which inhibits motion through the maze). …Whatever the final explanation of the behavior of the slime mold, it is very unlikely that it is due to cognitive resources (341–342; footnote removed).

Let me focus on the first quoted passage. Taken at face value, the passage might mislead one into thinking that what demonstrates that bacteria are not cognitive is the fact that there are complete chemical explanations of their behavior. But this is not A&G’s view. Suppose that a complete explanation of my behavior is available in microphysical terms. Does that show that I am not a cognitive system? Arguably, it does not, because, typically at least, the availability of one type of explanation does not preclude the availability of a different type of explanation.[3] A&G acknowledge this issue. In a footnote, they clarify that they do not mean that the availability of a chemical explanation of bacteria’s behavior entails the non-cognitive status of bacteria. Instead, bacteria are not cognitive because the chemical explanation “does not constitute being a reason or representation” (341, n. 9). So, A&G’s conclusion that bacteria are not cognitive cannot rest on the fact that a complete non-reasons explanation is available. It rests instead on the fact that whatever explanation we can provide is not a reasons explanation. Their argument can be summarized as follows:

Premise 1: One can explain the behavior of bacteria by providing a chemical explanation.

Premise 2: Such a chemical explanation does not rise to the level of a reasons explanation.

Premise 3: If a chemical explanation does not rise to the level of a reasons explanation, then there is no alternative explanation that does.

Lemma: There is no reasons explanation in the case of bacteria.

Premise 4: When explaining the behavior of an agent or system S, if one is incapable of providing a reasons explanation, then S is non-cognitive.

Conclusion: Bacteria are non-cognitive.
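To make the shape of this argument fully explicit, here is a minimal formalization, offered only as a sketch in Lean 4; the propositional labels (Chem, Rises, Alt, Cog) are my own and are introduced purely for illustration:

-- A propositional reconstruction of A&G's argument about bacteria.
-- The labels are mine, not A&G's:
--   Chem  : a chemical explanation of the bacteria's behavior is available
--   Rises : that chemical explanation rises to the level of a reasons explanation
--   Alt   : some alternative explanation rises to that level
--   Cog   : bacteria are cognitive
example (Chem Rises Alt Cog : Prop)
    (p1 : Chem)                   -- Premise 1 (does no logical work below)
    (p2 : ¬Rises)                 -- Premise 2
    (p3 : ¬Rises → ¬Alt)          -- Premise 3
    (p4 : ¬Rises ∧ ¬Alt → ¬Cog)   -- Premise 4: no reasons explanation, hence non-cognitive
    : ¬Cog :=
  p4 ⟨p2, p3 p2⟩                  -- Lemma (p3 p2 : ¬Alt), then apply Premise 4

The sketch shows only that the argument is formally valid; indeed, the conclusion follows from premises 2–4 alone. Any dispute must therefore concern the premises themselves and, as I argue below, premise 4 in particular.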

But if this is the argument that A&G are making, then it is clear what they have to do in order to convince us of its soundness. They need to provide support for premise 4. (They also need to provide support for premises 2 and 3, but I shall grant these premises, at least for the case of bacteria.) Without any arguments in support of premise 4, A&G are not arguing for their mark of the cognitive; they are simply assuming it. Indeed, when we look closely at all of their examples that are supposed to show that various systems are not cognitive, A&G seem to do precisely that: they employ premise 4 in order to establish their conclusion, but without first establishing that crucial premise. For instance, consider the following passage:

Earlier we objected to saying that bacteria or slime mold or Brooks’ robots do things for reasons. The explanation of the behavior of these things is always at a different level. There are non-representational explanations of why they do what they do. The explanations may be chemical, physical, or electronic and programmable, but even though one may find these very same features in a cognitive creature, one will also find that the explanation of cognitive behavior includes the representational content of the internal states. That is, reasons may be physically instantiated in cognitive systems, but reasons include more than the intrinsic properties of those states. Reasons look beyond the physical properties of internal states to what they represent about the world. That will be a crucial difference between an explanation of behavior in terms of reasons and an explanation not in terms of reasons. In so far as the explanation is in terms of reasons, the behavior will be cognitively explainable (346–347; emphasis added).

Two replies are in order. First, A&G point out that in the case of cognitive agents or systems the explanation of their behavior includes representational content. That much may be granted. But what it shows is that at least some instances of cognition involve representational content. Why should one accept the stronger claim that all instances of cognition must involve such content? Second, A&G contend that bacteria, slime mold, and robots are not cognitive because the explanation of their behavior “is always at a different level” than a reasons explanation (346). Again, one will accept that this fact shows that such systems are not cognitive only if one already accepts that being amenable to a reasons explanation is necessary for being cognitive.

There seems to be something of a sleight of hand at work here. A&G argue that being amenable to a reasons explanation is a necessary condition for being cognitive by maintaining that systems whose behavior is not amenable to such an explanation are not cognitive. But they can establish the non-cognitive status of these systems only if they already presuppose that being amenable to a reasons explanation is a necessary condition for cognition. A&G need to provide independent support for their proposed mark of the cognitive. Without such support, skeptical readers can reject it. Even if A&G are right to hold that many cognitive agents do act for reasons, one can still deny the further claim that all cognitive agents act for reasons.

Note that A&G cannot circumvent the issue raised here by simply pointing out that they are providing inductive grounds for their proposed mark of the cognitive—viz., that what they aim to do in this essay is to show that accepting their proposed criterion yields results that are commonly accepted, intuitive, or somehow commonsensical. Suppose that A&G start from the observation that bacteria, slime mold, and can-collecting robots are non-cognitive and then use this observation to explain why they are not cognitive. They can argue, for instance, that these systems are not cognitive because we cannot provide a reasons explanation of their behavior. The problem with this way of supporting the proposed mark of the cognitive is that A&G have to assume that bacteria, slime mold, and can-collecting robots are non-cognitive. But what licenses them to assume such a claim? They cannot use their mark of the cognitive to support it, for if they do so, they will be begging the question. Thus, they have to take the non-cognitive status of these systems for granted. Read in this way, their paper offers no support for the contention that such systems are not cognitive, and their position rests on assumptions that others will likely reject. What might seem intuitive to some is not intuitive to all.[4]

The Case of Visual Agnosia

The claim that cognition involves reasoning is not only a claim that requires further justification, as I argued above; it is also a claim that appears to be too strong—at least given certain understandings of what it means for cognition to involve reasoning. Consider the well-documented case of patient DF, who suffers from visual form agnosia (Milner and Goodale 2006). After carbon monoxide poisoning, the ventral stream of DF’s visual system suffered irreversible damage. As a consequence, DF performs poorly in tasks where she has to discriminate visually between simple geometrical shapes; moreover, she is unable to visually recognize letters, digits, and faces. Despite these profound limitations, DF, just like many other visual agnosics, is in possession of certain motor skills, such as reaching out for, and grasping, everyday objects (Goodale and Milner 2004; Milner et al. 1991). In a striking demonstration that included a “vertically mounted disc in which a slot was cut,” where “on different test trials, the slot was randomly set at” different angles (Milner and Goodale 2006, 129), it was discovered that DF was able to insert a card into the slot even though she was incapable of visually reporting the angle of the slot. Reporting on this finding, Milner and Goodale write:

D.F.’s attempts to make a perceptual report of the orientation of the slot showed little relationship to its actual orientation and this was true whether her reports were made verbally or by manually setting a comparison slot. Remarkably, however, when she was asked to insert her hand or a hand-held card into the slot from a starting position an arm’s length away, she showed no particular difficulty […] In short, although she could not report the orientation of the slot, she could ‘post’ her hand or a card into it without difficulty (ibid.).

Given what DF can and cannot do, accounts of cognition that maintain that cognition is necessarily reasoning-involving face a challenge: they have to demonstrate that DF’s behavior in this experiment (i.e., successfully placing the card in the slot) is amenable to a reasons explanation.[5] If it turns out that DF’s behavior is not amenable to such an explanation, then accepting A&G’s account should force one to conclude that DF’s behavior is not cognitive. However, since there are good reasons in support of the claim that DF’s behavior is cognitive, the unavailability of a reasons explanation would constitute an objection to A&G’s proposed mark of the cognitive.[6]

In the remainder of this paper, I shall argue that according to two understandings of the nature of reasons explanations—understandings that A&G themselves seem to espouse—there are good reasons to think that DF’s behavior is not amenable to such an explanation. This conclusion, coupled with the contention that DF’s behavior is cognitive, casts doubt on A&G’s claim that a behavior is cognitive only if a reasons explanation can be provided for that behavior. Their proposed mark of the cognitive unduly restricts the extension of “cognitive.”

Reasons Explanations and Intentional States

We can say that S’s behavior is amenable to a reasons explanation only if one can explain S’s behavior by citing certain attitudes that S possesses—attitudes such as S’s beliefs, desires, and intentions. This understanding of the nature and requirements of reasons explanation seems to be in agreement with A&G’s position. To repeat a previously quoted line, plants can be considered to be cognitive systems, according to A&G, only “if they have beliefs, desires, or intentions of their own” (342). So the initial question, “Can we provide a reasons explanation of DF’s behavior?”, can now be replaced with a different one: “Can we explain DF’s behavior by citing certain of her beliefs, desires, and intentions?”

We can make progress in answering this latter question by considering DF’s condition in a bit more detail. The cause of DF’s visual form agnosia is damage along the ventral processing pathway. This damage disrupts the connection between early-level vision and higher-level representations. As Farah writes, in DF’s case there is

a functional disconnection between early visual representations in occipital cortex and higher level representations of object appearance in the ventral stream. Without access to ventral stream areas, DF cannot make explicit judgments of shape, size, and orientation (Farah 2004, 23–24).

In light of this account, one can argue that since DF’s visual system precludes the possibility of forming certain beliefs about the orientations and shapes that face her (beliefs such as the slot is orientated in such a way), her reaching behavior is not amenable to a reasons explanation. The argument here is quick: higher-order representations are necessary for perceptual beliefs and judgments of shape, size, and orientation; such higher-order representations require a functional ventral stream; thus, DF, who has a damaged ventral stream, is unable to have higher-order representations. Consequently, DF cannot entertain beliefs or make judgments about the orientation of the slot. Without the ability to have such beliefs, it seems reasonable to conclude that DF’s reaching is not the product of any antecedent belief about the way the orientation looks. Yet DF is able to successfully match the orientation of the presented slot. What this shows is that DF’s successful performance of the task is not amenable to an explanation that cites her beliefs about the orientation of the slot. After all, the attitudes that appear to be necessary in order to give a reasons explanation of her successful completion of the task—e.g., the belief that the slot is orientated at this angle—are not attitudes that DF can have.
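The “quick” argument just given has the same readily checkable form. Again as a mere sketch in Lean 4, with labels of my own choosing:

-- A sketch of the quick argument about DF (labels are mine):
--   Belief  : DF can entertain beliefs about the slot's orientation
--   HiRep   : DF can form the relevant higher-order representations
--   Ventral : DF has a functional ventral stream
example (Belief HiRep Ventral : Prop)
    (q1 : Belief → HiRep)     -- perceptual beliefs require higher-order representations
    (q2 : HiRep → Ventral)    -- such representations require a functional ventral stream
    (q3 : ¬Ventral)           -- DF's ventral stream is irreversibly damaged
    : ¬Belief :=
  fun b => q3 (q2 (q1 b))     -- two chained modus tollens steps

Here the burden falls entirely on the premises q1–q3, which is as it should be: the philosophical work is done by the claim that such beliefs require an intact ventral stream, not by the inference itself.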

Reasons Explanation and Concept-Possession

However, maintaining that reasons explanations are explanations that necessarily implicate certain attitudes (i.e., attitudes that the agent whose behavior is being explained possesses) is not the only way that one can explicate the nature and requirements of reasons explanations. One can alternatively hold that S’s behavior is amenable to a reasons explanation only if S possesses certain concepts. Once again, this understanding of reasons explanation seems to be in agreement with A&G’s claims. For example, when criticizing accounts that hold that certain can-collecting robots are intelligent systems, they write:

Suppose you want to build a computer or robot that can think; one that has cognitive processes. What do you have to include to insure genuine thought or cognition is taking place? There are robots now that go around AI labs and pick up coke cans and other tasks, but there is no reason to believe that these robots can think. When one picks up a coke can, it doesn’t know what a coke can is. Even if these robots detect coke cans by detecting shape and/or color and size, it is fairly clear that the robots do not have the concepts of shape, size, or color. That is, they don’t have concepts of what they are doing. They don’t understand coke, can, or even picking up. So what would one have to do to get a robot to be able to understand and think and cognitively process information? (339)

Hence, determining whether DF’s behavior is amenable to a reasons explanation requires us to determine whether she possesses the concepts that are necessary in order to perform the task of placing the card through the slot. Does she? Although the use of a concept is largely, if not entirely, unconscious, that which such use permits is not. A number of abilities are thought to require the possession and use of concepts. If DF cannot exercise these abilities, then it is very likely that she does not possess, or, more modestly, does not employ, the relevant concepts. Issues of concept possession and employment are rather complex. Here, I present only one set of considerations in support of the view that DF lacks the relevant concepts.

Agnosic patients such as DF cannot visually recognize objects as being of a certain kind and thus cannot match or group together similar objects. Usually, if a subject is not able to group together or match objects which are instantiations of a concept C, then the subject is said to lack C. DF’s case is even more telling. Here is what Milner and Goodale report after testing her ability to see fine detail:

We tested Dee’s [DF’s] ability to see fine detail… She did as well as a visually normal person in detecting a circular patch of closely spaced fine lines on a background that had the same average brightness. Yet, remarkably, even though she could see that there was a patch of lines there, Dee was completely unable to say whether the lines were horizontal or vertical (Goodale and Milner 2004, 8; emphasis added).

If horizontal and vertical lines are indistinguishable to DF, then one might be tempted to conclude that she must lack some orientation-related concepts. A perceiver whose visual world “lacks shape and form,” as Goodale and Milner put it, is bound to lack some concepts (ibid.).

To avoid confusion, I need to clarify that what is of interest is not whether DF can have any thoughts and beliefs about slots, orientations, openings, etc., but rather whether she can have specific and fine-grained beliefs about the particular orientation that stands in front of her in the experimental situation. In other words, what is being denied is not that DF possesses general concepts such as slot, orientation, or 45 degrees. There is evidence that militates against such a pronouncement: DF, for instance, can rotate her palm to a certain orientation when asked to do so (Milner, personal communication). She can also talk about slots and orientations. Instead, what is being denied is that DF possesses the concepts that are necessary for entertaining thoughts such as, “This slot is orientated at such-and-such angle.” That is, although DF might have some thoughts about the orientation next to her (say, she believes that there is a slot placed next to her), she does not seem to be in a position to entertain thoughts that are appropriately fine-grained to guide her reaching behavior. Thoughts about slots and orientations in general are of no help to DF, insofar as they cannot guide her in moving and rotating her hand so as to match the orientation of the slot. To form an antecedent judgment on the basis of which she would act, DF would need to entertain specific thoughts about the particular slot and orientation in front of her. But this is precisely what she is unable to do. It is tempting, therefore, to cite the lack of the relevant concepts as the cause of this inability.[7]

Even though DF might not have appropriately fine-grained beliefs about the orientation, she still has many other beliefs and desires. For instance, she has the belief that there is a slot in front of her. She also has the desire to match the orientation of the slot (or at least to perform the experiment). Nonetheless, even if we cite those propositional attitudes, we still fall short of providing an explanation of DF’s behavior vis-à-vis the successful matching of the orientation of the slot. An explanation that restricts itself to the concepts DF possesses or to the propositional attitudes she has will only explain the fact that DF performs the test; the fact that she performs it successfully will escape that explanation. Thus, as long as we are interested in providing a unified explanation of DF’s action—that is, an explanation not only of her performance of the task but also of her success—a reasons explanation does not seem to be forthcoming.

Conclusion

In this paper, I have examined A&G’s proposed necessary condition for the mark of the cognitive. I showed that A&G have not provided sufficient reasons in support of their proposal. I also argued that, on two understandings of what it means to provide a reasons explanation, the proposed mark of the cognitive seems to draw the boundaries of cognition in a way that is less than satisfying.[8] Behaviors that should count as cognitive turn out to be non-cognitive.