
1 Burdening with Consciousness

Animal cognition researchers are (mainly) concerned with behavior, which they use to infer the underlying mechanisms of physiological control. Of course, an animal’s internal representations of the external world must have their physical basis in the brain; therefore, understanding the neural basis of cognition must be an essential component of fully understanding cognition in animals (and humans). However, there is at present a basic problem in integrating neurobiological techniques and knowledge with behavioral cognitive studies: our behavioral knowledge of cognitive capabilities, rudimentary though it may be in many respects, is still more advanced than our neurobiological knowledge. Theories of cognition in non-human animals are therefore mainly functional, lacking any deeper knowledge of the exact content of the stored representations. This is true for both consciousness and proto-consciousness, and dreaming is no exception. From the behavior of a sleeping dog, we may infer that it is dreaming. For example, I once read the following web entry (http://petshub.com/dog/dog-dreaming.php):

You can tell when your dog is dreaming by watching him. When he first falls asleep, his breathing will become more regular as the sleep becomes deeper. When the dream starts, the dog’s breathing becomes shallow and irregular. There may be occasional muscle twitches, and you can actually see his eyes moving behind his closed lids if you look closely enough. The eyes are moving because the dog is actually looking at the dream images in the same way that he would look at real objects in the outside world.

But such reports should not obscure the fact that we are (currently?) unable to assess the content of the dream. Non-verbal animals cannot report their dreams (perhaps to conspecifics, but not to humans). A further fundamental problem confronts the study of cognitive processes in a comparative framework: the learning-performance problem (Kamil 1998). Although we are interested in measuring learning or cognitive abilities in our experiments, all we can ever measure directly is behavior, and the behavior we observe is a function of many factors besides cognitive ability. Therefore, whenever we obtain a difference between species in their performance on a cognitive task, how can we be sure that the difference truly represents a difference in cognitive ability?

Hobson likes the distinction between primary and secondary consciousness, with the latter depending on the evolution of language. While we humans have language and can give verbal reports of our states, “subhuman” animals are stuck with purely perceptual and emotional (proto-conscious) states. First of all, I would recommend using the word “non-human” instead of “subhuman” to classify animals properly, avoiding a scholastic, anagenetic or teleological conception of evolution (“from amoeba to man”). It is an old philosophical question whether thinking without language qualifies as thinking at all. The anthropocentric view considers the way we think as the prototypical (or essential, or only) way of thinking. Our (propositional) thought is obviously related to language, which has tempted many philosophers to believe that language and the abilities it confers are all that matters (Davidson 1984). Must any other species therefore also have language in order to think? What are the distinctive capacities of thinking (or reasoning, or rational) species? Two primary candidates in the contemporary philosophy of mind are the ability to generalize in an abstract way and the ability to make mistakes (Hurley and Nudds 2006a, b). So we may ask whether an animal must have concepts and conceptual abilities, or language, in order to generalize and to make (and recognize) mistakes.

As Herrnstein (Herrnstein and Loveland 1964) and many followers (Huber and Aust 2006) have shown since his seminal work with pigeons, categorization and concept discrimination do not depend on linguistic abilities. Nor do abstract and relational categorization (Huber 2010). Moreover, language is not an all-or-nothing ability, but has evolved gradually. There is sufficient evidence to suggest a distinction between the faculty of language in the broad and the narrow sense (Hauser et al. 2002). The former includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements. According to this distinction, language in the narrow sense includes only recursion and is the only uniquely human component of the faculty of language. This faculty may have evolved for reasons other than language, and comparative studies are currently looking for evidence of such computations outside the domain of communication (for example, number, navigation, and social relations) (Fitch et al. 2010).

What about displacement? Can a non-verbal animal de-center from itself and take account of the perspective of another animal by simulation, by using its own practical abilities off-line? Or does such de-centering require theorizing about the other’s mind, and so require having both concepts of, and a theory about, mental states? If so, which mental states? And what qualifies as a mental state? The perception and knowledge of others? The goals and intentions of others? There is solid evidence from several different experimental paradigms that chimpanzees understand all of them (Tomasello et al. 2003). The same is true for some corvids, such as ravens (Bugnyar and Heinrich 2005) and scrub jays (Clayton et al. 2007), when solving “knower-guesser” tasks: they can take others’ perception into account and draw inferences about the probability of winning food from, or losing it to, those others. But there is currently no evidence that chimpanzees or any other species understand false beliefs. Call and Tomasello therefore conclude that chimpanzees understand others in terms of a perception–goal psychology, as opposed to a full-fledged, human-like belief–desire psychology (Call and Tomasello 2008).

What about making mistakes? Must an animal have the concept of a mistake or of a reason in order to recognize that it has made or is likely to make a mistake – or could this again be registered and manifested in practical terms? Do animals know that they could be wrong (Call 2010)? A challenge for future animal research, in the absence of the fine-grained distinctions that verbal reports make available, is to design proper behavioral experiments to assess the capacities for de-centering or recognizing mistakes. What kinds of non-verbal behavior, and under what conditions, might provide evidence for these capacities?

A prominent feature of human awareness is that we know, or can estimate, the state of our memories, that is, assess our own memory strength. We are consciously aware of some memories and can make verbal reports about them. This “knowing what we know” ability has been called metamemory or metacognition (Flavell and Wellman 1977; Flavell 1979, 1992). It can guide efficient behavior because it enables us to cope with uncertainty by escaping a task or seeking further information. Thus, metamemory as a kind of self-knowledge is adaptive and might be expected to be phylogenetically widespread. Such comparative research may be useful in revealing similarities between animal and human cognition, in suggesting the precursors of human cognitive sophistication, and in pinpointing the special allowances of language for cognition. However, many human uncertainty paradigms do not support this comparison, for they rely on introspection and self-reports (feelings of knowing, tip-of-the-tongue experiences, etc.) and do not suit animal observers.

Recent years have indeed seen a considerable increase in interest in metacognitive abilities in non-human species, using a perceptual and behavioral uncertainty paradigm that can be applied comparatively (Carruthers 2009). With animals, the best researchers can do is to look for non-verbal behavior that is functionally similar, i.e., the animals are asked to use a behavioral proxy for human uncertainty. Animals able to discern the presence and absence of memory should improve their accuracy if allowed to decline memory tests when they have forgotten, and should decline tests most frequently when memory is attenuated experimentally. This can be done by providing the subjects with an escape response option. Choice of this option leads to a mediocre reward: less than can be obtained by completing the test correctly, but more than is available if the test is taken and failed. Thus, there is an incentive for the animal to use metamemory in order to maximize the reward received. Subjects that know how well they will perform on the primary task in a particular trial should selectively escape from difficult trials when allowed to, and when they choose to complete the primary task instead of opting out, they should perform better than when forced to complete it. Metamemory has been established by converging evidence from several paradigms in dolphins (Smith et al. 1995), rhesus macaques (Hampton 2001; Hampton et al. 2004; Washburn et al. 2006; Son and Kornell 2005), capuchin monkeys (Basile et al. 2009; Beran et al. 2009; Fujita 2009), orangutans (Suda-King 2008), and even rats (Foote and Crystal 2007). In contrast, pigeons have so far performed differently from mammals in tests of metamemory (Inman and Shettleworth 1999; Sole et al. 2003; Sutton and Shettleworth 2008; Roberts et al. 2009). Although they behaved adaptively in that they maximized reward, they apparently did so without relying on a process having the functional properties of conscious uncertainty. However, metamemory in pigeons has so far been tested in only one (difficult) task (delayed matching to sample) and has never been explored in the context of long-term retention.
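The incentive structure of this escape-response paradigm can be made concrete with a toy simulation. This is a purely illustrative sketch: the payoff values, the uniform distribution of memory strength, and the escape threshold are my assumptions for exposition, not parameters taken from any of the cited experiments.

```python
import random

# Assumed payoffs: correct test > escape (mediocre) > failed test.
HIGH, MEDIOCRE, NONE = 1.0, 0.5, 0.0

def run_session(n_trials=10000, threshold=0.5, allow_escape=True, seed=0):
    """Simulate one session of the escape-response paradigm.

    Each trial has a memory strength in [0, 1], which here is also the
    probability of answering the test correctly. A 'metamemory' agent
    escapes whenever strength falls below its threshold; a forced agent
    must take every test. Returns (mean payoff, escape rate).
    """
    rng = random.Random(seed)
    total, escapes = 0.0, 0
    for _ in range(n_trials):
        strength = rng.random()                # this trial's memory strength
        if allow_escape and strength < threshold:
            total += MEDIOCRE                  # decline the test, take the sure reward
            escapes += 1
        else:
            correct = rng.random() < strength  # test outcome tracks memory strength
            total += HIGH if correct else NONE
    return total / n_trials, escapes / n_trials

with_meta, esc_rate = run_session(allow_escape=True)
forced, _ = run_session(allow_escape=False)
# An agent with access to its own memory strength earns more than one
# forced to take every test, and its escapes concentrate on weak-memory trials.
print(with_meta, forced, esc_rate)
```

Under these assumptions the forced agent averages about the mean memory strength (0.5 units per trial), while the selective escaper does strictly better, mirroring the prediction in the text that subjects with metamemory should outperform themselves under forced testing.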

Humans often respond to uncertainty by reporting that their choices are based on a strategic cognitive process aided by conscious self-regulation. This fits the early psychologists’ suggestion that cognitive conflict, difficulty, and confusion elicit higher modes of cognition and that “consciousness provides extraneous help to cognition when nerve processes are hesitant” (James 1890/1952). However, a more parsimonious interpretation of such metacognitive responses, in both humans and animals, is that they reflect controlled decisional and criterion-setting processes that are necessary under conditions of perceptual ambiguity near threshold. “This grants escape responses a cognitive sophistication that fits the data, without burdening them with consciousness or with equally heavy ad hoc behaviorist assumptions” (Smith et al. 1997).

There is a risk in explaining animals’ performance in human-designed tasks in the same way we would explain the behavior of human participants – say, in terms of processes of inference and reasoning, or metacognition, or mind reading. For instance, animal subjects in metacognitive experiments involving an escape response may react to uncertainty without being aware of their knowledge states or their level of uncertainty (Staddon et al. 2007; Carruthers 2009). Or they may simply react to the anxiety that they may feel in uncertain situations without representing their state of anxiety. No doubt, different processes can lead to similar behavior (even among human beings, for example, in recovery of function after brain damage). “Without language, how can we determine whether to attribute a rational process to explain an animal’s behavior? More fundamentally, what kinds of processes are the ‘right kind’ to count as rational?” (Hurley and Nudds 2006a, b)