1 Introduction: The “Brain Shift”

To assert the existence of something is not the same as observing it. This is certainly the case with the human mind, since nobody can claim to have observed it, whereas we must accept that the brain exists, for this is empirically evident. We are all inclined to believe that our mental states and processes come from the brain, although many of us refuse to believe that this same organ is sufficient to explain our reasoning, feelings, and so on. Our cultures have been so deeply, and for so long, dominated by the certainty of the existence of the mind that, even on a purely linguistic level, we would find it strange to speak of a “tired brain” rather than a “tired mind,” or to ask what is going through someone’s brain rather than through his or her mind. Yet, for all intents and purposes, we would understand each other anyway, because the two concepts—brain and mind—clearly converge on a single reality, though one that remains largely unknown.

In fact, dualism presents itself through two main historical currents. On the one hand, we have the metaphysical tradition according to which humans possess a twofold reality—namely, the body and the soul. This claim cannot, of course, be based on any empirical evidence. Nevertheless, in the course of human history, and right up to the present day, the existence of the soul has been believed in and asserted by many, including many philosophers.

On the other hand, we have the modern dualistic approach, which starts with René Descartes and progresses, in various forms, via Franz Brentano to contemporary thinkers such as Karl Popper and David Chalmers. The interesting point is that, in contemporary debates, the soul is no longer the issue at stake, probably owing to the widespread influence of our scientific cultural premises. As a consequence, more subtle or special concepts, such as consciousness and intentionality, are at the center of current debate among philosophers, neuroscientists, and psychologists. Such concepts are certainly linked to the mind, but, at the same time, they also implicitly refer to the traditional view of the soul. Nevertheless, the simple fact that the soul requires a definite metaphysical foundation induces most scholars to avoid any explicit reference to it.

However, in conceiving the mind as something clearly separated from the material structure of the brain, contemporary dualism harks back to traditional metaphysics, although it replaces philosophical certainties with a wide spectrum of theoretical views and models and, in the end, with an overall uncertainty regarding what exactly the concept of mind is supposed to denote.

Undoubtedly, the underlying reason for the change sketched above is the unavoidable “discovery” of the central role of the brain—and of its regions—in many cognitive or emotional activities. This explains why authors such as Popper and Eccles (1984) suggest that the main problem is not that of recognizing the existence of the mind—as certain as the existence of the brain—but, rather, that of describing the interfaces between them. Jerry Fodor, for instance, assigns to mental representations the ability to set up a symbolic linkage between the mind and the body (Fodor 1983). Daniel Dennett is one of the most explicit philosophers in assigning to the brain functions with the power of triggering consciousness, thus bestowing upon it the role of cause, while consciousness becomes the effect (Dennett 1992). John Searle, following a more sophisticated strategy, speaks of the relationship between mind and brain as deriving from a sort of “non-event causation” linking the two, although it remains mysterious how a “non-event cause” might generate anything other than a “non-event effect,” which would seem to be no effect at all (Searle 1999). Gerald Edelman (2004) and Francis Crick (1994) recognize, instead, that in order to understand consciousness, it is necessary to understand what is going on in the brain. Roger Penrose, taking the road of a fine-structure investigation of brain processes, maintains that even the role of neurons is open to question because they are “too big,” while the most interesting level of analysis—if we are ever to discover the roots of consciousness—concerns the cytoskeleton and the quantum workings of its microlevel components (Penrose 1989).

Although only a minority of scholars explicitly embrace a monistic view (see, for instance, Rorty 1980), it seems clear that a growing power of attraction is being exerted by the brain and by its neurological functions. In a sense, therefore, even current dualism appears as if it were a residue of ancient metaphysics. In fact, it gives up the radical disjunction between the body and the soul—in terms of both origin and substance—but simultaneously tries to keep alive the “existence” of something that, although it comes from the physical structure of the brain, cannot be understood as a regular physical function or effect.

In my opinion, such a position comes from a sort of “due respect” paid to the widely shared metaphysical tradition upon which our cultures are based. This takes the form of a die-hard view according to which physical matter must be conceived as something brute, separate from the superior value of nonphysical reality. In this framework, even the scientific observation of the world, while appreciated for its production of pragmatic knowledge, is widely conceived as a merely matter-based kind of activity, explicitly or implicitly classified by many scholars in the humanities as belonging to a lower class than the speculative and nonphysical realms. This intellectual standpoint derives from Hellenistic culture, and its legacy induces many, even today, to think that the lack of empirical evidence doesn’t matter at all, because the essential truth of things shouldn’t be reduced to their empirical phenomenology.

Significantly, in the sociology and anthropology of the past century, we can find a meaningful analogy between the concept of mind and that of culture. Here, the structure of society plays the role of the brain, and culture is conceived as the mind of society, to such an extent that, as Pitirim Sorokin says, “[t]he superorganic is equivalent to mind in all its clearly developed manifestations” (Sorokin 1947, p. 3).

Culture takes on the physiognomy of something closely akin to the human mind also in the definition given by Alfred Kroeber, when he says that,

Superorganic does not mean nonorganic, or free of organic influence and causation; nor does it mean that culture is an entity independent of organic life in the sense that some theologians might assert that there is a soul which is or can become independent of living body. Superorganic means simply that when we consider culture, we are dealing with something that is organic but which must also be viewed as something more than organic. (Kroeber 1948)

Such views were strongly criticized because of their more or less conscious tendency toward metaphysics, and probably for this reason Kroeber, in the last years of his life, modified his position, underlining the methodological role, rather than the substantive one, of the concept of the superorganic, which should be taken as an abstract instrument of intelligibility. That is to say, he “came to maintain that culture was nothing but an abstraction form” (Bidney 1996). In other words, the superorganic and the mind may, at best, play the role of hypothetical constructs that are provisionally useful for designing research on cultural or mental behavior as processes instantiated by physical structures, and not as empirical and autonomous realities in themselves.

In all likelihood, the mind, too, will gradually disappear as a sui generis “substance,” taking up, instead, the role of a more reasonable methodological tool—namely, an abstraction that is useful for expressing what the brain does and how its work becomes externalized through communication, in turn generating cultural forms. Lastly, we should emphasize that monism is almost always based on the uniqueness of the brain and not on the uniqueness of the mind. Nobody, apart from a few neo-idealists, currently hopes to establish a monistic view of the mind as the unique reality, given that the advancements of neuroscience hold an irresistible appeal, drawing even philosophical speculation more and more into the discussion. Nevertheless, the majority of thinkers still refuse to accept the overlap between the working of the brain and the instantiation of thoughts and feelings. Rather, they are looking for the biophysical sources of something that ultimately transcends both biology and physics.

2 The Reification of the Mind

While refusing to believe in the metaphysics of the soul, many contemporary scientists and philosophers of mind cannot but keep alive the traditional belief in the existence of an entity that is nevertheless recognized as a nonphysical one. This paradox is not based exclusively on the history of our civilization. Contemporary dualism also comes from the intriguing questions raised by cybernetics and by information theory as applied to biology. As is well known, cybernetic loops and recursivity—the so-called self-reference phenomena that underlie biological autopoiesis—are at the core of some biological schools of thought.

Although cognition is often described as a relational biological process, the insistence upon the self-referential ability of the human brain leads to a belief in the nonphysical nature of the mind and of the property of consciousness. Indeed, if an observer is able to observe himself, then this happens as if he were outside himself. Therefore, according to this view, since the brain is the only physical entity at stake, the self-observer must be a nonphysical external actor—namely, the mind that accounts for self-consciousness. This aspect has been discussed many times in relation to Gödel’s seminal theorems, which deny the possibility, in a consistent formal system rich enough to express arithmetic, of proving all the true statements expressible within that system. On the other hand, other authors are inclined to think that the human mind is not a formal system and that its most striking property is precisely that of working as if it were not a “part” of the brain system. According to this position, the mind is able, for instance, to evaluate the truth of a sentence as if the process of evaluation occurred outside any formal system, thereby escaping Gödel’s theorem. The sentences uttered by Gödel himself could be taken as a good example of this ability of the human mind (Webb 1980). In any case, as has been noted,

Gödel’s theorems do not prevent the construction of formal models of the mind, but support the conception of mind as something which has a special relation to itself, marked by specific limitations. (Bojadziev 1997)

Our ancient habit of thinking that an effect must be brought about by an external cause—whereas, in cybernetic loops, feedback comes from a part of the system itself—prevents us from accepting that the brain has this “special relation to itself.” Consequently, we conjure up an “external” actor, called the mind, in the same way as we have constructed so many myths or metaphysical entities to account for this or that natural phenomenon, our very existence included. However, the construction of metaphysical doctrines or beliefs is a wholly legitimate activity of our brain, which, it seems, is so inclined for mysterious intrinsic reasons lying beyond scientific investigation.
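
To make the point concrete, here is a minimal sketch, in Python and with invented numbers, of a cybernetic loop in which the corrective signal is produced by a part of the system itself rather than by any external actor:

```python
# Minimal sketch (illustrative values): a feedback loop whose regulating
# signal comes from a part of the system itself, not from outside it.

def run_feedback_loop(setpoint: float, state: float, steps: int, gain: float = 0.5):
    """Drive `state` toward `setpoint` using only the system's own output."""
    history = [state]
    for _ in range(steps):
        error = setpoint - state      # the system "observes" its own state
        state = state + gain * error  # the correction is internal feedback
        history.append(state)
    return history

# The loop converges on the setpoint with no external "mind" steering it.
print(run_feedback_loop(setpoint=20.0, state=5.0, steps=10))
```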

Although recursive abilities are strategic for achieving consciousness, it is far from clear why a nonphysical actor, which we call “mind,” should be required for obtaining such performance. Indeed, by introducing it, we are suddenly faced with three problems instead of one. In addition to dealing with the brain and its highly complex nature, we must also deal with the mind, endowed with its own supposed features, and finally, we must face up to the not-inconsiderable problem of connecting the nonphysical mind with the physical brain.

At this point, we should not neglect the rise of symbolic artificial intelligence (A.I.), which has powerfully influenced the debate on the mind-body problem, once more tending to privilege mind over brain. This has happened because the features of the mind are, on the face of it, rather less difficult to model than those of the brain, although some hope has arisen with the advent of so-called artificial neural networks, which mimic, at a superficial level, the way biological neurons create intelligent links. Unlike the brain, whose deep structures and inner workings are largely unknown, the mind and its properties have been described in many different ways. Apart from the numerous philosophical theories put forward over the centuries, we have models of the mind in each of the numerous psychological schools, as well as in linguistics, cognitive science, logic, and anthropology.

After the classical debate on the feasibility of its most ambitious aims, which involved philosophers and engineers in the 1980s, A.I. researchers have, for the most part, chosen models and theories that are best suited for easy transference to computers, thus perhaps justifying the somewhat disparaging definition of A.I. as a technology oriented toward reproducing a theory or a model rather than discovering something new by adopting computer-based techniques. In fact, a computer is not a laboratory, but a “translator” of a model into a symbolic structure and process.

It is interesting to note that, even today, when A.I. researchers work on a “theory testing” level, they are constantly looking for some persuasive analogy that follows Ashby’s principle of functionally isomorphic devices. The general idea is that, in order to better understand a natural object—for example, the human brain or mind—it may be useful to build concrete devices that, within certain limits, should behave in the same way as the natural object under study (Cordeschi 2002). Nevertheless, this strategy neglects the fact that, in doing so, researchers will encounter behaviors that come not only from the tested theory, or from the model as an abstract outline of the natural phenomenon, but also from the undesigned interplay among the features of the material components of the device.

In other cases, models very often derive from some widespread philosophical or sociological doctrine. For example, Marvin Minsky’s theory of the mind as a society of simple and thoughtless agents comes from an old organicist philosophical and sociological tradition that assigns no special importance to the individual components of a society, holding instead that what matters is what “emerges” from the coexistence and interaction of the individual members. Thus, in contrast to Penrose—who supports the hypothesis that traces of the mind lie at the quantum level in the deepest structures of neurons—Minsky says,

I’ll call “Society of Mind” this scheme in which each mind is made of many smaller processes. These we’ll call agents. Each mental agent by itself can only do some simple thing that needs no mind or thought at all. Yet when we join these agents in societies – in certain very special ways – this leads to intelligence. (Minsky 1988)

Nevertheless, it should be noted that the actual successes or failures of A.I. projects have little to do with the widespread discussion of mind that A.I. has promoted and renewed. What the vast majority of A.I. programs actually do is not reproduce human knowing and thinking as such, but rather perform the logical or quantitative calculations that reproduce explicit rules shared by human beings, including A.I. researchers themselves. In this direction, recent proposals, such as ontology engineering, try to set up large databases of linguistic terms defined at various levels of formalization and linked together by means of semantic and functional relationships (De Nicola et al. 2009). With such strategies, researchers are trying to emulate human common sense, but presumably they end up building a quite different system, since nobody knows the “rules” that common sense follows.
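
As a purely illustrative sketch (the terms and relations below are invented, not drawn from any actual ontology project), such a database can be pictured as a network of terms linked by semantic relations, here the single relation “is-a”:

```python
# Minimal sketch (hypothetical terms): an ontology as a network of terms
# linked by semantic relations, queried by following "is_a" links.

ONTOLOGY = {
    "dog":    {"is_a": {"mammal"}},
    "mammal": {"is_a": {"animal"}},
    "animal": {"is_a": {"organism"}},
}

def ancestors(term):
    """Collect every term reachable through 'is_a' links (transitive closure)."""
    found, frontier = set(), [term]
    while frontier:
        current = frontier.pop()
        for parent in ONTOLOGY.get(current, {}).get("is_a", set()):
            if parent not in found:
                found.add(parent)
                frontier.append(parent)
    return found

print(ancestors("dog"))  # contains: mammal, animal, organism
```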

This same pragmatic attitude in studying the human mind, which privileges the search for successful outcomes over pure knowledge, also characterizes the neural network approach. Here, as is well known, despite the ambitious name recalling the neural functioning of the human brain, the goal is to have the machine recognize an input pattern after the network—be it hardware or software—has been “trained” to recognize it. This technique is widely adopted for many tasks—especially those involving incomplete data sets—in many sciences and professional activities. Nevertheless, it is at least uncertain to what extent such devices could help us understand the human brain. As far as the human mind is concerned—conceived as an entity additional to the physical brain—it has been proposed that neural networks should work together with symbolic A.I. programs (Sun and Bookman 1994) in order to achieve the convergence of reasoning and recognizing that characterizes the human mind. In any case, both in symbolic A.I., which is more inclined to model the human mind, and in neural networks, which are more inclined to model the human brain, it seems clear that dualistic or monistic premises play a key role, although each, in the end, has to deal with a unique reality.
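
For concreteness, here is a minimal sketch of the pattern-recognition target just described: a single perceptron, a toy stand-in for a trained neural network, learning the logical AND pattern from labeled examples rather than from explicit rules (data and parameters are invented for illustration):

```python
# Minimal sketch (toy data): a single perceptron "trained" to recognize an
# input pattern, here the logical AND, from labeled examples.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction          # perceptron learning rule
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]                                 # the AND pattern
w, b = train_perceptron(X, y)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X])
# expected output: [0, 0, 0, 1]
```

The network ends up recognizing the pattern, yet nothing in the trained weights resembles an account of how a human brain performs the same recognition.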

3 Reproduction and Observation Levels

While we have an ever-growing knowledge of the brain as a physical organ, we still have many interchangeable models of what the mind is and does. The weakness of models of the mind, as compared to the ever more reliable scientific study of the brain, is not, in itself, a great danger, because what we have come to think of as functions of the mind can often be assigned directly to the brain without any practical consequence. Nevertheless, while we are free to assign to the mind a very wide spectrum of properties and functions—since, in so doing, we have no empirical or spatiotemporal criteria to fulfill—there arises the problem of establishing which of them can really be assigned to the natural brain.

Thus, for example, while our thought is surely generated by the brain—even if one attributes it to the mind—this does not mean that each result of our thought corresponds to a given preestablished brain structure. On the one hand, we can view the mind simply as the performance of the brain, but on the other, we must admit that not every so-called mental activity of the brain is traceable to some isomorphic brain structure, whereas, within certain limits, we can localize the brain structure involved in, say, the contraction of a given muscle. For example, we can use words or numbers in very different ways, or build and then change views and theories at will, exploiting the same basic biological structure. What changes is probably the specific architecture that each brain assumes. When water flows downhill, the physical forces remain the same, though the paths and the consequences may differ widely, depending on the constraints encountered by the water—or, in the case of the brain, on the networks activated at various levels at a given moment.

Recent advancements in neurology include so-called neuroimaging (functional magnetic resonance imaging, or fMRI), which provides visual evidence of the brain areas involved in a wide class of mental states, feelings, or decisions, thus vindicating, and expanding upon, the nineteenth-century hypotheses proposed by Franz Joseph Gall and Pierre Paul Broca. In brief, these new, highly promising experimental developments

…can be defined as the class of techniques that provide volumetric, spatially localized measures of neural activity from across the brain and across time; in essence, a three-dimensional movie of the active brain. (Aguirre 2003)

While neuroimaging falls short of being a “movie” of the mind, it certainly makes it very problematic to reject the idea that the mind is nothing but the performance of a physical system whose activation coincides with what we call consciousness. In observing the activated areas of the brain in real time, as neuroimaging allows, we cannot (as yet, at least) identify actual thoughts or words as such. Nevertheless, one cannot plausibly imagine that the mind is something “surrounding” those areas—something “superior” to what these areas are and do. In fact, this would require some empirical evidence, as happens when we define a field by exhibiting the measure of all its points. We know that the brain, due to its electrical activity, generates an electromagnetic field, but this, of course, cannot be taken as proof of the existence of the mind.

Precisely because a model of the mind cannot leave aside the brain as the engine of our consciousness, reasoning, decision-making, and so on, the attempt to reproduce mental behavior artificially—if conceived as an enterprise that views the mind as a stand-alone system—is without doubt destined to fail. Or rather, the reproduction of a mental process in a computer program—for example, via calculations or reasoning—can be successful not because it captures the complex way of reasoning of humans, but because it reproduces the final results of the brain’s workings, that is to say, some established and expressible knowledge, along with logical or quantitative rules and their more or less complex combinations.

An expert system, for instance, is a type of software able to provide consultancy, in terms of both explanation and prediction, to the user in a specific field of knowledge, such as medicine or law. The system is able to do this with an acceptable success rate thanks to the “donation” from a human expert, who decants, as it were, his professional knowledge into a knowledge base. The software, through a set of inferential and statistical rules embedded in it, then becomes able to deliver its consultancy as if it were, within certain limits, the human expert himself.

The key point is that what is modeled in an expert system is not a human brain, nor a supposed mind, but the final results—knowledge and rules—that humans have obtained after having worked for centuries on the best ways to reason about the facts within a given domain. This is why no A.I. program has yet been able to pose a genuinely new problem, although many such programs are undoubtedly useful in the problem-solving domain.
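
A minimal sketch can make this concrete. The rules below are hypothetical stand-ins for an expert’s “donated” knowledge; the inference engine merely chains those explicit rules, reproducing the expert’s codified results rather than the expert’s thinking:

```python
# Minimal sketch (hypothetical rules): the skeleton of an expert system, i.e.,
# an expert's codified rules plus a simple forward-chaining inference engine.

# IF all premises hold THEN conclude the consequent.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_physician"),
]

def infer(facts):
    """Apply the rules repeatedly until no new conclusion can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived - set(facts)

print(infer({"fever", "cough", "short_of_breath"}))
# contains: flu_suspected, refer_to_physician
```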

Within the long-running debate on the feasibility of A.I., John McCarthy maintained that a machine—even a simple thermostat—can think and have beliefs. He writes:

[T]he thermostat can only be properly considered to have just three possible thoughts or beliefs. It may believe that the room is too hot, or that it is too cold, or that it is okay. It has no other beliefs; for example, it does not believe that it is a thermostat. (McCarthy 1990)

Almost a decade later, John Searle suggested that such a claim would imply the bad idea

…that the hunk of metal on the wall that we use to regulate the temperature has beliefs in exactly the same sense that we, our spouses, and our children have beliefs. (Searle 1999, p. 410)

McCarthy and Searle were both right, because they were speaking of different things. McCarthy meant the operational logic embodied in the device, while Searle was referring specifically to human thought. The fact is that an algorithm incorporated in a thermostat is the explicit result of human reasoning—namely, the designer’s—and, as such, will exhibit behavior reminiscent of intelligent reasoning. Searle, by contrast, was concerned with the process of knowing, which can, among other things, produce an algorithm. In the same way, humans generate knowledge that other humans are then taught in schools or universities. But these are very different processes.
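
A few lines of code suffice to show what the thermostat’s three “beliefs” amount to as operational logic (the temperature thresholds are invented): each “belief” is nothing but a branch written in advance by the designer.

```python
# Minimal sketch (invented thresholds): McCarthy's thermostat reduced to its
# operational logic; the three "beliefs" are branches fixed by a human designer.

def thermostat_belief(temperature, low=18.0, high=24.0):
    if temperature < low:
        return "too cold"   # "believes" the room is too cold
    if temperature > high:
        return "too hot"    # "believes" the room is too hot
    return "okay"           # "believes" the room is okay

print(thermostat_belief(15.0))  # 'too cold'
```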

Widening the well-known concept of “tacit knowledge” introduced by Michael Polanyi in the 1960s (Polanyi 1966), and contrary to Maturana’s thesis according to which the human brain “thinks in language” (Maturana et al. 1995), we may state that everything that happens in our brain is “silent” and possibly very well hidden in the microstructures and networks of interactions within the brain. We can utter sentences whose knowledge-content comes from the brain only after that content has been translated and transduced via several still unknown processes. Thought is to be understood as a truly preverbal process; only a small part of it can actually be externalized by language, and an even smaller portion becomes shared knowledge in our culture.

This explains why advancements in our knowledge, at both the individual and the cultural level, always take a huge amount of time compared to the speed with which a new problem or a new strategy occurs in one’s brain.

Therefore, models of mental processes, and their technological realizations, are successful when we implicitly define the “mind” as the name we give to our thought as already realized and communicated—inferential, mathematical, or statistical rules, and common-sense standards. It is quite unlikely that, instead of the established knowledge of a human expert in some discipline, we could exploit the “way of thinking” of Einstein or Mozart or anybody else. We can build an expert system based on Einstein’s physics or Mozart’s musical style, but we cannot enter into their way of relating to the world, or into the workings of their brains when generating their theories or musical compositions. In Einstein’s and Mozart’s work, what is understandable and reproducible are the established and linguistically communicable results of their physical or musical thinking, not the processes that led to them.

Elsewhere (Negrotti 1999, 2010a, b) I have outlined a possible general theory of the technological reproduction of natural objects or processes—that is to say, the designing of naturoids. I wish here to make use of that theory in order to clarify the meaning of the foregoing discussion.

In order to design a naturoid, we should begin by observing the natural object or process we wish to reproduce. In fact, we may develop a model of a natural exemplar—to serve as the basis of a project—if and only if we can describe it after some empirical observation. For instance, if I wish to reproduce a kidney by means of current technological devices and techniques, I must be able to describe the natural exemplar—that is to say, the kidney—in the richest, most reliable, and most objective way possible. All observation, however, is a process that relates to some selected observation level: mechanical, chemical, electrical, biological, and so on. To date, no project of naturoid production—beyond the level of chemically reconstructed molecules—can claim to have reproduced all the properties of a natural exemplar, and this depends, apart from other constraints, precisely upon the need to select one and only one observation level at any given moment, and also upon the almost insurmountable difficulty of “joining” two or more such levels. Furthermore, even at a selected observation level, one has to decide what the essential performance of the natural object or process is that one wishes to reproduce.

The relativity of any observation level does not render impossible an objective description of a real object or process. It does, however, limit such a description to what is compatible with the particular level adopted, leaving, in the process, all else in the background. If, for example, I describe a certain exemplar from a chemical point of view, I am unable to capture its mechanical performances in my description. Nevertheless, the knowledge of its chemical properties may be sufficiently objective and useful for designing a naturoid that will be able to behave much like the natural exemplar in some respect, provided it is to work in a context characterized by the same observation level that I have selected—namely, the chemical one.
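
As a toy illustration of this constraint (the class and its properties below are invented, not real physiology), a design model restricted to one observation level simply cannot answer questions posed at another:

```python
# Minimal sketch (invented properties): a natural exemplar described at the
# chemical observation level only; mechanical questions fall outside it.

class ChemicalKidneyModel:
    """A naturoid design model restricted to the chemical observation level."""

    def filters_solute(self, solute):
        return solute in {"urea", "creatinine"}  # illustrative chemical data

    def tensile_strength(self):
        # A mechanical property is simply invisible at the chemical level.
        raise NotImplementedError("outside the selected observation level")

model = ChemicalKidneyModel()
print(model.filters_solute("urea"))  # True: answerable at this level
# model.tensile_strength()           # would raise: wrong observation level
```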

It often happens that a failure in the technological reproduction of a natural exemplar depends upon the wrong choice of observation level, as happens not infrequently in bioengineering projects (Negrotti 2010a, b). But failure may also arise from expecting the naturoid to work according to an observation level that differs from the one at which it was designed. Thus, as far as the problem of the reproducibility of the brain is concerned, much of the difficulty in the design depends on our rather poor knowledge of its possible observation levels. Furthermore, it would be even more difficult to decide which level should be considered the most indispensable for generating the brain’s essential performance—namely, what we call mental states and processes.

However, we should also take into account that not all designers of artificial objects remain faithful to the rule of the objective observability of the exemplars they wish to reproduce. Although this rule is widely accepted in the fields of naturoids designed within mature scientific disciplines, such as bioengineering, in several other fields there is widespread use of arbitrary models constructed for describing equally arbitrary entities. This has happened many times in art, for instance, where painters have often represented metaphysical entities, such as God, assigning to them features imposed by a religious tradition acting as a “model” to be realized. But this happens regularly even in the field of A.I., since the model of the mind, or of one of its functions, is built up on the basis of this or that theory, even though none of these theories rests on an objectively established observation of the topic at issue. In other words, we cannot speak of a mental or mind-based observation level, for the simple reason that we can observe only the brain and not the mind—the communicable results of the brain’s workings and not the flow of mental processes.

From a methodological viewpoint, we cannot say to what extent the behavior of an A.I. program faithfully reproduces that which occurs in our brain, apart from the cases in which—as in the expert systems mentioned above and other computer science software—the project aims to reproduce only the results of our thinking and not the performance of thinking in itself.

A final question is: What would happen if we were able to reproduce a brain by means of technology, and, therefore, on the basis of some reliable model of this natural organ, though built according to only one observation level?

If its reproduction were to follow the ways of past and current methodology in the design of naturoids—whose limits are perhaps imposed by our very nature as observers of the world—my opinion is that we would not see any “mind” emerging from it, even if we succeeded in making the machine exhibit behavior that, if seen in human beings, would indicate some class of mental states, including self-awareness. In such a case, we would surely have built an artificial brain, and, in so doing, we would have discovered how useless, or unmanageable, the notion of mind is.