
1.1 Introduction

The term “Global Village” was coined in the 1960s by the Canadian communication theorist Marshall McLuhan (1962, 1964). It was intended to describe an emerging world in which mass communications technologies were starting to make it possible for people around the globe to be in contact with each other routinely, producing a form of “global consciousness” or, as Peter Russell (1983) called it at the threshold of the Internet era, a “global brain.” For McLuhan, it was the second great paradigm shift in human consciousness. The first one came about after the invention of alphabets around 1000 BCE, which initiated a radical break from oral cultures. With the advent of mechanical print technology in the late 1400s, making the written word available broadly and cheaply, human consciousness became more logocentric and more “individualistic,” shaped by the linear structure of print and the fact that people read by themselves. He called this the “Print Age”—an age that was behind radical social revolutions, from Protestantism and the Enlightenment to political movements favoring nationhood. Before his death in 1981, McLuhan saw the end of this Age and a return to a tribal-like form of consciousness, brought about by electronic media that united people from across the globe as if they were in a village. This has come to be called the “Information Age.” It is an age where information in itself has value, independent of the meanings it may harbor or the individualistic interpretations it may engender. As in tribal oral cultures, any meaning that the information contains is extracted communally, via the global brain. Information itself has become the language of the Global Village—a language that requires little or no cognitive effort to process and, thus, a language that does not distinguish between truth and falsehood, as is becoming ever more evident in global affairs. And, by extension, it is a cybernetic language spoken by humans and machines alike.

Perhaps like never before in its history, semiotics has a critical role to play in this cybernetic age that does not distinguish, or care to distinguish, between human and artificial intelligence. Aware of this potential for semiotics, Danish semiotician Søren Brier (2008) put forth a mode of inquiry that aimed to study the cybernetic universe that has emerged in the Information Age. As a blend of cybernetics and biosemiotics—hence his term cybersemiotics—Brier gave semioticians, cognitive scientists, and information theorists a powerful new theoretical tool for tackling the information-versus-interpretation problem in the Global Village. Cybernetics was conceived by mathematician Norbert Wiener, who coined the term in 1948 in his book Cybernetics, or Control and Communication in the Animal and the Machine. The same word was actually used in 1834 by the physicist André-Marie Ampère to denote the study of government in his classification system of human knowledge. Ampère, for his part, had probably taken it from Plato’s The Laws, where it was also used to indicate the governance of people. The term applies to systems in which feedback and error-correction signals control the operation of the systems. Biosemiotics traces its origins to the biology of Jakob von Uexküll (1909), who tackled the information-versus-interpretation dilemma in a biosemiotic way. Essentially, he argued that each organism is designed by nature to process only the information that it needs from the environment in order to model its life advantageously. However, in humans, the information is not only used for biological modeling but is interpreted psychologically and emotionally, producing knowledge that goes far beyond the information itself. The cybersemiotic agenda is shaped by an interesting and fruitful search for the biological, psychic, and social roots of human meaning-making, as distinct from information-processing.

The question becomes: why is all this so? It is one of the greatest conundrums of philosophy and semiotics. We could conceivably live without the Pythagorean theorem, extracted as it is from information about right-angled triangles. It tells us what we know intuitively—that a diagonal distance is shorter than taking an L-shaped path to a given point. And perhaps this is why it emerged—it suggests that we seek efficiency and a minimization of effort in how we do things and how we classify the world. But in so doing we “squeeze out” of our economical symbolizations other ideas and hidden truths. To put it another way, the practical activity of measuring triangles contained too much information, a lot of it superfluous. The theorem refines the information, discarding whatever is irrelevant. The ability to abstract theories and models from the world of information involves the ability to optimally throw away irrelevant information about the world in favor of new information that emerges at a higher level of analysis.
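
To see the point in numbers, here is a minimal sketch (my own illustration, with arbitrary side lengths, not drawn from the text) of the compression the theorem performs: for a right-angled triangle with legs of 3 and 4, the diagonal works out to 5, while the L-shaped path along the legs is 7.

```python
import math

# Illustrative side lengths for a right-angled triangle (arbitrary choice).
a, b = 3.0, 4.0

l_shaped_path = a + b                  # walking along the two legs
diagonal = math.sqrt(a**2 + b**2)      # the Pythagorean shortcut

print(l_shaped_path)   # 7.0
print(diagonal)        # 5.0, always shorter than the L-shaped path
```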

1.2 Information

Information is essentially data that each organism or machine is designed to take in and process selectively. In human life, information is literally meaningless unless it is connected to interpretation, so that it can be utilized for some purpose. In effect, information is useless without a semiotic code for interpreting and using it. It is, as its etymology suggests—from Latin informatio, “a sketch, an outline”—nothing more than encoded form. Deriving content from this form requires knowledge of how it can be represented (semiotized) and how it can be used. Moreover, the relation between the representation of information and the information itself is so intrinsic that it is often impossible to differentiate between the two. Consider an anecdotal illustration of this interconnection. Suppose that a scientist reared and trained at MIT in the United States sees a physical event that she has never seen before. Curious about what it is, she takes out a notebook and writes down her observations in English. At the instant that the American scientist observes the event, another scientist, reared and trained in the Philippines and speaking the indigenous Tagalog language, also sees the same event. He similarly takes out a notebook and writes down his observations in Tagalog. Now, to what extent will the contents of the observations, as written in the two notebooks, coincide? The answer, of course, is that the two sets of observations will not be identical. The reason for this discrepancy is not, clearly, due to the nature of the event, but rather to the fact that the observers were different, psychologically and culturally. Their interpretants (as Charles Peirce would have called their notes) varied as a result. The true nature of the event is indeterminable, although it can be investigated further, paradoxically, on the basis of the notes taken by the two scientists.

Despite the obvious relation between interpretation (representation) and information, very little has been done to show the explicit connection between these two domains in semiotics proper—a gap that both biosemiotics and cybersemiotics are attempting to fill. As mentioned, given the Information Age in which we live, it is becoming increasingly difficult to distinguish between the two. It is much easier to theorize about information, which can be pinned down scientifically fairly easily through appropriate mathematics; it is much harder to develop models of interpretation because of variation, subjectivity, and the forces of history. Indeed, the traditional information sciences have focused on information itself, developing models for measuring it and methods for storing and retrieving any fact or datum. From this, the concept of information content has emerged, defined mathematically as the amount of information in a message, represented as I, and measured as an inverse function of its probability: the less probable a message, the higher its information content. If, on the other hand, a message is expected with 100% certainty, its information content is I = 0. For example, the message that a tossed coin will land on either heads or tails has an information content of I = 0, because we already know this outcome with 100% certainty—there is no other possibility, so such a message carries no information. The two specific outcomes, heads and tails, are, however, equally probable.

In order to relate information content to probability, the American engineer Claude Shannon (1948, 1951, with Weaver 1949) argued that information of any kind could be described in terms of binary choices between equally probable alternatives. To this end, he devised a simple formula, I = log₂(1/p), in which p is the probability of a message being transmitted. The log₂ of a given number is the exponent that must be assigned to the number 2 in order to obtain that number: e.g., log₂ of 8 = 3, because 2³ = 8; log₂ of 16 = 4, because 2⁴ = 16; and so on. Using Shannon’s formula to calculate the information content of a message known with certainty (for example, that a single coin toss will land on either heads or tails) yields, as expected, the value of 0, because 2⁰ = 1. Shannon used binary digits, 0 and 1, to carry out his calculations because the mechanical communications systems he was concerned with worked in binary ways—open vs. closed or on vs. off circuits. So, if heads is represented by 0 and tails by 1, the outcome of a coin flip can be represented as either 0 or 1. For instance, if a coin is tossed three times in a row, the eight equally probable outcomes that could ensue can be represented with binary digits as follows: 000 (= three heads), 001 (= two heads in a row, a tail), 010 (= a head, a tail, a head), 011 (= a head, two tails), 100 (= a tail, two heads), 101 (= a tail, a head, a tail), 110 (= two tails in a row, a head), 111 (= three tails).
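
The following short Python sketch simply works through the calculations above; the function name and the enumeration are my own illustration of Shannon’s formula and the three-toss example, not part of the original discussion.

```python
import math
from itertools import product

def information_content(p: float) -> float:
    """Shannon's measure I = log2(1/p) for a message of probability p."""
    return math.log2(1 / p)

# A message expected with 100% certainty (p = 1) carries no information.
print(information_content(1.0))   # 0.0, since 2^0 = 1

# Each specific outcome of a fair coin toss (p = 1/2) carries one bit.
print(information_content(0.5))   # 1.0, since 2^1 = 2

# The eight equally probable outcomes of three tosses, with heads = 0 and tails = 1.
for outcome in product("01", repeat=3):
    print("".join(outcome))       # 000, 001, 010, ..., 111
```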

The objective of information science is to investigate systems for the generation, collection, organization, storage, retrieval, and dissemination of information. The field today brings together ideas and techniques from the social sciences, computer science, cybernetics, linguistics, management, neuroscience, and systems theory. With the transfer of massive databases to computers, information science has become a vast enterprise, often merging with Artificial Intelligence (AI) to develop models of both natural and artificial intelligence. Work on developing programs that enable a computer to understand written or spoken language, for instance, has shown that whereas the logic of language structure is easily programmable, meaning, with its variability and interpretability, is problematic for machines to compute (Danesi 2016). Nevertheless, some AI researchers believe that parallel processing—interlinked and concurrent computer operations—is leading to the development of true AI, which will become indistinguishable from natural intelligence. By integrating silicon neurons with various circuits that emulate nerve-cell membranes, it is believed, artificial systems will operate at the speed of neurons and become indistinguishable from human brains. But this line of research ignores the fact, rather stubbornly, that natural intelligence emerges from bodily (protoplasmic) sensations, not simulated ones. One cannot take the mind out of the body; if one does, the result would be a strange form of consciousness indeed.

In his book, Mental Models (1983), Johnson-Laird gives us a good overall taxonomy for talking about the notion of consciousness, which is worth revisiting here since it is particularly relevant today. According to Johnson-Laird, there are three basic interpretive schemata with which to view this notion:

  1. “Cartesian machines” which do not use symbols and lack awareness of themselves;

  2. “Craikian machines” (after Craik 1943) which construct models of reality, but lack self-awareness;

  3. self-reflective machines that construct models of reality and are aware of their ability to construct such models.

Programs designed to simulate human intelligence are Cartesian machines in Johnson-Laird’s sense, whereas animals and human infants are probably Craikian machines. But only human infants have the capacity to develop self-reflective consciousness, which is autopoietic, that is, one that is capable of self-generation and self-maintenance.

Another relevant view of consciousness is the one put forward by Karl Popper (1976; Popper and Eccles 1977). Popper classifies the world of the mind into three domains. “World 1” is the domain of physical objects and states, including human brains, which can affect physical objects and processes by means of neuronal synapses transmitting messages along nerve paths that cause muscles to contract or limbs to move. It is also the world of “things.” World 1 can be inhabited both by human-built Cartesian machines and by Craikian machines. “World 2” is the whole domain of subjective experiences. This is the level at which the concept of Self emerges, as the mind allows humans to differentiate themselves from the beings, objects, and events of the outside world. It is at this level that we perceive, think, plan, remember, dream, and imagine. “World 3” is the domain of knowledge in the objective sense, containing the externalized artifacts of the human mind. It is, in other words, the totally human-made world of culture. Consciousness emerges and resurfaces each time that the mind “descends” into World 2 to access its particular meaning-making mode of understanding. The act of making meaning is an act of consciousness. The mind cannot possibly descend into World 1. It can think about it, but it will never “know” it. There is evidence that animals have a form of consciousness, and an experiential domain similar to that of humans. This is the ability to know a sensation and to react to it in purposeful ways. The ethological evidence shows that animals do indeed react purposefully to stimuli. The problem is trying to determine to what extent this form of consciousness becomes imaginative and inventive, transforming felt experience into a reflective type. As von Uexküll (1909) cogently argued at the turn of the twentieth century, it is unlikely that we will ever be able to “know” how animals “know,” given our different anatomical and neurological systems. Moreover, it is highly unlikely that we will ever be able to penetrate the workings of our own biological systems to discover how they form the substrate of consciousness. The search to find some evolutionary, or genetic, World 1 basis for consciousness using World 3 structures such as scientific theorizing will therefore invariably turn out to be a difficult, if not impossible, enterprise.

A disembodied view of how humans encode and use information is essentially a useless one. So cybersemiotics aims, instead, to investigate how information relates to the sensory, emotional, and intellectual structures that undergird both the production and interpretation of signs and the extraction of personal, social, and imaginary meanings from them. It is, in other words, a study of self-reflective machines, and how these are different from Cartesian and Craikian machines. It is a means of connecting Popper’s three worlds through a paradigm of semiosis. The information that a piece of music contains cannot be reduced to a mere probability event, which is part of World 1. Indeed, most of human information processing is immeasurable. So, the study of information as a human system requires much more than a Shannon-type framework. It must involve Worlds 2 and 3.

Of particular relevance to the study of information is the research in biosemiotics, which has shown how remarkably rich and varied animal communication systems are in handling specific kinds of information. Biosemiotics aims to investigate such systems, seeking to understand how animals are endowed by their nature with the capacity to use specific types of signals and signs for survival (zoosemiosis), and thus how human semiosis (anthroposemiosis) is both linked to, and different from, animal semiosis. As the late Thomas A. Sebeok (for example, 2001), a primary figure in this movement, emphasized, the objective of biosemiotics is to distill common elements of semiosis from its manifestations across species, integrating them into a taxonomy of notions, principles, and procedures for understanding this phenomenon.

As mentioned, biosemiotics takes its impetus from the work of von Uexküll (1909), the Estonian biologist who provided empirical evidence at the start of the twentieth century to show that an organism does not perceive an object in itself, but according to its own particular kind of innate modeling system. For von Uexküll every organism has different inward and outward “lives.” The key to understanding this duality is in the anatomical structure of the organism itself. Animals with widely divergent anatomies do not share common modeling systems (perceptions, symptoms, etc.) equally. An organism does not perceive an object in itself, but according to its own particular kind of Bauplan—the preexistent mental modeling system that allows it to interpret the world in a biologically-set way. For von Uexküll, each system is grounded in the organism’s body, which routinely converts the external world of experience into an internal one of representation in terms of the particular features of the Bauplan with which it is endowed. Current biosemiotics has incorporated von Uexküll’s basic conceptualization into a flourishing new field of empirically-based semiotic research that continues to hold much in store for the study of information and its relation to human interpretation (see, for example, Lotman 1990; Sebeok and Danesi 2000; Posner et al. 1997–2004; Barbieri 2006; Favareau 2010; Cobley 2016).

1.3 Cybersemiotics

Biosemiotics focuses on biological organisms and their information modeling systems; it does not enlarge its purview to encompass mechanical systems, such as computers and networks like the Internet. The expansion of this paradigm to encompass such systems has been made possible with the advent of cybersemiotics, which integrates the factors that shape and constitute biology, culture, history, technology, and other systems into the study of information as an interpretive system (within the human species). It is a truly interdisciplinary science, fusing not only elements of biosemiotics and cybernetics, but also of cognitive linguistics, anthropology, and historiography.

The cybernetic component of cybersemiotics requires some commentary. Cybernetics is the science of regulation and control in animals (including humans), organizations, and machines, viewed as self-governing entities consisting of parts and their organization into wholes. It was conceived, as mentioned, by mathematician Norbert Wiener, who then popularized the social implications of cybernetics, drawing analogies between machines and human institutions in his best-selling 1950 book, The Human Use of Human Beings: Cybernetics and Society. Cybernetics views information processing in all self-contained complex systems (biological and mechanical) as analogous. It is not interested in the material features of such systems, but in the ways in which they organize data. Because of the increasing sophistication of computers and the efforts to make them behave in humanlike ways, cybernetics today is closely allied with AI and robotics, drawing heavily on ideas developed in information theory. As used in communication studies, the term applies primarily to systems in which the feedback and error-correction signals control the operation of the systems. Such signals (or signal systems) are called servomechanisms. Servomechanisms were first used in military and marine navigation equipment. Today they are used in automatic machine tools, satellite-tracking antennas, celestial-tracking systems, automatic navigation systems, and antiaircraft control systems. The primary task in the cybernetic study of communication is to understand the guidance and control servomechanisms that govern the operation of social interaction and then to devise better ways of harnessing and intervening in them.

Cybernetics encompasses a taxonomy of notions, principles, and procedures for understanding the phenomenon of information in its structural and organizational features. This seems paradoxical, but information bears either organization within it, or the potential for organization, via the system (animal or mechanical) that carries it. At first, cybernetics developed its concepts from the information models devised by Shannon, which essentially depict information transfer as a unidirectional process dependent on probability factors, that is, on the degree to which a message is to be expected or not in a given situation. It is called the “bull’s-eye model” because a sender of information is defined as someone or something aiming a message at a receiver of the information as if he, she, or it were in a bull’s-eye target range. Shannon also introduced several key terms into the general study of communication: channel, noise, redundancy, and feedback.

What is clearly missing from cybernetics and Shannon-type information science is the semiotic focus on meaning and interpretation—both elusive features of human semiosis. Cybersemiotics entered the scene to complete the cybernetic agenda by integrating it with this basic principle of semiotics, thus extending the semiotic paradigm to cover the study of semiosis in humans, animals, and machines, while putting the spotlight on the uniqueness of human semiosis. Cybersemiotics has come forward to show that human sign systems have physical structures similar to those of animal and machine systems, so constituted as to serve specific psychological and social functions, but that they differ in their meaning-making modalities. Moreover, animal and, especially, human systems are autopoietic, that is, self-organizing in a creative, not deterministic, fashion. The crux of the cybersemiotic approach thus inheres in looking more closely at the relation between information and interpretation, as Charles Morris (1938, 1946) had cogently suggested already in the 1930s and 1940s.

The ultimate goal of semiotics proper is the study of meaning as it manifests itself in all spheres of human life, from small-scale structures (words, symbols, etc.) to large-scale ones (the Aristotelian final causes of things). To do so, it requires a broadening of its epistemology to encompass how signs encode raw information across systems (biological and mechanical) and how the systems select elements from the information and make use of them. The cybersemiotic agenda aims to do exactly this, and it comes at an appropriate time, given that we live in the Information Age, where data itself has salience, irrespective of how it is interpreted or used. Although constituted as an interdisciplinary science, semiotics has hardly ever been adopted, or incorporated methodologically, by the information sciences in a genuinely integrative fashion. Parallels are often drawn between the disciplines, but true theoretical interaction has seldom occurred, outside of sporadic attempts by individual researchers working within semiotics and information science. In the cybersemiotic paradigm the integration is both implicit and explicit, since it subsumes two main concepts: (1) semiosis occurs across all information systems (organic and mechanical) in ways that are manifestly similar, yet different in the minutiae of how the information is understood (insentiently or sentiently); and (2) human semiosis also involves a level of consciousness wherein unique meaning structures emerge and reproduce and expand on their own (autopoiesis).

Cybernetics thus asks fundamental questions about semiosis, such as the following: Do words emerge spontaneously as part of the human organism’s reaction to the world, as does any reaction to a given stimulus in instinctive behavior, or are they interpretants? At a primary level, which Brier calls a first-order level, it would seem that, indeed, words and linguistic structures are products of instinctual activity, as both von Uexküll (1909) and Karl Bühler (1934) argued early in the previous century. This level corresponds to Popper’s World 1 and to Johnson-Laird’s Cartesian machines. At this level, a continuity between humans, animals, and machines is observable—the level that Wiener studied formally in discussing the many striking similarities in the functioning of human beings, animals, and machines. Essentially, all three can be characterized as displaying orderly operation and stability. One of the most important shared characteristics is feedback, or the circling back of information to a regulating device or organic system. For instance, when body temperature is too high or too low, the brain acts to correct the temperature. A household thermostat uses a similar type of feedback mechanism when it corrects the operation of a furnace to maintain a set temperature.
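
The feedback idea can be sketched in a few lines of Python; the set point, rates, and function below are illustrative assumptions of my own, meant only to show a servomechanism-style loop of the kind just described.

```python
# A minimal sketch of a negative-feedback (thermostat) loop; all values are illustrative.
SET_POINT = 20.0   # desired room temperature (degrees Celsius)
HEAT_RATE = 0.5    # temperature gained per cycle while the furnace is on
COOL_RATE = 0.3    # temperature lost per cycle to the environment

def feedback_cycle(temperature: float) -> float:
    """One servomechanism cycle: measure, compare with the set point, correct."""
    error = SET_POINT - temperature    # the error signal circled back to the controller
    furnace_on = error > 0             # switch the furnace on only when the room is too cold
    temperature += HEAT_RATE if furnace_on else 0.0
    temperature -= COOL_RATE           # the room always leaks some heat
    return temperature

temperature = 15.0
for _ in range(60):
    temperature = feedback_cycle(temperature)
print(round(temperature, 1))   # hovers near the set point instead of drifting away
```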

Aware of the far-reaching implications of his proposal, Wiener discussed both the pros and cons of making analogies between animals, machines, and humans in 1950. Brier (2008) has thus re-designated Wiener’s approach as first-order cybernetics, operating at the level of World 1: a form of investigation of the structural patterns that reveal how physical (organic) systems adjust themselves to change. Such comparative study would explain why mathematics, a human invention, is the basic language that runs machines (computers). Machines and humans thus “speak to each other” through this particular Cartesian language. Because of the increasing sophistication of computers and the efforts to make them behave in humanlike ways, this type of cybernetics today is closely allied with AI and robotics, as mentioned.

However, human information systems extend far beyond this level of operation, rising to a second order, as Brier argues, where the content of the forms determines their functions. In second-order cybersemiotics, studying the first-order servomechanisms in themselves is seen as only a point of departure; it is in examining how they acquire meanings that the true regulation and use of information for knowledge-making—discovery, creativity, etc.—emerges in the human species. This whole line of analysis, as Brier has described throughout his substantive work in the field (Brier 1993, 1995, 1996, 1998, 1999, 2003, 2008), raises two important issues when it comes to human semiosis. First, as Brier himself points out, it is insufficient to study information transfers solely as probabilistic signals. It is in the linkages made between words, meanings, and other interpretive structures that the human brain is somehow capable of extracting a unique form of conscious understanding. The process is not logical or purely probabilistic; it is inferential (or, to use Peirce’s [1931–1958] well-known term, abductive). As it turns out, therefore, language does indeed allow us to understand the world—but on our own terms. Real machines, on the other hand, are programmed to understand information practically; they cannot interpret it, for if they could, they would develop structures that function as interpretants. In effect, interpretation starts at the Craikian (World 2) level, but it is not fully conscious yet. From this second-order semiosis, consciousness eventually emerges to provide self-reflective interpretation (World 3). Second, there are, of course, dangers in making claims of this type, especially at the lower levels. By comparing semiotic structures with biological and artificial ones, we might be engaging in anthropomorphic speculation. For example, in 1974 Marcel Florkin suggested that the concepts of signifier and signified were equivalent to genotype and phenotype, respectively. Barbieri (1985, 2003) correctly pointed out a little later that a cell has a triarchic structure consisting of genotype, phenotype, and ribotype (the ribotype being the ribonucleoprotein system). So Florkin’s analogy did not really hold.

As Brier insists, it is absolutely necessary to distinguish between information-as-entropy and information-as-meaning. Entropy is a measure of the amount of disorder or randomness in a system. Because there are many more random ways of arranging a group of things than there are organized ways, disorder is much more probable. For instance, shuffling a deck of cards is likely to lead to a jumbled distribution of cards, not an ordered sequence. By extension, entropy can be viewed as a measure of the randomness or uncertainty in information. In order for that information to become something “ordered” or “patterned,” it needs to be assigned a “meaning” through interpretation. And this entails a code, a system of contact, and various other components, as Jakobson (1960), among others, has argued. For information to be more than entropy, it must be organized into something recognizable and useable (words, symbols, gestures, etc.). At a secondary (higher-order) level, the meanings invested in these primary models evolve through social and psychological semiosis, not through biology. It is, in fact, this level of conscious manipulation of information that cybersemiotics aims to study. As von Uexküll (1909) certainly knew, the transformation of information into signification inheres in understanding how the Innenwelt (inner world) of an organism is well adapted to interpret the Umwelt (the outer world it inhabits) in a specific way and thus to generate species-specific models of it so as to combat entropy. It is the interaction between raw information and its modeling that produces what we vaguely call “meaning” in human semiosis.
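
As a hedged illustration (the function and the toy sequences below are my own, not examples from Brier or Shannon), entropy in this sense can be computed as H = Σ p·log₂(1/p) over the distribution of symbols in a message: a fully predictable, ordered sequence has zero entropy, while an evenly mixed sequence over two symbols carries one bit of uncertainty per symbol.

```python
import math
from collections import Counter

def shannon_entropy(symbols: str) -> float:
    """Entropy in bits of a sequence's symbol distribution: H = sum(p * log2(1/p))."""
    counts = Counter(symbols)
    total = len(symbols)
    return sum((n / total) * math.log2(total / n) for n in counts.values())

# A perfectly ordered (certain) sequence: no disorder, zero entropy.
print(shannon_entropy("AAAAAAAA"))   # 0.0

# An evenly mixed sequence over two symbols: one bit of uncertainty per symbol.
print(shannon_entropy("ABABABAB"))   # 1.0
```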

It has been the practice, before the biosemiotic movement, for semioticians to study, by and large, small-scale experiences of meaning within cultural domains, leaving it up to biologists to study the organic basis of communication and philosophers to study the large-scale experiences of meaning. But these should not be treated as separate practices. In human cognition, the sense of the particular reflects the sense of the general and vice versa. It is easier and much more practicable to study the particular, of course. And this is where semiotics has been thriving. As in physics, it is possible to develop special semiotic theories of the world, but rather intractable to develop general ones. Cybersemiotics seeks to develop a basis for investigating the latter. As Lotman (1990) claimed, studying human systems is equivalent to studying how they model reality in the particular. However, Lotman saw an intrinsic interconnection between the particular and universal forms of meaning-making, which emerge in the semiosphere. The latter is a state of consciousness that, like the biosphere, allows humans to adapt to, and regulate, their cognitive experiences. Lotman thus implicitly argued that the need to understand who we are, and why we feel the way that we do, is part of a larger quest for understanding of which we are an unwitting part. Eastern philosophies are attuned to this quest, modeling it in various symbolic and ritualistic ways; western philosophies have often shied away from it, preferring instead to focus on the dichotomy between the body and the mind, the biosphere and the semiosphere.

As Sebeok (1994) argued, the focus of true semiotics should be the study of the modeling capacities of the brain, which he characterized as a “semiotic organ.” The operation of this organ can be seen conspicuously during infancy and childhood. When an infant comes into contact with a new object, its instinctive reaction is to explore it with the senses, that is, to handle it, taste it, smell it, listen to any sounds it makes, and visually observe its features. This exploratory phase of knowing the object constitutes a sensory modeling stage. The resulting internal models allow the infant to recognize the same objects subsequently without having, each time, to examine them over again tabula rasa with its sensory system (although the infant often will reexamine their physical qualities for various other reasons). Now, as the infant grows, it starts to engage more and more in modeling behavior that replaces this sensory phase; i.e., it starts pointing to the object and/or imitating the sounds it makes, rather than just handling it, tasting it, and so on. These imitations and indications are the child’s first attempts at modeling the information it has stored previously in sensory ways. Thereafter, the child’s repertoire of modeling activities increases dramatically, as it learns more and more how to refer to the world through the brain’s semiotic organ. These stages in childhood development are universal.

The foregoing discussion cannot ignore Chomsky’s (1986, 2000, 2002; see also Anderson and Lightfoot 2002) concept of a language organ, since it seems to be suspiciously analogous to the semiotic organ (although it is not). From birth, we have a sense of how language works and how its bits and pieces are combined to form complex structures (such as sentences). And this, Chomsky suggested, was evidence that we are born with a unique faculty for language, which he at first called a “Language Acquisition Device” and later an “organ,” and which allows us to acquire, effortlessly, the language to which we are exposed in context. Language is an innate capacity. No one needs to teach it to us; we acquire it by simply listening to samples of it in childhood, letting the brain put them together into the specific grammar on which the samples are based. It is as much an imprint as is our reflex system. Specifically, the organ contains a Universal Grammar (UG), which provides the blueprint on which all language grammars are built and thus explains why children learn to speak so naturally. The rule-making principles of the UG are available to all children, hence the universality and rapidity of language acquisition—when the child learns one fact about a language, the child can easily infer other facts without having to learn them one by one. Differences in language grammars are thus explainable as choices of rule types, or “parameters,” from the UG. Chomsky further claims that this faculty is present in the genes. It has been found that when a gene called FOXP2 goes wrong, a specific language impairment involving word inflections and complex syntax seems to be passed on. The gene seems, therefore, to be connected with language. But as Burling (2005) points out, connecting a faculty to a genetic source is fraught with problems:

FOXP2 should not be considered a language gene, however. Several thousand other genes are believed to contribute to building the human brain, and a large portion of these could contribute, in one way or another, to our ability to use language. Any one of these might interfere with language if it were to mutate in a destructive way. Nor is the influence of FOXP2 confined to language or even the brain, for it is known to play a role in the embryological development of lung, heart, and intestinal tissues (pp. 148–149).

There are several problems with UG theory that need not be discussed here. One is that it accounts for the development of language as a rule-making system in the child, ignoring a much more fundamental developmental force in early infancy—the ability of the child to fill conceptual gaps with words and phrases in unexpectedly creative ways. But the main one is that Chomsky claims that the language organ is, to put it colloquially, where all the action is. What about laughter, music, drawing, and other semiosic abilities? Do these have separate organs, or are they derivatives of language? Clearly, Chomsky’s language organ is a convenient metaphor for supporting a rule-based theory of language, rather than a veritable theory of mind. Sebeok’s semiotic organ, on the other hand, encompasses all abilities, verbal and nonverbal, as inherent in the human brain. These abilities are hardly innate or hardwired—they develop through an interaction between the Umwelt and the Innenwelt. So the brain is a semiotic organ, which controls language, music, humor, and all other human faculties in tandem. It is, in effect, an autopoietic organ.

The term autopoiesis (from Greek “self” and “creation”) designates a system capable of reproducing and sustaining itself. It was introduced by the Chilean biologists Humberto Maturana and Francisco Varela in 1972 to characterize the self-maintaining chemistry of cellular structures (Maturana and Varela 1973). Brier himself has written in-depth analyses of how autopoiesis characterizes the handling of information on the part of humans (and some other animals). Building in part on Deacon’s (1997) key idea that symbols change through a form of creative adaptation, Brier argues that machines lack autopoiesis because they belong to first-order cybernetics; they are allopoietic, that is, controlled by someone or something else. Autopoiesis is controlled by a semiotic organ—the higher the order of the organ’s abilities, the more creative it is. In the case of humans, therefore, autopoiesis seems to know no bounds.

1.4 The Internet Era

The cybersemiotic approach to information is a powerful one, since it divides semiosis into orders that range from mechanical activity to creative activity. In the Internet era, it is important to remember this fundamental aspect of semiosis, given that it often gets lost in the quagmire of the mediasphere. When the Internet came into wide use, it was heralded as a liberation from conformity and a channel for expressing one’s opinions freely. But this view has proven to be specious. Living in a social media universe, we may indeed feel that it is the only option available to us. The triumph of social media lies in their promise to allow human needs to be expressed individualistically while connecting people communally—hence the paradox. Moreover, as social media communities are themselves connected to the larger pathways of a global connected intelligence network (the Internet), a new form of consciousness has emerged, one that can perhaps be called a third-order cybersemiotic system.

As mentioned, the term “global brain” was coined before the Internet era by Peter Russell, anticipating the effects of the Global Village on human consciousness. The global brain concept has, since then, produced a whole series of new theories about humanity that could only crystallize in the Information Age. One of these is post-humanism, used broadly to refer to an era in which humans no longer dominate the world but instead have merged with their machines and with animals to create a new world order, one that positions humans not at the center of the universe but as equal partners with other intelligences (artificial and animal). A leader of this movement is Donna Haraway (1989, 1991), whose ideas about the impact of technology on our perception of the body have become widely quoted in media, culture, and communication studies. She is also well known for her work on “cyborg theory,” or the view that machines are merging more and more with humans, replacing many functions of the human body and mind. But scholars like Haraway are ignoring the paradox of technology. As McLuhan argued, we always tend to retrieve the past in the present, and thus human progress is not linear but cyclical. The fact that the foregoing discussion has plausibility indicates that we are all shaped by contrasting forces at work—individualism versus globalism. This describes, in my view, how semiotic systems are evolving. In the Information Age, there is a third-order semiosis stressing connectivity and communal cognition, which often reduces meaning to a first-order level. Indeed, the cybersemiotics of information is an antidote to this tendency.

The work of Derrick de Kerckhove is also highly relevant here. He is the one who introduced the term “connected intelligence” at the threshold of the Web 2.0 revolution (de Kerckhove 1997, 2015). This notion is now a common one, renamed variously as “distributed cognition” or “networked intelligence.” It suggests that we are more involved in the extroverted form of intelligence (the global brain), as it is distributed through the electronic mediasphere, than in individual acts of intelligence, even if we sometimes pay tribute to them as such. In the mediasphere, the interests of the group are more important than the fame of the individual; and those who are considered important are, in effect, treated as experts, meaning that virtually anyone, not just the professionally-sanctioned experts, can become famous.

For de Kerckhove, the Global Village will reach (and today it can be said that it “has reached”) a critical mass of connected intelligence, which means that the sum total of the ideas of the global brain will be vastly more important than those of any individual’s intelligence could ever hope to be. He speculates that there is a strong possibility that we are undergoing one of the greatest evolutionary leaps in the history of our species. The architecture of this connected intelligence resembles that of a huge brain whose cells and synapses are encoded in software and hardware that facilitate the free assemblage and parting of minds in collaboration for any purpose. Because of this, individual brains in the connectivity are able to “see more, hear more and feel more,” as the composer Karlheinz Stockhausen put it (as cited by de Kerckhove 2015).

It is in this “networked cyber-system” that cybersemiotics can play its most important role—deconstructing and analyzing its features in terms of first-order, second-order (and possibly third-order) semiosis and the inherent desire of the human semiotic organ to insert itself in autopoietic ways within the network. As the late Umberto Eco (1978) cogently argued a while back, the findings of semiotics can and should lead to a modification of the actual state of the objective world. For this to happen in a real way, however, semioticians must expand the purview of their science to encompass humans, animals, and machines as they communicate both uniquely and interactively in an age where information is at a premium, with or without interpretive systems. That is the goal of cybersemiotics. As we communicate, we leave traces of what we have thought and done, and thus of who we think we are. It is an example of autopoiesis in action. Just as the body enters into a bio-communicative system with food, so too the mind enters into a parallel cognitive system with words and other signs. Particular sign systems are specific instantiations of an intrinsic need to understand ourselves and to solve similar problems of consciousness and adaptation throughout the world.

The cybersemiotic approach is a specific instantiation of Gregory Bateson’s (1972) goal to understand the relation between the human brain and nature, using scientific rather than convoluted philosophical theories, such as Cartesian dualism. The crux for fashioning a semiotic agenda that can change the state of the world, as Eco claimed, is to study how human meaning systems emerge to serve human needs and aspirations. The evolution of Homo sapiens has been shaped by forces that we will never really understand. Indeed, for no manifest genetic reason, humanity is constantly reinventing itself as it searches for a purpose to its existence. We make veritable discoveries, we explore space, and, in a sense, we go beyond semiosis, reaching for something that no word or sign can ever really capture, just record in part. Cybersemiotic analysis has, ultimately, the aim of showing how humans, in their apparent quest for large-scale meaning, have the capacity to generate their own evolutionary momentum.