4.1 Cybersemiotics

Cybersemiotics is a transdisciplinary research field at the confluence of cybernetics, systems theory, semiotics, radical constructivism, biology, ethology, cognitive linguistics, communication theory, and the sciences of information and computation. Founded as a novel “transdisciplinary theory of consciousness, cognition, meaning and communication” (Brier 2013, p. 97) and advanced through Brier’s “journal of second-order cybernetics, autopoiesis and cybersemiotics”, Cybernetics & Human Knowing, the research field offers an umbrella for several current research tendencies. From information theory to semiotics, from first- to second-order cybernetics, and from Heinz von Foerster’s radical constructivism to Niklas Luhmann’s constructivist theory of social systems, cybersemiotics aims at expanding the horizon of general semiotics outlined by Charles S. Peirce. Programmatically, Brier (2013) declares:

Cybersemiotics proposes a new transdisciplinary framework integrating Peirce’s triadic semiotics with a cybernetic view of information […]. The proposed framework offers an integrative multi- and transdisciplinary approach, which uses meaning as the overarching principle for grasping the complex area of cybernetic information science for nature and machines AND the semiotics of all living system’s cognition, communication, and culture. Cybersemiotics is an integrated transdisciplinary philosophy of science allowing us to perform our multidisciplinary research, since it is concerned not only with cybernetics and Peircean semiotics, but also with informational, biological, psychological and social sciences. In order to incorporate the sociological disciplines and contributions from multiple areas of applied research, cybersemiotics draws extensively on Luhmann’s theories (p. 222).

Cybersemiotics adopts Peirce’s phenomenology, semiotics, and evolutionary philosophy as basic tools in its project to integrate biology, ethology, autopoiesis theory, the theory of embodied cognition, and the theories of evolution and emergence under its transdisciplinary umbrella. Since the very broad scope of the project of cybersemiotics makes it impossible to pay due tribute to all of its purposes in a single chapter, the present contribution has to restrict itself to shedding some light on topics concerning four theoretical pillars of cybersemiotics: systems theory, communication theory, information theory, and the semiotic philosophy of Charles S. Peirce.

4.2 Systems Theory

General systems theory, according to its founder, Ludwig von Bertalanffy (1968, p. 90), is a transdisciplinary framework for such diverse research fields as cybernetics, information theory, game theory, decision theory, topology, factor analysis, and the branch of philosophy known as systems philosophy. Laszlo extended this list to include catastrophe theory, the theory of autopoietic systems, nonequilibrium dynamics, and synergetics (1972, p. 13; 1983). With Parsons (1951) and Bateson (1972), systems theoretical ideas began to spread in the social and behavioral sciences. The theory of autopoietic and self-referential systems, which originated in biology, introduced a new variant of systems theory to the social sciences (Luhmann 1995a, b, 1997) as well as to literary and media studies (Schmidt 1997; Nöth 2011). Other tendencies within systems theory are the theories of dynamic systems, complex systems, and self-organization. The study of complex systems has also developed into a research field of its own in mathematics and economics, known as the sciences of complexity. Furthermore, artificial intelligence, artificial life, ecology, the neurosciences, and research in neural networks in computer science have been subsumed under the umbrella of systems theory (Cruse 2009).

The concept of system has many facets, of which only those that have become key concepts in systems theory can be discussed here (for others, see Nöth 2000, pp. 208–215). According to Bertalanffy (1968, pp. 37–8), systems theory aims at discovering isomorphisms that explain how systems, from simple static to complex dynamic ones, are organized in such diverse fields as technology, physics, biology, and the social sciences. Today, systems theory looks back on a history falling into first-, second-, and third-generation research in systems, each generation characterized by different types of systems and key notions (Iba 2010, pp. 6613–6614). The systems in the focus of the first-generation scholars (Cannon, L. Bertalanffy, K. Boulding, G. Klir, A. Rapoport) are dynamic equilibrium systems. Among their key concepts are feedback, homeostasis, invariance, equilibrium, and self-stabilization. In sociology, first-generation concepts of systems theory were incorporated into T. Parsons’s theory of social systems. The pioneers of cybernetics (N. Wiener, R. Ashby), the generation of the so-called first-order cyberneticists, are sometimes included in the first generation of systems theory, although Bertalanffy was keen to emphasize that the scope of general systems theory differed from that of cybernetics (Drack and Pouvreau 2015). Key concepts of the first paradigm of systems theory, with brief definitions, are the following:

  1. (1)

    System. According to Hall and Fagen (1956), “a system is a set of objects together with relationships between the objects and between their attributes” (p. 18). Examples of systems include machines, cells, organisms, ecological habitats, persons, social groups, families, companies, legal institutions, languages, literatures, media, or cultures. For Bertalanffy (1975), a system is “a set of elements standing in interrelation among themselves and with the environment” (p. 159).

  2. (2)

    Wholeness, order, and invariance are characteristics of systems according to the first generation of systems theoreticians (e.g., Bertalanffy 1968, p. 55). Every system is an ordered whole that cannot be reduced to the sum total of its constituent elements. “Order in a system refers to the invariance that underlies transformation of state and by means of which the system’s structure can be identified” (Laszlo 1983, p. 28).

  3. (3)

    Open vs. closed systems. First-generation systems theory defined biological organisms as open systems in the sense that they exchange energy and matter with their environment. As long as they live as open systems, they escape from decay through metabolism and by drawing information from their environment (Schrödinger 1947, pp. 70–2). Closed systems, by contrast, are isolated and without environmental input and output. Third-generation systems theory has an almost opposite conception of the “organizational closure” of systems (see below).

  4. (4)

    Equilibrium and stability. Systems are in states of equilibrium that range from stability to instability. A stable equilibrium is one in which perturbations do not change the value of the variables of the system. After disturbances that do not amount to a catastrophe, such systems return to their previous state. A ball in a basin exemplifies a system in such a state. Systems that move away quickly from the state of equilibrium even after only minor disturbances are in a state of unstable equilibrium. A house of cards exemplifies a system of this kind.

  5. (5)

    Homeostasis and flow equilibrium. Homeostasis describes the ability of a system to stabilize itself dynamically at the level of a desired state (Cannon 1932). Open systems are in a flow equilibrium (Bertalanffy 1975, p. 127). After absorbing environmental influences (e.g., in the process of metabolism), they do not just return to their previous state, but attain a new state of equilibrium. When self-stabilization has a variable desired state, for example in the process of growth according to a genetically determined program, its development is described as homeorhesis (Waddington 1957, p. 43).

  6. (6)

    Equifinality. Bertalanffy defines the capacity of a living system to reach a desired final state in different ways from various initial states as equifinality (1968, p. 79). Equifinality characterizes the behavior of a living system in which “future goals are already present in thought and direct the present action” (Bertalanffy 1950, pp. 140–141).

  7. (7)

    Information as negative entropy. Entropy is a concept of the second law of thermodynamics. A closed system, isolated from its environment, tends towards a state of maximum entropy, in which the distribution of the molecules is entirely unpredictable and hence disordered (random). From thermodynamics, information theory adopts the concept of entropy and defines its inverse, negative entropy, as information (Shannon 1948). The more the elements of a system are ordered, the more information it contains. The more they are in disorder, the more the system lacks information.
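
The inverse relation between order, disorder, and information in this first-generation sense can be illustrated with a short sketch. The following Python fragment is our illustrative gloss on the negentropy reading of information summarized under (7), not code from any of the sources cited above; the function names and the example distributions are assumptions made for the example:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p_i * log2(p_i)), measured in bits (Shannon 1948)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def negentropy(probs):
    """Information read as negative entropy: the distance of a distribution
    from maximal disorder (the uniform distribution over the same states)."""
    h_max = math.log2(len(probs))      # maximum entropy for len(probs) states
    return h_max - shannon_entropy(probs)

disordered = [0.25, 0.25, 0.25, 0.25]  # no order: every state equally probable
ordered = [0.97, 0.01, 0.01, 0.01]     # high order: one state dominates

print(negentropy(disordered))  # 0.0 bits: maximal entropy, minimal "information"
print(negentropy(ordered))     # ~1.76 bits: high order, high "information"
```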

Second-generation systems theory is concerned with processes of self-organization in dynamic nonequilibrium systems. Its focus is the possibility of the emergence of order out of chaos, as discovered by Prigogine in thermodynamic systems. Key concepts are “dissipative structure” (I. Prigogine), hypercycle, self-replication, autocatalysis (M. Eigen, in chemistry), and synergetics (H. Haken, in thermodynamics). Studies of dynamic processes in the framework of catastrophe theory have also been included within this paradigm. Further key concepts are:

  1. (1)

    Self-organization and morphogenesis (cf. Laszlo 1972). In contradistinction to self-stabilization, which maintains a system at a desired state by means of negative feedback (morphostasis), self-organization proceeds by means of positive feedback, too. In its morphogenesis, a self-organizing system grows by amplifying inner changes and adapting to perturbations from without in order to reach higher stages of development. In each phase of this process, there are nonequilibrium states requiring a reinforcement of the mechanisms of self-stabilization (cf. Laszlo 1972, pp. 42–5). Self-organization presupposes a system with multiple equilibria and strata of potential stability (Laszlo 1983, p. 32).

  2. (2)

    Self-stabilization is a key concept of dynamic systems theory. Negative feedback, already a key concept of first-generation systems theory, is the control process by means of which a system keeps a desired state stable (cf. Laszlo 1972, p. 39). A thermostat, e.g., counteracts changes of temperature above or below a desired value by cooling or heating the system (see the sketch following this list). A system that aims at keeping a desired state stable is a teleological system.

  3. (3)

    Nonequilibrium dynamics is no longer concerned with merely keeping a system stable (Nicolis and Prigogine 1977; Prigogine and Stengers 1984). Instead, it describes systems in which spontaneous transformations from states of fluctuation and disorder far from equilibrium result in higher states of order and stability. In such processes, the stability of a system depends on the use or the dissipation of energy. The goal of keeping a system stable is supplanted by the goal of permanent dynamic nonequilibrium states in processes of self-organization and evolution.
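
The negative feedback by which a morphostatic system such as the thermostat mentioned under (2) keeps a desired state stable can be sketched in a few lines. The following Python fragment is a minimal illustration, not drawn from Laszlo or any other source cited here; the proportional gain and the numerical values are arbitrary assumptions:

```python
def thermostat_step(temperature, setpoint, gain=0.5, disturbance=0.0):
    """One cycle of negative feedback: the correction acts against the deviation
    of the current state from the desired state (the setpoint)."""
    error = setpoint - temperature      # deviation from the desired state
    correction = gain * error           # heat if too cold, cool if too warm
    return temperature + correction + disturbance

temperature, setpoint = 15.0, 20.0      # the system starts perturbed below its goal
for _ in range(10):
    temperature = thermostat_step(temperature, setpoint)
print(round(temperature, 2))            # ~20.0: the deviation has been compensated
```

Self-organization, by contrast, does not merely compensate deviations in this way but also amplifies some of them by positive feedback, as described under (1).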

Third-generation systems theory originates with Maturana and Varela’s (1980) biological theory of autopoiesis. Niklas Luhmann adopted and modified it in his theory of social systems. Second-order cybernetics (Bateson, von Foerster) and the ideas of the radical constructivists (von Glasersfeld, S.J. Schmidt) are often included within this paradigm. The project of cybersemiotics belongs to it (Brier 1996). The key concepts are:

  1. (1)

    System. Maturana and Varela (1980) restrict themselves to the most lapidary definition of a system as “any definable set of components” (p. 138). For Varela (1979), systems are “machines”, which allows him to distinguish between nonliving and living machines, alias systems (p. 9).

  2. (2)

    Autopoiesis in biology. Whereas nonliving machines are externally determined (allopoietic) systems, defined in terms of “inputs, outputs, and their transfer functions”, living machines (i.e., organisms) are autopoietic or autonomous systems. “In an autonomous system, we find that its components are so strongly interrelated that it is this internal coherence and interrelatedness what is central […]. Instead of inputs and their transformations, one shifts to operational closure, as a characterization of the internal network” (Varela 1986, p. 118). For Maturana (1981), autopoietic systems are “unities as networks of production of components that (1) recursively, through their interactions, generate and realize the network that produces them; and (2) constitute, in the space in which they exist, the boundaries of this network as components that participate in the realization of the network” (p. 21).

  3. (3)

    Autopoiesis in nonbiological self-referential systems. Luhmann accepts Maturana’s definition, but supplements it as follows:

    Autopoietic systems, then, are not only self-organizing systems, they not only produce and eventually change their own structures; their self-reference applies to the production of other components as well. This is the decisive conceptual innovation. It adds a turbocharger to the already powerful engine of self-referential machines. Even elements, that is, last components (in-dividuals) which are, at least for the system itself, indecomposable, are produced by the system itself. Thus, everything that is used as a unit by the system is produced by the system itself. This applies to elements, processes, boundaries, and other structures and, last but not least, to the unity of the system itself. (Luhmann 1990, p. 3)

  4. (4)

    System and environment. Luhmann rejects the definition of a system as the totality of the elements that constitute it as a whole, as the first-generation systems theoreticians taught. According to Luhmann’s redefinition, a system needs to be conceived in terms of the difference between the system and its environment (1997, p. 201). This difference is “created” by the system’s operations, which are the constitutive elements of the system. “Only a system can operate and only operations can produce a system” (1995b, p. 27).

Systems theory is sometimes studied under the designation of systems science (Laszlo 1983; Mobus and Kalton 2014), but others have avoided referring to systems theory as a science. Instead of calling it a science or an academic discipline, they prefer expressions such as “systemic thinking” (Emery 1969), “systemic approaches to”, or “systems views of” the sciences (Bertalanffy 1965). For Bertalanffy, who had contributed to the foundations of general systems theory since the 1930s, systems theory is a research paradigm (1968, pp. 90–1). Laszlo also called it “a perspective” (1975, p. 10).

4.3 Systems, Systems Theory, Cybersemiotics, and Cultural Semiotics

To approach semiotics from a systems theoretical perspective means to construct a bridge over the gulf that divides the natural sciences from the humanities (Brier 2015a). On biosemiotic grounds, the project of cybersemiotics has undertaken to construct such bridges in the study of cognition and communication. To bring systems theory within the scope of cultural semiotics has been a project of Nöth (1977) and W.A. Koch (1986), among others (Nöth 1990). The founder of general systems theory contributed to cultural semiotics only with an essay on the nature of the symbol (Bertalanffy 1965). Transdisciplinary bridges between systems theory and cultural semiotics can be found in Altmann and Koch’s (1998) volume Systems: New Paradigms for the Human Sciences. The volume opens perspectives on systems in science, social organizations, ideologies, knowledge domains, cognition, culture, music, language, and literature. In this volume, Bunge wrote on “Semiotic systems”, Koch on “Systems and the human sciences”, Wildgen on “Chaos, fractals, and dissipative structures in language”, Merrell on “Fractopoi, chaosmos, or merely simplicity-complicity”, and S. J. Schmidt on “A systems-oriented approach to literary studies”.

“System” is a notion that both brings together and separates systems theory and semiotics. In semiotics, the concept was central for the structuralists, not for Peirce (cf. Nöth 2018, p. 21). Whereas language was an organism in the nineteenth-century evolutionary linguistic conception of Wilhelm von Humboldt, its interpretation changed with Ferdinand de Saussure to “a system in which everything holds together”, as Meillet paraphrased Saussure’s idea (cf. Koerner 1996). For the structuralist, the system of language is “tightly closed” (“serré”), homogeneous, “well-defined in the heterogeneous mass of speech facts”. Language is not only “a complex mechanism”, but a mechanism characterized by “over-complexity” (Saussure 1916, pp. 14–15, 73; cf. Sofia 2017). In its structure, Saussure’s system is self-sufficient, insofar as it is conceived as entirely independent of any environmental factor. Even language change is “self-generated in the absence of certain external conditions” (1916, p. 150). The independence of the semiotic system from its environment became a structuralist dogma: “My definition of language presupposes the exclusion of everything that is outside its organism or system – in a word, of everything known as ‘external’”, declared Saussure (1916, p. 20). The Saussurean conception of the self-sufficiency of a system constitutes the major contrast between the structuralist and later systems theoretical concepts of the language system since Roman Jakobson (1959, p. 275). For Luhmann, by contrast, system and environment constitute each other mutually:

Relationship to the environment is constitutive in system formation. It does not have merely ‘accidental’ significance, in comparison with the ‘essence’ of the system. Nor is the environment significant only for ‘preserving’ the system, for supplying energy and information. For the theory of self-referential systems, the environment is, rather, a presupposition for the system’s identity, because identity is possible only by difference (Luhmann 1995b, pp. 176–177).

In semiotics, the neglect of the environment of semiotic systems only ended with Lotman’s theory of the semiosphere as the environment of any semiotic system. It is the theory of an environment conceived as a semiotic space in which the “codes of a culture” are “immersed” and which constitutes a “cluster of semiotic spaces and their boundaries” (Lotman 1990, pp. 123–125). For Lotman, such an environment is “necessary for the existence and functioning of languages”, but its structure is not complementary to the semiotic system, as Uexküll’s Umwelt is in relation to the “organism’s inner world” (Uexküll 1940). Instead, it is a space of otherness that serves to confirm and strengthen the system’s identity self-referentially within its own boundaries (cf. Nöth 2006, p. 260). While Luhmann adopts a post-Saussurean stance with respect to the concept of system, he remains somewhat closer to the structuralists in his use of the concept of difference. Luhmann’s remarks about the function of differences in systems seem to echo Saussure’s famous dictum that in the system of “language there are only differences” (1916, p. 120), although they are certainly no copy of it. Luhmann (1995b) writes:

In a certain way, difference holds what is differentiated together; it is different and not indifferent. To the extent that differentiation is unified in a single principle (e.g., as hierarchy), one can determine the unity of the system from the way in which its differentiation is constituted. Differentiation provides the system with systematicity; besides its mere identity (difference from something else), it also acquires a second version of unity (difference from itself) (p. 18).

The idea of difference as the power that holds the system together differs sharply from the poststructuralist conception of difference as reflected in Eco’s Deleuze-inspired reflections on “The sign as difference”. Here, difference no longer constitutes the system, but is, to the contrary, a wound in the system’s body. “The sign function exists by a dialectic of presence and absence, as a mutual exchange between two heterogeneities. Starting from this structural premise, one can dissolve the entire sign system into a net of fractures. The nature of the sign is to be found in the “wound” or “opening” or “divarication” which constitutes it and annuls it at the same time” (Eco 1984, p. 23). To refer to the sign system as a net of fractures instead of differences is certainly a poststructuralist perspective that overthrows the Saussurean dogma of the system in which differences hold everything together. It is equally incompatible with Luhmann’s systems philosophy of difference as the structure constituting the system.

4.4 Information, Meaning, and Form

Cybersemiotics distances itself from the probabilistic concept of information adopted by information theory in the tradition of Shannon and Weaver (1949). It integrates, instead, within its core, Charles S. Peirce’s semantic theory of information as outlined in Brier’s journal Cybernetics & Human Knowing (Nöth 2012). The founder of cybersemiotics first formulated his programmatic goal of restituting the semantic dimension inherent in the ordinary-language concept of information to the theoretical concept of information in the subtitle of his seminal book, Cybersemiotics: Why Information Is Not Enough (Brier 2008). Information is not enough because Shannon’s mathematical theory of information is a theory of signals and not of signs conveying meanings, let alone meanings that convey new knowledge to their interpreters.

Brier’s most comprehensive account of how information should be redefined on Peircean grounds is in his paper “Finding an information concept suited for a universal theory of information” of 2015. An appropriately revised approach to information should take into account “subjective experiential and meaningful cognition as well as intersubjective meaningful communication in nature, technology, society and life worlds”, writes Brier (2015c, p. 622). In this context, Brier proposes that a theory of information on Peircean grounds could make progress by incorporating elements of Luhmann’s systems theory. Indeed, Luhmann and Peirce do not only share a semantic concept of meaning but they also share “the idea of form as the essential component” of meaning (Brier 2015c, p. 631). However, Luhmann’s concept of meaning has more affinities with Saussure’s than with Peirce’s semantics (Zeige 2015). Key notions in the context of his reflections on meaning are difference and form, form being a synonym of “structure” for the structuralists. With Saussure, Luhmann shares the premise that a theory of meaning needs to exclude the idea of an object of reference. The sign is a form within a closed system that has no window to allow any view of reality since the only reality it knows is the system’s internal reality that the sign itself constructs through its form (Luhmann 1993, p. 50). With such definitions, both Saussure’s and Luhmann’s concepts of meaning connote an element of self-referentiality. Luhmann (1995a) acknowledges this characterization of his semantics explicitly:

The problem of self-reference reappears in the form of meaning. Every intention of meaning is self-referential insofar as it also provides for its own reactualization by including itself in its own referential structure as one among many possibilities of further experience and action. At any time, meaning can gain actual reality only by reference to some other meaning; to this extent there is no point-for-point self-sufficiency and also no per se notum (i.e., no matter-of-factness). Ultimately, the general problem of self-reference is duplicated, to the extent that in the domain of the meaningful it becomes unproductive for meanings to circulate as mere self-referentiality or in short-circuited tautologies (p. 61).

Distinctions are drawn by interpreters, conceived as autopoietic systems, as “receivers” who “construct” the meaning “from the information produced by the interpretation of signs, within certain frames that reality imposes” (Brier 2015c, p. 630). For the constructivist, the form of meaning is a form imposed on the sign interpreter’s mind by the sign system. Meaning, thus conceived, is “the form of the world”:

The form of the world […] consequently overlaps the difference between system and environment. Even the environment is given to them in the form of meaning, and their boundaries with the environment are boundaries constituted in meaning, thus referring within as well as without. […] The system’s differentiation with the help of particular boundaries constituted in meaning articulates a world-encompassing referential nexus […]. But the boundary itself is conditioned by the system, so that the difference between the system and its environment […] is thematized in self-referential processes (Luhmann 1995b, pp. 61–62).

For Peirce, by contrast, it is the object of a sign that conveys meaning, neither its interpreter nor the sign system; and form is what this object conveys through the meaningful sign. In an early paper in which the object of the sign is simply a “thing”, Peirce’s ideas concerning the dichotomy of form and meaning are these:

The meaning of a thing is what it conveys. Thus, when a child burns his finger at the candle, he has not only excited a disagreeable sensation, but has also learned a lesson in prudence. Now the mere matter cannot have given him this notion, since matter has no notions to give. […] What is the necessary condition to matter’s conveying a notion? It is that it shall present a sensible and distinct form. It must obviously possess a form, since formless matter is chaos […] It is the form of a thing that carries its meaning (Peirce 1861, p. 50)

Hence, it is not because human minds organize it by means of their signs and sign systems that the form of nature is intelligible. Nature is intelligible because it is itself rational insofar as its processes “are seen to be like processes of thought” (“The Critic of Arguments”, CP 3.422, 1892; cf. Brier 2015b). The human mind can perceive the forms of nature because these forms have evolved under, and are determined by, the same evolutionary laws that have also determined the evolution of the objects of cognition. These forms carry a meaning of their own, irrespective of the meanings that different cultures may attribute to them. The significant form of the sign consists in its semiotic potential, its power to represent its object and thereby determine an interpretant to represent its signification and denotation. About the sign as a significant form, Peirce also says that “it is a type, or form, to which objects, both those that are externally existent and those which are imagined, may conform, but which none of them can exactly be” (“What Pragmatism is”, CP 5.429, 1905).

4.5 Peircean Systems Theoretic and Cybersemiotic Perspectives on Signs

Peirce’s general semiotics is not a theory of sign systems, even though some thoughts on the nature of systems can be found in his prolific writings (Herbenick 1970). It is rather a theory of “the general conditions of signs being signs [… and] of the laws of the evolution of thought” (“The Logic of Mathematics”, CP 1.444, c.1896). Nevertheless, there are elements in Peirce’s concept of a sign that evince affinities with the notion of system as defined in systems theory. Some parallels become apparent in a comparison of what Peirce says about the nature of a sign with what systems theoreticians say about the nature of systems. A sign is in one sense not a system but an element of a sign system, but the signs to whose study Peirce dedicates his method of pragmatism are mainly concepts. If we keep in mind that for Peirce, a diagram is “an Icon of intelligible relations” (“Prolegomena to an Apology for Pragmaticism”, CP 4.532, 1906), it is not difficult to recognize that a concept in Peirce’s definition is a system in Bertalanffy’s definition. While the founder of general systems theory defines a system as “a set of elements standing in interrelation among themselves and with the environment” (Bertalanffy 1975, p. 159), Peirce defines a concept quite similarly as follows: “A concept is not a mere jumble of particulars, – that is only its crudest species. A concept is the living influence upon us of a diagram, or icon, with whose several parts are connected in thought an equal number of feelings or ideas” (“The Essence of Reasoning”, CP 7.467, 1893).

A first cybernetic principle characteristic of signs according to Peirce’s definitions is their agency by final causes. Peirce attributes agency by final causality both to living beings and to symbols (Santaella 1999). The purpose of a living system is to survive both individually and as a species. Maturana and Varela define this feature of life in terms of teleonomy as “the element of apparent purpose or possession of a project in the organization of living systems” (1980, p. 138). Teleonomy is thus a distinctive feature of autopoietic systems, “continuously revealed in the self-asserting capacity of living systems to maintain their identity through the active compensation of deformations” (Maturana and Varela 1980, p. 73). As conceived by Peirce, symbols are “living realities” (“The Law of Mind”, CP 6.152, 1891). Their purpose is self-replication insofar as they aim at creating interpretants and thus determine future thoughts. “The whole purpose of a sign is that it shall be interpreted in another sign”, says Peirce (“On Pragmatism”, CP 8.191, 1904). Both symbols and biological systems pursue their goals without some precisely predetermined trajectory. This distinguishes the causality by which they operate from the efficient causality that operates in simple machines, where efficient causes determine a fixed trajectory admitting no other exception than the system’s breakdown. As Peirce puts it, the laws of mind and of life “exhibit a striking contrast to all physical laws […]. A physical law is absolute, [… but] no exact conformity is required by the mental law” (“The Architecture of Theories”, CP 6.23, 1891).

The second cybernetic principle characteristic of both signs and systems is that of self-control. A cybernetic system has the capacity of self-control to the degree to which it keeps itself stable. For Peirce, self-control is also a characteristic of symbolic signs. This insight is not entirely new. Holmes (1966), Ransdell (1992), Queiroz and Loula (2011), and Antomarini (2017) have addressed some aspects of it. For the affinities between signs and living systems, see also Nöth (2014). In living systems, self-control manifests itself in the form of homeostasis. Homeostasis also occurs in processes of semiosis and in the evolution of semiotic systems in processes that counteract disturbances of the system (cf. Nöth 1977). For Peirce, self-control manifests itself in the purpose of symbols “to bring truth to expression” (“The Grammatical Theory of Judgment and Inference”, CP 2.444, c.1897). With this argument, Peirce expresses his conviction that, over the long term, the laws of inference are powerful enough to reveal distortions or falsifications and to bring truth out into the open. As Peirce put it, “though men may for a time persuade themselves that Caesar did not cross the Rubicon, and may contrive to render this belief universal for any number of generations, yet ultimately research – if it be persisted in – must bring back the contrary belief” (“Truth and Falsity and Error”, CP 5.565, 1901). Under these premises, Ransdell describes the cybernetic nature of the agency of symbols according to Peirce as follows:

All of this is reminiscent of the way in which even relatively simple cybernetic devices, such as thermostats and automatic pilots, continually tend toward a certain goal state in spite of the variations in the observational data which are fed into them, simply because they are so constructed as to move from whatever data they ingest towards the same end. This tendency is often called nowadays ‘equifinality’. The parallel in Peirce’s philosophy to the principles of construction of such self-control systems is to be found in his theory of inference, in which he correlates the hypothetical, deductive, and inductive types of inference into a unified total process, exemplifying continuous self-control and self-correction in such a way as to constitute an inherent tendency toward truth (Ransdell 1992, p. 171).

A scholar who has studied, with Peirce, cybernetic forms of self-control in other processes of semiosis is Larry Holmes. Extending the Peircean principle of self-correction in logic, the author also sees evidence of rational self-control in ethical conduct:

The process is the same for logical reasoning as for moral. “No sooner have we drawn a conclusion, than we begin to turn upon it a critic’s eye and to ask ourselves whether it really conformed to our logical ideals. […] Reasoning properly means controlled thought, and the only possible control consists in critical review, or self-confession” (MS 451, pp. 12–13). In cybernetic terminology, there is a corrective feedback, which tends, as the action is considered and repeated, to reduce the oscillations – “one’s violent wayward impulses” – and to bring the action closer to the ideal. There is also a similar process with respect to norms or ideals, until a stable one emerges; although Peirce appears to hold that in the overall development of reason no norm is entirely stable, which indeed seems consistent with an evolutionary pragmatism applied to a developing organism. As Norbert Wiener says, “The stable state of a living organism is to be dead” (Holmes 1966, p. 117).

4.6 How Autopoietic Systems Communicate

Brier (2006) proposes a cybersemiotic theory of communication integrating Peircean semiotics, autopoiesis theory, second-order cybernetics, and information science within a comprehensive model. The application of the theory of autopoiesis to the study of communication calls first for an answer to the fundamental question, Is communication possible between closed systems at all (cf. Nöth 2013a)? Luhmann himself recognized the problem and discussed it under the heading of “The improbability of communication” (1981). In Autopoiesis and Cognition (1980) Maturana and Varela inaugurated a new paradigm of communication in which organisms are defined as autopoietic (literally: ‘self-creating’) systems. They are autopoietic because they have the capacity of self-maintenance and of self-determined interaction with their environment (Varela 1979, 1981). A cell of any organism is an example. In the process of keeping itself alive, it has the capacity of continuous self-renewal in a process in which it only fulfils functions made possible by its own structure. This capacity defines the cell’s agency as self-referential (Maturana and Varela 1980). The opposite of an autopoietic system is an allopoietic (literally ‘other-created’) system. A motor vehicle operated by a driver is an allopoietic system since its output is determined by the driver’s external input. In contrast to Bertalanffy, who described living systems as open, Maturana and Varela conceive them as essentially closed. “Every autonomous system is organizationally closed” (Varela 1979, p. 58).

How can two systems interact in communication if both are closed to each other? Systems theory has long since distanced itself from the naïve Shannon-Weaver model of communication as the flow of information from a source to a destination, optimizable as to its efficiency in attaining the goal of congruence between the sender’s message and the receiver’s interpretation (Laszlo 1972, p. 251). The theory of autopoietic systems proposes a radically different model (Köck 1980). “The view of communication as a situation in which the interacting systems specify each other’s states through the transmission of information is either erroneous or misleading”, declares Maturana (1978, p. 54). Instead, communication is a cognitive process of interaction between structurally coupled autonomous organisms:

Autopoietic systems may interact with each other under conditions that result in structural (behavioral) coupling. In this coupling, the autopoietic conduct of an organism A becomes a source of deformation of an organism B, and the compensatory behavior of organism B acts, in turn, as a source of deformation of organism A, whose compensatory behavior acts again as a source of deformation of B, and so on recursively until the coupling is interrupted. In this manner, a chain of interlocked interactions develops. In each interaction, the conduct of each organism is constitutively independent in its generation of the conduct of the other, because it is internally determined by the structure of the behaving organism only; but it is for the other organism, while the chain lasts, a source of compensable deformations that can be described as meaningful in the context of the coupled behavior. These are communicative interactions (Varela 1979, p. 49).

A necessary prerequisite of communication, according to Maturana, is that “the domain of possible states of the emitter and the domain of possible states of the receiver must be homomorphic, so that each state of the emitter triggers a unique state in the receiver” (1978, p. 54). Only after “a behavioral homomorphism” has been established through processes of “ontogenic structural coupling” that create “consensual domains” (ibid.) can communication take place. Otherwise, “there is no behavioral homomorphism between the interacting organisms and, although individually they operate strictly as structure determined systems, everything that takes place through their interactions is novel and anti-communicative in the system that they constitute together, even if they otherwise participate in other consensual domains” (ibid.). Communication thus results in the expansion of consensual domains through autopoietic processes of self-generation and self-transformation. The autonomous mind of an organism develops through “an endless sequence of interactions with independent entities that select its changes of state but do not specify them” (Maturana and Varela 1980, p. 35). Congruence of the cognitive domains of emitter and receiver is not the goal of a communicative process, but consensus in consensual domains is the prerequisite of communication. Instead of information flow, there is a coupling of autopoietic systems, but these behave self-referentially.
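
The contrast between communication as information transfer and communication as structural coupling can be made concrete in a toy sketch. The following Python fragment is our own illustrative reduction of the passage quoted above, not a formalization offered by Maturana or Varela; the update rules and the numbers are arbitrary assumptions. Each system computes its next state by its own rule only, while the other’s conduct enters merely as a perturbation to be compensated, never as content that is transferred:

```python
def couple(state_a, state_b, rule_a, rule_b, steps=10):
    """Two structurally coupled systems: each computes its next state by its own
    rule; the other's conduct enters only as a perturbation, never as content."""
    for _ in range(steps):
        perturbation_for_a = state_b   # A registers B's conduct as a deformation
        perturbation_for_b = state_a   # and vice versa
        state_a = rule_a(state_a, perturbation_for_a)  # determined by A's structure alone
        state_b = rule_b(state_b, perturbation_for_b)  # determined by B's structure alone
    return state_a, state_b

# Each rule compensates deformations in its own, structure-determined way.
rule_a = lambda s, p: 0.9 * s + 0.1 * (10 - p)   # A's structure
rule_b = lambda s, p: 0.8 * s + 0.2 * (p / 2)    # B's structure

print(couple(1.0, 5.0, rule_a, rule_b))
```

That the coupled states converge towards a stable pair over repeated interactions corresponds, in this reduced picture, to the establishment of a consensual domain; neither rule ever reads or sets the other system’s state directly.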

The concept of system in Maturana and Varela’s theory of communication is ambiguous. In some contexts, the system is the individual organism that communicates; in others, the addresser and the addressee together constitute it. At any rate, not only the communicating organisms individually but also the system constituted by an addresser and an addressee form semiotically closed systems (Uexküll 1978, 1981; Köck 1980, p. 100). Varela’s account of what happens within the system of a communicator A interacting with a communicator B is radically constructivist. The agent in a communicative process is not an individual addresser or addressee but the autopoietic system constituted as such through the very situation in which they communicate. In a communicative process, two autonomous systems are mutually “coupled” in such a way that A cannot “inform” B. Hence, the transfer of information is actually impossible in communication:

If the coupled organisms are capable of plastic behavior that results in their respective structures becoming permanently modified through the communicative interactions, then their corresponding series of structural changes (which would arise in the context of their coupled deformations without loss of autopoiesis) will constitute two historically interlocked ontogenies that generate an interlocked consensual domain. […] Thus, communicative and linguistic interactions are intrinsically not informative: organism A does not and cannot determine the conduct of organism B, because due to the nature of autopoietic organization itself, every change that an organism undergoes is necessarily and unavoidably determined by its own organization (Varela 1979, p. 49).

The prototype of communication is dialogic exchange, conversation in the etymological sense of a “turning around together”, acknowledges Maturana (1978, p. 55). However, there can be no dialogic “exchange” of information under the premise of the autopoietic closure of systems that allow only coupling. The systems theoretical scenario of communication between autopoietic systems that cannot exchange information has affinities with J. von Uexküll’s Umweltlehre, whose principal argument is similar: organisms live in a “self-centered” environment (Kull 2010, p. 348) that prevents them from knowing signs other than those that their species-specific constitution allows them to cognize. Thure von Uexküll (1981, p. 14) has argued that Jakob von Uexküll’s (1940, p. 8) biosemiotic functional circle and even Wiener’s cybernetic control systems include an element of autonomous closure: both biological and cybernetic systems can only react to their environment according to their inner needs, which are the systems’ desired states.

4.7 Luhmann’s Radicalization of the Scenario of Self-Reference in Communication

Luhmann sides with the theory of communication as an autopoietic process when he adopts the argument that in communication there is no transfer of information but “a shared actualization of meaning” (1995b, p. 32). Meaning is merely actualized but not transmitted since communication presupposes an “underlying meaning structure” common to the addresser and addressee. Meaning is a necessary presupposition of communication since it forms the “shared background against which informative surprises may be articulated”. Hence, communication can only have the effect of “reciprocal regulations of surprises” (ibid.; cf. Brier 2008, p. 239). Luhmann’s argument culminates in the thesis, “What we have in the case of communication, then, is not the transfer of things but the allotment of surprises” (1995b, p. 32). The polemical style of this formulation is apparent to anyone who knows that nobody has ever defined communication as a transfer of “things” except the professors of Jonathan Swift’s Lagado, who wanted to substitute objects for words.

As provocative as the theories of closed systems that communicate without transferring any information may be, Niklas Luhmann’s systems theoretical account of communication appears still more provocative when its author postulates that communication is, if not impossible, then at least improbable. Without denying that communication is the prerequisite of human life, Luhmann (1981) speaks of the “improbability of communication” and argues that communication never really happens because minds are self-referentially closed systems. “What another has perceived can neither be confirmed nor repudiated, neither questioned nor answered. It remains enclosed within consciousness and opaque for the communication system as well as for another consciousness” (Luhmann 1992, p. 253). Unlike letters or packages that can be sent from a sender to a receiver, thoughts and meanings cannot be transmitted because the sender’s mind is a closed and therefore self-referential system.

Luhmann rejects the common-sense assumption that social action and human communication are due to “individuals or subjects to whom the action or communication can be attributed” (1992, p. 251). Not some individual, but “only communication can communicate” within a network of communication (1992, p. 251). Not only is each individual coupled in a communicative situation itself a closed system but the communication system that the communicating individuals constitute together is also a “completely closed system that creates the components out of which it arises through communication itself. In this sense, a communication system is an autopoietic system that (re)produces everything that functions as a unity for the system through the system itself” (Luhmann 1992, p. 254). Again, Luhmann likes to provoke. His argument of the impossibility of communication implies a paradox because, convinced of the impossibility of communication, Luhmann could hardly pretend that communicating his ideas to his readers could make any sense. It was Wittgenstein (1953) who anticipated this paradox when he argued: “But if you say: ‘How am I to know what he means when I see nothing but the signs he gives?’ then I say: ‘How is he to know what he means, when he has nothing but the signs either?’” (§ 504).

4.8 Self-Referential Communication from the Peircean Perspective

It has been argued that Peirce was a philosopher of signs and not one of communication, but in fact, the founder of pragmatism had much to say about the nature of communication (Nöth 2013b). Peirce’s semiotic theory of communication differs from the ones developed in information and systems theory, but not in all respects. For Peirce, information can neither be accounted for in terms of negentropy (Wiener) nor can it be conceived in terms of selection from a repertoire of possibilities (Luhmann). Instead, information is concerned with signification, denotation, and propositional knowledge (Nöth 2012). For Peirce, communication does not connect autopoietic systems constructing their own signs, meanings, and realities. The founder of pragmatism would have criticized this view as the error of considering “a mind as something that ‘resides’” in a brain, “something within this person or that, belonging to him” (“Lecture on Pragmatism III”, CP 5.128, 1903). Signs are not the intellectual property of those who replicate them. Nor are the real agents in communicative processes living systems selecting information units from a repertoire of semantic possibilities (cf. Luhmann 1992, p. 252). Organisms are perhaps coagents, but not the true agents in processes of semiosis. It is not the so-called sign producer that produces the meanings conveyed by the sign; it is the sign that carries and conveys them to an interpretant (Nöth 2009). A sign is not the product of a brain; it is only embodied and replicated there. In one of his definitions, Peirce says about the sign: “It is an element of cognition so embodied as to convey that cognition from the thought of the deliverer of the sign, in which that cognition was embodied, to the thought of the interpreter of the sign, in which that cognition is to be embodied” (“On the Logic of Quantity, and especially of Infinity”, MS 16:12, c.1895).

The thoughts of a literary author, for example, are in some sense much more outside the brain in which they were conceived than they are located within it (“Psychognosy”, CP 7.364, c.1902). The agents in a sign process are not even human subjects at all since by a sign process (“semiosis”), Peirce means “on the contrary, an action, or influence, which is, or involves, a coöperation of three subjects, such as a sign, its object, and its interpretant” (“A Survey of Pragmaticism”, CP 5.484, c.1907). On the other hand, there are indeed elements in Peirce’s theory of communication that foreshadow or even anticipate elements of the systems theoretical communication theories reviewed above, even though in a somewhat different guise. Here, we can only examine three of them: (1) the conception of the dialogue as a system, (2) the argument of the impossibility of communication without a common background of collateral experience, and (3) the argument of the impossibility of communication between two minds because these are closed to each other.

  1. (1)

    Dialogue. The conception of the dialogue as an autopoietic system that is more than the mere conjunction of two (or more) autonomous living systems is the following:

    Whenever we engage in social interactions that we label as dialogue or conversation, these constitute autonomous aggregates, which exhibit all the properties of other autonomous units. It is not easy to establish strict criteria for this view of conversations, for their closure is transient and mobile. However, this view is not more laden with difficulties than the predominant way of looking at it in terms of the performance and competence of single speakers (Varela 1979, p. 269).

Peirce’s first counterpart to this notion of conversation as a system of its own is in his concept of the commind or commens, “that mind into which the minds of utterer and interpreter have to be fused in order that any communication should take place” (“Letter to Lady Welby”, EP 2, p. 478). In the letter to Lady Welby of 1906, in which he introduced the notion, Peirce explains, “This mind […] consists of all that is, and must be, well understood between utterer and interpreter, at the outset, in order that the sign in question should fulfill its function. […] No object can be denoted unless it be put into relation to the object of the commens” (ibid.).

  2. (2)

    Collateral experience. “Collateral experience” and “collateral observation”, defined as the “previous acquaintance with what the sign denotes” (“Letter to William James”, EP 2, p. 494; 1909), are the concepts by means of which Peirce describes the necessary prerequisite of knowledge that an addresser and an addressee need to share in order to communicate successfully. Maturana (1978) formulates the same concept more abstractly and radically. In his words, “the domain of possible states of the emitter and the domain of possible states of the receiver must be homomorphic, so that each state of the emitter triggers a unique state in the receiver” (p. 54). Luhmann’s (1995b) corresponding notion is the one of communication as “a shared actualization of meaning” (p. 32). It expresses the idea that collateral knowledge, as the presupposition of successful communication, cannot be conveyed through the very process of communication of which it is itself a presupposition. Instead, knowledge of, and experience with, the object of the sign, the subject matter of communication, must precede its communication, wherefore this knowledge cannot be transmitted but only actualized. Peirce’s way of expressing this idea is the following:

    A Sign may bring before the Mind, a new hypothesis, or a sentiment, a quality, a respect, a degree, a thing, an event, a law, etc. But it never can convey anything to a person who has not had direct experience, or at least original self-experience of the same Object, collateral experience. It cannot convey a notion of the color red to a color-blind person, nor of Shakespearian diction to a person who does not know […]. (Fragment of a letter to Lady Welby, MS, reel L6, microfilm 617, p. 14, first version, c.1908)

  3. (3)

    Impossibility of communication between mutually closed minds. The argument of autopoiesis theory that communication is the interaction between mutually closed minds is reminiscent of the proverbial insight that we cannot read each other’s thoughts. Although his theory of communication is not one of communicating subjects but one of the agency of the sign and its effects on interpreters in the form of embodied interpretants, Peirce did write on the topic of the mutual inaccessibility of the minds of addressers and addressees of a message. In a still unpublished fragmentary passage of MS 318 of 1907, a manuscript that has been published only in part in EP2 under the title of “Pragmatism”, Peirce formulates this paradoxical insight in the following description of a dialogue between an utterer and an interpreter:

    But why should I particularly care who it may be that has uttered the sign that I am proposing to interpret? Answer: It is because the purpose of a sign is to supplement the ideas of the life of which I, the interpreter, am a part, − ideas which I have drawn directly from my own life, − with a copy of a scrap torn out of another’s life or rather from his panorama of life, his general view of all life, and I need to know just where on my panorama of universal life I am to insert a recopy of this copied scrap. Here note well that no sign can ever fully direct its interpreter where upon his own panorama any copied scrap from another that contains that same sign ought to be attached and the reason is obvious. The utterer’s sign can embody nothing but a bit of the utterer’s idea of his own life (MS 318, “Prag.”, Reel 7, microfilms no. 718–723, 1907).

The imaginary question brought before the pragmaticist’s mind concerns the autonomy of the interpreter, the pragmaticist – let us call her or him P – and the utterer U. Should P care about U’s thoughts, which P cannot read anyhow? U is hardly mentioned any more in the answer. After all, the message is not about the sender’s intentions, but about the purpose of the sign, which is only a fragment, a scrap, torn off from U’s life panorama. But the ideas conveyed by the sign are not just “received” by P, as in the scenario of a receiver who receives a message transmitted by a sender. To the contrary, these ideas, scraps torn from U’s life panorama, must be inserted within P’s own life panorama to become meaningful, but within this panorama, they are nothing but fragments, too. This scenario of interpretation as the insertion of a copy within the interpreter’s mental panorama is the one of an autopoietic interpreter who reconstructs a message self-referentially and anew within his or her own mind, as conceived in autopoiesis theory. What Peirce emphasizes in addition is that P’s reconstruction of the ideas embodied in the sign is necessarily as fragmentary as the sign’s embodiment of U’s life is.

Peirce then goes on to discuss and interpret the rhetorical implications of the communicative scenario of the imaginary communication between U and P. “In attempting to give the interpreter to understand to what part of the interpreter’s life it is to be attached, the utterer has several courses open to him, a real variety one should suppose. The problem before him will be to represent part of the interpreter’s life” (ibid.). The argument that U, in order to convey to P signs embodying fragments from U’s own life, needs not to convey fragments of U’s own life but to anticipate signs embodying fragments of P’s life, reads like a description of the rhetoric of communication as the coupling of two autopoietic systems. In order to establish a relation of coupling with P, U needs to imagine how U’s fragmentary signs may be inserted within the mosaic of P’s life panorama. Cognitively, however, U’s attempt to anticipate ideas of P’s life panorama is doomed to fail because P’s mind is a closed system. “The utterer, has no ideas but his own ideas, lives no life but his own life. Let him try to specify a place on the interpreter’s panorama, and he can only look over his own panorama, where he can find nothing but his own ideas”, argues Peirce (ibid.). P’s mind is closed within itself, and any attempt to transcend the boundaries of P’s own panorama to reach U’s mind can only refer back to the confines of P’s own mind. However, the mutual closure of U’s and P’s minds does not make communication impossible or unlikely. For U, the way out of the dilemma of the mutual closure of two minds that desire to communicate is to use his or her own panorama as the arena for staging the panorama that most likely represents P’s life. Such an imaginary scenario is not doomed to failure because the assumption is plausible that two minds work similarly. The operations of semiosis in one are to a certain degree an icon of the operations in the other. Thus,

on that panorama, he [i.e., the utterer] has, however, no difficulty in finding the interpreter’s life, that is to say, his idea of it, and among the interpreter’s ideas, that is, his own idea of the interpreter’s ideas, he finds an idea of that part of the interpreter’s panorama to which he conceives this scrap should be attached and this he expresses in his sign for the interpreter’s benefit. The latter has to go through a similar round-about process to find a place in his own life that seems to correspond with his idea of the utterer’s idea of his idea of his life and with all these changes of costume there is such imminent danger of mistake that the utterer would have done far better to express his own idea as well as he could convey it to the interpreter and allow the latter to find the place in his own life as he thinks of it. (ibid.)

Communication in this sense does have a touch of self-referentiality, if it is not even solipsism, because the dialogue of an utterer with an interpreter involves ultimately, if not two monologues, then at least the coupling of two inner dialogues. As such, communication has a characteristic of thinking in general. Thinking, too, “always proceeds in the form of a dialogue – a dialogue between different phases of the ego” (“Phaneroscopy”, CP 4.6, 1906). Communication, under such premises, should be possible.