1 Introduction

A cognitive agent living in an environment needs to interact with that environment and make sense of those interactions in order to produce efficient behaviour. Interactions are based on sensors that enable perception and actuators that enable action. What an agent perceives of the environment will vary depending on its sensory apparatus and on the context in which it needs to interpret and act upon incoming information. This is true for biological and artificial agents alike. Any cognitive agent is therefore faced with the problem of constructing, internally, its own version of reality. This does not itself imply the need for representation in the classical sense, as information may be encoded in an agent’s physical bodily structures, directly affecting its behaviour. It does, however, imply a form of information processing as the essential property of cognition: information about the environment is processed so as to produce behaviour that aids the survival of the agent.

In this chapter, we will argue that information processing in general, and in the context of cognition in particular, is best understood as computation. This leads to a wide notion of computation known as info-computation, in which all natural processes may be viewed and reasoned about as computational processes, or natural computing. This view of computation is broader than the notion of physical computation presented in [1], which claims that “Physical computing is the use of a physical system to predict the outcome of an abstract evolution”. On that view there is a human using a physical system, and the intrinsic physical processes going on in natural systems do not automatically qualify as computations, as they do in the info-computational framework of computing nature. The taxonomy of [2, 3] shows the relationships between different models of computation.

From our standpoint of computing nature, we attempt to formulate the process of reality construction in a cognitive agent as a network of networks of hierarchically more complex info-computational processes. This understanding of computation is essentially different from the classical view of early computationalism [4], which focussed on cognition understood as human thinking. Our notion of cognition follows Maturana and Varela’s view that cognition can be found in all living organisms, as “Living systems are cognitive systems, and living as a process is a process of cognition. This statement is valid for all organisms, with or without a nervous system” [5]. So cognition = life, as argued also by [6], and the evolution of life/cognition thus proceeds from abiogenesis via complexification through interactions. Life/cognition in our approach is characterised by information processes in networks of agents—starting from non-living parts of a cell to cells, organs, organisms and eco-systems—on different levels of organisation. It is interesting to note that this idea can be traced back to Aristotle’s concept of the soul as exactly what makes the difference between the animate and the inanimate. “The soul of an animate organism, in this framework, is nothing other than its system of active abilities to perform the vital functions that organisms of its kind naturally perform, so that when an organism engages in the relevant activities (e.g. nutrition, movement or thought) it does so in virtue of the system of abilities that is its soul” [7].

“The relation between soul and body, on Aristotle’s view, is also an instance of the more general relation between form and matter: thus an ensouled, living body is a particular kind of in-formed matter” (ibid). This view connects smoothly to the idea of information as a structure that defines the form (relations).

There is a widespread misconception that all computational theories of mind necessarily imply purely syntactical manipulation of symbols that objectively represent the (external) environment. Of course, no such thing is actually possible in a real agent: no symbolic representations of unknown entities could exist a priori; they would have to be constructed, and refined, during the lifetime of the agent through interaction with the environment. As the environment can never be observed directly, completely or even reliably, any representations, symbolic or not, would necessarily depend on the subject who constructs them. In other words, although the environment must be assumed to exist as a source of observations, what the cognitive agent has access to is its subjective relationship to the environment, as well as the received relationships of the community of practice of the social group, which agents typically acquire via language. The aforementioned problem of constructing symbolic representations from non-symbolic sensory data is known as the symbol-grounding problem. It is addressed by the theory of enactivism [5, 8], which states that all of cognition is a process of representation-free sensorimotor perception and response, with no intermediate symbol manipulation. Dynamical, analogue systems like neural networks are often classified under this category.

If computation is identified with the Turing machine model, there does indeed seem to be a dichotomy between computational and enactivist theories of cognition, and we would argue that enactivism is correct in its basic assumptions, see [9] p. 153. However, modern flavours of computationalism have expanded to include all forms of information processing as computation. Since information to a general agent need not come in the form of discrete symbols, there is no reason to assume that the processing of that information to produce behaviour necessitates the introduction of discrete symbols (although this is certainly useful when dealing with logic and mathematics, the original applications of computational models). In this view, neural networks and other dynamical systems are computational because they process information by changing their internal states and outputting previously inaccessible information. It follows that other forms of information processing, such as non-deterministic models, are also taken as computational, and the dichotomy between computational and enactivist theories of mind breaks down. A detailed overview of computational theories of cognition is given by Fresco [10], and a comparison between different computational models, including cognitive computing, in [2].

The chapter is structured as follows: we start by defining cognition as a process of living. The next step is to ask what constitutes minimal cognition within the framework of computing nature/natural computationalism. Many still believe today that the only truly cognitive beings are humans. Adopting Maturana and Varela’s view, we argue that it is much more explanatorily fruitful to see cognition as developing in degrees during evolution. The subsequent section elaborates on the info-computational structure of reality for a cognitive agent. Living beings are parts of eco-systems, and they appear in different networks that interact and sustain each other, so one section is dedicated to computation (information processing) in networks of agents. Organisms develop and evolve through morphological/morphogenetic processes that can be modelled as info-computation, so one section elaborates on such processes. After making this connection between growth of form, evolution and development in living (cognitive) systems, we return to the main theme of our study: the generation of reality for an agent, from raw data to information and cognition. In the subsequent section we connect computational models of natural mind to their physical substrate, all the way down to the quantum level. To meet the criticism that pancomputationalism by necessity implies panpsychism, such as that advanced by [11], we address the distinction between pancomputationalism/computing nature and panpsychism. We finish with the conclusion and some remarks on future work.

2 Inanimate Versus Living/Cognitive Agents

When discussing cognition as a bioinformatic process, we use the notion of agent, i.e. a system able to act on its own behalf [12]. In this sense agency can be ascribed even to such simple non-living objects as elementary particles, which act upon each other and change their states by exchanging force carriers (elementary particles). It is important to keep in mind that this definition of agency is more general than the majority of other definitions, such as the notion of agency presented in [13], as we include both non-living and living agents. Cruse makes a distinction between reactive systems and cognitive systems, and thus excludes not only inanimate agents but also the simplest organisms without a nervous system [14]. Froese and Ziemke, in addition to adaptivity, emphasise organismic constitutive autonomy as a necessary leading principle of enactive cognition [15]. It is worth mentioning that there is a gradual transition between inorganic systems and reactive living systems. The main difference between inanimate (material) agency and living agency is that a living agent acts in interplay with its environment so as to persist. (As a side note: apoptosis, the process of programmed cell death, is an exception to this general tendency of life to survive. Apoptosis can be part of the development of an organism, such as the formation of fingers, where the cells “in between” recede. Apoptosis can also be found in messenger bacteria losing their lives for the good of the colony, or it may be the consequence of damage or disease of a cell. In the big picture, it is a tautology that living beings are actively engaged in their own survival.)

Inanimate agency is found in agents whose existence does not depend on their active engagement. An atom’s interactions with the world do not actively support its continued existence. For living agents, on the other hand, it is characteristic that they have the possibility of choosing among anticipated future outcomes, and they in general tend to make choices so as to survive. They adapt to the dynamical environment and learn. Living beings are open thermodynamic systems that actively “work” on their own survival. This is what Maturana and Varela termed “autopoiesis.” Agency in the case of living agents has been explored in [16, 17]. We have no reason to believe that inanimate objects possess any anticipation. However, even the simplest living organisms such as bacteria exhibit the ability to anticipate, as shown in the work of Ben Jacob et al. [18, 19].

This sharp differentiation between living and non-living agency might remind one of vitalism, the belief that there is a fundamental difference between non-living and living entities, which vitalists ascribed to an unexplained “élan vital” possessed by life. However, this is not the case. Living agency originates as an emergent capacity of specific autopoietic, operationally closed [20] structures that have circular self* properties (self-organisation, self-reflection, self-healing etc.). These properties are based on common physics and chemistry, and there is nothing mysterious about them. On the contrary, with the info-computational framework we hope to contribute to the understanding of the morphogenetic processes that lead from inorganic to organic matter—abiogenesis, and thus the transition from inanimate to living agency.

This chapter addresses the question of reality for different classes of cognitive/living agents, suggesting that the study of cognition in nature helps us construct inanimate (artifactual) cognitive agents. The world as it appears to a cognising agent depends on the type of interaction through which the agent acquires information [12]. Agents communicate by exchanging messages (information) that help them coordinate their actions [21], based on the (partial) information they possess about the world, originating in their own inputs and information processing, which they share as part of distributed, social cognition. Increasing levels of cognition developed in living organisms during evolution, starting from basic behaviour such as that found in bacteria and later in insects, to increasingly complex behaviour in higher organisms such as mammals.

It is interesting to notice that, even though relatively simple organisms such as insects have a nervous system and brain, they lack the limbic system that in humans controls emotional response to stimuli, suggesting that simple organisms do not process physical stimuli emotionally. Bacteria communicate through the exchange of molecules, by which they can display complex behaviour through quorum sensing [22], which activates gene expression in a bacterium based on the strength of a signal from the rest of the swarm. The same mechanisms are still in use further up the evolutionary ladder, with ants and honey bees capable of quorum sensing through pheromone exchange [23]. In the body of a multicellular organism, neurotransmitters and hormones exchanged between individual cells play a crucial role in the organism’s function, revealing the principle that later steps of evolution build upon the preceding ones. Interestingly, emotion appears at a higher level of evolution, mediated by neurotransmitters. It has been shown to have a modulating effect on learning [24], an important ability of adaptive agents in a dynamic environment. Part of the explanation as to why emotion appears gradually along the evolutionary ladder, from simple arousal and primal fear to complex social emotions, may thus be that it serves to enhance the adaptability and learning capacities of an organism in an increasingly complex and unpredictable environment—for as the capability of the organism to perceive and interact with its environment expands and becomes more complex, so does the reality for that organism, in a circular process (ibid).
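As a rough illustration of the density-dependent threshold logic of quorum sensing described above, the following sketch (with hypothetical parameter values, not drawn from the cited studies [22]) treats each bacterium as secreting a shared signalling molecule; gene expression switches on only once the accumulated signal exceeds a threshold, which happens only for a sufficiently large population.

```python
# Minimal quorum-sensing sketch: hypothetical parameters, for illustration only.
# Each cell secretes a signalling molecule (autoinducer); when the shared
# concentration crosses a threshold, every cell switches on gene expression.

def simulate_quorum(n_cells, secretion_rate=0.001, decay=0.05,
                    threshold=1.0, steps=200):
    signal = 0.0
    history = []
    for t in range(steps):
        signal += n_cells * secretion_rate      # all cells secrete
        signal -= decay * signal                # diffusion/degradation
        expressing = signal >= threshold        # quorum reached?
        history.append((t, round(signal, 3), expressing))
    return history

if __name__ == "__main__":
    for n in (10, 100):
        final = simulate_quorum(n)[-1]
        print(f"{n} cells -> signal {final[1]}, gene expression on: {final[2]}")
```

With these toy values a colony of 10 cells never reaches the quorum, while 100 cells do, which is the essential point: the behaviour of the individual is switched by information produced collectively.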

The framework for the discussion in this chapter is computing nature in the form of info-computationalism [25]. It takes reality to be information for an agent, with the dynamics of information understood as computation. Information (for an agent) is taken to be a structure, while computation is its dynamics. Information is observer relative, and so is computation [12, 26, 27]. There is no information without physical implementation [28–30], and this approach is in line with the view of embodied embedded cognition [31, 32].

When speaking of general computational systems, we refer by agent to an entity capable of acting upon the environment through its output (which must take the form of physical interaction), as is usual in agent-based models. In the case of a cognising agent, we refer to an agent that takes actions according to some intrinsic goal(s). This should therefore not be interpreted as implying that a molecule in a molecular network, while acting in the sense of an agent in an agent-based model, makes a choice between different actions (that would imply panpsychism, which is a non-scientific and non-naturalist doctrine). This can be compared with the actor model of concurrent computation [33], where the “actors” are the universal computational primitives. Responding to a message it receives, an actor can send messages, create new actors, and determine how to respond to the next message. A more detailed distinction between general computational agents, actors and cognising agents, and their thorough characterisation, deserves elaboration and remains to be done in forthcoming work, although a treatment of the subject can be found in [34].
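To make the actor model concrete, here is a minimal sketch (illustrative only, with a made-up counter example; not Hewitt’s formalism [33] itself) in which each actor has a mailbox and, in response to a message, can send messages, create new actors, and change how it will respond to the next message.

```python
# Minimal actor-model sketch (illustrative, not a production framework).
# Each actor has a mailbox; on receiving a message it may send messages,
# create new actors, and decide how to handle the next message.
from collections import deque

class Actor:
    def __init__(self, system, behaviour):
        self.system = system
        self.mailbox = deque()
        self.behaviour = behaviour          # current message handler

    def send(self, target, message):
        target.mailbox.append(message)
        self.system.schedule(target)

    def create(self, behaviour):
        return self.system.create(behaviour)

    def become(self, behaviour):            # change response to next message
        self.behaviour = behaviour

class ActorSystem:
    def __init__(self):
        self.run_queue = deque()

    def create(self, behaviour):
        return Actor(self, behaviour)

    def schedule(self, actor):
        self.run_queue.append(actor)

    def run(self):
        while self.run_queue:
            actor = self.run_queue.popleft()
            while actor.mailbox:
                actor.behaviour(actor, actor.mailbox.popleft())

# Example: a counter actor that updates its behaviour after each message.
def counter(count=0):
    def handle(self, message):
        print(f"received {message!r}, count is now {count + 1}")
        self.become(counter(count + 1))
    return handle

if __name__ == "__main__":
    system = ActorSystem()
    a = system.create(counter())
    root = system.create(lambda self, msg: None)
    root.send(a, "ping")
    root.send(a, "pong")
    system.run()
```

The point of the comparison stands out in the code: an “agent” here is nothing more than a locus of message exchange and local state change, with no implication of choice or experience.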

A related question is what we can learn from nature in building artifactual cognitive agents with different degrees of cognition. Is synthetic cognition possible at all, given that cognition in living organisms is a deeply biologically rooted process? Recent advances in natural language processing present examples of developments towards machines capable of “understanding natural language”, “translating” and “speaking” so that humans can understand. However, the mechanisms behind these capacities in machines are very different from the biological ones. Along with reasoning, language is considered a high-level cognitive activity of which only humans are capable. Would it be possible to implement other cognitive functions in a machine, including consciousness, which many believe is the ultimate and exclusively human characteristic? Can AI “jump over” evolutionary steps in the development of cognition and decouple it from metabolism and reproduction? For non-living cognitive agents such as intelligent robots, cognitive computing is being developed with the hope of implementing cognition on a basis different from the biological one.

3 Minimal Cognition

For naturalism, nature is the only reality; in short: there are no miracles [35]. It describes nature through its structures, processes and relationships using a scientific approach. Naturalism studies the evolution of the entire natural world, including the life and development of humanity as a part of nature. Social and cultural phenomena are studied through their physical manifestations. If cognition is a result of natural physico-chemical and biological processes, the important question is: what is minimal cognition? In other words: what are the minimum requirements that we should place on a system to call it cognitive? We mentioned already that for Maturana and Varela minimal cognition corresponds to minimal life as an emergent property of autopoietic molecular processes. Similar approaches can be found in [16, 36], who developed an understanding of minimal life as “a distributed, emergent property based on an organised network of reactions and/or processes” [37]. This network of chemical processes can be modelled as distributed natural computation.

Recently, empirical studies have revealed an unexpected richness of cognitive behaviours (perception, information processing, memory, decision making) in organisms as simple as bacterial swarms and colonies [22, 38, 39]. Even though single bacteria are small, sense only their immediate environment, and live too briefly to memorise a significant amount of data, they are still living organisms and thus possess a degree of cognition in the sense of Maturana and Varela. Bacterial colonies, swarms and films formed by the self-organisation of bacteria already exhibit a surprising complexity of behaviours that can undoubtedly be characterised as social cognition. It is a rather new insight that bacteria possess complex cognitive capacities based on information communication that resembles a (chemical) language.

Like bacteria and similar organisms without a nervous system (such as slime mould, the multinucleate or multicellular Amoebozoa, which have recently been used to compute shortest paths [40]), plants are typically thought of as living systems without cognitive capacities. However, plants too have been found to possess memory (in their bodily structures that change as a result of past events), the ability to learn (plasticity, the ability to adapt through morphodynamics), and the capacity to anticipate and direct their behaviour accordingly. Plants are argued to possess rudimentary forms of knowledge, according to [41–43]. A special field of study is dedicated to plant signalling, which involves information communication by chemical or electrical signals within a plant or between plants.

Plant signalling is a special case of biocommunication, which has been studied by Witzany [21] as signal transduction processes among cells, tissues, organs and organisms in prokaryotes (bacteria), protoctists (eukaryotic microorganisms), fungi, plants, animals and human beings. Within the info-computational framework, communication is a special case of computation, as shown in [44], where communication is an interaction/computation going on between a system and its environment via sign/signal/message/information exchange. Witzany’s work on biocommunication is of great interest for understanding how information processing occurs in different kinds of organisms. Biocommunication (present on different levels of a living organism: intra- and intercellular, interorganismic and transorganismic (trans-species)) also includes the genetic code with its evolution and development (editing). In this context, viruses are very interesting as they and their parts represent the most abundant genetic matter on Earth, and serve as tools for genome formatting in “natural genetic engineering” [21]. In a similar vein, [45] argues that cells, in order to survive, cannot act blindly, but in a cognitive way, which means that “they do things based on knowledge of what’s happening around them and inside of them. Without that knowledge and the systems to use that knowledge they couldn’t proliferate and survive as efficiently as they do.”

In this chapter we take cognition to be the totality of processes of self-generation, self-regulation and self-maintenance on different levels of organisation that enables organisms to adapt and survive using information and matter/energy from the environment. The understanding of cognition as it appears in degrees of complexity in living nature modelled as info-computation can also help us better understand the step between inanimate and animate matter from the first autocatalytic chemical reactions to the first autopoietic proto-cells.

Minimal cognition builds upon minimal biological agency, which for Kauffman and Clayton requires that: “such a system should be able to reproduce with heritable variation, should perform at least one work cycle, should have boundaries such that it can be individuated naturally, should engage in self-propagating work and constraint construction, and should be able to choose between at least two alternatives” [46].

4 Natural Info-computation in Networks of Agents. Computation All the Way Down to Quantum

The notion of computation in the info-computational framework of computing nature refers to the most general concept of intrinsic computation, that is, spontaneous computation processes in nature, which are used as the basis of the specific kinds of designed computation found in computing machinery [47]. Intrinsic (natural) computation includes quantum computation [47, 48], molecular processes of self-organisation, self-assembly, developmental processes, gene regulation networks, gene assembly, protein–protein interaction networks, biological transport networks, eco-system networks, symbol manipulation and language processing in distributed networks of agents, and the like. Computation can be both continuous (such as that found in dynamic systems) and discrete (as found in Turing machines). Some info-computational processes in living agents are sub-symbolic (signal processing), and some are symbolic (like the symbol manipulation of languages) [49].

Info-computationalism, which is a variety of computing nature/pancomputationalism, models physical nature as spontaneously performing different kinds of computations (through information dynamics) at different levels of organisation. Natural computation is intrinsic, and it is specific to a given physical system. The intrinsic computation(s) of a physical system can be used for designed computation [47, 48], such as that found in computational machinery, which is only a small part of the computation found in nature. Sometimes pancomputationalism is wrongly accused of being vacuous. Natural computationalism is not vacuous, in the same way as physics is not vacuous when it claims that the entire physical universe is made of matter/energy. When physical entities exist in nature, unobserved, they are part of the Ding an sich, in Kant’s philosophy a thing as it is in itself. How do we know that they exist? We find out through interactions. What are interactions? They are information exchanges. Epistemologically, constraints or boundary conditions are also information for a system. The bottom layer of the computational universe is the bottom layer of its material substrate that is currently considered empirically justified.

Within the info-computational framework, computation on a given level of organisation of information represents a realisation/actualisation of the laws that govern interactions between constituent parts. On the basic level, computation is the manifestation of causation in the physical substrate. In each next layer of organisation, a set of rules governs the system’s switch to the new emergent regime. It remains to be established exactly how this process goes on in nature and how emergent properties arise. Research in natural computing is expected to uncover these mechanisms. In the words of Rozenberg and Kari: “(O)ur task is nothing less than to discover a new, broader, notion of computation, and to understand the world around us in terms of information processing” [50]. From the research in complex dynamical systems, biology, neuroscience, cognitive science, networks, concurrency and more, new insights essential for the info-computational universe may be expected in the years to come.

Among models of computation, natural computing has a particular place, as “the field of research that investigates both human-designed computing inspired by nature and computing taking place in nature” [51]. It includes, among others, the areas of cellular automata and neural computation, evolutionary computation, molecular computation, quantum computation, nature-inspired algorithms and alternative models of computation. An important characteristic of research in natural computing is that knowledge is generated bi-directionally, through the interaction between computing and the natural sciences. While the natural sciences are adopting tools, methodologies and ideas of information processing, computing is broadening the notion of computation, recognising information processing found in nature as computation [50, 51]. Based on this, Denning argues that computing today is a natural science [52]. Computation found in nature is understood as a physical process, where nature computes with physical bodies as objects. Physical laws govern the processes of computation, which necessarily appear on many different levels of organisation of physical systems.

With its layered computational architecture, natural computation provides a basis for a unified understanding of phenomena of embodied cognition, intelligence and knowledge generation [53, 54].

Info-computation can be modelled as a process of exchange of information in a network of informational agents. Agent-based computational models are a variety of the parallel distributed processing (PDP) model [49]. What can be identified as representation here is a distributed network of activities.

Unlike the traditional computer-model, the PDP-approach treats mental representations not as discrete symbols but rather as distributed activity in a network. In this case, a particular representation can be identified by its relations to other representations; each of them can be depicted as a vector in a multidimensional space. Another obvious analogy concerns the development of suchlike systems. According to the holistic approach, this development is based on a process of differentiation which occurs with increasing experience in one field. This is just what happens with PDP networks if their training in one area proceeds. While their ability to differentiate is quite low at the beginning it increases in the learning-process. [55]

Here “mental representations” can be replaced with representations of the world of any kind that can be found in cognitive agents. It is not self-evident that networked activity representations in a bacterium should be called “mental”; they are distributed activity in its molecular network, not in a nervous system. We can nevertheless ascribe learning abilities to such simple systems as bacterial swarms that interact with the environment and learn from previous experiences, that is, adapt continuously on the basis of molecular interactions as activities in molecular networks. Their memory consists in their bodily structures and in changes of genotype and phenotype.
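To make the idea of distributed representation concrete, the following sketch (purely illustrative; the activation vectors and labels are made up, not taken from any PDP model) treats each representation as a vector of network activities and identifies it by its relations, here cosine similarities, to other representations, in the spirit of the description quoted above.

```python
# Illustrative sketch: representations as distributed activity vectors,
# identified by their relations (cosine similarity) to other vectors.
# The labels and vectors below are hypothetical, for illustration only.
import numpy as np

representations = {
    "light":    np.array([0.9, 0.1, 0.3, 0.0]),
    "sugar":    np.array([0.1, 0.8, 0.2, 0.4]),
    "nutrient": np.array([0.2, 0.7, 0.3, 0.5]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for name, vec in representations.items():
    relations = {other: round(cosine(vec, v), 2)
                 for other, v in representations.items() if other != name}
    print(name, "->", relations)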

One sort of concurrent natural computation is found on the quantum-mechanical level, where the computational agents are elementary particles and the messages (information carriers) exchanged are force carriers, while different types of computation can be found on other levels of organisation, such as the molecular level, where molecules, atoms and elementary particles are exchanged as messages. In biology, information processing goes on in intracellular networks, cells, tissues, organs, organisms and eco-systems, with corresponding agents and message types. In social computing the message carriers are complex chunks of information such as signs, words or sentences, and the computational nodes (agents) can be organisms or groups/societies [27].

4.1 Natural Info-computation as Morphological/Morphogenetic Computation

Back in 1952, Turing wrote a paper that may be considered a predecessor of natural computing. It addressed the process of morphogenesis, proposing a chemical model as the explanation of the development of biological patterns such as the spots and stripes on animal skin [56]. Turing did not claim that the physical systems producing patterns actually performed computation. Nevertheless, from the perspective of computing nature, we can argue that morphogenesis is a process of morphological computing. The physical process, though not computational in the traditional sense, represents natural morphological computation. An essential element in this process is the interplay between the informational structure and the computational processes, information self-structuring and information integration, both synchronic and diachronic, unfolding on different time and space scales in physical bodies. The informational structure presents a program that governs the computational process [57], which in turn changes the original informational structure, obeying/implementing/realising physical laws.
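A minimal sketch of a reaction–diffusion system in the spirit of Turing’s model follows. It uses the well-known Gray–Scott variant with illustrative parameters rather than Turing’s original equations from [56], so it should be read as a toy demonstration of how purely local chemical interactions generate global pattern, not as a reproduction of his paper.

```python
# Toy reaction-diffusion (Gray-Scott) sketch: two chemical concentrations u, v
# diffuse and react on a grid; local interactions alone produce global patterns.
# Parameter values are illustrative choices commonly used in demonstrations.
import numpy as np

def laplacian(z):
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

n, steps = 100, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065

u = np.ones((n, n))
v = np.zeros((n, n))
u[45:55, 45:55] = 0.50          # small perturbation seeds the pattern
v[45:55, 45:55] = 0.25

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

print("u range:", round(float(u.min()), 3), "to", round(float(u.max()), 3))
```

Here the “program” is the informational structure itself (the current concentration fields plus the physical update rule), which is exactly the sense in which morphogenesis can be read as morphological computation.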

Morphology is the central idea in understanding the connection between computation (morphological/morphogenetic) and information. That which is observed as material on one level of analysis represents morphology on the lower level, recursively. So water as material presents arrangements of [molecular [atomic [elementary particle]]] structures. Thus, “‘Hardware’ [is] made of ‘software’” [58].

Info-computationalism describes nature as informational structure—a succession of levels of organisation of (natural) information. Morphological/morphogenetic computing on that informational structure leads to new informational structures via processes of self-organisation of information. Evolution itself is a process of morphological computation on a long-term scale. It is also relevant and instructive to study morphogenesis of morphogenesis (meta-morphogenesis) as done by Sloman in [59].

One of the important characteristics of computing in biological systems is that it is approximate, robust and just “good enough”, given the finite resources of time, energy and space. That is why we humans have problems multiplying big numbers but can make reasonably quick assessments of the order of magnitude. This way of computing developed evolutionarily, through learning processes in living agents. Valiant [60] describes this phenomenon with ecorithms—learning algorithms that perform probably approximately correct (PAC) computation. Unlike the present paradigm of computing machinery, the results are not perfect but just good enough for an agent’s purpose.
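For reference, a standard sample-complexity bound from PAC learning theory (for a finite hypothesis class in the realisable case; this is textbook material offered as background, not a formula from [60]) makes the “probably approximately correct” idea precise: with probability at least $1-\delta$ the learned hypothesis has error at most $\varepsilon$ once the number of examples $m$ satisfies

\[
m \;\ge\; \frac{1}{\varepsilon}\left(\ln |H| + \ln \frac{1}{\delta}\right),
\]

where $|H|$ is the size of the hypothesis class. The point, in the spirit of ecorithms, is that “good enough” guarantees can be bought with modest resources rather than with exhaustive computation.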

All living beings possess cognition (understood as all the processes necessary for an organism to survive, both as an individual and as a part of a social group—social cognition), in different forms and degrees, from bacteria to humans. Cognition is based on agency; it would not exist without agency. The building block of life, the living cell, is a network of networks of processes, and those processes may be understood as computation. It is not just any computation, but exactly that biological process itself, understood as information processing. Now one might ask: what would be the point in seeing metabolic processes or growth (morphogenesis) as computation? The answer is that we try to connect cell processes to the conceptual apparatus of concurrent computational models and information exchange that has been developed within the field of computation and not within biology—we talk about “executable cell biology” [61]. The info-computational approach gives something substantial that no other approach gives to biology, and that is the possibility of studying the real-time dynamics of a system.
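To indicate what an “executable” model of a cell process can look like, here is a minimal sketch of a synchronous Boolean network; the three genes and their regulatory rules are hypothetical, chosen only to show the style of model (Boolean networks are one of the formalisms used in this area), not taken from [61]. Running it exposes the dynamics: the trajectory of states and the attractor it settles into.

```python
# Minimal "executable biology" sketch: a synchronous Boolean gene network.
# Genes A, B, C and their update rules are hypothetical, for illustration only.
def step(state):
    a, b, c = state
    return (
        not c,        # A is repressed by C
        a,            # B is activated by A
        a and b,      # C needs both A and B
    )

state = (False, False, False)
seen = []
while state not in seen:          # iterate until the trajectory repeats
    seen.append(state)
    state = step(state)

cycle_start = seen.index(state)
print("transient:", seen[:cycle_start])
print("attractor:", seen[cycle_start:])
```

Even in this toy form, the model is something one can run, perturb and compare against observed behaviour, which is the practical payoff claimed above.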

Processes of life, and thus of cognition, are critically time dependent. Concurrent computational models are the field that can help us understand real-time, interactive, concurrent networked behaviour in the complex systems of biology and their physical structures (morphology). That is the pragmatic reason why it is well justified to use the conceptual and practical tools of info-computation to study living beings. Of course, in nature there are no labels saying: this process is computation. We can see as computation, conceptualise in terms of computation, model as computation and call computation any process in the physical world. In doing so we both expand our understanding of natural processes (physical, chemical, biological and cognitive) and enrich our concept of computation.

5 Info-computational Nature of Reality for a Cognitive Agent

Computational naturalism (computing nature, naturalist computationalism, pancomputationalism) is the view that all of nature is a huge network of computational processes, which, according to physical laws, computes (dynamically develops) its own next state from the current one [62]. Among representatives of this approach are Zuse, Fredkin, Wolfram, Chaitin and Lloyd, who proposed different varieties of computational naturalism. According to the idea of computing nature, one can view the time development (dynamics) of physical states in nature as information processing of natural computation. Such processes include self-assembly, self-organisation, developmental processes, gene regulation networks, gene assembly, protein–protein interaction networks, biological transport networks, social computing, evolution and similar processes of morphogenesis (creation of form). The idea of computing nature and the relationships between the two basic concepts of information and computation are explored in more detail in [12, 26, 27].

Talking about computing nature, we can ask: what is the “hardware” for this computation? The surprising answer is: the software on one level of organisation of information is the hardware for the next higher level, in the sense of Kampis’s self-modifying systems [57]. And on the basic level, the “hardware” is potential information, the structure of the world that one usually describes as matter–energy [63]. As cognising agents we interact with nature through information exchange, and we experience the world as information.

This view of nature as informational structure has been developed as informational structural realism by Luciano Floridi [64] and Kenneth Sayre [65]. It is a framework that takes information as the fabric of the universe (for an agent). Even the physicists Zeilinger [66] and Vedral [67] suggest that information and reality are epistemologically one. For a cognising agent in the informational universe, the dynamical changes of its structures make it a huge computational network of networks [12]. The substrate, the “hardware”, is information that defines the data structures on which computation proceeds. It is important to note the difference between the everyday use of the term information as “the communication or reception of knowledge or intelligence” (Webster’s dictionary) and Shannon’s technical definition of information [68]. Shannon’s quantitative model of information transmission between sender and receiver through a noisy channel is agnostic about the meaning of the information being transmitted, whereas our model of information is intended to apply to the different phases of information communication and processing from the point of view of a cognising agent that makes sense of information.
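For comparison, Shannon’s technical measure quantifies only the average uncertainty removed by a message, irrespective of what the message means; for a source emitting symbols $x$ with probabilities $p(x)$ it is the familiar entropy (a standard textbook formula, cited here only as background):

\[
H(X) = -\sum_{x} p(x) \log_2 p(x) \quad \text{bits.}
\]

Nothing in this quantity refers to the agent for whom the message makes a difference, which is exactly the gap the info-computational account aims to address.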

The info-computational framework [54, 69, 70] is a synthesis of informational structural realism and natural computationalism (computing nature, pancomputationalism)—the view that the universe computes its own next state from the previous one [62, 71]. It builds on two basic complementary concepts: information (structure) and computation (the dynamics) as described in [72].

The natural world for a cognising agent exists as potential information, corresponding to Kant’s das Ding an sich. Through interactions, this potential information becomes actual information for an agent, “a difference that makes a difference” [73]. Similarly, Shannon describes the process as the conversion of latent information into manifest information [74]. Even though Bateson’s definition of information is the most widely cited one, there is a more general definition that subsumes Bateson’s definition, including the fact that information is relational in nature:

Information expresses the fact that a system is in a certain configuration that is correlated to the configuration of another system. Any physical system may contain information about another physical system. [75]

Combining the Bateson and Hewitt insights: at the basic level, information is a difference in one physical system that makes a difference in another physical system. Here “physical system” refers to systems as studied in physics. This follows from the fact that there is no information without physical implementation [28–30].
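The correlation between the configurations of two systems invoked in this definition can, if one wants a quantitative handle on it, be expressed by the standard mutual information of their state variables $X$ and $Y$ (again textbook material, offered only as one possible formalisation, not as a definition from [75]):

\[
I(X;Y) = H(X) + H(Y) - H(X,Y),
\]

which is zero exactly when the two systems are statistically independent, that is, when neither carries information about the other.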

6 From Raw Data to Information to Knowledge

Cognition can be seen as a result of processes of morphological computation on the informational structures of a cognitive agent in interaction with the physical world, with processes going on at both sub-symbolic and symbolic levels. This morphological computation establishes connections between an agent’s body, its nervous (control) system and its environment [1, 32, 76]. Through embodied interaction with the informational structures of the environment, via sensorimotor coordination, information structures are induced in the sensory data of a cognitive agent, thus establishing perception, categorisation and learning. Those processes result in constant updates of memory and other structures that support behaviour, particularly anticipation. Embodied and correspondingly induced informational structures (in the sense of Sloman’s virtual machine) are the basis of all cognitive activities, including consciousness and language as a means of maintenance of “reality”.

An essential element in this process is the interplay between the informational structures and the computational processes—information self-structuring and information integration, both synchronic and diachronic, going on on different time and space scales [26].

From the simplest cognising agents such as bacteria to complex biological organisms with a nervous system and brain, the basic informational structures undergo transformations through morphological computation (developmental and evolutionary form generation).

Here an explanation is in order regarding cognition as defined in the general way of Maturana and Varela, who take it to be synonymous with life [5, 77]. All living organisms possess some degree of cognition, and for the simplest ones, like bacteria, cognition involves metabolism and (our addition) locomotion [12]. This “degree” is not meant as a continuous function but as a qualitative characterisation that cognitive capacities increase from the simplest to the most complex organisms. The process of interaction with the environment causes changes in the informational structures that correspond to the body of an agent and its control mechanisms, which define its future interactions with the world and its inner information processing. An agent’s informational structures become semantic information only in the case of intelligent agents.

Even though we are far from having a consensus on the concept of information, the most general view is that information is a structure consisting of data, where a datum is a difference that makes a difference for an agent. Floridi [64] gives the following definition of a datum: “In its simplest form, a datum can be reduced to just a lack of uniformity, that is, a binary difference.” Bateson’s “difference that makes the difference” [73] is a datum in that sense. Information is both the result of observed differences (differentiation of data) and the result of the synthesis of those data into a common informational structure (integration of data), as argued by Schroeder in [78]. In the process of knowledge generation an intelligent agent moves between those two processes—differentiation and integration of data, see [25]. For potential information to become actual there must exist an agent from whose perspective the relational structure between the world as potential information and its actualisation in the agent is established. Thus (actual) information is a network of data points observed from an agent’s perspective.

There is a distinction between the world as it exists autonomously, independently of any agent, the Kantian Ding an sich (thing in itself, noumenon), and the world for an agent, things as they appear through interactions (phenomena). Informational realists [64–67] take the reality/world/universe to be information. Dodig-Crnkovic [27] added, by analogy with the Ding an sich, information an sich as potential information. When does this potential information become actual information for an agent? The world in itself as (proto-)information gets actualised through the interactions of agents. Huge parts of the universe are potential information for different kinds of agents—from elementary particles to molecules, all the way up to humans and societies. Living organisms as complex agents inherit bodily structures as a result of the long evolutionary development of species. Those structures are embodied memory of the evolutionary past. They present the means for agents to interact with the world, get new information that induces memories, learn new patterns of behaviour and construct knowledge. The world, via Hebbian learning, forms an organism’s informational structures. As an example we can mention neural networks that “self-organize stable pattern recognition codes in real-time in response to arbitrary sequences of input patterns” [79].
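The Hebbian learning mentioned here can be written, in its simplest textbook form (given as background, not as a claim about any particular organism or about the model in [79]), as a weight update proportional to the co-activity of the connected units:

\[
\Delta w_{ij} = \eta \, x_i \, y_j ,
\]

where $x_i$ is the pre-synaptic activity, $y_j$ the post-synaptic activity and $\eta$ a small learning rate. Repeated co-activation thus leaves a structural trace in the network, an embodied memory of past interactions with the world.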

If we say that for something to be information there must exist an agent from whose perspective this structure is established, and we argue that the fabric of the world is informational, the question can be asked: who/what is the agent? An agent (an entity capable of acting on its own behalf) can be seen as interacting with points of inhomogeneity (data), establishing the connections between those data and the data that constitute the agent itself (a particle, a system, an organism). There are myriad agents for which information in the world makes a difference—from elementary particles to molecules, cells, organisms, societies, etc.—all of them interact and exchange information on different levels of scale in space and time, and this information dynamics is natural computation.

On the fundamental level of the quantum-mechanical substrate, information processes represent the laws of physics. Physicists are already working on reformulating physics in terms of information. This development can be related to Wheeler’s idea “it from bit” [80]. Actually, the idea can be traced back to 1977, when F. W. Kantor published the book Information Mechanics [81], which presents the first formal theory built upon the idea that information might be the fundamental phenomenon underlying the foundations of physics. For more details on physics and information research, see the special issue of the journal Information dedicated to matter/energy and information [63], and a special issue of the journal Entropy addressing natural/unconventional computing [82], which explores the space of natural computation and the relationships between the physical (matter/energy), information and computation.

7 Computing Nature is a Naturalist Framework, Not Panpsychism

Some computational (information integration) models of consciousness [83–86] can be interpreted as leading to panpsychism—“the belief that the physical universe is fundamentally composed of elements each of which is conscious” [11].

On the other hand, pancomputationalism (natural computationalism, computing nature) is the doctrine that the whole of the universe, every physical system, computes [87].

However, the computation going on in a hurricane is not like human program execution in current computing machinery. A hurricane intrinsically computes its own next state, by implementing natural laws on many levels of organisation. The info-computational approach starts bottom-up, from basic natural processes understood as computation. This means that computation appears as quantum, chemical, biological etc., see [2]. Only those transformations of informational structure that correspond to intrinsic processes in natural systems qualify as computation. Studying biological systems at different levels of organisation as layered computational architectures gives us powerful conceptual and technological tools for studying real-world systems. Even though we can imagine any sort of mapping, those that do not work spontaneously on the “hardware” of the universe do not count as natural (intrinsic) computations. We can simulate virtual worlds, but the computation behind such visualisation relies, at the basic level, on implementation in a physical substrate (computer hardware) with causal processes in the physical world.

The info-computational view of nature, where every living organism possesses some extent of cognition, does not contradict the embodied, embedded, interactive, communicative—enactive view of cognition. It also provides insights into possible mechanisms behind, for example, the concept of Umwelt, that is, the specific way an organism experiences the world, due to von Uexküll [88]. The info-computational approach builds on physical computation that represents all those interactive real-world processes. Nevertheless, there is still a lack of a common view of what computation is and what it is developing into, so many critics of computational approaches address early forms of computationalism without taking notice of contemporary alternatives. For example, Bishop [11] claims that pancomputationalism necessarily implies panpsychism. Bishop’s core argument derives from ideas originally proposed by Putnam [89] that “every open system implements every finite state automaton” (FSA) and thus that “psychological states of the brain cannot be functional states of a computer”. If a computer instantiates genuine phenomenal states, argues Bishop, then this necessarily implies panpsychism. And as panpsychism is not an option:

against the backdrop of our immense scientific knowledge of the closed physical world, and the corresponding widespread desire to explain everything ultimately in physical terms, panpsychism has come to seem an implausible view …then no computers can be conscious.

So the whole argument depends essentially on Putnam’s claim that every open system implements every discrete state machine. Even if that claim were true, state machines are definitely not all that is meant by computing, especially not the physical computing that we argue is the fundamental principle in nature. As Bishop points out in the paper, his argument does not apply to dynamical systems. If we look into the taxonomy of computation given in [2], we see that the argument does not apply to many other types of computational model either, in spite of its general claim that cognitive computation is fallacious; it presupposes that computation is only the subset of possible computations implemented by finite state automata, and that of course cannot cover all cognitive processes.

However, we agree that panpsychism is not a good scientific hypothesis. Instead of opening all doors for the investigation of the natural basis of consciousness, it declares consciousness to permeate the entire universe and offers that as the solution. As a theory, panpsychism belongs to the mediaeval tradition—that which is to be explained is postulated.

Info-computationalism/computationalism/pancomputationalism/computing nature provides constructive tools for the investigation of cognitive agents at different levels of complexity, and presents a fruitful approach and coherent theoretical construal that by no means implies or presupposes panpsychism. It is an open framework for learning, based on physics, chemistry, biology and cognitive science, expressed in terms of info-computation.

The idea of consciousness as information integration in the work of Giulio Tononi [85] and Christof Koch [86] has recently been widely debated. While we adopt the concept of cognition as a biological process in all living organisms, as argued by Maturana and Varela [5, 90], it is not at all clear that all cognitive processes in different kinds of organisms are accompanied by consciousness. We suggest that cognitive agents with a nervous system are the first step in evolution that enabled consciousness. A nervous system, on top of the metabolism that all living beings have, provides an internal “model” of an agent’s body in relation to its environment [32]. The nervous system models the “self” of an organism that acts in the context of the “other”. Unlike reactive processes in the simplest organisms, where behaviour is directly driven by chemical and physical relationships (“molecular machinery”), the nervous system that emerges in more complex organisms acts as an integration mechanism for reactive behaviours, establishing an additional self-reflective layer of cognitive architecture. “Neural control essentially grows out of the need for maintaining proper internal value systems” [91]. Starting with the distinctive self enabled by a nervous system, we can talk about consciousness as the capacity for self-reflection and self-awareness.

8 Conclusions and Future Work

The questions posed at the beginning of this chapter (What is reality for an agent? What is minimal cognition? How does the morphology of a cognitive agent affect cognition?) led us to a discussion of info-computational models of cognition that can be applied to both living organisms and machines. Within the info-computational framework applied to living nature, cognition is understood as the process of living itself. Following Maturana and Varela’s argument [5], we understand the entire living world as possessing cognition of various degrees of complexity. In that sense, bacteria possess rudimentary cognition, expressed in quorum sensing and other collective phenomena based on information communication and information processing. Cognition is a distributed, agent-based network of processes realised through communication via different mediators—from chemical molecules to electrical signals. Comparing the processes of communication among cells, tissues, organs and organisms in all kinds of living organisms—from prokaryotes (bacteria), protoctists (eukaryotic microorganisms), fungi, plants and animals to human beings, as described in [21]—we notice what von Uexküll already found out: differences in cognition and in the ways of reality construction depend on the type of cognising agent [88]. The simplest organisms communicate by a “chemical language”, exchanging molecules, which also include genetic material. The signal itself is information for an agent, and its processing is physical computation. Memory consists in the bodily structures of an organism. As organisms develop more complex structures, their communication becomes more diverse. The brain of a complex organism consists of neurons that are networked communicating and computing units. The signalling and information processing modes of a brain are much more complex than the exchange of chemical signals in brainless organisms, and consist of more layers of computational architecture than the cognitive processes in a bacterial colony.

Maturana and Varela did not think of cognition as computation, since at the time they developed the theory of autopoiesis, computation was conceived as abstract symbol manipulation, and information was considered in its narrow meaning as a means of verbal communication between humans. With our current understanding of information and computation, we claim that cognition is a process of natural/physical/intrinsic/embodied computation [92]. The broader view of computation as a physical process with emergent properties, as found in info-computationalism, is capable of representing the processes of life studied in bioinformatics and biocomputation. Reality for an agent is info-computational in nature, and it is established as a result of the interactions of the agent with the environment together with the information processes in the agent’s own intrinsic structures, which take the form of anticipation, reasoning etc.

Computing nature consists of physical structures that form levels of organisation, on which computation processes differ. It has been argued that on the lower levels of organisation, finite automata or Turing machines might be adequate models of local processes, while on the level of the whole brain, non-Turing computation is necessary, according to Ehresmann [93] and Ghosh et al. [94].

We also asked a question regarding non-biological cognition: is synthetic cognition possible at all, given that cognition in living organisms is a deeply biologically rooted process? Currently we observe advances in natural language processing, indicating developments of machines capable of “understanding natural language”, “translating”, “speaking”, “object recognition” and “learning”. The mechanisms behind those capacities in machines are very different from the corresponding mechanisms in living organisms. We asked: would it be possible to implement other cognitive functions in a machine, including consciousness, which many believe is the ultimate and exclusively human characteristic? Can AI “jump over” evolutionary steps in the development of cognition and decouple it from metabolism and reproduction? From what we know today, it seems that the answer is positive. Just as machines move, or airplanes fly, based on mechanisms different from those of living beings, cognitive computing is being developed that implements cognitive functions on a basis different from the biological one. Is cognition or consciousness essentially different from movement? This remains to be seen. The development indicates that it might be the case that cognition is necessary for life, but life is not necessary for cognition.

Finally, we advanced the argument that computational cognition by no means implies panpsychism. We argue that computing nature/pancomputationalism is a sound scientific approach based on insights from physics, chemistry, biology, cognitive science (including distributed/social cognition) and the theories of information and computation, which is the opposite of panpsychism.

For the future, a lot of work remains to be done, especially on the connections between low-level and high-level cognitive processes. It is important to find the relations between cognition and consciousness, and to work out a detailed picture of the info-computational mechanisms behind cognitive phenomena on developmental and evolutionary time scales.