1 Ignorant physical and biological bodies

A recent area of study, concerned with so-called morphological computation, deals with the problem of enriching classical digital computation by taking advantage of “bodies”, which are transformed into entities that carry out their own computation in a useful (and unusual) way. It is in this framework of research that Hauser et al. (2014, p. 230) observe that the body does not “know” that it is part of a computational device: it simply conforms to the laws of physics and responds accordingly. Nor does the body over- or under-compensate; indeed, it is a stable physical system. The proposed apparatus simply adds some linear readouts to the body to complete the computation. The body would behave exactly the same if there were no readouts at all. The authors further note that “If this output is used, e.g., in a feedback loop as a control signal for the robot, the behavior of the robot of course should be different if we close the loop by adding the readout. However, the body still does not ‘know’ that it is part of this computational loop. This raises the philosophical question, sometimes referred to as the teleological principle, whether the body is carrying out computation at all” (ibid.). Moreover, so-called Physical RCFootnote 1 is of practical value if it is able to provide a fruitful mapping of input streams onto output streams by taking advantage of the physical body at hand. Finally, the authors introduce the role of ignorance: “Another interesting implication of this property of ‘ignorance’ of the body (i.e., it behaves the same, whether we add readouts or not) is that we can employ the same body for multiple computations on the same input at the same time. We simply have to add the according number of readouts” (Hauser et al. 2014, p. 230).

In this passage it is clearly said that an “absence of knowledge” is at play (“the body still does not ‘know’ that it is part of this computational loop”) and that this absence of knowledge “raises the philosophical question [...], whether the body is carrying out computation at all”. The problem of ignorance explicitly emerges: the body “behaves the same, whether we add readouts or not”, and “we can employ the same body for multiple computations on the same input at the same time. We simply have to add the according number of readouts”.

Authors who incorporate into the so-called “computing nature” or “natural computation” the strong idea that whatever nature does can be framed in terms of computation as information processing will disagree with this attribution of ignorance to the body. They would say that, unlike inorganic matter, living cells are cognitive systems that possess agency and goals and act in the world in a predictive and adaptive way, and that for a process to be computation it is not important that it be part of a computational mechanism, or even that the whole computational machine knows that it is computing. They would also probably add that pebbles are certainly always ignorant, and so pebbles do not compute in the sense humans do, but that their very existence is the result of physical computational processes on a variety of levels of organization in nature. To avoid epistemological confusion I will comment further on this issue in the following subsection and below in Sect. 6, showing some puzzling problems related to the interpretation of the concept of natural computing.Footnote 2

In this article I will discuss the problem of “ignorant entities” that carry computation as a fundamental recent issue for a philosophy of computing that does not aim at exhausting the concept of computation by unilaterally reducing it to the digital definition (that is, to the processing of strings of digits according to rules). We do not need a conclusive and rigid definition of the concept of computation, such as a textbook or some mainstream epistemological approaches could provide: I contend that the concept has a historical and dynamical character.Footnote 3 Various types of computation have to be considered, which involve changes in the meaning of the concept thanks to the emergence of new variants. In the present article, I will mainly, but not only, describe the consequences generated by the irruption of morphology into computation.

I will take advantage of what I call an eco-cognitive perspective, which is appropriate for clarifying the meaning of the property of ignorance of the physical and biological bodies involved in a computation. This perspective will be necessary to illustrate, once and for all, the historical and dynamical character of the concept of computation: I do not aim at furnishing a final and stable definition of the concept of computation, but an intellectual framework that depicts how we can understand not only the changes in its meaning, but also the “emergence” of new forms of computation, which precisely transform ignorant entities into cognitive entities through computation.

1.1 Different kinds of ignorance

The issue of the computational domestication of ignorant entities will be introduced in the next subsection. For now, however, to avoid possible misunderstandings, it is useful to add some relevant observations regarding the different kinds of ignorance involved in physical and biological bodies. Indeed, in this article I treat both non-living and living bodies as “ideally” ignorant, because I am interested in the “active” exploitation of both kinds of entities with the aim of obtaining computational performances endowed with cognitive consequences. By contrast, adopting a different intellectual perspective, which directly attributes cognitive capacities to living bodies and pays preeminent attention to this aspect, physical and biological entities present different kinds of ignorance. For example, from the rich and eclectic info-computational point of view proposed in Dodig-Crnkovic (2013, 2017), there is an important difference between non-living and living bodies. Biological bodies possess cognition on different levels, from the lowest to the highest level of organization. Even a single cell (the simplest living organism and the basic building block of all living organisms) can be considered a cognitive system. A living cell in its Umwelt is not ignorant but a well-informed, active agent with intentions. Of course the cells in our body do not know what we are doing, but they “know”/cognize what they are doing. The same holds for all living cells: they are informed by the environment; they learn, adapt, change their bodily structures, and evolve. In their microscopic world they are alert: they can move towards a source of food or swim away from poison, using the environment for their own survival.Footnote 4

Moreover, it can be hypothesized that in many organisms the perceptual world is the only possible model of the organism itself; in this case of low-level cognitive activity, cognition can be accounted for in terms of merely reflex-based notions: no other, more or less stable, internal representations are present. It is worth noting that the seminal studies Schrödinger performed on energy, matter, and the thermodynamic imbalances furnished by the environment draw attention to the fact that all organisms, bacteria included, are capable of performing simple cognitive functions, because they “sense” the environment and process internal information for “thriving on latent information embedded in the complexity of their environment” (Ben Jacob et al. 2006, p. 496).

Indeed, Schrödinger contended that life involves the consumption of negative entropy, i.e. the use of thermodynamic imbalances in the environment. As a member of a complex superorganism—the colony, a multi-cellular community—each bacterium is endowed with the ability to sense and communicate with the other units of the collective, and executes its work within a distributed activity. Hence, bacterial communication implies collective sensing and cooperativity through the interpretation of chemical messages, the differentiation between internal and external information, and a kind of self vs. non-self distinction (peers and cheaters are both in action). In this view “biotic machines” are meaning-based kinds of intelligence, to be contrasted with the information-based forms of artificial intelligence: biotic machines create new information, attributing contextual meaning to collected information. Self-organizing organisms like bacteria acquire—through an actual cognitive act—“relevant” information that they later couple with the regulating, reconstituting, and plastic activity of the contextual information (intrinsic meaning) previously stored internally, which reflects the intra-cellular state of the cells. Of course the “meaning production” that occurs in the above processes regards structural aspects of communication that cannot be compared to the particular sentential and model-based cognitive abilities of humans, primates, and other more rudimentary animals, but it still shares with these fundamental functions such as sensing, information processing, and the collective abductiveFootnote 5 contextual production of meaning.Footnote 6 As stressed by Ben Jacob, Shapira, and Tauber

In short, bacteria continuously sense their milieu and store the relevant information and thus exhibit “cognition” by their ability to process information and responding accordingly. From those fundamental sensing faculties, bacterial information processing has evolved communication capabilities that allow the creation of cooperative structures among individuals to form super-organisms (Ben Jacob et al. 2006, p. 504).

I have just illustrated the role of “cognitive” activities in non-human animals: another—more controversial—step, from cognition to computation, can be taken. Biological systems, in order to survive, must have agency, which makes their natural computationFootnote 7 far more complex than the one constituting the behavior of pebbles—they have memory (in their physical body and especially in their DNA), they communicate actively with other living organisms (bacteria “talk” using a chemical language), and they exchange DNA (learn) through an information-processing activity that involves computation. Hence, in this computational perspective too, a difference between bacterial ignorance and the ignorance of a pebble should be established.

1.2 Ignorance in an eco-cognitive perspective

I think that the main advantage of speaking of the transformation of “ignorant” entities into cognitive ones through computation derives from its capacity to stress the dynamic character of the processes of attribution of cognitive roles to external entities. This idea is certainly related to studies in the area of evolutionary theory—especially those on cognitive niche constructionFootnote 8—and to the recent tradition of studies in the area of cognitive science called “distributed cognition”. Indeed, the idea of cognition that I am adopting in this article is informed by the “ecological” perspective I fully explained and described in my recent book The Abductive Structure of Scientific Creativity. An Essay on the Ecology of Cognition (Magnani 2017). In this ecological perspective individual human agents, senses, environment, and cognition are all involved. As just mentioned, we also need to refer to the cognitive science tradition of embodied and distributed cognitive systems: the emphasis in this case is on the “practical agent”, on the individual agent operating “on the ground”, that is, in the contexts of actual life.

Hence, to better explain my eco-cognitive approach, let me illustrate some basic concepts regarding distributed and embodied cognitive systems. The first studies in distributed cognition were informed by the fundamental idea that cognition is a socially distributed phenomenon, one that is situated in actual practices.Footnote 9 Moreover, cognitive processes are better understood if seen as situated in, and distributed across, real socio-artifactual settings, beyond the internalist view typical of traditional theories in cognitive science: cognitive activities are very often “delegated” to various (possibly artifactual) entities of the external environment, and problem solving, for example, generally occurs in collaborative contexts. The eco-cognitive perspective stresses the role played by agent-environment interaction: for example, it is well known that in current collective work environments, humans and artifactual technologies are intertwined in the activity of preserving and managing representational states, very frequently with the aim of solving problems.

Ideally, and as a first approximation, a cognitive system can be seen as a triple (A, T, R), in which A is an agent, T is a cognitive target of the agent, and R refers to the cognitive resources on which the agent can rely when engaged in meeting the target—information, time, and computational capacity, to indicate the three most relevant. As I have sketched in the previous paragraphs, my agents are also embodied distributed cognitive systems: cognition is evidently embodied, and the interrelations among brains, bodies, and the external environment are its fundamental features. In this perspective cognition is seen as happening through a continuous exchange of information in a complex distributed system that crosses the boundary between humans, artifacts, and the surrounding environment, where instinctual and unconscious capacities also play a crucial role. At this point I have to add that Peirce himself anticipated the approach to cognition I have just described by interpreting the concept of “inference” semiotically (and not simply “logically”), saying that all inference is a form of sign activity, where the word sign includes “feeling, image, conception, and other representation” (Peirce 1931–1958, 5.283). It is clear that this semiotic perspective, which sees signs as distributed everywhere, is substantially consistent with my view of cognitive systems as embodied and distributed systems.Footnote 10
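
For readers who find a concrete rendering helpful, here is a minimal sketch of the (A, T, R) triple as a data structure, in Python; the field names and example values are purely illustrative assumptions of mine, not part of the formal definition.

```python
from dataclasses import dataclass

@dataclass
class Resources:
    """R: the three most relevant cognitive resources."""
    information: list      # available information items
    time: float            # time budget (e.g., in seconds)
    compute: int           # computational capacity (e.g., allowed steps)

@dataclass
class CognitiveSystem:
    """A cognitive system as the triple (A, T, R)."""
    agent: str             # A: the agent
    target: str            # T: the agent's cognitive target
    resources: Resources   # R: resources available for meeting T

# Illustrative instance: a practical agent operating "on the ground".
system = CognitiveSystem(
    agent="practical agent",
    target="diagnose a fault",
    resources=Resources(information=["manual", "sensor log"],
                        time=60.0, compute=10_000),
)
```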

Finally, I have to note that, as I explained in Magnani (2018a), the concepts of information processing and computation are in our era certainly involved in the definition of cognition; but elucidating what kinds of computation and information processing are implicated in cognition is not only a difficult task, it is also destined to become outdated, because the concept of computation evolves and mutates, and the other two concepts—intertwined with it, already marked by volatile and multiple definitions, and subject to the developments of knowledge, technologies, and cultural frameworks—also evolve. It is in this perspective that we can exploit the concept of ignorance: the transformation of ignorant entities into cognitive entities through computation is intertwined with the dynamic and emergent character of the processes of attribution of cognitive—and computational—roles to external entities.Footnote 11

2 An eco-cognitive perspective on computation

I have introduced the idea of Eco-Cognitive Computationalism in Magnani (2018b): computational activities have to be seen in context, taking advantage—as I explained in the previous subsection—of the recent contributions provided by cognitive science to the fields of embodied, situated, and distributed cognition. In this perspective computation has to be seen as occurring in “external” physical entities suitably metamorphosed so that they acquire the status of what I call cognitive mediators,Footnote 12 in which data can be encoded and decoded to reach interesting outcomes.

It is important to note that eco-cognitive computationalism, even if it sees cognition as computational, does not reduce computation to digital computation. As I have illustrated in Magnani (2018a), I privilege a perspective that sees the evolutionary emergence of information, meaning, and the first vestigial forms of cognition as the result of a complicated eco-cognitive interaction and the simultaneous coevolution, in time, of the states of brain/mind, body, and external environment. This view does not contemplate any stability in the meaning of concepts such as computation, information processing, and cognition. The concept of computation I am assuming here goes beyond digital computation, that is, the processing of strings of digits according to rules; it also concerns other computations executed by physical devices—for example brains, with their activity of modifying the configurations of neural networks, but also analog computers and so-called neural computers. It is an expanded idea of the concept of computation: as I have indicated above, the concept should not be seen as static; it develops according to changes in theory and practice.

Let us repeat: this eco-cognitive view of computation is firmly intertwined with studies on distributed and embodied cognitive systems, which see cognition as a socially distributed process, oriented by actions and taking advantage of material artifactual entities, and in which the “ecological” view also stresses the effects of agent-environment interaction. The ecology-of-cognition framework enables us to focus on “physical computation”, exactly following Turing's original ideas concerning the emergence of computation in organic, inorganic, and artefactual agents.Footnote 13 This perspective is especially appropriate for dealing with morphological computation, because its main tenet is that computation is “physical”, occurring as it does in external entities.

Following my eco-cognitive perspective we can see “innocent”Footnote 14 (and “ignorant”) entities, in the sense of pure tools or instruments “devoid of cognitive capacities”, becoming first of all bearers of information and knowledge and, finally, of computation, with the birth of both Turing's (Universal) Logical Computing Machines (LCMs) (which see computation as a merely syntactic process) and the strictly intertwined concrete (Universal) Practical Computing Machines (PCMs). To give an example, blackboards are simple non-technological artefactual entities, which can be endowed with cognitive delegations thanks, for instance, to the drawing of diagrams or the writing of texts. More technologically complex artefacts, such as telephones and telegraphs, were able to carry and perform elementary cognitive tasks, but it is only with highly technological man-made instruments, such as digital computers, that we face the final “loss of innocence” of entities. Turing would have said that these technological tools become universal cognitive physical entities, insofar as they become computational artefacts that compute for humans or artefactual agents. When I speak of a loss of “innocence” I am not equating innocence with mere cognitive passivity. I want to stress the fact that when ignorant entities are transformed into cognitive and, at the same time, computational ones, they become endowed with moral worth and can play a moral role (and thus perform moral acts and possibly consequent violent acts). As I have illustrated in my books on the philosophy of technology (Magnani 2007) and the philosophy of violence (Magnani 2011), modern technology—computational artifacts obviously included—has brought about (potential) moral and (potential) violent consequences of great magnitude. Imagine the (often unintended) consequences of technological tools such as the internet and artificial intelligence, for example the negative effects on privacy in cyberspace and the threats to individual identities, cultures, etc. Computational structures and technological artifacts have become moral carriers and mediators of effects that can be perceived and/or judged as violent, strictly integrated with humans, and so surely less “innocent” than simply ignorant entities.

Finally, I have to repeat that eco-cognitive computationalism does not aim at furnishing an ultimate and static definition of the concepts of information, cognition, and computation, such as a textbook could provide; instead it intends, by respecting their historical and dynamical character, to propose an intellectual framework that depicts how we can understand their forms of “emergence” and the modification of their meanings. As I have already said in the previous subsection, it is in this perspective that we can see the transformation of ignorant entities into cognitive entities through computation, by stressing both the dynamic and the emergent character of the processes of attribution of cognitive—and computational—roles to external entities.

3 Computation is physical: the epistemological overturning

It is well known, thanks to the cognitive studies of the last decades, that a remarkable simplification of cognitive and motor tasks in humans and animals is often achieved thanks to their morphological aspects—a character that had, obviously, already been exploited in robotics. The recent novelty is that we can exploit the morphologies of appropriate entities to complement classical computation: in this way we construct what I have called mimetic bodiesFootnote 15 (Magnani 2018b), able to simplify the whole process of computing a certain task, so fulfilling a general requirement of that simplexityFootnote 16 which characterizes animal embodied cognition, that is, the complementarity of complexity and simplicity (Kluger 2008; Berthoz and Petit 2014). To understand this novelty it is necessary to establish a naturalistic view of computation, acknowledging the fact that it happens through physical systems, appropriately built and tested technologically.

3.1 Computational domestication of ignorant physical systems

In my article (Magnani 2018a) I addressed—taking advantage of the conceptual interplay between information, cognition, and computation—the following classical problem: “Is cognition computation?” We can answer: yes, it is, but cognition is not exclusively computation. A variety of serious studies concerning the many ways in which information processing and computation are intertwined with cognition has recently been produced, even if, in the perspective of the ecology of cognition I sketched above, they are destined to become outdated. I said that the meaning of the notion of computation changes, and the same happens in the case of the other two concepts. All three concepts can be usefully captured by definitions, which are nevertheless subject to obsolescence due to the developments of knowledge, technologies, and cultural frameworks. Several studies have elucidated various notions of computation in the light of recent epistemological and cognitive science research (see for example Vallverdú i Segura 2009) and emphasized that computation has in general to be seen as realized in physical systems (of course without going so far as so-called pancomputationalism,Footnote 17 which to a first approximation consists in the view that every physical system is a digital computing system and can be described in computational terms).Footnote 18

This perspective on physical computation has to be deepened further through the adoption of the eco-cognitive framework, which depicts how we can understand the “emergence” of what I call the new computational “domestication” of physical entities, thanks to morphological computing. Various insights into the problem of physical computation are offered by Horsman et al. (2014, p. 14):

1. a computer is a physical system composed of concrete parts and of the peculiar inner connections that allow passages from one physical state to another;

2. Horsman et al. contend, and I agree with them, that physical computing is “the use of a physical system to predict the outcome of an abstract evolution” (Horsman et al. 2014, p. 14). It is extremely important to stress that in this case we face a kind of epistemological overturning. In performing a physical computation we deal with the opposite of what happens in sciences such as physics: what is at stake is not a physical system that needs to be rationally studied but, on the contrary, an abstract entity that we want to see transformed thanks to the physical system itself. Hence, this use of a physical entity, passing from some physical states to others, realizes a computation, and it is usually understood as such by human beings or accepted as such by other artificial agents;

3. the physical entities devoted to computation are technologies, the fruit of scientific knowledge: step by step they have become engineered physical systems endowed with “wanted” features (more details on this aspect are given below in Sect. 4.1);

4. a physical computation is successfully concretized when we are able to enforce a precise relationship between abstract mathematical/logical entities and physical ones;

5. of course the notion of computation, and its companion notion, information, can be used in other intellectual fields of research, in attempts to “explain” processes such as photosynthesis or the conscious mind, and some researchers have gone so far as to contend that “everything is information”, “the universe is a quantum computer”, or “everything computes” (Horsman et al. 2014, p. 2). In this case I fear that the notion of physical computation tends to become devoid of meaning: not all physical systems (biological and artefactual ones included) can carry computation. I will come back to this issue below in Sect. 7.

3.2 The epistemological overturning: physical computing versus physics

I have said that a physical system is used to predict the outcome of an abstract dynamics: this is the opposite of what happens in physics, in which an abstract model is used to predict physical dynamics. Horsman et al. distinctly contend (2014, p. 2) that physics works by representing physical systems abstractly, using abstract theories to predict the result of physical evolution, whereas computation uses a physical system to predict the outcome of an abstract dynamics. The ignorant technological entity (for example the digital computer) has been domesticated to become a tool endowed with computational—and so cognitive—capacities: it works because the predictions concerning its behavior are reliable, exactly as in the case of an equation that is a good explanation of a physical system and provides good predictive abilities.Footnote 19 Obviously a representational process is also necessary, able to assure a “comparison between physical processes and mathematically described computations” (Horsman et al. 2014, p. 2); again, in computation “the initial impetus is not a physical system that needs to be described, but rather an abstract object that we wish to evolve. An abstract problem is the reason why a physical computer is used” (cit., p. 10).

4 Enhancing ignorant bodies through disseminated computation: the birth of mimetic bodies

4.1 Classical computational physical systems are the fruit of technological domestication

To render an ignorant entity such as a sheet of paper cognitively significant is relatively simple: it is sufficient to write some statements on it in a natural language. The tradition of cognitive science concerning so-called “distributed cognition” would add that in this non-computational case we are delegating a cognitive role to an external object, as an anchor for our mind (Mithen 1999; Fauconnier and Turner 2003; Hutchins 2005) or as a tool for rendering our mind “extended” (Clark and Chalmers 1998).

In the case of computation, Turing himself clearly said that a simple “Paper Machine” is able to compute: “It is possible to produce the effect of a computing machine by writing down a set of rules of procedure and asking a man to carry them out. [...] A man provided with paper, pencil and rubber, and subject to strict discipline, is in effect a universal machine” (Turing 1969, p. 9). However, we know that for more efficient computational results, more complicated ignorant but potentially cognitive entities have to be built: they are the fruit of technological construction.

I have always considered it incredible that researchers have been able to embed Turing's abstract machine, as a Discrete-State Machine (DSM), in physical machines, thus building artefactual technological engines capable of evolving through discrete states, thanks to the virtues of silicon electronic structures.Footnote 20 It is well known that it was science that provided the conceptual tools to engineer physical systems, which humans later “domesticated” to obtain technologies characterized by “wanted” features and expected behaviors. In the case of digital computers, researchers started from a good physical theory, capable of delineating an abstract description of the physical system to be built by technology. That newly built artefactual system had to perform a set of evolutions in expected ways in order to be computationally reliable.

It is important to stress that non-standard physical artefactual entities exploited as computing tools and substrates—with the exception of quantum computers—rest on a less mature theory, so that their reliability is lower than that of silicon devices (Horsman et al. 2014).Footnote 21 We will soon see that the new non-silicon-based substrates that take advantage of the resources offered by their morphology are becoming computationally significant. For now, more must be said about the problem of encoding and decoding, a process we can call the semiotic domestication of already technologically domesticated artefactual physical systems.

4.2 Semiotic domestication: encoding, decoding, computational entities, unconventional substrates

It is only the presence of a “computational agent” that permits the exploitation of the already domesticated (thanks to science and engineering) but still ignorant physical computational entity as a machine that performs cognitive tasks in the external environment.

1. First of all, the agent starts a computational process by encoding—that is, by representing—abstract data in the artefactual physical system (as I have said, we already have a physical theory that guarantees that the artefactual physical system operates appropriately: we expect the encoded data to work). Encoding is a way of managing a problem, thanks to some appropriate cognitive representation, for example a mathematical one, so that the problem is encoded and at the same time channeled by abstract algorithms, in turn encoded (becoming concrete algorithms) into the machine: at this point the artefactual physical entity starts to behave as a computer. It is through this process of further domestication that we can reliably expect to obtain the computable output of abstract evolutions, arriving at what Turing called the (Universal) Practical Computing Machine (PCM) (Magnani 2018a).

2. Charles Sanders Peirce clearly illustrated that humans create signs—a word, for example, or an icon—which refer to some external object or event: this is called a “process of semiosis”. Still following Peirce, we can say that human brains build many signs that are exploited with the aim of responding to a series of other signs and of eliciting some feedback. It is by being involved in this process, which is material and semiotic but also clearly cognitive, that human brains become “minds” able to produce intelligent reasoning. We know perfectly well that signs can be externalized in both natural and artefactual mediators, for instance through drawing and writing, originating that process of externalization of the mind (or disembodiment of the mind) which has been studied by cognitive science as one of the main aspects of the idea of distributed cognition. It is in the interplay between internal and external representations that a considerable part of standard human cognition occurs, but also that part of cognition that consists of more creative results, such as the generation of new thoughts, ideas, and concepts that do not have a “natural home” internally. In this perspective minds have to be considered “extended” and artificial in themselves, given the role played by external representations.

3. The activity of encoding described in the first item is still a process of semiotic externalization of signs: it is the incorporation of a problem (semiotically expressed through words, but possibly also symbols, technical languages, icons, and other representations), with the help of the mediation of some other appropriate cognitive tools already embedded in the engineered physical artifact (basic programs, algorithms, etc.). In this way the system itself becomes a domesticated actuator of semiosis—computationalized, in this case. Conversely, decoding is the process of picking up the meaningful signs produced by the “concrete evolution” of the physical artefactual system.

4. When, thanks to the physical process of our doubly domesticated artefactual physical system, we arrive at the final physical state (rendered possible by the physical evolution of the system, usually a classical silicon device), that final state is decoded by the computational agent (a human or another machine) as a new abstract semiotic state. We have to remember that to obtain this outcome we are dealing with a concrete evolution, not an abstract one. Without the encoding and decoding aspects there is no computation: there is merely an “ignorant”—so to speak—physical system undergoing evolution, “going about its business”, only potentially computational (Horsman et al. 2014, p. 5). In this light computation can be clearly seen as the fruit of the complex composite system emerging from the processes of both technological and semiotic domestication (see the sketch after this list).Footnote 22

5. Hence, encoding and decoding acquire meaning and consistency only if seen in relation to a computational entity, human or artificial, that works as a semiotic delegator and recognizer of meanings when dealing with the specific artefactual physical entity at play.
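
To fix ideas, here is a minimal, purely illustrative Python sketch of the encode/evolve/decode cycle just described; the function names and the toy "physical system" are assumptions of mine, not Horsman et al.'s formalism. An abstract problem (adding two numbers) is encoded into a physical state, the system evolves according to its own dynamics, and the agent decodes the final state as an abstract result; without the two representation maps, the evolution is just an ignorant physical system "going about its business".

```python
# Toy "physical system": two charged plates that discharge onto a common plate.
# Computation emerges only because an agent encodes and decodes abstract data.

def encode(a: int, b: int) -> dict:
    """Represent the abstract problem 'a + b' as a physical state (charges)."""
    return {"plate_1": float(a), "plate_2": float(b)}

def physical_evolution(state: dict) -> dict:
    """The system evolves by its own physics: charges merge on a common plate."""
    return {"common_plate": state["plate_1"] + state["plate_2"]}

def decode(state: dict) -> int:
    """The computational agent reads the final physical state as a result."""
    return round(state["common_plate"])

result = decode(physical_evolution(encode(3, 4)))
print(result)  # 7 -- an abstract outcome predicted via a physical evolution
```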

5 Morphology-based enhancing: computational domestication of new substrates

Organic living beings of all kinds can be considered physical entities that are potentially inclined to process information (more generally, they are “semiotic beings”, Peirce would have said) and that can “also” potentially be exploited to perform computations. The example of Turing's paper machine, quoted above in Sect. 4.1, which concerns the possibility human beings have of becoming computational beings, is exemplary.

At this point it is important to stress that entities that are not comparable to the engineered silicon devices, but that have the morphological features typical of the bodies of organic beings, offer a “new” way of becoming part of a computational process. I will illustrate below that research on physical RC—Reservoir Computing—represents a new way of using the dynamical properties of body parts to channel important computational tasks.

Of course, a great quantity of relevant and reliable scientific and technological knowledge about the traditional engineered silicon substrates is available. Therefore, if we wish to have at our disposal new substrates able to carry computation, it is necessary to provide adequate physical theories of them, because we have to be assured that they will present regular behaviors: we need theories able to predict expected modifications of the substrate that are in turn empirically confirmed. The following is the description, offered by Horsman et al. (2014, p. 20), of an ill-conceived procedure for assigning computational abilities to a non-standard system: imagine providing a new substrate for computation (for example a stone, a soap bubble, a large interacting condensed-matter system, etc.); after a while, the termination of the process is proclaimed and some measurements are obtained. At this point the initial and the final states of the system are matched, “and then a computation and a representation picked such that if the initial state and final states are represented in such a way, then such a computation would abstractly connect them”: we can now report that a certain computation has been accomplished—“the stone has evaluated the gravitational constant, the soap bubble has solved a complex optimization problem and so on”.

In summary, it is incorrect to establish the computational description of a physical evolution post hoc (a toy illustration is given below): “the challenge then for non-standard computation is to demonstrate that the theory of the device, and the representation of data within it, is known and stable enough to use the physical device to predict the desired abstract computation” (cit., p. 21). Hence, we have to look for a more reliable computational domestication of new ignorant substrates: morphological computing is one of the recent answers.
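
The vacuity of post-hoc attribution can be made concrete with a small Python sketch (entirely my own illustration, not Horsman et al.'s code): if the representation maps may be chosen after the run, any arbitrary physical process can be made to "compute" any desired answer, because the maps simply hard-code it.

```python
import random

# An arbitrary physical process: a "stone" whose state drifts randomly.
initial_state = random.random()
final_state = initial_state + random.uniform(-0.1, 0.1)

def posthoc_attribution(desired_input, desired_output):
    """Pick the representation AFTER the fact: encode/decode maps chosen so
    that the evolution 'computes' whatever we like."""
    encode = {desired_input: initial_state}   # input representation
    decode = {final_state: desired_output}    # output representation
    return encode, decode

# Post hoc, the stone has "computed" 6 * 7:
encode, decode = posthoc_attribution((6, 7), 42)
print(decode[final_state])  # 42 -- but the "computation" is vacuous

# The real challenge: fix the representation BEFORE the run, and require it
# to predict outcomes for novel inputs via a stable theory of the device.
```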

5.1 Morphology-based enhancement of mimetic bodies

Cognitive science has provided rich evidence that, in natural agents, the smart cognitive capacities to know and act are considerably due to the effects caused by their bodily structure (morphology). This morphology can obviously be exploited directly for the engineering of smart capacities in artificial entities such as robots. Recent research has identified three major functions performed by morphologyFootnote 23:

1. morphology that helps control, in which no control system, no motors, and no sensors are at play;

2. morphology that helps perception;

3. morphological computation in the proper sense, for example “reservoir” computing (RC), in which embodiment and computation are rigorously intertwined.

The passive dynamic walker represents an example of the first case (the behavior happens exclusively through mechanical interaction). The gecko's feet—which can also be considered active extensions of the passive walker above, thanks to actuators and sensors—are an example of the second case: their special ability is the consequence of their morphology interacting with specific surroundings, and so not principally of higher-level central control; the eye of the fly, which concerns not only movement but also cognitive capabilities related to perception, is another typical example. It is Reservoir Computing that incarnates the third and most interesting case, theoretically studied and advanced by the neural network community and, amazingly, applied to physical entities/devices that can work as a reservoir (“Physical” RC), in which the body is efficaciously domesticated for computation.

Reservoir computing certainly pertains to morphological computation but, as I have already remarked, emerged from research on neural networks—networks, as Müller and Hoffmann (2017, p. 6) clearly summarize, “with nonlinear activation functions and with recurrent connections that have a random but bounded strength; this is referred to as a dynamic reservoir. These neurons are randomly connected to input streams, and the dynamics of the input is then spread around and transformed in the reservoir, where it resonates (or ‘echoes’—hence the term ‘echo-state networks’) for some time. It turns out that tapping into the reservoir with simple output connections is often sufficient to obtain complex mappings of input stream to output stream that can approximate the input—output behavior of highly complex nonlinear dynamical systems”. In the course of training, the weights from the input streams and between the reservoir neurons remain untouched; only the output weights—from the reservoir to the output layer—are adapted by a learning algorithm (a linear regression). It is the exploitation of the reservoir that permits an appreciable reduction in the complexity of the training task (that is, it avoids training all the connections). The reservoir is used to accomplish a spatiotemporal transformation of the input stream (the temporal structure of the input sequence is unfolded by the reservoir and can be promptly extracted at any instant); a minimal sketch of such a network is given below. “Furthermore, if feedback loops from the output back to the reservoir are introduced and subject to training, the network can be trained to generate desired output streams autonomously”.
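
The following is a minimal echo-state-network sketch in Python/NumPy; the network sizes, the spectral-radius scaling, and the toy target task are illustrative assumptions of mine, not the authors' implementation. Note that only the readout weights are fitted (by linear least squares); the randomly initialized reservoir is never touched.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Fixed random reservoir: bounded recurrent weights, scaled so that the
# largest eigenvalue magnitude is < 1 (a common echo-state heuristic).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Drive the reservoir with an input stream; collect its echoing states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))  # nonlinear update
        states.append(x.copy())
    return np.array(states)

# Toy task: a nonlinear, history-dependent mapping of the input stream.
u = rng.uniform(-1, 1, 500)
y_target = np.array([u[t] * u[t - 1] if t > 0 else 0.0 for t in range(500)])

X = run_reservoir(u)                                 # reservoir does the heavy lifting
W_out = np.linalg.lstsq(X, y_target, rcond=None)[0]  # train ONLY the readout
print("training MSE:", np.mean((X @ W_out - y_target) ** 2))
```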

The astonishing novelty is that various physical entities can work as a reservoir, giving rise to so-called Physical RC; the reservoir can even be the body of an agent: indeed, biological bodies interacting with their surroundings can exhibit the required features—high dimensionality, nonlinearity, and fading memory.Footnote 24

5.2 The birth of mimetic bodies: enhancing ignorant bodies through disseminated computation

To further illustrate the ways of domesticating exotic entities for computation—beyond the well-known case of the silicon devices I have described above—it is important to clarify the main cognitive and epistemological features of RC (Reservoir Computing). To give an example, imagine you have at your disposal a traditional “digital” computational controller (not necessarily a PC; possibly a robot)—one that already presents its own “traditional” engineered physical embodiment—which you want to simplify and improve in terms of its capacities and/or behavior. You can do this thanks to some cognitive “virtues” that are outsourced to the morphology of physical entities, with the aim of receiving appreciable help in terms of what can be called “analog” computation.

Hauser et al. (2014, p. 227) furnish the following explanation of the mechanisms of RC (Reservoir Computing), also clarifying the distinction between morphological computation and pure digital computationFootnote 25: “At the core of RC lays the so-called reservoir, a randomly initialized high-dimensional, nonlinear dynamical system, which maps the typically low-dimensional input (stream) onto its high-dimensional state space in a nonlinear fashion”. It plays the role of a kernel (in the machine-learning sense, that is, a nonlinear projection of a low-dimensional input into a high-dimensional space) and, as a dynamical system, enjoys the intrinsic property of combining input information over time, which is of course profitable for those computations that need information regarding the history of their input values. Moreover, it is remarkable that the reservoir is not modified in the course of the learning process: “Even if it is randomly initialized, its dynamic parameters are fixed afterwards. In order to learn to emulate a desired input output behavior (to be more precise, a desired mapping from input streams to output streams), one has to add a linear output layer, which simply calculates a linearly weighted sum of the signals of the high-dimensional state space of the reservoir. These output weights are the only parameters that are adapted during the learning process” (ibid.).

Physical RC concerns real physical bodies that are exploited as reservoirs and as computational generators. Both entities—the highly engineered silicon device and the Physical RC body—do not “know” that they belong to a computational device. They simply conform to the laws of physics and respond directly, once computationally domesticated. In the case of Physical RC, it is important to add “[...] that the body does not over—nor under—compensate, since it is a stable physical system. The proposed setup simply adds some linear readouts to the body to complete the computation. The body would react exactly the same, if there were no readouts at all. If this output is used, e.g., in a feedback loop as a control signal for the robot, the behavior of the robot of course should be different if we close the loop by adding the readout” (Hauser et al. 2014, p. 230). Moreover, in morphological computation the physical computer does not need to be intelligently conceived and engineered: it evolves naturally, and this evolution also provides a computation.Footnote 26

The body is “ignorant” (Hauser et al. 2014, p. 231): it behaves the same whether we add readouts or not and—this is remarkable—can be exploited for several computations on the same input at the same time, simply by supplying the suitable readouts (see the sketch below). Classifying the body as “ignorant” of the computations that are channeled through it may seem strange. It is more than obvious that no part of the entity acknowledges that it is part of the computational system; it simply presents a kind of passivity towards the computation that is occurring through it. It is ignorant of the semiotic life it is computationally channeling, but at the same time it has been domesticated to channel it, thus appearing to the computational agent (the user; see above Sect. 4.2) as a smart cognitive entity.
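
This "ignorance" property can be made concrete by extending the illustrative echo-state sketch of Sect. 5.1 (again, my own toy code, not the authors'): the reservoir states X are computed once, and independent linear readouts are then trained on them for different target functions; the "body" behaves identically whether zero, one, or several readouts are attached.

```python
# Continuing the sketch above: the reservoir (the "body") is run once and
# never altered; each additional computation is just one more linear readout.
targets = {
    "history-product": y_target,   # the task already trained above
    "absolute-value": np.abs(u),   # a second, unrelated target
    "saturation": np.tanh(3 * u),  # a third one
}

readouts = {name: np.linalg.lstsq(X, y, rcond=None)[0]
            for name, y in targets.items()}

# Several computations on the same input at the same time, each obtained by
# tapping the very same reservoir states with its own readout weights.
outputs = {name: X @ w for name, w in readouts.items()}
```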

The traditional “physical” computational entities (hardware), such as silicon devices and robots empowered with an abstract controller, are founded on the discreteness of digital processing, and they appear, at least phenomenologically and from a rough exterior perspective, stable, insensitive, and robust with respect to external influences that are not ascribable to encoding and decoding (see above Sect. 4.2). Moreover, we have to add that classical robots, which are clearly unrelated to morphological computing, are formed by rigid body parts, which are “connected with high torque servos”, and do not possess “any deliberate computation in the physical layer, hence, any computation in the robot is carried out only in the abstract controller layer” (Hauser et al. 2014, p. 234). Morphological computation, in the case of Physical RC, instead takes place in continuous physical reality (without digital complications), as a limpid kind of quick and simple analog computation, in which the use of concrete entities obviously also imposes restrictions and limits. It is certainly characterized by the absence of great cognitive/computational plasticity (and consequently of “universality”, compared to classical Turing machines). To improve this situation a physical reservoir can be enriched by what has been called morphosis (for instance, moving the body into a new position) (Hauser et al. 2014, p. 235): this possibility permits the modification of the morphology online, to obtain new computational gains, also considering the potential effects on the entire system generated by the surrounding environment.

Thanks to morphological computation, bodies that are “ignorant” become mimetic bodies, that is, bodies able to mimic several cognitive tasks because they are embedded in a computational environment. The term mimetic that I am using here is in tune with some of Turing's ideas: indeed, he says that by “mimicking education, we should hope to modify the machine until it could be relied on to produce definite reactions to certain commands” (Turing 1969, p. 14). Analogously, by mimicking the morphological aspects of an entity we obtain computational outcomes: as observed above in footnote 25, a notable case is the one illustrated by Nakajima et al. (2013, 2015), who have “rigorously” mimicked part of the morphology of an octopus, exploiting a prototype of a soft robotic arm inspired by it.

In the case of morphological computation we are dealing with new, relevant epistemological and technological advances of what I call disseminated computation: we are facing a computation increasingly distributed across an extended diversity of props, tools, bodies, and devices, with the aim of obtaining new cognitive results. A further application can be imagined in robot design, with the aim of constructing a new generation of robots with increased adaptability and a smaller number of control parameters.

5.3 Mimicking morphology and synthetic biology

A final remark on the concept of mimetic body has to be added. Understandably, mimicry, at a general cognitive level, can be considered a morphologically wired mechanism that makes learning possible and embeds intentionality. In the present article I have illustrated a more restricted sense of the word: thanks to morphological computation, bodies that are “ignorant” become mimetic bodies, that is, bodies able to mimic various cognitive tasks because they are inserted in a computational context. I added that the term mimetic is in this case coherent with Turing's conception: he says that “to modify the machine until it could be relied on to produce definite reactions to certain commands” (Turing 1969, p. 14) it is necessary to “mimic” education, a metaphor that introduced a new idea surely at the roots of the modern concept of the computational program. In sum, in our present case morphological computation efficiently mimics some cognitive processes thanks to the integration of the morphology of entities into a traditional digital computational context.Footnote 27

At this point it is necessary to ask a question regarding the role of morphologies, one that poses a problem specular to that of morphological computation: what happens when, on the contrary, we need to mimic some (biological) morphological entities? Synthetic biology has inaugurated an ambitious program aimed at answering this question. With synthetic biology we are dealing with “alternative realities”—synthetic fictions, we might say—in which, as contended by some epistemologists, synthetic “fictional” imagined systems are turned into “concrete” ones (Knuuttila and Loettgers 2017). At least three main areas characterize this research: (1) the project of constructing a minimal genome (which would include the smallest number of genetic elements sufficient to produce and sustain a free-living cellular organism in an ideal environment); (2) the building of an artificial and minimal cell as “a hypothetical biological system that possesses only the necessary and sufficient attributes to be considered alive” (Gil 2011, pp. 1065–1066), also aiming at the construction of protocells or so-called chassis cellsFootnote 28; and (3) the synthetic design of alternative genetic systems as chemical structures.

Given that synthetic biologists experience many difficulties when they apply plain engineering models to biological systems, and that real biomachines consequently fail because of the lack of operativity at multiple levels of complexity,Footnote 29 computation can still be of help, but differently than in the case of morphological computing. In these kinds of research, regarding the systematic design and engineering of biological systems, the problem of managing engineering complexity can be addressed thanks to the involvement of artificial intelligence. Biologists have started to institute engineering control over the genetic machinery of cells and have identified and generated many DNA sequences that can be exploited as building blocks of new cellular programs. These “programs” are then injected into cells and executed. These studies have reached a complexity threshold that AI researchers can help overcome.

Current procedures in synthetic biology require practitioners to design organisms at the DNA level. These processes are mainly manual and become too complex as the size of the project grows. They can be considered analogous to writing a computer program in assembly language, which itself becomes difficult as the size of the program grows. As Yaman and Alder (2013) further say: “Currently the synthetic biology engineering workflow is mostly manual and relies heavily on domain expertise, a limited amount of which is shared through publications. There are several points in the workflow where informed decision making would improve the efficiency and reduce the time to engineer an artificial genetic circuit. Such decision points are where the biologist can use the help of AI researchers”. AI researchers can help synthetic biologists with reasoning, multiagent systems, and robotics, taking advantage of results in the areas of machine learning, knowledge-based systems, knowledge representation and reasoning, reasoning under uncertainty, and heuristic search and optimization. To conclude, even if biological organisms are complex and not completely understood, there are various opportunities for AI techniques to make a major difference in the efficacy of organism engineering.Footnote 30 In this last case we are dealing with the reverse of the role of computation I have described in this article when illustrating morphological computing: here it is a computation (which we can call “natural computation”) that aims at mimicking (biological) morphological entities, and not a morphology that supplies computation to a digital environment that can take advantage of it.

Finally, coming back to the problem of computationally domesticating new substrates, it is interesting to refer to a recent development of biocomputing described by Grozinger et al. (2019). So-called “cellular supremacy” refers to a set of problems that biocomputers can practically solve but that traditional microprocessor-based devices cannot: there is a kind of “added value” that living systems offer beyond silicon-based hardware: “Many current implementations of cellular computing are based on the ‘genetic circuit’ metaphor, an approximation of the operation of silicon-based computers. Although this conceptual mapping has been relatively successful, we argue that it fundamentally limits the types of computation that may be engineered inside the cell, and fails to exploit the rich and diverse functionality available in natural living systems. We propose the notion of ‘cellular supremacy’ to focus attention on domains in which biocomputing might offer superior performance over traditional computers. We consider potential pathways toward cellular supremacy, and suggest application areas in which it may be found”.Footnote 31

6 Natural computing: computing inspired by nature and computing occurring in nature

Morphological computing is just one example of the exploitation of new substrates for computation: a recent handbook (Rozenberg et al. 2012) on so-called Natural Computing illustrates both human-designed computing inspired by nature and computing (and information processing) occurring in nature. Examples of the first kind of research described in the handbook include neural computation, inspired by the functioning of the brain; evolutionary computation, inspired by the Darwinian theory of evolution; cellular automata, inspired by intercellular communication; swarm intelligence, inspired by the behavior of groups of organisms; artificial immune systems, inspired by the natural immune system; artificial life systems, inspired by the properties of natural life in general; membrane computing, inspired by the compartmentalized ways in which cells process information; and amorphous computing, inspired by morphogenesis.

Among the studies that aim at substituting new substrates for traditional silicon devices—which I am illustrating in this article through the case of morphological computing—the handbook lists molecular computing and quantum computing. In molecular computing, information is encoded as biomolecules, and molecular-biology tools are then used to process the data, thus generating computations. In quantum computing, quantum-mechanical phenomena are used to realize computations and communications more efficiently than is possible with classical physics and, hence, with standard hardware. Of course, morphological computing represents an important aspect of this area of research, because it involves entities that become part of a computation thanks to the features of their structure.

The second kind of research, regarding computation taking place in nature,Footnote 32 refers, for example, to the study of the computational nature of self-assembly, which lies at the core of nanoscience; the computational nature of developmental processes, of biochemical reactions, of bacterial communication, and of brain processes; and the systems biology approach to bionetworks, in which cellular processes are studied in terms of communication and interaction, and, hence, of computation.

We can finally observe that the fear, expressed by some critics of natural computing, of a loss of generality (universality) of computing is not well grounded. We can have different kinds of computers, some of them universal, some of them specialized for specific computations. Quantum computers will be such specialized computers, far better than our present ones for certain applications, not as good for others. In this perspective computers are not that different from other tools and instruments. We would not think of constructing an apparatus that would serve at the same time as a microscope and a telescope. In the same way, we will also have computers that will be exquisite simulators of quantum systems but not useful for playing computer games.

6.1 Information units unconventionally computed by biological systems

An interesting case of the exploitation of biological entities to perform computation has to be stressed, taking advantage of my eco-cognitive metaphor. Special types of informational units can be computed by biological entities, units that can be found in unexpected and unconventional computing systems, different from the ones I have illustrated in the previous section concerning morphological computation. This is the case of DNA computers, which represent another prominent kind of unconventional, non-silicon-based computation. So-called molecular computing is a relatively recent area of research situated at the crossroads of various disciplines, which created the tools and the methods to program molecules with the aim of obtaining a desired computation, building a desired object, or controlling the functioning of a given molecular system (Rozenberg et al. 2012, pp. viii–ix): "The central idea behind molecular computing is that data can be encoded as (bio)molecules, e.g., DNA strands, and tools of molecular science can be used to transform these data. In a nutshell, a molecular program is just a collection of molecules which, when placed in a suitable substrate, will perform a specific function (execute the program that this collection represents)" (ibid.)Footnote 33

In sum, DNA computing is founded on the idea that molecular biology processes can be exploited to perform arithmetic and logical operations on information encoded as DNA strands (Hagiya et al. 2016). DNA bio-operations can be used for computations, so that a biocomputation will consist of a succession of bio-operations: DNA-encoded information is very different from silicon-based, electronically encoded information, and bio-operations obviously differ from electronic computer operations.Footnote 34 Discussing how to build what she calls our "DNA laptops", Kari (2013) usefully observes that the most important analogy that had to be discovered to open the door to DNA computing was the one between two processes, one biological and the other mathematical, that is: (a) the complex structure of an organic being is the consequence of the application of simple operations (copying, splicing, etc.) to initial information encoded in a DNA sequence, and (b) the result f(w) of the application of a computable function to an argument w can be found thanks to the application of a combination of basic simple functions to w.
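A minimal sketch can make Kari's analogy concrete: if we treat toy stand-ins for bio-operations (copying, complementation, splicing; the names and the encoding below are illustrative assumptions, not laboratory protocols) as basic functions, then a "molecular program" computing f(w) is just their composition applied to a DNA-encoded input w:

```python
# Illustrative sketch of Kari's analogy: f(w) as a composition of simple
# operations applied to a DNA-encoded input w. The operations are toy
# stand-ins for real bio-operations, not laboratory protocols.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def copy(strand):
    """Duplicate a strand (loosely echoing polymerase-based copying)."""
    return strand, strand

def complement(strand):
    """Watson-Crick complementation, base by base."""
    return "".join(COMPLEMENT[base] for base in strand)

def splice(a, b, site):
    """Cut two strands at the same site and recombine (cut-and-paste)."""
    return a[:site] + b[site:]

w = "ATTACGCG"
a, b = copy(w)
result = splice(complement(a), b, 4)  # f(w), built from basic operations
print(result)                         # -> TAATCGCG
```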

The success of the computational domestication of new substrates, as illustrated by the cases of morphological computation, DNA computing, quantum computing, etc., also suggests some philosophical and epistemological considerations aimed at expressing some doubts about the consistency of positions such as pancognitivism, paninformationalism, and pancomputationalism, which are relatively popular in the current debate concerning the philosophy of computing. In the following final section I will describe those positions together with the concept of overcomputationalization in its relationships with the process of domestication of ignorant entities I have treated in this article.

7 Protecting ignorance: beyond overcomputationalization

7.1 Pancognitivism, paninformationalism, pancomputationalism

What I call overcomputationalization refers to the presence of too many entities and artifacts that carry computational tasks and powers: for example digital machines embedded in laptops, cell phones, servers, videos, cameras, social networks, robots, and also various unconventional substrates (I have described in detail in the previous sections the case of morphological computation), etc. Here the discourse becomes moral, and solutions are multiple. Whereas many would agree that the diffusion of computation is wreaking havoc, some (like me) would say that this is caused by an excess of computation, while others would contend the opposite, i.e. that computation has not spread enough.

After having introduced in the previous sections how our era is characterized by a progressive transformation of ignorant entities not only into cognitive entities but also into computational ones, I would like to suggest some humble considerations concerning the potential excesses of this trend, which I think have to be analyzed and criticized. Indeed, I think that we are currently facing an excess of computationalization of the world, and I am consequently complaining that too many ignorant entities are more and more "domesticated" (in the sense I have explained in this article). I will say more about this issue, related to the desire and the suggestion to protect the ignorance of some entities, in the last subsection of this article.

In the last decades philosophers have celebrated the computational revolution and positively evaluated many of its aspects and consequences. An interesting position that has been elaborated is pancomputationalism, which, from a philosophical perspective, as we will soon see, is in some sense the rightful heir of paninformationalism and pancognitivism. To better grasp my concerns regarding overcomputationalization it is useful to analyze the intertwining of these positions.

Pancomputationalist philosophers have imagined that everything is computing. The problem of pancomputationalism can be nicely synthesized thanks to the following argumentation, provided by Horsman et al. (2014, p. 2). Answering the question "when does a physical system compute?" is very difficult, and no accepted answer is currently available. There is no formalism able to tell us whether a computation is occurring physically, and various confusions affect the discussion concerning non-standard forms of computation. It is obviously accepted that a laptop processing a Matlab calculation and a server compiling a LaTeX document are physical systems performing a computation, but "However, when we move beyond standard and mass produced technology, the question becomes more difficult to answer. Is a protein performing a compaction computation as it folds? Does a photon (quantum) compute the shortest path through a leaf in photosynthesis? Is the human mind a computer? A dog catching a stick? A stone sitting on the floor?" If we say that all of these are computers, we affirm that everything that possesses a physical status is executing a computation by virtue of its own existence. It is easy to see that, starting from a conviction like this one, which sees a universe in which everything is a computer, the very concept of physical computation becomes empty. Indeed "To state that every physical process is a computation is simply to redefine what is meant by a 'physical process'—there is, then, no non-trivial content to the assertion. A statement such as 'everything is computation' is either false, or it is trivial; either way, it is not useful in determining properties of physical systems in practice" (ibid.)

Pancomputationalism is often defended by explaining that, when we state that everything is computational, we are simply using the word computational as a way of "interpreting" the system at stake, and of course everything can be interpreted using that metaphor. We have to say that this position creates a confusion between computational modeling and computational explanation, as I will specify shortly. Other kinds of pancomputationalism, appropriately linked to paninformationalism, contend that the entire universe can be seen as computational and even go so far as to maintain that information and/or computation possess a kind of priority with respect to physical materiality. Gordana Dodig-Crnkovic (2011, 2013, 2017) proposes a richer, more circumscribed, and integrated info-computational view as a synthesis of pancomputationalism (naturalist computationalism) with informational structural realism, and defends it by noting the fundamental role of computing in nature (natural computing).

As I have illustrated in Magnani (2018a), adopting an evolutionary view and exploiting some of Thom's ideas expressed in so-called catastrophe theory (Thom 1988), paninformationalism is puzzling when seen from a naturalistic perspective. Let us consider the following example. When we analyze the case of an infection as a pregnance (carried by a virus, that is, a material/biological mediator) that invests healthy agents, who represent the invested saliences and who in turn can re-project that same contagion as a pregnance in a specific natural environment (where, in turn, other mediators such as air or blood are the transmitters), it seems bizarre to affirm that information (or even computation) is at play: biomedical scientific explanations of the phenomenon are already patently available, and it seems superfluous to resort to other concepts such as those of information or computation. It is one thing to build a "model" of that event from an informational point of view or by using a computational program, and another to produce biological scientific knowledge of it. Of course this critique does not jeopardize the project (see for example the info-computational view, Dodig-Crnkovic 2017) of enabling a view of nature that also relies on computational modeling (information processing), thanks to a conjugation of all those specialized approaches in a perspective that does not aim at replacing the natural sciences with computational models, but at establishing a fruitful epistemological collaboration.Footnote 35

However, defending paninformationalism and pancomputationalism can be good for two fundamental reasons. The first relates to the centrality of natural human evolution, which presents that widespread activity of semiosis I quoted above (including all kinds of signs, not only propositional ones), already generated by our ancestors at the time of the birth of so-called material culture. Indeed humans have constructed huge cognitive niches (Odling-Smee et al. 2003; Laland and Sterelny 2006; Laland and Brown 2006), characterized by the presence of informational, cognitive (and, more recently, computational) processes. This evolutionary evidence explains philosophers' inclination to grant some ontological status to information and computation. However, I contend that adopting an approach in terms of "disseminated computation", such as the one I have proposed in the previous sections (the expression is introduced and explained above in Sect. 5.2), helps us consider pancomputationalism in a naturalized way, acknowledging its limits and tempering its ontological and metaphysical overstatements.

In this article there is no room for dealing with the intricate jungle of the various concepts of pancomputationalism: the reader will find a detailed and rich description of its various kinds (unlimited, limited, causal, interpretivist, ontic, etc.) in Piccinini (2017).Footnote 36 Here it is sufficient to note that Horsman et al.'s hostility toward pancomputationalism seems to target the "ontic" version of it, that is, the idea that the universe itself is a computing system, and everything in it is a computing system too (or at least the view that every physical system is a digital computing system and can be described in computational terms).

We have to add that the strong advocates of the idea that the universe computes contemplate a rather uncomplicated idea, and maintain that this happens thanks to different kinds of computation at different levels of organization, a contention that does not imply accepting that "everything is computation" (cf., for example, the already quoted info-computational view, Dodig-Crnkovic 2017), which would instead become a vacuous statement equivalent to the claim that nothing is computation. In this perspective, they say, by introducing the new notion of universal computation, in which the "universe" is the whole physical universe and not only a symbol-manipulating logical device, an epistemological gain is reached: for example, we can build new theoretical frameworks across disciplines that can help a common understanding of information processing in our brains and of computations in DNA or quantum computers. Grand unifications have always been attractive in physics and mathematics, and they are not an obstacle to appreciating the diversity in which the real world unfolds for us. Again, the strong advocates of the idea that the universe computes conclude that there would be absolutely no danger in broadening computational frameworks as much as is justified by the behaviors of the systems that we study within the computational area of research.

After all, as contended by Kari and Rozenberg (2008) in the article "The many facets of natural computing", the most important property of computation consists in the multiplicity of its aspects and their efficacy. If a concept is capable of providing grand unification, as in the case of the concepts of matter, energy, information, etc., it is worth using, developing, and exploring. Finally, pancomputationalism is seen as a claim that is not critically dependent on the idea of discrete computation, because the universe, considered as an analog computer, does not necessarily presuppose discrete computation. A main objection to this perspective is related to the following problem: it is one thing to model phenomena, systems, and data from a computational point of view, or to exploit new and unconventional "ignorant" substrates as computational tools; it is another to adopt the ontological idea that objective reality itself has a computational nature (this last conviction leading to so-called "ontic pancomputationalism", Piccinini 2017). Analogously, as I have already said, it is one thing to produce computational models of, for example, physical systems, and another to produce scientific knowledge of them.Footnote 37

The second reason is cognitive and epistemological: in the literature of various disciplines there is often an all-embracing concept of information, a kind of paninformationalism, in which physical (or biological) information is generalized to every state of a physical (or biological) system, which is consequently defined as an information-pregnant state (Wolfram 2002; Lloyd 2006) that has to be considered relevant. Indeed, this approach has fecundated various excellent studies by physicists (for example Chiribella et al. 2012; Goyal 2011) and by some logicians, for instance the development of mathematical frameworks for considering quantum theory in the light of the principles of information processing. This information physics is an attempt to reconstruct physics on the basis of information. These studies will not replace physics with a computational theory of it, because those types of knowledge operate in different realms: physical computation and information physics would not replace but complement physics with new insights. These results are appreciable; I am just expressing my worries about the possible abuse of the concepts of information (and of computation) in physics and biology, as I have already said above.

Finally, what about pancognitivism? An old-fashioned metaphysical theory, called synechism, contends that matter and mind are interrelated and that in some sense it is not possible to differentiate them. The human cognitive mind would have germinated according to the same metaphysical laws that control the entire universe, leading us to guess that the mind has an inclination (as Peirce contended) to find true abductive hypotheses regarding nature and the entire universe itself. As I have illustrated in Magnani (2009), it is Peirce himself who was convinced that "thought" is not only related to the brain: he said that it appears in the work of bees, in crystals, and throughout the purely physical world. In sum, seeing information, and/or cognition, and/or computation everywhere presents some virtues, but some limitations have to be established, as I will sketch in the last subsection.

7.2 Protecting ignorant entities

Since the birth of so-called material culture (Mithen 1996, 1999), when handaxes were made by Early Humans, the transformation of what I called ignorant entities into cognitive ones (and more recently into computational ones), also thanks to ancient techniques and modern technological advances, obviously had a tremendous positive impact on Western civilization. However, as I have anticipated, a kind of warning has to be presented: I contend that in our era we are dealing with what I call overcomputationalization. This means that we are in the presence of too many entities and artifacts that carry computational tasks and powers. This implies some potential dangers: for example, overcomputationalization (1) furnishes too many opportunities for a plethora of possibly unresolvable disorganizational consequences in the activities of our societies, and (2) favors philosophical reflections that, taking advantage of the seduction exercised by old-fashioned metaphysical allures disguised as technical "analytical" studies, depict an oversimplified vision of the world.

There is not enough space here to deal with the first of the two issues above, but I can note that a great deal of research in the human sciences has been produced to deal with the ethical consequences of our technological computational era.Footnote 38 Instead, I would like to say a few words concerning the second issue: we face a decrease in the number of entities that remain simply "ignorant" from the informational/cognitive and especially from the computational point of view. Buildings, streets, furniture, clothes, etc. have become carriers not only of simple, traditional, obvious informational/cognitive qualities but also, thanks to computationalization, of further complicated propositional and model-based information, sometimes related to redundant and oppressive political, ideological, and economic aims.

Moreover, more and more engineered and artifactual ignorant entities (laptops, cell phones, servers, videos, cameras, social networks, robots, and also various unconventional substrates), intentionally computationally domesticated, as I have illustrated in this article, are invading human environments. The recent relevance acquired by studies on the so-called Internet of Things is a clear indication of the importance of these processes (Smart et al. 2019). The excesses of overcomputationalization can also generate a decrease of attention to entities when they are simply seen as more or less "ignorant" (that is, devoid of complicated cognitive and computational endowments), so that they tend to appear more and more irrelevant. The same happens in the case of an excessive juxtaposition of new, complicated, imposed informational/cognitive/computational features onto various traditional entities. Unfortunately these effects (1) tend to cause, in the case of overcomputationalization, too many constraints and limitations, and a weakening of fruitful standard and creative (abductive) human cognitive activities, as I have illustrated in the last chapter of my book (Magnani 2017), and (2) in the case of the excess of redundant cognitive/informational features computationally attributed to entities (features in this case exogenous to the original functions of those entities), tend to prevent us from benefiting from the simplification that would derive from a reduction of the consequent cognitive and informational overload, which is often potentially and negatively distracting.

The reader should not misunderstand me: I believe that we should choose carefully what to defend from computationalization and what it is good to computationalize, and so my urge to "protect ignorant entities" should not be intended as a warning against all kinds of computationalization. Computational modeling of the human body and brain can, for example, foster a better understanding of diseases, degenerative processes, aging, and other processes of life, including the processes in our brains that generate the human mind. The human brain is a largely unknown organ in the human body, and many health problems, especially those of aging populations, are related to the brain. Just to give a few examples, we do not have cures for Alzheimer's disease, Parkinson's disease, or autism. So fruitful current and future computational modeling will not only favor further domestication of ignorant entities by transforming them into "skilled" substrates able to carry out computational and thus cognitive performances, but will also fecundate scientific research related to the removal of another kind of ignorance, the classical one regarding our lack of understanding of high-level cognitive capacities in humans, in non-human animals and simpler organisms, and in other biological entities such as cells, as also hoped for in the admonition contained in the recent review "Time to reinspect the foundations? Questioning if computer science is outgrowing its traditional foundations" (Copeland et al. 2016).

8 Conclusions

In this article, following the perspective I called "eco-cognitive computationalism", I have illustrated the new conceptual horizons generated by morphological computation as the effect of an unconventional and unexpected computationalization of bodies that are normally "ignorant". Morphological computation aims at the computational simplification of cognitive and motor tasks thanks to the building of smart "mimetic bodies" able to render an intertwined computation simpler, resorting to that "simplexity" of animal embodied cognition which represents one of the main qualities of organic agents. Morphological computation can be seen as an activity of what I call "disseminated computation": the future of morphological computation research in robot design can lead to a new generation of robots endowed with improved adaptability and a smaller number of control parameters. The final part of the paper presents a short discussion concerning the new concept of overcomputationalization, showing that the framework of disseminated computation helps us see the related concepts of pancognitivism, paninformationalism, and pancomputationalism in a more naturalized and prudent perspective, avoiding ontological or metaphysical overstatements.