1 Prelude: Beyond Computation and Relational Biology

Can there be a logico-mathematical theory of the function of life and organism, whence by extension of mind and brain, that goes “beyond computing?” This new question is being raised even in mainstream computer science (Brooks 2002). Ontologically, are there aspects of life that are beyond the scope of computational description? Epistemologically, is there a mode of logical description that is provably “beyond computation?” What would be the character of such a bizarre system of abstract representation? If such a system exists, is there any rational basis for supposing that it could be profitably used to represent biological function? Since no computational simulacrum would describe such an abstraction, how could biologists and cognitive scientists use it to gain insights into the operation of the processes of life and organism, and of mind and brain? Has such a theory already been discovered and validated?

The first question could be settled decisively by one of two strategies. An answer of “yes” could be established by producing an instance of a theory of organism/life or brain/mind that is demonstrably “beyond computation” and accounts for organismal or brain function—in this paper we offer Robert Rosen’s relational biology as this example. An answer of “no” results if a beyond-computing logico-mathematical theory is either not sufficient or not necessary. Insufficiency could be established by a “proof by contradiction” that the structure of biological or cognitive function and the structure of logical organization are mutually exclusive, and so no theory suffices, but it does not appear that any such proof has ever been produced. Unnecessity could be established by a rigorous proof that nothing goes beyond computing, but this possibility has been shown to be absurd (Louie 2005).

The orthodox position in the artificial life community and in cognitive science is that the first question raised in this paper is answered with an unequivocal “no due-to-unnecessity:” that a universal algorithmic-logical theory exists, and is summarized by the “strong” Church-Turing thesis. This is a claim that for every physically realizable process in nature there exists a Turing machine that provides a complete description of the process. In other words, if a sufficiently large Turing machine is a description of how the whole world works, then biological function is fully captured by some smaller Turing machine. The thesis is, therefore, that there is never a necessity to go “beyond computing”, because nothing exists that goes beyond computing. The fundamental principle of the orthodoxy is that the organism is a Turing machine, and that all of biology and cognitive science boils down to the mathematical equivalent of making increasingly better estimates of the specific details (i.e., the state set, the alphabet, and the partial function that describes the state transitions).

At first glance, the case for the “strong” Church-Turing thesis seems compelling. As von Neumann said, “You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that.” (as cited in Jaynes 1994, p. 104) Although the foregoing quote seems to settle the issue, his requirement to “tell me precisely” reveals a critical flaw in his argument. After showing the equivalence of and-or-not logic, the Turing machine and the McCulloch-Pitts network, he points to an interesting consequence. “It proves that anything that can be exhaustively and unambiguously described, anything that can be completely and unambiguously put into words, is ipso facto realizable by a suitable finite neural network.” (von Neumann 1963, p. 310) In his own words, when he says “tell me precisely” he is specifying an unambiguous description.

As remains the popular wisdom to the present day, von Neumann recognized that the problem of unambiguous description is thorny, but he supposed that it was tractable. “This description may be lengthy, but it is always possible. To deny it would amount to adhering to a form of logical mysticism which is surely far from most of us. It is, however, an important limitation, that this applies only to every element separately, and it is far from clear how it will apply to the entire syndrome of behavior.” (von Neumann 1963, p. 310) In the foregoing he claims that although it may be difficult, precise description is always possible. On the other hand, when he tries to disambiguate the seemingly straightforward concept of triangularity, the more disambiguating statements he makes, the more ambiguities flow from them. “We may have a vague and uncomfortable feeling that a complete catalogue along such lines would not only be exceedingly long, but also unavoidably indefinite at its boundaries. Nevertheless, this may be a possible operation.” (von Neumann 1963, p. 310) However, when he attempts to establish the axioms from which to begin a logical proof, he finds that the choice of axioms “is neither rigorously justifiable nor humanly unambiguously justifiable.” (von Neumann 1966, p. 77)

To summarize, von Neumann begins with the claim that disambiguation is “always possible,” but undercuts the claim on the very same page by conceding that the problem is potentially intractable, and he promises no more than the hope that it “may be a possible operation.” In other work he flatly contradicts the claim that an unambiguous description is “always possible” by producing the counterexample of axiomatization which is not “humanly unambiguously justifiable.”

Despite his inability to discover an unambiguous procedure for axiomatization, he expected that the real world would be amenable to unambiguous description, but could not quite escape the niggling suspicion that perhaps it is not. He realized that this is important, because ambiguity is, by definition, disallowed in computation. The one thing that von Neumann knew he could not compute was any task that turned out to be inherently ambiguous. In other words, if there are any inherent ambiguities in the world, then the claim that the whole world is a Turing machine is falsified.

There are two recent philosophical developments that make this relevant. One is the demonstration by Quine (1980) that there exist ambiguous descriptions that cannot be disambiguated (Kercel 2007). As von Neumann discovered in his attempt to disambiguate the concept of triangularity, the introduction of a disambiguating context introduces new ambiguities more quickly than the old ones are resolved. The other development is the demonstration by Aczel of the logical coherence of impredicatives (processes not representable by finite syntactic algorithms) by proving the existence and uniqueness of the solution to \(\Upomega=\{\Upomega \}\) (Devlin 1991, pp. 155–159). What von Neumann dismissed as “logical mysticism” is now understood to be higher-order logic that encompasses natural language descriptions that are observed to go beyond the scope of “first-order” (traditional “if-then-else”) logic. Most critically, higher-order logic allows us to reason about ambiguous descriptions (Barwise and Etchemendy 1992).

In real-world biological function, inherent ambiguities abound. This is seen through relational biology, a term invented by Rashevsky in his now-classic paper Topology and Life (Rashevsky 1954), hence the title of the present paper. His point of departure was the observation that our classification of natural systems into the categories of “living” and “nonliving,” which is the very basis of biology, is of a relational character. Rosen’s approach to relational biology is very much in the same spirit. The crux is to throw away the matter and keep the underlying organization. The definitive exposition is Chapter 5 of Rosen (1991), entitled Entailment without states: Relational biology.

The main conclusion of Robert Rosen’s relational biology is that a living system is closed to efficient causation. He realized that the structure of that closure was identical in character to the structure of impredicatives. One of the details that we demonstrate in this paper is that Rosen’s impredicative algebraic description of a process closed to efficient cause entails ambiguity. We can expect that the corresponding process in reality has a corresponding causal ambiguity. This is consistent with Rosen’s demonstration that Turing-equivalent computation and the structure of closure to efficient cause are mutually exclusive (Rosen 1991, pp. 241–243). Comparison with recent experimental results in neurophysiology shows a similar impredicatively organized structure of closure to efficient cause in the structure of communications in the brain (Kercel 2004b).

This answers two of the questions posed in the opening paragraph. There are biological functions that are beyond the scope of computation. Significantly, Rosen has produced a system of mathematical representation of biological processes which is not only “beyond computation” but opens the way for reasoning about scientific questions that are beyond the scope of computation. Rosen’s thesis is thus of fundamental importance. As with any iconoclastic theory, Rosen’s has engendered opposition, although no attempted debunking of Rosen has been successful (Louie 2005).

Considering the controversy, a more detailed discussion of Rosen’s work is of value. We present this paper for that purpose, and in it we shall also further explore some topological properties of the alternate (M,R)-systems introduced in a previous paper (Louie 2006) published in this journal. In particular, the semantics of his taxonomy are studied in graph-theoretic terms. The critical point about reasoning about such processes is that the mathematics is a means to the end of reasoning; it is not the reasoning process itself. Impredicative representations eventually lead to seeming paradoxes. There is no algorithm to implement the task of uncovering the ambiguities that lead to seeming paradox, or finding a context in which their seeming incoherence is resolved. This is accomplished as a feat of creative insight by the researcher. The abstraction, be it impredicative abstract algebra, or hyperset representation, or graph theory, serves three supporting functions in this strategy of reasoning. First, it provides a tool for discovering seeming paradoxes. Second, it provides tools for uncovering the previously unnoticed ambiguities that lead to the seeming paradox. Finally, it serves as a means of validating or falsifying any proposed resolution of the paradox.

2 Complexity According to Rosen

In Rosen’s theory of relational biology, one of the biggest problems he managed to resolve was accounting for the emergence of complexity. A living system is a dynamic process. To describe the rudiments of cell behavior, he postulated the existence of the metabolic process. Since toxins as well as nutrients may be metabolized, he postulated a repair process to keep the metabolic process intact in a hostile setting. However, what repairs the repair process? Simply postulating a hierarchy of repair processes gives an infinite regress of repair processes, an absurdity in a finite cell. Thus, he faced a paradox: biology operates on a principle of a hierarchy of repair processes, but how could this be obtained without an infinite regress?

Rosen’s alternative to the infinite regress was the emergence of a closed path of efficient cause, the quintessential complex process. He had two ontological processes, metabolism and repair of metabolism, and two representations, algebraic maps that are their epistemological analogs. To break the incipient infinite regress of repairs he asked if there were conditions that would make it possible that “replication” (i.e., repairing the repair process by replicating it) might already be contained in the combination of the first two processes. To describe this containment abstractly, he asked under what conditions the first two maps would entail the third one. Essentially, once we have the first two processes, can we break the infinite regress by getting the third from the other two “for free” (i.e., without the necessity of external entailment)?

He demonstrated one way of doing so by appealing to an “inverse evaluation map.” This is also known as the “one-gene one-enzyme” paradigm. In Rosen’s development he noted that there are many encodings from which to obtain the third map “for free” given the other two. Louie (2006) presents two of the many alternative possibilities. In the present paper we detail a graph-theoretic description of those two new encodings, as well as of Rosen’s original encoding. Graph-theoretic representation provides an easy demonstration that these encodings of the metabolism-repair (or “(M,R)-”) systems are complex, or noncomputable. We shall revisit (M,R)-systems near the end of the paper.

To appreciate both the development and the significance of the various means by which complexity emerges, we must briefly review three concepts that form the core of Robert Rosen’s lifelong scientific work:

$$ \textit{simple system} \qquad \textit{complex system} \qquad \textit{mechanism} $$

A crucial fourth concept, organism, will be discussed in a later section. See Rosen (1978, 1985a, b, 1991, 2000, 2003). A thorough exposition of the etymology of Rosen’s lexicon may be found in Louie (2007).

Like life itself, Rosen’s terminology evolved. The definitions of the terms simple system and complex system went through a few refinements, finally settling on what appeared in Chapter 19 of Rosen (2000):

Definition 1

A natural system is simple if all of its models are simulable. A system that is not simple, and that accordingly must have a nonsimulable model, is complex.

A model is a commutative encoding and decoding between two systems in what Rosen called a modeling relation (see Chapters 2 and 3 of Rosen 1985a). A model is simulable if it is “definable by an algorithm”. It is variously called computable, effective, and “evaluable by a mathematical (Turing) machine” (although there are fine nuances that distinguish these near-synonymous terms; see Chapter 7 of Rosen 1991). This is in keeping with the definition given by Church (1937) that the term effective in the context of computation means fully described by a Turing machine.

Note the dichotomy in Definition 1: a complex system is defined as a natural system that is not simple. In set-theoretic terms, the two categories of simple systems and complex systems are complements of each other. This dichotomy, indeed, is essentially what distinguishes the “absolute” complexity of Rosen from the “relative” complexity of “other authors” (notably von Neumann). We find in Chapter 2 of Rosen (2000):

“A system is complex if it has noncomputable models. This characterization has nothing to do with more complication, or with counting of parts or interactions; such notions, being themselves predicative, are beside the point.

It should be noted that there are many deep parallels between the dichotomy we have drawn between simple (predicative) and complex (impredicative), and the dichotomy between the finite and the infinite. Just as ‘infinite’ is not just ‘big finite,’ impredicatives are not just big (complicated) predicatives. In both cases, there is no threshold to cross, in terms of how many repetitions of a rote operation such as ‘add one’ are required to carry one from one realm to the other, nor yet back again.”

The formal definition of mechanism appeared in Section 8B of Rosen (1991):

Definition 2

A natural system N is a mechanism if and only if all of its models are simulable.

This definition of mechanism is identical to Definition 1 of simple system above.

Chapters 8 and 9 of Rosen (1991) culminate in the conclusion

Theorem 1

A material system is a mechanism if and only if it has no hierarchical cycle of causal entailment.

Since mechanism = simple system, and complex system = not (simple system), we also have

Theorem 2

A natural system is complex if and only if it has a hierarchical cycle of causal entailment.

A formal system in Rosen’s lexicon simply means “an object in the universe of mathematics”. It includes, but is not limited to, and is therefore not to be equated with, Hilbert’s formalism. In the realm of formal systems, then, Theorem 2 becomes

Theorem 3

A formal system is complex if and only if it has a hierarchical cycle of inferential entailment.

This structure of a hierarchic cycle of inferential entailment is known to mathematics, and is characterized as impredicativity. We shall further discuss this below.

Instead of defining a process by the properties of its models, as in Definitions 1 and 2, and having the entailment consequences in Theorems 1, 2, and 3 follow as theorems, we may equivalently reverse the logical arguments. We may therefore alternately define “complex system” using the entailment-based statements of Theorems 2 and 3 (respectively in the domains of ontology and epistemology). A “simple system” is then defined as one that is not complex. The model-based statements of the original Definitions 1 and 2 then themselves follow as theorems.

3 Arrow Diagrams of a Mapping

One of the most useful ways of reasoning about the properties of complexity is to construct a complex structure of algebraic maps. Let A and B be sets and f be a mapping (alias function) from A to B. The sets A and B are called respectively the domain and codomain of f. The relations among a mapping and its domain and codomain may be represented interchangeably in an arrow diagram of the form

$$ A\xrightarrow{\;f\;}B $$
(1)

or

$$ f:A\rightarrow B $$
(2)

The image of the domain A under f, the subset f(A) of the codomain B, is called the range of f. Note that the range is a subset of the codomain but they need not be equal; when they are, we say that f maps from A onto B.

Rosen often used the metalanguage of category theory in his discussions. The definitive reference on this branch of abstract algebra is Mac Lane (1978); for a simple and concise introduction one may consult Louie (1985). A category consists of a collection of objects, and for each pair of objects A and B a hom-set denoted by H(A,B). A member of a hom-set is called a morphism. The general framework assumes very little about the morphisms; they only need to be closed under composition and include the identity morphisms. The category axioms are usually interpreted within set theory, whence an object is a set with a prescribed structure and a morphism is a mapping preserving this structure.

The category Set, in which the objects are sets (without any further requisite structure) and the morphisms are mappings, is the simplest example of a category. A mapping f from A to B may therefore also be represented as

$$ f\in H\left({A,B}\right) $$
(3)

Sometimes it is useful to trace the path of an element as it is mapped. If \(a\in A\) and \(b\in B\), we use the “maps to” arrow (note the short vertical line at the tail of the arrow) and write

$$ f:a\mapsto b $$
(4)
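(As a purely illustrative aside, and not part of Rosen’s or Louie’s formalism, the notation in (1)–(4) can be sketched in a few lines of Python; the class, set, and element names below are hypothetical.)

```python
# A toy rendering of a mapping f in H(A, B): its domain, codomain, range,
# and element-chasing action f : a |-> b.  Purely illustrative.

class Mapping:
    def __init__(self, name, domain, codomain, assignment):
        # assignment sends each element of the domain to an element of the codomain
        assert set(assignment) == set(domain)
        assert set(assignment.values()) <= set(codomain)
        self.name, self.domain, self.codomain = name, domain, codomain
        self.assignment = assignment

    def __call__(self, a):        # element chasing, as in (4): f(a) = b
        return self.assignment[a]

    @property
    def range(self):              # the image f(A), a subset of the codomain B
        return set(self.assignment.values())

    def is_onto(self):            # f maps A onto B iff range = codomain
        return self.range == set(self.codomain)


A = {"a1", "a2"}
B = {"b1", "b2", "b3"}
f = Mapping("f", A, B, {"a1": "b1", "a2": "b2"})   # f is a member of H(A, B)
print(f("a1"), f.is_onto())                        # b1 False (the range is a proper subset of B)
```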

Rosen introduced relational diagrams in graph-theoretic form, in Chapter 9 of Rosen (1991), to provide a succinct depiction of the entailment patterns in machines. Then in Chapter 10, he used them to represent the entailment patterns in organisms. By comparing the two classes of relational diagrams, the differences between machines and organisms became almost immediately apparent.

A simple mapping \(f:A\rightarrow B\) has the relational diagram (5), in which a hollow-headed arrow denotes the flow from input in A to output in B, and a solid-headed arrow denotes the induction of or constraint upon this flow by the processor f. A degenerate interpretation of diagram (5) in completely mechanistic terms characterizes the flow as the software, and the processor as the hardware. The solid/hollow-headed arrow symbolism was first introduced in Section 9B of Rosen (1991). Its form evolved a few times, and settled on this depiction in arrow diagram (5) (which is [9E.4] and [10C.1] in Rosen (1991)).

When the mapping is represented in the element-chasing version \(f:a\mapsto b\), the relational diagram may be drawn as diagram (6) (where we have also eliminated the dots that represent the vertices of the graph). The processor and output relationship may be characterized as “f entails b” (Sections 5H and 9D in Rosen (1991)). We denote this entailment as

$$ f \vdash b $$
(7)

Relational and entailment diagrams may be composed. For example, if there is a morphism \(\Upphi:X\rightarrow H(A,B)\) such that \(\Upphi: x\mapsto f\) (whence \(\Upphi \vdash f)\), then we have the relational diagram (8)

and

$$ \Upphi \vdash f \vdash b $$
(9)
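(Continuing the illustrative aside, the element-chasing relational diagram and its entailments can be recorded as labelled directed edges; the encoding below is our own convenience, not Rosen’s notation.)

```python
# Encode an element-chasing relational diagram as labelled directed edges:
# ("hollow", input, output) is the flow; ("solid", processor, output) is the
# entailment of the output by the processor.

def relational_edges(processor, inp, out):
    """Edges contributed by one mapping  processor : inp |-> out."""
    return [("hollow", inp, out), ("solid", processor, out)]

# Diagrams (8)/(9): Phi : x |-> f composed with f : a |-> b.
diagram = relational_edges("Phi", "x", "f") + relational_edges("f", "a", "b")

# Reading off only the solid-headed arrows recovers the entailment chain of (9).
solid = [(src, dst) for kind, src, dst in diagram if kind == "solid"]
print(solid)   # [('Phi', 'f'), ('f', 'b')]  i.e.  Phi |- f |- b
```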

4 The Four Causes Represented by Components of a Mapping

A mapping \(f:a\mapsto b\) (alternatively \(f:A\rightarrow B\)) provides an excellent illustration of Aristotle’s four causes. Rosen’s version of this exercise is in Chapter 5 of Rosen (1991); now we provide our own variation on the theme of “Why mapping?” Aristotle’s original Greek term aition was translated into the Latin causa, a word which might have been appropriate initially, but which had unfortunately diverged into our contemporary notion of “cause”. The possible semantic equivocation may be avoided if one understands that the original idea had more to do with “grounds or forms of explanation”, so a more appropriate Latin rendering (in hindsight) would probably have been explanatio. It is with this “grounds or forms of explanation” sense of “cause” that we identify the four Aristotelian causes with components of mappings that represent both the causal entailment in natural systems and the inferential entailment in formal systems.

The material cause, that out of which the mapping comes to be, is its input \(a\in A\). We may choose to identify the material cause as either the input element a or the input set, the domain A.

The formal cause, the mapping’s form or its statement of essence, is the structure of the mapping itself as a morphism. Note that the Greek term for “form” is morphé, the etymological root of “morphism”. The fact \(f\in H\left({A,B} \right)\) implies the relational diagrams (5) and (6); the formal cause of the mapping is the ordered pair of arrows shown in (10). The arrows implicitly define the processor and the flow from input to output. The compositions of these arrows also need to follow the category rules. Alternatively, when the material cause (the exact nature of the input) is immaterial, the formal cause may just be identified with the entailment symbol

$$ \vdash $$
(11)

which implicitly defines the processor and the output. The identification of a morphism with its formal essence (10) or (11) is an interpretation of the category axioms in “arrows-only”, i.e., graph-theoretic, terms.

The efficient cause, that which brings the mapping into being, the source of change, the “production rule”, is the function of the mapping as a processor. The difference between the formal cause and the efficient cause of a mapping is that the former is what f is (i.e., \(f\in H\left({A,B}\right))\), and the latter is what f does (i.e., \(f:a\mapsto b)\). We may simply identify the efficient cause as the processor itself, whence also the solid-headed arrow that originates from the processor, shown in (12). This isomorphism between an efficient cause and a solid-headed arrow provides an important link between a formal system and its relational diagram in graph-theoretic form:

Theorem 4

A network G of inferential entailment in a formal system contains a particular efficient cause f if and only if the path of G in the relational diagram contains the solid-headed arrow that corresponds to f.

The final cause, the purpose of the mapping, why it does what it does, is its output \(b\in B\). We may choose to identify the final cause as either the output element b or the output set, the codomain B (in which case b may be considered as the entailed effect). The Greek term télos (translated into finis in Latin), meaning “end” or “purpose”, covers two meanings: the end considered as the object entailed (i.e., b itself), or the end considered as the entailment of the object (i.e., the production of b). In both cases, the final cause may be identified as b, whence also the hollow-headed arrow that terminates on the output, shown in (13).

Diagram (14) gives a succinct summary of the four causes as components in the relational diagram of a mapping; without the material cause, the three remaining causes appear in the entailment diagram (15).

5 Closure to Efficient Causation

After giving Definition 2 of mechanism in Section 8B of Rosen (1991), Rosen used the rest of Chapter 8 to state and prove some inherent properties of a mechanism. Then in Chapter 9, he presented a detailed argument proving the equivalence of noncomputability and closure to efficient causation. Thus he proved that certain modes of entailment are not available in a mechanism:

Theorem 5

There can be no closed path of efficient causation in a mechanism.

In terms of relational diagrams, we have

Theorem 6

In (the relational diagram of) a mechanism there is no cycle that contains solid-headed arrows.

The contrapositive statement to Theorem 5 is

Theorem 7

If a closed path of efficient causation exists in a natural system N, then N cannot be a mechanism.

This, by Definition 2 of mechanism, is equivalent to

Theorem 8

If a closed path of efficient causation exists for a natural system N, then it has a model that is not simulable.

An iteration of “efficient cause of efficient cause” is inherently hierarchical, in the sense that if a map f representing an efficient cause is contained within the codomain of a map g representing another efficient cause, then f is one step lower than g in the hierarchy of morphisms. A closed path of efficient causation must form a hierarchical cycle of containment. Both the hierarchy of containment and the cycle are essential attributes of this closure. Note that causes need not be sequential and the containment is not spatial. The hierarchy of containment is that of dynamic organization. The idea that a hierarchical cycle is logically incoherent was widely—and erroneously—believed until the appearance of Aczel’s proof of the existence and uniqueness of the solution to the fundamental hyperset equation \(\Upomega=\left\{\Upomega\right\}\) (Aczel 1988). For a detailed explication see Kercel (2002).
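(A deliberately loose computational analogy, ours rather than Aczel’s construction, may help make self-referential containment tangible: a Python list that contains itself as its sole member behaves, with respect to membership and equality, like a solution of \(\Upomega =\{\Upomega \}\).)

```python
# A self-membered container: a crude analogue of Omega = {Omega}.
# (Python sets cannot contain themselves, so a mutable list stands in.)
omega = []
omega.append(omega)            # omega now contains itself as its sole member

print(omega is omega[0])       # True: the containment is a genuine cycle, not a copy
print(omega == [omega])        # True: omega "equals" the singleton containing omega
print(omega)                   # [[...]]  -- Python flags the cyclic structure
```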

In formal systems, hierarchical cycles are manifested by impredicativities. The nonsimulable model in Theorem 8 contains a hierarchical cycle that corresponds to the closed path of efficient causation in the natural system being modelled. In other words, it is a formal system with an impredicative cycle of inferential entailment: a collection of mappings that are members of the codomains of one another. Thus, we also have:

Theorem 9

If an impredicative cycle of inferential entailment exists for a formal system, then it is not simulable.

For a comprehensive discussion of the Rosen theorems and the underlying mathematical logic, see Louie (2005).

Since complex systems are complementary to simple systems = mechanisms, we also have

Theorem 10

A natural system is complex if and only if it contains a closed path of efficient causation.

See Chapter 18 of Rosen (2000) for a more detailed discussion of Theorem 10 and its consequences. In terms of relational diagrams, we have the complementary theorem to Theorem 6:

Theorem 11

In (the relational diagram of) a complex system there is a cycle that contains solid-headed arrows.

A natural system is said to be closed to efficient causation if its every efficient cause is entailed within the system. In other words, the definition of “closure to efficient causation” for a natural system is that it has a corresponding formal system model that has a closed path containing all of the representations of efficient causes in its causal entailment structure. By Theorem 4, we have

Theorem 12

A natural system is closed to efficient causation if and only if its relational diagram has a closed path that contains all the solid-headed arrows.

In Section 10A of Rosen (1991), he proposes “The Answer” to the ultimate question of biology “What is life?” as “a material system is an organism if, and only if, it is closed to efficient causation.” Note that the term “organism” is taken in the sense of any general “living system”. Theorem 12 therefore says

Theorem 13

In the relational diagram of a formal system model of a living system, there is a cycle that contains all the solid-headed arrows.

Comparing Theorem 13 to Theorem 11 (or equivalently, comparing the definition of organism to Theorem 10), it is evident that living systems form a proper subset of complex systems. We shall explore this topic below.

Because of the pairwise construction of the solid-headed and hollow-headed arrows in the formal cause diagram (10), a cycle that contains all the solid-headed arrows necessarily contains all the hollow-headed arrows as well. (This necessity follows when we only consider multigraphs. We shall discuss exceptions for which this does not follow when we consider pseudographs later on in this paper. The two graph-theoretic terms will be defined presently.) Indeed, the cycle must contain all the solid-headed arrows and hollow-headed arrows in an alternating sequence. A cycle containing all the arrows corresponds to the graph-theoretic concept of Eulerian circuit, with the topological property called traversability.

6 Interlude: Traversability

Although Rosen gained his insights into the character of complexity by using abstract algebraic representations, abstract algebra is not the only tool for reasoning about inherently noncomputable processes. A powerful alternative is topology, a nonmetric and nonquantitative discipline sometimes called “rubber-sheet geometry”. Its propositions hold as well for objects made of rubber, under deformations, as for the rigid figures from common metric geometry. It deals with fundamental geometric properties that are unaffected when one stretches, shrinks, twists, bends, or otherwise distorts an object’s size and shape (but without gluing or tearing). Another name for topology is analysis situs: analysis of position.

Topology deals with different problems and ideas in several different branches, including general (point-set) topology, differential topology, and algebraic topology. Note that algebraic topology is the inspiration of category theory, which is the mathematical language of Robert Rosen’s modeling relation. See, in particular, Chapter 3 of Rosen (1985a). Topology began with a paper on the puzzle of the Königsberg bridges by Leonhard Euler (1707–1783), entitled Solutio problematis ad geometriam situs pertinentis, which translates into English as “The solution of a problem relating to the geometry of position”. The title indicates that Euler was aware that he was dealing with a different type of geometry where distance was not relevant. The paper was presented to the Academy of Sciences at St. Petersburg in 1735. It appeared in the 1736 edition of Commentarii Academiae Scientiarum Imperialis Petropolitanae, although the volume was not actually published until 1741. An English translation of the paper is reprinted in Euler (1736).

Euler described the problem thus:

“The problem, which I understand is quite well known, is stated as follows: In the town of Königsberg in Prussia there is an island A, called “Kneiphof”, with the two branches of the river (Pregel) flowing around it, as shown in Fig. 1.

Fig. 1 The seven Königsberg bridges

There are seven bridges, a, b, c, d, e, f and g, crossing the two branches. The question is whether a person can plan a walk in such a way that he will cross each of these bridges once but not more than once. I was told that while some denied the possibility of doing this and others were in doubt, there were none who maintained that it was actually possible. On the basis of the above I formulated the following very general problem for myself: Given any configuration of the river and the branches into which it may divide, as well as any number of bridges, to determine whether or not it is possible to cross each bridge exactly once.”

The puzzle of the Königsberg bridges is a classic exercise in topology, a “network problem”. Network topology is more commonly known as graph theory. (The definitions and theorems in this section may be found in most introductory texts on graph theory. Note, however, that some terminology has not been standardized. Two good references are Trudeau (1993) and Gross and Yellen (1999).) Euler’s method of solution (recast in modern terminology here) was to replace the land areas by vertices, and the bridges by edges connecting these vertices (Fig. 2).

The resulting topological object is called a graph, in which only the configuration of the vertices and edges in terms of their relative ordering and connections is important, but not the distances. The problem of crossing the bridges then reduced to that of traversing the graph with a pencil without lifting it from the paper, in one continuous trace of the edges, passing along each edge exactly once. A graph for which this is possible is called traversable, and the continuous trace is now known as an Eulerian path. Stated otherwise, the geometric problem of crossing the Königsberg bridges now becomes a graph-theoretic one: to determine whether the graph in Fig. 2 is traversable. (What we have in Fig. 2 is more properly called a connected graph, in which there is a path, i.e., a continuous sequence of edges, from any vertex to any other vertex. But since we are only concerned with the topic of traversability here, we need only deal with connected graphs: a disconnected graph is clearly not traversable.)

Fig. 2 A graph of the Königsberg bridges

A (graph) edge may be considered as an unordered pair of vertices that are its end points. For example, in Fig. 2 the edge f is the unordered pair of vertices {B,D}. When the two end points are identical, i.e., when an edge joins a vertex to itself, the edge is called a (self-)loop. Figure 2 has no self-loops. Note that the presence of self-loops does not affect the continuity of a trace of the edges—a graph with self-loops is traversable if and only if the graph reduced from eliminating all the self-loops is traversable. Multiple edges are two or more edges joining the same two vertices in a graph. In Fig. 2, a and b are multiple edges, since they are both {A,B}. A simple graph is a graph that contains no self-loops and no multiple edges. A multigraph is a graph in which multiple edges, but no self-loops, are permitted. Euler’s graph of the Königsberg bridges in Fig. 2 is a multigraph. A pseudograph is a graph in which both self-loops and multiple edges are permitted.

The number of edges meeting at a vertex is called its degree. Note that a self-loop contributes 2 to the degree of its (only) vertex. A vertex is called odd or even according to its degree. Since an edge connects two vertices, it follows that the sum of the degrees over all vertices in a graph is even, whence a graph must have an even number of odd vertices. Euler discovered that if a graph has only even vertices, then it is traversable; also, the journey may begin at any vertex, and after the trace the journey ends at the same initial vertex. A closed (when final vertex = initial vertex) Eulerian path is called an Eulerian circuit. If the graph has exactly two odd vertices, then it is still traversable, but it is not possible to return to the starting point: the journey must begin at one odd vertex and end at the other. If the graph has more than two odd vertices, then it is not traversable. The general principle is that, for a positive integer n, if the graph contains 2n odd vertices, then it will require exactly n distinct journeys to traverse it.

In the graph of the Königsberg bridges (Fig. 2), all four vertices are odd; the graph is therefore not traversable. In other words, Euler provided a mathematical proof, as some of the Königsbergers had empirically verified, that a walk crossing each of the seven bridges exactly once was impossible.
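(Euler’s degree argument for Fig. 2 can be checked mechanically. In the illustrative sketch below, the labels a, b = {A,B} and f = {B,D} are those confirmed in the text; the remaining labels follow Euler’s original figure.)

```python
# Count vertex degrees of the Königsberg multigraph and apply Euler's criterion.
from collections import Counter

bridges = {                       # seven bridges as unordered pairs of land areas
    "a": ("A", "B"), "b": ("A", "B"),
    "c": ("A", "C"), "d": ("A", "C"),
    "e": ("A", "D"), "f": ("B", "D"), "g": ("C", "D"),
}

degree = Counter()
for u, v in bridges.values():
    degree[u] += 1
    degree[v] += 1

odd_vertices = [v for v, d in degree.items() if d % 2 == 1]
print(dict(degree))               # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(len(odd_vertices))          # 4 odd vertices: more than two, so the graph is not traversable
```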

In sum, Euler proved the following

Theorem 14

(a) A graph possesses an Eulerian circuit if and only if its vertices are all of even degree.

(b) A graph possesses an Eulerian path if and only if it has either zero or two vertices of odd degree.

A graph in which every edge has an associated direction (i.e., each edge has an initiating vertex and a terminating vertex) is called a directed graph, or digraph for short. Rosen’s relational diagrams in graph-theoretical form (e.g., diagram (6)) are digraphs. In a digraph, an edge may be considered as an ordered pair of vertices that are its end points. For example, in diagram (6), the hollow-headed arrow is a directed edge represented by the ordered pair of vertices (a,b). In a directed graph, the degree of a vertex has to be split into two entities. The number of edges terminating at a vertex (i.e., inwardly directed edges on a vertex) is the indegree of the vertex. The number of edges initiating from a vertex (i.e., outwardly directed edges on a vertex) is the outdegree of the vertex. Traversability for a digraph has an additional requirement: not only must the underlying (undirected) graph itself be traversable, but the Eulerian path must also follow the directions of the edges. Explicitly, we have

Theorem 15

(a) A directed graph possesses an Eulerian circuit if and only if the indegree of every vertex is equal to its outdegree.

(b) A directed graph possesses an Eulerian path if and only if the indegree of every vertex, with the possible exception of two vertices, is equal to its outdegree. For these two possibly exceptional vertices, the indegree of one is one larger than its outdegree, and the indegree of the other is one smaller than its outdegree.
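(Theorem 15 reduces to bookkeeping on in- and out-degrees. The illustrative sketch below, which presumes, as the text does, that the underlying graph is connected, classifies a digraph given as a list of directed edges; the same degree test is applied informally to the (M,R)-system digraphs in the sections that follow.)

```python
# Classify a connected digraph per Theorem 15 from its directed edge list.
from collections import Counter

def eulerian_status(edges):
    indeg, outdeg = Counter(), Counter()
    for u, v in edges:             # each edge runs from its initiating to its terminating vertex
        outdeg[u] += 1
        indeg[v] += 1
    vertices = set(indeg) | set(outdeg)
    diffs = [outdeg[v] - indeg[v] for v in vertices]
    if all(d == 0 for d in diffs):
        return "Eulerian circuit"                   # Theorem 15(a)
    if diffs.count(1) == 1 and diffs.count(-1) == 1 and all(abs(d) <= 1 for d in diffs):
        return "Eulerian path only"                 # Theorem 15(b)
    return "not traversable"

# A directed triangle (a toy example) satisfies condition (a) at every vertex.
print(eulerian_status([("u", "v"), ("v", "w"), ("w", "u")]))   # Eulerian circuit
```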

7 Organisms and (M,R)-Systems

In Chapter 1 of Rosen (2000), after explaining that a living system must have nonsimulable models, hence must be complex, Rosen added:

“To be sure, what I have been describing are necessary conditions, not sufficient ones, for a material system to be an organism. That is, they really pertain to what is not an organism, to what life is not. Sufficient conditions are harder; indeed, perhaps there are none. If so, biology itself is more comprehensive than we presently know.”

Stated otherwise, a necessary condition for a natural system to be an organism is that it has an impredicative cycle of causal entailment. An organism must be complex; a complex system may (or may not) be an organism. Rosen in the final, concluding Section 11H of Life Itself (Rosen 1991) wrote:

“But complexity, though I suggest it is the habitat of life, is not itself life. Something else is needed to characterize what is alive from what is complex.”

Rosen devised a class of relational models of organisms called (M,R)-systems. The comprehensive reference is Rosen (1972). Rosen also discussed them in Section 3.5 of Rosen (1985a), Sections IV and V of Rosen (1985b), Section 10C of Rosen (1991), and Chapter 17 of Rosen (2000). Two recent papers on their noncomputability and realizations are Louie (2005) and Louie (2006). The reader may refer to any or all of the above for their details, which need not be repeated here.

For our present purpose, we only need to know that an (M,R)-system is an example of a formal model of a natural system that is closed to efficient causation. In other words, all its efficient causes are contained in an impredicative cycle of inferential entailment. We shall describe this impredicative cycle presently. By Theorem 13, an (M,R)-system is therefore a model of an organism. By Theorem 9, it is nonsimulable. This means that any natural system that realizes an (M,R)-system must be complex. Note, however, the implication (hence inclusion) only goes one way; the converse is not true. An (M,R)-system has a very special relational organization: in particular it is closed to efficient causation, having all its efficient causes in a cycle. A complex system is only required to contain a cycle containing some of its efficient causes: there may be efficient causes that are not part of the cycle. So not every nonsimulable formal system is an (M,R)-system, whence not every complex system has an (M,R)-system model.

Rosen’s idea behind (M,R)-systems was, as he explained in Chapter 17 of Rosen (2000), “to characterize the minimal organization a material system would have to manifest or realize to justify calling it a cell”. Whence Rosen defined a cell thus:

Definition 3

A cell is (at least) a material structure that realizes an (M,R)-system.

The word “cell” is used here in the sense of “autonomous life form”, i.e. “organism”. For example, in Section 10C of Rosen (1991) we find (The “graph” is the arrow diagram [10C.6] of an (M,R)-system, reproduced in this paper below as (27).):

“Any material system possessing such a graph as a relational model (i.e., which realizes that graph) is accordingly an organism. From our present perspective, we can see that [10C.6] is not the only graph that satisfies our conditions regarding entailments; there are many others. A material realization of any of them would likewise, to that extent, constitute an organism.”

Definition 3 thus says that “having an (M,R)-system as a model” is a necessary condition for a natural system to be an autonomous life form. Rosen, for emphasis, added the adjectival phrase “under the condition that at least one of the appropriate inverse evaluation maps exists” to his description of an (M,R)-system in his original definition in Chapter 17 of Rosen (2000). The requisite inverse evaluation map is what completes the impredicative cycle in Rosen’s standard construction of an (M,R)-system. We shall have much more to say about this map and other means of closure (cf. “there are many others” in the quote from Rosen (1991) above) in the remainder of this paper.

Immediately after Definition 3, in the same paragraph in Chapter 17 of Rosen (2000) in fact, Rosen added:

“Making a cell means constructing such a realization. Conversely, I see no grounds for refusing to call such a realization an autonomous life form, whatever its material basis may be.”

The converse statement provides the sufficiency. So we may define “organism”, meaning any “living system”, as:

Definition 4

A natural system is an organism if and only if it realizes an (M,R)-system.

This definition is consistent with “The Answer” to “What is life?” discussed above.

Note that Definition 4 is not a contradiction to the quote in the beginning of this section that there may not be sufficient conditions that characterize life. Rosen has established the necessity; he chose to state the sufficient condition in his definition of life. “Having an (M,R)-system as a model” is the necessary and sufficient condition for a natural system to be an autonomous life form, on a relational level, even if one may not readily recognize the natural system as “alive” on the material level.

An (M,R)-system is the most rudimentary model of the cell, and thus serves as a foundation for theoretical biology. The reason that this is relevant to readers of this journal is that a similar structure (a closed path of efficient causation) is found in the communications paths of the brain, and the algebraic-topological representation discussed below may just as easily serve as a foundation for a theory of cognition that goes “beyond computation”. Detailed discussions of this analogy may be found in Kercel (2004a, b).

The simplest (M,R)-system consists of three mappings, metabolism, repair, and replication, in a hierarchical cycle. Metabolism is a morphism \(f\in H(A,B)\) with

$$ f:a\mapsto b $$
(16)

whence relational diagram (17) and entailment diagram

$$ f \vdash b $$
(18)

Repair is a morphism \(\Upphi \in H(B,H(A,B))\). Since its codomain is H(A,B), it may be considered as a function that creates new copies of f, hence a genetic component that “repairs” the metabolism function. Its element-chasing, relational, and entailment diagrams are, respectively,

$$ \Upphi:b\mapsto f $$
(19)
$$ \Upphi \vdash f $$
(21)

Replication is a morphism \(\beta \in H(H(A,B),H(B,H(A,B)))\) with \(\beta:f\mapsto \Upphi \). With codomain \(H(B,H(A,B))\), the replication map serves to repair the repair components themselves. Note that “replication” in the metabolism-repair model is strictly limited to the notion of a function in the model that makes copies of the repair process. Replication is not a description of how the entire cell makes copies of itself. The apparent infinite sequence of maps that may arise with the iteration of “repair the repair” is truncated by a mathematical argument that turns it into a hierarchical cycle of maps instead. Using an “inverse evaluation map”, a correspondence is established between \(H(H(A,B),H(B,H(A,B)))\) and B, whence \(\beta:f\mapsto \Upphi \) may be replaced by the isomorphic

$$ b:f\mapsto \Upphi $$
(22)

with relational diagram (23) and entailment diagram

$$ b \vdash \Upphi $$
(24)

The cyclic entailment pattern when we combine lines (24), (21), and (18) is the impredicative, hierarchical cycle of the (M,R)-system. The three maps \(\left\{ { b,\Upphi ,f} \right\}\) of replication, repair, and metabolism entail one another in a cyclic permutation. See Louie (2005) for a more detailed exposition of this cyclic entailment. The three maps form the trivial traversable simple digraph (25). The Eulerian circuit may begin at any of the three vertices: (①, ②, ③), (②, ③, ①), or (③, ①, ②).

In terms of relational diagrams, the combination of (23), (20), and (17) results in diagram (26), which is the element-chasing version of Rosen’s diagram [10C.6] in Rosen (1991), reproduced here as (27).

Louie (2006) suggested an alternate depiction of the digraph (26) of the simplest (M,R)-system. The geometry was changed to enclose the repair map Φ within it. This gave a graphic representation of the metabolism component as the abstract equivalent of “cytoplasm” and the genetic component as the abstract counterpart of “nucleus”. After we additionally label the arrows, digraph (26) becomes the equivalent digraph (28). Both vertices a and Φ have indegree 1 and outdegree 1, while vertices b and f have indegree 2 and outdegree 2. Thus by Theorem 15(a), the digraph has an Eulerian circuit, which in our context is precisely the hierarchical cycle containing all the solid-headed arrows, whence rendering the (M,R)-system complex. The closed path may begin at any vertex; the Eulerian circuits are (1,2,3,4,5,6), (3,4,5,6,1,2) and (5,6,1,2,3,4), corresponding respectively to the Eulerian circuits (①, ②, ③), (②, ③, ①), and (③, ①, ②) in digraph (25).

Note that these three Eulerian circuits of digraph (28) respect the solid-headed-arrow–hollow-headed-arrow ordered-pairing of each morphism: the solid-headed arrows and hollow-headed arrows are in an alternating sequence. In strictly graph-theoretic terms, circuits such as (3,1,5,6,4,2) are Eulerian as well; such “out-of-phase” circuits, however, have no corresponding Eulerian circuits in the entailment digraph (25), and hence do not represent the hierarchical cycle. In particular, the fact that the Eulerian circuit (3,1,5,6,4,2) happens to “segregate” the solid-headed arrows and the hollow-headed arrows is a graph-theoretic coincidence that has no entailment implications.
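(The in- and out-degrees just quoted for digraph (28) can be checked directly against the condition of Theorem 15(a); since the diagram itself is a figure, the toy verification below uses only the degree counts stated in the text.)

```python
# Theorem 15(a) check for digraph (28): every vertex must have indegree = outdegree.
degrees_28 = {                     # vertex: (indegree, outdegree), as stated above
    "a": (1, 1), "Phi": (1, 1), "b": (2, 2), "f": (2, 2),
}
print(all(i == o for i, o in degrees_28.values()))   # True, so an Eulerian circuit exists
```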

8 Impredicativity

The simple digraph (25) has as a “dual” representation its element-chasing version (29), which is essentially a combination of the lines (16), (19), and (22). From this diagram, we see that

$$ \Upphi \left({f\left({b(f)} \right)} \right)=f, $$
(30)

which says that

$$ \Upphi \circ f\circ b=1_{H(A,B)}, $$
(31)

the identity morphism in \(H\left({H(A,B),H(A,B)} \right)\). Similarly, we have

$$ b\circ \Upphi \circ f=1_{H\left({B,H(A,B)} \right)} $$
(32)

and

$$ f\circ b\circ \Upphi =1_B. $$
(33)

Hidden in these innocent-looking identity morphisms is the impredicative structure of the maps in an (M,R)-system. For example, equation (33) is the statement

$$ f\left({b\left({\Upphi (b)} \right)} \right)=b\in B, $$
(34)

as well as the statement

$$ f\left({b\left({\Upphi (b)} \right)} \right)=\beta \in H\left({H(A,B),H\left({B,H(A,B)} \right)} \right). $$
(35)

Thus there is an entailed ambiguity between b and β: two nonidentical yet noncontradictory outcomes that are not algorithmically derivable from each other. Since algorithms are, by definition, forbidden to include ambiguous steps or data, this ambiguous entailment demonstrates the inherent noncomputability of (M,R)-systems. The resolution “b = β”, the correspondence between B and \(H(H(A,B),H(B,H(A,B)))\), is something that cannot be derived from syntax alone. One needs to know something about the maps involved, i.e., the semantics, to reach this identification. (The semantics here is the requirement that certain maps be monomorphic. See the aforementioned references on (M,R)-systems for details. We shall also discuss this semantics below in a more general setting in the language of algebraic topology.)

The impredicative entailment structure of an (M,R)-system is summarized succinctly in the graph of helical hierarchy, diagram (36). The helix has three hierarchical levels per turn in an apparently ever-increasing infinite sequence. But there are in fact only three distinct maps: the three maps \(\left\{ { b,\Upphi ,f} \right\}\) of replication, repair, and metabolism with their entailment pattern in cyclic permutation, shown in diagram (36) as the bottom circle that is the vertical projection of the helix. The bottom circle is, of course, simply the element-chasing digraph (29).

The infinity-to-three hierarchical projection may be described in terms of a sequence of algebraic-topological hom-sets \(\left\{ H_n \right\}\). Define

$$ H_0 =B, $$
(37)
$$ H_1 =H(A,B), $$
(38)

and

$$ H_n =H\left({H_{n-2}, H_{n-1} } \right) \hbox{ for } n\ge 2. $$
(39)

The infinite sequence \(\left\{ {H_n } \right\}\) reduces to three members \(\left\{ {H_0, H_1, H_2 } \right\}\) if we have

$$ H_n =H_{n-3} \hbox{ for } n\ge 3. $$
(40)

This semantic correspondence closes the hierarchical cycle, hence rendering it noncomputable.
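Spelling out the first few terms makes the three-fold collapse explicit. Under the definitions (37)–(39),

$$ H_2 =H\left({H_0, H_1 } \right)=H\left({B,H(A,B)} \right), \qquad H_3 =H\left({H_1, H_2 } \right)=H\left({H(A,B),H\left({B,H(A,B)} \right)} \right), $$

and the identification (40) then gives

$$ H_3 =H_0 =B, \qquad H_4 =H\left({H_2, H_3 } \right)=H_1 =H(A,B), \qquad H_5 =H_2, \;\ldots $$

Thus metabolism f lives in \(H_1 \), repair \(\Upphi \) in \(H_2 \), and replication in \(H_3 \), which (40) identifies with \(H_0 =B\): precisely the \(b=\beta \) identification discussed above.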

The requisite (40) may be established analogously to the standard \(\beta =\hat{b}^{-1}\) identification. An \(x\in H_{n-3} \) defines an evaluation map \(\hat{x}\in H\left({H_{n-1}, H_{n-2}} \right)\) by

$$ \hat{x}(z)=z(x)\in H_{n-2} \quad\hbox{for}\quad z\in H_{n-1}. $$
(41)

If we impose the semantic requirement that \(\hat{x}\) be monomorphic, then the existence of the inverse map \(\hat{x}^{-1}\in H\left({H_{n-2}, H_{n-1} } \right)=H_n \) is entailed, whence

$$ x\mapsto \hat{x}^{-1} $$
(42)

is the embedding that allows \(x\in H_{n-3} \) to be interpreted as a map \(x=\hat{x}^{-1}\in H_n \). Again, the identification \(x=\hat{x}^{-1}\) is something that can only be reached through semantics and not syntax alone.

Analogous to the identity morphisms in (31), (32), and (33), we now have, for \(x\in H_{n-3} \), \(y\in H_{n-2} \), and \(z\in H_{n-1} \),

$$ z\circ y\circ x=1_{H_{n-2} }, $$
(43)
$$ x\circ z\circ y=1_{H_{n-1} }, $$
(44)

and

$$ y\circ x\circ z=1_{H_{n-3} }. $$
(45)

9 Alternate Encodings of the Replication Component in (M,R)-Systems

Rosen’s theory of the metabolism-repair representation of living cells, i.e., (M,R)-systems, rests on the capacity of the M and R to synthesize the replication function from what was already available within M and R. Rosen demonstrated this capacity with his classic “inverse evaluation map” encoding. However, he noted that there are many other encodings that lead to replication from the interaction of M and R. Knowing what some of those other ways may be is useful, since it provides further insight into what properties are allowed, required or forbidden in the metabolism-repair paradigm.

Louie (2006) gave two additional encodings of (M,R)-systems with replication represented by morphisms that are not the classic “inverse evaluation map”. The reader is referred there for the algebraic details and biological interpretations. Here we shall study these alternate descriptions in graph-theoretic terms. A graph-theoretic analysis of traversability illustrates succinctly the underlying topological structure of impredicative hierarchical cycles.

When replication is a “conjugate isomorphism”, its depiction is

$$ b:b\mapsto \Upphi $$
(46)

with relational diagram (47). Comparing (46) with (22) and (47) with (23), we see that only the material cause of the replication morphism is changed. Thus it has the same entailment diagram as (24)

$$ b \vdash \Upphi $$
(48)

whence the same entailment digraph (25). The relational diagram for this alternate (M,R)-system is digraph (49). Note the distinction between digraphs (28) and (49): (28) is a multigraph, while (49) has a self-loop 1=(b,b) and is therefore a pseudograph. In digraph (49), vertices a, f, and Φ have indegree 1 and outdegree 1, while vertex b has indegree 3 and outdegree 3. Thus, again by Theorem 15(a), the digraph has an Eulerian circuit, which in our context is precisely the hierarchical cycle containing all the solid-headed arrows, whence also rendering this (M,R)-system complex.

The closed path may begin at any vertex; the Eulerian circuits are (1,2,3,4,5,6), (3,4,5,6,1,2) and (5,6,1,2,3,4), corresponding respectively to the Eulerian circuits (①, ②, ③), (②, ③, ①), and (③, ①, ②) in digraph (25). These three Eulerian circuits of digraph (49) respect the solid-headed-arrow–hollow-headed-arrow ordered-pairing of each morphism: the solid-headed arrows and hollow-headed arrows are in an alternating sequence. All these conclusions on Eulerian circuits are identical between digraphs (28) and (49).

When replication is a “similarity class”, its depiction is

$$ a:a\mapsto \Upphi $$
(50)

with relational diagram (51). Comparing (50) with (22) and (51) with (23), we see that this time both the material cause and the efficient cause of the replication morphism are changed. (The formal cause, the categorical structure of a morphism, cannot change; neither can the final cause: replication has to entail Φ.) The entailment diagram is

$$ a \vdash \Upphi $$
(52)

and the corresponding relational diagram is digraph (53). Because of the presence of the self-loop 1 = (a,a), digraph (53) is a pseudograph. Vertices f and Φ have indegree 1 and outdegree 1, vertex b has indegree 2 and outdegree 1, and vertex a has indegree 2 and outdegree 3. This time we have to invoke Theorem 15(b) and conclude that the digraph has an Eulerian path that begins at vertex a and ends at vertex b, but no Eulerian circuit. The Eulerian path {1,2,3,4,5,6}, however, still respects the ordering of the solid-headed arrows and hollow-headed arrows in an alternating sequence.

The absence of an Eulerian circuit in this (M,R)-system does not, however, mean it is non-complex. Note that the requirement in Theorem 12 (and Theorem 13) is only that there is a cycle that contains all the solid-headed arrows, but not necessarily all the arrows, solid-headed and hollow-headed. In our two previous (M,R)-systems the cycles also happen to contain all the arrows. That this is the case for digraph (28) is a consequence of its being a multigraph. When the relational diagram is a pseudograph (i.e., has self-loops), the cycle that contains all the solid-headed arrows may or may not be an Eulerian circuit. For digraph (49) it happens to be. For digraph (53), its Eulerian path {1,2,3,4,5,6} is not a circuit, but it does include the cycle (1,2,3,4,5) that contains all the solid-headed arrows. This is the requisite hierarchical cycle that renders this third (M,R)-system complex.
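(The degree counts quoted above for digraphs (49) and (53) can likewise be checked against Theorem 15; the toy verification below, using only the degrees stated in the text, reproduces the circuit-versus-path distinction.)

```python
# Theorem 15 degree test applied to the stated in/out-degrees of digraphs (49) and (53).
def status(degrees):
    diffs = [out - ind for ind, out in degrees.values()]
    if all(d == 0 for d in diffs):
        return "Eulerian circuit"                  # Theorem 15(a)
    if diffs.count(1) == 1 and diffs.count(-1) == 1 and all(abs(d) <= 1 for d in diffs):
        return "Eulerian path only"                # Theorem 15(b)
    return "not traversable"

degrees_49 = {"a": (1, 1), "f": (1, 1), "Phi": (1, 1), "b": (3, 3)}
degrees_53 = {"f": (1, 1), "Phi": (1, 1), "b": (2, 1), "a": (2, 3)}
print(status(degrees_49))   # Eulerian circuit
print(status(degrees_53))   # Eulerian path only (from vertex a to vertex b)
```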

10 Postlude: Intuition is not Computable

In this paper we have demonstrated several new instances of Rosen’s claim that there are many encodings that give replication “for free” from metabolism and repair. The capacity for discovering such encodings has implications beyond cell biology because it provides a technique for showing how impredicatively structured relationships (such as those observed in organism/life and brain/mind processes) can arise. We have also shown an approach to facilitate the intuitive discovery of other encodings, with the graph-theoretic representation serving as a way to test the properties of whatever novel encodings might be discovered.

The reason we call the process “intuitive” is to emphasize that the strategy reported in this paper is not reducible, even in principle, to computation. As Rosen was at pains to point out, given a complex representation of a complex process in the modeling relation, there is no algorithm for using the representation to gain novel insight about the process being modeled. Every such insight is discovered by a flight of creativity. This is the aspect of the process that is intuitive. It essentially amounts to discovering a new hypothesis that resolves a seeming paradox, and doing so by reasoning about the ambiguities that led to the seeming paradox. This kind of reasoning goes beyond inductive logic (reasoning from propositions about particulars to propositions about generalities); semiotic theorists call it abductive reasoning. The abstract representations in the formal system in a modeling relation facilitate abduction. They help us to see seeming paradoxes, and help us to test whether or not new hypotheses resolve them. However, they do not provide a technique for discovering the new hypotheses; that is the exclusive province of cognition.

It is worth remembering why these encodings into algebraic-topological metabolism-repair models are important. Each represents a set of constraints on the relationship between metabolism and repair that would close the loop of efficient causes “for free” (i.e., without the necessity of an external influence to force the closure). It is likely that many such non-isomorphic constraints (i.e., many more than those already discovered by Rosen, Louie, and others) exist in different organisms. It is easy to envision that one task of the theoretical biologist would be to identify (perhaps by using the strategy described in this paper) the encodings that describe the constraints in particular kinds of organism. Once having done so, one can ask questions about the resulting impredicative structure to gain novel insights about the organism.

The point is that hierarchic loops of containment of causation abound in organism/life and brain/mind processes. The strategy of discovering new encodings between the maps in the impredicative algebraic structures that represent those loops, and studying their implications topologically, could facilitate the gaining of a whole new class of insights into life and cognition. By familiarizing biologists and cognitive scientists with Rosen’s work, we have provided them with a non-computational mathematical strategy to study the wide range of biological and cognitive processes that are not describable by computational simulations.