Introduction

… when you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind…

Kelvin (1883)

Complexity Theory is a big issue for philosophers and biologists: first, because it is a difficult topic extending across many disciplines, and second, because there seems to be a widespread perception or assumption that things are just getting more complicated all the time. Biologists are struggling to see how life fits into the wider question and what might be driving evolution towards ever increasing states of complexity. One recent publication, Complexity and the Arrow of Time (Cambridge University Press, Cambridge), edited by Lineweaver et al. (2013), tackles these difficult questions head on. The book arises from a Templeton Foundation Symposium, ‘Is there a general principle of increasing complexity?’, held at Arizona State University in 2010. It brings together contributions from 14 scholars spanning topics from the Big Bang to contemporary religious thought, including Chapters 7–11 in Part III on ‘Biological complexity, evolution, and information’. The editors do not claim to have resolved all the leading problems, but they do claim to have made the questions more explicit. What these questions are, and how effectively they might have been clarified, are the subjects of this essay.

Before one can begin any such evaluation, one must first ask what is complex and what is not, and then try to come to grips with the idea of Complexity Theory as an explanatory mechanism. It is perhaps self-evident that the world itself and all the things that live upon it are complex (Lloyd 2013, Chapter 5, p. 80). But what exactly does it mean to make this statement? For the author of this essay, a thing which is complex is simply one that defies understanding and/or simple description at first inspection. Thus, a goldfish is complex in a way that a ball bearing is not. I also have to admit that this simplistic, subjective definition in no way meets the exacting quantitative requirements of Lord Kelvin (1883) in the quotation above. One needs to be able both to define and to measure complexity if one is to say that X is more complex than Y, or that X has become more complex in the time interval t to t + 1. This is a key issue, which several contributors examine in depth and upon which I will expand in the following section. Another issue is whether we have the tools to examine such matters. Again, several contributors think that the standard reductionist approach falls short, and I agree with them. For instance, Kauffman (2013, Chapter 8, p. 163) draws on the example of balls on a billiard table. Their behaviour is deterministic and entirely predictable from Newtonian mechanics. But what happens if there are no cushions and the shape (and even topology) of the table changes from minute to minute? What then is predictable? Substitute organisms for balls and the environment for the table, and the case is made, in a biological sense at least. Can other methods, including Complexity Theory, fill the gap? This debate is well covered in the book, but I think the answers remain to be seen in terms of concrete outputs, particularly for matters biological. I will return to this evaluation later in the section on biological aspects of complexity.

General issues

What is complexity? Even this is not a simple question. Conway Morris (2013, Chapter 7, p. 136) introduces readers to no fewer than four types of complexity: (1) static—based on unchanging patterns, (2) dynamic—where the patterns change over time, (3) evolutionary—where the change is directional and (4) self-organising. The scheme, attributed to Lucas (2014), is helpful, but seems to have an element of circularity to it, as the property of self-organisation is one way to create pattern (simple or complex) out of uniformity or chaos.

I prefer to work with models. Consider once again the case of the ball bearing and the goldfish. The latter has apparent complexity at any level of inspection. However, if we choose to scan the surface of the ball bearing under an electron microscope, then an exciting planetary landscape may be revealed. Thus, it may be said to have intrinsic complexity. Next, compare your daughter’s pet miniature rabbit with the cuddly toy she keeps in her bedroom. They may not seem very different from one another when just sitting together on the sofa, but throw in the incentive of a few carrots and only the real one will respond. In terms of behavioural repertoire, the live rabbit may be said to have potential complexity; i.e. a property displayed only under changed circumstances. Hence, the behaviour of the observer and the level of analytical description can colour what is deemed to be complex or otherwise, and it follows that the property of ‘complexity’ itself may be a relative matter.

Several contributors to Lineweaver et al.’s (2013) book comment on the way that biologists see complexity (e.g. Conway Morris and Kauffman in Chapters 7 and 8, respectively). The consensus seems to be that some sort of proxy is required. What is less clear is just what that proxy ought to be: number of cell types, extent of metabolic capability, genome size, number of unique genes or control elements? All of these have value in the sense that they are objective and to some extent measurable, but their magnitudes may not always map on to intuitive ideas about what is most complex. For instance, the genome size proxy may be very misleading due to polyploidy, gene duplication, proliferation of selfish DNA transposable elements etc., leading to the well-known ‘C-value enigma’ (see Gregory 2001), as discussed by Lineweaver et al. (2013, Chapter 1, p. 4). Equally, intuition may not be a good guide to comparative complexity either. There is a widespread prejudice that the human brain is the most complex thing ever to have evolved on the planet. This is surely anthropocentrism based on the perceived value of the outputs of its deliberations. It is unclear just how well the human brain would stand up in terms of proxy measures: volume, numbers of neurons, diversity of neuronal types, neurotransmitter repertoire, number and variety of synaptic connections etc.

How, then, are we to understand such phenomena in a general sense? I will argue later that complexity arises via the interaction of simple elements, even from a limited set of interactions between quite small numbers of simple elements. The standard reductionist programme is poorly equipped to deal with interacting systems. Even extensions such as analysis of variance, multivariate statistics and network analysis only take us so far. Other tools, such as Markov Chain Monte Carlo methods and Chaos Theory, may take us further, but are often hard to apply. Hence, a whole new field called Complexity Theory has been developed to fill the gap. This approach may be defined as follows:

Complexity Theory states that critically interacting components self-organise to form potentially evolving structures exhibiting a hierarchy of emergent system properties.

Lucas (2014)

It takes the view that systems are best regarded as wholes, rejecting simplification and reductionism as inadequate. It recognises that strongly interconnected interacting systems are inherently non-linear. They are also probabilistic and exquisitely sensitive to initial conditions (i.e. chaotic). Several contributors examine how Complexity Theory may aid understanding of cosmology, abiotic synthesis and biological evolution (e.g. Smith, Chapter 9). The concept of emergent properties is especially important. These are features that are found to arise from the system acting as a whole and which may not be readily inferred from examination of the parts and their individual properties. Hence, one may look at a brain in an anatomist’s jar for as long as one may care to do so. One may even dissect it and examine it under a microscope aided by chemical stains or fluorescent DNA probes. But one will never be able to guess from the data produced that this object could design the Eiffel Tower or put a man on the Moon. Indeed, after Albert Einstein’s brain had been purloined post mortem, scientists used many of these techniques, but were still unable to locate the Theory of General Relativity. Rather, it is better in my view to say that this intellectual product arose because of what Einstein was physiologically and psychologically, where he was and when he was. Thus, it is an emergent property of interacting systems involving the organism (young Albert Einstein) and its environment (early 20th century Bern). In other words, its birth is complex.
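The sensitivity to initial conditions mentioned above is easy to demonstrate in miniature. The sketch below is my own illustration, not an example from the book: it iterates the logistic map, x → rx(1 − x), one of the simplest non-linear update rules, for two starting values that differ by only one part in a million. Within a few dozen steps the two trajectories bear no resemblance to one another.

```python
# Minimal illustration of sensitivity to initial conditions (chaos).
# The logistic map x -> r*x*(1 - x) with r = 4 is fully deterministic,
# yet two trajectories starting one part in a million apart diverge
# completely within a few dozen iterations.

def logistic_trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # initial difference: one part in a million

for t in (0, 10, 20, 30, 40):
    print(f"step {t:2d}: |a - b| = {abs(a[t] - b[t]):.6f}")
```

Nothing in the rule is random, yet long-range prediction fails: exactly the predicament of Kauffman’s billiard balls once the table starts changing shape.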

Physics and cosmology

The role of Complexity Theory in elucidating the origin and evolution of our Universe is the focus of Part II (Chapters 2–6) in Lineweaver et al. (2013). These questions seem well suited to this new tool, introduced earlier in the book (Lineweaver et al. 2013, Chapter 3). The authors paint a picture of the wild mixture of galaxies and galactic clusters that have emerged from a small, relatively uniform, but highly energetic, source at the Big Bang. Each galaxy contains a central gravitational attractor called a Black Hole, which consumes matter around it and is itself destined, over unimaginable spans of time, to evaporate away as the faint form of energy known as Hawking Radiation. Ironically, the Universe as a whole is expanding at an accelerating rate, with the individual galaxies and clusters moving ever further apart from one another. It is almost as if the energy (in the form of matter) consumed by the Black Holes is fuelling this process in some way, although nobody seems to have gone as far as to make this claim explicitly. The behaviour of this evolutionary process is governed by sets of laws acting at different levels of scale. Quantum Mechanics takes care of the very small, Relativity takes care of the very large, and Newtonian Mechanics looks after the everyday stuff in the middle and, conveniently, works on a human scale. The first set of these laws is probabilistic and the other two deterministic. It is clear where the upper and lower bounds of applicability lie for Newtonian Mechanics, and thus it may be said to be theoretically compatible with the other two. The big contemporary challenge is to make Quantum Mechanics fit with Relativity. It does not quite end there either, because one still needs to account for the asymmetrical dominance of matter over antimatter, and to explain those mysterious phenomena known as Dark Matter and Dark Energy. Hence, cosmologists are trying to construct theories to explain something (i.e. the Universe) of which perhaps only 15 % is observable by present means.

How, then, has Complexity Theory helped to make sense of all this? Lloyd (2013, Chapter 5) presents and develops the Turing Machine ideas advanced more than a decade ago by Adami et al. (2000) and Adami (2002). Central to this contribution is the idea that systems can be self-organising, so that apparently ordered, or at least non-random, products can emerge from apparent uniformity. According to the commentators in Part II this is exactly what happened 377,000 years after the Big Bang, when a hot, more or less homogeneous, soup of energy and fundamental particles began to develop clumps that gave rise to the everlasting glory of the stars we see at night. Probability is at work here. In any such seemingly uniform, but dynamic, mix there will be moments when particles collide or come close enough together to fuse. In turn, these may become nuclei for accretion and “Bingo!” we have emergent galaxies etc. Thus, if we were to replay the tape of time, then we would likely get something very similar to what we see today in terms of White Giants, Pulsars and Red Dwarfs etc. It is just that all the star stuff would be distributed in different places. Support for this view comes from the search for exoplanets via projects such as the NASA Kepler Telescope (www.space.com). Hundreds of such bodies are now discovered every year, indicating that there is something in the way that stars form that produces planets at the same time. These many independent observations lead to the strong inference of a general underlying causative process. Further, since some of these planets are likely to have an atmosphere, be earth-sized and be located in the so-called ‘Goldilocks Zone’ (i.e. at a habitable distance from their home star), then by the well-known argument from large numbers, alien life most probably exists on one or more of them; see Petigura et al. (2013), who estimate that the number of such potentially habitable planets in our galaxy alone exceeds 11 billion. Thinking this through further leads to the seemingly inescapable conclusion that life itself is an inevitable product of the Laws of Physics!
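The argument from large numbers can be made explicit with a little arithmetic. The sketch below is my own back-of-the-envelope illustration: only the count of roughly 11 billion planets is taken from Petigura et al. (2013); the per-planet probabilities of life arising are purely assumed for the sake of the example. With N candidate planets and per-planet probability p, the chance that at least one hosts life is 1 − (1 − p)^N, which climbs rapidly towards certainty unless p is made vanishingly small.

```python
import math

# Back-of-the-envelope version of the argument from large numbers.
# Only N comes from Petigura et al. (2013); each value of p below is an
# illustrative assumption, not an estimate from any source.
N = 11_000_000_000   # ~11 billion potentially habitable planets in our galaxy

for p in (1e-6, 1e-9, 1e-12):
    # P(at least one planet with life) = 1 - (1 - p)^N,
    # computed stably with log1p/expm1 for tiny p and huge N.
    p_any = -math.expm1(N * math.log1p(-p))
    print(f"p = {p:.0e}  ->  P(life somewhere) = {p_any:.4f}")
```

Even at one chance in a billion per planet the result is effectively certainty; only by assuming odds far longer than that does the conclusion weaken, which is the intuition behind the ‘inevitable product of the Laws of Physics’ claim.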

The newly organised contents of the still expanding Universe represent a low entropy state, and the Second Law of Thermodynamics requires that achieving this takes energy input. Davies (2013, Chapter 2, p. 27) presents a disturbing model of a ‘hot earth in a box’ to demonstrate this point. Raising the temperature of the box to 4,000 K is required to convert the contents to a plasma formed of various elements. One needs to go as high as ten billion degrees to convert the nuclei into a pre-cosmic soup of fundamental particles. The point about entropy is well made by this model, and yet there is something unsatisfying about it. What would happen if the hot earth were to be cooled down again? Would all the beautiful butterflies come back? Native intuition might lead one to think that the result would be just a big black cinder! However, the consensus of received wisdom is that everything I can see looking out of my window into the garden came about in exactly this way. Clayton (2013, Chapter 14, p. 332) expresses much the same unease by quoting Tom Stoppard on the subject of stirring jam into rice pudding. No matter how hard or how long you stir the other way, you will never recover a single blob of jam.

Biological evolution

There is no doubt that life arose on Earth from abiotic sources and has evolved following well-known trends of increasing size, structural complexity and information content—“Once there were bacteria, now there is New York”. Questions surrounding the origin of life link Part II of Lineweaver et al. (2013) to Part III on Biological complexity, which further asks if there is some sort of in-built drive to increase the complexity of living organisms and if such a phenomenon is captured by current theories of biological evolution. These issues are considered at the levels of the cell, the whole organism and the ecosystem. In the next section, I will argue that just such a complexity drive does exist as a de facto property of organisms, simply because they contain information systems with hierarchical expression control elements.

Constructing a plausible story for how self-replicating life arose from an abiotic chemical soup is one of the most difficult scientific tasks that one can imagine. Nonetheless, considerable effort is devoted to this cause. Complexity Theory has clear application here, as the process must involve self-organisation in some fashion. All commentators in Lineweaver et al. (2013) seem to be in accord on this point. Conway Morris (2013, Chapter 7, p. 142) is less certain that real understanding has emerged with regard to elegant biological structures like sunflower seed heads and calls self-organisation “the ghost at the Darwinian banquet”.

As mentioned earlier, proxies are employed to measure biological complexity, but each of these has problems of one sort or another. Take body size as an elementary example. There is little doubt in my mind that my pet tortoiseshell cat, Bridie, is more complex than an individual influenza virus particle. But is a blue whale more complex than a brontosaurus? For that matter, is either one of them more complex than a house mouse? To decide on such comparisons one is driven to employ another proxy. Smith (2013, Chapter 9) explores the novel idea of considering networks of metabolic capability as a proxy. This is an interesting concept because it extends to considering the flux of carbon atoms through higher order systems as a proxy for ecosystem complexity. This author also invokes the idea of a minimum core network as being essential to life. Others support this view by pointing out that apparently novel, and recently derived, functions have very ancient origins. Examples quoted by Conway Morris (2013, Chapter 7, pp. 143–147) include the Pax-6 gene (vertebrate eye development) and the SNARE genes (a key role in multicellularity and cellular complexity). True, these genes did not always have these same functions in the past. Nature has acted like a tinker and adapted them to their modern roles.

To my mind one proxy stands above the rest, but with a caveat. This proxy is genome size, which reflects the sophistication of information accumulated over deep time and stored in chemical form in the nucleus of every cell. Programmed sequential expression of parts of this information gives the cell its form and function. It also governs its relationships to other cells. However, size alone is not everything, because genomes may expand for trivial or redundant reasons, as discussed earlier in the Introduction to this essay. Thus, both dogs and mice have genome sizes similar to that of humans, yet both have many more olfactory receptor genes than humans do. Hence, they may well be more complex in the smelling department, but are they more complex organisms overall? So the genome size proxy needs to be expanded to include some index of functional repertoire. I note in passing that the loss of duplicated genes as polyploid genomes resolve to become diploid represents one of the few genuine examples of decreasing complexity in biological systems—depending on how one chooses to value genetic redundancy.

If biological complexity really has been increasing, then one may ask whether it has any limits and, if so, whether it has reached them. There are good reasons to think that there may well be such limits. There has only been one Cambrian Explosion. Since then diversity has increased within existing phyla, but no really revolutionary new forms or body plans have appeared. Conway Morris (2013, Chapter 7) claims that his experience of convergent phenomena in biology leads him to believe that all viable regions of biological hyperspace have already been explored and exploited. It is difficult to know quite how one would test this claim short of creating genetically engineered flying shrimps, or perhaps a more practical alternative. His view does have an important theoretical consequence. It implies that if one ‘replayed the tape of life’ then one would get back pretty much what we have today, even if we might find it hard to recognise. Nature has already done this experiment for us at the ecosystem level. Consider the guilds of herbivores on the southern continents. Their trophic roles are preserved, but the fauna are radically different: camels and rodents in South America, antelopes, horses and cattle in Africa, marsupials in Australia and birds (now sadly extinct) in New Zealand.

Philosophical issues

Given that biological complexity really has increased over time, one must ask if existing theories of evolution would predict and explain this trend. This is a major focus of articles in Part IV of Lineweaver et al. (2013). Several contributors invoke Gould’s model of the ‘drunkard’s walk’, in which the hero of the story wanders along a street, bumping into a wall on one edge of the sidewalk, and eventually, and inevitably, ends by falling into the gutter. Here, the wall stands for some minimal level of complexity required to sustain life, and the drunkard for the evolving organism wandering the street of time. Progress is stochastic and, even with an equal chance of complexity increasing or decreasing, the organism must eventually gain a greater level of complexity (akin to reaching the gutter), because all decreasing pathways are destined to be reflected off the wall by natural selection. It is a simple and compelling idea in many ways, but I prefer a model where increases in complexity are locked in via a steady accumulation of information over time (see Concluding observations).
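The logic of the drunkard’s walk is easy to reproduce numerically. The sketch below is my own toy model, not taken from the book: each ‘lineage’ takes unbiased ±1 steps in complexity but is never allowed to fall below a minimal viable level, and the population mean nevertheless drifts steadily upward, which is the whole force of Gould’s argument.

```python
import random

# Toy version of Gould's drunkard's walk: unbiased +/-1 steps in
# "complexity", with a reflecting wall at the minimum viable level.
def drunkards_walk(steps, wall=0, start=0, rng=None):
    rng = rng or random.Random()
    x = start
    for _ in range(steps):
        x += rng.choice((-1, 1))
        if x < wall:          # selection removes anything below the wall
            x = wall
    return x

rng = random.Random(42)
walkers = [drunkards_walk(1000, rng=rng) for _ in range(2000)]
print("mean complexity after 1000 unbiased steps:",
      sum(walkers) / len(walkers))   # well above the starting value of 0
```

No upward bias is built into the steps; the apparent trend comes entirely from the wall, which is precisely why I find the alternative, information-accumulation model discussed in the Concluding observations more satisfying as an explanation of a genuine drive.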

Ruse (2013, Chapter 12) explores at length the key question ‘Does evolutionary theory predict ever increasing complexity?’. He draws an interesting contrast between the writings of Darwin and Spencer, claiming that these intellectual frameworks were carried forward by their disciples, Fisher and Wright respectively. In his view, Darwin puts natural selection forward as the driving force of evolution and anticipates ‘progress’, but does not require it. Spencer, on the other hand, relegates ‘survival of the fittest’ (his term for natural selection) to a relatively minor role and sees evolution driven by metaphysical forces along the lines proposed by Lamarck. One should perhaps not discount Spencer’s ideas out of hand, as they were certainly current among professional biologists in England well into the early twentieth century (Mayr 1982). Also, under his scheme, evolutionary progress is certain in the sense of ‘a greater diversity of parts’. In my view, it is as well to be cautious if one wishes to substitute the term ‘progress’ (perhaps out of a sense of historical deference) for increasing complexity. There will always be progress of a type. Time’s arrow guarantees this. Things will inevitably change going from A to B as the years roll on. But the real issue is: will they become more complex? This is not a given and depends on how far in time it is from A to B. I agree with Ruse that Fisher is an archetypal Neo-Darwinian, perhaps the fiercest of all fierce Neo-Darwinians. I am not so sure about seeing Wright as an intellectual descendant of Spencer. Their greatest point of similarity lies in shifting natural selection to the sidelines. Wright explores the null hypothesis of no selection and then asks if evolution will still happen—it will. He sees this in terms of ‘adaptive landscapes’ in his Shifting Balance Theory. However, movement between fitness peaks across this topology is certainly not driven by inheritance of acquired characteristics.

Many of the ideas above are also captured by Wimsatt’s (2013, Chapter 13) concept of ‘generative entrenchment’, which applies to technological and cultural evolution as much as it does to biological evolution. However, he does not generally favour a search for ‘laws of complexity’. He is joined in this to some extent by Clayton (2013, Chapter 14), who feels that ‘complexity’ (which I read as Complexity Theory) does not yet constitute a Grand Unified Theory of cosmic or biological evolution. Here I feel forced to agree, though I am far less certain that complexity theorists are making such an ambitious claim. He goes further and rejects what he terms a ‘Unity Approach’ to the science of complexity because he feels it potentially “obscures a deeper insight that (multiple—my insertion) complexity theories offer”. His alternative is a ‘Multiple Complexities’ approach (ibid., pp. 342–344). I am not certain that I share quite the same scepticism as these two authors, but I do note the warning bells.

Concluding observations

I think there is no doubt that this book has aired many of the major issues around Complexity Theory and will stimulate new thought and new work in this area. However, with a finite number of contributors, no book like this one can hope to be all-inclusive. Clearly, there must be significant gaps in the coverage. Cosmological and biological aspects are extensively discussed, but there is less on matters philosophical and very little about society and religion. Again, perhaps this is only to be expected given the interests of the contributing writers. I don’t believe it would be fair to say that this constitutes a failure on the part of the original seminar; rather, it creates an opportunity for others to pick up on, if they should choose to do so. One must also agree with the Editors’ own assessment that they have not provided answers to the leading questions. But have these questions been clarified? I am not so sure about this either, because many of these issues remain much as they were when first raised; see for instance Carroll (2001) on morphological complexity. This volume’s major contribution would seem to be highlighting the multifaceted nature of the idea of complexity, and this idea may, in time, become strongly influential.

The idea of complexity itself was first introduced to this reviewer by reading an old article in Scientific American about the message that was to be engraved on the golden disks carried by the two Voyager spacecraft. This article explained how information represented in a simple letter code could build progressively to introduce numbers and mathematical operators, proceed to show that on Earth we use a base 10 counting system, and demonstrate that we know how to calculate the area of a circle etc. As each new operator is introduced to the hypothetical alien reader, an expanding world of sophistication and possibility is revealed. In other words, interactions between combinations of simple pieces of information quickly create complexity. I had a similar experience in the early 1980s, sitting in an evening class at a community college to learn the BASIC computer language. Each new command that the novice learns opens up an infinity of possibilities. It turns out that using a few simple IF…THEN statements in succession quickly has emergent properties in the form of useful computer programmes; e.g. for teaching maths to school children, making animated electronic Christmas cards for friends and producing ever-changing coloured psychedelic patterns on the TV. I may not have made the very best use of my new talents, but I did learn a valuable lesson about complexity. Hence, I count myself among the members of any school of biology that holds that the apparently increasing complexity of life is an inevitable consequence of a slow accumulation of genetic information. Further, biological systems do seem to have some sort of in-built drive to increase and adapt their information content—selfish DNA, gene/partial genome duplication etc. From this process arise many emergent properties, due to multifactorial interactions between the genes themselves, their control elements and networks of expressed gene products. Clearly, not all will be successful, as parts of biological hyperspace remain unfilled and some may not benefit from becoming more complex (see the examples of body plan design and forms of agriculture as practised by animals, used by Conway Morris in Chapter 7, pp. 154–155). So the challenge is out there. Complexity Theory has matured as a tool, the key questions have been newly reframed and the opportunity to get down to the real work has been presented. It remains to be seen if this book will become the inspiration and sustaining motivation for these new programmes.