
Historical Overview

In the Aristotelian world view, every effect has a cause. This principle can be formally expressed by

$x' = f(x)\,,$
(6.1)

according to which the rule f describes how the cause x generates the effect x′. In the 17th century, Newton introduced a new way to quantify causality in terms of continuously changing states. The Newtonian world view can be encapsulated in the same formal equation, except that x′ represents the rate of change of state x at an instant, rather than a new state at a subsequent time.

Newtonian mechanics allowed us to supersede the primitive, common-sense view that patterns in structure and behavior must reflect patterns in an underlying cause. Newton showed that patterns can be generated autonomously in a physical system when the rate of change of state is a function of state.

The iconic example is Newtonʼs model of the orbits of the planets. Seen from the Earth, planets move in complex patterns among the stars. Copernicus explained that these paths would look simple if we could view them from the sun. Newton, building on Keplerʼs mathematical model of the paths that planets take around the sun, explained how they result from a simple rule of the form (6.1). The cause in Newtonʼs model is a radial force acting towards the sun, but the effect is qualitatively different, a periodic elliptical orbit around the sun. Newtonʼs beautifully simple, accurate predictive model displaced the beautifully simple ancient explanation that planets perform a complex dance in the heavens because an intelligent designer employed angels to make it so (Fig. 6.1a).

Fig. 6.1

(a) Viewed from Earth (E) other planets follow idiosyncratic complex paths. Newton showed that the different paths can be predicted by the same simple rule. In Newtonʼs model planets move nearly at right angles to the forces that move them (arrow). Common sense is not merely useless, but misleading in trying to understand pattern formation in this simplest of dynamical systems. (b) Harold Edgertonʼs famous photograph of a milk splash, which formed the frontispiece of DʼArcy Wentworth Thompsonʼs On Growth and Form. In violation of common sense, a complex, lifelike form is generated from a simple, egg-shaped precursor, by dropping it from a height onto a formless constraint. Does animal morphogenesis employ analogous, autonomous pattern-forming mechanisms? Thompson thought so, but died before mathematical methods capable of explaining crown splash formation, and computer simulation methods capable of replicating it, were developed

Medieval thinkers had a very simple explanation of animal form: preformation. They proposed that a miniature human, a homunculus, is folded into each human egg. Development merely unfolds a structure that already exists. In the late 19th century the Newtonian scientific revolution began to have an influence on developmental biology. In 1874, Wilhelm His demonstrated how the development of anatomical structures can be mimicked by nonliving materials. Shortly thereafter, Roux coined the term Entwicklungsmechanik (or developmental mechanics) to describe this approach to explaining animal form. The approach was eloquently championed by DʼArcy Wentworth Thompson in the early 20th century. Thompsonʼs epic tome, On Growth and Form [6.1], was described by Sir Peter Medawar as the greatest work of scientific literature ever written in the English language [6.2]. Its frontispiece showing a drop of milk splashing onto a surface (Fig. 6.1) has become an iconic image in biology. Thompson pointed out that the beautiful, regular, and reproducible pattern is not present in the falling milk drop. We must seek its creator in the dynamics of the splash, not in angels that intervene just at that moment.

However, the early insights of developmental mechanics were overtaken by developments elsewhere. While Thompson was writing his book in Scotland, Thomas Hunt Morgan was studying the inheritance of fruit fly body parts in New York. Building on Mendelʼs earlier work, Morgan made the Nobel Prize-winning discovery that body parts are inherited as if instructions for building them are laid out in lines like beads on a string [6.3]. He called these instructions genes. They remained theoretical objects until the middle of the 20th century, when they were identified with DNA nucleotide sequences [6.4]. This discovery ushered in a period of spectacular productivity in molecular biology, as the beautiful ideas of developmental mechanics were swept away by the ugly facts: Morphogenesis is controlled by patterns of gene expression [6.5]. Genomes evolve by blind evolutionary tinkering to construct organisms that perpetuate the genes [6.6,7]. Any resemblance between the living and the dead is merely coincidental.

As predicted by the theory that genes determine morphology, disrupting genes or gene expression can disrupt morphological development. However, genetic determination of form has turned out not to be as simple as first suggested by Mendelʼs peas and Morganʼs flies. Many of the claims that have been made about the effects of genes can be made, using comparably good evidence, about the effects of star signs. The zodiacal sign of oneʼs birth predicts height, susceptibility to mental and physical illness, career choices, sporting ability, and various other personal attributes and life outcomes [6.8,9,10,11]. These observations are real, but it requires some critical thought and a little understanding of statistical theory to understand why these correlations occur. Ought not the same standard be applied in developmental genetics? Morphological patterns are predictable from gene expression patterns during development. The question is, why?

As Newman and Forgacs [6.12] point out, the idea of a genetic program for building an organism took hold among molecular biologists despite the fact that no convincing model of a causal link between gene expression and the three-dimensional form of an animal had ever been presented. The Human Genome Project, whose goal was to print out the instruction book for human biology, marked the high point for the paradigm that every problem in organismal biology can be reduced to the problem of finding a gene for it [6.13]. However, the failure of genomics to live up to its early promise has not dampened enthusiasm for more of the same. Instead, molecular biologists are extending the paradigm to incorporate multiple genes for a trait, multiple effects of a gene, interactions between genes, and feedback loops between genes and gene products [6.13].

In the late 20th century, molecular genetics became eerily reminiscent of the last gasps of Ptolemaic astronomy. A proliferation of epicycles leads to an increasingly accurate description of rapidly accumulating data, and ever more accurate predictions [6.14]. We can only thank our lucky stars – irony intended – that Keplerʼs contemporaries did not have computers. With even a modest 21st century desktop computer, 17th century astrologers could have developed astroinformatics and bequeathed to us an ability to describe everything in the world with arbitrary precision, while leaving us utterly ignorant of our place in it. It is worth noting that the Ptolemaic model is not inherently false, just a different way of looking at the data. It remains to this day capable of describing observations with greater precision than modern astronomy, because Ptolemaic epicycles are constrained only by a need to fit the data while modern astronomers are obliged to ensure that their models obey certain constraints now called the laws of physics.

It is true that large numbers of genes and gene products interact in complex networks during development, and there is nothing inherently false in the theory that there are systematic relationships between patterns of gene expression and patterns of morphology, physiology, and behavior. The problem is that developmental systems are clearly dynamical systems. As illustrated above, even the simplest dynamical systems are beyond human comprehension if we attempt to model them as Aristotelian causal chains or networks; For example, while on the one hand we cannot help but be impressed by the prodigious effort, skill, and technology that has gone into mapping out the molecular genetic network underlying sea urchin morphogenesis [6.5], we also cannot help noticing that the result tells us nothing about how sea urchin morphology arises in this simplest of animal developmental systems. From a computational modelerʼs point of view a molecular network map makes it possible to simulate the molecular network that operates while the animal is developing, without giving us the slightest hint about how to simulate the development of the animal.

This chapter is a call for developmental genetics to become reacquainted with developmental mechanics. Modern mathematical methods and computational tools make it possible to analyze the dynamics of complex networks, and morphogenesis in expanding soft matter (i.e., the idea formerly known as growth and form). This chapter will lay out some ideas that may be fundamental to understanding the relationship between genes and proteins at the microscopic level, and morphology, physiology, and behavior at the macroscopic level. These ideas are currently not a standard part of molecular geneticistsʼ training. One of the key ideas that will be explained is that, in general, there are infinitely many sets of components and interactions that will generate a particular pattern. As a consequence of this, attempts to understand how the pattern arises by cataloguing the underlying components and their interactions are likely to be hindered by large variability at the microscopic level; For example, what appears to be the same macroscopic process in two individuals could be accompanied by overexpression of a particular gene in one individual and reduced expression in the other. The good news is that the global characteristics of a network that specify its behavior are generally much fewer than the number of components, and very much fewer than the number of interactions. This means that the cost of learning abstract mathematical concepts is repaid by a simpler reality.

A second important concept is that, while the dynamics of genetically regulated macromolecular reaction networks does have the potential to account for the diversity of pattern and form in biology, it struggles to explain the lack of it. Why, among all of the patterns that could be generated by such networks, do they appear to restrict themselves to instructing tissues to develop in ways that they could develop without instructions? This chapter will outline a possible solution, drawing on recent progress in evo-devo, a research program that is producing a new synthesis of 19th century developmental mechanics and modern molecular genetics.

Models

Ordinary Differential Equations

Ordinary differential equations quantify how the rate of change of some variable(s) depends on some other variable(s), which may include the state variables themselves.

This section introduces three simple dynamical systems that will subsequently be used to illustrate principles of dynamical pattern formation. These are systems whose behavior can be analyzed using two-dimensional plots. The methods of analysis are general, but are difficult to visualize for more complex systems.

It is not necessary to follow this section in detail in order to get the key points, which can be summarized as follows.

  1. There is a general mathematical model, called the state space model, for any dynamical system with a finite number of components.

  2. We can draw a map showing the kinds of behavior that a system can generate – the topology of its trajectories – by examining mathematical properties of its state space model.

  3. There is, in general, an infinite class of state space models whose trajectories have a given topology.

  4. Correspondingly, on the one hand we can design infinitely many networks of interacting components that exhibit any specified behavior, while on the other hand it is impossible to determine how a network will behave by examining its components and their interactions, unless you know all of them.

  5. The ideas extend to systems with an infinite number of components, in particular to the mechanics of continuous materials.

Pendulum

Pendulums oscillate spontaneously at a frequency depending on their effective length. The oscillations die away because energy is dissipated by friction and drag. The equation of motion for a simple pendulum including velocity-dependent drag is

$\ddot{\theta} + \mu\dot{\theta} + \frac{g}{r}\sin\theta = 0\,,$
(6.2)

where θ is the angular deviation from the vertical equilibrium position, g is the acceleration due to gravity, r is the length of the pendulum, and μ is a drag parameter. The dot notation represents differentiation with respect to time; i.e., $\dot{\theta}$ is angular velocity and $\ddot{\theta}$ is angular acceleration. This equation is nonlinear because the restoring force, the component of gravity driving the pendulum back towards the vertical equilibrium position θ = 0, depends on the sine of the angle.

van der Pol Oscillator

Formally, the van der Pol oscillator resembles the pendulum. It has a linear restoring force, but its nonlinear drag switches to antidrag when |θ| < 1. It generates periodic behavior that does not die away: energy dissipated by drag in some parts of the cycle is replaced by work done by antidrag in other parts of the cycle.

$\ddot{\theta} - \mu(1 - \theta^2)\dot{\theta} + \theta = 0\,.$
(6.3)

Lotka–Volterra

The Lotka–Volterra equations are often used to model predator–prey systems. Using r to represent the number of rabbits and f to represent the number of foxes,

$\dot{r} = \alpha r - \beta r f\,,$
(6.4a)
$\dot{f} = -\gamma f + \delta r f\,.$
(6.4b)

The first equation states that rabbits are born at a constant per-capita rate and die at a per-capita rate proportional to the number of foxes. The second says that foxes die at a constant per-capita rate and are born at a per-capita rate proportional to the number of rabbits.

This model has a simple realistic interpretation in terms of the probability that foxes and rabbits will encounter each other. If each species is randomly distributed over a region, then the probability that one will encounter the other is proportional to the product of the population densities. These densities are proportional to population numbers in a fixed region. The coefficients of the product terms are different in each equation because the effects of predator–prey encounters are not symmetric. The predator gains a little while the prey loses a lot when they meet (the life–lunch principle).

The Lotka–Volterra equations generate periodic fluctuations in rabbit and fox numbers. These patterns are qualitatively similar to patterns generated by the pendulum and van der Pol oscillator equations, and yet (6.4a) and (6.4b) seem qualitatively different from (6.2) and (6.3). Subsequently, we shall see that these equations are not as different as they seem to be at first sight.
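
As a minimal illustration, the following Python sketch integrates (6.4a) and (6.4b) numerically; the rate constants and initial populations are arbitrary choices made only to show the cycles.

# Sketch: numerical integration of the Lotka-Volterra equations (6.4a,b).
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta = 1.0, 0.1, 1.5, 0.075   # assumed rate constants

def lotka_volterra(t, x):
    r, f = x                                # rabbits, foxes
    return [alpha * r - beta * r * f,       # (6.4a)
            -gamma * f + delta * r * f]     # (6.4b)

sol = solve_ivp(lotka_volterra, (0, 50), [40.0, 9.0], dense_output=True)
t = np.linspace(0, 50, 1000)
r, f = sol.sol(t)
print(f"rabbit numbers cycle between {r.min():.1f} and {r.max():.1f}")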

Linearization

Because sin θ ≈ θ for small θ, for small swing angles we can approximate the nonlinear pendulum model with the linear model

$\ddot{\theta} + \mu\dot{\theta} + \frac{g}{r}\theta = 0\,.$
(6.5)

Linearizing the van der Pol oscillator at θ = 0 gives an equation of the same form, except that the drag coefficient is −μ (antidrag at small amplitudes) and g/r is replaced by 1.

A linear approximation to the Lotka–Volterra equation at f = 0, r = 0 is

$\dot{r} = \alpha r\,,$
(6.6a)
$\dot{f} = -\gamma f\,.$
(6.6b)

The linearized pendulum is a damped oscillator and the linearized van der Pol system is a negatively damped (growing) oscillator, but the linearized Lotka–Volterra system (6.6a) and (6.6b) does not oscillate at all. We will examine this in more detail now.
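
The difference shows up in the characteristic roots of the linear models. Substituting $\theta = A\,\mathrm{e}^{st}$ into (6.5) gives

$s^2 + \mu s + \frac{g}{r} = 0\,, \qquad s = -\frac{\mu}{2} \pm \mathrm{i}\sqrt{\frac{g}{r} - \frac{\mu^2}{4}}\,,$

a complex-conjugate pair, so small deviations spiral around the origin as they decay (or grow, when the drag coefficient is −μ). The linearized Lotka–Volterra system has the real eigenvalues α and −γ, so near the origin rabbit and fox numbers simply grow or decay exponentially without oscillating.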

State Space Models

The pendulum can be rewritten as a pair of first-order equations like the Lotka–Volterra system, by introducing state variables $x_1 = \theta$ and $x_2 = \dot{\theta}$. The pendulum equation becomes

$\dot{x}_1 = x_2\,,$
(6.7a)
$\dot{x}_2 = -\mu x_2 - \frac{g}{r}\sin x_1$
(6.7b)

and the van der Pol equation becomes

$\dot{x}_1 = x_2\,,$
(6.8a)
$\dot{x}_2 = \mu(1 - x_1^2)\,x_2 - x_1\,.$
(6.8b)

In each case, state variable $x_1$ specifies the configuration while state variable $x_2$ specifies the rate of change of configuration.

This trick, illustrated here for single second-order equations, can be used to convert any set of differential equations, of any order, into a system of first-order differential equations in a set of state variables. The resulting state space form is a general model for a finite-dimensional dynamical system,

$\dot{x}_k = f_k(\mathbf{x})\,,$
(6.9)

where $x_k$ is the kth state variable and $\mathbf{x}$ is the state vector, containing all of the state variables.

The generality of the state space model for finite-dimensional nonlinear dynamical systems means that we can analyze arbitrary systems in terms of this model. Because all of the relevant variables and their rates of change are treated on the same footing, conceptually we can consider any finite-dimensional nonlinear dynamical system to be an ecosystem of interacting species: foxes, rabbits, etc. In different dynamical systems the state variables may be chemical reagent concentrations, mechanical configuration variables, or any properties of interacting components in a network.

Nonlinear dynamical systems can be difficult to understand because everything is connected to everything and everything is always changing. You generally cannot see what a dynamical system will do next by looking at its current configuration, even if you have its equation of motion. However, state space models make it possible to represent and visualize dynamical systems geometrically. Some species in a dynamical model may correspond to directly observable or measurable quantities – like the species in the Lotka–Volterra model – while others may correspond to abstract or unobservable properties that are much more troublesome to human intuition. These different kinds of variables are treated on the same footing in state space models, which quantify how the state changes as a function of the current state. It is possible to visualize this across the state space (Fig. 6.2). This is a major conceptual advantage of the state space model; it lets us freeze arbitrary nonlinear dynamics in a snapshot that characterizes what happens next. We will explore how to use state space maps to analyze dynamical systems in more detail below.

Fig. 6.2

In a snapshot of a pendulum (a) it is impossible to tell which way the pendulum is moving. As illustrated in (a), it can be impossible to see how a simple mass will move even if you know the forces acting on it. (b) In contrast, the pendulumʼs future behavior is easy to predict and visualize from a snapshot in state space. At any point the state space model specifies a vector showing how the state changes at that point. A trajectory starting at any point can be determined by following the flow of the vector field

Another advantage of the state space model is that it is relatively straightforward for a numerical algorithm to map out trajectories from arbitrary starting positions in state space, given such a model. The most commonly used numerical integration routines for solving ordinary differential equations require the equations to be specified in state space form [6.15].
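
As a minimal illustration of the required form, the sketch below (Python, with arbitrary parameter values) passes the pendulum (6.7a,b) and the van der Pol oscillator (6.8a,b) to a standard solver, which expects exactly a function that returns the state derivatives given the current state.

# Sketch: two systems in state space form, handed to a standard ODE solver.
import numpy as np
from scipy.integrate import solve_ivp

mu, g, r = 0.2, 9.81, 1.0                            # assumed parameters

def pendulum(t, x):
    x1, x2 = x
    return [x2, -mu * x2 - (g / r) * np.sin(x1)]     # (6.7a,b)

def van_der_pol(t, x):
    x1, x2 = x
    return [x2, mu * (1 - x1**2) * x2 - x1]          # (6.8a,b)

t_eval = np.linspace(0, 30, 3000)
pend = solve_ivp(pendulum, (0, 30), [1.0, 0.0], t_eval=t_eval)
vdp = solve_ivp(van_der_pol, (0, 30), [0.1, 0.0], t_eval=t_eval)
print(pend.y.shape, vdp.y.shape)                     # each (2, 3000): two state variables over time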

Linear State Space Models

We derived linear approximations to three nonlinear differential equations in Sect. 6.2.5. It is generally easy to linearize a state space model at any specified point in state space by taking partial derivatives of the functions f k in (6.9) at that point. The state space model linearized at x 0 is

$\dot{\mathbf{x}} = F\mathbf{x}\,,$
(6.10)

where $F$ is the matrix of partial derivatives $F_{kj} = \partial f_k / \partial x_j \big|_{\mathbf{x}=\mathbf{x}_0}$; For example, linearizing the state space model (6.7a) and (6.7b) for the pendulum at the origin gives

$\dot{x}_1 = x_2\,,$
(6.11a)
$\dot{x}_2 = -\mu x_2 - \frac{g}{r} x_1\,,$
(6.11b)

which can be written in the form (6.10) with

$F = \begin{pmatrix} 0 & 1 \\ -g/r & -\mu \end{pmatrix}.$
(6.12)

The linearized state space model describes the local behavior of a smooth dynamical system near x 0. The behavior can be characterized in terms of the properties of the matrix F; that is, analyzing the local behavior of a dynamical system comes down to matrix algebra. If the system is linear then its global behavior can be determined by matrix algebra (Sect. 6.3).
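
A small numerical sketch (arbitrary parameter values) of what this means in practice: construct F from (6.12) and inspect its eigenvalues.

# Sketch: local behavior of the linearized pendulum from the matrix (6.12).
# A complex pair with negative real parts indicates a decaying oscillation.
import numpy as np

mu, g, r = 0.2, 9.81, 1.0                 # assumed parameter values
F = np.array([[0.0, 1.0],
              [-g / r, -mu]])             # equation (6.12)

eigenvalues = np.linalg.eigvals(F)
print(eigenvalues)                        # approx. -0.1 +/- 3.13j for these values
print(np.all(eigenvalues.real < 0))       # True: the origin is locally stable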

Critical Points

A critical point is a point in state space at which the state derivatives are zero,

$f_k(\mathbf{x}) = 0\,, \quad \text{for each } k\,.$
(6.13)

At a critical point, the linear approximation to a smooth nonlinear system is

$\dot{x}_k = 0\,,$
(6.14)

which implies that the system will freeze up if it reaches a critical point. This can happen, but a critical point may be unstable, meaning that arbitrarily small perturbations cause the system to move away.
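
As a concrete example, setting the right-hand sides of the Lotka–Volterra equations (6.4a) and (6.4b) to zero,

$\alpha r - \beta r f = 0\,, \qquad -\gamma f + \delta r f = 0\,,$

gives two critical points, $(r, f) = (0, 0)$ and $(r, f) = (\gamma/\delta,\, \alpha/\beta)$: extinction of both species, and a coexistence point at which births exactly balance deaths for each species.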

Critical points of nonlinear systems are important for understanding the kinds of behavior that they can generate, because near unstable critical points small perturbations of the state can cause large changes in a systemʼs behavior. Near stable critical points, small perturbations have little or no effect.

In two dimensions there are three basic kinds of critical point (Fig. 6.3). The critical point may be stable, in which case the system will converge to the critical point from nearby states; unstable, in which case the system will diverge away from the critical point; or a saddle, which attracts trajectories along some directions and repels them along others.

Fig. 6.3

Critical points in two-dimensional state space. (a) Stable critical point, all state derivative vectors point inwards. (b) Unstable critical point, states diverge away. (c) Saddle point, some trajectories lead in to the critical point, while others lead away

Nonlinear systems are qualitatively linear away from critical points, in the sense that small perturbations have proportionately small effects on trajectories.

Autonomy

The nonlinear state space model (6.9) describes how the rate of change of state of a system depends on its current state. It conspicuously fails to include external inputs that may also influence the evolution of the state. Equation (6.9) should be modified to include external influences,

$\dot{x}_k = f_k(\mathbf{x}, \mathbf{u})\,,$
(6.15)

where $\mathbf{u}(t)$ is an external signal acting on the system.

However, by introducing time as a state variable,

$x_{n+1} = t$
(6.16)

and an additional equation of motion for this state variable,

$\dot{x}_{n+1} = 1\,,$
(6.17)

(6.15) can be transformed into (6.9).
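
A minimal sketch of the trick, for a pendulum driven by an assumed external force u(t) = sin ωt:

# Sketch: converting a driven (non-autonomous) system into autonomous form
# (6.9) by adding time as an extra state variable, as in (6.16) and (6.17).
import numpy as np
from scipy.integrate import solve_ivp

mu, g, r, omega = 0.2, 9.81, 1.0, 2.0     # assumed parameters

def forced_pendulum(t, x):
    # non-autonomous form (6.15): the derivative depends on t through u(t)
    u = np.sin(omega * t)
    return [x[1], -mu * x[1] - (g / r) * np.sin(x[0]) + u]

def autonomous_version(t, x):
    # augmented state (theta, theta_dot, x3), where x3 plays the role of time:
    # x3' = 1, so u is now a function of the state alone
    u = np.sin(omega * x[2])
    return [x[1], -mu * x[1] - (g / r) * np.sin(x[0]) + u, 1.0]

t_eval = np.linspace(0, 20, 500)
a = solve_ivp(forced_pendulum, (0, 20), [0.1, 0.0], t_eval=t_eval)
b = solve_ivp(autonomous_version, (0, 20), [0.1, 0.0, 0.0], t_eval=t_eval)
print(np.max(np.abs(a.y[0] - b.y[0])))    # tiny: the two descriptions agree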

The equivalence of models (6.9) and (6.15) means that technically it makes no difference at all whether we regard system and environment as separate entities that interact, or as parts of one larger system. From a mathematical point of view, then, the nature–nurture debate is epistemological (about how we describe things) rather than ontological (about things). The mathematical solution is unambiguous: Choose the model that is simpler to analyze (rather than getting caught up in the nature–nurture debate – the map is not the territory).

Partial Differential Equations

We want to consider dynamics of continua, such as spatially inhomogeneous chemical reactions and the mechanics of continuous materials. In these systems the states are functions of location and time x(r, t), not just functions of time x(t). Models of continuum dynamics require partial differential equations (PDEs), rather than the ordinary differential equations (ODEs) that we have been considering thus far.

We can model a continuum approximately by considering states at grid points. In two dimensions we can choose an array of locations $\mathbf{r}_{kj}$ and replace the spatially distributed state with a finite set of state variables,

$x_{kj}(t) = x(\mathbf{r}_{kj}, t)\,.$
(6.18)

In this way we can model a continuum using a large set of ordinary differential equations instead of one partial differential equation. As we have seen in Sect. 6.2.5 this set of ODEs can be rewritten in state space form, so (6.9) is a general model for continuum systems.
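
A minimal sketch of this method of lines for one-dimensional diffusion (the diffusion constant, grid, and initial condition are arbitrary choices):

# Sketch: discretize space, keep time continuous. The PDE du/dt = D d2u/dx2
# becomes n coupled ODEs of the form (6.9), one per grid point.
import numpy as np
from scipy.integrate import solve_ivp

D, n, dx = 0.1, 50, 1.0 / 50               # assumed diffusion constant and grid

def diffusion(t, u):
    dudt = np.zeros_like(u)
    # second spatial difference at interior grid points
    dudt[1:-1] = D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return dudt                            # boundary values held fixed

u0 = np.zeros(n)
u0[n // 2] = 1.0                           # initial spike of concentration
sol = solve_ivp(diffusion, (0, 0.05), u0)
print(sol.y[:, -1].round(3))               # the spike has spread out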

This observation that continuum systems can be modeled as very large dynamical networks of discrete interacting components may seem a little simplistic. However, the truth is that, under the hood, many numerical methods for solving partial differential equations explicitly solve systems of equations like (6.18). Conversely, materials that have classically been modeled as continua are in fact very large collections of very small interacting components. As computing technology advances, we are increasingly able to simulate macroscopic phenomena explicitly in terms of microscopic mechanisms, and as we will see below there are modern computing environments such as NetLogo that make it remarkably easy to do this. I am not trying to suggest that PDE models have nothing to contribute to developmental biology, only pointing out that for present purposes we can avoid the complexities of PDEs and treat everything as a network.

Networks

In integral form, (6.9) becomes

$x_k = \int f_k(\mathbf{x})\,\mathrm{d}t\,.$
(6.19)

It follows that, via a state space model, any dynamical system can be modeled using an array of integrators whose outputs loop back to the inputs via a transformation. It is worth the effort to understand how the networks in Fig. 6.4 can be drawn by inspection of the corresponding ODEs in state space form.
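
A minimal software version of such an integrator network, for the van der Pol system (6.8a,b): each state variable is literally an accumulator fed by a static transformation of the current outputs (simple Euler integration, arbitrary parameter values).

# Sketch: two integrators in a loop, as in Fig. 6.4b.
mu, dt = 0.5, 0.001
x1, x2 = 0.1, 0.0                      # integrator outputs (initial state)
for step in range(int(30 / dt)):
    dx1 = x2                           # static transformation...
    dx2 = mu * (1 - x1**2) * x2 - x1   # ...of the current outputs
    x1 += dx1 * dt                     # integrator 1 accumulates its input
    x2 += dx2 * dt                     # integrator 2 accumulates its input
print(round(x1, 3), round(x2, 3))      # state after 30 time units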

Fig. 6.4

Network implementations of differential equations. (a) Linear pendulum with drag, (b) van der Pol oscillator, (c) Lotka–Volterra system. These networks can be implemented as analog electronic circuits using off-the-shelf components. A state space model of a dynamical system can be directly interpreted as a circuit diagram for a network of interacting operators that mimics the system. The reader is encouraged to check the correspondence between signals and operations in the illustrated networks, and the operands and operators in the differential equation models

In the light of preceding theory, Fig. 6.4 shows that networks of integrators and static transformations can mimic arbitrary dynamical systems. This result is applied, for example, in analog circuit design. Given a state space model, an engineer can design a circuit whose behavior mimics any dynamical system. The task is made easy in electronics by the availability of components designed to implement standard mathematical operations. Given the ability to select from a sufficiently diverse set, circuits could be constructed using other kinds of components; For example, the state variables could be implemented using reagent concentrations in a macromolecular reaction network, or spiking probabilities in a neural network [6.16].

Where Patterns Come From

Oscillations in Linear Systems

It is easy to verify by substitution that the function

$\theta(t) = A\,\mathrm{e}^{-t/\tau}\cos\omega t$
(6.20)

satisfies the linearized pendulum equation (6.5), when $\tau = 2/\mu$ and $\omega = \sqrt{g/r - 1/\tau^2}$. This function describes an oscillation that may decay or grow (Fig. 6.5c). For a real pendulum, μ is positive, corresponding to a velocity-dependent drag force, and in this case the oscillation decays exponentially with time constant τ. The decay reflects energy dissipated by the drag force.

Fig. 6.5

Oscillations due to second-order coupling between state variables and their rates of change. (a) State trajectories of a pendulum with different values of the drag parameter μ. A closed cycle appears when μ = 0. There is a stable point attractor S at the origin when μ > 0. (b) , (c) Configuration of the pendulum over time with different values of μ. Persistent sinusoidal oscillation occurs when μ = 0. (d) State trajectories of a van der Pol oscillator, showing trajectories that start near the periodic attractor A converging onto it. (e) Configuration θ of the van der Pol oscillator over time. When μ is small, oscillations generated by the van der Pol oscillator closely resemble an undamped pendulum (simple harmonic motion)

If there is no drag (μ = 0), as in a vacuum, then the solution is simple harmonic oscillation (Fig. 6.5b),

$\theta(t) = A\cos\!\left(\sqrt{g/r}\;t\right).$
(6.21)

Linear oscillations occur in a mechanical system when there is a force driving its configuration towards a point, proportional to how far away it is from that point; For example, such restoring forces can be approximately generated by gravity acting through a rotational constraint, as in a pendulum, or by springs. Oscillations can be generated by second-order dynamical loops in other kinds of physical systems. In a second-order loop, the rates of change of two state variables are coupled in a closed chain.

Feedback and Dynamic Stability

Suppose we use an actuator to apply a force on the pendulum proportional to its velocity. This is called feedback, because the applied force is a function of state. Adding this term to the linearized model (6.5), we obtain the closed-loop equation of motion

$\ddot{\theta} + \mu\dot{\theta} + \frac{g}{r}\theta = \lambda\dot{\theta}\,.$
(6.22)

By choosing λ = μ, we create a pendulum that oscillates periodically.
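
Collecting terms makes the effect of the feedback explicit,

$\ddot{\theta} + (\mu - \lambda)\,\dot{\theta} + \frac{g}{r}\,\theta = 0\,,$

so the feedback simply replaces the drag coefficient μ by the net drag μ − λ.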

A pattern is dynamically stable if it persists when the state is perturbed. The feedback-controlled pendulum oscillator is not stable because small perturbations of θ and/or $\dot{\theta}$ alter its amplitude. However, it is not unstable either, because while a perturbation will move the pendulum off one trajectory, it will move it onto a similar, neighboring trajectory. This is an example of neutral stability.

When λ < μ in the feedback system, so that the net drag μ − λ is positive, the pendulumʼs state converges to $(\theta, \dot{\theta}) = (0, 0)$; i.e., it comes to rest at the origin. This behavior is dynamically stable, but not very interesting from a biological pattern-formation point of view.

External Pattern Generators

Persistent oscillations in the feedback-controlled pendulum are not free. The actuator must use energy to do the work necessary to compensate for dissipation due to drag. This means that the controlled system must have an external energy source, powering an actuator that delivers a periodic force to the pendulum. Without the illumination provided by dynamical systems theory, intelligent observers might agree that this periodic external force causes the pattern of movement. However, from a dynamical systems point of view the external feedback loop is a component added to an autonomous pattern-generating system in order to select and stabilize a desirable pattern. The feedback element simply defends a naturally occurring pattern from the ravages of the second law of thermodynamics.

Structural Stability

Dynamical stability, considered in Sect. 6.2.8, is about whether a pattern persists when the state is perturbed. Structural stability is about whether a systemʼs behavior persists when its structure is perturbed.

The controlled pendulum (6.22) is structurally unstable. Arbitrarily small errors in the feedback parameter λ destroy its periodic oscillation. If λ < μ, the oscillation decays and the pendulum comes to a halt at the origin in state space. This behavior is structurally stable. When there is net drag, small changes in its magnitude do not qualitatively alter this behavior. They only affect how long it takes the pendulum to stop swinging.

The growing oscillation that occurs when λ > μ is also structurally stable. If the feedback is too strong then small changes in its strength only affect how rapidly the oscillations grow. Note that structural stability is a formal mathematical property of the model (6.22). Excessive feedback in a real oscillating system will eventually result in some kind of physical breakdown that invalidates the model. Investigating that breakdown would require a more sophisticated model.

Attractors

An attractor is a locus in state space onto which a systemʼs trajectories converge from nearby trajectories. Closed loops in the state space of the feedback-controlled pendulum when net drag is zero (Fig. 6.5a) are not attractors, because the pendulum will shift onto a neighboring closed-loop trajectory if the state is perturbed. The origin is an attractor when net drag is positive, however, because a damped pendulum will slow to a halt at the origin and stay there in the face of perturbations.

However, as noted before, attractors in pendulum dynamics are not very interesting from a biological pattern-generating point of view. The periodic trajectories of a linear pendulum are not attractors (they are not dynamically stable), and not structurally stable. The only stable attractor of (6.5) is a point at the origin, where it goes to die.

The van der Pol oscillator, on the other hand, has a structurally stable periodic attractor (Fig. 6.5d). When its parameter μ is small, the behavior of a van der Pol oscillator closely resembles the behavior of an undamped pendulum (Fig. 6.5e), but this pattern resists perturbations in the state and structure of the oscillator. This stability incurs a design cost and a running cost. The oscillator must incorporate a mechanism that provides appropriate nonlinear state feedback, and this mechanism must draw power from an external source because the nonlinear term dissipates energy when the coefficient of $\dot{\theta}$ is positive and does work when it is negative.

We have seen on the one hand that pattern formation is easy to analyze in linear systems using analytical solutions of ODE models, but this is not directly relevant to biological pattern formation because the patterns that linear systems generate are unstable and/or uninteresting. On the other hand, the van der Pol example illustrates how nonlinear ODE models can generate structurally and dynamically stable patterns, but it is usually impossible to solve nonlinear ODEs analytically.

Fortunately, it is straightforward in principle to characterize and map the attractors of a nonlinear system from a state space model. We need not consider the details of how to do this, because our present concern is not to know how it is done so much as to know that it can be done. The technical procedure is very clearly explained by Strogatz [6.17]. The result is a map of the state space showing critical points and attractors, and how the systemʼs trajectories flow around them, i.e., diagrams such as Fig. 6.5a for the simple pendulum and Fig. 6.5d for the van der Pol oscillator.

Dynamical systems are said to be topologically equivalent if their critical points and attractors can be matched up by continuously warping the state space. Topologically equivalent systems have essentially the same sets of behaviors.

Bifurcations

By adding a parameter to the van der Pol model we obtain a model that can be smoothly modified into a damped linear pendulum model,

$\ddot{\theta} + \mu\,(1 - 2\lambda + \lambda\theta^2)\,\dot{\theta} + \theta = 0\,.$
(6.23)

This model becomes (6.3) when λ = 1 and, with g/r = 1, the damped linear pendulum (6.5) when λ = 0. Assuming that μ is small and positive, this system can have either a stable point attractor or a stable periodic attractor. This change happens suddenly as the parameter λ changes gradually.

This sudden qualitative change in global dynamics, from actively holding still to oscillating periodically, is a Hopf bifurcation. In general, a bifurcation occurs when a parameter change causes a change in the systemʼs dynamical topology. A critical point or an attractor may appear or disappear, and although the underlying cause may be a small continuous change in a model parameter, the effect is the sudden emergence or extinction of some pattern(s) of behavior in the system.
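
A rough numerical check of this qualitative change, sweeping λ in (6.23) and recording the long-run amplitude of θ (the value of μ and the λ grid are arbitrary choices, and the λ at which the switch occurs is a property of this particular interpolation):

# Sketch: small long-run amplitude indicates a point attractor; a finite
# long-run amplitude indicates a periodic attractor (limit cycle).
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.5

def model(t, x, lam):
    theta, v = x
    return [v, -mu * (1 - 2 * lam + lam * theta**2) * v - theta]   # (6.23)

for lam in (0.2, 0.4, 0.6, 0.8, 1.0):
    sol = solve_ivp(model, (0, 400), [0.5, 0.0], args=(lam,),
                    t_eval=np.linspace(300, 400, 2000))
    print(lam, round(float(np.max(np.abs(sol.y[0]))), 2))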

Global Dynamics

There is a trivial sense in which each network in Fig. 6.4 is just one of an infinite set of networks that generate a particular pattern; For example, in network Fig. 6.4a, we could note that 2λμ = λμ + λμ and have two pathways each feeding λμ back around the integrator for x 2 instead of one pathway feeding back 2λμ. However, there is a more subtle and important way in which infinite families of networks are functionally equivalent. Suppose that we construct new state variables y 1 and y 2 by transforming the original x 1 and x 2,

$\mathbf{y} = A\mathbf{x}\,,$
(6.24)

so that

$\mathbf{x} = A^{-1}\mathbf{y}\,.$
(6.25)

Expressed in terms of the new state variables, (6.10) becomes

$\dot{\mathbf{y}} = A F A^{-1}\mathbf{y}\,.$
(6.26)

Equations 6.25 and 6.26 define an infinite family of dynamical networks whose outputs x – what we actually observe – are indistinguishable.

In general, the dynamics of a linear system with an N-dimensional state vector are characterized not by the N² coefficients of its dynamical matrix F but by the N eigenvalues of this matrix. Correspondingly, the behavior of a linear integrator network depends not on the components and their interactions but on a relatively small number of global characteristics of the network.

This result means that engineers have considerable flexibility in analog circuit design. They can use the similarity transform, $F \rightarrow A F A^{-1}$, which leaves eigenvalues unchanged, to change circuit components and layout without altering the function of the circuit [6.16]. Systems related by a similarity transform are said to be similar. This is not about having free parameters that are unconstrained by the function of the circuit, but about the ability to transform signals and operations to achieve the same function in different ways.
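
A quick numerical sketch of this invariance, transforming the pendulum matrix (6.12) with an arbitrary invertible matrix A:

# Sketch: the entries of A F A^-1 look nothing like F, but the eigenvalues
# (and hence the behavior) are identical.
import numpy as np

mu, g, r = 0.2, 9.81, 1.0
F = np.array([[0.0, 1.0], [-g / r, -mu]])      # equation (6.12)

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 2 * np.eye(2)    # a random, safely invertible A
F2 = A @ F @ np.linalg.inv(A)                  # a "different network", same dynamics

print(np.sort_complex(np.linalg.eigvals(F)))
print(np.sort_complex(np.linalg.eigvals(F2)))  # the same complex pair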

In biology, evolution selects dynamical networks according to their function. Different molecular components of these networks may have very similar properties, and superficially very different networks may generate similar patterns. This implies that what must be conserved across organisms to achieve common goals is not particular molecules or pathways but global characteristics of the molecular reaction networks. As in engineering, there are likely to be tasks for which particular components and circuit topologies tend to be used commonly or even universally, for reasons other than that they produce an advantageous behavior; For example, it may be particularly easy and cheap to produce certain components, they may draw less power to perform the task, or it may simply be that one design became standard many years ago and it would be too disruptive to introduce a new design now, even if it would be better in the long run.

It is possible that the role played by a particular gene product in one species could be carried out by an unrelated gene product in a related species. For example, bicoid expression seems to be essential for establishing the anterior–posterior axis in Drosophila embryos, but other insects appear to use different gene products for the task [6.18]. From a dynamical systems point of view, insect development would appear to require something to be expressed near one pole of the embryo so that its concentration gradient can guide anterior–posterior differentiation during development. The particular molecule that is selected for this task may be just one of many different gene products capable of performing it.

In complex networks where many similar components are available, even if different individuals employ the same molecular components, they do not need to exhibit the same molecular concentration patterns (internal state variables) to achieve the same outcomes. In particular, a pathological perturbation of any pathway can be compensated by adjustments in other pathways to maintain global function. This is straightforward from an engineering perspective; For example, given a functioning network F plus a constraint such as a maximum allowable value for some coefficient (maximum possible reaction rate on the corresponding pathway), it is a simple exercise in algebra to find a similar network that satisfies the constraint – if there is one. If there is not, then the constraint is fatal to the operation of the network. If similarity is merely difficult or expensive to achieve under the constraint, then we might label the constraint pathological.

Dynamical systems theory suggests that we should not necessarily be surprised or concerned about substitution of unrelated gene products in homologous pathways, or large interindividual variation in molecular profiles even within a species. By contrast, the natural prediction of the genetic program theory of biological organization is that there is some optimal level for each gene product, that selection acts to tune gene expression to these optimal levels, and that pathology can be identified by abnormally large deviations from population norms. This strategy does work in some cases, but, as noted above, it has not lived up to early expectations. Extending the same idea to look at multiple gene products may simply be a more difficult and expensive path to the same disappointment.

The genetic program paradigm encourages scientists to try to correlate the expression of particular genes to outcomes in morphology, physiology or behavior, rather than asking how a gene product operates as a network element and how the network is constructed and regulated to generate the outcomes; For example, the function of a resistor is to limit current flow in an electronic pathway. However, if we try to determine function by observing the consequences of removing or modifying resistors in functioning circuits, we would discover that they are pleiotropic devices with multiple roles. They can prevent a system from overheating or generating blue smoke, alter the loudness of sounds or the brightness of lights. Their roles can be contradictory: Removing a resistor from an amplifier can make it a siren, while removing a resistor from a siren can silence it. In addition, it might be observed that manufacturers may substitute resistors with different sizes and shapes, made from different materials, without altering the function of a device. This would be an excellent indoor game to play if there was a reward for discovering and classifying patterns in the relationship between circuit structure and function, because it occupies the mind without stressing it and there is always something to do next. An exponent of this game might learn how to make copies of simple electronic devices and to repair more complex ones, but this would resemble primitive folklore and witchcraft more than modern physical science and engineering.

Evolution and Development of Morphology

The Origin of Order

In cellular metabolism, chemical reactions among thousands of nucleic acid and protein species are coordinated to produce simple, stable patterns in relation to changing conditions at the cell boundary. Metabolism is not a collection of many simple, independent reactions but a molecular ecosystem in which all species interact in a single giant network. Extending the methods illustrated before by simple two-species networks to analyze and design networks with thousands of species seems beyond the capacity of finite intelligence. However, Kauffman [6.20] has shown that, under mundane conditions, large networks of interacting macromolecules are almost certain to form spontaneously.

The basic principle of Kauffmanʼs model is simple. A diverse population of macromolecules presents a diverse set of potential reactions and catalytic interactions. Given some probability that macromolecules will react, and some probability that a macromolecule will catalyze the reaction, the probability that some catalyzed reactions will occur in a collection of macromolecular species grows as the number of species grows. Kauffman demonstrated that, given some small initial probability of catalyzed reactions, increasing the number of molecular species eventually leads to a tipping point at which all of the species join a single large reaction network with probability approaching 1. At this critical point there is a phase transition where a large set of macromolecules containing small subsets of interacting species suddenly coalesces into a single network of interactions.

Kauffmanʼs model shows that complex interacting networks inevitably crystallize out of macromolecular soup containing a sufficiently large number of ingredients. He has discussed it in the context of a metabolism first model of the evolution of living cells. This is beyond the brief of the current chapter, but it may be noted in passing that network phase transitions, first identified mathematically some decades earlier [6.21], provide a simple potential explanation for the irreducible complexity of the very complex and apparently very finely tuned molecular machinery in living cells. Natural selection does not have to construct such machines by gradual modification of simpler ones. It only has to select and modify complex whole networks that necessarily occur for thermodynamic reasons.
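
The underlying phase transition can be illustrated with a toy computation that uses a random graph as a crude stand-in for a random catalytic reaction network: when the expected number of interactions per species passes a threshold, most species suddenly belong to a single connected web.

# Sketch: size of the largest connected cluster among n "species" as the
# expected number of links per species increases (union-find bookkeeping).
import random

def largest_cluster(n, links_per_node):
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for _ in range(int(links_per_node * n / 2)):   # random pairwise links
        a, b = random.randrange(n), random.randrange(n)
        parent[find(a)] = find(b)
    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values())

random.seed(1)
for k in (0.5, 0.8, 1.0, 1.5, 2.0, 3.0):
    print(k, largest_cluster(10000, k))   # jumps from a few dozen to thousands near k = 1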

While Kauffmanʼs work has primarily been theoretical, using mathematical models and computer simulations, examples of self-catalyzing molecular reaction networks have been generated in laboratory experiments.

Turing Patterns

In 1952 the British mathematician Alan Turing developed a theory of how chemical reactions can create spatial patterns [6.22]. The mechanism is a chemical reaction in which one reagent catalyzes the formation of the other, while the second inhibits the formation of the first. This is a molecular analog of a predator–prey system. We have already seen how temporal oscillations can arise when two quantities are dynamically coupled in this way.

In Turingʼs reaction–diffusion model, the reaction takes place in a thin layer of solution, in which the reagents diffuse at different rates. Initial small spatial variations in relative reagent concentrations are amplified into spatial patterns whose wavelengths are determined by the reaction–diffusion kinetics.

The ideas put forward by Turing have been picked up and extended by others [6.23,24,25,26,27,28]. Various patterns can be generated by Turingʼs mechanism, most commonly stripes and spots resembling patterns on animal coats (Fig. 6.6). The patterns are affected by the size and shape of the surface in which the reaction occurs. The Turing–Murray theorem famously asserts that a spotty animal may have a stripy tail but a stripy animal cannot have a spotty tail, a prediction that appears to be confirmed in nature [6.29].
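
A minimal simulation sketch conveys the flavor. It uses the Gray–Scott reaction terms rather than Turingʼs original chemistry, but the ingredients are the ones described above: local reaction plus diffusion at different rates, amplifying small fluctuations into spatial structure. All parameter values are standard but arbitrary choices for this model.

# Sketch: a two-species reaction-diffusion system on a periodic 2-D grid,
# stepped forward with explicit Euler updates.
import numpy as np

n, Du, Dv, F, k, dt = 128, 0.16, 0.08, 0.035, 0.062, 1.0   # assumed parameters
u = np.ones((n, n)) + 0.01 * np.random.rand(n, n)          # noise breaks symmetry
v = np.zeros((n, n))
v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5                      # small seed of the second reagent

def laplacian(a):
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for step in range(5000):
    uvv = u * v * v
    u += dt * (Du * laplacian(u) - uvv + F * (1 - u))
    v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)

print(v.min(), v.max())   # v is no longer uniform: it varies across the grid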

Fig. 6.6

Turing patterns, generated by simulating Turingʼs reaction–diffusion equations using NetLogo (after [6.19])

Despite many examples of two-dimensional patterns on surfaces of organisms that bear a compelling resemblance to Turing patterns, it took the best part of half a century to demonstrate that there really is a Turing-like chemical reaction–diffusion process underlying any of these patterns [6.30,31]. Other patterns that superficially resemble Turing patterns have been shown not to be generated by this mechanism.

Segments: Growth Transforms Time into Space

In 1894 Bateson noted that periodic patterns in animal form, such as vertebrate somites and earthworm segments, could be generated by oscillating processes coupled to growth [6.32]. In 1976, Cooke and Zeeman formalized this idea in the mathematical clock-wavefront model [6.33] (Fig. 6.7). An oscillating reaction generates periodic fluctuations in reagent concentrations, while the tissue grows steadily. The reaction is activated when the concentration of another chemical produced at one end of the tissue is above a threshold level. As the tissue grows steadily, the reagent concentrations are frozen when the level of the activating substance falls below the required threshold, thus converting predator–prey-like temporal oscillations into a periodic spatial pattern.
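
The logic can be caricatured in a few lines of code: each newly added cell permanently records the state of the clock at the moment the wavefront passes it, writing a temporal rhythm into space. The period and growth rate below are arbitrary choices.

# Sketch: a toy clock-and-wavefront. The tissue adds one cell per time step,
# and each cell freezes in the clock phase current at its moment of creation.
import math

clock_period = 6.0        # assumed: time units per clock oscillation
growth_steps = 60         # the tissue adds one cell per time step

stripes = []
for step in range(growth_steps):
    phase = math.sin(2 * math.pi * step / clock_period)
    stripes.append("#" if phase >= 0 else "-")   # two alternating cell states
print("".join(stripes))   # alternating bands: a temporal rhythm written into space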

Fig. 6.7

Clock-wavefront model for segmentation. A periodic reaction network of genes and gene products forms a clock (C), periodically changing cell states. Meanwhile, gene products diffuse along the tissue. Behind the advancing wavefront (W), the periodic reaction stops, leaving alternating stripes of cell states

About 10 years ago, Pourquie and colleagues observed that certain genes are expressed with a temporal periodicity that matches the time taken for one somite to form in chick embryos [6.34,35]. They and others have subsequently identified the gene products involved and confirmed the clock-wavefront model for vertebrate somite generation, more than a century after the basic idea was put forward and three decades after the dynamics of the pattern-forming mechanism were formulated in a mathematical model [6.12].

Sticking Together: The Invisible Hand of Adhesion

The drop in electrostatic potential energy that occurs when oppositely charged entities approach each other means that it is energetically favorable for them to be adjacent to each other and work must be done to pull them apart. This microscopic effect is responsible for the macroscopic phenomenon of adhesion, or stickiness.

In a fluid of polarized and nonpolarized particles, the polarized particles will tend to clump together. Surface tension arises because there is free energy associated with unmatched charge on the surface. In the absence of other forces and constraints the clump will contract into a sphere; For example, water molecules are polarized and air molecules are not. It costs 7.3×10⁻⁸ J to increase the surface area of a drop of water in air by 1 mm². At small scales this overwhelms other forces, and this is why small water droplets are spherical.
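
The figure follows directly from the surface tension of water,

$\gamma \approx 7.3\times10^{-2}\ \mathrm{J\,m^{-2}} \times \left(10^{-3}\ \mathrm{m\,mm^{-1}}\right)^2 = 7.3\times10^{-8}\ \mathrm{J\,mm^{-2}}\,.$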

In a mix containing particles with different adhesivity, less adhesive particles will form layers around more adhesive particles. If localized charges (the sticky bits) are restricted to parts of larger particles then the minimum energy configurations can be sheets, tubes or shells rather than spheres [6.12].

Cells have polarized molecules, appropriately called cell adhesion molecules (CAMs) and substrate adhesion molecules (SAMs), embedded in their membranes [6.36]. CAMs stick cells to each other, while SAMs stick cells to the extracellular matrix. CAMs and SAMs are gene products, and so genes can exploit adhesion to create tissues that spontaneously self-organize into layered structures by modulating the expression of these proteins [6.12,36]. In contrast, according to the genetic program model, cell adhesion molecules are simply labels that tell cells where they should be in the body and how they should choose their neighbors, analogous to color coding on joints and fasteners of kitset furniture.

Surface tension–adhesion effects dominate other forces at small scales, and therefore probably play a major role in organizing the overall form of organisms in early development. As the organism grows, adhesion takes on the more mundane role of just keeping it together. The freshwater predator Hydra can be dissociated into single cells that can spontaneously reassemble into an intact animal. This effect appears to be due to differential cell adhesion [6.37]. Hydra is apparently small and simple enough that the state space of position and movement of isolated floating cells has a single point attractor, a minimum-energy configuration corresponding to the adult morphology. More complex organisms appear to require specific developmental pathways to reach their stable adult morphologies. They tend to drop into nonviable configurations corresponding to local energy minima if these pathways are disrupted.

Making a Splash

Thompson and White [6.38] famously noted the morphological similarity of splashes and certain animals (Fig. 6.1). There is still technical debate about the physically correct mathematical model for crown formation in splashing fluid droplets [6.39,40,41], but relevant fluid-dynamical principles can be explained in simple terms.

The kinetic, gravitational, and adhesion energies of molecules in a body of water are dynamically coupled. Consequently, mechanical work that locally accelerates water molecules or alters their height causes ripples to propagate radially across the surface. The dynamical coupling is such that any kinetic energy in bulk flow (near field) is quickly converted to surface energy in the form of ripples that increase the surface area. Energy is dissipated by friction between water molecules as their kinetic, gravitational, and adhesion energies exchange periodically, and the surface eventually returns to the flat, minimum-energy configuration. Water molecules bounce up and down like masses suspended by springs. Waves radiate outwards but water molecules do not. As in a pendulum, the wavelengths of these ripples are characteristics of the fluid, not of the perturbing force.

If the perturbation is sufficiently energetic, a second set of ripples can emerge around the crest of a radiating wave. Random fluctuations along the ridge are amplified as kinetic energy is transferred into surface energy by rippling. The number of peaks in the crown is determined by the wavelength of these ripples. As the radiating wave continues to expand outwards, if there is still enough energy, the peaks will pinch off and form droplets, transferring additional kinetic energy into surface energy.

This informal description of crown splash formation illustrates that the crown morphology depends on the dynamical properties of the fluid: viscosity, density, and surface energy. In particular, other things being equal, because the number of points in the crown depends on the wavenumber of the secondary ripples around a circular wave crest, the number of points depends on the dynamical parameters of the fluid. This suggests an interesting thought experiment: It ought to be possible to selectively breed or genetically engineer cows to alter the number of points in milk droplet splash crowns, not by selecting a molecular program that runs during splash formation, but by selecting for gene products that affect the viscosity, density, and surface energy of the milk.

Buckling the Trend

Metazoan development does not involve being dropped from a height, and in any case our tissues are too viscous for the crown splash mechanism to be a realistic model of embryogenesis. Metazoan tissues are soft matter, viscoelastic materials that can be shaped by applied forces. Kinetic energy is negligible in soft matter dynamics, but elastic strain energy may play an important role. In this section a simple model and numerical simulation of morphogenesis by buckling, when two tissues grow at different rates, are presented.

Growth-driven morphogenesis in soft matter is illustrated by a simple two-dimensional model implemented in MATLAB (Fig. 6.8). The model consists of a two-dimensional viscoelastic mesoderm surrounded by a line of viscoelastic ectoderm. Mesoderm is modeled as a continuum that stores strain energy if it is compressed or dilated. Ectoderm is modeled as a closed chain of linear springs, representing cells, connected at cell junctions by angular springs. These springs store energy when the ectoderm is stretched, compressed or bent.

Fig. 6.8

Computer simulation of morphogenesis by mechanical symmetry breaking in two dimensions. (a) When tissue growth rates match so that there is no stress on either tissue, the organisms develop as circles (right column). If the mesoderm grows at a slower rate, they collapse into frozen splashes. The specific morphology depends on the relative growth rate, which increases from left to right. Columns are replicates with the same relative growth rate. (b) A single mutant cell is introduced into an initially circular embryo. Its descendants are labelled by heavy line segments. If mutant cells are less stiff than normals (−) then they are more likely to end up in the mouths of the adults, while if they are stiffer they are more likely to end up near the tips of the arms (+). Mutants with normal stiffness (0) end up at random locations on the adult. Details in text

The structure adopts the shape that minimizes total energy or, as Newton would say, balances the net forces of the tissues against each other. The visco in viscoelastic means that inertia is negligible, so this adjustment takes place as a gradual smooth movement. During slow tissue growth the embryo will track the minimum-energy configuration.

The embryo is initialized so that the unstressed area of mesoderm equals the area enclosed by a regular polygon formed by ectodermal cells at their rest length. Ectodermal cell lengths and junction angles are then randomly perturbed, and changes that result in lower total energy are selected until the embryo settles into the minimum-energy configuration. Initially, because the system was initialized so that the unstressed area of the mesoderm equals the area enclosed by a regular polygon of unstressed ectoderm, it morphs into that regular polygon.

Now the two tissues begin to grow. On each cycle, a cell is added to the ectoderm and the unstressed area of the mesoderm is increased. Ectodermal cell positions are adjusted by random perturbations to reduce the total strain energy of the organism.
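
A minimal sketch of this kind of relaxation scheme is given below. It is not the MATLAB implementation used for Fig. 6.8: the energy terms, stiffness values, and growth deficit are simplified assumptions, chosen only to show the ingredients. A closed ring of ectodermal springs with bending energy encloses a mesoderm represented by a single area-mismatch term, and random perturbations are kept only when they lower the total energy.

# Sketch: energy-minimizing relaxation of a ring of "ectoderm" cells
# enclosing a "mesoderm" whose unstressed area lags behind.
import math, random

n = 40                                         # number of ectodermal cells
rest_len = 2 * math.sin(math.pi / n)           # edge length of the unit regular n-gon
k_stretch, k_bend, k_area = 100.0, 1.0, 50.0   # assumed stiffnesses
polygon_area = 0.5 * n * math.sin(2 * math.pi / n)
area0 = 0.8 * polygon_area                     # mesoderm rest area lags behind (slower growth)

pts = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n)) for i in range(n)]

def energy(p):
    e, area = 0.0, 0.0
    for i in range(n):
        x0, y0 = p[i]
        x1, y1 = p[(i + 1) % n]
        x2, y2 = p[(i + 2) % n]
        d = math.hypot(x1 - x0, y1 - y0)
        e += k_stretch * (d - rest_len) ** 2             # stretching/compression of a cell
        turn = math.atan2(y2 - y1, x2 - x1) - math.atan2(y1 - y0, x1 - x0)
        turn = (turn + math.pi) % (2 * math.pi) - math.pi
        e += k_bend * (turn - 2 * math.pi / n) ** 2      # angular spring at the cell junction
        area += 0.5 * (x0 * y1 - x1 * y0)                # shoelace formula for enclosed area
    return e + k_area * (area - area0) ** 2              # mesodermal strain energy

random.seed(0)
e = energy(pts)
for _ in range(50000):                                   # short greedy relaxation
    i = random.randrange(n)
    old = pts[i]
    pts[i] = (old[0] + random.gauss(0, 0.01), old[1] + random.gauss(0, 0.01))
    e_new = energy(pts)
    if e_new < e:
        e = e_new                                        # keep the improvement
    else:
        pts[i] = old                                     # undo the perturbation
print(round(e, 3))

Whether the ring stays circular or buckles into lobes depends on the relative stiffnesses and on how far the mesodermal rest area lags behind, mirroring the role of the relative growth rate in Fig. 6.8a.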

If the area of mesoderm grows so that it remains equal to the area of the regular polygon enclosed by the current number of ectodermal cells at their rest lengths, then the embryo develops as a regular polygon, ageing gracefully into a circle. However, if mesoderm grows at a slower rate, the ectoderm buckles.

The underlying cause of ectodermal buckling in this model is that the energetic cost of bending the ectoderm is smaller than the energetic benefit of reducing mesodermal stretching and ectodermal compression. Buckling tends to form uniform ripples around the organism, rather than sharp folds, because it is energetically favorable to distribute strain energy uniformly in the ectoderm rather than concentrate it at a point.

The morphology of these embryo models can be systematically modified by adjusting relative growth rates and elastic parameters of the tissues. In Fig. 6.8a, the columns are outcomes of repeated runs using the same relative growth rates for the two tissues. If the ectoderm grows much faster than the mesoderm, the embryos quickly collapse into two-armed critters. As the relative growth rate of the mesoderm increases, the collapse is delayed and the number of arms tends to increase. Finally, when the mesodermal growth rate is such that its area always equals the unstressed area enclosed by the unstressed ectoderm, the critters grow up to be circles.

Other tissue parameters were fixed for these simulations, which were able to generate two-, three-, and four-armed critters. The number of arms in the adult morphology is consistent given a particular relative growth rate of the tissues. Although this parameter is continuous, four qualitatively distinct morphologies are generated as the parameter is varied, with sudden switching from one morph to the next as the parameter gradually increases.

Genes for Regional Specification

In the model of Sect. 6.4.6 (Fig. 6.8a), buckling is initiated by amplification of small random perturbations in the ectoderm. As a consequence, the adult morphs are randomly oriented, i.e., arms are produced at random locations on the body.

One can imagine that arms might confer certain advantages on a creature that evolved the capacity to develop them. However, to exploit those advantages it might be quite handy to be able to coordinate developmental processes so that tissues and organs are arranged in repeatable configurations rather than sprouting in random locations with respect to each other.

In Fig. 6.8b, mutant ectodermal cells are introduced at random locations in the initial embryos. These mutants have either stiffer or softer angular springs than other ectodermal cells. In real organisms, such differences could be due to altered amounts or types of CAM gene expression.
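In the sketch introduced above, such a mutant could be represented by giving each junction its own angular-spring stiffness and scaling it for the mutant cell and, as it divides, its descendants; the function and parameter names here are hypothetical.

```python
# Per-junction bending stiffness with a locally altered mutant.
# stiff_scale > 1 models a stiffer mutant (+), < 1 a softer one (-),
# and 1.0 a control mutant (0). K_BEND is reused from the earlier sketch.
def make_bend_stiffness(n_cells, mutant_indices, stiff_scale=2.0, base=K_BEND):
    k_bend = np.full(n_cells, base)
    k_bend[mutant_indices] *= stiff_scale   # local mechanical symmetry breaking
    return k_bend
```

The bending term in the total_energy sketch would then become 0.5 * np.sum(k_bend * turning ** 2), so the mutant junctions resist, or yield to, bending differently from their neighbours.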

If the mutant cells are soft their descendants tend to end up in the mouths of the adult creatures, but if they are stiff their descendants tend to end up near the ends of the arms. The descendants of control mutants, with normal stiffness, tend to end up in random parts of the adult body (Fig. 6.8b).

This simulation illustrates a simple principle. The mutant cell breaks the mechanical symmetry of the early embryo, so that morphogenesis tends to be aligned with the locations of that cellʼs descendants. Tissues that develop from the mutant tend to be in particular locations in the body in the adult.

In general there is no need for the organizer of the morphology-generating process to be the progenitor of tissues destined for particular locations in the adult. It is only necessary for there to be some kind of local marker that the two different processes, tissue specialization and mechanical buckling, can both align with. For example, if the embryos in Fig. 6.8 were generated by budding off from an adult, there might be cytological differences in the cells at the budding point that could on the one hand affect mechanical properties and influence buckling, and on the other hand affect gene expression in those cells.

In this model a local alteration of gene expression precedes and predicts the appearance of an arm (or a mouth) at that location, but it is not a gene for an arm (or a mouth). As Fig. 6.8a shows, arms are perfectly capable of building themselves without the help of such genes. The expression of the mutant gene in a particular location does, however, predict that an arm will appear at that location. Because the symmetry-breaking signal occurs early in development, long before the morphology emerges, other genes could in principle be activated in spatial patterns that align with the as-yet invisible adult morphology. Evolution could then capitalize on morphogenesis by coordinating other developmental processes around it.

In this model the signal breaks the mechanical symmetry directly; i.e., the mutant cell has different mechanical properties. For evolution to take advantage of such a signal by systematically altering the expression of other genes, it would be simpler if the signalling molecule were a transcription factor rather than a structural protein such as a CAM or a SAM. A transcription factor is simply a gene product that regulates the expression of genes by influencing DNA transcription. In particular, developmental genes, such as the HOX genes that appear to sketch out the morphology of the adult before it appears, produce transcription factors. Modulated CAM expression leading to mechanical symmetry breaking could then be just one of a number of developmental processes localized to specific regions of the developing embryo, marked out by prior expression of transcription factors.

Genes and Development

Morphology First

The facts of molecular biology show that morphology is presaged by spatial patterns of gene products, and the relationship is evidently causal because disrupting expression of these genes or manipulating the concentrations of their products disrupts morphogenesis. However, there is something rather odd about this picture: Genes apparently instruct embryos to develop in ways that they would develop without instructions.

As outlined in Sect. 6.3, macromolecular reaction networks could in principle generate arbitrary spatial and temporal patterns. However, pattern formation in animal morphogenesis is actually restricted to a small repertoire of basic motifs that, as Thompson and Whyte [6.38] observed, occur spontaneously in growing materials. As hinted at by the simple model in Sect. 6.4.6, one kind of possible explanation for this observation is that the patterns generated by patterning genes are a consequence of, not a cause of, morphogenesis due to physical properties of expanding soft matter.

Newman et al. [6.42] proposed that morphogenesis in the first metazoans may have been determined by mechanical properties of growing tissues, which were subsequently stabilized and elaborated by genetic mechanisms.

Fossil evidence and comparative morphology both indicate that animals evolved from sponge larvae that developed into pelagic suspension feeders. Sponges, or poriferans, evidently evolved by aggregation of choanoflagellates, unicellular organisms that can express cell adhesion molecules and spontaneously aggregate into clumps that cooperate as suspension-feeding colonies [6.43].

Poriferans are benthic suspension feeders with simple, variable morphologies including hollow blobs, barrels, and cylinders. Fragments of a sponge, even when completely dissociated into cells, can spontaneously reorganize into their species-specific form. Sponge morphology and development seem to be largely, if not entirely, due to the self-organizing properties of differential adhesion among particles in a viscous fluid [6.44,45].

In addition to the ability to reproduce and disperse by simply fragmenting and floating away, sponges can reproduce sexually and produce larvae. The simplest of these are spherical cell aggregates that disperse passively in ocean currents. Larvae of some species have streamlined shapes, and the ability to actively respond to environmental cues and thereby increase the probability of settling in a favorable habitat [6.46,47,48].

Ctenophores, or comb jellies, are the closest living relatives of poriferans, and both fossil and comparative evidence suggest that ctenophores evolved directly from sponges. According to Nielsenʼs trochaea theory, the first eumetazoans – animals with distinct tissues and organs – were derived poriferan larvae that matured and started feeding in the water column as pelagic suspension feeders, instead of settling onto the benthos [6.43].

The trochaea, the hypothetical ancestor of ctenophores and all other eumetazoans, is morphologically a collapsed spherical shell of tissue (Fig. 6.9). On one hand, this morphology results from mechanical symmetry breaking in an expanding shell of viscoelastic material [6.49,50]. On the other hand, this is the basic morphology of ctenophores, the simplest eumetazoans.

Fig. 6.9

(a) Eumetazoans evidently evolved from sponge larvae, the simplest of which are spheroidal masses of ciliated cells, represented diagrammatically here. (b) An expanding shell can spontaneously collapse, creating an asymmetrical mass with an invagination. This morphology has the potential to provide certain advantages to a pelagic suspension feeder, but only if tissue differentiation can be coordinated to align with the mechanical symmetry break. See text for details

Nielsen examines trochaea evolution from a Darwinian perspective, detailing the adaptive advantages of its morphological features and associated tissue specializations. In brief, these are that it is somewhat larger than its larval ancestors, giving it a higher Reynolds number [6.51] that enables it to move, and therefore feed, more efficiently in the water column; it is radially symmetrical and streamlined, favoring motion in one direction; anterior–posterior tissue differentiation takes advantage of this hydrodynamic asymmetry; sensory cells at the anterior pole detect environmental cues correlated with higher nutrient density; ciliated motor cells along the sides propel and steer the organism, under the influence of neural signals from the anterior sensors; the caudal invagination collects food particles because of hydrodynamic eddy currents as the trochaea moves forwards; and tissues in this mouth/gut region are specialized for capturing and digesting the particles.
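For reference, the Reynolds number of a swimmer of characteristic length L moving at speed U through a fluid of kinematic viscosity ν is, in the standard fluid-mechanical definition (not quoted from the chapter),

\mathrm{Re} = \frac{U\,L}{\nu},

so Re increases with body size, taking a larger pelagic suspension feeder further from the viscosity-dominated regime.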

There is nothing new in any of the specialized tissue functions of the trochaea, relative to its pre-Cambrian predecessors. Colonial choanoflagellates are now, and presumably were prior to the Cambrian explosion, capable of sensing and moving, and of capturing and digesting food. What is new among Cambrian eumetazoans is not only that these capabilities have been delegated to specialized subgroups of cells, but that these subgroups are systematically arranged within the organism in relation to its overall morphology.

Figure 6.9a is a diagrammatic representation of a symmetrical pelagic suspension feeder whose epithelial cells are all sensors, motors, and feeders. Figure 6.9b shows a suspension feeder with a collapsed morphology. Its morphological asymmetry means that it now has a front and a back and it moves more efficiently forwards than backwards. Information about what is ahead is more valuable than information about what is behind, and nutrients naturally accumulate in the caudal invagination [6.43]. Thus, there would be a selective advantage for these organisms if they could evolve some mechanism(s) so that anterior cells (i) specialize for sensing, lateral cells (ii, iii) specialize for propulsion, and cells in the invagination or mouth specialize for feeding. These tissue specializations, coordinated with morphogenesis, appear to be the crucial steps that marked the transition from brainless ancestors with variable morphology to eumetazoans, animals with regular, reproducible morphology, and tissues and organs including a nervous system [6.43].

Post Hox ergo Propter Hox?

Nielsen discusses the adaptive significance of morphological features and tissue specializations of the trochaea, and gives a detailed explanation of how these could have arisen in a sequence of small steps by random modification of prior structures. However, at the end of that fateful day, 543 million years ago at the onset of the Cambrian explosion, we have an organism whose morphology is a predictable consequence of the morphology and material of its ancestors (blobs of viscoelastic soft matter consisting of replicating sticky particles) under selection pressure to get larger, because being bigger makes suspension feeding more efficient [6.51,52].

It is evidently possible to believe that natural selection could gradually sculpt random perturbations of morphology into arbitrary forms, given that generations of biologists seem to have believed that story. However, as outlined above, viscous fluids of sticky particles are self-organizing. With given boundary conditions an aggregate of differentially adhesive cells will have a preferred, energetically favored morphology. Because it will be energetically expensive to maintain any slightly different morphology, natural selection cannot produce new morphologies by accumulating small random morphological changes. Developmental processes may steer morphogenesis towards particular stable morphologies while avoiding nonviable forms, but they cannot arbitrarily create new ones.

In Darwinian terms the evolution of the basic morphology of the hypothetical trochaea and actual ctenophores can be explained in terms of soft matter dynamics with initial selection for increased size, rather than selection for morphology. This implies that Newman and colleagues are right [6.12,42]. The spatial patterns of molecular concentration that presage morphogenesis in development must have followed morphogenesis in evolution. The simple model outlined in Sects. 6.4.6 and 6.4.7 explains how developmental genes could evolve to coordinate tissue differentiation with morphogenesis, leading to the situation that we see today in which transcription factors are expressed in spatial patterns that predict the adult morphology before it starts to appear.

Discussion and Future

During the second half of the 20th century explanations of biological pattern formation and animal morphogenesis were dominated by the theory that spatial patterns and structures are a result of patterned gene expression during development. According to the new synthesis of evolutionary theory and molecular genetics, patterns arise at random and the forms that we see are simply those that survived Darwinian natural selection. Compelling evidence in favor of this theory accumulated over the century. After the development of technology that permitted spatial patterns of gene expression preceding the appearance of corresponding morphology to be clearly visualized in developing embryos, the case seemed closed.

In spite of this, a small group of researchers continued to build on the ideas of 19th century developmental mechanics. Thompsonʼs beautiful exposition of those ideas was recognized as great literature, but his arguments were based on analogy and esthetics rather than rigorous mathematical models. In fact, Thompson was an excellent mathematician in his day, but the mathematics of the day was not up to the task. There has been considerable progress in dynamical systems theory especially in the last quarter of the 20th century [6.17,49,50]. Whether this theoretical framework is now adequate to complete Thompsonʼs program is unclear, but at the start of the 21st century we have a mathematical language and the computational capacity to develop and test self-organizing dynamical systems models of pattern formation and morphogenesis.

It is clear from 20th century advances in molecular biology that 19th century developmental mechanics cannot be an alternative to evolutionary molecular genetics; the truth must be a synthesis of the two. This newer synthesis, called evo-devo, is now gathering steam. This chapter outlined the mathematical principles of self-organizing dynamical systems and proposed how such systems, containing reaction networks of genes and gene products as well as soft matter components, may generate patterns and forms in biology. More comprehensive treatments of ideas that will form the framework for evo-devo in the coming century may be found in books by Strogatz [6.17], Stewart [6.49,50], Raff, Raff, and Kauffmann [6.53,54], Kauffman [6.20,55], Newman and colleagues [6.12,42,56], Hall [6.57], and Carroll [6.58,59].

The promise of developmental mechanics synthesized with molecular genetics is that it may become possible not only to explain the morphology of extant organisms, but also to predict morphologies that might have existed, or might one day exist. It has the potential to explain phylogenesis in terms of a mathematical taxonomy of form, demoting random mutation from the creator to a mere explorer of animal morphology. We might then be less surprised by the anatomy of the first alien beings that we encounter than the first Europeans to arrive in Australia were by kangaroos, platypuses, and black swans.