
1 From Integrative Physiology to Systems Biology

Yet, biological questions do not end in the gene at all: they start there. (Ball 2004)

(…) physiology, or whatever we wish to call that part of the science of the logic of life that deals with bodily function and mechanism, will not only continue to exist as an identifiable body of knowledge: it will be indispensable to the proper interpretation of molecular biology itself. (Noble and Boyd 1993)

The French physiologist Claude Bernard, in his classic “Introduction à l’étude de la médecine expérimentale” (1865), stated that the control of the environment in which molecules function is at least as important as the identification of the organic molecules themselves, if not more so. In an incisive essay published almost 20 years ago, Noble and Boyd (1993) put forward three aims with which Physiology should be concerned, beyond merely determining the mechanisms of living systems: (1) integrative questions of order and control; (2) self-organization, in order to explain how such order may have emerged; and (3) making this challenge exciting and possible. This is precisely what has happened, and in the meantime the emergence of Systems Biology has placed the emphasis on interconnections and relationships rather than on component parts.

To describe a biological system we need to know its structure, its pattern of organization, and its function (Capra 1996; Kitano 2002a). The first refers to a catalog of individual components (e.g., proteins, genes, transcription factors), the second to how the components are wired or linked to one another (e.g., topological relationships, feedbacks), and the third to how the ensemble works (e.g., functional interrelationships, fluxes, response to stimuli, growth, division).

The analytical phase of biology has led to a detailed picture of the biochemistry of living systems and produced wiring diagrams connecting chemical components and processes such as metabolic, signaling, and genetic regulatory pathways. Integration of those processes to understand the properties arising from their interaction (e.g., robustness, resilience, adaptation) and the dynamics ensuing from their collective behavior has become a main focus of Systems Biology (Noble 2006; Saks et al. 2009).

Systems Biology started to emerge as a distinct field with the advent of high-throughput, -omics technologies, i.e., genomics, transcriptomics, proteomics, and metabolomics. Massive data gathering from -omics technologies, together with the capability for generating computational models, has made possible the large-scale integration and interpretation of information. High-throughput technologies combined with the growing facility for constructing mathematical models of complicated systems constitute the core of Systems Biology. As such, Systems Biology has the potential to provide insights not only into the fundamental nature of health and disease but also into their control and regulation.

2 Dynamics, the Invention of Calculus, and the Impossibility of Prediction

Newton is considered the inventor of the science of dynamics, and he shares with Leibniz the invention of differential calculus (Gleick 2003; Mitchell 2009). Born the year after Galileo—who had launched the scientific revolution—died, Newton introduced the laws of motion that laid the foundations of dynamics. These laws—constant motion, inertial mass, and equal and opposite forces—apply to objects on earth and in the heavens alike, and gave rise to the notion of a “clockwork universe.” This led Laplace to assert that, given Newton’s laws and the current position and velocity of every particle in the universe, it would be possible to predict everything for all time.

Poincaré, one of the most influential figures in the development of the modern field of dynamical systems theory, described sensitive dependence on initial conditions while dealing with the “three body problem”—the motion of a third planet orbiting in the gravitational field of two massive planets (Poincare 1892). This finding rendered it impossible to predict the situation at a succeeding moment from knowledge of the situation at an initial moment. Otherwise stated: “…even if we knew the laws of motion perfectly, two different sets of initial conditions…even if they differ in a minuscule way, can sometimes produce greatly different results in the subsequent motion of the system” (Mitchell 2009). The discovery of chaos in two metaphorical models applied to meteorology (Lorenz 1963) and population dynamics (May 1974), using two different mathematical approaches—continuous differential equations and discrete difference equations, respectively—[see (Gleick 1988) and (May 2001) for historical accounts] introduced the notion that irregular dynamic behavior can be produced by purely deterministic equations. Thus, the intrinsic dynamics of a system can produce chaotic behavior independently of external noise. The existence of chaotic behavior, with its extreme sensitivity to initial conditions, limits long-term predictability in the real world.
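
To make this concrete, the short sketch below (in Python, with illustrative parameter and initial values) iterates May's logistic map, a purely deterministic difference equation, from two initial conditions differing by one part in a billion; the trajectories soon diverge completely, with no noise involved.

```python
# Minimal sketch: sensitive dependence on initial conditions in May's logistic map,
# x_{n+1} = r * x_n * (1 - x_n), a purely deterministic difference equation.
# The parameter r = 4.0 and the initial conditions are illustrative choices.

def logistic_trajectory(x0, r=4.0, n_steps=50):
    """Iterate the logistic map n_steps times starting from x0."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions differing by one part in a billion.
a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)

for n in (0, 10, 20, 30, 40, 50):
    print(f"n={n:2d}  x_a={a[n]:.6f}  x_b={b[n]:.6f}  |diff|={abs(a[n] - b[n]):.2e}")
# The trajectories track each other for a while, then diverge completely:
# no stochastic noise is involved, only the intrinsic dynamics.
```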

The power of mathematical modeling based on differential calculus was shown in two papers published in 1952 by Turing and by Hodgkin and Huxley (Hodgkin and Huxley 1952; Turing 1952). Turing employed a theoretical system of nonlinear differential equations representing reaction-diffusion of chemical species (“morphogens,” because he was trying to simulate morphogenesis). With this system Turing attempted to simulate symmetry breaking, i.e., the appearance of spatial structures, from an initially homogeneous situation. This class of “conceptual modeling” contrasts with the approach adopted by Hodgkin and Huxley (1952), who modeled electrical propagation from their own experimental data obtained in a giant nerve fiber to account for conduction and excitation in quantitative terms. The “mechanistic modeling” approach of Hodgkin and Huxley is an early predecessor of the experimental–computational synergy described in the present book. These two works had a long-lasting influence on the field of mathematical modeling applied to biological systems.

3 From Multiple Interacting Elements to Self-Organization

Systems Biology aims at “system-level understanding of biological systems” (Kitano 2002b). It represents an approach to unraveling interrelations between components in “multi-scale dynamic complex systems formed by interacting macromolecules and metabolites, cells, organs, and organisms” (Vidal 2009). One of the fundamental problems addressed by Systems Biology is the relation between the whole and its component parts in a system. This problem, which pervades the history of “systems thinking” in biology (Haken 1978; Nicolis and Prigogine 1977; Von Bertalanffy 1950), raises the central question of how macroscopic behavior arises from the interactions between the elementary components of a system. It represents a connecting thread that different generations of scientists have formulated and attempted to solve in their own conceptual and methodological ways with the technologies available at the time (Junker 2008; Skyttner 2007; Yates 1987).

The notion that variation in any element affects all the others, bringing about changes in the whole system, is one of the foundations of systemic thinking. However, interactions in a biological system are directed and selective: this results in organization obeying certain spatial and temporal constraints. For example, in cellular systems molecular components exist either individually or as macromolecular associations or entire structures such as the cytoskeleton, membranes, or organelles. Interactions occur at different degrees of organization and can be visualized at structural or morphological levels assessed on molecular or macroscopic scales. Moreover, biological interactions are not random; in organized systems they follow certain topological properties, i.e., components are more or less densely, and preferentially, connected to each other. For example, molecular–macromolecular functional interactions are ruled by thermodynamics and stereospecificity. Finally, function in biological systems is a dynamic process resulting from interactions between structurally arranged components under defined topological configurations.

Dynamic organization is the realm where complexity manifests itself as a key trait of biological systems. As a matter of fact, biological systems are complex because they exhibit nontrivial emergent and self-organizing behaviors (Mitchell 2009). Emergent, self-organized behavior results in macroscopic structures that can be either permanent (e.g., cytoskeleton) or transient (e.g., Ca2+ waves), and have functional consequences. Indeed, macroscopically self-organized structures are dissipative [“dissipative structures”: (Nicolis and Prigogine 1977)], i.e., they are maintained by a continuous flow of matter and energy. Dissipative structures emerge as complexity increases from cells to organisms and ecosystems, which are thermodynamically open and thus subject to a constant exchange of matter (e.g., substrates in cells) and energy (e.g., sunlight in ecosystems such as forests) far from thermodynamic equilibrium. Therefore, emergent macroscopic properties do not result merely from static structures, but rather from dynamic interactions occurring both within the system and between the system and its environment (Jantsch 1980).

A remarkable example of the latter is given by the adaptation of an organism’s behavior to its environment, which depends upon biological rhythm generation. The role of biological clocks in adapting cyclic physiology to geophysical time was highlighted by Sweeney and Hastings (1960). Timing exerted by oscillatory mechanisms underlies autonomous periodicity, playing a pervasive role in the timekeeping and coordination of biological rhythms (Glass 2001; Lloyd 1992). Winfree (1967) pioneered the analysis of synchronization among coupled oscillators in a network, later refined by Kuramoto (1984) [reviewed in (Strogatz 2003)]. Considering idealized systems of nearly identical, weakly coupled sinusoidal oscillators, Winfree found that below a certain threshold of coupling each oscillator runs at its own frequency, behaving incoherently, until a further increase in coupling overcomes the threshold for synchronization (Winfree 1967, 2002). This synchronization event was characterized as the analog of a phase transition, revealing an insightful connection between nonlinear dynamics and statistical physics (Strogatz 2003).
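
The following sketch illustrates this threshold behavior with the Kuramoto mean-field model of weakly coupled phase oscillators; the population size, coupling values, frequency distribution, and integration step are illustrative choices, not taken from the works cited above.

```python
# Minimal sketch of the Kuramoto mean-field model of coupled phase oscillators:
# d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
# Below a critical coupling K the oscillators drift incoherently; above it they
# synchronize, which is visible in the order parameter r = |<exp(i*theta)>|.
# N, the K values, the frequency spread, and the Euler step are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 200
omega = rng.normal(0.0, 1.0, N)            # natural frequencies
theta0 = rng.uniform(0, 2 * np.pi, N)      # random initial phases

def order_parameter(theta):
    return abs(np.mean(np.exp(1j * theta)))

def simulate(K, theta0, dt=0.05, steps=4000):
    theta = theta0.copy()
    for _ in range(steps):
        mean_field = np.mean(np.exp(1j * theta))
        r, psi = abs(mean_field), np.angle(mean_field)
        theta += dt * (omega + K * r * np.sin(psi - theta))  # mean-field form
    return order_parameter(theta)

for K in (0.5, 1.0, 1.5, 2.0, 3.0):
    print(f"K={K:.1f}  r={simulate(K, theta0):.2f}")
# r stays near 0 (incoherence) for weak coupling and rises toward 1 once K
# exceeds a threshold, the phase-transition-like behavior described above.
```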

4 Dynamics in Developing Systems

Feedback is a prominent source of nonlinear behavior, and biological systems exhibit both negative and positive feedback. The central importance of negative feedback as a control device in biological systems was formulated by Wiener (1948). The discovery of negative feedback devices in a variety of biological systems revealed the universality and simplicity of this control mechanism, whereby a process generates conditions that discourage its own continuation. End-product inhibition [later renamed “allosteric” inhibition by Monod and Jacob (1961)] is a prominent example of such control: Umbarger (1956) and Pardee and Yates (1956) showed that the end product of the biosynthesis of isoleucine or pyrimidine inhibits its own pathway. Feedback control was highlighted as a mechanism for avoiding behavioral extremes, echoing the concept of constancy of the milieu intérieur of Claude Bernard [1865; 1927 translation by Green (Bernard 1927)] and Walter Cannon’s (1932) notion of homeostasis. Long before the discovery of feedback inhibition, Max Delbruck had introduced a mathematical model of mutually inhibiting chemical reactions (Delbruck 1949). Through such a system of cross-feedback, two independent metabolic pathways can switch between stable steady states under unaltered environmental conditions or in response to transient perturbations.
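
As an illustration of such switching, the toy model below implements two species that mutually repress each other's production; all parameter values are made up for the example and are not from Delbruck's original model.

```python
# Minimal sketch of a Delbruck-style cross-inhibition switch: two species x and y,
# each produced at a rate repressed by the other and degraded linearly.
# Parameter values (production, Hill coefficient, decay) are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def cross_inhibition(t, z, beta=4.0, n=3, k=1.0):
    x, y = z
    dx = beta / (1.0 + y**n) - k * x   # x production repressed by y
    dy = beta / (1.0 + x**n) - k * y   # y production repressed by x
    return [dx, dy]

# Two slightly different initial conditions end in different stable steady states.
for z0 in ([1.2, 1.0], [1.0, 1.2]):
    sol = solve_ivp(cross_inhibition, (0, 50), z0, rtol=1e-8)
    x_ss, y_ss = sol.y[:, -1]
    print(f"start {z0} -> steady state x={x_ss:.2f}, y={y_ss:.2f}")
# One run settles with x high / y low, the other with y high / x low: the same
# equations, under unaltered conditions, support two alternative stable states.
```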

Positive feedbacks, such as autocatalysis, are also ubiquitous in biology; their importance as a source of instability giving rise to bifurcations and nonlinear behavior was put forward by Turing (1952) in the context of morphogenesis. Turing’s pioneering work demonstrated that an autocatalytic reaction occurring in an initially uniform or isotropic field, when coupled to the transport of matter through diffusion, can produce symmetry breaking visualized as spatial patterns. This work was groundbreaking because it showed that stable spatial structures could arise—without assuming a preexistent pattern—through self-organization arising from bifurcations in the dynamics. Beyond its original goal “to account for the main phenomena of morphogenesis” (Turing 1952), this work opened the way to a reaction-diffusion theory of pattern formation (Meinhardt 1982). Later on, Wolpert (1969) proposed “positional information” to account for the mechanisms by which cells seem to know where they are: in order to differentiate, cells interpret their position according to the concentration of a morphogen in a gradient. Wolpert’s concept of positional information became prominent thanks to the studies of Nusslein-Volhard and Wieschaus, who first identified the genes required for the formation of the body plan of the Drosophila embryo and then showed how these genes are involved in morphogenesis. A main finding was that even before fertilization a pattern, formed by the differential distribution of specific proteins and mRNA molecules, is already established. As a consequence of differential rates of transcription, gradients in the concentrations of new mRNA molecules and proteins are generated. Position along the antero-posterior axis of the Drosophila body plan is determined by a cascade of events initiated by the localization of bicoid mRNA and by the gradient of bicoid protein to which that localization gives rise (Driever and Nusslein-Volhard 1988a). These authors stated that the bicoid protein has the properties of a morphogen that autonomously determines position in the anterior half of the embryo (Driever and Nusslein-Volhard 1988b). This discovery brought together three keywords—diffusion, gradient, and morphogen—since the bicoid protein, a morphogen, is produced at a source and both diffuses and is broken down, thereby generating a gradient.
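
A minimal numerical sketch of this synthesis–diffusion–degradation picture is given below; the one-dimensional geometry and all parameter values are illustrative assumptions, chosen only to show that a localized source plus diffusion and breakdown yields a roughly exponential gradient with decay length sqrt(D/k).

```python
# Minimal 1D sketch of the synthesis-diffusion-degradation picture described above:
# a morphogen is produced at one end, diffuses, and is degraded everywhere, so its
# steady-state profile decays roughly exponentially with length scale sqrt(D/k).
# Grid size, D, k, and the production rate are illustrative choices.
import numpy as np

L, nx = 100.0, 200                 # domain length (arbitrary units), grid points
dx = L / nx
D, k, source = 1.0, 0.01, 1.0      # diffusion, degradation, production at x = 0
dt = 0.2 * dx**2 / D               # stable explicit time step

c = np.zeros(nx)
for _ in range(200000):
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    lap[0] = (c[1] - c[0]) / dx**2           # no-flux boundaries
    lap[-1] = (c[-2] - c[-1]) / dx**2
    c += dt * (D * lap - k * c)
    c[0] += dt * source / dx                 # localized source at the anterior end

x = (np.arange(nx) + 0.5) * dx
slope = np.polyfit(x[:100], np.log(c[:100]), 1)[0]
print("decay length from fit:", -1.0 / slope)
print("predicted sqrt(D/k):  ", np.sqrt(D / k))
```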

Another major achievement in the field of developmental dynamics was the realization that the mammalian genome does not undergo irreversible change in the course of development. Initially, Gurdon (Gurdon et al. 1979) showed that, at least in frogs, the nucleus of a fully differentiated cell can be reprogrammed when transferred into an enucleated zygote. The sheep Dolly was the first mammal to be cloned by transferring the nucleus of an adult cell into an enucleated oocyte of another animal of the same species (Wilmut et al. 1997).

5 Metabolism—Epigenetics—Genetics: Waddington and the Epigenetic Landscape

Life requires both nucleic acids and a metabolic system for self-maintenance. The emergence of living systems as we know them could have come about as a result of a symbiotic fusion between a rapidly changing set of self-reproducing but error-prone nucleic acid molecules and a more conservative autocatalytic metabolic system specializing in self-maintenance [Dyson (1985), quoted by Fox Keller (2000)]. Along this line of reasoning, Waddington (1957) had already introduced the concept of the “epigenetic landscape” to describe cellular differentiation beyond genetic inheritance. Metaphorically, the “epigenetic landscape” can be visualized as “a mountainous terrain whose shape is determined by … the influence of genes…The valleys represent possible pathways along which the development of an organism could in principle take place. A ball rolls down the landscape, and the path it follows indicates the actual developmental process in a particular embryo” (Saunders and Kubal 1989). The epigenetic landscape illustrates the fact that isolated genes have little effect on the shape of the landscape, which depends more on the underlying dense network of gene interactions. The outcome of a developmental process (represented by the rolling ball) depends on its dynamic trajectory, and its path through the different valleys (i.e., differentiation states) can arise from bifurcations. Morgan (1934) had previously noted that different groups of genes come into action as development proceeds.

Waddington (1957) stressed the important implications of time in biology, distinguishing the biochemical (metabolic), developmental (epigenetic), and evolutionary as the three realms in which time plays a central role. Later, Goodwin (1963) adopted the metabolic, epigenetic, and genetic systems as basic categories for defining a system (e.g., a cell) with respect to its environment.

The concept of epigenesis is a precursor of what is now known as “epigenetics,” a whole new research field. Historically, the term “epigenetics” was used to describe events that could not be explained by genetic principles. Originally, Waddington defined epigenetics as “the branch of biology which studies the causal interactions between genes and their products, which bring the phenotype into being” [Waddington (1942), quoted in Goldberg et al. (2007)]. Consequently, a phenotypic effect, or an organism following a developmental path, is brought about not only by genetic variation but also by the environment.

Today, we know that in addition to the primary DNA sequence, much of the information regarding when and where to initiate transcription is stored in covalent modifications of DNA and its associated proteins. Chromatin modifications include DNA cytosine methylation and hydroxymethylation, as well as acetylation, methylation, phosphorylation, ubiquitination, and SUMOylation of lysine and/or arginine residues of histones; these modifications are thought to determine the accessibility of the genome to the transcriptional machinery (Lu and Thompson 2012). Recent data indicate that information about a cell’s metabolic state is also integrated into the regulation of epigenetics and transcription; cells constantly adjust their metabolic state in response to extracellular signaling and/or nutrient availability. One of the challenges is to visualize how the levels of metabolites that control chromatin modifiers translate, in space and time, a dynamic metabolic state into a histone map (Katada et al. 2012).

6 The Core of the Living: Biochemistry and Genomes

The elucidation of the basic biochemistry of living systems and the recognition of its similarity across kingdoms and phyla represent major achievements of twentieth-century research in biology. The description of metabolic pathways, of mechanisms of energy transduction, and of genetic transmission, replication, regulation, and expression stand out among them.

The pathways utilized by cells to break down carbohydrates and other substrates such as lipids, roughly divided into glycolysis, respiration, and β-oxidation, were already known to biochemists by the 1950s. Respiration includes the complete breakdown of the two-carbon unit acetyl-CoA into carbon dioxide—discovered by Krebs (1953; Krebs and Johnson 1937)—and the transfer of electrons from NADH to molecular O2 through the respiratory chain to produce water—pioneered by the discovery of cytochromes, whose spectroscopic properties change in the presence of O2 (Keilin 1929).

Lipmann (1941) proposed that ATP is the universal carrier of biological energy, whose phosphate bond energy, released upon hydrolysis, is used to drive most biochemical reactions that require energy. However, missing from this picture was the regeneration of ATP, which involves the phosphorylation of ADP with energy provided by the oxidative breakdown of foodstuffs, hence “oxidative phosphorylation.” This riddle was solved by the chemiosmotic hypothesis postulated by Mitchell (1961), which proposed that the energy released by respiration is used by the respiratory enzymes to transport protons across the inner mitochondrial membrane, building up a proton motive force (pmf) composed of an electric potential and an osmotic component. This pmf is used by the ATP synthase to phosphorylate ADP [see (Weber 2005) for a useful historical and epistemological account]. The system is self-regulated by the availability of ADP (Chance and Williams 1956).

The fact that DNA is the carrier of biological specificity in bacteria was demonstrated directly by Avery et al. (1944) and Hershey and Chase (1952). Watson and Crick (1953) introduced the double helix model of DNA, thus providing a mechanism for self-replication and fidelity: complementary base-pairing ensured both replication and conservation. However, “indications that the cell was involved in the maintenance of genetic stability had begun to emerge from studies of radiation damage in bacteria and bacterial viruses (phages), especially from the discovery that certain kinds of damage could be spontaneously reversed” (Fox Keller 2000).

The association between the sequence of bases in DNA and the sequence of amino acids in a protein came after the direct demonstration of the synthesis of a polymer of the amino acid phenylalanine from a uniform stretch of nucleic acid consisting of a single repeated nucleotide (uridine) (Nirenberg and Matthaei 1961). The central dogma was born: “DNA makes RNA, RNA makes protein, and proteins make us” (Crick 1957).

In 1961, Jacob and Monod introduced the concept of the genetic program, extending their success in analyzing the operon as a mechanism of regulation of enzyme synthesis in Escherichia coli (Jacob and Monod 1961) to provide a more general description of the role of genes in embryonic development (Fox Keller 2002). These investigations led to the proposal of “structural” and “regulatory” genes, thereby locating in the genome both the program and the means of controlling its own execution, i.e., structural genes and regulatory elements are coordinated by the product of a regulatory gene. At present, the genome sequences of more than 350 species, including Homo sapiens, and the informational content of genes and proteins systematized in databases constitute a fertile field for data mining and the groundwork for exploring genetic interrelationships within and between species and their evolutionary meaning.

7 Scaling: A Fundamental Concept in Systems Biology

Size is a crucial biological property. As the size and complexity of a biological system increase, the relationships among its different components and processes must be adjusted over a wide range of scales so that the organism can continue to function (Brown et al. 2000). Otherwise stated, the organism must remain self-similar. Self-similarity is a main attribute of fractals—a concept introduced by Mandelbrot (1977)—and therefore the relationships among variables from different processes can be described by a fractal dimension or a power function. Geometrically, fractals can be regarded as structures exhibiting scaling in space, because their mass as a function of size, or their density as a function of distance, follows a power law. If a variable changes according to a power law when the parameter on which it depends grows linearly, we say that it scales, and the corresponding exponent is called the scaling exponent, b:

$$ Y = Y_{\mathrm{o}}\,M^{b} \qquad (1.1) $$

where Y is a dependent variable, e.g., metabolic rate; M is some independent variable, e.g., body mass; and Y_o is a normalization constant (Brown et al. 2000). If b = 1, the relationship represented by (1.1) is called isometric, whereas when b ≠ 1 it is called allometric—a term coined by Julian Huxley (1932). An important allometric relationship in biology is that between metabolic rate and body mass, first demonstrated by Kleiber (1932). Instead of the b = 2/3 expected from the surface law (i.e., the surface-to-volume relationship), Kleiber showed that b = ¾ (i.e., 0.75 instead of 0.67), meaning that the amount of calories dissipated by a warm-blooded animal each day scales as the ¾ power of its mass (Whitfield 2006).
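
The following short calculation, with made-up masses and normalization constant, shows how the choice of scaling exponent b in (1.1) changes the predicted whole-body metabolic rate across a large span of body masses.

```python
# Minimal sketch of the allometric relationship Y = Yo * M**b (Eq. 1.1):
# how the predicted metabolic rate ratio between a large and a small animal
# differs under the surface law (b = 2/3) and Kleiber's law (b = 3/4).
# The masses and the normalization constant Yo are illustrative values.

def metabolic_rate(mass_kg, b, Yo=1.0):
    """Allometric power law Y = Yo * M**b."""
    return Yo * mass_kg ** b

m_mouse, m_elephant = 0.03, 3000.0          # masses ~5 orders of magnitude apart
for b in (2.0 / 3.0, 0.75):
    ratio = metabolic_rate(m_elephant, b) / metabolic_rate(m_mouse, b)
    print(f"b = {b:.2f}: elephant/mouse metabolic rate ratio = {ratio:,.0f}")
# With b = 3/4 the whole-body rate rises more steeply with mass than under the
# surface law, while the mass-specific rate (Y/M ~ M**(b-1)) still declines.
```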

Scaling applies not only to spatial organization but to temporal organization as well. The dynamics of a biological system—visualized through time series of its variables (e.g., membrane potential, metabolite concentrations)—exhibit fractal characteristics. In this case, short-term fluctuations are intrinsically related to long-term trends through statistical fractals. On this basis, we can say that scaling reflects the interaction between the multiple levels of organization exhibited by cells and organisms, thus linking the spatial and temporal aspects of their organization. The discovery of chaotic dynamics by Lorenz (1963) and of criticality in phase transitions by Wilson (1983) led to the realization that scaling is a fundamental concept common to chaos and criticality, as it is to fractals.
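
As a hedged illustration of temporal scaling, the sketch below synthesizes a time series with a power-law (1/f-type) spectrum and recovers its scaling exponent from a log–log fit; the exponent, series length, and fit range are arbitrary choices for demonstration only.

```python
# Minimal sketch of scaling in time: a "statistical fractal" time series has a
# power spectrum that falls off as a power law, S(f) ~ 1/f**beta, linking
# short-term fluctuations to long-term trends. A 1/f-like signal is synthesized
# spectrally and the exponent is recovered by a log-log fit.
import numpy as np

rng = np.random.default_rng(1)
n, beta_true = 2**14, 1.0
freqs = np.fft.rfftfreq(n, d=1.0)
amplitudes = np.zeros_like(freqs)
amplitudes[1:] = freqs[1:] ** (-beta_true / 2.0)       # |X(f)| ~ f**(-beta/2)
phases = rng.uniform(0, 2 * np.pi, freqs.size)
phases[0] = phases[-1] = 0.0                           # keep DC/Nyquist bins real
signal = np.fft.irfft(amplitudes * np.exp(1j * phases), n)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
mask = (freqs > 0.001) & (freqs < 0.1)                 # fit away from the ends
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
print(f"recovered scaling exponent beta = {-slope:.2f} (target {beta_true})")
```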

8 Networks

The concept of networks is basic for understanding biological organization. Specifically, networks make it possible to address the problems of collective behavior and of large-scale response to stimuli and perturbations exhibited by biological systems (Alon 2007; Barabasi and Oltvai 2004). Scaling and the topological and dynamical organization of networks are intimately related concepts. Networks exhibit scale-free topologies and dynamics. Topologically, networks are scale free because most of the nodes in a network have only a few links, and these are held together by a small number of nodes exhibiting high connectivity (Barabasi 2003). Dynamically, the scale-free character of networks manifests as diverse frequencies across multiple and highly interdependent temporal scales (Aon et al. 2012; Lloyd et al. 2012).

The present topological view of networks evolved from multiply and randomly interacting elements in a system (Erdos and Renyi 1960) to “small worlds” (Watts and Strogatz 1998) and then to “scale-free” networks (Barabasi 2003). Departing from the classical work of Erdos and Renyi based on random graphs, in which every node is linked to any other node irrespective of their nature and connectivity, the “small world” concept introduced the notion that real networks as disparate as the neural network of the worm Caenorhabditis elegans or power grids exhibit high clustering (i.e., densely connected subgraphs) and short path lengths. Barabasi and collaborators presented the view that the nodes in a network are held together by a small number of nodes exhibiting high connectivity, rather than most of the nodes having the same number of links as in “random” networks (Barabasi and Albert 1999; Barabasi and Oltvai 2004). The “scale-free” organization of networks expresses the fact that the ratio of highly connected nodes, or “hubs,” to weakly connected ones remains the same irrespective of the total number of links in the network (Albert and Barabasi 2002; Helms 2008). Mechanistically, it has been proposed that the scale-free topology of networks arises from growth and preferential attachment (Albert and Barabasi 2002; Barabasi 2003).
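
The sketch below implements a minimal version of growth with preferential attachment; the network size and the number of links added per new node are illustrative, and the bookkeeping (a degree-weighted list of node labels) is just one simple way to realize degree-proportional choice.

```python
# Minimal sketch of growth with preferential attachment (Barabasi-Albert style):
# each new node links to m existing nodes chosen with probability proportional
# to their current degree, which yields a few highly connected hubs and many
# weakly connected nodes. Network size and m are illustrative choices.
import random
from collections import Counter

def preferential_attachment(n_nodes=2000, m=2, seed=0):
    random.seed(seed)
    degree = Counter({0: 1, 1: 1})       # start from a single edge 0-1
    degree_bag = [0, 1]                  # each node appears once per link
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(degree_bag))   # proportional to degree
        for old in chosen:
            degree[new] += 1
            degree[old] += 1
            degree_bag.extend([new, old])
    return degree

deg = preferential_attachment()
print("five largest degrees (hubs):", sorted(deg.values(), reverse=True)[:5])
print("median degree:", sorted(deg.values())[len(deg) // 2])
# A handful of hubs accumulate most links while the typical node keeps only a
# few, the heavy-tailed, "scale-free" pattern described above.
```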

The networks approach was introduced into biochemistry as metabolic control analysis (MCA). Developed independently in the second half of the past century by Kacser and Burns (1973) and Heinrich and Rapoport (1974), MCA represents an experimental approach with a mathematical basis founded on the kinetics of enzymatic and transport networks in cells and tissues. MCA deals with networks of reactions of any topology and complexity, quantifying the control exerted by each process at systemic and local levels (Fell 1997; Westerhoff et al. 2009). Metabolic flux analysis (MFA), also called flux balance analysis, represents another methodological approach to the study of reaction networks (Savinell and Palsson 1992a, b). Developed in the 1990s, MFA is based on stoichiometric modeling and accounts for mass–energy relationships among metabolic network components.
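
To illustrate the flux balance idea on the smallest possible example, the sketch below sets up a hypothetical two-metabolite, three-reaction network (entirely made up for this illustration) and solves the steady-state mass balance S·v = 0 with flux bounds as a linear program.

```python
# Minimal flux balance analysis sketch on a hypothetical two-metabolite toy
# network: steady-state mass balance S.v = 0, flux bounds, and a linear
# objective, solved as a linear program. The network, bounds, and objective
# are made-up illustrative choices.
import numpy as np
from scipy.optimize import linprog

# Columns: v_uptake (-> A), v_conv (A -> B), v_biomass (A + B ->)
S = np.array([
    [1, -1, -1],   # metabolite A
    [0,  1, -1],   # metabolite B
])
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 flux units
c = [0, 0, -1]                             # maximize biomass = minimize -v_biomass

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
v_uptake, v_conv, v_biomass = res.x
print(f"optimal fluxes: uptake={v_uptake:.1f}, conversion={v_conv:.1f}, "
      f"biomass={v_biomass:.1f}")
# The stoichiometric constraints force uptake = 2 x biomass here, so the capped
# uptake (10) limits the optimal biomass flux to 5.
```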

9 Systems Biology: A Twenty First Century Approach to Complexity

Our potential to address and solve increasingly complex problems in fundamental and applied research has expanded enormously. The following developments underscore our capacity to address increasingly complex behavior in complex systems (Aon et al. 2012; Cortassa et al. 2012):

  • The ability and the computational power to mathematically model very complicated systems, and analyze their control and regulation, as well as predict changes in qualitative behavior

  • An arsenal of theoretical tools (each with its own plethora of methods)

  • High throughput technologies that allow simultaneous monitoring of an enormous number of variables

  • Automation and accessibility of databases by newly developing methods of bioinformatics

  • Powerful imaging methods and online monitoring systems that provide the means of studying living systems at high spatial and temporal resolution of several variables simultaneously

  • The possibility of employing detailed enough bottom-up mathematical models that may help rationalize the use of key integrative variables, such as the membrane potential of cardiomyocytes or neurons, in top-down conceptual models with a few state variables.

A Complex Systems Approach integrating Systems Biology with nonlinear dynamic systems analysis, using the concepts and analytical tools of chaos, fractals, critical phenomena, and networks, has been proposed (Aon and Cortassa 2009). This approach is needed because the focus of the integrative physiological approach applied to biology and medicine is shifting toward studies of the properties of complex networks of reactions and processes of different nature, and of how these control the behavior of cells and organisms in health and disease (Cortassa et al. 2012; Lloyd and Rossi 2008; Saks et al. 2007, 2012).

The mass–energy transformation networks, comprising metabolic and transport processes (e.g., metabolic pathways, electrochemical gradients), give rise to the metabolome and fluxome, which account for the whole set of metabolites and fluxes, respectively, sustained by the cell. The information-carrying networks include the genome, transcriptome, and proteome, which account for the whole set of genes, transcripts, and proteins, respectively, possessed by the cell. Signaling networks modulate (activating or repressing) the interactions between the information and mass–energy transducing networks, thus mediating between the genome–transcriptome–proteome and the metabolome–fluxome. As such, signaling networks pervade the whole cellular network, playing the crucial role of influencing the unfolding of its function in space and time. The output of signaling networks consists of the concentration levels of intracellular metabolites (e.g., second messengers such as cAMP, AMP, phosphoinositides, reactive oxygen or nitrogen species), ions, proteins or small peptides, growth factors, and transcription factors.

The underlying difficulty of the question of how the mass–energy and information networks of the cell interact with each other to produce a certain phenotype arises from the dual role of, e.g., metabolites or transcription factors: they are at the same time products of the mass–energy or information networks and active components of the signaling networks that activate or repress the networks that produced them (see Chap. 2). The presence of these loops, in which the components are both cause and effect, together with their self-organizing properties, constitutes the most consistent defining trait of living systems and the source of their inherent complexity (Cortassa et al. 2012). Indeed, it is increasingly recognized that the regulatory state of a cell or tissue, as driven by transcription factors and signaling pathways, can impose itself upon the dynamics of the metabolic state, but the reciprocal—the feedback of the metabolic state on the regulatory state—must be equally true (Katada et al. 2012; Lu and Thompson 2012; McKnight 2010). In this vein, one of the main undertakings of this book is to understand how the components and dynamics of signaling networks affect, and are affected by, the other cellular mass–energy and information networks in health and disease, to produce a certain phenotype or cellular response under defined physiological conditions.