1 Principle of Self-Organization and Dissipative Structures

The phenomenon of spontaneous generation of spatial patterns of chemical concentration gradients was first observed in a purely chemical system in 1958 (see Fig. 3.1) (Babloyantz 1986; Kondepudi and Prigogine 1998; Kondepudi 2008) and inside the living cell in 1985 (see Fig. 3.2) (Sawyer et al. 1985). These observations demonstrate that, under appropriate experimental conditions, chemical reactions can become organized in space and time to produce oscillating chemical concentrations, metastable states, multiple steady states, fixed points (also called attractors), etc., all driven by the free energy released from exergonic (i.e., ΔG < 0) chemical reactions themselves. Such phenomena are referred to as self-organization, and physicochemical systems exhibiting self-organization are called dissipative structures (Prigogine 1977; Babloyantz 1986; Kondepudi and Prigogine 1998; Kondepudi 2008). It has been found convenient to refer to dissipative structures also as X-dissipatons, X referring to the function associated with or mediated by the dissipative structure. For example, there is some evidence (Lesne 2008; Stockholm et al. 2007) that cells execute a set of gene expression pathways (GEPs) more or less randomly in the absence of any extracellular signals until environmental signals arrive and bind to their cognate receptors, stabilizing a subset of these GEPs. Such mechanisms would account for the phenomenon of phenotypic heterogeneity among cells with identical genomes (Lesne 2008; Stockholm et al. 2007). Randomly expressed GEPs are good examples of dissipatons, since they are dynamic, transient, and driven by the dissipation of metabolic energy. Ligand-selected GEPs are also dissipatons. All living systems, from cells to multicellular organisms, to societies of organisms, and to the biosphere, can be viewed as evolutionarily selected dissipatons. As indicated above, the attractors, fixed points, metastable states, steady states, oscillators, etc., that are widely discussed in nonlinear dynamical systems theory (Scott 2005) can be identified as the mathematical representations of dissipatons.

Fig. 3.1

The Belousov–Zhabotinsky (BZ) reaction. The most intensely studied chemical reaction–diffusion system (or dissipative structure) known. Reproduced from Prigogine (1980)

Fig. 3.2

Intracellular Ca++ ion gradients generated in the cytosol of a migrating human neutrophil. The intracellular Ca++ ion concentration was visualized using the Ca++-sensitive fluorescent dye, Quin2. The pictures in the first column are bright-field images of a human neutrophil, and those in the second column are fluorescent images showing intracellular calcium ion distributions (white = high calcium; gray = low calcium). The pictures in the third column represent the color-coded ratio images of the same cell as in the second column. Images on the first row = unstimulated neutrophil. Images on the second row = the neutrophil migrating toward an opsonized particle, "opsonized" meaning "treated with certain proteins that enhance engulfing" by neutrophils. Images on the third row = the neutrophil with pseudopods surrounding an opsonized particle. Images on the fourth row = the neutrophil after having ingested several opsonized particles. Before migrating toward the opsonized particle (indicated by the arrows in Panels D and G), the intracellular Ca++ ion concentration in the cytosol was about 100 nM (see Panel C), which increased to several hundred nM toward the advancing edge of the cell (see Panel F). (Reproduced from Sawyer et al. 1985)

The theory of dissipative structures developed by Prigogine and his coworkers (Prigogine 1977; Nicolis and Prigogine 1977; Prigogine 1980; Kondepudi and Prigogine 1998; Kondepudi 2008) can be viewed as a thermodynamic generalization of previously known self-organizing chemical reaction–diffusion processes discovered independently by B. Belousov in Russia (and by others) working in the field of chemistry and by A. Turing in England working in mathematics (Gribbins 2004, pp. 128–134). That certain chemical reactions, coupled with appropriate diffusion characteristics of their reactants and products, can lead to symmetry breaking in molecular distributions in space (e.g., the emergence of concentration gradients from a homogeneous chemical reaction medium; see Fig. 3.2) was first demonstrated mathematically by A. Turing (1952; Gribbins 2004, pp. 125–140). Murray (1988) has shown that Turing reaction–diffusion models can account for the colored patterns on the surfaces of animals such as leopards, zebras, and cats.

Prigogine suggested that the so-called far-from-equilibrium condition is both necessary and sufficient for self-organization, but, as already pointed out, a general proof of this claim may be lacking. Nevertheless, Prigogine and his group have made important contributions to theoretical biology by establishing the concept that structures in nature can be divided into two distinct classes, equilibrium and dissipative structures, and that organisms are examples of the latter. It should be noted that these two types of structures are not mutually exclusive, since many dissipative structures (e.g., the living cell) require equilibrium structures as a part of their components, such as the phospholipid bilayers of biomembranes (which last much longer than, say, action potentials upon removal of the free energy supply).

One of the characteristic properties of all self-organizing systems is that the free energy driving them is generated or produced within the system (concomitant with self-organization), most often in the form of exergonic chemical reactions, either catalyzed by enzymes (e.g., see Fig. 3.2) or uncatalyzed (Fig. 3.1). In contrast, there are many organized systems that are driven by forces generated externally, such as the Bénard instability (Prigogine 1980), which is driven by externally imposed temperature gradients, and paintings produced by an artist's brush. To describe such systems, it is necessary to have an antonym to "self-organization," one possibility being "other-organization." It is unfortunate that, most likely due to the lack of an appropriate antonym, both self-organized (e.g., the flame of a candle) and other-organized entities (e.g., a painting, or the Bénard instability) are lumped together under the same name, that is, self-organization.

Dissipative structures are material systems that exhibit nonrandom behaviors in space and/or time driven by irreversible processes. Living processes require both equilibrium and dissipative structures. Operationally, we may define the equilibrium structures of living systems as those structures that remain, and dissipative structures as those that disappear, upon removing free energy input. Some dissipative structures can be generated from equilibrium structures through expenditure of free energy, as exemplified by an acorn and a cold candle, both equilibrium structures, turning into an oak and a flaming candle, dissipative structures, respectively, upon input of free energy:

$$ {\text{Equilibrium Structures}}\xrightarrow{\text{Free Energy}}{\text{Dissipative Structures}} $$
(3.1)

The flame of a candle is a prototypical example of a dissipative structure. The pattern of colors characteristic of a candle flame reflects the space- and time-organized oxidation-reduction reactions of the hydrocarbons constituting the candle, which produce transient chemical intermediates, some of which emit photons as they undergo electronic transitions from excited states to ground states. From a mechanistic point of view, the flame of a candle can be viewed as a high-temperature self-organizing chemical reaction–diffusion system, in contrast to the Belousov–Zhabotinsky reaction (Fig. 3.1), which is a low-temperature self-organizing chemical reaction–diffusion system.

1.1 Belousov–Zhabotinsky Reaction–Diffusion System

The Belousov–Zhabotinsky (BZ) reaction was discovered by the Russian chemist B. P. Belousov in 1958 and later confirmed and extended by A. M. Zhabotinsky (Babloyantz 1986; Gribbins 2004, pp. 131–134). The spatial pattern of chemical concentrations exhibited by the BZ reaction results from the chemical intermediates formed during the oxidation of citrate or malonate by potassium bromate in an acidic medium in the presence of the redox pair Ce+3/Ce+4, which acts as both a catalyst and an indicator dye (Ce+4 is yellow and Ce+3 is colorless). The BZ reaction is characterized by the organization of chemical concentrations in space and time (e.g., oscillating concentrations), and the spatial patterns of chemical concentrations can evolve with time. "Patterns of chemical concentrations" is synonymous with "chemical concentration gradients." The organization of chemical concentration gradients in space and time in the BZ reaction is driven by free energy-releasing (or exergonic) chemical reactions. The BZ reaction belongs to the family of oxidation-reduction reactions of organic molecules catalyzed by metal ions. The mechanism of the BZ reaction was worked out by R. Field, E. Körös, and R. Noyes in 1972 at the University of Oregon in Eugene. The so-called FKN (Field, Körös, and Noyes) mechanism of the BZ reaction involves 15 chemical species and 10 reaction steps (Leigh 2007). A condensed form of the FKN mechanism still capable of exhibiting spatiotemporally organized chemical concentrations is known as the Oregonator. A simplified mathematical model of the BZ reaction, formulated in 1968, is known as the Brusselator (Babloyantz 1986; Gribbins 2004, pp. 132–134).
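
To make the dynamical content of such reduced models concrete, the following minimal sketch (not part of the original text) integrates the two-variable Brusselator rate equations, dX/dt = A − (B + 1)X + X²Y and dY/dt = BX − X²Y, with all rate constants set to unity. The parameter values A = 1 and B = 3 are illustrative choices satisfying the oscillation condition B > 1 + A², not values taken from the literature cited above.

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B = 1.0, 3.0  # illustrative feed concentrations; oscillations require B > 1 + A**2

def brusselator(t, y):
    X, Y = y
    dX = A - (B + 1.0) * X + X**2 * Y   # autocatalytic production of X
    dY = B * X - X**2 * Y               # conversion of X into Y
    return [dX, dY]

sol = solve_ivp(brusselator, (0.0, 50.0), [1.0, 1.0],
                t_eval=np.linspace(0.0, 50.0, 2000))
print("X range:", sol.y[0].min(), "-", sol.y[0].max())  # sustained oscillations, not a fixed point
```

With these parameters the concentrations of X and Y settle onto a sustained limit cycle, the temporal analogue of the spatiotemporal organization of chemical concentrations described above.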

1.2 Intracellular Dissipative Structures (IDSs)

Living cells are formed from two classes of material entities that can be identified with Prigogine’s equilibrium structures (or equilibrons for brevity) and dissipative structures (or dissipatons) (Sect. 3.1). What distinguishes these two classes of structures is that equilibrons remain and dissipatons disappear when cells run out of free energy. Dissipatons are also theoretically related to the concept of “attractors” of nonlinear dynamical systems (Scott 2005).

All of the cellular components that are controlled and regulated are dissipatons, referred to as intracellular dissipative structures (IDSs) (Ji 1985a, b, 2002b). One clear example of IDSs is provided by the RNA trajectories of budding yeast subjected to a glucose–galactose shift, which exhibit pathway- and function-dependent regularities (Panel a in Fig. 12.2), some of which were found to obey a blackbody radiation-like equation (see Panels a through d in Fig. 12.25). The main idea to be suggested here is that IDSs constitute the immediate causes for all cell functions (Ji 1985a, b, 2002b). In other words, IDSs and cell functions are synonymous:

IDSs constitute the internal (or endo) aspects and cell functions constitute the external (or exo) aspects of the living cell. (3.2)

The concepts of dissipative structures, or self-organizing chemical reaction–diffusion systems, are not confined to abiotic (or inanimate) systems but can be extended to biotic (or animate) systems such as intracellular chemical reaction–diffusion processes, which were first demonstrated experimentally in chemotaxing human neutrophils by Sawyer, Sullivan, and Mandell (1985) (see Fig. 3.2). What is interesting about the findings of these investigators is that the direction of the intracellular calcium ion gradient determines the direction of the chemotactic movement of the cell as a whole. This is one of the first examples of intracellular dissipative structures (IDSs), in this case intracellular calcium gradients, observed to be linked to cell functions. Figure 3.2 offers two important take-home messages: (1) dissipative structures in the form of ion gradients can be generated inside a cell without any membranes (see Panels F, I, and L), and (2) IDSs determine cell functions.

There are three major differences to be noted between the dissipative structures in the Belousov–Zhabotinsky (BZ) reaction shown in Fig. 3.1 and the dissipative structures shown in Fig. 3.2: (1) the boundary (i.e., the reaction vessel wall) of the BZ reaction is fixed, (2) the boundary of IDSs (such as the intracellular calcium ion gradients) is mobile, and (3) the BZ reaction is a purely chemical reaction–diffusion system, while the intracellular dissipative structures in Fig. 3.2 are generated by chemical reactions catalyzed by enzymes, which encode genetic information. Hence, the cell can be viewed as a dissipative structure regulated by genetic information, or as a "genetically informed dissipaton" (GID).

1.3 Pericellular Ion Gradients and Action Potentials

The action potential is another example of a dissipative structure with a well-defined biological function, for example, the transmission of information along the axon. Action potentials (APs) differ from the intracellular calcium ion gradients shown in Fig. 3.2 in that they involve the movement of ions across the cell membrane. For this reason, it may be more accurate to refer to action potentials as "transmembrane" or "pericellular dissipative structures" (TDSs or PDSs), in contrast to cytosolic calcium ion gradients, which are "intracellular dissipative structures" (IDSs). APs can be viewed as a network of transmembrane transport processes of four key ions, namely, K+, Na+, Ca++, and Cl−, that are precisely coordinated in time and space with respect to the direction and speed of ion movements.
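
Although the text treats action potentials qualitatively, a rough sense of the driving force stored in each transmembrane ion gradient can be obtained from the standard Nernst relation, E = (RT/zF) ln([ion]out/[ion]in). The sketch below is only an illustration; the concentration values are typical textbook mammalian figures assumed here, not data from this chapter.

```python
import math

R, F, T = 8.314, 96485.0, 310.0  # J/(mol K), C/mol, body temperature in K

def nernst(z, c_out, c_in):
    """Equilibrium (Nernst) potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Illustrative mammalian concentrations in mM (assumed values).
ions = {"K+": (1, 5.0, 140.0), "Na+": (1, 145.0, 12.0),
        "Ca++": (2, 2.0, 1e-4), "Cl-": (-1, 110.0, 10.0)}
for name, (z, c_out, c_in) in ions.items():
    print(f"{name:4s}  E = {nernst(z, c_out, c_in):7.1f} mV")
```

The large and oppositely signed equilibrium potentials of K+ and Na+ computed this way are what make the coordinated opening and closing of their channels capable of producing the propagating voltage transient described above.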

According to the Bhopalator model of the cell (Ji 1985a, b, 2002b), the final form of gene expression is not proteins as is widely believed but a set of intracellular dissipative structures (IDSs) or dissipatons, including transmembrane dissipative structures and mechanical stress gradients of the cytoskeleton (Ingber 1998; Chicurel et al. 1998). Since IDSs and cell functions are determined by genes to a large extent, to that extent it would follow logically that cell functions and IDSs are equivalent or synonymous. We may express this idea in the form of a syllogism:

$$ \begin{array}{lll} (1)\ \text{Premise 1:} & \text{Genes} & \Rightarrow\ \text{Dissipative Structures} \\ (2)\ \text{Premise 2:} & \text{Genes} & \Rightarrow\ \text{Functions} \\ (3)\ \text{Conclusion:} & \text{Dissipative Structures} & \Rightarrow\ \text{Functions} \end{array} $$
(3.3)

where the ⇒ reads “determine” or “cause.” Or we may regard functions as the external (or exo) aspect or view and dissipative structures as the internal (or endo) aspect or view of the same phenomenon called life on the cellular level (see Statement 3.2).

1.4 Three Classes of Dissipative Structures in Nature

Although organisms are dissipative structures, not all dissipative structures are organisms. I agree with Pattee (1995) who stated that

a productive approach to the theories of life, evolution, and cognition must focus on the complementary contributions of nonselective law-based material self-organization and natural selection-based symbolic organization (meaning the genetic mechanisms; my addition). (3.4)

According to this so-called matter-symbol complementarity view, dissipative structures alone, as exemplified by the Belousov–Zhabotinsky (BZ) reaction, are not sufficient to give rise to life, because they are devoid of any symbolic elements that encode an evolutionary history/record. There is a great similarity between Pattee's emphasis on the symbolic aspect of organisms and my emphasis on the role of genetic information in life (see "liformation" in Table 2.6). Thus, we can recognize three distinct classes of dissipative structures, depending on the physicochemical nature of the boundaries delimiting them, as shown in Table 3.1.

Table 3.1 Three classes of dissipative structures: (1) dissipative structures with fixed boundaries, (2) dissipative structures with moving boundaries, and (3) dissipative structures with informed boundaries

The main difference between moving boundaries and informed moving boundaries is that the latter are not only mobile (e.g., the intracellular calcium ion gradient or the action potential) but also "communicate" with the chemical reactions that they catalyze through exchanging energies with both the chemical reactions and the thermal environment (see Sect. 2.1.2). The conformon-based mechanisms of enzymic catalysis are consistent with the Circe effect of Jencks (1975), the essence of which is that a part of the substrate-binding energy is stored in the enzyme-substrate complex as mechanical energy to be later utilized to lower (or, more accurately, to regulate) the activation free energy barrier for the enzyme-catalyzed reaction. The evidence for enzymes regulating their own catalytic rates (and activation energy barriers) comes from the fact that the waiting times of single-molecule enzymes are distributed not randomly but in a Planck radiation law-like manner (see Sect. 11.3.3).

If the classification scheme in Table 3.1 is valid, we can identify all molecular machines and motors in action driven by chemical reactions as "dissipative structures with informed moving boundaries." Because of the informed nature of the molecular structures of enzymes, enzymes can search out their target molecules to bind or target reactions to catalyze and execute motions in the direction of achieving informed/instructed functions. When the right set of such informed molecular machines is put in a confined space such as the interior of the cell, the molecular machines (Alberts 1998) can find their correct targets to interact with, forming a molecular machine network that executes the collective nonrandom molecular motions we recognize as life. Therefore, we are entitled to view the living cell as a "super-dissipative structure with informed moving boundaries," or "SDSIMB." I suggest that SDSIMBs are capable of any computation, communication, and construction on the molecular level, and may be regarded as the microscopic realization of the Turing machine and von Neumann's Universal Constructor (von Neumann 1966) combined.

1.5 The Triadic Relation Between Dissipative Structures (Dissipatons) and Equilibrium Structures (Equilibrons)

The living cell can be viewed as a prototypical example of dissipative structures, or a dissipaton. We can recognize two kinds of structures in the cell: those that disappear within a time τ upon the cessation of free energy input and those that remain unaltered for times longer than τ following the removal of free energy from the cell. We will identify the former as processes (since all processes stop without free energy dissipation) and the latter as equilibrium structures, or equilibrons. Here I am distinguishing between dissipative structures and processes: dissipative structures are processes, but not all processes are dissipative structures. For example, unless meticulous experimental conditions are satisfied (such as the concentration ranges of the reactants, the surface condition of the reaction vessel, temperature, and pressure), the same set of reactions (i.e., processes) that gives rise to pattern formation in the Belousov–Zhabotinsky reaction under the right set of conditions may proceed without producing any patterns of chemical concentration gradients. Another, more mundane, example is the combustion engine: without the mechanical boundaries provided by the cylinder block and the mobile piston, the oxidation of gasoline in the combustion chamber would lead to an explosion without producing any directed motion of the crankshaft. Thus it is clear that the boundary conditions (and in some cases the initial conditions as well) of chemical reactions are of utmost importance in successfully producing dissipative structures. The boundaries that constrain motions to produce coordinated motions leading to some function will be referred to as the Bernstein–Polanyi boundaries, to recognize the theoretical contributions made by Bernstein (1967) and Polanyi (1968) in the fields of structure–function correlations at the human-body and molecular levels (see Sect. 15.12). Thus, we can view a dissipaton, or a dissipative structure, as an irreducible triad, as shown in Fig. 3.3.

Fig. 3.3

The triadic relation between dissipatons (dissipative structures) and equilibrons (equilibrium structures)

Dissipatons are defined as those processes selected by some goal-directed or teleonomic mechanism because of their ability to accomplish some function. For this reason, dissipatons carry "meanings," whereas processes do not. Goal-directed or teleonomic mechanisms include enzyme-catalyzed chemical reactions and biological evolution itself. The subscript BPB on the right-hand side of the bracket in Fig. 3.3 stands for the Bernstein–Polanyi boundaries, the boundary conditions essential for harnessing the laws of physics and chemistry to constrain motions to achieve functions. Thus, the following dictum suggests itself:

Without Bernstein–Polanyi boundaries, no function. (3.5)

Figure 3.3 indicates that equilibrons are a necessary condition for dissipatons but not a sufficient one. The sufficient condition includes the mechanism that selects dissipatons out of all possible processes derived from a set of equilibrons and associated thermodynamic forces, the selection being based on functions. Since organisms are examples of dissipatons and since biology is the study of organisms, Fig. 3.3 suggests a novel way of defining biology in relation to physics and chemistry (which are widely acknowledged as the necessary conditions of life) as shown in Fig. 3.4. One unexpected consequence of Fig. 3.4 is the emergence of the fundamental role of biological evolution as the mechanism that selects those chemical reactions and physical processes that contribute to the phenomenon of life.

Fig. 3.4

Biology as the triadic science of physics, chemistry, and evolution (or history). Physics is viewed as the study primarily of material objects themselves (e.g., three-dimensional structures of matter), chemistry as the study of material transformations from one kind to another (i.e., chemical reactions), and biology as the study of those processes and structures that have been selected by biological evolution (e.g., metabolic networks, the cell cycle, morphogenesis)

1.6 Four Classes of Structures in Nature

As discussed in Sect. 3.1, Prigogine (1917–2003) divides all structures in the Universe into equilibrium structures (e.g., rocks, three-dimensional structures of proteins, amino acid sequences of proteins, nucleotide sequences of DNA and RNA) and dissipative structures (e.g., flames, concentration gradients, DNA supercoils). Prigogine's classification of structures into equilibrium and dissipative structures appears to be based on dynamics, the study of the causes of motions, namely, the energies and forces causing motions. Since the science of mechanics comprises dynamics and kinematics, which are complementary to each other (see Sect. 2.3.5) according to Bohr (Murdoch 1987; Plotnitsky 2006), it may be logical to classify structures into two groups based on kinematics as well. Kinematics is defined as the study of the space and time coordination of moving objects without regard to their causes. In contrast to the classification of structures into equilibrium and dissipative structures based on dynamics, it is suggested here that the two divisions of structures based on kinematics are (1) local and (2) global motions, including the division into microscopic and macroscopic motions. Therefore, the structures of the Universe can be divided into four distinct classes based on the kinematics–dynamics complementarity: (1) local equilibrons, (2) global equilibrons, (3) local dissipatons, and (4) global dissipatons, as summarized in Table 3.2 with specific examples given for each class. Several points emerge from Table 3.2. First, equilibrons (equilibrium structures) can be identified with "thermal motions" or "random motions," which entail no dissipation of free energy, while dissipatons (dissipative structures) can be identified with "directed motions" or "nonrandom motions," which entail free energy dissipation. Second, thermal motions are divided into local and global thermal motions, the former being identified with "thermal fluctuations," essential for enzymic catalysis (see Sect. 7.1.1) (Welch and Kell 1986; Ji 1974a, 1991), and the latter with "Brownian motions," which may play an essential role in the regulation of cell metabolism and motility. Another example of local versus global equilibrons is provided by individual bond vibrations versus the domain or segment motions of an enzyme involving hundreds to thousands of covalent bonds whose vibrational motions can be coupled into coherent modes.

Table 3.2 The classification of structures into four groups based on the principle of the kinematics–dynamics complementarity (Sect. 2.3.5). Equilibrons dissipate no free energy, that is, dG/dt = 0, while dissipatons do, that is, dG/dt < 0, where G is the Gibbs free energy

The cell can be viewed as a dynamic system of molecules (biochemicals, proteins, nucleic acids, etc.) that are organized in space and time to form local dissipatons (e.g., enzyme turnovers driven by conformons; Chaps. 7 and 8) and global dissipatons (e.g., cell cycles and cell motility driven by local dissipatons). Since all organizations in the cell are driven by the free energy supplied by chemical reactions catalyzed by enzymes, which in turn are driven by conformons, themselves examples of local dissipatons, it would follow that all global dissipatons of the cell are ultimately driven by local dissipatons, which may be a case of local–global coupling. Local–global couplings are important in biology in general and cell biology in particular, and are likely controlled by the generalized Franck–Condon principle, or the Principle of Slow and Fast Processes, discussed in Sect. 2.2.

1.7 Activities versus Levels (or Concentrations) of Biopolymers and Biochemicals in the Cell

The molecular entities (or biomolecules) of the cell may exist in two distinct states, active and inactive. For example, genes are inactive when they are buried deep inside chromosomes and active only when they are unpacked and brought out onto the surface of chromatin so that they can interact with transcription factors and enzymes. Another example is RNA molecules that are free versus bound to other molecules that affect their actions. Biomolecules need not be stable structures but include dynamic, multisubunit complexes (e.g., the hyperstructures of Norris et al. (1999, 2007a, b)) that are formed transiently to carry out needed metabolic functions and disassemble when their work is done. In analogy to the concept of activity coefficients in physical chemistry (Moore 1963, pp. 192–195; Wall 1958, pp. 341–344; Kondepudi and Prigogine 1998, pp. 199–203), we may define what may be called a "bioactivity coefficient," β, as follows:

$$ \beta_{i} = C_{a,i}/(C_{a,i} + C_{i,i}) = C_{a,i}/C_{t,i} $$
(3.6)

where βi is the bioactivity coefficient of the ith component of the cell, Ca,i is the concentration (i.e., the number of molecules in the cell) of the active form of the ith component, Ci,i is the concentration of the ith component in its inactive form, and Ct,i is the total concentration of the ith component. Therefore, the active or effective concentration of the ith cell component is given by

$$ C_{a,i} = \beta_{i}\,C_{t,i} $$
(3.7)

The mechanisms by which a component of the cell is activated or inactivated include (1) covalent mechanisms (e.g., post-replicational and post-translational modifications such as phosphorylation, methylation, acetylation, formylation, protonation, reduction, oxidation, etc.), and (2) noncovalent mechanisms (e.g., conformation changes of biopolymers and their higher-order structures induced by pH, ionic strength, mechanical stresses, local electric field, and ligand binding).

The bioactivity coefficient as defined in Eq. 3.6 is synonymous with the "fractional activity of biomolecules," namely, the fraction of the total number of the ith biomolecule that is activated or active at any given time t in a given microenvironment located at coordinates x, y, and z. In other words, βi in Eq. 3.7 is not a constant, as activity coefficients are in chemistry, but a function of space and time, leading to the following expression:

$$ 1 > \beta_{i}(x, y, z, t) \ge 0 $$
(3.8)

Inequality 3.8 states that the activity of the ith biomolecule inside the cell is dependent not only on the intrinsic physicochemical properties of the molecule itself but also on its microenvironment and time. We may refer to this statement as the Principle of the Space-Time Dependent Bioactivity Coefficient (PSTDBC). PSTDBC is consistent with the “metabolic field theory of cell metabolism,” also known as “cytosociology,” formulated by Welch and his colleagues (Welch and Keleti 1981; Welch and Smith 1990; Smith and Welch 1991). It is very likely that PSTDBC has provided important additional degrees of freedom for the living cell to complexify its internal states, thereby enhancing its survivability in the increasingly complexifying environment of the biosphere over the evolutionary time scale (see Sect. 5.2.3). The emerging importance of “crowding” effects on cell functions (see Fig. 12.28) (Minton 2001) is predictable from the perspective of PSTDBC.
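
A minimal numerical illustration of Eqs. 3.6, 3.7, and 3.8 may be helpful. In the sketch below, the counts of active and inactive copies and the functional form chosen for β(x, y, z, t) are arbitrary placeholders introduced only to show how the definitions combine; they are not values or models proposed in the text.

```python
import math

def bioactivity_coefficient(c_active, c_inactive):
    """Eq. 3.6: beta_i = C_a,i / (C_a,i + C_i,i)."""
    return c_active / (c_active + c_inactive)

def effective_concentration(beta, c_total):
    """Eq. 3.7: C_a,i = beta_i * C_t,i."""
    return beta * c_total

# Hypothetical protein: 300 active and 700 inactive copies in the cell.
beta = bioactivity_coefficient(300, 700)        # 0.3
print(effective_concentration(beta, 1000))      # 300 active copies

# Eq. 3.8: beta depends on position and time; this functional form is a placeholder only.
def beta_xyzt(x, y, z, t):
    return 0.5 * (1.0 + math.sin(t)) * math.exp(-(x**2 + y**2 + z**2))

print(0.0 <= beta_xyzt(0.1, 0.2, 0.0, 1.0) < 1.0)  # True: satisfies Inequality 3.8
```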

2 Configurations versus Conformations; Covalent versus Noncovalent Interactions (or Bonds)

It is important to distinguish between conformations and configurations on the one hand, and between noncovalent and covalent interactions (or bonds) on the other. Conflating these two sets of terms in chemistry is comparable to conflating protons and neutrons in particle physics, or the first (words → sentences) and second (letters → words) articulations in linguistics (Culler 1991). The conformation of a molecule is a three-dimensional arrangement of atoms that can be altered without breaking or forming covalent bonds, while the configuration of a molecule is a three-dimensional arrangement of atoms that cannot be changed unless at least one of the covalent bonds in the molecule is broken. Covalent bonds are strong, taking 50–100 kcal/mol to break, since they are formed between two or more nuclei through the sharing of one or more pairs of valence electrons (i.e., the electrons residing in the outermost electronic shell of an atom or a molecule). Noncovalent bonds are relatively weak, taking only 1–3 kcal/mol to break, because they do not require the sharing of any electron pairs.

It is very common to hear experts in the X-ray crystallography of biopolymers or in the field of signal transduction say that the "phosphorylation of group X in protein Y produced conformation changes." Such statements, strictly speaking, are incorrect (Ji 1997a). The correct expression entails replacing conformation with configuration. To understand why, it is necessary to know how these two terms are defined in physical organic chemistry (Fig. 3.5).

Fig. 3.5

Distinguishing between configurations and conformations. (Upper) A configuration refers to an arrangement of atoms in a molecule that cannot be changed without breaking or forming at least one covalent bond. One of the two bonds of the C=C double bond (i.e., the π bond) must be broken and re-formed to convert trans-1,2-difluoroethylene to the cis isomer. (Lower) A conformation is an arrangement of atoms in a molecule that can be changed by bond rotations without breaking or forming any covalent bonds. No covalent bond needs to be broken to convert the trans conformation of 1,2-difluoroethane to the cis conformer. Conformers are defined as molecular structures that can be interconverted without breaking any covalent bonds

Notice that all that is needed to convert the trans-conformer to the cis-conformer is to rotate the carbon atoms around the carbon–carbon single bond relative to each other; no covalent bond needs to be broken or formed in the process. Configurational changes, in contrast, involve breaking or forming at least one covalent bond and are usually slow, their activation energy barriers being on the order of several tens of kcal/mol. Conformational changes are fast because their activation energy barriers are in the range of thermal energies, that is, about 1–3 kcal/mol (a rough Arrhenius comparison of the two time scales is sketched at the end of this section). The biological importance of distinguishing between conformational (also called noncovalent) structures and configurational (or covalent) structures rests on the following facts:

1. All protein–protein, protein–nucleic acid, and RNA–DNA interactions are completely determined by the three-dimensional shapes of proteins and nucleic acids.

2. Molecular shapes carry molecular information (e.g., the molecular shape of a transcription factor is recognized by and influences the structure and activity of a regulatory segment of DNA).

3. There are two kinds of molecular shapes, to be denoted as Type I and Type II:

   • "Type I shapes" can be changed from one to another through conformational (i.e., noncovalent) changes only.

   • "Type II shapes" can be changed from one to another through configurational (i.e., covalent) changes only.

4. Type I shapes are sensitive to microenvironmental conditions (e.g., temperature, pH, ionic strength, electric field gradient, mechanical stress gradient, etc.), while Type II shapes are relatively insensitive to such factors.

5. It was postulated that Type I shapes are utilized to transmit information through space, while Type II shapes are used to transmit information through time (Ji 1988).

Therefore, it may be reasonable to conclude that one possible reason for there being two (and only two) kinds of molecular interactions and shape changes in molecular and cell biology is to mediate information transfer through space and time in living systems.
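
The rate difference implied by the two classes of barrier heights quoted above (about 1–3 kcal/mol for conformational changes versus several tens of kcal/mol for configurational changes) can be illustrated with a back-of-the-envelope Arrhenius estimate. The pre-exponential factor of 10¹³ s⁻¹ used below is an assumed typical value, not a figure from the text.

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol K)
T = 300.0      # room temperature, K
A_PRE = 1e13   # assumed typical pre-exponential factor, s^-1

def arrhenius_rate(ea_kcal_per_mol):
    """Rate constant k = A * exp(-Ea / RT)."""
    return A_PRE * math.exp(-ea_kcal_per_mol / (R * T))

k_conformational = arrhenius_rate(2.0)    # ~1-3 kcal/mol barrier (noncovalent change)
k_configurational = arrhenius_rate(60.0)  # several tens of kcal/mol barrier (covalent change)

print(f"conformational  k = {k_conformational:.2e} s^-1")
print(f"configurational k = {k_configurational:.2e} s^-1")
print(f"ratio = {k_conformational / k_configurational:.1e}")
```

The estimate gives sub-nanosecond conformational transitions but essentially unobservable uncatalyzed configurational transitions, which is consistent with the point made above that the latter require bond-breaking chemistry (and, in cells, enzymes) rather than thermal agitation alone.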

3 The Principle of Microscopic Reversibility

In formulating possible mechanisms for an enzyme-catalyzed reaction, it is important to obey two principles: the generalized Franck–Condon principle (GFCP) introduced in Sect. 2.2.3 and the principle of microscopic reversibility (PMR) described below. PMR is well known in the fields of chemical kinetics (Gould 1959; Hine 1962; Laidler 1965) and statistical mechanics (Tolman 1979), and is succinctly stated by Hine (1962, pp. 69–70) in a form that is useful in enzymology:

… the mechanism of reversible reaction is the same, in microscopic detail … for the reaction in one direction as in the other under a given set of conditions. … (3.9)

Gould (1959, p. 319) describes PMR in another way:

… if a given sequence of steps constitutes the favoured mechanism for the forward reaction, the reverse sequence of these steps constitutes the favoured mechanism for the reverse reaction. … (3.10)

Enzymologists often write a generalized enzymic reaction thus:

$$ \underset{a}{\text{S} + \text{E}} \;\leftrightarrow\; \underset{b}{\text{S}\cdot\text{E}} \;\leftrightarrow\; \underset{c}{\text{P} + \text{E}} $$
(3.11)

where E is the enzyme, S the substrate, and P the product. Clearly, Scheme 3.11 is not microscopically reversible, since the sequence of events followed in the direction from left to right is not the same as that from right to left: there is no P·E complex in the scheme. In order to modify Scheme 3.11 so as to make it microscopically reversible, it is necessary to use the GFCP (Sect. 2.2.3), as shown in Scheme 3.12:

$$ \underset{a}{\text{S} + \text{E}} \leftrightarrow \underset{b}{\text{S} + \text{E}^{*}} \leftrightarrow \underset{c}{\text{S}\cdot\text{E}^{*}} \leftrightarrow [\underset{d}{\text{S}\cdot\text{E}^{\ddagger}} \Leftrightarrow \underset{e}{\text{P}\cdot\text{E}^{\ddagger}}] \leftrightarrow \underset{f}{\text{P}\cdot\text{E}^{*}} \leftrightarrow \underset{g}{\text{P} + \text{E}^{*}} \leftrightarrow \underset{h}{\text{P} + \text{E}} $$
(3.12)

where the two superscripted Es represent the so-called Franck–Condon states, which are conformationally strained high-energy states that are in thermal equilibrium with their associated ground states (Reynolds and Lumry 1966). Of the two Franck–Condon states, E* is long-lived (with lifetimes thought to be much longer than ~10⁻¹² s, the typical time required for electronic transitions) and E‡ is short-lived, lasting only long enough for electronic transitions to take place as a part of a chemical reaction, that is, covalent rearrangements. Hence, we may refer to E* and E‡ as "stable" and "unstable" Franck–Condon states, respectively, the latter often symbolized by square brackets, […] (Ji 1974a, 1979). Evidently, Scheme 3.12, which is a species of Eq. 2.26, is microscopically reversible, that is, the scheme is mechanistically symmetric with respect to inversion around the symbol ⇔.
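
One quantitative consequence of PMR is detailed balance: at equilibrium every elementary step, not merely the overall conversion, is individually balanced by its reverse. The sketch below illustrates this with a linear chain of reversible first-order steps having arbitrary rate constants; it is a simplified stand-in for multi-step schemes such as Scheme 3.12 (which also contains bimolecular steps), not a simulation of that scheme itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (s^-1) for a chain of reversible
# steps A <-> B <-> C <-> D, standing in for a multi-step enzymic scheme.
kf = np.array([2.0, 1.0, 3.0])   # forward constants for steps 1..3
kr = np.array([1.0, 4.0, 2.0])   # reverse constants for steps 1..3

def rates(t, c):
    flux = kf * c[:-1] - kr * c[1:]   # net flux through each elementary step
    dc = np.zeros_like(c)
    dc[:-1] -= flux                   # each step drains its reactant...
    dc[1:] += flux                    # ...and feeds its product
    return dc

c0 = np.array([1.0, 0.0, 0.0, 0.0])   # start with everything in state A
sol = solve_ivp(rates, (0.0, 200.0), c0, rtol=1e-10, atol=1e-12)
c_eq = sol.y[:, -1]

# At equilibrium every individual step is balanced (detailed balance),
# so each net flux printed below is essentially zero.
print(kf * c_eq[:-1] - kr * c_eq[1:])
```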

There are several unusual features about Scheme 3.12 that require special attention:

1. Enzymes are postulated to undergo thermal fluctuations between their ground state, E, and energized states, E* (called "stable Franck–Condon states"), in the absence of their substrates.

2. Substrates bind only to the stable Franck–Condon states of enzymes, E*, and not to their ground states, E. This contrasts with the traditional induced-fit hypothesis of Koshland (1958). To highlight this difference, the Franck–Condon principle-based mechanism of ligand binding is referred to as the "pre-fit" hypothesis.

3. Enzyme-catalyzed chemical reactions can occur only at the unstable Franck–Condon state, denoted as E‡ and enclosed within the square brackets, […].

4. The energy stored in E* at state b is thermally derived and hence cannot be utilized to do any work lest the Second Law of Thermodynamics be violated (see Sect. 2.1.4), but the energy stored in E* at state c is derived from the free energy of binding of S to E* and is thus able to do work either internally (e.g., modulation of the rate of electronic transitions) or externally on the enzyme's environment, as in the myosin head exerting a force on the actin filament (see Sect. 11.4).

5. The transition from a to c (without being mediated by state b) is what is involved in the Circe mechanism of enzymatic catalysis proposed by Jencks (1975). Since this mechanism is not based on the generalized Franck–Condon principle, the Circe effect mechanism may be viewed as theoretically incomplete.

In Sect. 11.3, PMR, as stated in Statements 3.9 and 3.10, and the GFCP will be applied to elucidate the molecular mechanisms underlying the action of cholesterol oxidase, based on the single-molecule fluorescence measurements of Lu et al. (1998).