Cells live in complex and dynamic environments. They sense and respond to both external environmental cues and to each other through cell-to-cell communication. Adapting to changing environments often requires cells to perform complex information processing, and cells have developed elaborate signaling networks to accomplish this feat. These biochemical networks are ubiquitous in biology, ranging from the quorum-sensing [57] and chemotaxis networks [78] in single-celled organisms to developmental networks in higher organisms [54]. Inspired by both these natural circuits and physical computing devices, synthetic biologists are designing sophisticated synthetic circuits that can perform complicated “computing-like” behaviors. Synthetic biologists have designed gene circuits executing a wide range of functionalities including switches [32], oscillators [28], counters [31], and even cell-to-cell communicators [21].

Despite these successes, many challenges to harnessing the full potential of synthetic biology persist [18, 20, 46, 47, 59, 63, 67, 72]. While there are guiding principles to synthetic biology [86], actual construction of synthetic circuits often proceeds in an ad-hoc manner through a mixture of biological intuition and trial-and-error. Furthermore, the functionality and applicability of synthetic circuits are limited by a dearth of biological components [48]. For this reason, it would be helpful to identify general principles that can improve the design of synthetic circuits and help guide the search for new biological parts. One promising direction along these lines is recent work examining the relationship between the information processing capabilities of these biochemical networks and their energetic costs (technically this is usually a cost in free energy, but for the sake of brevity we will refer to this as energy). Energetic costs place important constraints on the design of physical computing devices [50] as well as on neural computing architectures in the brain and retina [79], suggesting that thermodynamics may also influence the design of cellular information processing networks. As the field of synthetic biology seeks to assemble increasingly complex biochemical networks that exhibit robust, predictable behaviors, natural questions emerge: What are the physical limitations (thermodynamic and kinetic) on the behavior and design of these biological networks? How can one use energy consumption to improve the design of synthetic circuits?

In a classic paper written at the advent of modern computing [50], Landauer asked analogous questions about physical computing devices. He argued that a central component of any general purpose computing device is a memory module that can be “reset” to a predefined state, and pointed out that such a device must obey certain thermodynamic and kinetic constraints. In particular, he convincingly argued that resetting memory necessarily leads to power dissipation, implying that heat generation and energy consumption are unavoidable consequences of the computing process itself. The paper also outlined three general sources of error resulting from kinetic and thermodynamic considerations: incomplete switching between memory states due to long switching times, the decay of stored information due to spontaneous switching, and what he called a “Boltzmann” error due to limited energy supplies. Furthermore, the paper showed that there exist fundamental trade-offs between these types of errors and energetic costs in these memory devices. These considerations suggested general strategies for designing new devices and parts for physical memory modules.

The goal of this review is to synthesize recent theoretical work on thermodynamics and energy consumption in biochemical networks and discuss the implications of this work for synthetic biology. Theoretical papers in this field are often highly technical and draw on new results in non-equilibrium statistical mechanics. For this reason, our goal is to organize the insights contained in these papers [2–4, 8, 16, 33–38, 40, 41, 44, 49, 62, 65, 68, 69, 73, 74, 76, 80, 88] into a few simple, broadly applicable principles. We find that energy consumption in cellular circuits tends to serve five basic purposes: (1) increasing specificity, (2) manipulating dynamics, (3) reducing variability, (4) amplifying signal, and (5) erasing memory. Furthermore, for each of these categories, there exist implicit tradeoffs between power consumption and dynamics. We emphasize, however, that this list is not exhaustive and there exist other functions of energy consumption in biology, both known and unknown.

In the future, energetic costs are likely to become an increasingly important consideration in the design of synthetic circuits. Presently, synthetic biology is adept at making circuits that can be controlled and manipulated by external users by, for example, adding or removing small signaling molecules. A major challenge facing the field is to move beyond such externally controlled circuits to autonomous circuits that can function in diverse environments for extended periods of time. Such autonomous circuits must be able to accurately sense the external environment, amplify small signals, and store information—processes that require or can be improved through energy consumption. Energy consumption necessarily imposes a fitness cost on cells harboring the synthetic circuit, and over many generations, even a small fitness cost can cause synthetic circuits to be lost due to competition. For this reason, understanding how the information processing capabilities of a biochemical network are related to its energy consumption is an important theoretical problem in synthetic biology.

Beyond synthetic biology, biochemical networks offer a unique setting to explore fundamental physics questions in non-equilibrium statistical mechanics. Recently there has been a surge of interest among physicists in the relationship between information and thermodynamics [10, 12]. For example, using sophisticated optical traps, groups have recently tested Landauer's principle experimentally [14, 43], and there is an active debate on how to extend Landauer's principle to quantum regimes [85]. A flurry of recent work has focused on extending concepts like entropy and free-energy to non-equilibrium regimes, often using information theoretic concepts [22, 25, 42, 55, 66, 82, 83]. Living systems are perhaps the most interesting example of non-equilibrium systems, and thinking about information and thermodynamics in the context of cells is likely to yield new general insights into non-equilibrium physics.

1 Increasing Specificity

One common role of energy consumption in biochemical circuits is to increase the specificity of an enzyme or signaling pathway. The most famous example of this is kinetic proofreading. In a landmark paper [39], John Hopfield showed how it is possible to increase the specificity of an enzyme beyond what would be expected from equilibrium thermodynamics by consuming energy and driving the system out of equilibrium. Kinetic proofreading-type mechanisms are also thought to underlie the exquisite specificity of eukaryotic pathways such as the TCR signaling network [81], in which a few-fold difference in the affinities between molecules can lead to several orders of magnitude difference in response. A full review of kinetic proofreading and all its applications is beyond the scope of this review, but we highlight some important lessons for synthetic biology. (See Ninio [64], Ehrenberg [27], and Bennett [11] for some of the important developments in kinetic proofreading.)

The first general principle that emerges from kinetic proofreading is that greater specificity requires greater energy consumption. In particular, the error rate in kinetic proofreading depends exponentially on the amount of energy consumed in each step of the proofreading cascade. This increased specificity comes at the expense of a more sluggish dynamic response (see [61, 62] for an interesting exploration of this tradeoff). This highlights a second theme about energy consumption: there generally exist trade-offs between greater specificity and other desirable properties such as a fast dynamical response or sensitivity to small signals.
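To make the first principle concrete, the following minimal sketch (an idealized model of our own, not taken from the cited works) compares the equilibrium error floor with the error of a proofreading cascade in which each of \(m\) energy-consuming steps is assumed to be driven hard enough to be effectively irreversible and to re-test the substrate independently; in this limit the error falls exponentially with the number of proofreading steps, and hence with the energy spent.

```python
import numpy as np

def proofreading_error(delta, m):
    """Error fraction of an idealized proofreading cascade.

    delta : binding free-energy difference between right and wrong substrates,
            in units of k_B T (the equilibrium discrimination energy).
    m     : number of strongly driven proofreading steps; each is assumed to be
            effectively irreversible and to re-test the substrate independently,
            so the equilibrium error floor exp(-delta) is raised to the (m+1) power.
    """
    f_eq = np.exp(-delta)  # error achievable by equilibrium binding alone
    return f_eq ** (m + 1)

delta = 2.0  # ~2 k_B T of binding discrimination between substrates (illustrative)
for m in range(4):
    print(f"{m} proofreading steps: error ~ {proofreading_error(delta, m):.2e}")
```

Each proofreading step costs at least one NTP hydrolysis, so in this idealized limit the exponential gain in specificity is paid for linearly in energy and, as noted above, in response time.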

The latter trade-off is clearest in the context of non-specific activation of an output in a synthetic circuit. For example, in a transcriptional synthetic circuit an output protein may be produced at low levels even in the absence of an input signal. A common strategy for dealing with such background levels of activation is to place a strong degradation tag on the protein that increases its degradation rate [19]. This ensures that in the absence of an activating signal, proteins are quickly degraded. However, increasing the degradation rate clearly comes at a steep energetic cost as more proteins have to be produced to reach the same steady-state. At the same time, the gene circuit loses sensitivity to small input signals due to their fast degradation.

Fig. 1 Consuming energy to increase modularity. a A transcription factor regulates downstream promoters via direct coupling according to Eq. 1. Sequestration of the transcription factor upon binding to promoters can lead to “retroactivity”, i.e. a change in the dynamics of the transcription factor levels as a result of coupling to outputs. b Coupling the transcription factor through an insulating element consisting of a phosphorylation/dephosphorylation cycle with fast dynamics reduces the effect of retroactivity

2 Manipulating Dynamics

Another general role for energy consumption is to manipulate dynamics. By coupling a chemical reaction to energy sources such as ATP or GTP, it is possible to change the dynamics of a biochemical network. One of the most interesting recent examples of how energy consumption can be used to change dynamics is work on retroactivity [6, 23, 58]. The central problem addressed in these papers is the observation that biochemical signal transduction circuits often have their dynamical behavior altered upon coupling to external outputs due to sequestration of proteins, a property dubbed “retroactivity”. Such coupling is particularly undesirable when there are a number of downstream outputs. These works demonstrate, both theoretically and experimentally, that it is possible to introduce insulating elements that reduce the magnitude of this retroactivity and thereby restore the modular dynamical behavior of synthetic circuits. A key property of these insulating elements is that they utilize enzymatic futile cycles and hence actively consume energy. Moreover, a detailed theoretical analysis shows that the effectiveness of an insulating element is directly related to its energy consumption [6].

To demonstrate these concepts, we will consider the simple example of a protein Z that is produced at a time-dependent rate k(t) and is degraded at a rate \(\delta \) (see Fig. 1). In addition, Z regulates a family of promoters, with concentration \(p_{\mathrm {tot}}\), by binding/unbinding to the promoter to form a complex C at rates \(k_{\mathrm {on/off}}\). The kinetics of this simple network is described by the set of ordinary differential equations

$$\begin{aligned} {dZ \over dt}&= k(t) -\delta Z - \tau ^{-1}[k_{\mathrm {on}}Z(p_{\mathrm {tot}}-C)-k_{\mathrm {off}} C], \\ {dC \over dt}&= \tau ^{-1}[k_{\mathrm {on}} Z(p_{\mathrm {tot}}-C)-k_{\mathrm {off}} C], \end{aligned}$$
(1)

where we have introduced an overall dimensionless timescale \(\tau \) for the binding/unbinding dynamics. Notice that if \(\tau ^{-1} \gg 1\), then the timescale separation between the Z and C dynamics means that the Z dynamics are well approximated by setting \({dC \over dt}=0\) so that

$$\begin{aligned} {dZ \over dt} \approx k(t) -\delta Z. \end{aligned}$$
(2)

Thus, when Z is coupled to a system with extremely fast dynamics, the retroactivity term, \(\tau ^{-1}[k_{\mathrm {on}}Z(p_{\mathrm {tot}}-C)-k_{\mathrm {off}} C]\), is negligible.
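As an illustration, the following sketch (with arbitrary parameter values chosen only for this example) integrates Eq. (1) for an oscillating production rate \(k(t)\) and compares the resulting Z trajectory with the isolated dynamics of Eq. (2); the printed deviation is the distortion introduced by the retroactivity term.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Arbitrary illustrative parameters; k(t) is a slowly oscillating production rate.
delta, k_on, k_off, p_tot, tau = 1.0, 10.0, 10.0, 5.0, 0.1
k = lambda t: 1.0 + 0.8 * np.sin(0.5 * t)

def coupled(t, y):
    """Eq. (1): Z production/degradation plus promoter binding (retroactivity)."""
    Z, C = y
    flux = (k_on * Z * (p_tot - C) - k_off * C) / tau
    return [k(t) - delta * Z - flux, flux]

def isolated(t, y):
    """Eq. (2): the same Z dynamics with the retroactivity term dropped."""
    return [k(t) - delta * y[0]]

t_eval = np.linspace(0, 20, 400)
Z_coupled = solve_ivp(coupled, (0, 20), [1.0, 0.0], t_eval=t_eval, method="LSODA").y[0]
Z_isolated = solve_ivp(isolated, (0, 20), [1.0], t_eval=t_eval).y[0]
print(f"max |Z_coupled - Z_isolated| = {np.max(np.abs(Z_coupled - Z_isolated)):.3f}")
```

This deviation is precisely what the insulating elements described next are designed to suppress.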

This basic observation motivates the idea behind kinetic insulators. Instead of coupling Z directly to the complex C, one couples Z to C indirectly through an intermediary insulating element with fast kinetics. Similar analysis of this more complex network shows that this dramatically decreases the amount of retroactivity. In practice, the insulating element is a phosphorylation/dephosphorylation cycle with fast kinetics (see Fig. 1). The faster the intermediary kinetics, and hence the more energy consumed by the futile cycle, the better the quasi-static approximation and the more effective the insulator. (For simplicity, we don’t provide the equations governing Fig. 1b. See [6, 58] for details).

3 Reducing Variability

Biochemical circuits can also consume energy to reduce variability and increase reproducibility. One of the best studied examples of this is the remarkably reproducible response of mammalian rod cells to light stimulation (see [15] and references therein). This reproducibility of the rod cell response is especially surprising given that the response originates from the activation of a single rhodopsin molecule. A simple biophysically plausible model for an active rhodopsin is that its lifetime is exponentially distributed (i.e. the deactivation of rhodopsin is a Poisson process). In this case, the trial-to-trial variability, measured by the squared coefficient of variation, \(CV^2=\sigma ^2/\mu ^2\), would be equal to 1. Surprisingly, the actual variability is much smaller than this naive expectation.

Experiments indicate that this discrepancy is at least partially explained by the fact that the shut-off of active rhodopsin molecules proceeds through a multi-step cascade [15, 26, 70, 71] (i.e. the active rhodopsin molecule starts in state 1, then transitions to state 2, etc. until it reaches state L). If each of these steps were identical and independent, then from the central limit theorem the squared coefficient of variation of the L-step cascade would be L times smaller than that of a single step, i.e. \(\sigma ^2/\mu ^2=1/L\).
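To see this explicitly, note that the total completion time is a sum of L independent exponential waiting times, each with rate k:

$$\begin{aligned} T=\sum _{i=1}^{L} t_i, \qquad \langle T \rangle = \frac{L}{k}, \qquad \sigma _T^2 = \frac{L}{k^2} \quad \Rightarrow \quad \frac{\sigma _T^2}{\langle T \rangle ^2}=\frac{1}{L}. \end{aligned}$$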

Notice that in order for such a multi-step cascade to reduce variability it is necessary that each of the transitions between the L states be irreversible. If they were not, then one could not treat the L-steps as independent and the progression of the rhodopsin molecule through the various states would resemble a random walk, greatly increasing the variability [15]. For this reason, reducing variability necessarily consumes energy. Consistent with this idea is the observation that the variability of rhodopsin seems to depend on the number of phosphorylation sites present on a rhodopsin molecule [26].

Fig. 2 Reducing variability in a multi-step cascade through energy consumption. a A protein (blue ovals) is repeatedly phosphorylated L times. b The coefficient of variation, defined as the variance divided by the squared mean of the time it takes to complete L phosphorylations, as a function of the free-energy consumed during each step in the cascade, \(\Delta G\), for \(L=1,4,16,64\) (Color figure online)

In fact, it is possible to directly compute the coefficient of variation [9, 60] as a function of the ratio of the forward and backward rates at each step, \(\theta \). The logarithm of this ratio is simply the free-energy consumed at each step in units of \(k_BT\), \(\Delta G= \log {\theta }\). Figure 2 shows that the coefficient of variation is a monotonically decreasing function of \(\Delta G\), and hence of the energy consumed by the cascade. Note that this decrease in variability comes at the expense of a slower dynamic response, since the mean completion time scales linearly with the cascade length.
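A minimal stochastic simulation of such a cascade, written here as a birth–death chain of our own construction (forward rate \(k_+\) set to 1, backward rate \(k_+e^{-\Delta G}\), with \(\Delta G\) in units of \(k_BT\) and sample sizes chosen arbitrarily), shows the same trend: the completion-time variability decreases toward the irreversible limit \(1/L\) as the free energy spent per step grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def completion_time(L, dG, k_plus=1.0):
    """Sample the time to go from state 0 to state L in a birth-death chain
    with forward rate k_plus and backward rate k_plus*exp(-dG); dG is the
    free energy consumed per step in units of k_B T (illustrative model)."""
    k_minus = k_plus * np.exp(-dG)
    state, t = 0, 0.0
    while state < L:
        if state == 0:                       # cannot step backward from the start
            t += rng.exponential(1.0 / k_plus)
            state = 1
        else:
            total = k_plus + k_minus
            t += rng.exponential(1.0 / total)
            state += 1 if rng.random() < k_plus / total else -1
    return t

for dG in (0.5, 2.0, 8.0):
    for L in (1, 4, 16):
        times = np.array([completion_time(L, dG) for _ in range(2000)])
        cv2 = times.var() / times.mean() ** 2
        print(f"dG = {dG:4.1f} kT, L = {L:2d}: CV^2 = {cv2:.3f}")
```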

Recent calculations have applied these ideas to the problem of a non-equilibrium receptor that estimates the concentration of an external ligand [52]. It was shown that by forcing the receptor to cycle through a series of L states, one can increase the signal-to-noise ratio and construct a biochemical network that performs Maximum Likelihood Estimation (MLE) in the limit of large L. Since MLE is the statistically optimal estimator, this work suggests that it should be possible to improve the performance of synthetic-biology-based biodetectors by actively consuming energy.

Moreover, this trade-off between variability and energy consumption is likely to be quite general. Analytical arguments and numerical evidence suggest there may exist a general thermodynamic uncertainty relation connecting the variance of certain quantities in biochemical networks to the energy consumption [5, 35, 75]. In particular, achieving a relative uncertainty, \(\sigma ^2\) (the variance divided by the squared mean), in a quantity such as the number of consumed/produced molecules in a genetic circuit or the number of steps taken by a molecular motor requires an energetic cost of at least \(2k_BT/\sigma ^2\). This suggests that any strategy for reducing noise and variability in synthetic circuits will require these circuits to actively consume energy.
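Written as an inequality, this relation takes the form (our paraphrase of the bound discussed in [5, 35, 75], with \(\Delta W\) the total free energy dissipated during the measurement and \(n\) the output quantity of interest):

$$\begin{aligned} \frac{\sigma ^2}{\langle n \rangle ^2}\, \Delta W \ \ge \ 2 k_B T. \end{aligned}$$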

4 Amplifying Signal

Biochemical networks can also consume energy to amplify upstream input signals. Signal amplification is extremely important in many eukaryotic pathways designed to detect small changes in input such as the phototransduction pathway in the retina [24] or the T cell receptor signaling pathway in immunology. In these pathways, a small change in the steady-state number of input messenger molecules, dI, leads to a large change in the steady-state number of output molecules, dO. The ratio of these changes is the number gain, often just called the gain,

$$\begin{aligned} g_0 = {dO \over dI} \end{aligned}$$
(3)

with \(g_0>1\) implying that a small change in the number of input molecules produces a larger change in the number of output molecules.

Before proceeding further, it is worth making the distinction between the number gain, which clearly measures changes in absolute number, and another commonly employed quantity used to describe biochemical pathways called logarithmic sensitivity [24]. The logarithmic sensitivity, \({d\log {[O]} \over d \log {[I]}}\), measures the logarithmic change in the concentration of an output signal as a function of the logarithmic change in the input concentration and is a measure of the fractional or relative gain. Though logarithmic sensitivity and gain are often used interchangeably in the systems biology literature, the two measures are very different [24]. To see this, consider a simple signaling element where a ligand, L, binds to a protein X and changes its conformation to \(X^*\). The input in this case is L and the output is \(X^*\). To have \(g_0>1\), a small change in the number of ligands, dL, must produce a large change in the number of activated \(X^*\). Notice that by definition, in equilibrium, \({dX^* \over dL} <1\) since each ligand can bind only one X molecule. If instead n ligands bind cooperatively to each X, then one would have \({dX^* \over dL} <1/n\). Thus, cooperativity in fact reduces the number gain. In contrast, the logarithmic sensitivity increases dramatically, \({d \log {[X^*]} \over d \log {[L]}}=n\). An important consequence of this is that amplification of input signals (as measured by number gain) necessarily requires a non-equilibrium mechanism that consumes energy.
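The distinction can be checked numerically. The sketch below uses a minimal equilibrium model of our own (all-or-none binding of \(n\) ligands described by a Hill-type binding curve, together with a ligand conservation law; all parameter values are arbitrary): the number gain stays below \(1/n\), while the logarithmic sensitivity approaches \(n\) at low occupancy.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative equilibrium model: n ligands bind an X molecule cooperatively
# (all-or-none), activating it. Parameters are arbitrary choices for this sketch.
X_tot, K, n = 100.0, 1.0, 4

def x_star(L_free):
    """Activated X at a given free-ligand concentration (Hill form)."""
    r = (L_free / K) ** n
    return X_tot * r / (1.0 + r)

def free_ligand(L_total):
    """Solve the conservation law L_total = L_free + n * X*(L_free)."""
    return brentq(lambda Lf: Lf + n * x_star(Lf) - L_total, 0.0, L_total)

L_total = 20.0
dL = 1e-3
# Number gain: change in activated molecules per added ligand molecule.
g0 = (x_star(free_ligand(L_total + dL)) - x_star(free_ligand(L_total))) / dL
# Logarithmic sensitivity: d log X* / d log L_free.
Lf = free_ligand(L_total)
s = (np.log(x_star(Lf * (1 + 1e-4))) - np.log(x_star(Lf))) / 1e-4
print(f"number gain dX*/dL_tot = {g0:.3f}  (bounded by 1/n = {1/n})")
print(f"log sensitivity        = {s:.3f}  (approaches n = {n} at low occupancy)")
```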

The fact that energy consumption should be naturally related to the number gain and not logarithmic gain can be seen using both biological and physical arguments. The fundamental unit of energy is an ATP molecule. Since energy consumption is just a function of the total number of ATP molecules hydrolyzed, it is natural to measure gain using changes in the absolute numbers and not concentrations. From the viewpoint of physics, this is simply the statement that energy is an extensive quantity and hence depends on the actual number of molecules.

Fig. 3 Amplifying signals in a push–pull amplifier by consuming energy. Schematic illustrates a simple push–pull amplifier where a kinase, \(E_a\), converts a protein from X to \(X^*\) and a phosphatase, \(E_d\), catalyzes the reverse reaction. The plot illustrates that larger gain can be accomplished at the expense of a slower response time \(\tau \)

In biochemical networks, this signal amplification is accomplished through enzymatic cascades, where the input signal couples to an enzyme that can catalytically modify (e.g. phosphorylate) a substrate. Such enzymatic “push–pull” amplifiers are a basic building block of many eukaryotic biochemical pathways, and are a canonical example of how energy consumption can be used to amplify input signals (see Fig. 3). A push–pull amplifier consists of an activating enzyme \(E_a\) and a deactivating enzyme \(E_d\) that interconvert a substrate between two forms, X and \(X^*\). Importantly, the post-translational modification of X is coupled to a futile cycle such as ATP hydrolysis. The basic equations governing a push–pull amplifier are

$$\begin{aligned} {d X^* \over dt} = \Gamma _a (E_a) X - \Gamma _d(E_d) X^*, \end{aligned}$$
(4)

where \(\Gamma _a(E_a)\) is the rate at which enzyme \(E_a\) converts X to \(X^*\) and \(\Gamma _d(E_d)\) is the rate at which enzyme \(E_d\) converts \(X^*\) back to X. This rate equation must be supplemented by the conservation equation on the total number of X molecules,

$$\begin{aligned} X + X^* = X_{\mathrm {tot}}. \end{aligned}$$
(5)

In the linear-response regime where the enzymes work far from saturation, one can approximate the rates in (4) as \( \Gamma _a (E_a) \approx k_a [E_a]\) and \(\Gamma _d(E_d) \approx k_d [E_d]\), with \(k_a= k_a^{\mathrm {cat}}/K_a\) and \(k_d= k_d^{\mathrm {cat}}/K_d\) the ratios of the catalytic activity, \(k^{cat}\), to the Michaelis-Menten constant, \(K_M\), for the two enzymes. It is straightforward to show that the steady-state concentration of activated proteins is

$$\begin{aligned} {\bar{X}^*} = \frac{X_{\mathrm {tot}} k_a [E_a]}{k_a[E_a]+k_d[E_d]} \end{aligned}$$
(6)

Furthermore, one can define a “response time”, \(\tau \), for the enzymatic amplifier as the characteristic time over which a small perturbation from steady state, \(\delta X^* = X^*-\bar{X^*}\), decays. This yields (see [24] for details)

$$\begin{aligned} \tau = (k_a[E_a]+k_d[E_d])^{-1}. \end{aligned}$$
(7)

As discussed above, a key element of this enzymatic amplifier is that it works out of equilibrium. Each activation/deactivation event where the substrate cycles between the states \(X \mapsto X^* \mapsto X\) is coupled to a futile cycle (e.g. ATP hydrolysis) and hence dissipates an energy \(\Delta G_{\mathrm {cycle}}\). At steady-state, the power consumption of the enzymatic amplifier is

$$\begin{aligned} P=k_a [E_a] \bar{X}\Delta G_{\mathrm {cycle}}=k_d [E_d]\bar{X^*}\Delta G_{\mathrm {cycle}}. \end{aligned}$$
(8)

The input of the enzymatic amplifier is the number of activating enzymes \(E_a\) and the output of the amplifier is the steady-state number of active substrate molecules \(X^*\). This is natural in many eukaryotic signaling pathways where \(E_a\) is often a receptor that becomes enzymatically active upon binding an external ligand. Using (6)–(8), one can calculate the static gain and find

$$\begin{aligned} g_0= (P/[E_a] ) \tau (\Delta G_{\mathrm {cycle}})^{-1}. \end{aligned}$$
(9)

This expression shows that the gain of an enzymatic cascade is directly proportional to the power consumed per enzyme measured in the natural units of power that characterize the amplifier: \(\Delta G_{\mathrm {cycle}}/\tau \). This is shown in Fig. 3 where we plot the gain as a function of power consumption for different response times.
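The algebra above is easy to verify numerically. The following sketch (arbitrary parameter values, with the enzymes assumed to operate in the linear regime throughout) computes the steady state of Eq. (6), the response time of Eq. (7), and the power of Eq. (8), and checks that a finite-difference derivative of the steady-state output reproduces the gain formula of Eq. (9).

```python
# Illustrative parameters (arbitrary units) for the linear-regime push-pull amplifier.
k_a, k_d = 1.0, 1.0          # k_cat / K_M for the kinase and phosphatase
E_a, E_d = 2.0, 5.0          # enzyme concentrations (the input is E_a)
X_tot = 1000.0               # total substrate molecules
dG_cycle = 1.0               # free energy per modification cycle (units of k_B T)

def x_star(Ea):
    """Steady-state activated substrate, Eq. (6)."""
    return X_tot * k_a * Ea / (k_a * Ea + k_d * E_d)

tau = 1.0 / (k_a * E_a + k_d * E_d)        # response time, Eq. (7)
X_bar = X_tot - x_star(E_a)                # inactive substrate at steady state, Eq. (5)
P = k_a * E_a * X_bar * dG_cycle           # power consumption, Eq. (8)

# Gain from a finite-difference derivative of Eq. (6) ...
dE = 1e-6
g0_numeric = (x_star(E_a + dE) - x_star(E_a)) / dE
# ... compared with the power-gain relation of Eq. (9).
g0_formula = (P / E_a) * tau / dG_cycle
print(f"gain (finite difference): {g0_numeric:.4f}")
print(f"gain from Eq. (9):        {g0_formula:.4f}")
```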

Notice that the gain can be increased in two ways: by either increasing the power consumption or increasing the response time. Thus, at a fixed power consumption, increasing gain comes at the cost of a slower response. This is an example of a general engineering principle that is likely to be important for many applications in synthetic biology: the gain-bandwidth tradeoff [24]. In general, a gain in signal comes at the expense of a reduced range of response frequencies (bandwidth). If one assumes that there is a maximum response frequency (i.e. a minimal time required for a response, a natural assumption in any practical engineering system), the gain-bandwidth tradeoff is equivalent to a tradeoff between gain and response time. For this reason, energy consumption is likely to be an important consideration for synthetic circuits such as biosensors that must respond quickly to small changes in an external input. More generally, the gain-bandwidth tradeoff highlights the general tension between signal amplification, energy consumption, and signaling dynamics.

5 Erasing Memory

Memory is a central component of all computing devices. In a seminal 1961 paper [50], Landauer outlined the fundamental thermodynamic and kinetic constraints that must be satisfied by memory modules in physical systems. Landauer emphasized the physical nature of information and used this to establish a connection between energy dissipation and erasing/resetting memory modules. This was codified in what is now known as Landauer's principle: erasing a bit of memory, or any other logically irreversible operation, must dissipate energy (at least \(k_BT\ln 2\) per bit erased).

The best understood example of a cellular computation from the perspective of statistical physics is the estimation of a steady-state concentration of chemical ligand in the surrounding environment by a biochemical network. This problem was first considered in the seminal paper [13] by Berg and Purcell, who showed that the information a cell learns about its environment is limited by stochastic fluctuations in the occupancy of the receptors that detect the ligand. In particular, they considered the case of a cellular receptor that binds ligands at a concentration-dependent rate and unbinds them at a fixed rate. They argued that cells could estimate chemical concentrations by computing the average fraction of time a receptor is bound during a measurement period.
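A simple simulation illustrates the Berg–Purcell strategy. The sketch below (the rates and ligand concentration are arbitrary choices for illustration) simulates a single two-state receptor, records the fraction of a measurement window during which it is bound, and inverts the equilibrium binding curve \(p=c/(c+K_D)\) to estimate the concentration; the scatter of the estimate shrinks as the measurement time grows, in line with the Berg–Purcell argument.

```python
import numpy as np

rng = np.random.default_rng(1)
k_on, k_off, c_true = 1.0, 1.0, 2.0     # illustrative rates and ligand concentration

def occupancy_estimate(T):
    """Fraction of a measurement window T during which the receptor is bound,
    simulated as a two-state Markov process (the Berg-Purcell readout)."""
    t, bound, t_bound = 0.0, False, 0.0
    while t < T:
        rate = k_off if bound else k_on * c_true
        dwell = rng.exponential(1.0 / rate)
        if bound:
            t_bound += min(dwell, T - t)
        t += dwell
        bound = not bound
    return t_bound / T

K_D = k_off / k_on
for T in (10.0, 100.0, 1000.0):
    # Invert the binding curve p = c/(c + K_D) to estimate the concentration.
    ests = []
    for _ in range(500):
        p = min(occupancy_estimate(T), 0.999)   # guard against p = 1 on short windows
        ests.append(K_D * p / (1.0 - p))
    ests = np.array(ests)
    print(f"T = {T:6.0f}: mean estimate = {ests.mean():.2f}, "
          f"relative std = {ests.std() / c_true:.3f}")
```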

In these studies, the biochemical networks downstream of the receptors that perform the desired computations were largely ignored because the authors were interested in calculating fundamental limits on how well cells can estimate external concentrations. However, calculating energetic costs requires an explicit model of the downstream biochemical networks that implement these computations. As Feynman [30] and Landauer [51] emphasized, “Information is physical.”

Fig. 4 A two-component network as a computational module. a Cellular network that calculates the Berg–Purcell statistic for estimating the concentration of an external ligand. b Table summarizing the relationship between the network and standard computational elements and techniques

Recently, we considered a simple two-component biochemical network that directly computes the Berg–Purcell estimator [56]. Information about external ligand concentration is stored in the levels of a downstream protein (shown in Fig. 4). Such two-component networks are common in bacteria and are often used to sense external signals with receptors phosphorylating a downstream response regulator. Receptors convert a downstream protein from an inactive form to an active form at a state-dependent rate. The proteins are then inactivated at a state-independent rate. Interestingly, one can explicitly map components and functional operations in the network onto traditional computational tasks (see Fig. 4). Furthermore, it was shown that within the context of this network, computing the Berg–Purcell statistic necessarily required energy consumption. The underlying reason for this is that erasing/resetting memory requires energy (we note that while Landauer emphasized that erasing and not writing requires energy [50], a recent paper argues energy consumption is bounded by writing to memory [74]). These results seem to be quite general and similar conclusions have been reached by a variety of authors examining other biochemical networks [16, 35, 36, 66]. Note that there also exists an extensive literature on reversible computing which we will not address here [1].

These ideas have important implications for synthetic biology. Much as memory is central to the function of modern computers, biological memory modules are a crucial component of many synthetic gene circuits [17, 77]. Any reusable synthetic circuit must possess a memory module that it can write and erase. Currently, synthetic circuits use two general classes of reusable memory modules: protein-based bistable genetic switches [32] and recombinase-based DNA memory [17, 77] (we are ignoring DNA mutation-based memories that can only be used once [29]). In both cases, resetting the memory involves consuming energy by expressing and degrading proteins (proteins involved in bistability and recombinases, respectively). Although this energy consumption is fundamental to any reusable memory module, it is desirable to find less energetically costly reusable memories that can still be stable over many generations such as chromatin-based memory modules [45, 46]. As synthetic circuits become increasingly complex, these energetic costs are likely to be ever more important.

6 Using Energy Consumption to Improve Synthetic Circuits

Energy consumption is a defining feature of most information processing networks found in living systems. The theoretical work reviewed here provides new insights into biochemical networks. The greatest difference between equilibrium and non-equilibrium systems is that in equilibrium, the energy differences between states fundamentally determine the dynamics of the system, while in a non-equilibrium system the energy differences and dynamics become decoupled. This can be utilized in a variety of ways by biochemical networks, and we have broadly divided the useful cases into relatively independent roles: increasing specificity, manipulating dynamics, reducing variability, amplifying signals, and erasing memory. We believe that focusing on examples of each role will allow theorists and experimentalists to establish a common language and further both non-equilibrium physics and synthetic biology. One beautiful outcome of the interplay between theory and experiment is the recent work showing that a kinetic insulator that actively consumes energy can restore modularity and eliminate retroactivity in a simple synthetic circuit [58].

The theoretical results reviewed here can be summarized into several broad lessons on energy consumption that may prove useful for synthetic biology, as well as provide theorists with connections to future experiments.

  • Fundamental Trade-Offs Response speed, sensitivity, and energy consumption are in direct competition: improving one generally comes at the expense of the others.

  • Saturation of Trade-Offs Current work suggests that saturation effects are ubiquitous [11, 49, 61, 62, 82] in the energy consumption of biochemical networks, and therefore only a few ATP molecules may be enough [52] to nearly achieve the fundamental limits.

  • Futile Cycles are NOT Futile Futile cycles appear wasteful when one considers only their energy costs, but they can provide benefits in terms of the fundamental trade-offs.

  • Reusable Logic Must Consume Energy This is just the biological realization of Landauer’s principle. Memory is especially important for circuits that function in stochastic environments where it is necessary to time-average over stochastic input signals.

  • Chains are Useful While it may seem redundant to have long chains of identical parts, if the chain consumes energy this can improve specificity and reduce variation.

  • Time Reversal Symmetry While equilibrium systems respect time reversal symmetry (forward and backward flows are equivalent), energy consumption and non-equilibrium systems necessarily break this symmetry. This is especially important for synthetic circuits that seek to time-average stochastic inputs.

  • Manipulate Time Scales Consuming energy can be useful to change the time scale of dynamics, as illustrated by the example of retroactivity and the introduction of energy consuming insulators.

  • Information is Physical Theorists should heed Landauer and Feynman’s advice and attempt to translate theoretical advances into physical/biological devices.

We will end by focusing on one specific example that we believe is especially timely for synthetic biology. In naturally occurring biochemical networks, the primary sources of energy are futile cycles associated with post-translational modifications such as phosphorylation and methylation of residues. In contrast, energy dissipation in most synthetic circuits takes the form of the production and degradation of proteins. From the viewpoint of both energy and dynamics, protein degradation is an extremely inefficient solution to the problem. Proteins are metabolically expensive to synthesize, especially when compared to post-translational modifications. This may be one reason that most of the information processing and computation in eukaryotic signaling pathways is done through enzymatic cascades.

Designing synthetic circuits that can reap the full benefits of energy consumption requires developing new biological parts based around post-translational modifications of proteins. Such a “post-transcriptional” synthetic biology would allow designers to harness the manifold gains in performance that come from actively consuming energy without the extraordinary metabolic costs associated with protein synthesis. Currently, the power of this approach is limited by the dearth of circuit components that act at the level of post-translational modifications of proteins. Two promising directions that seek to overcome these limitations are phosphorylation-based synthetic signaling networks [7, 53, 84, 87] and chromatin-based synthetic biology [46] that exploits reversible chromatin marks such as acetylation and methylation. In both cases, synthetic biologists are starting to engineer modular libraries of enzymes (kinases, phosphatases, chromatin reader–writers) to post-translationally modify specific protein substrates in response to particular signals. This will allow synthetic biology to take advantage of the increased information processing capabilities that arise from energy consumption.