
1 Introduction

Structural analysis of a dynamical system aims at revealing behavioral patterns that occur regardless of the adopted parameters or, at least, for wide parameter ranges. Due to their parametric variability, biological models are often subject to structural analysis, which can be a very useful tool to reveal or rule out potential dynamic behaviors.

Even for very simple networks, simulations are the most common approach to structural investigation. For instance, three-node enzymatic networks are considered in [1], where numerical analysis shows that adaptability is mostly determined by the interconnection topology rather than by specific reaction parameters. In [2], through numerical exploration of the Jacobian eigenvalues for two, three and four node gene networks, the authors isolate a series of interconnections which are stable, robustly with respect to the specific parameters; the isolated structures also turn out to be the most frequent topologies in existing biological network databases. For other examples of numerical robustness analysis, see, for instance, [3–8].

Analytical approaches to the study of robustness have been proposed in specific contexts. A series of recent papers [9, 10] focused on input/output robustness of ODE models for phosphorylation cascades. In particular, the theory of chemical reaction networks is used in [10] as a powerful tool to demonstrate the property of absolute concentration robustness. Indeed, the so-called deficiency theorems are to date some of the most general results to establish robust stability of a chemical reaction network [11]. Monotonicity is also a structural property, often useful to demonstrate certain dynamic behaviors in biological models by imposing general interaction conditions [12, 13]. Robustness has also been investigated in the context of compartmental models, common in biology and biochemistry [14]. A survey on the problem of structural stability is proposed in [15].

Here we review and expand on the framework we proposed in [16], where we suggest a variety of tools for the investigation of robust stability, including Lyapunov and set-invariance methods, and conditions on the network graph. We will assume that certain standard properties or assumptions are verified by our model, for example positivity, monotonicity of key interactions, and boundedness. Based on such general assumptions, we will show how dynamic behaviors can be structurally proved or ruled out for a range of examples. Our approach does not require numerical simulation efforts, and we believe that our techniques are instrumental for biological robustness analysis [17, 18].

The chapter begins with a motivating example and a brief summary of the analysis framework in [16]. Then we consider a number of “paradigmatic behaviors” encountered in biochemical systems, including multistationarity, oscillations, and adaptation; through simple examples, we show how these behaviors can be deduced analytically without resorting to simulation. As relevant case studies, we consider a simplified model of the MAPK pathway and the lac operon. Finally, we prove some general results on structural stability and boundedness for qualitative models that satisfy certain graphical conditions.

1.1 Motivating Example: A Qualitative Model for Transcriptional Repression

Consider a molecular system where a protein, \(x_1\), is translated at a certain steady rate and represses the production of an RNA species \(x_2\). In turn, \(x_2\) is the binding target of another RNA species \(u_2\) (\(x_2\) and \(u_2\) bind and form an inactive complex to be degraded); unbound \(x_2\) is translated into protein \(x_3\). A standard parametric model is given, for example, by Eq. (2.1) [19].

$$\begin{aligned} \dot{x}_1&= k_1 u_1 - k_2 x_1,\nonumber \\ \dot{x}_2&= k_3 \frac{1}{K_1^n+x_1^n} - k_4x_2 - k_5 x_2 u_2, \\ \dot{x}_3&= k_6 x_2 - k_7 x_3.\nonumber \end{aligned}$$
(2.1)

One might ask what kind of dynamic behaviors can be expected from this system. Since we cannot analytically solve these ODEs, numerical simulations would provide us with answers that depend on the parameters we believe are the most accurate in representing the physical system. Parameters might have been derived by fitting noisy data, so they are uncertain in practically all cases. The purpose of this chapter is to highlight how we can reach important conclusions on the potential dynamic behavior of a molecular system without knowing the value of each parameter.

In this specific example, we know that the system parameters are positive and bounded scalars. The Hill function \( H(x_1)=k_3 / \left( {K_1^n+x_1^n}\right) \) is a decreasing function, sufficiently “flat” near the origin (i.e. with zero derivative), with a single inflection point (its second derivative has a single zero) [19, 20]. Then, we can say that, for \(u_1\) and \(u_2\) constant or varying on a timescale slower than that of this system, \(x_1\) will converge to its equilibrium \(\bar{x}_1=k_1 u_1/k_2\). Similarly, \(\bar{x}_2= H(\bar{x}_1)/(k_4+k_5u_2)\), \(\bar{x}_3=k_6\bar{x}_2/k_7\). Regardless of the specific parameter values, and therefore robustly, the system is stable. While the equilibrium value for the protein \(\bar{x}_1\) could grow unbounded with \(u_1\), the RNA species \(\bar{x}_2\) is always bounded.
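
To make this concrete, the following minimal numerical sketch (in Python) computes the equilibrium and checks local stability for one realization of Eq. (2.1); all parameter values are arbitrary placeholders chosen only for illustration, not values fitted to any real system.

```python
import numpy as np

# Arbitrary illustrative parameters for one realization of Eq. (2.1)
k1, k2, k3, k4, k5, k6, k7 = 1.0, 0.5, 2.0, 0.3, 0.4, 1.2, 0.8
K1, n = 1.0, 2
u1, u2 = 0.6, 0.5                      # inputs, held constant (or slowly varying)

H = lambda x1: k3 / (K1**n + x1**n)    # decreasing Hill-type production term

# Equilibrium, computed exactly as in the text
x1_bar = k1 * u1 / k2
x2_bar = H(x1_bar) / (k4 + k5 * u2)
x3_bar = k6 * x2_bar / k7

# Jacobian at the equilibrium: lower triangular, so its eigenvalues are the
# diagonal entries -k2, -(k4 + k5*u2), -k7, negative for ANY positive parameters
dH = -k3 * n * x1_bar**(n - 1) / (K1**n + x1_bar**n)**2
J = np.array([[-k2, 0.0, 0.0],
              [dH, -(k4 + k5 * u2), 0.0],
              [0.0, k6, -k7]])
print("equilibrium:", (x1_bar, x2_bar, x3_bar))
print("eigenvalues:", np.linalg.eigvals(J))
```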

2 Qualitative Models for Biological Dynamical Systems

The interactions of RNA species, proteins and biochemical ligands are at the basis of cellular development, growth, and motion. Such interactions are often complex and impossible to measure quantitatively. Thus, qualitative models, such as boolean networks and graph-based methods, are useful tools when trying to make sense of very coarse measurements indicating a correlation or static relationship among different species. When dynamic data are available, it is possible to build qualitative ordinary differential equation models. Rather than choosing specific functional forms to model species interactions (such as Hill functions or polynomial terms), one can just make general assumptions on the sign, trend and boundedness of said interactions. While such models are clearly not amenable to data fitting, they still allow us to reach useful analytical conclusions on the potential dynamic behaviors of a system.

The general class of qualitative biological models we consider are ordinary differential equations whose terms belong to four different categories:

$$\begin{aligned} \dot{x}_i(t) =&\sum _{j \in \mathcal{A}_i} a_{ij}(x) x_j - \sum _{h \in \mathcal{B}_i} b_{ih}(x) x_h + \sum _{s \in \mathcal{C}_i} c_{is}(x) + \sum _{l \in \mathcal{D}_i} d_{il}(x). \end{aligned}$$
(2.2)

Variables \(x_i\), \(i=1,\ldots , n\), are concentrations of species. The different terms in Eq. (2.2) are associated with a specific biological and physical meaning. Terms \(a_{ij}(x)x_j\) are associated with production rates of reagents; typically, these functions are assumed to be polynomial in their arguments; similarly, terms \(b_{ih}(x)x_h\) model degradation or conversion rates and are also likely to be polynomial in practical cases. Finally, terms \(c(\cdot )\) and \(d(\cdot )\) are associated with monotonic nonlinear terms, respectively non-decreasing and non-increasing; these terms are a qualitative representation of Michaelis–Menten or Hill functions [20].

Sets \(\mathcal{A}_i\), \(\mathcal{B}_i\), \(\mathcal{C}_i\), \(\mathcal{D}_i\) denote the subsets of variables affecting \(x_i\). In general, more than one species can participate in the same term affecting a given variable. For instance one may have an interaction \(2 \rightarrow 1\) influenced also by species \(x_3\): \(a_{12}(x_1,x_3) x_2\). (The alternative notation choice, \(a_{13}(x_1,x_2) x_3\) would be possible.) To keep our notation simple, we do not denote external inputs with a different symbol. Inputs can be easily included as dynamic variables \(\dot{x}_u=w_u(x_u,t)\) which are not affected by other states and have the desired dynamics.
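
As an illustration of this bookkeeping (a sketch in Python, not part of the original framework), a system of the form (2.2) can be encoded by listing, for each state, its \(a\)-, \(b\)-, \(c\)- and \(d\)-terms as callables; the right-hand side is then assembled term by term.

```python
import numpy as np

def rhs(x, terms):
    """Assemble dx/dt as in Eq. (2.2); terms[i] holds the terms affecting x_i:
    'a' and 'b' entries are pairs (coefficient_function, j), 'c' and 'd' entries
    are plain functions of the whole state vector."""
    dx = np.zeros(len(x))
    for i, t in enumerate(terms):
        dx[i] += sum(fn(x) * x[j] for fn, j in t.get("a", []))   # production a_ij(x)*x_j
        dx[i] -= sum(fn(x) * x[j] for fn, j in t.get("b", []))   # degradation b_ih(x)*x_h
        dx[i] += sum(fn(x) for fn in t.get("c", []))             # non-decreasing terms
        dx[i] += sum(fn(x) for fn in t.get("d", []))             # non-increasing terms
    return dx

# Toy example:  dx0/dt = d(x1) - b*x0,   dx1/dt = a*x0 - b*x1
terms = [
    {"d": [lambda x: 1.0 / (1.0 + x[1] ** 2)], "b": [(lambda x: 0.5, 0)]},
    {"a": [(lambda x: 1.0, 0)], "b": [(lambda x: 0.5, 1)]},
]
print(rhs(np.array([0.2, 0.1]), terms))
```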

2.1 General Assumptions

We denote with \( \tilde{x}_i = [x_1~x_2 \dots x_{i-1}~x_{i+1} \dots x_n] \) the vector of the \(n-1\) components complementary to \(x_i\) (e.g. in \(\mathrm{I\!R}^4\), \(\tilde{x}_2 = [x_1~x_3~x_4]\)). Then any function \(f(x)\) can be written as \(f(\tilde{x}_j, x_j)\), for any \(j\), to emphasize its dependence on \(x_j\). In the remainder of this chapter, we assume that system (2.2) satisfies the following assumptions:

A 1

(Smoothness) Functions \(a_{ij}(\cdot )\), \(b_{ih}(\cdot )\), \(c_{is}(\cdot )\) and \(d_{il}(\cdot )\) are nonnegative, continuously differentiable functions.

A 2

Terms \(b_{ij}(x)x_j=0\), for \(x_i=0\). This means that either \(i=j\) or \(b_{ij}(\tilde{x}_i,0)=0\).

A 3

Functions \(b_{ij}(x)x_j\) and \(a_{ih}(x)x_h\) are strictly increasing in \(x_j\) and \(x_h\), respectively.

A 4

(Saturation) Functions \(c_{is}(\tilde{x}_s ,x_s)\) are nonnegative and non-decreasing in \(x_s\), while \(d_{il}(\tilde{x}_l ,x_l) \) are nonnegative and non-increasing in \(x_l\). Moreover, \(c_{is}(\tilde{x}_s,\infty ) >0\), \(d_{il}(\tilde{x}_l,0)>0\), and both are globally bounded.

In view of the nonnegativity assumptions and Assumption A2, our general model (2.2) is a nonlinear positive system and its investigation will be restricted to the positive orthant. We note that reducing dynamic interactions to the form \(b_{ij}(x)x_j\) and \(a_{ih}(x)x_h\) is always possible under mild assumptions: for instance, if species \(j\) affects species \(i\) through a monotonic functional term \(f_{ij}(\tilde{x}_j ,x_j)\) which has a locally bounded derivative and satisfies \(f_{ij}(\tilde{x}_j ,0)=0\), then it can always be rewritten as \( f_{ij}(x) = \left( {f_{ij}(x)}/{x_j}\right) x_j = a_{ij}(x) x_j \) (see [14], Sect. 2.1). Using the general class of models (2.2) and Assumptions A1–A4 as a working template for analysis, we will focus on a series of paradigmatic dynamic behaviors which can be structurally identified or ruled out in example systems of interest.

2.2 Glossary of Properties

The structural analysis of system (2.2) can be greatly facilitated whenever it is legitimate to assume that functions \(a\), \(b\), \(c\), \(d\) have certain properties such as positivity, monotonicity, boundedness and other functional characteristics that can be considered “qualitative and structural properties” [15]. Through such properties, we can draw conclusions on the dynamic behaviors of the considered systems without requiring specific knowledge of parameters and without numerical simulations. However, it is clear that our approach requires more information than other methods, such as boolean networks and other graph-based frameworks.

For the reader’s convenience, a list of possible properties and their definitions is given below, for functions of a scalar variable \(x\).

P 1

\(f(x)=\text {const}\ge 0\) is nonnegative-constant.

P 2

\(f(x)=\text {const}>0\) is positive-constant.

P 3

\(f(x)\) is sigmoidal: it is non-decreasing, \(f(0)=f'(0)=0\), \(0 < f(\infty ) <\infty \), and its derivative has a unique maximum point, i.e. \(f'(x) \le f'(\bar{x})\) for some \(\bar{x} >0\).

P 4

\(f(x)\) is complementary sigmoidal: it is non-increasing, \(0 < f(0)\), \(f'(0)=0\), \(f(\infty )=0\), and its derivative has a unique minimum point. In simple words, \(f\) is complementary sigmoidal if and only if \(f(0)-f(x)\) is a sigmoidal function.

P 5

\(f(x)\) is constant-sigmoidal, the sum of a sigmoid and a positive constant.

P 6

\(f(x)\) is constant-complementary-sigmoidal, the sum of a complementary sigmoid and a constant.

P 7

\(f(x)\) is increasing-asymptotically-constant: \(f'(x)>0\), \(0 <f(\infty )<\infty \) and its derivative is decreasing.

P 8

\(f(x)\) is decreasing-asymptotically-null: \(f'(x)<0\), \(f(\infty )=0\) and its derivative is increasing.

P 9

\(f(x)\) is decreasing-exactly-null: \(f'(x)<0\), for \(x < \bar{x}\) and \(f(x)=0\) for \(x \ge \bar{x}\) for some \(\bar{x} >0\).

P 10

\(f(x)\) is increasing-asymptotically-unbounded: \(f'(x)>0\), \(f(\infty )=+\infty \).

As an example, the terms \(d(\cdot )\) and \(c(\cdot )\) are in general associated with Hill functions, which are complementary sigmoidal and sigmoidal functions, respectively. In some cases it will be extremely convenient to introduce assumptions which are mild in a biological context but assure a strong simplification of the mathematics. One possible assumption is that a sigmoid or a complementary sigmoid is cropped (Fig. 2.1). A cropped sigmoid is exactly null below a certain threshold \(x^-\) and exactly constant above another threshold \(x^+\). A cropped complementary sigmoid is exactly constant below \(x^-\) and exactly null above \(x^+\).

Fig. 2.1

Cropped sigmoids and complementary sigmoids

These assumptions extend naturally to multivariable functions, just by considering one variable at a time. For instance \(f(x_1,x_2)\) can be a sigmoid in \(x_1\) and decreasing in \(x_2\).
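
For concreteness, here is a small Python sketch of the function classes used above; the Hill exponents, thresholds and saturation levels are arbitrary illustrative choices.

```python
import numpy as np

def sigmoid(x, K=1.0, n=2, c=1.0):
    """Hill-type sigmoid: non-decreasing, zero value and slope at the origin, bounded by c."""
    return c * x**n / (K**n + x**n)

def comp_sigmoid(x, K=1.0, n=2, c=1.0):
    """Complementary sigmoid: c - sigmoid(x); non-increasing, equal to c at the origin."""
    return c - sigmoid(x, K, n, c)

def cropped_sigmoid(x, lo=0.5, hi=2.0, c=1.0):
    """Exactly 0 below lo, exactly c above hi, smooth and increasing in between."""
    t = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return c * (3 * t**2 - 2 * t**3)          # smoothstep: zero slope at both thresholds

def cropped_comp_sigmoid(x, lo=0.5, hi=2.0, c=1.0):
    """Exactly c below lo, exactly 0 above hi."""
    return c - cropped_sigmoid(x, lo, hi, c)

x = np.linspace(0.0, 3.0, 7)
print(cropped_sigmoid(x))
print(cropped_comp_sigmoid(x))
```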

2.3 Network Graphs

Building a dynamical model for a biological system is often a long and challenging process. For instance, to reveal dynamic interactions among a pool of genes of interest, biologists may need to selectively knock out genes, set up microRNA assays, or integrate fluorescent reporters in the genome. The data derived from such experiments are often noisy and uncertain, which implies that the estimated model parameters will also be uncertain. However, qualitative trends can be reliably assessed in the dynamic or steady-state correlation of biological quantities. Graphical representations of such qualitative trends are often used by biologists to provide intuition regarding the main features of a network.

Building on the general model (2.2), we can associate species with nodes of a graph, and different qualitative relationships between species with different types of arcs: terms \(a\), \(b\), \(c\) and \(d\) can be represented as arcs having different end-arrows, as shown in Fig. 2.2.

Fig. 2.2

Arcs associated to the different terms of our general model (2.2), and example graph

These graphs can be immediately constructed by knowing the correlation trends among the species of the network, and they serve as a support for the construction and analysis of a dynamical model. For simple networks, these graphs may facilitate structural robustness analysis.

Our main objective is to show that, at least for reasonably simple networks, structural robust properties can be investigated with simple analytical methods, without the need for extensive numerical analysis. We suggest a two-stage approach:

  • Preliminary screening: establish essential information on the network structure, recognizing which properties (such as P1–P10) pertain to each link.

  • Analytical investigation: infer robustness properties based on dynamical systems tools such as Lyapunov theory, set invariance and linearization.

2.4 Example, Continued: Transcriptional Repression

The model for the transcriptional repression system in Eq. (2.1) [19] can be recast in the general class of models (2.2), and we can immediately draw the corresponding graph (Fig. 2.3).

$$\begin{aligned} \dot{x}_1&= u_1 - b_{11} x_1,\\ \dot{x}_2&= d_{21}(x_1) - b_{22} x_2 - b_{2u_2}\; x_2\; u_2, \nonumber \\ \dot{x}_3&= a_{32} x_2 - b_{33} x_3.\nonumber \end{aligned}$$
(2.3)
Fig. 2.3

Graph corresponding to the transcriptional repression example in Sect. 2.1.1

Terms \(a_{ij}\) capture first order production rates; \(b_{ih}\) capture first order degradation rates. Term \(d_{21}(x_1)\) is our general substitute for the Hill function [19, 20]; we assume it is a decreasing function with null derivative at the origin, whose second derivative has a single zero (inflection point), being negative on the left of the zero and positive on the right (such as \(1/(1+x_1^n)\), \(n >1\)).

3 Robustness and Structural Properties

We now clarify the concepts of robustness and structural properties and their relations.

Definition 1

Let \(\mathcal{C}\) be a class of systems and \(\mathcal{P}\) be a property pertaining to such a class. Given a family \(\mathcal{F} \subset \mathcal{C}\) we say that \(\mathcal{P}\) is robustly verified by \(\mathcal{F}\) (in short, robust) if it is satisfied by each element of \(\mathcal{F}\).

Countless examples can be given of families \(\mathcal{F}\) and candidate properties. Stability of equilibria, for instance, is one of the most investigated structural properties [2, 13, 21].

When we say structural property we refer to the properties of a family \(\mathcal{F}\) whose “structure” has been specified. In our case, the structure of a system is the fact that it belongs to the general class (2.2), thus it satisfies Assumptions A1–A4, and it enjoys properties in the set P1–P10.

A realization is any system with the assumed structure and properties, obtained by choosing specific functions which satisfy these assumptions. The set of all realizations is a class. For instance, going back to the transcriptional repression example, the dynamical system:

$$\begin{aligned} \dot{x}_1&= u_1 - 2 x_1,\\ \dot{x}_2&= \frac{1}{1+x_1^n} - x_2 - 2 x_2 u_2, \\ \dot{x}_3&= 2 x_2 - 2 x_3,\\ \end{aligned}$$

is a realization of the class represented by system (2.3).

Definition 2

A property \(\mathcal{P}\) is structural for a class \(\mathcal{C}\), if any realization satisfies \(\mathcal{P}\).

Note that demonstrating that a property is structural is harder than proving that it is not (the latter typically only requires showing the existence of a realization which exhibits the considered structure but does not satisfy the property). For example, consider the matrices:

$$ A_1 = \left[ \begin{array}{cc} -a &{} ~ ~b \\ -c &{} -d \end{array} \right] ~~~ A_2 = \left[ \begin{array}{cc} -a &{} ~ ~b \\ ~c &{} -d \end{array} \right] $$

with \(a\), \(b\), \(c\) and \(d\) positive real parameters. To show that \(A_1\) is structurally stable one has to show that its eigenvalues have negative real part (in this case, a simple proof). Conversely, to show that \(A_2\) is not structurally stable, it is sufficient to find a realization which is not stable, such as \(a=1\), \(b=1\), \(c=2\) and \(d=1\).
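
The distinction can also be explored numerically. The Python sketch below (with arbitrarily sampled positive parameters) illustrates that every sampled realization of \(A_1\) is stable, while the single counterexample above already disproves structural stability of \(A_2\); sampling, of course, only illustrates the property for \(A_1\) and does not replace the analytical proof.

```python
import numpy as np

def is_hurwitz(A):
    return bool(np.all(np.linalg.eigvals(A).real < 0))

rng = np.random.default_rng(0)
for _ in range(1000):                          # random positive parameters
    a, b, c, d = rng.uniform(0.01, 10.0, size=4)
    A1 = np.array([[-a,  b], [-c, -d]])
    assert is_hurwitz(A1)                      # trace < 0 and det = a*d + b*c > 0 always

a, b, c, d = 1.0, 1.0, 2.0, 1.0                # counterexample from the text
A2 = np.array([[-a, b], [c, -d]])
print("A2 stable for a=b=d=1, c=2?", is_hurwitz(A2))   # False: det = a*d - b*c < 0
```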

4 Paradigmatic Structural Properties

We now present an overview of properties that are particularly relevant in systems and synthetic biology. Through simple examples, we highlight how our general approach can be used to determine analytically the structural nature of such properties.

4.1 Multistationarity

A multistationary system is characterized by the presence of several possible equilibria. Of particular interest are those systems in which there are three equilibria, of which two are stable and one unstable, i.e., the system is bistable.

Fig. 2.4

Sketch of a bistable system

We consider a simple example of a multistationary system (Fig. 2.4):

$$\begin{aligned} \dot{x}_1&= x_0 + c_{12}(x_2) - b_{11} x_1 \\ \dot{x}_2&= a_{21} x_1 - b_{22} x_2\nonumber \end{aligned}$$
(2.4)

with \(b_{11}\), \( b_{22}\) and \(a_{21}\) positive constants, and with \(c_{12}(x_2)\) a (non-decreasing) sigmoidal function. We assume \(x_0 \ge 0\). The following proposition holds:

Proposition 1

For \(x_0\) small enough and for \( b_{11} b_{22}/a_{21}\) small enough, system (2.4) has three equilibria, two stable and one unstable. Conversely, for \(x_0\) large or \(b_{11} b_{22}/a_{21}\) large the system admits a unique, stable equilibrium.

Explanation. Setting \(\dot{x}_1=0\) and \(\dot{x}_2=0\) we find the equilibria as the roots of the following equation:

$$ c_{12}(x_2)+ x_0 = \frac{b_{11} b_{22}}{a_{21}}x_2 $$

From Fig. 2.5, it is apparent that if \(x_0\) is small and the slope of the line \(\frac{b_{11} b_{22}}{a_{21}}x_2\) is small, there must be three intersections. Conversely, there is a single intersection for either \(x_0\) or \(\frac{b_{11} b_{22}}{a_{21}}\) large. \({ {\square }}\)

Fig. 2.5

Sketch of the nullclines for system (2.4)

Fig. 2.6

Schematic representation of oscillatory behavior

If three intersections (points \(A\), \(B\), \(C\) in Fig. 2.5) are present, there are two stable points, \(A\) and \(C\), and one unstable point, \(B\). This can be seen by inspecting the Jacobian:

$$ J = \left[ \begin{array}{cc} -b_{11} &{} c_{12}'(\bar{x}_2) \\ a_{21} &{} -b_{22} \end{array} \right] , $$

whose characteristic polynomial is:

$$ p(s) = s^2 + (b_{11}+b_{22})s + b_{11}b_{22}-a_{21}c_{12}'(\bar{x}_2). $$

This second order polynomial is stable if \(b_{11}b_{22}-a_{21}c_{12}'(\bar{x}_2)>0\) or

$$ c_{12}'(\bar{x}_2) <\frac{b_{11} b_{22}}{a_{21}}, $$

namely the slope of the sigmoidal function at the equilibrium must be smaller than the slope \({b_{11} b_{22}}/{a_{21}}\) of the line. This is the case at points \(A\) and \(C\), while the condition is violated at point \(B\).
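
A quick numerical check of Proposition 1, for one illustrative realization of system (2.4) (Hill-type sigmoid, arbitrary constants), locates the intersections of the two curves and classifies them with the slope condition above:

```python
import numpy as np
from scipy.optimize import brentq

b11, b22, a21, x0 = 0.5, 0.5, 1.0, 0.05
c12 = lambda x2: x2**4 / (1.0 + x2**4)          # sigmoidal production term
dc12 = lambda x2: 4 * x2**3 / (1.0 + x2**4)**2  # its derivative
slope = b11 * b22 / a21

g = lambda x2: c12(x2) + x0 - slope * x2        # zero at every equilibrium value of x2
grid = np.linspace(0.0, 6.0, 6001)
roots = [brentq(g, lo, hi) for lo, hi in zip(grid[:-1], grid[1:]) if g(lo) * g(hi) < 0]

for x2_eq in roots:                              # expect three equilibria here
    print(f"x2 = {x2_eq:.3f}   stable: {dc12(x2_eq) < slope}")
```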

4.2 Oscillations

Oscillations in molecular and chemical networks are a well-studied phenomenon (see, for instance, [22]). Periodicity in molecular concentrations underlies cell division, development, and circadian rhythms. One of the first examples considered in the literature is the well known Lotka–Volterra predator–prey system, whose biochemical implementation has been studied and attempted in the past [23, 24]. In our general setup, the Lotka–Volterra model is (Fig. 2.6):

$$\begin{aligned} \dot{x}_1&= \, a_{11}x_1 - b_{12}(x_2) x_1 \\ \dot{x}_2&= \,a_{21}(x_1)x_2 - b_{22}x_2, \end{aligned}$$

where all functions are strictly increasing and asymptotically unbounded in all arguments. The system admits a single non-trivial equilibrium, the solution of equations:

$$\begin{aligned} 0&= a_{11} - b_{12}(x_2) \\ 0&= a_{21}(x_1) - b_{22}. \end{aligned}$$

The Jacobian of this system at the unique equilibrium is:

$$ J = \left[ \begin{array}{cc} 0 &{} -b_{12}'(x_2) x_1 \\ a_{21}'(x_1) x_2&{} 0 \end{array}\right] . $$

This matrix clearly admits purely imaginary eigenvalues for any realization of the functional terms. Thus, oscillations are a structural property.
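
As a sanity check (a Python sketch with arbitrary increasing, unbounded choices for \(b_{12}\) and \(a_{21}\)), one realization of this qualitative model indeed produces a closed orbit around its non-trivial equilibrium:

```python
import numpy as np
from scipy.integrate import solve_ivp

a11, b22 = 1.0, 1.0
b12 = lambda x2: 0.5 * x2        # strictly increasing, asymptotically unbounded
a21 = lambda x1: 0.5 * x1

def f(t, x):
    x1, x2 = x
    return [a11 * x1 - b12(x2) * x1, a21(x1) * x2 - b22 * x2]

# Non-trivial equilibrium: b12(x2) = a11 and a21(x1) = b22, i.e. (2, 2) here;
# the Jacobian there is [[0, -1], [1, 0]], with purely imaginary eigenvalues.
sol = solve_ivp(f, (0.0, 30.0), [1.0, 1.0], max_step=0.01)
print("x1 range along the orbit:", sol.y[0].min(), sol.y[0].max())
```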

In second order systems, sustained oscillations require the presence of a positive self-loop (an autocatalytic reaction), represented in this case by the \(a_{11}\) term.

To achieve oscillations without a positive loop reaction, the system must be of at least third order. For instance the following model

$$\begin{aligned} \dot{x}_1&= x_{10}d_{13}(x_3)-b_{11} x_1\\ \dot{x}_2&= a_{21} x_1 -b_{22} x_2,\nonumber \\ \dot{x}_3&= a_{32}x_2 -b_{33} x_3,\nonumber \end{aligned}$$
(2.5)

where \(d_{13}(x_3)\) is a complementary sigmoid and the constants are positive, is a candidate oscillator. Term \(x_{10}\) is an external input which catalyzes the production term \(d_{13}(x_3)\).

Proposition 2

System (2.5) admits a unique equilibrium. If the minimum value of the slope \(d_{13}'(x_3)\) is sufficiently large, there exists an interval (possibly unbounded from above) of input values \(x_{10}\) inducing an oscillatory transition to instability.

Explanation. The unique equilibrium point can be derived from the conditions \(\dot{x}_1 = \dot{x}_2 = \dot{x}_3 = 0\):

$$ x_{10}d_{13}(x_3)= \frac{b_{11}b_{22}b_{33}}{a_{21}a_{32}}x_3. $$

Figure 2.7 shows the qualitative trend of the nullclines above, and clearly highlights that they admit a single intersection.

Fig. 2.7

Qualitative trend of the nullclines for system (2.5).

Assume that the slope at the intersection point \(A\) is large in absolute value. The Jacobian of the system at this equilibrium point is

$$ J = \left[ \begin{array}{ccc} -b_{11} &{} 0 &{} -\mu \\ a_{21} &{} -b_{22} &{} 0 \\ 0 &{} a_{32} &{} -b_{33} \end{array}\right] ,\qquad \mu = -x_{10} d_{13}'(\bar{x}_3) >0. $$

The corresponding characteristic polynomial is

$$ p(s) = (s+b_{11}) (s+b_{22})(s+b_{33}) + a_{21}a_{32}\mu = s^3 + p_2 s^2 + p_1 s + p_0 + a_{21}a_{32}\mu . $$

This polynomial has a pair of complex conjugate roots with positive real part, as can be inferred from the Routh–Hurwitz table.

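Expanding the product \((s+b_{11})(s+b_{22})(s+b_{33})\) gives \(p_2 = b_{11}+b_{22}+b_{33}\), \(p_1 = b_{11}b_{22}+b_{11}b_{33}+b_{22}b_{33}\) and \(p_0 = b_{11}b_{22}b_{33}\); the corresponding Routh–Hurwitz array, reconstructed here from these coefficients, is

$$ \begin{array}{c|cc} s^3 &{} 1 &{} p_1 \\ s^2 &{} p_2 &{} p_0 + a_{21}a_{32}\mu \\ s^1 &{} \frac{p_2 p_1 - (p_0 + a_{21}a_{32}\mu )}{p_2} &{} \\ s^0 &{} p_0 + a_{21}a_{32}\mu &{} \end{array} $$
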
For large \(\mu \), the entry in the \(s^1\) row becomes negative, so there are two sign changes in the first column of the table, which means that there are two unstable roots. These roots cannot be real, because the polynomial coefficients are all positive, so the unstable roots must be complex conjugate.

In general, we can say there is an “interval” in parameter space in which oscillations are admissible: for \(x_{10}\) small, the intersection occurs in a region where \(\mu = - x_{10} d_{13}'(\bar{x}_3)\) is small, thus there are no sign changes in the Routh–Hurwitz table and the system is stable. \({ {\square }}\)

Note that it is not necessarily true that for large \(x_{10}\) the system is unstable; in addition, the instability interval of \(x_{10}\) may be bounded. In fact, the equilibrium \(\bar{x}_3\) increases for large \(x_{10}\), but it may move into a region where \(d_{13}'\) is very small, compensating for the increase of \(x_{10}\).
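
The transition can also be observed numerically; in the sketch below (one illustrative realization of (2.5), with a steep Hill-type complementary sigmoid and unit rate constants), the input \(x_{10}\) is large enough to destabilize the equilibrium and a sustained oscillation appears:

```python
import numpy as np
from scipy.integrate import solve_ivp

b11 = b22 = b33 = 1.0
a21 = a32 = 1.0
x10 = 20.0                                    # external input
d13 = lambda x3: 1.0 / (1.0 + x3**10)         # steep complementary sigmoid

def f(t, x):
    x1, x2, x3 = x
    return [x10 * d13(x3) - b11 * x1,
            a21 * x1 - b22 * x2,
            a32 * x2 - b33 * x3]

sol = solve_ivp(f, (0.0, 100.0), [0.0, 0.0, 0.0], max_step=0.05)
tail = sol.y[2, sol.t > 50.0]                 # discard the initial transient
print("x3 range after the transient:", tail.min(), tail.max())
```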

4.3 Adaptation

A system is adaptive if, when perturbed by a persistent input signal, its output always reverts to a neighborhood of its value prior to the perturbation, in general after a transient [1, 25, 26]. A sketch of this behavior is in Fig. 2.8. Adaptation is said to be perfect if the system’s output reverts to its exact value prior to the perturbation.

Fig. 2.8

System capable of adaptation

For small perturbations, linearization analysis suggests that adaptation requires the presence of a zero at the origin in the transfer function from the perturbation to the output. If the system includes a feedback loop, then the presence of a pole at the origin (integrator) is required [25, 26]. Establishing criteria to detect a system’s capability for adaptation is thus simple. Consider the system:

$$\begin{aligned} \dot{x}_1&= -b_{21}(x_1) x_2 + x_{0},\end{aligned}$$
(2.6)
$$\begin{aligned} \dot{x}_2&= a_{12}x_1 -b_{22}x_2 + u. \end{aligned}$$
(2.7)

We assume all the constants are positive, and that the function \(b_{21}(x_1)\) is a cropped sigmoid, namely it is non-decreasing, exactly null below a lower threshold and exactly equal to a positive constant above an upper threshold. Term \(x_{0}\) is a constant, and \(u \ge 0\) is a perturbing input.

Proposition 3

If \(x_{0}\) is sufficiently large and \(u=0\), then system (2.6)–(2.7) has a stable equilibrium point. Taking \(y=x_2\) as the system’s output, perfect adaptation is achieved with respect to constant perturbations \(u >0\).

Explanation. For \(u=0\) the equilibrium conditions are \(b_{21}(x_1) x_2 = x_{0}\) and \(a_{12}x_1 = b_{22}x_2\). Therefore the equilibrium \(\bar{x}_1\) can be expressed as the solution of:

$$\begin{aligned} b_{21}(x_1) \frac{a_{12}}{b_{22}}x_1 = x_{0}. \end{aligned}$$
(2.8)

For \(x_{0}\) suitably large, \(\bar{x}_1\) increases until it falls in the range where \(b_{21}\) (a cropped sigmoid) is constant, thus \(b_{21}(x_1) = b_{21}(\infty )\), and \(b_{21}'(x_1) =0\).

In this range, the linearized system is

$$ \left[ \begin{array}{c} \dot{x}_1\\ \dot{x}_2 \end{array} \right] = \left[ \begin{array}{cc} 0 &{} -b_{21}(\bar{x}_1) \\ a_{12} &{} -b_{22} \end{array} \right] \left[ \begin{array}{c} x_1\\ x_2 \end{array} \right] + \left[ \begin{array}{c} 0\\ 1 \end{array} \right] u~~~~~y = \left[ \begin{array}{cc} 0&~ 1 \end{array} \right] \left[ \begin{array}{c} x_1\\ x_2 \end{array} \right] $$

with output \(y(t) = x_2(t)\). The state matrix is stable, with characteristic polynomial \(p(s) = s^2 + b_{22}s + b_{21}(\bar{x}_1)a_{12}\). The transfer function \(w(s) = s/p(s)\) has a zero at the origin, and thus the system locally exhibits perfect adaptation.

If \(u\) is applied as a positive step input, after a transient the output \(x_2\) returns to its original value \(\bar{x}_2\) prior to the perturbation. The equilibrium of \(x_1\), however, moves to a new value such that \(a_{12}\bar{x}_1 =b_{22}\bar{x}_2 - u\); perfect adaptation persists as long as \(u\) is small enough that \(\bar{x}_1\) remains in the range where \(b_{21}\) is constant.\(\square \)
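
A short simulation sketch of (2.6)–(2.7), with a smooth cropped sigmoid for \(b_{21}\) and arbitrary illustrative constants, shows the mechanism: after a step in \(u\), the output \(x_2\) returns to its pre-perturbation value while \(x_1\) settles at a new level.

```python
import numpy as np
from scipy.integrate import solve_ivp

a12, b22, x0 = 1.0, 1.0, 5.0

def b21(x1):
    """Cropped sigmoid: exactly 0 below 0.5, exactly 1 above 1.5, smooth in between."""
    t = np.clip((x1 - 0.5) / 1.0, 0.0, 1.0)
    return 3 * t**2 - 2 * t**3

def f(t, x, u):
    x1, x2 = x
    return [-b21(x1) * x2 + x0, a12 * x1 - b22 * x2 + u]

pre = solve_ivp(f, (0.0, 60.0), [0.0, 0.0], args=(0.0,), max_step=0.05)     # settle, u = 0
post = solve_ivp(f, (0.0, 60.0), pre.y[:, -1], args=(1.0,), max_step=0.05)  # step u = 1
print("output x2 before the step:", pre.y[1, -1])
print("output x2 long after the step:", post.y[1, -1])   # back near the pre-step value
```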

4.4 Spiking and Persistency: The MAPK Network as a Case Study

Spiking is a phenomenon observed in several molecular networks, in which a system subject to a step input grows rapidly and subsequently undergoes a relaxation, as sketched in Fig. 2.9. The relaxation brings the system to a new equilibrium, distinct from the equilibrium prior to the input stimulation.

Fig. 2.9

System presenting a spiking behavior

Persistency is closely related to bistability: it occurs when a transient input variation causes the system to switch its output to a new value, which persists upon removal of the input, as shown in Fig. 2.10.

Fig. 2.10

System presenting a persistent response

4.4.1 A Qualitative Model of the MAPK Pathway

Experiments show that the mitogen-activated protein kinase (MAPK) pathway in PC12 rat neural cells exhibits dynamic behaviors that depend on the growth factor it is exposed to as an input. The response to Epidermal Growth Factor (EGF) is a spike followed by a relaxation, while the response to Nerve Growth Factor (NGF) is persistent. In the latter case, the system can be driven to a new state, which persists after the stimulus has vanished. Ultimately, these dynamic behaviors correspond to different cell fates: EGF stimulation induces proliferation, while NGF stimulation induces differentiation. The biochemical mechanisms responsible for the different input-dependent dynamic responses are still unclear. One hypothesis is that each input generates a specific interaction topology among the kinases. Starting from experimental results that support this hypothesis [27], in our previous work we considered the two network topologies, and we derived and analyzed qualitative models which exhibit structural properties [28]. Here we use a simplified, third order model for the pathway. We refer the reader to [28] for a more detailed model and its derivation. In our reduced order model, we neglect double-phosphorylation dynamics, and model the active concentration of each MAPK protein with a single state variable. We also neglect mass conservation assumptions regarding the total amount of MAPK protein [13, 16].

$$\begin{aligned} \text {MAP3K:}\qquad \dot{x}_1&= u(x_3,x_0) - b_{11} x_1\end{aligned}$$
(2.9)
$$\begin{aligned} \text {MAP2K:}\qquad \dot{x}_2&= c_{21}(x_1) - b_{22} x_2\end{aligned}$$
(2.10)
$$\begin{aligned} \text {MAP1K:}\qquad \dot{x}_3&= c_{32}(x_2) - b_{33} x_3\end{aligned}$$
(2.11)
$$\begin{aligned} \text {Output:}\qquad y&= x_3 \end{aligned}$$
(2.12)

We assume that \(c_{21}\) and \(c_{32}\) are strictly increasing and asymptotically constant, i.e. \(c_{21}(\infty ) = \hat{c}_{21} < \infty \) and \(c_{32}(\infty ) = \hat{c}_{32} < \infty \), and null at the origin: \(c_{21}(0) =c_{32}(0)=0\). Terms \(b_{ii}\) are positive constants. In essence, this model captures the fact that each protein in the cascade is activated by its predecessor in the chain; in the absence of the term \(u(x_3,x_0) \), the system would be an open loop, monotonic cascade [12]. Term \(u(x_3,x_0) \) is a feedback term modulated by an external input \(x_0\), and we consider two cases:

 

EGF:

\(u = a_{10}(x_3)x_0\), where \(a_{10}(x_3)\) is a complementary sigmoid, exactly constant below a threshold \(\eta \) and exactly null over a threshold \(\xi \). This configuration is characterized by the presence of a negative feedback loop.

NGF:

\(u = a_{10}(x_3) + x_0\), where \(a_{10}(x_3)\) is a sigmoid, exactly null below a threshold \(\eta \) and exactly constant over a threshold \(\xi \). This configuration is characterized by the presence of a positive feedback loop.

  Under these assumptions, we show that in the EGF configuration the output exhibits a spike, while in the NGF configuration the output is persistent.

4.5 The EGF-Induced Pathway and Its Spiking Behavior

The system in this configuration admits a single equilibrium; this can be shown as for the third order oscillator model (2.5).

Consider the saturation values \(c_{21}(\infty ) = \hat{c}_{21}\) and \(c_{32}(\infty ) = \hat{c}_{32}\). Let \(\hat{x}_2= \hat{c}_{21}/b_{22}\) be the corresponding saturation (limit) value of \(x_2\). Let, in turn,

$$\hat{x}_3 = c_{32}(\hat{x}_2)/b_{33}$$

be the limit value of \(x_3\). For large, increasing values of the input \(x_0\), the variable \(x_1\) increases and the values of \(x_2\) and \(x_3\) approach \(\hat{x}_2\) and \(\hat{x}_3\). The following proposition holds:

Proposition 4

Assume that the limit value for \(x_3\) is \(\hat{x}_3 > \xi \). Then, for \(x_0\) constant sufficiently large, and for \(x_i(0)=0\), we have: (a) First, \(x_3\) grows arbitrarily close to \(\hat{x}_3\). (b) Subsequently, \(x_3\) relaxes below \(\xi \).

Proof

Since \(a_{10}(x_3)\) is constant for small values of \(x_3\), if \(x_0\) is large then by continuity \(x_1\) can grow arbitrarily large in an arbitrarily small amount of time \(\tau >0\). Then, considering the time interval \([\tau ,T]\) where \(T\) is arbitrarily large, and given an arbitrary \(\mu >0\), by picking \(x_1(\tau )\) sufficiently large we can guarantee:

$$\begin{aligned} x_1(t) \ge \mu ~~~\text{ for }~~~t\in [\tau ,T]. \end{aligned}$$
(2.13)

In fact, we have \(\dot{x}_1 \ge - b_{11} x_1\), thus \(x_1(t) \ge x_1(\tau ) e^{-b_{11}(t-\tau )}\) on \([\tau ,T]\); therefore, picking a large initial value \(x_1(\tau )\), Eq. (2.13) is verified. Thus, since \(\mu \) and \(T\) can be taken arbitrarily large, we can guarantee that variables \(x_2\) and \(x_3\) reach values arbitrarily close to the upper limits \(\hat{x}_2\) and \(\hat{x}_3\).

As \(x_3\) increases, at some point in time the condition \(a_{10}(x_3) =0\) is met. This “switches off” the first variable, whose dynamics become \(\dot{x}_1 =- b_{11} x_1\); thus \(x_1\) starts decreasing, and variables \(x_2\) and \(x_3\) follow the same pattern. These concentrations decrease until \(x_3 \le \xi \). \(\square \)
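
The spike can be reproduced with a simulation sketch of the EGF-driven configuration; all rate constants, thresholds and the cropped feedback term below are arbitrary illustrative choices satisfying \(\hat{x}_3 > \xi\).

```python
import numpy as np
from scipy.integrate import solve_ivp

b11 = b22 = b33 = 1.0
x0 = 50.0                                 # large constant input
eta, xi = 0.1, 0.4                        # thresholds of the cropped feedback term

def a10(x3):
    """Cropped complementary sigmoid: exactly 1 below eta, exactly 0 above xi."""
    t = np.clip((x3 - eta) / (xi - eta), 0.0, 1.0)
    return 1.0 - (3 * t**2 - 2 * t**3)

c21 = lambda x1: x1 / (1.0 + x1)          # increasing, asymptotically constant
c32 = lambda x2: x2 / (1.0 + x2)          # here x2_hat = 1 and x3_hat = 0.5 > xi

def f(t, x):
    x1, x2, x3 = x
    return [a10(x3) * x0 - b11 * x1, c21(x1) - b22 * x2, c32(x2) - b33 * x3]

sol = solve_ivp(f, (0.0, 40.0), [0.0, 0.0, 0.0], max_step=0.02)
x3 = sol.y[2]
peak = int(np.argmax(x3))
print("peak of x3:", x3[peak])                               # spike toward x3_hat
print("x3 later falls below xi:", bool((x3[peak:] < xi).any()))
```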

4.6 The NGF-Induced Pathway Is an Example of Persistent Network

Let us now define \(a_{10}(\infty )=\bar{a}_{10}\) as a saturation value. If \(\bar{x}_3\) is greater than the threshold \(\xi \), then \(a_{10}(\bar{x}_3)=\bar{a}_{10}\); then, for \(x_0=0\) we can find the equilibria from the following conditions:

$$\begin{aligned} 0&= \bar{a}_{10}- b_{11} \bar{x}_1,\end{aligned}$$
(2.14)
$$\begin{aligned} 0&= c_{21}(\bar{x}_1) - b_{22} \bar{x}_2,\end{aligned}$$
(2.15)
$$\begin{aligned} 0&= c_{32}(\bar{x}_2) - b_{33} \bar{x}_3, \end{aligned}$$
(2.16)

which yield \(\bar{x}_1 = \bar{a}_{10}/b_{11}\); \(\bar{x}_2= c_{21}(\bar{x}_1)/b_{22}\); \(\bar{x}_3 = c_{32}(\bar{x}_2)/b_{33}\). The assumption \(\bar{x}_3 > \xi \) means that the positive feedback given by the term \(\bar{a}_{10}\) is able to sustain this positive equilibrium.

Now consider the case where the input \(x_0\) becomes arbitrarily large. Then the corresponding value \(\hat{x}_1\) of \(x_1\) becomes arbitrarily large. Defining \(\hat{c}_{21} = c_{21}(\infty )\), we find the corresponding limit values for the steady states: \(\hat{x}_2 = \hat{c}_{21}/b_{22}\) and \(\hat{x}_3 = c_{32} (\hat{x}_2)/b_{33}\). It is immediate that \(\hat{x}_1\ge \bar{x}_1\), \(\hat{x}_2\ge \bar{x}_2\), \(\hat{x}_3\ge \bar{x}_3\), because the “hat” equilibrium values are achieved by means of an arbitrarily large input \(x_0\), while the “bar” values are achieved by the bounded input \(\bar{a}_{10}\).

Proposition 5

Assume that \(\bar{x}_3 > \xi \) and that the previous inequalities are strict: \(\hat{x}_1> \bar{x}_1\), \(\hat{x}_2>\bar{x}_2\), \(\hat{x}_3> \bar{x}_3\). Then, for \(x_i(0)=0\) the following happens:

  1. (a)

    If \(x_{0}\) is constant and sufficiently large, and it is applied for a sufficiently long time interval \([0,T]\), then \(x_3\) grows arbitrarily close to \(\hat{x}_3\).

  2. (b)

    If, after time \(T\), the input signal \(x_{0}\) is eliminated \((x_0=0)\), then \(x_3\) remains above \(\xi \).

  3. (c)

    Finally, \(x_3\) converges to \(\bar{x}_3\) from above.

Proof

We have seen that when \(x_0=0\), \(\bar{x}_1\), \(\bar{x}_2\), \(\bar{x}_3\) are admissible equilibria of the system. Exactly as done in the EGF-driven network example, we can show that for a sufficiently large input \(x_0\), variables \(x_1\), \(x_2\) and \(x_3\) can grow arbitrarily close to \(\hat{x}_1\), \(\hat{x}_2\) and \(\hat{x}_3\), above \(\bar{x}_1\), \(\bar{x}_2\) and \(\bar{x}_3\).

We only need to show that if all \(x_i(t)\) grow above the corresponding \(\bar{x}_i\), then they will not reach values below \(\bar{x}_i\) after \(x_0\) is removed.

We begin by defining the new variables \(z_i = x_i - \bar{x}_i\); then, \(\dot{z}_i = \dot{x}_i\) given by equations (2.9)–(2.11). After \(x_0\) is removed, the input is \(a_{10}(x_3)\); in addition, since we assume \(x_3 \ge \bar{x}_3 \ge \xi \) (so \(z_3 \ge 0\)), we have \(a_{10}(x_3) = \bar{a}_{10}\). If we consider also the steady state equations (2.14)–(2.16), we get

$$\begin{aligned} \dot{z}_1&= - b_{11} z_1 \end{aligned}$$
(2.17)
$$\begin{aligned} \dot{z}_2&= c_{21}(z_1 +\bar{x}_1) -c_{21}(\bar{x}_1) - b_{22} z_2 \end{aligned}$$
(2.18)
$$\begin{aligned} \dot{z}_3&= c_{32}(z_2 + \bar{x}_2) - c_{32}(\bar{x}_2) - b_{33} z_3 \end{aligned}$$
(2.19)

This is a positive system in the \(z\) variables. Because we assumed that at some point \(z_i(\tau ) >0\) (prior to the removal of \(x_0\)), we can immediately see that this situation is permanent.

To prove convergence, note that \(z_1\) goes to zero in view of Eq. (2.17). Then \(c_{21}(z_1 +\bar{x}_1) -c_{21}(\bar{x}_1)\) goes to \(0\), so \(z_2\) converges to \(0\). For the same reason, \(z_3\) converges to \(0\). \(\square \)

5 Structural Boundedness and Stability

Our qualitative modeling framework is generally described by Eq. (2.2):

$$\begin{aligned} \dot{x}_i(t) =&\sum _{j \in \mathcal{A}_i} a_{ij}(x) x_j - \sum _{h \in \mathcal{B}_i} b_{ih}(x) x_h + \sum _{s \in \mathcal{C}_i} c_{is}(x) + \sum _{l \in \mathcal{D}_i} d_{il}(x). \end{aligned}$$

The general assumptions we made on functions \(a\), \(b\), \(c\), and \(d\) guarantee non-negativity of the states, which is a required feature to meaningfully model concentrations of molecules. Another important feature of most biochemical system models is boundedness of their states (possibly with the exception of pathological cases). In the following, we outline additional assumptions and consequent results regarding structural boundedness of the solutions to our general model (2.2).

5.1 Structural Boundedness

Consider the case in which states in model (2.2) are dissipative, i.e. the dynamics of each variable include a degradation term \(-b_{ii}(x)x_i\). We also assume that

$$ b_{ii}(x) > \beta _i >0. $$

Obviously, this property alone does not assure the global boundedness of the solution. However, if no unbounded \(a\)-terms were present, it would be simple to show that the solutions are globally bounded.

Let us assume that each \(a_{ij}(x)\) term is bounded by a positive constant \(0 \le a_{ij}(x) < \bar{a}_{ij}\). Then, we ask under what conditions we can assure structural boundedness of the solutions. We build a graph \(G(\mathcal{A})\) associated with the \(a_{ij}\) terms, where there is a directed arc from node \(j\) to node \(i\) for every term \(a_{ij}\). Then, the following theorem holds.

Theorem 1

The system solution is structurally globally bounded for any initial condition \(x(0) \ge 0\) if and only if \(G(\mathcal{A})\) has no cycles (self-loops, corresponding to \(a_{ii}\) terms, count as cycles).

In other words, structural boundedness is guaranteed if and only if there is no autocatalysis in the system.

Proof

We first show that the condition is structurally necessary. Assume, ab absurdo, that there is a cycle in \(G(\mathcal{A})\). Without loss of generality, assume that the cycle is formed by the first \(r\) nodes \(1\), \(2\), ..., \(r\), corresponding to the sequence of terms \(a_{12}\), \(a_{23}\), ..., \(a_{r1}\) (indices are taken modulo \(r\), so that \(x_{r+1}\equiv x_1\)); also, assume that each of these terms is lower bounded by a constant \(\kappa \). We finally assume that the sum of all \(b_{ik}\) terms appearing in the first \(r\) equations is upper bounded by \(\eta \):

$$ \sum _{i=1}^r~\sum _{k \in \mathcal{B}_i} b_{ik} \le \eta . $$

Consider the Lyapunov-like function:

$$ V(x_1,x_2,\ldots ,x_r) =x_1 + x_2 + \cdots + x_r, $$

and its derivative

$$\begin{aligned} \dot{V}&= \sum _{i=1}^r~~\dot{x}_i \ge \sum _{i=1}^r \left[ a_{i,i+1} x_{i+1} - \sum _{k \in \mathcal{B}_i} b_{ik} x_k \right] \ge \sum _{i=1}^r a_{i,i+1} x_{i+1} - \eta \sum _{i=1}^r x_i \\&\ge (\kappa -\eta ) \sum _{i=1}^r x_i = (\kappa -\eta )V. \end{aligned}$$

Then, if \(\eta < \kappa \), \(V\) grows exponentially and the corresponding solutions are unbounded. Thus, structural boundedness cannot hold.

Let us now consider the sufficiency part. If there are no cycles in \(G(\mathcal{A})\), then there necessarily exists a root node, i.e. a node whose dynamics do not include any \(a_{ij}\) terms. Let us assume, without loss of generality, that node \(x_1\) does not have any \(a_{1j}\) term. Then:

$$\begin{aligned} \dot{x}_1&= - \sum _{h \in \mathcal{B}_1} b_{1h}(x) x_h + \sum _{s \in \mathcal{C}_1} c_{1s}(x) + \sum _{l \in \mathcal{D}_1} d_{1l}(x)\\&\le - \beta _1 x_1 + \sum _{s \in \mathcal{C}_1} c_{1s}(x) + \sum _{l \in \mathcal{D}_1} d_{1l}(x) \end{aligned}$$

Since the \(c\) and \(d\) terms are bounded, the solution \(x_1\) is bounded: say \(x_1 \le \xi _1\) for some \(\xi _1>0\).

If \(x_1\) is bounded, then all terms (if any) of type \(a_{k1}(x) x_1\) in other equations remain bounded: \(a_{k1}(x) x_1 \le \bar{a}_{k1} \xi _1\).

Let us consider the other nodes \(x_2,x_3, \ldots , x_n\). Since there are no cycles including \(a_{ij}\) terms, there is at least one variable whose equation has either no \(a\) terms, or has only \(a_{k1}(x) x_1\) terms from \(x_1\), which are bounded. Let us assume node \(x_2\) fulfills this statement. Then:

$$\begin{aligned} \dot{x}_2&= a_{21}(x) x_1 - \sum _{h \in \mathcal{B}_2} b_{2h}(x) x_h + \sum _{s \in \mathcal{C}_2} c_{2s}(x) + \sum _{l \in \mathcal{D}_2} d_{2l}(x) \\&\le - \beta _2 x_2 + \bar{a}_{21} \xi _1 + \sum _{s \in \mathcal{C}_2} c_{2s}(x) + \sum _{l \in \mathcal{D}_2} d_{2l}(x). \end{aligned}$$

The above inequality implies boundedness of the solution \(x_2\).

The proof can be concluded recursively, by noticing that there must exist a new variable, say \(x_3\), whose equation includes either no \({a_{ij}}\) terms or only bounded \(a_{3j}\) terms coming from \(x_1\) and \(x_2\), and so on. \(\square \)

The following corollary holds.

Corollary 1

The solution to the general model (2.2) is bounded if and only if there are no \(a_{ij}\) terms and all \(b_{ii}\) terms are lower bounded by a positive constant, \(b_{ii} > \beta _i\).

This corollary highlights that boundedness is structurally assured in systems where each species is degraded by terms of at least first order, and all the interaction terms are bounded.

Example 1

As an example we consider the well known lac operon genetic network. We will propose and analyze a qualitative model, i.e. a class of systems: the classical model proposed in [29] is a realization within this class. The state variables of our model are: the concentration of nonfunctional permease protein \(x_1\); the concentration of functional permease protein \(x_2\); the concentration of inducer (allolactose) inside the cell \(x_3\); and the concentration of \(\beta \)-galactosidase \(x_4\), a quantity that can be experimentally measured. The concentration of inducer external to the cell is here denoted as an input function \(u\). A model for this system can be written in the following form (see [16] for details).

$$\begin{aligned} \nonumber \dot{x}_1&= c_{13}(x_3)-b_{11} x_1, \\ \dot{x}_2&= a_{21} x_1 -b_{22} x_2,\\ \nonumber \dot{x}_3&= a_{32}(u)x_2 - b_{32}(x_3) x_2 + c_{3u}u - b_{33} x_3,\\ \nonumber \dot{x}_4&= c_{43}(x_3) - b_{44} x_4, \nonumber \end{aligned}$$
(2.20)

where \( c_{13}(x_3)=f_1(x_3)\), \(b_{11}=\delta _1\), \(a_{21} = \beta _1\), \(b_{22} =\delta _2\), \( a_{32}(u)=f_{2}(u)\), \(b_{32}(x_3) =f_3(x_3)\), \(c_{3u} = \beta _2\), \(b_{33}=\delta _3\), \( c_{43}(x_3)= \gamma f_1(x_3)\) and \(b_{44}=\delta _4\). This corresponds to the network in Fig. 2.11.

We assume that \(c_{13}\) is constant-sigmoidal, \(a_{32}(u)\) and \(b_{32}(x_3)\) are increasing-asymptotically-constant, and the remaining terms \(a_{21}\), \(b_{11}\), \(b_{22}\), \(b_{33}\), \(b_{44}\) and \(c_{3u}\) are positive-constant.

The arcs associated with \(a_{ij}\) terms in Fig. 2.11 do not form any cycles, and each node is dissipative; therefore the solution is structurally bounded (a programmatic check of the acyclicity condition is sketched below, after Fig. 2.11).

Fig. 2.11

Graph of the lac operon network
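
The graphical condition used in Example 1 can also be checked programmatically; the Python sketch below (arc list read off Fig. 2.11, node \(k\) standing for \(x_{k+1}\)) runs a depth-first search for cycles in \(G(\mathcal{A})\).

```python
def has_cycle(n_nodes, arcs):
    """Depth-first search for cycles in a directed graph; arcs are (j, i) pairs,
    one per a_ij term, i.e. an arc from node j to node i."""
    adj = {v: [] for v in range(n_nodes)}
    for j, i in arcs:
        adj[j].append(i)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * n_nodes

    def dfs(v):
        color[v] = GRAY
        for w in adj[v]:
            if color[w] == GRAY or (color[w] == WHITE and dfs(w)):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in range(n_nodes))

# a-terms of (2.20): a_21 gives the arc x1 -> x2, a_32(u) gives x2 -> x3
lac_arcs = [(0, 1), (1, 2)]
print("G(A) has cycles:", has_cycle(4, lac_arcs))   # False: Theorem 1 applies
```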

The requirement of having no \(a_{ij}\) cycles can be strong, especially in chemical reaction networks [11]. However, the conditions in Theorem 1 are necessary and sufficient; we believe it is unlikely that stronger results can be found without assuming bounds on the dynamic terms.

Note that Theorem 1 only requires that bounds on the functional terms exist, while their specific values need not be known. If such bounds are known, we obtain less restrictive conditions. Note that model (2.2) can be written compactly as:

$$\begin{aligned} \dot{x}(t) = A(x(t))x(t) - B(x(t))x(t) + C(x(t)) + D(x(t)), \end{aligned}$$
(2.21)

or as:

$$\begin{aligned} \dot{x}(t) = M(x(t))x(t) + C(x(t)) + D(x(t)), \end{aligned}$$
(2.22)

where \( M(x(t)) = A(x(t)) - B(x(t))\). If the elements of matrix \(M(x(t))\) are constrained to a closed (even better, compact) set, \( M(\cdot ) \in \mathcal{M}\), and if we can demonstrate exponential stability of the associated differential inclusion [30]

$$ \dot{x} \in \mathcal{M}x, $$

then we can show the overall boundedness of the system’s solution. To prove boundedness it is convenient to exclude a neighborhood of the origin and work in the set \( \mathcal{N}_\nu = \{x:~~x_i \ge \nu \} \).

Theorem 2

Assume that \(M(x) \in \mathcal{M}\) for \(x \in \mathcal{N}_\nu \), and assume that the differential inclusion is bounded and admits a positively homogeneous function \(V(x)\) as a Lyapunov function,

$$ \dot{V}(x) = \nabla V(x) Mx \le -\gamma V(x) $$

for all \(M \in \mathcal{M}\). Then the system solution is bounded.

Proof

The proof is an immediate consequence of the fact that the trajectories of the original system are a subset of the possible trajectories of the linear differential inclusion.

An exponentially stable differential inclusion has bounded solutions if perturbed by bounded terms

$$ \dot{x} \in \mathcal{M}x +C +D $$

as in our case. \(\square \)

Example 2

Consider a biological network composed of two proteins \(x_1\) and \(x_2\):

$$\begin{aligned} \dot{x}_1&= + c_{10} + a_{12}(x_1) x_2 - b_{11} x_1 ,\\ \dot{x}_2&= + c_{20} - b_{21}(x_2) x_1 - b_{22} x_2. \end{aligned}$$

In this model, we suppose that both \(x_1\) and \(x_2\) are produced in active form at constant rates (terms \(c_{10}\) and \(c_{20}\)), but they are inactivated, or degraded, at a rate proportional to their concentration (terms \(b_{11}\) and \(b_{22}\)). However, suppose protein \(x_1\) is also activated by binding to \(x_2\), and this interaction in turn inactivates \(x_2\): this pathway is modeled by terms \(a_{12}(x_1)\) and \(b_{21}(x_2)\), which we assume are sigmoidal, asymptotically constant functions, consistent with a cooperative, Hill-type protein interaction.

We can rewrite the above equations as:

$$ \left[ \begin{array}{c} \dot{x}_1 \\ \dot{x}_2 \end{array} \right] = \left[ \begin{array}{cc} -b_{11} &{} \bar{a}_{12} + \delta _{12} \\ -\bar{b}_{21} - \delta _{21} &{} -b_{22} \end{array} \right] \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right] + \left[ \begin{array}{c} c_{10} \\ c_{20} \end{array} \right] , $$

where \(\delta _{12} =a_{12}(x_1)-\bar{a}_{12} \) and \(\delta _{21} = b_{21}(x_2)-\bar{b}_{21} \), and where \(\bar{a}_{12} = a_{12}(\infty )\) and \(\bar{b}_{21}=b_{21} (\infty )\).

If the excluded region near the origin is delimited by a “radius” \(\nu \) sufficiently large, the bounds on \(\delta _{12}\) and \(\delta _{21}\) can be taken arbitrarily tight.

So inside \(\mathcal{N}_{\nu }\), for large \(\nu > 0\), we may assume \(|\delta _{12}| \le \epsilon \) and \(|\delta _{21} | \le \epsilon \) with small \(\epsilon \). Since the nominal system, obtained for \(\delta _{12}=\delta _{21}=0\), is quadratically stable, it admits a quadratic Lyapunov function. Inside \(\mathcal{N}_{\nu }\) this remains a Lyapunov function for the perturbed system, because the contribution of the terms \(\delta _{12}x_2\) and \(\delta _{21}x_1\) is negligible.

This technique allows us to prove boundedness, but not stability, of the original system. Boundedness does imply the existence of equilibria, but their stability may or may not hold.
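
The argument of Example 2 can be checked numerically. The sketch below (arbitrary illustrative values for the constants and for the perturbation bound \(\epsilon\)) solves a Lyapunov equation for the nominal matrix and verifies that \(V(x)=x^{\top }Px\) keeps decreasing for randomly sampled perturbations \(|\delta _{12}|, |\delta _{21}| \le \epsilon\).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

b11, b22, a12_bar, b21_bar = 1.0, 1.0, 2.0, 2.0
M0 = np.array([[-b11,      a12_bar],
               [-b21_bar, -b22]])                 # nominal matrix (delta terms = 0)

P = solve_continuous_lyapunov(M0.T, -np.eye(2))   # solves M0' P + P M0 = -I
print("P positive definite:", bool(np.all(np.linalg.eigvals(P) > 0)))

eps, worst = 0.05, -np.inf
rng = np.random.default_rng(1)
for _ in range(2000):
    d12, d21 = rng.uniform(-eps, eps, size=2)
    M = M0 + np.array([[0.0, d12], [-d21, 0.0]])  # perturbed matrix inside N_nu
    Q = M.T @ P + P @ M                           # must remain negative definite
    worst = max(worst, float(np.linalg.eigvals(Q).real.max()))
print("largest eigenvalue of M'P + PM over samples:", worst)   # stays < 0
```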

5.2 Structural Stability of Equilibria

If we can establish boundedness of a system, the existence of equilibria is automatically assured. Then, we can ask two main questions:

  • How many equilibria are present?

  • Which equilibria are stable?

Several results from the so-called degree theory help us find answers; see, for instance, [31–34]. Here, we recall one particularly useful theorem:

Theorem 3

Assume that all the system’s equilibria \(\bar{x}^{(i)}\) are strictly positive, and assume that none of them is degenerate, i.e. the Jacobian evaluated at each equilibrium has non-zero determinant. Then:

$$ \sum _i~~\mathrm{sign}~\det \left[ -J\left( \bar{x}^{(i)}\right) \right] =1 $$

How does this theorem help us answer our questions? We describe informally three cases that we can immediately discriminate as a consequence of this theorem. Suppose analytical expressions for the Jacobian are available, as a function of a generic equilibrium point.

 

1.

If we can establish that the determinant of \(-J\) is always positive, regardless of specific values for parameters or equilibria, then there is a unique equilibrium.

2.

If at an equilibrium point we have \(\det [-J] <0\), then such an equilibrium must be unstable (because the characteristic polynomial has a negative constant term \(p_0 = \det [-J]\)). A consequence of Theorem 3 is that other equilibria must exist; if they are not degenerate, then there must be at least two additional equilibria.

3.

If there are two stable equilibria, then necessarily another unstable equilibrium must exist.

 

In a qualitative/parameter-free context, general statements about stability of equilibria are difficult to demonstrate. If we restrict our attention to specific classes of systems, however, we can find structural stability results. We mention a few, well known examples:

  • Chemical reaction networks modeled with mass action kinetics: the zero-deficiency theorem [11] guarantees uniqueness of the equilibrium and asymptotic stability of networks satisfying specific structural conditions that do not depend on the reaction rate parameters.

  • Monotone systems: if a system is monotone [35], then its Jacobian has nonnegative off-diagonal entries; in other words, it is a Metzler matrix. For a Metzler matrix, stability is equivalent to having a characteristic polynomial with all positive coefficients. This property is easy to check analytically in systems of small dimension; a numerical sketch is given after this list.

  • Planar systems. Plenty of straightforward methods are available to find structural stability conditions.
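
A minimal numerical sketch of the Metzler-matrix test mentioned above (the \(2\times 2\) matrix is an arbitrary illustration, not taken from a specific model):

```python
import numpy as np

J = np.array([[-2.0, 1.0],
              [0.5, -1.0]])                   # Metzler: off-diagonal entries >= 0

coeffs = np.poly(J)                           # characteristic polynomial of J (monic)
print("characteristic polynomial coefficients:", coeffs)   # [1, 3, 1.5]
print("all positive, hence Hurwitz:", bool(np.all(coeffs > 0)))
print("eigenvalues:", np.linalg.eigvals(J))   # both negative, confirming stability
```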

We conclude this section with a paradox:

 

Difficulty:

Structural stability investigation is, generally speaking, an unsolved problem which typically requires a case-by-case study.

Interest:

Stability is generally of little interest to biologists, because many natural behaviors in biology are known to be (obviously) stable. In other words, formal proofs of stability are not very informative. However, lack of stability of an equilibrium can be a hallmark for other interesting behaviors, such as multistationarity and periodicity.

 

6 Conclusions

A property is structurally robust if it is satisfied by a class of models regardless of the specific expressions adopted or of the parameter values in the model. This chapter highlights that qualitative, parameter-free models of molecular networks can be formulated by making general assumptions on the sign, trend and boundedness of the species interactions. Linearization, Lyapunov methods, invariant sets and graphical tests are examples of classical control-theoretic tools that can be successfully employed to analyze such qualitative models, often reaching strong conclusions on their admissible dynamic behaviors.

Robustness is often tested through simulations, at the price of exhaustive campaigns of numerical trials and, more importantly, with no theoretical guarantee of robustness. We are far from claiming that numerical simulations are useless: they are useful, for instance, to falsify “robustness conjectures” by finding suitable numerical counterexamples. In addition, for very complex systems in which analytical tools cannot be employed, simulations are the only viable method for analysis. A limitation of our qualitative modeling and analysis approach is its lack of systematic scalability to complex models. However, the techniques we employed can be successfully used to study a large class of low-dimensional systems, and they are an important complementary tool to simulations and experiments.