Abstract
This chapter presents basics of mathematical modelling in systems biology. In Sect. 1.1, a brief introduction to the topic is given, mainly in terms of examples such as problems from population dynamics or from drug administration. In Sect. 1.2, the assembly of large ODE networks from simple chemical and physiological mechanisms, given in terms of chemical reaction modules, is described. Reasons are given why the so-called Michaelis-Menten kinetics is no longer needed in the numerical simulation of such systems. For reaction diagram parts where only the properties “stimulating” or “inhibiting” are known, the formulation in terms of Hill functions is presented. Finally, in Sect. 1.3, necessary mathematical background material is collected as far as it seems important for the class of applications in question. Main topics are the uniqueness and sensitivity of solutions as well as asymptotic stability. Mathematical contents are typically explained by examples rather than by theorems, while emphasis is laid on consequences for practical calculations.
1.1 Introduction
To start with, Sect. 1.1.1 gives a short overview about ODE initial value problem types that occur in systems biology. Next, two simple model problems are worked out in some detail, one from population dynamics (Sect. 1.1.2), one on multiple dose administration of drugs (Sect. 1.1.3).
1.1.1 Problem Types in Systems Biology
Let us first give a brief list of problems that typically come up in systems biology. In the subsequent Sects. 1.1.2 and 1.1.3 we will present a few elementary examples.
1.1.1.1 Non-autonomous Initial Value Problems
This book predominantly focuses on initial value problems for systems of d ordinary differential equations (ODEs)
for given initial values y 0. The notation indicates that the time variable t appears explicitly in the right-hand side f; in this case we speak of a non-autonomous problem. However, apart from special problems of drug administration where the time point of administration enters crucially into the modelling, this case is the non-standard case in systems biology.
1.1.1.2 Autonomous Initial Value Problems
Throughout the book we will mainly deal with the case, when the time variable t does not explicitly enter into the right-hand side f. Then we have
A specialty of this type of problem is that for any given solution trajectory y(t) satisfying (1.2) there exists a continuum of further solution trajectories \(z(t) = y(t-\tau )\) with time shift τ satisfying the same ODE
and the same initial condition
Due to this so-called translation invariance the initial point t 0 can be chosen arbitrarily, so that we are free to set t 0 = 0 in (1.2).
1.1.1.3 Parameter Dependent Problems
In the majority of problems in systems biology, a (possibly large) number of unknown parameters p = (p 1, …, p q ) enters in the form
Of course, one would like to identify such parameters by matching the above type of model with given experimental data. The corresponding mathematical problem is far more subtle than often recognized in systems biology literature. Because of its central importance in modelling, it will be carefully elaborated in Chap. 3 below.
1.1.1.4 Linear ODEs
In the non-autonomous case, a linear system may be written as
where A(t) denotes a time dependent (d, d)-matrix and \(b(t) \in \mathbb{R}^{d}\) a corresponding vector function. In the autonomous case, we will most often encounter the homogeneous situation b = 0 so that
in terms of some time-independent (d, d)-matrix. This kind of system plays a role in stability analysis of general ODE systems, see Sect. 1.3.3.
1.1.1.5 Singularly Perturbed Systems
In the mathematical literature for systems biology, now and then so-called singularly perturbed problems of the kind
arise. Such systems were designed in the early days of computational science so that they could be solved by standard explicit integrators. This approach is more or less dispensable today, since efficient so-called stiff integrators are now available that solve such problems directly, see Sect. 1.3.4 for a more detailed discussion.
1.1.1.6 Delay or Retarded Differential Equations
Quite often processes do not just depend on the current state but also on the “history” of the system. Such systems also arise as a phenomenological description, when not enough information about a chain of intermediate processes is at hand.
In the simplest case such a differential system contains a retardation or delay time τ > 0 so that
with a given initial function Θ. In contrast to the standard ODE case we typically have
i.e., the derivative of the solution is discontinuous at the initial point t = 0. The discontinuity propagates along the trajectory, but is gradually smoothed out. This feature has to be taken into account in the numerical simulation! Typical for systems biology is the fact that the delay may depend on the solution, too, which means that one should write τ(y) above instead of just τ. Throughout the book we will not go into much detail on this problem type, but give a hint on available codes in Sect. 2.5.1.
1.1.1.7 Periodic ODE Problems
In systems biological modelling, internal clocks or circadian rhythms play an important role. The modelling of such processes leads to ODE problems of the kind (mostly autonomous)
with unknown period T. In contrast to the initial value problems mentioned so far, this problem is of boundary value type, which is more complex and beyond the scope of this book. Interested readers may want to look up theoretical and algorithmic details in the textbook [15, Section 7.3].
1.1.2 Example: Population Dynamics
This kind of mathematical model describes the dynamics of populations. Let p(t) denote the number of individuals at time t and Δ t some finite time step. Then the change of population p within time interval [t, t +Δ t] will be
which means that the longer the time interval, the greater the change will be. Note that g > 0 represents growth, g < 0 decay. The typical derivation step now is to write down the above relation as a difference equation and pass to the limit as follows:
This differential equation (ODE) may also be written as p ′ = g. Note that here we have tacitly applied some continuum hypothesis assuming that \(p(t) \in \mathbb{R}^{+}\), even though \(p(t) \in \mathbb{N}\), since the number of individuals can be counted. In addition, ODE models are based on the assumption of well-mixing, i.e. individuals are homogeneously distributed in space such that spatial gradients can be neglected in the model. We now turn to some special cases for the rate coefficient g = g(t, p).
1.1.2.1 Exponential Growth
We start with the assumption of a constant fertility rate λ 0 (interpretation: the more individuals, the higher the birth rate):
This is a linear ODE, which can be solved to yield
The case of growth occurs for λ 0 > 0. In this case, after some time only members of this species would remain in the corresponding local neighborhood – a conclusion that obviously ignores the limited nutritional basis and any other environmental constraints.
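To make the exponential growth law concrete, here is a minimal Python sketch. It evaluates the analytic solution \(p(t) = p_0 e^{\lambda_0 t}\) of \(p' = \lambda_0 p\) and cross-checks it against a simple forward-Euler integration; the fertility rate 0.3 and initial population 100 are hypothetical values chosen purely for illustration.

```python
import math

def exp_growth(p0, lam, t):
    """Analytic solution p(t) = p0 * exp(lam * t) of p' = lam * p."""
    return p0 * math.exp(lam * t)

def euler(p0, lam, t_end, n_steps):
    """Forward-Euler approximation of p' = lam * p on [0, t_end]."""
    h = t_end / n_steps
    p = p0
    for _ in range(n_steps):
        p += h * lam * p
    return p

# hypothetical data: 100 individuals, fertility rate 0.3 per unit time
p_exact = exp_growth(100.0, 0.3, 5.0)
p_euler = euler(100.0, 0.3, 5.0, 10000)
```

With 10000 Euler steps the numerical value agrees with the exact solution to about four digits; for λ 0 < 0 the same formula describes exponential decay.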
1.1.2.2 Saturation Model
The insufficiency of a purely linear model was pointed out as early as 1838 by P.-F. Verhulst [61], who suggested modifying the ODE in the form
Obviously, this is a nonlinear ODE, often also called logistic equation. The associated dynamics has two fixed or stationary points with p ′ = 0, namely p ≡ 0, which can only occur for p 0 = 0, and p ≡ p max, which is approached by any trajectory with \(0 < p_{0} \leq p_{\max }\). An illustration is given in Fig. 1.1.
As one of the rare examples, the initial value problem (1.8) can be solved analytically by separation of variables (let t 0 = 0, since we have an autonomous ODE):
After some short calculation we obtain the analytic solution
This function is often also called the logistic law of growth. Note that for t = 0 one actually obtains the initial value p(0) = p 0. Moreover, one easily verifies that \(p(t) \leq p_{\max }\) and \(\lim _{t\rightarrow \infty }p(t) = p_{\max }\), if \(0 < p_{0} < p_{\max }\).
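The properties just stated can be verified numerically. The sketch below evaluates the logistic law in the standard closed form \(p(t) = p_{\max}/\bigl(1 + (p_{\max}/p_0 - 1)e^{-\lambda t}\bigr)\), where λ denotes the growth-rate coefficient of the Verhulst ODE \(p' = \lambda(1 - p/p_{\max})\,p\); the parameter values are hypothetical.

```python
import math

def logistic(p0, p_max, lam, t):
    """Logistic law of growth: solution of p' = lam*(1 - p/p_max)*p, p(0) = p0."""
    return p_max / (1.0 + (p_max / p0 - 1.0) * math.exp(-lam * t))

# hypothetical values: 10 individuals initially, capacity 1000, rate 0.5
p0, p_max, lam = 10.0, 1000.0, 0.5
traj = [logistic(p0, p_max, lam, t) for t in range(0, 61, 5)]
```

One can check directly that p(0) = p 0, that the trajectory increases monotonically, stays below p max, and approaches p max as t grows.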
1.1.2.3 Predator-Prey Model
Consider the dynamics of a closed ecological system, in which two species interact, predators (number N 2) and prey (number N 1); as an example, you may take fox for the predator and hare for the prey. The behavior can be described by the model
with prescribed positive parameters α, β, γ, δ. This pair of first-order nonlinear differential equations is known as the Lotka-Volterra model, named after A. J. Lotka and V. Volterra, who independently developed these equations in 1925/1926, see, e.g., the textbook [45] by J. D. Murray.
The nonlinear terms N 1 N 2 enter, since the prey population would grow unboundedly, if the predator population were zero, while the predator population would die out, if the prey population were zero. The meaning of the parameters is:
-
α: prey reproduction rate (with unbounded nutrition resources),
-
β: rate at which prey is eaten by predators (per unit prey), which is equivalent to mortality rate of prey per unit predator,
-
γ: mortality rate of predators in the absence of prey,
-
δ: reproduction rate of predators per unit prey.
A short calculation yields
Upon integrating both sides, we arrive at
As a consequence, the quantity
is an invariant along any trajectory. In Fig. 1.2, left, two oscillatory solution curves N 1(t), N 2(t) are depicted. Figure 1.2, right, shows H(N 1, N 2) = const in an (N 1, N 2)-plane, also called phase plane, where a closed orbit arises for each initial value.
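The invariance of H along trajectories can be tested numerically. In the sketch below we use the common form \(H(N_1,N_2) = \delta N_1 - \gamma \ln N_1 + \beta N_2 - \alpha \ln N_2\) (our notation for the invariant, which may differ in detail from the displayed formula) and integrate the Lotka-Volterra equations with a classical Runge-Kutta scheme; all parameter values are hypothetical.

```python
import math

def lv_rhs(n1, n2, alpha, beta, gamma, delta):
    """Lotka-Volterra right-hand side: prey n1, predator n2."""
    return n1 * (alpha - beta * n2), n2 * (delta * n1 - gamma)

def invariant(n1, n2, alpha, beta, gamma, delta):
    """H(N1, N2) = delta*N1 - gamma*ln N1 + beta*N2 - alpha*ln N2."""
    return delta * n1 - gamma * math.log(n1) + beta * n2 - alpha * math.log(n2)

def rk4_step(n1, n2, h, *p):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lv_rhs(n1, n2, *p)
    k2 = lv_rhs(n1 + 0.5 * h * k1[0], n2 + 0.5 * h * k1[1], *p)
    k3 = lv_rhs(n1 + 0.5 * h * k2[0], n2 + 0.5 * h * k2[1], *p)
    k4 = lv_rhs(n1 + h * k3[0], n2 + h * k3[1], *p)
    return (n1 + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            n2 + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

# hypothetical parameters (alpha, beta, gamma, delta) and initial populations
params = (1.0, 0.1, 1.0, 0.05)
n1, n2 = 30.0, 5.0
h_start = invariant(n1, n2, *params)
for _ in range(20000):          # integrate up to t = 20 with step 0.001
    n1, n2 = rk4_step(n1, n2, 0.001, *params)
drift = abs(invariant(n1, n2, *params) - h_start)
```

Over twenty time units the value of H drifts only by the (tiny) discretization error, illustrating the closed orbits in the phase plane of Fig. 1.2.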
In Sect. 3.5.1 below we treat the identification of parameters from given data of the Canadian lynx (predator) and snowshoe hare (prey).
1.1.3 Example: Multiple Dose Administration of Drugs
We follow the presentation in the illustrative book of D. S. Jones et al. [40] to show how drug concentrations in body fluids can be described by differential equations. Assume that the drug concentration c(t) within the blood plasma can be described by the following simple law:
In this linear ODE, the constant τ, often called relaxation time, characterizes the decay rate of the concentration
Suppose now that some prescribed constant dose c 0 is administered regularly at times \(t_{n} = nt_{0},\;n = 0,1,\ldots\). Then the concentration will grow in a sawtooth pattern, which is illustrated in Fig. 1.3.
Let us now try to model this situation quantitatively. For that purpose, we introduce the notation c n = c(t n ). Due to the above decay law, we get
and with the regular administration eventually
For convenience of writing we introduce the quantity \(q =\exp (-t_{0}/\tau ) < 1\) and thus arrive at the recursion
from which we obtain (check yourself)
or, in the original notation,
In addition, we obtain the so-called concentration residue
Taking the limit n → ∞, we observe that the concentration never exceeds
and that the residue approaches
Usually, the therapeutic goal is to reach \(c_{\max }\) in only a few dose steps (t 0∕τ large), whereas r should be kept above a certain level (t 0∕τ small). Obviously, these two goals are in contradiction to each other. One strategy is to avoid the sawtooth build-up by giving an initial large dose of c 0 + r or \(c_{\max }\) and thereafter again doses of c 0. The optimal treatment strategy, however, usually depends on several factors like production costs and patterns of human behavior.
1.2 ODE Systems from Chemical or Physiological Networks
In systems biology, typical ODE systems originate from chemical kinetics. Apart from these, so-called compartment models arise, which we will explain in Example 1 below and, in a more realistic setting, subsequently in Sect. 2.5.3. In Sect. 1.2.1, we start with isolated simple chemical mechanisms and their translation into ODE models. Such models will comprise the building blocks of large networks whose construction we will discuss in Sect. 1.2.3 below. In between, in Sect. 1.2.2, we discuss a traditional model type for enzyme kinetics called Michaelis-Menten kinetics, which is still around in the literature, but is no longer needed nowadays.
1.2.1 Elementary Chemical Mechanisms
Part of the presentation here closely follows Section 1.3 in the textbook [16].
1.2.1.1 Monomolecular Reaction
In chemical language, this reaction is written in terms of two chemical species A, B as
In a particle model we may denote n A, B as the number of particles of A, B. In Boltzmann’s kinetic gas theory, which needs to be carefully discussed when applied within the human body (under the assumptions of constant pressure, volume V, and temperature T!), one obtains for the changes Δ n A, B of particle numbers n A, B within some time interval Δ t
where the second equation is the conservation of particles. In a continuum model, the associated concentrations are defined as
For ease of writing, one usually identifies the names for the concentrations with the names of the corresponding chemical species, i.e. \(c_{A} \rightarrow A,c_{B} \rightarrow B\) etc. Upon defining k as a reaction rate coefficient, we thus arrive at the ODEs
If we set initial conditions
then we can solve these simple equations analytically to obtain
In passing we note that mass conservation still holds in the two equivalent forms
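The analytic solution and the conservation law can be checked in a few lines of Python. We use the standard closed form \(A(t) = A_0 e^{-kt}\), \(B(t) = B_0 + A_0 - A(t)\) for the reaction A → B; the rate coefficient and initial concentrations are hypothetical.

```python
import math

def monomolecular(a0, b0, k, t):
    """Analytic solution of A' = -k*A, B' = k*A (reaction A -> B)."""
    a = a0 * math.exp(-k * t)
    b = b0 + a0 - a             # mass conservation: A + B = A0 + B0 for all t
    return a, b

# hypothetical rate coefficient and initial concentrations
a0, b0, k = 2.0, 0.5, 1.3
```

At any time t the sum A(t) + B(t) equals A 0 + B 0 exactly, while A decays monotonically to zero.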
1.2.1.2 Bimolecular Reaction
In chemical language, this reaction reads
Using the same kinetic reaction principles as before, one is led to the ODE model (again identifying species and concentration names)
In passing we again note that conservation of mass holds:
Important special cases of this mechanism that arise in systems biology are
-
catalysis: B = C
-
autocatalysis: B = C = D, e.g., DNA replication: nucleotide + DNA \(\rightleftharpoons \) 2 DNA
Stationary state. The equilibrium phase, also called the stationary state, is characterized by
From this we arrive at the classical law of mass action kinetics (often called the Arrhenius law):
where we have already inserted the Boltzmann formula with Δ E the activation energy, which is the energy difference between reactants and products, R the universal gas constant, and T the temperature (as above). If only the equilibrium phase of this reaction is to be modeled, then the above equilibrium coefficient \(k_{21} = k_{2}/k_{1}\) is the only well-defined degree of freedom. In this case, a model reduction is possible. A simple illustrative example for this phenomenon will be worked out in Sect. 3.5.2.
1.2.1.3 General Reaction Scheme
For the sake of completeness, we mention that a reaction of the general type
would give rise to the equilibrium relation (the general law of mass action kinetics )
A general reaction of the type
results in the reaction rate equation
where
Remark 1
Often, the factorials in the above denominators are absorbed into the constant k, giving rise to a reaction rate equation in the form
Both forms can be found in the literature; note that the value of the reaction rate coefficient will vary accordingly.
Remark 2
Whenever the copy numbers of species involved in a chemical reaction get small, random fluctuations come into play. In this case, the ODE models based on mass action kinetics must be replaced by the chemical master equation (CME). The CME is the fundamental equation of stochastic chemical kinetics. This differential-difference equation (continuous in time and discrete in the state space) describes the temporal evolution of the probability density function for the states of a chemical system. The state of the system represents the copy numbers of interacting species, which change according to a list of possible reactions. The solution of the CME in higher dimensions is mathematically challenging and the topic of ongoing research. A detailed discussion would go beyond the scope of this book.
1.2.1.4 Inhibitory or Stimulatory Impact
In quite a number of chemical reactions in biology detailed knowledge about the individual reaction mechanisms is not available, but only some information of the kind “inhibitory or stimulatory impact”. This qualitative insight is usually captured quantitatively in terms of so-called Hill functions. Let S denote some input substrate concentration and P the corresponding output product concentration. Then, in terms of threshold values \(T,T^{-},T^{+}\) and Hill coefficients n, the following modelling schemes are in common use (see Fig. 1.4):
-
Inhibitory processes. These are described by negative feedback Hill functions (with the notation \(X = S/T\))
$$\displaystyle{ h^{-}(S,T,n) = \frac{1} {1 + X^{n}},\quad P^{{\prime}} = p^{-}h^{-}(S,T,n)\;, }$$(1.17)where p − denotes some reaction rate coefficient.
-
Stimulatory processes. These are described by positive feedback Hill functions (with the notation \(X = S/T\))
$$\displaystyle{ h^{+}(S,T,n) = \frac{X^{n}} {1 + X^{n}},\quad P^{{\prime}} = p^{+}h^{+}(S,T,n)\;, }$$(1.18)where p + is again some reaction rate coefficient.
-
Switch processes. Whenever two process directions are mutually independent, then they can be modeled by biphasic Hill functions
$$\displaystyle{ h^{\pm }(S,T^{-},T^{+},n) = h^{-}(S,T^{-},n) + h^{+}(S,T^{+},n)\;, }$$(1.19)which gives rise to the ODE parts
$$\displaystyle{P^{{\prime}} = p^{\pm }h^{\pm }(S,T^{-},T^{+},n)\;,}$$with p ± as reaction rate coefficient. The switch takes place at \(T_{s} = \sqrt{T^{- } T^{+}}\), compare Fig. 1.4. In passing we note that
$$\displaystyle{h^{-}(S,T,n) + h^{+}(S,T,n) = 1\;.}$$Whenever the two process directions are mutually dependent, they should be coupled multiplicatively, i.e.
$$\displaystyle{ h^{\pm }(S,T^{-},T^{+},n) = h^{-}(S,T^{-},n) \times h^{+}(S,T^{+},n)\;. }$$(1.20)
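The Hill functions (1.17)–(1.19) are easy to implement and to sanity-check. The sketch below follows the definitions above directly; the numerical threshold values used in the checks are hypothetical.

```python
def hill_minus(s, t, n):
    """Negative feedback Hill function h^-(S, T, n) = 1 / (1 + (S/T)^n), Eq. (1.17)."""
    x = (s / t) ** n
    return 1.0 / (1.0 + x)

def hill_plus(s, t, n):
    """Positive feedback Hill function h^+(S, T, n) = (S/T)^n / (1 + (S/T)^n), Eq. (1.18)."""
    x = (s / t) ** n
    return x / (1.0 + x)

def hill_biphasic(s, t_minus, t_plus, n):
    """Additive biphasic Hill function (1.19) for mutually independent directions."""
    return hill_minus(s, t_minus, n) + hill_plus(s, t_plus, n)
```

One verifies numerically that \(h^- + h^+ = 1\) for equal thresholds, that each function takes the value 1/2 at its threshold, and that at the switch point \(T_s = \sqrt{T^-T^+}\) the two contributions of the biphasic function are equal.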
1.2.2 Enzyme Kinetics
A special case of reaction mechanism is that of enzyme kinetics, which we treat here in some detail, since this mechanism can be handled numerically in different ways. It involves four chemical species: substrate S, product P, enzyme E, and complex C. In chemical language, this kind of reaction scheme is written as
The corresponding mathematical formulation in terms of an ODE system is:
Observe that all parameters above enter linearly, compare the remarks in Sect. 3.4 in the context of parameter sensitivity analysis. As initial conditions we typically have
As for mass conservation, we now have two chemical reactions:
Upon eliminating \(E = E_{0} - C\) and \(P = S_{0} - S - C\) from the above four ODEs, we obtain a reduced model with only two ODEs
to be completed by the two above initial conditions \(S(0) = S_{0},\;C(0) = 0\). In Fig. 1.5, we show the results of numerical simulations.
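A numerical simulation of the full four-species system is straightforward with mass action kinetics. The sketch below integrates S + E ⇌ C → P + E with a classical Runge-Kutta scheme; the rate-coefficient names k1, km1, k2 and all numerical values are our own hypothetical choices, not taken from the text.

```python
def enzyme_rhs(y, k1, km1, k2):
    """Mass-action ODEs for S + E <-> C -> P + E; y = (S, E, C, P).
    Rate names k1 (binding), km1 (dissociation), k2 (product formation)
    are hypothetical labels."""
    s, e, c, p = y
    v1 = k1 * s * e             # forward binding
    v2 = km1 * c                # complex dissociation
    v3 = k2 * c                 # product formation
    return (-v1 + v2, -v1 + v2 + v3, v1 - v2 - v3, v3)

def rk4(y, h, *args):
    """One classical fourth-order Runge-Kutta step for the enzyme system."""
    def add(u, v, f):
        return tuple(ui + f * vi for ui, vi in zip(u, v))
    k1v = enzyme_rhs(y, *args)
    k2v = enzyme_rhs(add(y, k1v, 0.5 * h), *args)
    k3v = enzyme_rhs(add(y, k2v, 0.5 * h), *args)
    k4v = enzyme_rhs(add(y, k3v, h), *args)
    return tuple(yi + h * (a + 2 * b + 2 * c + d) / 6.0
                 for yi, a, b, c, d in zip(y, k1v, k2v, k3v, k4v))

# hypothetical rate coefficients and initial data S0, E0
s0, e0 = 10.0, 1.0
y = (s0, e0, 0.0, 0.0)
for _ in range(5000):           # integrate to t = 5 with step 0.001
    y = rk4(y, 0.001, 1.0, 0.5, 0.4)
```

Since Runge-Kutta methods reproduce linear invariants exactly, both conservation laws \(E + C = E_0\) and \(S + C + P = S_0\) hold up to roundoff along the computed trajectory.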
Michaelis-Menten kinetics. We continue with an analysis of the above enzyme reaction mechanism by introducing the so-called quasi-steady state approximation, in short: QSSA. In this framework, we set
Insertion into the ODE (1.22) then yields
where K m denotes the so-called Michaelis constant. Inserting this expression into (1.21) leads to
The ODE (1.23) is called Michaelis-Menten kinetics.
Generally speaking, the advent of modern stiff integrators (see Sects. 2.3 and 2.4) has made the QSSA including the Michaelis-Menten kinetics superfluous. Nevertheless such models have survived even in recent literature, which is why they are also accepted as possible mechanisms in the modelling language SBML [11].
1.2.3 Assembly of Large ODE Networks
The previous two sections have shown that there exists a one-to-one correspondence between elementary chemical reactions and ODE schemes. In actual systems biological modelling such small blocks will have to be assembled to large chemical reaction networks. For this purpose, it is convenient to construct a so-called chemical compiler that automatically generates the ODE system. (We deliberately skip here the possible addition of further physiological mechanisms that give rise to ODEs of different kind; they will need an extra treatment.)
1.2.3.1 Chemical Compiler
Such a programming tool generates the right-hand sides f of an ODE system (1.2) from elementary pieces. This is a comparatively easy task, since the mechanisms of Sect. 1.2.1 lead to known functions such as polynomials or Hill functions. Simultaneously, anticipating Sect. 1.3.2 below, the Jacobians f y , f p (with respect to variables y and parameters p) will be needed. In the polynomial terms the parameters usually enter linearly, which is why we explicitly advise users to avoid any Michaelis-Menten kinetics (see (1.23)) wherever possible, since it would give rise to parameters entering nonlinearly. Of course, any additional right-hand sides originating from other source terms should be treated separately. In particular, approximation of the Jacobians f y , f p by numerical differentiation might be applicable, which requires special software, see [5].
Such a compiler permits a user to concentrate on modelling questions without getting too much involved with the arising ODE system. At the same time it helps to reduce programming errors. That is why already in the 1980s FORTRAN codes like CHEMKIN due to [41] or LARKIN due to [4, 22] have been developed, mainly oriented towards physical chemistry. Nowadays, CHEMKIN is developed by the company ReactionDesign, whereas an open-source version, named Cantera, is developed by the group of Dave Goodwin at the California Institute of Technology. More recent developments oriented towards systems biology are the SBML package [46] to be combined with numerical codes like Copasi due to [39] or BioPARKIN due to [25].
1.2.3.2 Compartment Modelling
This modelling technique is quite popular in computational biology. It consists in splitting the system under consideration into separate compartments, which are coupled by ODEs that describe the quantitative connections between these parts of the model. Within each of the compartments, concentrations are assumed to be uniformly distributed. Rather than discussing this technique abstractly, we illustrate it below by a recent elaborate example. In addition, we present the results of assembling chemical reaction mechanisms. Thus it may stand for a class of typical examples in systems biology. Moreover, in Sect. 2.5.3, we work out a larger compartment model concerning cancer cells.
Example 1
In [53], a model of the human menstrual cycle has been worked out in detail. The selected compartments are: the hypothalamus, the pituitary, and the ovaries, connected by the blood stream, as illustrated in Fig. 1.6.
In Fig. 1.7, part of the corresponding chemical model is presented in the usual form of a reaction diagram. The species have been colored according to their occurrence in different compartments. The full model comprises 33 chemical species (and, of course, the same number of ODEs) as well as 76 chemical reactions and physiological processes. For mere illustration purposes, we just give a selection out of the rather large compiled ODE system.
Luteinizing Hormone (LH):
LH receptor binding:
We deliberately dropped the equations for the gonadotropin releasing hormone (GnRH, already left out in Fig. 1.7), for the follicle stimulating hormone (FSH), the physiological mechanisms for the development of various stages of follicles and corpus luteum as well as the reaction mechanisms for estradiol (E2), progesterone (P4) and the two inhibins (IhA,IhB). Readers interested in all details may want to look up the original paper [53].
1.3 Mathematical Background for Initial Value Problems
From the vast mathematical background material concerning ODE initial value problems we here want to select only such items that need to be understood when modelling and simulating networks in systems biology. In the following we will treat questions of uniqueness, sensitivities, condition numbers, and asymptotic stability.
1.3.1 Uniqueness of Solutions
Given an ODE model, it should be clear whether this model has a unique solution. If this were not the case, then any “good” numerical integrator would run into difficulties. That is why we discuss the topic here. Let \(y^{{\ast}}(t),t \in [t_{0},t_{+}[\) denote a unique solution existing over the half-open interval \(t_{0} \leq t < t_{+}\).
1.3.1.1 Uniqueness Criteria
As worked out in mathematical textbooks (see again, e.g., [16, Section 2.2]), there are three cases that may occur, from which we select two that may come up in systems biological modelling:
-
(a)
The solution y ∗ exists “forever”, i.e. \(t_{+} = \infty \).
-
(b)
The solution “blows up” after finite time, i.e. t + < ∞.
Case (a) essentially requires that the right-hand side f satisfies a global Lipschitz condition
wherein the term ‘global’ means that it holds for all arguments x, y. Typically, this so-called Lipschitz constant L is identified via the derivative of the right-hand side, to be denoted by
The expression f y is often called the Jacobian (matrix) of the right-hand side. With this definition the Lipschitz constant can be calculated as
where the maximum (supremum sup) is taken over all possible arguments y. This seemingly only theoretical quantity will play an important role later in connection with the definition of “stiffness” of ODEs, see Sect. 2.1.4. For illustration purposes, we give two scalar examples of the above cases.
Example 2 (Case (a))
Consider an example similar to the monomolecular reaction (1.11),
The right-hand side is linear so that \(\vert f_{y}(y)\vert = k\). There exists a global Lipschitz constant L = k and thus a unique solution over all times, in the special case
As k > 0, the solution is bounded for all t ≥ 0.
Example 3 (Case (b))
Consider the nonlinear example,
similar to the bimolecular reaction (1.13). Here we obtain \(\vert f_{y}(y)\vert = 2\vert y\vert \), which is only bounded, if we restrict the values of y. Thus we have only local Lipschitz continuity of f. In fact, by solving this equation analytically (using separation of variables), we see that there exists a unique solution
only up to some finite time \(t_{+} = 1\). In Fig. 1.8 we give the graph of the solution.
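The blow-up can be illustrated numerically. Assuming the initial value y(0) = 1, which matches the stated blow-up time \(t_+ = 1\), the analytic solution is \(y(t) = 1/(1-t)\); the sketch below evaluates it and shows that a forward-Euler integration still tracks it well away from the singularity.

```python
def blowup_solution(t):
    """Analytic solution y(t) = 1/(1 - t) of y' = y^2, y(0) = 1 (valid for t < 1)."""
    return 1.0 / (1.0 - t)

def euler_y_squared(t_end, n_steps):
    """Forward Euler for y' = y^2, y(0) = 1, on [0, t_end] with t_end < 1."""
    h = t_end / n_steps
    y = 1.0
    for _ in range(n_steps):
        y += h * y * y
    return y
```

A finite-difference check confirms that the solution indeed satisfies \(y' = y^2\), and the Euler approximation at t = 0.5 agrees with the exact value y = 2 to three digits; as t approaches 1 the solution, and with it any numerical approximation, grows without bound.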
Remark 3
In systems biology, such a Lipschitz condition will typically only hold locally, i.e. for restricted arguments y, which would formally allow for case (b) as well. However, due to mass conservation (see the examples (1.12) or (1.14)) case (b) can be excluded, since any bounded sum of positive terms ensures that each term is bounded. In actual modelling, some scientists ignore mass conservation – with the danger that solutions may then “blow up”. In addition, note that in numerical simulation things already turn bad when the solution only “nearly” blows up. Such events occur, e.g., in the realistic example in Sect. 3.5.3, where there is no mass conservation in the model.
1.3.1.2 Phase Flow and Evolution
Suppose a linear system of equations were given, say Ax = b. If it has a unique solution, say x ∗, then this can be written as \(x^{{\ast}} = A^{-1}b\). The definition of the matrix inverse A −1 is just a clean notation to indicate the uniqueness of the solution; by no means should the linear equation be solved by first computing the matrix inverse and then multiplying it by the right-hand side b.
In a similar way, a notation to indicate that an ODE initial value problem has a unique solution, say y ∗, has emerged. For an autonomous initial value problem
we write
in terms of some phase flow (often just called flow) Φ t satisfying a semigroup property
For a non-autonomous IVP
the unique solution y ∗ is defined via the evolution \(\varPhi ^{t,t_{0}}\) as
The evolution satisfies the semigroup property
The notations \(\varPhi ^{t}y_{0}\) and \(\varPhi ^{t,t_{0}}y_{0}\) should not be misunderstood: these mappings are nonlinear functions of the initial values y 0. As in the case of the matrix inverse for linear equations, these notations should not be regarded as a recipe to solve the given ODE problem.
1.3.2 Sensitivity of Solutions
In a first step, we want to study the effect of a perturbation of the initial value y 0 in the form
1.3.2.1 Propagation Matrices
In the autonomous case, the question is how this deviation propagates along the solution \(y(t) =\varPhi ^{t}y_{0}\). In order to study this propagation, let us start with Taylor’s expansion with respect to the initial perturbation δ y 0, i.e.
Upon dropping terms of second and higher order in δ y 0, we arrive at some linearized perturbation theory
wherein the notation \(\doteq\) denotes the linearization. The thus defined perturbation δ y(t) is given by the linear mapping
in terms of the (d, d)-matrix
called the propagation matrix or Wronskian matrix. This matrix can be interpreted as the sensitivity of the nonlinear mapping Φ t with respect to the initial value y 0. Just like in the nonlinear case (1.33), we get some semigroup property
For non-autonomous IVPs, we merely modify the definition of the Wronskian matrix by expanding the notation to
and thus obtain the analogous linear relation
The corresponding semigroup property reads
1.3.2.2 Variational Equation
Starting from (1.34) we may derive an ODE for the perturbation according to
Upon recalling the definition of the propagation matrix, we find that
Insertion of this ODE above then yields
The thus arising linear ODE
is called the variational equation . Note that this equation is non-autonomous due to the time dependent argument in the derivative matrix f y . Its formal solution is (1.34), which shows that the Wronskian matrix is just the flow (or evolution, respectively) of the variational equation. Note that (1.38) is just the variational equation for the Wronskian matrix itself.
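The fact that the Wronskian solves the variational equation can be verified numerically on a scalar example. The sketch below integrates the augmented system \(y' = f(y)\), \(w' = f_y(y)\,w\), \(w(0) = 1\) for the logistic right-hand side (all rates set to 1 for brevity — a hypothetical choice) and compares w against a finite-difference approximation of the flow's sensitivity with respect to y 0.

```python
def f(y):
    """Logistic right-hand side f(y) = y * (1 - y); rates set to 1 for brevity."""
    return y * (1.0 - y)

def f_y(y):
    """Derivative (1-by-1 Jacobian) of f."""
    return 1.0 - 2.0 * y

def flow_and_wronskian(y0, t_end, n_steps):
    """RK4 for the augmented system y' = f(y), w' = f_y(y)*w, w(0) = 1,
    so that w(t_end) approximates the Wronskian W(t_end)."""
    def rhs(y, w):
        return f(y), f_y(y) * w
    h = t_end / n_steps
    y, w = y0, 1.0
    for _ in range(n_steps):
        a1, b1 = rhs(y, w)
        a2, b2 = rhs(y + 0.5 * h * a1, w + 0.5 * h * b1)
        a3, b3 = rhs(y + 0.5 * h * a2, w + 0.5 * h * b2)
        a4, b4 = rhs(y + h * a3, w + h * b3)
        y += h * (a1 + 2 * a2 + 2 * a3 + a4) / 6.0
        w += h * (b1 + 2 * b2 + 2 * b3 + b4) / 6.0
    return y, w
```

Perturbing the initial value by δ = 10⁻⁶ and recomputing the flow reproduces w(t) to several digits, in line with the linearized perturbation theory above.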
For the non-autonomous case \(y^{{\prime}} = f(t,y)\), we would obtain the modified variational equation
Analogously to the autonomous case, Eq. (1.36) supplies the solution of this non-autonomous variational equation.
1.3.2.3 Condition Numbers
With the above preparations, we are now ready to define the condition of initial value problems. Recall from introductory textbooks on Numerical Analysis (such as [17]) that the condition of a problem is independent of any algorithm applied to solve it. There are two basic possibilities depending on the focus of interest. For notation, we introduce | ⋅ | as the modulus of the elements of a vector or matrix to be well distinguished from \(\|\cdot \|\), the norm of a vector or matrix.
-
(a)
Assume one is interested only in the solution y(t) at a specific time t. Then the pointwise condition number κ 0(t) may naturally be defined as the smallest number for which
$$\displaystyle{ \vert \delta y(t)\vert \leq \kappa _{0}(t) \cdot \vert \delta y_{0}\vert \;. }$$(1.41)On the basis of (1.34), we thus arrive at the definition
$$\displaystyle{ \kappa _{0}(t) =\| W(t)\|\;,\quad \kappa _{0}(0) = 1\;. }$$(1.42) -
(b)
If one is interested in the entire course of the solution y(t) on the whole time interval [0, t], then the interval condition number κ[0, t] may be defined as the smallest number for which
$$\displaystyle{\max _{s\in [0,t]}\vert \delta y(s)\vert \leq \kappa [0,t] \cdot \vert \delta y_{0}\vert \;,}$$
which then implies
$$\displaystyle{ \kappa [0,t] =\max _{s\in [0,t]}\kappa _{0}(s)\;. }$$ (1.43)
The above semigroup property (1.37) directly leads to the following relations:
(i)
κ[0, 0] = 1,
(ii)
κ[0, t 1] ≥ 1,
(iii)
\(\kappa [0,t_{1}] \leq \kappa [0,t_{2}],\quad 0 \leq t_{1} \leq t_{2}\),
(iv)
\(\kappa [0,t_{2}] \leq \kappa [0,t_{1}] \cdot \kappa [t_{1},t_{2}],\quad 0 \leq t_{1} \leq t_{2}\).
The role of the pointwise condition number can be seen in the following example.
Example 4
For the famous Kepler problem , which describes the motion of two bodies (say Earth-Moon) in a gravitational field, one may show that
which is a mild increase. The situation is very different in molecular dynamics , see, e.g., [16, Section 1.2], where one obtains
This means that after some very small critical time \(t_{\mathrm{crit}}\) the initial value problems become ill-conditioned. As a consequence, a different type of computational approach is necessary, called conformation dynamics, more recently also Markov state modelling; see, e.g., the survey article by P. Deuflhard and C. Schütte [21] and the references therein.
The two condition numbers just introduced will be needed below in Sect. 2.1.2, where we discuss error concepts in numerical simulation, and in the following Sect. 1.3.3.
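For a linear ODE y′ = Ay the Wronskian is simply W(t) = exp(tA), so both condition numbers can be evaluated directly from definitions (1.42) and (1.43). A minimal sketch, with a hypothetical non-normal example matrix A, shows that κ[0, t] can grow far beyond 1 even though all eigenvalues of A have negative real part:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical non-normal example matrix; eigenvalues -1 and -2, i.e.
# all solutions decay asymptotically, yet perturbations grow transiently.
A = np.array([[-1.0, 50.0],
              [ 0.0, -2.0]])

ts = np.linspace(0.0, 10.0, 401)
# Pointwise condition number kappa_0(t) = ||exp(tA)|| (spectral norm here).
kappa0 = np.array([np.linalg.norm(expm(t * A), 2) for t in ts])
# Interval condition number kappa[0, t] = running maximum of kappa_0.
kappa_interval = np.maximum.accumulate(kappa0)

print(kappa0[0])           # equals 1 at t = 0, as in (1.42)
print(kappa_interval[-1])  # transient growth despite Re(lambda) < 0
```

The transient growth is a genuinely non-normal effect: for normal matrices with negative spectrum, κ0(t) decays monotonically.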
1.3.2.4 Parameter Sensitivities
In the majority of problems in systems biology, parameter dependent systems arise in the form
Here we are naturally interested in the effect of perturbations
with respect to \(p = (p_{1},\ldots,p_{q})\). For this purpose we define the parameter sensitivities
Upon application of the chain rule of differentiation, this quantity can be seen to satisfy a modified variational equation for each parameter component
Remark 4
The actual numerical solution of any of the variational equations (1.39), (1.40), or (1.47) requires treating an extended ODE system that includes the original ODE (1.32), since the arguments of the Jacobians f y and f p must be computed along the solution. For certain algorithmic details see Sect. 2.5.1 below.
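A minimal sketch of such an extended system, for the hypothetical scalar ODE y′ = −py (a toy example, not from the chapter): the sensitivity s = ∂y∕∂p is appended to the state and propagated by its variational equation s′ = f_y s + f_p = −p s − y, then compared with the analytically known value.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy parameter-dependent ODE: y' = f(y, p) = -p*y, y(0) = y0.
# The sensitivity s = dy/dp satisfies s' = f_y*s + f_p = -p*s - y, s(0) = 0.
p, y0 = 0.7, 2.0

def extended(t, z):
    y, s = z
    return [-p * y, -p * s - y]

t_end = 3.0
sol = solve_ivp(extended, (0.0, t_end), [y0, 0.0], rtol=1e-10, atol=1e-12)
s_num = sol.y[1, -1]

# Exact solution y(t) = y0*exp(-p*t) gives dy/dp = -t*y0*exp(-p*t).
s_exact = -t_end * y0 * np.exp(-p * t_end)
print(s_num, s_exact)
```

For q parameters the same idea appends q sensitivity systems, i.e. the extended system has d(1 + q) equations.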
1.3.3 Asymptotic Stability
In order to sharpen our mathematical intuition, we analyze the two types of condition numbers as introduced in the previous section for a notorious scalar ODE problem.
Example 5
Despite its simplicity this problem yields deep insight into the structure of ODEs. Let \(\lambda \in \mathbb{R}\) denote some parameter in the initial value problem
The general solution may be written in the form
Obviously, there exists an equilibrium solution y(t) = g(t) for all t where g is defined. In Fig. 1.9, we give two examples for the above model problem. Upon varying the initial values y 0, we may clearly distinguish two qualitatively different situations, asymptotic stability versus inherent instability.
Condition numbers. Let us exemplify the two condition numbers defined above. From definition (1.41) and (1.42) we immediately obtain the pointwise condition number
from which (1.43) yields the interval condition number, say κ[0, T] over an interval [0, T]. There are three qualitatively different situations for the two characteristic numbers:
(a)
λ < 0: here we get
$$\displaystyle{\kappa _{0}(t) =\exp (-\vert \lambda \vert t)\stackrel{t \rightarrow \infty }{\longrightarrow }0\;,\quad \kappa [0,T] =\kappa _{0}(0) = 1,}$$
i.e. any initial perturbation will decay over sufficiently large time intervals, see Fig. 1.9, left; in this case, the equilibrium solution y = g is said to be asymptotically stable;
(b)
λ = 0: here we obtain
$$\displaystyle{\kappa [0,T] =\kappa _{0}(T) = 1,\quad \text{for all}\;T \geq 0\;,}$$
i.e. any initial perturbation is preserved;
(c)
λ > 0: here we get
$$\displaystyle{\kappa [0,T] =\kappa _{0}(T) =\exp (\lambda T)\stackrel{T \rightarrow \infty }{\longrightarrow }\infty \;,}$$
i.e. any perturbation grows exponentially with time; the equilibrium solution y = g is inherently unstable, see Fig. 1.9, right.
The same three cases also appear for complex valued \(\lambda \in \mathbb{C}\), if we replace λ by ℜ λ in (a), (b), (c) above.
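The three cases can also be checked numerically by integrating the model problem for two nearby initial values and measuring the growth of their separation. The sketch below uses g(t) = sin t as a hypothetical choice of equilibrium solution; since the model problem is linear in y, the separation ratio equals exp(λT) exactly, up to integrator error.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model problem y' = lambda*(y - g(t)) + g'(t) with equilibrium y(t) = g(t);
# g(t) = sin(t) is a hypothetical choice for illustration.
g, dg = np.sin, np.cos

def solve(lam, y0, T=4.0):
    rhs = lambda t, y: [lam * (y[0] - g(t)) + dg(t)]
    return solve_ivp(rhs, (0.0, T), [y0],
                     rtol=1e-12, atol=1e-14).y[0, -1]

T, d0 = 4.0, 1e-3
for lam in (-2.0, 0.0, 2.0):
    # Separation of two solutions started d0 apart at t = 0.
    dT = solve(lam, g(0.0) + d0, T) - solve(lam, g(0.0), T)
    print(lam, dT / d0, np.exp(lam * T))  # ratio matches exp(lambda*T)
```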
Next, we want to study a characterization of the stability properties for two more general cases.
1.3.3.1 Matrix Exponential
Suppose we have to solve the linear homogeneous autonomous initial value problem
Its formal solution is often written in terms of the matrix exponential
The careful reader will observe that the matrix exponential is just the Wronskian matrix for the special case (1.49), i.e.
In [57], a list of algorithms for the evaluation of \(\exp (tA)y_{0}\) is collected. We want to emphasize, however, that the matrix exponential, just like any phase flow in general, should preferably be understood as a formal representation of the solution of (1.49), not as a basis for actual computation.
The following property of the matrix exponential is most important. Let M be an arbitrary nonsingular matrix. Then one can show that
Since this property is of fundamental importance, we briefly outline the proof. Upon multiplying (1.49) by M, we get
This yields the formal solution
and, after insertion of the definitions,
which holds for every y 0 so that (1.50) is proven.
Warning. Note that, in general,
$$\displaystyle{\exp \left (t(A + B)\right )\neq \exp (tA)\exp (tB)\;,}$$
unless the so-called commutator \([A,B]_{-} = AB - BA\) vanishes.
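Both the similarity property (1.50) and the warning can be verified numerically with `scipy.linalg.expm`; the matrices below are hypothetical examples chosen so that the commutator does not vanish.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical example matrices with [A, B] = AB - BA != 0.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
M = np.array([[2.0, 1.0], [1.0, 1.0]])
Minv = np.linalg.inv(M)
t = 1.0

# Similarity property: exp(t*M*A*M^{-1}) = M * exp(t*A) * M^{-1}.
lhs = expm(t * M @ A @ Minv)
rhs = M @ expm(t * A) @ Minv
print(np.allclose(lhs, rhs))  # True

# Warning: exp(t(A+B)) differs from exp(tA)*exp(tB) since [A,B] != 0.
print(np.allclose(expm(t * (A + B)), expm(t * A) @ expm(t * B)))  # False
```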
1.3.3.2 Stability of Linear Homogeneous Autonomous ODEs
For simplicity, let us now assume that A is diagonalizable; the results also hold in the non-diagonalizable case, which we skip here since it is rather technical. Under this assumption there exists a nonsingular matrix M such that
$$\displaystyle{MAM^{-1} =\Lambda = \mathrm{diag}(\lambda _{1},\ldots,\lambda _{d})\;.}$$
Then, with \(\bar{y} = My\), the ODE y ′ = Ay decomposes into d one-dimensional ODEs
$$\displaystyle{\bar{y}_{i}^{{\prime}} =\lambda _{i}\bar{y}_{i}\;,\quad i = 1,\ldots,d\;,}$$
from which we obtain
$$\displaystyle{\bar{y}_{i}(t) =\exp (\lambda _{i}t)\,\bar{y}_{i}(0)\;.}$$
This gives rise to the following classification:
(a)
\(\mathfrak{R}(\lambda _{i}) < 0\; \Rightarrow \;\vert \bar{y}_{i}(t)\vert \rightarrow 0\) for t → ∞, i.e. the solution component \(\bar{y}_{i}\) dies out asymptotically,
(b)
\(\mathfrak{R}(\lambda _{i}) \leq 0\; \Rightarrow \;\vert \bar{y}_{i}(t)\vert \leq \vert \bar{y}_{i}(0)\vert \), i.e. the solution component \(\bar{y}_{i}\) remains bounded for all t ≥ 0,
(c)
\(\mathfrak{R}(\lambda _{i}) > 0\; \Rightarrow \;\vert \bar{y}_{i}(t)\vert > \vert \bar{y}_{i}(0)\vert \), i.e. the solution component \(\bar{y}_{i}\) blows up for t → ∞.
Of course, for systems, different components of \(\bar{y}\) may fall into different classes and may be mixed via the transformation matrix M. Hence, we arrive at the following stability criteria:
(a)
The solution y is stable, if \(\mathfrak{R}(\lambda _{i}) \leq 0\;\) for all i.
(b)
The solution y is asymptotically stable, if \(\mathfrak{R}(\lambda _{i}) < 0\;\) for all i.
(c)
The solution is unstable, if the condition \(\mathfrak{R}(\lambda _{i}) > 0\;\) holds for at least one index i; in this case, the instability will occur in at least one direction.
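These criteria translate directly into a small eigenvalue test. The helper `classify` below is a hypothetical sketch, assuming A diagonalizable as in the text, and returns the strongest applicable label:

```python
import numpy as np

# Classify stability of y' = A*y from the real parts of the eigenvalues of A,
# following criteria (a)-(c). Hypothetical helper; assumes A diagonalizable.
def classify(A, tol=1e-12):
    re = np.real(np.linalg.eigvals(A))
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.all(re <= tol):
        return "stable"
    return "unstable"

print(classify(np.array([[-1.0, 0.0], [0.0, -3.0]])))  # asymptotically stable
print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # stable (eigenvalues +/- i)
print(classify(np.array([[1.0, 0.0], [0.0, -2.0]])))   # unstable
```

The tolerance `tol` guards against rounding noise in computed eigenvalues with vanishing real part.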
1.3.3.3 Stability of Nonlinear ODEs Around Fixed Points
We now consider general nonlinear autonomous ODEs. As shown above, linearized perturbations δ y are governed by the variational equation (1.39). This equation is linear, but non-autonomous, i.e. of the type
with y(t) the given solution to be studied. As a consequence, the above stability classification does not apply. Counter-examples can be found in the literature in which the eigenvalues of some matrix A(t) satisfy the above stability criterion for all t, yet the perturbations δ y(t) nevertheless blow up (see the notorious example of H. O. Kreiss, e.g., [16, Remark 3.29]).
For this reason, a simpler approach studies the behavior of the solution around some fixed point \(y^{{\ast}}\in \mathbb{R}^{d}\) defined by \(f(y^{{\ast}}) = 0\). Upon defining initial values
$$\displaystyle{y(0) = y^{{\ast}} +\delta y_{0}\;,}$$
we arrive at the variational equation
$$\displaystyle{\delta y^{{\prime}} = f_{y}(y^{{\ast}})\,\delta y\;,\quad \delta y(0) =\delta y_{0}\;.}$$
Obviously, in this case the Jacobian matrix \(A:= f_{y}(y^{{\ast}})\) is constant, so that the above stability theory applies.
Recall, however, that we have used a linearized perturbation analysis. Caution against blind application of such a theory is strongly advised. For illustration, we give the following warning example.
Example 6
We compare two simple nonlinear initial value problems.
(a)
The ODE is given by
$$\displaystyle{y^{{\prime}} = -y^{3}\;.}$$
Its solution for arbitrary initial value y 0 is
$$\displaystyle{y(t) = \left ( \frac{1} {y_{0}^{2}} + 2t\right )^{-1/2}\stackrel{t \rightarrow \infty }{\longrightarrow }0\;,}$$
i.e. the system returns from any given y 0 to the fixed point \(y^{{\ast}} = 0\). Hence, it is asymptotically stable.
(b)
This time the ODE is given by
$$\displaystyle{y^{{\prime}} = y^{3}\;.}$$
Its solution is
$$\displaystyle{y(t) = \left ( \frac{1} {y_{0}^{2}} - 2t\right )^{-1/2}\stackrel{t \rightarrow t_{ +}}{\longrightarrow }\infty \;,}$$
i.e. the system blows up at \(t_{+} = 1/(2y_{0}^{2})\). Hence, it is unstable.
Observe, however, that both systems have the same variational equation \(\delta y^{{\prime}} = 0\) around the fixed point y ∗ = 0, which implies \(\delta y(t) = \mathrm{const}\), even though the qualitative behavior of the two ODEs is very different. Consequently, linearized perturbation analysis may be misleading in predicting the qualitative behavior of a nonlinear initial value problem.
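The contrasting behavior of the two ODEs is easy to reproduce numerically; note that the unstable case must be stopped before the blow-up time t₊ = 1∕(2y₀²), here 0.5 for y₀ = 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

y0 = 1.0

# Case (a): y' = -y^3 decays toward the fixed point y* = 0.
dec = solve_ivp(lambda t, y: [-y[0]**3], (0.0, 50.0), [y0],
                rtol=1e-10, atol=1e-12)
exact_dec = (1.0 / y0**2 + 2 * 50.0) ** -0.5
print(dec.y[0, -1], exact_dec)  # both approx. 0.0995

# Case (b): y' = +y^3 blows up at t_+ = 1/(2*y0^2) = 0.5;
# we integrate only up to t = 0.49, just before the singularity.
grow = solve_ivp(lambda t, y: [y[0]**3], (0.0, 0.49), [y0],
                 rtol=1e-10, atol=1e-12)
exact_grow = (1.0 / y0**2 - 2 * 0.49) ** -0.5
print(grow.y[0, -1], exact_grow)  # both approx. 7.07
```

Yet both right-hand sides have the identical linearization f_y(0) = 0, which is exactly the point of the warning example.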
1.3.4 Singularly Perturbed Problems
Assume that a given ODE system has a solution y = (u, v) that naturally splits into a “slow” component u and a “fast” component v. Such a system may be written as a two-component system of the kind
where some “fast” time scale \(\tau = t/\varepsilon\) with \(0 <\varepsilon \ll 1\) has been introduced such that
Assume further that
and let \(\varepsilon \rightarrow 0^{+}\). Then, in the quasi-steady state approach (abbreviated: QSSA), we obtain the differential-algebraic equation (DAE)
for some two-component solution y 0 = (u 0, v 0). Due to (1.53) we may interpret the limit in such a way that, for an arbitrary starting value v 0(0), the solution component v 0 "immediately" approaches a nearby value on the constraint manifold. For illustration, see Fig. 1.10.
With the availability of modern adaptive stiff integrators (see Chap. 2 below), there typically is no visible performance difference between the numerical solution of the ODE (1.52) and of the DAE (1.54). In fact, while the ODE system usually has a unique solution, the DAE may not have a unique solution, unless further assumptions hold. Instead of diving into theoretical details (to be found, e.g., in the textbook [16] and references therein) we give an illustrative example from reaction kinetics.
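As an illustration of this fast-slow structure, consider the hypothetical linear system u′ = −v, εv′ = u − v of type (1.52) (our own toy example, not from the chapter), whose QSSA limit (1.54) is the constraint v = u with reduced ODE u′ = −u. A stiff integrator resolves the full system directly, and its solution agrees with the QSSA limit up to O(ε):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical fast-slow system: u' = -v, eps*v' = u - v.
# QSSA limit: constraint v = u, reduced ODE u' = -u.
eps, u0, v0, T = 1e-6, 1.0, 0.0, 1.0

def full(t, z):
    u, v = z
    return [-v, (u - v) / eps]

# A stiff integrator (here Radau) handles the full system without trouble,
# even though v starts off the constraint manifold (v0 != u0).
sol = solve_ivp(full, (0.0, T), [u0, v0], method="Radau",
                rtol=1e-8, atol=1e-10)
u_full, v_full = sol.y[:, -1]

u_qssa = u0 * np.exp(-T)     # exact solution of the reduced ODE u' = -u
print(u_full, u_qssa)        # agree up to O(eps)
print(abs(v_full - u_full))  # v sits on the constraint manifold v = u
```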
Example 7
This example treats a chemical network modelling the thermal decomposition of n-hexane. The system comprises 47 chemical reactions among 25 chemical species, i.e. there are d = 25 ODEs. Among the 25 species, chemical insight identifies 13 as chemically stable, while 12 are so-called "free radicals". Hence, in a first QSSA treatment of the kind (1.54) (reported in [18]), one might be tempted to set up 13 ODEs and 12 algebraic equations. In this case, however, no unique solution exists, as can be proven rigorously; the same result was detected by the stiff integrator LIMEX, which is equipped with a special uniqueness monitor (not described here). If only 7 of the radicals are selected for the algebraic equations (after some trial and error), then a unique solution exists. However, the computing times are the same as without any QSSA preprocessing. These results are summarized in Table 1.1.
References
Amestoy, P., Duff, I., Koster, J., L’Excellent, J.Y.: A fully asynchronous multifrontal solver using distributed dynamic scheduling. SIAM J. Matrix Anal. Appl. 23(1), 15–41 (2001)
Amestoy, P.R., Buttari, A., Duff, I.S., Guermouche, A., L’Excellent, J.Y., Uçar, B.: MUMPS. In: Padua, D. (ed.) Encyclopedia of Parallel Computing. Springer, New York (2011)
Bader, G., Deuflhard, P.: A semi-implicit mid-point rule for stiff systems of ordinary differential equations. Numer. Math. 41, 373–398 (1983)
Bader, G., Nowak, U., Deuflhard, P.: An advanced simulation package for large chemical reaction systems. In: Aiken, R.C. (ed.) Stiff Computation, pp. 255–264. Oxford University Press, New York/Oxford (1985)
Bock, H.G.: Numerical treatment of inverse problems in chemical reaction kinetics. In: Ebert, K.H., Deuflhard, P., Jäger, W. (eds.) Modelling of Chemical Reaction Systems, pp. 102–125. Springer, Berlin/Heidelberg/New York (1981)
Bock, H.G.: Randwertproblemmethoden zur Parameteridentifizierung in Systemen nichtlinearer Differentialgleichungen. Ph.D. thesis, Universität zu Bonn (1985)
Boer, H.M.T., Stötzel, C., Röblitz, S., Deuflhard, P., Veerkamp, R.F., Woelders, H.: A simple mathematical model of the bovine estrous cycle: follicle development and endocrine interactions. J. Theor. Biol. 278, 20–31 (2011)
Brown, P.N., Byrne, G.D., Hindmarsh, A.C.: VODE: a variable-coefficient ODE solver. SIAM J. Sci. Stat. Comput. 10, 1038–1051 (1989)
Businger, P., Golub, G.H.: Linear least squares solutions by Householder transformations. Numer. Math. 7, 269–276 (1965)
Butcher, J.C.: Coefficients for the study of Runge-Kutta integration processes. J. Aust. Math. Soc. 3, 185–201 (1963)
Cornish-Bowden, A.: The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models. Bioinformatics 19, 524–531 (2003)
Dahlquist, G.: Convergence and stability in the numerical integration of ordinary differential equations. Math. Scand. 4, 33–53 (1956)
Deuflhard, P.: Order and stepsize control in extrapolation methods. Numer. Math. 41, 399–422 (1983)
Deuflhard, P.: Recent progress in extrapolation methods for ordinary differential equations. SIAM Rev. 27, 505–535 (1985)
Deuflhard, P.: Newton Methods for Nonlinear Problems. Affine Invariance and Adaptive Algorithms. Springer International, Heidelberg, New York (2002)
Deuflhard, P., Bornemann, F.: Scientific Computing with Ordinary Differential Equations. Texts in Applied Mathematics, vol. 42. Springer, New York (2002)
Deuflhard, P., Hohmann, A.: Numerical Analysis in Modern Scientific Computing: An Introduction. Texts in Applied Mathematics, vol. 43, 2nd edn. Springer, New York (2003)
Deuflhard, P., Nowak, U.: Efficient numerical simulation and identification of large chemical reaction systems. Ber. Bunsenges 90, 940–946 (1986)
Deuflhard, P., Nowak, U.: Extrapolation integrators for quasilinear implicit ODEs. In: Deuflhard, P., Engquist, B. (eds.) Large Scale Scientific Computing, pp. 37–50. Birkhäuser, Boston/Basel/Stuttgart (1987)
Deuflhard, P., Sautter, W.: On rank-deficient pseudoinverses. Lin. Alg. Appl. 29, 91–111 (1980)
Deuflhard, P., Schütte, C.: Molecular conformation dynamics and computational drug design. In: Hill, J., Moore, R. (eds.) Applied Mathematics Entering the 21st Century. Invited Talks from the ICIAM 2003 Congress, pp. 91–119. SIAM, Philadelphia (2004)
Deuflhard, P., Bader, G., Nowak, U.: LARKIN—a software package for the numerical simulation of LARge systems arising in chemical reaction KINetics. In: Ebert, K.H., Deuflhard, P., Jäger, W. (eds.) Modelling of Chemical Reaction Systems, pp. 38–55. Springer, Berlin/Heidelberg/New York (1981)
Deuflhard, P., Hairer, E., Zugck, J.: One–step and extrapolation methods for differential–algebraic systems. Numer. Math. 51, 501–516 (1987)
Dierkes, T., Wade, M., Nowak, U., Röblitz, S.: BioPARKIN – biology-related parameter identification in large kinetic networks. ZIB-Report 11–15, Zuse Institute Berlin (ZIB) (2011). http://opus4.kobv.de/opus4-zib/frontdoor/index/index/docId/1270
Dierkes, T., Röblitz, S., Wade, M., Deuflhard, P.: Parameter identification in large kinetic networks with BioPARKIN. arXiv:1303.4928 (2013)
Dormand, J.R., Prince, P.J.: A family of embedded Runge-Kutta formulae. J. Comput. Appl. Math. 6, 19–26 (1980)
Ehle, B.L.: On Padé approximations to the exponential function and A-stable methods for the numerical solution of initial value problems. Research Report CSRR 2010, Department of AACS, University of Waterloo, Ontario (1969)
Gear, C.W.: Numerical Initial Value Problems in Ordinary Differential Equations. Prentice-Hall, Englewood Cliffs (1971)
Gragg, W.B.: Repeated extrapolation to the limit in the numerical solution of ordinary differential equations. Ph.D. thesis, University of California, San Diego (1963)
Griewank, A., Corliss, G.F. (eds.): Automatic Differentiation of Algorithms: Theory, Implementation, and Application. SIAM, Philadelphia (1991)
Guglielmi, N., Hairer, E.: Implementing Radau II-A methods for stiff delay differential equations. Computing 67, 1–12 (2001)
Hairer, E., Ostermann, A.: Dense output for extrapolation methods. Numer. Math. 58, 419–439 (1990)
Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems, 2nd edn. Springer, Berlin/Heidelberg/New York (1996)
Hairer, E., Nørsett, S.P., Wanner, G.: Solving Ordinary Differential Equations I. Nonstiff Problems, 2nd edn. Springer, Berlin/Heidelberg/New York (1993)
Hengl, S., Kreutz, C., Timmer, J., Maiwald, T.: Data-based identifiability analysis on nonlinear dynamical models. Bioinformatics 23, 2612–2618 (2007)
Hindmarsh, A.C.: LSODE and LSODI, two new initial value ordinary differential equations solvers. ACM SIGNUM Newsl. 15, 10–11 (1980)
Hindmarsh, A.C., Serban, R.: User documentation for cvode v2.7.0. Technical Report UCRL-SM-208108, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory (2012)
Hindmarsh, A.C., Brown, P.N., Grant, K.E., Lee, S.L., Serban, R., Shumaker, D.E., Woodward, C.S.: SUNDIALS: suite of nonlinear and differential/algebraic equation solvers. ACM Trans. Math. Softw. 31(3), 363–396 (2005)
Hoops, S., Sahle, S., Gauges, R., Lee, C., Pahle, J., Simus, N., Singhal, M., Xu, L., Mendes, P., Kummer, U.: COPASI – a COmplex PAthway SImulator. Bioinformatics 22, 3067–3074 (2006)
Jones, D.S., Plank, M.J., Sleeman, B.D.: Differential Equations and Mathematical Biology. Mathematical and Computational Biology, 2nd edn. Chapman & Hall/CRC, Boca Raton (2010)
Kee, R.J., Miller, J.A., Jefferson, T.H.: CHEMKIN: a general-purpose, problem-independent, transportable, FORTRAN chemical kinetics code package. Technical Report SAND 80–8003, Sandia National Laboratory, Livermore (1980)
König, M., Holzhütter, H.G., Berndt, N.: Metabolic gradients as key regulators in zonation of tumor energy metabolism: a tissue-scale model-based study. Biotechnol. J. 8, 1058–1069 (2013)
Lang, J., Teleaga, D.: Towards a fully space-time adaptive FEM for magnetoquasistatics. IEEE Trans. Magn. 44(6), 1238–1241 (2008)
Maly, T., Petzold, L.: Numerical methods and software for sensitivity analysis of differential-algebraic systems. Appl. Numer. Math. 20, 57–79 (1996)
Murray, J.D.: Mathematical Biology I: An Introduction. Interdisciplinary Applied Mathematics, vol. 17, 3rd edn. Springer, Heidelberg, New York (2008)
Novère, N.L., et al.: Biomodels database: a free, centralized database of curated, published, quantitative kinetic models of biochemical and cellular systems. Nucleic Acids Res. 34, D689–D691 (2006)
Nowak, U.: Adaptive finite difference approximation of Jacobian matrices. private communication, software NLSCON (1991)
Nowak, U., Deuflhard, P.: Numerical identification of selected rate constants in large chemical reaction systems. Appl. Numer. Math. 1, 59–75 (1985)
Penrose, R.: A generalized inverse for matrices. Proc. Camb. Philos. Soc. 51, 406–413 (1955)
Peters, G., Wilkinson, J.: The least squares problem and pseudoinverses. Comput. J. 13, 309–316 (1970)
Petzold, L.R.: A description of DASSL: a differential/algebraic system solver. In: Scientific Computing, pp. 65–68. North-Holland, Amsterdam/New York/London (1982)
Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P. (eds.): Numerical Recipes in Fortran 77, 2nd edn. Cambridge University Press, Cambridge (1992)
Röblitz, S., Stötzel, C., Deuflhard, P., Jones, H., Azulay, D.O., van der Graaf, P., Martin, S.: A mathematical model of the human menstrual cycle for the administration of GnRH analogues. J. Theor. Biol. 321, 8–27 (2013)
Russell, R.D., Shampine, L.: A collocation method for boundary value problems. Numer. Math. 19, 1–28 (1972)
Schlegel, M., Marquardt, W., Ehrig, R., Nowak, U.: Sensitivity analysis of linearly-implicit differential-algebraic systems by one-step extrapolation. Appl. Numer. Math. 48(1), 83–102 (2004)
Shampine, L.F., Thompson, S.: Solving DDEs in MATLAB. Appl. Numer. Math. 37, 441–458 (2001)
Sidje, R.B.: Expokit: a software package for computing matrix exponentials. ACM Trans. Math. Softw. 24, 130–156 (1998)
Stötzel, C., Plöntzke, J., Heuwieser, W., Röblitz, S.: Advances in modeling of the bovine estrous cycle: synchronization with pgf2α. Theriogenology 78(7), 1415–1428 (2012)
Stuart, A.M.: Inverse problem: a Bayesian perspective. Acta Numer. 19, 451–559 (2010)
Vanlier, J., Tiemann, C.A., Hilbers, P.A.J., van Riel, N.A.W.: Parameter uncertainty in biochemical models described by ordinary differential equations. Math. Biosci. 246, 305–314 (2013)
Verhulst, P.F.: Notice sur la loi que la population suit dans son accroissement. Corr. Math. et Phys. 10, 113–121 (1838)
Widlund, O.: A note on unconditionally stable linear multistep methods. BIT 17, 65–70 (1967)
© 2015 Springer International Publishing Switzerland
Cite this chapter: Deuflhard, P., Röblitz, S. (2015). ODE Models for Systems Biological Networks. In: A Guide to Numerical Modelling in Systems Biology. Texts in Computational Science and Engineering, vol. 12. Springer, Cham. https://doi.org/10.1007/978-3-319-20059-0_1
Print ISBN: 978-3-319-20058-3. Online ISBN: 978-3-319-20059-0.