
1 Introduction

In various domains, such as robotics, control theory, or biology, mathematical models based on differential equations are used to represent the temporal behavior of a particular system. Among the many classes of differential equations, this chapter focuses on ordinary differential equations (ODEs) and differential algebraic equations (DAEs), which are widely used in these domains. These models are then used for different purposes such as parameter identification, control synthesis, or safety verification. For example, interesting problems involving differential equations are:

  • Control synthesis problem: A motion planning algorithm for a mobile robot R aims at finding a trajectory of R going from a point a to a point b of the state space while avoiding an obstacle o. Note that as the movement of R depends on actuators (e.g., engine speed), finding a trajectory translates into finding control inputs such that R can reach b.

  • Parameter identification problem: Mathematical models are an approximation of the real world. To make them more faithful, an identification step is usually necessary. Starting from a list of n measures m_i, i ∈{1,…,n}, of the behavior of the real system S and a parametric mathematical model M(p) of the temporal behavior of S, the goal of the identification step is to find values \(\hat {\mathbf {p}}\) of p such that \(M(\hat {\mathbf {p}})\) fits the list of measures m_i, i ∈{1,…,n}.

  • Design problems: This is close to the parameter identification problem. The input data are a parametric model M(p) of a system S and a list of n design specifications P_i, i ∈{1,…,n}; for example, a car shall reach a speed of 100 km/h from 0 km/h in less than 6 s on a flat, dry road. The goal of the design problem is to find values \(\hat {\mathbf {p}}\) of p such that \(M(\hat {\mathbf {p}})\) respects all the specifications P_i, i ∈{1,…,n}.

Defining automatic methods to solve such kinds of problems is challenging. In this context, many attempts have been made based on constraint satisfaction over differential systems [6, 13, 18, 22]. Indeed, the framework of constraint satisfaction problems is an appealing one to express such kinds of problems, and it comes with efficient solving methods producing rigorous results.

We mainly focus on critical problems coming from aeronautics, robotics, or medical fields. Handling problems in these fields implies taking the uncertainties present into account, using validated methods such as interval analysis [24, 28]. We also require that the constraints to solve be rigorously verified. In this context, we use inner approximation to ensure constraint satisfaction, because an enclosure (outer approximation) approach could include points that are not solutions [24, 28]. The classical approach for these requirements is the Branch-and-Prune algorithm, a dedicated solver for constraint satisfaction problems (CSP) and the most used in the case of numerical or continuous CSPs [31, 32].

To handle differential equations, abstraction based on validated simulation or reachability is a common approach [10, 28,29,30]. This abstraction needs to be studied carefully to preserve the correctness of constraint-based problem solving.

In this chapter, we extend a framework for Constraint Satisfaction Differential Problems (CSDP) with the requirement of preserving the guarantee of the result while dealing with differential constraints. The main contributions are:

  • A clearer definition, compared to previous work [6, 13, 18], of a CSP framework based on set-membership operations including differential equations (ODEs or DAEs), namely, SCSDP for Set-Based Constraint Satisfaction Differential Problems. In particular, a better handling of quantified constraints is presented, and a better separation between the mathematical model and the solving algorithm is defined.

  • A sound solving algorithm for SCSDP based on interval analysis and guaranteed integration methods is presented, as well as a complete study of the impact of representing sets by boxes on the guarantee of the solution of an SCSDP.

  • A set of contractor operators on the solution of differential equations is defined to make the Branch-and-Contract solver more efficient for SCSDP.

The chapter is organized as follows. In Sect. 2, the basics of numerical constraint satisfaction problems are provided. The mathematical formulation of SCSDP is defined in Sect. 3, while the solving algorithm is presented in Sect. 4. Some examples are given in Sect. 5 before concluding in Sect. 6.

Notations

Small italic letters x represent real variables, while real vectors x are in bold. The set of real numbers is denoted by \(\mathbb {R}\). Intervals \(\left [x\right ]\) and interval vectors (boxes) \(\mathbf {\left [x\right ]}\) are represented between brackets. We denote by \(\mathbb {I}\mathbb {R}\) the set of closed intervals over \(\mathbb {R}\). Sets \(\mathcal {S}\) are in uppercase calligraphic. The powerset of a set \(\mathcal {X}\) is denoted by \(\wp (\mathcal {X})\). The derivative of a function x with respect to time t is denoted by \(\dot {x}\). Uppercase typewriter letters stand for algorithmic data structures such as a stack S or a queue Q.

2 Preliminary Notions

A brief presentation of the constraint satisfaction problem framework is given in Sect. 2.1 to better understand how it is extended in the rest of the chapter. A generic solving method based on the Branch-and-Prune algorithm is presented in Sect. 2.2.

2.1 Numerical Constraint Satisfaction Problems

In this section, we recall the numerical constraint satisfaction problem (NCSP) formalism, following the description given in [32], and present some basics of constraint programming. The NCSP approach is both powerful enough to address complex problems (NP-hard problems with numerical issues, even in critical applications) and simple in the definition of a solving framework [1, 26].

A NCSP \((\mathcal {V},\mathcal {D}, \mathcal {C})\) is defined as follows:

  • \(\mathcal {V} := \{v_1,\hdots ,v_n\}\) is a finite set of variables which can also be represented by the vector v.

  • \(\mathcal {D} := \{[v_1],\hdots ,[v_n]\}\) is a set of intervals such that [vi] contains all possible values of vi. It can be represented by a box [v] gathering all [vi].

  • \(\mathcal {C} := \{c_1,\hdots ,c_m\}\) is a set of constraints of the form ci(v) ≡gi(v) = 0 or \(c_i(\mathbf {v}) \equiv {\mathbf {g}}_i(\mathbf {v}) \leqslant 0\), with nonlinear \({\mathbf {g}}_i : \mathbb {R}^n \to \mathbb {R}\) for \(1 \leqslant i \leqslant m\). Constraints \(\mathcal {C}\) are interpreted as a conjunction of equalities and inequalities, i.e., \(\mathcal {C} \equiv c_1 \wedge c_2 \wedge \cdots \wedge c_m\).

The solution of a NCSP is a valuation of v ranging in [v] and satisfying the constraints \(\mathcal {C}\).

2.2 Branch-and-Contract Solving Method

The classical algorithm to solve a NCSP, as previously defined, is the Branch-and-Prune method, which needs only an interval evaluation of the constraints and an initial domain for the variables. More precisely, an interval \([ \underline {x},\overline {x}]= \{x\in \mathbb {R} :\underline {x} \leqslant x \leqslant \overline {x}\} \in \mathbb {I}\mathbb {R}\) is defined by its lower and upper bounds \( \underline {x}\) and \(\overline {x}\) as a compact set of \(\mathbb {R}\). An interval vector (or box) [x] of dimension n is a Cartesian product of intervals \([ \underline {x}_1,\overline {x}_1] \times \dots \times [ \underline {x}_n,\overline {x}_n]\). An inclusion function \([f]:\mathbb {I}\mathbb {R}^n\to \mathbb {I}\mathbb {R}\) for \(f :\mathbb {R}^n \to \mathbb {R}\) satisfies

$$\displaystyle \begin{aligned} \forall [\mathbf{x}] \in \mathbb{I}\mathbb{R}^n, \quad \{f(\mathbf{x}) \mid \mathbf{x} \in [\mathbf{x}]\} \subseteq [f]([\mathbf{x}]). \end{aligned} $$
(1)

A natural inclusion function [f] is obtained by substituting all variables and operations involved in f by their interval counterparts. The evaluation of the range of a function over intervals using an inclusion function leads in general to some overapproximation; see [24] for more details.
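To make this concrete, the following sketch (plain Python with a hand-written `Interval` class, not an existing interval library, and ignoring outward rounding) builds the natural inclusion function of f(x) = x² − 2x by substituting interval operations for real ones; the result encloses, but overestimates, the true range because of the dependency problem.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

def f(x):
    # Natural inclusion function of f(x) = x^2 - 2x: every real operation
    # is replaced by its interval counterpart (rounding issues ignored).
    return x * x - Interval(2.0, 2.0) * x

# True range of f over [1, 2] is [-1, 0]; the natural inclusion function
# returns the larger interval [-3, 2] because x occurs twice (dependency).
print(f(Interval(1.0, 2.0)))
```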

A more elaborate solving method, named Branch-and-Contract, is usually applied to accelerate the solving process of an NCSP. A generic version of this algorithm, using interval analysis, is given in Algorithm 1.

Algorithm 1 A generic Branch-and-Contract

The key feature of Algorithm 1 is the function Check(),

$$\displaystyle \begin{aligned} \text{Check}() : \mathcal{V} \times \mathcal{D} \times \mathcal{C} \to \mathbb{B} \end{aligned}$$

with \(\mathbb {B} = \{\text{True}, \text{False} \}\). Check() is then a decision procedure which is able to verify whether the constraints \(\mathcal {C}\) are satisfied by all the values of the domain \(\mathcal {D}\). Note that due to the pessimism of the interval analysis approach, it may not be possible to decide whether \(\mathcal {C}\) is satisfied or not. In this case, False has to be interpreted as “undecidable.”

Two sets of constraints are considered in Algorithm 1, the constraints \(\mathcal {C}_{\text{acc}}\) and the constraints \(\mathcal {C}_{\text{rej}}\). The first set is used to accept the solutions, while the second is used to reject the nonsolutions, i.e., the points guaranteed to be outside the solution domain. The latter is optional but useful to speed up the algorithm by quickly eliminating infeasible domains. From a certain point of view, \(\mathcal {C}_{\text{rej}}\) is the negation of \(\mathcal {C}_{\text{acc}}\), but in general \(\mathcal {C}_{\text{acc}} \neq \neg \mathcal {C}_{\text{rej}}\), due to the abstraction of continuous domains (and the issue of disjunction). Note that this approach follows the classical SIVIA (Set Inversion Via Interval Analysis) method [23].

The second key feature of the Branch-and-Contract algorithm is the Contract() procedure, which reduces the studied domain ([v]current in the algorithm) with the help of contractors [3]. A contractor associated to a constraint c ≡ g(x) ◇ 0 with \(\diamond \in \{=, \leqslant \}\) is a function Cc that takes a box [x] as parameter and returns a box such that

$$\displaystyle \begin{aligned} C_c([\mathbf{x}]) & \subseteq [\mathbf{x}] & \text{(Reduction)} \end{aligned} $$
(2a)
$$\displaystyle \begin{aligned} g([\mathbf{x}]) \cap [z] & = g(C_c([\mathbf{x}])) \cap [z] & \text{(Correction)} \end{aligned} $$
(2b)

where [z] = [0, 0] if ◇≡= and [z] = [−∞, 0] if \(\diamond \equiv \leqslant \). The main strength of contractors is that they can reduce the domain [x] while preserving the solutions without using bisection, and so they can reduce, in practice, the algorithmic complexity of Algorithm 1. For more details on contractors, see [3].
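As an illustration, here is a minimal forward-backward contractor, in the spirit of HC4-Revise, for the constraint x² + y² ≤ 2. It is a hand-written sketch (outward rounding is omitted, and the names are ours, not those of a contractor library), but it satisfies the reduction property (2a) and does not lose any point satisfying the constraint.

```python
import math
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def inter(self, other):
        return Interval(max(self.lo, other.lo), min(self.hi, other.hi))

    def is_empty(self):
        return self.lo > self.hi

def sqr(x):
    # Range of t -> t^2 over an interval (exact, up to rounding).
    a, b = abs(x.lo), abs(x.hi)
    lo = 0.0 if x.lo <= 0.0 <= x.hi else min(a, b) ** 2
    return Interval(lo, max(a, b) ** 2)

def contract_disc(x, y, r2=2.0):
    """Forward-backward contractor for the constraint x^2 + y^2 <= r2."""
    ax, ay = sqr(x), sqr(y)                                   # forward sweep
    z = Interval(ax.lo + ay.lo, ax.hi + ay.hi).inter(Interval(-math.inf, r2))
    if z.is_empty():
        return None                      # no point of [x] x [y] satisfies c
    # backward sweep: propagate [z] back to x^2 and y^2, then to x and y
    ax = ax.inter(Interval(z.lo - ay.hi, z.hi - ay.lo))
    ay = ay.inter(Interval(z.lo - ax.hi, z.hi - ax.lo))
    bx, by = math.sqrt(max(ax.hi, 0.0)), math.sqrt(max(ay.hi, 0.0))
    return x.inter(Interval(-bx, bx)), y.inter(Interval(-by, by))

# [1, 5] x [-3, 3] is contracted to about [1, 1.414] x [-1, 1]
# without losing any point satisfying x^2 + y^2 <= 2.
print(contract_disc(Interval(1.0, 5.0), Interval(-3.0, 3.0)))
```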

The result of Algorithm 1, also known as a paving, is made of three lists of boxes Sacc, Srej, and Sunc such that

  • There is no solution of the NCSP in Srej.

  • All the solutions of the NCSP, included in the initial domain, are in Sacc ∪Sunc.

  • All the values in Sacc are solutions of the NCSP.

Example 1

The result of a Branch-and-Contract method on the constraints \(1 \leqslant x^2+y^2\leqslant 2\) with (x, y) ∈ [−2, 2] × [−2, 2] is given in Fig. 1. In this figure, blue boxes are elements of Sacc, red boxes are in Srej, and white boxes (between red and blue boxes) are in Sunc. \(\blacksquare \)
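A minimal SIVIA-style Branch-and-Prune sketch reproducing Example 1 is given below; it only uses the interval evaluation of x² + y² as Check() together with bisection (no Contract() step), so it illustrates the structure of Algorithm 1 rather than an efficient implementation.

```python
# SIVIA-style Branch-and-Prune for Example 1: 1 <= x^2 + y^2 <= 2 on
# [-2, 2] x [-2, 2]. Intervals are (lo, hi) tuples; no Contract() step,
# and outward rounding is ignored in this sketch.

def sqr(iv):
    lo, hi = iv
    a, b = abs(lo), abs(hi)
    return (0.0 if lo <= 0.0 <= hi else min(a, b) ** 2, max(a, b) ** 2)

def check(box):
    """'acc' if the whole box satisfies the constraint, 'rej' if no point
    of the box does, 'unc' otherwise (interval evaluation of x^2 + y^2)."""
    (ax, bx), (ay, by) = sqr(box[0]), sqr(box[1])
    g_lo, g_hi = ax + ay, bx + by
    if 1.0 <= g_lo and g_hi <= 2.0:
        return 'acc'
    if g_hi < 1.0 or g_lo > 2.0:
        return 'rej'
    return 'unc'

def bisect(box):
    i = max(range(len(box)), key=lambda k: box[k][1] - box[k][0])
    lo, hi = box[i]
    mid = 0.5 * (lo + hi)
    left, right = list(box), list(box)
    left[i], right[i] = (lo, mid), (mid, hi)
    return tuple(left), tuple(right)

S_acc, S_rej, S_unc = [], [], []
stack = [((-2.0, 2.0), (-2.0, 2.0))]
while stack:
    box = stack.pop()
    status = check(box)
    if status == 'acc':
        S_acc.append(box)
    elif status == 'rej':
        S_rej.append(box)
    elif max(hi - lo for lo, hi in box) < 0.05:   # precision threshold
        S_unc.append(box)
    else:
        stack.extend(bisect(box))

print(len(S_acc), len(S_rej), len(S_unc))   # the paving of Fig. 1
```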

Fig. 1

Paving of a circle

2.3 Some Limitations on NCSP

2.3.1 Equality Constraints

One of the main difficulties in the NCSP approach is the handling of equality constraints, i.e., g(v) = 0. Indeed, a classical solving approach for NCSP is based on interval analysis, which considers computations over boxes instead of points. Proving equality constraints therefore usually involves some relaxation technique, such as proving a simpler constraint of the form g(v) ∈ [−ε, ε] for a small positive value ε. Moreover, specific algorithms have to be used to prove the existence and uniqueness of a solution of g(v) = 0, such as the interval Newton operator [24]. Consequently, solving equality constraints often depends on the solving algorithm, as the relaxation is generally done internally in the solver. The aim of our work is to avoid this implementation trick and push the relaxation choice out to the designer by only allowing set-based constraints such as inclusion constraints; see Sect. 3.

2.3.2 Differential Constraints

The framework of NCSP lacks expressiveness when dealing with differential equations. In [6], a first approach was given by introducing Constraint Satisfaction Differential Problems (CSDP). Basically, new variables are added to the set of variables of the NCSP to represent time derivatives, and a new type of constraint is added to represent the dynamics of the differential system. Since the time variable is handled separately from the other variables, temporal properties cannot be encoded with CSDP. For example, if a trajectory described by a differential system has to avoid an obstacle in a given time interval, modeling this with CSDP cannot be done in an obvious manner.

Another work [18] also dealt with this problem. The dynamical system is abstracted by a solution operator ϕ representing the solution of the system. Differential equations are then naturally embedded in the NCSP framework. Its limitation is precisely this abstraction, because constraints on the dynamical system itself cannot be easily expressed.

In these previous works, another drawback is the lack of quantification on variables. Bringing a solution to this issue is one of the motivations of this work.

3 Set-Based Constraint Satisfaction Differential Problems

The proposed extension of NCSP is based on set-based constraints and the embedding of differential constraints. These extensions enlarge the class of problems which can be modeled and solved.

3.1 Dynamical Systems

In the rest of this chapter, a general class of differential equations is considered, which can represent ordinary differential equations (ODEs), differential algebraic equations (DAEs) of index 1, and a mix of these equations with additional constraints, e.g., to model energy preservation. More precisely, differential systems of the form

$$\displaystyle \begin{aligned} \left\{ \begin{aligned} \dot{\mathbf{y}}(t) & = \mathbf{f}(t, \mathbf{y}(t), \mathbf{x}(t), \mathbf{p}), \\ 0 & = \mathbf{g}(t, \mathbf{y}(t), \mathbf{x}(t)) \\ 0 & = \mathbf{h}(\mathbf{y}(t) , \mathbf{x}(t)) \end{aligned} \right. . \end{aligned} $$
(3)

with nonlinear functions \(\mathbf {f}: \mathbb {R} \times \mathbb {R}^n \times \mathbb {R}^m \times \mathbb {R}^p \to \mathbb {R}^n\), \(\mathbf {g}: \mathbb {R} \times \mathbb {R}^n \times \mathbb {R}^m \to \mathbb {R}^m\), \(\mathbf {h}: \mathbb {R}^n \times \mathbb {R}^m \to \mathbb {R}\), t ∈ [0, tend], \(\mathbf {y}(0) \in \mathcal {Y}_0\), and \(\mathbf {p} \in \mathcal {P}\) are considered. More precisely, initial value problems (IVPs) for parametrized differential equations are considered over a finite time horizon [0, tend]. Note that a bounded set of initial values and a bounded set of parameters are considered in this framework. This implies dealing with sets of trajectories that are solutions of Eq. (3). We assume classical hypotheses on f, g, and h to ensure the existence and uniqueness of the solution of Eq. (3).

In the rest of this section, we denote by \(\mathcal {Y}(\mathcal {T},\mathcal {Y}_0,\mathcal {P})\) the set

$$\displaystyle \begin{aligned} \mathcal{Y}(\mathcal{T},\mathcal{Y}_0,\mathcal{P}) = \{ \mathbf{y}(t; {\mathbf{y}}_0,\mathbf{p}) : t \in \mathcal{T}, {\mathbf{y}}_0 \in \mathcal{Y}_0, \mathbf{p} \in \mathcal{P} \}\enspace. \end{aligned} $$
(4)

Intuitively, \(\mathcal {Y}(\mathcal {T},\mathcal {Y}_0,\mathcal {P})\) gathers all the points reached by the solutions y(t;y0, p) of Eq. (3) starting from all scalar initial values y0 and all scalar parameters p. Note that \(\mathcal {Y}(\mathcal {T},\mathcal {Y}_0,\mathcal {P})\) is hardly computable in general; the implementation issue is addressed in Sect. 4. Note also that the difference between the proposed approach and [18] is that we consider a set-based solution operator, which offers a convenient way to deal with quantification over variables.

The purpose of the proposed framework is to check if \(\mathcal {Y}(\mathcal {T},\mathcal {Y}_0,\mathcal {P})\) fulfills some specification defined in terms of set-based constraints.

3.2 Set-Based Constraints

To avoid the problematic issues due to equality constraints (see Sect. 2.3), set-based constraints are considered. More precisely, inclusion and disjointness (empty or nonempty intersection) operators are considered. The proposed framework considers constraints of the form

$$\displaystyle \begin{aligned} \mathbf{g}(\mathcal{A}) & \subseteq \mathcal{B} \end{aligned} $$
(5a)
$$\displaystyle \begin{aligned} \mathbf{g}(\mathcal{A}) & \supseteq \mathcal{B} \end{aligned} $$
(5b)
$$\displaystyle \begin{aligned} \mathbf{g}(\mathcal{A}) & \cap \mathcal{B} = \emptyset \end{aligned} $$
(5c)
$$\displaystyle \begin{aligned} \mathbf{g}(\mathcal{A}) & \cap \mathcal{B} \neq \emptyset \end{aligned} $$
(5d)

where \(\mathcal {A}\) and \(\mathcal {B}\) are real compact sets and g is a nonlinear function. The lifting of g over sets is defined as usual by \(\mathbf {g}(\mathcal {X}) = \{ \mathbf {g}(x) : x \in \mathcal {X} \}\).

Note that these constraints can be seen as Boolean functions; while, from a mathematical point of view, their truth value can always be obtained, this may not be the case when they have to be evaluated on a computer. The safe computer resolution of these kinds of constraints is one of the main contributions of this chapter, detailed in Sect. 4.

3.3 Set-Based Differential Constraint Satisfaction Problems

The handling of differential constraints here follows the approach given in [18], with the exception of the solution operator of Eq. (3), which is here represented as a solution set \(\mathcal {Y}(\mathcal {T},\mathcal {Y}_0,\mathcal {P})\) in order to unify the objects manipulated in constraints, which are also sets.

Set-Based Constraint Satisfaction Differential Problems (SCSDP), based on set-membership constraints and embedding differential constraints, can now be defined.

Definition 1 (SCSDP)

A SCSDP is a NCSP made of

  • A finite set \(\mathcal {S}\) of differential systems Si as defined in Eq. (3)

  • A finite set of variables \(\mathcal {V}\) including the parameters of the differential systems Si, i.e., (y0, p), a time variable t and some other algebraic variables q

  • A domain \(\mathcal {D}\) made of the domain of parameters \(\mathbf {p}: \mathcal {D}_p\), of initial values \({\mathbf {y}}_0: \mathcal {D}_{y_0}\), of the time horizon \(t: \mathcal {D}_t\), and the domains of algebraic variables \(\mathcal {D}_q\)

  • A set of constraints \(\mathcal {C}\) which may be defined by inclusion or disjointness constraints (see Sect. 3.2) over variables of \(\mathcal {V}\) and special variables \(\mathcal {Y}_i(\mathcal {D}_t, \mathcal {D}_{y_0}, \mathcal {D}_p)\) representing the solution set of Si in \(\mathcal {S}\)

Example 2

A cruise control system based on a PI controller for a nonlinear dynamics of a car is considered [12]. The dynamics of the car is defined by

$$\displaystyle \begin{aligned} S \equiv\left\{ \begin{aligned} \dot{v} & = \frac{k_p(v_{\text{set}} - v) + k_i s - 50.0 v - 0.4 v^2}{m} \\ \dot{s} & = v_{\text{set}} - v \end{aligned} \right.. \end{aligned} $$
(6)

with v the speed of the vehicle, s the integral part of the PI controller, kp and ki the parameters of the PI controller, m ∈ [990, 1010] the mass of the vehicle, and vset = 10 the target speed of the car, from the initial conditions v(0) = 0 and s(0) = 0. The term kp(vset − v) + kis is the PI controller, − 50.0v is the resisting force due to the road, and − 0.4v2 is the aerodynamic friction.

The specification of the PI controller is such that it should stabilize in 10 s with a tolerance of 2%, and its overshoot should not be more than 5% of the target speed. These are translated into constraints such that

$$\displaystyle \begin{aligned} v(10) & \subseteq [9.8, 10.2] & \mbox{(At t=10, }v_{\text{set}}\pm 2\%\text{)} \\ \dot{v}(10) & \subseteq [-\varepsilon, \varepsilon] & \mbox{(At t =10, acceleration is around zero, with a small }\varepsilon > 0\text{)} \\ v([0,10]) & \subseteq [0, 10.5] & \mbox{(For }t\in[0,10], v\mbox{ should not be above }v_{\text{set}}+5\%\text{)} \end{aligned} $$

Note that in Eq. (6), the mass m is uncertain, so the solution of S is a thick function, hence the use of inclusion constraints. In summary, the corresponding SCSDP is defined by

  • \(\mathcal {S} = \{ S \mbox{ defined in Eq. \text {(6)}} \}\)

  • \(\mathcal {V} = \{ k_p, k_i \}\)

  • \(\mathcal {D} = \{ [1, 4000], [1, 120] \} \)

  • \(\mathcal {C} = \{ v(10) \subseteq [9.8, 10.2], \dot {v}(10) \subseteq [-\varepsilon , \varepsilon ], v([0,10]) \subseteq [0, 10.5]\} \)

\(\blacksquare \)
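To make Definition 1 concrete, the following sketch encodes this SCSDP as plain Python data; the `SCSDP` class and the function `f_car` are illustrative names of ours, not DynIBEX constructs, and the constraints are kept as plain text since their guaranteed evaluation is the subject of Sect. 4.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = List[Tuple[float, float]]              # a box = one interval per variable

@dataclass
class SCSDP:
    systems: List[Callable]                  # S: right-hand sides of Eq. (3)
    variables: List[str]                     # V: existentially quantified variables
    domains: Box                             # D: one interval per variable of V
    constraints: List[str]                   # C: set-based constraints (as text here)

def f_car(t, y, p):
    """Cruise-control dynamics of Eq. (6); y = (v, s), p = (k_p, k_i, m)."""
    v, s = y
    k_p, k_i, m = p
    v_set = 10.0
    dv = (k_p * (v_set - v) + k_i * s - 50.0 * v - 0.4 * v * v) / m
    ds = v_set - v
    return (dv, ds)

problem = SCSDP(
    systems=[f_car],
    variables=["k_p", "k_i"],
    domains=[(1.0, 4000.0), (1.0, 120.0)],
    constraints=["v(10) in [9.8, 10.2]",
                 "dv(10) in [-eps, eps]",
                 "v([0, 10]) in [0, 10.5]"],
)
# The mass m in [990, 1010] is not in V: it is universally quantified and
# would be handled as a bounded parameter by the validated integrator.
```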

Note that, following NCSP and its solving algorithm, variables in \(\mathcal {V}\) are quantified existentially, while the other variables (not in \(\mathcal {V}\)) are quantified universally, e.g., the mass m in Eq. (6). Hence, there is no need to introduce quantifiers in constraints. The proposed SCSDP framework is hence simpler than previous work [6, 13, 18] in embedding quantification constraints while taking into account differential equations. Nevertheless, one important challenge is to solve SCSDP in a guaranteed way, and for this purpose a computable representation of sets has to be defined. As interval analysis [28] provides very efficient techniques over boxes, it is a natural means to solve SCSDP on computers.

4 Solving SCSDP

To solve a SCSDP as defined in Sect. 3.3, a proper representation of sets has to be defined. Interval analysis provides a simple representation of compact sets by means of intervals or boxes. This representation is expressive enough to represent complex sets while being associated with fast computational methods. To solve SCSDP on a computer, the set-based constraints defined in Sect. 3.2 have to be properly translated over boxes, and the dynamical systems defined in Sect. 3.1 have to be solved.

4.1 Interval-Based Constraints

In order to solve the set-based constraints appearing in SCSDP, as defined in Sect. 3.2, an interval-based abstraction is given in this section. As shown in Sect. 2.1, complex compact sets can be represented by a paving, and so either by an inner approximation (boxes in Sacc) or by an outer approximation (boxes in Sacc ∪Sunc). Consequently, when translating the constraints defined in Eq. (5) into interval-based constraints, a proper representation of the sets has to be chosen. In particular, the validity of the translation is important to guarantee the result of solving a SCSDP on a computer.

In the sequel, \(\text{Int}\mathcal {X}\) will stand for the interior of the compact set \(\mathcal {X}\), while \(\text{Hull}\mathcal {X}\) will stand for the outer approximation of \(\mathcal {X}\). In each case, the inner approximation or the outer approximation can be defined by a box or a list of boxes. Note that the outer approximation of a nonlinear function g in interval arithmetic is given by an inclusion function as defined in Eq. (1), and more information can be found in [24], while the inner approximation of g requires special treatments as defined in [16, 17, 19, 21]. Note also that, unless a complete (exact) computation of the inner or outer approximation is available, it is meaningless to combine an outer approximation of g with an inner approximation of its argument, and reciprocally.

In Table 1, a summary of the translation of the constraints defined in Eq. (5) into their interval counterparts is given. For each case, inner approximation or outer approximation, the truth value of the constraint is inspected. When the value is true, the constraint can be proved to be true; when the value is false, the constraint can be proved to be false. Otherwise, no conclusion can be made on the constraint, which is denoted by “?.” For example, a proof of a constraint of the form \(g(\mathcal {X}) \subseteq \mathcal {A}\) can be obtained only by considering an outer approximation of g and of its argument \(\mathcal {X}\) and an inner approximation of \(\mathcal {A}\). As a second example, a proof of unsatisfiability of the constraint \(g(\mathcal {X}) \subseteq \mathcal {A}\) can be obtained by considering an inner approximation of g and of its argument \(\mathcal {X}\) and an outer approximation of \(\mathcal {A}\).

Table 1 Set-based constraint evaluation in interval analysis framework

As is clear from Table 1, defining an interval counterpart of the constraints of Eq. (5) that guarantees the result of a SCSDP is not so obvious.
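The following sketch illustrates this three-valued evaluation for a constraint g(X) ⊆ A with g(x) = x², in the simple case where A is exactly a box (so that it is both its own inner and outer approximation): inclusion of an outer enclosure of g(X) in A proves the constraint, disjointness disproves it (it even proves g(X) ∩ A = ∅, Eq. (5c)), and otherwise the answer is “?”. This is a hand-written illustration, not the implementation used in the chapter.

```python
def hull_sqr(x):
    """Outer approximation of g(X) = {x^2 : x in X} for a box X = (lo, hi)."""
    lo, hi = x
    a, b = abs(lo), abs(hi)
    return (0.0 if lo <= 0.0 <= hi else min(a, b) ** 2, max(a, b) ** 2)

def check_inclusion(g_outer, a_box):
    """Three-valued evaluation of g(X) included in A, for A given exactly as a box."""
    if a_box[0] <= g_outer[0] and g_outer[1] <= a_box[1]:
        return True      # outer enclosure of g(X) inside A: constraint proved
    if g_outer[1] < a_box[0] or a_box[1] < g_outer[0]:
        return False     # enclosure disjoint from A: g(X) misses A entirely
    return None          # '?': no conclusion from these approximations

X = (0.5, 1.2)                                   # here g(X) = [0.25, 1.44]
print(check_inclusion(hull_sqr(X), (0.0, 2.0)))  # True
print(check_inclusion(hull_sqr(X), (2.0, 3.0)))  # False
print(check_inclusion(hull_sqr(X), (0.0, 1.0)))  # None: an inner approximation
                                                 # of g(X) would be needed here
```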

4.2 Interval-Based Differential Constraints

Just as the interval representation of a compact set can come from an inner approximation or an outer approximation, dynamical systems can be solved to produce interval-based representations containing all the trajectories or only a subset of the set of trajectories. In other terms, \(\mathcal {Y}(\mathcal {T},\mathcal {Y}_0,\mathcal {P})\) can be inner-approximated or outer-approximated. A short review of these methods is given in the rest of this section, as \(\mathcal {Y}(\mathcal {T},\mathcal {Y}_0,\mathcal {P})\) can appear in the constraints of a SCSDP.

4.2.1 Outer Approximation of Differential Constraints

The computation of \(\mathcal {Y}(\mathcal {T},\mathcal {Y}_0,\mathcal {P})\), as defined in Eq. (4), for a system S of the form of Eq. (3) has been studied for a long time with interval analysis methods. A complete approach, named guaranteed numerical integration, has been developed from the seminal work of Moore [28]. More precisely, initial value problems for ordinary differential equations have been solved with interval analysis mainly based on Taylor series [4, 27,28,29] and more recently with Runge–Kutta-based methods [2, 10, 15]. Initial value problems for differential algebraic equations have been studied in [11, 30]. All these works aim at producing an outer approximation of the solution of the dynamical system using interval analysis tools.

The goal of a guaranteed numerical integration method to solve Eq. (3) is to compute a sequence of time instants 0 = t0 < t1 < ⋯ < tn = tend and a sequence of boxes [y0], …, [yn] such that ∀j ∈ [0, n − 1], [yj+1] ⊇y(tj+1;[yj], [p]). In this chapter, we focus on single-step methods that only use [yj] and approximations of \(\dot {\mathbf {y}}(t)\) to compute [yj+1].

The main principle of a guaranteed numerical integration method, as presented in [29], is that each step of a validated integration scheme consists of two phases:

Phase 1:

One computes an a priori enclosure \([\tilde {\mathbf {y}}_j]\) of the solution such that

  • y(t;[yj]) is guaranteed to exist for all t ∈ [tj, tj+1], i.e. along the current step, and for all yj ∈ [yj].

  • \(\mathbf {y}(t;[{\mathbf {y}}_j]) \subseteq [\tilde {\mathbf {y}}_j]\) for all t ∈ [tj, tj+1].

  • the step-size hj = tj+1 − tj > 0 is as large as possible in terms of accuracy and existence proof for the IVP solution.

Phase 2:

One computes a tighter enclosure [yj+1] at time tj+1 such that y(tj+1;[yj]) ⊆ [yj+1].
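A deliberately simplified, first-order version of these two phases for a scalar autonomous ODE is sketched below; it uses a Picard-type test [y_j] + [0, h]·f([ỹ]) ⊆ [ỹ] for Phase 1 and the enclosure [y_{j+1}] = [y_j] + h·f([ỹ]) for Phase 2. Outward rounding and higher-order terms are omitted, so this only illustrates the structure of a validated step, not the accuracy of Taylor- or Runge–Kutta-based tools.

```python
# First-order sketch of one validated step for a scalar ODE y' = f(y);
# intervals are (lo, hi) tuples and outward rounding is omitted.

def add(a, b):    return (a[0] + b[0], a[1] + b[1])
def scale(a, k):  return (k * a[0], k * a[1]) if k >= 0 else (k * a[1], k * a[0])
def hull(a, b):   return (min(a[0], b[0]), max(a[1], b[1]))
def subset(a, b): return b[0] <= a[0] and a[1] <= b[1]

def step(y_j, f_box, h, max_tries=50):
    """Return ([y~_j], [y_{j+1}]) for one step of size h, or None on failure."""
    # Phase 1: find an a priori enclosure [y~] with a Picard-type test:
    # hull([y_j], [y_j] + h * f([y~])) over-approximates [y_j] + [0, h] * f([y~]),
    # so checking that the hull is included in [y~] is sufficient.
    y_tilde = y_j
    for _ in range(max_tries):
        candidate = hull(y_j, add(y_j, scale(f_box(y_tilde), h)))
        if subset(candidate, y_tilde):
            break
        w = 0.1 * (candidate[1] - candidate[0]) + 1e-9     # inflate and retry
        y_tilde = (candidate[0] - w, candidate[1] + w)
    else:
        return None                                        # shrink h instead
    # Phase 2: tighter enclosure at t_{j+1}: y(t_j + h) in [y_j] + h * f([y~])
    return y_tilde, add(y_j, scale(f_box(y_tilde), h))

# Example: y' = -y with y(0) in [0.9, 1.1], ten steps of h = 0.1.
f_box = lambda y: (-y[1], -y[0])     # interval extension of f(y) = -y
y, h = (0.9, 1.1), 0.1
for _ in range(10):
    y_tilde, y = step(y, f_box, h)
print(y)   # contains exp(-1) * [0.9, 1.1] (up to the neglected rounding)
```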

A guaranteed numerical integration for a system S, as defined in Eq. (3), starts with an outer approximation of the initial condition \(\text{Hull} \mathcal {Y}_0=[{\mathbf {y}}_0]\), of the parameters \(\text{Hull}\mathcal {P}=[\mathbf {p}]\), and an integration step size h (or a finite horizon). It applies the two-phase approach until the end of the simulation time is reached. This process builds two lists of boxes:

  • The list of enclosures at the discretization times: {[y0], …, [yend]}

  • The list of a priori enclosures: \(\{[\widetilde {\mathbf {y}}_{0}],\hdots ,[\widetilde {\mathbf {y}}_{\text{end}}]\}\)

Based on these lists, two functions depending on time can be defined

$$\displaystyle \begin{aligned} R: \left\{\begin{array}{c} \mathbb{R} \mapsto \mathbb{I}\mathbb{R}^n \\ t \rightarrow [\mathbf{y}] \end{array}\right. \end{aligned} $$
(7)

with {y(t;y0) : ∀y0 ∈ [y0]}⊆ [y] and

$$\displaystyle \begin{aligned} \widetilde{R}: \left\{\begin{array}{c} \mathbb{I}\mathbb{R} \mapsto \mathbb{I}\mathbb{R}^n \\ \left[\underline{t},\overline{t}\right] \rightarrow [\widetilde{\mathbf{y}}] \end{array}\right. \end{aligned} $$
(8)

with \(\{ \mathbf {y}(t; {\mathbf {y}}_0): \forall {\mathbf {y}}_0 \in [{\mathbf {y}}_0] \wedge \forall t \in [ \underline {t},\overline {t}]\} \subseteq [\widetilde {\mathbf {y}}]\).

Function R, defined in (7), is obtained by new applications of the validated integration method starting from [yk] at tk and finishing at t, with tk < t < tk+1. Function \(\widetilde {R}\), defined in (8), is obtained as the union of the \([\widetilde {\mathbf {y}}_k]\) for k = a, …, b with \(t_a <\underline {t} < \overline {t} < t_b\). These functions are thus strictly conservative.

More abstractly, the functions R and \(\widetilde {R}\) define two interval enclosures of the solution function of differential equations defined in Eq. (3).

4.2.2 Inner Approximation of Differential Constraints

More recent work deals with the inner approximation of the reachable sets of ordinary differential equations, such as [5, 20]. However, a lot of work remains to be done to bring this technique to the maturity level of guaranteed numerical integration as defined in Sect. 4.2.1. A short review of [20] is given in this section, as it closely follows the two-phase approach of guaranteed numerical integration.

In [20], a new approach for computing inner approximations of reachable sets of dynamical systems defined by nonlinear, uncertain, ordinary differential equations is defined. It extends [19, 21], which focused on discrete-time dynamical systems. It consists in using generalized affine forms, combining modal interval analysis [25] (an extension of interval arithmetic dealing with quantifiers) with affine arithmetic [8] (an extension of interval arithmetic which can take into account some correlations between variables), to produce both inner and outer approximations of the flow of an uncertain ODE with a Taylor series approach.

The given algorithm consists of three steps: (1) computing rough enclosures over a time interval [ti, ti+1] of the solution and of its Jacobian with respect to the initial conditions (which is the solution of the variational equation), (2) building the Taylor models of the solution and its Jacobian, and (3) computing the inner approximation of the flow pipe using generalized affine forms.

4.3 Revisiting Branch-and-Contract Solving Method

After defining a correct interval representation of compact sets in the previous sections, we now focus on the application of the Branch-and-Contract algorithm to solve SCSDP. More precisely, as differential constraints involve sets of trajectories, an extension of contractors to deal with this new object has to be defined.

In our approach based on interval analysis and contractor programming [3], constraints can be applied at some given instants to the set of trajectories, and a propagation can be performed on the interval representation of the trajectories. In the rest of this section, only outer approximations of dynamical systems are considered. This section defines the two methods, contraction and propagation, on sets of trajectories.

4.3.1 Contraction

The considered approach allows one to contract a specific value [y] = R(t), an outer approximation of the solution of the IVP at time t such that y(t) ∈ [y] with respect to a constraint g.

The simplest example is as follows: considering a system defined by \(\dot {y} = f(y)\) and y(0) = y0, if a set of measures \(\{{\mathbf {y}}^*_1,\dots ,{\mathbf {y}}^*_m\}\) is taken at some specific instants \(t^*_1,\dots ,t^*_m\), then a contraction can be applied following the rule \([\mathbf {y}(t^*_i)] = [\mathbf {y}(t^*_i)] \cap [\mathbf{y}^*_i], \forall i=1,\dots ,m\). In a more complex example, if the states of the system are constrained with the help of a function g, then a contractor such as HC4-Revise or interval Newton [24] can be used. For example, a constraint such as \(y(t^*)^2 - 3 \cos {}(y(t^*)) \subset [-\infty , 0] \) can also be considered.
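A minimal sketch of this measurement-based contraction on stored enclosures is given below; the enclosure values are illustrative, not computed by a validated integrator.

```python
def inter(a, b):
    """Intersection of two intervals (lo, hi); None if it is empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Enclosures [y_j] at instants t_j, as produced by a validated integration
# (the values below are illustrative, not computed):
times      = [0.0,        0.5,        1.0,        1.5]
enclosures = [(0.9, 1.1), (0.5, 0.8), (0.2, 0.6), (0.1, 0.4)]

# A measure y(1.0) in [0.45, 0.55] contracts the enclosure at t = 1.0:
j = times.index(1.0)
enclosures[j] = inter(enclosures[j], (0.45, 0.55))
print(enclosures[j])   # (0.45, 0.55): the measurement tightened [y_2]
```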

Remark 1

If t is not in the already computed time steps, then the contraction procedure adds a kth integration step to the time discretization: {[y0], …, [yi], [yk], [yi+1], …, [yN]} such that tk = t.

Remark 2

The contraction at a time t can be easily generalized to a contraction along an interval of instants \([ \underline {t}^*, \overline {t}^*]\), with the help of the \(\tilde {R}\) function and the a priori enclosures \([\widetilde {\mathbf {y}}]\).

4.3.2 Propagation

If a contraction has been obtained, then a Picard contractor [11] on \([\tilde {\mathbf {y}}_i]\) and a validated Runge–Kutta contractor [11] on [yi] can be applied at each integration step i, in order to propagate this information over the whole simulation, i.e., to all the boxes in the lists:

  • Forward, for instants t′ > t, with the considered differential equation

  • Backward, for instants t′ < t, with the inverse of the considered differential equation

A fixed-point algorithm (a loop alternately calling the forward and the backward steps until not enough improvement is obtained with respect to a given threshold) can also be used.
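The sketch below illustrates the forward/backward propagation and the fixed-point loop with crude first-order enclosures, [y_{j+1}] ← [y_{j+1}] ∩ ([y_j] + h f([ỹ_j])) and [y_j] ← [y_j] ∩ ([y_{j+1}] − h f([ỹ_j])). It stands in for the Picard and validated Runge–Kutta contractors of [11], which are much tighter; all numerical values are illustrative.

```python
# Crude first-order propagation of a contraction along stored enclosures
# for y' = f(y); intervals are (lo, hi) tuples, rounding is omitted, and
# all numerical values are illustrative.

def inter(a, b):  return (max(a[0], b[0]), min(a[1], b[1]))   # assumed nonempty
def add(a, b):    return (a[0] + b[0], a[1] + b[1])
def sub(a, b):    return (a[0] - b[1], a[1] - b[0])
def scale(a, k):  return (k * a[0], k * a[1]) if k >= 0 else (k * a[1], k * a[0])
def width(a):     return a[1] - a[0]

def sweep(ys, y_tildes, f_box, h):
    """One forward then one backward sweep with first-order enclosures."""
    for j in range(len(ys) - 1):              # forward:  [y_{j+1}] ∩ ([y_j] + h f([y~_j]))
        ys[j + 1] = inter(ys[j + 1], add(ys[j], scale(f_box(y_tildes[j]), h)))
    for j in reversed(range(len(ys) - 1)):    # backward: [y_j] ∩ ([y_{j+1}] - h f([y~_j]))
        ys[j] = inter(ys[j], sub(ys[j + 1], scale(f_box(y_tildes[j]), h)))
    return ys

# y' = -y, step h = 0.1; the enclosure at t = 0.2 was contracted by a measure.
f_box = lambda y: (-y[1], -y[0])
h = 0.1
ys       = [(0.90, 1.10), (0.75, 1.05), (0.72, 0.78), (0.60, 0.90)]
y_tildes = [(0.75, 1.15), (0.65, 1.10), (0.55, 0.85)]   # a priori enclosures

total, prev = sum(width(y) for y in ys), float('inf')
while total < prev - 1e-12:                   # fixed point on the total width
    ys = sweep(ys, y_tildes, f_box, h)
    prev, total = total, sum(width(y) for y in ys)
print(ys)
```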

Example 3

The Van der Pol system is considered; it is defined by

$$\displaystyle \begin{aligned} \dot{x} & = y \\ \dot{y} & = 2.0 (1.0 - x^2) y - x \end{aligned} $$

with the initial conditions x(0) ∈ [2.0, 2.2] and y(0) ∈ [0.0, 0.1], simulated from t = 0 to t = 2.0 (see Fig. 2). A measurement is obtained at t = 1.0, such that y(1.0) ∈ [1.58, 1.62] and x(1.0) ∈ [−0.74, −0.69]. A contraction and a forward propagation are applied; then a backward propagation and finally a fixed point are applied (see Fig. 2). \(\blacksquare \)

Fig. 2

Van Der Pol problem: initial (up, left), after contraction and forward propagation (up, right), after backward propagation (down, left), and after fixed-point mixing forward and backward propagation (down, right)

5 Numerical Example

The DynIBEX library [9] implements the outer-approximation version of the proposed SCSDP framework. This library allows one to solve complex constraint satisfaction problems mixing bounded uncertainties, variable quantification, and differential constraints.

The kinetic parameter estimation of an enzymatic reaction has already been considered in [18]; it is used here to illustrate the SCSDP framework described in this chapter. The goal is to obtain the kinetic parameters of an enzymatic reaction as described in [14]. The differential equation is as follows:

$$\displaystyle \begin{aligned} (\mathcal{S}) \begin{cases} \dot{s}(t) = -\frac{V_{\text{max}} s(t)}{k_s + s(t)}\\ \dot{p}(t) = \frac{V_{\text{max}} s(t)}{k_s + s(t)} \end{cases} {} \end{aligned} $$
(9)

with p(t) and s(t) the two concentrations and Vmax and ks the two parameters to infer from a series of measurements of the concentration p over time. These measurements are shown in Table 2.

Table 2 Measurements for the enzymatic reaction (± 0.1)

The corresponding SCSDP is as follows:

  • \(\mathcal {S}\) from Eq. (9)

  • \(\mathcal {V} = \{ p_0, s_0, V_{\text{max}}, k_s, t \}\)

  • \(\mathcal {D} = \{ [25], [0], [90, 110], [0,10], [0,1.0] \}\)

  • \(\mathcal {C} = \begin {cases} \text{Proj}_p(\mathcal {Y}(t_i, p_0, s_0, V_{\text{max}}, k_s)) \subset p_i & \text{(Measurements)}\\ \mathcal {Y}(t, p_0, s_0, V_{\text{max}}, k_s) \subset [0, +\infty ]^2 & \mbox{(Nonnegative concentrations)} \end {cases} \)

with the operator Proj\(_x(\mathcal {Y})\) the projection over component x of the set \(\mathcal {Y}\). This problem can be treated in two ways, depending on whether contractors are used or not, i.e., whether line 5 of Algorithm 1 is included or not. A first resolution scheme is to directly apply a Branch-and-Prune algorithm on the parameters p. The function Check() used to verify the constraints \(\mathcal {C}\) is defined in Algorithm 2.

Algorithm 2 Check for the kinetic parameter estimation
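A hedged sketch of what such a Check() may look like is given below; it assumes a hypothetical guaranteed integrator `enclose_p(t, param_box)` returning an outer enclosure of p(t) for all parameters in the box (as a tool like DynIBEX would provide), and the measurement values are placeholders, not those of Table 2.

```python
# Sketch of Check() for the parameter estimation problem. `enclose_p` is a
# hypothetical guaranteed integrator: given a box of parameters (V_max, k_s)
# and a time t, it returns an interval guaranteed to contain p(t) for every
# parameter value in the box (not implemented here). The measurement values
# below are placeholders, not those of Table 2.

measurements = [(0.1, (2.5, 2.7)), (0.4, (8.0, 8.2)), (1.0, (14.9, 15.1))]

def check(param_box, enclose_p):
    """Return 'acc', 'rej', or 'unc' for a box of (V_max, k_s) values."""
    undecided = False
    for t_i, (m_lo, m_hi) in measurements:
        p_lo, p_hi = enclose_p(t_i, param_box)   # outer enclosure of p(t_i)
        if p_hi < m_lo or m_hi < p_lo:
            return 'rej'        # disjoint from the measurement: no parameter
                                # of the box can reproduce it
        if not (m_lo <= p_lo and p_hi <= m_hi):
            undecided = True    # enclosure not fully inside: nothing proved
    return 'unc' if undecided else 'acc'
```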

Another way is to use the contractors described in Sect. 4.3.1. We recall that these contractors only apply to the state space of the solution of \((\mathcal {S})\), so the parameters have to be embedded into the state space. This is done in the classical way as follows:

$$\displaystyle \begin{aligned} (\mathcal{S}') \begin{cases} \dot{s}(t) = -\frac{V_{\text{max}} s(t)}{k_s + s(t)}\\ \dot{p}(t) = \frac{V_{\text{max}} s(t)}{k_s + s(t)}\\ \dot{V}_{\text{max}} = 0\\ \dot{k_s}(t) = 0 \end{cases} {} \end{aligned} $$
(10)

A contractor can then be defined for the resolution of our problem by contracting and propagating each measurement. The solution of this example is depicted in Fig. 3.

Fig. 3

Solution for the parameter estimation problem for enzymatic reaction

6 Conclusion

A constraint satisfaction problem framework has been extended to deal with differential constraints and quantification on variables. It extends other approaches [7, 18] by dealing more naturally with uncertainties and variable quantification. A discussion of the correctness of the interval representation of compact sets has been given, emphasizing the problem of preserving the correctness of the approach when computing with tools coming from interval analysis.

Future work includes an extension of DynIBEX with inner approximation, which will be useful to address a larger class of problems. From a theoretical point of view, an extension of SCSDP with techniques coming from SMT approaches could be beneficial to increase the expressiveness of the framework, for example, to deal with disjunctive constraints.