
1 Introduction

Model-based design [20, 27] is an effective approach to tackling the increasing complexity of cyber-physical systems. In this approach, the physical components, e.g., the plant, are usually modelled by differential equations, while the computational parts are described by transition systems. Combining these models makes it possible to simulate the behaviour of the whole system in order to predict it, avoid faults, or synthesize control algorithms.

Safety critical cyber-physical systems require strong guarantees on their execution in order to assess the safety of the mission or the users. Formal methods can produce rigorous evidence for the safety of cyber-physical systems, i.e., evidence based on mathematical reasoning. For example, reachability analysis is an efficient technique for computing the set of reachable states of a cyber-physical system. Once the set of reachable states is known, the avoidance of bad states can be formally proved. The main feature of reachability analysis is its ability to propagate sets of values through dynamical systems instead of performing many individual numerical simulations.

One weakness of formal verification methods, and of reachability analysis in particular, is scalability with respect to the dimension (number of state variables) of the cyber-physical system. Applying a cosimulation approach to reachability analysis is attractive, since it could broaden the class of problems that can be solved with this technique. A set-based cosimulation approach for solving differential equations has been defined in [21]; we explore here its use in the context of formal control synthesis.

Contribution. We propose an extension of a controller synthesis algorithm for a particular class of cyber-physical systems, namely nonlinear sampled switched systems, which relies on a cosimulation approach for the required reachability analysis. A formal definition of set-based cosimulation is given and then used to compute a safe controller for a model of an apartment with controlled heating.

Related Work. Most of the recent work on set-valued integration of nonlinear ordinary differential equations is based on the upper bounding of the Lagrange remainders either in the framework of Taylor series or Runge-Kutta schemes [2, 3, 5, 7, 9, 10, 12, 24]. Sets of states are generally represented as vectors of intervals, a.k.a. boxes, and are manipulated through interval arithmetic [25] or affine arithmetic [11]. Taylor expansions with Lagrange remainders are also used in the work of [3], which uses polynomial zonotopes for representing sets of states in addition to interval vectors.

The guaranteed or validated solution of ODEs using interval arithmetic is studied in the framework of Taylor series in [10, 13, 22, 25, 26], and of Runge-Kutta schemes in [2, 5, 6, 14, 18]. The former is the oldest method used in the interval analysis community, because the expression of the remainder of Taylor series is simple to obtain. Nevertheless, the family of Runge-Kutta methods is very important in the field of numerical analysis. Indeed, Runge-Kutta methods have several interesting stability properties which make them suitable for an important class of problems. The recent work [1] implements Runge-Kutta based methods which prove efficient at low orders and for short simulations.

Cosimulation has been extensively studied in the past years [16, 17], and has been reported in a number of industrial applications (see [16] for an extensive list of application domains and associated publications). However, most of the uses and tools developed rely on the FMI/FMU standard [4, 8, 28], which does not allow guaranteed simulation. To our knowledge, guaranteed cosimulation has never been applied to a controller synthesis method.

Organization of the Paper. Section 2 presents the mathematical model of sampled switched systems as well as an algorithm to synthesize a safe controller. Set-based simulation and its extension to cosimulation are presented in Sect. 3. Experimental results are presented in Sect. 4 before concluding in Sect. 5.

2 Control Synthesis of Switched Systems

A presentation of the mathematical model of sampled switched systems is given in Sect. 2.1. An algorithm to synthesize safe controllers is described in Sect. 2.2.

2.1 Switched Systems

Let us consider nonlinear switched systems such that

$$\begin{aligned} \dot{x}(t) = f_{\sigma (t)}(x(t),d(t)) \end{aligned}$$
(1)

is defined for all \(t \ge 0\), where \(x(t) \in \mathbb {R}^n\) is the state of the system, \(\sigma (\cdot ) : \mathbb {R}^+ \longrightarrow U\) is the switching rule, and \(d(t) \in \mathbb {R}^m\) is a bounded perturbation. The finite set \(U = \{ 1, \dots , N \}\) is the set of switching modes of the system. We focus on sampled switched systems: given a sampling period \(\tau >0\), switchings occur periodically at times \(\tau \), \(2\tau \), .... Switchings depend only on time and not on states; this is the main difference with hybrid systems.

The switching rule \(\sigma (\cdot )\) is thus piecewise constant; we consider that \(\sigma (\cdot )\) is constant on the time interval \([(k-1) \tau , k \tau )\) for \(k \ge 1\). We call a “pattern” a finite sequence of modes \(\pi = (i_1,i_2,\dots ,i_k) \in U^k\). With such a control pattern, and under a given perturbation d, we denote by \(\mathbf {x}(t; t_0, x_0,d,\pi )\) the solution at time \(t \ge t_0\) of the system

$$\begin{aligned} \begin{aligned} \dot{x}(t)&= f_{\sigma (t)}(x(t),d(t)), \\ x(t_0)&= x_0, \\ \forall j \in \{1,\dots ,k\}, \ \sigma (t)&= i_j \in U \ \text {for} \ t \in [t_0+(j-1) \tau , t_0+j \tau ). \end{aligned} \end{aligned}$$
(2)

We address the problem of synthesizing a state-dependent switching rule \(\tilde{\sigma }(\cdot )\) for Eq. (2) that enforces a recurrence and a safety property. This important problem is formalized as follows:

Problem 1 (Control Synthesis Problem)

Let us consider a sampled switched system as defined in Eq. (2). Given three sets R, S, and B, with \(R \cup B \subset S\) and \(R \cap B = \emptyset \), find a rule \(\tilde{\sigma }(\cdot )\) such that, for any \(x(0)\in R\)

  • \(\tau \)-stability: x(t) returns to R infinitely often, at some multiples of the sampling time \(\tau \).

  • safety: x(t) always stays in \(S \backslash B\).

In this problem, S is a safety set in which the state must always stay. The set R is a recurrence set, to which the state returns infinitely often; it is used to ease the computation of a safety controller. The set B is an optional obstacle, or avoid set. Under the above notation, we propose a procedure which solves this problem by constructing a rule \(\tilde{\sigma }(\cdot )\) such that, for all \(x_0 \in R\) and under the unknown bounded perturbation d, there exists \(\pi = \tilde{\sigma }(\cdot ) \in U^k\) for some k such that:

$$\begin{aligned} \left\{ \begin{aligned} \mathbf {x}(t_0 + k\tau ; t_0, x_0,d,\pi ) \in R \\ \forall t \in [t_0,t_0 + k\tau ], \quad \mathbf {x}(t; t_0, x_0,d,\pi ) \in S \\ \forall t \in [t_0,t_0 + k\tau ], \quad \mathbf {x}(t; t_0, x_0,d,\pi ) \notin B. \end{aligned} \right. \end{aligned}$$

Such a rule permits infinite-time state-dependent control. The synthesis algorithm is described in Sect. 2.2 and involves the guaranteed set-based integration presented in Sect. 3; the main underlying tool is interval analysis [25].

2.2 Controller Synthesis Algorithm

Before introducing the algorithms that synthesize controllers for sampled switched systems, some preliminary definitions are needed.

Definition 1

Let \(X \subset \mathbb {R}^n\) be a box of the state space. Let \(\pi = (i_1,i_2,\dots ,i_k) \in U^k\). The successor set of X via \(\pi \), denoted by \(Post_{\pi }(X)\), is the image of X induced by application of the pattern \(\pi \), i.e., the solution at time \(t = k \tau \) of

$$\begin{aligned} \begin{aligned} \dot{x}(t)&= f_{\sigma (t)}(x(t),d(t)), \\ x(0)&= x_0 \in X, \\ \forall t \ge 0,&\quad d(t) \in [d ], \\ \forall j \in \{1,\dots ,k\},&\quad \sigma (t) = i_j \in U \ \text {for} \ t \in [(j-1) \tau , j \tau ). \end{aligned} \end{aligned}$$
(3)

Note that \(Post_{\pi }(X)\) is usually hard to compute exactly, so an over-approximation is computed instead in order to guarantee rigorous results.

Definition 2

Let \(X \subset \mathbb {R}^n\) be a box of the state space. Let \(\pi = (i_1,i_2,\dots ,i_k) \in U^k\). We denote by \(Tube_{\pi }(X)\) the union of boxes covering the trajectories of IVP (3), whose construction is detailed in Sect. 3.

Principle of the Algorithm. We describe the algorithm solving the control synthesis problem for nonlinear switched systems (see Problem 1, Sect. 2.1). Given the input boxes R, S, B, and given two positive integers P and D, the algorithm provides, when it succeeds, a decomposition \(\varDelta \) of R of the form \(\{ V_i, \pi _i \}_{i \in I}\) verifying the properties:

  • \(\bigcup _{i \in I} V_i = R\),

  • \(\forall i \in I, \ Post_{\pi _i}(V_i) \subseteq R\),

  • \(\forall i \in I, \ Tube_{\pi _i}(V_i) \subseteq S\),

  • \(\forall i \in I, \ Tube_{\pi _i}(V_i) \bigcap B = \emptyset \).

Fig. 1. Principle of the bisection method.


Decomposition \(\varDelta = \{ V_i, \pi _i \}_{i \in I}\) is thus a set of boxes (\(V_i\)) covering R, each box being associated with a control pattern (\(\pi _i\)), and I is a set of indexes used for listing the covering boxes. The sub-boxes \(\{ V_i \}_{i \in I}\) are obtained by repeated bisection to produce a paving of R. At first, function Decomposition calls sub-function \(Find\_Pattern\), which looks for a pattern \(\pi \) of length at most P such that \(Post_{\pi }(R) \subseteq R\), \(Tube_{\pi }(R) \subseteq S\) and \(Tube_{\pi }(R) \bigcap B = \emptyset \). If such a pattern \(\pi \) is found, then a uniform control over R has been found (see Fig. 1(a)). Otherwise, R is divided into two sub-boxes \(V_1\), \(V_{2}\) by bisecting R along its longest dimension. Patterns are then searched to control these sub-boxes (see Fig. 1(b)). If, for each \(V_i\), function \(Find\_Pattern\) manages to get a pattern \(\pi _i\) of length at most P verifying \(Post_{\pi _i}(V_i) \subseteq R\), \(Tube_{\pi _i}(V_i) \subseteq S\) and \(Tube_{\pi _i}(V_i) \bigcap B = \emptyset \), then the algorithm succeeds and stops. If, for some \(V_j\), no such pattern is found, the procedure is recursively applied to \(V_j\). It ends with success when every sub-box of R has a pattern verifying the above conditions, or fails when the maximal decomposition depth D is reached. The algorithmic forms of functions Decomposition and \(Find\_Pattern\) are given in Algorithm 1 and Algorithm 2 respectively.
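To make the procedure concrete, the following toy sketch implements Decomposition and \(Find\_Pattern\) in Python for a one-dimensional switched system with two modes. The system, its exact interval images (which stand in for the validated Post and Tube operators of Sect. 3), and all numerical values are invented for the illustration, and the obstacle set B is omitted.

```python
import itertools
import math

# Toy 1-D switched system: mode 1 heats towards 30, mode 0 cools towards 10,
#   dx/dt = -k (x - Teq_mode),  exact flow: x(tau) = Teq + (x0 - Teq) e^{-k tau}.
K_RATE, TAU = 0.1, 1.0
TEQ = {0: 10.0, 1: 30.0}

def post_mode(box, mode):
    """Exact image of an interval under one switching period of `mode`
    (the flow is monotone in x0, so mapping the endpoints is enough)."""
    lo, hi = box
    flow = lambda x: TEQ[mode] + (x - TEQ[mode]) * math.exp(-K_RATE * TAU)
    return (flow(lo), flow(hi))

def post_tube(box, pattern):
    """Post and Tube of a box under a pattern (Tube = hull of step endpoints,
    valid here because each trajectory is monotone in time)."""
    tube_lo, tube_hi = box
    for mode in pattern:
        box = post_mode(box, mode)
        tube_lo, tube_hi = min(tube_lo, box[0]), max(tube_hi, box[1])
    return box, (tube_lo, tube_hi)

def inside(a, b):
    return b[0] <= a[0] and a[1] <= b[1]

def find_pattern(V, R, S, P):
    """Search patterns of length 1..P with Post(V) in R and Tube(V) in S."""
    for length in range(1, P + 1):
        for pat in itertools.product((0, 1), repeat=length):
            post, tube = post_tube(V, pat)
            if inside(post, R) and inside(tube, S):
                return pat
    return None

def decomposition(V, R, S, D, P):
    """Return [(sub-box, pattern)] covering V, or None if depth D is exceeded."""
    pat = find_pattern(V, R, S, P)
    if pat is not None:
        return [(V, pat)]
    if D == 0:
        return None
    mid = 0.5 * (V[0] + V[1])           # bisect the (single) dimension
    left = decomposition((V[0], mid), R, S, D - 1, P)
    right = decomposition((mid, V[1]), R, S, D - 1, P)
    return None if left is None or right is None else left + right

R, S = (19.0, 21.0), (18.0, 22.0)
delta = decomposition(R, R, S, D=2, P=4)
```

On this toy system the pattern search already succeeds on R itself, so no bisection is needed; on harder instances the recursion bisects down to depth D.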

Our control synthesis method being well defined, we can now state the correctness of the main algorithm of this paper:

Proposition 1

Algorithm 1 with input (R, R, S, B, D, P) returns, when it successfully terminates, a decomposition \(\{ V_i,\pi _i \}_{i \in I}\) of R which solves Problem 1.

Proof

Let \(x_0 = x(t_0=0)\) be an initial condition belonging to R. If the decomposition has terminated successfully, we have \(\bigcup _{i \in I} V_i = R\), and \(x_0\) thus belongs to \(V_{i_0}\) for some \(i_0\in I\). We can thus apply the pattern \(\pi _{i_0}\) associated to \(V_{i_0}\). Let us denote by \(k_0\) the length of \(\pi _{i_0}\). We have:

  • \(\mathbf {x}(k_0\tau ;0,x_0,d,\pi _{i_0}) \in R\),

  • \(\forall t \in [0, k_0\tau ], \quad \mathbf {x}(t;0,x_0,d,\pi _{i_0}) \in S\),

  • \(\forall t \in [0, k_0\tau ], \quad \mathbf {x}(t;0,x_0,d,\pi _{i_0}) \notin B\).

Let \(x_1 = \mathbf {x}(k_0\tau ;0,x_0,d,\pi _{i_0}) \in R\) be the state reached after application of \(\pi _{i_0}\), and let \(t_1 = k_0 \tau \). State \(x_1\) belongs to R; it thus belongs to \(V_{i_1}\) for some \(i_1 \in I\), and we can apply the associated pattern \(\pi _{i_1}\) of length \(k_1\), leading to:

  • \(\mathbf {x}(t_1 + k_1\tau ;t_1,x_1,d,\pi _{i_1}) \in R\),

  • \(\forall t \in [t_1, t_1 + k_1\tau ], \quad \mathbf {x}(t;t_1,x_1,d,\pi _{i_1}) \in S\),

  • \(\forall t \in [t_1, t_1 + k_1\tau ], \quad \mathbf {x}(t;t_1,x_1,d,\pi _{i_1}) \notin B\).

We can then iterate this procedure from the new state

$$\begin{aligned} x_2 = \mathbf {x}(t_1 + k_1\tau ;t_1,x_1,d,\pi _{i_1}) \in R. \end{aligned}$$

This can be repeated infinitely, yielding a sequence of points \(x_0,x_1,x_2,\dots \) belonging to R, attained at times \(t_0,t_1,t_2,\dots \) when the patterns \(\pi _{i_0},\pi _{i_1},\pi _{i_2},\dots \) are applied.

We furthermore have that all the trajectories stay in S and never cross B:

$$\begin{aligned} \forall t \in \mathbb {R}^+, \exists k \ge 0, \ t \in [t_k , t_{k+1} ]\end{aligned}$$

and

$$\begin{aligned} \forall t \in [t_k , t_{k+1} ],\ \mathbf {x} ( t; t_k, x_k, d, \pi _{i_k}) \in S,\ \mathbf {x} (t;t_k, x_k, d, \pi _{i_k}) \notin B . \end{aligned}$$

The trajectories thus return infinitely often in R, while always staying in S and never crossing B.   \(\square \)

Remark 1

Note that it is possible to perform reachability from a set \(R_1\) to another set \(R_2\) by computing \(Decomposition(R_1,R_2,S,B,D,P)\). The set \(R_1\) is thus decomposed with the objective to send its sub-boxes into \(R_2\), i.e., for a sub-box V of \(R_1\), patterns \(\pi \) are searched with the objective \(Post_{\pi }(V) \subseteq R_2\).

Remark 2

The search space of control patterns is the set of patterns of length at most P, i.e., \(U \cup U^2 \cup \dots \cup U^P\). In practice, function \(Find\_Pattern\) tests control patterns of length 1, then control patterns of length 2, and so on up to length P. Patterns of length i are generated as combinatorial i-tuples. The set of patterns of length i is \(U^i\); its size is \(N^i\). The complexity of function \(Find\_Pattern\) is thus exponential in the maximal pattern length P. The value of P leading to successful decompositions is unknown a priori and depends on the system, but in most cases \(P=4\) leads to successful control synthesis. Longer sequences might be required if the dynamics is slow.
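For instance, the candidate patterns can be enumerated with a simple generator; the mode sets and bounds below are arbitrary illustrative values.

```python
from itertools import product

def patterns(modes, max_len):
    """All control patterns of length 1..max_len over the given mode set,
    in the order Find_Pattern would try them (short patterns first)."""
    for length in range(1, max_len + 1):
        yield from product(modes, repeat=length)

# With N = 2 modes and P = 4, the search space holds 2 + 4 + 8 + 16 = 30 patterns;
# with N = 4 it already holds 4 + 16 + 64 + 256 = 340.
count = sum(1 for _ in patterns((0, 1), 4))
```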


3 Set-Based Cosimulation

In this section, we explain how the Post and Tube operators can be computed in a distributed way through a cosimulation approach. We first explain the principle of interval analysis and standard guaranteed integration, we then suppose that the system can be written as the composition of components and explain our method for guaranteed cosimulation. In order to ease the reading of this section, we omit the notation of the switched modes \(\sigma \) and control sequences \(\pi \) associated to the Post and Tube operators.

Before presenting the details of interval analysis and cosimulation, let us introduce the following time periods:

  • \(\tau \) is the switching period,

  • H is the communication period,

  • h is the simulation period (or integration time-step).

We suppose that \(h \le H \le \tau \), that H is a multiple of h, and that \(\tau \) is a multiple of H. Consider \(H= k h\) and \(\tau = K H\) with \(k,K \in \mathbb {N}_{>0}\), and an initial time \(t_0\). On each time interval \([t_0,t_0+\tau )\), the switching mode is constant. In cosimulation, the model of a cyber-physical system is broken down into different Simulation Units (SUs). These SUs exchange information at a periodic rate, i.e., at times \(t_0, t_0+H, \dots , t_0+KH\).

3.1 Interval Analysis

In this section, the main set-based tools that are required in this paper are presented.

Interval Arithmetic. The simplest and most common way to represent and manipulate sets of values is with intervals, see [25]. An interval \([x_i]=[\underline{x_i},\overline{x_i}]\) defines the set of reals \(x_i\) such that \(\underline{x_i} \le x_i \le \overline{x_i}\). \(\mathbf {\mathbb {IR}}\) denotes the set of all intervals over the reals. The size or width of \([x_i]\) is denoted by \(w([x_i])=\overline{x_i}-\underline{x_i}\).

Interval arithmetic extends to \(\mathbf {\mathbb {IR}}\) elementary functions over \(\mathbf {\mathbb {R}}\). For instance, the interval sum, i.e., \([x_1]+[x_2]=[\underline{x_1}+\underline{x_2}, \overline{x_1}+\overline{x_2}]\), encloses the image of the sum function over its arguments. In general, an arithmetic operation \(\diamond = \{+,-,\times ,\div \}\) is associated to its interval extension such that:

$$\begin{aligned}{}[a] \diamond [b] \subset [\min \{\underline{a} \diamond \underline{b}, \overline{a} \diamond \underline{b},\underline{a} \diamond \overline{b},\overline{a} \diamond \overline{b}\}, \max \{\underline{a} \diamond \underline{b}, \overline{a} \diamond \underline{b},\underline{a} \diamond \overline{b},\overline{a} \diamond \overline{b}\}]. \end{aligned}$$

An interval vector or a box \([\mathbf {x}] \in \mathbf {\mathbb {IR}}^n\), is a Cartesian product of n intervals. The enclosing property basically defines what is called an interval extension or an inclusion function.

Definition 3 (Inclusion function)

Consider a function \(f: \mathbf {\mathbb {R}}^n \rightarrow \mathbf {\mathbb {R}}^m\), then \([f]\!:\! \mathbf {\mathbb {IR}}^n \rightarrow \mathbf {\mathbb {IR}}^m\) is said to be an extension of f to intervals if

$$\begin{aligned} \forall [\mathbf {x}] \in \mathbf {\mathbb {IR}}^n, \quad [f]([\mathbf {x}]) \supseteq \{f(\mathbf {x}), \mathbf {x} \in [\mathbf {x}]\}. \end{aligned}$$

It is possible to define inclusion functions for all elementary functions such as \(\times \), \(\div \), \(\sin \), \(\cos \), \(\exp \), etc. The natural inclusion function is the simplest to obtain: all occurrences of the real variables are replaced by their interval counterpart and all arithmetic operations are evaluated using interval arithmetic. More sophisticated inclusion functions such as the centered form, or the Taylor inclusion function may also be used (see [19] for more details).

Combining an inclusion function with the rectangle rule, an integral can be bounded as follows:

$$\begin{aligned} \int _{a}^{b}f(x)\,dx \in (b-a)\cdot [f]([a, b]). \end{aligned}$$
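These notions can be made concrete with a minimal interval type. The sketch below ignores floating-point outward rounding, which a real interval library such as the one underlying [25] must handle, and the function f is an arbitrary example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # [a] + [b] = [a_lo + b_lo, a_hi + b_hi]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # take min/max over the four endpoint products
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))

    def width(self):
        return self.hi - self.lo

    def contains(self, x):
        return self.lo <= x <= self.hi

# Natural inclusion function of f(x) = x*x + x: every occurrence of the
# real variable is replaced by its interval counterpart.
def f_incl(x: Interval) -> Interval:
    return x * x + x

# Rectangle-rule bound:  the integral of f over [a, b] lies in (b-a)*[f]([a,b]).
def integral_bound(f, a, b):
    box = f(Interval(a, b))
    return Interval((b - a) * box.lo, (b - a) * box.hi)
```

For \(f(x)=x^2+x\) on [0, 1], the natural inclusion gives \([f]([0,1]) = [0,2]\), so the rectangle rule bounds the integral (whose true value is 5/6) by [0, 2].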

Set-Based Simulation. Also named validated simulation or reachability, set-based simulation aims to compute the reachable tube of an Initial Value Problem with Ordinary Differential Equation (IVP-ODE) with a set-based approach and validated computations.

When dealing with validated computation, the mathematical representation of an IVP-ODE is as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{{y}}(t) = f(t, {y}(t), d(t))\\ {y}(0) \in [{y}_0] \subseteq \mathbf {\mathbb {R}}^n \\ {d}(t) \in [d] \subseteq \mathbf {\mathbb {R}}^m. \end{array}\right. } \end{aligned}$$
(4)

We assume that \(f:\mathbf {\mathbb {R}}\times \mathbf {\mathbb {R}}^n \times \mathbf {\mathbb {R}}^m \rightarrow \mathbf {\mathbb {R}}^n\) is continuous in t and globally Lipschitz in y, so that Eq. (4) admits a unique solution for a given continuous perturbation trajectory. We furthermore suppose that d is bounded in the box [d].

The set of initial conditions \([{y}_0]\) (expressed as a box) is usually used to model (bounded) uncertainties. The set [d] is used to model (bounded) perturbations. For a given initial condition \({y}_0 \in [{y}_0]\) and a given perturbation \(d \in [d]\), the solution at time \(t > 0\), when it exists, is denoted \({y}(t; {y}_0, d)\). The goal of validated numerical integration methods is then to compute the set of solutions of Eq. (4), i.e., the set of possible values of the solution at time t, given an initial condition in \([{y}_0]\) and a perturbation lying in [d]:

$$\begin{aligned} {y}(t; [{y}_0], [d]) = \{{y}(t; {y}_0, d)\ |\ {y}_0 \in [{y}_0],\ d(t) \in [d]\}. \end{aligned}$$
(5)

Validated numerical integration schemes, exploiting the set-membership framework, aim at producing the solution of the IVP-ODE, that is, the set defined in Eq. (5). This results in the computation of an over-approximation of \({y}(t; [{y}_0], [d])\).

The use of set-membership computation for the problem described above makes it possible to design an inclusion function \([{y}](t; [{y}_0],[d])\), which is an over-approximation of \({y}(t; [{y}_0], [d])\) defined in Eq. (5). To do so, let us consider a sequence of time instants \(t_1,\ \dots , t_K\) with \(t_{i+1} = t_i + h\), and compute a sequence of boxes \([{y}_1],\ \dots , [{y}_K]\) such that \({y}(t_{i+1}; [{y}_i],[d]) \subseteq [{y}_{i+1}]\) for all \(i \in \{0, \dots, K-1\}\). From \([{y}_i]\), computing the box \([{y}_{i+1}]\) is a classical two-step method (see [23]):

  • Phase 1: compute an a priori enclosure \(\mathcal{P}^h([y_i],[d])\) of the set \(\{{y}(t_k; {y}_i,d)\ |\ t_k\in [t_i, t_{i}+h], {y}_i \in [{y}_i], d \in [d]\}\), such that \({y}(t_k; [{y}_i],[d])\) is guaranteed to exist;

  • Phase 2: compute a tight enclosure of the solution \([{y}_{i+1}]\) at time \(t_{i+1}\).

The a priori enclosure \(\mathcal{P}^h([y_i],[d])\) computed in Phase 1 is referred to as a Picard box, since its computation relies on the Picard-Lindelöf operator and the Picard theorem (see [2, 21] for more details). We omit the theoretical details, but a successful computation of this box ensures the existence and uniqueness of solutions over the time interval \([t_i,t_i+h]\) for the given box of initial conditions \([y_i]\) and perturbation box [d]. Two main approaches can be used to compute the tight enclosure in Phase 2. The first, and the most widely used, is the Taylor method [25, 26]. The second, more recently studied, is the validated Runge-Kutta method [2]. Guaranteed integration, or reachability analysis, consists in computing a sequence of boxes that enclose the state of the system on a given time interval. For a given switched mode (the notation being omitted) and perturbation set [d] on the time interval \([t,t+\tau ]\), and given an integration time step h such that \(\tau = K h\) (i.e., \(H=h\) and \(k=1\) in the notation introduced above), the Tube operator is computed as the union of the enclosures \(\mathcal{P}^h([y_i],[d])\):

$$\begin{aligned} Tube ([{y}_0]) = \bigcup _{i = 0, \dots , K-1} \mathcal{P}^h([y_i],[d]). \end{aligned}$$

The Post operator is the tight enclosure at the final time:

$$\begin{aligned} Post ([{y}_0]) = [{y}_{K}]. \end{aligned}$$
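As an illustration of the two-phase scheme and of the resulting Tube and Post operators, the sketch below treats the scalar ODE \(\dot{y} = -y\) (chosen arbitrarily, without perturbation) with a first-order tight enclosure \([y_{i+1}] = [y_i] + h\,[f](\mathcal{P})\), which is justified by the rectangle rule; a real implementation would use the Taylor or Runge-Kutta tightening of [2, 25, 26] instead.

```python
# Phase 1 computes an a priori (Picard) box P with [y] + [0,h]*[f](P) inside P;
# Phase 2 then tightens the enclosure at t + h.  Toy dynamics: f(y) = -y.

def f_enclose(box):
    """Natural inclusion of f(y) = -y."""
    lo, hi = box
    return (-hi, -lo)

def picard_box(y, h, inflate=0.1, max_iter=50):
    """Phase 1: validated a priori enclosure of the flow of [y] over [t, t+h]."""
    p = y
    for _ in range(max_iter):
        dlo, dhi = f_enclose(p)
        # [y] + [0,h]*[f](p): rectangle-rule bound on the integral term
        cand = (y[0] + h * min(0.0, dlo), y[1] + h * max(0.0, dhi))
        if p[0] <= cand[0] and cand[1] <= p[1]:
            return p                      # Picard condition holds: p is validated
        w = p[1] - p[0] + 1e-9            # otherwise inflate the candidate, retry
        p = (min(p[0], cand[0]) - inflate * w, max(p[1], cand[1]) + inflate * w)
    raise RuntimeError("no validated enclosure found")

def tube_post(y0, tau, steps):
    """Tube = union of the Picard boxes; Post = tight box at the final time."""
    h = tau / steps
    y, tube = y0, []
    for _ in range(steps):
        p = picard_box(y, h)
        tube.append(p)
        dlo, dhi = f_enclose(p)           # Phase 2: [y_{i+1}] = [y_i] + h*[f](P)
        y = (y[0] + h * dlo, y[1] + h * dhi)
    return tube, y
```

Each call to picard_box performs Phase 1 by inflating a candidate until the Picard condition holds; the enclosures are coarse at this order but rigorous.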

3.2 Cosimulation of Reachable Sets

The complexity of computing the Picard boxes, as well as of tightening the solutions, is exponential in the dimension of the differential equation considered. As a result, reachability analysis lacks scalability with respect to the dimension of the system. In order to break this exponential complexity, a cosimulation approach can be used, with the aim of computing these objects only on parts of the system.

Cosimulation aims at simulating the components of a coupled system separately. In brief, the principle is to enable simulation of the coupled system through the composition of simulators, or simulation units (SUs) [17], each SU being dedicated to a single component of the system. SUs exchange information at given communication times in order to ensure that the simulation error does not grow uncontrollably.

Let us suppose that the dynamics can be decomposed as follows:

$$\begin{aligned}&\dot{x}_1 \in f_1 (t, x_1, u_1 )\quad \text {with} \quad x_1(0) \in [x_1^0], \ u_1 \in [u_1 ], \\&\dot{x}_2 \in f_2 (t, x_2, u_2 )\quad \text {with} \quad x_2(0) \in [x_2^0], \ u_2 \in [u_2 ], \\&\dots \\&\dot{x}_m \in f_m (t, x_m, u_m )\quad \text {with} \quad x_m(0) \in [x_m^0], \ u_m \in [u_m ], \\&L(x_1, \dots , x_m, u_1, \dots , u_m) = 0, \end{aligned}$$

where the state x is decomposed into m components \(x = (x_1, \dots , x_m)\), with \(x_j \in X_j\) for all \(j \in \{1,\dots ,m\}\) and \(X_1 \times \dots \times X_m = \mathbb {R}^n\), and L is a coupling function between the components. The coupling condition \(L(x_1, \dots , x_m, u_1, \dots , u_m) = 0\) should hold at all times. From now on, we use the index \(j \in \{1, \dots , m \}\) to denote subsystem j, and the index i to denote a time interval starting at \(t_i\). Note that, in order to increase the accuracy of the method, the decomposition should be made so as to minimize the number of shared variables between subsystems.

In the most general case, the coupling L is an algebraic condition. For our applications, the coupling is supposed to be given explicitly, i.e., \(u_j\) is given as a function of the other state variables: \(u_j = K_j (x_1, \dots , x_m)\). Cosimulation then consists in computing Post operators for each subsystem separately, and taking a cross product to obtain the global state. To ensure a guaranteed computation, the inputs \(u_j\) can be considered as bounded perturbations. The difficulty lies in determining the size of the set in which each perturbation evolves, since it has to be determined before performing the simulation of the other subsystems. This can be done using the cross-Picard operator introduced in [21], whose purpose is to over-approximate the solutions of all the subsystems over a given time interval (the communication period, also called macro-step) using only local computations.

To compute these sets, we start by guessing a rough over-approximation \([p_j]\) of the solutions \(x_j\) over the next macro-step. This gives some rough over-approximations \([r_j] = K_j ([p_1],\dots ,[p_m])\) of the perturbations \(u_j\). We then compute local Picard boxes iteratively, until the proof of validity of the approximations is obtained for all sub-systems.

More precisely, let us denote by \(\mathcal{P}_j^H([x_j],[u_j])\) the enclosure of the set of solutions of subsystem j over the time-interval \([t,t+H]\): \(\{{x}_j(t_k; {x}_{j},u_j)\ |\ t_k\in [t, t+H], {x}_{j} \in [{x}_{j}], u_j \in [u_j]\}\), where \([{x}_{j}]\) and \([{u}_{j}]\) are the boxes of initial conditions and perturbation for sub-system j. If we can prove that for all sub-systems \(j \in \{1, \dots , m \}\), \(\mathcal{P}_j^H([x_j],K_j ([p_1],\dots ,[p_m])) \subsetneq [p_j] \), then, by application of the Picard theorem, existence and uniqueness of global solutions is ensured for the time interval \([t,t+H]\). Fortunately, this condition is in practice easily met by application of a fixed point algorithm that tightens the rough initial guesses \([p_j]\) (see [21]). Once the Picard boxes are computed and proved safe, each sub-system j can, in parallel, compute its own solution safely on the time interval \([t,t+H]\) by considering \(u_j\) as a perturbation lying in \(K_j ([p_1],\dots ,[p_m])\). We denote the cross-Picard operator as the computation of the validated Picard boxes, the result being given as the cross-product of the Picard boxes.
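A minimal version of this fixed-point computation, for two coupled scalar subsystems with dynamics \(\dot{x}_j = -x_j + 0.1\,u_j\) and coupling \(u_1 = x_2\), \(u_2 = x_1\) (all invented for the illustration), reads as follows.

```python
def f_local(x, u):
    """Inclusion of the local dynamics f_j(x, u) = -x + 0.1 u on intervals."""
    return (-x[1] + 0.1 * u[0], -x[0] + 0.1 * u[1])

def picard_cand(x0, u, p, H):
    """[x0] + [0,H]*[f](p, u): local Picard image over one macro-step."""
    lo, hi = f_local(p, u)
    return (x0[0] + H * min(0.0, lo), x0[1] + H * max(0.0, hi))

def cross_picard(x0s, H, inflate=0.1, max_iter=100):
    """Validated a priori boxes [p_j] for both subsystems over [t, t+H];
    the coupling u_1 = x_2, u_2 = x_1 is evaluated on the current guesses."""
    ps = list(x0s)
    for _ in range(max_iter):
        us = [ps[1], ps[0]]            # u_j = K_j(p_1, p_2)
        cands = [picard_cand(x0s[j], us[j], ps[j], H) for j in range(2)]
        if all(ps[j][0] <= cands[j][0] and cands[j][1] <= ps[j][1]
               for j in range(2)):
            return ps                  # every local Picard condition holds
        ps = [(min(p[0], c[0]) - inflate * (p[1] - p[0] + 1e-9),
               max(p[1], c[1]) + inflate * (p[1] - p[0] + 1e-9))
              for p, c in zip(ps, cands)]
    raise RuntimeError("cross-Picard fixed point not reached")
```

The returned boxes enclose the trajectories of both subsystems over the whole macro-step, so each unit can then integrate on its own with its input frozen to the other unit's box.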

Our approach for guaranteed cosimulation of the Post operator over the interval \([t,t+\tau ]\) is thus summarized as follows:

  1. Compute an over-approximation of the solutions on the time interval \([t,t+H]\) (compute the cross-Picard operator).

  2. Advance the simulation of all subsystems in parallel (using a time step h) until time \(t+H\), the inputs being considered as bounded perturbations lying in the sets returned in Step 1.

  3. Update the initial conditions and input values.

  4. Repeat on the interval \([t+H,t+2H]\), and so on until \([t+\tau -H,t+\tau ]\).
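These four steps can be assembled into a small sequential driver. As above, the two coupled scalar subsystems \(\dot{x}_j = -x_j + 0.1\,u_j\), \(u_j = x_{\text{other}}\), and all constants are invented for the illustration; the validation step is a naive inflation loop, and the per-unit integration is a first-order interval step (a real implementation would advance the units in parallel with higher-order validated schemes).

```python
def f_sub(x, u):
    """Inclusion of the (invented) subsystem dynamics f_j(x, u) = -x + 0.1 u."""
    return (-x[1] + 0.1 * u[0], -x[0] + 0.1 * u[1])

def validate_boxes(xs, H, inflate=0.1, max_iter=100):
    """Step 1: boxes ps with [x_j] + [0,H]*[f](p_j, p_other) inside p_j."""
    ps = list(xs)
    for _ in range(max_iter):
        cands = []
        for j in range(2):
            lo, hi = f_sub(ps[j], ps[1 - j])          # coupling u_j = x_other
            cands.append((xs[j][0] + H * min(0.0, lo),
                          xs[j][1] + H * max(0.0, hi)))
        if all(p[0] <= c[0] and c[1] <= p[1] for p, c in zip(ps, cands)):
            return ps
        ps = [(min(p[0], c[0]) - inflate * (p[1] - p[0] + 1e-9),
               max(p[1], c[1]) + inflate * (p[1] - p[0] + 1e-9))
              for p, c in zip(ps, cands)]
    raise RuntimeError("validation failed")

def cosimulate(xs, tau, H, h):
    """Macro-step loop over [t, t+tau]: validate, advance each unit, update."""
    for _ in range(round(tau / H)):
        ps = validate_boxes(xs, H)                    # Step 1: a priori boxes
        for _ in range(round(H / h)):                 # Step 2: parallelisable
            xs = [(xs[j][0] + h * f_sub(ps[j], ps[1 - j])[0],
                   xs[j][1] + h * f_sub(ps[j], ps[1 - j])[1])
                  for j in range(2)]
        # Step 3 is implicit: xs become the initial conditions of the next step
    return xs                                         # per-component Post boxes
```

The returned boxes are the per-component Post sets; their cross product over-approximates the global Post. The first-order steps make the enclosures wide, which is exactly the accuracy trade-off discussed in Sect. 3.3.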

3.3 Discussion on Meta-parameters

The different time periods involved in the synthesis and cosimulation procedures (h, H, and \(\tau \)) play a crucial role in the accuracy of the reachability analysis, and thus in the success of the control synthesis. In short, a reachability analysis is performed each time a control sequence is tested, so improving the speed of the reachability analysis drastically improves control synthesis computation times, provided that the accuracy remains high enough for control synthesis to succeed.

One key aspect is that the frequency at which we update the initial conditions and perturbation sets (the communication frequency 1/H) should be as low as possible in order to increase the speed of the reachability analysis, but this comes at the cost of the accuracy of the cosimulation. The speed-up obtained with fewer communications is due to the fact that each communication time involves applying a fixed-point algorithm to validate the perturbation sets, which takes a non-negligible amount of computation. Conversely, shorter communication periods mean that the perturbation sets are smaller, so the cosimulation leads to tighter reachable tubes, making the synthesis easier. The largest communication period we can consider is the switching period \(H=\tau \). If such a large communication period allows enough accuracy for control synthesis purposes, it leads to the best computation time gains. In practice, however, the switching period can be too large to avoid communication between switching times. If a communication is necessary between switchings, then, in order to make the most of each data exchange, the communication frequency should be a multiple of the switching frequency.

We would like to point out that the integration time step h can be different for each simulation unit (for the reachability analysis of separate components, once the perturbation sets are validated). The integration methods can even differ: implicit and explicit methods can run in parallel. The only requirement is that the perturbation sets are proved safe (with the use of Picard operators). This means that complex systems involving stiff and nonstiff dynamics, or linear and nonlinear dynamics, can be divided in such a way that the computational power is dedicated to the parts that are the most difficult to integrate. Our applications illustrate the scalability of the proposed method, but industrial applications involving more complex dynamics could show even greater improvements.

4 Experiments

4.1 Case Study

This case study is based on a simple model of a two-room apartment, heated by a heater in one of the rooms (adapted from [15]). The state space is initially of dimension 2; the case study is made scalable by concatenating two-room apartments in a line, so that each room exchanges heat with its neighbouring rooms, and every other room is equipped with a heater.

In this example, the objective is to control the temperatures of all the rooms. There is heat exchange between neighbouring rooms and with the environment. The continuous dynamics of the system (the temperatures) is given by

$$ \dot{\left( \begin{matrix}T_1 \\ T_2 \\ \vdots \\ T_n \end{matrix} \right) } = A \left( \begin{matrix}T_1 \\ T_2 \\ \vdots \\ T_n \end{matrix} \right) + B_u. $$

The dimension n is supposed to be even, \(n = 2 m\). The non-null coefficients \(a_{i,j}\) of the matrix A are:

$$\begin{aligned} a_{1,1}&= - \alpha _{r} - \alpha _{e}-\alpha _f{u_1}&\\ a_{2i+1,2i+1}&= - 2 \alpha _{r} - \alpha _{e}&\quad i=1,\dots ,m-1 \\ a_{2i,2i}&= - 2 \alpha _{r} - \alpha _{e}-\alpha _f{u_i}&\quad i=1,\dots ,m-1 \\ a_{2m,2m}&= - \alpha _{r} - \alpha _{e}&\\ a_{i,i+1}&= a_{i+1,i} = \alpha _{r}&\quad i=1,\dots ,2m-1. \end{aligned}$$

The non-null coefficients \(b_{i}\) of the input vector \(B_u\) are:

$$\begin{aligned} b_{2i-1}&= \alpha _{e} T_e + \alpha _f T_f { u_{i+1}}&\quad i=1,\dots ,m \\ b_{2i}&= \alpha _{e} T_e&\quad i=1,\dots ,m. \end{aligned}$$

Here, \(T_i\) for \(i=1,\dots ,2m\) is the temperature of room i, and the state of the system is \(T=(T_1,\dots ,T_n)\). The control modes are given by the variables \(u_j\) for \(j=1,\dots ,m\); each can take the value 0 or 1, depending on whether the heater in room \(2j-1\) is switched off or on. Hence, the number of switched modes is \(2^m\). Temperature \(T_e\) is the temperature of the environment, and \(T_f\) that of the heaters. The parameter values are: \(\alpha _{r} = 5 \times 10^{-2}\), \(\alpha _{e} = 5 \times 10^{-3}\), \(\alpha _{f} = 8.3 \times 10^{-3}\), \(T_e = 10\) and \(T_f = 50\).

The control objective is to ensure \(\tau \)-stability of the temperature in \(R = [19,21] \times \dots \times [19,21]\), while ensuring safety in \(S = [18,22] \times \dots \times [18,22]\), with a switching period \(\tau = 10\). We do not consider any obstacle B in this example. The maximal length of patterns is set to \(P=4\), and the maximal decomposition depth to \(D=2\).
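For intuition about the dynamics, the two-room base case (\(m=1\)) can be simulated with a plain (non-validated) RK4 scheme and a naive thermostat rule. The thermostat rule is invented for the illustration and is not the synthesized controller, and we read the heater index in \(b_{2i-1}\) as \(u_i\), which is what the two-room case requires.

```python
import numpy as np

A_R, A_E, A_F = 5e-2, 5e-3, 8.3e-3
T_E, T_F, TAU = 10.0, 50.0, 10.0

def deriv(T, u):
    """Two-room dynamics: room 1 carries the heater (mode u in {0, 1})."""
    t1, t2 = T
    dt1 = -(A_R + A_E + A_F * u) * t1 + A_R * t2 + A_E * T_E + A_F * T_F * u
    dt2 = A_R * t1 - (A_R + A_E) * t2 + A_E * T_E
    return np.array([dt1, dt2])

def rk4_step(T, u, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(T, u)
    k2 = deriv(T + 0.5 * h * k1, u)
    k3 = deriv(T + 0.5 * h * k2, u)
    k4 = deriv(T + h * k3, u)
    return T + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(T0, periods, h=0.1):
    """Naive thermostat: heat iff room 1 is below 20 at the sampling instant."""
    T, traj = np.array(T0, float), []
    for _ in range(periods):
        u = 1 if T[0] < 20.0 else 0
        for _ in range(round(TAU / h)):
            T = rk4_step(T, u, h)
        traj.append(T.copy())
    return traj
```

With the parameter values above, the heated equilibrium lies near 28 °C and the unheated one at 10 °C, and the slow heat exchange keeps the bang-bang excursions around the 20 °C set point small.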

4.2 Experimental Results

In order to validate our approach, we synthesize the control rule for the problem given in Sect. 4.1 for different numbers of rooms, \(n=2,4,6,8\). The results are gathered in Table 1. All the simulations are performed with the classical RK4 method, an explicit Runge-Kutta method with four stages, of order four. Our choice of this method is based on its widespread use and on the fact that finding a control for this case study requires an order greater than two. Cosimulation consists in m simulations of systems of dimension two (three with an additional dimension for time). More precisely, if \(m \ge 3\), the system

$$ \dot{\left( \begin{matrix}T_1 \\ T_2 \\ \vdots \\ T_n \end{matrix} \right) } = A \left( \begin{matrix}T_1 \\ T_2 \\ \vdots \\ T_n \end{matrix} \right) + B_u, $$

for \(n=2m\), is rewritten as m coupled subsystems, for \(i=1,\dots ,m\):

$$ \dot{\left( \begin{matrix}T_{2i-1} \\ T_{2i} \end{matrix} \right) } = A^i \left( \begin{matrix}T_{2i-1} \\ T_{2i} \end{matrix} \right) + B^i_u + D^i \, \bar{T}^i, $$

where \(A^i\) and \(B^i_u\) are the blocks of A and \(B_u\) associated with rooms \(2i-1\) and 2i, \(\bar{T}^i\) gathers the temperatures of the adjacent rooms, and \(D^i\) is a disturbance matrix composed of coefficients of A. One can see that each subsystem is perturbed by its adjacent rooms (there is one adjacent room for the first and the last subsystem, and two adjacent rooms for the others). There is one communication per switching period, meaning that \(H = 5\) for \(\tau = 10\).
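The rewriting amounts to splitting A into 2×2 diagonal blocks plus neighbour coupling terms. The sketch below performs this split on an arbitrary tridiagonal matrix (illustrative values, not the coefficients above) and checks that the block form reproduces the centralized derivative.

```python
import numpy as np

n, m = 6, 3                                   # n = 2m rooms, m subsystems
rng = np.random.default_rng(0)

# An arbitrary tridiagonal matrix standing in for A (illustration only).
A = np.diag(rng.uniform(-1.0, -0.5, n))
off = rng.uniform(0.01, 0.1, n - 1)
A += np.diag(off, 1) + np.diag(off, -1)
b = rng.uniform(0.0, 1.0, n)
x = rng.uniform(18.0, 22.0, n)

centralized = A @ x + b                       # derivative of the full system

# Subsystem j owns states (2j, 2j+1); everything A couples to outside that
# block becomes the disturbance term D^j applied to the neighbouring rooms.
distributed = np.empty(n)
for j in range(m):
    s = slice(2 * j, 2 * j + 2)
    local = A[s, s] @ x[s] + b[s]             # block dynamics A^j x^j + B^j
    Dj = A[s, :].copy()
    Dj[:, s] = 0.0                            # strip the block's own columns
    distributed[s] = local + Dj @ x           # add the neighbour disturbance
```

By construction, the local block plus the disturbance term recovers exactly the rows of the centralized derivative, which is what makes the decomposition exact up to the over-approximation of the neighbour temperatures.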

These simulations are also performed in parallel, using the Open Multi-Processing (OpenMP) API for Linux. Experiments are run on a bi-processor Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40 GHz with 12 cores each. A time-out (T.O.) is fixed at three days, i.e., 4320 min. Computation times may seem large, but the results are guaranteed and have to be computed only once, offline.

Table 1. Results of synthesis for problem given in Sect. 4.1: computation times, in minutes, for centralized dynamics, with cosimulation and with parallelized cosimulation.

4.3 Discussion

Our method shows its efficiency even with only 4 rooms when parallelization is used. A control rule can be synthesized for 8 rooms with cosimulation, while no result can be obtained without our approach before the time-out. Cosimulation allows a very straightforward parallelization, which significantly reduces computation time. Our experiments revealed the necessity of one communication per switching period for this case study: using none (communicating only at the beginning of a switching period) led to sets too wide to ensure \(\tau \)-stability.

5 Conclusion

In this paper, we presented a procedure for control synthesis that relies heavily on (guaranteed) reachability computations, whose scalability is limited by the complexity of set-based integration. We proposed the use of guaranteed cosimulation to improve control synthesis computation times, and illustrated the efficiency of the approach on a scalable case study. As of now, some expertise is required to choose a communication frequency that yields computation time gains as well as successful control synthesis. We would like to explore the possibility of automatically determining a good communication frequency in the context of switched systems.

The current implementation of the procedure simulates the subsystems in parallel, but the cross-Picard computation (involving repeated applications of Picard operators) is still sequential due to memory management issues. Our future work will be devoted to a parallel implementation of the cross-Picard computation, which would hopefully mitigate the cost of communication times in the present procedure.