1 Introduction

Formal verification aims to prove properties of dynamical systems automatically. In this paper, we are interested in proving invariance, i.e. a property valid for all reachable values, regardless of the step at which a value is reached. From our point of view, a property can be represented by a set C: checking the property reduces to checking whether the set of reachable values is fully contained in C. The paper proposes a static analysis framework to solve this verification problem; in other words, we develop a technique to check the property without any simulation of the system. Here, we handle properties representable as a sublevel set of a quadratic function, i.e. an ellipsoid. The verification problem in this case is equivalent to an optimization problem in which the decision variable is constrained to belong to the set of reachable values. One may think that it suffices to represent the reachable values set precisely in order to check the property. Nevertheless, in static analysis, the reachable values set is classically approached iteratively using the Kleene iteration scheme. Since this computation is slow, Kleene iterations are coupled with acceleration techniques that degrade the representation of the reachable values set, so the verification of the property may fail because of the loss of precision. In this paper, we instead use a direct method based on semidefinite programming, which represents the reachable values set and checks the property at the same time. The main idea is to use a minimizing abstraction based on a union of truncated ellipsoids.

Related Works. This method is an adaptation of [2], in which we proposed the synthesis of a minimizing polynomial sublevel abstraction based on sums-of-squares. Here, we allow non-polynomial abstractions but restrict ourselves to semidefinite programming to obtain better scalability. The method proposed here is also an extension of [1], where the only property handled is boundedness.

The techniques presented in this paper use piecewise quadratic Lyapunov conditions [9, 12]. In [9, 12], however, the authors are interested in proving stability of piecewise affine systems. As with classical quadratic Lyapunov functions, piecewise quadratic Lyapunov functions provide invariant sublevel sets for the system; we use this latter interpretation for verification purposes. In this paper, we are interested in synthesizing disjunctive invariants. This form of invariant appears in the tropical polyhedra domain [3], where the author generates disjunctions of zones as invariants; those invariants, however, do not encode quadratic relations between variables. The synthesis of quadratic invariants for switched systems is studied in [4], but there the invariant generation is neither guided by the property to prove nor based on optimization problems.

Organisation of the Paper. The paper is organised as follows. In Sect. 2, we present the context: the systems and the properties that we consider in the paper. In Sect. 3, we give details about the mathematical model, which uses semidefinite programming to solve optimization problems. In Sect. 4, we conclude and propose some future work.

2 Proving Properties on Constrained Piecewise Affine Discrete-Time Dynamical Systems

2.1 Some Recalls About Polyhedra

In this paper, we will denote by \(\mathbb {M}_{n \times m}\) the set of matrices with n rows and m columns. For \(n\in \mathbb N\), [n] will denote the set of integers \(\{1,\ldots ,n\}\).

In our work, we suppose that a (convex) polyhedron can contain both strict and weak inequalities.

Definition 1

(Polyhedra of \(\mathbb R^d\) ). A polyhedron of \(\mathbb R^d\) is a set of the form:

$$ \left\{ x\in \mathbb R^d \mid P_s x \ll b_s,\ P_w x \le b_w \right\} $$

where \(P_s\in \mathbb {M}_{n_s \times d}\), \(P_w\in \mathbb {M}_{n_w \times d}\), \(b_s\in \mathbb R^{n_s}\) and \(b_w\in \mathbb R^{n_w}\). We insist on the notation: \(y\ll z\) means that \(y_l<z_l\) for all coordinates l, and \(y\le z\) means that \(y_l\le z_l\) for all coordinates l.

Lemma 1

(Polyhedron closure). The topological closure of a nonempty, nontrivial (not \(\mathbb R^d\)) polyhedron is obtained by keeping the weak inequalities and replacing the strict inequalities by weak ones.

A direct application of Motzkin’s transposition theorem [13] yields the next proposition.

Proposition 1

(Test of the emptiness of a polyhedron). Let X be a polyhedron. Suppose that \(X=\left\{ x\in \mathbb R^d \mid P_s x \ll b_s,\ P_w x \le b_w\right\} \). Then \(X\ne \emptyset \) if and only if:

$$ \left\{ \begin{array}{l} \displaystyle { \begin{pmatrix} 1 & 0_{1\times d}\\ b_s & -P_s \end{pmatrix}^\intercal p^s+\begin{pmatrix} b_w & -P_w \end{pmatrix}^\intercal p=0}\\ \\ \displaystyle {\sum _{k=1}^{n_s+1} p_k^s=1},\ p^s\ge 0,\ p\ge 0 \end{array} \right. $$

has no solution.
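In practice, this emptiness test can be delegated to off-the-shelf linear programming. The sketch below is our illustration (not the paper's implementation): instead of searching for the dual certificate of Proposition 1, it solves the equivalent primal problem of pushing a slack variable t into the strict inequalities; the polyhedron is nonempty exactly when t can be made strictly positive.

```python
# Emptiness test for {x | P_s x << b_s, P_w x <= b_w} via one LP:
#   max t  s.t.  P_s x + t*1 <= b_s,  P_w x <= b_w,  0 <= t <= 1.
# The polyhedron is nonempty iff the optimal t is strictly positive.
import numpy as np
from scipy.optimize import linprog

def polyhedron_nonempty(P_s, b_s, P_w, b_w):
    d = P_w.shape[1] if P_w.size else P_s.shape[1]
    n_s = P_s.shape[0]
    c = np.zeros(d + 1)
    c[-1] = -1.0                       # maximize t == minimize -t
    A_ub = np.vstack([np.hstack([P_s, np.ones((n_s, 1))]),
                      np.hstack([P_w, np.zeros((P_w.shape[0], 1))])])
    b_ub = np.concatenate([b_s, b_w])
    bounds = [(None, None)] * d + [(0.0, 1.0)]   # x free, t in [0, 1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.status == 0 and -res.fun > 1e-9

# {x in R | -x < 0, x <= 0} is empty; [-1, 1] is not.
empty_case = polyhedron_nonempty(np.array([[-1.0]]), np.array([0.0]),
                                 np.array([[1.0]]), np.array([0.0]))
box = polyhedron_nonempty(np.empty((0, 1)), np.empty(0),
                          np.array([[1.0], [-1.0]]), np.array([1.0, 1.0]))
```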

Definition 2

(Indices of meeting polyhedra). We consider L finite families of polyhedra \((S^i)_{i\in [L]}\). For all \(i\in [L]\), \(S^i=\{X_1^i,\ldots , X_{N_i}^{i}\}\). We define:

$$ {\text {Com}}(S^1,S^2,\ldots ,S^L)=\{(i_1,i_2,\ldots ,i_L)\in \prod _{i=1}^L [N_i]\mid \bigcap _{j=1}^L X_{i_j}^j\ne \emptyset \}. $$

Corollary 1

Let \((S^i)_{i\in [L]}\) be L finite families of polyhedra. Suppose that for all \(i\in [L]\), \(S^i\) has \(N_i\) elements. Then \({\text {Com}}(S^1,S^2,\ldots ,S^L)\) can be computed by solving \(\prod _{i=1}^L N_i\) linear systems.
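A minimal sketch of Corollary 1, under the simplifying assumptions that each polyhedron is given by weak inequalities only (a pair (P, b) encoding \(\{x\mid Px\le b\}\)) and that indices are 0-based: one linear program per tuple of cells decides whether the corresponding intersection is nonempty.

```python
# Compute Com(S^1, ..., S^L) of Definition 2: enumerate the product of the
# families and keep the index tuples whose cells have a nonempty intersection.
import itertools
import numpy as np
from scipy.optimize import linprog

def intersection_nonempty(cells):
    P = np.vstack([c[0] for c in cells])
    b = np.concatenate([c[1] for c in cells])
    d = P.shape[1]
    # zero objective: we only ask the LP solver for feasibility
    res = linprog(np.zeros(d), A_ub=P, b_ub=b, bounds=[(None, None)] * d)
    return res.status == 0

def com(*families):
    return {idx
            for idx in itertools.product(*(range(len(S)) for S in families))
            if intersection_nonempty([families[k][i] for k, i in enumerate(idx)])}

# two 1-D families: S1 = ({x<=0}, {x>=0}), S2 = ({x<=-1}, {x>=-1})
S1 = [(np.array([[1.0]]), np.array([0.0])), (np.array([[-1.0]]), np.array([0.0]))]
S2 = [(np.array([[1.0]]), np.array([-1.0])), (np.array([[-1.0]]), np.array([1.0]))]
pairs = com(S1, S2)   # {x>=0} and {x<=-1} is the only empty combination
```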

Definition 3

(Polyhedric partition). A polyhedric partition is a family of polyhedra of \(\mathbb R^d\), \(\{X^1,\ldots ,X^N\}\) such that:

$$ \forall \, i,j\in [N]\ \text {s.t. } i\ne j,\ X^i\cap X^j=\emptyset \quad \text { and }\quad \bigcup _{i\in [N]}X^i=\mathbb R^d\; . $$

If a polyhedric partition S contains N elements, we will say that S is a polyhedric partition of size N.

We will denote by \(f_{\mid X}\) the restriction of a function \(f\) to the set X.

Definition 4

(Piecewise affine maps). A map \(f:\mathbb R^d\rightarrow \mathbb R^m\) is piecewise affine if and only if there exists a polyhedric partition \(\{X^i,i\in [N]\}\) such that \(f_{\mid X^i}\) is affine for all \(i\in [N]\).

Definition 5

(Piecewise quadratic functions). A function \(f:\mathbb R^d\rightarrow \mathbb R\) is piecewise quadratic if and only if there exists a polyhedric partition \(\{X^i,i\in [N]\}\) such that \(f_{\mid X^i}\) is quadratic for all \(i\in [N]\).

2.2 Piecewise Affine Discrete-Time Systems

In this paper, we are interested in proving properties of piecewise affine discrete-time systems (PWA for short) automatically. The term piecewise affine means that the dynamics of the system are piecewise affine. From Definition 4, there exists a polyhedric partition \(\{X^i,i\in \mathcal {I}\}\) such that, for all \(i\in \mathcal {I}\), the dynamics of the system restricted to \(X^i\) are affine. The dynamics of a PWA system are thus represented by the following relation, for all \(k\in \mathbb N\) and all \(i\in \mathcal {I}\):

$$\begin{aligned} \text {if } x_k\in X^i,\ x_{k+1}=f^i(x_k),\ f^i:y\mapsto A^i y+b^i \end{aligned}$$
(1)

where \(A^i\in \mathbb {M}_{d \times d}\) and \(b^i\) is a vector of \(\mathbb R^d\). We assume that the initial condition \(x_0\) belongs to some polytope \(X^{\mathrm {in}}\). We will need homogeneous versions of the laws and thus introduce the \((1+d)\times (1+d)\) matrices \(F^i\) defined as follows:

$$\begin{aligned} F^i=\begin{pmatrix} 1 & 0_{1\times d} \\ b^i & A^i \end{pmatrix}. \end{aligned}$$
(2)

The system defined in Eq. (1) can be rewritten as \((1,x_{k+1})^\intercal =F^i(1,x_k)^\intercal \).
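The homogenisation can be checked numerically. In this sketch, \(A\) is the matrix \(A^1\) of the running example below, while the affine part b is a hypothetical value chosen for illustration (the running example is linear, so its \(b^i\) are zero).

```python
# Check Eq. (2): with F = [[1, 0_{1xd}], [b, A]], one step
# x_{k+1} = A x_k + b reads (1, x_{k+1}) = F (1, x_k).
import numpy as np

A = np.array([[-0.04, -0.461], [-0.139, 0.341]])  # A^1 of the running example
b = np.array([0.2, -0.1])                          # hypothetical affine part

F = np.zeros((3, 3))
F[0, 0] = 1.0
F[1:, 0] = b
F[1:, 1:] = A

x = np.array([0.5, -0.3])
lhs = np.concatenate([[1.0], A @ x + b])   # (1, x_{k+1})
rhs = F @ np.concatenate([[1.0], x])       # F (1, x_k)
```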

To sum up, we give a formal definition of what we call a piecewise affine system (PWA for short).

Definition 6

(Piecewise Affine System). A piecewise affine system (PWA) is the triple \((X^{\mathrm {in}},\mathcal {X},\mathcal {A})\) where:

  • \(X^{\mathrm {in}}\) is the polytope of the possible initial conditions;

  • \(\mathcal {X}:=\{X^i, i\in \mathcal {I}\}\) is the polyhedric partition;

  • \(\mathcal {A}:=\{f^i, i\in \mathcal {I}\}\) is the family of affine laws relative to \(\mathcal {X}\) satisfying Eq. (1).

Example 1

(Running example). Let us consider the piecewise linear system depicted in [12]. We make some modifications: we complete the example by adding a nondeterministic initial condition, and we slightly change the partition used in [12] so as to satisfy Definition 3.

Let us take \(X^{\mathrm {in}}=[-1,1]\times [-1,1]\). The dynamical system is defined, for all \(k\in \mathbb N\), by

$$ x_{k+1}= \left\{ \begin{array}{lr} A^1 x_k & \text { if } x_{k,1} \ge 0 \text { and } x_{k,2} \ge 0\\ A^2 x_k & \text { if } x_{k,1} \ge 0 \text { and } x_{k,2}< 0\\ A^3 x_k & \text { if } x_{k,1}< 0 \text { and } x_{k,2}< 0\\ A^4 x_k & \text { if } x_{k,1} < 0 \text { and } x_{k,2} \ge 0 \end{array} \right. $$

with

$$ \begin{array}{c} A^1=\begin{pmatrix} -0.04 & -0.461\\ -0.139 & 0.341 \end{pmatrix},\ A^2=\begin{pmatrix} 0.936 & 0.323\\ 0.788 & -0.049 \end{pmatrix}\\ A^3=\begin{pmatrix} -0.857 & 0.815\\ 0.491 & 0.62 \end{pmatrix},\ A^4=\begin{pmatrix} -0.022 & 0.644\\ 0.758 & 0.271 \end{pmatrix} \end{array} $$

We thus have

$$ \begin{array}{c} X^1=\{x\in \mathbb R^2\mid -x_1\le 0, -x_2\le 0\},\ X^2=\{x\in \mathbb R^2\mid -x_1\le 0, x_2< 0\},\\ X^3=\{x\in \mathbb R^2\mid x_1< 0, x_2< 0\}\text { and } X^4=\{x\in \mathbb R^2\mid x_1< 0, -x_2\le 0\}. \end{array} $$

Now, we formally define the set of reachable values.

Definition 7

(Reachable values set). Let \(P=(X^{\mathrm {in}},\mathcal {X},\mathcal {A})\) be a PWA. We define the reachable values set as the set \(\mathcal {R}_P\) defined by:

$$ \mathcal {R}_P=\bigcup _{k\in \mathbb N} {\mathbb {A}^{}}^k(X^{\mathrm {in}}),\text { where } \mathbb {A}^{}:\mathbb R^d\mapsto \mathbb R^d, \mathbb {A}^{}(x)=A^i x+b^i \text { if } x\in X^i $$

In Definition 7, we write P as a subscript, but in the rest of the paper we simply write \(\mathcal {R}\), since there is no risk of confusion.

Example 2

(Reachable values of the running example). Figure 1 depicts a discretized version of the reachable values set of Example 1. More precisely, the set of initial conditions \(X^{\mathrm {in}}\) is discretized with a step of 0.2; then the trajectory starting from each point of the discretization of \(X^{\mathrm {in}}\) is drawn. The trajectories are stopped after exactly 100 steps.
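The discretized reachable set of Example 2 can be reproduced with a short simulation; the sketch below assumes the matrices and partition of Example 1, a 0.2-grid on \(X^{\mathrm {in}}\), and trajectories of 100 steps.

```python
# Simulate the running example from a 0.2-grid of X_in = [-1,1]^2
# and record the largest squared norm seen along all trajectories.
import numpy as np

A = [np.array([[-0.04, -0.461], [-0.139, 0.341]]),
     np.array([[0.936, 0.323], [0.788, -0.049]]),
     np.array([[-0.857, 0.815], [0.491, 0.62]]),
     np.array([[-0.022, 0.644], [0.758, 0.271]])]

def region(x):
    """0-based index of the cell X^1..X^4 of Example 1 containing x."""
    if x[0] >= 0:
        return 0 if x[1] >= 0 else 1
    return 2 if x[1] < 0 else 3

grid = np.arange(-1.0, 1.0 + 1e-9, 0.2)   # 0.2-discretization of X_in
max_sq_norm = 0.0
for x1 in grid:
    for x2 in grid:
        x = np.array([x1, x2])
        for _ in range(100):               # trajectories stopped at 100 steps
            max_sq_norm = max(max_sq_norm, float(x @ x))
            x = A[region(x)] @ x
```

The largest value is reached at the corners of \(X^{\mathrm {in}}\), consistent with the bound \(\Vert x\Vert _2^2\le 2\) proved later on this example.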

Fig. 1.
figure 1

A discrete version of the reachable values set of Example 1

Definition 8

(Invariant). Let \(P=(X^{\mathrm {in}},\mathcal {X},\mathcal {A})\) be a PWA. A set \(S\subseteq \mathbb R^d\) is said to be an invariant if and only if \(\mathcal {R}_P\subseteq S\).

Example 3

(An invariant for the running example). Figure 2 depicts an invariant computed as a sublevel set of a piecewise quadratic function. We will come back to the details of its computation in Subsect. 3.5.

Fig. 2.
figure 2

An invariant sublevel of a piecewise quadratic function for Example 1, in yellow, the set of Example 2 (Color figure online)

2.3 Proving Properties on PWA

In this section, we give details about the properties that we can handle. First, we are interested in proving invariance: for a set S given by the user, we aim to prove automatically that S is an invariant. Second, we only consider sets S representable as sublevel sets of quadratic functions, i.e. ellipsoids. Indeed, we can check properties of this form using quadratic programming; we then use semidefinite relaxations of the constructed quadratic programs.

Let V be a quadratic function and let us define \(S=\{x\in \mathbb R^d\mid V(x)\le \alpha \}\), which is an ellipsoid. Then checking that S is an invariant, i.e. \(\mathcal {R}\subseteq S\), is equivalent to checking \(\sup _{x\in \mathcal {R}} V(x)\le \alpha \). Let us consider the following optimization problem:

$$\begin{aligned} \sup _{x\in \mathcal {R}} x^\intercal M x+2p^\intercal x \end{aligned}$$
(3)

Determining the optimal value of Problem (3) is exactly what we have to do to check our properties. The main difficulty with Problem (3) is that we cannot use \(\mathcal {R}\) as in Definition 7: \(\mathcal {R}\) is not implementable, so we propose to use an implementable over-approximation of it. This principle is not new and corresponds to the concept of abstract interpretation; we refer the reader to [7] for a seminal presentation of this approach. We can characterize \(\mathcal {R}\) as the collecting semantics, i.e. the smallest fixed point of a transfer function. Let us denote by \(\wp (\mathbb R^d)\) the set of subsets of \(\mathbb R^d\). The transfer function \(F: \wp (\mathbb R^d) \rightarrow \wp (\mathbb R^d)\) we use is defined as follows:

$$\begin{aligned} F(C) =X^{\mathrm {in}}\cup \bigcup _{i\in \mathcal {I}} f^i\left( C\cap X^i\right) \end{aligned}$$
(4)

We equip \(\wp (\mathbb R^d)\) with the partial order of inclusion. The infimum is understood in this sense i.e. as the greatest lower bound with respect to this order. The smallest fixed point problem is:

$$\begin{aligned} \inf \left\{ C\in \wp (\mathbb R^d)\mid C=F(C) \right\} \!. \end{aligned}$$

It is well-known from Tarski’s theorem that the solution of this problem exists and is unique, and that in this case it corresponds to \(\mathcal {R}\). Tarski’s theorem also states that \(\mathcal {R}\) is the smallest solution of the following problem:

$$\begin{aligned} \inf \left\{ C\in \wp (\mathbb R^d)\mid F(C)\subseteq C\right\} \!. \end{aligned}$$

Finally, any set \(P^\mathrm {ind}\) such that \(F(P^\mathrm {ind})\subseteq P^\mathrm {ind}\) provides a safe over-approximation of \(\mathcal {R}\); such a set \(P^\mathrm {ind}\) is called an inductive invariant. An abstraction consists in restricting the family of inductive invariants \(P^\mathrm {ind}\) to an implementable one. Here, we choose to constrain the inductive invariant to be a sublevel set of a piecewise quadratic function. Indeed, being inductive is equivalent to a Lyapunov condition for sets. Moreover, the sublevel sets of a Lyapunov function are inductive invariants if and only if they contain the initial conditions. In the context of piecewise affine systems, the class of piecewise quadratic functions furnishes a relevant class of Lyapunov functions [9, 12]. Consequently, to solve Problem (3), we will compute a piecewise quadratic function such that the sublevel set :

figure a

Example 4

(The boundedness problem). To prove automatically that the reachable values set \(\mathcal {R}\) of the running example is bounded, we have to solve the optimization problem:

figure b

If the problem has a finite optimal value, then \(\mathcal {R}\) is bounded. Following Eqs. (5) and (6), we should look for a piecewise quadratic function whose 0-sublevel set \(S^\mathrm {ind}\) provides an overapproximation of \(\mathcal {R}\) that minimizes the gap between \(S^\mathrm {ind}\) and \(\mathcal {R}\).

3 The Mathematical Model

Now we give the details about the computation of the piecewise quadratic function \(V^\mathrm {ind}\) such that \(S^\mathrm {ind}\) satisfies Eqs. (5) and (6). First we introduce the notion of cone-copositive matrices.

3.1 Cone-Copositive Matrices

The set \(\mathbb {S}_{m}\) denotes the set of symmetric matrices. The set \(\mathbb {S}_{m}^{+}\) denotes the set of (symmetric) positive semidefinite matrices, i.e. the matrices A such that \(x^\intercal A x\ge 0\) for all \(x\in \mathbb R^m\). When the dimension of the matrices is obvious, we will also use the notation \(A\succeq 0\) to mean that A is positive semidefinite. The set \(\mathbb {S}_{m}^{\ge 0}\) denotes the set of symmetric matrices with nonnegative coefficients.

Let q be a quadratic form, i.e. a function such that for all \(y\in \mathbb R^d\), \(q(y)=y^\intercal A_q y+b_q^\intercal y+c_q\) where \(A_q\in \mathbb {S}_{d}\), \(b_q\in \mathbb R^d\) and \(c_q\in \mathbb R\). We define the lift-matrix of q as the following matrix of \(\mathbb {S}_{d+1}\):

$$\begin{aligned} \mathbf {M}(A_q,b_q,c_q)=\mathbf {M}(q)=\begin{pmatrix} c_q & (b_q/2)^\intercal \\ b_q/2 & A_q \end{pmatrix} \end{aligned}$$
(7)

It is obvious that \(q\mapsto \mathbf {M}(q)\) is linear. Let \(A\in \mathbb {M}_{d \times d}\), \(b\in \mathbb R^d\), and let q be a quadratic form; we have, for all \(x\in \mathbb R^d\):

$$\begin{aligned} q(Ax+b)=\begin{pmatrix} 1\\ x\end{pmatrix}^\intercal \begin{pmatrix} 1 & 0_{1\times d}\\ b & A \end{pmatrix}^\intercal \mathbf {M}(q) \begin{pmatrix} 1 & 0_{1\times d}\\ b & A \end{pmatrix} \begin{pmatrix} 1\\ x\end{pmatrix}. \end{aligned}$$
(8)
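Identity (8) is easy to validate numerically; the quadratic form and the affine map below are arbitrary illustrative data, not taken from the paper.

```python
# Check Eq. (8): q(Ax+b) = (1,x)^T F^T M(q) F (1,x) with F = [[1,0],[b,A]].
import numpy as np

# quadratic form q(y) = y^T A_q y + b_q^T y + c_q (illustrative data)
A_q = np.array([[2.0, 0.5], [0.5, 1.0]])
b_q = np.array([1.0, -1.0])
c_q = 3.0

# lift matrix M(q) of Eq. (7)
M_q = np.zeros((3, 3))
M_q[0, 0] = c_q
M_q[0, 1:] = b_q / 2
M_q[1:, 0] = b_q / 2
M_q[1:, 1:] = A_q

# affine map y = A x + b and its homogeneous version F (Eq. (2))
A = np.array([[0.0, 1.0], [-1.0, 0.5]])
b = np.array([0.2, 0.0])
F = np.zeros((3, 3))
F[0, 0] = 1.0
F[1:, 0] = b
F[1:, 1:] = A

x = np.array([0.7, -0.4])
y = A @ x + b
direct = float(y @ A_q @ y + b_q @ y + c_q)       # q(Ax + b) directly
u = np.concatenate([[1.0], x])
lifted = float(u @ F.T @ M_q @ F @ u)             # right-hand side of Eq. (8)
```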

Lemma 2

Let \(A\in \mathbb {S}_{d}\), \(b\in \mathbb R^d\) and \(c\in \mathbb R\). Then: \((\forall \, y\in \mathbb R^d,\ y^\intercal A y+b^\intercal y+c\ge ~0)\iff \mathbf {M}(A,b,c)\in \mathbb {S}_{d+1}^{+}\)

Definition 9

((Cone)-copositive matrices). Let \(M\in \mathbb {M}_{m \times d}\). A matrix \(Q\in \mathbb {S}_{d}\) which satisfies

$$ My\ge 0\implies y^\intercal Q y\ge 0 $$

is called M-copositive.

An \({\text {Id}}_{d}\)-copositive matrix is called a copositive matrix. We denote by \(\mathbf {C}_{d}\left( M\right) \) the set of M-copositive matrices and \(\mathbf {C}_{d}\) the set of copositive matrices.

Lemma 3

A quadratic function q is nonnegative over a polyhedron P if and only if q is nonnegative on the topological closure of P.

For \(P\in \mathbb {M}_{n \times m}\) and \(c\in \mathbb R^n\), we define the following matrix:

$$\begin{aligned} \mathbf {H}\left( P,c\right) = \begin{pmatrix} 1 & 0_{1\times m}\\ c & -P \end{pmatrix} \in \mathbb {M}_{(n+1) \times (m+1)} \end{aligned}$$
(9)

We also use the notation \(\mathbf {H}\left( C\right) \) for a polyhedron C with the same meaning, i.e. if \(C=\left\{ x\in \mathbb R^d \mid P_s x \ll b_s,\ P_w x \le b_w\right\} \), P denotes the vertical concatenation of \(P_s\) and \(P_w\), and b that of \(b_s\) and \(b_w\), then \(\mathbf {H}\left( C\right) =\mathbf {H}\left( P,b\right) \).

Lemma 4

Let \(P\in \mathbb {M}_{n \times m}\) and \(c\in \mathbb R^n\). Then, for all \(x\in \mathbb R^n\), \(Px\le c\iff \mathbf {H}\left( P,c\right) \begin{pmatrix} 1\\ x\end{pmatrix}\ge 0\).
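A small sketch of Lemma 4, building \(\mathbf {H}\left( P,c\right) \) from Eq. (9) for the square \([-1,1]^2\) (an illustrative choice) and evaluating it on homogenised points \((1,x)\):

```python
# Lemma 4 numerically: H(P,c)(1,x) >= 0 componentwise iff P x <= c.
import numpy as np

def H(P, c):
    """Matrix H(P, c) of Eq. (9)."""
    n, m = P.shape
    out = np.zeros((n + 1, m + 1))
    out[0, 0] = 1.0
    out[1:, 0] = c
    out[1:, 1:] = -P
    return out

# the square [-1,1]^2: x1 <= 1, x2 <= 1, -x1 <= 1, -x2 <= 1
P = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
c = np.ones(4)

inside = H(P, c) @ np.array([1.0, 0.5, -0.8])   # point in the square
outside = H(P, c) @ np.array([1.0, 2.0, 0.0])   # violates x1 <= 1
```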

Lemma 5

Let \(q:\mathbb R^d\rightarrow \mathbb R\) be a quadratic function and C be a polyhedron. Then \(\mathbf {M}(q)\in \mathbf {C}_{d+1}\left( \mathbf {H}\left( C\right) \right) \implies (q(x)\ge 0,\ \forall \, x \in C)\).

The characterization of cone-copositive matrices is an active research field, and a list of interesting papers on the subject can be found in [5].

Proposition 2

(Theorem 2.1 of [10]). Let \(M\in \mathbb {M}_{m \times d}\). Then:

figure c

If the rank of M is equal to m, then (\(\varDelta \)) is actually an equality.

The next proposition gives a simple characterization of copositive matrices as sums of a positive semidefinite matrix and a nonnegative matrix.

Proposition 3

([8, 11]). We have: \(\forall \, d\in \mathbb N\): \(\mathbb {S}_{d}^{\ge 0}+\mathbb {S}_{d}^{+}\subseteq \mathbf {C}_{d}\). If \(d\le 4\) then \(\mathbf {C}_{d}=\mathbb {S}_{d}^{\ge 0}+\mathbb {S}_{d}^{+}\).

Corollary 2

Let \(M\in \mathbb {M}_{m \times d}\). Then:

figure d

If M has full row rank and \(d\le 4\), then (\(\star \)) is actually an equality.

The study of copositive constraints is a fairly recent research field. Algorithms exist (e.g. [6]) but, to the best of the author’s knowledge, no tools are available. In this paper, in practice, we use Corollary 2 and replace \(\mathbf {C}_{d}\left( M\right) \) by the right-hand side of Eq. (\(\star \)).
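The direction of Corollary 2 actually used in the paper, namely that every matrix of the form \(M^\intercal (W_p+W_+)M\) with \(W_p\) entrywise nonnegative and \(W_+\) positive semidefinite is M-copositive, can be illustrated by sampling. The matrices below are arbitrary illustrative choices.

```python
# Soundness of the relaxation (star): Q = M^T (W_p + W_+) M satisfies
# M y >= 0  =>  y^T Q y >= 0, since z = M y >= 0 gives z^T W_p z >= 0
# (nonnegative entries) and z^T W_+ z >= 0 (PSD).
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, 2.0]])   # arbitrary 2x3 matrix
W_p = np.array([[0.3, 0.1], [0.1, 0.0]])            # entrywise nonnegative
G = rng.standard_normal((2, 2))
W_plus = G @ G.T                                    # PSD by construction
Q = M.T @ (W_p + W_plus) @ M

feasible = 0
violations = 0
for _ in range(1000):
    y = rng.standard_normal(3)
    if (M @ y >= 0).all():                          # y in the cone {My >= 0}
        feasible += 1
        if y @ Q @ y < -1e-9:
            violations += 1
```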

Example 5

(Why is there \((1,0_{1\times d})\) in \(\mathbf {H}\left( P,c\right) \)?). Consider \(X=\{x\in \mathbb R\mid x\le 1\}\). Let \(u(x)=(1,x)\) and \(M=(1\ -1)\) (this corresponds to \(\mathbf {H}\left( 1,1\right) \) without the row \((1, 0)\)). Then \(X=\{x\mid M u(x)^\intercal \ge 0\}\).

In \(\mathbb R\), positive semidefinite matrices and matrices with nonnegative coefficients define the same set, namely the set of nonnegative reals. Now let \(W\ge 0\) and define \(X'=\{x\mid u(x) M^\intercal W M u(x)^\intercal \ge 0\}\). Since \(u(x) M^\intercal W M u(x)^\intercal =W u(x)M^\intercal M u(x)^\intercal =W(1-x)^2\), we get \(X'=\mathbb R\) for all \(W\ge 0\): this certificate can never recover X.

Now let us take \(E=\mathbf {H}\left( 1,1\right) \), let \(W=\begin{pmatrix} w_1 & w_2\\ w_2 & w_3\end{pmatrix}\) with \(w_1,w_2,w_3\ge 0\), and define \(\overline{X}=\{x\mid u(x) E^\intercal W E u(x)^\intercal \ge 0\}\). Hence, \(u(x) E^\intercal W E u(x)^\intercal = w_1+2w_2(1-x)+w_3(1-x)^2\). Taking for example \(w_1=w_3=0\) and \(w_2>0\) implies that \(\overline{X}=X\). Note that, in this case, we can choose the positive semidefinite certificate equal to the zero matrix.
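The computation of Example 5 can be replayed numerically; here the entrywise-nonnegative certificate puts all its weight on the off-diagonal entry \(w_2\), which recovers exactly \(X\):

```python
# Example 5: with E = H(1,1) and W nonnegative with only off-diagonal
# weight, u(x) E^T W E u(x)^T = 2*w2*(1-x), nonnegative exactly on {x <= 1}.
import numpy as np

E = np.array([[1.0, 0.0],
              [1.0, -1.0]])     # H(1, 1) for X = {x | x <= 1}
w2 = 1.0
W = np.array([[0.0, w2],
              [w2, 0.0]])       # entrywise nonnegative, not PSD

def form(x):
    u = np.array([1.0, x])
    return float(u @ E.T @ W @ E @ u)   # equals 2*w2*(1 - x)

inside_vals = [form(x) for x in (-3.0, 0.0, 1.0)]    # points of X
outside_vals = [form(x) for x in (1.5, 10.0)]        # points outside X
```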

3.2 Inductiveness

Let us come back to the invariance proof. Recall that we are looking for a sublevel set of a piecewise quadratic function that overapproximates \(\mathcal {R}_P\) while minimizing the abstraction. Since we are interested in a piecewise quadratic function, we need a polyhedric partition. Let us denote by \(\mathcal {S}=(S^1,\ldots ,S^n)\) the unknown polyhedric partition. With each index \(i\in [n]\) of the partition, we associate a quadratic form \(x\mapsto x^\intercal P^i x+2x^\intercal q^i +r^i\). Then, we have:

$$\begin{aligned} S^\mathrm {ind}=\bigcup _{i\in [n]} S^{i,\mathrm {ind}}\text { where } S^{i,\mathrm {ind}}=\{x\in S^i\mid x^\intercal P^i x+2x^\intercal q^i +r^i\le 0\} \end{aligned}$$
(10)

Lemma 6

(Partial S-Lemma). Let \(q_1,q_2\) be two quadratic forms over \(\mathbb R^m\) and C be a polyhedron of \(\mathbb R^m\). If the following assertion

$$ (\exists \lambda \in \mathbb R_+,\ y\in C \implies \lambda q_1(y)-q_2(y)\ge 0) $$

holds then the following assertion

$$ (y\in C \wedge q_1(y)\le 0\implies q_2(y)\le 0) $$

is true.

Remark 1

If, in Lemma 6, \(q_1\) and \(-q_2\) are convex and there exists \(x_0\in \mathbb R^m\) that belongs to the interior of C and satisfies \(q_1(x_0)<0\), then the two assertions of Lemma 6 are equivalent.

We now write SDP conditions to constrain \(S^\mathrm {ind}\) to satisfy \(F(S^\mathrm {ind})\subseteq S^\mathrm {ind}\). Recall that \(F(C)= X^{\mathrm {in}}\cup \bigcup _{i\in \mathcal {I}} f^i\left( C\cap X^i\right) \). So \(F(S^\mathrm {ind})\subseteq S^\mathrm {ind}\) if and only if:

$$ X^{\mathrm {in}}\subseteq S^\mathrm {ind} \text { and } \forall \, i\in \mathcal {I},\ f^i\left( X^i\cap S^\mathrm {ind}\right) \subseteq S^\mathrm {ind} $$

First, let us consider \(X^{\mathrm {in}}\subseteq S^\mathrm {ind}\). We use the polyhedric partition \(\mathcal {S}\) to decompose \(X^{\mathrm {in}}\) into the subproblems \(X^{\mathrm {in}}\cap S^i\subseteq S^\mathrm {ind}\). Then, we use the fact that \(S^i\cap S^j=\emptyset \) for \(i\ne j\), so that \(X^{\mathrm {in}}\cap S^i\) cannot meet \(S^j\) for \(j\ne i\). Hence, we have \(X^{\mathrm {in}}\cap S^i\subseteq S^\mathrm {ind}\) if and only if \(X^{\mathrm {in}}\cap S^i\subseteq S^{i,\mathrm {ind}}\). Finally, from Eq. (10), \(X^{\mathrm {in}}\subseteq S^\mathrm {ind}\) is equivalent to, for all \(i\in {\text {Com}}(\{X^{\mathrm {in}}\},\mathcal {S})\),

$$ -x^\intercal P^i x-2x^\intercal q^i -r^i \ge 0, \forall \, x\in X^{\mathrm {in}}\cap S^i $$

Since \(X^{\mathrm {in}}\) and \(S^i\) are both polyhedra, the constraint \(X^{\mathrm {in}}\subseteq S^\mathrm {ind}\) is equivalent to a finite number of constraints imposing the positivity of a quadratic form over a known polyhedron. We can then reinforce each constraint by applying Lemma 5; finally, to constrain \(S^\mathrm {ind}\) to satisfy \(X^{\mathrm {in}}\subseteq S^\mathrm {ind}\), we impose:

$$\begin{aligned} \forall \, i\in {\text {Com}}(\{X^{\mathrm {in}}\},\mathcal {S}),\ \mathbf {M}(-P^i, -2q^i, -r^i)\in \mathbf {C}_{d+1}\left( \mathbf {H}\left( X^{\mathrm {in}}\cap S^i\right) \right) . \end{aligned}$$
(11)

Now, we study the second constraint that is for all \(i\in \mathcal {I},\ f^i\left( X^i\cap S^\mathrm {ind}\right) \subseteq S^\mathrm {ind}\). For \(i\in \mathcal {I}\), we write:

$$ (f^{i})^{-1}(\mathcal {S})=\{(f^{i})^{-1}(S^1),\ldots ,(f^{i})^{-1}(S^n)\} , $$

this forms a polyhedric partition of \(\mathbb R^d\) since \(\mathcal {S}\) is one. Let \(i\in \mathcal {I}\). We use the same principle as for \(X^{\mathrm {in}}\subseteq S^\mathrm {ind}\) and decompose \(f^i\left( X^i\cap S^\mathrm {ind}\right) \) using the polyhedric partition \(\mathcal {S}\). First, from Eq. (10) and since the direct image of a union is the union of the images, we have \(f^i\left( X^i\cap S^\mathrm {ind}\right) =\cup _{j\in [n]} f^i\left( X^i\cap S^{j,\mathrm {ind}}\right) \). Then, using the fact that \(\mathcal {S}\) is a polyhedric partition, we get \(f^i\left( X^i\cap S^\mathrm {ind}\right) =\cup _{k\in [n]}\cup _{j\in [n]} f^i\left( X^i\cap S^{j,\mathrm {ind}}\right) \cap S^k\). Again, since the polyhedra \(S^k\) cannot meet each other, we conclude that \(f^i\left( X^i\cap S^{j,\mathrm {ind}}\right) \cap S^k\subseteq S^\mathrm {ind}\) if and only if \(f^i\left( X^i\cap S^{j,\mathrm {ind}}\right) \cap S^k\subseteq S^{k,\mathrm {ind}}\). Finally, we can restrict the indices \((j,k)\) to those such that \((i,j,k)\) belongs to \({\text {Com}}(\mathcal {X},\mathcal {S},{f^i}^{-1}(\mathcal {S}))\). In conclusion, the statement: for all \(i\in \mathcal {I}\), \(f^i\left( X^i\cap S^\mathrm {ind}\right) \subseteq S^\mathrm {ind}\), is equivalent to:

$$\begin{aligned}&\forall \, i\in \mathcal {I},\ \forall \, (j,k)\in [n]^2\, {\text {s.t.}}\ (i,j,k)\in {\text {Com}}(\mathcal {X},\mathcal {S},{f^i}^{-1}(\mathcal {S})), \nonumber \\&\,\,\forall \, x\in X^i\cap S^{j,\mathrm {ind}}\cap (f^i)^{-1}(S^k),\ f^i(x)^\intercal P^k f^i(x)+2f^i(x)^\intercal q^k +r^k\le 0 . \end{aligned}$$
(12)

Note that \(S^{j,\mathrm {ind}}\) is actually the intersection of the polyhedron \(S^j\) and the ellipsoid \(\{x\mid x^\intercal P^j x+2x^\intercal q^j +r^j\le 0\}\); thus the constraint \(x\in S^{j,\mathrm {ind}}\) is the conjunction of affine constraints and a quadratic constraint. Moreover, \(X^i\cap S^j\cap (f^i)^{-1}(S^k)\) is a polyhedron, so we can apply Lemma 6 to get a stronger condition enforcing the constraint of Eq. (12) on \(P^j,P^k\), \(q^j,q^k\) and \(r^j,r^k\), and we obtain:

$$\begin{aligned}&\forall \, x\in X^i\cap S^j\cap (f^i)^{-1}(S^k), \nonumber \\&\qquad \qquad \!\!\, r^j+2x^\intercal q^j+x^\intercal P^j x-f^i(x)^\intercal P^k f^i(x)-2f^i(x)^\intercal q^k -r^k\ge 0 . \end{aligned}$$
(13)

In Eq. (13), we use Lemma 6 with the positive scalar \(\lambda \) equal to 1 to avoid introducing a bilinear constraint. In Eq. (13), we recognize a positivity constraint of a quadratic form on a polyhedron, which can thus be reinforced by a stronger constraint involving cone-copositive matrices. Finally, the constraint: for all \(i\in \mathcal {I}\), \(f^i\left( X^i\cap S^\mathrm {ind}\right) \subseteq S^\mathrm {ind}\), is replaced by the stronger condition:

$$\begin{aligned}&\forall \, i\in \mathcal {I},\ \forall \, (j,k)\in [n]^2\, {\text {s.t.}}\ (i,j,k)\in {\text {Com}}(\mathcal {X},\mathcal {S},{f^i}^{-1}(\mathcal {S})), \nonumber \\&\,\,\,\mathbf {M}(P^j, 2q^j, r^j)-{F^i}^\intercal \mathbf {M}(P^k, 2q^k, r^k) F^i\in \mathbf {C}_{d+1}\left( \mathbf {H}\left( X^i\cap S^j\cap (f^i)^{-1}(S^k)\right) \right) \end{aligned}$$
(14)

3.3 Optimality

We detail how to evaluate \(\sup _{x\in S^\mathrm {ind}} x^\intercal M x+2p^\intercal x\). First, note that:

$$ \sup _{x\in S^\mathrm {ind}} x^\intercal M x+2p^\intercal x=\inf \{\eta \mid \eta -x^\intercal M x-2p^\intercal x\ge 0,\ \forall \, x\in S^\mathrm {ind}\} $$

Now, \(\eta -x^\intercal M x-2p^\intercal x\ge 0,\ \forall \, x\in S^\mathrm {ind}\) is equivalent to saying that for all \(i\in [n]\), we have \(\eta - x^\intercal M x-2p^\intercal x\ge 0\) for all \(x\in S^i\cap \{y\in \mathbb R^d\mid y^\intercal P^i y+2y^\intercal q^i +r^i\le 0\}\). Since \(x^\intercal P^i x+2x^\intercal q^i +r^i\le 0\) is a quadratic constraint and \(S^i\) is a polyhedron, we can apply Lemma 6. Finally, \(\eta -x^\intercal M x-2p^\intercal x\ge 0,\ \forall \, x\in S^\mathrm {ind}\) is replaced by the stronger constraint:

$$\begin{aligned} \forall \, i\in [n],\ \mathbf {M}(P^i,2q^i,r^i)-\mathbf {M}(M,2p,-\eta )\in \mathbf {C}_{d+1}\left( \mathbf {H}\left( S^i\right) \right) \end{aligned}$$
(15)

Theorem 1

(Optimality). Assume there exist \(\{(P^i,q^i,r^i),P^i\in \mathbb {S}_{d},q^i\in \mathbb R^d,r^i\in \mathbb R,\ i\in [n]\}\), a polyhedric partition \(\mathcal {S}=(S^1,\ldots ,S^n)\) and a real \(\eta \) such that Eqs. (11), (14) and (15) hold. Recall that \(S^{\mathrm {ind}}=\bigcup _{i\in [n]}\{x\in S^i\mid x^\intercal P^i x+2x^\intercal q^i+r^i\le 0\}\). Then, \(\mathcal {R}\subseteq S^{\mathrm {ind}}\subseteq \{x\in \mathbb R^d\mid x^\intercal M x+2x^\intercal p\le \eta \}\).

3.4 Implementable Model Using Semidefinite Programming

Finally, we construct a model using semidefinite programming. First, we choose as polyhedric partition that of the system, i.e. \(\mathcal {S}=\mathcal {X}\). Second, as said before, we use Corollary 2 to rewrite the copositive constraints as semidefinite ones. Then Eq. (11) becomes:

(16)

where \(Z^i_{p}\) are unknown matrices with nonnegative coefficients and \(Z^i_{+}\) are unknown positive semidefinite matrices. Since \(\mathcal {S}=\mathcal {X}\), the set \(X^i\cap S^j\) in Eq. (14) is replaced by \(X^i\), and Eq. (14) becomes:

figure e

where \(U^{ik}_{p}\) are unknown matrices with nonnegative coefficients and \(U^{ik}_{+}\) are unknown positive semidefinite matrices. Equation (15) becomes:

$$\begin{aligned} \forall \, i\in \mathcal {I},\ \mathbf {M}(P^i,2q^i,r^i)-\mathbf {M}(M,2p,-\eta )-{\mathbf {H}\left( X^i\right) }^\intercal \left( W^i_{p}+W^i_{+}\right) \mathbf {H}\left( X^i\right) \succeq 0 \end{aligned}$$
(18)

where \(W^i_{p}\) are unknown matrices with nonnegative coefficients and \(W^i_{+}\) are unknown positive semidefinite matrices. Let us introduce the following families:

  • \(\mathcal {P}:=\{(P^i,q^i,r^i),P^i\in \mathbb {S}_{d},q^i\in \mathbb R^d,\ i\in \mathcal {I}\}\)

  • \(\mathcal {W}:=\{\left( W^i_{p},W^i_{+}\right) \in \mathbb {S}_{n_i+1}^{\ge 0}\times \mathbb {S}_{n_i+1}^{+},i\in \mathcal {I}\}\),

  • \(\mathcal {U}:=\{\left( U^{ik}_{p},U^{ik}_{+}\right) \in \mathbb {S}_{n_{ik}}^{\ge 0}\times \mathbb {S}_{n_{ik}}^{+},(i,k)\in {\text {Com}}(\mathcal {X},{f^i}^{-1}(\mathcal {X}))\}\)

  • \(\mathcal {Z}:=\{\left( Z^{i0}_{p},Z^{i0}_{+}\right) \in \mathbb {S}_{n_{i0}}^{\ge 0}\times \mathbb {S}_{n_{i0}}^{+},i\in {\text {Com}}(\{X^{\mathrm {in}}\},\mathcal {X})\}\)

The integers \(n_i+1\) are the sizes of the matrices \(\mathbf {H}\left( X^i\right) \), \(n_{ik}\) the sizes of the matrices \(\mathbf {H}\left( X^i\cap (f^i)^{-1}(X^k)\right) \) and \(n_{i0}\) the sizes of the matrices \(\mathbf {H}\left( X^{\mathrm {in}}\cap X^i\right) \).

Let us consider the problem:

figure f

Problem (PSD) is thus a semidefinite program. The use of the sum \(-\sum _i r^i+\eta \) as objective function forces the functions \(x\mapsto x^\intercal P^i x+2x^\intercal q^i+r^i\) to provide a minimal bound \(\eta \) and a minimal ellipsoid containing the initial conditions. The constraint \(r^i\le 0\) is not natural, but it ensures that the objective function is bounded from below; its presence does not affect feasibility. Note that, to reduce the size of the problem, we can take \(q^i=0\) and obtain homogeneous functions.

3.5 Results on Example 1

Boundedness Property. Recall the running example which consists of the following PWA: \(X^{\mathrm {in}}=[-1,1]\times [-1,1]\), and, for all \(k\in \mathbb N\):

$$ x_{k+1}= \left\{ \begin{array}{lr} A^1 x_k & \text { if } x_{k,1} \ge 0\text { and } x_{k,2} \ge 0\\ A^2 x_k & \text { if } x_{k,1} \ge 0\text { and } x_{k,2}< 0\\ A^3 x_k & \text { if } x_{k,1}< 0\text { and } x_{k,2}< 0\\ A^4 x_k & \text { if } x_{k,1} < 0\text { and } x_{k,2} \ge 0 \end{array} \right. $$

with

$$ \begin{array}{c} A^1=\begin{pmatrix} -0.04 & -0.461\\ -0.139 & 0.341 \end{pmatrix},\ A^2=\begin{pmatrix} 0.936 & 0.323\\ 0.788 & -0.049 \end{pmatrix}\\ A^3=\begin{pmatrix} -0.857 & 0.815\\ 0.491 & 0.62 \end{pmatrix},\ A^4=\begin{pmatrix} -0.022 & 0.644\\ 0.758 & 0.271 \end{pmatrix} \end{array} $$

Then, we have \(X^1=\mathbb R_+\times \mathbb R_+\), \(X^2=\mathbb R_+\times \mathbb R_{-}^*\), \(X^3=\mathbb R_-^*\times \mathbb R_-^*\) and \(X^4=\mathbb R_-^*\times \mathbb R_+\). Hence, \(\mathcal {X}=\{X^1,X^2,X^3,X^4\}\). We write \(f^{-1}(\mathcal {X})\) for the union of the polyhedral partitions \((f^i)^{-1}(\mathcal {X})\).
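The dynamics above are straightforward to simulate. The following sketch (our illustration, not part of the verification method itself) iterates the PWA from a grid of initial conditions in \(X^{\mathrm{in}}\) and records the largest squared Euclidean norm encountered; the helper names `cell` and `trajectory` are ours:

```python
import numpy as np

# Cell matrices A^1..A^4 of the running example, as given in the text.
A = [np.array([[-0.04, -0.461], [-0.139, 0.341]]),
     np.array([[0.936, 0.323], [0.788, -0.049]]),
     np.array([[-0.857, 0.815], [0.491, 0.62]]),
     np.array([[-0.022, 0.644], [0.758, 0.271]])]

def cell(x):
    """0-based index of the cell X^i containing x (sign conditions)."""
    if x[0] >= 0:
        return 0 if x[1] >= 0 else 1
    return 2 if x[1] < 0 else 3

def trajectory(x0, steps=60):
    """Iterate x_{k+1} = A^{cell(x_k)} x_k starting from x0."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(A[cell(xs[-1])] @ xs[-1])
    return np.array(xs)

# Grid the initial set X^in = [-1,1]^2 and record the largest squared
# norm observed along all sampled trajectories.
grid = np.linspace(-1.0, 1.0, 21)
sup_norm2 = max(np.max(np.sum(trajectory((a, b)) ** 2, axis=1))
                for a in grid for b in grid)
print(sup_norm2)  # empirical supremum; cannot exceed any sound bound
```

Such sampling only gives a lower bound on \(\sup _{(x_1,x_2)\in \mathcal {R}} ||(x_1,x_2)||_2^2\); the point of the paper is precisely to obtain a guaranteed upper bound without simulation.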

We are interested in proving the boundedness of the reachable values set of the PWA. This amounts to solving the optimization problem:

$$ \sup _{(x_1,x_2)\in \mathcal {R}} ||(x_1,x_2)||_2^2 $$

To get an overapproximation of the optimal value \(\sup _{(x_1,x_2)\in \mathcal {R}} ||(x_1,x_2)||_2^2\), we solve Problem (PSD). Beforehand, we have to compute the two sets of indices \({\text {Com}}(\{X^{\mathrm {in}}\},\mathcal {X})\) and \({\text {Com}}(\mathcal {X},f^{-1}(\mathcal {X}))\). From Corollary 1, using linear programming we computed \({\text {Com}}(\{X^{\mathrm {in}}\},\mathcal {X})=\{1,2,3,4\}\) and \({\text {Com}}(\mathcal {X},f^{-1}(\mathcal {X}))=\{(i,j)\mid S(i,j)=1\}\), where \(S\) is a boolean matrix (displayed as a figure in the original).
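As an informal cross-check of \({\text {Com}}(\mathcal {X},f^{-1}(\mathcal {X}))\), one can sample interior points of each cell and record which cell their image falls into. This is only an under-approximation by sampling, not the exact LP-based computation of Corollary 1 (boundary cases are deliberately avoided); the helper names are ours:

```python
import numpy as np

A = [np.array([[-0.04, -0.461], [-0.139, 0.341]]),
     np.array([[0.936, 0.323], [0.788, -0.049]]),
     np.array([[-0.857, 0.815], [0.491, 0.62]]),
     np.array([[-0.022, 0.644], [0.758, 0.271]])]

def cell(x):
    """1-based index of the cell X^i containing x."""
    if x[0] >= 0:
        return 1 if x[1] >= 0 else 2
    return 3 if x[1] < 0 else 4

# Interior sample points of each quadrant (boundaries excluded, since
# deciding them soundly is exactly what the LP test is for).
pos = np.linspace(0.1, 0.9, 5)
signs = {1: (1, 1), 2: (1, -1), 3: (-1, -1), 4: (-1, 1)}
pairs = set()
for i, (sx, sy) in signs.items():
    for a in pos:
        for b in pos:
            x = np.array([sx * a, sy * b])
            pairs.add((i, cell(A[i - 1] @ x)))
print(sorted(pairs))  # sampled transitions (i, j), a subset of S
```

A short sign analysis confirms the sampled pairs: for instance, on \(X^1\) the image first coordinate \(-0.04x_1-0.461x_2\) is negative, so cell 1 can only transition to cells 3 or 4.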

Now, by solving Problem (PSD), we get an (optimal) piecewise quadratic function \(V^{\mathrm {ind}}\) characterized by the following matrices:

$$ \begin{array}{c} P^1= \begin{pmatrix} 1.1178 &{} -0.1178 \\ -0.1178 &{} 1.1178 \\ \end{pmatrix},\ P^2= \begin{pmatrix} 1.5907 &{} 0.5907\\ 0.5907 &{} 1.5907 \end{pmatrix},\\ P^3= \begin{pmatrix} 1.3309 &{} -0.3309\\ -0.3309 &{} 1.3309 \end{pmatrix},\ P^4= \begin{pmatrix} 1.2558 &{} 0.2558\\ 0.2558 &{} 1.2558 \end{pmatrix} \end{array} $$

The vectors \(q^1\), \(q^2\), \(q^3\) and \(q^4\) are equal to the null vector. Since \(r^1=r^2=r^3=r^4=-2\) and \(\eta =2\), we have \(\mathcal {R}\subseteq \cup _{i\in \{1,2,3,4 \}}\{x\in X^i\mid x^\intercal P^i x\le 2\}\subseteq \{x\in \mathbb R^2\mid ||x||_2^2\le 2\}\). The set \(\mathcal {R}\) (a discretized version of it) and the set \(\{x\in \mathbb R^2\mid V^{\mathrm {ind}}(x)\le 2\}\) are depicted in Fig. 2.
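The second inclusion above can be checked directly: if \(P^i-I\) is positive semidefinite, then \(||x||_2^2=x^\intercal x\le x^\intercal P^i x\le 2\). A minimal numerical verification of this fact for the four reported matrices, assuming NumPy:

```python
import numpy as np

# Matrices P^i reported for the boundedness property.
P = [np.array([[1.1178, -0.1178], [-0.1178, 1.1178]]),
     np.array([[1.5907, 0.5907], [0.5907, 1.5907]]),
     np.array([[1.3309, -0.3309], [-0.3309, 1.3309]]),
     np.array([[1.2558, 0.2558], [0.2558, 1.2558]])]

# P^i - I positive semidefinite  =>  x^T P^i x <= 2 implies ||x||^2 <= 2.
# Here this even holds globally, not only on the cell X^i.
for i, Pi in enumerate(P, start=1):
    eigmin = np.min(np.linalg.eigvalsh(Pi - np.eye(2)))
    print(f"P^{i} - I: smallest eigenvalue = {eigmin:.4f}")
    assert eigmin >= -1e-9
```

Each \(P^i-I\) has eigenvalues \(0\) and a strictly positive value, so the inclusion holds with no cell-wise (copositivity) argument needed in this particular instance.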

Optimal Value for the First Coordinate. We still consider the running Example 1, but now we are interested in the first coordinate. Thus, we consider the following optimization problem: \(\sup _{(x_1,x_2)\in \mathcal {R}} x_1^2\). By solving Problem (PSD), we get an (optimal) PQL function \(V^{\mathrm {ind}}\) characterized by the following matrices:

$$ \begin{array}{c} P^1= \begin{pmatrix} 1.0585 &{} -0.1169\\ -0.1169 &{} 0.2339 \end{pmatrix},\ P^2= \begin{pmatrix} 1.0276 &{} 0.0553\\ 0.0553 &{} 0.1105 \end{pmatrix},\\ P^3= \begin{pmatrix} 1.1739 &{} -0.3478\\ -0.3478 &{} 0.6956 \end{pmatrix},\ P^4= \begin{pmatrix} 1.1220 &{} 0.2440\\ 0.2440 &{} 0.4880 \end{pmatrix} \end{array} $$

For this problem, the vectors \(q^1\), \(q^2\), \(q^3\) and \(q^4\) are equal to the null vector, \(r^1=-1.0585\), \(r^2=-1.0276\), \(r^3=-1.1739\), \(r^4=-1.122\) and \(\eta =1.1739\). Hence, the optimal value satisfies \(\sup _{(x_1,x_2)\in \mathcal {R}} x_1^2\le 1.1739\). Moreover, it seems that the maximum is reached in the cell \(X^3\).

Optimal Value for the Second Coordinate. We again consider the running Example 1, now looking at the values taken by the second coordinate. Thus, we consider the following optimization problem: \(\sup _{(x_1,x_2)\in \mathcal {R}} x_2^2\). By solving Problem (PSD), we get an (optimal) PQL function \(V^{\mathrm {ind}}\) characterized by the following matrices:

$$ \begin{array}{c} P^1= \begin{pmatrix} 0.0198 &{} -0.0099\\ -0.0099 &{} 1.0050 \end{pmatrix},\ P^2= \begin{pmatrix} 0.6919 &{} 0.2292\\ 0.2292 &{} 1.0759 \end{pmatrix},\\ P^3= \begin{pmatrix} 0.5746 &{} -0.1706\\ -0.1706 &{} 1.0759 \end{pmatrix},\ P^4= \begin{pmatrix} 0.6109 &{} 0.3054\\ 0.3054 &{} 1.1527 \end{pmatrix} \end{array} $$

For this problem, the vectors \(q^1\), \(q^2\), \(q^3\) and \(q^4\) are equal to the null vector, \(r^1=-1.0050\), \(r^2=-1.3094\), \(r^3=-1.3094\), \(r^4=-1.1527\) and \(\eta =1.3094\). Hence, the optimal value satisfies \(\sup _{(x_1,x_2)\in \mathcal {R}} x_2^2\le 1.3094\). Moreover, it seems that the maximum is reached in the cells \(X^3\) and \(X^4\).
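The two coordinate bounds are easy to sanity-check against simulation: the empirical suprema of \(x_1^2\) and \(x_2^2\) along sampled trajectories must stay below the certified values \(1.1739\) and \(1.3094\). A self-contained sketch (helper names are ours):

```python
import numpy as np

A = [np.array([[-0.04, -0.461], [-0.139, 0.341]]),
     np.array([[0.936, 0.323], [0.788, -0.049]]),
     np.array([[-0.857, 0.815], [0.491, 0.62]]),
     np.array([[-0.022, 0.644], [0.758, 0.271]])]

def cell(x):
    if x[0] >= 0:
        return 0 if x[1] >= 0 else 1
    return 2 if x[1] < 0 else 3

def trajectory(x0, steps=60):
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(A[cell(xs[-1])] @ xs[-1])
    return np.array(xs)

# Stack all sampled trajectory points and take coordinate-wise maxima.
grid = np.linspace(-1.0, 1.0, 21)
traj = np.vstack([trajectory((a, b)) for a in grid for b in grid])
sup_x1, sup_x2 = np.max(traj ** 2, axis=0)
print(sup_x1, sup_x2)  # must not exceed 1.1739 and 1.3094
```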

A Random Property. Let us prove that the reachable values set is fully contained in the ellipsoid \(\{(x,y)\in \mathbb R^2\mid -x^2+2y^2-xy+x-0.5y\le 6\}\). Thus, we consider the following optimization problem: \(\sup _{(x,y)\in \mathcal {R}} -x^2+2y^2-xy+x-0.5y\). By solving Problem (PSD), we get an (optimal) PQL function \(V^{\mathrm {ind}}\) characterized by the following matrices:

$$ \begin{array}{c} P^1= \begin{pmatrix} 0.4226 &{} -0.8129\\ -0.8129 &{} 2.0688 \end{pmatrix},\ P^2= \begin{pmatrix} 0.4558 &{} -0.1418\\ -0.1418 &{} 2.4936 \end{pmatrix},\\ P^3= \begin{pmatrix} 1.9959 &{} -0.5084\\ -0.5084 &{} 2.1136 \end{pmatrix},\ P^4= \begin{pmatrix} 1.4772 &{} 0.7386\\ 0.7386 &{} 2.6193 \end{pmatrix} \end{array} $$

For this problem, the vectors \(q^1\), \(q^2\), \(q^3\) and \(q^4\) are equal to the null vector, \(r^1=-2.0688\), \(r^2=-3.2331\), \(r^3=-3.0927\), \(r^4=-2.6193\) and \(\eta =5.2936\). Hence, the optimal value satisfies \(\sup _{(x,y)\in \mathcal {R}} -x^2+2y^2-xy+x-0.5y\le 5.2936\). Since \(5.2936\le 6\), this result validates the property.
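As before, simulation gives an empirical lower bound consistent with the certificate: sampled trajectories should never push the quadratic form above \(5.2936\), let alone the property threshold \(6\). A short sketch (helper names are ours):

```python
import numpy as np

A = [np.array([[-0.04, -0.461], [-0.139, 0.341]]),
     np.array([[0.936, 0.323], [0.788, -0.049]]),
     np.array([[-0.857, 0.815], [0.491, 0.62]]),
     np.array([[-0.022, 0.644], [0.758, 0.271]])]

def cell(x):
    if x[0] >= 0:
        return 0 if x[1] >= 0 else 1
    return 2 if x[1] < 0 else 3

def trajectory(x0, steps=60):
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(A[cell(xs[-1])] @ xs[-1])
    return np.array(xs)

def g(p):
    """The property's quadratic form, evaluated row-wise."""
    x, y = p[..., 0], p[..., 1]
    return -x**2 + 2 * y**2 - x * y + x - 0.5 * y

grid = np.linspace(-1.0, 1.0, 21)
sup_g = max(np.max(g(trajectory((a, b)))) for a in grid for b in grid)
print(sup_g)  # empirical supremum of g over sampled reachable points
```

Note that the initial corner \((1,-1)\) already gives \(g=3.5\), so the certified bound \(5.2936\) is within a factor of two of what sampling alone exhibits.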

4 Conclusion and Future Works

The paper presents an implementable method to prove automatically that the reachable values set of a PWA is contained in an ellipsoid. In this setting, checking the property is equivalent to solving a maximization problem whose decision variable is constrained to belong to the reachable values set of the analyzed PWA. By using an abstraction by unions of truncated ellipsoids, we get an overapproximation of the optimal value of this problem. The method developed in the paper relies on a semidefinite relaxation of copositive constraints.

First, we plan to complete the benchmarks, applying the method to more non-trivial properties such as the avoidance of unsafe regions or “viability” properties. Some easy extensions are possible, such as using unions of truncated ellipsoids as property-set representations instead of single ellipsoids. An extension to constrained piecewise affine systems, i.e. systems with a stopping condition, can also be readily considered.