1 Problem Formulation

Consider the problem of regulating to the origin the following time-invariant linear discrete-time system,

$$ x(k+1) = Ax(k) + Bu(k) $$
(4.1)

where \(x(k) \in\mathbb{R}^{n}\) and \(u(k) \in\mathbb{R}^{m}\) are, respectively, the measurable state vector and the input vector, and \(A \in\mathbb{R}^{n \times n}\), \(B \in\mathbb{R}^{n \times m}\) are constant matrices. Both x(k) and u(k) are subject to bounded polytopic constraints,

$$ \left \{ \begin{aligned}x(k) \in X, \,\, X = \big\{ x \in\mathbb{R}^n: F_xx \leq g_x\big\} \\ u(k) \in U, \,\, U = \big\{ u \in\mathbb{R}^m: F_uu \leq g_u\big\} \end{aligned} \right .\quad \forall k \geq0 $$
(4.2)

where the matrices \(F_x\), \(F_u\) and the vectors \(g_x\), \(g_u\) are assumed to be constant, and the inequalities are taken element-wise. It is assumed that the pair (A,B) is stabilizable, i.e. all uncontrollable modes have stable dynamics.
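
For concreteness, the constraint sets in (4.2) can be stored directly in this halfspace form. The sketch below (Python/NumPy, not part of the text) encodes the box constraints −10 ≤ x₁ ≤ 10, −5 ≤ x₂ ≤ 5 used later in Example 4.1 as \(F_x x \leq g_x\):

```python
import numpy as np

# Halfspace (H-)representation of the box -10 <= x1 <= 10, -5 <= x2 <= 5,
# written as F_x x <= g_x to match the constraint form (4.2).
F_x = np.array([[ 1.0,  0.0],
                [-1.0,  0.0],
                [ 0.0,  1.0],
                [ 0.0, -1.0]])
g_x = np.array([10.0, 10.0, 5.0, 5.0])

def in_X(x, F=F_x, g=g_x):
    """Element-wise test of F x <= g."""
    return bool(np.all(F @ x <= g))
```

Any polytopic set used below (\(\varOmega_{\mathrm{max}}\), \(C_N\)) can be stored in exactly the same way as a pair (F, g).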

2 Interpolating Control via Linear Programming—Implicit Solution

Define a linear controller \(K \in\mathbb{R}^{m \times n}\), such that,

$$ u(k) = Kx(k) $$
(4.3)

asymptotically stabilizes the system (4.1) with some desired performance specifications. The details of such a synthesis procedure are not reproduced here; we assume that feasibility is guaranteed. For the controller (4.3), the maximal invariant set \(\varOmega_{\mathrm{max}}\) can be computed using Procedure 2.1 or Procedure 2.2 as,

$$ \varOmega_{\mathrm{max}} = \bigl\{ x \in\mathbb{R}^n: F_ox \leq g_o \bigr\} $$
(4.4)

Furthermore, for some given and fixed integer N>0, the controlled invariant set \(C_N\) can be found based on Procedure 2.3 as,

$$ C_N = \bigl\{ x \in\mathbb{R}^n: F_N x \leq g_N \bigr\} $$
(4.5)

such that all \(x \in C_N\) can be steered into \(\varOmega_{\mathrm{max}}\) in no more than N steps when a suitable control is applied. As in Sect. 3.4, the set \(C_N\) is decomposed into a sequence of simplices \(C_{N}^{(j)}\), each formed by n vertices of \(C_N\) and the origin. For all \(x(k) \in C_{N}^{(j)}\), the vertex controller

$$ u(k) = K^{(j)}x(k), $$
(4.6)

with \(K^{(j)}\) given in (3.38) asymptotically stabilizes the system (4.1), while the constraints (4.2) are fulfilled.

The main advantage of the vertex control scheme is the size of its domain of attraction, namely the set \(C_N\). Indeed, \(C_N\), the feasible domain for vertex control, can be as large as that of any other constrained control scheme. A weakness of vertex control, however, is that the full control range is exploited only on the boundary of \(C_N\), with progressively smaller control action as the state approaches the origin. Hence the time to regulate the plant to the origin is often unnecessarily long. A way to overcome this shortcoming is to switch to another, more aggressive local controller, e.g. the controller (4.3), when the state reaches \(\varOmega_{\mathrm{max}}\). The disadvantage of this solution is that the control action becomes nonsmooth [94].

Here a method that avoids the nonsmooth control action [94] is proposed. For this purpose, any state \(x(k) \in C_N\) is decomposed as,

$$ x(k) = c(k)x_v(k) + \bigl(1-c(k)\bigr)x_o(k) $$
(4.7)

with \(x_v \in C_N\), \(x_o \in \varOmega_{\mathrm{max}}\) and 0≤c≤1. Figure 4.1 illustrates such a decomposition.

Fig. 4.1

Any state x(k) can be decomposed as a convex combination of \(x_v(k) \in C_N\) and \(x_o(k) \in \varOmega_{\mathrm{max}}\)

Consider the following control law,

$$ u(k) = c(k)u_v(k) + \bigl(1-c(k) \bigr)u_o(k) $$
(4.8)

where \(u_v(k)\) is the vertex control law (4.6) evaluated at \(x_v(k)\), and \(u_o(k) = Kx_o(k)\) is the control law (4.3) in \(\varOmega_{\mathrm{max}}\).

Theorem 4.1

For system (4.1) and constraints (4.2), the control law (4.7), (4.8) guarantees recursive feasibility for all initial states x(0)∈C N .

Proof

For recursive feasibility, we have to prove that,

$$\left \{ \begin{aligned}&F_uu(k) \leq g_u\\ &x(k+1) = Ax(k) + Bu(k) \in C_N \end{aligned} \right . $$

for all \(x(k) \in C_N\). For the input constraints,

$$\begin{aligned}F_uu(k) &= F_u \big\{ c(k)u_v(k) + \big(1-c(k)\big)u_o(k) \big\} \\&= c(k)F_uu_v(k) + \big(1-c(k)\big)F_uu_o(k) \\&\leq c(k)g_u + \big(1-c(k)\big)g_u = g_u \end{aligned} $$

and for the state constraints,

$$\begin{aligned} x(k+1) &= Ax(k) + Bu(k) \\&= A \big\{ c(k)x_v(k) + \big(1-c(k)\big)x_o(k) \big\} + B \big\{ c(k)u_v(k) + \big(1-c(k)\big)u_o(k) \big\} \\&= c(k) \big\{ Ax_v(k) + Bu_v(k) \big\} + \big(1-c(k)\big) \big\{ Ax_o(k) + Bu_o(k) \big\} \end{aligned} $$

Since \(Ax_v(k)+Bu_v(k) \in C_N\) and \(Ax_o(k)+Bu_o(k) \in \varOmega_{\mathrm{max}} \subseteq C_N\), it follows that \(x(k+1) \in C_N\). □

Since the controller (4.3) is designed to give the specified unconstrained performance in \(\varOmega_{\mathrm{max}}\), it is desirable to have u(k) in (4.8) as close as possible to this controller, also outside \(\varOmega_{\mathrm{max}}\). This can be achieved by minimizing c,

$$ c^* = \min\limits _{x_v,x_o,c}\{c\} $$
(4.9)

subject to

$$\left \{ \begin{aligned} &F_Nx_v \leq g_N,\\ &F_ox_o \leq g_o,\\ &cx_v + (1-c)x_o = x,\\ &0 \leq c \leq1 \end{aligned} \right . $$

Denote \(r_{v} = cx_{v} \in\mathbb{R}^{n}\), \(r_{o} = (1-c)x_{o} \in\mathbb{R}^{n}\). Since \(x_v \in C_N\) and \(x_o \in \varOmega_{\mathrm{max}}\), it follows that \(r_v \in cC_N\) and \(r_o \in (1-c)\varOmega_{\mathrm{max}}\), or equivalently

$$\left \{ \begin{aligned} &F_Nr_v \leq cg_N \\ &F_or_o \leq(1-c)g_o \end{aligned} \right . $$

Hence the nonlinear optimization problem (4.9) is transformed into the following linear programming problem,

$$ c^* = \min\limits_{r_v,c}\{c\} $$
(4.10)

subject to

$$\left \{ \begin{aligned} &F_Nr_v \leq cg_N,\\ &F_o(x-r_v) \leq(1-c)g_o,\\ &0 \leq c \leq1 \end{aligned} \right . $$
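
The LP (4.10) has only n+1 decision variables and can be solved with any off-the-shelf LP solver. Below is a minimal sketch using `scipy.optimize.linprog`; the helper `interp_coeff` and the one-dimensional sets \(C_N = [-4,4]\), \(\varOmega_{\mathrm{max}} = [-1,1]\) are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.optimize import linprog

def interp_coeff(x, F_N, g_N, F_o, g_o):
    """Solve LP (4.10): minimise c over (r_v, c) subject to
       F_N r_v <= c g_N,  F_o (x - r_v) <= (1 - c) g_o,  0 <= c <= 1."""
    n = len(x)
    cost = np.r_[np.zeros(n), 1.0]              # minimise c
    A_ub = np.block([[F_N, -g_N[:, None]],      # F_N r_v - c g_N <= 0
                     [-F_o, g_o[:, None]]])     # -F_o r_v + c g_o <= g_o - F_o x
    b_ub = np.r_[np.zeros(len(g_N)), g_o - F_o @ x]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0.0, 1.0)])
    return res.x[:n], res.x[n]                  # r_v*, c*

# 1-D illustration (toy sets): C_N = {|x| <= 4}, Omega_max = {|x| <= 1}.
# For x = 2.5 the optimum is x = 0.5*4 + 0.5*1, i.e. c* = 0.5, r_v* = 2.0.
F = np.array([[1.0], [-1.0]])
r_v, c = interp_coeff(np.array([2.5]), F, np.array([4.0, 4.0]),
                      F, np.array([1.0, 1.0]))
```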

Remark 4.1

If one would like to maximize c, it is obvious that c=1 for all \(x \in C_N\). In this case the controller (4.7), (4.8) becomes the vertex controller.

Theorem 4.2

The control law (4.7), (4.8), (4.10) guarantees asymptotic stability for all initial states x(0)∈C N .

Proof

First of all we prove that all solutions starting in \(C_N \setminus \varOmega_{\mathrm{max}}\) reach \(\varOmega_{\mathrm{max}}\) in finite time. For this purpose, consider the following non-negative function,

$$ V(x) = c^*(x), \quad \forall x \in C_N \setminus \varOmega_{\mathrm{max}} $$
(4.11)

V(x) is a candidate Lyapunov function. After solving the LP problem (4.10) and applying (4.7), (4.8), one obtains, for \(x(k) \in C_N \setminus \varOmega_{\mathrm{max}}\),

$$\left \{ \begin{aligned} &x(k) = c^*(k)x_v^*(k) + \big(1-c^*(k)\big)x_o^*(k)\\&u(k) = c^*(k)u_v(k) + \big(1-c^*(k)\big)u_o(k) \end{aligned} \right . $$

It follows that,

$$\begin{aligned} x(k+1) &= Ax(k) + Bu(k) \\&= c^*(k)x_v(k+1) + \big(1-c^*(k)\big)x_o(k+1) \end{aligned} $$

where

$$\left \{ \begin{aligned} &x_v(k+1) = Ax_v^*(k) + Bu_v(k) \in C_N \\ &x_o(k+1) = Ax_o^*(k) + Bu_o(k) \in\varOmega_{\mathrm{max}} \end{aligned} \right . $$

Hence \(c^*(k)\) is a feasible solution for the LP problem (4.10) at time k+1. By solving (4.10) at time k+1, one gets the optimal solution, namely

$$x(k+1) = c^*(k+1)x_v^*(k+1) + \bigl(1-c^*(k+1)\bigr)x_o^*(k+1) $$

where \(x_{v}^{*}(k+1) \in C_{N}\) and \(x_{o}^{*}(k+1) \in\varOmega_{\mathrm{max}}\). It follows that \(c^*(k+1) \leq c^*(k)\), and hence V(x) is non-increasing.

Using the vertex controller, an interpolation between a point of \(\partial C_N\) and the origin is obtained. Conversely, using the controller (4.7), (4.8), (4.10), an interpolation is constructed between a point of \(C_N\) and a point of \(\varOmega_{\mathrm{max}}\), which in turn contains the origin as an interior point. This property shows that the vertex controller is a feasible choice for the interpolation scheme (4.7), (4.8), (4.10). Hence it follows that,

$$c^*(k) \leq\sum\limits _{i=1}^s\beta_i^*(k) $$

for any x(k)∈C N , with \(\beta_{i}^{*}(k)\) obtained in (3.46), Sect. 3.4.

Since the vertex controller is asymptotically stabilizing, the state reaches any bounded set around the origin in finite time. In our case this implies that, using the controller (4.7), (4.8), (4.10), the state of the closed-loop system reaches \(\varOmega_{\mathrm{max}}\) in finite time, or equivalently that there exists a finite k such that \(c^*(k)=0\).

The proof is completed by noting that inside \(\varOmega_{\mathrm{max}}\) the LP problem (4.10) has the trivial solution \(c^*=0\). Hence the controller (4.7), (4.8), (4.10) becomes the local controller (4.3). The feasible stabilizing controller u(k)=Kx(k) is contractive, and thus the interpolating controller assures asymptotic stability for all \(x \in C_N\). □

The control law (4.7), (4.8), (4.10) obtained by solving on-line the LP problem (4.10) is called Implicit Interpolating Control.

Since \(r_{v}^{*}(k) = c^{*}(k)x_{v}^{*}(k)\) and \(r_{o}^{*}(k) = (1-c^{*}(k))x_{o}^{*}(k)\), it follows that,

$$ u(k) = u_{rv}(k) + u_{ro}(k) $$
(4.12)

where \(u_{rv}(k)\) is the vertex control law evaluated at \(r_{v}^{*}(k)\) and \(u_{ro}(k) = Kr_{o}^{*}(k)\).

Remark 4.2

Note that at each time instant Algorithm 4.1 requires the solution of two LP problems: one is (4.10), of dimension n+1; the other determines the simplex to which \(r_{v}^{*}\) belongs.

Algorithm 4.1

Interpolating control—Implicit solution
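
The algorithm itself is reproduced only as a figure above, so the following is a hedged one-dimensional sketch of the implicit scheme with toy data (not from the text): system x(k+1) = x(k) + u(k) with |u| ≤ 1, assumed sets \(C_N = [-4,4]\) with vertex control \(u_v = -x_v/4\), and \(\varOmega_{\mathrm{max}} = [-2,2]\) for the local gain K = −0.5. At each step the LP (4.10) is solved and the control is assembled as in (4.12):

```python
import numpy as np
from scipy.optimize import linprog

def control(x, K=-0.5):
    # LP (4.10) for the toy sets:
    #   min c  s.t.  |r_v| <= 4c,  |x - r_v| <= 2(1 - c),  0 <= c <= 1
    A_ub = np.array([[ 1.0, -4.0],
                     [-1.0, -4.0],
                     [-1.0,  2.0],
                     [ 1.0,  2.0]])
    b_ub = np.array([0.0, 0.0, 2.0 - x, 2.0 + x])
    res = linprog([0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (0.0, 1.0)])
    r_v = res.x[0]
    # u = u_rv + u_ro as in (4.12): vertex control at r_v (here -r_v/4)
    # plus the local control K r_o, with r_o = x - r_v.
    return -r_v / 4.0 + K * (x - r_v)

x, traj = 4.0, [4.0]
for _ in range(10):        # closed-loop run from the boundary of C_N
    x = x + control(x)
    traj.append(x)
```

The trajectory decays monotonically to the origin while |u(k)| ≤ 1 holds at every step, mirroring the behaviour proved in Theorems 4.1 and 4.2.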

Example 4.1

Consider the following time-invariant linear discrete-time system,

$$ x(k+1) = \left [ \begin{array}{c@{\quad}c} 1& 1 \\ 0& 1 \end{array} \right ]x(k) + \left [ \begin{array}{c} 1 \\ 0.3 \end{array} \right ]u(k) $$
(4.13)

The constraints are,

$$ \begin{array}{c} -10 \leq x_1(k) \leq10, \qquad-5 \leq x_2(k) \leq5, \qquad-1 \leq u(k) \leq1 \end{array} $$
(4.14)

The local controller is chosen as a linear quadratic (LQ) controller with weighting matrices Q=I and R=1, giving the state feedback gain,

$$ K = [-0.5609 \quad {-}0.9758] $$
(4.15)
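
The gain (4.15) can be reproduced (up to rounding) with a standard discrete-time Riccati solver; the sketch below assumes the sign convention u = Kx of (4.3), so K is the negative of the usual LQ gain:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0], [0.3]])
Q, R = np.eye(2), np.array([[1.0]])

# u = Kx with K = -(R + B'PB)^{-1} B'PA, where P is the stabilizing
# solution of the discrete algebraic Riccati equation.
P = solve_discrete_are(A, B, Q, R)
K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

rho = np.max(np.abs(np.linalg.eigvals(A + B @ K)))  # closed-loop spectral radius
```

The computed K should match (4.15) up to rounding, and the closed loop A + BK is Schur stable.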

The sets \(\varOmega_{\mathrm{max}}\) and \(C_N\) with N=14 are shown in Fig. 4.1. Note that \(C_{14}=C_{15}\) is the maximal controlled invariant set. \(\varOmega_{\mathrm{max}}\) is presented in minimal normalized half-space representation as,

$$ \varOmega_{\mathrm{max}} = \left \{x \in\mathbb{R}^2: \left [ \begin{array}{c@{\quad}c} 0.1627& -0.9867\\ -0.1627& 0.9867\\ -0.1159& -0.9933\\ 0.1159& 0.9933\\ -0.4983& -0.8670\\ 0.4983& 0.8670 \end{array} \right ]x \leq \left [ \begin{array}{r} 1.9746\\ 1.9746\\ 1.4115\\ 1.4115\\ 0.8884\\ 0.8884 \end{array} \right ] \right \} $$
(4.16)

The set of vertices of \(C_N\) is given by the matrix \(V(C_N)\), together with the corresponding control matrix \(U_v\),

$$ V(C_N) = [V_1 \quad{-}V_1], \qquad U_v = [U_1 \quad{-}U_1] $$
(4.17)

where

$$\small \begin{array}{ll} V_1 &= \left [ \begin{array}{c@{\quad\!\!}r@{\quad\!\!}r@{\quad\!\!}r@{\quad\!\!}r@{\quad\!\!}r@{\quad\!\!}r@{\quad\!\!}r@{\quad\!\!}c} 10.0000& 9.7000& 9.1000& 8.2000& 7.0000& 5.5000& 3.7000& 1.6027& -10.0000\\ 1.0000& 1.3000& 1.6000& 1.9000& 2.2000& 2.5000& 2.8000& 3.0996& 3.8368 \end{array} \right ], \\ U_1 &= \left [ \begin{array}{r@{\quad}r@{\quad}r@{\quad}r@{\quad}r@{\quad}r@{\quad}r@{\quad}r@{\quad}r} -1& -1& -1& -1& -1& -1& -1& -1& 1 \end{array} \right ] \end{array} $$
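
As a plausibility check (not part of the text), one can verify numerically that every tabulated vertex, its control, and its one-step successor under (4.13) satisfy the constraints (4.14):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0], [0.3]])

# Vertices of C_N and admissible vertex controls from (4.17):
V1 = np.array([[10.0, 9.7, 9.1, 8.2, 7.0, 5.5, 3.7, 1.6027, -10.0],
               [ 1.0, 1.3, 1.6, 1.9, 2.2, 2.5, 2.8, 3.0996, 3.8368]])
U1 = np.array([[-1.0] * 8 + [1.0]])
V = np.hstack([V1, -V1])        # V(C_N) = [V1  -V1]
U = np.hstack([U1, -U1])        # U_v    = [U1  -U1]

X_next = A @ V + B @ U          # one-step successors of the vertices

# Vertices, controls, and successors respect the bounds (4.14):
ok_u = bool(np.all(np.abs(U) <= 1.0))
ok_x = bool(np.all(np.abs(X_next[0]) <= 10.0)
            and np.all(np.abs(X_next[1]) <= 5.0))
```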

The state space partition of vertex control is shown in Fig. 4.2(a). Figure 4.2(b) presents state trajectories of the closed-loop system under the implicit interpolating controller for different initial conditions.

Fig. 4.2

State space partition of vertex control and state trajectories for Example 4.1

For the initial condition x(0)=[−2.0000 3.3284]^T, Fig. 4.3 shows the state and input trajectories for the implicit interpolating controller (solid). As a comparison, we consider MPC based on quadratic programming, where an LQ criterion with identity weighting matrices is optimized. Hence the set \(\varOmega_{\mathrm{max}}\) for the local unconstrained control is identical for the MPC solution and for the implicit interpolating controller. The prediction horizon for the MPC was chosen as 14 to match the controlled invariant set \(C_{14}\) used for the implicit interpolating controller. Figure 4.3 also shows the state and input trajectories obtained for the implicit MPC (dashed).

Fig. 4.3

State and input trajectories for Example 4.1 for implicit interpolating control (solid), and for implicit QP-MPC (dashed)

Using the tic/toc function of Matlab 2011b, the computational burdens of interpolating control and MPC were compared. The results are shown in Table 4.1.

Table 4.1 Durations [ms] of the on-line computations during one sampling interval for interpolating control and MPC, respectively for Example 4.1

As a final element of analysis, Fig. 4.4 presents the interpolating coefficient \(c^*(k)\). It is interesting to note that \(c^*(k)=0\), ∀k≥15, indicating that from time instant k=15 the state of the closed-loop system is in \(\varOmega_{\mathrm{max}}\), and the control is consequently optimal in terms of the MPC cost function. The monotonic decrease and the positivity confirm the Lyapunov interpretation given in the present section.

Fig. 4.4

Interpolating coefficient \(c^*\) as a function of time for Example 4.1

3 Interpolating Control via Linear Programming—Explicit Solution

The structural implication of the LP problem (4.10) is investigated in this section.

3.1 Geometrical Interpretation

Let ∂(⋅) denote the boundary of the corresponding set (⋅). The following theorem holds.

Theorem 4.3

For all \(x \in C_N \setminus \varOmega_{\mathrm{max}}\), the solution of the LP problem (4.10) satisfies \(x_{v}^{*} \in\partial C_{N}\) and \(x_{o}^{*} \in\partial\varOmega_{\mathrm{max}}\).

Proof

Consider \(x \in C_N \setminus \varOmega_{\mathrm{max}}\), with a particular convex combination

$$x = cx_v+(1-c)x_o $$

where \(x_v \in C_N\) and \(x_o \in \varOmega_{\mathrm{max}}\). If \(x_o\) is strictly inside \(\varOmega_{\mathrm{max}}\), one can set \(\tilde{x}_{o} = \partial\varOmega_{\mathrm{max}} \cap\overline{x,x_{o}}\), i.e. \(\tilde{x}_{o}\) is the intersection between \(\partial\varOmega_{\mathrm{max}}\) and the line segment connecting x and \(x_o\), see Fig. 4.5. Clearly, x can be expressed as a convex combination of \(x_v\) and \(\tilde{x}_{o}\), i.e.

$$x = \tilde{c}x_v + (1-\tilde{c})\tilde{x}_o $$

with \(\tilde{c} < c\), since x is closer to \(\tilde{x}_{o}\) than to \(x_o\). Hence the minimization (4.10) leads to a solution \(\{c^{*},x_{v}^{*},x_{o}^{*}\}\) with \(x_{o}^{*} \in\partial\varOmega_{\mathrm{max}}\).

Fig. 4.5

Graphical illustration for the proof of Theorem 4.3

On the other hand, if \(x_v\) is strictly inside \(C_N\), one can set \(\tilde{x}_{v} = \partial C_{N} \cap\stackrel{\longrightarrow}{x,x_{v}}\), i.e. \(\tilde{x}_{v}\) is the intersection between \(\partial C_N\) and the ray starting from x through \(x_v\), see Fig. 4.5. Again, x can be written as a convex combination of \(\tilde{x}_{v}\) and \(x_o\), i.e.

$$x = \tilde{c}\tilde{x}_v + (1-\tilde{c})x_o $$

with \(\tilde{c} < c\), since x is farther from \(\tilde{x}_{v}\) than from \(x_v\). This leads to the conclusion that the optimal solution \(\{ c^{*},x_{v}^{*},x_{o}^{*}\}\) satisfies \(x_{v}^{*} \in\partial C_{N}\). □

Theorem 4.3 states that for all \(x \in C_N \setminus \varOmega_{\mathrm{max}}\), the interpolating coefficient \(c^*\) is minimal if and only if x is written as a convex combination of two points, one belonging to \(\partial C_N\) and the other to \(\partial\varOmega_{\mathrm{max}}\). It is obvious that for \(x \in \varOmega_{\mathrm{max}}\) the LP problem (4.10) has the trivial solution \(c^*=0\), and thus \(x_{v}^{*} = 0\) and \(x_{o}^{*} = x\).

Theorem 4.4

For all \(x \in C_N \setminus \varOmega_{\mathrm{max}}\), the convex combination \(x=cx_v+(1-c)x_o\) gives the smallest value of c if the ratio \(\frac {\Vert x_{v}-x\Vert }{\Vert x-x_{o}\Vert }\) is maximal, where ∥⋅∥ denotes the Euclidean vector norm.

Proof

It holds that

$$\begin{aligned} &x =\, cx_v + (1-c)x_o \\ &\quad \Rightarrow \quad x_v - x = x_v - cx_v - (1-c)x_o = (1-c)(x_v - x_o) \end{aligned} $$

consequently

$$ \Vert x_v-x\Vert = (1-c)\Vert x_v-x_o\Vert $$
(4.18)

Analogously, one obtains

$$ \Vert x - x_o\Vert = c\Vert x_v-x_o\Vert $$
(4.19)

Combining (4.18) and (4.19) with the fact that c≠0 for all \(x \in C_N \setminus \varOmega_{\mathrm{max}}\), one gets

$$\frac{\Vert x_v-x\Vert }{\Vert x-x_o\Vert } = \frac{(1-c)\Vert x_v-x_o\Vert }{c\Vert x_v-x_o\Vert } = \frac{1}{c}-1 $$

c>0 is minimal if and only if \(\frac{1}{c}-1\) is maximal, or equivalently \(\frac{\Vert x_{v}-x\Vert }{\Vert x-x_{o}\Vert }\) is maximal. □
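
The identity \(\Vert x_v-x\Vert /\Vert x-x_o\Vert = 1/c - 1\) established by (4.18)–(4.19) is easy to confirm numerically; the points below are illustrative, not from the text:

```python
import numpy as np

# Check of (4.18)-(4.19): for x = c x_v + (1-c) x_o, the ratio
# ||x_v - x|| / ||x - x_o|| equals 1/c - 1 (illustrative points).
x_v = np.array([4.0, 2.0])
x_o = np.array([1.0, 0.5])
c = 0.25
x = c * x_v + (1 - c) * x_o

ratio = np.linalg.norm(x_v - x) / np.linalg.norm(x - x_o)
```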

3.2 Analysis in \(\mathbb{R}^{2}\)

In this subsection an analysis of the optimization problem (4.9) in the \(\mathbb{R}^{2}\) parameter space is presented, with reference to Fig. 4.6. The discussion is insightful regarding the properties of the partition in the explicit solution. The problem considered here is to decompose the polytope \(V_{1234}\) such that the explicit solution \(c^* =\min\{c\}\) is available on the decomposed cells.

Fig. 4.6

Graphical illustration for the proof of Theorem 4.5

For illustration we consider four points \(V_1, V_2, V_3, V_4\) and any point \(x \in \operatorname{Conv}(V_{1},V_{2},V_{3},V_{4})\). This schematic view can be generalized to any pair of faces of \(C_N\) and \(\varOmega_{\mathrm{max}}\). Denote by \(V_{ij}\) the segment connecting \(V_i\) and \(V_j\) for i,j=1,…,4. The problem is reduced to the expression of a convex combination \(x=cx_v + (1-c)x_o\), where \(x_v \in V_{12} \subset \partial C_N\) and \(x_o \in V_{34} \subset \partial\varOmega_{\mathrm{max}}\), providing the minimal value of c.

Without loss of generality, suppose that the distance from \(V_2\) to \(V_{34}\) is greater than the distance from \(V_1\) to \(V_{34}\), or equivalently that the distance from \(V_4\) to \(V_{12}\) is smaller than the distance from \(V_3\) to \(V_{12}\).

Theorem 4.5

Under the condition that the distance from \(V_2\) to \(V_{34}\) is greater than the distance from \(V_1\) to \(V_{34}\), or equivalently that the distance from \(V_4\) to \(V_{12}\) is smaller than the distance from \(V_3\) to \(V_{12}\), the decomposition of the polytope \(V_{1234}\) as \(V_{1234}=V_{124} \cup V_{234}\) is the result of the minimization of the interpolating coefficient c.

Proof

Without loss of generality, suppose that \(x \in V_{234}\). Then x can be decomposed as,

$$ x = cV_2 + (1-c)x_o $$
(4.20)

where \(x_o \in V_{34}\), see Fig. 4.6. Another possible decomposition is

$$ x = c^{\prime}x_v^{\prime} + \bigl(1-c^{\prime}\bigr)x_o^{\prime} $$
(4.21)

where \(x_{v}^{\prime}\) belongs to \(V_{12}\) and \(x_{o}^{\prime}\) belongs to \(V_{34}\).

Clearly, if the distance from \(V_2\) to \(V_{34}\) is greater than the distance from \(V_1\) to \(V_{34}\), then the distance from \(V_2\) to \(V_{34}\) is greater than the distance from any point of \(V_{12}\) to \(V_{34}\). Consequently, there exists a point T on the ray starting from \(V_2\) through x such that the distance from T to \(V_{34}\) is equal to the distance from \(x_{v}^{\prime}\) to \(V_{34}\). It follows that the line connecting T and \(x_{v}^{\prime}\) is parallel to \(V_{34}\), see Fig. 4.6.

Using the basic proportionality theorem (Thales' theorem), one has

$$ \frac{\|x-x_v^{\prime}\|}{\|x-x_o^{\prime}\|} = \frac{\|x-T\|}{\|x-x_o\|} $$
(4.22)

By Theorem 4.4, and since

$$\frac{\|x-T\|}{\|x-x_o\|} < \frac{\|x-V_2\|}{\|x-x_o\|} $$

it follows that c<c′. □

Theorem 4.5 states that the minimal value of the interpolating coefficient c is found with the help of the decomposition of \(V_{1234}\) as \(V_{1234}=V_{124} \cup V_{234}\).

Remark 4.3

Clearly, if \(V_{12}\) is parallel to \(V_{34}\), then any convex combination \(x=cx_v+(1-c)x_o\) gives the same value of c. Hence the partition may not be unique.

Remark 4.4

As a consequence of Theorem 4.5, it is clear that the region \(C_N \setminus \varOmega_{\mathrm{max}}\) can be subdivided into partitions (cells) as follows,

  • For each facet of the set \(\varOmega_{\mathrm{max}}\), one has to find the furthest point on \(\partial C_N\) on the same side of the origin as the facet of \(\varOmega_{\mathrm{max}}\). A polyhedral cell is obtained as the convex hull of that facet of \(\varOmega_{\mathrm{max}}\) and the furthest point of \(C_N\). By the bounded polyhedral structure of \(C_N\), the existence of some vertex of \(C_N\) as the furthest point is guaranteed.

  • On the other hand, for each facet of \(C_N\), one has to find the closest point on \(\partial\varOmega_{\mathrm{max}}\) on the same side of the origin as the facet of \(C_N\). A polyhedral cell is obtained as the convex hull of that facet of \(C_N\) and the closest point of \(\varOmega_{\mathrm{max}}\). Again, by the bounded polyhedral structure of \(\varOmega_{\mathrm{max}}\), the existence of some vertex of \(\varOmega_{\mathrm{max}}\) as the closest point is guaranteed.

Remark 4.5

Clearly, in \(\mathbb{R}^{2}\) the state space partition according to Remark 4.4 covers the entire set \(C_N\), see e.g. Fig. 4.7. However, in \(\mathbb{R}^{n}\) this is not necessarily the case, as shown in the following example. Let \(C_N\) and \(\varOmega_{\mathrm{max}}\) be given by the vertex representations displayed in Fig. 4.8(a),

$$\begin{aligned} &C_N = \operatorname{Conv} \left \{\left [ \begin{array}{ccc}-4\\ 0\\ 0 \end{array} \right ], \left [ \begin{array}{ccc}4\\ 4\\ 4 \end{array} \right ], \left [ \begin{array}{ccc}4\\ -4\\ 0 \end{array} \right ], \left [ \begin{array}{ccc}4\\ 4\\ -4 \end{array} \right ] \right \} \\&\varOmega_{\mathrm{max}} = \operatorname{Conv} \left \{ \left [ \begin{array}{ccc}1\\ 0\\ 0 \end{array} \right ], \left [ \begin{array}{ccc}-0.5\\ -0.5\\ -0.5 \end{array} \right ], \left [ \begin{array}{ccc}-0.5\\ 0.5\\ 0 \end{array} \right ], \left [ \begin{array}{ccc}-0.5\\ -0.5\\ 0.5 \end{array} \right ] \right \} \end{aligned} $$

By solving the parametric linear programming problem (4.10) with respect to x, the state space partition is obtained [19]. Figure 4.8(b) shows two polyhedral cells of the state space partition. The black set is \(\varOmega_{\mathrm{max}}\). The gray set is the convex hull of two vertices of \(\varOmega_{\mathrm{max}}\) and two vertices of \(C_N\).
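
A possible numerical reconstruction of this three-dimensional example (the tooling is an assumption, not from the text): convert the vertex representations to halfspace form with `scipy.spatial.ConvexHull` and solve the LP (4.10) for one sample state:

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.optimize import linprog

def hrep(vertices):
    """Halfspace form F x <= g of the convex hull of the given vertices."""
    eq = ConvexHull(vertices).equations     # rows [a, b] with a.x + b <= 0
    return eq[:, :-1], -eq[:, -1]

C_N_vert = np.array([[-4, 0, 0], [4, 4, 4], [4, -4, 0], [4, 4, -4]], float)
Om_vert = np.array([[1, 0, 0], [-0.5, -0.5, -0.5],
                    [-0.5, 0.5, 0], [-0.5, -0.5, 0.5]], float)
F_N, g_N = hrep(C_N_vert)
F_o, g_o = hrep(Om_vert)

# LP (4.10) for a sample state x in C_N \ Omega_max:
x = np.array([2.0, 2.0, 2.0])
A_ub = np.block([[F_N, -g_N[:, None]],      # F_N r_v <= c g_N
                 [-F_o, g_o[:, None]]])     # F_o (x - r_v) <= (1 - c) g_o
b_ub = np.r_[np.zeros(len(g_N)), g_o - F_o @ x]
res = linprog(np.r_[np.zeros(3), 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3 + [(0.0, 1.0)])
r_v, c_star = res.x[:3], res.x[3]
```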

Fig. 4.7

Simplex based decomposition as an explicit solution of the LP problem (4.10)

Fig. 4.8

Graphical illustration for Remark 4.5. The partition is obtained by two vertices of the inner set Ω max and two vertices of the outer set C N

In conclusion, in \(\mathbb{R}^{n}\), for all \(x \in C_N \setminus \varOmega_{\mathrm{max}}\) the smallest value \(c^*\) is reached when \(C_N \setminus \varOmega_{\mathrm{max}}\) is decomposed into polytopes with vertices both on \(\partial C_N\) and \(\partial\varOmega_{\mathrm{max}}\). These polytopes can be further decomposed into simplices, each formed by r vertices of \(C_N\) and n−r+1 vertices of \(\varOmega_{\mathrm{max}}\), where 1≤r≤n.

3.3 Explicit Solution

Theorem 4.6

For all \(x \in C_N \setminus \varOmega_{\mathrm{max}}\), the controller (4.7), (4.8), (4.10) is a piecewise affine state feedback law defined over a partition of \(C_N \setminus \varOmega_{\mathrm{max}}\) into simplices. The controller gains are obtained by linear interpolation of the control values at the vertices of the simplices.

Proof

Suppose that x belongs to a simplex formed by n vertices \(\{v_1,v_2,\ldots,v_n\}\) of \(C_N\) and one vertex \(v_o\) of \(\varOmega_{\mathrm{max}}\). The other cases, with the n+1 vertices distributed in a different manner between \(C_N\) and \(\varOmega_{\mathrm{max}}\), can be treated similarly.

In this case, x can be expressed as,

$$ x = \sum\limits _{i=1}^{n}{ \beta_iv_i} + \beta_{n+1}v_o $$
(4.23)

where

$$ \sum\limits _{i=1}^{n+1}{\beta_i} = 1,\quad \beta_i \geq0 $$
(4.24)

Since the n+1 vertices define a non-degenerate simplex, the following (n+1)×(n+1) matrix is invertible,

$$ T_s = \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} v_1& v_2& \ldots& v_n& v_o\\ 1& 1& \ldots& 1& 1 \end{array} \right ] $$
(4.25)

Using (4.23), (4.24), (4.25), the interpolating coefficients β i with i=1,2,…,n+1 are defined uniquely as,

$$ \left [ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \beta_1& \beta_2& \ldots& \beta_{n}& \beta_{n+1} \end{array} \right ]^T = T_s^{-1}\left [ \begin{array}{c}x\\ 1 \end{array} \right ] $$
(4.26)

On the other hand, from (4.7),

$$x = cx_v + (1-c)x_o, $$

Due to the uniqueness of the decomposition (4.23), \(\beta_{n+1}=1-c\) and

$$x_v = \sum\limits _{i=1}^{n}{ \frac{\beta_i}{c}v_i} $$

The Vertex Controller (3.46) gives

$$u_v = \sum\limits _{i=1}^{n}{ \frac{\beta_i}{c}u_i} $$

where \(u_i\) is an admissible control value at \(v_i\), i=1,2,…,n. Therefore

$$u = cu_v + (1-c)u_o = \sum\limits _{i=1}^{n}{ \beta_iu_i} + \beta_{n+1}u_o. $$

with \(u_o = Kv_o\). Together with (4.26), one obtains

$$\begin{aligned} u &= \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c}u_1& u_2& \ldots& u_n& u_o \end{array} \right ]\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c}\beta_1& \beta_2& \ldots& \beta_{n}& \beta_{n+1} \end{array} \right ]^T \\&= \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} u_1& u_2& \ldots& u_n& u_o \end{array} \right ]T_s^{-1} \left [ \begin{array}{c}x\\ 1 \end{array} \right ] \\&= Lx + v \end{aligned} $$

where the matrix \(L \in\mathbb{R}^{m \times n}\) and the vector \(v \in \mathbb{R}^{m}\) are defined by,

$$\left [ \begin{array}{c@{\quad}c} L& v \end{array} \right ] = \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c}u_1& u_2& \ldots& u_n& u_o \end{array} \right ]T_s^{-1} $$

Hence for all xC N Ω max the controller (4.7), (4.8), (4.10) is a piecewise affine state feedback law. □
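
The construction in the proof can be checked numerically. The sketch below uses an illustrative 2-D simplex (the vertices and controls are made up for the example): it forms \(T_s\) of (4.25), recovers the coefficients β of (4.26), and verifies that the affine gain \([L \; v] = [u_1 \ldots u_n \; u_o]T_s^{-1}\) reproduces the interpolated control:

```python
import numpy as np

# Illustrative simplex: vertices v1, v2 of C_N, vertex v_o of Omega_max,
# with admissible controls u1, u2, u_o (all data made up for the demo).
v1, v2, v_o = np.array([4.0, 0.0]), np.array([0.0, 4.0]), np.array([1.0, 1.0])
u1, u2, u_o = -1.0, -0.5, 0.2

T_s = np.array([[v1[0], v2[0], v_o[0]],
                [v1[1], v2[1], v_o[1]],
                [1.0,   1.0,   1.0]])          # matrix (4.25)
U = np.array([[u1, u2, u_o]])

x = np.array([2.0, 1.5])                       # a point inside the simplex
beta = np.linalg.solve(T_s, np.r_[x, 1.0])     # coefficients of (4.26)

Lv = U @ np.linalg.inv(T_s)                    # [L  v] = [u1 u2 u_o] T_s^{-1}
L, v = Lv[:, :2], Lv[:, 2]
u_affine = L @ x + v                           # affine law u = Lx + v
u_interp = U @ beta                            # interpolated control
```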

It is interesting to note that the interpolation between the piecewise linear vertex controller and the linear controller in \(\varOmega_{\mathrm{max}}\) gives rise to a piecewise affine controller. This is not completely unexpected, since (4.10) is a multi-parametric linear program with respect to x.

As in MPC, the number of cells can be reduced by merging those with identical control laws [45].

Remark 4.6

It can be observed that Algorithm 4.2 uses only the information about the state space partition of the explicit solution of the LP problem (4.10). The explicit form of c , \(r_{v}^{*}\) and \(r_{o}^{*}\) as a piecewise affine function of the state is not used.

Algorithm 4.2

Interpolating control—Explicit solution

Clearly, the simplex-based partition over \(C_N \setminus \varOmega_{\mathrm{max}}\) in step 2 might be very complex. Moreover, the fact that for each facet of \(\varOmega_{\mathrm{max}}\) the local controller is of the form u=Kx is not exploited. In addition, as practice usually shows, the vertex controller is usually constant on each facet of \(C_N\). In these cases, the complexity of the explicit interpolating controller (4.7), (4.8), (4.10) can be reduced as follows.

Consider the case when a cell CR of the state space partition of \(C_N \setminus \varOmega_{\mathrm{max}}\) is formed by one vertex \(x_v\) of \(C_N\) and one facet \(F_o\) of \(\varOmega_{\mathrm{max}}\). Note that from Remark 4.4 such a cell always exists in the explicit solution to the LP problem (4.10). For all \(x \in CR\) it follows that

$$x = c^*x_v^* + \bigl(1-c^*\bigr)x_o^* = c^*x_v^* + r_o^* $$

with \(x_{o}^{*} \in F_{o}\) and \(r_{o}^{*} = (1-c^{*})x_{o}^{*}\).

Let \(u_{v} \in\mathbb{R}^{m}\) be an admissible control value at \(x_v\), and denote the explicit solutions \(c^*\) and \(r_{o}^{*}\) of the LP problem (4.10) for all \(x \in CR\) as,

$$ \left \{ \begin{aligned} &c^* = L_cx + v_c\\ &r_o^* = L_ox + v_o \end{aligned} \right . $$
(4.27)

where \(L_c\), \(v_c\) and \(L_o\), \(v_o\) are matrices of appropriate dimensions. The control value for \(x \in CR\) is computed as,

$$ u = c^*u_v + \bigl(1-c^*\bigr)Kx_o^* = c^*u_v + Kr_o^* $$
(4.28)

By substituting (4.27) into (4.28), one obtains

$$u = u_v (L_cx + v_c ) + K (L_ox + v_o ) $$

or, equivalently

$$ u = (u_vL_c + KL_o )x + (u_vv_c + Kv_o ) $$
(4.29)
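
The gain composition can be illustrated numerically; \(L_c\), \(v_c\), \(L_o\), \(v_o\) below are hypothetical placeholders for an explicit solution (4.27) on one cell, chosen only to check that (4.28) and the composed affine law agree:

```python
import numpy as np

# Hypothetical cell data (m = 1): local gain K from (4.15), a vertex
# control u_v, and made-up affine pieces L_c, v_c, L_o, v_o of (4.27).
K = np.array([-0.5609, -0.9758])
u_v = -1.0
L_c, v_c = np.array([0.1, 0.2]), 0.3
L_o = np.array([[0.5, 0.0], [0.0, 0.5]])
v_o = np.array([0.1, -0.1])

x = np.array([1.0, 2.0])
c_star = L_c @ x + v_c                 # c*   = L_c x + v_c
r_o = L_o @ x + v_o                    # r_o* = L_o x + v_o

u_direct = c_star * u_v + K @ r_o                    # evaluate (4.28)
L_comb = u_v * L_c + K @ L_o                         # combined affine gain
v_comb = u_v * v_c + K @ v_o
u_composed = L_comb @ x + v_comb                     # evaluate (4.29)
```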

The fact that the control value is a piecewise affine function of the state is confirmed. Clearly, the complexity of the explicit solution with the control law (4.29) is lower than the complexity of the explicit solution with the simplex-based partition, since one does not have to divide the facets of \(\varOmega_{\mathrm{max}}\) (and the facets of \(C_N\), when the vertex control on such facets is constant) into sets of simplices.

3.4 Qualitative Analysis

Theorem 4.7 below shows the Lipschitz continuity of the control law based on linear programming (4.7), (4.8), (4.10).

Theorem 4.7

The explicit interpolating control law (4.7), (4.8), (4.10) obtained by using Algorithm 4.2 is continuous and Lipschitz continuous with Lipschitz constant \(M=\max_i \Vert L_i\Vert \), where i ranges over the set of indices of the partition cells and \(L_i\) is the gain of the affine control law on the cell \(CR_i\).

Proof

The explicit interpolating controller might be discontinuous only on the boundary of polyhedral cells CR i . Suppose that x belongs to the intersection of s cells CR j , j=1,2,…,s.

For CR j , as in (4.23), the state x can be expressed as,

$$x = \beta_1^{(j)}v_1^{(j)} + \beta_2^{(j)}v_2^{(j)} + \cdots+ \beta _{n+1}^{(j)}v_{n+1}^{(j)} $$

where \(\sum_{i=1}^{n+1}\beta_{i}^{(j)} = 1\), \(0 \leq\beta_{i}^{(j)} \leq1\) and \(v_{i}^{(j)}\), i=1,2,…,n+1 are the vertices of CR j , j=1,2,…,s. It is clear that the only nonzero entries of the interpolating coefficients \(\{\beta_{1}^{(j)}, \ldots, \beta_{n+1}^{(j)} \}\) are those corresponding to the vertices that belong to the intersection. Therefore

$$u = \beta_1^{(j)}u_1^{(j)} + \cdots+ \beta_{n+1}^{(j)}u_{n+1}^{(j)} $$

is equal for all j=1,2,…,s.

For the Lipschitz continuity property, for any two points \(x_A\) and \(x_B\) in \(C_N\), there exist r+1 points \(x_0,x_1,\ldots,x_r\) lying on the line segment connecting \(x_A\) and \(x_B\), such that \(x_A=x_0\), \(x_B=x_r\) and \((x_{i-1},x_{i}) = \overline{x_{A},x_{B}}\cap \partial \mathit{CR}_{i}\), i.e. \((x_{i-1},x_i)\) is the intersection between the segment connecting \(x_A\), \(x_B\) and the boundary of some critical region \(CR_i\), see Fig. 4.9. Due to the continuity of the control law, proved above, one has,

$$\begin{aligned} &\big\Vert (L_Ax_A + v_A) - (L_Bx_B+ v_B)\big\Vert \\&\quad = \big\Vert (L_0x_0 + v_0) -(L_0x_1+v_0) + (L_1x_1 + v_1) - \cdots -(L_rx_r+v_r)\big\Vert \\&\quad = \Vert L_0x_0 -L_0x_1 + L_1x_1 - \cdots-L_rx_r\Vert \\&\quad \displaystyle\leq\sum\limits_{i=1}^r\big\Vert L_{i-1}(x_i-x_{i-1})\big\Vert \leq\sum\limits_{i=1}^r\Vert L_{i-1}\Vert \big\Vert x_i-x_{i-1}\big\Vert \\&\quad \displaystyle\leq\max\limits_i\big\{ \Vert L_{i-1}\Vert \big\} \sum\limits_{i=1}^r\big\Vert x_i-x_{i-1}\big\Vert = M\Vert x_A-x_B\Vert \end{aligned} $$

where the last equality holds since the points \(x_i\), i=0,1,…,r, are aligned.

Fig. 4.9

Graphical illustration for the proof of Theorem 4.7

 □

Example 4.2

We now consider the explicit interpolating controller for Example 4.1. Using Algorithm 4.2, the state space partition shown in Fig. 4.7 is obtained. Merging the regions with identical control laws, the reduced state space partition in Fig. 4.9 is obtained.

Figure 4.10(a) shows the Lyapunov function as a piecewise affine function of the state. It is well known that the level sets of the Lyapunov function for vertex control are simply obtained by scaling the boundary of the set \(C_N\). For the interpolating controller (4.7), (4.8), (4.10), the level sets of the Lyapunov function \(V(x)=c^*\) depicted in Fig. 4.10(b) have a more complicated form and generally are not parallel to the boundary of \(C_N\). From Fig. 4.10 it can be observed that the level sets \(V(x)=c^*\) have the outer set \(C_N\) as the external level set (for \(c^*=1\)). The inner level sets change their polytopic shape in order to approach the boundary of the inner set \(\varOmega_{\mathrm{max}}\).

Fig. 4.10

Lyapunov function and Lyapunov level curves for the interpolating controller for Example 4.2

The control law over the state space partition is,

$$ u(k) = \left \{ \begin{array}{l@{\quad}cl} -1& \mbox{ if }& \left [ \begin{array}{cc} 0.45& 0.89 \\ 0.24& 0.97 \\ 0.16& 0.99 \\ -0.55& 0.84 \\ 0.14& 0.99 \\ -0.50& -0.87 \\ 0.20& 0.98 \\ 0.32& 0.95 \\ 0.37& -0.93 \\ 0.70& 0.71 \end{array} \right ]x(k) \leq \left [ \begin{array}{c} 5.50 \\ 3.83\\ 3.37\\ 1.75\\ 3.30\\ -0.89\\ 3.53\\ 4.40\\ 2.73\\ 7.78 \end{array} \right ] \\ -0.38x_1(k)+0.59x_2(k)- 2.23&\mbox{ if } &\left [ \begin{array}{cc} 0.54& -0.84 \\ -0.37& 0.93\\ -0.12& -0.99 \end{array} \right ]x(k) \leq \left [ \begin{array}{c} -1.75\\ 2.30\\ -1.41 \end{array} \right ] \\ -0.02x_1(k)-0.32x_2(k)+0.02 &\mbox{ if }& \left [ \begin{array}{cc} 0.37& -0.93\\ 0.06& 1.00\\ -0.26& -0.96 \end{array} \right ]x(k) \leq \left [ \begin{array}{c} -2.30\\ 3.20\\ -1.06 \\ \end{array} \right ] \\ -0.43x_1(k)-1.80x_2(k) + 1.65&\mbox{ if }& \left [ \begin{array}{cc} 0.16& -0.99\\ 0.26& 0.96\\ -0.39& -0.92 \end{array} \right ]x(k) \leq \left [ \begin{array}{c} -1.97\\ 1.06\\ 0.38 \end{array} \right ] \\ 0.16x_1(k)-0.41x_2(k)+ 2.21&\mbox{ if }& \left [ \begin{array}{cc} 0.39& 0.92\\ -1.00& 0\\ 0.37& -0.93 \end{array} \right ]x(k) \leq \left [ \begin{array}{c} -0.38\\ 10.00\\ -2.73 \end{array} \right ] \\ 1 &\mbox{ if }& \left [ \begin{array}{cc} -0.14& -0.99 \\ -0.37& 0.93 \\ -0.24& -0.97 \\ -0.71& -0.71 \\ -0.45& -0.89 \\ -0.32& -0.95 \\ -0.20& -0.98 \\ -0.16& -0.99 \\ 0.50& 0.87 \\ 0.54& -0.84 \end{array} \right ]x(k) \leq \left [ \begin{array}{c} 3.30\\ 2.73\\ 3.83\\ 7.78\\ 5.50\\ 4.40\\ 3.53\\ 3.37\\ -0.89\\ 1.75 \end{array} \right ] \\ -0.38x_1(k)+0.59x_2(k) + 2.23&\mbox{ if }& \left [ \begin{array}{cc} 0.12& 0.99\\ 0.37& -0.93\\ -0.54& 0.84 \end{array} \right ]x(k) \leq \left [ \begin{array}{c} -1.41\\ 2.30\\ -1.75 \end{array} \right ] \\ -0.02x_1(k)-0.32x_2(k)-0.02 &\mbox{ if }& \left [ \begin{array}{cc} 0.26& 0.96\\ -0.06& -1.00\\ -0.37& 0.93 \end{array} \right ]x(k) \leq \left [ \begin{array}{c} -1.06\\ 3.20\\ -2.30 \\ \end{array} \right ] \\ -0.43x_1(k)-1.80x_2(k) - 
1.65&\mbox{ if }& \left [ \begin{array}{cc} 0.39& 0.92\\ -0.26& -0.96\\ -0.16& 0.97 \end{array} \right ]x(k) \leq \left [ \begin{array}{c} 0.38\\ 1.06\\ -1.98 \end{array} \right ] \\ 0.16x_1(k)-0.41x_2(k)- 2.21 &\mbox{ if }& \left [ \begin{array}{cc} 1.00& 0\\ -0.37& 0.93\\ -0.39& -0.92 \end{array} \right ]x(k) \leq \left [ \begin{array}{c} 10.00\\ -2.73\\ -0.38 \end{array} \right ] \\ -0.56x_1(k)-0.98x_2(k)&\mbox{ if }& \left [ \begin{array}{cc} 0.16& -0.99\\ -0.16& 0.99\\ -0.12& -0.99\\ 0.12& 0.99\\ -0.50& -0.87\\ 0.50& 0.87 \end{array} \right ]x(k) \leq \left [ \begin{array}{c} 1.97\\ 1.97\\ 1.41\\ 1.41\\ 0.89\\ 0.89 \end{array} \right ] \end{array} \right . $$

For comparison, consider the explicit MPC solution in Example 4.1. Figure 4.11(a) presents the state space partition of the explicit MPC with the same setup parameters as in Example 4.1. Merging the polyhedral regions with an identical piecewise affine control function, the reduced state space partition in Fig. 4.11(b) is obtained.

Fig. 4.11

State space partition before and after merging for Example 4.2 using explicit MPC

The comparison of explicit interpolating control and explicit MPC in terms of the number of regions before and after merging is given in Table 4.2.

Table 4.2 Number of regions for explicit interpolating control and for explicit MPC for Example 4.2

Figure 4.12 shows the explicit interpolating control law and the explicit MPC control law as piecewise affine functions of state, respectively.

Fig. 4.12

Explicit interpolating control law and explicit MPC control law as piecewise affine functions of state for Example 4.2

4 Improved Interpolating Control

The interpolating controller in Sect. 4.2 and Sect. 4.3 can be considered as an approximate model predictive control law; approximate MPC has received significant attention in the control community over the last decade [18, 60, 63, 78, 108, 114]. From this point of view, it is worthwhile to obtain an interpolating controller with a given level of accuracy in terms of performance compared with the optimal MPC law. Naturally, the approximation error can serve as a measure of this level of accuracy. Methods for computing bounds on the approximation error are known, see e.g. [18, 60, 114].

Obviously, the simplest way of improving the performance of the interpolating controller is to use an intermediate s-step controlled invariant set C s with 1≤s<N. There is then not just one level of interpolation but two or, in principle, as many levels as required from the performance point of view. For simplicity, the following presents the case where only one intermediate controlled invariant set C s is used. Let C s be of the form,

$$ C_s = \bigl\{ x \in\mathbb{R}^n: F_sx \leq g_s \bigr\} $$
(4.30)

and satisfy the condition Ω max⊆C s ⊆C N .

Remark 4.7

It has to be noted, however, that the expected increase in performance comes at the price of complexity, since the intermediate set needs to be stored along with its vertex controller.

For further use, the vertex control law applied for the set C s is denoted as u s . Using the same philosophy as in Sect. 4.2, the state x is decomposed as,

  1.

    If x∈C N and x∉C s , then

    $$ x = c_1x_v + (1-c_1)x_s $$
    (4.31)

    with x v ∈C N , x s ∈C s and 0≤c 1≤1. The control law is,

    $$ u = c_1u_v + (1-c_1)u_s $$
    (4.32)
  2.

    Else x∈C s ,

    $$ x = c_2x_s + (1-c_2)x_o $$
    (4.33)

    with x s ∈C s , x o ∈Ω max and 0≤c 2≤1. The control law is,

    $$ u = c_2u_s + (1-c_2)u_o $$
    (4.34)

Depending on the value of x, at each time instant, either c 1 or c 2 is minimized in order to be as close as possible to the optimal controller. This can be done by solving the following nonlinear optimization problems,

  1.

    If x∈C N ∖C s ,

    $$ c_1^* = \min\limits _{x_v,x_s,c_1}\{c_1\} $$
    (4.35)

    subject to

    $$\left \{ \begin{aligned} &F_Nx_v \leq g_N,\\ &F_sx_s \leq g_s,\\ &c_1x_v + (1-c_1)x_s = x,\\ &0 \leq c_1 \leq1 \end{aligned} \right . $$
  2.

    Else x∈C s ,

    $$ c_2^* = \min\limits _{x_s,x_o,c_2}\{c_2\} $$
    (4.36)

    subject to

    $$\left \{ \begin{aligned} &F_sx_s \leq g_s,\\ &F_ox_o \leq g_o,\\ &c_2x_s + (1-c_2)x_o = x,\\ &0 \leq c_2 \leq1 \end{aligned} \right . $$

or, by the change of variables r v =c 1 x v and r s =c 2 x s , the nonlinear optimization problems (4.35) and (4.36) can be transformed into the following LP problems, respectively,

  1.

    If x∈C N ∖C s

    $$ c_1^* = \min\limits _{r_v,c_1}\{c_1\} $$
    (4.37)

    subject to

    $$\left \{ \begin{aligned} &F_Nr_v \leq c_1g_N,\\ &F_s(x-r_v) \leq(1-c_1)g_s,\\ &0 \leq c_1 \leq1 \end{aligned} \right . $$
  2.

    Else x∈C s

    $$ c_2^* = \min\limits _{r_s,c_2}\{c_2\} $$
    (4.38)

    subject to

    $$\left \{ \begin{aligned} &F_sr_s \leq c_2g_s,\\ &F_o(x-r_s) \leq(1-c_2)g_o,\\ &0 \leq c_2 \leq1 \end{aligned} \right . $$
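For illustration, LP problems of the form (4.37), (4.38) can be solved directly with off-the-shelf tools. The sketch below uses Python with NumPy and SciPy (the text prescribes no implementation language, so this choice and the helper name are assumptions); the same function covers both LPs, since each minimizes c subject to r lying in c times an outer set and x−r lying in (1−c) times an inner set:

```python
import numpy as np
from scipy.optimize import linprog

def interpolation_coefficient(F_out, g_out, F_in, g_in, x):
    """Minimize c over (r, c) subject to
         F_out r <= c * g_out          (r in c * outer set),
         F_in (x - r) <= (1 - c) * g_in  (x - r in (1-c) * inner set),
         0 <= c <= 1,
    with decision vector z = [r; c]."""
    n = x.size
    A_ub = np.vstack([
        np.hstack([F_out, -g_out.reshape(-1, 1)]),  # F_out r - c g_out <= 0
        np.hstack([-F_in, g_in.reshape(-1, 1)]),    # c g_in - F_in r <= g_in - F_in x
    ])
    b_ub = np.concatenate([np.zeros(g_out.size), g_in - F_in @ x])
    cost = np.zeros(n + 1)
    cost[-1] = 1.0                                  # objective: minimize c
    bounds = [(None, None)] * n + [(0.0, 1.0)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.x[n]

# usage with assumed box sets: outer |x_i| <= 2, inner |x_i| <= 1
F_box = np.vstack([np.eye(2), -np.eye(2)])
r_star, c_star = interpolation_coefficient(
    F_box, 2.0 * np.ones(4), F_box, np.ones(4), np.array([1.5, 0.0]))
```

For the query point (1.5, 0) the minimal coefficient is c*=0.5, since the decomposition requires 2c ≥ 0.5 + c along the first coordinate.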

The following theorem shows recursive feasibility and asymptotic stability of the interpolating controller (4.31), (4.32), (4.33), (4.34), (4.37), (4.38).

Theorem 4.8

The control law (4.31), (4.32), (4.33), (4.34), (4.37), (4.38) guarantees recursive feasibility and asymptotic stability of the closed loop system for all initial states x(0)∈C N .

Proof

The proof is omitted here, since it follows the same steps as those presented in the feasibility proof of Theorem 4.1 and the stability proof of Theorem 4.2 in Sect. 4.2. □

Remark 4.8

Clearly, instead of the second level of interpolation (4.33), (4.34), (4.38), the MPC approach can be applied for all states inside the set C s . This has very practical consequences in applications, since it is well known [34, 88] that the main issue of MPC for time-invariant linear discrete-time systems is the trade-off between the overall complexity (computational cost) and the size of the domain of attraction. If the prediction horizon is short, then the domain of attraction is small. If the prediction horizon is long, then the computational cost may be very burdensome for the available hardware. Here MPC with a short prediction horizon is employed inside C s for performance, and the control law (4.31), (4.32), (4.37) is then used to enlarge the domain of attraction. In this way one can achieve both the performance and the domain of attraction with a relatively small computational cost.

Theorem 4.9

The control law (4.31), (4.32), (4.33), (4.34), (4.37), (4.38) can be represented as a continuous function of the state.

Proof

Clearly, a discontinuity of the control law may arise only on the boundary of the set C s , denoted as ∂C s . Note that for x∈∂C s , the LP problems (4.37), (4.38) have the trivial solutions,

$$c_1^* = 0, \qquad c_2^* = 1 $$

Therefore, for x∈∂C s the control law (4.31), (4.32), (4.37) gives u=u s and the control law (4.33), (4.34), (4.38) also gives u=u s . Hence the continuity of the control law is guaranteed. □

Remark 4.9

It is interesting to note that by using N−1 intermediate sets C i together with the sets C N and Ω max, a continuous minimum-time controller is obtained, i.e. a controller that steers any state x∈C N into Ω max in no more than N steps.

Concerning the explicit solution of the control law (4.31), (4.32), (4.33), (4.34), (4.37), (4.38), with the same argument as in Sect. 4.3, it can be concluded that,

  • If x∈C N ∖C s (or x∈C s ∖Ω max), the smallest value of c 1 (or c 2) is reached when the region C N ∖C s (or C s ∖Ω max) is decomposed into polyhedral partitions in the form of simplices with vertices both on ∂C N and on ∂C s (or on ∂C s and on ∂Ω max). The control law in each simplex is a piecewise affine function of the state, whose gains are obtained by interpolation of the control values at the vertices of the simplex.

  • If x∈Ω max, then the control law is the optimal unconstrained controller.

Example 4.3

Consider again Example 4.1. Here one intermediate set C s with s=4 is introduced. The set of vertices V s of C s is,

$$ V_s = \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 10.00& -5.95& -7.71& -10.00& -10.00& 5.95& 7.71& 10.00\\ -0.06& 2.72& 2.86& 1.78& 0.06& -2.72& -2.86& -1.78 \end{array} \right ] $$
(4.39)

and the set of the corresponding control actions at the vertices V s is,

$$ U_s = \left [ \begin{array}{r@{\quad}r@{\quad}r@{\quad}r@{\quad}r@{\quad}r@{\quad}r@{\quad}r} -1& -1& -1& -1& 1& 1& 1& 1 \end{array} \right ] $$
(4.40)

The sets C N , C s and Ω max are depicted in Fig. 4.13. For the explicit solution, the state space partition of the control law (4.31), (4.32), (4.33), (4.34), (4.37), (4.38) is shown in Fig. 4.14(a). Merging the regions with identical control laws, the reduced state space partition is obtained in Fig. 4.14(b). This figure also shows state trajectories of the closed-loop system for different initial conditions.

Fig. 4.13

Two-level interpolation for improving the performance

Fig. 4.14

State space partition before merging (number of regions: N r =37) and after merging (N r =19), and state trajectories for Example 4.3

Figure 4.15 shows the control law with two-level interpolation.

Fig. 4.15

Control value as a piecewise affine function of the state using two-level interpolation for Example 4.3

For the initial condition x(0)=[9.9800 −3.8291]T, Fig. 4.16 shows the results of a time-domain simulation. The two curves correspond to the one-level and two-level interpolating control, respectively.

Fig. 4.16

State and input trajectories for one-level interpolating control (dashed), and for two-level interpolating control (solid) for Example 4.3

Figure 4.17 presents the interpolating coefficients \(c_{1}^{*}\) and \(c_{2}^{*}\). As expected, \(c_{1}^{*}\) and \(c_{2}^{*}\) are non-negative and non-increasing. It is also interesting to note that ∀k≥10, \(c_{1}^{*}(k) = 0\), indicating that x is inside C s , and ∀k≥14, \(c_{2}^{*}(k) = 0\), indicating that x is inside Ω max.

Fig. 4.17

Interpolating coefficients as functions of time for Example 4.3

5 Interpolating Control via Quadratic Programming

The interpolating controller in Sect. 4.2 and Sect. 4.4 makes use of linear programming, which is extremely simple. However, the main issue regarding the implementation of Algorithm 4.1 is the non-uniqueness of the solution. Multiple optima are undesirable, as they might lead to fast switching between different optimal control actions when the LP problem (4.10) is solved on-line. In addition, MPC has traditionally been formulated using a quadratic criterion [92]. Hence, also in interpolating control it is worthwhile to investigate the use of quadratic programming.

Before introducing a QP formulation, let us note that the idea of using QP for interpolating control is not new. In [10, 110], Lyapunov theory is used to compute an upper bound of the infinite horizon cost function,

$$ J = \sum\limits _{k=0}^{\infty} \bigl\{ x(k)^TQx(k) + u(k)^TRu(k) \bigr\} $$
(4.41)

where Q⪰0 and R≻0 are the state and input weighting matrices. At each time instant, the algorithms in [110] use an on-line decomposition of the current state, with each component lying in a separate invariant set, after which the corresponding controller is applied to each component separately in order to calculate the control action. Polytopes are employed as candidate invariant sets. Hence, the on-line optimization problem can be formulated as a QP problem. The approach taken in this section follows ideas originally proposed in [10, 110]. In this setting we provide a QP based solution to the constrained control problem.

This section begins with a brief summary of the works [10, 110]. For this purpose, it is assumed that a set of unconstrained asymptotically stabilizing feedback controllers u(k)=K i x(k), i=1,2,…,s is available such that the corresponding invariant set Ω i ⊆X

$$ \varOmega_{i} = \bigl\{ x \in\mathbb{R}^n: F_o^{(i)}x \leq g_o^{(i)} \bigr\} $$
(4.42)

is non-empty for i=1,2,…,s.

Denote Ω as the convex hull of Ω i , i=1,2,…,s. It follows that Ω⊆X, since Ω i ⊆X, ∀i=1,2,…,s and X is convex. Any state x(k)∈Ω can be decomposed as,

$$ x(k) = \lambda_1(k)\widehat{x}_1(k) + \lambda_2(k)\widehat{x}_2(k) + \cdots+ \lambda_s(k)\widehat{x}_s(k) $$
(4.43)

where \(\widehat{x}_{i}(k) \in\varOmega_{i}\), ∀i=1,2,…,s and \(\sum_{i=1}^{s}\lambda_{i}(k) = 1\), λ i (k)≥0.

Define \(r_{i} = \lambda_{i}\widehat{x}_{i}\). Since \(\widehat{x}_{i}\in\varOmega _{i}\), it follows that r i ∈λ i Ω i or, equivalently,

$$ F_o^{(i)}r_i \leq\lambda_ig_o^{(i)}, \quad \forall i=1,2,\ldots,s $$
(4.44)

From (4.43) and the definition of r i , one obtains

$$ x(k) = r_1(k)+r_2(k)+ \cdots+r_s(k) $$
(4.45)
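The scaled-membership condition (4.44) is easy to verify numerically; a minimal sketch in Python, assuming an illustrative unit-box polytope:

```python
import numpy as np

# Unit box {x : F x <= g} as an illustrative invariant set (assumed data).
F = np.vstack([np.eye(2), -np.eye(2)])
g = np.ones(4)

xhat = np.array([0.7, -0.3])      # a point of the set: F xhat <= g
lam = 0.4                         # interpolation weight, lam >= 0
r = lam * xhat                    # scaled point r = lam * xhat

in_set = bool(np.all(F @ xhat <= g))
in_scaled_set = bool(np.all(F @ r <= lam * g))   # condition (4.44)
```

Both membership tests hold: scaling a point of a polytope containing the relevant geometry by λ keeps it in the λ-scaled polytope.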

Consider the following control law,

$$ u(k) = \sum\limits _{i=1}^s \lambda_iK_i\widehat{x}_i = \sum \limits _{i=1}^sK_ir_i $$
(4.46)

where u i (k)=K i r i (k) is the control law in Ω i . One has,

$$\begin{array}{ll} x(k+1) &= Ax(k) + Bu(k) = \displaystyle A\sum\limits_{i=1}^sr_i(k) + B\sum\limits _{i=1}^sK_ir_i(k) = \sum\limits_{i=1}^s(A + BK_i)r_i(k)\\ \end{array} $$

or,

$$ x(k+1) = \sum\limits _{i=1}^sr_i(k+1) $$
(4.47)

where r i (k+1)=A ci r i (k) and A ci =A+BK i .

Define the vector \(z \in\mathbb{R}^{sn}\) as,

$$ z = \bigl[r_1^T\quad r_2^T \quad \ldots \quad r_s^T \bigr]^T $$
(4.48)

Using (4.47), one obtains,

$$ z(k+1) = \varPhi z(k) $$
(4.49)

where

$$\varPhi= \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} A_{c1}& 0& \ldots& 0\\ 0& A_{c2}& \ldots& 0\\ \vdots& \vdots& \ddots& \vdots\\ 0& 0& \ldots& A_{cs} \end{array} \right ] $$
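The block-diagonal matrix Φ is straightforward to assemble numerically. A small sketch in Python with NumPy/SciPy (the system matrices and gains below are illustrative assumptions, not values from the text) builds Φ from two stabilizing gains and confirms its sub-unitary spectral radius:

```python
import numpy as np
from scipy.linalg import block_diag

# Illustrative double-integrator-like system with two stabilizing gains
# (numbers assumed for the sketch, not taken from the examples).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.5],
              [1.0]])
gains = [np.array([[-0.4, -1.2]]),
         np.array([[-0.1, -0.6]])]

# Phi = diag(A_c1, A_c2) with A_ci = A + B K_i
Phi = block_diag(*[A + B @ K for K in gains])
rho = max(abs(np.linalg.eigvals(Phi)))  # spectral radius, < 1 if all A_ci stable
```

Since the eigenvalues of Φ are the union of the eigenvalues of the blocks A ci , stability of each closed loop implies the spectral radius of Φ is below one.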

For the given state and control weighting matrices \(Q \in\mathbb{R}^{n \times n}\) and \(R \in\mathbb{R}^{m \times m}\), consider the following quadratic function,

$$ V(z) = z^TPz $$
(4.50)

where matrix \(P \in\mathbb{R}^{sn \times sn}\), P≻0 is chosen to satisfy,

$$ V\bigl(z(k+1)\bigr)-V\bigl(z(k)\bigr) \leq-x(k)^TQx(k)-u(k)^TRu(k) $$
(4.51)

Using (4.49), the left hand side of (4.51) can be rewritten as,

$$ V\bigl(z(k+1)\bigr)-V\bigl(z(k)\bigr) = z(k)^T\bigl( \varPhi^TP\varPhi- P\bigr)z(k) $$
(4.52)

and using (4.45), (4.46), (4.48), the right hand side of (4.51) becomes,

$$ -x(k)^TQx(k)-u(k)^TRu(k) = z(k)^T(Q_1+R_1)z(k) $$
(4.53)

where

$$\begin{array}{c} Q_1 = -\left [ \begin{array}{c}I\\ I\\ \vdots\\ I \end{array} \right ]Q\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c}I& I& \ldots& I \end{array} \right ], \qquad R_1 = -\left [ \begin{array}{c}K_1^T\\ K_2^T\\ \vdots\\ K_s^T \end{array} \right ]R\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c}K_1& K_2& \ldots& K_s \end{array} \right ] \end{array} $$

Combining (4.51), (4.52) and (4.53), one gets,

$$\varPhi^TP\varPhi- P \preceq Q_1+R_1 $$

or by using the Schur complements, one obtains,

$$ \left [ \begin{array}{c@{\quad}c} P+Q_1+R_1& \varPhi^TP\\ P\varPhi& P \end{array} \right ] \succeq0 $$
(4.54)

Problem (4.54) is linear with respect to the matrix P. Since the matrix Φ defined in (4.49) has a sub-unitary spectral radius, problem (4.54) is always feasible. One way to obtain P is to solve the following LMI problem,

$$ \min\limits _{P}\bigl\{ \operatorname{trace}(P)\bigr\} $$
(4.55)

subject to constraint (4.54).

At each time instant, for a given current state x, consider the following optimization problem,

$$ \min\limits _{r_i,\lambda_i}\left \{\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c}r_1^T& r_2^T& \ldots& r_s^T \end{array} \right ]P\left [ \begin{array}{c}r_1\\ r_2\\ \vdots\\ r_s \end{array} \right ] \right \} $$
(4.56)

subject to

$$\left \{ \begin{aligned} &F_o^{(i)}r_i \leq\lambda_i g_o^{(i)},\quad \forall i=1,2,\ldots,s,\\ &\displaystyle \sum\limits_{i=1}^sr_i = x,\\ &\displaystyle \sum\limits_{i=1}^s\lambda_i = 1, \\ &\lambda_i \geq0,\quad \forall i=1,2,\ldots,s \end{aligned} \right . $$

and implement as input the control action \(u = \sum_{i=1}^{s}K_{i}r_{i}\).

Theorem 4.10

[10, 110]

The control law (4.43), (4.46), (4.56) guarantees recursive feasibility and asymptotic stability for all initial states x(0)∈Ω.

Note that with the approach in [10, 110], for a given state x one tries to minimize all of r 1,r 2,…,r s in the weighted Euclidean norm sense. These objectives are somewhat conflicting, since,

$$r_1 + r_2 + \cdots+ r_s = x $$

In addition, if the first controller is optimal and plays the role of a performance controller, then one would like to have a control law as close as possible to the first controller. This means that in the interpolation scheme (4.43), one would like to have r 1=x and

$$r_2 = r_3 = \cdots= r_s = 0 $$

whenever possible. This is not trivial to achieve with the approach in [10, 110].

Below we provide a contribution to this line of research by considering one of the interpolation factors, i.e. one of the control gains, to be performance related, while the remaining factors play the role of degrees of freedom used to enlarge the domain of attraction. This alternative approach provides an appropriate framework for constrained control design, which builds on the unconstrained optimal controller (generally with high gain) and subsequently adjusts it to cope with the constraints and limitations (via interpolation with adequate low-gain controllers). From this point of view, in the remaining part of this section we try to build a bridge between the linear interpolation scheme presented in Sect. 4.2 and the QP-based interpolation approaches in [10, 110].

For a given set of state and control weighting matrices Q i ⪰0, R i ≻0, consider the following set of quadratic functions,

$$ V_i(r_i) = r_i^TP_ir_i, \quad\forall i=2,3,\ldots,s $$
(4.57)

where matrix \(P_{i} \in\mathbb{R}^{n \times n}\) and P i ≻0 is chosen to satisfy

$$ V_i\bigl(r_i(k+1)\bigr) - V_i\bigl(r_i(k)\bigr) \leq-r_i(k)^TQ_ir_i(k) - u_i(k)^TR_iu_i(k) $$
(4.58)

Since r i (k+1)=A ci r i (k) and u i (k)=K i r i (k), equation (4.58) can be written as,

$$A_{ci}^TP_iA_{ci} - P_i \preceq-Q_i - K_i^TR_iK_i $$

By using the Schur complements, one obtains

$$ \left [ \begin{array}{c@{\quad}c} P_i - Q_i - K_i^TR_iK_i&A_{ci}^TP_i\\ P_iA_{ci}& P_i \end{array} \right ] \succeq0 $$
(4.59)

Since the matrix A ci has a sub-unitary spectral radius, problem (4.59) is always feasible. One way to obtain the matrix P i is to solve the following LMI problem,

$$ \min\limits _{P_i}\bigl\{ \operatorname{trace}(P_i)\bigr\} $$
(4.60)

subject to constraint (4.59).
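Rather than calling an LMI solver, one particular feasible choice of P i satisfies the decrease condition (4.58) with equality and can be obtained from a discrete-time Lyapunov equation. A sketch in Python with NumPy/SciPy (the system data are illustrative assumptions); this is a feasible point, not the trace-minimizing solution:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def lyapunov_weight(A, B, K, Q, R):
    """A particular P satisfying the decrease condition with equality:
       A_c^T P A_c - P = -(Q + K^T R K),  A_c = A + B K."""
    A_c = A + B @ K
    # solve_discrete_lyapunov(a, q) solves  a X a^T - X + q = 0
    return solve_discrete_lyapunov(A_c.T, Q + K.T @ R @ K)

# illustrative data (assumed for the sketch)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K = np.array([[-0.4, -1.2]])
Q, R = np.eye(2), np.eye(1)

P = lyapunov_weight(A, B, K, Q, R)
A_c = A + B @ K
residual = A_c.T @ P @ A_c - P + (Q + K.T @ R @ K)  # should vanish
```

Because A c is Schur stable and Q + K^T R K is positive definite, the resulting P is positive definite as well.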

Define the vector \(z_{1} \in\mathbb{R}^{(s-1)(n+1)}\) as,

$$z_1 = \bigl[r_2^T \quad r_3^T \quad \ldots \quad r_s^T \quad \lambda_2 \quad \lambda_3 \quad \ldots \quad \lambda_s\bigr]^T $$

Consider the following quadratic function,

$$ J(z_1) = \sum\limits _{i=2}^sr_i^TP_ir_i + \sum\limits _{i=2}^s\lambda_i^2 $$
(4.61)

We underline the fact that the sum is built on indices {2,3,…,s}, corresponding to the more poorly performing controllers. At each time instant, consider the following optimization problem,

$$ V_1(z_1) = \min\limits_{z_1} \bigl\{ J(z_1) \bigr\} $$
(4.62)

subject to the constraints

$$\left \{ \begin{aligned} &F_o^{(i)}r_i \leq\lambda_i g_o^{(i)}, \forall i=1,2,\ldots,s,\\ &\displaystyle\sum\limits_{i=1}^sr_i = x,\\ &\displaystyle\sum\limits_{i=1}^s\lambda_i = 1, \\ &\lambda_i \geq0, \forall i=1,2,\ldots,s \end{aligned} \right . $$

and apply as input the control signal \(u = \sum_{i=1}^{s}\{K_{i}r_{i}\}\).

Theorem 4.11

The control law (4.43), (4.46), (4.62) guarantees recursive feasibility and asymptotic stability for all initial states x(0)∈Ω.

Proof

Theorem 4.11 makes two important claims, namely the recursive feasibility and the asymptotic stability. These can be treated sequentially.

Recursive feasibility: It has to be proved that F u u(k)≤g u and x(k+1)∈Ω for all x(k)∈Ω. It holds that,

$$\begin{array}{ll} F_uu(k) = F_u\displaystyle \sum\limits_{i=1}^s\lambda_iK_i\widehat{x}_i = \displaystyle \sum\limits _{i=1}^s\lambda_iF_uK_i\widehat{x}_i \leq\sum\limits_{i=1}^s\lambda _ig_u = g_u \end{array} $$

and

$$\begin{array}{ll} x(k+1) = Ax(k) + Bu(k) = \displaystyle \sum\limits_{i=1}^s\lambda_iA_{ci}\widehat{x}_i(k) \end{array} $$

Since \(A_{ci}\widehat{x}_{i}(k) \in\varOmega_{i} \subseteq\varOmega\), it follows that x(k+1)∈Ω.

Asymptotic stability: Consider the positive function V 1(z 1) as a candidate Lyapunov function. From the recursive feasibility proof, it is apparent that if \(\lambda_{1}^{*}(k)\), \(\lambda_{2}^{*}(k)\), …, \(\lambda_{s}^{*}(k)\) and \(r_{1}^{*}(k), r_{2}^{*}(k), \ldots,r_{s}^{*}(k)\) is the solution of the optimization problem (4.62) at time instant k, then \(\lambda_{i}(k+1) = \lambda_{i}^{*}(k)\) and

$$r_i(k+1) = A_{ci}r_i^*(k) $$

i=1,2,…,s is a feasible solution to (4.62) at time instant k+1. Since J(z 1) is minimized at each time instant, it follows that

$$V_1\bigl(z_1^*(k+1)\bigr) \leq J\bigl(z_1(k+1) \bigr) $$

and therefore

$$V_1\bigl(z_1^*(k+1)\bigr)-V_1 \bigl(z_1^*(k)\bigr) \leq J\bigl(z_1(k+1)\bigr) - V_1\bigl(z_1^*(k)\bigr) $$

Together with (4.58), one obtains

$$V_1\bigl(z_1^*(k+1)\bigr)-V_1 \bigl(z_1^*(k)\bigr) \leq-\sum\limits _{i=2}^s \bigl(r_i^TQ_ir_i + u_i^TR_iu_i \bigr) $$

Hence V 1(z 1) is a Lyapunov function and the control law (4.43), (4.46), (4.62) assures asymptotic stability for all x∈Ω. □

The constraints of the problem (4.62) can be rewritten as,

$$\left \{ \begin{aligned} &F_o^{(1)}(x-r_2-\cdots-r_s) \leq(1-\lambda_2-\cdots-\lambda_s)g_o^{(1)}\\&F_o^{(2)}r_2 \leq\lambda_2g_o^{(2)} \\ &\quad \vdots\\&F_o^{(s)}r_s \leq\lambda_sg_o^{(s)} \\&\lambda_i \geq0, \quad\forall i=2,\ldots,s\\&\lambda_2 + \lambda_3 + \cdots+\lambda_s\leq1 \end{aligned} \right . $$

or, equivalently

$$ Gz_1 \leq S + Ex $$
(4.63)

where

$$\begin{array}{c} G = \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} -F_o^{(1)}& -F_o^{(1)}& \ldots& -F_o^{(1)}& g_o^{(1)}& g_o^{(1)}& \ldots & g_o^{(1)}\\ F_o^{(2)}& 0& \ldots& 0& -g_o^{(2)}& 0& \ldots& 0\\ 0& F_o^{(3)}& \ldots& 0& 0& -g_o^{(3)}& \ldots& 0\\ \vdots& \vdots& \ddots& \vdots& \vdots& \vdots& \ddots& \vdots\\ 0& 0& \ldots& F_o^{(s)}& 0& 0& \ldots& -g_o^{(s)}\\ 0& 0& \ldots& 0& -1& 0& \ldots& 0\\ 0& 0& \ldots& 0& 0& -1& \ldots& 0\\ \vdots& \vdots& \ddots& \vdots& \vdots& \vdots& \ddots& \vdots\\ 0& 0& \ldots& 0& 0& 0& \ldots& -1\\ 0& 0& \ldots& 0& 1& 1& \ldots& 1 \end{array} \right ], \\ S = \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} (g_o^{(1)})^T& 0& 0& \ldots& 0& 0& 0& \ldots& 0& 1 \end{array} \right ]^T \\ E = \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} -(F_o^{(1)})^T& 0& 0& \ldots& 0& 0& 0& \ldots& 0& 0 \end{array} \right ]^T \end{array} $$

The objective function (4.62) can then be written as,

$$ \min\limits _{z_1}\bigl\{ z_1^THz_1 \bigr\} $$
(4.64)

where

$$H = \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} P_2& 0& \ldots& 0& 0& 0& \ldots& 0\\ 0& P_3& \ldots& 0& 0& 0& \ldots& 0\\ \vdots& \vdots& \ddots& \vdots& \vdots& \vdots& \ddots& \vdots\\ 0& 0& \ldots& P_s& 0& 0& \ldots& 0\\ 0& 0& \ldots& 0& 1& 0& \ldots& 0 \\ 0& 0& \ldots& 0& 0& 1& \ldots& 0 \\ \vdots& \vdots& \ddots& \vdots& \vdots& \vdots& \ddots& \vdots\\ 0& 0& \ldots& 0& 0& 0& \ldots& 1 \end{array} \right ] $$

Hence, the optimization problem (4.62) is transformed into the quadratic programming problem (4.64) subject to (4.63).
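Once G, S, E and H are formed, the on-line problem is a standard inequality-constrained QP. The sketch below uses Python with SciPy's SLSQP for self-containment (in practice a dedicated QP solver would be used on-line; the toy data at the end are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def solve_interpolation_qp(H, G, S, E, x):
    """Minimize z^T H z subject to G z <= S + E x.  SLSQP is used
    here as a generic solver; an active-set or interior-point QP
    solver is preferable for real-time use."""
    b = S + E @ x
    cons = [{"type": "ineq", "fun": lambda z: b - G @ z,
             "jac": lambda z: -G}]          # scipy expects >= 0 form
    z0 = np.ones(H.shape[0])                 # simple warm start
    res = minimize(lambda z: z @ H @ z, z0,
                   jac=lambda z: 2.0 * (H @ z),
                   constraints=cons, method="SLSQP")
    return res.x

# toy usage (assumed data): min ||z||^2 subject to z >= 1 elementwise
z_star = solve_interpolation_qp(np.eye(2), -np.eye(2), -np.ones(2),
                                np.zeros((2, 2)), np.zeros(2))
```

For the toy problem the minimizer is z = (1, 1), the closest point of the feasible set to the origin.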

It is worth noticing that for all x∈Ω 1, the QP problem (4.64), (4.63) has the trivial solution, namely

$$\left \{ \begin{aligned} &r_i = 0,\\ &\lambda_i = 0 \end{aligned} \right . \quad\forall i=2,3, \ldots,s $$

Hence r 1=x and λ 1=1. This means that, inside the set Ω 1, the interpolating controller (4.43), (4.46), (4.62) becomes the optimal unconstrained controller.

Remark 4.10

Note that Algorithm 4.3 requires the solution of the QP problem (4.64) of dimension (s−1)(n+1), where s is the number of interpolated controllers and n is the dimension of the state. Clearly, solving the QP problem (4.64) can be computationally expensive when the number of interpolated controllers is large. However, s=2 or s=3 is usually sufficient in terms of performance and of the size of the domain of attraction.

Algorithm 4.3

Interpolating control via quadratic programming

Example 4.4

Consider again the system in Example 4.2 with the same state and control constraints. Two linear feedback controllers are chosen as,

$$ \left \{ \begin{aligned} &K_1 = [-0.0942 \quad{-}0.7724]\\ &K_2 = [-0.0669 \quad{-}0.2875] \end{aligned} \right . $$
(4.65)

The first controller u(k)=K 1 x(k) is an optimal controller and plays the role of the performance controller, and the second controller u(k)=K 2 x(k) is used to enlarge the domain of attraction.

Figure 4.18(a) shows the invariant sets Ω 1 and Ω 2 corresponding to the controllers K 1 and K 2, respectively. Figure 4.18(b) shows state trajectories obtained by solving the QP problem (4.64), (4.63) for different initial conditions.

Fig. 4.18

Feasible invariant sets and state trajectories of the closed loop system for Example 4.4

The sets Ω 1 and Ω 2 are presented in minimal normalized half-space representation as,

$$\varOmega_1 = \left \{x \in\mathbb{R}^2: \left [ \begin{array}{c@{\quad}c} 1.0000& 0\\ -1.0000& 0\\ -0.1211& -0.9926\\ 0.1211& 0.9926 \end{array} \right ]x \leq \left [ \begin{array}{c} 10.0000\\ 10.0000\\ 1.2851\\ 1.2851 \end{array} \right ] \right \} $$
$$\varOmega_2 = \left \{x \in\mathbb{R}^2: \left [ \begin{array}{c@{\quad}c} 1.0000& 0\\ -1.0000& 0\\ -0.2266& -0.9740\\ 0.2266& 0.9740\\ 0.7948& 0.6069\\ -0.7948& -0.6069\\ -0.1796& -0.9837\\ 0.1796& 0.9837\\ -0.1425& -0.9898\\ 0.1425& 0.9898\\ -0.1117& -0.9937\\ 0.1117& 0.9937\\ -0.0850& -0.9964\\ 0.0850& 0.9964\\ -0.0610& -0.9981\\ 0.0610& 0.9981\\ -0.0386& -0.9993\\ 0.0386& 0.9993\\ -0.0170& -0.9999\\ 0.0170& 0.9999 \end{array} \right ]x \leq \left [ \begin{array}{c} 10.0000\\ 10.0000\\ 3.3878\\ 3.3878\\ 8.5177\\ 8.5177\\ 3.1696\\ 3.1696\\ 3.0552\\ 3.0552\\ 3.0182\\ 3.0182\\ 3.0449\\ 3.0449\\ 3.1299\\ 3.1299\\ 3.2732\\ 3.2732\\ 3.4795\\ 3.4795 \end{array} \right ] \right \} $$

For the weighting matrices Q 2=I, R 2=1, solving the LMI problem (4.60) yields,

$$ P_2 = \left [ \begin{array}{c@{\quad}c} 5.1917& 9.9813\\ 9.9813& 101.2651 \end{array} \right ] $$
(4.66)

For the initial condition x(0)=[6.8200 1.8890]T, Fig. 4.19(a) and 4.19(b) present the state and input trajectories of the closed loop system for our approach (solid), and for the approach in [110] (dashed).

Fig. 4.19

State and input trajectories of the closed loop system for our approach (solid), and for the approach in [110] (dashed) for Example 4.4

For [110], the matrix P in problem (4.55) is computed as,

$$P = \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 4.8126& 2.9389& 4.5577& 13.8988\\ 2.9389& 7.0130& 2.2637& 20.4391\\ 4.5577& 2.2637& 5.1917& 9.9813\\ 13.8988& 20.4391& 9.9813& 101.2651 \end{array} \right ] $$

for the weighting matrices Q=I, R=1.

The interpolating coefficient \(\lambda_{2}^{*}\) and the Lyapunov function V 1(z 1) are depicted in Fig. 4.20. As expected V 1(z 1) is a positive and non-increasing function.

Fig. 4.20

Interpolating coefficient \(\lambda_{2}^{*}\) and the Lyapunov function V 1(z 1) as functions of time for Example 4.4

6 Interpolating Control Based on Saturated Controllers

In this section, in order to fully utilize the capability of the actuators and to enlarge the domain of attraction, an interpolation between several saturated controllers is proposed. For simplicity, only single-input single-output systems are considered, although extensions to multi-input multi-output systems are straightforward.

From Lemma 2.1 in Sect. 2.4.1, recall that for a given stabilizing controller u(k)=Kx(k), there exists an auxiliary stabilizing controller u(k)=Hx(k) such that the saturation function can be expressed, for all x such that Hx∈U, as

$$ \operatorname{sat}\bigl(Kx(k)\bigr) = \alpha(k) Kx(k) + \bigl(1-\alpha(k) \bigr)Hx(k) $$
(4.67)

where 0≤α(k)≤1. Matrix \(H \in\mathbb{R}^{n}\) can be computed using Theorem 2.3. Using Procedure 2.5 in Sect. 2.4.1, the polyhedral set \(\varOmega_{s}^{H}\) can be computed, which is invariant for system,

$$ x(k+1) = Ax(k) + B\operatorname{sat}\bigl(Kx(k)\bigr) $$
(4.68)

and with respect to the constraints (4.2).
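The saturation decomposition of Lemma 2.1 can be reproduced numerically: given gains K and H with |Hx| ≤ u_max, the coefficient α follows from solving sat(Kx) = αKx + (1−α)Hx for α. A Python sketch with illustrative (assumed) gains and the saturation level u_max taken as 1:

```python
import numpy as np

def sat(v, u_max=1.0):
    """Scalar saturation to [-u_max, u_max]."""
    return float(np.clip(v, -u_max, u_max))

def saturation_alpha(K, H, x, u_max=1.0):
    """Coefficient alpha in sat(Kx) = alpha*Kx + (1 - alpha)*Hx,
    valid when |Hx| <= u_max as in the Lemma 2.1 setting."""
    kx, hx = float(K @ x), float(H @ x)
    if np.isclose(kx, hx):
        return 1.0
    return float(np.clip((sat(kx, u_max) - hx) / (kx - hx), 0.0, 1.0))

# illustrative gains and state (assumed, not from the text)
K = np.array([2.0, 0.0])   # saturating gain
H = np.array([0.5, 0.0])   # auxiliary gain with |Hx| <= 1 here
x = np.array([1.0, 0.0])
alpha = saturation_alpha(K, H, x)
```

Here Kx = 2 saturates to 1 while Hx = 0.5 stays within bounds, giving α = (1 − 0.5)/(2 − 0.5) = 1/3; when Kx is unsaturated the formula returns α = 1 and the decomposition reduces to u = Kx.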

It is assumed that a set of asymptotically stabilizing feedback controllers \(K_{i} \in\mathbb{R}^{n}\), i=1,2,…,s is available, as well as a set of auxiliary matrices \(H_{i} \in\mathbb{R}^{n}\), i=2,…,s, such that the corresponding invariant sets Ω 1 ⊆X

$$ \varOmega_{1} = \bigl\{ x \in\mathbb{R}^n: F_o^{(1)}x \leq g_o^{(1)} \bigr\} $$
(4.69)

for the linear controller u=K 1 x and \(\varOmega_{s}^{H_{i}} \subseteq X\)

$$ \varOmega_{s}^{H_i} = \bigl\{ x \in\mathbb{R}^n: F_o^{(i)}x \leq g_o^{(i)} \bigr\} $$
(4.70)

for the saturated controllers \(u = \operatorname{sat}(K_{i}x)\), ∀i=2,3,…,s, are non-empty. Denote Ω s as the convex hull of the sets Ω 1 and \(\varOmega_{s}^{H_{i}}\), i=2,3,…,s. It follows that Ω s ⊆X, since Ω 1 ⊆X, \(\varOmega_{s}^{H_{i}} \subseteq X\), ∀i=2,3,…,s and X is a convex set.

Remark 4.11

We use one linear control law here in order to show that interpolation can be done between any kind of controllers: linear or saturated. The main requirement is that there exists for each of these controllers its own convex invariant set as the domain of attraction.

Any state x(k)∈Ω s can be decomposed as,

$$ x(k) = \lambda_1(k)\widehat{x}_1(k) + \sum\limits _{i=2}^s\lambda _i(k) \widehat{x}_i(k) $$
(4.71)

where \(\widehat{x}_{1}(k) \in\varOmega_{1}\), \(\widehat{x}_{i}(k) \in\varOmega _{s}^{H_{i}}\), i=2,3,…,s and

$$\sum_{i=1}^s\lambda_i(k) = 1, \quad \lambda_i(k) \geq0.$$

Consider the following control law,

$$ u(k) = \lambda_1(k)K_1 \widehat{x}_1(k)+ \sum\limits _{i=2}^s\lambda _i(k)\operatorname{sat}\bigl(K_i\widehat{x}_i(k)\bigr) $$
(4.72)

Using Lemma 2.1, one obtains,

$$ u(k) = \lambda_1(k)K_1 \widehat{x}_1(k) + \sum\limits _{i=2}^s\lambda _i(k) \bigl(\alpha_i(k)K_i + \bigl(1- \alpha_i(k)\bigr)H_i\bigr)\widehat{x}_i(k) $$
(4.73)

where 0≤α i (k)≤1 for all i=2,3,…,s.

In line with the notation employed in Sect. 4.5, we denote \(r_{i} =\lambda_{i}\widehat{x}_{i}\). Since \(\widehat{x}_{1} \in\varOmega _{1}\) and \(\widehat{x}_{i} \in\varOmega_{s}^{H_{i}}\), it follows that r 1 ∈λ 1 Ω 1 and \(r_{i} \in\lambda_{i}\varOmega_{s}^{H_{i}}\) or, equivalently

$$ F_o^{(i)}r_i \leq\lambda_i g_o^{(i)}, \quad\forall i=1,2,\ldots,s $$
(4.74)

Based on (4.71) and (4.73), one obtains,

$$ \left \{ \begin{aligned} &x = r_1+ \displaystyle \sum\limits_{i=2}^sr_i,\\ &u = u_1+ \displaystyle \sum\limits_{i=2}^su_i \end{aligned} \right . $$
(4.75)

where u 1=K 1 r 1 and u i =(α i K i +(1−α i )H i )r i , i=2,3,…,s.

As in Sect. 4.5, the first controller, identified by the high gain K 1, will play the role of a performance controller, while the remaining controllers \(u= \operatorname{sat}(K_{i}x)\), i=2,3,…,s will be used to extend the domain of attraction.

It holds that,

$$\begin{aligned} x(k+1) &= Ax(k) + Bu(k)\\ &= \displaystyle A\sum\limits_{i=1}^sr_i(k) + B\sum\limits_{i=1}^su_i = \sum\limits _{i=1}^sr_i(k+1)\end{aligned} $$

where r 1(k+1)=Ar 1+Bu 1=(A+BK 1)r 1 and

$$ r_i(k+1) = Ar_i(k)+Bu_i(k) = \bigl\{ A + B \bigl(\alpha_iK_i + (1-\alpha _i)H_i \bigr) \bigr\} r_i(k) $$
(4.76)

or, equivalently

$$ r_i(k+1) = A_{ci}r_i(k) $$
(4.77)

with A ci =A+B(α i K i +(1−α i )H i ), ∀i=2,3,…,s.

For a given set of state and control weighting matrices Q i ⪰0 and R i ≻0, i=2,3,…,s, consider the following set of quadratic functions,

$$ V_i(r_i) = r_i^TP_ir_i, \quad i=2,3,\ldots,s $$
(4.78)

where the matrix \(P_{i} \in\mathbb{R}^{n \times n}\), P i ≻0 is chosen to satisfy,

$$ V_i\bigl(r_i(k+1)\bigr) - V_i\bigl(r_i(k)\bigr) \leq-r_i(k)^TQ_ir_i(k) - u_i(k)^TR_iu_i(k) $$
(4.79)

With the same argument as in Sect. 4.5, equation (4.79) can be rewritten as,

$$A_{ci}^TP_iA_{ci} - P_i \preceq-Q_i - \bigl(\alpha_iK_i + (1-\alpha _i)H_i\bigr)^TR_i \bigl(\alpha_iK_i + (1-\alpha_i)H_i \bigr) $$

Using the Schur complements, the above condition can be transformed into,

$$\left [ \begin{array}{c@{\quad}c} P_i-Q_i - Y_i^TR_iY_i& A_{ci}^TP_i\\ P_iA_{ci}& P_i \end{array} \right ] \succeq0 $$

where Y i =α i K i +(1−α i )H i . Or, equivalently

$$\left [ \begin{array}{c@{\quad}c} P_i& A_{ci}^TP_i\\ P_iA_{ci}& P_i \end{array} \right ] - \left [ \begin{array}{c@{\quad}c} Q_i + Y_i^TR_iY_i& 0\\ 0& 0 \end{array} \right ] \succeq0 $$

Denote by \(\sqrt{Q_{i}}\) and \(\sqrt{R_{i}}\) the Cholesky factors of the matrices Q i and R i , which satisfy

$$\sqrt{Q_i}^T\!\sqrt{Q_i}= Q_i \quad \mbox{and} \quad \sqrt{R_i}^T\!\sqrt{R_i}= R_i.$$

The previous condition can be rewritten as,

$$\left [ \begin{array}{c@{\quad}c} P_i & A_{ci}^TP_i\\ P_iA_{ci}& P_i \end{array} \right ] - \left [ \begin{array}{c@{\quad}c} \sqrt{Q_i}^T & Y_i^T\sqrt{R_i}^T\\ 0& 0 \end{array} \right ] \left [ \begin{array}{c@{\quad}c} \sqrt{Q_i} & 0\\ \sqrt{R_i}Y_i& 0 \end{array} \right ] \succeq0 $$

or by using the Schur complements, one obtains,

$$ \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} P_i& A_{ci}^TP_i& \sqrt{Q_i}^T& Y_i^T\sqrt{R_i}^T\\ P_iA_{ci}& P_i& 0& 0\\ \sqrt{Q_i}& 0& I& 0\\ \sqrt{R_i}Y_i& 0& 0& I \end{array} \right ] \succeq0 $$
(4.80)

Since \(Y_i=\alpha_iK_i+(1-\alpha_i)H_i\) and \(A_{ci}=A+BY_i\), the left-hand side of inequality (4.80) is linear in \(\alpha_i\), and hence reaches its minimum at either \(\alpha_i=0\) or \(\alpha_i=1\). Consequently, the set of LMI conditions to be checked is the following,

$$ \left \{ \begin{array}{l} \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} P_i& (A + BK_i)^TP_i& \sqrt{Q_i}^T& (\sqrt{R_i}K_i)^T\\ P_i(A + BK_i)& P_i& 0& 0\\ \sqrt{Q_i}& 0& I& 0\\ \sqrt{R_i}K_i& 0& 0& I \end{array} \right ] \succeq0 \\ \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} P_i& (A + BH_i)^TP_i& \sqrt{Q_i}^T& (\sqrt{R_i}H_i)^T\\ P_i(A + BH_i)& P_i& 0& 0\\ \sqrt{Q_i}& 0& I& 0\\ \sqrt{R_i}H_i& 0& 0& I \end{array} \right ] \succeq0 \\ \end{array} \right . $$
(4.81)
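Given candidate matrices, each block matrix in (4.81) can be assembled and its positive semidefiniteness checked by eigenvalue inspection. The sketch below uses hypothetical system data and obtains a boundary-feasible \(P_i\) from the discrete Lyapunov equation at the vertex \(\alpha_i=1\) (so the first LMI holds with equality); it is not the trace-minimizing solution discussed next:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical data (n = 2, m = 1), not the chapter's example
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-10.0, -5.0]])
Q = np.eye(2)
R = np.array([[0.01]])
sqrtQ, sqrtR = np.linalg.cholesky(Q).T, np.linalg.cholesky(R).T

def lmi_block(P, Y):
    """Assemble the block matrix of condition (4.81) for the gain Y."""
    Ac = A + B @ Y
    n, m = A.shape[0], B.shape[1]
    return np.block([
        [P,         Ac.T @ P,         sqrtQ.T,          (sqrtR @ Y).T],
        [P @ Ac,    P,                np.zeros((n, n)), np.zeros((n, m))],
        [sqrtQ,     np.zeros((n, n)), np.eye(n),        np.zeros((n, m))],
        [sqrtR @ Y, np.zeros((m, n)), np.zeros((m, n)), np.eye(m)],
    ])

# P from the Lyapunov equation Ac^T P Ac - P = -(Q + K^T R K) at alpha = 1
Ac = A + B @ K
P = solve_discrete_lyapunov(Ac.T, Q + K.T @ R @ K)

eigmin = np.linalg.eigvalsh(lmi_block(P, K)).min()
assert eigmin > -1e-6   # PSD up to rounding: the vertex LMI holds
```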

Condition (4.81) is linear with respect to the matrix \(P_i\). One way to calculate \(P_i\) is to solve the following LMI problem,

$$ \min\limits _{P_i}\bigl\{ \operatorname{trace}(P_i)\bigr\} $$
(4.82)

subject to constraint (4.81).

Once the matrices \(P_i\), \(i=2,3,\ldots,s\), are computed, they can be used in practice for real-time control based on the following algorithm, which is formulated as an optimization problem of low structural complexity. At each time instant, for the current state x, minimize on-line the quadratic cost function,

$$ \min\limits _{r_i,\lambda_i}\Biggl\{ \sum\limits _{i=2}^sr_i^TP_ir_i + \sum\limits _{i=2}^s\lambda_i^2 \Biggr\} $$
(4.83)

subject to the linear constraints

$$\left \{ \begin{aligned} &F_o^{(i)}r_i \leq\lambda_i g_o^{(i)},\quad \forall i=1,2,\ldots,s,\\ &\displaystyle \sum\limits_{i=1}^sr_i = x,\\ &\displaystyle \sum\limits_{i=1}^s\lambda_i = 1,\\ &\lambda_i \geq0,\quad \forall i=1,2,\ldots,s \end{aligned} \right . $$
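This program is a quadratic program with linear constraints. A minimal sketch with `scipy.optimize` and purely hypothetical data (s = 2, box-shaped sets in place of the chapter's invariant sets, \(P_2 = I\)); a dedicated QP solver would be used in a real implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sets: Omega_1 = {|x|_inf <= 1}, Omega_2 = {|x|_inf <= 3}
F = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
g1, g2 = np.ones(4), 3.0 * np.ones(4)
P2 = np.eye(2)
x = np.array([2.0, 0.0])            # current state, outside Omega_1

# Decision vector z = [r1 (2), r2 (2), lam2]; lam1 = 1 - lam2
cost = lambda z: z[2:4] @ P2 @ z[2:4] + z[4] ** 2
cons = [
    {"type": "ineq", "fun": lambda z: (1 - z[4]) * g1 - F @ z[0:2]},  # r1 in lam1*Omega_1
    {"type": "ineq", "fun": lambda z: z[4] * g2 - F @ z[2:4]},        # r2 in lam2*Omega_2
    {"type": "eq",   "fun": lambda z: z[0:2] + z[2:4] - x},           # r1 + r2 = x
]
z0 = np.concatenate([np.zeros(2), x, [1.0]])  # feasible start: lam2 = 1
res = minimize(cost, z0, constraints=cons,
               bounds=[(None, None)] * 4 + [(0.0, 1.0)])
print(res.x[4])   # optimal interpolating coefficient lam2, here 0.5
```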

Theorem 4.12

The control law (4.73), (4.74), (4.83) guarantees recursive feasibility and asymptotic stability of the closed loop system for all initial states \(x(0)\in\varOmega_s\).

Proof

The proof follows the same lines as that of Theorem 4.11 and is therefore omitted here. □

Example 4.5

Consider again the system in Example 4.1 with the same state and control constraints. Two gain matrices are chosen as,

$$ \left \{ \begin{aligned} &K_1 = [-0.9500 \quad {-}1.1137],\\ &K_2 = [-0.4230 \quad {-}2.0607] \end{aligned} \right . $$
(4.84)

Using Theorem 2.3, matrix H 2 is computed as,

$$ H_2 = [-0.0669 \quad {-}0.2875] $$
(4.85)

The invariant sets \(\varOmega_1\) and \(\varOmega_{s}^{H_{2}}\) are constructed, respectively, for the controllers \(u=K_1x\) and \(u = \operatorname{sat}(K_{2}x)\); see Fig. 4.21(a). Figure 4.21(b) shows state trajectories for different initial conditions.

Fig. 4.21
figure 24

Feasible invariant sets and state trajectories of the closed loop system for Example 4.5

The sets \(\varOmega_1\) and \(\varOmega_{s}^{H_{2}}\) are presented in minimal normalized half-space representation as,

$$\varOmega_1 = \left \{x \in\mathbb{R}^2: \left [ \begin{array}{c@{\quad}c} 0.3919& -0.9200\\ -0.3919& 0.9200\\ -0.6490& -0.7608\\ 0.6490& 0.7608 \end{array} \right ]x \leq \left [ \begin{array}{r} 1.4521\\ 1.4521\\ 0.6831\\ 0.6831 \end{array} \right ] \right \} $$
$$\varOmega_s^{H_2} = \left \{x \in\mathbb{R}^2: \left [ \begin{array}{c@{\quad}c} -0.0170& -0.9999\\ 0.0170& 0.9999\\ -0.0386& -0.9993\\ 0.0386& 0.9993\\ -0.0610& -0.9981\\ 0.0610& 0.9981\\ -0.0850& -0.9964\\ 0.0850& 0.9964\\ -0.1117& -0.9937\\ 0.1117& 0.9937\\ -0.1425& -0.9898\\ 0.1425& 0.9898\\ 0.7948& 0.6069\\ -0.7948& -0.6069\\ -0.1796& -0.9837\\ 0.1796& 0.9837\\ 1.0000& 0\\ -1.0000& 0\\ -0.2266& -0.9740\\ 0.2266& 0.9740 \end{array} \right ]x \leq \left [ \begin{array}{rr} 3.4795\\ 3.4795\\ 3.2732\\ 3.2732\\ 3.1299\\ 3.1299\\ 3.0449\\ 3.0449\\ 3.0182\\ 3.0182\\ 3.0552\\ 3.0552\\ 8.5177\\ 8.5177\\ 3.1696\\ 3.1696\\ 10.0000\\ 10.0000\\ 3.3878\\ 3.3878 \end{array} \right ] \right \} $$

With the weighting matrices \(Q_2=I\) and \(R_2=0.001\), solving the LMI problem (4.82) yields,

$$P_2 = \left [ \begin{array}{c@{\quad}c} 5.4929& 9.8907\\ 9.8907& 104.1516 \end{array} \right ] $$

For the initial condition x(0)=[−9.79 −1.2]T, Fig. 4.22 presents the state and input trajectories for the interpolating controller (solid blue) and for the saturated controller \(u = \operatorname{sat}(K_{2}x)\) (dashed red), which is the controller corresponding to the set \(\varOmega_{s}^{H_{2}}\). The interpolating coefficient \(\lambda_{2}^{*}\) and the objective function, which serves as a Lyapunov function, are shown in Fig. 4.23.

Fig. 4.22
figure 25

State and input trajectories of the closed loop system as functions of time for Example 4.5 for the interpolating controller (solid) and for the saturated controller \(u = \operatorname{sat}(K_{2}x)\) (dashed)

Fig. 4.23
figure 26

Interpolating coefficient \(\lambda_{2}^{*}\) and Lyapunov function as functions of time for Example 4.5

7 Convex Hull of Ellipsoids

For high dimensional systems, the polyhedral-based interpolation approaches in Sects. 4.2, 4.3, 4.4, 4.5, 4.6 might be impractical due to the huge number of vertices or half-spaces in the representation of the polyhedral sets. In that case, ellipsoids might be a suitable class of sets for interpolation.

Note that the idea of using ellipsoids for constrained control systems is well known: for time-invariant linear continuous-time systems, see [56], and for time-invariant linear discrete-time systems, see [10]. In these papers, a method was proposed to construct a continuous control law, based on a set of linear control laws, that renders the convex hull of an associated set of invariant ellipsoids invariant. However, these results do not allow imposing a priority among the control laws.

In this section, an interpolation using a set of saturated controllers and their associated set of invariant ellipsoids is presented. The main contribution with respect to [10, 56] is to provide a new type of controller, one based on interpolation.

It is assumed that a set of asymptotically stabilizing saturated controllers \(u = \operatorname{sat}(K_{i}x)\) is available such that the corresponding ellipsoidal invariant sets \(E(P_i)\)

$$ E(P_i) = \bigl\{ x \in\mathbb{R}^n: x^TP_i^{-1}x \leq1 \bigr\} $$
(4.86)

are non-empty for \(i=1,2,\ldots,s\). Recall that for all \(x(k)\in E(P_i)\), it follows that \(\operatorname{sat}(K_{i}x) \in U\) and \(x(k+1) = Ax(k) + B\operatorname{sat}(K_{i}x(k)) \in X\). Denote by \(\varOmega_{E} \subset\mathbb{R}^{n}\) the convex hull of the sets \(E(P_i)\), \(i=1,2,\ldots,s\). It follows that \(\varOmega_E \subseteq X\), since X is convex and \(E(P_i)\subseteq X\), \(i=1,2,\ldots,s\).

Any state x(k)∈Ω E can be decomposed as,

$$ x(k) = \sum\limits _{i=1}^s \lambda_i(k)\widehat{x}_i(k) $$
(4.87)

where \(\widehat{x}_{i}(k) \in E(P_{i})\) and \(\lambda_i(k)\) are interpolating coefficients that satisfy

$$\sum\limits _{i=1}^s\lambda_i(k) = 1, \quad\lambda_i(k) \geq0 $$

Consider the following control law,

$$ u(k) = \sum\limits _{i=1}^s \lambda_i(k)\operatorname{sat}\bigl(K_i\widehat{x}_i(k) \bigr) $$
(4.88)

where \(\operatorname{sat}(K_{i}\widehat{x}_{i}(k))\) is the saturated control law in E(P i ).
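A minimal sketch of evaluating this control law, with hypothetical gains, decomposition points, and a symmetric input bound \(|u| \leq u_{\max}\) (all assumed for illustration):

```python
import numpy as np

u_max = 2.0                                  # hypothetical input bound U = {|u| <= 2}
K = [np.array([[-1.0, -1.2]]), np.array([[-0.4, -2.0]])]   # toy gains, s = 2
x_hat = [np.array([0.5, 0.3]), np.array([-1.0, 1.5])]      # decomposition points
lam = [0.7, 0.3]                             # interpolating coefficients, sum to 1

sat = lambda v: np.clip(v, -u_max, u_max)    # componentwise saturation
u = sum(l * sat(Ki @ xi) for l, Ki, xi in zip(lam, K, x_hat))

# a convex combination of admissible inputs stays admissible
assert np.all(np.abs(u) <= u_max)
```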

Theorem 4.13

The control law (4.87), (4.88) guarantees recursive feasibility for all initial conditions \(x(0)\in\varOmega_E\).

Proof

One has to prove that u(k)∈U and x(k+1)=Ax(k)+Bu(k)∈Ω E for all x(k)∈Ω E . For the input constraints: since U is convex and, from equation (4.88), u(k) is a convex combination of the inputs \(\operatorname{sat}(K_{i}\widehat{x}_{i}(k)) \in U\), it follows that u(k)∈U.

For the state constraints, it holds that,

$$\begin{aligned} x(k+1) &= Ax(k) + Bu(k)\\ &= \displaystyle A\sum\limits_{i=1}^s\lambda_i(k)\widehat {x}_i(k) + B\sum\limits_{i=1}^s\lambda_i(k)\operatorname{sat}(K_i\widehat{x}_i(k)) \\ &= \displaystyle \sum\limits_{i=1}^s\lambda_i(k)(A\widehat{x}_i(k) + B\operatorname{sat}(K_i\widehat {x}_i(k))) \end{aligned} $$

One has \(A\widehat{x}_{i}(k) + B\operatorname{sat}(K_{i}\widehat{x}_{i}(k)) \in E(P_{i}) \subseteq\varOmega_{E}\), i=1,2,…,s, which ultimately assures that x(k+1)∈Ω E . □

As in Sects. 4.5 and 4.6, the first, high gain controller will be used for performance, while the remaining low gain controllers will be used to enlarge the domain of attraction. For a given current state x, consider the following optimization problem,

$$ \lambda_i^* = \min\limits _{\widehat{x}_i,\lambda_i}\Biggl\{ \sum \limits _{i=2}^s\lambda_i\Biggr\} $$
(4.89)

subject to

$$\left \{ \begin{aligned} &\widehat{x}_i^TP_i^{-1}\widehat{x}_i \leq1,\quad \forall i=1,2,\ldots,s,\\ &\displaystyle \sum\limits_{i=1}^s\lambda_i\widehat{x}_i = x,\\ &\displaystyle \sum\limits_{i=1}^s\lambda_i = 1,\\ &\lambda_i \geq0,\quad \forall i=1,2,\ldots,s \end{aligned} \right . $$

Theorem 4.14

The control law (4.87), (4.88), (4.89) guarantees asymptotic stability for all initial states \(x(0)\in\varOmega_E\).

Proof

Consider the following non-negative function,

$$ V(x) = \sum\limits _{i=2}^s\lambda_i^*(k) $$
(4.90)

for all \(x\in\varOmega_E \setminus E(P_1)\); \(V(x)\) is a Lyapunov function candidate.

For any \(x(k)\in\varOmega_E \setminus E(P_1)\), by solving the optimization problem (4.89) and by applying (4.87), (4.88), one obtains

$$\left \{ \begin{aligned} &\displaystyle x(k) = \sum\limits_{i=1}^s\lambda_i^*(k)\widehat{x}_i^*(k)\\ &\displaystyle u(k) = \sum\limits_{i=1}^s\lambda_i^*(k)\operatorname{sat}(K_i\widehat{x}_i^*(k)) \end{aligned} \right . $$

It follows that,

$$\begin{aligned} x(k+1) &= Ax(k) + Bu(k)= \displaystyle A\sum\limits_{i=1}^s\lambda_i^*(k)\widehat {x}_i^*(k) + B\sum\limits_{i=1}^s\lambda_i^*(k)\operatorname{sat}(K_i\widehat {x}_i^*(k))\\ &=\displaystyle \sum\limits_{i=1}^s\lambda_i^*(k)\widehat{x}_i(k+1) \end{aligned} $$

where \(\widehat{x}_{i}(k+1) = A\widehat{x}_{i}^{*}(k) + B\operatorname{sat}(K_{i}\widehat {x}_{i}^{*}(k)) \in E(P_{i})\), \(\forall i=1,2,\ldots,s\). Hence \(\lambda _{i}^{*}(k)\), \(\forall i=1,2,\ldots,s\), is a feasible solution of (4.89) at time k+1.

At time k+1, by solving the optimization problem (4.89), one obtains

$$x(k+1) = \sum\limits _{i=1}^s\lambda_i^*(k+1) \widehat{x}_i^*(k+1) $$

where \(\widehat{x}_{i}^{*}(k+1) \in E(P_{i})\). It follows that \(\sum_{i=2}^{s}\lambda_{i}^{*}(k+1) \leq\sum_{i=2}^{s}\lambda_{i}^{*}(k)\) and V(x) is a non-increasing function.

The contractive property of the ellipsoids \(E(P_i)\), \(i=1,2,\ldots,s\), assures that there is no initial condition \(x(0)\in\varOmega_E \setminus E(P_1)\) such that \(\sum_{i=2}^{s}\lambda_{i}^{*}(k+1) = \sum_{i=2}^{s}\lambda _{i}^{*}(k)\) for all sufficiently large and finite k. It follows that \(V(x)=\sum_{i=2}^{s}\lambda_{i}^{*}(k)\) is a Lyapunov function for all \(x\in\varOmega_E \setminus E(P_1)\).

The proof is completed by noting that inside \(E(P_1)\) one has \(\lambda_1=1\) and \(\lambda_i=0\), \(i=2,3,\ldots,s\); there the saturated controller \(u = \operatorname{sat}(K_{1}\widehat{x})\) is contractive, and thus the control laws (4.87), (4.88), (4.89) assure asymptotic stability for all \(x\in\varOmega_E\). □

Denote \(r_{i} = \lambda_{i}\widehat{x}_{i}\). Since \(\widehat{x}_{i} \in E(P_{i})\), it follows that \(r_{i} \in \lambda_{i}E(P_{i})\), and hence \(r_{i}^{T}P_{i}^{-1}r_{i} \leq\lambda_{i}^{2}\). The non-linear optimization problem (4.89) can be rewritten as,

$$ \min\limits _{r_i,\lambda_i}\Biggl\{ \sum\limits _{i=2}^s \lambda_i\Biggr\} $$
(4.91)

subject to

$$\left \{ \begin{aligned} &r_i^TP_i^{-1}r_i \leq\lambda_i^2,\quad \forall i=1,2,\ldots,s,\\ &\displaystyle \sum\limits_{i=1}^sr_i = x,\\ &\displaystyle \sum\limits_{i=1}^s\lambda_i = 1,\quad \lambda_i \geq0,\quad \forall i=1,2,\ldots,s \end{aligned} \right . $$
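This convex program can be sketched directly with a generic NLP solver on toy data (two balls as the ellipsoids, s = 2, so the optimum \(\lambda_2^* = 0.5\) is easy to verify by hand); a dedicated SOCP or SDP solver would be used in practice:

```python
import numpy as np
from scipy.optimize import minimize

# Toy ellipsoids: E(P1) = unit ball, E(P2) = ball of radius 2
P1_inv, P2_inv = np.eye(2), np.eye(2) / 4.0
x = np.array([1.5, 0.0])        # outside E(P1) but inside the convex hull

# Decision vector z = [r1 (2), r2 (2), lam1, lam2]
cost = lambda z: z[5]           # minimize lam2 (lam1 carries no penalty)
cons = [
    {"type": "ineq", "fun": lambda z: z[4] ** 2 - z[0:2] @ P1_inv @ z[0:2]},
    {"type": "ineq", "fun": lambda z: z[5] ** 2 - z[2:4] @ P2_inv @ z[2:4]},
    {"type": "eq",   "fun": lambda z: z[0:2] + z[2:4] - x},
    {"type": "eq",   "fun": lambda z: z[4] + z[5] - 1.0},
]
z0 = np.array([0.3, 0.0, 1.2, 0.0, 0.3, 0.7])   # feasible starting point
res = minimize(cost, z0, constraints=cons,
               bounds=[(None, None)] * 4 + [(0.0, 1.0)] * 2)
print(round(res.x[5], 3))   # ≈ 0.5, the smallest feasible lam2
```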

By using the Schur complements, (4.91) is converted into the following LMI problem,

$$ \min\limits _{r_i,\lambda_i}\Biggl\{ \sum\limits _{i=2}^s \lambda_i\Biggr\} $$
(4.92)

subject to

$$\left \{ \begin{aligned} &\left [ \begin{array}{c@{\quad}c}\lambda_i& r_i^T\\ r_i& \lambda_iP_i \end{array} \right ] \succeq0,\quad \forall i=1,2,\ldots,s, \\ &\displaystyle \sum\limits _{i=1}^sr_i = x, \\ &\displaystyle \sum\limits _{i=1}^s\lambda_i = 1,\,\, \lambda_i \geq0,\quad \forall i=1,2,\ldots,s \end{aligned} \right . $$
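The Schur-complement equivalence behind this constraint — for \(\lambda_i > 0\), the block matrix is positive semidefinite exactly when \(r_i^TP_i^{-1}r_i \leq \lambda_i^2\) — can be spot-checked numerically with arbitrary toy values:

```python
import numpy as np

P = np.array([[2.0, 0.0], [0.0, 1.0]])
lam = 0.5

def lmi_psd(r):
    """Positive semidefiniteness of the block [[lam, r^T], [r, lam*P]]."""
    M = np.block([[np.array([[lam]]), r[None, :]],
                  [r[:, None],        lam * P]])
    return np.linalg.eigvalsh(M).min() >= -1e-12

def in_scaled_ellipsoid(r):
    """Direct test of r^T P^{-1} r <= lam^2."""
    return r @ np.linalg.inv(P) @ r <= lam ** 2

# one point inside lam*E(P), one outside: both tests must agree
for r in [np.array([0.6, 0.0]), np.array([0.8, 0.0])]:
    assert lmi_psd(r) == in_scaled_ellipsoid(r)
```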

Remark 4.12

It is worth noticing that for all \(x(k)\in E(P_1)\), the LMI problem (4.92) has the trivial solution,

$$\lambda_i = 0, \quad\forall i=2,3,\ldots,s $$

Hence \(\lambda_1=1\) and \(x = \widehat{x}_{1}\). In this case, the interpolating controller reduces to the saturated controller \(u = \operatorname{sat}(K_{1}x)\).

Example 4.6

Consider again the system in Example 4.1 with the same state and control constraints. Three gain matrices are chosen as,

$$ \left \{ \begin{aligned} &K_1 = [-0.9500 \quad {-}1.1137],\\ &K_2 = [-0.4230 \quad {-}2.0607],\\ &K_3 = [-0.5010 \quad {-}2.1340] \end{aligned} \right . $$
(4.93)

By solving the LMI problem (2.55), three invariant ellipsoids \(E(P_1)\), \(E(P_2)\), \(E(P_3)\) are computed, corresponding to the saturated controllers \(u = \operatorname{sat}(K_{1}x)\), \(u = \operatorname{sat}(K_{2}x)\) and \(u = \operatorname{sat}(K_{3}x)\). The sets \(E(P_1)\), \(E(P_2)\), \(E(P_3)\) and their convex hull are depicted in Fig. 4.24(a). Figure 4.24(b) shows state trajectories for different initial conditions.

The matrices \(P_1\), \(P_2\) and \(P_3\) are,

$$\begin{array}{c} P_1 = \left [ \begin{array}{c@{\quad}c} 42.27& 2.82\\ 2.82& 4.80 \end{array} \right ], \quad\ P_2 = \left [ \begin{array}{c@{\quad}c} 100.00& -3.10\\ -3.10& 8.12 \end{array} \right ], \quad\ P_3 = \left [ \begin{array}{c@{\quad}c} 100.00& -19.40\\ -19.40& 9.54 \end{array} \right ] \end{array} $$
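As a quick check with these matrices, the level \(x^TP_i^{-1}x\) shows that the initial condition x(0)=[−0.64 −2.8]T of the simulation below lies inside \(E(P_2)\) (barely) but outside \(E(P_1)\) and \(E(P_3)\), hence inside the convex hull \(\varOmega_E\):

```python
import numpy as np

# P matrices as given above (rounded to two decimals)
P1 = np.array([[42.27, 2.82], [2.82, 4.80]])
P2 = np.array([[100.00, -3.10], [-3.10, 8.12]])
P3 = np.array([[100.00, -19.40], [-19.40, 9.54]])
x0 = np.array([-0.64, -2.8])

level = lambda P: x0 @ np.linalg.solve(P, x0)   # x0^T P^{-1} x0
print([round(level(P), 3) for P in (P1, P2, P3)])  # → [1.658, 0.995, 1.484]

assert level(P2) <= 1.0 and level(P1) > 1.0 and level(P3) > 1.0
```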

For the initial condition x(0)=[−0.64 −2.8]T and using Algorithm 4.4, Fig. 4.25 presents the state and input trajectories and the sum \((\lambda_{2}^{*} + \lambda_{3}^{*})\). As expected, the sum \((\lambda_{2}^{*} + \lambda_{3}^{*})\), i.e., the Lyapunov function, is positive and non-increasing.

Algorithm 4.4
figure 27

Interpolating control—Convex hull of ellipsoids

Fig. 4.24
figure 28

Invariant ellipsoids and state trajectories of the closed loop system for Example 4.6

Fig. 4.25
figure 29

State trajectory, input trajectory and the sum \((\lambda_{2}^{*} + \lambda_{3}^{*})\) of the closed loop system for Example 4.6