
In this chapter some basic results concerning switching and switched systems are reported. We wish to tell the reader immediately that this chapter has been written in a period of intensive research, so that, quite differently from the previous sections, many new results are expected to appear.

There are several prestigious publications which provide complete surveys and to which the reader is referred for a comprehensive treatment of the topic. In particular, it is mandatory to mention the books [Lib03, Joh03, SG05] and the surveys [LA09, SWM+07, DBPL00, Col09]. In this chapter, we will briefly introduce the concept of hybrid systems, namely dynamical systems which include both continuous and logic variables, and then we will immediately consider the case of systems which can undergo switching, a special subclass of hybrid systems.

The nomenclature proposed in this chapter about switching and switched systems is not universal. Still, to keep the exposition simple, we will refer to “switching” systems when the commutation can be arbitrary and to “switched” systems when the commutation is controlled. There are, quite obviously, several intermediate situations: for instance, the case in which switching is state-dependent (as for the bouncing ball), the case in which switching is arbitrary but subject to a dwell time, and other “mixed cases” in which some logic variables are controlled and others are not.

9.1 Hybrid and switching systems

In this section, a field in which set-theoretic considerations have a certain interest is briefly presented, together with some related problems. A hybrid system is a dynamic system which includes both discrete and continuous dynamics. A simple model (although not the most general one) of a hybrid system is given by

$$\displaystyle\begin{array}{rcl} \dot{x}(t)& =& f_{x}(x(t),u(t),w(t),q(t)){}\end{array}$$
(9.1)
$$\displaystyle\begin{array}{rcl} q(t)& =& f_{q}(x(t),u(t),w(t),q^{-}(t)){}\end{array}$$
(9.2)

where the variables x, u, and w have the usual meaning, while q(t) assumes its values in \(\mathcal{Q}\), a discrete and finite set. Without restriction it is assumed that

$$\displaystyle{\mathcal{Q} =\{ 1,2,\ldots,r\}}$$

This class of systems is often encountered in many applications. The first equation (9.1) is a standard differential equation with the new input q, which can assume discrete values only. It can be interpreted as a set of r differential equations, each associated with a value of q. The second equation (9.2), the novelty in this book, expresses the commutation law among the discrete values of \(\mathcal{Q}\). Any possible commutation depends on the current continuous-time state, on the control, on the disturbance and on the last value \(q^{-}(t)\) assumed by the discrete variable.

Example 9.1 (Oven in on–off mode).

Let \(\mathcal{Q} =\{ 0,1\}\), let \(\bar{x}\) be a desired temperature and let \(x^{+} =\bar{ x}+\varepsilon\) and \(x^{-} =\bar{ x}-\varepsilon\), where \(\varepsilon > 0\) is a tolerance. Consider the system

$$\displaystyle\begin{array}{rcl} \dot{x}(t)& =& -\alpha x(t) + q(t)u(t) + u_{0} {}\\ q(t)& =& \left \{\begin{array}{lll} 0&\mbox{ if}&x \geq x^{+} \\ 1&\mbox{ if}&x \leq x^{-} \\ 0&\mbox{ if}&x^{-} < x(t) < x^{+}\ \ \mbox{ and}\ q(t^{-}) = 0 \\ 1&\mbox{ if}&x^{-} < x(t) < x^{+}\ \ \mbox{ and}\ q(t^{-}) = 1 \end{array} \right.{}\\ \end{array}$$

with α > 0, where \(u_{0}\) is a constant signal representing the heat introduced by the external environment, while u(t) is the heat supplied electrically. Assume for brevity that \(u(t) =\bar{ u}\) is constant, so that the oven works only in on–off mode. This is perhaps one of the most popular control systems. The oven works properly if the interval characterized by the extremal steady-state temperatures \(\bar{x}_{max} = (\bar{u} + u_{0})/\alpha\) and \(\bar{x}_{min} = u_{0}/\alpha\) includes the interval \([x^{-},x^{+}]\) in its interior

$$\displaystyle{\bar{x}_{min} < x^{-} < x^{+} <\bar{ x}_{ max}}$$

We assume that this condition is granted.

We do not analyze the system, which is an elementary exercise and leads to the conclusion that the behavior is that in Figure 9.1, but we just point out some facts. If we set \(\varepsilon = 0\), the temperature reaches the desired value in finite time. However, due to the discontinuity, we have (in theory) infinite-frequency switching between q = 0 and q = 1. This is not suitable for the application. First of all, infinite frequency is not possible due to the hysteresis of real switches. Second, a hysteresis is typically introduced on purpose to avoid high-frequency commutation, and this is actually the reason why the region \([x^{-},x^{+}]\) is introduced, along with a discrete dynamics for the system.

Fig. 9.1 The oven switching behavior
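As a minimal illustration of this hysteresis mechanism, the following MATLAB sketch simulates the oven in on–off mode by forward Euler integration. All numerical values (α, \(u_{0}\), \(\bar{u}\), \(\bar{x}\), ε, the initial temperature and the step size) are assumptions chosen for illustration and are not taken from the example.

% Hysteretic on-off oven of Example 9.1: forward Euler simulation sketch.
alpha = 0.1; u0 = 1; ubar = 4;         % assumed plant and input data
xbar  = 30; eps_ = 2;                  % desired temperature and tolerance
xp = xbar + eps_;  xm = xbar - eps_;   % switching thresholds
dt = 0.01; N = 20000;                  % step size and number of steps
x = 15; q = 1;                         % assumed initial temperature and valve state
X = zeros(1,N); Q = zeros(1,N);
for k = 1:N
    if x >= xp
        q = 0;                         % too hot: heater off
    elseif x <= xm
        q = 1;                         % too cold: heater on
    end                                % otherwise keep the previous value of q
    x = x + dt*(-alpha*x + q*ubar + u0);
    X(k) = x;  Q(k) = q;
end
plot((1:N)*dt, X)                      % temperature profile as in Fig. 9.1

With these assumed values \(\bar{x}_{min} = 10\) and \(\bar{x}_{max} = 50\), so the condition \(\bar{x}_{min} < x^{-} < x^{+} <\bar{ x}_{max}\) is satisfied and the temperature oscillates between the two thresholds.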

In the previous example, the logic part has its own dynamics, in the sense that in the intermediate region \([x^{-},x^{+}]\) the logic variable q(t) is not uniquely determined by the state, but depends on its own previous value \(q(t^{-})\). We let the reader note that dynamic systems in which some logic variables depend on the state (or on an output) variable have already been encountered in the present book: they are those obtained by the implementation of the Gutman and Cwikel control (see (4.39) at page 159) [GC86a, Bla99], which produces a closed-loop system which is piecewise-linear, hence characterized by a state-dependent switching.

Example 9.2 (Bouncing ball).

To provide an example of purely state dependent switching, a simplified model of the bouncing ball is considered.

The vertical motion of a ball bouncing on a plane has two phases (see Fig. 9.2): the gravitational one G, in which the ball is subject to the gravity force, and the elastic one E, in which the ball is in contact with the plane. Denoting by r the radius of the ball and by y(t) the level of the barycenter, it is possible to write two distinct equations:

$$\displaystyle{ \begin{array}{llll} \ddot{y}(t) = -g &\ \ \mbox{ if}\ \ &y(t) > r&\ \ \mbox{ Gravitational} \\ \ddot{y}(t) = -g + k(r - y(t))&\ \ \mbox{ if}\ \ &y(t) \leq r&\ \ \mbox{ Elastic}\end{array} }$$
(9.3)

This is the typical case in which the logic variable, G or E, depends purely on the output.

Fig. 9.2 The bouncing ball: G gravitational phase, E elastic phase.

This system is quite easy to analyze. First note that the system can be written as

$$\displaystyle{\ddot{y}(t) = -g + k(y)(r - y(t))}$$

where k(y) = 0 for y > r and k(y) = k for y ≤ r. Indeed, since no dissipative terms were considered, its mechanical energy is preserved

$$\displaystyle{\varPsi (y,\dot{y}) = \frac{1} {2}\dot{y}^{2} + yg + \frac{1} {2}k(y)(r - y)^{2}}$$

(note that this is a differentiable function since \(d/dy[k(y)(r - y)^{2}]\) is continuous). It is also immediately seen that this is a positive definite function having a unique minimum in the equilibrium \(\bar{y} = r - g/k\), \(\dot{y} = 0\). An example of system trajectory is reported in Fig. 9.3.

Fig. 9.3 The trajectories in the state space
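A quick numerical check of the energy argument can be carried out as follows; the sketch integrates (9.3) with MATLAB's ode45 and verifies that \(\varPsi (y,\dot{y})\) remains (approximately) constant along the trajectory. The values of g, k, r and the initial condition are assumptions for illustration.

% Bouncing ball (9.3): numerical check that the mechanical energy is preserved.
g = 9.81; k = 1e4; r = 0.1;                      % assumed data
f = @(t,y) [y(2); -g + k*(y(1) <= r).*(r - y(1))];
opts = odeset('RelTol',1e-9,'AbsTol',1e-12);
[t,Y] = ode45(f, [0 3], [1; 0], opts);           % drop from height 1 at rest
Psi = 0.5*Y(:,2).^2 + g*Y(:,1) + ...
      0.5*k*(Y(:,1) <= r).*(r - Y(:,1)).^2;      % energy along the trajectory
max(abs(Psi - Psi(1)))                           % small, up to integration error
plot(Y(:,1), Y(:,2))                             % phase portrait, compare Fig. 9.3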

The previous example falls in the category of piecewise affine systems. A piecewise affine system has the form

$$\displaystyle{\dot{x}(t) = A_{i}x(t) + b_{i},\ \ \ \ \mbox{ for}\ \ \ x \in \mathcal{S}_{i}}$$

where the family of sets \(\mathcal{S}_{i}\) forms a partition of the state space. An interesting case is that in which the sets \(\mathcal{S}_{i}\) are simplices [HvS04, BR06, Bro10]. Simplices have the nice property of being quite flexible, so that non-trivial regions of the state space can be reasonably covered. Moreover, a piecewise linear function defined on a simplex is uniquely identified by the values at the vertices. So, if a region is covered by simplices having pairwise n vertices in common, it is possible to define a continuous function by choosing the values at the vertices. We will use this property later, applied to the relatively optimal control technique.
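The last property can be made concrete in a few lines: the sketch below, with an arbitrarily chosen planar simplex and vertex values (both assumptions for illustration), evaluates the corresponding affine function at a point through its barycentric coordinates. Two adjacent simplices sharing n vertices automatically agree on the common facet, which is what yields a continuous piecewise linear function.

% Affine function on a simplex determined by its values at the vertices.
V   = [0 0; 1 0; 0 1]';            % 2x3 matrix of vertices (assumed simplex)
psi = [1; 2; 4];                   % values assigned at the three vertices
x   = [0.3; 0.3];                  % evaluation point inside the simplex
lambda = [V; ones(1,3)] \ [x; 1];  % barycentric coordinates: V*lambda = x, sum = 1
value  = psi'*lambda               % interpolated value, affine in x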

The next scholastic example aims at illustrating that even the analysis of simple hybrid systems is a challenging problem.

Example 9.3.

Consider the discrete-time hybrid system

$$\displaystyle{x(t + 1) = A_{q}x(t)}$$

with \(q \in \mathcal{Q} =\{ 1,2\}\) and

$$\displaystyle{A_{1} = \left [\begin{array}{cc} 0.8&0.9\\ 0 &0.9 \end{array} \right ]\,\,\,\,A_{2} = \left [\begin{array}{cc} 0.8& 0\\ 1 &0.9 \end{array} \right ]}$$

whose generating matrices are both Schur stable, having both eigenvalues inside the unit disc.

If the equation for the discrete variable q is

$$\displaystyle{q(t) = \left \{\begin{array}{l} 1,\ \mbox{ if }\ \left [\begin{array}{cc} 1& - 1 \end{array} \right ]x > 1\\ 2,\ \mbox{ if } \ \left [\begin{array}{cc} 1 & - 1 \end{array} \right ] x \leq 1 \end{array} \right.}$$

the set of initial conditions \(\|x_{0}\|_{1} \leq 1\) starting from which the evolution is driven to zero is the dark area depicted in Figure 9.4, whereas if the discrete variable mapping is

$$\displaystyle{q(t) = \left \{\begin{array}{l} 1,\ \mbox{ if }\ \left [\begin{array}{cc} 1& - 0.2 \end{array} \right ]x > 1\\ 2,\ \mbox{ if } \ \left [\begin{array}{cc} 1 & - 0.2 \end{array} \right ] x \leq 1 \end{array} \right.}$$

the evolution starting from every initial condition \(\|x(0)\|_{1} < 1\) converges to the origin, as one can check by running the code

Fig. 9.4 Initial condition set for Example 9.3

A1 = [0.8 0.9; 0 0.9];   A2 = [0.8 0; 1 0.9];   % matrices of Example 9.3
c  = [1 -1];              % use c = [1 -0.2] for the second switching rule
x0 = [0.5; -0.5];         % any initial condition with norm(x0,1) <= 1
for i = 1:1000
    if c*x0 > 1
        x0 = A1*x0;
    else
        x0 = A2*x0;
    end
end
x0                        % final value

for different values of x(0) and evaluating the final value.

Another example of a linear switching system can be found in the framework of networked control systems.

Example 9.4 (Networked control system).

Consider the problem of controlling a strictly proper n-dimensional discrete-time linear time-invariant (LTI) plant P = {A, B, C}:

$$\displaystyle{ \left \{\begin{array}{rcl} x(t + 1)& =&Ax(t) + Bu(t)\\ y(t)&=&Cx(t - q(t)) \end{array} \right. }$$
(9.4)

where \(x(t) \in \mathrm{I\!R}^{n}\), \(u(t) \in \mathrm{I\!R}^{m}\), and \(y(t) \in \mathrm{I\!R}^{p}\), and no delays or dropouts in the actuator channel, as depicted in Figure 9.5, are present. The system matrices can be thought of as obtained from a continuous-time plant controlled at a given sampling rate \(T_{c}\). The controller clock is synchronized with that of the sensor and the transmitted data are time stamped, so that the sensor-to-controller delay \(q(t) \in \{ 0,1,\ldots,N_{max}\}\) is known.

Fig. 9.5 Delay system

To recast such a system in a switching framework, first the system state is augmented so as to include delayed copies of the output, \(y_{i}(t) = Cx(t - i)\), as

$$\displaystyle{x_{e}(t) = \left [\begin{array}{c} x(t) \\ y_{1}(t)\\ \vdots \\ y_{N_{max}}(t) \end{array} \right ]}$$

and a time-varying output matrix is introduced to get the dynamic system

$$\displaystyle{ \begin{array}{rcl} x_{e}(t + 1)& =&\tilde{A}x_{e}(t) +\tilde{ B}u(t) \\ \tilde{y}(t)& =&\tilde{C}_{q(t)}x_{e}(t)\end{array} }$$
(9.5)

where

$$\displaystyle{ \begin{array}{rcl} \tilde{A}& =&\left [\begin{array}{ccc} A &0^{n\times (N_{max}-1)p}& 0^{n\times p} \\ C &0^{p\times (N_{max}-1)p}& 0^{p\times p} \\ 0^{(N_{max}-1)p\times n}& I^{(N_{max}-1)p} &0^{(N_{max}-1)p\times p} \end{array} \right ] \\ \tilde{B}& =&\left [\begin{array}{c} B \\ 0^{p\times m} \\ 0^{(N_{max}-1)p\times m} \end{array} \right ] \\ \tilde{C}_{0} & =&\left [\begin{array}{ccc} C&0^{p\times (N_{max}-1)p}&0^{p\times p} \end{array} \right ] \end{array} }$$
(9.6)

and, for \(1 \leq i \leq N_{max}\),

$$\displaystyle{ \begin{array}{rcl} \tilde{C}_{i}& =&\left [\begin{array}{ccc} 0^{p\times (n+(i-1)p)}&I^{p}&0^{p\times (N_{max}-i)p} \end{array} \right ]\end{array}. }$$
(9.7)

Note that when q(t) = 0 the augmented system output is nothing but the actual plant output, i.e., \(\tilde{y}(t) = y(t)\), whereas for q(t) ≥ 1 the augmented system output is the q(t)-step delayed version of the plant output, i.e., \(\tilde{y}(t) = y(t - q(t))\).
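A sketch of the augmentation (9.5)-(9.7) is reported below; the plant matrices and the maximum delay \(N_{max}\) are assumed data chosen only to illustrate the construction.

% Build the augmented switching system (9.5)-(9.7) from a plant (A,B,C).
n = 3; m = 1; p = 1; Nmax = 2;                          % assumed dimensions and delay
A = [0.5 0.1 0; 0 0.8 0.2; 0 0 0.7];                    % assumed plant data
B = [1; 0; 1];  C = [1 0 0];
At = [A,                   zeros(n,(Nmax-1)*p),  zeros(n,p);
      C,                   zeros(p,(Nmax-1)*p),  zeros(p,p);
      zeros((Nmax-1)*p,n), eye((Nmax-1)*p),      zeros((Nmax-1)*p,p)];
Bt = [B; zeros(p,m); zeros((Nmax-1)*p,m)];
Ct = cell(Nmax+1,1);
Ct{1} = [C, zeros(p,(Nmax-1)*p), zeros(p,p)];           % q = 0: current output
for i = 1:Nmax                                          % q = i: output delayed by i
    Ct{i+1} = [zeros(p,n+(i-1)*p), eye(p), zeros(p,(Nmax-i)*p)];
end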

In view of the above embedding, the problem of controlling the system in the presence of known delays in the sensing channel is recast into the problem of stabilizing the augmented switching system. This embedding is not new in this area and it has been widely used in conjunction with the theory of jump linear systems (see [ZSCH05, XHH00]).

We will come back to this problem later, in Section 9.6, when the stabilization problem of such a class of systems will be analyzed.

In the sequel, we do not consider hybrid systems in their most general form, but we rather consider special cases of systems that can switch (with controlled or uncontrolled switching) and the case in which the switching is state-dependent, but the switching law is imposed by a feedback control. For a more general view, the reader is referred to the specialized literature [Lib03, Joh03, SG05].

9.2 Switching and switched systems

Consider an autonomous system of the form

$$\displaystyle{ \dot{x}(t) = f(x(t),q(t)) }$$
(9.8)

where \(q(t) \in \mathcal{Q}\) and \(\mathcal{Q}\) is a finite set. We consider two slightly (but substantially) different definitions.

Definition 9.5 (Switching system).

The system is said to be switching if the signal q(t) is not controlled but exogenously determined. This corresponds to the choice \(q(t) = f_{q}(w(t)) \in \mathcal{Q}\) in (9.2).

Definition 9.6 (Switched system).

The system is said to be switched if the signal q(t) is controlled. This corresponds to the choice \(q(t) = f_{q}(u(t)) \in \mathcal{Q}\) in (9.2).

9.3 Switching systems

The analysis of a switching system is basically a robustness analysis problem already considered in the previous sections. In particular, a switching system is stable if and only if it admits a smooth Lyapunov function according to known converse results [Meı79, LSW96]. The special case of switching linear systems is included in the results in Subsection 7.3.2 (see Proposition 7.39), here reported in the switching framework for clarity of presentation.

Proposition 9.7.

The linear switching system

$$\displaystyle{ \dot{x}(t) = A_{q(t)}x(t) }$$
(9.9)

(or its discrete-time counterpart \(x(t + 1) = A_{q(t)}x(t)\) ) is asymptotically stable (equivalently exponentially stable) if and only if it admits a polyhedral norm as Lyapunov function.

The property holds also for norms of the type \(\|Fx\|_{2p}\) [MP86a, MP86b, MP86c]. Note in particular that the stability of the switching system is equivalent to the stability of the corresponding polytopic system.

Theorem 9.8 ([MP86a, MP86b, MP86c]).

The stability of (9.9) under arbitrary switching is equivalent to the robust stability of the associated polytopic system

$$\displaystyle{ \dot{x}(t) = \left [\sum _{q=1}^{r}\alpha _{ q}(t)A_{q}\right ]x(t),\ \ \ \sum _{q=1}^{r}\alpha _{ q}(t) = 1,\ \ \ \alpha _{q}(t) \geq 0 }$$
(9.10)

As a consequence, the stability of each of the single systems \(x(t + 1) = A_{q}x(t)\), which is necessary for the stability of switching systems, is not sufficient [Lib03]. In view of the theorem, not even the Hurwitz stability of all the matrices (assumed constant) in the convex hull is sufficient for switching stability.

There exists a simple case in which the frozen-time Hurwitz stability assures switching, hence robust, stability.

Proposition 9.9.

Assume that all the matrices \(A_{q}\) are symmetric. Then the following conditions are equivalent.

  • System (9.9) is stable under arbitrary switching.

  • System (9.9) is robustly stable.

  • All the matrices in \(conv\{A_{q},\ q = 1,\ldots,r\}\) are Hurwitz.

  • All the matrices \(A_{q}\), \(q = 1,\ldots,r\), are Hurwitz.

  • The system is quadratically stable, i.e. the matrices \(A_{q}\) share a common quadratic Lyapunov function.

Proof.

See Exercise 4.

Example 9.10.

Consider the two-tank system presented in Subsection 8.3.1, with state matrix

$$\displaystyle{A(\xi,\eta ) = \left [\begin{array}{cc} -\xi & \xi \\ \xi & - (\xi +\eta ) \end{array} \right ]}$$

Assume that the parameters can change in an on–off mode \(\xi \in \{\xi ^{-},\xi ^{+}\}\) and \(\eta \in \{\eta ^{-},\eta ^{+}\}\) with all values strictly positive. It is immediate that for fixed values the system is asymptotically stable.

One might be interested in understanding whether this system can be destabilized by switching, say whether an inexpert operator could destabilize the system by improperly changing the positions of the two valves which are on the duct between the two tanks and the duct after the second one (see Fig. 8.10).

The reply is obviously no, because the matrix is symmetric for any value of the parameters ξ and η and all the matrices are Hurwitz since the characteristic polynomial is \(p(s) = s^{2} + (2\xi +\eta )s+\xi \eta\).
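A quick numerical confirmation is sketched below under assumed values of the parameter bounds: random convex combinations of the four vertex matrices are sampled and their (real, since the matrices are symmetric) eigenvalues are checked to be negative, in agreement with Proposition 9.9.

% Two-tank switching check: symmetric vertices, Hurwitz convex hull.
xim = 0.5; xip = 2; etam = 0.3; etap = 1.5;        % assumed parameter bounds
Amat = @(xi,eta) [-xi, xi; xi, -(xi+eta)];
worst = -inf;
for trial = 1:1000
    c = rand(1,4); c = c/sum(c);                   % random convex combination
    A = c(1)*Amat(xim,etam) + c(2)*Amat(xim,etap) + ...
        c(3)*Amat(xip,etam) + c(4)*Amat(xip,etap);
    worst = max(worst, max(eig(A)));               % symmetric: real eigenvalues
end
worst                                              % negative: Hurwitz on the hull

Since every matrix in the hull is symmetric and Hurwitz, \(\varPsi (x) = x^{T}x\) is a common quadratic Lyapunov function.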

There are quite a few cases of switching systems for which Hurwitz stability of the vertices implies the stability of all the matrices of the convex hull. One such class, specifically that formed by planar positive systems, will be discussed later.

9.3.1 Switching systems: switching sequences and dwell time

The concept of dwell time, i.e., the time interval during which no transitions of a switching system can occur (say, the minimum amount of time the system “rests” in a given configuration), originates from the simple idea that a switching system composed of asymptotically stable systems exhibits an asymptotically stable behavior if the time interval between two switchings is sufficiently large. Quite clearly, the main focus of the research is directed towards the determination of the minimum dwell time. The interested reader is referred to [GC06, Col09] and the excellent survey [SWM+07].

Definition 9.11 (Dwell time).

The value τ > 0 is said to be the dwell time if the time instants \(t_{k}\) and \(t_{k+1}\) at which two consecutive switchings occur are such that

$$\displaystyle{t_{k+1} - t_{k} \geq \tau }$$

It is clear that, if all the systems are stable, τ has a stabilizing role. It is also straightforward to see that, if the system is stable with dwell time \(\tau_{1}\), then it is stable for any dwell time \(\tau_{2} > \tau_{1}\). The following proposition holds.

Proposition 9.12.

For any switching linear system with generating asymptotically stable matrices \(A_{1},A_{2},\ldots,A_{r}\), there exists a value \(\bar{\tau }\) such that, for any \(\tau \geq \bar{\tau }\) assumed as dwell time, the switching system is stable.

Proof.

We provide a very conservative value of τ, but in a constructive way (an alternative determination of the minimum dwell time by means of quadratic functions can be found in [GC05, GCB08]). Take a 0-symmetric polytope \(\bar{\mathcal{V}}[X]\) which is a C-set and, for each stable system, compute an invariant ellipsoid \(\mathcal{E}_{k} \subset \lambda \bar{\mathcal{V}}[X]\), 0 ≤ λ < 1. For each vertex \(x_{i}\) compute the minimum time necessary for the k-th system to reach \(\mathcal{E}_{k}\)

$$\displaystyle{T_{ik} =\min \{ t \geq 0: e^{A_{k}t}x_{ i} \in \mathcal{E}_{k}\}}$$

Such an operation can be easily done by iterating on t > 0 by bisection and by applying an LP test for any trial. Then it is immediate to see that a possible value for \(\bar{\tau }\) is

$$\displaystyle{\bar{\tau }=\max _{i,k}T_{ik}}$$

This can be seen by considering the following discrete-time system

$$\displaystyle{x(t_{k+1}) = e^{A_{q}(t_{k+1}-t_{k})}x(t_{ k})}$$

defined on the switching time sequence \(t_{k}\). This is a linear LPV system. It is immediate that the set \(\bar{\mathcal{V}}[X]\) is λ-contractive and thus the Minkowski function associated with \(\bar{\mathcal{V}}[X]\) is a Lyapunov function for this system, so that \(\|x(t_{k})\| \rightarrow 0\) as \(k \rightarrow \infty\) and the system is globally stable.
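The construction in the proof can be turned into a simple computation. The sketch below follows the same idea, with \(\bar{\mathcal{V}}[X]\) taken as the unit infinity-norm box and the invariant sets \(\mathcal{E}_{k}\) taken as sublevel sets of quadratic Lyapunov functions (so the LP test of the proof is replaced by a direct evaluation); the two Hurwitz matrices are assumed example data and the resulting \(\bar{\tau }\) is, of course, only a conservative bound.

% Conservative dwell time along the lines of the proof of Proposition 9.12.
% V[X] is the unit infinity-norm box; E_k = {x: x'*P_k*x <= c_k} is an invariant
% ellipsoid contained in lambda*V[X], with P_k from a Lyapunov equation.
A = {[-1 3; 0 -2], [-2 0; 4 -1]};                 % assumed Hurwitz modes
lambda = 0.9;  n = 2;
Vbox = (dec2bin(0:2^n-1) - '0')'*2 - 1;           % vertices of the unit box
tau = 0;
for k = 1:numel(A)
    Ak = A{k};
    vecP = (kron(eye(n),Ak') + kron(Ak',eye(n))) \ reshape(-eye(n),[],1);
    P = reshape(vecP,n,n);  P = (P+P')/2;         % Ak'*P + P*Ak = -I
    c = lambda^2 / max(diag(inv(P)));             % ellipsoid inside lambda*box
    for j = 1:size(Vbox,2)
        v = Vbox(:,j);
        g = @(t) (expm(Ak*t)*v)'*P*(expm(Ak*t)*v) - c;
        thi = 1;  while g(thi) > 0, thi = 2*thi; end
        tlo = 0;                                   % g decreases along trajectories
        for it = 1:60                              % bisection for the hitting time
            tm = (tlo+thi)/2;
            if g(tm) > 0, tlo = tm; else, thi = tm; end
        end
        tau = max(tau, thi);                       % the time T_{ik} of the proof
    end
end
tau                                                % conservative dwell time bound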

9.4 Switched systems

The situation is completely different in the case of switched systems. The stabilization problem is that of choosing a feedback law

$$\displaystyle{q(t) =\varPhi (x(t),q(t^{-}))}$$

such that the resulting system is stabilized. In general, determining the control law \(\varPhi (x(t),q(t^{-}))\) is hard even for linear switched systems. There is a sufficient condition which provides a helpful tool. The basic assumption is that there exists a Lyapunov stable system in the convex hull of the points f(x, q).

Theorem 9.13.

Assume there exists a (sufficiently regular) function \(\bar{f}(x)\) such that

$$\displaystyle{\bar{f}(x) \in conv\{f(x,q),\ q \in \mathcal{Q}\},}$$

where \(\mathcal{Q} =\{ 1,2,\ldots,r\}\) , for which the system \(\dot{x} =\bar{ f}(x)\) admits a smooth Lyapunov function such that

$$\displaystyle{\nabla \varPsi (x)\bar{f}(x) \leq -\phi (\|x\|)}$$

where \(\phi (\|x\|)\) is a κ-function . Then there exists a stabilizing strategy which has the form

$$\displaystyle{q =\varPhi (x) =\arg \min _{q\in \mathcal{Q}}\nabla \varPsi (x)f(x,q)}$$

Proof.

The proof of the theorem is very easy, since it is immediately seen that

$$\displaystyle{\min _{p}\ \ \nabla \varPsi (x)f(x,p) \leq \nabla \varPsi (x)\bar{f}(x) \leq -\phi (\|x\|)}$$

Note that the condition on the average system implies that the “bar” system has an equilibrium point in x = 0. This is not necessarily true for the individual systems. Moreover, this procedure is not just useful for stabilizing the system, but it is also quite effective in speeding up the convergence rate. Some results on the equilibria of switched systems and their stability are found in [BS04].

Corollary 9.14.

Assume there exists a (sufficiently regular) function \(\bar{f}(x)\) such that

$$\displaystyle{\bar{f}(x) \in conv\{f(x,q),\ q \in \mathcal{Q}\}}$$

where \(\mathcal{Q} =\{ 1,2,\ldots,r\}\) and \(\bar{f}(0) = 0\) (with f(0,q) arbitrary such that \(\bar{f}(0) \in conv\{f(0,q)\}\) ) and there exists a smooth Lyapunov function Ψ(x) such that

$$\displaystyle{\nabla \varPsi (x)\bar{f}(x) \leq -\beta \varPsi (x)}$$

Then the strategy

$$\displaystyle{q =\varPhi (x) =\arg \min _{q\in \mathcal{Q}}\nabla \varPsi (x)f(x,q)}$$

assures

$$\displaystyle{\varPsi (x(t)) \leq \varPsi (x(0))e^{-\beta t}}$$

A typical example of application of this strategy is a system with quantized control, as shown in the next example. Here the main issue is not the system stabilization (the system is already stable), but that of speeding up the convergence.

Example 9.15.

Consider the two-tank hydraulic system already considered in Subsection 8.3.1 and represented in Figures 8.9 and 8.10, whose equations are

$$\displaystyle\begin{array}{rcl} \dot{h}_{1}(t)& =& -\alpha \sqrt{h_{1 } (t) - h_{2 } (t)} + q(t) {}\\ \dot{h}_{2}(t)& =& \alpha \sqrt{h_{1 } (t) - h_{2 } (t)} -\beta \sqrt{h_{2 } (t)} {}\\ \end{array}$$

where α and β are positive parameters, \(h_{1}(t)\) and \(h_{2}(t)\) are water levels, and q(t) is the incoming flow. The device works with a couple of identical on–off valves, so the possible values of the flow are

$$\displaystyle{q(t) \in \{ 0;\bar{q};2\bar{q}\}}$$

Let us consider the steady state associated with a single open valve. Then, by defining the variables \(x_{1}(t) = h_{1}(t) -\bar{ h}_{1}\), \(x_{2}(t) = h_{2}(t) -\bar{ h}_{2}\), \(u(t) = q(t) -\bar{ q}\), where \(\bar{h}_{1}\), \(\bar{h}_{2}\), and \(\bar{q}\) are the steady-state water levels and the incoming flow satisfying the conditions

$$\displaystyle{\bar{h}_{1} = \left (\frac{\bar{q}} {\alpha } \right )^{2} + \left (\frac{\bar{q}} {\beta } \right )^{2},\ \ \ \ \ \bar{h}_{ 2} = \left (\frac{\bar{q}} {\beta } \right )^{2},}$$

the following equations

$$\displaystyle\begin{array}{rcl} \dot{x}_{1}(t)& =& -\alpha \sqrt{x_{1 } (t) +\bar{ h}_{1 } - x_{2 } (t) -\bar{ h}_{2}} +\bar{ q} + u(t) {}\\ \dot{x}_{2}(t)& =& \alpha \sqrt{x_{1 } (t) +\bar{ h}_{1 } - x_{2 } (t) -\bar{ h}_{2}} -\beta \sqrt{x_{2 } (t) +\bar{ h}_{2}} {}\\ \end{array}$$

are derived.

Let us now consider the candidate control Lyapunov function (computed via linearization)

$$\displaystyle{\varPsi (x) = \frac{1} {2}\left (x_{1}^{2} + x_{ 2}^{2}\right )}$$

The corresponding Lyapunov derivative for \(x \in \mathcal{N}[\varPsi,\bar{h}_{2}^{2}/2]\) (this value is chosen in such a way that the ball is included in the positive region for the true levels \(\bar{h}_{i} + x_{i}\)) is

$$\displaystyle\begin{array}{rcl} & & \dot{\varPsi }(x,u) = {}\\ & =& \mathop{\underbrace{-(x_{1} - x_{2})\left (\alpha \sqrt{x_{1 } (t) +\bar{ h}_{1 } - x_{2 } (t) -\bar{ h}_{2}} -\bar{ q}\right ) - x_{2}\left (\beta \sqrt{x_{2 } (t) +\bar{ h}_{2}} -\bar{ q}\right )}}\limits _{\doteq\dot{\varPsi }_{N}(x_{1},x_{2})} {}\\ & +& x_{1}u \leq \dot{\varPsi }_{N}(x_{1},x_{2}) + x_{1}u {}\\ \end{array}$$

Note that \(\dot{\varPsi }_{N}(x_{1},x_{2})\), the natural derivative achieved for u = 0, is negative definite. Indeed, the term in the left brackets has the same sign as \(x_{1} - x_{2}\) and the term in the right brackets has the same sign as \(x_{2}\); thus \(\dot{\varPsi }_{N}(x_{1},x_{2})\) is zero only for \(x_{1} - x_{2} = 0\) and \(x_{2} = 0\), and negative elsewhere. This nonlinear system is hence naturally asymptotically stable. The control input admits three admissible values

$$\displaystyle{u(t) \in \{-\bar{q};0;\bar{q}\}}$$

which correspond to the three cases in which none, just one or both the switching valves are open.

Let us now pretend to be able to implement a continuous controller

$$\displaystyle{u = -\kappa x_{1}(t),}$$

a controller which clearly cannot be implemented because the device has no continuous flow regulation. Still, in terms of convergence, it is possible to do as well as this fictitious control, since the continuous control assures a decreasing rate

$$\displaystyle{\dot{\varPsi }(x,u) \leq \dot{\varPsi }_{N}(x_{1},x_{2}) -\kappa x_{1}^{2}}$$

and the discontinuous control

$$\displaystyle{u = -\bar{q}\ sgn(x_{1})}$$

provides a better decreasing rate when \(\vert \kappa x_{1}\vert <\bar{ q}\). Indeed, the fictitious system with the continuous control is included in the convex hull of the extremal systems achieved, respectively, with \(u = -\bar{q}\) and \(u = +\bar{q}\), at least in the set

$$\displaystyle{\vert \kappa x_{1}\vert \leq \bar{ q}}$$

where the closed loop with this bang–bang control achieves a better performance, in terms of convergence, than the continuous closed-loop plant.

From a practical point of view, the discontinuous controller has to be implemented with a threshold. We actually consider the function

$$\displaystyle{u = \left \{\begin{array}{rcrcl} \ \bar{q}\ &\ \ \mbox{ if}\ \ & \ \ &\ \ x_{1}\ \ & < -\varepsilon \\ 0\ &\ \ \mbox{ if}\ \ &\ \ -\varepsilon \leq &\ \ x_{1}\ \ & \leq \ \ \varepsilon \\ -\bar{ q}\ &\ \ \mbox{ if}\ \ & \ \ &\ \ x_{1}\ \ & \geq \ \ \varepsilon \\ \end{array} \right.}$$

In Figures 9.6 and 9.7 the experimental behavior is shown with \(\varepsilon = 0.01\) and \(\varepsilon = 0.03\), the latter being less subject to ripples, as expected. Ripples are due to the real implementation and cannot be reproduced via simulation.

Fig. 9.6 The experimental behavior of the two-tank system with \(\varepsilon = 0.01\)

Fig. 9.7 The experimental behavior of the two-tank system with \(\varepsilon = 0.03\)
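For completeness, a simulation sketch of the quantized control law above is reported; the parameters α, β, \(\bar{q}\), the threshold ε and the initial levels are assumptions for illustration. Being an idealized simulation, it reproduces the regulation towards the desired levels but not the ripples observed experimentally.

% Two-tank system with three-level quantized control (Example 9.15), Euler sketch.
alpha = 0.2; beta = 0.3; qbar = 0.1; eps_ = 0.01;     % assumed data
h1bar = (qbar/alpha)^2 + (qbar/beta)^2;  h2bar = (qbar/beta)^2;
dt = 0.01; N = 40000;
h = [h1bar + 0.2; h2bar + 0.1];                       % assumed initial levels
H = zeros(2,N);
for k = 1:N
    x1 = h(1) - h1bar;
    if x1 < -eps_
        u = qbar;                                     % both valves open
    elseif x1 > eps_
        u = -qbar;                                    % both valves closed
    else
        u = 0;                                        % one valve open
    end
    q  = qbar + u;
    dh = [-alpha*sqrt(max(h(1)-h(2),0)) + q;
           alpha*sqrt(max(h(1)-h(2),0)) - beta*sqrt(max(h(2),0))];
    h = h + dt*dh;  H(:,k) = h;
end
plot((1:N)*dt, H)                                     % levels settle near (h1bar, h2bar)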

As a final comment, we point out that in the real system α and β may vary (depending on the pipe conditions), thus producing an offset. However, the considered control basically eliminates the offset on \(x_{2}\), since \(\bar{h}_{2}\) can be fixed. Conversely, under parameter variations, an offset on the first variable \(x_{1}\neq 0\) is possible.

9.4.1 Switched linear systems

The problem of stabilization of switched systems is hard even in the linear case. This is apparent from the current literature which clearly shows that even in special cases, such as the case of positive switched linear systems, there are no general results which allow for algorithms of a reasonable complexity.

There are anyway important sufficient conditions which provide efficient but conservative solutions. For instance, in the case of a linear plant

$$\displaystyle{ \dot{x}(t) = A_{q(t)}x(t),\ \ q \in \mathcal{Q} }$$
(9.11)

\(\mathcal{Q} =\{ 1,2,\ldots,r\}\), the problem is easily solved if there exists

$$\displaystyle{\tilde{A} \in conv\{A_{i},\ \ i = 1,2,\ldots,r\}}$$

which is asymptotically stable. Indeed, it is possible to consider any Lyapunov function for the system \(\dot{x} =\tilde{ A}x\), e.g. Ψ(x) = x T Px, where P is a symmetric positive definite matrix such that

$$\displaystyle{\tilde{A}^{T}P + P\tilde{A}\preceq - Q,\ \ \ \ Q \succ 0}$$

and choose the switching law

$$\displaystyle{\varPhi (x) =\arg \min _{i}\ \ \dot{\varPsi }(x) =\arg \min _{i}\ \ x^{T}PA_{ i}x}$$

which assures the condition

$$\displaystyle{\dot{\varPsi }(x) \leq -x^{T}Qx}$$
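The following sketch illustrates this construction on a pair of unstable matrices whose average is Hurwitz (the matrices, the step size and the initial state are assumptions for illustration); P is obtained from a Lyapunov equation solved via vectorization.

% Min-type switching law from a stable convex combination (assumed example data).
A1 = [ 2  2; -2 -4];   A2 = [-4  2; -2  2];    % unstable vertices
Atil = 0.5*A1 + 0.5*A2;                        % Hurwitz average
n = 2;
vecP = (kron(eye(n),Atil') + kron(Atil',eye(n))) \ reshape(-eye(n),[],1);
P = reshape(vecP,n,n);  P = (P+P')/2;          % Atil'*P + P*Atil = -I
dt = 1e-3;  x = [1; 1];  X = zeros(2,5000);
for k = 1:size(X,2)
    if x'*P*A1*x <= x'*P*A2*x                  % q = argmin_i x'*P*A_i*x
        x = x + dt*A1*x;
    else
        x = x + dt*A2*x;
    end
    X(:,k) = x;
end
plot(X(1,:), X(2,:))                           % trajectory spiralling into the origin

Along the switched trajectory \(\dot{\varPsi }(x) \leq x^{T}(\tilde{A}^{T}P + P\tilde{A})x = -x^{T}x\), which is exactly the bound derived above (with Q = I).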

Unfortunately, the existence of such a stable element in the convex hull, besides being hard to check, is not necessary. For instance, the system given by the pair of matrices

$$\displaystyle{A(w) = \left [\begin{array}{cc} \ 0\ & \ 1\ \\ \ - 1 + w\ &\ -a\ \end{array} \right ]}$$

where \(w = \pm \bar{w}\) is a switched parameter and a < 0, is not stable for any value of a; indeed, since the trace of A(w) is -a > 0, no matrix in the convex hull is Hurwitz. However, if |a| is small enough, then there exists a suitable stabilizing switching strategy [Lib03]. Another example will be discussed later on, in the context of positive switched systems.

As far as the stabilizability of switched systems is concerned, depending on the freedom given to the designer in the choice of the switching rule and/or control law, three possible definitions can be considered.

Definition 9.16.

System (9.11) is

  • consistently stabilizable if there exists a sequence q(t) such that the corresponding linear time-varying system \(\dot{x}(t) = A_{q(t)}x(t)\) is asymptotically stable;

  • open-loop stabilizable if, for any μ > ε > 0, there exists T > 0 such that, for all initial states \(\|x(0)\| \leq \mu\), there exists a specific switching sequence q(t) (depending on x(0)) assuring: \(\|x(t)\| \leq \epsilon\), for t ≥ T;

  • feedback stabilizable if there exists a closed-loop strategy

    $$\displaystyle{q(t) =\varPhi (x(t),t,q(t^{-}))}$$

    such that the corresponding (nonlinear discontinuous) system is globally uniformly asymptotically stable.

It is not difficult to see that open-loop and feedback stabilizability are equivalent. Clearly, if a system is consistently stabilizable, then it is open-loop and feedback stabilizable. The converse is not true in general [SG11]. An example will be given in Section 9.7.2.

The difficulties in dealing with the stabilization problem for switched systems can be explained, in a Lyapunov framework, by the absence of convex, and even smooth, control Lyapunov functions. Indeed, in general, the best “regularity property” which one can ask of a control Lyapunov function for a stabilizable switched linear system is homogeneity. The interest in such a property derives from the following fundamental result, whose proof can be found in the recent book [SG11].

Theorem 9.17.

Assume that the system \(\dot{x}(t) = A_{q}x(t)\) (or \(x(t + 1) = A_{q}x(t)\) ) is closed-loop stabilizable. Then it necessarily admits a Lyapunov function which is positively homogeneous of order 1.

To give an idea of how this function can be defined, some set-theoretic considerations are now presented. First we need the following technical proposition.

Proposition 9.18.

If (9.11) is feedback (or open-loop) stabilizable, then also the following perturbed version

$$\displaystyle{ \dot{x}(t) = [\beta I + A_{q(t)}]x(t),\ \ \ q = 1,2,\ldots,r }$$
(9.12)

\(q \in \mathcal{Q}\) , is stabilizable for β > 0 small enough.

Proof.

We have to notice that, for the same initial condition x(0) and the same q, the solution x(t) of the unperturbed system and the solution \(x_{\beta }(t)\) of the perturbed one (9.12) are related as

$$\displaystyle{x_{\beta }(t) = e^{\beta t}x(t)}$$

Indeed, for a given q(t), (9.12) is just a linear time-varying system .

Assume that there exists a feedback stabilizing strategy q. Then for all μ > 0 there exists T > 0 such that \(\|x(T)\| \leq \mu /4\), for all \(\|x(0)\| \leq \mu\). For β > 0 small, we also have \(\|x_{\beta }(T)\| =\| e^{\beta T}x(T)\| \leq \mu /2\).

If we can drive all \(\|x(0)\| \leq \mu\) in the ball \(\|x_{\beta }(T)\| \leq \mu /2\), by linearity we have with similar arguments \(\|x_{\beta }(2T)\| \leq \mu /4\), \(\|x_{\beta }(3T)\| \leq \mu /8\), \(\|x_{\beta }(kT)\| \leq \mu /(2^{k})\).

This also means, in passing, that if the system can be stabilized, then it can be exponentially stabilized. If the system is uniformly stabilizable, then we can find a control Lyapunov function of the form

$$\displaystyle{\varPsi (x_{0}) =\inf _{q(\cdot )}\int _{0}^{\infty }\ \ \|x_{\beta }(q(t),x_{ 0})\|\ dt < \infty }$$

where we denoted by \(x_{\beta }(q(\cdot ),x_{0})\) the solution of (9.12) corresponding to the initial condition \(x_{0}\) and the sequence \(q(\cdot )\). Function Ψ is well defined if the system is stabilizable. It is clearly positive definite. By linearity it is immediate that \(\varPsi (x_{0})\) is positively homogeneous of order one, continuous and 0-symmetric. Such a function is non-increasing for (9.12) and therefore strictly decreasing for the original system (9.11) if a proper feedback switching law is applied. Note that \(\varPsi (x_{0})\) is defined by taking the infimum over open-loop sequences, which proves that open-loop stabilizability implies feedback global stabilizability (the converse is obviously true).

Unfortunately, we cannot go much further beyond homogeneity. The next negative result [BS08], which is in contrast with Proposition 9.7, shows a first headache in the stabilization of switched systems: convexity is not assured.

Proposition 9.19.

There are linear switched systems \(\dot{x}(t) = A_{q(t)}x(t)\) (or \(x(t + 1) = A_{q(t)}x(t)\) ) which are stabilizable by means of a switching feedback control law but do not admit convex control Lyapunov functions.

Example 9.20.

Consider the system

$$\displaystyle{\dot{x}(t) = A_{i(x(t))}x(t)\quad i \in \mathcal{I} =\{ 1,2\}}$$

where A 1 and A 2 are unstable matrices given by

$$\displaystyle{A_{1} = \left [\begin{array}{cc} 1& \ 0\\ 0 & -1 \end{array} \right ]\qquad A_{2} = \left [\begin{array}{cc} \gamma & - 1\\ \ 1 & \gamma \end{array} \right ]}$$

with γ > 1. Given the initial condition x(0), the system trajectory is \(x(t) = e^{A_{i}t}x(0)\), i = i(x) (see Fig. 9.8), where

$$\displaystyle{e^{A_{1}t} = \left [\begin{array}{cc} e^{t}& 0\ \ \\ 0 &e^{-t} \end{array} \right ]\ \ \mbox{ and}\ \ e^{A_{2}t} = e^{\gamma t}\left [\begin{array}{cc} \cos (t)& -\sin (t) \\ \sin (t)& \cos (t)\ \end{array} \right ]}$$

It is rather intuitive that this system is stabilizable. Indeed, when the state is of the form \((0,x_{2})\) and the dynamics i = 1 is active, the state converges to 0. On the other hand, the state can always be driven to the \(x_{2}\)-axis from every initial condition by activating the dynamics i = 2. Using this idea without any modification leads to the construction of a switching law which activates the dynamics i = 1 in a conic sector with empty interior. Such a strategy is not robust with respect to switching delays. This is not a problem, because we can take a line (line c in Fig. 9.9) originating at zero and sufficiently close to the \(x_{2}\) axis. We also take a line g (see again Fig. 9.9) on which the derivatives of the two motions, \(A_{1}x_{g}\) and \(A_{2}x_{g}\), are aligned.

Fig. 9.8 Possible trajectories for dynamics i = 1 (dashed) and dynamics i = 2 (plain).

Fig. 9.9 Trajectory of the stabilized continuous-time system starting from the initial state \(\tilde{x}\).

Then a suitable strategy is i = 1 in sector s–0–g and i = 2 in sector g–0–c. Consider the point \(\tilde{x}\) in sector s–0–g, where i = 1. We let the trajectory reach line g at \(x_{g}\) and commute to i = 2, rotate counterclockwise (vector \(x_{b}\)) and reach line c. Then we commute again to i = 1. Then again we reach line g, commute to i = 2, reach line s again and commute once more to i = 1. Then we will reach the same line aligned with the initial state \(\tilde{x}\) (line \(b_{4}\)). It is rather intuitive that, if s is close enough to the \(x_{2}\) axis, the motion with i = 1 reduces the norm of the state, so the state returns on line \(b_{4}\) with a reduced norm. Then the system repeats the same trajectory, eventually converging to 0.

On the other hand, the following happens. Given a convex compact set \(\mathcal{X}_{0}\) including 0, assume that a stabilizing control strategy \(q(x(t))\) is given (such as the one proposed above). Define \(\mathcal{R}(T)\) as the set of all states x(t) which are reached from \(\mathcal{X}_{0}\) in time 0 < t ≤ T. The following result, which can be found in [BS08], holds.

Proposition 9.21.

If there exists a convex Lyapunov function, then the following condition must be false for every T > 0:

$$\displaystyle{\mathcal{X}_{0} \subset conv\{\mathcal{R}(T)\}}$$

where \(conv\{\mathcal{R}(T)\}\) is the convex hull.

The non-existence of a convex Lyapunov function can be understood by reconsidering Example 9.20 and looking at Figure 9.10. It can indeed be seen that, if one takes as \(\mathcal{X}_{0}\) the segment joining \(-\tilde{x}\) and \(\tilde{x}\), the condition \(\mathcal{X}_{0} \subset conv\{\mathcal{R}(T)\}\) is necessarily satisfied for some T when γ is large, no matter how the stabilizing strategy is chosen. The reader is referred to [BS08] for further details and for a discrete-time counterexample.

Fig. 9.10 The set \(\mathcal{R}(T)\) includes \(\mathcal{X}_{0}\).

Absence of convexity can be a problem and we have to announce another negative result (see [BCV12] and [BCV13] for details).

Proposition 9.22.

There are linear switched systems \(\dot{x}(t) = A_{q(t)}x(t)\) (or \(x(t + 1) = A_{q(t)}x(t)\) ) which are stabilizable under a switching control law but do not admit smooth (away from 0) positively homogeneous control Lyapunov functions.

We will sketch a proof of the result in Subsection 9.5.3 about positive switched systems.

So essentially the previous negative results justify the use of non-convex non-smooth functions. In particular, the class of minimum-type functions, such as those considered in [HL08] in the quadratic case and defined as

$$\displaystyle{\varPsi (x) =\min _{i}\ \ x^{T}P_{ i}x}$$

where \(P_{i}\) are positive definite or positive semi-definite matrices, has proved to be especially useful.

As a final point, it is worth recalling that in the discrete-time case the existence of a Schur convex combination is not sufficient for stabilizability (see Exercise 6).

There are several attempts in the literature to deal with the stabilization problem of switched systems. Successful techniques have been proposed for planar systems [BC06, XA00]. More general methods have been proposed based on (non-convex) piecewise linear functions [Yfo10] and on (non-convex) piecewise quadratic Lyapunov functions [HL08, GCB08, DGD11]. Necessary and sufficient conditions based on non-convex piecewise linear Lyapunov functions for the stabilizability of discrete-time switched systems have recently been proposed in [FJ14], where a numerical procedure is presented along with its computational issues. Explanations of the difficulties in terms of computational complexity can be found also in [VJ14] and the references therein. Again, the reader is referred to the more specialized literature [LA09, SG11].

9.5 Switching and switched positive linear systems

This section focuses on positive switching and switched linear systems, represented by the equation

$$\displaystyle{ \dot{x}(t) = A_{q(t)}x(t),\ \ \ \ \ \ \ \ (\mbox{ respectively}\ \ x(k + 1) = A_{q(k)}x(k)) }$$
(9.13)

where \(q(t) \in \{ 1,2,\ldots,r\}\) and the matrices \(A_{q}\), \(q = 1,2,\ldots,r\), are Metzler in the continuous-time case and non-negative in the discrete-time case.

Clearly, these systems are a special case of linear switching/switched systems, and therefore have all the properties presented in the previous sections. Given the extra properties enjoyed by the class of LTI positive systems (for instance, the existence of the Perron–Frobenius eigenvalue), it is legitimate to ask whether these extra properties can be somehow helpful when dealing with switching/switched positive systems.

Some examples of positive switching/switched systems are now presented. To keep things slightly more general, in the following we will also consider positive linear systems equipped with a non-negative input:

$$\displaystyle{ \dot{x}(t) = A_{q(t)}x(t) + B_{q(t)}u(t) }$$
(9.14)

where the matrices \(B_{q}\) are non-negative and u(t) is a non-negative input.

9.5.1 The fluid network model revisited

Consider the fluid network already considered in Section 4.5.7, Example 4.62, whose equations have the form (9.14), with state and input matrices (reported here for convenience)

$$\displaystyle\begin{array}{rcl} A& =& \left [\begin{array}{cccc} - (\alpha _{12} +\beta _{31})& \alpha _{12} & 0 & 0 \\ \alpha _{21} & - (\alpha _{21} +\alpha _{23} +\beta _{42})& \alpha _{23} & 0 \\ \beta _{31} & \alpha _{32} & - (\alpha _{32} +\alpha _{34} +\beta _{03})& \alpha _{34} \\ 0 & \beta _{42} & \alpha _{43} & - (\alpha _{43} +\beta _{40}) \end{array} \right ] {}\\ B& =& \left [\begin{array}{c} \beta _{10} \\ 0\\ 0 \\ 0 \end{array} \right ]{}\\ \end{array}$$

We remind the reader that \(\alpha _{ij} =\alpha _{ji}\). The network is depicted in Fig. 9.11, which shows that the fluid can flow in on–off mode.

Fig. 9.11 The switched fluid network.

This means that all the coefficients \(\alpha _{ij}\) and \(\beta _{ij}\) can instantaneously change their values as

$$\displaystyle{\alpha _{ij} \in \{\alpha _{ij}^{-},\alpha _{ ij}^{+}\}}$$

and

$$\displaystyle{\beta _{ij} \in \{\beta _{ij}^{-},\beta _{ ij}^{+}\}}$$

with \(\alpha _{ij}^{+} >\alpha _{ ij}^{-} > 0\) and \(\beta _{ij}^{+} >\beta _{ ij}^{-} > 0\). According to our distinction, if the system is “switching,” we have to consider the problem of assuring stability under arbitrary switching. Conversely, if the system is “switched,” then we typically wish to guarantee closed-loop stability with a prescribed convergence rate β (or β-contractiveness). A final interesting question, at least from a practical point of view, concerns the possibility of confining the ultimate system evolution in a neighborhood of a desired equilibrium, given a certain input u.

To answer the above questions, the notion of co-positive Lyapunov functions is introduced next. The basic idea is that to solve these problems for positive systems it is possible to restrict our attention to the positive orthant only.

Definition 9.23.

A function Ψ(x), \(x \in \mathrm{I\!R}^{n}\), is co-positive if Ψ(0) = 0 and Ψ(x) > 0 for x ≥ 0 and x ≠ 0.

For instance, in \(\mathrm{I\!R}^{2}\), the function \(\varPsi (x_{1},x_{2}) = x_{1} + x_{2}^{2}\) is co-positive.

A co-positive function Ψ(x) is a co-positive Lyapunov function for a positive system if it is decreasing along the system trajectories. It is a weak co-positive Lyapunov function if it is non-increasing along the system trajectories. As expected, the former condition is assured if the Lyapunov derivative is negative in the positive orthant with the exception of 0. In the weak case, we just require the derivative to never be positive.

If we consider the fluid network example just reported, it can be seen that the first problem has an immediate solution. Indeed, it is apparent that the system matrix, in view of the symmetry assumption \(\alpha _{ij} =\alpha _{ji}\), is weakly column diagonally dominant. Since it is irreducible (see Definition 4.59), we can render it diagonally dominant by using the diagonal state transformation

$$\displaystyle{D = \mbox{ diag}\{\lambda,\lambda,1,1\}}$$

with

$$\displaystyle{ \frac{\alpha _{23}^{+}} {\alpha _{23}^{+} +\beta _{ 03}^{-}} <\lambda < 1}$$

(it will be shown soon that the lower bound is chosen in such a way to assure dominance) to get \(\hat{A} = D^{-1}AD\) and \(\hat{B} = D^{-1}B\) as follows:

$$\displaystyle{\hat{A} = \left [\begin{array}{cccc} - (\alpha _{12} +\beta _{31})& \alpha _{12} & 0 & 0 \\ \alpha _{21} & - (\alpha _{21} +\alpha _{23} +\beta _{42})& \alpha _{23}/\lambda & 0 \\ \lambda \beta _{31} & \lambda \alpha _{32} & - (\alpha _{32} +\alpha _{34} +\beta _{03})& \alpha _{34} \\ 0 & \lambda \beta _{42} & \alpha _{43} & - (\alpha _{43} +\beta _{40}) \end{array} \right ]}$$

and \(\hat{B} = [\ \beta _{10}\ 0\ 0\ 0\ ]^{T}\).

Set \(z = D^{-1}x\) and consider the co-positive function

$$\displaystyle{\varPsi (z) =\bar{ 1}^{T}z.}$$

Its Lyapunov derivative is

$$\displaystyle\begin{array}{rcl} & & \dot{\varPsi }(z) =\bar{ 1}^{T}(\hat{A}z +\hat{ B}u) = {}\\ & -& \left [(1-\lambda )\beta _{31}z_{1} + (1-\lambda )(\beta _{42} +\alpha _{32})z_{2} + (\beta _{03} +\alpha _{32}(1 -\frac{1} {\lambda } ))z_{3} +\beta _{40}z_{4}\right ] + \frac{u\beta _{10}} {\lambda } \leq {}\\ & -& \left [(1-\lambda )\beta _{31}^{-}z_{ 1} + (1-\lambda )(\beta _{42}^{-} +\alpha _{ 32}^{-})z_{ 2} + (\beta _{03}^{-} +\alpha _{ 32}^{+}(1 -\frac{1} {\lambda } ))z_{3} +\beta _{ 40}^{-}z_{ 4}\right ] + \frac{u\beta _{10}} {\lambda } = {}\\ & =& -v^{T}z + \frac{u\beta _{10}} {\lambda } {}\\ \end{array}$$

with obvious meaning of the vector v. Let \(\beta _{10} = 0\). Since λ is chosen close enough to 1, so that the critical term \(\beta _{03}^{-} +\alpha _{ 32}^{+}(1 -\frac{1} {\lambda } ) > 0\), we have v > 0 and therefore \(\dot{\varPsi }< 0\) for z > 0. This means that the system is asymptotically stable, because:

$$\displaystyle{\dot{\varPsi }(z) \leq -\min \{v_{j}\}\ \sum z_{j} = -\min \{v_{j}\}\varPsi (z)}$$

For \(\beta _{10} > 0\) and u bounded, the system solution is bounded. Indeed, if we take the plane

$$\displaystyle{\varPsi (z) =\bar{ 1}^{T}z =\mu }$$

the derivative becomes negative for μ > 0 large. Note that the Lyapunov function \(\bar{1}^{T}z\) for the modified system corresponds to the Lyapunov function \(\bar{1}^{T}D^{-1}x\) for the original one.

Let us consider the switched stabilization problem, which is quite interesting in this case. We could use the same Lyapunov function previously derived, but we propose a different idea. We wish to find a switching strategy which forces the system to stay at its lowest possible level, given a constant incoming flow u = const ≥ 0, or to approach 0 as quickly as possible if the external input is u = 0.

In this case there is a simple solution which is shown next. Indeed the “average system” is weakly diagonally dominant and irreducible, hence asymptotically stable. Then we can find a linear co-positive Lyapunov function which can be used as a control Lyapunov function for the switched systems.

Denote by \(\bar{A}\) the system corresponding to the average values

$$\displaystyle{\alpha _{ij} = \frac{\alpha _{ij}^{-} +\alpha _{ ij}^{+}} {2} }$$

and

$$\displaystyle{\beta _{ij} = \frac{\beta _{ij}^{-} +\beta _{ ij}^{+}} {2} }$$

Let \(z^{T}\), with z > 0, be the left eigenvector associated with the Frobenius eigenvalue:

$$\displaystyle{z^{T}\bar{A} =\lambda _{ F}z^{T}}$$

with \(\lambda _{F} < 0\). As the Frobenius eigenvalue is negative, all the other eigenvalues have negative real parts. Again, the average system is fictitious and “cannot be realized.” However, since it would render the derivative negative when u = 0 and it is in the convex hull of all the matrices, at each x there exists one such matrix (a vertex) which assures a smaller derivative.

Denoting by \(A_{i}\) any of the matrices achieved by taking \(\alpha _{ij}\) and \(\beta _{ij}\) in all possible ways, we have, in the case of a constant input \(\bar{u} > 0\),

$$\displaystyle{\min _{i}\ \ z^{T}A_{ i}x + z^{T}B\bar{u} \leq z^{T}\bar{A}x + z^{T}B\bar{u} =\lambda _{ F}z^{T}x + z^{T}B\bar{u}}$$

Hence, considering the strategy

$$\displaystyle{q(x) =\arg \min _{i}z^{T}A_{ i}x}$$

the Lyapunov derivative of the co-positive Lyapunov function \(\varPsi (x) = z^{T}x\) for the system \(\dot{x} = A_{q(x)}x + B\bar{u}\) would be

$$\displaystyle{D^{+}\varPsi (x) \leq \lambda _{ F}z^{T}x + z^{T}B\bar{u} =\lambda _{ F}\varPsi (x) + z^{T}B\bar{u}}$$

hence implying ultimate boundedness of the system, since \(\lambda _{F} < 0\). Therefore \(D^{+}\varPsi (x) < 0\) for every x such that

$$\displaystyle{\varPsi (x) < -\frac{z^{T}B} {\lambda _{F}} \bar{u}}$$
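A simulation sketch of this arg-min strategy is reported below for the case u = 0. The parameter bounds (all intervals set to [0.5, 1.5]), the average values and the initial state are assumptions for illustration; the vertex matrices are generated by enumerating all extremal choices of the coefficients.

% Arg-min co-positive strategy for the switched fluid network (u = 0 case).
lo = 0.5; hi = 1.5;                               % assumed bounds for all coefficients
Afun = @(a12,a23,a34,b31,b42,b03,b40) ...
  [-(a12+b31),  a12,            0,              0; ...
    a12,       -(a12+a23+b42),  a23,            0; ...
    b31,        a23,           -(a23+a34+b03),  a34; ...
    0,          b42,            a34,           -(a34+b40)];
vals = [lo hi];  Av = cell(1,128);
for i = 0:127                                     % the 2^7 vertex matrices
    p = vals(bitget(i,1:7)+1);
    Av{i+1} = Afun(p(1),p(2),p(3),p(4),p(5),p(6),p(7));
end
Abar = Afun(1,1,1,1,1,1,1);                       % average values (lo+hi)/2
[W,D] = eig(Abar');  [~,j] = max(real(diag(D)));
z = abs(W(:,j));                                  % left Frobenius eigenvector, z > 0
dt = 1e-3;  x = [1;1;1;1];  X = zeros(4,20000);
for k = 1:size(X,2)
    c = cellfun(@(A) z'*A*x, Av);                 % candidate derivatives of z'*x
    [~,q] = min(c);
    x = x + dt*Av{q}*x;  X(:,k) = x;
end
plot((1:size(X,2))*dt, X)                         % all levels driven to zero

Along the closed-loop trajectory the function \(z^{T}x\) decreases at least at rate \(\vert \lambda _{F}\vert\), as guaranteed by the bound above.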

Remark 9.24.

The “arg-min” strategy (obviously) leads to a system of differential equations with discontinuous right-hand side. This typically introduces chattering and sliding modes in the system. As we will see later, introducing chattering is not necessarily the most convenient strategy.

9.5.2 Switching positive linear systems

In the case of switching systems, namely when the sequence is arbitrary, it is a legitimate question to ask whether positivity brings something good to the analysis of switching systems, in particular whether there is any advantage from the assumption that all the matrices are Metzler.

For instance, it has been shown (Theorem 9.8) that robust stability under switching is equivalent to the stability of the associated convex differential inclusion, which is in turn equivalent to the existence of a convex (in particular polyhedral) common Lyapunov function. Therefore it is necessary that all the elements in the convex hull are Hurwitz matrices. The condition is by no means sufficient for switching linear systems, and an example will be provided later (Example 9.50).

On the other hand, positive systems have a “dominant” eigenvalue, the Frobenius one, which rules all the others. So a stronger result might be true.

Conjecture 9.25.

For positive switching systems, the Hurwitz stability of all the matrices in the convex hull is necessary and sufficient for switching stability.

The conjecture is true for second order systems only [GSM07, FMC09].

Proposition 9.26.

For second order continuous-time switching systems, the Hurwitz stability of all the matrices in the convex hull is necessary and sufficient for asymptotic stability under arbitrary switching.

Unfortunately, we cannot go much further. Indeed, in [FMC09], a third order counterexample is provided in which it is shown that the condition is not sufficient.

What about discrete-time? Even worse. Take \(x(k + 1) = A_{i}x(k)\), i = 1, 2, with

$$\displaystyle{A_{1} = \left [\begin{array}{cc} \ \ 0\ \ &\ \ 0\ \ \\ (2-\epsilon ) &\ \ 0\ \ \end{array} \right ]\ \ \ A_{2} = \left [\begin{array}{cc} \ \ 0\ \ &(2-\epsilon )\\ \ \ 0\ \ & \ \ 0\ \ \end{array} \right ]}$$

and ε > 0 small enough. The characteristic polynomial of the matrices in the convex hull is

$$\displaystyle{p(z) = z^{2} -\alpha (1-\alpha )(2-\epsilon )^{2}}$$

For 0 ≤ α ≤ 1, \(\alpha (1-\alpha )(2-\epsilon )^{2} \leq (2-\epsilon )^{2}/4 < 1\). Then all the matrices in the convex hull are Schur. It is an exercise to see that the product \((A_{1}A_{2})^{k}\) goes to infinity, thus the system is not stable under arbitrary switching.
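This can be verified with a few lines of MATLAB (ε = 0.1 is an assumed value):

% Every convex combination of A1 and A2 is Schur, yet periodic switching diverges.
e  = 0.1;                                  % assumed epsilon
A1 = [0 0; 2-e 0];   A2 = [0 2-e; 0 0];
rho = @(M) max(abs(eig(M)));
worst = 0;
for a = 0:0.01:1                           % scan the convex hull
    worst = max(worst, rho(a*A1 + (1-a)*A2));
end
worst                                      % strictly less than 1
rho(A1*A2)                                 % equals (2-e)^2 > 1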

From the computational side, one can have some advantages in the construction of polyhedral functions, introduced in Chapters 5 and 6, since the plane generating procedure would work with positive constraints only.

Let us consider the discrete-time case. We present a procedure similar to the polyhedral Lyapunov function generation algorithm described in Sections 5.4 and 6.3.3, for the computation of the joint spectral radius. Let us introduce a definition.

Definition 9.27.

We call a P-set a set \(\mathcal{S}\subset \mathrm{I\!R}_{+}^{n}\) which

  • is closed, bounded and includes the origin;

  • is star-shaped in the positive orthant: any ray originating at zero and contained in the positive orthant encounters the boundary of \(\mathcal{S}\) in a single non-zero point. In other words, for any (non-negative) vector v ≥ 0, the intersection of the positive ray with direction v with \(\mathcal{S}\) is a segment with one extremum in x = 0 and the other extremum on the boundary of \(\mathcal{S}\):

    $$\displaystyle{\{\lambda:\ \ \lambda v \in \mathcal{S}\} = [0,\lambda _{max}(v)]}$$

    for some λ max (v) > 0 (see Fig. 9.12).

    Fig. 9.12 Example of a P-set, and the corresponding “Minkowski-like” function

Given any co-positive and positively homogeneous function, its sub-level sets \(\mathcal{N}[\varPsi,\kappa ]\) are P-sets. Conversely, given a P-set, we can define a co-positive function which is positively homogeneous, extending the concept of Minkowski functional

$$\displaystyle{\varPsi (x) =\inf \ \{\lambda > 0:\ x \in \lambda \mathcal{S}\}}$$

which is a co-positive, positively homogeneous function of order 1. Clearly we can generate co-positive homogeneous functions of any order by considering \(\varPsi (x)^{p}\).

Obviously a P-set can be convex. In this case the Minkowski-like functional would also be convex.

As a special case, we can consider polyhedral P-sets, which can be represented as

$$\displaystyle{\mathcal{S} =\{ x \geq 0: Fx \leq \bar{ 1}\} = \mathcal{P}(F)}$$

where F ≥ 0 is a full column rank non-negative matrix.

First we recall that, to assure a certain speed of convergence λ, we just have to consider the modified system

$$\displaystyle{x(t + 1) = \frac{A_{q(t)}} {\lambda } x(t),\ \ q \in \{ 1,2,\ldots r\}.}$$

With the above in mind, it is possible to compute the largest invariant set starting from any arbitrary P-set by means of the procedure presented next.

Procedure:

Computation of a co-positive Lyapunov function given a contraction factor λ > 0.

  1. Take any arbitrary polyhedral P-set \(\mathcal{S}\), associated with a non-negative full column rank matrix \(F_{0}\). Set \(\mathcal{P}_{0}:= \mathcal{P}(F_{0})\bigcap \mathrm{I\!R}_{+}^{n}\). Fix a tolerance ε > 0, k = 1 and a maximum number of steps \(k_{max}\).

  2. Recursively compute \(\mathcal{P}_{k} = \mathcal{P}(F_{k})\bigcap \mathrm{I\!R}_{+}^{n}\) as

     $$\displaystyle{\mathcal{P}_{k} =\{ x:\ \ F_{k-1}\frac{A_{i}} {\lambda } x \leq \bar{ 1},\ \ i = 1,2,\ldots r\}}$$

  3. Remove all the redundant rows in matrix \(F_{k}\) to get \(\mathcal{P}_{k} = \mathcal{P}(F_{k})\bigcap \mathrm{I\!R}_{+}^{n}\).

  4. If \(\mathcal{P}_{k} = \mathcal{P}_{k-1}\) STOP: the procedure is successful.

  5. If \(k = k_{max}\) or if \(\epsilon \bar{1}\not\in \mathcal{P}_{k}\) STOP: the procedure is unsuccessful for the given λ.

  6. Set \(k = k + 1\) and GO TO step 2.

If the procedure stops successfully, then the final set is the largest invariant set included in the initial one for the modified system (and the largest λ-contractive set for the original system).

Again, the previous considerations about the tolerance and the maximum number of steps apply. Note that, in the event that the sequence collapses, the failure is simply detected by the exclusion \(\epsilon \bar{1}\not\in \mathcal{P}_{k}\).

As a starting set, we can take a simple set, for instance \(\mathcal{S} =\{ x \geq 0:\ Fx \leq 1\}\) for some row vector F > 0.

This method for finding polyhedral co-positive functions can be applied (needless to say) to continuous-time problems by means of the Euler Auxiliary System, as described in Chapter 5. Note that, given a positive continuous-time system \(\dot{x} = Ax\) with A Metzler, the matrix [I +τ A] is non-negative provided that τ > 0 is small enough. Similar techniques have been applied to prove structural boundedness of a class of biochemical networks [BG14].

Example 9.28 (Worst-case emptying speed).

Assume that waste material is accumulated in two stock houses and it has to be eliminated. Assume that the system has two possible configurations, as in Fig. 9.13. In configuration A the waste material in node 1 is transferred to node 2 and then eliminated. Configuration B is the symmetric one. We assume a discrete-time model of the form

$$\displaystyle{A_{A} = \left [\begin{array}{cc} \ \beta \ &\ 0\ \\ 1-\beta & \ \alpha \ \end{array} \right ],\ \ \ \ \ \ A_{B} = \left [\begin{array}{cc} \ \alpha \ &1-\beta \\ \ 0\ & \ \beta \ \end{array} \right ]}$$

with 0 < α, β < 1. In any fixed configuration, the system would converge with a speed depending on the maximum eigenvalue max{α, β}. A legitimate question is whether switching between the two configurations can worsen the situation and how much.

Fig. 9.13 The two-configuration emptying problem.

If, for instance, we assume \(\alpha =\beta = 0.8\), then the maximum eigenvalue is λ = 0.8 for both matrices and so, under arbitrary switching, the worst-case contraction factor cannot be smaller than λ = 0.8. Iterating over λ, it is possible to compute numerically that the best contractivity factor which is assured under arbitrary switching is around \(\lambda^{*} \approx 0.92\). The reader can enjoy in Fig. 9.14 the maximal set computed for λ = 0.94, included in the region

$$\displaystyle{\mathcal{S} =\{ x_{1},x_{2} \geq 0:\ \ x_{1} + x_{2} \leq 1\}}$$
Fig. 9.14 The maximal invariant set for the modified system with λ = 0.94 (lower-left portion of the square)
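As an independent cross-check of the value reported above, a lower bound on the worst-case asymptotic contraction factor under arbitrary switching (a lower bound on the joint spectral radius, not the set-iteration procedure of this section) can be obtained by brute force over all switching sequences up to a given length:

% Brute-force lower bound on the worst-case contraction factor of Example 9.28.
alpha = 0.8;  beta = 0.8;
AA = [beta 0; 1-beta alpha];   AB = [alpha 1-beta; 0 beta];
N = 12;  best = 0;
for len = 1:N
    for s = 0:2^len-1
        M = eye(2);
        for j = 1:len
            if bitget(s,j), M = AA*M; else, M = AB*M; end
        end
        best = max(best, max(abs(eig(M)))^(1/len));
    end
end
best     % lower bound, consistent with the value lambda* of about 0.92 given above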

9.5.3 Switched positive linear systems

Perhaps the problem of switched positive systems is of more interest, because there are many situations in which this problem is encountered and positivity turns out to be an assumption under which some general interesting properties can be proved.

Consider the system

$$\displaystyle{ \dot{x}(t) = A_{q(t)}x(t) }$$
(9.15)

where

$$\displaystyle{q(t) \in \mathcal{Q} =\{ 1,2,\ldots,r\}}$$

and the matrices A q are all Metzler.

The basic question we consider is the existence of a feedback function

$$\displaystyle{q(t) =\varPhi (x,t)}$$

ensuring that the state x(t) converges to zero for any initial condition x(0) ≥ 0.

Strange as it may seem, in choosing q(t), the knowledge of x(t) does not matter (see [FV12, BCV12] for details).

Theorem 9.29.

For a system of the form (9.15) with A q Metzler the following conditions are equivalent.

  1. i)

    There exists x 0 > 0 and \(q(t) \in \mathcal{Q}\) (an open-loop control depending on x 0 ) such that the trajectory starting from x(0) = x 0 converges to zero.

  2. ii)

    There exists a feedback law q(t) = Φ(x(t),t) such that the trajectory starting from any x(0) ≥ 0 converges to zero.

  3. iii)

    The switched system is consistently stabilizable (i.e., there exists a single \(q(t) \in \mathcal{Q}\) which drives x(t) to zero from any initial condition, not necessarily positive).

Proof.

ii) ⇒ i): Clearly, if there exists a stabilizing closed-loop law q(t) = Φ(x(t), t), then, given x 0 > 0, there exists an open-loop sequence which drives the state to 0 starting from x(0) = x 0.

i) ⇒ iii): (this result is in [FV12]). Assume that, given \(\bar{x}_{0} > 0\), there exists a switching function q(t) such that for \(\bar{x}(0) =\bar{ x}_{0}\), the solution \(\bar{x}(t)\) converges to zero. Let x(0) be any initial condition such that \(x(0) \leq \bar{ x}_{0}\) and let x(t) be the corresponding solution. Since q(t) is fixed, both x(t) and \(\bar{x}(t)\) are solutions and their difference \(z(t)\doteq \bar{x}(t) - x(t)\) satisfies

$$\displaystyle{\dot{z}(t) = A_{q(t)}z(t)}$$

where A q(t) is Metzler, so the system is positive. Since by construction \(z(0) =\bar{ x}(0) - x(0) \geq 0\), the condition z(t) ≥ 0 is preserved. Hence \(x(t) \leq \bar{ x}(t)\) for all t > 0. Now take the symmetric initial state \(-\bar{x}_{0}\). For the given q(t) the solution is \(-\bar{x}(t)\) which goes to zero. Exactly in the same way, one can show that if \(x(0) \geq -\bar{x}_{0}\) then \(x(t) \geq -\bar{x}(t)\). Then, for \(-\bar{x}_{0} \leq x(0) \leq \bar{ x}_{0}\), we have (see Fig. 9.15)

$$\displaystyle{-\bar{x}(t) \leq x(t) \leq \bar{ x}(t)}$$

Therefore all the initial states in the box \(\bar{\mathcal{P}}[I,\bar{x}_{0}]\) are driven to zero. Since the box includes zero in its interior, and the system is linear, any initial state can be included in the box \(\lambda \bar{\mathcal{P}}[I,\bar{x}_{0}]\), for λ > 0 large enough, so q(t) drives all states to zero. This proves iii).

Fig. 9.15

The idea of the proof: all solutions are bounded below and above by those originating between \(-\bar{x}_{0}\) and \(\bar{x}_{0}\)

iii) ⇒ ii): obvious.

The previous theorem admits a corollary (see [SG05] for further details).

Corollary 9.30.

If any of the equivalent conditions of Theorem 9.29 holds, then there exists a periodic sequence q p (t), with period T large enough, such that any initial state is driven to 0.

Proof.

Consider a function q(t) driving the state trajectory \(\bar{x}(t)\) from \(\bar{x}_{0}\) to zero. Take a positive λ < 1 and T > 0 such that the solution \(\bar{x}(t)\) is in the box \(\lambda \bar{\mathcal{P}}[I,\bar{x}_{0}]\) at time t = T, namely \(\bar{x}(T) \in \bar{\mathcal{P}}[I,\lambda \bar{x}_{0}]\). So for any initial state in \(\bar{\mathcal{P}}[I,\bar{x}_{0}]\) (\(-\bar{x}_{0} \leq x(0) \leq \bar{ x}_{0}\)) we have

$$\displaystyle{x(T) \in \lambda \bar{\mathcal{P}}[I,\bar{x}_{0}]}$$

Truncate the function q and extend it periodically with period T. After the next period we will have \(x(2T) \in \lambda ^{2}\bar{\mathcal{P}}[I,\bar{x}_{0}]\) and, in general,

$$\displaystyle{x(kT) \in \lambda ^{k}\bar{\mathcal{P}}[I,\bar{x}_{ 0}]}$$

Since λ < 1, this means that x(t) → 0.

The previous results hold, without changes, in discrete-time.

For clear reasons, it is important to find a feedback solution to the problem anyway. We have seen that, in general, for switched linear systems a convex control Lyapunov function may not exist, even if the system is stabilizable. A natural question is whether there exists a class of Lyapunov functions which are universal for the problem. The following theorem provides an answer [HVCMB11] and tells us that, surprisingly, for positive switched linear systems, as long as we stay in the positive orthant, we can always find concave control Lyapunov functions.

Theorem 9.31.

Assume that a positive switched linear system is stabilizable. Then there exists a concave co-positive control Lyapunov function, positively homogeneous of order one.

Proof.

Consider the perturbed system

$$\displaystyle{\dot{x}(t) = [\beta I + A_{q}]x(t) = A_{\beta,q}x(t)}$$

for β > 0 small enough. We recall that, for the same initial condition and the same q, the solution x(t) of the unperturbed system and the solution x β (t) of the perturbed one are related as x β (t) = e^{β t} x(t) (see Proposition 9.18), and that, for β small enough, the perturbed system remains stabilizable if the original system is such.

Denote by x β, q (t, x 0) the solution with initial condition x 0, corresponding to a switching function q(⋅ ).

Consider for the modified system the following function:

$$\displaystyle{\varPsi _{\beta }(x_{0}) =\inf _{q}\ \ \int _{0}^{\infty }\ \bar{1}^{T}x_{\beta,q}(t,x_{0})dt}$$

which is well defined since we assumed stabilizability.

To prove concavity, consider \(x_{0} =\alpha _{1}x_{1} +\alpha _{2}x_{2}\), \(\alpha _{1} +\alpha _{2} = 1\), α 1, α 2 ≥ 0. For any q we have

$$\displaystyle\begin{array}{rcl} \int _{0}^{\infty }\ \bar{1}^{T}x_{\beta,q}(t,x_{0})\,dt& =& \alpha _{1}\int _{0}^{\infty }\ \bar{1}^{T}x_{\beta,q}(t,x_{1})\,dt +\alpha _{2}\int _{0}^{\infty }\ \bar{1}^{T}x_{\beta,q}(t,x_{2})\,dt {}\\ & & {}\\ \end{array}$$

hence

$$\displaystyle\begin{array}{rcl} \varPsi _{\beta }(x_{0})& =& \inf _{q}\ \ \int _{0}^{\infty }\ \bar{1}^{T}x_{\beta,q}(t,x_{0})\,dt {}\\ & =& \inf _{q}\ \left [\alpha _{1}\int _{0}^{\infty }\ \bar{1}^{T}x_{\beta,q}(t,x_{1})\,dt +\alpha _{2}\int _{0}^{\infty }\ \bar{1}^{T}x_{\beta,q}(t,x_{2})\,dt\right ] {}\\ & \geq & \alpha _{1}\left [\inf _{q_{1}}\ \int _{0}^{\infty }\ \bar{1}^{T}x_{\beta,q_{1}}(t,x_{1})\,dt\right ] +\alpha _{2}\left [\inf _{q_{2}}\ \int _{0}^{\infty }\ \bar{1}^{T}x_{\beta,q_{2}}(t,x_{2})\,dt\right ] {}\\ & =& \alpha _{1}\varPsi _{\beta }(x_{1}) +\alpha _{2}\varPsi _{\beta }(x_{2}) {}\\ \end{array}$$

and therefore we have that Ψ β (x 0) is concave. It is obviously positively homogeneous of order one.

Consider the directional derivative

$$\displaystyle{D^{+}\varPsi (x,[\beta I + A]x) =\lim _{ h\rightarrow 0^{+}}\ \frac{\varPsi (x + h[\beta I + A]x) -\varPsi (x)} {h} }$$

and any interval [0, τ]. By applying dynamic programming considerations, we have

$$\displaystyle{\varPsi _{\beta }(x_{0}) =\inf _{q}\left [\int _{0}^{\tau }\ \bar{1}^{T}x_{\beta,q}(t,x_{0})\,dt +\varPsi _{\beta }(x_{\beta,q}(\tau,x_{0}))\right ]}$$

Then, for all τ > 0,

$$\displaystyle{\varPsi _{\beta }(x_{0}) >\varPsi _{\beta }(x_{\beta }(\tau ))}$$

As a consequence we have

$$\displaystyle{D^{+}\varPsi (x,[\beta I + A]x) \leq 0}$$

Let us denote by \(\eta = h/(1 -\beta h)\) (then h → 0 implies η → 0) and let us bear in mind that Ψ(λ x) = λ Ψ(x). Now consider for the nominal system

$$\displaystyle\begin{array}{rcl} D^{+}\varPsi (x)& =& \lim _{ h\rightarrow 0^{+}} \frac{\varPsi (x + hAx) -\varPsi (x)} {h} =\lim _{h\rightarrow 0^{+}}\ \frac{\varPsi (x - h\beta x) -\varPsi (x)} {h} {}\\ & +& \lim _{h\rightarrow 0^{+}}\ \frac{\varPsi (x + hAx + h\beta x - h\beta x) -\varPsi (x - h\beta x)} {h/(1 -\beta h)} \frac{1} {(1 -\beta h)} {}\\ & =& -\beta \varPsi (x) +\mathop{\underbrace{ \lim _{\eta \rightarrow 0^{+}} \frac{\varPsi (x+\eta [\beta I+A]x)-\varPsi (x)} {\eta } }}\limits _{\leq 0} \leq -\beta \varPsi (x) {}\\ \end{array}$$

and the proof is completed.

One could at this point try a conjecture. We have seen that, in the switching system case, a convex Lyapunov function can be smoothed. Then

Conjecture:

for a positive switched linear system, stabilizability implies the existence of a smooth concave positively homogeneous Lyapunov function.

The conjecture is false, unless we take n = 2. Precisely, we can claim the following.

Theorem 9.32.

Assume that the matrices A i \(i = 1,2,\ldots,r\) are irreducible. Then the following statements are equivalent.

  1. i)

    The system is stabilizable and admits a co-positive and positively homogeneous smooth control Lyapunov function.

  2. ii)

    There exists a matrix \(\bar{A}\) in the convex hull of the A i , \(\bar{A} \in conv\{A_{q},\ q = 1,\ldots,r\}\) , which is Hurwitz.

  3. iii)

    The system admits a linear co-positive control Lyapunov function Ψ(x) = z T x, with z > 0.

Proof.

ii) ⇒ iii) If there exists an Hurwitz matrix in the convex hull, then we can take its left Frobenius eigenvector z, so that \(z^{T}\bar{A} =\lambda z^{T}\), with λ < 0 and z > 0. Note that \(\bar{A}\) is irreducible, if the A i are such. Then, for all x ≥ 0

$$\displaystyle{\min _{i}\ \ z^{T}A_{ i}x \leq z^{T}\bar{A}x =\lambda z^{T}x < 0}$$

Then \(\varPsi (x)\doteq z^{T}x\) is a co-positive control Lyapunov function.

iii) ⇒ i) Obviously, since a co-positive linear function is smooth and positively homogeneous.

i) ⇒ ii) See [BCV12].

Proposition 9.33.

There exist Metzler matrices A 1 , A 2 , …A r for which the corresponding switched system is stabilizable but there are no positively homogeneous co-positive Lyapunov functions which are continuously differentiable.

Proof.

In view of implication i) ⇒ ii) in Theorem 9.32 it is sufficient to show that there exist stabilizable positive switched systems which do not include Hurwitz matrices in the convex hull. This will be shown in Example 9.34.

As previously announced, this is not good news. Since positive linear systems are a special case of linear systems, the existence of a positively homogeneous smooth Lyapunov function would imply the existence of a co-positive Lyapunov function if we restrict our attention to the positive orthant. Then Proposition 9.33 implies Proposition 9.22. The proof of the result is in [BCV12] and a different and more “detailed” one is in [BCV13].

The following example motivates the analysis and proves Proposition 9.33.

Example 9.34.

Consider a traffic control problem in a junction.

Assume that there are three main roads (A, B, and C) converging into a “triangular connection” governed by traffic lights. Three buffer variables, x 1, x 2, and x 3, represent the number of vehicles waiting at the three traffic lights inside the triangular loop. We assume that there are three symmetric configurations as far as the states of the 6 traffic lights are concerned. In the first configuration, described in Fig. 9.16, we assume that traffic lights corresponding to x 1, x 2, B and C are green, while the ones corresponding to x 3 and A are red. Accordingly,

  • x 3 increases proportionally (β > 0) to x 2;

  • x 2 remains approximately constant, receiving inflow from B and buffer x 1, while giving outflow to A and to buffer x 3;

  • x 1 decays exponentially ( −γ < 0), since the inflow from C goes all to x 2 and B.

Fig. 9.16

The traffic control problem.

The exponential decay approximately takes into account the initial transient due to the traffic light switching. The other two configurations are obtained by a circular rotation of x 1, x 2, and x 3 (as well as of A, B, and C).

We model this problem by considering the following switched system, in which the control must select one of the three sub-systems characterized by the matrices

$$\displaystyle{ A_{1} =\ \left [\begin{array}{ccc} -\gamma &0&0\ \\ 0 &0 &0 \\ 0 & \beta &0 \end{array} \right ]\ \ A_{2} =\ \left [\begin{array}{ccc} \ 0& 0 & \beta \ \\ 0 &-\gamma &0 \\ 0& 0 &0 \end{array} \right ]\ \ A_{3} =\ \left [\begin{array}{ccc} \ 0&0& 0\ \\ \beta &0 & 0 \\ 0&0&-\gamma \end{array} \right ], }$$
(9.16)

with γ = 1 and β = 1.

First of all, notice that no Hurwitz convex combination of the three matrices can be found. Indeed, the characteristic polynomial of the matrix \(\alpha _{1}A_{1} +\alpha _{2}A_{2} +\alpha _{3}A_{3}\) is

$$\displaystyle{p(s,\alpha ) = s^{3} + (\alpha _{ 1} +\alpha _{2} +\alpha _{3})s^{2} + (\alpha _{ 1}\alpha _{2} +\alpha _{2}\alpha _{3} +\alpha _{3}\alpha _{1})s.}$$

So p(s, α) is not a Hurwitz polynomial for any choice of α i  ≥ 0, i = 1, 2, 3, with \(\alpha _{1} +\alpha _{2} +\alpha _{3} = 1\), and therefore the convex hull contains no Hurwitz matrix.

However, the matrix product \(e^{A_{1}}e^{A_{2}}e^{A_{3}}\) is Schur (the dominant eigenvalue is ≈ 0.69). So, the periodic switching law

$$\displaystyle{q(t) = \left \{\begin{array}{ll} 3,&t \in [3k,3k + 1); \\ 2,&t \in [3k + 1,3k + 2); \\ 1,&t \in [3k + 2,3k + 3); \end{array} \qquad k \geq 0,\right.}$$

makes the resulting system consistently (actually exponentially) stable, and hence exponentially stabilizable.

It is also worth pointing out an interesting fact. In general, the existence of a smooth control Lyapunov function would lead to the “arg-min” strategy, which introduces chattering and sliding modes, as pointed out in Remark 9.24. For this problem chattering would be catastrophic: it is clear that we must “dwell” on each configuration for a sufficiently long time. In the case of a periodic strategy with dwell time T in each mode, the product of the three exponentials \(e^{A_{1}T}e^{A_{2}T}e^{A_{3}T}\) has to be Schur. We have seen that this is the case for T = 1.

Note that this commutation implies that the “red” is imposed according to the circular order 3, 2, 1, 3, 2, 1, …. It is surprising to notice that, if the order is changed, not only can the system performance get worse, but the system may even become unstable. Indeed, \(e^{A_{3}}e^{A_{2}}e^{A_{1}}\) is unstable, with spectral radius ≈ 1.90, which means that the commutation order is fundamental and the order 1, 2, 3, 1, 2, 3, … is unsuitable. A simple explanation is that switching the red light from 3 to 2 allows for a “fast recovery” from the congestion on x 3 (due to the exponential decay), while switching the red from 3 to 1 would leave such a congestion unchanged.
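These spectral radii are easy to reproduce numerically; the following sketch (assuming numpy and scipy are available) computes them for the matrices (9.16) with γ = β = 1.

# Numerical check (sketch) of the spectral radii quoted in Example 9.34,
# with gamma = beta = 1 as in (9.16).
import numpy as np
from scipy.linalg import expm

A1 = np.array([[-1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
A2 = np.array([[0.0, 0.0, 1.0], [0.0, -1.0, 0.0], [0.0, 0.0, 0.0]])
A3 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, -1.0]])

rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))

# order 3, 2, 1 over one period: x(3k+3) = e^{A1} e^{A2} e^{A3} x(3k)
print("order 3,2,1:", rho(expm(A1) @ expm(A2) @ expm(A3)))   # ~0.69 (Schur)
# reversed order 1, 2, 3
print("order 1,2,3:", rho(expm(A3) @ expm(A2) @ expm(A1)))   # ~1.90 (unstable)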

We complete the example by considering the effect of a constant input (the incoming traffic) and hence by introducing the system

$$\displaystyle{\dot{x} = A_{q}x + b}$$

with \(b =\bar{ 1}\), γ = 1 and β = 1. It turns out that, with these values, \(F:= e^{A_{1}T}e^{A_{2}T}e^{A_{3}T}\) is Schur for T > 0.19. This means that, under a periodic strategy with T > 0.19, the system converges to a periodic trajectory \(\tilde{x}(t)\), as shown in Figure 9.17.

Fig. 9.17

State trajectory corresponding to T = 2.1 and x(0) = [10 10 10]′.

Note that it is possible to optimize T in order to achieve a strategy which reduces as much as possible the buffer levels of the periodic trajectory (see [BCV12] for details).

Remark 9.35.

Note that, in principle, the matrices provided in the example do not satisfy the assumption of Theorem 9.32, because they are reducible. This is not an issue, because we can modify the system by perturbing all the coefficients with positive small numbers

$$\displaystyle{A_{i} +\epsilon O}$$

where O is the 1-matrix O ij  = 1, for all i, j, and ε > 0 small. The periodic strategy would be stabilizing for ε small. However, no Hurwitz convex combination would exist anyway (see Exercise 5).

In the simple case of second order systems, the following result holds [BCV12].

Proposition 9.36.

A continuous-time second order positive switched system is stabilizable if and only if there exists an Hurwitz convex combination.

The proof can be found in [BCV12]. We stress that sufficiency holds for positive switched systems of any order, since it holds for linear switched systems in continuous-time, as we have seen at the beginning of Section 9.4. Still we give a proof of the sufficiency to show that we can derive a linear co-positive control Lyapunov function using the left Frobenius eigenvector of a Hurwitz convex combination. A similar proof works also for discrete-time switched systems. Assume that there exists

$$\displaystyle{\bar{A} =\sum _{ i=1}^{r}A_{ i}\alpha _{i},\ \ \ \alpha _{i} \geq 0,\ \ \sum _{i=1}^{r}\alpha _{ i} = 1}$$

which is a Hurwitz matrix and assume, for brevity, that it is irreducible. Take the positive left eigenvector z T > 0 of \(\bar{A}\) associated with the Frobenius eigenvalue λ F  < 0, so that \(z^{T}\bar{A} =\lambda _{F}z^{T}\). Consider the co-positive function Ψ(x) = z T x. Then, by linearity, for all x ≥ 0

$$\displaystyle{\min _{i}z^{T}A_{ i}x \leq z^{T}\bar{A}x =\lambda _{ F}z^{T}x =\lambda _{ F}\varPsi (x)}$$

Therefore, the strategy \(q =\arg \min _{i}z^{T}A_{i}x\) is stabilizing.
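The strategy can be illustrated with a short Python sketch. The two Metzler matrices below are invented for illustration (each is unstable, but their average is Hurwitz), and the Euler discretization of the switching law is a simplification of ours.

# Illustration (sketch) of the arg-min strategy based on the left Frobenius
# eigenvector of a Hurwitz convex combination.
import numpy as np

A = [np.array([[0.5, 0.0], [2.0, -3.0]]),
     np.array([[-3.0, 2.0], [0.0, 0.5]])]
A_bar = 0.5 * A[0] + 0.5 * A[1]                    # Hurwitz convex combination

w, V = np.linalg.eig(A_bar.T)                      # left eigenvectors of A_bar
k = np.argmax(w.real)                              # Frobenius (rightmost) eigenvalue
z = np.abs(V[:, k].real)                           # z > 0 by Perron-Frobenius
print("lambda_F =", w[k].real)                     # negative, since A_bar is Hurwitz

dt, x = 1e-3, np.array([1.0, 2.0])                 # x(0) >= 0
for _ in range(40000):                             # simulate up to t = 40
    q = min(range(len(A)), key=lambda i: z @ A[i] @ x)
    x = x + dt * (A[q] @ x)                        # Euler step of x' = A_q x
print("final state:", x)                           # approaches the origin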

We know that, in general, the existence of a Schur-stable convex combination is neither necessary nor sufficient (see Exercises 6 and 7) for stabilizability of linear switched discrete-time systems. For positive discrete-time switched systems, the condition is sufficient, but not necessary (see Exercise 7) even for second order systems. The sufficiency proof can be derived exactly (mutatis mutandis) by means of the previous considerations (see Exercise 8).

We can now establish a procedure for discrete-time switched systems which leads to the generation of a polyhedral concave co-positive control Lyapunov function.

We use again some dynamic programming arguments used in Chapters 5 and 6 (see [HVCMB11]). Consider the set

$$\displaystyle{\mathcal{X}_{0} =\{ x:\bar{ 1}^{T}x \geq 1\}}$$

and notice that it is impossible to stabilize the system if and only if, starting from some initial value inside \(\mathcal{X}_{0}\), the state x(t) remains in this set for all possible switching sequences q(t) or, equivalently, if there exists a robustly positive invariant set included in \(\mathcal{X}_{0}\). This is also equivalent to saying that we have stabilizability if and only if there is no robustly positively invariant set included in \(\mathcal{X}_{0}\). Since \(\mathcal{X}_{0}\) is convex, one might try to compute the largest (convex) invariant set in \(\mathcal{X}_{0}\) and check whether such a set is empty.

Consider the set of all vectors x in \(\mathcal{X}_{0}\) which are driven inside \(\mathcal{X}_{0}\) by all matrices A i

$$\displaystyle{\mathcal{X}_{1} =\{ x \geq 0:\bar{ 1}^{T}x \geq 1,\ \ \bar{1}^{T}A_{ i}x \geq 1,\ \ i = 1,\ldots r\}}$$

This set can be represented as

$$\displaystyle{F^{(1)}x \geq \bar{ 1}}$$

with

$$\displaystyle{F^{(1)} = \left [\begin{array}{c} \bar{1}^{T} \\ \bar{1}^{T}A_{1} \\: \\ \bar{1}^{T}A_{r} \end{array} \right ]}$$

For \(k = 1,2,3,\ldots\), recursively compute the set

$$\displaystyle{\mathcal{X}_{k+1} =\{ x \geq 0: F^{(k)}x \geq 1,\ \ F^{(k)}A_{ i}x \geq 1,\ \ i = 1,\ldots r\}}$$

compactly represented as \(\mathcal{X}_{k+1} =\{ x \geq 0: F^{(k+1)}x \geq 1\}\), and proceed in a “dynamic programming” style by generating the sets \(\mathcal{X}_{k}\).

The set \(\mathcal{X}_{k}\) is the set of all the initial states x 0 which remain in the set \(\mathcal{X}_{0}\) (i.e., \(\bar{1}^{T}x(t) \geq 1\)) for k steps under all possible sequences \(A_{q(t)} \in \{ A_{1},A_{2},\ldots,A_{r}\}\). Conversely, if \(x_{0}\not\in \mathcal{X}_{k}\), there exists a sequence that brings x(t) outside \(\mathcal{X}_{0}\) (hence \(\bar{1}^{T}x(t) < 1\)) within k steps.

Therefore, if for some k > 0 \(\mathcal{X}_{k}\) is strictly included in \(\mathcal{X}_{0}\) (see Fig. 9.18), then all the states x(0) on its positive boundary can be driven in k steps to \(x(k) \in \partial \mathcal{X}_{0}\), the boundary of \(\mathcal{X}_{0}\).

Fig. 9.18

The initial set \(\mathcal{X}_{0}\) (complement of the dark region), the final set \(\mathcal{X}_{h}\) (to the right of the thick curve), and the contracted version \(\lambda \mathcal{X}_{h}\) (to the right of the dashed curve)

By construction, any point x 0 on the positive boundary of \(\mathcal{X}_{k}\) satisfies the equality

$$\displaystyle{\bar{1}^{T}A_{ i_{k-1}}A_{i_{k-2}}\ldots A_{i_{0}}x(0) = 1}$$

for a proper choice of the indices i h , say \(\bar{1}^{T}x(k) = 1\).

Now, if \(\mathcal{X}_{k}\) does not intersect \(\bar{1}^{T}x = 1\), then we can take a positive λ < 1, close enough to 1, such that the following inclusion is preserved (see Fig. 9.18)

$$\displaystyle{\lambda \mathcal{X}_{k} \subset \mathcal{X}_{0}}$$

Let \(\tilde{\mathcal{X}}_{k}\) be the closure of the complement of \(\mathcal{X}_{k}\) and consider the function Ψ(x) = minF i (k) x. Note that \(\tilde{\mathcal{X}}_{k} = \mathcal{N}[\varPsi (x),1] =\{ x:\ \varPsi (x) \leq 1\}\). Given any \(x(0) \in \partial \tilde{\mathcal{X}}_{k}\), it can be brought in k steps to \(\partial \tilde{\mathcal{X}}_{0}\), hence to \(\lambda \tilde{\mathcal{X}}_{k}\). Hence we have that

$$\displaystyle{\varPsi (x(k)) \leq \lambda \varPsi (x(0))}$$

for a proper sequence. Thus, by repeating this sequence periodically, we can assure Ψ(x(ik)) ≤ λ i Ψ(x(0)), \(i = 1,2,\ldots\), and drive the state to 0.

A possible different stopping condition for the set sequence is

$$\displaystyle{\lambda \mathcal{X}_{k} \subset \mathcal{X}_{k-1}}$$

for some contraction factor λ > 0. Denoting by F the matrix describing the final set \(\mathcal{X}_{k} =\{ x \geq 0: Fx \geq 1\}\), we have that the concave co-positive piecewise-linear Lyapunov function

$$\displaystyle{ \varPsi (x) =\min Fx }$$
(9.17)

is a control Lyapunov function since

$$\displaystyle{\min _{i}\ \varPsi (A_{i}x) \leq \varPsi (x)}$$

According to the considerations in Chapters 5 and 6, we can consider the modified system

$$\displaystyle{x(t + 1) = \left (\frac{A_{i}} {\lambda } \right )x(t)}$$

with an assigned contractivity factor λ > 0.

Procedure

  1. 1.

    Let \(F^{(0)} =\bar{ 1}^{T}\). Fix a maximum number of steps k max , a contractivity factor λ > 0 and a tolerance ε > 0 such that λ +ε < 1.

  2. 2.

    For \(k = 1,2,\ldots\), compute the set

    $$\displaystyle{\mathcal{X}_{k} =\{ x \geq 0: F^{(k-1)}x \geq \bar{ 1},\ \ \ F^{(k-1)}(A_{ i}/\lambda )x \geq \bar{1},\ \ i = 1,2,\ldots,r\}}$$

    This set is of the form \(\mathcal{X}_{k} =\{ x \geq 0:\ F^{(k)}x \geq \bar{ 1}\}\).

  3. 3.

    Eliminate all the redundant inequalities, to achieve a minimal F (k) representing \(\mathcal{X}_{k} =\{ x \geq 0: F^{(k)}x \geq \bar{ 1}\}\).

  4. 4.

    If \(\mathcal{X}_{k} \subset (1+\epsilon )\mathcal{X}_{k-1}\), stop (successfully): the closure of the complement of \(\mathcal{X}_{k}\) in the positive orthant is a (λ +ε)-contractive set.

  5. 5.

    If \(\mathcal{X}_{k} = \mathcal{X}_{k-1}\) and the boundary of the original set (\(\bar{1}^{T}x = 1\)) is an active constraint, stop (unsuccessfully): the set \(\mathcal{X}_{k}\) is robustly invariant for the modified system. Hence, no matter which sequence A q(k) is chosen, for x 0 in this set we will have \(\bar{1}^{T}x(k) \geq 1\).

  6. 6.

    Let \(k:= k + 1\). If k ≥ k max , STOP; otherwise go to step 2.
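A possible implementation of the procedure is sketched below (our own sketch, not the book's code: numpy and scipy are assumed, the unsuccessful stop of step 5 is omitted for brevity, and the tolerances are illustrative and may need tuning).

# Sketch of the procedure above for the controlled case: it builds the sets
# X_k = {x >= 0 : F^(k) x >= 1} for the modified system x(t+1) = (A_i/lambda) x(t)
# and, on success, returns F, so that Psi(x) = min_j (F x)_j is a concave
# co-positive control Lyapunov function.
import numpy as np
from scipy.optimize import linprog

def min_over(g, F):
    """min g @ x  subject to  F x >= 1, x >= 0."""
    res = linprog(g, A_ub=-F, b_ub=-np.ones(F.shape[0]),
                  bounds=[(0, None)] * F.shape[1])
    return res.fun if res.status == 0 else np.inf

def switched_clf(A_list, lam, eps=0.005, k_max=100):
    n = A_list[0].shape[0]
    F_prev = np.ones((1, n))                      # X_0 = {x >= 0 : 1^T x >= 1}
    for _ in range(k_max):
        F = np.vstack([F_prev] + [F_prev @ (A / lam) for A in A_list])
        i = 0                                     # step 3: drop redundant rows
        while i < F.shape[0]:
            others = np.delete(F, i, axis=0)
            if others.shape[0] and min_over(F[i], others) >= 1 - 1e-9:
                F = others
            else:
                i += 1
        # step 4: stop successfully if X_k is included in (1 + eps) X_{k-1}
        if all(min_over(g, F) >= 1 + eps for g in F_prev):
            return F
        F_prev = F
    return None

# Usage sketch on the two emptying matrices of Example 9.28 (alpha = beta = 0.8):
# for lambda slightly above the best achievable factor ~0.89 the procedure is
# expected to stop successfully after a few iterations.
A_A = np.array([[0.8, 0.0], [0.2, 0.8]])
A_B = np.array([[0.8, 0.2], [0.0, 0.8]])
F = switched_clf([A_A, A_B], lam=0.90)
print(F)    # rows of the concave piecewise-linear CLF Psi(x) = min(F x)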

Example 9.37.

Consider again the system of Example 9.28, but assume now that we can choose the matrix (\(A_{A}\) or \(A_{B}\)) at each time. Under arbitrary switching the system has a contractivity factor of about 0.92. If we apply the above procedure, we see that for the controlled system the contractivity factor (obviously) reduces to 0.89. The sequence of regions is depicted in Fig. 9.19. The final control Lyapunov function is

$$\displaystyle{\varPsi (x) =\min \{ 1.2710x_{1} + 0.7263x_{2},0.7263x_{1} + 1.2710x_{2}\}}$$
Fig. 9.19

The two-configuration emptying problem in the controlled case

9.6 Switching compensator design

The trade-off among different, often conflicting, design goals is a well-known problem in control design [LM99]. Even in simple cases, such as servo design, it is not possible to achieve a certain performance without compromising another. For instance, fast signal tracking in a controlled loop requires a large bandwidth, which has the side effect of rendering the system more sensitive to disturbances. This problem is often thought of as an unsolvable one: the trade-off is generically considered an unavoidable issue.

In this section we wish to partially contradict this statement by presenting, in a constructive way, techniques for switching among controllers, each designed for a specific goal, as an efficient approach to reduce the limitations of adopting a single controller.

9.6.1 Switching among controllers: some applications

We start with a very simple example which shows the benefits we can achieve by switching.

Example 9.38.

Switching strategies can be successfully applied to the problem of semi-active damping of elastic structures. This problem has been investigated for more than 30 years [HBR83] and is still quite popular. Here we just propose a simple example to provide an idea of what can be done by means of Lyapunov-based techniques. Consider the very simple problem of damping via feedback a single degree of freedom oscillator whose model is

$$\displaystyle{\left [\begin{array}{c} \dot{x}_{1}(t) \\ \dot{x}_{2}(t) \end{array} \right ] =\mathop{\underbrace{ \left [\begin{array}{cc} 0 & 1\\ - 1 &-\mu \end{array} \right ]}}\limits _{A(\mu )}\ \left [\begin{array}{c} x_{1}(t) \\ x_{2}(t) \end{array} \right ]}$$

where μ is a damping coefficient, μ ≥ β > 0. The lower bound β > 0 represents the natural damping (i.e., that achieved with no control) and in the example it is assumed that β = 0.1.

Consider the problem of determining the appropriate value of μ to achieve “good convergence” from the initial state \(x = -[x_{0}\ 0]^{T}\). This is an elementary problem often presented in basic control courses. The trade-off in this case is the following:

  • small values of μ produce an undamped dynamics with undesirable oscillations;

  • large values of μ produce an over-damped dynamics with slow convergence.

Elementary root locus considerations lead to the conclusion that the “best” coefficient to achieve the fastest transient is the critical damping, i.e. the value μ cr  = 2 (associated with a pair of coincident eigenvalues).

The situation is represented in Fig. 9.20, where we considered the initial condition \(x(0) = [-2\ 0]^{T}\). The choice μ = β, which produces a fast reaction of the system, results anyway in a poorly damped transient, represented by the dotted oscillating solution. If we consider a high gain, for instance μ = 10, then we have the opposite problem: there are no oscillations, but a slow exponential transient (represented by the exponentially decaying dotted curve in Fig. 9.20). The critically damped solution, obtained when μ cr  = 2, is the best trade-off and is represented by the lower dashed line in Fig. 9.20.

Fig. 9.20

The different solutions

Can we do better? We can provide a positive answer if we do not limit ourselves to considering a single value of μ. The idea is that, by switching among different values of μ, we have more degrees of freedom. Note that this is equivalent to switching between derivative controllers with different gains.

We consider the case in which we can switch between two gains, μ = β and \(\bar{\mu }= 10\). The problem is clearly how to switch between the two. The first idea one might have in mind is heuristically motivated. We allow the system to evolve without artificial damping (μ = β) until a certain strip | x 1 | ≤ ρ is reached, where ρ is an assigned value, and we brake by switching to \(\bar{\mu }\). We chose ρ = 0.05. Unfortunately, this heuristic solution is not very satisfactory and is represented by the dashed curve marked as heuristic in Fig. 9.20. As expected, it is identical to the undamped solution in the first part, and then there is a braking stage. It is however apparent that braking is not sufficient: the system overshoots and the state jumps out of the braking zone.

Let us consider a better solution for this problem. We activate the braking value \(\bar{\mu }\) only when a proper positively invariant strip for the damped system is reached. For μ = 10, which is much larger than the critical value, the system has two real eigenvalues λ F  < λ S  < 0, where λ S  ≃ 0 and λ F  < < λ S are the slow and the fast eigenvalue, respectively. The eigenvector associated with the fast eigenvalue is v F  = [1  λ F ]T. Along the subspace corresponding to the “fast” eigenvector the transient is fast. Let us consider the orthogonal unit vector

$$\displaystyle{f^{T} = \frac{[-\lambda _{F}\ \ 1]} {\|[-\lambda _{F}\ \ 1]\|}}$$

and a thin strip of the form

$$\displaystyle{\mathcal{S}(\xi ) =\{ x:\ \ \vert f^{T}x\vert \leq \xi \}}$$

This strip is positively invariant for the system with high damping. Indeed, the vector f T , being orthogonal to v F , is a left eigenvector of \(A(\bar{\mu })\) associated with the slow eigenvalue λ S (recall that λ F λ S  = 1), and thus

$$\displaystyle{f^{T}A(\bar{\mu }) =\lambda _{ S}f^{T},\qquad \lambda _{S} < 0,}$$

so that the positive invariance conditions of Corollary 4.42 are satisfied. Note also that the strip includes the subspace associated with v F . Theoretically, the idea is to switch on the subspace associated with v F to exploit the fast transient. However, “being on a subspace” is not practically meaningful, so we replace the above with the condition \(x \in \mathcal{S}(\xi )\), which can be interpreted as “being on the subspace up to a tolerance ξ”. We chose \(\xi =\rho = 0.05\) for a fair comparison. It is apparent that the results are much better. The transient, the plain line in Fig. 9.20, is initially the fast undamped one and then it is close to the “fast eigenvector” motion.

The situation can be visualized in the phase plane in Fig. 9.21. The heuristic solution switches, as required, in the strip | x 1 | ≤ ρ, but cannot remain inside this strip (the vertical one), since it is not invariant; it then undergoes a further switching and the damping is improperly set back to the natural value β, until the braking region is eventually reached again. An application of the proposed technique to more general vibrating structures has been proposed in [BCGM12, BCC+14].
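The comparison between the two switching rules can be reproduced with the following simulation sketch (forward-Euler integration and the settling criterion are our own illustrative choices, not the computation behind the figures).

# Simulation sketch of Example 9.38: switching between mu = beta (no braking)
# and mu_bar = 10 (braking).
# "heuristic" rule: brake whenever |x1| <= rho, release otherwise.
# "invariant" rule: brake, and keep braking, once the invariant strip
# |f^T x| <= rho around the fast eigenvector of A(mu_bar) is reached.
import numpy as np

beta, mu_bar, rho = 0.1, 10.0, 0.05

def A(mu):
    return np.array([[0.0, 1.0], [-1.0, -mu]])

lam = np.linalg.eigvals(A(mu_bar)).real
lam_F = lam.min()                              # fast (most negative) eigenvalue
f = np.array([-lam_F, 1.0])
f /= np.linalg.norm(f)                         # unit vector orthogonal to v_F = [1, lam_F]^T

def settling_time(rule, x0=(-2.0, 0.0), dt=1e-3, T=20.0, tol=0.05):
    x, braking = np.array(x0), False
    for k in range(int(T / dt)):
        if rule == "heuristic":
            braking = abs(x[0]) <= rho         # strip may be left again: not invariant
        else:
            braking = braking or abs(f @ x) <= rho
        x = x + dt * (A(mu_bar if braking else beta) @ x)
        if np.max(np.abs(x)) <= tol:
            return k * dt
    return np.inf                              # not settled within the horizon

print("heuristic rule settles at t =", settling_time("heuristic"))
print("invariant-strip rule settles at t =", settling_time("invariant"))
# The second value is expected to be much smaller, consistently with Fig. 9.20/9.21.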

Fig. 9.21

The solution with the good (plain) and the heuristic (dashed) switching law

The idea of switching between controllers by exploiting the properties of invariant sets is not new. It was presented in [GT91, WB94, KG97, BB99]. The role played by invariant sets is apparent from the previous example. If switching to a new controller is subject to reaching a proper invariant set for the closed-loop system with such a controller, then we automatically assure that the new set will never be left. A possible application is transient improvement via switching. Suppose that we are given a linear system and a family of controllers with “increasing gain”

$$\displaystyle{u = K_{i}x,\ \ i = 1,2,\ldots,r}$$

associated with a nested family of invariant C-sets for the systems \(\dot{x} = (A + BK_{i})x\):

$$\displaystyle{\mathcal{S}_{1} \subseteq \mathcal{S}_{2} \subseteq \ldots \subseteq \mathcal{S}_{r}}$$

Choosing a nested family is always possible since any invariant set remains such if it is properly scaled. Then switching among controllers avoids control saturation when the state is far from the target and exploits the strong action of the larger gains in proximity of the origin. The idea can be extended to the case of output feedback if we introduce an observer-based supervisor [DSDB09]. Assume for brevity that a family of static output feedback gains is given. Then a possible scheme is that in Fig. 9.22.

Fig. 9.22

The logic-based switching system

After an initial transient, in which no switching should be allowed, the estimated state is accurate (here we assume accurate modeling and negligible disturbances), say \(\hat{x}(t) \simeq x(t)\). The inclusion of the estimated state in the appropriate region, \(\hat{x}(t) \in \mathcal{S}_{i}\), allows switching to the i-th gain.

Clearly the method requires either state feedback or accurate state estimation, which is not always possible. Finally, we notice that we can design compensators of different orders and dynamics. In this case we have an extra degree of freedom, given by the possible initialization of the compensator state at the switching time.

A fundamental result concerning switching among controllers, reported here from [HM02], is the following.

Theorem 9.39.

Consider a linear plant P and a set of r linear stabilizing compensators. Then, for each compensator there exists a realization (not necessarily minimal) such that, no matter how the switching among these compensators is performed, the overall closed-loop plant is asymptotically stable.

It is important to notice that the previous result does not imply that the property is true for any family of compensators with given realizations. In other words, it is easy to find families of compensators (even static gains) for which an unfavorable switching law can lead to instability. A simple case is given by the system

$$\displaystyle{A = \left [\begin{array}{cc} 0 & 1\\ - 1 &-\beta \end{array} \right ]\ \ B = \left [\begin{array}{c} 0\\ 1 \end{array} \right ]\ \ C = \left [\begin{array}{cc} 1&0 \end{array} \right ]}$$

If we take \(u = -(1+\eta )y\) and \(u = -(1-\eta )y\), with β small enough and 0 < η < 1 sufficiently close to 1, there exists a destabilizing switching law [Lib03]. However, there exist equivalent non-minimal realizations of the constant gains for which stability is assured under switching.

The result [HM02], which is of fundamental importance, does not consider performance. It is apparent from Example 9.38, the semi-active damping system, that, if one wishes to assure a certain performance, then some “logic constraints” on the switching rule have to be applied.

Further techniques of “switching among controllers” have been proposed in the literature, although of a different nature. It is worth mentioning a special technique which consists in partitioning the state space into connected subsets, in each of which a “local controller” is active. This idea has been pursued in [MADF00] in the context of gain-scheduling control and in [BRK99, BPV04] in the context of robot manipulator control in the presence of obstacles.

9.6.2 Parametrization of all stabilizing controllers for LTI systems and its application to compensator switching

One possibility to avoid the limitation of a single controller is to use more than one controller. For instance, if the system changes its working point or its configuration, one may decide to change the compensator accordingly.

In this subsection we briefly describe the essence of the idea behind the aforementioned Theorem 9.39, due to [HM02], and then we apply it to the case in which a switching compensator is applied to a fixed plant.

The first fundamental step towards the parametrization of a family of switching compensators is the standard Youla–Kucera parametrization of all stabilizing controllers for linear systems. Consider a stabilizable and detectable LTI system

$$\displaystyle\begin{array}{rcl} \dot{x}(t)& =& Ax(t) + Bu(t) {}\\ y(t)& =& Cx(t) {}\\ \end{array}$$

and assume a stabilizing compensator is given with transfer function W(s).

Then W(s) can be realized as follows:

$$\displaystyle\begin{array}{rcl} \dot{x}_{o}(t)& =& (A - LC)x_{o}(t) + Bu(t) + Ly{}\end{array}$$
(9.18)
$$\displaystyle\begin{array}{rcl} u(t)& =& -Jx_{o}(t) + v(t){}\end{array}$$
(9.19)
$$\displaystyle\begin{array}{rcl} o(t)& =& Cx_{o}(t) - y(t){}\end{array}$$
(9.20)
$$\displaystyle\begin{array}{rcl} \dot{z}& =& F_{T}z(t) + G_{T}o(t){}\end{array}$$
(9.21)
$$\displaystyle\begin{array}{rcl} v(t)& =& H_{T}z(t) + K_{T}o(t){}\end{array}$$
(9.22)

where J and L are matrices such that (ALC) and (ABJ) are Hurwitz, (F T , G T , H T , K T ) are suitable matrices (which depend on the plant matrices (A, B, C), as well as on W(s)), with F T Hurwitz. We let the reader note that x o (t) is the state estimate.

Conversely, given L and J such that (ALC) and (ABJ) are Hurwitz, for any choice of (F T , G T , H T , K T ) of suitable dimensions, with F T Hurwitz, the corresponding compensator is stabilizing.

This double implication is equivalent to saying that (9.18)–(9.22) parametrize all the stabilizing compensators for an LTI plant.

Proposition 9.40 ([ZDG96, SPS98]).

The transfer matrix W(s) is a stabilizing compensator for (A,B,C) if and only if it can be realized as in (9.18)–(9.22) , with some choice of the Youla–Kucera parameter (F T ,G T ,H T ,K T ).

Note that the realization of W(s) in (9.18)–(9.22) is non-minimal in general.

The sub-system (9.21)–(9.22) represents the Youla–Kucera (YK) parameter. The fundamental point is that the input of this system is \(o(t) = Cx_{o}(t) - y(t) = C(x_{o}(t) - x(t))\), which asymptotically converges to 0 since

$$\displaystyle{ \frac{d} {dt}(x_{o}(t) - x(t)) = (A - LC)(x_{o}(t) - x(t))}$$

is autonomous (the estimation error is unreachable from the input) and governed by the Hurwitz matrix A − LC. This in turn means that the output v(t) of the Youla–Kucera parameter is not fed back to its input o(t) through the plant, and therefore any stable choice of the YK parameter cannot destabilize the closed loop. Note, however, that the plant output y is fed back by the compensator and this feedback depends on the YK parameter.

Going back to the general case, assume now that a family W 1(s), W 2(s), …,W r (s), of stabilizing compensators is given: is it possible to switch among them arbitrarily while preserving closed-loop stability?

In this case Theorem 9.39 comes into play, providing a positive answer: yes, stability can be preserved if the realization of each of the stabilizing compensators is done in the right way. What does such a realization look like?

The solution of this problem is quite intuitive and boils down to the YK parametrization. Since any W i (s) corresponds to some YK parameter (F T (i), G T (i), H T (i), K T (i)), we can switch between compensators by fixing the matrices L and J and by switching just among the YK parameters. However, the fact that F T (i) is Hurwitz does not assure that switching between the YK parameters results in a stable behavior. Moreover, the F T (i) may be of different dimensions.

The problem of dimension is immediately solved by embedding the YK parameters in fictitiously augmented dynamics, in order to make them all of the same size.

For stability under switching, we need the following.

Lemma 9.41.

Given a stable square matrix F, there exists an invertible T such that \(\hat{F} = T^{-1}FT\) has P = I as a Lyapunov matrix.

Proof.

Since F is stable, the Lyapunov equation

$$\displaystyle{F^{T}P + PF = -I}$$

admits a solution P ≻ 0. Let \(T = P^{-1/2}\). Then, bearing in mind that T T PT = I,

$$\displaystyle{\hat{F}^{T}+\hat{F} = T^{T}F^{T}T^{-T}T^{T}PT+T^{T}PTT^{-1}FT = T^{T}(F^{T}P+PF)T = -T^{T}T < 0}$$

With the above in mind, the idea is that we may change the realizations of all the YK parameters in such a way that they share I as a common Lyapunov matrix. This will in turn

  • not change the compensator transfer functions;

  • assure that the YK parameter is stable under arbitrary switching.
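Lemma 9.41 is easy to verify numerically; the following sketch (assuming scipy is available and using an arbitrary test matrix of ours) performs the change of coordinates T = P^{-1/2}.

# Numerical sketch of Lemma 9.41: given a Hurwitz F, solve F^T P + P F = -I,
# set T = P^{-1/2} and check that Fhat = T^{-1} F T satisfies Fhat^T + Fhat < 0,
# i.e. the identity is a Lyapunov matrix for Fhat.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, sqrtm

F = np.array([[-1.0, 4.0], [0.0, -2.0]])            # an arbitrary Hurwitz matrix
P = solve_continuous_lyapunov(F.T, -np.eye(2))      # F^T P + P F = -I
T = np.linalg.inv(sqrtm(P).real)                    # T = P^{-1/2}
Fhat = np.linalg.inv(T) @ F @ T
print(np.linalg.eigvalsh(Fhat.T + Fhat))            # all eigenvalues are negative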

9.6.3 Switching compensators for switching plants

In this subsection, the case in which the compensator switching is subsequent to a plant switching is considered. More precisely, the problem is cast in the following setting: assume that the plant formed by the family of stabilizable LTI systems

$$\displaystyle{\begin{array}{rcl} \dot{x}(t)& =&A_{i}x(t) + B_{i}u(t) \\ y(t)& =&C_{i}x(t)\end{array} }$$

is subject to an arbitrary switching rule

$$\displaystyle{i = i(t) \in \mathcal{I} =\{ 1,2,\ldots,r\}}$$

and that the switching plant has to be controlled by means of a family of r stabilizing controllers, each (clearly) stabilizing the corresponding plant (see Fig. 9.23)

$$\displaystyle{\begin{array}{rcl} \dot{z}(t)& =&F_{i}z(t) + G_{i}y(t) \\ u(t)& =&H_{i}z(t) + K_{i}y(t)\end{array} }$$
Fig. 9.23

The switching control

To check whether such a family of controllers exists, a couple of assumptions are needed.

Assumption (Non-Zenoness).

The number of switching instants is finite on every finite interval.

Assumption (No delay).

There is no delay in the communication between the plant and the controller, which knows exactly the current output y(t) and configuration i(t).

With the above in mind, the two essential problems considered next are the following.

Problem 1.

Does there exist a family of matrices (F i , G i , H i , K i ), \(i \in \mathcal{I}\), such that the closed-loop system is switching stable?

Problem 2.

Given a set of compensators W i (s), each assuring Hurwitz stability for fixed i, does there exist a realization (F i , G i , H i , K i ) such that:

  1. 1.

    \(W_{i}(s) = H_{i}(sI - F_{i})^{-1}G_{i} + K_{i}\);

  2. 2.

    the closed-loop system is switching stable?

Clearly Problem 1 is preliminary to Problem 2. A weaker, but computationally tractable, version is that in which, rather than stability, only quadratic stability of the switching closed-loop system is requested. It will soon be shown that the solution to Problem 1 amounts to checking a set of necessary and sufficient conditions, whereas for Problem 2, assuming Problem 1 has a solution, the answer is always affirmative [BMM09].

As a first preliminary fact, it must be pointed out that, when dealing with the stability of switching systems, we need to talk about their realizations and not about their transfer functions. In fact, while for LTI systems the same transfer function can be realized in infinitely many (equivalent) ways, these realizations are not equivalent for switching systems.

Example 9.42.

Consider the plant

$$\displaystyle{P(s) = \frac{1} {s+\alpha }}$$

with α > 0 and let the compensator be of the form

$$\displaystyle{W_{i}(s) = \frac{k_{i}} {s+\beta },\ \ \ i = 1,2}$$

with β > 0. Using standard realization techniques, the two following closed-loop matrices

$$\displaystyle{A_{1} = \left [\begin{array}{cc} -\alpha &k_{i}\\ - 1 & -\beta \end{array} \right ]\ \ \ \mbox{ or}\ \ \ A_{2} = \left [\begin{array}{cc} -\alpha &\sqrt{k_{i}} \\ -\sqrt{k_{i}}& -\beta \end{array} \right ]}$$

can be obtained. It can be seen that the first is unstable under arbitrary switching (see Section 9.7.2), whereas the second is stable under arbitrary switching.

The following theorem provides a solution to Problem 1.

Theorem 9.43.

The following two statements are equivalent.

  1. i)

    There exists a linear switching-stabilizing compensator

  2. ii)

    The two equations

    $$\displaystyle\begin{array}{rcl} A_{i}X + B_{i}U_{i} = XP_{i}& & {}\end{array}$$
    (9.23)
    $$\displaystyle\begin{array}{rcl} RA_{i} + L_{i}C_{i} = Q_{i}R& & {}\end{array}$$
    (9.24)

    have a solution (P i ,Q i ,U i ,L i ,X,R), with

    $$\displaystyle{P_{i} \in \mathcal{H}_{1}\ \ \mbox{ and}\ \ Q_{i} \in \mathcal{H}_{\infty }}$$

    (resp. \(\|P_{i}\|_{1} < 1\) and \(\|Q_{i}\|_{\infty } < 1\) in the discrete-time case), and a full row-rank matrix X ∈I​Rn×μ and a full column-rank matrix R ∈I​Rν×n .

If the conditions of the theorem hold, then the problem can be solved as follows. Take M such that MR = I and V i  = ZP i , where Z is any complement of X, and

$$\displaystyle{\left [\begin{array}{cc} K_{i}&H_{i} \\ G_{i} & F_{i} \end{array} \right ] = \left [\begin{array}{c} U_{i} \\ V _{i} \end{array} \right ]\ \left [\begin{array}{c} X\\ Z \end{array} \right ]^{-1}}$$

A possible compensator is the following:

$$\displaystyle{\begin{array}{l} \mbox{ Estimated state feedback}\!:\ \ \left \{\begin{array}{rcl} \dot{z}(t)& =&F_{i}z(t) + G_{i}\hat{x}(t) \\ u(t)& =&H_{i}z(t) + K_{i}\hat{x}(t) + v(t) \\ v(t)& \equiv &0 \end{array} \right. \\ \\ \mbox{ Generalized state observer}\!:\ \ \left \{\begin{array}{rcl} \dot{w}(t)& =&Q_{i}w(t) - L_{i}y(t) + RB_{i}u(t) \\ \hat{x}(t)& =&Mw(t) \end{array} \right. \end{array}}$$

The previous compensator has a separation structure and it can be shown that \(\hat{x}(t) - x(t) \rightarrow 0\) and that the first part is a dynamic state-feedback compensator. The auxiliary signal v(t) = 0 is a dummy signal which will be used later.

We do not report a proof here (the interested reader is referred to [BMM09]), but we just point out that in the necessity part of the theorem we generalize the results in Subsection 4.5.6. More precisely, the equations are an extension of (4.40) and (4.53). Note that the construction is identical to that proposed in Section 7.4. This is to be expected, since we know that an LPV system whose matrices are inside a polytope is stable if and only if its corresponding switching system is stable.

If only the less stringent requirement of quadratic stability is imposed, then the following (tractable) result holds:

Theorem 9.44.

The following two statements are equivalent.

  1. i)

    There exists a family of linear quadratically stabilizing switching compensators.

  2. ii)
    $$\displaystyle\begin{array}{rcl} & PA_{i}^{T} + A_{i}P + B_{i}U_{i} + U_{i}^{T}B_{i}^{T} < 0& {}\\ & A_{i}^{T}Q + QA_{i} + Y _{i}C_{i} + C_{i}^{T}Y _{i}^{T} < 0& {}\\ \end{array}$$

    for some positive definite symmetric n × n matrices P and Q, and matrices U i I​Rm×n , Y i I​Rn×p .

A possible compensator is

$$\displaystyle{\begin{array}{rcl} \frac{d} {dt}\hat{x}(t)& =&(A_{i} + L_{i}C_{i} + B_{i}J_{i})\hat{x}(t) - L_{i}y(t) + B_{i}v(t) \\ u(t)& =&J_{i}\hat{x}(t) + v(t) \\ v(t)& =&0 \end{array} }$$

where

$$\displaystyle{J_{i} = U_{i}P^{-1},\ \ \ \ \ L_{ i} = Q^{-1}Y _{ i}}$$

In discrete-time the inequalities are

$$\displaystyle\begin{array}{rcl} & \left [\begin{array}{cc} P &\left (A_{i}P + B_{i}U_{i}\right )^{T} \\ A_{i}P + B_{i}U_{i}& P \end{array} \right ] > 0& {}\\ & \left [\begin{array}{cc} Q &\left (QA_{i} + Y _{i}C_{i}\right )^{T} \\ QA_{i} + Y _{i}C_{i}& Q \end{array} \right ] > 0& {}\\ \end{array}$$

The previous ones are standard quadratic stabilizability conditions which involve LMIs [BP94, AG95, BEGFB04].
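For the reader's convenience, the conditions of Theorem 9.44 can be set up, for instance, with cvxpy. This is only a sketch: the switching plant below is invented for illustration, the strict inequalities are approximated with a small margin eps, and an SDP-capable solver (such as the default SCS shipped with cvxpy) is assumed.

# Sketch of the quadratic stabilizability test of Theorem 9.44 (continuous time).
import numpy as np
import cvxpy as cp

A = [np.array([[0.0, 1.0], [-1.0, 0.5]]),
     np.array([[0.0, 1.0], [1.0, -0.5]])]
B = [np.array([[0.0], [1.0]])] * 2
C = [np.array([[1.0, 0.0]])] * 2
n, m, p = 2, 1, 1

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
U = [cp.Variable((m, n)) for _ in A]
Y = [cp.Variable((n, p)) for _ in A]

eps = 1e-3
cons = [P >> eps * np.eye(n), Q >> eps * np.eye(n)]
for Ai, Bi, Ci, Ui, Yi in zip(A, B, C, U, Y):
    cons.append(P @ Ai.T + Ai @ P + Bi @ Ui + Ui.T @ Bi.T << -eps * np.eye(n))
    cons.append(Ai.T @ Q + Q @ Ai + Yi @ Ci + Ci.T @ Yi.T << -eps * np.eye(n))

cp.Problem(cp.Minimize(0), cons).solve()
J = [Ui.value @ np.linalg.inv(P.value) for Ui in U]    # J_i = U_i P^{-1}
L = [np.linalg.inv(Q.value) @ Yi.value for Yi in Y]    # L_i = Q^{-1} Y_i
print("J gains:", J)
print("L gains:", L)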

To provide an answer to Problem 2, the signal v(t) comes into play. The first point is the following. Assume that in the previous machinery the signal v(t) is generated as the output of the following system:

$$\displaystyle{v(t) = T(C(\hat{x} - x))}$$

where T(⋅ ) is a “stable operator.” This in turn will change the compensator, but it will not destabilize the plant. Although in general any input–output stable operator would fit, we limit ourselves to linear finite-dimensional systems.

From Proposition 9.40 it is known that for any fixed mode there exists a Youla–Kucera parameter T i such that the resulting compensator has transfer function W i . The issue is only to realize the Youla–Kucera parameters in such a way that they are stable under switching as in Fig. 9.24.

Fig. 9.24

The observer-based compensator structure

Note that the figure includes, as a particular case, the case of a quadratically stabilizable plant for which R = I, \(Q = A + LC\), M = I, and the state feedback is static: \(u = Jx + v\).

Theorem 9.45.

If the (quadratic) stabilizability conditions in Theorem 9.43 (9.44) are satisfied, then, given any arbitrary family of transfer functions W i (s), \(i = 1,\ldots,r\) each stabilizing the i-th plant, there exists a switching compensator

$$\displaystyle{\begin{array}{rcl} \delta z(t)& =&F_{i}z(t) + G_{i}y(t) \\ u(t)& =&H_{i}z(t) + K_{i}y(t) \end{array} }$$

(δz is either \(\dot{z}\) or z(t + 1)) such that

  1. 1.

    \(H_{i}(sI - F_{i})^{-1}G_{i} + K_{i} = W_{i}(s)\)

  2. 2.

    the closed-loop system is switching stable.

We also point out the following aspect. Assume that there are a disturbance input ω(t) and a performance output ξ

$$\displaystyle{ \begin{array}{rcllc} \delta x(t)& =&A_{i}x(t) & + B_{i}u(t) & + B_{i}^{\omega }\omega (t) \\ y(t)& =&C_{i}x(t)& & + D_{i}^{y,\omega }\omega (t) \\ \xi (t)& =&E_{i}x(t)& + D_{i}^{\xi,u}u(t)& + D_{i}^{\xi,\omega }\omega (t)\end{array} }$$
(9.25)

Then the i-th input–output map is of the form

$$\displaystyle{\xi (s) = [M_{i}^{\xi,\omega }(s) + M_{ i}^{\xi,v}(s)T_{ i}(s)M_{i}^{o,\omega }(s)]\omega (s)}$$

which is amenable to optimization.

Example 9.46 (Networked control system (continued from Example 9.4)).

The system matrices of the extended switching system (9.5) for the unstable dynamic system \(P(s) = \frac{s+1} {s^{2}-10s}\), with sampling time T c  = 0.05 s and N max  = 5, are:

$$\displaystyle{A = \left [\begin{array}{cc} 1.649&0\\ 0.065 &1 \end{array} \right ],\ \ \ B = \left [\begin{array}{c} 0.130\\ 0.003 \end{array} \right ],\ \ \ C = \left [\begin{array}{cc} 0.5&0.5 \end{array} \right ]}$$

A family of quadratically stabilizing compensators is obtained by solving the conditions in Theorem 9.44, which provide the following feedback and observer gains (note that the first set of LMIs is indeed a single inequality, since the system update and input matrices do not switch)

$$\displaystyle{J = \left [\begin{array}{ccccccc} - 12.859& - 6.015& -.035& -.022& -.010& -.005& -.002 \end{array} \right ]}$$

and \(L = [L_{0}\ldots L_{N_{max}}]\),

$$\displaystyle{L = \left [\begin{array}{cccccc} - 3.04& - 5.02& - 8.25& - 13.82& - 22.79& - 27.55\\ - 0.28 & - 0.46 & - 0.76 & - 1.29 & - 2.13 & - 2.60 \\ - 1.00& - 1.66& - 2.72& - 4.56 & - 7.53 & - 9.11\\ - 0.60 & - 1.00 & - 1.64 & - 2.76 & - 4.55 & - 5.51 \\ - 0.36& - 0.60& - 0.99& - 1.65 & - 2.73 & - 3.31\\ - 0.21 & - 0.36 & - 0.59 & - 1.01 & - 1.63 & - 1.98 \\ - 0.13& - 0.21& - 0.35& - 0.58 & - 1.07 & - 1.19\\ \end{array} \right ]}$$

The closed-loop step responses y t and \(\hat{y}^{t}\) for two specific delay realizations are depicted in Fig. 9.25.

Fig. 9.25

Closed-loop step response for Example 9.46: y t solid, \(\hat{y}^{t}\) dashed

9.7 Special cases and examples

9.7.1 Relay systems

We now consider relay systems, which are nothing but a very special case of switching systems that can be tackled via a set-theoretic approach. Consider the scheme depicted in Figure 9.26, representing a system with relay feedback.

Fig. 9.26

The relay feedback loop

We assume that the control law is

$$\displaystyle{u = -sgn(y),}$$

thus normalizing the input amplitude to 1. It is well known that feedback systems of this type have several problems, for instance that of being potential generators of limit cycles. There is a case in which one can guarantee asymptotic stability and this is the case we are going to investigate next. The following proposition holds.

Proposition 9.47.

Assume that the n-th order SISO transfer function P(s) has n − 1 zeros z i with strictly negative real part (say it is minimum-phase with relative degree one) and that

$$\displaystyle{\lim _{s\rightarrow \infty }sP(s) > 0}$$

Then the relay loop of Figure 9.26 is locally stable.

Proof.

Since we are interested in local stability, it is assumed that r = 0. Let −β < 0 be greater than the largest real part of the transmission zeros. For any realization (A, B, C, 0) of P(s) it is possible to define the transformation matrix

$$\displaystyle{T = \left [\begin{array}{c} \left (\mbox{ null}\{B^{T}\}\right )^{T} \\ \frac{C} {CB} \end{array} \right ]^{-1}}$$

(null(M) denotes a basis matrix for the kernel of M) so that

$$\displaystyle{T^{-1}B = \left [\begin{array}{c} 0\\ \vdots \\ 1 \end{array} \right ],\ \ CT = \left [\begin{array}{ccc} 0&\cdots &CB \end{array} \right ]}$$

with CB > 0 in view of the condition on the limit and T −1 AT can be partitioned as

$$\displaystyle{T^{-1}AT = \left [\begin{array}{cc} F &G \\ H & J \end{array} \right ]}$$

where the eigenvalues of F (see Exercise 10 in Chapter 8) are the transmission zeros, thus they have real part smaller than −β. Assume for brevity CB = 1. The state representation of P(s) can then be written as

$$\displaystyle\begin{array}{rcl} \left [\begin{array}{c} \dot{z}(t)\\ \dot{y}(t) \end{array} \right ]& =& \left [\begin{array}{cc} F &G\\ H & J \end{array} \right ]\left [\begin{array}{c} z(t)\\ y(t) \end{array} \right ] + \left [\begin{array}{c} 0\\ 1 \end{array} \right ]u(t){}\end{array}$$
(9.26)
$$\displaystyle\begin{array}{rcl} y(t)& =& \left [\begin{array}{cc} 0&CB \end{array} \right ]\left [\begin{array}{c} z(t)\\ y(t) \end{array} \right ]{}\end{array}$$
(9.27)

where y is the output and the input is \(u = -sgn(y)\).

Since matrix F is asymptotically stable, if y(t) is bounded, so is z(t). Consider now a β-contractive C-set for the z-sub-system \(\dot{z} = Fz + G\tilde{y}\), where \(\tilde{y}\) is subject to \(\vert \tilde{y}(t)\vert \leq 1\) and is seen as a disturbance.

Let such a set be \(\mathcal{S}\) and let Ψ z (z) be its Minkowski functional. Assume \(\mathcal{S}\) is 0-symmetrical, so that Ψ z (z) is a norm (for instance, a quadratic norm \(\varPsi _{z}(z) = \sqrt{z^{T } Qz}\) with Q ≻ 0, associated with a contractive ellipsoid according to inequality (4.23)).

For brevity assume (although this is not necessary) that Ψ z (z) is smooth for z ≠ 0. Since \(\mathcal{S}\) is contractive (see Definition 4.15 with u = 0), for z on the boundary (Ψ z (z) = 1) one has that

$$\displaystyle{ D^{+}\varPsi _{ z}(z) = \nabla \varPsi _{z}(z)(Fz + G\tilde{y}) \leq -\beta, }$$
(9.28)

for all \(\vert \tilde{y}\vert \leq 1\). Since ∇Ψ z (z) = ∇Ψ z (ξ z) for any scaling factor ξ (see Exercise 14 in Chapter 4) and since any \(\hat{z}\) in \(\mathrm{I\!R}^{n-1}\) can be written as \(\hat{z} =\varPsi _{z}(\hat{z})z\) for some \(z \in \partial \mathcal{S}\), then by scaling (9.28) one gets

$$\displaystyle{ D^{+}\varPsi _{ z}(\hat{z}) = \nabla \varPsi _{z}(z)(F\varPsi _{z}(\hat{z})z + Gy) \leq -\beta \varPsi _{z}(\hat{z}),\ \ \mbox{ for}\ \ \vert y\vert \leq \varPsi _{z}(\hat{z}) }$$
(9.29)

say the scaled set \(\xi \mathcal{S}\) is β contractive if | y | ≤ ξ (see Exercise 15 in Chapter 4). Now, consider the second equation and define, as a first step, the quantity

$$\displaystyle{\mu =\max _{z\in \mathcal{S}}\vert Hz\vert,}$$

which is a bound for the influence of z on y since

$$\displaystyle{\vert Hz\vert \leq \mu \varPsi _{z}(z)}$$

Then, consider the candidate Lyapunov function (referred to the y-system)

$$\displaystyle{\varPsi _{y}(y) = \vert y\vert,}$$

which is the Minkowski function of the interval \(I_{1} = [-1,1]\) and whose gradient is ∇Ψ y (y) = sgn[y]. In the (z, y) space consider the C-set which is the Cartesian product of I 1 and \(\mathcal{S}\), \(\mathcal{P} =\{ (z,y): z \in \mathcal{S},\ \ y \in I_{1}\}\) (see Fig. 9.27).

Fig. 9.27

The set \(\mathcal{P}\)

The Minkowski functional of \(\mathcal{P}\) is

$$\displaystyle{\varPsi (z,y) =\max \{\varPsi _{z}(z),\varPsi _{y}(y)\}}$$

Now assume \((z,y) \in \varepsilon \mathcal{P}\) (namely \(\varPsi (z,y) \leq \varepsilon\)), where \(\varepsilon\) is chosen as

$$\displaystyle{\varepsilon \leq \frac{CB} {\mu +\vert J\vert +\beta }}$$

so that

$$\displaystyle{\left (\mu +\vert J\vert \right )\varepsilon \leq CB-\beta \varepsilon }$$

For any \((z,y) \in \varepsilon \mathcal{P}\), by considering the y derivative, one gets

$$\displaystyle\begin{array}{rcl} D^{+}\varPsi _{ y}(y)& =& D^{+}\vert y\vert = sgn(y)[Hz + Jy + CBu] {}\\ & \leq & \mu \varPsi _{z}(z) + \vert J\vert \varPsi _{y}(y) - CB \leq \mu \varepsilon +\vert J\vert \varepsilon - CB \leq -\beta \varepsilon, {}\\ \end{array}$$

so that, for \(\varPsi _{y}(y) \leq \varepsilon\),

$$\displaystyle{ D^{+}\varPsi _{ y}(y) \leq -\beta \varPsi _{y}(y) }$$
(9.30)

Therefore, for \(\varPsi (z,y) \leq \varepsilon\) both (9.29) and (9.30) hold, which in turn implies that Ψ(z(t), y(t)) cannot increase if \(\varPsi (z(t),y(t)) \leq \varepsilon\) or, in other words, \(\varepsilon \mathcal{P}\) is positively invariant. Since these inequalities hold inside \(\varepsilon \mathcal{P}\), y(t) and z(t) both converge to 0 with speed of convergence β.

It can be shown, by means of arguments similar to those used for the relay system, that “almost relay” systems, precisely those formed by a loop with the control

$$\displaystyle{u = -sat_{[-1,1]}(ky)}$$

are locally stable for k large enough.

Example 9.48.

Consider the unstable system

$$\displaystyle{P(s) = \frac{s + 1} {s^{2} - s - 1}}$$

represented by the equations

$$\displaystyle\begin{array}{rcl} \dot{z}(t)& =& -z(t) - y(t), {}\\ \dot{y}(t)& =& z(t) + 2y(t) + u(t) {}\\ \end{array}$$

Consider the first sub-system \(\dot{z} = -z -\tilde{ y}\), \(\vert \tilde{y}\vert \leq 1\), and β = 0.5, which is compatible with the fact that the zero is at − 1. Consider the interval [−ζ, ζ] as a candidate contractive set, whose Minkowski functional is | z | ∕ζ. The condition is that for z = ζ

$$\displaystyle{\frac{1} {\zeta } (-z -\tilde{ y}) \leq -\beta }$$

for all \(\vert \tilde{y}\vert \leq 1\), which is satisfied for \(\zeta \geq 1/(1-\beta ) = 2\). The opposite condition leads to the same conclusion, so we take ζ = 2. We now compute

$$\displaystyle{\mu =\max _{\vert z\vert \leq \zeta }\vert Hz\vert = 2}$$

(H = 1) and we can finally evaluate

$$\displaystyle{\varepsilon = \frac{1} {\mu +\vert J\vert +\beta } = \frac{2} {9}}$$

(J = 2 and CB = 1). The derived domain of attraction is the rectangle

$$\displaystyle{\left \{(z,y):\ \frac{\vert z\vert } {\zeta } \leq \varepsilon,\ \ \vert y\vert \leq \varepsilon \right \} = \left \{(z,y):\ \vert z\vert \leq \frac{4} {9},\ \ \vert y\vert \leq \frac{2} {9}\right \}}$$

The computed domain of attraction, the trajectories originating from its vertices, and some of the system trajectories originating outside are depicted in Fig. 9.28. It is apparent (and expected) that the actual domain of attraction is considerably larger than the computed one.

Fig. 9.28 The evaluated domain of attraction with some internal trajectories (plain lines) and some of the system trajectories originating outside (dotted curves)
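
As a quick numerical check, not part of the original construction, the following Python sketch integrates the relay loop of Example 9.48 assuming the relay control u = −sgn(y) of the preceding derivation; the Euler step, the horizon, and the choice of testing only the vertices of the rectangle are arbitrary.

```python
import numpy as np

# Relay loop of Example 9.48:  z' = -z - y,  y' = z + 2y + u,  assuming the
# relay control u = -sgn(y) of the preceding construction.  Step size, horizon
# and the choice of checking only the vertices are arbitrary.
def step(z, y, dt):
    u = -np.sign(y)
    return z + dt * (-z - y), y + dt * (z + 2.0 * y + u)

zeta, eps = 2.0, 2.0 / 9.0   # estimated domain: |z| <= zeta*eps = 4/9, |y| <= eps = 2/9
vertices = [(s1 * zeta * eps, s2 * eps) for s1 in (-1, 1) for s2 in (-1, 1)]

dt, T, tol = 1e-4, 20.0, 1e-3
for z0, y0 in vertices:
    z, y, inside = z0, y0, True
    for _ in range(int(T / dt)):
        z, y = step(z, y, dt)
        inside = inside and abs(z) <= zeta * eps + tol and abs(y) <= eps + tol
    # trajectories remain in the rectangle and end up chattering near the origin
    print(f"start ({z0:+.3f}, {y0:+.3f}): stays inside = {inside}, final norm = {np.hypot(z, y):.1e}")
```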

The previously presented results can be happily (and easily) extended to the case of actuators with bounded rate. Let us now reconsider the rate-bounding operator introduced in Subsection 8.4.1. In particular, consider a feedback loop which includes the rate-bounding operator of Figure 8.16. Here we consider the extreme case in which the loop includes the following operator

$$\displaystyle{\dot{u}(t) =\bar{ v}\ sgn[\omega -u]}$$

as in Fig. 9.29. Note that this block limits the rate of variation of u to \(\bar{v}\).

Fig. 9.29 The loop with a rate-bounding operator

It is known that rate bounds can destroy the global stability of a system. However, by means of the previous results it is possible to show that the local stability achieved by a nominal control is preserved in the presence of rate bounds, as shown in the following proposition.

Proposition 9.49.

Given the n-dimensional SISO transfer function F(s), if

$$\displaystyle{G(s) = \frac{F(s)} {1 + F(s)}}$$

is asymptotically stable, then the loop represented in Fig.  9.29 is locally stable.

Proof.

By “pulling out the deltas” (see [ZDG96]), say by rearranging the transfer function so as to write the linear relation between the input and the output of the nonlinear block (which, in this case, plays the role of the delta), it can be seen that the loop in Fig. 9.29 is equivalent to the one in Fig. 9.30,

Fig. 9.30 Equivalent loop with rate-bounding operator

which is in turn equivalent to that achieved by feeding back the (v-to-z) transfer function

$$\displaystyle{P(s) = \frac{1 + F(s)} {s} }$$

with the sgn block. It is apparent that the zeros of P(s) are the poles of G(s) (say, the poles of the closed-loop system without saturation), which are asymptotically stable by hypothesis, and that the relative degree of P(s) is exactly one, since there are n zeros (the poles of G(s)) and n + 1 poles (the poles of F(s) plus the one at the origin). Thus, by Proposition 9.47, the system is locally stable.

It is worth stressing that the strict equivalence shown above between the rate-bounded problem and the simply saturated one allows one to apply the construction of Proposition 9.47 to find a proper domain of attraction.
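
The key step of the proof, namely that the zeros of P(s) = (1 + F(s))∕s are the poles of G(s) and that P(s) has relative degree one, can be checked numerically. The following sketch assumes NumPy and uses an arbitrary illustrative transfer function F(s) = (3s + 2)∕(s² + s + 1), which is not taken from the text.

```python
import numpy as np

# Zeros of P(s) = (1 + F(s))/s versus poles of G(s) = F(s)/(1 + F(s)).
# F(s) = (3s + 2)/(s^2 + s + 1) is an arbitrary illustrative choice.
num_F = np.array([3.0, 2.0])            # numerator coefficients of F(s)
den_F = np.array([1.0, 1.0, 1.0])       # denominator coefficients of F(s)

# den_F + num_F (padded): numerator of 1 + F(s), i.e. denominator of G(s)
cl_poly = den_F + np.pad(num_F, (den_F.size - num_F.size, 0))

poles_G = np.roots(cl_poly)                     # poles of the closed loop G(s)
zeros_P = np.roots(cl_poly)                     # zeros of P(s) = (1 + F(s))/s
poles_P = np.roots(np.append(den_F, 0.0))       # poles of P(s): den_F(s) * s

print("poles of G(s):", np.sort_complex(poles_G))
print("zeros of P(s):", np.sort_complex(zeros_P))
print("poles of P(s):", np.sort_complex(poles_P))
print("relative degree of P(s):", poles_P.size - zeros_P.size)   # = 1
```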

9.7.2 Planar systems

Planar systems, namely systems whose state space is two-dimensional, have several nice properties which are worth a brief presentation. The first obvious fact is that, if we limit our attention to second order systems, the major issue of the computational complexity of the considered methods almost disappears. On the other hand, limiting the investigation to this category is clearly a restriction. Still, we can claim that many real-world problems are naturally represented by second order systems. Furthermore, there are many systems of higher order which can be successfully approximated by second order systems. This is, for example, the case of the magnetic levitator in Fig. 2.1, which would naturally be represented by a third order system, since there is an extra equation due to the current dynamics

$$\displaystyle{L\dot{i}(t) = -Ri(t) + V (t)}$$

where V is the voltage. However, since the ratio R∕L is normally quite large, this equation can be replaced by the static equation Ri(t) = V (t) without compromising the model (we are basically neglecting the fast current dynamics).
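
A minimal simulation of this time-scale argument is sketched below; the values of R, L and the voltage profile are arbitrary illustrative choices (not the parameters of the levitator), selected so that R∕L is large.

```python
import numpy as np

# Quasi-static approximation of the current dynamics L di/dt = -R i + V(t).
# R, L and the voltage profile are illustrative values only, chosen so that
# the electrical time constant L/R is much faster than the signal V(t).
R, L = 10.0, 0.01                       # ohm, henry -> time constant L/R = 1 ms
V = lambda t: 5.0 * (1.0 + 0.2 * np.sin(2.0 * np.pi * t))   # slowly varying voltage

dt, T = 1e-5, 1.0
i = V(0.0) / R                          # start on the quasi-static manifold
max_err = 0.0
for k in range(int(T / dt)):
    t = k * dt
    i += dt * (-R * i + V(t)) / L       # full (fast) current dynamics
    max_err = max(max_err, abs(i - V(t + dt) / R))

print(f"time constant L/R = {L / R * 1e3:.1f} ms")
print(f"max deviation from the static law i = V/R: {max_err:.2e} A (i is about 0.5 A)")
```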

Let us point out the main properties of second order systems. Consider the second order system \(\dot{x}(t) = f(x(t))\), assume f Lipschitz, so that the problem is well-posed, and let \(\bar{x}(t)\) be any trajectory corresponding to a solution of the system. Since two trajectories cannot intersect, \(\bar{x}(t)\) forms a barrier to any other system trajectory. This can be simply seen as follows. Consider any closed ball \(\mathcal{B}\) and assume that \(\bar{x}(t)\) crosses \(\mathcal{B}\) from side to side, in such a way that \(\bar{x}(t_{1}) \in \partial \mathcal{B}\), \(\bar{x}(t_{2}) \in \partial \mathcal{B}\) and \(\bar{x}(t) \in int\{\mathcal{B}\}\) for \(t_{1} < t < t_{2}\). The interior of the ball is divided into three subsets: \(\mathcal{B}_{1}\), \(\mathcal{B}_{2}\) and \(\bar{x}(t)\bigcap int\{\mathcal{B}\}\). Then no trajectory of the system originating in \(\mathcal{B}_{1}\) can reach \(\mathcal{B}_{2}\) without leaving \(\mathcal{B}\).

Planar systems have several important properties and, indeed, many books dealing with nonlinear differential equations have a section devoted to them. Here we propose some case studies (basically exercises) which we think are meaningful.

Example 9.50 (Stability of switching and switched systems).

Consider the system \(\dot{x} = A(p)x\), where

$$\displaystyle{A = \left [\begin{array}{cc} \alpha &1\\ - p(t)^{2 } & \alpha \end{array} \right ],\ \ \ p \in \{ p^{-},p^{+}\}}$$

and let us consider the problems of determining:

  • the supremum value of α (necessarily negative) for which the switching (p uncontrolled) system is stable;

  • the supremum value of α (possibly positive) for which the switched (p controlled) system is stabilizable.

Problems of this kind can be solved by generating some extremal trajectories in view of their planar nature (see [Bos02]) and are often reported in the literature as simple examples of the following facts:

  • Hurwitz stability of all elements in the convex hull of a family of matrices does not imply switching stability;

  • in some particular cases it is possible to stabilize a switched system (i.e., by choosing the switching rule) whose generating matrices are all unstable and do not admit a stable convex combination;

  • a linear switched system which is stabilizable via feedback is not necessarily consistently stabilizable.

As a first step we notice that, for fixed p, the solution has the following form

$$\displaystyle{\left [\begin{array}{c} x_{1}(t) \\ x_{2}(t) \end{array} \right ] = e^{\alpha (t-t_{0})}\left [\begin{array}{cc} \ \cos (p(t - t_{0})) &\ \frac{1} {p}\sin (p(t - t_{0})) \\ - p\sin (p(t - t_{0}))& \ \cos (p(t - t_{0})) \end{array} \right ]\ \left [\begin{array}{c} x_{1}(t_{0}) \\ x_{2}(t_{0}) \end{array} \right ]}$$
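
As a check of the closed-form expression above, the following sketch compares it with the matrix exponential computed by SciPy; the values of α, p, and t are arbitrary test choices.

```python
import numpy as np
from scipy.linalg import expm

# Closed-form solution of Example 9.50 versus the matrix exponential.
# alpha, p and t are arbitrary test values.
alpha, p, t = -0.3, 2.0, 1.7
A = np.array([[alpha, 1.0], [-p**2, alpha]])

closed_form = np.exp(alpha * t) * np.array([
    [np.cos(p * t),       np.sin(p * t) / p],
    [-p * np.sin(p * t),  np.cos(p * t)],
])
print(np.allclose(expm(A * t), closed_form))   # True
```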

To analyze the stability of the switching system, let us start from the point \(x(0) = [1\ \ 0]^{T}\) and let us assume α = 0. It is apparent that, with both values \(p^{-}\) and \(p^{+}\), the corresponding trajectories reach the negative \(x_{2}\) axis in time \(\pi /(2p)\), but if we choose the upper value \(p^{+}\) the trajectory is below (and hence external to) that corresponding to the lower value. When the negative \(x_{2}\) axis is reached, the situation is exactly the opposite. Then, by taking the worst-case trajectory, generated by means of the following strategy

$$\displaystyle{p = \left \{\begin{array}{lll} p^{+} & \ \mbox{ if}\ &x_{ 1}x_{2} < 0 \\ p^{-}&\ \mbox{ if}\ &x_{1}x_{2} > 0 \end{array} \right.}$$

we achieve a trajectory which is the most external (see Fig. 9.31 left).

Fig. 9.31 The worst-case and the best-case strategy for α = 0

If we consider the time instants in which the axes are crossed, since we keep p constant inside any sector, we get the following discrete relation

$$\displaystyle{\left [\begin{array}{c} x_{1}(t_{k}) \\ x_{2}(t_{k}) \end{array} \right ] = \left [\begin{array}{cc} 0 &\ \frac{1} {p} \\ - p& 0 \end{array} \right ]\ \left [\begin{array}{c} x_{1}(t_{k-1}) \\ x_{2}(t_{k-1}) \end{array} \right ]}$$

with \(p \in \{ p^{-},p^{+}\}\).

Take the initial vector \([1\ \ 0]^{T}\). If we consider the worst-case trajectory, we encircle the origin and we reach the positive \(x_{1}\) axis again, at time \(t = 2\pi /p\), where \(2/p = 1/p^{+} + 1/p^{-}\) (the trajectory spends a time \(\pi /(2p^{+})\) in two of the quadrants and \(\pi /(2p^{-})\) in the other two), in a point \([\xi \ \ 0]^{T}\) where

$$\displaystyle{\xi = \frac{(p^{+})^{2}} {(p^{-})^{2}} \geq 1}$$

(the equality holds only if \(p^{-} = p^{+}\)).

By taking into account α again, we can find the largest value of α < 0 which assures stability. This limit value is such that

$$\displaystyle{e^{\alpha 2\pi /p}\xi = 1}$$

namely

$$\displaystyle{\alpha = -\frac{p} {2\pi }\log (\xi )}$$

which is negative. To prove this, one has to take into account the fact that the solution with a generic value α is \(x_{\alpha }(t) = e^{\alpha t}x_{0}(t)\), where \(x_{0}(t)\) is that achieved with α = 0.
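
The worst-case analysis just described can be reproduced numerically by composing the four quarter-turn maps. The sketch below, for the arbitrary illustrative values \(p^{-} = 1\) and \(p^{+} = 2\), computes ξ, the duration T of one worst-case revolution, and the critical value \(\alpha = -\log (\xi )/T\) (which coincides with \(-(p/2\pi )\log (\xi )\) for \(2/p = 1/p^{+} + 1/p^{-}\)), and verifies that at this value the return map has unit gain.

```python
import numpy as np

# Worst-case (switching) analysis of Example 9.50 for the illustrative values
# p_minus = 1, p_plus = 2.  One worst-case revolution is made of four quarter
# turns, alternating p_plus and p_minus.
p_minus, p_plus = 1.0, 2.0

def quarter_map(p, alpha):
    # state map over one quadrant: duration pi/(2p), rotation part [[0, 1/p], [-p, 0]]
    return np.exp(alpha * np.pi / (2.0 * p)) * np.array([[0.0, 1.0 / p], [-p, 0.0]])

xi = (p_plus / p_minus) ** 2                 # expansion over one turn for alpha = 0
T = np.pi / p_plus + np.pi / p_minus         # duration of one worst-case revolution
alpha_crit = -np.log(xi) / T                 # solves e^{alpha T} * xi = 1

M = np.eye(2)
for p in (p_plus, p_minus, p_plus, p_minus): # worst-case quadrant sequence from [1, 0]
    M = quarter_map(p, alpha_crit) @ M
print("xi =", xi, " alpha_crit =", alpha_crit)
print("return point from [1, 0]:", M @ np.array([1.0, 0.0]))   # ~ [1, 0]: limit case
```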

To solve the “switched” problem we just have to proceed in the same way and consider the “best trajectory,” which is clearly given by (see Fig. 9.31 right)

$$\displaystyle{p = \left \{\begin{array}{lll} p^{+} & \ \mbox{ if}\ &x_{ 1}x_{2} > 0, \\ p^{-}&\ \mbox{ if}\ &x_{1}x_{2} < 0 \end{array} \right.}$$

The computation is identical, with the difference that the limit value of α turns out to be the opposite, namely \(\alpha = \frac{p} {2\pi }\log (\xi ) > 0\): the switched system is stabilizable for every α below this positive threshold.

This system with α > 0 is an example of a system which can be stabilized via state feedback, but which is not consistently stabilizable. Indeed, one can show that, no matter how a fixed function p(t) with \(p^{-}\leq p(t) \leq p^{+}\) is chosen,Footnote 8 the system trajectory will diverge for some initial conditions.

To this aim, let us consider the case of a linear time-varying system

$$\displaystyle{\dot{x}(t) = A(t)x(t)}$$

and denote by \(\varPhi (t,t_{0})\) the state transition matrix for A(t):

$$\displaystyle{ \frac{d} {dt}\varPhi (t,t_{0}) = A(t)\varPhi (t,t_{0}),\ \ \ \ \ \varPhi (t_{0},t_{0}) = I.}$$

Then, by the Jacobi-Liouville formula,

$$\displaystyle{\det (\varPhi (t,t_{0})) =\exp \left (\int _{t_{0}}^{t}\ \ \mbox{ trace}(A(\tau ))d\tau \right ).}$$

Indeed

$$\displaystyle{ \frac{d} {dt}(\log \ \det (\varPhi (t,t_{0}))) = \mbox{ trace}(\varPhi (t,t_{0})^{-1}A(t)\varPhi (t,t_{ 0})) = \mbox{ trace}(A(t)).}$$

If we look back at the example we have that trace(A(p)) = 2α, hence

$$\displaystyle{\det (\varPhi (t,t_{0})) =\exp \left (\int _{t_{0}}^{t}\ \ \mbox{ trace}(A(\tau ))d\tau \right ) = e^{2\alpha (t-t_{0})}}$$

and thus, since α > 0, \(\det (\varPhi (t,t_{0}))\) grows arbitrarily large. Therefore some elements of \(\varPhi (t,t_{0})\) grow arbitrarily large as \(t \rightarrow +\infty\), so that the corresponding trajectories diverge for suitable initial conditions.
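
The Liouville argument can also be verified numerically: integrating \(\dot{\varPhi } = A(t)\varPhi\) for an arbitrarily chosen piecewise-constant p(t) (the switching pattern below is purely illustrative), the determinant of Φ grows as \(e^{2\alpha (t-t_{0})}\) regardless of the switching signal.

```python
import numpy as np

# Liouville check for Example 9.50: det(Phi) depends only on trace(A) = 2*alpha,
# not on the switching pattern of p(t).  The pattern below is purely illustrative.
alpha, p_minus, p_plus = 0.1, 1.0, 2.0

def A(t):
    p = p_plus if int(t) % 2 == 0 else p_minus   # switch p every time unit
    return np.array([[alpha, 1.0], [-p**2, alpha]])

dt, T = 1e-4, 5.0
Phi = np.eye(2)
for k in range(int(T / dt)):
    Phi = Phi + dt * A(k * dt) @ Phi             # explicit Euler on Phi' = A(t) Phi
print("det Phi        =", np.linalg.det(Phi))
print("exp(2 alpha T) =", np.exp(2.0 * alpha * T))   # the two values nearly coincide
```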

9.8 Exercises

  1.

    Find the analytic expression of the function in Figure 9.1.

  2.

    Prove the stability of the system proposed in Example 7.16 for | δ | ≤ ρ < 1, by using the technique of extremal trajectories adopted in Example 9.50. Hint: the extremal trajectories are as in Fig. 9.32.

    Fig. 9.32 Extremal trajectories

  3.

    Prove that the set \(\mathcal{P}(F)\bigcap \mathrm{I\!R}^{+}\), \(F \geq 0\), is bounded if and only if for any j there exists k such that \(F_{kj} > 0\).

  4.

    Prove Proposition 9.9. Hint: Consider P = I as a Lyapunov matrix. Then, if all the \(A_{q}\) are symmetric, the Lyapunov inequality which assures quadratic stability is

    $$\displaystyle{A_{q}^{T} + A_{ q} = 2A_{q} < 0,\ \ \ \mbox{ for all}\ \ q,}$$

    which is assured if the \(A_{q}\) are Hurwitz ….

  5.

    Let A be Metzler and not Hurwitz. Then \(A +\varepsilon O\) (\(O_{ij} = 1\)) is non-Hurwitz as well for \(\varepsilon > 0\). Provide a proof.

  6.

    For discrete-time switched systems, show that the existence of a stable matrix in the convex hull is not sufficient for stabilizability. (Hint: try in dimension one).

  7.

    For discrete-time switched systems, show that the existence of a stable matrix in the convex hull is not necessary for stabilizability. Find two non-negative matrices of order 2, \(A_{1}\) and \(A_{2}\), such that no Schur convex combination exists, but the corresponding switched system is stabilizable. (Hint: try two diagonal matrices).

  8.

    Show that, for discrete-time positive switched systems, the existence of a Schur matrix in the convex hull is sufficient for stabilizability. (Hint. Assume that the Schur matrix in the convex hull is irreducible and take the left Frobenius eigenvector \(z^{T} > 0\) and the copositive function \(z^{T}x\) …).

  9.

    Paradox of the 0 transfer functions. Let \(A_{i}\), B, C be a family of systems which are Hurwitz for fixed i, but not switching stable. Assume that each of the systems is stabilizable, for instance quadratically. By Theorem 9.45, we would have that we can choose compensator transfer functions \(W_{i}(s) \equiv 0\), which can be implemented so that stability is assured under switching. Is this impossible, or not? (see [BMM09]).

  10.

    Show that any minimum-phase system of relative degree 1 with transfer function P(s) can be written in the form (9.26)–(9.27), where the eigenvalues of F coincide with the zeros of P(s).