
1 Introduction

The problem of optimal process control when some components of the state vector are not measurable has undoubtedly motivated the first works on observers. Observers allow a state estimation model to be built from the accessible variables of the system, such as its inputs and outputs.

In the deterministic case, this model is known as a state observer [1, 2] and in the case of a stochastic system, this model is called a filter [3,4,5].

This state estimation uses the measured outputs of the system, its inputs and its model. When a system is completely observable, the state reconstruction can be performed either by a full order observer (the order of the observer is the same as the one of the system), or by a reduced order observer (the order of the observer is smaller than the one of the system).

The asymptotic convergence of the state estimation error to zero requires a very precise determination of the observer matrices. Raymond [6] has shown that a small error on the parameters of the system matrices can generate a large reconstruction error (obtained by comparing the estimates to the measured values). Several authors have presented state estimation techniques based on the design of proportional and integral action observers for uncertain linear systems [7] and singular systems [8, 9]. In the presence of unknown inputs and sensor faults, several state estimation techniques exist; they will be discussed in this chapter.

2 Unknown Input Observer

A physical process is often subject to disturbances originating from environmental noise, measurement uncertainties, and sensor or actuator faults; these disturbances adversely affect the normal behavior of the process, and their estimation can be used to design a control system able to minimize their effects. Disturbances are called unknown inputs when they affect the process input, and their presence can make it difficult to estimate the system state.

Several works have been devoted to the estimation of the state and the output in the presence of unknown inputs; they can be grouped into two categories. The first assumes a priori knowledge of information about these unmeasurable inputs: in particular, Johnson [10] has proposed a polynomial approach and Meditch [11] has suggested approximating the unknown inputs by the response of a known dynamic system. The second category proceeds either by estimating the unknown input [12], or by its complete elimination from the system equations [13, 14].

Among the techniques that do not require the elimination of unknown inputs, several authors have proposed observer design methods capable of fully reconstructing the state of a linear system in the presence of unknown inputs [15, 16]; Kobayashi [17], Lyubchik [18] and Liu [19] have used a model inversion method for state estimation.

Besides, among the techniques that eliminate the unknown inputs, Kudva [20] studied, for linear systems, the existence conditions of the unknown input observer based on the generalized matrix inverse technique. Guan has proceeded to the elimination of the unknown inputs from the state equations for continuous linear systems [21]. Several other variants exist, but the majority of them have been developed for linear systems.

Koenig [8] has presented a simple method to design a proportional and integral action observer for singular systems with unknown inputs. Sufficient conditions for the existence of this observer have been established.

Reduced order observers have been considered by several authors in recent years [22,23,24]. However, Yang and Wilde [22] have demonstrated that the full order unknown input observer can have a faster convergence speed than the reduced order observer.

The use of unknown input observers for fault diagnosis and process monitoring has also attracted a lot of attention [13, 24,25,26] and [27]. Dassanayake [13] has considered an observer, obtained by eliminating the unknown inputs from the state equations, able to detect and isolate several sensor faults in the presence of unknown inputs on a turbojet engine.

2.1 State Reconstruction by Eliminating Unknown Inputs

The reconstruction of the state of a linear dynamical system in which several inputs are not measurable is of great interest in practice. In such circumstances, a conventional observer, which requires knowledge of all the inputs, cannot be used directly. The Unknown Input Observer (UIO) has been developed to estimate the system state despite the existence of unknown inputs or disturbances, by eliminating them from the state equations. This type of observer has attracted the attention of many researchers [10, 16, 19, 28, 29].

In this section, we show that the convergence conditions of an unknown input observer are solutions of bilinear matrix inequalities (BMI) which can be linearized by different techniques to obtain linear matrix inequalities (LMI).

2.2 Reconstruction Principle

Consider the linear dynamic system with unknown inputs, described by the following equations :

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{x}(t)= A x(t)+ B u(t) + R \bar{u}(t) \\ y(t)=C x(t) \end{array} \right. \end{aligned}$$
(1)

where \(x(t) \in \mathbb {R}^{n} \) is the state vector, \( u(t) \in \mathbb {R}^{m} \) is the vector of known inputs, \( \bar{u}(t) \in \mathbb {R}^{q}\), \(q < n \) is the vector of unknown inputs, \( y(t) \in \mathbb {R}^{p} \) represents the vector of measurable outputs. \(A \in \mathbb {R}^{n\times n}\) is the state matrix of the linear system, \(B \in \mathbb {R}^{n\times m}\) is the input matrix, \(R \in \mathbb {R}^{n\times q} \) is the influence matrix of the unknown inputs and \( C \in \mathbb {R}^{p \times n} \) is the output matrix.

We assume that the matrix R has full column rank and that the pair (A, C) is observable. The objective is the complete estimation of the state vector despite the presence of the unknown inputs \(\bar{u}(t)\). Thus, consider the full order observer [30]:

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{z}(t)= N z(t)+ G u(t) + L y(t) \\ \hat{x}(t)= z(t) - E y(t) \end{array} \right. \end{aligned}$$
(2)

where \(z(t) \in \mathbb {R}^{n} \) is the observer state vector and \( \hat{x}(t) \in \mathbb {R}^{n} \) is the estimate of the state vector x(t). In order to guarantee this estimation, \( \hat{x}(t)\) must asymptotically approach x(t), that is, the state estimation error

$$\begin{aligned} e(t)=x(t)-\hat{x}(t) \end{aligned}$$
(3)

must tend to zero asymptotically. The dynamics of this error is written as follows:

$$\begin{aligned} \dot{e}(t)= & {} \dot{x}(t)-\dot{z}(t)+E C \dot{x}(t)\\ \nonumber= & {} (I + EC) \dot{x}(t)- \dot{z}(t) \\ \nonumber= & {} (I + EC) (A x(t)+ B u(t)+ R \bar{u}(t))-(N z(t)+ G u(t)+ L y(t)) \\ \nonumber= & {} (I + EC) (A x(t)+ B u(t)+ R \bar{u}(t))-(N \hat{x}(t)+ G u(t)+ (L+NE)C x(t)) \end{aligned}$$
(4)

Let us set \(P = I + EC\); then we obtain:

$$\begin{aligned} \dot{e}(t)=N e(t)+ (PB-G)u(t)+P R \bar{u}(t)+(PA-NP-LC)x(t) \end{aligned}$$
(5)

The state estimation error converges asymptotically to zero if and only if:

$$\begin{aligned} LC=PA-NP\end{aligned}$$
(6a)
$$\begin{aligned} G=P B \end{aligned}$$
(6b)
$$\begin{aligned} PR=0 \end{aligned}$$
(6c)
$$\begin{aligned} N ~~is~~ stable \end{aligned}$$
(6d)

The numerical solution of the system of equations (6) relies on the computation of the pseudo-inverse of the matrix CR; this is possible if the matrix CR has full column rank [31].

$$\begin{aligned} E=- R((CR)^T (CR))^{-1}(CR)^T\end{aligned}$$
(7a)
$$\begin{aligned} P=I- R((CR)^T (CR))^{-1}(CR)^T C \end{aligned}$$
(7b)
$$\begin{aligned} G=PB \end{aligned}$$
(7c)
$$\begin{aligned} N=PA -KC \end{aligned}$$
(7d)
$$\begin{aligned} L=K -NE \end{aligned}$$
(7e)
$$\begin{aligned} N ~~is~~ stable \end{aligned}$$
(7f)

Thus, if the system of equations (7) is satisfied, the dynamics of the state estimation error reduces to:

$$\begin{aligned} \dot{e}(t)=N e(t) \end{aligned}$$
(8)

Given the properties of N, the state estimation error indeed converges asymptotically to zero.
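As a numerical illustration, the construction (7) can be coded in a few lines. The matrices below, and the gain K, are illustrative choices, not taken from a specific application; E is written with the left pseudo-inverse of CR, which enforces \(PR = 0\) whenever CR has full column rank.

```python
import numpy as np

# Illustrative system (1): n = 3 states, p = 2 outputs, q = 1 unknown input.
A = np.array([[-2.0, 1.0, 0.0],
              [ 0.0, -3.0, 1.0],
              [ 1.0, 0.0, -4.0]])
B = np.array([[1.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
R = np.array([[0.0], [1.0], [0.0]])      # CR has full column rank

CR = C @ R
pinv_CR = np.linalg.inv(CR.T @ CR) @ CR.T        # left pseudo-inverse of CR
E = -R @ pinv_CR                                  # E, as in (7a)
P = np.eye(3) + E @ C                             # P = I + EC, equals (7b)
G = P @ B                                         # (7c)
K = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 0.0]])  # chosen so that N is stable
N = P @ A - K @ C                                 # (7d)
L = K - N @ E                                     # (7e)

print(np.allclose(P @ R, 0))                      # True: unknown input decoupled
print(np.linalg.eigvals(N).real.max() < 0)        # True: (7f) holds
```

With these values N is lower triangular with eigenvalues \(-1\), \(-2\) and \(-4\), so the error dynamics (8) is stable.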

2.3 Convergence Conditions of the Observer

In this section, we develop sufficient conditions for the asymptotic convergence of the state estimation error to zero. According to (8), this convergence is guaranteed if there exists a symmetric and positive definite matrix X, such that

$$\begin{aligned} N^T X + X N < 0 \end{aligned}$$
(9)

Since \(N = PA - KC\), the inequality (9) becomes :

$$\begin{aligned} (PA - KC)^T X + X(PA - KC) < 0 \end{aligned}$$
(10)

Unfortunately, the inequality (10) has the disadvantage of being nonlinear (bilinear) with respect to the variables K and X. Two resolution methods can be used:

  • Linearization with respect to the variables K and X,

  • Change of variables.

2.4 Resolution Methods

Several methods have been proposed to solve nonlinear matrix inequalities, and in particular bilinear ones [32].

2.5 Linearization with Respect to Variables

We can use a “local” method based on the linearization of the inequalities with respect to the variables K and X around (well chosen) initial values \(K_0\) and \(X_0\). We define:

$$\begin{aligned} K = K_0 + \Delta K ~~~\text {and} ~~~ X = X_0 + \Delta X \end{aligned}$$
(11)

From the inequality (10), we obtain :

$$\begin{aligned} \left\{ \begin{array}{ll} (PA - (K_0+ \Delta K) C)^T(X_0 +\Delta X)+ (X_0 +\Delta X)(PA - (K_0+ \Delta K) C)<0\\ X_0 +\Delta X>0 \end{array} \right. \end{aligned}$$
(12)

Ignoring the second order terms of the inequality (12), we obtain:

$$\begin{aligned} \left\{ \begin{array}{ll} (PA - K_0 C)^T \Delta X +\Delta X ( PA- K_0 C) - C^T \Delta K^T X_0 - X_0 \Delta K C +\\ (PA - K_0 C)^T X_0 + X_0 (PA - K_0 C) <0\\ X_0 +\Delta X>0 \end{array} \right. \end{aligned}$$
(13)

The system (13) is then an LMI (linear matrix inequality) problem and its solution with respect to \(\Delta K\) and \(\Delta X\) is standard [33]. Note that the choice of the initial values \(K_0\) and \(X_0\) remains the main drawback of this method; moreover, convergence to a solution is not always guaranteed. From a practical point of view, one may have to examine various choices of initial values in order to obtain a solution.

Remark 1

The LMI system (13) is valid only in the neighborhood of \(K_0\) and \(X_0\). To improve the resolution, we therefore propose the following additional constraints, which limit the variations of the matrices \(\Delta K\) and \(\Delta X\):

$$\begin{aligned} \left\{ \begin{array}{ll} \Vert \Delta K\Vert< \epsilon \Vert K_0\Vert ,\\ \Vert \Delta X\Vert< \epsilon \Vert X_0\Vert ~~~~~\text {with} ~~~0 <\epsilon \ll 1. \end{array} \right. \end{aligned}$$
(14)

The LMI formulation of these constraints (14) is described by the following matrix inequalities:

$$\begin{aligned} \left[ \begin{array}{cc} \epsilon \Vert X_0\Vert I_{n} &{} \Delta X\\ \Delta X &{} \epsilon \Vert X_0\Vert I_{n} \end{array} \right]>0, \nonumber ~~~~ \left[ \begin{array}{cc} \epsilon \Vert K_0\Vert I_{n} &{} \Delta K\\ \Delta K^T &{} \epsilon \Vert K_0\Vert I_{p} \end{array} \right] >0.\\ \end{aligned}$$
(15)

If the LMI systems (13) and (15) are feasible, then the observer (2) asymptotically estimates the state of the linear system with unknown inputs (1).

2.5.1 Change of Variables

To overcome the drawbacks of the previous method, an approach based on a change of variables is preferable. To this end, consider the following change of variables:

$$\begin{aligned} W=X K \end{aligned}$$
(16)

The inequality obtained after this change of variables can be written as follows:

$$\begin{aligned} (PA)^T X+X(PA)-(C^T W^T + WC)<0 \end{aligned}$$
(17)

The solution of the initial problem is obtained in two steps. First, we solve the linear matrix inequality (17) with respect to the unknowns X and W. Then we deduce the value of the gain K by the formula :

$$\begin{aligned} K=X^{-1}W \end{aligned}$$
(18)
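The equivalence behind this change of variables is easy to check numerically: for any symmetric positive definite X and any gain K, the left-hand side of (10) and that of (17) with \(W = XK\) coincide. A minimal sketch, with matrices chosen arbitrarily (PA stands for the product of P and A):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 3, 2
PA = rng.standard_normal((n, n))      # stands for the product P A in (10)
C = rng.standard_normal((p, n))
K = rng.standard_normal((n, p))
M = rng.standard_normal((n, n))
X = M @ M.T + n * np.eye(n)           # symmetric positive definite

W = X @ K                             # change of variables (16)
lhs10 = (PA - K @ C).T @ X + X @ (PA - K @ C)        # bilinear form (10)
lhs17 = PA.T @ X + X @ PA - (C.T @ W.T + W @ C)      # linear form (17)

print(np.allclose(lhs10, lhs17))      # True: K = X^{-1} W recovers (18)
```

Since X is invertible, solving the LMI (17) in (X, W) and then applying (18) solves the original bilinear problem exactly, with no initialization required.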

2.6 Pole Placement

In this section, we examine how to improve the performance of the observer in particular with respect to the convergence speed to zero of the state estimation error.

For a better estimation of the state, the observer dynamics is chosen to be faster than that of the system. For this purpose, we place the eigenvalues of the observer in the left half of the complex plane, so that their real parts are larger in absolute value than those of the eigenvalues of the state matrix.

To ensure some convergence dynamics of the state estimation error, we define the complex region \(\mathcal{D}(\alpha ,\beta )\) as the intersection of the disc centered at the origin with radius \(\beta \) and the half-plane to the left of the vertical line of abscissa \(- \alpha \), where \(\alpha \) is a positive constant (Fig. 1).

Fig. 1
LMI area

2.6.1 Corollary

The eigenvalues of the matrix N are in the LMI region \(\mathcal{D}(\alpha ,\beta )\) if there exist matrices \(\Delta X\) and \(\Delta K\) such that:

$$\begin{aligned} \left[ \begin{array}{cc} - \beta (X_0+ \Delta X) &{} N^T_0 X - (\Delta K C)^T X_0 \\ X N_0 - X_0 \Delta K C &{} - \beta (X_0+ \Delta X) \end{array} \right]<0 \\ \nonumber N^T_0 \Delta X + \Delta X N_0 - C^T \Delta K^T X_0 - X_0 \Delta K C + N^T_0 X_0 + X_0 N_0 + 2 \alpha (X_0 + \Delta X)<0 \end{aligned}$$
(19)

with

$$\begin{aligned} \left\{ \begin{array}{ll} N_0 = PA -K_0 C, \\ X= X_0 + \Delta X \end{array} \right. \end{aligned}$$
(20)
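Once a gain has been computed, membership of the eigenvalues of N in \(\mathcal{D}(\alpha ,\beta )\) is straightforward to verify a posteriori. The sketch below is a direct numerical check on the spectrum, not the LMI formulation itself; the matrix N is an illustrative choice:

```python
import numpy as np

def in_lmi_region(N, alpha, beta):
    """Check that every eigenvalue of N lies in D(alpha, beta):
    real part < -alpha and modulus < beta."""
    lam = np.linalg.eigvals(N)
    return bool(np.all(lam.real < -alpha) and np.all(np.abs(lam) < beta))

N = np.diag([-2.0, -3.0])                      # illustrative observer matrix
print(in_lmi_region(N, alpha=1.0, beta=5.0))   # True: both eigenvalues fit
print(in_lmi_region(N, alpha=2.5, beta=5.0))   # False: -2 is too slow
```

The \(\alpha\) bound enforces a minimum decay rate of the error, while the \(\beta\) bound prevents excessively fast (high-gain) observer dynamics.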

3 Introduction to the Development of Discontinuous Observers

In recent years, the control or diagnosis of uncertain dynamic systems subject to external disturbances has been the subject of great interest. In practice, it is not always possible to measure the state vector; in this case, a design method based only on the measured outputs and the known inputs is used.

From a robust control perspective, the desirable properties of variable structure control systems, especially with a sliding mode, are well developed [34, 35]. Despite the successful research and development activity of variable structure control theory and its insensitivity to uncertainties or unknown inputs, few authors have considered the application of its fundamental principles to the observer design problem. Utkin has presented the design of an observer with a discontinuous structure for which the error between the estimated and measured outputs is forced to converge to zero [36]. Dorling and Zinober [37] have explored the practical application of this observer to an uncertain system and examined the difficulties of choosing an appropriate sliding gain. Walcott et al. [38], Walcott and Żak [39] and Żak [40] have presented a method of observer design based on the Lyapunov approach. Under appropriate assumptions, they have shown the asymptotic decay of the state estimation error in the presence of bounded nonlinearities/uncertainties.

Recently, Ha [41] has presented a methodology to design a sliding mode controller for an uncertain linear system based on the pole placement technique. Xiong [42] has considered a sliding mode observer for the state estimation of an uncertain nonlinear system, the uncertainties being treated as unknown inputs. Islam [43] has proposed a theoretical and experimental evaluation of a sliding mode observer to measure the position and the velocity of a switched reluctance motor. In this section, we seek to construct a sliding mode observer building on the existing contributions described above. A detailed reminder of the design approaches of Utkin [34] and of Walcott and Żak [39, 44] is provided. Then, we are interested in the methodology developed by Edwards and Spurgeon [45, 46] to determine the gain expression of an observer, which overcomes the drawbacks of the Walcott and Żak observer [39]. A Lyapunov approach has been proposed to ensure the asymptotic convergence of the state estimation error. The solution of the Lyapunov inequalities leads to an LMI problem.

4 Methods for Discontinuous Observer Design

Considering the following uncertain dynamic system:

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{x}(t)= A x(t)+ B u(t) + f(x,u, t) \\ y(t)=C x(t) \end{array} \right. \end{aligned}$$
(21)

where x(t) is the state vector, u(t) is the vector of known inputs and y(t) represents the measurable output. \(A \in \mathbb {R}^{n\times n}\), \(B \in \mathbb {R}^{n\times m}\), \(C \in \mathbb {R}^{p\times n} \) with \(p\ge m\). The unknown function \(f : \mathbb {R}^{n} \times \mathbb {R}^{m} \times \mathbb {R}_{+} \rightarrow \mathbb {R}^{n}\) represents the uncertainties and satisfies the following condition:

$$\begin{aligned} \Vert f(x,u,t)\Vert \le \rho , ~~\forall x(t) \in \mathbb {R}^{n}, ~~ u(t) \in \mathbb {R}^{m},~~ t \ge 0 \end{aligned}$$
(22)

Moreover, the matrix C is assumed to have full row rank. The problem considered here is the reconstruction of the state vector in spite of the presence of unknown inputs.

4.1 Utkin Observer

Consider first the system (21) and assume that the pair (A, C) is observable and that \(f(x, u, t) \equiv 0\). Since the state reconstruction relies on the measured outputs, it is natural to perform a change of coordinates so that the system outputs appear directly as components of the state vector. Without loss of generality, the output matrix can be written as follows:

$$\begin{aligned} C=\left[ \begin{array}{cc} C_1 &{} C_2 \\ \end{array} \right] \end{aligned}$$
(23)

where \(C_1 \in \mathbb {R}^{p \times (n-p)}\), \(C_2 \in \mathbb {R}^{p \times p}\) with \(\det (C_2) \ne 0\); then the transformation matrix

$$\begin{aligned} T^{-1} = \left[ \begin{array}{cc} I_{n-p} &{} 0 \\ -C_2^{-1} C_1 &{} C_2^{-1} \end{array} \right] \end{aligned}$$
(24)

is non-singular and, in this new coordinate system, we can easily verify that the new output matrix is written as follows:

$$\begin{aligned} C T^{-1}=\left[ \begin{array}{cc} 0 &{} I_p \\ \end{array} \right] \end{aligned}$$
(25)

The new state and control matrices are expressed as:

$$\begin{aligned} A=TAT^{-1}=\left[ \begin{array}{cc} A_{11} &{} A_{12} \\ A_{21} &{} A_{22} \\ \end{array} \right] ~~~~ and~~~~ B=TB=\left[ \begin{array}{c} B_1 \\ B_2 \\ \end{array} \right] \end{aligned}$$
(26)

The nominal system can then be written as follows:

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{x_1}(t)= A_{11} x_1(t)+ A_{12} y(t)+ B_1 u(t) \\ \dot{y}(t)=A_{21} x_1(t)+ A_{22} y(t)+ B_2 u(t) \end{array} \right. \end{aligned}$$
(27)

where

$$\begin{aligned} \left[ \begin{array}{c} x_1(t) \\ y(t) \end{array} \right] =T x(t) ~~~and ~~~x_1(t) \in \mathbb {R}^{n-p}. \end{aligned}$$
(28)
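The transformation (23)–(25) is easy to check numerically. The sketch below (n = 3, p = 2; matrices chosen arbitrarily for illustration) verifies that the output matrix reduces to \([0~~I_p]\) in the new coordinates:

```python
import numpy as np

# Illustrative output matrix C = [C1 C2] with n = 3, p = 2.
n, p = 3, 2
C1 = np.array([[1.0], [2.0]])                # p x (n-p)
C2 = np.array([[1.0, 0.0], [1.0, 1.0]])      # p x p, invertible
C = np.hstack([C1, C2])

# T maps x to (x1, y); the matrix in (24) is its inverse.
T = np.block([[np.eye(n - p), np.zeros((n - p, p))],
              [C1, C2]])
Tinv = np.linalg.inv(T)

# New output matrix, as in (25)
print(np.allclose(C @ Tinv, np.hstack([np.zeros((p, n - p)), np.eye(p)])))  # True
```

The partitioned matrices of (26) are then obtained as `T @ A @ Tinv` and `T @ B` for any given A and B.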

The observer proposed by Utkin [36] has the following form:

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{\hat{x}}_1(t)= A_{11} \hat{x}_1(t)+ A_{12} \hat{y}(t)+ B_1 u(t) +L \upsilon (t) \\ \dot{\hat{y}}(t)=A_{21} \hat{x}_1(t)+ A_{22} \hat{y}(t)+ B_2 u(t)-\upsilon (t) \end{array} \right. \end{aligned}$$
(29)

where \((\hat{x}_1(t), \hat{y}(t))\) are the estimates of \(({x_1}(t), y(t))\), \(L \in \mathbb {R}^{(n-p) \times p}\) is the observer gain and the components of the discontinuous vector \(\upsilon (t)\) are defined by the following equation:

$$\begin{aligned} \upsilon _i(t)= M sign(\hat{y}_i(t)-y_i(t)) ,~~ for~~ M ~~\in ~~\mathbb {R}_+ \end{aligned}$$
(30)

where \(\hat{y}_i(t)\) and \(y_i(t)\) are the components of the vectors \(\hat{y}(t)\) and y(t) respectively and sign is the signum function.

Let us denote by \(e_1(t)\) and \(e_y(t)\) the state and output estimation errors.

$$\begin{aligned} e_1(t)= \hat{x}_1 (t)- x_1 (t)\\ \nonumber e_y(t)= \hat{y}(t)-y(t) \end{aligned}$$
(31)

From Eqs. (27), (29) and (31), the following system can be obtained:

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{e}_1(t)= A_{11} e_1(t)+ A_{12} e_y(t)+ L \upsilon (t) \\ \dot{e}_y(t)=A_{21} e_1(t)+ A_{22} e_y(t)- \upsilon (t) \end{array} \right. \end{aligned}$$
(32)

As the pair (A, C) is observable, so is the pair \((A_{11}, A_{21})\). Therefore, L can be chosen so that the eigenvalues of the matrix \(A_{11} + L A_{21}\) lie in the left half of the complex plane. Now let us define the new change of variable:

$$\begin{aligned} T_s^{-1}=\left[ \begin{array}{cc} I_{n-p} &{} -L \\ 0 &{} I_{p} \\ \end{array} \right] ~~ with ~~ \left[ \begin{array}{c} x'_1(t) \\ y(t) \end{array} \right] =T_s \left[ \begin{array}{c} x_1(t) \\ y(t) \end{array} \right] \end{aligned}$$
(33)

After this change of variable, the estimation errors can be written as:

$$\begin{aligned} \dot{e}'_1(t)= A'_{11}{e}'_1(t)+ A'_{12} e_y (t) \end{aligned}$$
(34)
$$\begin{aligned} \dot{e}_y(t)= A_{21}{e}'_1(t)+ A'_{22} e_y (t)-\upsilon (t) \end{aligned}$$
(35)

with \({e}'_1(t)= {e}_1(t)+L e_y (t)\), \(A'_{11}= A_{11}+ L A_{21}\), \(A'_{12}= A_{12} +L A_{22} -A'_{11} L\) and \( A'_{22} = A_{22}- A_{21} L\). It can be shown, using the theory of singular perturbations, that for M large enough, a sliding motion can arise on the output error (35). Thus, after a finite time \(t_s\), the error \(e_y(t)\) and its derivative are zero \((e_y(t) = 0, \dot{e}_y(t) = 0)\). Equation (34) becomes:

$$\begin{aligned} \dot{e}'_1(t)= A'_{11}{e}'_1(t) \end{aligned}$$
(36)

By correctly choosing the gain matrix L (so that the matrix \(A'_{11}\) is stable), the system of error Eqs. (34)–(35) is stable, i.e., \(e'_1(t) \rightarrow 0\) when \(t \rightarrow \infty \).

Therefore \(\hat{x}_1(t) \rightarrow x_1(t)\) and the other component of the state vector \(x_2(t)\) can be reconstructed in the original coordinate system as follows:

$$\begin{aligned} \hat{x}_2(t)= C^{-1}_{2}(y(t)-C_1 \hat{x}_1(t)) \end{aligned}$$
(37)

The main practical difficulty of this approach lies in the choice of an appropriate gain M to induce a sliding motion in finite time. Dorling and Zinober [37] have shown the need to adapt the gain M over time in order to reduce excessive switching.
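The behavior described above can be reproduced with a crude Euler simulation of the observer (29) on a two-state example (n = 2, p = 1) already in the coordinates of (27). All numerical values below are illustrative choices: the output error is driven into a sliding motion first, after which the remaining error decays through its own stable dynamics.

```python
import numpy as np

# System (27) with n = 2, p = 1; illustrative values.
A11, A12, A21, A22 = -1.0, 1.0, 1.0, -2.0
B1, B2, u = 1.0, 0.0, 1.0
L, M = 0.0, 5.0        # A11 + L*A21 = -1 is stable; M dominates the error terms
dt, steps = 1e-3, 20_000

x1, y = 1.0, 0.0       # true state
x1h, yh = 0.0, 0.5     # observer state

for _ in range(steps):
    v = M * np.sign(yh - y)                        # discontinuous term (30)
    dx1  = A11 * x1  + A12 * y  + B1 * u           # plant
    dy   = A21 * x1  + A22 * y  + B2 * u
    dx1h = A11 * x1h + A12 * yh + B1 * u + L * v   # observer (29)
    dyh  = A21 * x1h + A22 * yh + B2 * u - v
    x1, y, x1h, yh = x1 + dt*dx1, y + dt*dy, x1h + dt*dx1h, yh + dt*dyh

# After the sliding motion, both estimation errors are small
print(abs(x1h - x1) < 0.05, abs(yh - y) < 0.05)
```

With the explicit Euler discretization, \(e_y\) chatters in a band of width of order \(M\,dt\) around zero rather than staying exactly at zero, which is the discrete-time image of the excessive switching mentioned above.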

4.2 Walcott and Żak Observer

The problem considered by Walcott and Żak [39, 44] is the estimation of the state of a system described by (21) such that the error tends to zero exponentially despite the presence of the considered uncertainties. In this part, we assume that:

$$\begin{aligned} f(x, u, t)= R \xi (x,t) \end{aligned}$$
(38)

where \(\xi : \mathbb {R}^{n} \times \mathbb {R}_{+} \rightarrow \mathbb {R}^{q}\) is a bounded and unknown function, such that :

$$\begin{aligned} \Vert \xi (x(t), t)\Vert \le \rho , ~~\forall x(t) \in \mathbb {R}^{n},~~ t \ge 0 \end{aligned}$$

Consider a matrix \(G \in \mathbb {R}^{n \times p}\) such that the matrix \(A_0 = (A-GC)\) has stable eigenvalues, a pair of symmetric positive definite Lyapunov matrices (P, Q) and a matrix F satisfying the following structural constraint:

$$\begin{aligned} (A - G C)^T P + P (A-G C)= & {} -Q \\ \nonumber C^T F^T= & {} PR \end{aligned}$$
(39)

The proposed observer can be expressed as:

$$\begin{aligned} \dot{\hat{x}}= A \hat{x}(t)+ B u(t) - G(C \hat{x}(t)-y(t))+\upsilon (t) \end{aligned}$$
(40)
$$\begin{aligned} \upsilon (t)=\left\{ \begin{array}{ll} -\rho \dfrac{P^{-1}C^T F^T F C e(t)}{\Vert FCe(t)\Vert } ~~~\mathrm {if} ~~FCe(t) \ne 0\\ 0 ~~\mathrm {otherwise} \end{array} \right. \end{aligned}$$
(41)

where

$$\begin{aligned} e(t)=\hat{x}(t)-x(t) \end{aligned}$$
(42)

The dynamics of the state estimation error generated by this observer is determined by the following equation:

$$\begin{aligned} \dot{e}(t)= & {} \dot{\hat{x}}(t)-\dot{x}(t) \\ \nonumber= & {} A \hat{x}(t)+ Bu(t) - G(C \hat{x}(t) - y(t)) + \upsilon (t) - (Ax(t) + Bu(t) + R \xi (x, t)) \\ \nonumber= & {} (A - GC)e(t) + v(t)- R \xi (x, t) \end{aligned}$$
(43)

The following Lyapunov function is considered:

$$\begin{aligned} V(e(t))= & {} e^T(t)Pe(t) \end{aligned}$$
(44)

Its derivative along the trajectory of the estimation error can be written as:

$$\begin{aligned} \dot{V}(e(t))= & {} \dot{e}^T(t)Pe(t)+e^T(t)P \dot{e}(t) \\ \nonumber= & {} ((A- GC)e(t) + \upsilon (t) - R\xi (x,t))^T P e(t) + e^T(t)P((A- GC)e(t) + \upsilon (t) - R\xi (x, t))\\ \nonumber= & {} -e^T (t)Qe(t) + 2e^T (t)P v(t) - 2 e^T(t)P R \xi (x, t)\\ \nonumber= & {} -e^T (t)Qe(t) + 2e^T (t)P v(t) - 2 e^T(t)C^T F^T \xi (x, t) \nonumber \end{aligned}$$
(45)

Let us consider the two following cases:

First case

If \(F C e(t) \ne 0\), replacing \(\upsilon (t)\) by its expression (41), the derivative of the Lyapunov function becomes:

$$\begin{aligned} \dot{V}(e(t))= & {} -e^T(t)Qe(t)-2e^T(t)\rho \frac{C^T F^T F C e(t)}{\Vert FCe(t)\Vert }-2e^T(t) C^T F^T \xi (x,t) \\ \nonumber= & {} -e^T(t) Q e(t) - 2 \rho \Vert FCe(t)\Vert -2 e^T(t)C^T F^T \xi (x,t) \\ \nonumber \end{aligned}$$
(46)

Using the fact that the unknown function \(\xi (x,t)\) is bounded by the positive scalar \(\rho \), the derivative of the Lyapunov function can be bounded as follows:

$$\begin{aligned} \dot{V}(e(t))\le & {} -e^T(t)Qe(t)-2 \rho \Vert FCe(t)\Vert +2\rho \Vert FCe(t)\Vert \\ \nonumber\le & {} -e^T Q e(t) < 0\\ \nonumber \end{aligned}$$
(47)

Second case

If \(FCe(t) = 0\), the discontinuous term (41) vanishes and the derivative of the Lyapunov function reduces to:

$$\begin{aligned} \dot{V}(e(t))= & {} -e^T(t)Qe(t) < 0 \\ \nonumber \end{aligned}$$
(48)

Thus, in both cases, the derivative of the Lyapunov function is negative, which shows that the state estimation error converges asymptotically to zero. To guarantee the asymptotic convergence of the observer, we must verify that:

  • the pair (A, C) is observable,

  • there exists a pair of Lyapunov matrices (P, Q) and a matrix F satisfying the constraints (39).
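The mechanism above can be illustrated numerically: the injection (41) is built so that the cross term \(2e^T(t)P\upsilon (t)\) appearing in (45) equals \(-2\rho \Vert FCe(t)\Vert \) whenever \(FCe(t) \ne 0\), which is exactly what dominates the uncertainty term in (46)–(47). A minimal sketch, with P, F, C chosen arbitrarily for illustration:

```python
import numpy as np

rho = 2.0
P = np.array([[2.0, 0.5], [0.5, 1.0]])    # illustrative Lyapunov matrix (SPD)
F = np.array([[1.0]])                      # p = 1, q = 1 here
C = np.array([[0.0, 1.0]])

def v(e):
    """Discontinuous injection (41)."""
    s = F @ C @ e
    ns = np.linalg.norm(s)
    if ns == 0.0:
        return np.zeros_like(e)            # second case: no injection
    return -rho * np.linalg.solve(P, C.T @ F.T @ s) / ns

e = np.array([0.5, -1.0])
cross = 2.0 * e @ P @ v(e)                 # the cross term of (45)
print(np.isclose(cross, -2.0 * rho * np.linalg.norm(F @ C @ e)))  # True
```

Note that the injection only needs \(FCe(t)\), i.e. the output error, which is why the scheme is implementable without knowing the full state error.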

5 Sliding Mode Observer Using a Canonical Form

Edwards and Spurgeon [45, 46] have presented a method for designing a sliding mode observer based on the structure of the Walcott and Żak observer [39], while avoiding its major drawback mentioned above. For this purpose, let us consider again the dynamical system presented previously:

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{x}(t)= A x(t)+ B u(t) + R \xi (x,u, t) \\ y(t)=C x(t) \end{array} \right. \end{aligned}$$
(49)

where \(A \in \mathbb {R}^{n\times n}\), \(B \in \mathbb {R}^{n\times m}\), \(C \in \mathbb {R}^{p\times n} \) and \(R \in \mathbb {R}^{n\times q}\) with \( p \ge q\). We suppose that the matrices A, B and R are of full rank and that \(\xi : \mathbb {R}_{+} \times \mathbb {R}^{n} \times \mathbb {R}^m \rightarrow \mathbb {R}^q\) is an unknown bounded function such that:

$$\begin{aligned} \Vert \xi (x,u,t)\Vert \le \rho \end{aligned}$$
(50)

Before estimating the state and output vectors of the system (49), we perform two changes of coordinates on the state vector.

5.1 Simplified Output Equation

Suppose that the system described above is observable. It is quite natural to perform a change of coordinates so that the outputs of the system appear directly as components of the state vector. Without loss of generality, the output matrix can be written as [39]:

$$\begin{aligned} C=\left[ \begin{array}{cc} C_1 &{} C_2 \\ \end{array} \right] \end{aligned}$$
(51)

where \(C_1 \in \mathbb {R}^{p \times (n-p)}\), \(C_2 \in \mathbb {R}^{p \times p}\) and \(\det (C_2) \ne 0\).

Let us then perform the following change of coordinates:

$$\begin{aligned} \tilde{x}(t)=\tilde{T}x(t) \end{aligned}$$
(52)

where \(\tilde{T}\) is a non-singular matrix defined as:

$$\begin{aligned} \tilde{T}=\left[ \begin{array}{cc} I_{n-p} &{} 0 \\ C_1 &{} C_2 \end{array} \right] \end{aligned}$$
(53)

In this new coordinate system, we can easily verify that the new output matrix is written :

$$\begin{aligned} \tilde{C}=C \tilde{T}^{-1}=[ 0 ~~~I_p] \end{aligned}$$
(54)

Other matrices are transformed as follows:

$$\begin{aligned} \tilde{A}=\tilde{T}A\tilde{T}^{-1}=\left[ \begin{array}{cc} \tilde{A}_{11} &{} \tilde{A}_{12} \\ \tilde{A}_{21} &{} \tilde{A}_{22} \\ \end{array} \right] , ~~ \tilde{B}=\tilde{T}B=\left[ \begin{array}{c} \tilde{B}_{1} \\ \tilde{B}_2 \\ \end{array} \right] ,~~ and~~ \tilde{R}=\tilde{T}R=\left[ \begin{array}{c} \tilde{R}_1 \\ \tilde{R}_2 \\ \end{array} \right] ~~ \end{aligned}$$
(55)

The system (49) can then be written as :

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{\tilde{x}}(t)= \tilde{A} \tilde{x}(t)+ \tilde{B}u(t) + \tilde{R} \xi (x,u,t) \\ y(t)=\tilde{C} \tilde{x}(t)=\tilde{x}_2(t) \end{array} \right. \end{aligned}$$
(56)

This change of coordinates directly expresses the output vector as a function of part of the state vector.

Then, the constraints (39) and the Lyapunov matrices (P, Q) can be expressed as:

$$\begin{aligned} (\tilde{A}-\tilde{G}\tilde{C})^T \tilde{P}+ \tilde{P} (\tilde{A}-\tilde{G}\tilde{C})=-\tilde{Q} \\ \tilde{C}^T \tilde{F}^T = \tilde{P}\tilde{R} \nonumber \end{aligned}$$
(57)
$$\begin{aligned} \left\{ \begin{array}{lll} \tilde{P}&{}=&{} (\tilde{T}^{-1})^T P \tilde{T}^{-1} \\ \tilde{Q}&{}=&{} (\tilde{T}^{-1})^T Q \tilde{T}^{-1} \\ \tilde{G}&{}=&{} \tilde{T}^{-1} G \\ \end{array} \right. \end{aligned}$$
(58)

5.2 Decoupling of the Unknown Function

We can now use a result established by Walcott and Zak regarding the design of a robust observer with respect to the presence of unknown inputs or model uncertainties.

Let the linear model \((\tilde{A}, \tilde{B}, \tilde{R}, \tilde{C})\) be defined by the state Eq. (56) where \(\tilde{A}\) is a stable matrix, and let \((\bar{A}, \bar{B}, \bar{R}, \bar{C})\) be related to \((\tilde{A}, \tilde{B}, \tilde{R}, \tilde{C})\) by the following coordinate transformation:

$$\begin{aligned} \bar{x}(t)= \bar{T}\tilde{x}(t) \end{aligned}$$
(59)

Matrices \((\bar{A}, \bar{B}, \bar{R}, \bar{C})\) are expressed as:

$$\begin{aligned} \left\{ \begin{array}{ll} \bar{A}= \bar{T}\tilde{A}\bar{T}^{-1}=\left[ \begin{array}{cc} \bar{A}_{11} &{} \bar{A}_{12} \\ \bar{A}_{21} &{} \bar{A}_{22} \\ \end{array} \right] , ~~~~ \bar{B}=\bar{T} \tilde{B}=\left[ \begin{array}{c} \bar{B}_{1} \\ \bar{B}_{2} \\ \end{array} \right] \\ \bar{R}=\bar{T}\tilde{R}=\left[ \begin{array}{c} \bar{R}_1 \\ \bar{R}_2 \\ \end{array} \right] , ~~~~ \bar{C}=\tilde{C}\bar{T}^{-1}=\big [0 ~~ I_p \big ] \end{array} \right. \end{aligned}$$
(60)

The constraints (57) and the matrices \((\tilde{P}, \tilde{Q})\) become:

$$\begin{aligned} (\bar{A}-\bar{G}\bar{C})^T \bar{P}+ \bar{P}(\bar{A}-\bar{G}\bar{C})=-\bar{Q} \\ \bar{C}^T \bar{F}^T=\bar{P} \bar{R} \nonumber \end{aligned}$$
(61)
$$\begin{aligned} \left\{ \begin{array}{lll} \bar{P}= (\bar{T}^{-1})^T \tilde{P} \bar{T}^{-1} \\ \bar{Q}= (\bar{T}^{-1})^T \tilde{Q} \bar{T}^{-1} \\ \bar{G}= \bar{T}^{-1} \tilde{G} \end{array} \right. \end{aligned}$$
(62)

First Proposition:

Consider the linear model \((\tilde{A}, \tilde{B}, \tilde{R}, \tilde{C})\) defined by the state equation (56) for which there exists a pair of matrices \((\tilde{P}, F)\) defined by the constraints (57) and (58); then there exists a nonsingular transformation \(\bar{T}\) such that, in the new coordinates, the matrices \((\bar{A}, \bar{B}, \bar{R}, \bar{C})\), \((\bar{P},F)\) have the following properties:

  1.

    \(\bar{A}=\left[ \begin{array}{cc} \bar{A}_{11} &{} \bar{A}_{12} \\ \bar{A}_{21} &{} \bar{A}_{22} \\ \end{array} \right] ~~~~ \text {where}~~\bar{A}_{11} \in \mathbb {R}^{(n-p)\times (n-p)} ~~\text {is} ~~\text {a} ~~\text {stable}~~\text {matrix} \)

  2.

    \(\bar{R}=\left[ \begin{array}{c} 0 \\ \tilde{P}^*_{22} F^T\\ \end{array} \right] ~~~~ \text {where} ~~\tilde{P}^*_{22} \in \mathbb {R}^{p \times p} \)

  3.

    \(\bar{C}=\left[ \begin{array}{cc} 0 &{} I_p \\ \end{array} \right] \)

  4.

    The Lyapunov matrix has a block-diagonal structure \( \bar{P}=\left[ \begin{array}{cc} \bar{P}_1 &{} 0\\ 0&{} \bar{P}_2 \\ \end{array} \right] ~~~~ \text {with} ~~\bar{P}_1 \in \mathbb {R}^{(n-p) \times (n-p)} ~~\text {and} ~~\bar{P}_2 \in \mathbb {R}^{p \times p} \)

Proof

Let the pair \((\tilde{P}, F)\) be associated with the linear model \((\tilde{A}, \tilde{B}, \tilde{R}, \tilde{C})\), and let the Lyapunov matrix \(\tilde{P}\) be written in the following form:

$$\begin{aligned} \tilde{P}=\left[ \begin{array}{cc} \tilde{P}_{11} &{} \tilde{P}_{12} \\ \tilde{P}_{21} &{} \tilde{P}_{22} \\ \end{array} \right] ~~~~ \text {where} \left\{ \begin{array}{ll} \tilde{P}_{11} \in \mathbb {R}^{(n-p) \times (n-p)} \\ \tilde{P}_{12} \in \mathbb {R}^{(n-p) \times p}~~ \text {and}~~\tilde{P}_{22} \in \mathbb {R}^{p \times p} \\ \end{array} \right. \end{aligned}$$
(63)

The coordinate change uses the following transformation matrix \(\bar{T}\) :

$$\begin{aligned} \bar{T}=\left[ \begin{array}{cc} \tilde{P}_{11} &{} \tilde{P}_{12}\\ 0&{} I_p \\ \end{array} \right] \end{aligned}$$
(64)

which is nonsingular, the matrix \(\tilde{P}_{11}\) being a symmetric positive definite matrix \(\tilde{P}_{11} = \tilde{P}_{11}^T >0\). In the new coordinates, we obtain \(\bar{C}= \tilde{C} \bar{T}^{-1}=\big [0 ~~ I_p \big ]\); thus, property 3 is satisfied. From Eq. (58) we obtain \(\tilde{R}=\tilde{P}^{-1} \tilde{C}^{T} F^T\). If we note:

$$\begin{aligned} \tilde{P}^{-1}=\left[ \begin{array}{cc} P_{11}^* &{} P_{12}^* \\ P_{21}^* &{} P_{22}^* \\ \end{array} \right] \end{aligned}$$
(65)

we obtain

$$\begin{aligned} \bar{R}=\bar{T}\tilde{R}=\left[ \begin{array}{cc} \tilde{P}_{11} &{} \tilde{P}_{12} \\ 0 &{} I_p \\ \end{array} \right] ~~\left[ \begin{array}{cc} P_{11}^* &{} P_{12}^* \\ P_{21}^* &{} P_{22}^* \\ \end{array} \right] ~~\left[ \begin{array}{c} 0 \\ I_p \\ \end{array} \right] F^T=\left[ \begin{array}{c} 0 \\ P_{22}^* F^T \\ \end{array} \right] =\left[ \begin{array}{c} 0 \\ \bar{R}_2\\ \end{array} \right] \end{aligned}$$
(66)

Thus, the second property, which establishes the decoupling of the unknown inputs (uncertain function), is proved. If there exists a Lyapunov matrix \(\tilde{P}\) satisfying the constraints (58), then the matrix \(\bar{P}=(\bar{T}^{-1})^T \tilde{P} \bar{T}^{-1}\) is a Lyapunov matrix for the state matrix \(\bar{A}_0=\bar{A}-\bar{G}\bar{C}\) and satisfies the constraint \(\bar{C}^T F^T=\bar{P}\bar{R}\). A direct calculation gives:

$$\begin{aligned} \bar{P}=\left[ \begin{array}{cc} \tilde{P}_{11}^{-1} &{} 0 \\ -\tilde{P}_{12}^T \tilde{P}_{11}^{-1} &{} I_p \\ \end{array} \right] ~~\left[ \begin{array}{cc} \tilde{P}_{11} &{} \tilde{P}_{12} \\ \tilde{P}_{12}^T &{} \tilde{P}_{22} \\ \end{array} \right] ~~\left[ \begin{array}{cc} \tilde{P}_{11}^{-1} &{} -\tilde{P}_{11}^{-1}\tilde{P}_{12} \\ 0 &{} I_p \\ \end{array} \right] =\left[ \begin{array}{cc} \tilde{P}_{11}^{-1} &{} 0 \\ 0 &{} \bar{P}_2 \\ \end{array} \right] \end{aligned}$$
(67)

where \(\bar{P}_2=\tilde{P}_{22}-\tilde{P}_{12}^T \tilde{P}_{11}^{-1} \tilde{P}_{12}\) is the Schur complement of \(\tilde{P}_{11}\). Thus, the matrix \(\bar{P}\) has the block-diagonal structure stated in property 4. Finally, replacing the matrix \(\bar{P}\) (67) in the constraint (61), we obtain:

$$\begin{aligned} \left[ \begin{array}{cc} \bar{A}_{011} &{} \bar{A}_{012} \\ \bar{A}_{021} &{} \bar{A}_{022} \\ \end{array} \right] ^T~\left[ \begin{array}{cc} \bar{P}_{1} &{} 0 \\ 0 &{} \bar{P}_{2} \\ \end{array} \right] + \left[ \begin{array}{cc} \bar{P}_{1} &{} 0 \\ 0 &{} \bar{P}_{2} \\ \end{array} \right] ~\left[ \begin{array}{cc} \bar{A}_{011} &{} \bar{A}_{012} \\ \bar{A}_{021} &{} \bar{A}_{022} \\ \end{array} \right] <0 \end{aligned}$$
(68)

Then

$$\begin{aligned} \left\{ \begin{array}{ll} \bar{A}_{011}^T \bar{P}_1+ \bar{P}_1 \bar{A}_{011}<0 \\ \bar{A}_{022}^T \bar{P}_2+ \bar{P}_2 \bar{A}_{022} <0\\ \end{array} \right. \end{aligned}$$
(69)

with

$$\begin{aligned} \bar{A}_0=\left[ \begin{array}{cc} \bar{A}_{011} &{} \bar{A}_{012} \\ \bar{A}_{021} &{} \bar{A}_{022} \\ \end{array} \right] . \end{aligned}$$
(70)

We have \(\bar{A}_{011} = \bar{A}_{11} - (\bar{G}\bar{C})_{11} = \bar{A}_{11}\), since \((\bar{G}\bar{C})_{11} = 0\) for every \(\bar{G} \in \mathbb {R}^ {n \times p}\) because \(\bar{C} =\left[ \begin{array}{cc} 0 &{} I_p \\ \end{array} \right] \); the matrix \(\bar{A}_{11}\) is therefore stable. Thus property 1 is proved.
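The construction used in the proof can be checked numerically. The sketch below, assuming a randomly generated symmetric positive definite \(\tilde{P}\) and illustrative dimensions \(n=4\), \(p=2\), builds the transformation \(\bar{T}\) of (64) and verifies properties 3 and 4:

```python
import numpy as np

# Numerical check of the coordinate change (64); n = 4 and p = 2 are
# illustrative values, and P_tilde is a random symmetric positive
# definite matrix standing in for the Lyapunov matrix.
rng = np.random.default_rng(0)
n, p = 4, 2

M = rng.standard_normal((n, n))
P_tilde = M @ M.T + n * np.eye(n)          # symmetric positive definite

P11 = P_tilde[:n-p, :n-p]
P12 = P_tilde[:n-p, n-p:]
P22 = P_tilde[n-p:, n-p:]

# Transformation matrix T_bar of (64).
T = np.block([[P11, P12],
              [np.zeros((p, n-p)), np.eye(p)]])
Tinv = np.linalg.inv(T)

# Property 3: with C_tilde = [0  I_p], C_bar = C_tilde T^{-1} = [0  I_p].
C_tilde = np.hstack([np.zeros((p, n-p)), np.eye(p)])
C_bar = C_tilde @ Tinv
assert np.allclose(C_bar, C_tilde)

# Property 4: P_bar = (T^{-1})^T P_tilde T^{-1} is block diagonal, with
# upper block P11^{-1} and lower block the Schur complement
# P22 - P12^T P11^{-1} P12, as in (67).
P_bar = Tinv.T @ P_tilde @ Tinv
assert np.allclose(P_bar[:n-p, n-p:], 0)
assert np.allclose(P_bar[n-p:, :n-p], 0)
assert np.allclose(P_bar[:n-p, :n-p], np.linalg.inv(P11))
assert np.allclose(P_bar[n-p:, n-p:], P22 - P12.T @ np.linalg.inv(P11) @ P12)
```
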

6 Sliding Mode Observer

The implementation of control laws based on the nonlinear model of a system requires the knowledge of the complete state vector of the system at each instant. However, in most cases, only part of the state is accessible through sensors.

To reconstruct the complete system state, the idea is to use a software sensor, called an observer.

An observer is a dynamic system which, from the system input u(t) (the control), the measured output y(t), and a priori knowledge of the model, provides an estimated state \(\hat{x}(t)\) that should tend towards the real state x(t).

One of the best known classes of robust observers is the sliding mode observers [47].

6.1 Design of Sliding Mode Observer

The principle of sliding mode observers consists in constraining the dynamics of the n-order system, by means of discontinuous functions, to converge towards a manifold s of dimension \((n-p)\), called the sliding surface (p is the dimension of the measurement vector) [47].

The attractiveness of this surface is ensured by sliding conditions. If these conditions are verified, the system converges towards the sliding surface and then evolves according to an \((n-p)\)-order dynamics.

In the case of sliding mode observers, the dynamics concerned are those of the observation errors \(e(t)=x(t)-\hat{x}(t)\).

From their initial values e(0), these errors converge to their equilibrium values in two steps:

  • In the first step, the observation error trajectory evolves towards the sliding surface, on which the errors between the observer output and the real system output (measurements), \(e_y=y-\hat{y}\), are equal to zero. This step, which is generally very fast, is called the reaching mode.

  • In the second step, the observation error trajectory remains on the sliding surface with imposed dynamics, to cancel all the observation errors. This last mode is called sliding mode.

Consider the following n-order nonlinear state system :

$$\begin{aligned} \left\{ \begin{array}{lll} \dot{x}(t)=f(x,u)\\ y(t)=h(x) \end{array} \right. \end{aligned}$$
(71)

where \(x \in \mathbb {R}^n\) is the state vector, \( u \in \mathbb {R}^m\) is the vector of known inputs or control, \( y \in \mathbb {R}^p\) represents the output vector.

The functions f and h are vector fields assumed to be continuously differentiable with respect to x.

The input u is locally bounded and measurable.

The sliding mode observer is defined with the following structure [48]:

$$\begin{aligned} \left\{ \begin{array}{lll} \dot{\hat{x}}=f(\hat{x},u)-K\mathrm {sign}(\hat{y}-y)\\ \hat{y}=h(\hat{x}) \end{array} \right. \end{aligned}$$
(72)

where K is the observer gain matrix, of dimension \(n \times p\).

The obtained observer is a copy of the system model plus a correction term which establishes the convergence of \(\hat{x}\) to x (Fig. 2).

The sliding surface in this case is given by:

\(s(x)=y-\hat{y}\).

The correction term is proportional to the discontinuous signum function applied to the output error, defined by [48]:

$$\begin{aligned} \mathrm {sign}(x)=\left\{ \begin{array}{lll} 1 &{} \text {if}~~x>0 \\ 0 &{} \text {if}~~x=0 \\ -1 &{} \text {if}~~x<0 \end{array} \right. \end{aligned}$$
(73)
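As a small illustration, the signum function (73) and the resulting correction term of (72) can be transcribed directly; the componentwise application to the output error below is a minimal sketch:

```python
def sign(x):
    """Signum function of (73): 1 if x > 0, 0 if x == 0, -1 if x < 0."""
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0

def correction(K, y_hat, y):
    """Correction term K*sign(y_hat - y) of (72), applied componentwise."""
    return [K * sign(yh - yi) for yh, yi in zip(y_hat, y)]

print(correction(2.0, [0.3, 0.0, -1.7], [0.0, 0.0, 0.0]))  # [2.0, 0.0, -2.0]
```
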
Fig. 2 Block diagram of a sliding mode observer

The sliding mode observer must satisfy two conditions in order to guarantee that the estimated state converges to the real state:

  • The first condition concerns the reaching mode and guarantees the attractiveness of the sliding surface \(S=0\), of dimension p. The sliding surface is attractive if the Lyapunov function \(V(x)=S^T S\) verifies the condition \(\dot{V}(x)<0\) if \(S\ne 0\).

  • The second condition concerns the sliding mode. During this step, the corrective gain matrix satisfies the following invariance condition:

    $$\begin{aligned} \left\{ \begin{array}{ll} \dot{S}=0 \\ S=0 \\ \end{array} \right. \end{aligned}$$
    (74)

    The system dynamics are reduced: the n-order system becomes an equivalent \((n-p)\)-order system. These criteria allow the synthesis of the sliding mode observer and determine its operation [49].
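The attractiveness condition can be illustrated on a scalar sliding variable. The sketch below, a minimal simulation with illustrative values (gain \(K=2\) against a disturbance bounded by \(d_{max}=0.5\)), shows that \(\dot{S}=d(t)-K\,\mathrm{sign}(S)\) reaches the surface \(S=0\) in finite time, as predicted by \(\dot{V}=2S\dot{S}\le -2(K-d_{max})|S|<0\):

```python
import numpy as np

# Finite-time reaching of the sliding surface S = 0 for the scalar
# dynamics S_dot = d(t) - K*sign(S), with |d(t)| <= d_max < K.
# All numerical values are illustrative.
K, d_max, dt = 2.0, 0.5, 1e-4
S, t = 1.0, 0.0

while abs(S) > 1e-3 and t < 5.0:
    d = d_max * np.sin(10 * t)           # bounded disturbance
    S += dt * (d - K * np.sign(S))       # explicit Euler step
    t += dt

# The reaching time is bounded by |S(0)| / (K - d_max) = 1/1.5 s.
print(f"surface reached at t = {t:.2f} s")
```
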

6.2 Sliding Mode Observer of Linear Systems

Consider the following linear system:

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{x}=A x(t) + B u(t) \\ y=C x(t) \\ \end{array} \right. \end{aligned}$$
(75)

where \(x \in \mathbb {R}^n\) is the state vector, \( u \in \mathbb {R}^m\) is the input vector and \( y \in \mathbb {R}^p\) denotes the output vector.

Matrices A, B and C have appropriate dimensions.

The pair (A, C) is assumed to be observable.

The reconstruction of the state variables is based on the measured outputs. A change of coordinates can be performed so that the outputs appear directly as components of the state vector.

Recalling Eq. (51), a nonsingular transformation matrix T allows the state and control matrices to be rewritten as follows:

$$\begin{aligned} \tilde{A}=T A T^{-1}= \left[ \begin{array}{cccc} A_{11} &{} A_{12} \\ A_{21} &{} A_{22} \end{array} \right] \end{aligned}$$
(76)
$$\begin{aligned} \tilde{B}=T B= \left[ \begin{array}{c} B_{1} \\ B_{2} \end{array} \right] \end{aligned}$$
(77)

The linear system presented in Eq. (75) can thus be put in the following form:

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{x}_1(t)=A_{11} x_1(t) + A_{12} y(t)+ B_1 u(t) \\ \dot{y}=A_{21} x_1(t) + A_{22} y(t)+ B_2 u(t) \\ \end{array} \right. \end{aligned}$$
(78)
$$\begin{aligned} T x(t)=\left[ \begin{array}{c} x_1(t) \\ y(t) \\ \end{array} \right] \end{aligned}$$
(79)

with \(x_1(t) \in \mathbb {R}^{n-p}\). The proposed sliding mode observer for this type of system is expressed as:

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{\hat{x}}_1(t)=A_{11} \hat{x}_1(t) + A_{12} \hat{y}(t)+ B_1 u(t)+L K \,\mathrm {sign} (\hat{y}_i(t)-y_i(t)) \\ \dot{\hat{y}}=A_{21}\hat{ x}_1(t) + A_{22} \hat{y}(t)+ B_2 u(t)- K \,\mathrm {sign} (\hat{y}_i(t)-y_i(t)) \\ \end{array} \right. \end{aligned}$$
(80)

where \(L \in \mathbb {R}^{(n-p) \times p}\) is the observer gain, \(K >0\), and \(\hat{y}_i(t)\) and \(y_i(t)\) are the components of the vectors \(\hat{y}(t)\) and y(t), respectively.

The state and output estimation errors are given by :

$$\begin{aligned} \left\{ \begin{array}{ll} e_1(t)= \hat{x}_1(t)-x_1(t)\\ e_y(t)= \hat{y}(t) - y(t) \\ \end{array} \right. \end{aligned}$$
(81)

From Eqs. (78), (80) and (81), the dynamics of the estimation errors can be written as:

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{e}_1(t)=A_{11} e_1(t) + A_{12} e_y(t)+ L K \,\mathrm {sign} (\hat{y}_i(t)-y_i(t)) \\ \dot{e}_y(t)=A_{21} e_1(t) + A_{22} e_y(t)- K \,\mathrm {sign} (\hat{y}_i(t)-y_i(t)) \\ \end{array} \right. \end{aligned}$$
(82)

The pair \((A_{11},A_{21})\) is observable because the pair (A, C) is observable. Therefore, the gain L can be chosen such that the eigenvalues of the matrix \(A_{11}+ L A_{21}\) lie in the left half of the complex plane.
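As a sketch, the observer (80) can be simulated on a hypothetical second-order system already in the coordinates of (78), with n = 2 and p = 1. All numerical values below are illustrative, and L = 0 suffices here because \(A_{11}\) is already stable:

```python
import numpy as np

# Euler simulation of the sliding mode observer (80) for a system in
# the form (78); matrices and gains are illustrative choices.
A11, A12, A21, A22 = -1.0, 1.0, 2.0, -3.0
B1, B2 = 0.0, 1.0
L, K, dt = 0.0, 5.0, 1e-4    # K bounds |A21*e1| so sliding is maintained

x1, y = 1.0, 0.0             # true state (x1 unmeasured, y measured)
x1h, yh = 0.0, 0.0           # observer state

for k in range(int(5.0 / dt)):
    u = np.sin(k * dt)
    nu = np.sign(yh - y)     # discontinuous output injection
    dx1  = A11 * x1  + A12 * y  + B1 * u
    dy   = A21 * x1  + A22 * y  + B2 * u
    dx1h = A11 * x1h + A12 * yh + B1 * u + L * K * nu
    dyh  = A21 * x1h + A22 * yh + B2 * u - K * nu
    x1, y   = x1 + dt * dx1,   y + dt * dy
    x1h, yh = x1h + dt * dx1h, yh + dt * dyh

# Once the surface e_y = 0 is reached, e1 obeys the stable dynamics
# governed by A11 (+ L*A21), so both errors converge.
print(abs(yh - y), abs(x1h - x1))
```
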

6.3 Triangular Sliding Mode Observer

The triangular sliding mode observer has the following form:

$$\begin{aligned} \left\{ \begin{array}{ll} \left( \begin{array}{c} \dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_{n-1} \\ \dot{x}_n \\ \end{array} \right) =\left( \begin{array}{c} x_2+g_1(x_1,u) \\ x_3+g_2(x_1,x_2,u) \\ \vdots \\ x_n+g_{n-1}(x_1,x_2, \ldots ,x_{n-1},u) \\ f_n(x)+g_n(x,u) \\ \end{array} \right) \\ y=x_1 \end{array} \right. \end{aligned}$$
(83)

where the \(g_i\) for \(i=1,2, \ldots , n\) and \(f_n\) are analytic functions, \(x=[x_1 ~ x_2 ~ \ldots ~ x_n]^T \in \mathbb {R}^n\) is the system state, \(u \in \mathbb {R}^m\) is the input vector and \(y \in \mathbb {R}\) is the output.

The proposed observer structure is as follows:

$$\begin{aligned} \left\{ \begin{array}{ll} \left( \begin{array}{c} \dot{\hat{x}}_1 \\ \dot{\hat{x}}_2 \\ \vdots \\ \dot{\hat{x}}_{n-1} \\ \dot{\hat{x}}_n \\ \end{array} \right) =\left( \begin{array}{c} \hat{x}_2+g_1(x_1,u)+ \lambda _1 \mathrm {sign}_1(x_1-\hat{x}_1) \\ \hat{x}_3+g_2(x_1,\bar{x}_2,u)+ \lambda _2 \mathrm {sign}_2(\bar{x}_2-\hat{x}_2) \\ \vdots \\ \hat{x}_n+g_{n-1}(x_1,\bar{x}_2, \ldots ,\bar{x}_{n-1},u)+ \lambda _{n-1} \mathrm {sign}_{n-1}(\bar{x}_{n-1}-\hat{x}_{n-1}) \\ f_n(x_1,\bar{x}_2, \ldots ,\bar{x}_{n})+g_n(x_1,\bar{x}_2, \ldots ,\bar{x}_{n},u)+\lambda _{n} \mathrm {sign}_{n}(\bar{x}_{n}-\hat{x}_{n}) \\ \end{array} \right) \\ \hat{y}=\hat{x}_1 \end{array} \right. \end{aligned}$$
(84)

where \(\bar{x}_i=\hat{x}_i+\lambda _{i-1}\mathrm {sign}_{moy,i-1}(\bar{x}_{i-1}-\hat{x}_{i-1})\), with \(\mathrm {sign}_{moy,i-1}\) denoting the function \(\mathrm {sign}_{i-1}\) filtered by a low-pass filter. \(\mathrm {sign}_{i}(.)\) is equal to zero if there exists \(j \in \{1, \ldots , i-1\}\) such that \(\bar{x}_j-\hat{x}_j\ne 0\) (by definition \(\bar{x}_1=x_1\)); otherwise \(\mathrm {sign}_{i}(.)\) is taken equal to the classical function \(\mathrm {sign}(.)\). With this definition, the corrector term is "active" only if the condition \(\bar{x}_j-\hat{x}_j=0\) for \(j =1,2, \ldots , i-1\) is verified.

There exists a choice of \(\lambda _j\) such that the observer state \(\hat{x}\) converges in a finite time to the state x of the system.

Let us consider the dynamics of the observer error \(e=x-\hat{x}\) and proceed step by step. For \(e_1=x_1-\hat{x}_1\), we obtain: \(\dot{e}_1=e_2-\lambda _1 \mathrm {sign}(e_1)\) with \(e_2=x_2-\hat{x}_2\).

If \(\lambda _1>|e_2|_{max}\), the sliding surface \(e_1=0\) is reached after a finite time \(t_1\), which means that \(\dot{e}_1=0\) for \(t>t_1\).

There exists a continuous function, noted \(\mathrm {sign}_{eq}\), defined by \(e_2-\lambda _1 \mathrm {sign}_{eq}(e_1)=0\), implying \(\bar{x}_2=x_2\) on the sliding surface; since \(\mathrm {sign}_{eq}=\mathrm {sign}_{moy}\), then:

$$\begin{aligned} \dot{e}_1=x_2-(\hat{x}_2+\lambda _1 \mathrm {sign}_{eq}(x_1-\hat{x}_1))=x_2-\bar{x}_2=0 \end{aligned}$$
(85)

Once \(x_2\) is known, we can move on to the dynamics of \(e_2\).

After \(t_1\), we have \(\bar{x}_2=x_2\), which implies that \(g_2(x_1,x_2,u)-g_2(x_1,\bar{x}_2,u)=0\). Then \(\dot{e}_2=e_3-\lambda _2 \mathrm {sign}(e_2)\). Following the same reasoning, if \(\lambda _2>|e_3|_{max}\), we obtain after a finite time \(t_2>t_1\) the convergence to the surface \(e_1=e_2=0\). The dynamics of the remaining observer error on the sliding surface is given by \(\dot{e}_2=0\). Then \(x_3=\bar{x}_3\), because \(\dot{e}_2=x_3-(\hat{x}_3+\lambda _2 \mathrm {sign}_{eq}(x_2-\hat{x}_2))=x_3-\bar{x}_3=0\).

By repeating this process \((n-1)\) times, we obtain after \(t_{n-1}\) the convergence of all the observer errors on the sliding surface \(e_1=e_2= \cdots =0\), and consequently \(\bar{x}\) tends towards x: in a finite time \(t_{n-1}\), the whole state is known and the observer error is zero.
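This step-by-step convergence can be sketched numerically. The example below assumes a hypothetical second-order system \(\dot{x}_1=x_2\), \(\dot{x}_2=-\sin x_1-0.5x_2+u\), \(y=x_1\) (a damped pendulum, with all gains and values purely illustrative), and realises \(\mathrm{sign}_{moy}\) as a first-order low-pass filter of the sign term:

```python
import numpy as np

# Triangular sliding mode observer (84) for a hypothetical 2nd-order
# system; lam1, lam2, the filter constant tau and the step dt are
# illustrative tuning values (lam1 > |e2|_max, lam2 drives e2 to zero).
lam1, lam2, tau, dt = 3.0, 5.0, 5e-3, 1e-4

def f2(x1, x2, u):
    return -np.sin(x1) - 0.5 * x2 + u    # second state equation

x = np.array([0.5, -1.0])                # true state
xh = np.array([0.0, 0.0])                # observer state
s_moy = 0.0                              # low-pass filtered sign_1 (sign_moy)

for k in range(int(4.0 / dt)):
    u = np.sin(k * dt)
    e1 = x[0] - xh[0]                    # output error, y = x1 is measured
    s1 = np.sign(e1)
    s_moy += dt / tau * (s1 - s_moy)     # sign_moy approximates sign_eq
    x2_bar = xh[1] + lam1 * s_moy        # reconstructed x2 on the surface
    # sign_2 is switched on only once the first surface e1 = 0 is reached
    s2 = np.sign(x2_bar - xh[1]) if abs(e1) < 1e-3 else 0.0
    dx  = np.array([x[1], f2(x[0], x[1], u)])
    dxh = np.array([xh[1] + lam1 * s1, f2(x[0], x2_bar, u) + lam2 * s2])
    x = x + dt * dx
    xh = xh + dt * dxh

print(abs(x[0] - xh[0]), abs(x[1] - xh[1]))
```

The residual errors are dominated by the chattering and the filter lag; with the step sizes above, both remain small compared to the initial errors.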

7 Conclusion

Preliminaries on unknown input observers are presented in this chapter. The evolution of discontinuous observers, as well as the methods by which they are constructed, are also discussed. The sliding mode observers are the focus of the chapter’s concluding sections.