
2.1 Introduction

Modern control systems, such as the near space hypersonic vehicle (NSHV) considered in this chapter, are becoming increasingly complex and involve a growing number of actuators and sensors. These physical components may become faulty, which can degrade system performance and lead to instability, possibly resulting in catastrophic accidents. To improve system reliability and guarantee stability in all situations, fault detection and isolation (FDI) and fault accommodation methods have become attractive topics that have received considerable attention during the past two decades, as attested by the abundant literature [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]. Fault tolerant control (FTC) aims at preserving the functionalities of a faulty system with acceptable performance. FTC can be achieved in two ways, namely passive and active. The former uses feedback control laws that are robust with respect to possible system faults, whereas the latter relies on an FDI module and accommodation techniques.

It is valuable to point out that, although abundant results are available in the literature, most results on actuator faults only consider bias faults; gain faults have not attracted enough attention, which motivates this chapter. In addition, in some existing works, the asymptotic estimation error \(\lim _{t \rightarrow \infty }{e_x}(t)=e_x(\infty )\) was used as an indicator to distinguish a faulty system from a healthy one: if \({e_x}(\infty ) = 0\), the system is healthy; if \({e_x}(\infty ) \ne 0\), the system is faulty. However, \({e_x}(\infty )\) is not available in practice, so \({e_x}(\infty ) \ne 0\) cannot practically be used as a fault indicator. Another motivation of this work is thus to provide a fault indicator, with an associated decision algorithm, that is efficient in practical applications.

The concept of near space hypersonic vehicle was first proposed by the American Air Force in a military exercise called “Schrieffer” in 2005. The NSHV is a class of vehicle flying in near space, which offers a promising, new and lower-cost technology for future spacecraft. It can advance space transportation and also support prompt global strike capabilities. Such a complex technological system has attracted considerable interest from the control research and aeronautical engineering communities over the past couple of decades, and significant results have been reported [21,22,23,24,25,26,27,28,29,30,31,32]. For such a high-technology system, with major economic and societal stakes, it is of course essential to maintain high reliability against possible faults. One of the difficulties in designing FTC for the NSHV is that its dynamics are complex, nonlinear, multi-variable and strongly coupled. To deal with these difficulties, the T-S fuzzy system was used to describe the NSHV attitude dynamics [33]. During the past two decades, the stability analysis of Takagi-Sugeno (T-S) fuzzy systems has attracted increasing attention [34,35,36,37,38,39,40,41,42]. These studies combine the flexibility of fuzzy logic theory and the rigorous mathematical theory of linear/nonlinear systems into a unified framework. The important advantage of a T-S fuzzy system is its universal approximation of any smooth nonlinear function by a “blending” of local linear models, which greatly facilitates the analysis and synthesis of complex nonlinear systems. Many stability criteria for T-S fuzzy systems have been expressed in terms of linear matrix inequalities (LMIs) via various stability analysis methods (see [43,44,45,46,47,48,49,50] and the references therein). In [51], the authors studied the problem of fault-tolerant tracking control for near-space-vehicle attitude dynamics with a bias actuator fault, where the bias fault was assumed to be an unknown constant. However, in practical applications, the fault may be time-varying, which motivates this chapter.

In this chapter, we investigate the problem of fault tolerant control for T-S fuzzy systems with time-varying actuator faults, with the objective of providing an efficient solution for controlling the NSHV in faulty situations. Compared with existing work, four main contributions are worth emphasizing.

  1.

    The actuator fault model presented in this chapter integrates not only time-varying gain faults but also time-varying bias faults, which means that a wide class of faults can be handled. The theoretical developments and results of this chapter are thus valuable for a wide range of practical applications.

  2.

    An adaptive fault estimation algorithm is proposed in which the common assumption that the time derivative of the output errors must be known is removed, and the parameter drift phenomenon is prevented even in the presence of bounded disturbances.

  3.

    Compared with some existing results, a decision threshold for FDI is defined and applied to an online computable fault indicator rather than to the asymptotic value of a criterion, which makes the decision algorithm more practical.

  4.

    The proposed fault estimation observer is designed to estimate online not only bias faults but also gain faults.

The rest of this chapter is organized as follows. In Sect. 2.2, the T-S fuzzy model is briefly recalled, actuator faults are integrated into this model, and the FTC objective is formulated. In Sect. 2.3, the main technical results of this chapter are given, including fault detection, isolation, estimation and the fault-tolerant control scheme. The NSHV application is presented in Sect. 2.4: the T-S fuzzy model is employed to approximate the nonlinear NSHV attitude dynamics, and simulation results demonstrate the effectiveness of the proposed technique. Finally, Sect. 2.5 draws the conclusion.

2.2 Problem Statement and Preliminaries

Consider the following T-S fuzzy model composed of a set of fuzzy implications, where each implication is expressed by a linear state space model. The ith rule of this T-S fuzzy model is of the following form:

Plant Rule i: IF \({z_1}(t)\) is \({M_{i1}}\) and \( \ldots {z_q}(t)\) is \({M_{iq}}\), THEN

$$\begin{aligned} \left\{ \begin{aligned}&\dot{x}(t) = {A_i}x(t) + {B_i}u(t) \\&y(t) = {C_i}x(t) \\ \end{aligned} \right. \end{aligned}$$
(2.1)

where \(i = 1, \ldots ,r\), r is the number of the IF-THEN rules, \({M_{ij}}\), \(j = 1, \ldots ,q\) is the fuzzy set, \(z(t) = {[{z_1}(t), \ldots ,{z_q}(t)]^T}\) are the premise variables which are supposed to be known, \(x(t) = \) \({[{x_1}(t), \ldots ,{x_n}(t)]^T} \in {R^n}\), \(u(t) \in {R^m},{A_i} \in {R^{n \times n}}\), and \({B_i} \in {R^{n \times m}}\).

The overall fuzzy system is inferred as follows:

$$\begin{aligned} \left\{ \begin{aligned}&\dot{x}(t) = \sum \limits _{i = 1}^r {{h_i}} (z(t))({A_i}x(t) + {B_i}u(t)) \\&y(t) = \sum \limits _{i = 1}^r {{h_i}} (z(t)){C_i}x(t) \\ \end{aligned} \right. \end{aligned}$$
(2.2)

where \({h_i}(z(t))\) is defined as

$$\begin{aligned} {h_i}(z(t)) = \frac{{\prod \limits _{j = 1}^q {{M_{ij}}[{z_j}(t)]} }}{{\sum \limits _{i = 1}^r {\prod \limits _{j = 1}^q {{M_{ij}}[{z_j}(t)]} } }},\quad i = 1,2, \ldots ,r \end{aligned}$$
(2.3)

where \({M_{ij}}[{z_j}(t)]\) is the grade of membership of \({z_j}(t)\) in \({M_{ij}}\). It is assumed in this chapter that \(\prod \nolimits _{j = 1}^q {{M_{ij}}[{z_j}(t)]} \geqslant 0\) and \(\sum \nolimits _{i = 1}^r \prod \nolimits _{j = 1}^q {{M_{ij}}[{z_j}(t)]} > 0\) for all t. Therefore, we have \(\sum \limits _{i = 1}^r {{h_i}(z(t))} = 1\) and \(0 \leqslant {h_i}(z(t)) \leqslant 1\) for all t.
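To make the blending in (2.2)-(2.3) concrete, the following minimal Python sketch computes the normalized firing strengths \(h_i(z(t))\) from the membership grades and evaluates the blended dynamics; the inputs are illustrative placeholders and the sketch is only a numerical rendering of the formulas above, not part of the chapter's design.

```python
import numpy as np

def firing_strengths(membership_grades):
    """Normalized firing strengths h_i(z(t)) of (2.3).

    membership_grades: array of shape (r, q); entry (i, j) is the grade of
    z_j(t) in the fuzzy set M_ij. Returns h with h_i >= 0 and sum(h) = 1.
    """
    w = np.prod(membership_grades, axis=1)   # rule activations
    return w / np.sum(w)                     # normalization, assumes sum(w) > 0

def blended_dynamics(x, u, h, A_list, B_list):
    """Right-hand side of the overall fuzzy system (2.2): sum_i h_i (A_i x + B_i u)."""
    return sum(hi * (Ai @ x + Bi @ u) for hi, Ai, Bi in zip(h, A_list, B_list))
```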

In this chapter, the state feedback control strategy is chosen as a parallel distributed compensation (PDC), which can be described as follows:

Control Rule i: IF \({z_1}(t)\) is \({M_{i1}}\) and \( \ldots {z_q}(t)\) is \({M_{iq}}\), THEN

$$\begin{aligned} {u_i}(t) = {K_i}x(t) \end{aligned}$$
(2.4)

where \({K_i}\) is the controller gain matrix to be determined later.

The overall fuzzy controller is given as follows:

$$\begin{aligned} u(t) = \sum \limits _{i = 1}^r {{h_i}(z(t)){K_i}} x(t) \end{aligned}$$
(2.5)

The control objective under normal conditions is to design a proper state feedback controller u(t) such that the system (2.2) is stable.

However, in practical application, actuators may become faulty. Bias faults and gain faults are two kinds of actuator faults commonly occurring in practice. An actuator bias fault can be described as:

$$\begin{aligned} u_i^f(t) = {u_i}(t) + {f_i}(t), ~~~i = 1, \ldots ,m \end{aligned}$$
(2.6)

where \({f_i}(t)\) denotes a bounded signal, and an actuator gain fault can be described as:

$$\begin{aligned} u_i^f(t) = (1 - {\rho _i}(t)){u_i}(t), ~~i = 1, \ldots ,m \end{aligned}$$
(2.7)

where \({\rho _i}(t)\), with \(0 \leqslant {\rho _i}(t) \leqslant 1\), is supposed to be unknown and denotes the loss of control effectiveness, so that \(1 - {\rho _i}(t)\) is the remaining control rate. Therefore, the above two kinds of actuator faults can be uniformly described as:

$$\begin{aligned} u_i^f(t) = (1 - {\rho _i}(t)){u_i}(t) + {f_i}(t) \end{aligned}$$
(2.8)

Furthermore, a more general fault model can be given as:

$$\begin{aligned} u_i^f(t) = (1 - {\rho _i}(t)){u_i}(t) + \sum \limits _{j = 1}^{{p_i}} {{g_{i,j}}{f_{i,j}}(t)} \end{aligned}$$
(2.9)

where \({f_{i,j}}(t),~i = 1, \ldots ,m,~j = 1, \ldots ,{p_i}\), denote bounded signals, \({p_i}\) is a known positive integer, and \({g_{i,j}}\) denotes an unknown constant. Without loss of generality, suppose \({p_1} = {p_2} = \cdots = {p_m}= p\), with p a known positive integer, and denote \({a_{i,j}}(t) = {g_{i,j}}{f_{i,j}}(t)\). Then, (2.9) can be re-written as follows:

$$\begin{aligned} u_i^f(t) = (1 - {\rho _i}(t)){u_i}(t) + \sum \limits _{j = 1}^p {{a_{i,j}}(t)} \end{aligned}$$
(2.10)

Denote

$$\begin{aligned} \varGamma (t) = diag({\rho _1}(t), \ldots ,{\rho _m}(t)) \end{aligned}$$
(2.11)
$$\begin{aligned} F(t) = {[{f_1},{f_2}, \ldots ,{f_m}]^T},{f_i} = \sum \limits _{j = 1}^p {{a_{i,j}}(t)} \end{aligned}$$
(2.12)

Then, we have

$$\begin{aligned} {u^f}(t) = (I - \varGamma (t))(u(t) + F(t)),~~t>t_f \end{aligned}$$
(2.13)

where the failure time instant \({t_f}\) is unknown, and I denotes the identity matrix with appropriate dimensions. In this chapter, both bias and gain faults are handled by considering the general fault model (2.13).
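As an illustration of the unified model (2.13), the following sketch applies a loss-of-effectiveness matrix \(\varGamma(t)\) and a lumped bias F(t) to a nominal input; the numerical values in the example are illustrative only.

```python
import numpy as np

def faulty_input(u, rho, f):
    """Unified actuator fault model (2.13): u_f(t) = (I - Gamma(t)) (u(t) + F(t)).

    u   : nominal control input, shape (m,)
    rho : loss-of-effectiveness factors rho_i(t), shape (m,)
    f   : lumped bias terms f_i(t) = sum_j a_{i,j}(t), shape (m,)
    """
    Gamma = np.diag(rho)
    return (np.eye(len(u)) - Gamma) @ (u + f)

# illustrative example: 40% effectiveness loss and a bias on the 2nd of two actuators
u_nominal = np.array([1.0, -0.5])
print(faulty_input(u_nominal, rho=np.array([0.0, 0.4]), f=np.array([0.0, 0.2])))
```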

Notice that, in the following, just for the sake of notational simplicity, we will use \({h_i},{\rho _i}\) and \({a_{i,j}}\) to denote \({h_i}(z(t)),{\rho _i}(t)\) and \({a_{i,j}}(t)\).

Now, the control objective is re-defined as follows. An active fault tolerant control approach is proposed to keep system (2.2) stable in both normal and faulty conditions. Under normal conditions (no fault), a state feedback control input u(t) is designed such that the system (2.2) is stable, while the FDI algorithm runs in parallel. As soon as an actuator fault is detected and isolated, the fault estimation algorithm is activated, and the obtained fault estimate is used to design a proper control input u(t) such that the system (2.2) remains stable in the faulty case.

Remark 2.1

In the literature, many works consider actuator faults. However, most of them only consider bias faults; gain faults have not attracted enough attention. In [51], a class of bias faults was studied, where the fault was assumed to be an unknown constant. However, in practical applications, the fault may be time-varying. Equation (2.10) is a deterministic but uncertain actuator model which represents a class of practical actuator faults such as actuator gain variations and measurement errors. In fact, the fault model in [51] can be described by (2.10): if \({\rho _i}(t) = 0\), model (2.10) reduces to the bias fault model (2.6); if \({a_{i,j}}(t) = 0\), it reduces to the gain fault model (2.7); and if \({\rho _i}(t) = 0\) and the bias is an unknown constant, (2.10) describes the constant bias fault model of [51]. Hence, the proposed actuator fault model (2.10) is more general and has wider practical use than the classical ones.

2.3 Fault Diagnosis and Accommodation

In this section, the main technical results of this chapter are given. We first formulate the fault diagnosis and accommodation problem for the above T-S fuzzy system. We then design a bank of sliding mode observers (SMOs) to generate residuals, develop the FDI algorithm based on these SMOs, and propose an FTC scheme that tolerates the fault using the estimated fault information.

2.3.1 Preliminary

Consider the T-S fuzzy faulty system described in (2.2). We assume that only actuator faults occur and that no sensor fault is involved. For simplicity, we consider the case where only one actuator is faulty at a time. The actuator fault diagnosis problem is formulated as follows: using the available output y(t), we propose an observer-based scheme to identify the faulty actuator and then estimate the fault.

To solve this problem, we design a bank of SMOs with the desired actuator fault detection and fault estimation properties. The following assumptions are made in this chapter.

Assumption 2.1

Matrix \({B_i}\) is of full column rank and the pair \(({A_i},{C_i})\) is observable.

Assumption 2.2

There exist known positive constants \({\bar{\rho }_i},{\bar{\bar{\rho }}_i},\) \({\bar{\rho }_1},{\bar{\rho }_2}\), such that \(|{\rho _i}(t)| \leqslant {\bar{\rho } _i}\) and \(|{\dot{\rho } _i}(t)| \leqslant {\bar{\bar{\rho }}_i}\), \({\bar{\rho } _1}\) \( = \max \{ {\bar{\rho } _1},\) \({\bar{\rho } _2}, \ldots ,{\bar{\rho } _m}\} \), \({\bar{\rho } _2} = \max \{ {\bar{\bar{\rho }} _1},{\bar{\bar{\rho }} _2}, \ldots , {\bar{\bar{\rho }} _m}\} \), \(i = 1, \ldots ,m\).

Assumption 2.3

There exist known positive constants \({\bar{a}_1},~{\bar{a}_2},~{\bar{a}_{i,j}},~{\bar{\bar{a}}_{i,j}}\), such that \(|{a_{i,j}}(t)| \leqslant {\bar{a}_{i,j}}\) and \(|{\dot{a}_{i,j}}(t)| \leqslant {\bar{\bar{a}}_{i,j}}\), \({\bar{a}_1} = \max \{ {\bar{a}_{1,1}}, \ldots ,{\bar{a}_{i,p}}, \ldots ,{\bar{a}_{m,1}}, \ldots ,{\bar{a}_{m,p}}\} \), \({\bar{a}_2} = \) \(\max \{ {\bar{\bar{a}}_{1,1}}, \ldots ,\) \({\bar{\bar{a}}_{i,p}}, \ldots ,{\bar{\bar{a}}_{m,1}}, \ldots ,{\bar{\bar{a}}_{m,p}}\} \), \(i = 1, \ldots ,m\), \(j = 1, \ldots ,p\).

Our actuator fault diagnosis and accommodation scheme consists of FDI and FTC. We first design the fault diagnosis observer utilizing SMOs to detect, isolate and estimate the fault, and then, propose a FTC method to compensate the fault.

2.3.2 Fault Detection

In order to detect actuator faults, we design a fuzzy state-space observer for the system (2.2), which is described as:

\(Observer\,Rule\,i\): IF \({z_1}(t)\) is \({M_{i1}}\) and \( \ldots {z_q}(t)\) is \({M_{iq}}\), THEN

$$\begin{aligned} \left\{ \begin{aligned}&\dot{\hat{x}}(t) = {A_i}\hat{x}(t) + {B_i}u(t) + {L_i}(y(t) - \hat{y}(t)) \\&\hat{y}(t) = {C_i}\hat{x}(t) \\ \end{aligned} \right. \end{aligned}$$
(2.14)

where \({L_i},i = 1,\ldots ,r\) is the observer gain for the ith observer rule.

The overall fuzzy system is inferred as follows:

$$\begin{aligned} \left\{ \begin{aligned}&\dot{\hat{ x}}(t) = \sum \limits _{i = 1}^r {{h_i}(z(t))} ({A_i}\hat{x}(t) + {B_i}u(t) + {L_i}(y(t) - \hat{y}(t))) \\&\hat{y}(t) = \sum \limits _{i = 1}^r {{h_i}(z(t)){C_i}\hat{x}(t)} \\ \end{aligned} \right. \end{aligned}$$
(2.15)

Denote

$$\begin{aligned} {e_x} = x(t) - \hat{x}(t),~~{e_y} = y(t) - \hat{y}(t) \end{aligned}$$
(2.16)

then the error dynamics is described by

$$\begin{aligned} \left\{ \begin{aligned}&{{\dot{e}}_x} = \sum \limits _{i = 1}^r {{h_i}(z(t))({A_i} - {L_i}{C_i}){e_x}(t)} \\&{e_y} = \sum \limits _{i = 1}^r {{h_i}(z(t)){C_i}{e_x}(t)} \\ \end{aligned} \right. \end{aligned}$$
(2.17)

Lemma 2.1

The estimation error \({e_x}\) converges asymptotically to zero if there exist matrices \(P = {P^T} > 0\) and \({Q_i} > 0\) with appropriate dimensions such that the following linear matrix inequality is satisfied:

$$\begin{aligned} P({A_i} - {L_i}{C_i}) + {({A_i} - {L_i}{C_i})^T}P \leqslant - {Q_i},\forall i = 1,2, \ldots ,r \end{aligned}$$
(2.18)

Proof

Consider the following Lyapunov function

$$\begin{aligned} {V_1} = e_x^T(t)P{e_x}(t) \end{aligned}$$

Differentiating \({V_1}\) with respect to time t, one has

$$\begin{aligned} \begin{aligned} {{\dot{V}}_1}(t)&= \sum \limits _{i = 1}^r {{h_i}(z(t))[e_x^T(t)(P({A_i} - {L_i}{C_i}) + {{({A_i} - {L_i}{C_i})}^T}P){e_x}(t)]} \\&\leqslant - \sum \limits _{i = 1}^r {{h_i}(z(t))[e_x^T(t){Q_i}{e_x}(t)]} \\&\leqslant 0 \\ \end{aligned} \end{aligned}$$
(2.19)

Because \({V_1}(t) \in {L_\infty }\) is monotonically non-increasing and bounded, \({V_1}( + \infty )\) exists. Hence, we have \({V_1}(0) - {V_1}( + \infty ) \geqslant \int _{{\text { }}0}^{ + \infty } {\sum \limits _{i = 1}^r {{h_i}(z(t))[e_x^T(t){Q_i}{e_x}(t)]} }\,dt \), i.e., \({e_x}(t) \in {L_2}\). Since, in addition, \({e_x}(t),{\dot{e}_x}(t) \in {L_\infty }\), Barbalat's lemma gives \(\mathop {\lim }\limits _{t \rightarrow \infty } {e_x}(t) = 0\), and therefore \(\mathop {\lim }\limits _{t \rightarrow \infty } {e_y}(t) = 0\). The proof is completed.

From Lemma 2.1, we have

$$\begin{aligned} \begin{aligned} {{\dot{V}}_1}(t)&\leqslant - \sum \limits _{i = 1}^r {{h_i}(z(t))[e_x^T(t){Q_i}{e_x}(t)]} \\&\leqslant - \sum \limits _{i = 1}^r {{h_i}(z(t))[{\lambda _{\min }}({Q_i})e_x^T(t){e_x}(t)]} \\&\leqslant - \sum \limits _{i = 1}^r {{h_i}(z(t))[{\lambda _{\min }}({Q_i})/{\lambda _{\max }}(P)e_x^T(t)P{e_x}(t)]} \\&\leqslant - \sum \limits _{i = 1}^r {{h_i}(z(t))[{\lambda _{\min }}({Q_i})/{\lambda _{\max }}(P)]{V_1}(t)} \leqslant - \kappa {V_1}(t) \\ \end{aligned} \end{aligned}$$
(2.20)

where \(\kappa = \min (\frac{{{\lambda _{\min }}({Q_1})}}{{{\lambda _{\max }}(P)}},\frac{{{\lambda _{\min }}({Q_2})}}{{{\lambda _{\max }}(P)}}, \ldots ,\frac{{{\lambda _{\min }}({Q_r})}}{{{\lambda _{\max }}(P)}}) \in R\).

Hence,

$$\begin{aligned} {V_1}(t) \leqslant {e^{ - \kappa t}}{V_1}(0) \end{aligned}$$
(2.21)

Furthermore, we have

$$\begin{aligned} {\lambda _{\min }}(P)||{e_x}(t)|{|^2} \leqslant {e^{ - \kappa t}}{\lambda _{\max }}(P)||{e_x}(0)|{|^2} \end{aligned}$$
(2.22)

Therefore the norm of the error vector satisfies

$$\begin{aligned} \begin{aligned} ||{e_x}(t)||&\leqslant \sqrt{\frac{{{e^{ - \kappa t}}{\lambda _{\max }}(P)}}{{{\lambda _{\min }}(P)}}} ||{e_x}(0)|| \\&= \sqrt{{\lambda _{\max }}(P)/{\lambda _{\min }}(P)} ||{e_x}(0)||{e^{ - \kappa t/2}} \\ \end{aligned} \end{aligned}$$
(2.23)

Furthermore, the detection residual can be defined as:

$$\begin{aligned} J = ||y(t) - \hat{y}(t)|| \end{aligned}$$
(2.24)

From (2.23), it can be seen that the following inequality holds in the healthy case:

$$\begin{aligned} J \leqslant \sum \limits _{i = 1}^r {{h_i}(z(t))\sqrt{{\lambda _{\max }}(P)/{\lambda _{\min }}(P)} ||{C_i}||||{e_x}(0)||{e^{ - \kappa t/2}}} \end{aligned}$$
(2.25)

Then, the fault detection can be performed using the following mechanism:

$$\begin{aligned} \left\{ \begin{aligned}&J \leqslant {T_d}{\text { no fault occurred,}} \\&J > {T_d}{\text { fault has occurred}} \\ \end{aligned} \right. \end{aligned}$$
(2.26)

where threshold \({T_d}\) is defined as follows:

$${T_d} = \sum \limits _{i = 1}^r {{h_i}(z(t))\sqrt{{\lambda _{\max }}(P)/{\lambda _{\min }}(P)} ||{C_i}||||{e_x}(0)||{e^{ - \kappa t/2}}}.$$
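A minimal sketch of the residual evaluation (2.24) and decision logic (2.26) is given below; since \({e_x}(0)\) is generally not known exactly, the threshold \(T_d\) is treated here as a constant design parameter (the simulations of Sect. 2.4 use \(T_d = 0.1\)), which is a pragmatic simplification of the time-varying bound above.

```python
import numpy as np

def detection_residual(y, y_hat):
    """Detection residual J(t) = ||y(t) - y_hat(t)|| of (2.24)."""
    return np.linalg.norm(np.asarray(y) - np.asarray(y_hat))

def fault_detected(y, y_hat, T_d=0.1):
    """Decision mechanism (2.26): a fault is declared as soon as J exceeds T_d."""
    return detection_residual(y, y_hat) > T_d
```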

Remark 2.2

It is easy to find from (2.20) that, if no actuator fault occurs, we have \(\lim _{t \rightarrow \infty } {e_x} = 0\). If there is an actuator fault, then \(\lim _{t \rightarrow \infty } {e_x} \ne 0\). Therefore, in some existing work, the fault detection is carried out as:

$$\begin{aligned} \left\{ \begin{aligned}&{\lim }_{t \rightarrow \infty } {e_x} = 0,{\text { no fault occurred}} \\&{\lim }_{t \rightarrow \infty } {e_x} \ne 0,{\text { fault has occurred}} \\ \end{aligned} \right. \end{aligned}$$
(2.27)

and the above observer given by (2.15) was referred to as the fault detection observer for the system described by (2.2). However, it is valuable to point out that \({e_x}(\infty )\) is not available in practice, thus \({e_x}(\infty ) \ne 0\) cannot be considered as an indicator of fault occurrence. That is to say, the above fault detection (2.27) does not work in practical applications. Therefore, the mechanism (2.26) is more efficient for fault detection in practical cases.

2.3.3 Fault Isolation

Since the system has m actuators and it is assumed that only one single fault occurs at one time, we have m possible faulty cases in total. When the sth (\(1 \leqslant s \leqslant m\)) actuator is faulty, the faulty model can be described as:

$$\begin{aligned} \left\{ \begin{aligned}&{{\dot{x}}_s}(t) =\sum \limits _{i = 1}^r {{h_i}(z(t)){A_i}{x_s}(t)} + \sum \limits _{i = 1}^r {{h_i}(z(t)){B_i}u(t) - } \\&~~~~~~~~~~~~\sum \limits _{i = 1}^r {{h_i}(z(t)){b_{i,s}}[{\rho _s}(t){u_s}(t) - \sum \limits _{j = 1}^p {{a_{s,j}}(t)} ]} \\&y(t) = \sum \limits _{i = 1}^r {{h_i}(z(t)){C_i}{x_s}(t)} \\ \end{aligned} \right. \end{aligned}$$
(2.28)

where \({B_i} = [{b_{i,1}},{b_{i,2}}, \ldots ,{b_{i,m}}]\), \({b_{i,l}} \in {R^{n \times 1}},~1 \leqslant l \leqslant m\), \({\rho _s}(t)\) and \({a_{s,j}}(t),~j = 1,2, \ldots ,p\), denote the time profiles of the sth actuator fault, described by (2.10), and \({u_s}(t)\) is the control input designed for the sth actuator in the healthy case. Inspired by the SMOs in [52], we are now ready to present one of the main results of this chapter. Assuming that the fuzzy observers and the fuzzy controller share the same premise variables z(t), the following fuzzy observers are proposed to isolate the actuator fault.

\(Isolation\,Observer\,Rule\,i\): IF \({z_1}(t)\)is \({M_{i1}}\) and \( \ldots {z_q}(t)\) is \({M_{iq}}\), THEN

$$\begin{aligned} \left\{ \begin{aligned}&{{\dot{\hat{x}}}_{is}}(t) = {A_i}{{\hat{x}}_{is}}(t) + {L_i}(y(t) - {{\hat{y}}_{is}}(t)) + {B_i}u(t) + {b_{i,s}}{\mu _s}[{{\bar{\rho } }_s}|{u_s}(t)| + \sum \limits _{j = 1}^p {{{\bar{a}}_{s,j}}} ] \\&{{\hat{y}}_{is}}(t) = {C_{is}}{{\hat{x}}_{is}}(t) \\ \end{aligned} \right. \end{aligned}$$
(2.29)

where \({\hat{x}_{is}}(t),{\hat{y}_{is}}(t)\) are the sth fuzzy observer’s state and output, respectively. \({L_i}\) is the observer’s gain matrix for ith observer. The global fuzzy observer is represented as:

$$\begin{aligned} \left\{ \begin{aligned}&{{\dot{\hat{x}}}_s}(t) = \sum \limits _{i = 1}^r {{h_i}(z(t)){A_i}{{\hat{x}}_{is}}} (t) + \sum \limits _{i = 1}^r {{h_i}(z(t)){L_i}(y(t) - {{\hat{y}}_{is}}(t))} + \\&~~~~~~~~~~~~\sum \limits _{i = 1}^r {{h_i}(z(t)){B_i}u} (t) + \sum \limits _{i = 1}^r {{h_i}(z(t)){b_{i,s}}{\mu _s}[{{\bar{\rho } }_s}|{u_s}(t)|} + \sum \limits _{j = 1}^p {{{\bar{a}}_{s,j}}} ]\\&{{\hat{y}}_s}(t) = \sum \limits _{i = 1}^r {{h_i}(z(t)){C_i}{{\hat{x}}_s}(t)} \\&{\mu _s} = - \sum \limits _{i = 1}^r {{h_i}(z(t))} {F_{is}}{e_{ys}}(t)/||\sum \limits _{i = 1}^r {{h_i}(z(t))} {F_{is}}{e_{ys}}(t)|| \\ \end{aligned} \right. \end{aligned}$$
(2.30)

where \({F_{is}} \in {R^{1 \times n}}\) is the sth row of \({F_i} \in {R^{m \times n}}\), which will be defined later, \({L_i} \in {R^{n \times n}}\) is chosen such that \({A_i} - {L_i}{C_i}\) is Hurwitz, \({e_{xs}}(t) = {x_s}(t) - {\hat{x}_s}(t)\) and \({e_{ys}}(t) = y(t) - {\hat{y}_s}(t)\) are respectively the state error and output error between the plant and the sth SMO observer.

For \(s = l\), the error dynamics is obtained from (2.28) and (2.30).

$$\begin{aligned} \begin{aligned} {{\dot{e}}_{xs}}(t)&= \sum \limits _{i = 1}^r {{h_i}(z(t)){A_i}{e_{xs}}} (t) - \sum \limits _{i = 1}^r {{h_i}(z(t)){L_i}(y(t) - {{\hat{y}}_{is}}(t))}\,+ \\&~~~~\sum \limits _{i = 1}^r {{h_i}(z(t)){b_{i,s}}[( - {\rho _s}(t){u_s}(t) - {\mu _s}{{\bar{\rho } }_s}|{u_s}(t)|) + \sum \limits _{j = 1}^p {({a_{s,j}}(t) - {\mu _s}{{\bar{a}}_{s,j}})} ]} \\&= \sum \limits _{i = 1}^r {{h_i}(z(t))\{ ({A_i} - {L_i}{C_i}){e_{xs}}} (t) + {b_{i,s}}[( - {\rho _s}(t){u_s}(t) - {\mu _s}{{\bar{\rho } }_s}|{u_s}(t)|) + \sum \limits _{j = 1}^p {({a_{s,j}}(t) - {\mu _s}{{\bar{a}}_{s,j}})} ]\} \\ \end{aligned} \end{aligned}$$
(2.31)

For \(s \ne l\), we have

$$\begin{aligned} \begin{aligned} {{\dot{e}}_{xs}}(t) =&\sum \limits _{i = 1}^r {{h_i}(z(t))({A_i} - {L_i}{C_i}){e_{xs}}} (t) + \\&\sum \limits _{i = 1}^r {{h_i}(z(t))[( - {b_{i,l}}{\rho _l}(t){u_l}(t) - {b_{i,s}}{\mu _s}{{\bar{\rho } }_s}|{u_s}(t)|)} + \\&\sum \limits _{j = 1}^p {({b_{i,l}}{a_{l,j}}(t) - {b_{i,s}}{\mu _s}{{\bar{a}}_{s,j}})} ] \\ \end{aligned} \end{aligned}$$
(2.32)

The stability of the state error dynamics is guaranteed by the following theorem.

Theorem 2.1

Under Assumptions 2.1–2.3, if there exist a common symmetric positive definite matrix P and matrices \({L_i},~{F_i}\), and \({Q_i} > 0\), \(i =1,2, \ldots ,r\), with appropriate dimensions, such that the following conditions hold:

$$\begin{aligned} {({A_i} - {L_i}{C_i})^T}P + P({A_i} - {L_i}{C_i}) \leqslant - {Q_i}, \end{aligned}$$
(2.33)
$$\begin{aligned} P{B_i} = {({F_i}{C_i})^T}. \end{aligned}$$
(2.34)

Then, when the lth actuator is faulty, for \(s = l\), \( {\lim }_{t \rightarrow \infty } {e_{xs}} = 0\), and for \(s \ne l\), \( {\lim }_{t \rightarrow \infty } {e_{xs}} \ne 0\).

Proof

(1) For \(s = l\), according to (2.31), we have

$$\begin{aligned} \begin{aligned} {{\dot{e}}_{xs}}(t) =&\sum \limits _{i = 1}^r {{h_i}(z(t))({A_i} - {L_i}{C_i}){e_{xs}}} (t) \,+ \\&\sum \limits _{i = 1}^r {{h_i}(z(t)){b_{i,s}}[( - {\mu _s}{{\bar{\rho } }_s}|{u_s}(t)| - {\rho _s}(t){u_s}(t))} - \\&\sum \limits _{j = 1}^p {{\mu _s}{{\bar{a}}_{s,j}}} + \sum \limits _{j = 1}^p {{a_{s,j}}(t)} ] \\ \end{aligned} \end{aligned}$$

Define the following Lyapunov function

$$\begin{aligned} {V_2}(t) = e_{xs}^T(t)P{e_{xs}}(t) \end{aligned}$$
(2.35)

Differentiating \({V_2}\) with respect to time t, and using (2.33), one has

$$\begin{aligned} \begin{aligned} {{\dot{V}}_2}(t)&= \dot{e}_{xs}^T(t)P{e_{xs}}(t) + e_{xs}^T(t)P{{\dot{e}}_{xs}}(t) \\&\leqslant - e_{xs}^T(t){Q_i}{e_{xs}}(t) + 2e_{xs}^T(t)P\sum \limits _{i = 1}^r {{h_i}(z(t)){b_{i,s}} \cdot } \\&~~~~[( - {\mu _s}{{\bar{\rho } }_s}|{u_s}(t)| - {\rho _s}(t){u_s}(t)) - \sum \limits _{j = 1}^p {{\mu _s}{{\bar{a}}_{s,j}}} + \sum \limits _{j = 1}^p {{a_{s,j}}(t)} ] \\ \end{aligned} \end{aligned}$$

From \({\mu _s} = - \sum \limits _{i = 1}^r {{h_i}(z(t))} {F_{is}}{e_{ys}}(t)/||\sum \limits _{i = 1}^r {{h_i}(z(t))} {F_{is}}{e_{ys}}(t)||\) and (2.34), one has

$$2e_{xs}^T(t)P\sum \limits _{i = 1}^r {{h_i}(z(t)){b_{i,s}}( - {\mu _s}{{\bar{\rho } }_s}|{u_s}(t)| - {\rho _s}(t){u_s}(t))} \leqslant 0,$$
$$2e_{xs}^T(t)P\sum \limits _{i = 1}^r {{h_i}(z(t)){b_{i,s}}( - \sum \limits _{j = 1}^p {{\mu _s}{{\bar{a}}_{s,j}}} + \sum \limits _{j = 1}^p {{a_{s,j}}(t)} } ) \leqslant 0.$$

Hence,

$$\begin{aligned} {\dot{V}_2}(t) \leqslant - e_{xs}^T(t){Q_i}{e_{xs}}(t) \leqslant 0 \end{aligned}$$
(2.36)

Because \({V_2}(t) \in {L_\infty }\) is monotonically non-increasing and bounded, \({V_2}( + \infty )\) exists. Hence, we have \({V_2}(0) - {V_2}( + \infty ) \geqslant \int _{~0}^{ + \infty } {e_{xs}^T} (t){Q_i}{e_{xs}}(t)\,dt\), i.e. \({e_{xs}}(t) \in {L_2}\). Since \({e_{xs}}(t)\) and \({\dot{e}_{xs}}(t) \in {L_\infty }\), Barbalat's lemma gives \( {\lim }_{t \rightarrow \infty } {e_{xs}}(t) = 0\). Thus, we have \({\lim }_{t \rightarrow \infty } {e_{ys}}(t) = 0\).

(2) For \(s \ne l\), it follows from (2.28) and (2.30) that:

$$\begin{aligned} \begin{aligned} {{\dot{e}}_{xs}}(t)=&\sum \limits _{i = 1}^r {{h_i}(z(t))({A_i} - {L_i}{C_i}){e_{xs}}} (t)\,+ \\&\sum \limits _{i = 1}^r {{h_i}(z(t))[( - {b_{i,l}}{\rho _l}(t){u_l}(t) - {b_{i,s}}{\mu _s}{{\bar{\rho } }_s}|{u_s}(t)|)}\, + \\&\sum \limits _{j = 1}^p {({b_{i,l}}{a_{l,j}}(t) - {b_{i,s}}{\mu _s}{{\bar{a}}_{s,j}})} ] \\ \end{aligned} \end{aligned}$$

Because matrix \({B_i}\) is of full column rank (Assumption 2.1), the columns \({b_{i,s}}\) and \({b_{i,l}}\) are linearly independent. Therefore,

$$\begin{aligned} \begin{aligned}&\mathop {\lim }\limits _{t \rightarrow \infty } \sum \limits _{i = 1}^r {{h_i}(z(t))[( - {b_{i,l}}{\rho _l}(t){u_l}(t) - {b_{i,s}}{\mu _s}{{\bar{\rho } }_s}|{u_s}(t)|)} + \sum \limits _{j = 1}^p {({b_{i,l}}{a_{l,j}}(t) - {b_{i,s}}{\mu _s}{{\bar{a}}_{s,j}})} \ne 0 \\ \end{aligned} \end{aligned}$$
(2.37)

Thus, we have \( {\lim }_{t \rightarrow \infty } {e_{xs}}(t) \ne 0\) and \( {\lim }_{t \rightarrow \infty } {e_{ys}}(t) \ne 0\).

From (1) and (2), we obtain the conclusions. This ends the proof.

Now, we denote the residuals between the real system and SMOs as follows:

$$\begin{aligned} {J_s}(t) = \left\| {{e_{ys}}(t)} \right\| = \left\| {{{\hat{y}}_s}(t) - y(t)} \right\| {\text {,}}\;\;\;1{\text { }} \leqslant {\text { s}} \leqslant m \end{aligned}$$
(2.38)

According to Theorem 2.1, when the lth actuator is faulty, the residual of the matched observer, \({J_s}(t)\) with \(s=l\), tends to zero, while for any \(s \ne l\), \({J_s}(t)\) does not tend to zero. Furthermore, from Lemma 2.1, we have, if \(l = s\), then

$$\begin{aligned} {J_s}(t) \leqslant \sum \limits _{i = 1}^r {{h_i}(z(t))\sqrt{{\lambda _{\max }}(P)/{\lambda _{\min }}(P)} ||{e_{ys}}(0)||{e^{ - \kappa t/2}}} \end{aligned}$$
(2.39)

and if \(l \ne s\), then

$$\begin{aligned} {J_s}(t) > \sum \limits _{i = 1}^r {{h_i}(z(t))\sqrt{{\lambda _{\max }}(P)/{\lambda _{\min }}(P)} ||{e_{ys}}(0)||{e^{ - \kappa t/2}}} \end{aligned}$$
(2.40)

Hence, the isolation law for actuator fault can be designed as

$$\begin{aligned} \left\{ \begin{aligned}&{J_s}(t) \leqslant {T_I} \Rightarrow {\text {the }}s{\text {th actuator is faulty }}(s = l) \\&{J_s}(t) > {T_I} \Rightarrow {\text {the }}s{\text {th actuator is fault-free }}(s \ne l) \\ \end{aligned} \right. \end{aligned}$$
(2.41)

where threshold \({T_I}\) is defined as follows:

$${T_I} = \sum \limits _{i = 1}^r {{h_i}(z(t))\sqrt{{\lambda _{\max }}(P)/{\lambda _{\min }}(P)} ||{e_{ys}}(0)||{e^{ - \kappa t/2}}}.$$
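The isolation logic (2.41) can be summarized by the following sketch, which scans the m residuals of the SMO bank and declares faulty the actuator whose matched observer keeps its residual below \(T_I\); the threshold value is a design parameter, as discussed above.

```python
def isolate_faulty_actuator(residuals, T_I):
    """Isolation law (2.41): residuals[s-1] = J_s(t) for the m sliding mode observers.

    Returns the (1-based) index of the actuator declared faulty, or None when no
    unambiguous decision can be made yet.
    """
    below = [s for s, J in enumerate(residuals, start=1) if J <= T_I]
    return below[0] if len(below) == 1 else None

# example: with two actuators, only J_2 staying below the threshold isolates actuator 2
print(isolate_faulty_actuator([0.8, 0.02], T_I=0.1))   # -> 2
```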

Note that the denominator of \({\mu _s} = - \sum \nolimits _{i = 1}^r {{h_i}(z(t))} {F_{is}}{e_{ys}}(t)/||\sum \nolimits _{i = 1}^r {{h_i}(z(t))} {F_{is}}{e_{ys}}(t)||\) in (2.30) is \(||\sum \nolimits _{i = 1}^r {{h_i}(z(t))} {F_{is}}{e_{ys}}(t)||\). As pointed out in [52], a chattering phenomenon therefore occurs in practice when \({e_{ys}}(t) \rightarrow 0\). Inspired by [52], in order to reduce this chattering in practical applications, we modify the SMOs (2.30) by introducing a positive constant \(\delta \) as follows:

$$\begin{aligned} \left\{ \begin{aligned}&{\dot{\hat{x}}}_s(t) = \sum \limits _{i = 1}^r {{h_i}(z(t)){A_i}{{\hat{x}}_s}(t)} - \sum \limits _{i = 1}^r {{h_i}(z(t)){L_i}({{\hat{y}}_s}(t) - y(t))}\, + \\&~~~~~~~~~~~~\sum \limits _{i = 1}^r {{h_i}(z(t)){B_i}u} (t) + \sum \limits _{i = 1}^r {{h_i}(z(t)){b_{i,s}}{{\mu '}_s}[{{\bar{\rho } }_s}|{u_s}(t)| + \sum \limits _{j = 1}^p {{{\bar{a}}_{s,j}}} ]} \\&{{\hat{y}}_s}(t) = \sum \limits _{i = 1}^r {{h_i}(z(t)){C_i}{{\hat{x}}_s}(t)} \\&{{\mu '}_s} = - \sum \limits _{i = 1}^r {{h_i}} (z(t)){F_{is}}{e_{ys}}(t)/(||\sum \limits _{i = 1}^r {{h_i}} (z(t)){F_{is}}{e_{ys}}(t)|| + \delta ) \\ \end{aligned} \right. \end{aligned}$$
(2.42)

where \(\delta \in R\), \(\delta > 0\), is a constant and \(s = 1,2, \ldots ,m\). Obviously, the denominator of \({\mu '_s}\) converges to \(\delta \) as \({e_{ys}}(t) \rightarrow 0\), which reduces the chattering phenomenon.

From the above analysis, a suitable constant \(\delta \) must be selected such that \({J_s}~(s = l)\) becomes very small when the lth actuator is faulty, while the other residuals \({J_s}~(s \ne l)\) remain significantly different from zero. Thus, the modified SMOs not only reduce the chattering problem in practice, but also allow fault isolation to be carried out successfully.
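The smoothed switching term \(\mu '_s\) of (2.42) can be sketched as follows; here each product \(F_{is}{e_{ys}}(t)\) is treated as a scalar quantity, in line with the chapter's notation, and the inputs are placeholders supplied by the simulation loop.

```python
def smoothed_switching_term(h, F_e_list, delta):
    """Boundary-layer version mu'_s of the sliding-mode injection term in (2.42).

    h        : firing strengths h_i(z(t))
    F_e_list : the scalar products F_{is} e_{ys}(t), one per rule i
    delta    : small positive constant bounding the denominator away from zero
    """
    v = sum(hi * Fe for hi, Fe in zip(h, F_e_list))
    return -v / (abs(v) + delta)
```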

2.3.4 Fault Estimation

After fault isolation, the fault can be estimated. Assume that the sth (\(1 \leqslant s \leqslant m\)) actuator is faulty; then the faulty system can be described as:

$$\begin{aligned} \left\{ \begin{aligned}&\dot{x}(t) = \sum \limits _{i = 1}^r {{h_i}(z(t)){A_i}x(t)} + \sum \limits _{i = 1}^r {{h_i}(z(t)){B_i}u(t)} - \sum \limits _{i = 1}^r {{h_i}(z(t)){b_{i,s}}[{\rho _s}{u_s}(t) - \sum \limits _{j = 1}^p {{a_{s,j}}(t)} ]} \\&y(t) = \sum \limits _{i = 1}^r {{h_i}(z(t)){C_i}x(t)} \\ \end{aligned} \right. \end{aligned}$$
(2.43)

To estimate the fault, an observer is presented as follows:

$$\begin{aligned} \left\{ \begin{aligned}&\dot{ \hat{ x}}(t) = \sum \limits _{i = 1}^r {{h_i}(z(t)){A_i}\hat{x}(t)} + \sum \nolimits _{i = 1}^r {{h_i}(z(t)){B_i}u(t)} - \\&~~~~~~~~~~\sum \limits _{i = 1}^r {{h_i}(z(t)){b_{i,s}}[{{\hat{\rho } }_s}{u_s}(t) - \sum \limits _{j = 1}^p {{{\hat{a}}_{s,j}}} ]} + \sum \limits _{i = 1}^r {{h_i}(z(t)){L_i}(y(t) - \hat{y}(t))} \\&\hat{y}(t) = \sum \limits _{i = 1}^r {{h_i}(z(t)){C_i}\hat{x}(t)} \\ \end{aligned} \right. \end{aligned}$$
(2.44)

where \({\hat{\rho }_s},{\hat{a}_{s,j}}\) are the estimate values of \({\rho _s}(t),{a_{s,j}}(t)\) at time t.

Remark 2.3

Many results on observer design have been reported in the literature. For faulty systems with only a bias fault \({f_a}\), described as follows:

$$\left\{ \begin{aligned}&\dot{x}(t) = Ax(t) + B(u(t) + {f_a}) \\&\hat{y}(t) = Cx(t) \\ \end{aligned} \right. $$

an observer is classically designed in the following form of

$$\left\{ \begin{aligned}&\dot{ \hat{ x}}(t) = A\hat{x}(t) + B(u(t) + {{\hat{f}}_a}) + L(y(t) - \hat{y}(t)) \\&\hat{y}(t) = C\hat{x}(t) \\ \end{aligned} \right. $$

Let \({e_x}(t) = x(t) - \hat{x}(t)\), then the error dynamics is described by

$${\dot{e}_x}(t) = (A - LC){e_x}(t) + B({f_a} - {\hat{f}_a})$$

where \({\hat{f}_a}\) denotes the estimate of \({f_a}\). However, in this chapter, actuator bias faults and gain faults are both considered, so the above observer does not work. The novel observer (2.44) is therefore proposed in order to estimate the two kinds of faults.

Using (2.43) and (2.44), the error dynamics is obtained:

$$\begin{aligned} \begin{aligned} {{\dot{e}}_x}(t)&= \sum \limits _{i = 1}^r {{h_i}(z(t))({A_i} - {L_i}{C_i}){e_x}(t)} - \sum \limits _{i = 1}^r {{h_i}(z(t)){b_{i,s}}[{{\tilde{\rho } }_s}{u_s} - \sum \limits _{j = 1}^p {{{\tilde{a}}_{s,j}}} ]} \\ \end{aligned} \end{aligned}$$
(2.45)

where \({e_x}(t) = x(t) - \hat{x}(t),{\tilde{\rho } _s} = {\rho _s}(t) - {\hat{\rho }_s},{\tilde{a}_{s,j}} = {a_{s,j}}(t) - {\hat{a}_{s,j}}\).

Now, an adaptive fault diagnostic algorithm is proposed to estimate the actuator fault. The stability of the error dynamics is guaranteed by the following theorem.

Theorem 2.2

Under Assumptions 2.1–2.3, if there exist a common symmetric positive definite matrix P, real matrices \({L_i}\) and \({Q_i} > 0\), \(i = 1,2, \ldots ,r\), with appropriate dimensions, such that the following conditions hold,

$$\begin{aligned} P({A_i} - {L_i}{C_i}) + {({A_i} - {L_i}{C_i})^T}P < - {Q_i} \end{aligned}$$
(2.46)
$$\begin{aligned} P{B_i} = {({F_i}{C_i})^T} \end{aligned}$$
(2.47)
$$\begin{aligned} {{\dot{ \hat{ \rho }} }_i} = \left\{ \begin{aligned}&0,{\text { }}{{\hat{\rho } }_i} = {{\bar{\rho } }_1}{\text { and}} - 2{\eta _1}{F_{i,s}}{e_y} > 0~{\text {or}}~{{\hat{\rho } }_i} = - {{\bar{\rho } }_1}{\text { and}} - 2{\eta _1}{F_{i,s}}{e_y} < 0 \\&- 2{\eta _1}{F_{i,s}}{e_y}{u_s},{\text { otherwise }} \\ \end{aligned} \right. \end{aligned}$$
(2.48)
$$\begin{aligned} {{\dot{ \hat{ a}}}_{i,j}} = \left\{ \begin{aligned}&0,~~{{\hat{a}}_{i,j}}> {{\bar{a}}_1}{\text { and }}2{\eta _2}{F_{i,s}}{e_y} > 0~or~{{\hat{a}}_{i,j}}< - {{\bar{a}}_1}{\text { and }}2{\eta _2}{F_{i,s}}{e_y} < 0 \\&2{\eta _2}{F_{i,s}}{e_y},{\text {otherwise}} \\ \end{aligned} \right. \end{aligned}$$
(2.49)

where \(i = 1, \ldots ,m,{\text { }}j = 1, \ldots ,p\), \({F_{is}} \in {R^{1 \times n}}\) is the sth row of \({F_i} \in {R^{m \times n}}\), and \({\eta _1}> 0,{\eta _2} > 0\) denote the adaptive rates, then the error system (2.45) is asymptotically stable. Moreover, \({e_x}(t)\), \({\tilde{\rho }_s}\) and \({\tilde{a}_{s,j}}\) are semi-globally uniformly ultimately bounded and converge to a small neighborhood of zero, namely, \(|{e_x}| \leqslant \sqrt{\alpha /{\lambda _{\min }}(P)} \), \(|{\tilde{\rho }_i}| \leqslant \sqrt{2{\eta _1}\alpha } \), and \(|{\tilde{a}_{i,j}}| \leqslant \sqrt{2{\eta _2}\alpha } \), where

$${\mu _0} = \sum \limits _{l = 1}^r {{h_l}(z(t))(\frac{{2{{\bar{\rho } }_1}(2{{\bar{\rho } }_1} + {{\bar{\rho } }_2})}}{{{\eta _1}}} + \sum \limits _{j = 1}^p {\frac{{2{{\bar{a}}_1}(2{{\bar{a}}_1} + {{\bar{a}}_2})}}{{{\eta _2}}}} )},$$
$${\lambda _0}= \min \{ \frac{{{\lambda _{\min }}({Q_1})}}{{{\lambda _{\max }}(P)}}, \ldots ,\frac{{{\lambda _{\min }}({Q_r})}}{{{\lambda _{\max }}(P)}},1\}$$

and \(\alpha = \) \({\mu _0}/{\lambda _0} + V(0)\).

Proof

Define the following smooth function

$$\begin{aligned} V = {V_1} + {V_2} + {V_3} \end{aligned}$$
(2.50)
$$\begin{aligned} {V_1} = e_x^T(t)P{e_x}(t) \end{aligned}$$
(2.51)
$$\begin{aligned} {V_2} = \sum \limits _{i = 1}^r {{h_i}(z(t))(\frac{1}{{2{\eta _1}}}\tilde{\rho }_s^2(t))} \end{aligned}$$
(2.52)
$$\begin{aligned} {V_3} = \sum \limits _{i = 1}^r {\sum \limits _{j = 1}^p {{h_i}(z(t))(\frac{1}{{2{\eta _2}}}\tilde{a}_{s,j}^2(t))} } \end{aligned}$$
(2.53)

Differentiating \(V,~{V_i},i = 1,2,3\) with respect to time t, leads to

$$\begin{aligned} \dot{V} = \dot{V}_1 + \dot{V}_2 + \dot{V}_3 \end{aligned}$$
(2.54)
$$\begin{aligned} \begin{aligned} {{\dot{V}}_1} =&\sum \limits _{i = 1}^r {{h_i}(z(t))[e_x^T(t)(P({A_i} - {L_i}{C_i}) + {{({A_i} - {L_i}{C_i})}^T}P){e_x}(t)]} - \\&\sum \limits _{i = 1}^r {{h_i}(z(t))[2e_x^T(t)P{b_{i,s}}{{\tilde{\rho }}_s}{u_s} - \sum \limits _{j = 1}^p {2e_x^T(t)P{b_{i,s}}{{\tilde{a}}_{s,j}}} ]} \\ \end{aligned} \end{aligned}$$
(2.55)
$$\begin{aligned} \begin{aligned} {{\dot{V}}_2}&=\sum \limits _{i = 1}^r {{h_i}(z(t))(\frac{1}{{{\eta _1}}}{{\tilde{\rho }}_s}{{\dot{\tilde{\rho }} }_s})} = \sum \limits _{i = 1}^r {{h_i}(z(t))(\frac{1}{{{\eta _1}}}{{\tilde{\rho }}_s}({{\dot{\rho } }_s} - {{\dot{\hat{\rho }} }_s}))} \\&= \sum \limits _{i = 1}^r {{h_i}(z(t))\frac{1}{{{\eta _1}}}{{\tilde{\rho }}_s}{{\dot{\rho } }_s}} - \sum \limits _{i = 1}^r {{h_i}(z(t))\frac{1}{{{\eta _1}}}{{\tilde{\rho }}_s}{{\dot{\hat{\rho }} }_s}} \\ \end{aligned} \end{aligned}$$
(2.56)
$$\begin{aligned} \begin{aligned} {{\dot{V}}_3}&= \sum \limits _{i = 1}^r {\sum \limits _{j = 1}^p {{h_i}(z(t))\frac{{{{\tilde{a}}_{s,j}}{{\dot{\tilde{a}}}_{s,j}}}}{{{\eta _2}}}} } = \sum \limits _{i = 1}^r {\sum \limits _{j = 1}^p {{h_i}(z(t))\frac{{{{\tilde{a}}_{s,j}}({{\dot{a}}_{s,j}} - {{\dot{\hat{a}}}_{s,j}})}}{{{\eta _2}}}} } \\&= \sum \limits _{i = 1}^r {\sum \limits _{j = 1}^p {{h_i}(z(t))\frac{{{{\tilde{a}}_{s,j}}{{\dot{a}}_{s,j}}}}{{{\eta _2}}}} } - \sum \limits _{i = 1}^r {\sum \limits _{j = 1}^p {{h_i}(z(t))\frac{{{{\tilde{a}}_{s,j}}{{\dot{\hat{a}}}_{s,j}}}}{{{\eta _2}}}} } \\ \end{aligned} \end{aligned}$$
(2.57)

Substituting (2.55)–(2.57) into (2.54) yields

$$\begin{aligned} \begin{aligned} \dot{V} =&- \sum \limits _{i = 1}^r {{h_i}(z(t))e_x^T{Q_i}{e_x}} + \sum \limits _{i = 1}^r {{h_i}(z(t))\frac{1}{{{\eta _1}}}{{\tilde{\rho }}_s}{{\dot{\rho } }_s}} + \sum \limits _{i = 1}^r {\sum \limits _{j = 1}^p {{h_i}(z(t))\frac{1}{{{\eta _2}}}{{\tilde{a}}_{s,j}}{{\dot{a}}_{s,j}}} } - \\&\sum \limits _{i = 1}^r {{h_i}(z(t)){{\tilde{\rho }}_s}(2e_x^TP{b_{i,s}}{u_s} + \frac{1}{{{\eta _1}}}{{\dot{\hat{\rho }} }_s}} ){\text { + }} \sum \limits _{i = 1}^r {\sum \limits _{j = 1}^p {{h_i}(z(t)){{\tilde{a}}_{s,j}}(2e_x^TP{b_{i,s}} - \frac{1}{{{\eta _2}}}{{\dot{\hat{a}}}_{s,j}})} } \\ \end{aligned} \end{aligned}$$
(2.58)

Substituting (2.48) and (2.49) into (2.58) yields

$$\begin{aligned} \begin{aligned} \dot{V} =&- \sum \limits _{i = 1}^r {{h_i}(z(t))e_x^T{Q_i}{e_x}} + \sum \limits _{i = 1}^r {{h_i}(z(t))\frac{1}{{{\eta _1}}}{{\tilde{\rho }}_s}{{\dot{\rho } }_s}} + \sum \limits _{i = 1}^r {\sum \limits _{j = 1}^p {{h_i}(z(t))\frac{1}{{{\eta _2}}}{{\tilde{a}}_{s,j}}{{\dot{a}}_{s,j}}} } \\ \end{aligned} \end{aligned}$$
(2.59)

Since

$$\begin{aligned} \begin{aligned} \frac{{{{\tilde{\rho } }_i}{{\dot{\rho } }_i}}}{{{\eta _1}}}&= - \frac{{{{\tilde{\rho } }^2}_i}}{{{\eta _1}}} + \frac{{{{\tilde{\rho } }_i}({{\tilde{\rho } }_i} + {{\dot{\rho } }_i})}}{{{\eta _1}}} = - \frac{{{{\tilde{\rho } }^2}_i}}{{{\eta _1}}} + \frac{{({\rho _i} - {{\hat{\rho } }_i})({\rho _i} - {{\hat{\rho } }_i} + {{\dot{\rho } }_i})}}{{{\eta _1}}} \\&\leqslant - \frac{{{{\tilde{\rho } }^2}_i}}{{{\eta _1}}} + \frac{{(|{\rho _i}| + |{{\hat{\rho } }_i}|)(|{\rho _i}| + |{{\hat{\rho } }_i}| + |{{\dot{\rho } }_i}|)}}{{{\eta _1}}} \\ \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \sum \limits _{j = 1}^p {\frac{{{{\tilde{a}}_{i,j}}{{\dot{a}}_{i,j}}}}{{{\eta _2}}}}&= - \sum \limits _{j = 1}^p {\frac{{{{\tilde{a}}^2}_{i,j}}}{{{\eta _2}}}} + \sum \limits _{j = 1}^p {\frac{{{{\tilde{a}}_{i,j}}({{\tilde{a}}_{i,j}} + {{\dot{a}}_{i,j}})}}{{{\eta _2}}}} \\&\leqslant - \sum \limits _{j = 1}^p {\frac{{{{\tilde{a}}^2}_{i,j}}}{{{\eta _2}}}} + \sum \limits _{j = 1}^p {\frac{{(|{a_{i,j}}| + |{{\hat{a}}_{i,j}}|)(|{a_{i,j}}| + |{{\hat{a}}_{i,j}}| + |{{\dot{a}}_{i,j}}|)}}{{{\eta _2}}}} \\ \end{aligned} \end{aligned}$$

and \(|{\hat{\rho } _i}(t)| \leqslant {\bar{\rho } _1}\) and \(|{\hat{a}_{i,j}}(t)| \leqslant {\bar{a}_1}\), which is guaranteed by the adaptive laws (2.48) and (2.49), and since Assumptions 2.2 and 2.3 (i.e., \(|{\rho _i}(t)| \leqslant {\bar{\rho } _1}\), \(|{\dot{\rho } _i}(t)| \leqslant {\bar{\rho } _2}\), \(|{a_{i,j}}(t)| \leqslant {\bar{a}_1}\), and \(|{\dot{a}_{i,j}}(t)| \leqslant {\bar{a}_2}\)) are satisfied, one has

$$\frac{{{{\tilde{\rho } }_i}{{\dot{\rho } }_i}}}{{{\eta _1}}} \leqslant - \frac{{{{\tilde{\rho } }^2}_i}}{{{\eta _1}}} + \frac{{2{{\bar{\rho } }_1}(2{{\bar{\rho } }_1} + {{\bar{\rho } }_2})}}{{{\eta _1}}}$$
$$\sum \limits _{j = 1}^p {\frac{{{{\tilde{a}}_{i,j}}{{\dot{a}}_{i,j}}}}{{{\eta _2}}}} \leqslant - \sum \limits _{j = 1}^p {\frac{{{{\tilde{a}}^2}_{i,j}}}{{{\eta _2}}}} + \sum \limits _{j = 1}^p {\frac{{2{{\bar{a}}_1}(2{{\bar{a}}_1} + {{\bar{a}}_2})}}{{{\eta _2}}}} $$

Hence, from (2.59), one has

$$\begin{aligned} \begin{aligned} \dot{V}&\leqslant \sum \limits _{l = 1}^r {{h_l}(z(t))[ - e_x^T{Q_i}{e_x} - \frac{{{{\tilde{\rho } }^2}_i}}{{{\eta _1}}} - \sum \limits _{j = 1}^p {\frac{{{{\tilde{a}}^2}_{i,j}}}{{{\eta _2}}}} + } \frac{{2{{\bar{\rho } }_1}(2{{\bar{\rho } }_1} + {{\bar{\rho } }_2})}}{{{\eta _1}}} + \sum \limits _{j = 1}^p {\frac{{2{{\bar{a}}_1}(2{{\bar{a}}_1} + {{\bar{a}}_2})}}{{{\eta _2}}}} ] \\&\leqslant \sum \limits _{l = 1}^r {{h_l}(z(t))[ - e_x^T{Q_i}{e_x} - \frac{{{{\tilde{\rho } }^2}_i}}{{{\eta _1}}} - \sum \limits _{j = 1}^p {\frac{{{{\tilde{a}}^2}_{i,j}}}{{{\eta _2}}}} + \mu ]} \\&\leqslant \sum \limits _{l = 1}^r {{h_l}(z(t))[ - {\lambda _{\min }}({Q_i})e_x^T{e_x} - \frac{{{{\tilde{\rho } }^2}_i}}{{2{\eta _1}}} - \sum \limits _{j = 1}^p {\frac{{{{\tilde{a}}^2}_{i,j}}}{{2{\eta _2}}}} + \mu ]} \\&\leqslant \sum \limits _{l = 1}^r {{h_l}(z(t))[ - \frac{{{\lambda _{\min }}({Q_i})}}{{{\lambda _{\max }}(P)}}e_x^TP{e_x} - \frac{{{{\tilde{\rho } }^2}_i}}{{2{\eta _1}}} - \sum \limits _{j = 1}^p {\frac{{{{\tilde{a}}^2}_{i,j}}}{{2{\eta _2}}}} + \mu ]} \\&\leqslant - {\lambda _0}V(t) + {\mu _0} \\ \end{aligned} \end{aligned}$$
(2.60)

where

$$\mu = \frac{{2{{\bar{\rho } }_1}(2{{\bar{\rho } }_1} + {{\bar{\rho } }_2})}}{{{\eta _1}}} + \sum \limits _{j = 1}^p {\frac{{2{{\bar{a}}_1}(2{{\bar{a}}_1} + {{\bar{a}}_2})}}{{{\eta _2}}}} ,$$
$${\mu _0} = \sum \limits _{l = 1}^r {{h_l}(z(t))(\frac{{2{{\bar{\rho } }_1}(2{{\bar{\rho } }_1} + {{\bar{\rho } }_2})}}{{{\eta _1}}} + \sum \limits _{j = 1}^p {\frac{{2{{\bar{a}}_1}(2{{\bar{a}}_1} + {{\bar{a}}_2})}}{{{\eta _2}}}} )},$$

\({\lambda _0} = \min \{ \frac{{{\lambda _{\min }}({Q_1})}}{{{\lambda _{\max }}(P)}},\) \(\frac{{{\lambda _{\min }}({Q_2})}}{{{\lambda _{\max }}(P)}}, \ldots ,\frac{{{\lambda _{\min }}({Q_r})}}{{{\lambda _{\max }}(P)}},1\} \). Then, one has \(\frac{d}{{dt}}(V(t){e^{{\lambda _0}t}}) \leqslant {e^{{\lambda _0}t}}{\mu _0}\). Furthermore, \(0 \leqslant V(t) \leqslant \frac{{{\mu _0}}}{{{\lambda _0}}} + [V(0) - \frac{{{\mu _0}}}{{{\lambda _0}}}]{e^{ - {\lambda _0}t}} \leqslant \frac{{{\mu _0}}}{{{\lambda _0}}} + V(0)\). Letting \(\alpha = \frac{{{\mu _0}}}{{{\lambda _0}}} + V(0)\), one has \(|{e_x}| \leqslant \sqrt{\frac{\alpha }{{{\lambda _{\min }}(P)}}} \), \(|{\tilde{\rho } _i}| \leqslant \) \(\sqrt{2{\eta _1}\alpha } \), and \(|{\tilde{a}_{i,j}}| \leqslant \sqrt{2{\eta _2}\alpha } \). This ends the proof.
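For implementation, the projection-type adaptive laws (2.48)-(2.49) can be discretized, for example with an explicit Euler step, as in the sketch below; here the freeze test is applied to the full update term (a common variant of the projection written in (2.48)), and the step size dt, the quantity F_e_y and the input u_s are placeholders supplied by the simulation loop.

```python
def update_rho_hat(rho_hat, F_e_y, u_s, eta1, rho_bar1, dt):
    """Euler-discretized projection law for the gain-fault estimate (cf. (2.48))."""
    grad = -2.0 * eta1 * F_e_y * u_s
    at_upper = rho_hat >= rho_bar1 and grad > 0.0     # update would leave the upper bound
    at_lower = rho_hat <= -rho_bar1 and grad < 0.0    # update would leave the lower bound
    return rho_hat if (at_upper or at_lower) else rho_hat + dt * grad

def update_a_hat(a_hat, F_e_y, eta2, a_bar1, dt):
    """Euler-discretized projection law for a bias-fault estimate (cf. (2.49))."""
    grad = 2.0 * eta2 * F_e_y
    at_upper = a_hat >= a_bar1 and grad > 0.0
    at_lower = a_hat <= -a_bar1 and grad < 0.0
    return a_hat if (at_upper or at_lower) else a_hat + dt * grad
```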

Remark 2.4

If there exist two known constants \({f_{\min }},{f_{\max }}\) such that \({f_{\min }} \leqslant |f(t)| \leqslant {f_{\max }}\), then the fault f(t) can be approximated by the following form

$$\begin{aligned} f(t) = \frac{1}{2}({f_{\max }} - {f_{\min }})(1 - \tanh \zeta ) + {f_{\min }} \end{aligned}$$
(2.61)

where \(\zeta \) is an unknown constant. Thus, the fault f(t) is estimated through the estimation of \(\hat{\zeta }\), namely

$$\begin{aligned} \hat{f}(t) = \frac{1}{2}({f_{\max }} - {f_{\min }})(1 - \tanh \hat{\zeta } ) + {f_{\min }} \end{aligned}$$
(2.62)

This method prevents the phenomenon of parameter drift in the presence of bounded disturbances because \(|\tanh \hat{\zeta } | < 1\), and it ensures \({f_{\min }} \leqslant |\hat{f}(t)| \leqslant {f_{\max }}\).
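A one-line sketch of the bounded parameterization (2.61)-(2.62) is given below; the argument zeta_hat would be produced by an adaptive law and is simply an input here.

```python
import numpy as np

def bounded_fault_estimate(zeta_hat, f_min, f_max):
    """Parameterization (2.62): the estimate remains within (f_min, f_max) since |tanh| < 1,
    which prevents parameter drift under bounded disturbances."""
    return 0.5 * (f_max - f_min) * (1.0 - np.tanh(zeta_hat)) + f_min
```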

2.3.5 Fault Accommodation

Once the fault information is obtained, we consider the fault-tolerant control problem for system (2.2) and design a fault-tolerant control law that recovers the dynamic performance of the control system when an actuator fault occurs. Firstly, we consider the fuzzy control problem for the following nominal system without actuator faults:

$$\begin{aligned} \left\{ \begin{aligned}&\dot{x}(t) = \sum \limits _{i = 1}^r {{h_i}} (z(t))({A_i}x(t) + {B_i}u(t)) \\&y(t) = \sum \limits _{i = 1}^r {{h_i}} (z(t)){C_i}x(t) \\ \end{aligned} \right. \end{aligned}$$

The parallel distributed compensation technique offers a procedure to design a fuzzy control law from a given T-S fuzzy model. In the PDC design, each control rule is designed from the corresponding rule of T-S fuzzy model. The designed fuzzy controller has the same fuzzy sets as the considered fuzzy system.

\(Control\,Rule\,i\): IF \({z_1}(t)\) is \({M_{i1}}\) and \( \ldots {z_q}(t)\) is \({M_{iq}}\), THEN

$${u_i}(t) = {K_i}x(t)$$

and the overall fuzzy controller is given as follows:

$$u(t) = \sum \limits _{i = 1}^r {{h_i}(z(t)){K_i}} x(t)$$

where the controller gain matrix \({K_i}\) is determined by solving the following matrix inequality (which can be transformed into an LMI through the standard change of variables \(X = {P^{ - 1}}\), \({M_i} = {K_i}X\)):

$$\begin{aligned} P({A_i} + {B_i}{K_i}) + {({A_i} + {B_i}{K_i})^T}P < - {Q_i} \end{aligned}$$
(2.63)

where \(P = {P^T} > 0\) and \({Q_i} > 0\) are matrices with appropriate dimensions.

On the basis of the estimated actuator fault, the fault tolerant controller is constructed as

$$\begin{aligned} {u_s} = \frac{{(u_s^N - \sum \nolimits _{j = 1}^{p} {{{\hat{a}}_{s,j}}} )}}{{(1 - {{\hat{\rho } }_s})}} \end{aligned}$$
(2.64)

where \(u_s^N\) is the sth nominal control input, and \({\hat{\rho } _s},{\hat{a}_{s,j}}\) are the estimates of \({\rho _s},{a_{s,j}}\), which are used to compensate for the gain fault and the bias fault, respectively.
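A sketch of the accommodation law (2.64) is given below: the nominal command is de-biased and rescaled so that the effective actuator output matches the nominal one. The guard eps on the denominator is an implementation safeguard added here, not part of (2.64).

```python
import numpy as np

def accommodated_input(u_nominal_s, rho_hat_s, a_hat_s, eps=1e-3):
    """Fault-tolerant control law (2.64) for the s-th (faulty) actuator.

    u_nominal_s : nominal control input u_s^N
    rho_hat_s   : estimated loss of effectiveness
    a_hat_s     : iterable of estimated bias components a_hat_{s,j}
    eps         : safeguard against division by a vanishing (1 - rho_hat_s)  [added here]
    """
    denom = max(1.0 - rho_hat_s, eps)
    return (u_nominal_s - float(np.sum(list(a_hat_s)))) / denom
```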

Theorem 2.3

Consider system (2.2) under Assumptions 2.1–2.3. If there exist a common symmetric positive definite matrix P, real matrices \({L_i}\) and \({Q_i} > 0\), \(i = 1,2, \ldots ,r\), with appropriate dimensions, such that the following conditions hold

$$\begin{aligned} P({A_i} - {L_i}{C_i}) + {({A_i} - {L_i}{C_i})^T}P < - {Q_i} \end{aligned}$$
(2.65)
$$\begin{aligned} P{B_i} = {({F_i}{C_i})^T} \end{aligned}$$
(2.66)
$$\begin{aligned} \dot{\hat{\rho }} _i = \left\{ \begin{aligned}&0,~~~~{{\hat{\rho } }_i} = {{\bar{\rho } }_1}{\text { and }} - 2{\eta _1}{F_{i,s}}{e_y} > 0 ~or{\text { }}{{\hat{\rho } }_i} = - {{\bar{\rho } }_1}{\text { and }} - 2{\eta _1}{F_{i,s}}{e_y} < 0 \\&- 2{\eta _1}{F_{i,s}}{e_y}{u_s},{\text { otherwise}} \\ \end{aligned} \right. \ \end{aligned}$$
(2.67)
$$\begin{aligned} \dot{\hat{a}}_{i,j} = \left\{ \begin{aligned}&0,{\text { }}{{\hat{a}}_{i,j}}> {{\bar{a}}_1}{\text { and }}2{\eta _2}{F_{i,s}}{e_y} > 0{\text { }} ~or~{{\hat{a}}_{i,j}}< - {{\bar{a}}_1}{\text { and }}2{\eta _2}{F_{i,s}}{e_y} < 0 \\&2{\eta _2}{F_{i,s}}{e_y},{\text {otherwise}} \\ \end{aligned} \right. \end{aligned}$$
(2.68)

where \(i = 1, \ldots ,m,~j = 1, \ldots ,p\), then system (2.2) is asymptotically stable under the fault-tolerant feedback control (2.64), and all signals involved in the closed-loop system are semi-globally uniformly ultimately bounded, converging to a small neighborhood of zero, namely,

$$|e| \leqslant \sqrt{\alpha /{\lambda _{\min }}(P)} ,|{\tilde{\rho } _i}| \leqslant \sqrt{2{\eta _1}\alpha },~~ |{\tilde{a}_{i,j}}| \leqslant \sqrt{2{\eta _2}\alpha } ,$$

where \({\lambda _0} = \min \{ \frac{{{\lambda _{\min }}({Q_1})}}{{{\lambda _{\max }}(P)}}, \ldots ,\frac{{{\lambda _{\min }}({Q_r})}}{{{\lambda _{\max }}(P)}},1\} ,{\mu _0} \) \( = \sum \limits _l^r {{h_l}(z(t))} [\frac{{2{{\bar{\rho } }_1}(2{{\bar{\rho } }_1} + {{\bar{\rho } }_2})}}{{{\eta _1}}} + \sum \limits _{j = 1}^p {\frac{{2{{\bar{a}}_1}(2{{\bar{a}}_1} + {{\bar{a}}_2})}}{{{\eta _2}}}} ]\), \(\alpha = V(0) + {\mu _0}/{\lambda _0}\).

Proof

Similar to the proof of Theorem 2.2, it is easy to obtain the conclusions of Theorem 2.3. The detailed proof is thus omitted here.

2.4 Simulation Results

2.4.1 NSHV Modeling and Analysis

A mathematical model of the longitudinal dynamics of a generic NSHV, developed at the NASA Langley Research Center, is presented in [53]. The longitudinal dynamics of the NSHV can be described by a set of differential equations involving the velocity V, flight-path angle \(\gamma \), altitude h, angle of attack \(\alpha \) and pitch rate q as

$$\begin{aligned} \dot{V} = \frac{{T\cos \alpha - D}}{m} - \frac{{\mu \sin \gamma }}{{{r^2}}} \end{aligned}$$
(2.69)
$$\begin{aligned} \dot{\gamma } = \frac{{L + T\sin \alpha }}{{mV}} - \frac{{(\mu - {V^2}r)\cos \gamma }}{{V{r^2}}} \end{aligned}$$
(2.70)
$$\begin{aligned} \dot{h} = V\sin \gamma \end{aligned}$$
(2.71)
$$\begin{aligned} \dot{\alpha } = q - \dot{\gamma } \end{aligned}$$
(2.72)
$$\begin{aligned} \dot{q} = \frac{{{M_{yy}}}}{{{I_{yy}}}} \end{aligned}$$
(2.73)

where \(L = {\bar{q}}S{C_L}\), \(D = {\bar{q}}S{C_D}\), \(T = {\bar{q}}S{C_T}\), \(r = h + {R_e}\), \({M_{yy}} = {\bar{q}}S{\bar{c}}[{C_M}(\alpha ) + {C_M}({\delta _e}) + {C_M}(q)]\), \({C_L} = 0.6203\alpha \), \({C_D} = 0.6450{\alpha ^2} + 0.0043378\alpha + 0.003772\), \({C_M}({\delta _e}) = {c_e}({\delta _e} - \alpha )\), \({C_M}(q) = ({\bar{c}}/2V)q( - 6.796{\alpha ^2} + 0.3015\alpha - 0.2289)\), \({C_M}(\alpha ) = - 0.035{\alpha ^2} + 0.036617(1 + \varDelta {C_{M\alpha }})\alpha + 5.3261 \times 10^{-6}\), and

$${C_T} = \left\{ \begin{aligned}&0.02576{\delta _T}, \quad {\text {when }}{\delta _T} < 1 \\&0.0224 + 0.00336{\delta _T}, \quad {\text {when }}{\delta _T} > 1 \\ \end{aligned} \right. .$$

The parameters are the aircraft mass m, the gravitational constant \(\mu \), the moment of inertia \({I_{yy}}\) and the pitch moment coefficients. The aerodynamic coefficients and inertia data are coupled with the state variables and control inputs. The control input vector is \(u(t) = {[{\delta _e},{\delta _T}]^T}\), where \({\delta _e}\) is the elevator deflection and \({\delta _T}\) is the throttle setting. The longitudinal model of the NSHV described by (2.69)–(2.73) can be written in the following affine nonlinear form:

$$\begin{aligned} \left\{ \begin{aligned}&\dot{x}(t) = f(x) + g(x)u(t) \\&y(t) = Cx(t) \\ \end{aligned} \right. \end{aligned}$$
(2.74)

where \(x(t) = {[V,\gamma ,h,\alpha ,q]^T} \in {R^n}\) denotes the state vector, \(u(t) = {[{\delta _e},{\delta _T}]^T} \in {R^m}\) denotes the control input vector, and y(t) is the output vector.
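For simulation purposes, the longitudinal dynamics (2.69)–(2.73) with the coefficient expressions listed above can be sketched as follows; the mass, inertia, reference area and length, Earth radius, gravitational constant, elevator coefficient \(c_e\), dynamic pressure \(\bar{q}\) and the uncertainty \(\varDelta C_{M\alpha }\) are not specified numerically in this section, so they are left as user-supplied parameters in this illustrative sketch.

```python
import numpy as np

def nshv_longitudinal_dynamics(x, u, p):
    """Right-hand side of (2.69)-(2.73); x = [V, gamma, h, alpha, q], u = [delta_e, delta_T].

    p is a dict with user-supplied keys m, Iyy, S, c_bar, Re, mu, c_e,
    q_bar (dynamic pressure) and dCMa (the uncertainty Delta C_M_alpha).
    """
    V, gamma, h, alpha, q = x
    delta_e, delta_T = u
    r = h + p["Re"]

    CL = 0.6203 * alpha
    CD = 0.6450 * alpha**2 + 0.0043378 * alpha + 0.003772
    CT = 0.02576 * delta_T if delta_T < 1 else 0.0224 + 0.00336 * delta_T
    CM_alpha = -0.035 * alpha**2 + 0.036617 * (1 + p["dCMa"]) * alpha + 5.3261e-6
    CM_de = p["c_e"] * (delta_e - alpha)
    CM_q = (p["c_bar"] / (2 * V)) * q * (-6.796 * alpha**2 + 0.3015 * alpha - 0.2289)

    L = p["q_bar"] * p["S"] * CL
    D = p["q_bar"] * p["S"] * CD
    T = p["q_bar"] * p["S"] * CT
    Myy = p["q_bar"] * p["S"] * p["c_bar"] * (CM_alpha + CM_de + CM_q)

    V_dot = (T * np.cos(alpha) - D) / p["m"] - p["mu"] * np.sin(gamma) / r**2
    gamma_dot = (L + T * np.sin(alpha)) / (p["m"] * V) - (p["mu"] - V**2 * r) * np.cos(gamma) / (V * r**2)
    h_dot = V * np.sin(gamma)
    alpha_dot = q - gamma_dot
    q_dot = Myy / p["Iyy"]
    return np.array([V_dot, gamma_dot, h_dot, alpha_dot, q_dot])
```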

In this section, some simulation results are presented to demonstrate the effectiveness of the proposed techniques. For the purpose of this study, the aerodynamic coefficients are simplified around the cruising flight mode. The nominal flight of the NSHV is at the trimmed cruise condition \(Mach = 15\), \(V = 15060\) ft/s and \(h = 110000\) ft.

If each state variable were selected as a premise variable, the number of fuzzy rules would become too large. However, from the properties of the NSHV, we know that the angle of attack \(\alpha \) is a key variable affecting the nonlinear character of the NSHV, that the velocity V is closely related to the altitude h, and that the pitch angle is \(\theta = \alpha + \gamma \). Similar to [53], we select \({\bar{x}} = {[V,\theta ,q]^T}\) as a new state vector. As a result, we denote \({z_1} = V\), \({z_2} = \alpha + \gamma \), \({z_3} = q\), and select \({z_1}\), \({z_2}\) and \({z_3}\) as premise variables for the T-S fuzzy model. This choice not only reduces the number of fuzzy rules but also approximates the nonlinear system well and characterizes the NSHV model [7]. Furthermore, we assume

$${z_1} \in (6000,~16000)~\mathrm{m/s}, \quad {z_2} \in ( - 0.5,~0.5)~\mathrm{rad}, \quad {z_3} \in ( - 0.5,~0.5)~\mathrm{rad/s}.$$

Suppose that each premise variable has two associated fuzzy sets:

$$\{ {z_1} = 6000,16000\};~\{ {z_2} = - 0.5,0.5\} ;~\{ {z_3} = - 0.5,0.5\}$$

The corresponding fuzzy membership functions are defined as

$${M_{{z_1} = 6000}} = \exp [ - {({z_1}/{\varsigma _1})^2}],{M_{{z_1} = 16000}} = 1 - {M_{{z_1} = 6000}}$$
$${M_{{z_2} = - 0.5}} = \frac{1}{{1 + \exp [({{({z_2})}^2} - \sigma )/{\varsigma _2}]}},~ {M_{{z_2} = 0.5}} = 1 - {M_{{z_2} = - 0.5}}$$
$${M_{{z_3} = - 0.5}} = \exp [ - (\frac{{{z_3}}}{{{\varsigma _3}}} - {\bar{\sigma }} )],~{M_{{z_3} = 0.5}} = 1 - {M_{{z_3} = - 0.5}}$$

where the design parameters \(\sigma ,{\bar{\sigma }} ,{\varsigma _1},{\varsigma _2},{\varsigma _3}\) are selected so that the fuzzy sets symmetrically cover the space of the premise variables.

We choose eight working points of NSHV as follows:

$$\begin{aligned}{}[{z_1},{z_2},{z_3}{]^T} =:\left\{ \begin{aligned}&[6000,-0.5,0.5],[6000,0.5,0.5],[6000,-0.5,-0.5]\\&[6000,0.5,-0.5],[16000,-0.5,0.5],[16000,0.5,0.5]\\&[16000,0.5,-0.5],[16000,-0.5,-0.5]\\ \end{aligned}\right. \end{aligned}$$

The parameters of the membership functions are selected as: \(\sigma = 0.15\), \(\bar{\sigma } = 4\), \({\varsigma _1} = 3200\), \({\varsigma _2} = 0.05\), \({\varsigma _3} = 0.4\).
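With the parameter values just given, the membership functions above can be evaluated as in the following sketch; the functions are reproduced as printed, so any clipping or renormalization needed to keep the grades in [0, 1] is left to the implementation.

```python
import numpy as np

# parameter values given in the text
sigma, sigma_bar = 0.15, 4.0
vs1, vs2, vs3 = 3200.0, 0.05, 0.4   # varsigma_1, varsigma_2, varsigma_3

def membership_grades(z1, z2, z3):
    """Grades of the two fuzzy sets attached to each premise variable (Sect. 2.4.1)."""
    m1_low = np.exp(-((z1 / vs1) ** 2))                       # z1 "about 6000"
    m2_low = 1.0 / (1.0 + np.exp((z2 ** 2 - sigma) / vs2))    # z2 "about -0.5"
    m3_low = np.exp(-(z3 / vs3 - sigma_bar))                  # z3 "about -0.5"
    return (m1_low, 1.0 - m1_low), (m2_low, 1.0 - m2_low), (m3_low, 1.0 - m3_low)
```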

Then, eight plant rules and the corresponding control rules can be obtained. We give the first rule as an example; the other rules have a similar form.

Rule 1: IF \({z_1}\) is about \(6000\,{\text {m/s}}\) and \({z_2}\) is about \( - 0.5\,{\text {rad}}\) and \({z_3}\) is about \( - 0.5\,{\text {rad/s}}\), THEN

$$\dot{\bar{x}} (t) = {A_1}\bar{x}(t) + {B_1}u(t),y(t) = C\bar{x}(t)$$

where \({A_i}\) and \({B_i},~i = 1,2, \ldots ,8\), can easily be obtained by substituting each of the eight operating points into f(x) and g(x).

In this study, we assume that only one actuator is faulty at a time. We consider:

Case 1:

$$u_1^f(t) = {u_1}(t),$$
$$u_2^f(t) = \left\{ \begin{aligned}&{u_2}(t),{\text { }}t < 5 \\&(1 - {\rho _2}(t))({u_2}(t) + \sum \nolimits _{j = 1}^p {{g_{2,j}}{f_{2,j}}(t)}) ,{\text { }}t \geqslant 5 \\ \end{aligned} \right. $$

where \({\rho _2}(t) = 0.4\sin (\pi t),p = 1,{g_{2,1}} = 0.4,{f_{2,1}}(t) = \cos (t)\).

In order to compare with the results in [24, 51], we consider the following cases.

Case 2 (Bias fault) [24]:

$$u_1^f(t) = {u_1}(t),$$
$$u_2^f(t) = {u_2}(t) + {f_{2,1}}(t),{\text { }}{f_{2,1}}(t) = \left\{ \begin{aligned}&0,{\text { }}t < 4\,s \\&5,{\text { }}4\,s \leqslant t < 7\,s \\&5 + 2(t - 7),{\text { }}t \geqslant 7\,s \\ \end{aligned} \right. $$

where \({\rho _2}(t) = 0,p = 1,{g_{2,1}} = 1\).

Case 3 (Gain fault) [51]:

$$u_1^f(t) = {u_1}(t),$$
$$u_2^f(t) = (1 - {\rho _2}(t)){u_2}(t),{\text { }}{\rho _2}(t) = \left\{ \begin{aligned}&0,{\text { }}t < 2s \\&0.4,{\text { }}t \geqslant 2s \\ \end{aligned} \right. $$

where the bias component vanishes, i.e., \({a_{2,j}}(t) = {g_{2,j}}{f_{2,j}}(t) = 0\).
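For reproducibility, the three fault scenarios can be generated as simple time functions, as in the sketch below; the per-case parameters are those listed above, and u2 denotes the nominal input of the second actuator.

```python
import numpy as np

def case1(t, u2):
    """Case 1: simultaneous time-varying gain and bias fault on actuator 2 after t = 5 s."""
    if t < 5.0:
        return u2
    rho2 = 0.4 * np.sin(np.pi * t)
    return (1.0 - rho2) * (u2 + 0.4 * np.cos(t))      # g_{2,1} = 0.4, f_{2,1}(t) = cos(t)

def case2(t, u2):
    """Case 2 (bias fault of [24]): step at t = 4 s, ramp added from t = 7 s."""
    f = 0.0 if t < 4.0 else (5.0 if t < 7.0 else 5.0 + 2.0 * (t - 7.0))
    return u2 + f

def case3(t, u2):
    """Case 3 (gain fault of [51]): 40% loss of effectiveness from t = 2 s."""
    rho2 = 0.0 if t < 2.0 else 0.4
    return (1.0 - rho2) * u2
```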

Remark 2.5

If each state variable of the near space hypersonic vehicle (NSHV) model were selected as a premise variable, the number of fuzzy rules would become too large, which would increase the computational burden and degrade the settling time of the closed-loop system. In order to reduce the number of fuzzy rules, and taking into account the main characteristics of the NSHV, we select \(\bar{x} = {[V,\theta ,q]^T}\), where \(\theta = \alpha + \gamma \), to build the premise variables. As pointed out in [52], this not only reduces the number of fuzzy rules but also provides a good approximation of the nonlinear system. As a result, the proposed fault tolerant control can achieve satisfactory accuracy and dynamic performance.

2.4.2 Simulation Results

By using a MATLAB toolbox to solve the matrix inequalities (2.18), one can obtain the fault diagnostic observer gains \({L_i}\). By solving (2.63), one can obtain the positive definite symmetric matrix P and the nominal controller gains \({K_i}\); the fault-tolerant controller (2.64) can then be constructed. Due to space limitations, only the common matrix P and the matrices \({Q_1}\), \({L_1},{K_1}\) of the first working point of the NSHV are given here.
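The chapter obtains these gains with a MATLAB toolbox; as a rough alternative, the following hedged sketch solves the gain design condition (2.18) with the convex-optimization package cvxpy (assumed to be installed together with an SDP-capable solver) through the standard change of variables \(Y_i = P L_i\), with \(Q_i\) fixed to a small multiple of the identity. It only illustrates how (2.18) can be handled numerically and is not the procedure used to produce the matrices below.

```python
import numpy as np
import cvxpy as cp

def design_observer_gains(A_list, C_list, eps=1e-6):
    """Feasibility version of (2.18): find a common P > 0 and observer gains L_i.

    With Y_i = P L_i, condition (2.18) becomes the LMI
        A_i^T P + P A_i - C_i^T Y_i^T - Y_i C_i <= -Q_i,
    which is linear in (P, Y_i); here Q_i = eps * I.
    """
    n = A_list[0].shape[0]
    ny = C_list[0].shape[0]
    P = cp.Variable((n, n), symmetric=True)
    Ys = [cp.Variable((n, ny)) for _ in A_list]

    constraints = [P >> eps * np.eye(n)]
    for A, C, Y in zip(A_list, C_list, Ys):
        lhs = A.T @ P + P @ A - C.T @ Y.T - Y @ C
        # symmetrize explicitly so the semidefinite constraint is well posed
        constraints.append(0.5 * (lhs + lhs.T) << -eps * np.eye(n))

    cp.Problem(cp.Minimize(0), constraints).solve()
    P_val = P.value
    return P_val, [np.linalg.solve(P_val, Y.value) for Y in Ys]
```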

$$P = 10^{5} \times \begin{bmatrix} 3.4852 & -0.0000 & 0.0000 & 0.0000 & 0.0000 \\ -0.0000 & 3.4852 & 0.0000 & 0.0000 & -0.0000 \\ 0.0000 & 0.0000 & 3.4852 & 0.0000 & 0.0000 \\ 0.0000 & 0.0000 & 0.0000 & 3.4852 & -0.0000 \\ 0.0000 & -0.0000 & 0.0000 & -0.0000 & 3.4852 \end{bmatrix}$$
$${Q_1} = 10^{5} \times \begin{bmatrix} 3.4852 & -0.0000 & -0.0000 & 0.0001 & -0.0006 \\ 0.0000 & 3.4852 & -0.0000 & -0.0000 & 0.0000 \\ 0.0000 & 0.0000 & 3.4852 & -0.0000 & 0.0000 \\ -0.0001 & 0.0000 & 0.0000 & 3.4852 & 0.0001 \\ 0.0006 & -0.0000 & -0.0000 & -0.0001 & 3.4852 \end{bmatrix}$$
$${K_1} = \begin{bmatrix} 9.4165 & 44487.8491 & 0.8575 & 181.5760 & 1.6392 \\ 5.6423 & 18484.9800 & -0.5165 & 85.7563 & 0.7744 \end{bmatrix}$$
$${L_1} = 10^{8} \times \begin{bmatrix} -0.0035 & -0.1100 & -0.0003 & 0.0354 & 0.0003 \\ -0.1100 & -0.0035 & 6.9356 & 0.0000 & 0.0000 \\ -0.0003 & 6.9356 & -0.0035 & -0.0000 & -0.0000 \\ 0.0354 & 0.0000 & 0.0000 & -0.0035 & -0.7706 \\ 0.0003 & -0.0000 & 0.0000 & -0.7706 & -0.7755 \end{bmatrix}$$
Fig. 2.1 The observer errors time responses: \({e_1},{e_2},{e_3}\) (healthy case)

Fig. 2.2 Fault detection residual J with threshold

Fig. 2.3 Fault detection residuals \({J_1},{J_2}\) with threshold

Fig. 2.4 Time responses of the observer errors: \({e_1},{e_2},{e_3}\) (no compensation for fault)

Fig. 2.5 Time responses of the observer errors: \({e_1},{e_2},{e_3}\) (with compensation for fault)

Fig. 2.6 The gain fault \({\rho _2}(t) = 0.4\sin (\pi t)\) and its estimation \({\hat{\rho } _2}(t)\)

The simulation results are presented in Figs. 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8 and 2.9. From Fig. 2.1, it is easy to see that, under normal operating conditions, the observation errors asymptotically converge to zero. In this chapter, it is assumed that the error system is stable before fault occurrence, namely, \({e_x}(0) = 0,{\bar{e}_{xs}}(0) = 0\), so that \(||{e_x}(0)||{e^{ - \kappa t/2}}\sqrt{{\lambda _{\max }}(P)/{\lambda _{\min }}(P)} = 0\). Hence, in the ideal situation, the detection threshold \({T_d}\) and the isolation threshold \({T_I}\) could be selected as \({T_d} = {T_I} = 0\). However, noise and disturbances exist in practical situations. In the simulations, a white noise with zero mean and standard deviation 0.1 is added to each output. As a result, the detection threshold \({T_d}\) and the isolation threshold \({T_I}\) are selected as \({T_d} = 0.1,{T_I} = 0.1\) according to the definitions of the detection and isolation residuals. Figure 2.2 shows that, when an actuator fault occurs in the system, an alarm is generated since the residual signal deviates significantly from zero. Meanwhile, the SMOs quickly isolate the fault, as shown in Fig. 2.3. From Fig. 2.4, we can see that, when an actuator fault occurs and is not compensated, the observation errors do not converge to zero. However, when the fault is compensated, the error system becomes stable, as shown in Fig. 2.5. From Figs. 2.6 and 2.7, we can clearly conclude that both gain faults and bias faults are estimated accurately and promptly.

Compared with [24, 51], because a clear definition of the threshold for fault detection and isolation is provided, it is easy to detect and isolate the faults. The fault estimation observer presented in this chapter has the following two properties. On the one hand, differing from the classical fault estimation schemes in [24, 51, 52], where only bias faults or only gain faults can be estimated, it is designed to estimate both types of faults. On the other hand, from Figs. 2.8 and 2.9, it is obvious that it can estimate the types of faults considered in [24, 51] and that the fault estimation algorithm has better performance. From the above simulation results, it can be seen that, with the proposed fault detection and isolation observers, an actuator fault can be quickly detected and isolated; using the fault estimation algorithm, the fault can be estimated online, and the estimate can be used to compensate for the fault and to ensure the stability of the closed-loop system in spite of the actuator fault.

Fig. 2.7 The bias fault \({f_2}(t) = 0.4\cos (t)\) and its estimation \({\hat{f}_2}(t)\)

Fig. 2.8 The fault and its estimation (Case 2)

Fig. 2.9 The fault and its estimation (Case 3)

Remark 2.6

From the simulation results, it can be seen that (i) the proposed FDI/FTC scheme is effective, since the fault can be detected, estimated and accommodated quickly, and (ii) the performance of the proposed algorithm is better than that of the schemes in [24, 51].

2.5 Conclusions

In this chapter, the problem of fault tolerant control for T-S fuzzy systems with actuator faults has been studied. A bank of SMOs was designed to detect, isolate and estimate the faults, and a sufficient condition for the existence of these SMOs was derived; the online fault estimates were then used to construct a fault-tolerant controller. Simulation results on the NSHV show that the designed fault detection, isolation and estimation algorithms and the fault-tolerant control scheme have good dynamic performance in the presence of actuator faults.