1 Introduction

In recent years, singular systems have been widely studied, since they describe complex physical systems with more accuracy and simplicity; see, for example, Li (1989), Zhang et al. (2018), Xing et al. (2019a, b), Duan et al. (2013), and Ma and Yan (2016). The T–S fuzzy model has been employed in many application fields and has attracted great attention. T–S fuzzy systems describe nonlinear systems in the form of IF–THEN rules, which utilize fuzzy membership functions to blend local linear input–output relations, as in Takagi and Sugeno (1985), Kumar et al. (2018), Tian et al. (2009), and Ge et al. (2019). In the last 20 years, many studies have been conducted on singular T–S fuzzy systems (Zhu and Xia 2014; Han et al. 2012a, b; Su and Ye 2013; Zhao et al. 2015; Ma et al. 2016). Zhu and Xia (2014) discussed the design of an \({H_\infty }\) filter for T–S fuzzy systems based on descriptor fault detection. A new sliding surface was designed for T–S fuzzy time-varying delay systems in Han et al. (2012a, b), and less conservative conditions were given. Ma et al. (2016) designed a non-fragile static feedback controller for Markovian jump systems described by a singular T–S fuzzy model.

Finite-time stability (FTS) analysis of practical processes has attracted more and more attention in the past few years, such as Zhang et al. (2012) and Zhang et al. (2014a, b). FTS was first introduced in the field of control in the 1960s (Weiss and Infante 1967). Subsequently, Amato et al. (2001) presented the definition of finite-time boundedness, and many studies on finite-time boundedness have followed. It is well known that the Lyapunov–Krasovskii functional (LKF) approach is an efficient technique for the stability analysis of time-varying delay fuzzy systems. For example, Bhat and Bernstein (2006) obtained finite-time stability conditions with less conservatism for nonlinear systems. Tong et al. (2012) discussed finite-time boundedness for linear singular impulsive systems based on \({L_2}\)-gain analysis. The design of \({H_\infty }\) filtering, robust estimation and \({H_\infty }\) stability based on finite-time control for Markovian jump systems has been handled in Cheng and Zhu (2013), He and Liu (2013), and Li and Zhang (2015). The problem of robust finite-time \({H_\infty }\) control for uncertain stochastic jump systems with time delay was introduced in Zhang et al. (2014a, b). By designing an \({H_\infty }\) non-fragile robust state feedback controller, a set of sufficient conditions was obtained to guarantee the finite-time boundedness and finite-time asymptotic stability of the Markovian jump system (Zhang et al. 2014a, b).

In addition, dissipativity is a more general notion than passivity and \({H_\infty }\) performance, and it has been widely discussed in Men et al. (2018), Kong et al. (2019), Zhang et al. (2019), and Wang et al. (2020). The concept of dissipation is an extension of passivity to complex dynamical systems that theoretically consume energy (Tao and Hu 2010). Dissipative theory has attracted considerable interest among researchers in the control field (Ma et al. 2015a, b). It is essential in the analysis and design of control systems and is of great significance in attenuating the effect of disturbances on stability. Han et al. (2012a, b) proposed a new delay-dependent result that guarantees T–S fuzzy descriptor systems to be dissipative. Gassara et al. (2014a, b) investigated \(\left( {Q,S,R} \right) - \gamma \)-dissipativeness under observer-based control, and some less conservative results were given. Xing et al. (2019a, b) used the convex combination technique and designed a mode-dependent filter to study the dissipative control of the system. Based on the above discussion, there have been many results on finite-time \({H_\infty }\) and passive control, but how to obtain a single set of conditions that covers both \({H_\infty }\) performance and passivity remains an open question, which motivates our research. To solve this problem, we introduce the more general dissipativity property into the finite-time analysis of uncertain singular T–S fuzzy time-varying delay systems.

On the other hand, in practice the actuator normally operates in a saturated state, since the energy it can provide is limited. Because actuator saturation brings adverse effects to the system, the control of saturated systems has always received much attention. For example, controllers ensuring stability of systems with actuator saturation have been studied for both linear systems (Hu et al. 2002; Lin and Lv 2007; Ji et al. 2011; Cao and Lin 2003) and nonlinear systems (Ma and Zhang 2012; Hu et al. 2002; Gassara et al. 2014a, b; Zuo et al. 2015; Ma et al. 2015a, b). In addition, several stability conditions and various performance indices for systems with actuator saturation have been investigated for both nonsingular systems (Yang et al. 2014) and singular systems (Gassara et al. 2014a, b; Zuo et al. 2015; Ma et al. 2015a, b).

In this paper, the finite-time dissipative control problem for uncertain singular T–S fuzzy systems with actuator saturation is discussed. By establishing an appropriate LKF and using the convex combination technique, novel results that guarantee finite-time boundedness and dissipativity of the singular systems are given. Moreover, an efficient controller is designed by solving a set of linear matrix inequalities (LMIs). Finally, several examples are provided to demonstrate the feasibility of this method. The key contributions of this study are listed as follows:

  1.

    An appropriate LKF is constructed, and some free matrices are introduced through the suitable integral inequality used in this paper, which greatly reduces the conservatism of the results.

  2.

    By taking the effects of time-varying delay and actuator saturation into account, less conservative criteria that guarantee finite-time boundedness and dissipativity of the uncertain fuzzy systems are obtained. With the help of the LMI technique, effective controllers are designed.

Notations: Throughout this paper, \({P^{ - 1}}\) denotes the inverse of P and \({P^{\mathrm{T}}}\) the transpose of P. \({R^n}\) is the n-dimensional Euclidean space. \({R^{m \times n}}\) is the set of \(m \times n\) real matrices. A symmetric matrix \(P > 0\) means that P is positive definite. \({\mathrm{diag}}\left\{ \cdots \right\} \) is a block-diagonal matrix. \({\mathrm{Sym}}\left\{ X \right\} = X\mathrm{{ + }}{X^\mathrm{T}}\). \({\lambda _{\max }}\left( R \right) \left( {{\lambda _{\min }}\left( R \right) } \right) \) denotes the maximum (minimum) eigenvalue of a real symmetric matrix R. \(*\) stands for the transposed term in a symmetric matrix. I represents the identity matrix. \(sat( \cdot )\) denotes both the scalar-valued and the vector-valued saturation function.

2 Problem formulation

Consider the following T–S fuzzy systems with actuator saturation and time delay:

\( \bullet \) Plant rule i: IF \({\varepsilon _1}(t)\) is \({M_{i1}}\) and \({\varepsilon _2}(t)\) is \({M_{i2}}\), \( \cdots \), \({\varepsilon _p}(t)\) is \({M_{ip}}\), THEN

$$\begin{aligned} \left\{ \begin{array}{r@{~}l} E\dot{x}(t) =&{} ({A_i} + \varDelta {A_i})x(t) + ({A_{\text {d}i}} + \varDelta {A_{di}})x(t - \text {d}(t)) \\ +&{} ({B_i} + \varDelta {B_i})sat(u(t)) + ({B_{\omega i}} + \varDelta {B_{\omega i}})\omega (t) \\ z(t) =&{} ({C_i} + \varDelta {C_i})x(t) + ({C_{\text {d}i}} + \varDelta {C_{\text {d}i}})x(t - \text {d}(t)) \\ +&{} ({D_i} + \varDelta {D_i})sat(u(t)) + ({D_{\omega i}} + \varDelta {D_{\omega i}})\omega (t) \\ x(t) =&{} \phi (t),\mathrm{{ }}t \in ( - {d_2},0), \\ \end{array} \right. \end{aligned}$$
(1)

where \({M_{ik}}(k = 1,2, \ldots ,p)\), \(i \in \mathrm{I}: = \left\{ {1,2, \ldots ,r} \right\} \) are fuzzy sets and r is the number of fuzzy IF–THEN rules; \({\varepsilon _1}(t),\mathrm{{ }}{\varepsilon _2}(t), \cdots ,{\varepsilon _p}(t)\) are the premise variables; \(\phi (t)\) is the initial condition; \(x(t) \in {R^n}\) is the state vector; \(z(t) \in {R^q}\) is the controlled output; \(u(t) \in {R^m}\) is the control input; d(t) is the time-varying delay satisfying \(0 \le {d_1} \le \text {d}(t) \le {d_2}\), \(\dot{d}(t) \le h\) and \(h < 1\); \(\omega (t) \in {R^p}\) is the exogenous disturbance input satisfying:

$$\begin{aligned} \int _0^T {{\omega ^\mathrm{T}}(t)} \omega (t)\text {d}t \le {d^2}. \end{aligned}$$
(2)

In addition, \(sat:{R^m} \rightarrow {R^m}\) is the standard saturation function satisfying:

$$\begin{aligned} sat(u(t)) = {[sat({u_1}(t)), \ldots ,sat({u_m}(t))]^\mathrm{T}}, \end{aligned}$$

and \(sat({u_i}(t)) = \mathrm{{sign}}({u_i}(t))\mathrm{{min}}\left\{ {1,|{u_i}(t)|} \right\} \).
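As a small illustration (not part of the model itself), the componentwise saturation above can be sketched in Python; the unit saturation level matches the definition of \(sat({u_i}(t))\):

```python
import numpy as np

def sat(u):
    """Componentwise standard saturation: sat(u_i) = sign(u_i) * min(1, |u_i|)."""
    u = np.asarray(u, dtype=float)
    return np.sign(u) * np.minimum(1.0, np.abs(u))

# Example with a two-dimensional control input
print(sat([0.4, -2.3]))  # -> [ 0.4 -1. ]
```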

The matrix \(E \in {R^{n \times n}}\) may be singular and we assume that \(rank(E) = r \le n\). \({A_i},{A_{di}},{B_i},{B_{\omega i}},{C_i},{C_{di}},{D_i}\) and \({D_{\omega i}}\) are known real constant matrices with appropriate dimensions; \(\varDelta {A_i},\varDelta {A_{di}},\varDelta {B_i},\varDelta {B_{\omega i}},\varDelta {C_i},\varDelta {C_{di}},\varDelta {D_i}\) and \(\varDelta {D_{\omega i}}\) are unknown matrices satisfying:

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} {\varDelta {A_i}} &{} {\varDelta {A_{di}}} &{} {\varDelta {B_i}} &{} {\varDelta {B_{\omega i}}} \\ {\varDelta {C_i}} &{} {\varDelta {C_{di}}} &{} {\varDelta {D_i}} &{} {\varDelta {D_{\omega i}}} \\ \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {{H_{1i}}} \\ {{H_{2i}}} \\ \end{array}} \right] {F_i}(t)\left[ {\begin{array}{*{20}{c}} {{N_i}} &{} {{N_{di}}} &{} {\begin{array}{*{20}{c}} {{N_{bi}}} &{} {{N_{\omega i}}} \\ \end{array}} \\ \end{array}} \right] , \end{aligned}$$
(3)

where \({N_{i}}, {N_{di}}, {N_{bi}}, {N_{\omega i}}, {H_{1i}}\) and \({H_{2i}}\) are known real constant matrices with appropriate dimensions; \({F_i}(t)\) is an unknown real time-varying matrix function satisfying

$$\begin{aligned} F_i^\mathrm{T}(t){F_i}(t) \le I,\forall t \ge 0. \end{aligned}$$
(4)

Using the center-average defuzzifier, product inference and singleton fuzzifier, the defuzzified output of the overall uncertain singular T–S fuzzy system is represented as follows:

$$\begin{aligned} \left\{ \begin{array}{r@{~}l} E\dot{x}(t) &{}= \sum \limits _{i = 1}^r {{h_i}(\varepsilon (t))} [({A_i} + \varDelta {A_i})x(t) + ({A_{di}} + \varDelta {A_{di}})x(t - \text {d}(t)) \\ &{}\quad + ({B_i} + \varDelta {B_i})sat(u(t)) + ({B_{\omega i}} + \varDelta {B_{\omega i}})\omega (t)] \\ z(t) &{}= \sum \limits _{i = 1}^r {{h_i}(\varepsilon (t))} [({C_i} + \varDelta {C_i})x(t) + ({C_{di}} + \varDelta {C_{di}})x(t - \text {d}(t)) \\ &{}\quad + ({D_i} + \varDelta {D_i})sat(u(t)) + ({D_{\omega i}} + \varDelta {D_{\omega i}})\omega (t)] \\ x(t) &{}= \phi (t),\mathrm{{ }}t \in ( - {d_2},0) \\ \end{array} \right. \end{aligned}$$
(5)

with \({h_i}(\varepsilon (t)) = {{{\varpi _i}(\varepsilon (t))} \big / {\sum \nolimits _{i = 1}^r {{\varpi _i}(\varepsilon (t))} }}\), \({\varpi _i}(\varepsilon (t)) = \varPi _{k = 1}^p{M_{ik}}({\varepsilon _k}(t))\), where \({M_{ik}}({\varepsilon _k}(t))\) is the grade of membership of \({\varepsilon _k}(t)\) in \({M_{ik}}\). Since \({\varpi _i}(\varepsilon (t)) \ge 0\) for \(i = 1,2, \ldots ,r\) and \(\sum \nolimits _{i = 1}^r {{\varpi _i}(\varepsilon (t)) > 0} \) for all t, we have \({h_i}(\varepsilon (t)) \ge 0\) and \(\sum \nolimits _{i = 1}^r {{h_i}(\varepsilon (t))} = 1\). For brevity, \({h_i}(\varepsilon (t))\) is denoted by \({h_i}\) in the following.
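For illustration only (the membership functions themselves depend on the application), the normalization of the activation grades \({\varpi _i}(\varepsilon (t))\) into \({h_i}(\varepsilon (t))\) can be sketched as:

```python
import numpy as np

def normalized_memberships(grades):
    """Normalize rule grades varpi_i >= 0 (not all zero) into h_i with sum(h_i) = 1."""
    w = np.asarray(grades, dtype=float)
    assert np.all(w >= 0) and w.sum() > 0
    return w / w.sum()

# Example with two rules and hypothetical grades
print(normalized_memberships([0.3, 0.1]))  # -> [0.75 0.25]
```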

In the following, we construct a parallel distributed compensation controller that shares the same fuzzy sets in the premise parts as the fuzzy system (1). Then we have

$$\begin{aligned} u(t) = \sum \limits _{i = 1}^r {{h_i}(\varepsilon (t)){K_i}x(t)}, \end{aligned}$$
(6)

with \({K_i}\) being the constant controller gains.

We next establish conditions under which a given ellipsoid \(\mathfrak {I}({E^\mathrm{T}}PE,\rho )\) is contractively invariant. To this end, let us recall some notation and relations that have been given in many references [30, 31]. Let \({h_{ik}}\) denote the \(k\mathrm{{th}}\) row of the matrix \({H_i}\); \(\ell ({H_i})\) is the polyhedral set of states for which the feedback does not saturate, i.e.,

$$\begin{aligned} \ell ({H_i}) = \left\{ {x(t) \in {R^n}:|{h_{ik}}x(t)| \le 1,\mathrm{{ }}k \in \left[ {1, \ldots ,m} \right] } \right\} , \end{aligned}$$

\(P \in {R^{n \times n}}\) is a symmetric matrix with \({E^\mathrm{T}}PE \ge 0\), and \(\rho > 0\) is a scalar. Denote

$$\begin{aligned} \mathfrak {I}({E^\mathrm{T}}PE,\rho ) = \left\{ {x(t) \in {R^n}:{x^\mathrm{T}}(t){E^\mathrm{T}}PEx(t) \le \rho } \right\} . \end{aligned}$$

Thus, \(\mathfrak {I}({E^\mathrm{T}}PE,\rho )\) is an ellipsoid. Let \(\varXi \) be the set of \(m \times m\) diagonal matrices whose diagonal elements are either 1 or 0, and let \({E_s}\), \(s = 1,2, \ldots ,\gamma = {2^m}\), denote the elements of \(\varXi \). In addition, \(E_s^ - = I - {E_s}\). Obviously, if \({E_s} \in \varXi \), then \(E_s^ - \in \varXi \).
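As a brief illustration of this notation (a sketch, not part of the derivation), the \({2^m}\) elements \({E_s}\) of \(\varXi \) and their complements \(E_s^ - = I - {E_s}\) can be enumerated as follows:

```python
import itertools
import numpy as np

def diagonal_01_matrices(m):
    """Enumerate the 2**m diagonal matrices with 0/1 diagonal entries (the set Xi)."""
    Es = [np.diag(bits) for bits in itertools.product([0, 1], repeat=m)]
    Es_minus = [np.eye(m, dtype=int) - E for E in Es]  # E_s^- = I - E_s
    return Es, Es_minus

Es, Es_minus = diagonal_01_matrices(2)
print(len(Es))  # -> 4, i.e., 2**m for m = 2
```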

Lemma 1

(Hu et al. 2002). Let \(F,H \in {R^{m \times n}}\). Then for any \(x(t) \in \ell (H)\),

$$\begin{aligned} sat(Fx(t)) \in co\left\{ {{E_s}Fx(t) + E_s^ - Hx(t),s = 1,2 \ldots ,{2^m}} \right\} ; \end{aligned}$$

or equivalently

$$\begin{aligned} sat(Fx(t)) = \sum \limits _{s = 1}^{{2^m}} {{\alpha _s}({E_s}F + } E_s^ - H)x(t), \end{aligned}$$

where co denotes the convex hull and the scalars \({\alpha _s}\), \(s = 1,2, \ldots ,{2^m}\), satisfy \(0 \le {\alpha _s} \le 1\) and \(\sum \limits _{s = 1}^{{2^m}} {{\alpha _s} = 1} \).
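As a quick numerical check of Lemma 1 in the single-input case (m = 1), sat(Fx(t)) is a convex combination of Fx(t) and Hx(t) whenever \(|Hx(t)| \le 1\); the scalars F, H, x below are arbitrary illustrative values:

```python
def sat_scalar(u):
    return max(-1.0, min(1.0, u))

F, H, x = 3.0, 0.5, 0.8              # |H*x| = 0.4 <= 1, so x lies in ell(H)
s = sat_scalar(F * x)                # sat(Fx) = sat(2.4) = 1.0
# Solve s = a*(F*x) + (1 - a)*(H*x) for the convex weight a
a = (s - H * x) / (F * x - H * x)
print(round(a, 6), 0.0 <= a <= 1.0)  # -> 0.3 True
```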

Under the fuzzy state feedback control law (6) and Lemma 1, system (5) can be represented as

$$\begin{aligned} \left\{ \begin{array}{r@{~}l} E\dot{x}(t) =&{} Ax(t) + {A_d}x(t - d(t))\mathrm{{ + }}{B_\omega }\omega (t) \\ z(t) =&{} Cx(t) + {C_d}x(t - d(t)) + {D_\omega }\omega (t) \\ x(t) =&{} \phi (t),\mathrm{{ }}t \in ( - {d_2},0), \\ \end{array} \right. \end{aligned}$$
(7)

where

$$\begin{aligned} \left\{ \begin{array}{r@{~}l} &{}A = \sum \limits _{i = 1}^r {\sum \limits _{j = i}^r {\sum \limits _{s = 1}^{{2^m}} {{h_i}{h_j}{\alpha _s}[} } } ({A_i} + \varDelta {A_i}) + ({B_i} + \varDelta {B_i})({E_s}{K_j} + E_s^ - {H_j})] \\ &{}{A_d} = \sum \limits _{i = 1}^r {\sum \limits _{j = i}^r {\sum \limits _{s = 1}^{{2^m}} {{h_i}{h_j}{\alpha _s}} } } ({A_{di}} + \varDelta {A_{di}}),\mathrm{{ }}{B_\omega } = \sum \limits _{i = 1}^r {\sum \limits _{j = i}^r {\sum \limits _{s = 1}^{{2^m}} {{h_i}{h_j}{\alpha _s}({B_{\omega i}} + \varDelta {B_{\omega i}})} } } \\ &{}C = \sum \limits _{i = 1}^r {\sum \limits _{j = i}^r {\sum \limits _{s = 1}^{{2^m}} {{h_i}{h_j}{\alpha _s}[} } } ({C_i} + \varDelta {C_i}) + ({D_i} + \varDelta {D_i})({E_s}{K_j} + E_s^ - {H_j})],\mathrm{{ }} \\ &{}{C_d} = \sum \limits _{i = 1}^r {\sum \limits _{j = i}^r {\sum \limits _{s = 1}^{{2^m}} {{h_i}{h_j}{\alpha _s}} } } ({C_{di}} + \varDelta {C_{di}}),\mathrm{{ }}{D_\omega } = \sum \limits _{i = 1}^r {\sum \limits _{j = i}^r {\sum \limits _{s = 1}^{{2^m}} {{h_i}{h_j}{\alpha _s}({D_{\omega i}} + \varDelta {D_{\omega i}})} } }. \\ \end{array} \right. \end{aligned}$$
(8)

Definition 1

(Ma and Yan 2016) The pair (E, A) is said to be regular if det\((sE - A) \ne 0\) for some complex number s; a regular pair (E, A) is said to be impulse free if deg(det\((sE - A)\)) \(=\) rank(E).

Definition 2

(He and Liu 2013) System (7) is said to be finite-time bounded (FTB) with respect to \((c_1^2\), \(c_2^2\), \({d^2}\), T, \({R_c})\), where \({R_c}\) is a symmetric positive definite matrix and \({c_1}\), T are positive constants, if there exists a scalar \({c_2} > {c_1}\) such that

$$\begin{aligned} \begin{array}{l} \mathop {\sup }\limits _{ - {d_2} \le \theta \le 0} \left\{ {{x^\mathrm{T}}(\theta ){E^\mathrm{T}}{R_c}Ex(\theta ),{{\dot{x}}^\mathrm{T}}(\theta ){E^\mathrm{T}}{R_c}E\dot{x}(\theta ),{x^\mathrm{T}}(\theta ){R_c}x(\theta )} \right\} \le c_1^2,\mathrm{{ }} \\ \qquad \qquad \qquad \Rightarrow {x^\mathrm{T}}(t){E^\mathrm{T}}{R_c}Ex(t) \le c_2^2,\mathrm{{ }}\forall \mathrm{{t}} \in \left[ {0,T} \right] . \\ \end{array} \end{aligned}$$

Definition 3

(Han et al. 2012a, b) The regular descriptor system (1) is said to be \((Q,V,R) - \alpha \) dissipative with respect to \((c_1^2\), \(c_2^2\), \({d^2},T,\alpha \), \({R_c})\) if, for some scalar \(\alpha > 0\), the following condition

$$\begin{aligned} W(\omega ,z,t) > \alpha {\langle \omega ,\omega \rangle _t} \end{aligned}$$

holds for any \(t \in \left[ {0,T} \right] \) under zero initial state, where

$$\begin{aligned} \begin{array}{l} W(\omega ,z,t) = {\langle z,Qz\rangle _t} + 2{\langle z,V\omega \rangle _t} + {\langle \omega ,R\omega \rangle _t}, \\ {\langle x,My\rangle _t} = \int _0^t {{x^\mathrm{T}}(s)My(s)ds.} \\ \end{array} \end{aligned}$$

Here \(W(\omega ,z,t)\) denotes the quadratic supply rate; Q, V and R are real matrices with Q and R symmetric, and it is assumed that \(Q \le 0\) so that \(\sqrt{ - Q} \) is well defined.

Remark 1

It should be noted that, by Definition 3, the \((Q,V,R) - \alpha \) dissipative performance contains finite-time passivity and \({H_\infty }\) performance as special cases, as detailed below:

1. If \(Q = 0\), \(V = I\) and \(R = 0\), the finite-time \((Q,V,R) - \alpha \) dissipativity corresponds to a finite-time passivity or positive realness property.

2. If \(Q = - I\), \(V = 0\) and \(R = (\alpha + {\gamma ^2})I\), the finite-time \((Q,V,R) - \alpha \) dissipativity reduces to an \({H_\infty }\) performance.
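For completeness, substituting these two choices into Definition 3 gives, respectively,

$$\begin{aligned} Q=0,\ V=I,\ R=0:&\quad 2{\langle z,\omega \rangle _t}> \alpha {\langle \omega ,\omega \rangle _t}, \\ Q=-I,\ V=0,\ R=(\alpha + {\gamma ^2})I:&\quad - {\langle z,z\rangle _t} + (\alpha + {\gamma ^2}){\langle \omega ,\omega \rangle _t}> \alpha {\langle \omega ,\omega \rangle _t}\; \Leftrightarrow \;{\langle z,z\rangle _t} < {\gamma ^2}{\langle \omega ,\omega \rangle _t}, \end{aligned}$$

i.e., finite-time passivity and a finite-time \({H_\infty }\) (\({L_2}\)-gain) bound of level \(\gamma \).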

Lemma 2

(Ma et al. 2015a, b). Let \({T_1}\), \({T_2}\) and Y(t) be real matrices with appropriate dimensions, where Y(t) satisfies \(Y(t)\mathrm{{ }}{Y^\mathrm{T}}(t) \le I\). Then for any constant \(\varepsilon > 0\), the following inequality holds:

$$\begin{aligned} {T_1}Y(t){T_2} + T_2^\mathrm{T}{Y^\mathrm{T}}(t)T_1^\mathrm{T} \le \varepsilon {T_1}T_1^\mathrm{T} + {\varepsilon ^{ - 1}}T_2^\mathrm{T}{T_2}. \end{aligned}$$

Lemma 3

(Ma and Zhang 2012). Given matrices X and Z with proper dimensions and a symmetric matrix \(Y > 0\), we have

$$\begin{aligned} - {Z^\mathrm{T}}YZ \le {X^\mathrm{T}}Z + {Z^\mathrm{T}}X + {X^\mathrm{T}}{Y^{ - 1}}X. \end{aligned}$$

Lemma 4

(Duan et al. 2013) For any constant \(d > 0\), vector function \(\dot{x}:\left[ { - d,0}\right] \rightarrow {\mathfrak {R}^n}\) and constant matrix \(R \in {\mathfrak {R}^{n \times n}}\) with \(R = {R^\mathrm{T}} > 0\), the following integral inequality holds:

$$\begin{aligned} - d\int _{t - d}^t {{{\dot{x}}^\mathrm{T}}} (s)R\dot{x}(s)ds \le {\left[ {\begin{array}{*{20}{c}} {x(t)} \\ {x(t - d)} \\ \end{array}} \right] ^\mathrm{T}}\left[ {\begin{array}{*{20}{c}} { - R} &{} R \\ R &{} { - R} \\ \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {x(t)} \\ {x(t - d)} \\ \end{array}} \right] . \end{aligned}$$
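A minimal numerical sanity check of Lemma 4 (the scalar trajectory x(s) = sin(s) and the values of d, t, R below are arbitrary choices for illustration):

```python
import numpy as np

d, t, R = 1.0, 3.0, 2.0
s = np.linspace(t - d, t, 200001)
xdot = np.cos(s)                                    # derivative of x(s) = sin(s)
lhs = -d * np.sum(R * xdot[:-1] ** 2 * np.diff(s))  # -d * int_{t-d}^{t} xdot' R xdot ds
rhs = -R * (np.sin(t) - np.sin(t - d)) ** 2         # quadratic form with [[-R, R], [R, -R]]
print(lhs <= rhs)                                   # -> True
```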

Lemma 5

(Duan et al. 2013) Suppose that \({h_1} \le h(t) \le {h_2}\), where \(h(t):{R^ + } \rightarrow {R^ + }\). Then for any \(R = {R^\mathrm{T}} > 0\), singular matrix E and free matrices \({X_1}\) and \({X_2}\), the following integral inequality holds:

$$\begin{aligned}\begin{array}{l} - \int _{t - {h_2}}^{t - {h_1}} {{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}RE\dot{x}(s)ds} \le {\xi ^\mathrm{T}}(t)((h(t) - {h_1}){X_1}{R^{ - 1}}X_1^\mathrm{T} + ({h_2} - h(t)){X_2}{R^{ - 1}}X_2^\mathrm{T} \\ \quad + \left[ {\begin{array}{*{20}{c}} {{X_1}} &{} {{X_2} - {X_1}} &{} { - {X_2}} \\ \end{array}} \right] E + {E^\mathrm{T}}{\left[ {\begin{array}{*{20}{c}} {{X_1}} &{} {{X_2} - {X_1}} &{} { - {X_2}} \\ \end{array}} \right] ^\mathrm{T}})\xi (t), \\ \end{array} \end{aligned}$$

where

$$\begin{aligned} \begin{array}{l} {\xi ^\mathrm{T}}(t) = \left[ {\begin{array}{*{20}{c}} {{x^\mathrm{T}}(t - {h_1})} &{} {{x^\mathrm{T}}(t - h(t))} &{} {{x^\mathrm{T}}(t - {h_2})} \\ \end{array}} \right] , \\ {X_a} = {\left[ {\begin{array}{*{20}{c}} {X_{a1}^\mathrm{T}} &{} {X_{a2}^\mathrm{T}} &{} {X_{a3}^\mathrm{T}} \\ \end{array}} \right] ^\mathrm{T}},a = 1,2. \\ \end{array} \end{aligned}$$
(9)

Remark 2

Lemma 5 plays an important role in reducing the conservatism of the results, since it introduces the slack variables \({X_{al}}(a = 1,2;l = 1,2,3)\). This free-weighting matrix method gives our results an advantage.

Lemma 6

(Tian et al. 2009) Let \({\varLambda _1}\), \({\varLambda _2}\) and \(\varTheta \) be constant matrices and \(0 \le {d_1} \le d(t) \le {d_2}\). Then

$$\begin{aligned} (\text {d}(t) - {d_1}){\varLambda _1} + ({d_2} - \text {d}(t)){\varLambda _2} + \varTheta < 0, \end{aligned}$$

if and only if \(({d_2} - {d_1}){\varLambda _1} + \varTheta < 0\) and \(({d_2} - {d_1}){\varLambda _2} + \varTheta < 0\) hold.

3 Main results

3.1 FTS analysis and finite-time \((Q,V,R) - \alpha \) dissipativity

Theorem 1

System (7) is FTB with respect to \(\left( {c_1^2,c_2^2,{d^2},{T_c},{R_c}} \right) \) at the origin with \(\mathfrak {I}({E^\mathrm{T}}PE,\rho )\) contained in the domain of attraction for positive constants \({c_1}\),d,\({T_c}\),\(\delta \) and matrix \({R_c} > 0\), if there exist a constant \({c_2} > 0\), a nonsingular matrix P, matrices \({Q_1}> 0,{Q_2}> 0,{Q_3}> 0,{Z_1}> 0,{Z_2} > 0\), \({X_{al}}(a = 1,2;l = 1,2,3)\) and \(\mathfrak {I}({E^\mathrm{T}}PE,\rho ) \subset \ell ({H_i})\) such that

$$\begin{aligned} E{P^\mathrm{T}}= & {} P{E^\mathrm{T}} \ge 0, \end{aligned}$$
(10)
$$\begin{aligned} {\varPhi _a}= & {} \left[ {\begin{array}{*{20}{c}} {\varOmega + {d_{12}}{\varGamma _2}E + {d_{12}}{E^\mathrm{T}}\varGamma _2^\mathrm{T}} &{} {{d_1}{\varGamma _1}} &{} {{d_{12}}{\varGamma _1}} &{} {{d_{12}}{X_a}} \\ * &{} { - Z_1^{ - 1}} &{} 0 &{} 0 \\ * &{} * &{} { - Z_2^{ - 1}} &{} 0 \\ * &{} * &{} * &{} { - {Z_2}} \\ \end{array}} \right] < 0, \end{aligned}$$
(11)
$$\begin{aligned} {[}{\lambda _2} + {d_1}{\lambda _3} + {d_2}({\lambda _4} + {\lambda _5}) + \frac{{d_1^3}}{2}{\lambda _6} + \frac{{d_{12}^3}}{2}{\lambda _7}]c_1^2 + {d^2}(1 - {e^{ - \delta {T_c}}}) < {\lambda _1}c_2^2{e^{ - \delta {T_c}}}, \end{aligned}$$
(12)

where

$$\begin{aligned}\begin{array}{l} \varOmega = \left[ {\begin{array}{*{20}{c}} \varDelta &{} {P{E^\mathrm{T}}{Z_1}E} &{} {{A_d}} &{} 0 &{} {{B_\omega }} \\ * &{} { - {Q_1} - {E^\mathrm{T}}{Z_1}E} &{} 0 &{} 0 &{} 0 \\ * &{} * &{} { - (1 - h){Q_3}} &{} 0 &{} 0 \\ * &{} * &{} * &{} { - {Q_2}} &{} 0 \\ * &{} * &{} * &{} * &{} { - \delta I} \\ \end{array}} \right] , \\ \varDelta = A{P^\mathrm{T}} + P{A^\mathrm{T}} + P{Q_1}{P^\mathrm{T}} + P{Q_2}{P^\mathrm{T}} + P{Q_3}{P^\mathrm{T}} - P{E^\mathrm{T}}{Z_1}E{P^\mathrm{T}} - \delta E{P^\mathrm{T}}, \\ {\varGamma _1} = {\left[ {\begin{array}{*{20}{c}} {A{P^\mathrm{T}}} &{} 0 &{} {{A_d}} &{} 0 &{} {{B_\omega }} \\ \end{array}} \right] ^\mathrm{T}},{\varGamma _2} = \left[ {\begin{array}{*{20}{c}} 0 &{} {{X_1}} &{} {{X_2} - {X_1}} &{} { - {X_2}} &{} 0 \\ \end{array}} \right] , \\ {X_a} = {\left[ {\begin{array}{*{20}{c}} 0 &{} {X_{a1}^\mathrm{T}} &{} {X_{a2}^\mathrm{T}} &{} {X_{a3}^\mathrm{T}} &{} 0 \\ \end{array}} \right] ^\mathrm{T}},a = 1,2, \;\;{d_{12}} = {d_2} - {d_1},\\ {P^{ - 1}}E = {E^\mathrm{T}}R_c^{{1 \big / 2}}{\bar{P}}R_c^{{1 \big / 2}}E,{Q_1} = R_c^{{1 \big / 2}}{{{\bar{Q}}}_1}R_c^{{1 \big / 2}}, {Q_2} = R_c^{{1 \big / 2}}{{{\bar{Q}}}_2}R_c^{{1 \big / 2}}, \\ {Q_3} = R_c^{{1 \big / 2}}{{{\bar{Q}}}_3}R_c^{{1 \big / 2}},{Z_1} = R_c^{{1 \big / 2}}{{{\bar{Z}}}_1}R_c^{{1 \big / 2}},{Z_2} = R_c^{{1 \big / 2}}{{{\bar{Z}}}_2}R_c^{{1 \big / 2}}, \\ {\lambda _1} = {\lambda _{\min }}\left( {{\bar{P}}} \right) ,{\lambda _2} = {\lambda _{\max }}\left( {{\bar{P}}} \right) ,{\lambda _3} = {\lambda _{\max }}\left( {{{{\bar{Q}}}_1}} \right) ,{\lambda _4} = {\lambda _{\max }}\left( {{{{\bar{Q}}}_2}} \right) , \\ {\lambda _5} = {\lambda _{\max }}\left( {{{{\bar{Q}}}_3}} \right) ,{\lambda _6} = {\lambda _{\max }}\left( {{{{\bar{Z}}}_1}} \right) ,{\lambda _7} = {\lambda _{\max }}\left( {{{{\bar{Z}}}_2}} \right) . \\ \end{array} \end{aligned}$$

Proof

First, we prove the regular and impulse-free of system (7). Since \(rank(E) = r < n\), we assume \({G_1}\) and \({G_2}\) are non-singular matrices such that

$$\begin{aligned}&\displaystyle {G_1}E{G_2} = \left[ {\begin{array}{*{20}{c}} {{I_r}} &{} 0 \\ 0 &{} 0 \\ \end{array}} \right] ,{G_1}A{G_2} = \left[ {\begin{array}{*{20}{c}} {{A_1}} &{} {{A_2}} \\ {{A_3}} &{} {{A_4}} \\ \end{array}} \right] , \\&\displaystyle {G_1}PG_2^{ - \mathrm{T}} = \left[ {\begin{array}{*{20}{c}} {{P_1}} &{} {{P_2}} \\ {{P_3}} &{} {{P_4}} \\ \end{array}} \right] ,G_1^{ - \mathrm{T}}{Z_1}G_1^{ - 1} = \left[ {\begin{array}{*{20}{c}} {{Z_{11}}} &{} {{Z_{12}}} \\ {{Z_{13}}} &{} {{Z_{14}}} \\ \end{array}} \right] . \end{aligned}$$

From \(E{P^\mathrm{T}} = P{E^\mathrm{T}}\), we can easily obtain \({P_3} = 0\) and \({P_1} = P_1^\mathrm{T}\). From (11), we have \(\varDelta < 0\). Pre- and post-multiply \(\varDelta \) by \(G_2^\mathrm{T}\) and \({G_2}\), respectively. Since \({Q_1},{Q_2},{Q_3} > 0\), it is easily obtained that

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} * &{} * \\ * &{} {{A_4}P_4^\mathrm{T} + {P_4}A_4^\mathrm{T}} \\ \end{array}} \right] < 0. \end{aligned}$$

From the above discussion, we obtain \({A_4}P_4^\mathrm{T} + {P_4}A_4^\mathrm{T} < 0\), which implies that \({A_4}\) is nonsingular. Thus, det\((sE - A)\) is not identically zero and deg(det\((sE - A)\)) \( = \) rank(E). In light of Definition 1, the closed-loop system (7) is regular and impulse free.

Now we prove that the system is FTB. Choose the following Lyapunov functional candidate:

$$\begin{aligned} V(x(t)) = {V_1}(x(t)) + {V_2}(x(t)) + {V_3}(x(t))+ {V_4}(x(t)), \end{aligned}$$
(13)

where

$$\begin{aligned} {V_1}(x(t))= & {} {x^\mathrm{T}}(t){P^{ - 1}}Ex(t),\\ {V_2}(x(t))= & {} \int _{t - {d_1}}^t {{e^{\delta (t - s)}}{x^\mathrm{T}}(s){Q_1}x(s)} \text {d}s + \int _{t - {d_2}}^t {{e^{\delta (t - s)}}{x^\mathrm{T}}(s){Q_2}x(s)} \text {d}s\\&+ \int _{t -\text {d}(t)}^t {{e^{\delta (t - s)}}{x^\mathrm{T}}(s){Q_3}x(s)} \text {d}s,\\ {V_3}(x(t))= & {} {d_1}\int _{ - {d_1}}^0 {\int _{t + \theta }^t {{e^{\delta (t - s)}}{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_1}E\dot{x}(s)} } \text {d}s\text {d}\theta , \\ {V_4}(x(t))= & {} {d_{12}}\int _{ - {d_2}}^{ - {d_1}} {\int _{t + \theta }^t {{e^{\delta (t - s)}}{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_2}E\dot{x}(s)}} \text {d}s \text {d}\theta . \end{aligned}$$

Then

$$\begin{aligned} \begin{array}{lll} {{\dot{V}}_1}(x(t)) &{}=&{} 2{x^\mathrm{T}}(t){P^{ - 1}}E\dot{x}(t),\\ {{\dot{V}}_2}(x(t)) &{}=&{} \delta \int _{t - {d_1}}^t {{e^{\delta (t - s)}}{x^\mathrm{T}}(s){Q_1}x(s)} ds + {x^\mathrm{T}}(t){Q_1}x(t)\\ &{}&{}- {e^{ - \delta {d_1}}}{x^\mathrm{T}}(t - {d_1}){Q_1}x(t - {d_1})\\ &{}&{}+ \delta \int _{t - {d_2}}^t {{e^{\delta (t - s)}}{x^\mathrm{T}}(s){Q_2}x(s)} \text {d}s + {x^\mathrm{T}}(t){Q_2}x(t)\\ &{}&{}- {e^{ - \delta {d_2}}}{x^\mathrm{T}}(t - {d_2}){Q_2}x(t - {d_2})\\ &{}&{}+ \delta \int _{t - \text {d}(t)}^t {{e^{\delta (t - s)}}{x^\mathrm{T}}(s){Q_3}x(s)} \text {d}s + {x^\mathrm{T}}(t){Q_3}x(t)\\ &{}&{}- (1 - \dot{d}(t)){e^{ - \delta \text {d}(t)}}{x^\mathrm{T}}(t - d(t)){Q_3}x(t - \text {d}(t))\\ &{}\le &{} \delta {V_2}(x(t)) + {x^\mathrm{T}}(t)({Q_1} + {Q_2} + {Q_3})x(t)\\ &{}&{}- {x^\mathrm{T}}(t - {d_1}){Q_1}x(t - {d_1}) - {x^\mathrm{T}}(t - {d_2}){Q_2}x(t - {d_2})\\ &{}&{}- (1 - h){x^\mathrm{T}}(t - \text {d}(t)){Q_3}x(t - \text {d}(t)),\\ {{\dot{V}}_3}(x(t)) &{}=&{} \delta {V_3}(x(t)) + {d_1}\int _{ - {d_1}}^0 {{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_1}E\dot{x}(s)\text {d}\theta } \\ &{}&{}- {d_1}\int _{t - {d_1}}^t {{e^{\delta (t - s)}}{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_1}E\dot{x}(s)\text {d}s} \\ &{}\le &{} \delta {V_3}(x(t)) + d_1^2{{\dot{x}}^\mathrm{T}}(t){E^\mathrm{T}}{Z_1}E\dot{x}(t)\\ &{}&{}- {d_1}\int _{t - {d_1}}^t {{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_1}E\dot{x}(s)\text {d}s} ,\\ {{\dot{V}}_4}(x(t)) &{}=&{} \delta {V_4}(x(t)) + {d_{12}}\int _{ - {d_2}}^{ - {d_1}} {{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_2}E\dot{x}(s)\text {d}\theta } \\ &{}&{}- {d_{12}}\int _{t - {d_2}}^{t - {d_1}} {{e^{\delta (t - s)}}{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_2}E\dot{x}(s)\text {d}s} \\ &{}\le &{} \delta {V_4}(x(t)) + d_{12}^2{{\dot{x}}^\mathrm{T}}(t){E^\mathrm{T}}{Z_2}E\dot{x}(t)\\ &{}&{}- {d_{12}}\int _{t - {d_2}}^{t - {d_1}} {{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_2}E\dot{x}(s)\text {d}s}, \end{array} \end{aligned}$$
(14)

From (13) and (14), we obtain

$$\begin{aligned}\begin{array}{l} \dot{V}(x(t)) \le \delta {V_2}(x(t)) + {x^\mathrm{T}}(t)({Q_1} + {Q_2} + {Q_3})x(t) + 2{x^\mathrm{T}}(t)PE\dot{x}(t)\\ - {x^\mathrm{T}}(t - {d_1}){Q_1}x(t - {d_1}) - {x^\mathrm{T}}(t - {d_2}){Q_2}x(t - {d_2})\\ - (1 - h){x^\mathrm{T}}(t - d(t)){Q_3}x(t - d(t)) + d_1^2{{\dot{x}}^\mathrm{T}}(t){E^\mathrm{T}}{Z_1}E\dot{x}(t)\\ + d_{12}^2{{\dot{x}}^\mathrm{T}}(t){E^\mathrm{T}}{Z_2}E\dot{x}(t) - {d_1}\int _{t - {d_1}}^t {{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_1}E\dot{x}(s)\text {d}s} \\ - {d_{12}}\int _{t - {d_2}}^{t - {d_1}} {{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_2}E\dot{x}(s)\text {d}s}. \end{array} \end{aligned}$$

By Lemmas 4 and 5, it is obtained that

$$\begin{aligned}&- {d_1}\int _{t - {d_1}}^t {{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_1}E\dot{x}(s)\text {d}s} \le {\left[ {\begin{array}{*{20}{c}} {x(t)} \\ {x(t - {d_1})} \\ \end{array}} \right] ^\mathrm{T}}\left[ {\begin{array}{*{20}{c}} { - {E^\mathrm{T}}{Z_1}E} &{} {{E^\mathrm{T}}{Z_1}E} \\ {{E^\mathrm{T}}{Z_1}E} &{} { - {E^\mathrm{T}}{Z_1}E} \\ \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {x(t)} \\ {x(t - {d_1})} \\ \end{array}} \right] , \end{aligned}$$
(15)
$$\begin{aligned}&\begin{array}{r} - \int _{t - {d_2}}^{t - {d_1}} {{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_2}E\dot{x}(s)\text {d}s} \le {\xi ^\mathrm{T}}(t)\left\{ {(d(t) - {d_1}){X_1}Z_2^{ - 1}X_1^\mathrm{T} + ({d_2} - \text {d}(t)){X_2}Z_2^{ - 1}X_2^\mathrm{T}} \right. \\ \mathrm{{ }}\left. {\mathrm{{ }} + \left[ {\begin{array}{*{20}{c}} {{X_1}}&{}{{X_2} - {X_1}}&{}{ - {X_2}} \end{array}} \right] E + {E^\mathrm{T}}{{\left[ {\begin{array}{*{20}{c}} {{X_1}}&{{X_2} - {X_1}}&{ - {X_2}} \end{array}} \right] }^\mathrm{T}}} \right\} \xi (t) \end{array},\nonumber \\ \end{aligned}$$
(16)

where \(\xi (t)\) and \({X_a}\) are defined in (9). By Lemma 6, the right-hand side of (16) is negative for all \(\text {d}(t) \in [{d_1},{d_2}]\) if and only if

$$\begin{aligned} {\xi ^\mathrm{T}}(t)\left\{ {{d_{12}}{X_1}Z_2^{ - 1}X_1^\mathrm{T} + } \right. \left[ {\begin{array}{*{20}{c}} {{X_1}} &{} {{X_2} - {X_1}} &{} { - {X_2}} \\ \end{array}} \right] E + {\left[ {\begin{array}{*{20}{c}} {{X_1}} &{} {{X_2} - {X_1}} &{} { - {X_2}} \\ \end{array}} \right] ^\mathrm{T}}\left. {{E^\mathrm{T}}} \right\} \xi (t) < 0\end{aligned}$$

and

$$\begin{aligned} {\xi ^\mathrm{T}}(t)\left\{ {{d_{12}}{X_2}Z_2^{ - 1}X_2^\mathrm{T} + } \right. \left[ {\begin{array}{*{20}{c}} {{X_1}} &{} {{X_2} - {X_1}} &{} { - {X_2}} \\ \end{array}} \right] E + {\left[ {\begin{array}{*{20}{c}} {{X_1}} &{} {{X_2} - {X_1}} &{} { - {X_2}} \\ \end{array}} \right] ^\mathrm{T}}\left. {{E^\mathrm{T}}} \right\} \xi (t) < 0. \end{aligned}$$

Then it can be obtained that

$$\begin{aligned} \begin{array}{l} \dot{V}(x(t)) - \delta V(x(t)) - \delta {\omega ^\mathrm{T}}(t)\omega (t) \\ \quad \le {\zeta ^\mathrm{T}}(t)[\varOmega + d_1^2\varGamma _1^\mathrm{T}{Z_1}{\varGamma _1} + d_{12}^2\varGamma _1^\mathrm{T}{Z_2}{\varGamma _1} + d_{12}^2{X_a}Z_2^{ - 1}X_a^\mathrm{T} + {d_{12}}{\varGamma _2}E + {d_{12}}{E^\mathrm{T}}\varGamma _2^\mathrm{T}]\zeta (t), \\ \end{array} \end{aligned}$$

where \({\zeta ^\mathrm{T}}(t) = \left[ {\begin{array}{lllll} {{x^\mathrm{T}}(t)} &{} {{x^\mathrm{T}}(t - {d_1})} &{} {{x^\mathrm{T}}(t - \text {d}(t))} &{} {{x^\mathrm{T}}(t - {d_2})} &{} {{w^\mathrm{T}}(t)} \\ \end{array}} \right] .\)

Pre- and post-multiplying (11) by diag \(\left\{ {{P^{ - 1}},\underbrace{I, \cdots I}_7} \right\} \) and diag\(\left\{ {{P^{ - \mathrm{T}}},\underbrace{I, \cdots I}_7} \right\} \), respectively, and applying the Schur complement, it yields

$$\begin{aligned} \dot{V}(x(t)) - \delta V(x(t)) - \delta {\omega ^\mathrm{T}}(t)\omega (t) < 0. \end{aligned}$$
(17)

Multiplying both sides of (17) by \({e^{ - \delta t}}\) and integrating from 0 to \(t\) \((t \in \left[ {0,{T_c}} \right] )\), we can derive

$$\begin{aligned} V(x(t)) < {e^{\delta {T_c}}}[V(x(0)) + \delta \int _0^{{T_c}} {{e^{ - \delta s}}} {\omega ^\mathrm{T}}(t)\omega (t)\text {d}t]. \end{aligned}$$

From (11) and Definition 2, we can obtain

$$\begin{aligned} \begin{array}{lll} V(x(0)) &{}=&{} {x^\mathrm{T}}(0){P^{ - 1}}Ex(0) + \int _{ - {d_1}}^0 {{e^{ - \delta s}}{x^\mathrm{T}}(s){Q_1}x(s)} \text {d}s + \int _{ - {d_2}}^0 {{e^{ - \delta s}}{x^\mathrm{T}}(s){Q_2}x(s)} \text {d}s\\ &{}&{}+ \int _{ - \text {d}(t)}^0 {{e^{ - \delta s}}{x^\mathrm{T}}(s){Q_3}x(s)} \text {d}s + {d_1}\int _{ - {d_1}}^0 {\int _\theta ^0 {{e^{ - \delta s}}{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_2}E\dot{x}(s)} \text {d}s} \text {d}\theta \\ &{}&{}+ {d_{12}}\int _{ - {d_2}}^{ - {d_1}} {\int _\theta ^0 {{e^{ - \delta s}}{{\dot{x}}^\mathrm{T}}(s){E^\mathrm{T}}{Z_2}E\dot{x}(s)} \text {d}s} \text {d}\theta \\ &{}\le &{} {\lambda _2}{x^\mathrm{T}}(0){R_c}x(0) + [{d_1}{\lambda _3} + {d_2}({\lambda _4} + {\lambda _5}) + \frac{{d_1^3}}{2}{\lambda _6} + \frac{{d_{12}^3}}{2}{\lambda _7}]\\ &{}&{}\mathop {\sup }\limits _{ - {d_2} \le \theta \le 0} \left\{ {{x^\mathrm{T}}(\theta ){R_c}x(\theta ),{{\dot{x}}^\mathrm{T}}(\theta ){R_c}\dot{x}(\theta )} \right\} \\ &{}\le &{} [{\lambda _2} + {d_1}{\lambda _3} + {d_2}({\lambda _4} + {\lambda _5}) + \frac{{d_1^3}}{2}{\lambda _6} + \frac{{d_{12}^3}}{2}{\lambda _7}]c_1^2. \end{array} \end{aligned}$$

Combining this with (2), we obtain

$$\begin{aligned} V(x(t)) \le [{\lambda _2} + {d_1}{\lambda _3} + {d_2}({\lambda _4} + {\lambda _5}) + \frac{{d_1^3}}{2}{\lambda _6} + \frac{{d_{12}^3}}{2}{\lambda _7}]c_1^2 + {d^2}(1 - {e^{ - \delta {T_c}}}). \end{aligned}$$

On the other hand,

$$\begin{aligned} V(x(t)) \ge {x^\mathrm{T}}(t){P^{ - 1}}Ex(t) \ge {\lambda _1}{x^\mathrm{T}}(t){E^\mathrm{T}}{R_c}Ex(t). \end{aligned}$$

Then from condition (12), we have \({x^\mathrm{T}}(t){E^\mathrm{T}}{R_c}Ex(t) \le c_2^2\). According to Definition 2, system (7) is finite-time bounded. The proof is completed. \(\square \)

Theorem 2

System (7) is finite-time \((Q,V,R) - \alpha \) dissipative with respect to \(\left( c_1^2,c_2^2,{d^2},{T_c},\alpha ,{R_c} \right) \) at the origin with \(\mathfrak {I}({E^\mathrm{T}}PE,\rho )\) contained in the domain of attraction for positive constants \({c_1}\), d, \({T_c}\), \(\delta \) and matrix \({R_c} > 0\), if there exist a constant \({c_2} > 0\), a nonsingular matrix P, matrices \({Q_1}> 0,{Q_2}> 0,{Q_3}> 0,{Z_1}> 0,{Z_2} > 0\), \({X_{al}}(a = 1,2;l = 1,2,3)\) and \(\mathfrak {I}({E^\mathrm{T}}PE,\rho ) \subset \ell (H)\) such that

$$\begin{aligned}&E{P^\mathrm{T}} = P{E^\mathrm{T}} \ge 0, \end{aligned}$$
(18)
$$\begin{aligned}&{\varPhi _a} = \left[ {\begin{array}{lllll} {\varOmega + {d_{12}}{\varGamma _2}E + {d_{12}}{E^\mathrm{T}}\varGamma _2^\mathrm{T}} &{} {{d_1}{\varGamma _1}} &{} {{d_{12}}{\varGamma _1}} &{} {{d_{12}}{X_a}} &{} {\varGamma _3^\mathrm{T}} \\ * &{} { - Z_1^{ - 1}} &{} 0 &{} 0 &{} 0 \\ * &{} * &{} { - Z_2^{ - 1}} &{} 0 &{} 0 \\ * &{} * &{} * &{} { - {Z_2}} &{} 0 \\ * &{} * &{} * &{} * &{} { - I} \\ \end{array}} \right] < 0, \end{aligned}$$
(19)
$$\begin{aligned}&[{\lambda _2} + {d_1}{\lambda _3} + {d_2}({\lambda _4} + {\lambda _5}) + \frac{{d_1^3}}{2}{\lambda _6} + \frac{{d_{12}^3}}{2}{\lambda _7}]c_1^2 + {d^2}(1 - {e^{ - \delta {T_c}}}) < {\lambda _1}c_2^2{e^{ - \delta {T_c}}}, \end{aligned}$$
(20)

where

$$\begin{aligned} \varOmega = \left[ {\begin{array}{lllll} \varDelta &{} {P{E^\mathrm{T}}{Z_1}E} &{} {{A_d}} &{} 0 &{} {{B_\omega } - P{C^\mathrm{T}}V} \\ * &{} { - {Q_1} - {E^\mathrm{T}}{Z_1}E} &{} 0 &{} 0 &{} 0 \\ * &{} * &{} { - (1 - h){Q_3}} &{} 0 &{} { - C_d^\mathrm{T}V} \\ * &{} * &{} * &{} { - {Q_2}} &{} 0 \\ * &{} * &{} * &{} * &{} { - (R - \alpha I) - D_\omega ^\mathrm{T}V - V{D_\omega }} \\ \end{array}} \right] ,\\ \begin{array}{l} \varDelta = A{P^\mathrm{T}} + P{A^\mathrm{T}} + P({Q_1} + {Q_2} + {Q_3}){P^\mathrm{T}} - P{E^\mathrm{T}}{Z_1}E{P^\mathrm{T}} - \delta E{P^\mathrm{T}}, \\ {\varGamma _1} = {\left[ {\begin{array}{lllll} {A{P^\mathrm{T}}} &{} 0 &{} {{A_d}} &{} 0 &{} {{B_\omega }} \\ \end{array}} \right] ^\mathrm{T}},{\varGamma _2} = \left[ {\begin{array}{lllll} 0 &{} {{X_1}} &{} {{X_2} - {X_1}} &{} { - {X_2}} &{} 0 \\ \end{array}} \right] , \\ {\varGamma _3} = \left[ {\begin{array}{lllll} {\sqrt{ - Q} C{P^\mathrm{T}}} &{} 0 &{} {\sqrt{ - Q} {C_d}} &{} 0 &{} {\sqrt{ - Q} {D_\omega }} \\ \end{array}} \right] ,{X_a} = {\left[ {\begin{array}{lllll} 0 &{} {X_{a1}^\mathrm{T}} &{} {X_{a2}^\mathrm{T}} &{} {X_{a3}^\mathrm{T}} &{} 0 \\ \end{array}} \right] ^\mathrm{T}}. \\ {P^{ - 1}}E = {E^\mathrm{T}}R_c^{{1 \big / 2}}{\bar{P}}R_c^{{1 \big / 2}},{Q_1} = R_c^{{1 \big / 2}}{{{\bar{Q}}}_1}R_c^{{1 \big / 2}}E,{Q_2} = R_c^{{1 \big / 2}}{{{\bar{Q}}}_2}R_c^{{1 \big / 2}}, \\ {Q_3} = R_c^{{1 \big / 2}}{{{\bar{Q}}}_3}R_c^{{1 \big / 2}},{Z_1} = R_c^{{1 \big / 2}}{{{\bar{Z}}}_1}R_c^{{1 \big / 2}},{Z_2} = R_c^{{1 \big / 2}}{{{\bar{Z}}}_2}R_c^{{1 \big / 2}}, \\ {\lambda _1} = {\lambda _{\min }}\left( {{\bar{P}}} \right) ,{\lambda _2} = {\lambda _{\max }}\left( {{\bar{P}}} \right) ,{\lambda _3} = {\lambda _{\max }}\left( {{{{\bar{Q}}}_1}} \right) ,{\lambda _4} = {\lambda _{\max }}\left( {{{{\bar{Q}}}_2}} \right) , \\ {\lambda _5} = {\lambda _{\max }}\left( {{{{\bar{Q}}}_3}} \right) ,{\lambda _6} = {\lambda _{\max }}\left( {{{{\bar{Z}}}_1}} \right) ,{\lambda _7} = {\lambda _{\max }}\left( {{{{\bar{Z}}}_2}} \right) . \\ \end{array} \end{aligned}$$

Proof

In this part, we prove that system (7) is finite-time dissipative. First, let

$$\begin{aligned} J(t) = {z^\mathrm{T}}(t)Qz(t) + 2{\omega ^\mathrm{T}}(t)Vz(t) + {\omega ^\mathrm{T}}(t)(R - \alpha I)\omega (t). \end{aligned}$$

From (19) and the Schur complement, it is easily obtained that

$$\begin{aligned} \begin{array}{lll} \dot{V}(x(t)) - \delta V(x(t)) - J(t) &{}\le &{} {\zeta ^\mathrm{T}}(t)[\varOmega + d_1^2\varGamma _1^\mathrm{T}{Z_1}{\varGamma _1} + d_{12}^2\varGamma _1^\mathrm{T}{Z_2}{\varGamma _1} + d_{12}^2{X_a}Z_2^{ - 1}X_a^\mathrm{T} \\ &{}&{}+ \,{d_{12}}{\varGamma _2}E + {d_{12}}{E^\mathrm{T}}\varGamma _2^\mathrm{T} + \varGamma _3^\mathrm{T}{\varGamma _3}]\zeta (t) < 0. \\ \end{array} \end{aligned}$$

Under the zero initial state, the following inequality holds

$$\begin{aligned} \begin{array}{l} 0< V(x(t)) < {e^{\delta {T_c}}}\int _0^t {J(t)} \text {d}t \\ \Rightarrow \int _0^t {{z^\mathrm{T}}(t)Qz(t) + 2{\omega ^\mathrm{T}}(t)Vz(t) + {\omega ^\mathrm{T}}(t)R\omega (t)} \text {d}t > \alpha \int _0^t {{\omega ^\mathrm{T}}(t)\omega (t)} \text {d}t. \\ \end{array} \end{aligned}$$

From Definitions 2 and 3, system (7) is finite-time bounded and \((Q,V,R) - \alpha \) dissipative. The proof is completed. \(\square \)

3.2 Finite-time state feedback dissipative controller design

Remark 3

In the following, we replace the coefficient matrices in Theorem 2 according to (8) and (3). On the basis of Theorem 2, stability conditions in the form of LMIs are obtained in Theorem 3.

Theorem 3

System (7) with state feedback control law (6) and \({K_j} = {F_j}{P^{ - \mathrm{T}}}\) is finite-time \((Q,V,R) - \alpha \) dissipative with respect to \(\left( {c_1^2,c_2^2,{d^2},{T_c},\alpha ,{R_c}} \right) \) at the origin with \(\mathfrak {I}({E^\mathrm{T}}PE,\rho )\) contained in the domain of attraction for the constants \({c_1}> 0,d> 0,{T_c}> 0,\delta > 0\), \(\alpha > 0\) and matrix \({R_c} > 0\), if there exist a constant \({c_2} > 0\), a nonsingular matrix P, matrices \({{\tilde{Q}}_1}> 0,{{\tilde{Q}}_2}> 0,{{\tilde{Q}}_3}> 0,{{\hat{Z}}_1}> 0,{{\tilde{Z}}_1}> 0,{{\tilde{Z}}_2} > 0\), \({X_{al}}\), \({{\bar{X}}_{al}}\)\((a = 1,2; l = 1,2,3)\), and scalars \({\varepsilon _{ij}} > 0\), \({\mu _l} > 0\;(l = 1,2, \ldots ,6)\), such that the following set of LMIs is satisfied

$$\begin{aligned}&E{P^\mathrm{T}} = P{E^\mathrm{T}} \ge 0, \end{aligned}$$
(21)
$$\begin{aligned}&\left[ {\begin{array}{lll} {{\varSigma _{iisa}}} &{} {\varUpsilon _1^\mathrm{T}} &{} {{\varepsilon _{ii}}{\varUpsilon _2}} \\ * &{} { - {\varepsilon _{ii}}I} &{} 0 \\ * &{} * &{} { - I} \\ \end{array}} \right] < 0,\mathrm{{ }}i = 1,2, \ldots ,r, \end{aligned}$$
(22)
$$\begin{aligned}&\left[ {\begin{array}{lll} {{\varSigma _{ijsa}}} &{} {\varUpsilon _1^\mathrm{T}} &{} {{\varepsilon _{ij}}{\varUpsilon _2}} \\ * &{} { - {\varepsilon _{ij}}I} &{} 0 \\ * &{} * &{} { - I} \\ \end{array}} \right] + \left[ {\begin{array}{lll} {{\varSigma _{jisa}}} &{} {\varUpsilon _1^\mathrm{T}} &{} {{\varepsilon _{ji}}{\varUpsilon _2}} \\ * &{} { - {\varepsilon _{ji}}I} &{} 0 \\ * &{} * &{} { - I} \\ \end{array}} \right]< 0,\mathrm{{ }}i < j,\mathrm{{ }}i = 1,2, \ldots ,r, \end{aligned}$$
(23)
$$\begin{aligned}&\left[ {\begin{array}{ll} { - {\rho ^{ - 1}}} &{} {{y_{jk}}} \\ {y_{jk}^T} &{} { - E{\tilde{P}}{E^\mathrm{T}}} \\ \end{array}} \right] \le 0,\mathrm{{ }}k = 1,2, \ldots ,l;\mathrm{{ }}j = 1,2, \ldots ,r, \end{aligned}$$
(24)
$$\begin{aligned}&\begin{array}{l} {\mu _1}I< R_c^{{1 \big / 2}}U{\psi _1}{U^\mathrm{T}}R_c^{{1 \big / 2}} < I,\;\;\mathrm{{ }}{{{\tilde{Q}}}_1}> -P-P^\mathrm{T} - \mu _2R_c^{ - 1},\;\;\mathrm{{ }}{{{\tilde{Q}}}_2}> -P-P^\mathrm{T} - 2\mu _3R_c^{ - 1}, \\ {{{\tilde{Q}}}_3}>-P-P^\mathrm{T} - 2\mu _4R_c^{ - 1},\;\;\mathrm{{ }} {{{\tilde{Z}}}_1}>-P-P^\mathrm{T} - \frac{1}{2}\mu _5R_c^{ - 1},\;\;\mathrm{{ }} {{{\tilde{Z}}}_2} > -P-P^\mathrm{T} -\frac{1}{2}\mu _6R_c^{ - 1}, \\ \end{array} \end{aligned}$$
(25)
$$\begin{aligned} \&\left[ {\begin{array}{ll} {({d_1}{\mu _2} + {d_2}{\mu _3} + {d_2}{\mu _4} + d_1^3{\mu _5} + d_{12}^3{\mu _6}){c_1} + d(1 - {e^{ - \delta {T_c}}}) - {c_2}{e^{ - \delta {T_c}}}} &{} {{c_1}} \\ * &{} { - {\mu _1}} \\ \end{array}} \right] < 0, \end{aligned}$$
(26)

where

$$\begin{aligned}&{\varSigma _{ijsa}} = \left[ {\begin{array}{llllllll} {{\varOmega _{ijs}} + {d_{12}}Sym({{{\tilde{\varGamma }} }_2}E)}&{}{{d_1}{\varGamma _{1ijs}}}&{}{{d_{12}}{\varGamma _{1ijs}}}&{}{{d_{12}}{{{\bar{X}}}_a}}&{}{\varGamma _{3ijs}^\mathrm{T}}&{}{{P^\mathrm{T}}}&{}{{P^\mathrm{T}}}&{}{{P^\mathrm{T}}}\\ * &{}{ - {{{\tilde{Z}}}_1}}&{}0&{}0&{}0&{}0&{}0&{}0\\ * &{}*&{}{ - {{{\tilde{Z}}}_2}}&{}0&{}0&{}0&{}0&{}0\\ * &{}*&{}*&{}{{\varSigma ^1}}&{}0&{}0&{}0&{}0\\ * &{}*&{}*&{}*&{}{ - I}&{}0&{}0&{}0\\ * &{}*&{}*&{}*&{}*&{}{ - {{{\tilde{Q}}}_1}}&{}0&{}0\\ * &{}*&{}*&{}*&{}*&{}*&{}{ - {{{\tilde{Q}}}_2}}&{}0\\ * &{}*&{}*&{}*&{}*&{}*&{}*&{}{ - {{{\tilde{Q}}}_3}} \end{array}} \right] , \\&\begin{array}{l} {\varSigma _{ijsa}} = \left[ {\begin{array}{lllll} {{\varOmega _{ijs}} + {d_{12}}Sym({{{\tilde{\varGamma }} }_2}E)} &{} {{d_1}{\varGamma _{1ijs}}} &{} {{d_{12}}{\varGamma _{1ijs}}} &{} {{d_{12}}{{{\bar{X}}}_a}} &{} {\varGamma _{3ijs}^\mathrm{T}} \\ * &{} {{\varSigma ^1}} &{} 0 &{} 0 &{} 0 \\ * &{} * &{} {{\varSigma ^2}} &{} 0 &{} 0 \\ * &{} * &{} * &{} { - {{{\tilde{Z}}}_2}} &{} 0 \\ * &{} * &{} * &{} * &{} { - I} \\ \end{array}} \right] , \\ {\varOmega _{ijs}} = \left[ {\begin{array}{lllll} {{\varDelta _{ijs}}} &{} {E{{{\hat{Z}}}_1}{E^\mathrm{T}}} &{} {{A_{di}}{P^\mathrm{T}}} &{} 0 &{} {{\varOmega ^1}} \\ * &{} {{\varOmega ^2}} &{} 0 &{} 0 &{} 0 \\ * &{} * &{} {{\varOmega ^3}} &{} 0 &{} { - C_{di}^\mathrm{T}V} \\ * &{} * &{} * &{} {{{\tilde{Q}}}_2} &{} 0 \\ * &{} * &{} * &{} * &{} {{\varOmega ^4}} \\ \end{array}} \right] , \\ {\varDelta _{ijs}} = {A_i}{P^\mathrm{T}} + {B_i}({E_s}{F_j} + E_s^ - {Y_j}) + PA_i^\mathrm{T} + {({E_s}{F_j} + E_s^ - {Y_j})^\mathrm{T}}B_i^\mathrm{T} - E{{{\hat{Z}}}_1}{E^\mathrm{T}} - \delta E{P^\mathrm{T}}\\ \qquad \qquad + {\tilde{Q}}_1 + {\tilde{Q}}_2 + {\tilde{Q}}_3, \\ {\varGamma _{1ijs}} = {\left[ {\begin{array}{lllll} {{A_i}{P^\mathrm{T}} + {B_i}({E_s}{F_j} + E_s^ - {Y_j})} &{} 0 &{} {{A_{di}}{P^\mathrm{T}}} &{} 0 &{} {{B_{\omega i}}} \\ \end{array}} \right] ^\mathrm{T}}, {{{\tilde{\varGamma }} }_2} = \left[ {\begin{array}{lllll} 0 &{} {{{{\tilde{X}}}_1}} &{} {{{{\tilde{X}}}_2} - {{{\tilde{X}}}_1}} &{} { - {{{\tilde{X}}}_2}} &{} 0 \\ \end{array}} \right] , \\ {\varGamma _{3ijs}} = \left[ {\begin{array}{lllll} {\sqrt{ - Q} {C_i}{P^\mathrm{T}} + \sqrt{ - Q} {D_i}({E_s}{F_j} + E_s^ - {Y_j})} &{} 0 &{} {\sqrt{ - Q} {C_{di}}{P^\mathrm{T}}} &{} 0 &{} {\sqrt{ - Q} {D_{\omega i}}} \\ \end{array}} \right] , \\ {\varUpsilon _1} = \left[ {\begin{array}{lllll} {{N_i}{P^\mathrm{T}} + {N_{bi}}({E_s}{F_j} + E_s^ - {Y_j})} &{} 0 &{} {{N_{di}}{P^\mathrm{T}}} &{} 0 &{} {{N_{\omega i}}\mathrm{{ }}\begin{array}{llll} 0 &{} 0 &{} 0 &{} 0 \\ \end{array}} \\ \end{array}} \right] , \\ {\varUpsilon _2} = {\left[ {\begin{array}{lllllllll} {H_1^\mathrm{T}} &{} 0 &{} 0 &{} 0 &{} { - H_2^\mathrm{T}\mathrm{{V}}} &{} {{d_1}H_1^\mathrm{T}} &{} {{d_{12}}H_1^\mathrm{T}} &{} 0 &{} {H_2^\mathrm{T}{{\sqrt{ - Q} }^\mathrm{T}}} \\ \end{array}} \right] ^\mathrm{T}}, \\ {{{\bar{X}}}_a} = {\left[ {\begin{array}{lllll} 0 &{} {{\bar{X}}_{a1}^\mathrm{T}} &{} {{\bar{X}}_{a2}^\mathrm{T}} &{} {{\bar{X}}_{a3}^\mathrm{T}} &{} 0 \\ \end{array}} \right] ^\mathrm{T}},{{{\tilde{X}}}_a} = {\left[ {\begin{array}{lllll} 0 &{} {{\tilde{X}}_{a1}^\mathrm{T}} &{} {{\tilde{X}}_{a2}^\mathrm{T}} &{} {{\tilde{X}}_{a3}^\mathrm{T}} &{} 0 \\ \end{array}} \right] ^\mathrm{T}},a = 1,2, \\ {\varOmega ^1} = {B_{\omega i}} - {({C_i}{P^\mathrm{T}} + {D_i}({E_s}{F_j} + E_s^ - {Y_j}))^\mathrm{T}}V,\mathrm{{ }}{\varSigma ^1} = P + 
{P^\mathrm{T}} + {{{\tilde{Z}}}_1},{\varSigma ^2} = P + {P^\mathrm{T}} + {{{\tilde{Z}}}_2}, \\ {\varOmega ^2} = {{{\tilde{Q}}}_1} - E{{{\hat{Z}}}_1}{E^\mathrm{T}},\mathrm{{ }}{\varOmega ^3} = (1 - h) {{{\tilde{Q}}}_3}, \mathrm{{ }}{\varOmega ^4} = - (R - \alpha I) - D_{\omega i}^\mathrm{T}V - V{D_{\omega i}}. \\ \end{array} \end{aligned}$$

Proof

According to (19) in Theorem 2, we have

$$\begin{aligned} \begin{array}{lll} {\varPhi _a} &{}=&{} \sum \limits _{i = 1}^r {\sum \limits _{j = i}^r {\sum \limits _{s = 1}^\gamma {{h_i}{h_j}{\alpha _s}} } } ({\varPhi _{ijsa}} + \varDelta {\varPhi _{ijsa}}) \\ &{}=&{} \sum \limits _{s = 1}^\gamma {{\alpha _s}} \left[ {\sum \limits _{i = 1}^r {h_i^2({\varPhi _{iisa}} + \varDelta {\varPhi _{iisa}}) + \sum \limits _{i< j}^r {{h_i}{h_j}[({\varPhi _{ijsa}} + {\varPhi _{jisa}}) + (\varDelta {\varPhi _{ijsa}} + \varDelta {\varPhi _{jisa}})]} } } \right] < 0, \\ \end{array} \end{aligned}$$

where

$$\begin{aligned} {\Phi _{ijsa}} = \left[ {\begin{array}{lllll} {{\Omega _{ijs}} + {d_{12}}{\Gamma _2}E + {d_{12}}{E^\mathrm{T}}\Gamma _{2ijs}^\mathrm{T}} &{} {{d_1}{\Gamma _{1ijs}}} &{} {{d_{12}}{\Gamma _{1ijs}}} &{} {{d_{12}}{X_a}} &{} {\Gamma _{3ijs}^{\mathrm{T}}} \\ * &{} { - Z_1^{ - 1}} &{} 0 &{} 0 &{} 0 \\ * &{} * &{} { - Z_2^{ - 1}} &{} 0 &{} 0 \\ * &{} * &{} * &{} { - {Z_2}} &{} 0 \\ * &{} * &{} * &{} * &{} { - I} \\ \end{array}} \right] , \end{aligned}$$
(27)
$$\begin{aligned} {\Omega _{ijs}} = \left[ {\begin{array}{lllll} \Delta &{} {P{E^\mathrm{T}}{Z_1}E} &{} {{A_{di}}} &{} 0 &{} {{\Omega ^1}} \\ * &{} { - {Q_1} - {E^\mathrm{T}}{Z_1}E} &{} 0 &{} 0 &{} 0 \\ * &{} * &{} { - (1 - h){Q_3}} &{} 0 &{} { - C_{di}^\mathrm{T}V} \\ * &{} * &{} * &{} { - {Q_2}} &{} 0 \\ * &{} * &{} * &{} * &{} { - (R - \alpha I) - D_{\omega i}^\mathrm{T}V - V{D_{\omega i}}} \\ \end{array}} \right] , \end{aligned}$$
$$\begin{aligned} \begin{array}{l} {\Delta _{ijs}} = {A_i}{P^\mathrm{T}} + {B_i}({E_s}{K_j} + E_s^ - {H_j}){P^\mathrm{T}} + PA_i^\mathrm{T} + P{({E_s}{K_j} + E_s^ - {H_j})^\mathrm{T}}B_i^\mathrm{T} \\ \qquad \qquad + P({Q_1} + {Q_2} + {Q_3}){P^\mathrm{T}} - P{E^\mathrm{T}}{Z_1}E{P^\mathrm{T}} - \delta E{P^\mathrm{T}}, \\ {\Gamma _{1ijs}} = {\left[ {\begin{array}{lllll} {{A_i}{P^\mathrm{T}} + {B_i}({E_s}{K_j} + E_s^ - {H_j}){P^\mathrm{T}}} &{} 0 &{} {{A_{di}}} &{} 0 &{} {{B_{\omega i}}} \\ \end{array}} \right] ^\mathrm{T}}, \\ {\Gamma _{2ijs}} = \left[ {\begin{array}{lllll} 0 &{} {{X_1}} &{} {{X_2} - {X_1}} &{} { - {X_2}} &{} 0 \\ \end{array}} \right] , {X_a} = {\left[ {\begin{array}{lllll} 0 &{} {X_{a1}^\mathrm{T}} &{} {X_{a2}^\mathrm{T}} &{} {X_{a3}^\mathrm{T}} &{} 0 \\ \end{array}} \right] ^\mathrm{T}},\\ {\Gamma _{3ijs}} = \left[ {\begin{array}{lllll} {\sqrt{ - Q} {C_i}{P^\mathrm{T}} + \sqrt{ - Q} {D_i}({E_s}{K_j} + E_s^ - {H_j}){P^\mathrm{T}}} &{} 0 &{} {\sqrt{ - Q} {C_{di}}} &{} 0 &{} {\sqrt{ - Q} {D_{\omega i}}} \\ \end{array}} \right] , \\ \mathrm{{ }}{\Omega ^1} = {B_{\omega i}} - P{({C_i} + {D_i}({E_s}{K_j} + E_s^ - {H_j}))^\mathrm{T}}V. \\ \end{array} \end{aligned}$$

Since \({h_i},{h_j},{\alpha _s} \ge 0\), (27) holds if the matrix inequalities \({\varPhi _{iisa}} + \varDelta {\varPhi _{iisa}} < 0\) and \(({\varPhi _{ijsa}} + {\varPhi _{jisa}}) + (\varDelta {\varPhi _{ijsa}} + \varDelta {\varPhi _{jisa}}) < 0\) hold. By Lemma 2, for any positive scalar \({\varepsilon _{ii}}\),

$$\begin{aligned}&{\varPhi _{iisa}} + {\varUpsilon _1}F(t){\varUpsilon _2} + \varUpsilon _2^\mathrm{T}{F^\mathrm{T}}(t)\varUpsilon _1^\mathrm{T} \nonumber \\&\quad \le {\varPhi _{iisa}} + {\varepsilon _{ii}}{\varUpsilon _1}\varUpsilon _1^\mathrm{T} + \varepsilon _{ii}^{ - 1}\varUpsilon _2^\mathrm{T}{\varUpsilon _2}. \end{aligned}$$
(28)

Pre- and post-multiplying Eq. (22) by

\(diag\left\{ {I,{P^{ - 1}},{P^{ - 1}},{P^{ - 1}},\underbrace{I, \ldots ,I}_3,{P^{ - 1}},\underbrace{I, \ldots ,I}_4} \right\} \) and

\(diag\left\{ {I,{P^{ - \mathrm{T}}},{P^{ - \mathrm{T}}},{P^{ - \mathrm{T}}},\underbrace{I, \ldots ,I}_3,{P^{ - \mathrm{T}}},\underbrace{I, \ldots ,I}_4} \right\} \), respectively.

Denote \({{\hat{Z}}_1} = {P^\mathrm{T}}{Z_1}P\), \({{\tilde{Z}}_1}= P{Z_1}{P^\mathrm{T}}\), \({{\tilde{Q}}_l} = P{Q_l}{P^\mathrm{T}}\), \({{\bar{X}}_{al}} = P{X_{al}}{P^\mathrm{T}}\), \({{\tilde{X}}_{al}} = P{X_{al}}P\) \((a = 1,2;l = 1,2,3)\) and \({K_j} = {F_j}{P^{ - \mathrm{T}}}\). Using Lemma 3, we can get \( - Z_1^{ - 1} \le P + {P^\mathrm{T}} + {{\tilde{Z}}_1}\) and \(- Z_2^{ - 1} \le P + {P^\mathrm{T}} + {\tilde{Z}_2}\). From (22), (28) and the Schur complement, condition (19) can be obtained.

Using Lemma 3, we can obtain \({\lambda _3} < {\mu _2}\) from \({{\tilde{Q}}_1} < - P - {P^\mathrm{T}} - {\mu _2}R_c^{ - 1}\). Similarly, we can get \({\lambda _4} < {\mu _3}\), \({\lambda _5} < {\mu _4}\), \({\lambda _6} < 2{\mu _5}\), \({\lambda _7} < 2{\mu _6}\) from \({{\tilde{Q}}_2} < - P - {P^\mathrm{T}} - 2\mu _3R_c^{ - 1}\), \(\mathrm{{ }}{{\tilde{Q}}_3} < - P - {P^\mathrm{T}} - 2\mu _4R_c^{ - 1}\), \({{\tilde{Z}}_1} < - P - {P^\mathrm{T}} - 0.5\mu _5R_c^{ - 1}\), \(\mathrm{{ }}{{\tilde{Z}}_2} < - P - {P^\mathrm{T}} - 0.5\mu _6R_c^{ - 1}\), respectively.

Noting that P is a nonsingular matrix, by the singular value decomposition there exist two orthogonal matrices U and V such that E can be decomposed as follows:

$$\begin{aligned} E = U\left[ {\begin{array}{ll} {{\varSigma _r}} &{} 0 \\ 0 &{} 0 \\ \end{array}} \right] {V^\mathrm{T}} = U\left[ {\begin{array}{ll} {{I_r}} &{} 0 \\ 0 &{} 0 \\ \end{array}} \right] {\nu ^\mathrm{T}}, \end{aligned}$$

where \({\varSigma _r} = diag\left\{ {{\sigma _1},{\sigma _2}, \ldots ,{\sigma _r}} \right\} \) with \({\sigma _k} > 0\) for all \(k = 1,2, \ldots ,r\). Partition \(U = \left[ {\begin{array}{ll} {{U_1}} &{} {{U_2}} \\ \end{array}} \right] \), \(V = \left[ {\begin{array}{ll} {{V_1}} &{} {{V_2}} \\ \end{array}} \right] \) and \(\nu = \left[ {\begin{array}{ll} {{V_1}{\varSigma _r}} &{} {{V_2}} \\ \end{array}} \right] \) with \(U_2^\mathrm{T}E = 0\) and \(E{V_2} = 0\). Let \({\tilde{P}} = {U^\mathrm{T}}P\nu \), from (21), \({\tilde{P}}\) is of the following form \(\left[ {\begin{array}{ll} {{P_{11}}} &{} {{P_{12}}} \\ 0 &{} {{P_{22}}} \\ \end{array}} \right] \) and P can be expressed as follows:

$$\begin{aligned} P = E{\nu ^{ - \mathrm{T}}}{\psi _1}{\nu ^{ - 1}} + U{\psi _2}V_2^\mathrm{T}, \end{aligned}$$

where \({\psi _1} = diag\left\{ {{P_{11}},{\psi _{12}}} \right\} \), \({\psi _2} = diag\left\{ {P_{12}^\mathrm{T},P_{22}^\mathrm{T}} \right\} \) with a parameter matrix \({\psi _{12}}\). If we choose \({\psi _{12}} > 0\) and symmetric, then \({\psi _1} > 0\) and symmetric. Furthermore, \({\bar{P}} = R_c^{{{ - 1} \big / 2}}U\psi _1^{ - 1}{U^\mathrm{T}}R_c^{{{ - 1} \big / 2}}\) is a solution of \({P^{ - 1}}E = {E^\mathrm{T}}R_c^{{1 \big / 2}}{\bar{P}}R_c^{{1 \big / 2}}E\) and P satisfies

$$\begin{aligned} P{E^\mathrm{T}} = E{P^\mathrm{T}} = E{\nu ^{ - \mathrm{T}}}{\psi _1}{\nu ^{ - 1}}{E^\mathrm{T}}. \end{aligned}$$

On the other hand, let \(I< {\bar{P}} < \mu _1^{ - 1}I\). Noting that \({\bar{P}} = R_c^{{{ - 1} \big / 2}} U\psi _1^{ - 1}{U^\mathrm{T}}R_c^{{{ - 1} \big / 2}}\) and U is an orthogonal matrix, this is equivalent to \({\mu _1}I< R_c^{{1 \big / 2}} U{\psi _1}{U^\mathrm{T}}R_c^{{1 \big / 2}} < I\), and we have \({\lambda _1} > 1,{\lambda _2} < \mu _1^{ - 1}\).

From the above discussion, condition (20) can be guaranteed by

$$\begin{aligned} \mu _1^{ - 1}{c_1} + ({d_1}{\mu _2} + {d_2}{\mu _3} + {d_2}{\mu _4} + d_1^3{\mu _5} + d_{12}^2{\mu _6}){c_1} + d(1 - {e^{ - \delta {T_c}}}) - {c_2}{e^{ - \delta {T_c}}} < 0. \end{aligned}$$
(29)

By the Schur complement, (29) can be rewritten as

$$\begin{aligned} \left[ {\begin{array}{ll} {({d_1}{\mu _2} + {d_2}{\mu _3} + {d_2}{\mu _4} + d_1^3{\mu _5} + d_{12}^3{\mu _6}){c_1} + d(1 - {e^{ - \delta {T_c}}}) - {c_2}{e^{ - \delta {T_c}}}} &{} {\sqrt{{c_1}} } \\ * &{} { - {\mu _1}} \\ \end{array}} \right] < 0. \end{aligned}$$

First, for every \(x(t) \in \mathfrak {I}({E^\mathrm{T}}PE,\rho )\), since \(\mathfrak {I}({E^\mathrm{T}}PE,\rho ) \subset \ell ({H_i})\), we have \(x(t) \in \ell ({H_i})\). With orthogonal matrices U, V and a non-singular matrix \({\varSigma _r}\), we can obtain the following singular value decomposition:

$$\begin{aligned} {U^\mathrm{T}}EV = \left[ {\begin{array}{ll} {{\varSigma _r}} &{} 0 \\ 0 &{} 0 \\ \end{array}} \right] ,\,{U^\mathrm{T}}PU = \left[ {\begin{array}{ll} {{{{\bar{P}}}_1}} &{} {{{{\bar{P}}}_2}} \\ {{\bar{P}}_2^\mathrm{T}}\, &{} {{{{\bar{P}}}_3}} \\ \end{array}} \right] ,{V^\mathrm{T}}x(t) = \left[ {\begin{array}{l} {{x_1}(t)} \\ {{x_2}(t)} \\ \end{array}} \right] ,\,{H_i}V = \left[ {\begin{array}{ll} {{H_{i1}}} &{} {{H_{i2}}}. \\ \end{array}} \right] \end{aligned}$$

It follows that \({H_{i2}} = 0\); otherwise, let \({x_1}(t) = 0\) and choose \({x_2}(t)\) such that \(|{h_{i2k}}{x_2}(t)| > 1\); then \({x^\mathrm{T}}(t){E^\mathrm{T}}PEx(t) = 0 \le \rho \) while \(|{h_{ik}}x(t)| > 1\), which contradicts \(\mathfrak {I}({E^\mathrm{T}}PE,\rho ) \subset \ell ({H_i})\). Then \({x^\mathrm{T}}(t){E^\mathrm{T}}PEx(t) = x_1^\mathrm{T}(t){P_1}{x_1}(t) \le \rho \) and \({H_i}x(t) = {H_{i1}}{x_1}(t)\).

From the discussion, the condition \(\mathfrak {I}({E^\mathrm{T}}PE,\rho ) \subset \ell ({H_i})\) in Theorem 1 is equivalent to

$$\begin{aligned} {h_{i1k}}{(\varSigma _r^\mathrm{T}{P_1}{\varSigma _r})^{ - 1}}h_{i1k}^T \le {\rho ^{ - 1}},\quad \mathrm{{ }}k = 1,2, \ldots ,l. \end{aligned}$$

Using Schur complements, we have

$$\begin{aligned} \left[ {\begin{array}{ll} { - {\rho ^{ - 1}}} &{} {{h_{i1k}}} \\ {h_{i1k}^T} &{} { - \varSigma _r^\mathrm{T}{P_1}{\varSigma _r}} \\ \end{array}} \right] \le 0,\quad \mathrm{{ }}k = 1,2, \ldots ,l \end{aligned}$$

or

$$\begin{aligned} \left[ {\begin{array}{ll} { - {\rho ^{ - 1}}} &{} {\left[ {\begin{array}{ll} {h_{i1k}^{}} &{} 0 \\ \end{array}} \right] } \\ {{{\left[ {\begin{array}{ll} {h_{i1k}^{}} &{} 0 \\ \end{array}} \right] }^\mathrm{T}}} &{} { - {{\left[ {\begin{array}{ll} {{\varSigma _r}} &{} 0 \\ 0 &{} 0 \\ \end{array}} \right] }^\mathrm{T}}\left[ {\begin{array}{ll} {{{{\bar{P}}}_1}} &{} {{{{\bar{P}}}_2}} \\ {{\bar{P}}_2^\mathrm{T}} &{} {{{{\bar{P}}}_3}} \\ \end{array}} \right] \left[ {\begin{array}{ll} {{\varSigma _r}} &{} 0 \\ 0 &{} 0 \\ \end{array}} \right] } \\ \end{array}} \right] \le 0,\mathrm{{ }}k = 1,2, \ldots ,l, \end{aligned}$$
(30)

where \({h_{i1k}}\) is the \(k\mathrm{{th}}\) row of \({H_{i1}}\).

Denote \({Y_j} = {H_j}{P^\mathrm{T}}\) and pre- and post-multiply (30) by \(diag\left\{ {I,PV} \right\} \) and \(diag\left\{ {I,{V^\mathrm{T}}{P^\mathrm{T}}} \right\} \), respectively. Since \({y_{jk}}\) is the \(k\mathrm{{th}}\) row of \({Y_j}\), we can obtain (24). The proof is complete. \(\square \)

Remark 4

To reduce the conservatism of the results, we choose the largest ellipsoid that satisfies the conditions of Theorem 1. In this way, the estimate of the domain of attraction is more accurate.

The size of the ellipsoid is then measured with respect to a shape reference set \({\chi _R}\). Let \({x_0} \in {R^n}\) be the initial state. To verify that the initial state \({x_0} \in {R^n}\) lies in the domain of attraction, we can establish the following optimization problem:

$$\begin{aligned} \begin{array}{l} \mathop {\max }\limits _{P,Q> 0,{K_j},{H_j},\alpha > 0} \beta \\ s.t.\left\{ {\begin{array}{*{20}{c}} {(1)\mathrm{{ }}\beta {\chi _R} \subset \mathfrak {I}({E^\mathrm{T}}PE,\rho )\mathrm{{ }}} \\ {(2)\mathrm{{ Inequality(24 - 26,28) }}} \\ {(3)\mathrm{{ |}}{h_{iq}}x(t)| \le 1,\forall x(t) \in \mathfrak {I}({E^\mathrm{T}}PE,\rho )} \\ \end{array}} \right. \\ \end{array}. \end{aligned}$$
(31)

Moreover, condition (1) in (31) is equivalent to

$$\begin{aligned} {\beta ^2}x_0^\mathrm{T}{E^\mathrm{T}}PE{x_0} \le \rho . \end{aligned}$$

Using the Schur complement, it can be represented as

$$\begin{aligned} \left[ {\begin{array}{ll} { - \mu } &{} {x_0^\mathrm{T}{E^\mathrm{T}}} \\ * &{} { - {P^{ - 1}}} \\ \end{array}} \right] \le 0, \end{aligned}$$
(32)

where \(\mu = {\rho \big / {{\beta ^2}}}\).

Pre- and post-multiplying (32) by \(diag\left\{ {I,P} \right\} \) and \(diag\left\{ {I,{P^\mathrm{T}}} \right\} \), respectively, one sufficient condition for (32) is

$$\begin{aligned} \left[ {\begin{array}{ll} { - \mu } &{} {x_0^\mathrm{T}{E^\mathrm{T}}{P^\mathrm{T}}} \\ * &{} { - {P^\mathrm{T}}} \\ \end{array}} \right] \le 0. \end{aligned}$$
(33)

By Theorem 3 and the above derivation, the problem in (31) is equivalent to the minimization problem

$$\begin{aligned} \begin{array}{l} \min \mu \\ s.t.\ \mathrm{{inequalities}}\ (24) \sim (28),(33). \\ \end{array} \end{aligned}$$
(34)
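As an illustration of how a problem of the form (34) can be set up numerically, the following Python/cvxpy sketch minimizes \(\mu \) subject to a single Schur-complement LMI of the type (32). The data are hypothetical (E and \({x_0}\) as in Example 2 and \({P^{ - 1}} = I\) fixed); in the actual problem, P is a decision variable and the LMIs (24)–(28) of Theorem 3 must be added as further constraints:

import cvxpy as cp
import numpy as np

# Skeletal sketch of a minimization of the form (34): mu enters the (1,1)
# block of a Schur-complement LMI as in (32).  Illustrative data only.
E = np.diag([1.0, 1.0, 0.0])
x0 = np.array([[-1.0], [1.0], [0.5]])
P_inv = np.eye(3)                      # stand-in for P^{-1}, here fixed

mu = cp.Variable((1, 1), nonneg=True)
M = cp.bmat([[-mu, x0.T @ E.T],
             [E @ x0, -P_inv]])
prob = cp.Problem(cp.Minimize(mu[0, 0]), [M << 0])
prob.solve()
print(mu.value[0, 0])                  # equals x0^T E^T P E x0 (= 2 for this data)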

Remark 5

In Theorem 3, the conditions that guarantee finite-time dissipativity of the uncertain system can be transformed into an LMI problem with parameter \(\delta \) when \({T_c}\) is known:

$$\begin{aligned} \begin{array}{l} \mathop {\min }\limits _{P,{{{\tilde{Q}}}_1},{{{\tilde{Q}}}_2},{{{\tilde{Q}}}_3},{{{\hat{Z}}}_1},{{{\tilde{Z}}}_1},{{{\tilde{Z}}}_2},{X_{al}},{{{\bar{X}}}_{al}},\mu ,\delta } \mathrm{{c}}_2^2 + \alpha \\ s.t.\ (24) \sim (28),(33). \\ \end{array} \end{aligned}$$
(35)
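Since (35) is an LMI problem only for a fixed \(\delta \), the curve in Fig. 1 can be obtained by a one-dimensional search over \(\delta \). The sketch below assumes this workflow; solve_lmi_for_delta is a hypothetical placeholder for a routine that solves (35) with \(\delta \) fixed (e.g., with cvxpy or the MATLAB LMI Toolbox used in the paper) and returns \(\mathrm{{c}}_2^2 + \alpha \), or infinity when the LMIs are infeasible:

import numpy as np

# Grid search over the scalar parameter delta: for each fixed delta, (35) is
# an LMI problem that can be solved directly.  `solve_lmi_for_delta` is a
# hypothetical placeholder returning the optimal c2^2 + alpha for that delta
# (np.inf if infeasible).
def sweep_delta(solve_lmi_for_delta, deltas):
    costs = np.array([solve_lmi_for_delta(d) for d in deltas])
    k = int(np.argmin(costs))
    return deltas[k], costs[k]

# Usage with a dummy stand-in for the LMI solve (illustration only):
best_delta, best_cost = sweep_delta(lambda d: (d - 1.4) ** 2 + 0.1,
                                    np.linspace(0.2, 3.0, 29))
print(best_delta, best_cost)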

4 Numerical examples

In this section, two examples are introduced to show the effectiveness of our results.

Example 1

Consider the inverted pendulum model as follows (Han et al. 2012a, b):

$$\begin{aligned}\begin{array}{l} {{\dot{x}}_1} = {x_2}, \\ \left[ {(M + m)(J + m{l^2}) - {m^2}{l^2}{{\cos }^2}{x_1}} \right] {{\dot{x}}_2} \\ \mathrm{{ }} = (M + m)mg{x_3} - {m^2}{l^2}{x_3}{\cos ^2}{x_1} - ml\cos {x_1}u, \\ 0 = l\sin {x_1} - {x_3}, \\ \end{array} \end{aligned}$$

where \({x_1} \in ({{ - \pi } \big / 2},{\pi \big / 2})\) is the angle of the pendulum from the vertical, \({x_2}\) is the angular velocity, \({x_3}\) is the horizontal distance from the cart to the center of the pendulum, M and m are the masses (in kg) of the cart and the pendulum, respectively, \(J = {{m{l^2}} \big / 3}\) is the moment of inertia, \(g = 9.8\,{m \big / {{s^2}}}\) is the gravity constant, l is the length from the shaft axis to the center of mass of the pendulum, and u is the force exerted on the cart. \(\beta \) stands for the maximal angular velocity, and \(x_2^2(t) = {\beta ^2}\varDelta ({x_2}(t))\) with \({\varDelta ^2}({x_2}(t)) \le 1\). Choose the membership functions \({h_1}({x_1}) = ({\sin ^2}{\theta _0} - {\sin ^2}{x_1}(t))/{\sin ^2}{\theta _0}\) and \({h_2}({x_1}) = 1 - {h_1}({x_1})\) with \({\theta _0} \in ({{ - \pi } \big / 2},{\pi \big / 2})\), and let the external disturbance be \(\omega (t) = \sin (5t)\). Let \(M = 1.3282\), \(m = 0.22\), \(\beta = 3\), \(l = 0.304\). Thus, the global fuzzy model is represented by the following system:

$$\begin{aligned} \begin{array}{l} E\dot{x}(t) = \sum \limits _{i = 1}^2 {{h_i}(\xi (t))} [({A_i} + \varDelta {A_i})x(t) + {B_i}u(t) + {B_{\omega i}}\omega (t)], \\ z(t) = \sum \limits _{i = 1}^2 {{h_i}(\xi (t))[} {C_i}x(t) + {D_{\omega i}}\omega (t)], \\ \end{array} \end{aligned}$$

where

$$\begin{aligned}&\begin{array}{l} E = \left[ {\begin{array}{lll} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 \\ \end{array}} \right] ,{A_1} = \left[ {\begin{array}{lll} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} {\mathrm{{89}}\mathrm{{.0189}}} \\ {\mathrm{{0}}\mathrm{{.3040}}} &{} 0 &{} { - 1} \\ \end{array}} \right] ,{A_2} = \left[ {\begin{array}{lll} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} {\mathrm{{81}}\mathrm{{.7087}}} \\ {\mathrm{{0}}\mathrm{{.2514}}} &{} 0 &{} { - 1} \\ \end{array}} \right] , \\ \end{array}\\&\begin{array}{l} {C_i} = \left[ {\begin{array}{lll} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 \\ \end{array}} \right] ,{B_1} = \left[ {\begin{array}{l} 0 \\ { - \mathrm{{1}}\mathrm{{.7836}}} \\ 0 \\ \end{array}} \right] ,{B_2} = \left[ {\begin{array}{l} 0 \\ { - 0.8186} \\ 0 \\ \end{array}} \right] ,{B_{\omega i}} = \left[ {\begin{array}{l} 0 \\ 1 \\ 0 \\ \end{array}} \right] ,{D_{\omega i}} = \left[ {\begin{array}{*{20}{c}} 1 \\ 1 \\ 0 \\ \end{array}} \right] , \\ \end{array}\\&\begin{array}{l} {H_1} = {\left[ {\begin{array}{lll} 0 &{} { - 0.3924} &{} 0 \\ \end{array}} \right] ^\mathrm{T}},{H_2} = {\left[ {\begin{array}{lll} 0 &{} { - 0.1801} &{} 0 \\ \end{array}} \right] ^\mathrm{T}},{N_i} = \left[ {\begin{array}{lll} 0 &{} 0 &{} 9 \\ \end{array}} \right] . \\ \end{array} \end{aligned}$$
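The following Python sketch (not the authors' code) shows how the membership functions and the fuzzy blending of the local models in Example 1 can be evaluated; the value \({\theta _0} = \pi /3\) is an assumption, since \({\theta _0}\) is only required to lie in \(( - \pi /2,\pi /2)\):

import numpy as np

# Membership functions h1, h2 and blended matrices A(h), B(h) for Example 1.
theta0 = np.pi / 3                     # assumed value of theta_0

def memberships(x1):
    h1 = (np.sin(theta0) ** 2 - np.sin(x1) ** 2) / np.sin(theta0) ** 2
    return np.array([h1, 1.0 - h1])

A = [np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 89.0189], [0.3040, 0.0, -1.0]]),
     np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 81.7087], [0.2514, 0.0, -1.0]])]
B = [np.array([[0.0], [-1.7836], [0.0]]),
     np.array([[0.0], [-0.8186], [0.0]])]

def blended(x1):
    h = memberships(x1)
    return sum(h[i] * A[i] for i in range(2)), sum(h[i] * B[i] for i in range(2))

Ah, Bh = blended(0.2)                  # blended local model near x1 = 0.2 rad
print(np.round(Ah, 4))
print(np.round(Bh, 4))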
Fig. 1 The local optimal bound of \(\alpha \) and \(\delta \) (Example 1)

Fig. 2 The simulation of the state response of the open-loop system (Example 1)

Fig. 3 The simulation of the state response of the closed-loop system (Example 1)

Given \(Q = - 0.1{I_3},V = {\left[ {\begin{array}{lll} {1.5} &{} {1.5} &{} {1.5} \\ \end{array}} \right] ^\mathrm{T}},R = 1.\) From Theorem 3, it can be seen that the minimum value of \(\mathrm{{c}}_2^2 + \alpha \) depends on \(\delta \). Figure 1 depicts the corresponding values for various \(\delta \). Choosing \(\delta = 1.4\), \(\alpha = 0.1\) and the optimal value \({c_2} = \mathrm{{3}}\mathrm{{.9300}}\), we obtain the gains \({K_i}\) as follows:

$$\begin{aligned} \begin{array}{l} {K_1} = \left[ {\begin{array}{lll} { - \mathrm{{1}}\mathrm{{.8859}}} &{} { - \mathrm{{1}}\mathrm{{.9997}}} &{} { - \mathrm{{39}}\mathrm{{.1980}}} \\ \end{array}} \right] , \\ {K_2} = \left[ {\begin{array}{lll} { - \mathrm{{2}}\mathrm{{.3255}}} &{} { - \mathrm{{2}}\mathrm{{.8000}}} &{} { - \mathrm{{26}}\mathrm{{.3229}}} \\ \end{array}} \right] . \\ \end{array} \end{aligned}$$

The state responses of the open-loop and closed-loop systems for the given initial condition are shown in Figs. 2 and 3, respectively. They show that the closed-loop system is finite-time bounded and \((Q,V,R) - \alpha \) dissipative under the designed controllers, which implies that the desired result is achieved and supports our theoretical analysis.

Example 2

Consider the following fuzzy system described by a T–S fuzzy model with two fuzzy rules:

$$\begin{aligned} E\dot{x}(t)= & {} \sum \limits _{i = 1}^2 {{h_i}} (\varepsilon (t))[({A_i} + \varDelta {A_i}(t))x(t) + ({A_{di}} + \varDelta {A_{di}}(t))x(t - d(t)) \\&+ {B_i}sat(u(t)) + {B_{\omega i}}\omega (t)], \\ z(t)= & {} \sum \limits _{i = 1}^2 {{h_i}} (\varepsilon (t))[{C_i}x(t) + {C_{di}}x(t - d(t)) + {D_i}u(t) + {D_{\omega i}}\omega (t)], \\ \end{aligned}$$

where

$$\begin{aligned}\begin{array}{l} E = \left[ {\begin{array}{lll} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 \\ \end{array}} \right] ,{A_1} = \left[ {\begin{array}{lll} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 1 \\ {0.1} &{} 0 &{} 1 \\ \end{array}} \right] ,{A_2} = \left[ {\begin{array}{lll} 1 &{} 1 &{} 1 \\ 0 &{} 0 &{} 1 \\ {0.1} &{} 0 &{} 1 \\ \end{array}} \right] ,{A_{d1}} = \left[ {\begin{array}{lll} 1 &{} 1 &{} {0.1} \\ 1 &{} 1 &{} 0 \\ {0.1} &{} 0 &{} {0.1} \\ \end{array}} \right] , \\ \end{array}\\\begin{array}{l} {A_{d2}} = \left[ {\begin{array}{lll} 0 &{} 1 &{} {0.1} \\ 1 &{} 1 &{} 0 \\ 1 &{} 0 &{} {0.1} \\ \end{array}} \right] ,{C_1} = \left[ {\begin{array}{lll} 1 &{} 0 &{} 0 \\ 0 &{} {0.1} &{} 0 \\ 0 &{} 0 &{} 0 \\ \end{array}} \right] ,{C_2} = \left[ {\begin{array}{lll} 1 &{} 0 &{} 1 \\ 0 &{} {0.1} &{} 0 \\ 0 &{} 0 &{} {0.1} \\ \end{array}} \right] ,{B_i} = \left[ {\begin{array}{l} {0.1} \\ {0.1} \\ {0.1} \\ \end{array}} \right] , \\ \end{array}\\\begin{array}{l} {C_{d1}} = \left[ {\begin{array}{lll} {0.1} &{} 0 &{} 0 \\ 0 &{} {0.1} &{} 0 \\ 0 &{} 0 &{} {0.1} \\ \end{array}} \right] ,{C_{d2}} = \left[ {\begin{array}{lll} 1 &{} 0 &{} 0 \\ 0 &{} {0.1} &{} 0 \\ 0 &{} 0 &{} 0 \\ \end{array}} \right] ,{D_1} = \left[ {\begin{array}{l} {0.1} \\ {0.1} \\ {0.1} \\ \end{array}} \right] ,{D_2} = \left[ {\begin{array}{l} {0.1} \\ 0 \\ {0.1} \\ \end{array}} \right] , \\ \end{array}\\\begin{array}{l} {B_{wi}} = \left[ {\begin{array}{lll} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} {0.1} \\ \end{array}} \right] ,{D_{wi}} = \left[ {\begin{array}{lll} {0.1} &{} 0 &{} 0 \\ 0 &{} {0.3} &{} 0 \\ 0 &{} 0 &{} {0.1} \\ \end{array}} \right] ,{H_{1i}} = {H_{2i}} = \left[ {\begin{array}{l} {0.1} \\ {0.1} \\ {0.1} \\ \end{array}} \right] , \\ \end{array}\\\begin{array}{l} {N_i} = \left[ {\begin{array}{lll} {0.1} &{} {0.1} &{} {0.1} \\ \end{array}} \right] ,{N_{di}} = \left[ {\begin{array}{lll} {0.1} &{} {0.1} &{} {0.1} \\ \end{array}} \right] ,{N_{wi}} = \left[ {\begin{array}{lll} {0.1} &{} {0.1} &{} {0.1} \\ \end{array}} \right] . \\ \end{array} \end{aligned}$$

We let

$$\begin{aligned} Q = \left[ {\begin{array}{lll} { - 0.01} &{} 0 &{} 0 \\ 0 &{} { - 0.01} &{} 0 \\ 0 &{} 0 &{} { - 0.01} \\ \end{array}} \right] , V = \left[ {\begin{array}{lll} {0.1} &{} 0 &{} 0 \\ 0 &{} {0.1} &{} 0 \\ 0 &{} 0 &{} {0.1} \\ \end{array}} \right] , R = \left[ {\begin{array}{lll} 1 &{} 0 &{} 0 \\ 0 &{} 2 &{} 0 \\ 0 &{} 0 &{} 2 \\ \end{array}} \right] . \end{aligned}$$
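For reference, the sketch below evaluates the quadratic supply rate associated with these weighting matrices, assuming the standard \((Q,V,R)\) form \(s(\omega ,z) = {z^\mathrm{T}}Qz + 2{z^\mathrm{T}}V\omega + {\omega ^\mathrm{T}}R\omega \); the paper's precise definition of \((Q,V,R) - \alpha \) dissipativity is given in the earlier sections and may differ in detail:

import numpy as np

# Quadratic supply rate s(w, z) = z^T Q z + 2 z^T V w + w^T R w with the
# weighting matrices of Example 2 (assumed standard form).
Q = -0.01 * np.eye(3)
V = 0.1 * np.eye(3)
R = np.diag([1.0, 2.0, 2.0])

def supply_rate(w, z):
    return float(z @ Q @ z + 2.0 * z @ V @ w + w @ R @ w)

print(supply_rate(np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.2, 0.0])))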

Choose \(\delta = 0.285\), \({\varepsilon _{11}} = {\varepsilon _{21}} = 0.21\), \({\varepsilon _{12}} = {\varepsilon _{22}} = 0.2\), \({d_1} = 0\), \({d_2} = 1\), \({T_c} = 2\), \(d = 1\); then the following results are obtained with the LMI Toolbox in MATLAB:

$$\begin{aligned}&P = \left[ {\begin{array}{lll} { - 36.9566} &{} { - 23.0220} &{} {0.2704} \\ { - 17.0093} &{} { - 32.4030} &{} {0.0292} \\ { - 0.2738} &{} {0.0549} &{} { - 0.0950} \\ \end{array}} \right] \mathrm{{, }}{{{\tilde{Z}}}_2} = \left[ {\begin{array}{lll} {\mathrm{{8}}\mathrm{{.8451}}} &{} {\mathrm{{19}}\mathrm{{.0409}}} &{} {\mathrm{{0}}\mathrm{{.0082}}} \\ {\mathrm{{19}}\mathrm{{.0409}}} &{} {\mathrm{{39}}\mathrm{{.6309}}} &{} { - \mathrm{{0}}\mathrm{{.0799}}} \\ {\mathrm{{0}}\mathrm{{.0082}}} &{} { - \mathrm{{0}}\mathrm{{.0799}}} &{} {\mathrm{{0}}\mathrm{{.1905}}} \\ \end{array}} \right] ,\\&{{{\tilde{Z}}}_1} = 1.0e + 003\mathrm{{ }}*\left[ {\begin{array}{lll} {\mathrm{{2}}\mathrm{{.6207 }}} &{} { - 0.0000} &{} {0.0000} \\ { - 0.0000} &{} {\mathrm{{ 2}}\mathrm{{.6207}}} &{} { - 0.0000} \\ {0.0000} &{} { - 0.0000} &{} {\mathrm{{2}}\mathrm{{.6207}}} \\ \end{array}} \right] ,\\&{{{\hat{Z}}}_1} = 1.0e + 003\mathrm{{ }}*\left[ {\begin{array}{lll} {\mathrm{{8}}\mathrm{{.8451}}} &{} {\mathrm{{0}}\mathrm{{.5222}}} &{} 0 \\ {\mathrm{{0}}\mathrm{{.5222}}} &{} {\mathrm{{7}}\mathrm{{.6363}}} &{} 0 \\ 0 &{} 0 &{} 0 \\ \end{array}} \right] ,\\&{{{\tilde{Q}}}_1} = 1.0e + 003\mathrm{{ }}*\left[ {\begin{array}{lll} {\mathrm{{1}}\mathrm{{.6556}}} &{} { - \mathrm{{0}}\mathrm{{.0056}}} &{} { - \mathrm{{0}}\mathrm{{.0003}}} \\ { - \mathrm{{0}}\mathrm{{.0056}}} &{} {\mathrm{{1}}\mathrm{{.6554}}} &{} { - \mathrm{{0}}\mathrm{{.0001}}} \\ { - \mathrm{{0}}\mathrm{{.0003}}} &{} { - \mathrm{{0}}\mathrm{{.0001}}} &{} {0.0002} \\ \end{array}} \right] ,\\&{{{\tilde{Q}}}_2} = 1.0e + 003\mathrm{{ }}*\left[ {\begin{array}{lll} {\mathrm{{1}}\mathrm{{.6556}}} &{} { - \mathrm{{0}}\mathrm{{.0056}}} &{} { - \mathrm{{0}}\mathrm{{.0003}}} \\ { - \mathrm{{0}}\mathrm{{.0056}}} &{} {\mathrm{{1}}\mathrm{{.6554}}} &{} { - \mathrm{{0}}\mathrm{{.0001}}} \\ { - \mathrm{{0}}\mathrm{{.0003}}} &{} { - \mathrm{{0}}\mathrm{{.0001}}} &{} {0.0002} \\ \end{array}} \right] ,\\&{{{\tilde{Q}}}_3} = 1.0e + 003\mathrm{{ }}*\left[ {\begin{array}{lll} {\mathrm{{1}}\mathrm{{.9812}}} &{} {\mathrm{{0}}\mathrm{{.0858}}} &{} { - \mathrm{{0}}\mathrm{{.0011}}} \\ {\mathrm{{0}}\mathrm{{.0858}}} &{} {\mathrm{{1}}\mathrm{{.7466}}} &{} { - \mathrm{{0}}\mathrm{{.0003}}} \\ { - \mathrm{{0}}\mathrm{{.0011}}} &{} { - \mathrm{{0}}\mathrm{{.0003}}} &{} {0.0002} \\ \end{array}} \right] . \end{aligned}$$

The state feedback gain matrices are obtained as

$$\begin{aligned}\begin{array}{l} {K_1} = \left[ {\begin{array}{lll} { - \mathrm{{1}}\mathrm{{.1724}}} &{} { - \mathrm{{0}}\mathrm{{.0116}}} &{} {\mathrm{{4}}\mathrm{{.8657}}} \\ \end{array}} \right] ,{K_2} = \left[ {\begin{array}{lll} { - \mathrm{{1}}\mathrm{{.0967 }}} &{} { - \mathrm{{0}}\mathrm{{.3054}}} &{} {\mathrm{{4}}\mathrm{{.0653}}} \\ \end{array}} \right] , \\ {H_1} = \left[ {\begin{array}{lll} { - \mathrm{{0}}\mathrm{{.3581}}} &{} {\mathrm{{0}}\mathrm{{.1926}}} &{} {\mathrm{{1}}\mathrm{{.1706}}} \\ \end{array}} \right] ,{H_2} = \left[ {\begin{array}{lll} {\mathrm{{0}}\mathrm{{.3926 }}} &{} { - \mathrm{{0}}\mathrm{{.4335}}} &{} { - \mathrm{{1}}\mathrm{{.3598}}} \\ \end{array}} \right] . \\ \end{array} \end{aligned}$$
Fig. 4 The simulation of the state response of the closed-loop system (Example 2)

Fig. 5 The simulation of the control input (Example 2)

Fig. 6 The invariant ellipsoids and a trajectory (Example 2)

We take the initial condition \({x_0} = {\left[ {\begin{array}{lll} { - 1} &{} 1 &{} {0.5} \\ \end{array}} \right] ^\mathrm{T}}\) and solve the LMI optimization problem (34), which yields \({\mu _{\min }} = 0.0069\). Let the fuzzy weighting functions be \({h_1}({x_1}) = {1 \big / {[1 + \exp (0.5{x_1})]}}\) and \({h_2}({x_1}) = 1 - {h_1}({x_1})\), the disturbance input be \(\omega (t) = \exp ( - t)\sin ( - t)\), and the uncertainty be \({F_i}(t) = \sin t\). The state trajectories of the closed-loop system are shown in Fig. 4, Fig. 5 gives the simulation of the control input, and Fig. 6 shows the invariant ellipsoids and a state trajectory. These results illustrate the feasibility and effectiveness of the proposed method.
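For completeness, a small sketch (assumed, not the authors' code) of the fuzzy weights and exogenous signals used in this simulation:

import numpy as np

# Fuzzy weights and exogenous signals of the Example 2 simulation.
def fuzzy_weights(x1):
    h1 = 1.0 / (1.0 + np.exp(0.5 * x1))
    return np.array([h1, 1.0 - h1])

def disturbance(t):
    return np.exp(-t) * np.sin(-t)     # omega(t)

def uncertainty(t):
    return np.sin(t)                   # F_i(t), satisfies |F_i(t)| <= 1

t = np.linspace(0.0, 5.0, 6)
print(fuzzy_weights(-1.0), np.round(disturbance(t), 4), np.round(uncertainty(t), 4))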

5 Conclusions

This paper studies the problem of finite-time dissipative control for uncertain singular T–S fuzzy time-varying delay systems subject to actuator saturation. First, based on an appropriate Lyapunov–Krasovskii functional, by introducing some free matrices and using the convexity property of the matrix inequality, sufficient conditions are derived to ensure that the closed-loop system is finite-time bounded and satisfies a dissipative disturbance attenuation level over a given finite-time interval. Then, the controller is designed by solving a linear matrix inequality optimization problem. Finally, two numerical examples are provided to illustrate the feasibility and effectiveness of the proposed method. However, actuator saturation has been considered without estimating the domain of attraction for the time-delay case; further study will focus on this point.