1 Introduction

Up to now, fractional-order systems have emerged in various fields [1, 3,4,5, 8, 11]. In particular, as a special kind of fractional-order system, the positive switched fractional-order system (PSFS) has attracted increasing attention, and its stability has been studied by a large number of scholars. Many papers address the asymptotic stability of PSFS [2, 25], that is, stability over sufficiently long or infinite time intervals. In many applications, however, the state of the system only needs to remain in a prescribed range over a finite time interval. For this reason, the so-called finite-time stability (FTS) was first proposed in [21]. The works [13, 24] applied the concept of FTS to PSFS. Later, the paper [14] studied the guaranteed cost FTS of PSFS. Compared with FTS, guaranteed cost FTS additionally requires an upper bound on the cost to be determined.

There are many factors affecting the stability of PSFS, for example, switching signals, external factors, impulsive behavior, and unstable subsystems. The reference [24] studied the FTS of PSFS under average dwell time (ADT) switching, and references [15, 18] studied the FTS of PSFS under mode-dependent ADT switching. However, there is no conclusion on the stability of PSFS under the \(\Phi \)DADT strategy [22], which covers both the mode-dependent ADT and ADT schemes and is more effective than either. At the same time, external factors such as external disturbance, tool wear, and model error may also affect the stability of PSFS; these factors are often modeled as D-disturbance [17]. The reference [18] gave a conclusion on the guaranteed cost FTS of PSFS with D-disturbance. Impulsive dynamics, that is, instantaneous changes of the system state, may also affect the stability of PSFS [9, 12]. The reference [15] studied the FTS of PSFS when impulses occur only at the switching instants, while the reference [16] studied the guaranteed cost FTS of PSFS with impulses occurring at arbitrary instants. In addition, it is known that a switched system with unstable subsystems may be stabilized by designing a suitable set of controllers and switching signals. In the controller design process, small uncertainties can degrade the efficiency of the designed controller, so the fragility of the controller may also affect the system's stability. References [19, 20] eliminated this uncertainty by designing non-fragile controllers to ensure that the system's performance reaches a desired bound. The above works consider only one or two of these factors when studying the stability and control of PSFS. To the best of our knowledge, there are no results in which the four factors are considered simultaneously, and this is worth further exploration.

Inspired by the above works, this article studies the guaranteed cost FTS of PSFS with D-disturbance and impulses under the \(\Phi \)DADT strategy. The main contributions are summarized as follows: (1) Four kinds of destabilizing factors, namely switching signals, external factors, impulsive behavior, and unstable subsystems, are considered simultaneously for PSFS for the first time. (2) The \(\Phi \)DADT strategy is applied to PSFS for the first time, and better stability results are obtained. (3) The obtained conclusions not only cover most of the existing relevant results but also provide some new situations that were not previously considered.

The outline of this paper is as follows. Section 2 gives some preliminary knowledge. In Sect. 3, guaranteed cost FTS conditions for PSFS with D-disturbance and impulses are given under the \(\Phi \)DADT strategy, and finite-time certain and robust controllers are designed to ensure the stability of the closed-loop PSFS. In Sect. 4, an example is given to illustrate the validity of the obtained results. The final section provides the conclusion.

2 Problem Formulation and Preliminaries

In this paper, \(\mathbb {R}\) (\(\mathbb {R}_{+}\)) and \(\mathbb {N}_{+}\), respectively, represent the set of real (positive real) numbers and positive integers. \(\mathbb {R}^{n}\) (\(\mathbb {R}^{n}_{+}\)) and \(\mathbb {R}^{n\times n}\) stand for the set of n-dimensional real (positive) vectors and \(n\times n\) real matrices, respectively. The notation \(\forall \) \((\in ,\) \(\triangleq )\) means “for all” (“in,” “a shorthand for”). The notation \(\Longleftrightarrow \) \((\Longrightarrow ,\Longleftarrow )\) means “if and only if” (“implies,” “is implied by”). \(\succ \) denotes the componentwise (partial) order between vectors. In addition, T denotes the transpose of a vector or matrix, and, unless otherwise stated, all matrices in this paper have dimensions compatible with the operations involved. \(M([t_{0},t_{1}])\) and \(C([t_{0},t_{1}])\) represent the spaces of integrable and continuous functions on the time interval \([t_{0},t_{1}]\), respectively. \(\Gamma (\alpha )\) is the Gamma function of the variable \(\alpha \).

First, we review the fractional-order integral and derivative and some inequalities used in this paper.

Definition 1

For any \(g(t)\in M([t_{0},t])\) and \(\alpha \in (0,1)\), the fractional integral of order \(\alpha \) of g(t) is given as follows:

$$\begin{aligned} _{t_{0}}I^{\alpha }_{t}g(t)=\frac{1}{\Gamma (\alpha )} \int _{t_{0}}^{t}(t-u)^{\alpha -1}g(u)\text {d}u. \end{aligned}$$

Definition 2

For any \(g(t)\in M([t_{0},t])\) and \(\alpha \in (0,1)\), the Caputo fractional derivative of order \(\alpha \) of g(t) is given as follows:

$$\begin{aligned} _{t_{0}}\mathcal {D}^{\alpha }_{t}g(t)=\frac{1}{\Gamma (1-\alpha )} \int _{t_{0}}^{t}\frac{g^{'}(u)}{(t-u)^{\alpha }}\text {d}u. \end{aligned}$$
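As an illustration (ours, not part of the original development), both operators in Definitions 1 and 2 can be approximated on a uniform grid: a product-rectangle rule for the weakly singular kernel of the fractional integral and the standard L1 discretization for the Caputo derivative. The function names and the test function \(g(t)=t\) below are our own choices.

```python
# Minimal numerical sketch of Definitions 1 and 2 (our illustration, not part
# of the paper): product-rectangle rule for the fractional integral and the
# standard L1 scheme for the Caputo derivative, both with t0 = 0.
import numpy as np
from scipy.special import gamma

def frac_integral(g, t, alpha, n=1000):
    """Approximate  _0 I_t^alpha g(t)  of Definition 1 on a uniform grid."""
    h = t / n
    j = np.arange(n)
    mid = (j + 0.5) * h                              # midpoints of the sub-intervals
    w = (n - j) ** alpha - (n - j - 1) ** alpha      # exact kernel integrals, scaled by h^alpha/alpha
    return h ** alpha * np.sum(w * g(mid)) / gamma(alpha + 1.0)

def caputo_derivative(g, t, alpha, n=1000):
    """Approximate  _0 D_t^alpha g(t)  of Definition 2 by the L1 scheme."""
    h = t / n
    gk = g(np.arange(n + 1) * h)
    j = np.arange(n)
    w = (n - j) ** (1.0 - alpha) - (n - j - 1) ** (1.0 - alpha)
    return np.sum(w * np.diff(gk)) / (h ** alpha * gamma(2.0 - alpha))

if __name__ == "__main__":
    alpha, t = 0.9, 2.0
    # For g(t) = t:  _0 I_t^alpha g = t^(1+alpha)/Gamma(2+alpha),
    #                _0 D_t^alpha g = t^(1-alpha)/Gamma(2-alpha).
    print(frac_integral(lambda s: s, t, alpha), t ** (1 + alpha) / gamma(2 + alpha))
    print(caputo_derivative(lambda s: s, t, alpha), t ** (1 - alpha) / gamma(2 - alpha))
```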

Lemma 1

(Gronwall–Bellman inequality) Assume \(M\ge 0\) and that the nonnegative functions \(u(t),v(t)\in C([t_{0},t_{1}])\) satisfy \(u(t)\le M+\int _{t_{0}}^{t}u(s)v(s)\text {d}s,t\in [t_{0},t_{1}]\); then

$$\begin{aligned} u(t) \le M\exp \left\{ {\int _{{t_{0} }}^{{t}} v (s){\text {d}}s} \right\} . \end{aligned}$$

Lemma 2

(\(C_{p}\) inequality) For \(\alpha \in (0,1)\) and any \(z_{k}\in \mathbb {R}_{+},k=1,2,\cdots ,n\), it holds that

$$\begin{aligned} \sum _{k=1}^{n}|z_{k}|^{\alpha }\le n^{1-\alpha } \left( \sum _{k=1}^{n}|z_{k}|\right) ^{\alpha }. \end{aligned}$$

Lemma 3

(Young’s inequality) For any \(g,h\in \mathbb {R}_{+}\) and \(x,z\in \mathbb {R}\), it holds that

$$\begin{aligned} |x|^{g}|z|^{h}\le \frac{g}{g+h}|x|^{g+h}+\frac{h}{g+h}|z|^{g+h}. \end{aligned}$$
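As a quick numerical sanity check (our illustration only), Lemmas 2 and 3 can be verified on random positive data:

```python
# Quick numerical sanity check of Lemmas 2 and 3 (our illustration only).
import numpy as np

rng = np.random.default_rng(0)
alpha, g, h = 0.9, 2.0, 3.0
z = rng.random(5)                    # z_k > 0, k = 1,...,n with n = 5
x, y = rng.random(2)

# Lemma 2 (C_p inequality)
lhs_cp = np.sum(np.abs(z) ** alpha)
rhs_cp = len(z) ** (1 - alpha) * np.sum(np.abs(z)) ** alpha
# Lemma 3 (Young's inequality), with |x|^g |y|^h on the left-hand side
lhs_y = abs(x) ** g * abs(y) ** h
rhs_y = g / (g + h) * abs(x) ** (g + h) + h / (g + h) * abs(y) ** (g + h)

print(lhs_cp <= rhs_cp, lhs_y <= rhs_y)   # expected: True True
```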

Consider the following PSFS

$$\begin{aligned} \left\{ \begin{array}{lr} _{t_{0}}\mathcal {D}^{\alpha }_{t}z(t)=D_{1}A_{\delta (t)} z(t)+D_{2}B_{\delta {(t)}}u(t),&{}t\ne t_{k,i},\\ z(t)=D_{3}C_{\varrho (t)}z(t^{-}),&{}t=t_{k,i},\\ \end{array} \right. \end{aligned}$$
(1)

where \(z(t)\in \mathbb {R}^{n}\) and \(u(t)\in \mathbb {R}^{m}\) stand for the system state and control input, respectively. Let \(t_{0}=0\) be the initial time. \(\delta (t):[t_{0},+\infty )\mapsto S=\{1,2,\cdots ,s\}\) is the switching signal, and the impulsive signal \(\varrho (t)\) takes its values in \(H=\{1,2,\cdots ,h\}\), where s and \(h\in \mathbb {N}_{+}\) are the numbers of subsystems and impulse types, respectively. The perturbations satisfy \(D_{i}\in [\underline{D}_{i},{\overline{D}}_{i}]\) \((i=1,2,3)\), where \({\overline{D}}_{i}\succeq {\underline{D}}_{i}\succeq 0\) and the matrices \({\overline{D}}_{i},\underline{D}_{i}\) are diagonal. \(A_{p},B_{p}\) (\(\forall p\in S\)) and \(C_{r}\) (\(\forall r\in H\)) are constant matrices. \(t_{0},t_{1},\cdots ,t_{k},t_{k+1},\cdots \) is the sequence of switching instants over \([t_{0},\infty )\). For \(t\in [t_{k},t_{k+1})\), \(t_{k,i},i=0,1,2,\cdots ,g_{k},\) is the ith impulsive instant over \([t_{k},t_{k+1})\), satisfying \(t_{k}=t_{k,0}\le t_{k,1}<t_{k,2}<\cdots< t_{k,g_{k}}<t_{k+1}\).

Remark 1

In system (1), \(\delta (t)\) and \(\varrho (t)\) are the switching signal and the impulsive signal, respectively. If they coincide (i.e., impulses occur only at the switching instants), then system (1) only captures impulses caused by subsystem switching. In this paper they are allowed to be different, that is, impulses can occur at any time. Thus, we consider a more general form of impulsive signal that includes the former as a special case.

Lemma 4

[15] The system (1) is positive \(\Longleftrightarrow \) \(\forall m\in S, r\in H,\) \(D_{1}A_{m}\) are Metzler matrices, \(D_{2}B_{m}\succeq 0\) and \(D_{3}C_{r}\succeq 0\).
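The conditions of Lemma 4 are straightforward to test numerically. The sketch below is our illustration; the helper names are hypothetical, and the sample matrices are borrowed from the closed-loop example in Sect. 4.

```python
# Sketch of the positivity test in Lemma 4 (our illustration): the drift
# matrices must be Metzler (nonnegative off-diagonal entries) and the input
# and impulse matrices must be entrywise nonnegative.
import numpy as np

def is_metzler(A, tol=0.0):
    """True if every off-diagonal entry of A is >= -tol."""
    off = A - np.diag(np.diag(A))
    return bool(np.all(off >= -tol))

def is_positive_system(drift_mats, input_mats, impulse_mats):
    """Check the conditions of Lemma 4 (stated there for D1*A_m, D2*B_m, D3*C_r)."""
    return (all(is_metzler(A) for A in drift_mats)
            and all(np.all(B >= 0) for B in input_mats)
            and all(np.all(C >= 0) for C in impulse_mats))

if __name__ == "__main__":
    A_hat1 = np.array([[-1.15, 0.61], [0.43, -0.70]])   # closed-loop matrix from Sect. 4
    B1 = np.array([[20.0, 0.0], [0.0, 0.5]])
    C1 = 1.59 * np.eye(2)
    print(is_positive_system([A_hat1], [B1], [C1]))     # True
```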

Let \(\widetilde{A}_{m}=\overline{D}_{1}A_{m},\widetilde{B}_{m}= \overline{D}_{2}B_{m},\widetilde{C}_{r}=\overline{D}_{3}C_{r}\). Then consider the following system:

$$\begin{aligned} \left\{ \begin{array}{lr} _{t_{0}}\mathcal {D}^{\alpha }_{t}z(t)={\widetilde{A}}_{\delta (t)} z(t)+{\widetilde{B}}_{\delta {(t)}}u(t),&{}t\ne t_{k,i},\\ z(t)={\widetilde{C}}_{\varrho (t)}z(t^{-}),&{}t=t_{k,i}.\\ \end{array} \right. \end{aligned}$$
(2)

Remark 2

According to Lemma 4, the system (2) is positive \(\Longleftrightarrow \) \(\forall m\in S, r\in H,\) \({\widetilde{A}}_{m}\) are Metzler matrices, \({\widetilde{B}}_{m}\succeq 0\) and \(\widetilde{C}_{r}\succeq 0\). Moreover, since system (2) is obtained from (1) by taking the upper bounds \(\overline{D}_{i}\) of the D-disturbance, the stability of system (1) can be obtained by studying system (2).

Now, we present the \(\Phi \)DADT switching strategy required for this article. Let \(\mathcal {O}=\{1,2,\cdots ,u\}\), where \(u\in \mathbb {N}_{+}\) and \(u\le s\). Define the mapping \(\Phi :S\mapsto \mathcal {O}\) to be surjective and \(\Phi _{i}=\{p\in S|\Phi (p)=i\}\).

Definition 3

[22] For a given switching signal \(\delta (t)\) and any \(t_{2}\ge t_{1}\ge 0\), let \(N_{\delta \Phi _{i}}(t_{1},t_{2})\) denote the total number of times the subsystem group \(\Phi _{i}\) is activated over \([t_{1},t_{2})\), and let \(T_{\Phi _{i}}(t_{1},t_{2})\) denote the total running time of the subsystem group \(\Phi _{i}\) over \([t_{1},t_{2})\). If there exist positive constants \(N_{0\Phi _{i}}\) and \(\tau _{l\Phi _{i}}\) such that

$$\begin{aligned} N_{\delta \Phi _{i}}(t_{1},t_{2})\le N_{0\Phi _{i}} +\frac{T_{\Phi _{i}}(t_{1},t_{2})}{\tau _{l\Phi _{i}}}, \end{aligned}$$

then we say \(\delta (t)\) has a \(\Phi \)DADT \(\tau _{l\Phi _{i}}\).

Definition 4

[16] For an impulsive signal \(\varrho (t)\) and any \(r_{2}\ge r_{1}\ge 0\), let \(M_{\varrho }(r_{1},r_{2})\) denote the number of impulses over the interval \([r_{1},r_{2})\), and let \(T(r_{1},r_{2})\) denote the length of the interval \([r_{1},r_{2})\). If there exist positive constants \(M_{0}\) and \(T_{l}\) such that

$$\begin{aligned} M_{\varrho }(r_{1},r_{2})\le M_{0}+\frac{T(r_{1},r_{2})}{T_{l}}, \end{aligned}$$

then \(\varrho (t)\) is said to have an average impulsive interval \(T_{l}\).
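Both definitions can be checked directly for a concrete schedule. In the sketch below (ours; the schedule, the grouping map, and the function names are illustrative assumptions), the additive constants \(N_{0\Phi _{i}}\) and \(M_{0}\) are taken as zero.

```python
# Small sketch (ours, not from the paper) checking Definitions 3 and 4 for
# concrete signals.  A switching signal is a list of (switching instant,
# active mode) pairs starting at t0; Phi maps each mode to its group.
def phi_dadt_ok(switches, horizon, Phi, tau):
    """Check N_{delta Phi_i}(0,T) <= T_{Phi_i}(0,T)/tau[i] for every group i
       (the additive constant N_{0 Phi_i} is taken as zero)."""
    times = [t for t, _ in switches] + [horizon]
    ok = True
    for i in set(Phi.values()):
        N_i = sum(1 for _, p in switches[1:] if Phi[p] == i)            # activations of group i
        T_i = sum(times[k + 1] - times[k]
                  for k, (_, p) in enumerate(switches) if Phi[p] == i)  # running time of group i
        ok = ok and (N_i <= T_i / tau[i])
    return ok

def avg_impulse_interval_ok(impulse_times, horizon, T_l):
    """Check M_rho(0,T) <= T/T_l of Definition 4 (with M_0 = 0)."""
    return len(impulse_times) <= horizon / T_l

if __name__ == "__main__":
    switches = [(0.0, 1), (2.5, 2), (4.0, 3), (6.0, 1)]       # (instant, mode); (0.0, 1) is the start
    Phi = {1: 1, 2: 1, 3: 2}                                  # grouping map Phi: S -> O
    print(phi_dadt_ok(switches, horizon=8.0, Phi=Phi, tau={1: 2.0, 2: 1.5}))   # True
    print(avg_impulse_interval_ok([1.0, 3.0, 5.0, 7.0], horizon=8.0, T_l=2.0)) # True
```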

Remark 3

Ordinarily the values of \(N_{0\Phi _{i}}\) and \(M_{0}\) do not affect the conclusion, so this paper sets \(N_{0\Phi _{i}}=0\) and \(M_{0}=0\) for the convenience of calculation.

Definition 5

[15] For a given time constant \(T_{f}\) and two vectors \(\sigma \succ \varepsilon \succ 0\), the system (2) is said to be finite time stable (FTS) with respect to \((\sigma ,\varepsilon ,T_{f},\delta (t))\), if

$$\begin{aligned} z^{\textrm{T}}(t_{0})\sigma \le 1\Rightarrow z^{\textrm{T}}(t)\varepsilon \le 1,\forall t\in [t_{0},T_{f}]. \end{aligned}$$

Finally, the cost function considered in this paper is given as follows:

$$\begin{aligned} \begin{aligned} J&=_{t_{0}}I^{\alpha }_{t}\left( z^{\textrm{T}}(s)R_{1}+u^{\textrm{T}}(s)R_{2}\right) \\&=\frac{1}{\Gamma (\alpha )}\int _{t_{0}}^{t}(t-s)^{\alpha -1} \left( z^{\textrm{T}}(s)R_{1}+u^{\textrm{T}}(s)R_{2}\right) \text {d}s. \end{aligned} \end{aligned}$$
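Since J is itself a fractional integral, it can be evaluated from sampled trajectories with the same product-rule idea as in the sketch after Definition 2. The following minimal sketch is ours; the toy trajectory and the sampling grid are purely illustrative, and \(R_{1},R_{2}\) are stored as vectors.

```python
# Sketch (ours) of a discrete evaluation of the cost J at t = T_f from sampled
# trajectories; the weakly singular kernel is integrated exactly on each
# sub-interval, as in the fractional-integral sketch after Definition 2.
import numpy as np
from scipy.special import gamma

def cost_J(z, u, R1, R2, Tf, alpha):
    """z, u: arrays of shape (n+1, dim) sampled on a uniform grid over [0, Tf];
       R1, R2: the weighting vectors of the cost function."""
    n = z.shape[0] - 1
    h = Tf / n
    j = np.arange(n)
    w = (n - j) ** alpha - (n - j - 1) ** alpha        # kernel weights (scaled by h^alpha below)
    integrand = z[:-1] @ R1 + u[:-1] @ R2              # z^T(s) R1 + u^T(s) R2, left-endpoint samples
    return h ** alpha * np.sum(w * integrand) / gamma(alpha + 1.0)

if __name__ == "__main__":
    Tf, alpha, n = 8.0, 0.9, 800
    s = np.linspace(0.0, Tf, n + 1)
    z = np.column_stack([np.exp(-s), np.exp(-0.5 * s)])  # a toy decaying trajectory
    u = np.zeros_like(z)
    print(cost_J(z, u, R1=np.array([0.7, 1.0]), R2=np.zeros(2), Tf=Tf, alpha=alpha))
```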

3 Main Results

3.1 Guaranteed Cost Finite-Time Stability Analysis

This part gives a conclusion on the guaranteed cost FTS of the PSFS (2) with \(u(t)=0\) under \(\Phi \)DADT switching.

Theorem 1

Consider the system (2) with average impulsive interval \(T_{l}\). Suppose that there exist positive constants \(\zeta _{1},\zeta _{2}\) \((\zeta _{1}>\zeta _{2})\), \(T_{i}\), \(\lambda _{i}>1\), \(\mu _{i}\ge 1\), \(i\in \mathcal {O}\), \(\omega \ge 1\), positive vectors \(v_{p},p\in S\), and vectors \(\sigma \succ \varepsilon \succ 0\), \(R_{1}\succ 0\), such that, for all \(p,q\in S\) and \(r\in H\) with \(\Phi (p)=i\),

$$\begin{aligned}&{\widetilde{A}}_{p}^{\textrm{T}}v_{p}+R_{1}-\lambda _{i}v_{p}\preceq 0, \end{aligned}$$
(3)
$$\begin{aligned}&\zeta _{1}\varepsilon \prec v_{p}\prec \zeta _{2}\sigma , \end{aligned}$$
(4)
$$\begin{aligned}&{\widetilde{C}}_{r}^{\textrm{T}}v_{p}-\omega v_{p}\prec 0, \end{aligned}$$
(5)
$$\begin{aligned}&v_{p}-\mu _{i}v_{q}\preceq 0, \end{aligned}$$
(6)
$$\begin{aligned}&v_{p}\prec R_{1}, \end{aligned}$$
(7)
$$\begin{aligned}&\ln \zeta _{1}-\ln \zeta _{2}>\frac{T_{i}\ln \omega }{T_{l}}+ \frac{(\lambda _{i}-1)(1-\alpha )T_{i}}{\Gamma (1+\alpha )T_{l}} +\frac{(\lambda _{i}-1)(1+\alpha T_{i}-\alpha )}{\Gamma (1+\alpha )}, \end{aligned}$$
(8)

then the system (2) with \(u(t)=0\) is FTS with respect to \((\sigma ,\varepsilon ,T_{f},\delta (t))\) under the \(\Phi \)DADT satisfying

$$\begin{aligned} \tau _{l\Phi _{i}}>\tau ^{*}_{l\Phi _{i}}=\frac{T_{i}(\ln \mu _{i} +\frac{(\lambda _{i}-1)(1-\alpha )}{\Gamma (\alpha +1)})}{\ln \frac{\zeta _{1}}{\zeta _{2}}-\frac{T_{i}\ln \omega }{T_{l}}- \frac{(\lambda _{i}-1)(1-\alpha )T_{i}}{\Gamma (\alpha +1)T_{l}} -\frac{(\lambda _{i}-1)(1-\alpha +\alpha T_{i})}{\Gamma (\alpha +1)}}, \end{aligned}$$
(9)

and the system cost satisfies

$$\begin{aligned} \begin{aligned} J&=\frac{1}{\Gamma (\alpha )}\int _{t_{0}}^{t}(t-s)^{\alpha -1}\left( z^{\textrm{T}}(s)R_{1}\right) \text {d}s\\&\le J^{*}=\omega ^{\frac{T_{f}}{T_{l}}}\mu ^{\frac{T_{f}}{\tau _{l}}}\zeta _{2} +\frac{\zeta _{2}\lambda \omega ^{\frac{T_{f}}{T_{l}}} \mu ^{\frac{T_{f}}{\tau _{l}}}}{\Gamma (\alpha +1)} \left[ (1-\alpha )\left( \frac{T_{f}}{T_{l}}+\frac{T_{f}}{\tau _{l}}+1\right) +\alpha T_{f}\right] \\&\quad \times \exp \left\{ {\frac{{T_{f} }}{{T_{l} }}\ln \omega + \frac{{T_{f} }}{{\tau _{l} }}\ln \mu + \frac{{(\lambda - 1)}}{{\Gamma (\alpha + 1)}}[(1 - \alpha )\left( {\frac{{T_{f} }}{{T_{l} }} + \frac{{T_{f} }}{{\tau _{l} }} + 1} \right) + \alpha T_{f} ]} \right\} .\\ \end{aligned} \end{aligned}$$
(10)

Proof

Firstly, we prove the FTS of the PSFS (2).

Constructing the candidate switching Lyapunov function

$$\begin{aligned} V_{\delta (t)}(t,z(t))=z^{\textrm{T}}(t)v_{\delta (t)}=z^{\textrm{T}} (t)v_{p},v_{p}\in \mathbb {R}^{n}_{+},\forall p\in S. \end{aligned}$$
(11)

For \(t\in [t_{k,g_{k}},t_{k+1})\), it follows from (2) that

$$\begin{aligned} z^{\textrm{T}}(t)R_{1}+ _{t_{k,g_{k}}}\mathcal {D}^{\alpha }_{t} V_{\delta (t)}(t,z(t))=z^{\textrm{T}}(t)\left( {\widetilde{A}}^{\textrm{T}}_{p}v_{p}+R_{1}\right) . \end{aligned}$$
(12)

Then, combining (3), one gets

$$\begin{aligned} z^{\textrm{T}}(t)R_{1}+ _{t_{k,g_{k}}}\mathcal {D}^{\alpha }_{t} V_{\delta (t)}(t,z(t))\le \lambda _{i}z^{\textrm{T}}(t)v_{p} =\lambda _{i}V_{\delta (t)}(t,z(t)). \end{aligned}$$
(13)

When \(t=t_{k,g_{k}}\), from (2) and (5),

$$\begin{aligned} V_{\delta (t)}(t,z(t))&=V_{\delta (t_{k,g_{k}})}(t_{k,g_{k}},z(t_{k,g_{k}}))=z^{\textrm{T}}(t_{k,g_{k}})v_{\delta (t_{k,g_{k}})}=z^{\textrm{T}}(t^{-}_{k,g_{k}})C^{\textrm{T}}_{\varrho (t_{k,g_{k}})}v_{\delta (t_{k,g_{k}})}\nonumber \\&<\omega z^{\textrm{T}}(t^{-}_{k,g_{k}})v_{\delta (t_{k,g_{k}})}=\omega z^{\textrm{T}}(t^{-}_{k,g_{k}})v_{\delta (t^{-}_{k,g_{k}})}=\omega V_{\delta (t^{-}_{k,g_{k}})}(t^{-}_{k,g_{k}},z(t^{-}_{k,g_{k}})). \end{aligned}$$
(14)

If \(t=t_{k}\), from (6), one has

$$\begin{aligned} V_{\delta (t)}(t,z(t))&=V_{\delta (t_{k})}(t_{k},z(t_{k}))=z^{\textrm{T}}(t_{k})v_{\delta (t_{k})}\le \mu _{\Phi (\delta (t_{k}))}z^{\textrm{T}}(t_{k})v_{\delta (t^{-}_{k})}\nonumber \\&= \mu _{\Phi (\delta (t_{k}))}z^{\textrm{T}}(t^{-}_{k})v_{\delta (t^{-}_{k})}=\mu _{\Phi (\delta (t_{k}))}V_{\delta (t^{-}_{k})}(t^{-}_{k},z(t^{-}_{k})). \end{aligned}$$
(15)

For \(t\in [t_{k,g_{k}},t_{k+1})\), taking the integral of order \(\alpha \) on both sides of (13) over the interval \([t_{k,g_{k}},t]\) yields

$$\begin{aligned} V_{\delta (t)}(t,z(t))&\le V_{\delta (t_{k,g_{k}})}(t_{k,g_{k}}, z(t_{k,g_{k}}))+\frac{\lambda _{\Phi (\delta (t_{k,g_{k}}))}}{\Gamma (\alpha )} \int ^{t}_{t_{k,g_{k}}}(t-s)^{\alpha -1}V_{\delta (t)}(s,z(s))\text {d}s\nonumber \\&\quad -\frac{1}{\Gamma (\alpha )}\int _{t_{k,g_{k}}}^{t}(t-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s. \end{aligned}$$
(16)

Together with (7) and (11), (16) can be rewritten as

$$\begin{aligned} V_{\delta (t)}(t,z(t))&\le V_{\delta (t_{k,g_{k}})}(t_{k,g_{k}}, z(t_{k,g_{k}}))+\frac{\lambda _{\Phi (\delta (t_{k,g_{k}}))}}{\Gamma (\alpha )} \int ^{t}_{t_{k,g_{k}}}(t-s)^{\alpha -1}V_{\delta (t)}(s,z(s))\text {d}s\nonumber \\&\quad -\frac{1}{\Gamma (\alpha )}\int _{t_{k,g_{k}}}^{t}(t-s)^{\alpha -1} z^{\textrm{T}}(s)v_{\delta (t)}\text {d}s=V_{\delta (t_{k,g_{k}})}(t_{k,g_{k}},z(t_{k,g_{k}}))\nonumber \\&\quad + \frac{\lambda _{\Phi (\delta (t_{k,g_{k}}))}-1}{\Gamma (\alpha )} \int ^{t}_{t_{k,g_{k}}}(t-s)^{\alpha -1}V_{\delta (t)}(s,z(s))\text {d}s. \end{aligned}$$
(17)

By the Gronwall–Bellman inequality (Lemma 1),

$$\begin{aligned} \begin{aligned} V_{\delta (t)}(t,z(t))&\le V_{\delta (t_{k,g_{k}})}(t_{k,g_{k}},z(t_{k,g_{k}})) \exp \left\{ {\frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} }} ))}} - 1}}{{\Gamma (\alpha )}}\int _{{t_{{k,g_{k} }} }}^{t} {(t - s)^{{\alpha - 1}} } {\text {d}}s} \right\} \\&=V_{\delta (t_{k,g_{k}})}(t_{k,g_{k}},z(t_{k,g_{k}}))\exp \left\{ {\frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} }} ))}} - 1}}{{\alpha \Gamma (\alpha )}}\left( {t - t_{{k,g_{k} }} } \right) ^{\alpha } } \right\} \\&=V_{\delta (t_{k,g_{k}})}(t_{k,g_{k}},z(t_{k,g_{k}}))\exp \left\{ {\frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} }} ))}} - 1}}{{\Gamma (\alpha + 1)}}\left( {t - t_{{k,g_{k} }} } \right) ^{\alpha } } \right\} . \end{aligned} \end{aligned}$$
(18)

Then for \(t\in [t_{0},T_{f}]\), by (14) and (18),

$$\begin{aligned} V_{\delta (t)}(t,z(t))&\le V_{\delta (t_{k,g_{k}})}(t_{k,g_{k}},z(t_{k,g_{k}})) \exp \left\{ {\frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} }} ))}} - 1}}{{\Gamma (\alpha + 1)}}\left( {t - t_{{k,g_{k} }} } \right) ^{\alpha } } \right\} \nonumber \\&\le \omega V_{\delta (t^{-}_{k,g_{k}})}(t^{-}_{k,g_{k}},z(t^{-}_{k,g_{k}})) \exp \left\{ {\frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} }} ))}} - 1}}{{\Gamma (\alpha + 1)}}\left( {t - t_{{k,g_{k} }} } \right) ^{\alpha } } \right\} \nonumber \\&\le \omega \left[ {V_{\delta (t_{{k,g_{k} - 1}} )}} (t_{{k,g_{k} - 1}} ,z(t_{{k,g_{k} - 1}} ))\right. \nonumber \\&\quad \times \exp \left. \left\{ {\frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} - 1}} ))}} - 1}}{{\Gamma (\alpha + 1)}}(t_{{k,g_{k} }} - t_{{k,g_{k} - 1}} )^{\alpha } } \right\} \right] \nonumber \\&\quad \times \exp \left\{ {\frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} }} ))}} - 1}}{{\Gamma (\alpha + 1)}}(t - t_{{k,g_{k} }} )^{\alpha } } \right\} \nonumber \\&\le \omega \omega V_{\delta (t^{-}_{k,g_{k}-1})}(t^{-}_{k,g_{k}-1},z(t^{-}_{k,g_{k}-1}))\nonumber \\&\exp \left\{ {\frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} }} ))}} - 1}}{{\Gamma (\alpha + 1)}}(t - t_{{k,g_{k} }} )^{\alpha } + \frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} - 1}} ))}} - 1}}{{\Gamma (\alpha + 1)}}(t_{{k,g_{k} }} - t_{{k,g_{k} - 1}} )^{\alpha } } \right\} \nonumber \\&\le \cdots \nonumber \\&\le \omega ^{{M_{{\varrho }} (t_{k} ,t)}} V_{{\delta (t_{k} )}} (t_{k} ,z(t_{k} ))\exp \left\{ {\frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} }} ))}} - 1}}{{\Gamma (\alpha + 1)}}(t - t_{{k,g_{k} }} )^{\alpha } } \right. \nonumber \\&\quad \left. { + \frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} - 1}} ))}} - 1}}{{\Gamma (\alpha + 1)}}(t_{{k,g_{k} }} - t_{{k,g_{k} - 1}} )^{\alpha } \!+\! \cdots \! +\! \frac{{\lambda _{{\Phi (\delta (t_{k} ))}} \!-\! 1}}{{\Gamma (\alpha \!+\! 1)}}(t_{{k,1}} \!-\! t_{k} )^{\alpha } } \right\} . \end{aligned}$$
(19)

Again according to (15), we can obtain

$$\begin{aligned} V_{\delta (t)}(t,z(t))&\le \omega ^{M_{\varrho }(t_{k},t)}\mu _{\Phi (\delta (t_{k}))} V_{\delta (t^{-}_{k})}(t^{-}_{k},z(t^{-}_{k})) \exp \left\{ {\frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} }} ))}} - 1}}{{\Gamma (\alpha + 1)}}(t - t_{{k,g_{k} }} )^{\alpha } } \right. \nonumber \\&\quad \left. { + \frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} - 1}} ))}} - 1}}{{\Gamma (\alpha + 1)}}(t_{{k,g_{k} }} - t_{{k,g_{k} - 1}} )^{\alpha } \!+\! \cdots \!+\! \frac{{\lambda _{{\Phi (\delta (t_{k} ))}} \! -\! 1}}{{\Gamma (\alpha \!+\! 1)}}(t_{{k,1}} \!-\! t_{k} )^{\alpha } } \right\} \nonumber \\&\quad \times \omega ^{{M_{{\varrho }} (t_{k} ,t)}} \mu _{{\Phi (\delta (t_{k} ))}} \left[ {V_{{\delta (t_{{k - 1,g_{k} - 1}} )}} (t_{{k - 1,g_{k} - 1}} ,z(t_{{k - 1,g_{k} - 1}} ))} \right. \nonumber \\&\quad \times \left. {\exp \left\{ {\frac{{\lambda _{{\Phi (\delta (t_{{k - 1,g_{k} - 1}} ))}} - 1}}{{\Gamma (\alpha + 1)}}(t_{k} - t_{{k - 1,g_{k} - 1}} )^{\alpha } } \right\} } \right] \nonumber \\&\quad \times \exp \left\{ {\frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} }} ))}} - 1}}{{\Gamma (\alpha + 1)}}(t - t_{{k,g_{k} }} )^{\alpha } } \right. \nonumber \\&\quad \left. { + \frac{{\lambda _{{\Phi (\delta (t_{{k,g_{k} - 1}} ))}} - 1}}{{\Gamma (\alpha + 1)}}(t_{{k,g_{k} }} - t_{{k,g_{k} - 1}} )^{\alpha } \!+\! \cdots \!+\! \frac{{\lambda _{{\Phi (\delta (t_{k} ))}} \!-\! 1}}{{\Gamma (\alpha \!+\! 1)}}(t_{{k,1}} \!- \!t_{k} )^{\alpha } } \right\} \nonumber \\&<\cdots \nonumber \\&\le \left( \prod ^{s}_{i=1}\omega ^{M_{i}}\right) \left( \prod ^{s}_{i=1} \mu _{i}^{N_{i}}\right) V_{\delta (t_{0})}(t_{0},z(t_{0}))\nonumber \\&\quad \times \exp \left\{ \frac{\sum ^{s}_{i=1}[(\lambda _{i}-1)\sum _{\delta (t_{k,m}) \in \Phi _{i}}(t_{k,m+1}-t_{k,m})^{\alpha }]}{\Gamma (\alpha +1)}\right\} , \end{aligned}$$
(20)

where \(M_{i}\triangleq M_{\varrho \Phi _{i}}(t_{0},T_{f}),N_{i} \triangleq N_{\delta \Phi _{i}}(t_{0},T_{f})\), namely \(\sum ^{s}_{i=1}M_{i}=M_{\varrho },\sum ^{s}_{i=1}N_{i}=N_{\delta }\).

From Lemmas 2 and 3, (20) can be rewritten as

$$\begin{aligned} V_{\delta (t)}(t,z(t))&\le \left( \prod ^{s}_{i=1}\omega ^{M_{i}}\right) \left( \prod ^{s}_{i=1}\mu _{i}^{N_{i}}\right) V_{\delta (t_{0})}(t_{0},z(t_{0}))\nonumber \\&\quad \times \exp \left\{ {\frac{{\sum \limits _{{i = 1}}^{s} {(\lambda _{i} - 1)} }}{{\Gamma (\alpha + 1)}}\left[ {(M_{i} + N_{i} + 1)^{{1 - \alpha }} (T_{i} )^{\alpha } } \right] } \right\} \nonumber \\&\le \left( e^{\sum ^{s}_{i=1}{M_{i}}\ln {\omega }}\right) \left( e^{\sum ^{s}_{i=1}{N_{i}}\ln {\mu _{i}}}\right) V_{\delta (t_{0})}(t_{0},z(t_{0}))\nonumber \\&\quad \times \exp \left\{ {\frac{{\sum \limits _{{i = 1}}^{s} {(\lambda _{i} - 1)} }}{{\Gamma (\alpha + 1)}}\left[ {(1 - \alpha )(M_{i} + N_{i} + 1) + \alpha T_{i} } \right] } \right\} , \end{aligned}$$
(21)

where \(T_{i}\triangleq T_{\Phi _{i}}(t_{0},T_{f})\), namely \(\sum ^{s}_{i=1}T_{i}=T_{f}\). Then from Definitions 3 and 4, we have

$$\begin{aligned} \begin{aligned} V_{\delta (t)}(t,z(t))&\le \left( e^{\sum ^{s}_{i=1}\frac{\ln \omega }{T_{l}}T_{i}}\right) \left( e^{\sum ^{s}_{i=1}\frac{\ln {\mu _{i}}}{\tau _{l\Phi _{i}}} T_{i}}\right) V_{\delta (t_{0})}(t_{0},z(t_{0}))\\&\quad \times \exp \left\{ {\frac{{\sum \limits _{{i = 1}}^{s} {(\lambda _{i} - 1)} }}{{\Gamma (\alpha + 1)}}\left[ {(1 - \alpha )\left( {\frac{{T_{i} }}{{T_{l} }} + \frac{{T_{i} }}{{\tau _{{l\Phi _{i} }} }} + 1} \right) + \alpha T_{i} } \right] } \right\} ,\\&\le V_{{\delta (t_{0} )}} (t_{0} ,z(t_{0} ))\exp \left\{ {\sum \limits _{{i = 1}}^{s} {\frac{{\ln \omega }}{{T_{l} }}} T_{i} + \sum \limits _{{i = 1}}^{s} {\frac{{\ln \mu _{i} }}{{\tau _{{l\Phi _{i} }} }}} T_{i} } \right. \\&\quad \left. { + \,\frac{{\sum \limits _{{i = 1}}^{s} {(\lambda _{i} - 1)} }}{{\Gamma (\alpha + 1)}}\left[ {(1 - \alpha )\left( {\frac{{T_{i} }}{{T_{l} }} + \frac{{T_{i} }}{{\tau _{{l\Phi _{i} }} }} + 1} \right) + \alpha T_{i} } \right] } \right\} . \end{aligned} \end{aligned}$$
(22)

From (4) and (11), it further follows that

$$\begin{aligned} V_{\delta (t)}(t,z(t))\ge \zeta _{1}z^{\textrm{T}}(t)\varepsilon , \end{aligned}$$
(23)

and

$$\begin{aligned} V_{\delta (t)}(t,z(t))&\le \zeta _{2}\{z^{\textrm{T}}(t_{0})\sigma \}\exp \left\{ \sum \limits _{{i = 1}}^{s} {\frac{{\ln \omega }}{{T_{l} }}} T_{i} + \sum \limits _{{i = 1}}^{s} {\frac{{\ln \mu _{i} }}{{\tau _{{l\Phi _{i} }} }}} T_{i} + \frac{{\sum \limits _{{i = 1}}^{s} {(\lambda _{i} - 1)} }}{{\Gamma (\alpha + 1)}}\right. \nonumber \\&\quad \times \left. \left[ {(1 - \alpha )\left( {\frac{{T_{i} }}{{T_{l} }} + \frac{{T_{i} }}{{\tau _{{l\Phi _{i} }} }} + 1} \right) + \alpha T_{i} } \right] \right\} . \end{aligned}$$
(24)

Combining (23) with (24) yields

$$\begin{aligned} z^{\textrm{T}}(t)\varepsilon&\le \frac{\zeta _{2}}{\zeta _{1}}\{z^{\textrm{T}}(t_{0})\sigma \}\exp \left\{ \sum \limits _{{i = 1}}^{s} {\frac{{\ln \omega }}{{T_{l} }}} T_{i} + \sum \limits _{{i = 1}}^{s} {\frac{{\ln \mu _{i} }}{{\tau _{{l\Phi _{i} }} }}} T_{i} + \frac{{\sum \limits _{{i = 1}}^{s} {(\lambda _{i} - 1)} }}{{\Gamma (\alpha + 1)}}\right. \nonumber \\&\quad \times \left. \left[ {(1 - \alpha )\left( {\frac{{T_{i} }}{{T_{l} }} + \frac{{T_{i} }}{{\tau _{{l\Phi _{i} }} }} + 1} \right) + \alpha T_{i} } \right] \right\} . \end{aligned}$$
(25)

Substituting (9) into (25) and using \(z^{\textrm{T}}(t_{0})\sigma \le 1\), we get

$$\begin{aligned} \begin{aligned} z^{\textrm{T}}(t)\varepsilon&\le 1. \end{aligned} \end{aligned}$$
(26)

Therefore, by Definition 5, system (2) is FTS with respect to \((\sigma ,\varepsilon ,T_{f},\delta (t))\).

Secondly, we prove the guaranteed cost bound for the PSFS (2).

According to (14) and (16), we can derive

$$\begin{aligned} V_{\delta (t)}(t,z(t))&\le \omega V_{\delta (t^{-}_{k,g_{k}})}(t^{-}_{k,g_{k}},z(t^{-}_{k,g_{k}})) \!+\!\frac{\lambda _{\Phi (\delta (t_{k,g_{k}}))}}{\Gamma (\alpha )} \int ^{t}_{t_{k,g_{k}}}(t\!-\!s)^{\alpha -1}V_{\delta (t)}(s,z(s))\text {d}s\nonumber \\&\quad -\frac{1}{\Gamma (\alpha )}\int _{t_{k,g_{k}}}^{t}(t-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s\nonumber \\&\le \omega [V_{\delta (t_{k,g_{k}-1})}(t_{k,g_{k}-1}, z(t_{k,g_{k}-1})) +\frac{\lambda _{\Phi (\delta (t_{k,g_{k}-1}))}}{\Gamma (\alpha )} \int ^{t_{k,g_{k}}}_{t_{k,g_{k}-1}}(t_{k,g_{k}}-s)^{\alpha -1}\nonumber \\&\quad \times V_{\delta (t_{k,g_{k}})}(s,z(s))\text {d}s -\frac{1}{\Gamma (\alpha )}\int ^{t_{k,g_{k}}}_{t_{k,g_{k}-1}} (t_{k,g_{k}}-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s]\nonumber \\&\quad +\frac{\lambda _{\Phi (\delta (t_{k,g_{k}}))}}{\Gamma (\alpha )} \int ^{t}_{t_{k,g_{k}}}(t-s)^{\alpha -1}V_{\delta (t)}(s,z(s))\text {d}s\nonumber \\&\quad -\frac{1}{\Gamma (\alpha )}\int _{t_{k,g_{k}}}^{t}(t-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s\nonumber \\&\le \omega [\omega V_{\delta (t^{-}_{k,g_{k}-1})}(t^{-}_{k,g_{k}-1},z(t^{-}_{k,g_{k}-1}))]\nonumber \\&\quad +\frac{\omega \lambda _{\Phi (\delta (t_{k,g_{k}-1}))}}{\Gamma (\alpha )} \int ^{t_{k,g_{k}}}_{t_{k,g_{k}-1}}(t_{k,g_{k}}-s)^{\alpha -1} V_{\delta (t_{k,g_{k}})}(s,z(s))\text {d}s\nonumber \\&\quad -\frac{\omega }{\Gamma (\alpha )} \int ^{t_{k,g_{k}}}_{t_{k,g_{k}-1}}(t_{k,g_{k}}-s)^{\alpha -1} z^{\textrm{T}}(s)R_{1}\text {d}s\nonumber \\&\quad +\frac{\lambda _{\Phi (\delta (t_{k,g_{k}}))}}{\Gamma (\alpha )} \int ^{t}_{t_{k,g_{k}}}(t-s)^{\alpha -1}V_{\delta (t)}(s,z(s))\text {d}s\nonumber \\&\quad -\frac{1}{\Gamma (\alpha )}\int ^{t}_{t_{k,g_{k}}} (t-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s\nonumber \\&\le \cdots \end{aligned}$$
(27)

Continuing the iteration,

$$\begin{aligned} V_{\delta (t)}(t,z(t))&\le \omega ^{M_{\varrho }(t_{k},t)}V_{\delta (t_{k})}(t_{k},z(t_{k})) +\frac{\omega ^{M_{\varrho }(t_{k},t)}\lambda _{\Phi (\delta (t_{k}))}}{\Gamma (\alpha )}\int ^{t_{k,1}}_{t_{k}}(t_{k,1}-s)^{\alpha -1}\nonumber \\&\quad \times V_{\delta (t_{k,1})}(s,z(s))\text {d}s- \frac{\omega ^{M_{\varrho }(t_{k},t)}}{\Gamma (\alpha )} \int ^{t_{k,1}}_{t_{k}}(t_{k,1}-s)^{\alpha -1}z^{\textrm{T}}(s) R_{1}\text {d}s+\cdots \nonumber \\&\quad +\frac{\lambda _{\Phi (\delta (t_{k,g_{k}}))}}{\Gamma (\alpha )} \int ^{t}_{t_{k,g_{k}}}(t-s)^{\alpha -1}V_{\delta (t)}(s,z(s))\text {d}s\nonumber \\&\quad -\frac{1}{\Gamma (\alpha )} \int ^{t}_{t_{k,g_{k}}}(t-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s\nonumber \\&\le \omega ^{M_{\varrho }(t_{k},t)}[\mu _{\Phi (\delta (t_{k}))} V_{\delta (t^{-}_{k})}(t^{-}_{k},z(t^{-}_{k}))]+\cdots \nonumber \\&\quad +\frac{\lambda _{\Phi (\delta (t_{k,g_{k}}))}}{\Gamma (\alpha )} \int ^{t}_{t_{k,g_{k}}}(t-s)^{\alpha -1}V_{\delta (t)}(s,z(s))\text {d}s\nonumber \\&\quad -\frac{1}{\Gamma (\alpha )}\int ^{t}_{t_{k,g_{k}}}(t-s)^{\alpha -1} z^{\textrm{T}}(s)R_{1}\text {d}s\nonumber \\&\le \omega ^{M_{\varrho }(t_{k},t)}\mu _{\Phi (\delta (t_{k}))} [V_{\delta (t_{k-1,g_{k}-1})}(t_{k-1,g_{k}-1},z(t_{k-1,g_{k}-1}))\nonumber \\&\quad +\frac{\lambda _{\Phi (\delta (t_{k-1,g_{k}-1}))}}{\Gamma (\alpha )} \int ^{t^{k}}_{t_{k-1,g_{k}-1}}(t_{k}-s)^{\alpha -1} V_{\delta (t_{k})}(s,z(s))\text {d}s\nonumber \\&\quad -\frac{1}{\Gamma (\alpha )}\int ^{t_{k}}_{t_{k-1,g_{k}-1}} (t_{k}-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}ds]+\cdots \nonumber \\&\quad +\frac{\lambda _{\Phi (\delta (t_{k,g_{k}}))}}{\Gamma (\alpha )} \int ^{t}_{t_{k,g_{k}}}(t-s)^{\alpha -1}V_{\delta (t)}(s,z(s))\text {d}s\nonumber \\&\quad -\frac{1}{\Gamma (\alpha )}\int ^{t}_{t_{k,g_{k}}} (t-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s\nonumber \\&\le \omega ^{M_{\varrho }(t_{k},t)}\mu _{\Phi (\delta (t_{k}))} V_{\delta (t_{k-1,g_{k}-1})}(t_{k-1,g_{k}-1},z(t_{k-1,g_{k}-1}))\nonumber \\&\quad +\!\frac{\omega ^{M_{\varrho }(t_{k},t)}\mu _{\Phi (\delta (t_{k}))} \lambda _{\Phi (\delta (t_{k-1,g_{k}-1}))}}{\Gamma (\alpha )} \int ^{t_{k}}_{t_{k-1,g_{k}-1}}\!\!(t_{k}\!-\!s)^{\alpha -1} V_{\delta (t_{k})}(s,z(s))\text {d}s\nonumber \\&\quad -\frac{\omega ^{M_{\varrho }(t_{k},t)} \mu _{\Phi (\delta (t_{k}))}}{\Gamma (\alpha )} \int ^{t_{k}}_{t_{k-1,g_{k}-1}}(t_{k}-s)^{\alpha -1} z^{\textrm{T}}(s)R_{1}\text {d}s+\cdots \nonumber \\&\quad +\frac{\lambda _{\Phi (\delta (t_{k,g_{k}}))}}{\Gamma (\alpha )} \int ^{t}_{t_{k,g_{k}}}(t-s)^{\alpha -1}V_{\delta (t)}(s,z(s))\text {d}s\nonumber \\&\quad -\frac{1}{\Gamma (\alpha )}\int ^{t}_{t_{k,g_{k}}} (t-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s\nonumber \\&\le \cdots \nonumber \\ \end{aligned}$$
(28)

Therefore, we further obtain

$$\begin{aligned} V_{\delta (t)}(t,z(t))&\le \left( \prod ^{s}_{i=1}\omega ^{M_{i}}\right) \left( \prod ^{s}_{i=1} \mu _{i}^{N_{i}}\right) V_{\delta (t_{0})}(t_{0},z(t_{0})) +\frac{(\prod ^{s}_{i=1}\omega ^{M_{i}})(\prod ^{s}_{i=1} \mu _{i}^{N_{i}})}{\Gamma (\alpha )}\nonumber \\&\quad \times \left( \sum ^{s}_{i=1}\lambda _{i}\right) \left[ \sum _{\delta (t_{k,m})\in \Phi _{i}} \int ^{t_{k,m+1}}_{t_{k,m}}(t_{k,m+1}-s)^{\alpha -1} V_{\delta (t_{k,m})}(s,z(s))\text {d}s\right] \nonumber \\&\quad -\frac{1}{\Gamma (\alpha )}\int ^{t}_{t_{0}} (t-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s.\nonumber \\ \end{aligned}$$
(29)

From (24), letting \(P=\zeta _{2}\{z^{\textrm{T}}(t_{0})\sigma \}\exp \{\sum ^{s}_{i=1} \frac{\ln \omega }{T_{l}}T_{i}+\sum ^{s}_{i=1} \frac{\ln {\mu _{i}}}{\tau _{l\Phi _{i}}}T_{i} +\frac{\sum ^{s}_{i=1}(\lambda _{i}-1)}{\Gamma (\alpha +1)}[(1-\alpha ) (\frac{T_{i}}{T_{l}}+\frac{T_{i}}{\tau _{l\Phi _{i}}}+1)+\alpha T_{i}]\}\), then we know \(V_{\delta (t)}(t,z(t))\le P\). Therefore, (29) can be rewritten as

$$\begin{aligned} V_{\delta (t)}(t,z(t))&\le \left( \prod ^{s}_{i=1}\omega ^{M_{i}}\right) \left( \prod ^{s}_{i=1} \mu _{i}^{N_{i}}\right) V_{\delta (t_{0})}(t_{0},z(t_{0}))\nonumber \\&\quad +\!\frac{(\prod ^{s}_{i=1}\omega ^{M_{i}})(\prod ^{s}_{i=1} \mu _{i}^{N_{i}})P}{\Gamma (\alpha )\alpha }\left( \sum ^{s}_{i=1}\lambda _{i}\right) \! \left[ \big (\!\sum _{\delta (t_{k,m})\in \Phi _{i}}\!\big )(t_{k,m+1}\!-\!t_{k,m})^{\alpha }\right] \nonumber \\&\quad -\frac{1}{\Gamma (\alpha )}\int ^{t}_{t_{0}} (t-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s. \end{aligned}$$
(30)

Using Lemmas 2 and 3,

$$\begin{aligned} V_{\delta (t)}(t,z(t))&\le \left( \prod ^{s}_{i=1}\omega ^{M_{i}}\right) \left( \prod ^{s}_{i=1} \mu _{i}^{N_{i}}\right) V_{\delta (t_{0})}(t_{0},z(t_{0})) +\frac{(\prod ^{s}_{i=1}\omega ^{M_{i}})(\prod ^{s}_{i=1} \mu _{i}^{N_{i}})P}{\Gamma (\alpha +1)}\nonumber \\&\quad \times \left( \sum ^{s}_{i=1}\lambda _{i}\right) [(M_{i}\!+\!N_{i}\!+\!1)^{1-\alpha } (T_{i})^{\alpha }]\!-\!\frac{1}{\Gamma (\alpha )} \int ^{t}_{t_{0}}(t\!-\!s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s\nonumber \\&\le \left( \prod ^{s}_{i=1}\omega ^{M_{i}}\right) \left( \prod ^{s}_{i=1} \mu _{i}^{N_{i}}\right) \zeta _{2}\{z^{\textrm{T}}(t_{0})\sigma \} +\frac{(\prod ^{s}_{i=1}\omega ^{M_{i}})(\prod ^{s}_{i=1} \mu _{i}^{N_{i}})P}{\Gamma (\alpha +1)}\nonumber \\&\quad \times \left( \sum ^{s}_{i=1}\lambda _{i}\right) [(1-\alpha )(M_{i} +N_{i}+1)+\alpha T_{i}]\nonumber \\&\quad -\frac{1}{\Gamma (\alpha )} \int ^{t}_{t_{0}}(t-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s. \end{aligned}$$
(31)

Noting that \(V_{\delta (t)}(t,z(t))>0\), we therefore have

$$\begin{aligned} J&=\frac{1}{\Gamma (\alpha )}\int ^{t}_{t_{0}} (t-s)^{\alpha -1}z^{\textrm{T}}(s)R_{1}\text {d}s\le \exp \left\{ \sum ^{s}_{i=1}\frac{\ln \omega }{T_{l}}T_{i}+\sum ^{s}_{i=1} \frac{\ln {\mu _{i}}}{\tau _{l\Phi _{i}}}T_{i}\right\} \zeta _{2}\nonumber \\&\quad +\frac{\exp \left\{ \sum ^{s}_{i=1}\frac{\ln \omega }{T_{l}}T_{i}+ \sum ^{s}_{i=1}\frac{\ln {\mu _{i}}}{\tau _{l\Phi _{i}}} T_{i}\right\} \zeta _{2}}{\Gamma (\alpha +1)} \left( \sum ^{s}_{i=1}\lambda _{i}\right) \left[ (1-\alpha )\left( \frac{T_{i}}{T_{l}} +\frac{T_{i}}{\tau _{l\Phi _{i}}}+1\right) +\alpha T_{i}\right] \nonumber \\&\quad \times \exp \left\{ {\sum \limits _{{i = 1}}^{s} {\frac{{\ln \omega }}{{T_{l} }}} T_{i} + \sum \limits _{{i = 1}}^{s} {\frac{{\ln \mu _{i} }}{{\tau _{{l\Phi _{i} }} }}} T_{i} + \frac{{\sum \limits _{{i = 1}}^{s} {(\lambda _{i} - 1)} }}{{\Gamma (\alpha + 1)}}\left[ {(1 - \alpha )\left( {\frac{{T_{i} }}{{T_{l} }} + \frac{{T_{i} }}{{\tau _{{l\Phi _{i} }} }} + 1} \right) + \alpha T_{i} } \right] } \right\} . \end{aligned}$$
(32)

Letting \(\lambda =\max _{i\in \mathcal {O}}\{\lambda _{i}\},\mu =\max _{i\in \mathcal {O}}\{\mu _{i}\},\tau _{l}=\min _{i\in \mathcal {O}}\{\tau _{l\Phi _{i}}\}\), we obtain

$$\begin{aligned} J&\le \exp \left\{ \sum ^{s}_{i=1}\frac{\ln \omega }{T_{l}}T_{i}+\sum ^{s}_{i=1} \frac{\ln {\mu }}{\tau _{l}}T_{i}\right\} \zeta _{2}+\frac{\exp \left\{ \sum ^{s}_{i=1}\frac{\ln \omega }{T_{l}}T_{i}+\sum ^{s}_{i=1} \frac{\ln {\mu }}{\tau _{l}}T_{i}\right\} \zeta _{2}}{\Gamma (\alpha +1)} \nonumber \\&\quad \times \left( \sum ^{s}_{i=1}\lambda \right) W_{i}\exp \left\{ {\sum \limits _{{i = 1}}^{s} {\frac{{\ln \omega }}{{T_{l} }}} T_{i} + \sum \limits _{{i = 1}}^{s} {\frac{{\ln \mu }}{{\tau _{l} }}} T_{i} + \frac{{\sum \limits _{{i = 1}}^{s} {(\lambda - 1)} }}{{\Gamma (\alpha + 1)}}W_{i} } \right\} \nonumber \\&\le \left( \omega ^{\frac{1}{T_{l}}}\mu ^{\frac{1}{\tau _{l}}}\right) ^{T_{f}}\zeta _{2} +\frac{\zeta _{2}\lambda \omega ^{\frac{T_{f}}{T_{l}}} \mu ^{\frac{T_{f}}{\tau _{l}}}}{\Gamma (\alpha +1)} W_{f}\exp \left\{ {\frac{{T_{f} }}{{T_{l} }}\ln \omega + \frac{{T_{f} }}{{\tau _{l} }}\ln \mu + \frac{{(\lambda - 1)}}{{\Gamma (\alpha + 1)}}W_{f} } \right\} ,\nonumber \\ \end{aligned}$$
(33)

where \(W_{i}\triangleq (1-\alpha )\left( \frac{T_{i}}{T_{l}}+\frac{T_{i}}{\tau _{l}}+1\right) +\alpha T_{i}\) and \(W_{f}\triangleq (1-\alpha )\left( \frac{T_{f}}{T_{l}} +\frac{T_{f}}{\tau _{l}}+1\right) +\alpha T_{f}.\) \(\square \)

Remark 4

By taking \(\mathcal {O}=\{1\}\) and \(\mathcal {O}=S\) in Theorem 1, we can obtain the guaranteed cost FTS conditions of PSFS (2) under the ADT strategy and the mode-dependent ADT strategy, respectively.

3.2 Design of Finite-Time (Non-fragile) Controller

Now we first give the design of finite-time controllers for system (2).

Under the state-feedback controller \(u(t)=K_{\delta (t)}z(t)\), the closed-loop system is obtained as follows

$$\begin{aligned} \left\{ \begin{array}{lr} _{t_{0}}\mathcal {D}^{\alpha }_{t}z(t)=({\widetilde{A}}_{\delta (t)} +{\widetilde{B}}_{\delta (t)}K_{\delta (t)})z(t),&{}t\ne t_{k,i},\\ z(t)={\widetilde{C}}_{\varrho (t)}z(t^{-}),&{}t=t_{k,i}.\\ \end{array} \right. \end{aligned}$$
(34)

Theorem 2

Consider the system (34) with average impulsive interval \(T_{l}\). Suppose that, for all \(p,q\in S\), \(r\in H\) with \(\Phi (p)=i\in \mathcal {O}\), there exist positive constants \(\zeta _{1},\zeta _{2}\) \((\zeta _{1}>\zeta _{2})\), \(T_{i}\), \(\lambda _{i}>1\), \(\mu _{i}\ge 1\), \(\omega \ge 1\), and positive vectors \(v_{p}\), \(\sigma \succ \varepsilon \succ 0\), \(R_{1}\succ 0\) and \(R_{2}\succ 0\), such that (4)-(6), (8) and the following inequalities hold:

$$\begin{aligned}&{\widetilde{A}}_{p}+{\widetilde{B}}_{p}K_{p}~ \text {are}~\text {Metzler}~\text {matrices}, \end{aligned}$$
(35)
$$\begin{aligned}&{\widetilde{A}}_{p}^{\textrm{T}}v_{p}+g_{p}+R_{1} +K^{\textrm{T}}_{p}R_{2}-\lambda _{i}v_{p}\preceq 0, \end{aligned}$$
(36)
$$\begin{aligned}&v_{p}\prec R_{1}+K^{\textrm{T}}_{p}R_{2}, \end{aligned}$$
(37)

where \(g_{p}=K^{\textrm{T}}_{p}{\widetilde{B}}^{\textrm{T}}_{p}v_{p}\). Then, system (34) is FTS with respect to \((\sigma ,\varepsilon ,T_{f},\delta (t))\) under the \(\Phi \)DADT satisfying (9), and the cost satisfies

$$\begin{aligned} \begin{aligned} J&=\frac{1}{\Gamma (\alpha )}\int _{t_{0}}^{t}(t-s)^{\alpha -1} (z^{\textrm{T}}(s)R_{1}+u^{\textrm{T}}(s)R_{2})\text {d}s\\&\le J^{*}=\omega ^{\frac{T_{f}}{T_{l}}} \mu ^{\frac{T_{f}}{\tau _{l}}}\zeta _{2} +\frac{\zeta _{2}\lambda \omega ^{\frac{T_{f}}{T_{l}}}\mu ^{\frac{T_{f}}{\tau _{l}}}}{\Gamma (\alpha +1)} \left[ {(1 - \alpha )\left( {\frac{{T_{f} }}{{T_{l} }} + \frac{{T_{f} }}{{\tau _{l} }} + 1} \right) + \alpha T_{f} } \right] \\&\quad \times \exp \left\{ {\frac{{T_{f} }}{{T_{l} }}\ln \omega + \frac{{T_{f} }}{{\tau _{l} }}\ln \mu + \frac{{(\lambda - 1)}}{{\Gamma (\alpha + 1)}}\left[ {(1 - \alpha )\left( {\frac{{T_{f} }}{{T_{l} }} + \frac{{T_{f} }}{{\tau _{l} }} + 1} \right) + \alpha T_{f} } \right] } \right\} .\\ \end{aligned} \end{aligned}$$
(38)

Proof

The positivity of system (34) follows from Lemma 4 and (35). Replacing \({\widetilde{A}}_{p}\) and \(R_{1}\) in (3) and (7) with \({\widetilde{A}}_{p}+{\widetilde{B}}_{p}K_{p}\) and \(R_{1}+K^{\textrm{T}}_{p}R_{2}\), respectively, conditions (36) and (37) are obtained; then, according to Theorem 1, the system (34) is FTS with the cost bound (38). \(\square \)

Next, taking \(u(t)=(K_{\delta (t)}+\Delta K_{\delta (t)})z(t)\), the closed-loop system is as follows

$$\begin{aligned} \left\{ \begin{array}{lr} _{t_{0}}\mathcal {D}^{\alpha }_{t}z(t)=({\widetilde{A}}_{\delta (t)}+ {\widetilde{B}}_{\delta (t)}(K_{\delta (t)}+\Delta K_{\delta (t)}))z(t),&{}t\ne t_{k,i},\\ z(t)={\widetilde{C}}_{\varrho (t)}z(t^{-}),&{}t=t_{k,i},\\ \end{array} \right. \end{aligned}$$
(39)

where \(\Delta K_{\delta (t)}\) are the controller uncertainty matrices satisfying \(\Delta K_{\delta (t)}\in [\overline{K}_{1},\overline{K}_{2}]\), with \(\overline{K}_{1}\) and \(\overline{K}_{2}\) known matrices.

Theorem 3

Consider the system (39) with average impulsive interval \(T_{l}\). Suppose that there exist positive constants \(\zeta _{1},\zeta _{2}\) \((\zeta _{1}>\zeta _{2})\), \(T_{i}\), \(\lambda _{i}>1\), \(\mu _{i}\ge 1\), \(i\in \mathcal {O}\), \(\omega \ge 1\), positive vectors \(v_{p},p\in S\), and vectors \(\sigma \succ \varepsilon \succ 0\), \(R_{1}\succ 0\) and \(R_{2}\succ 0\), such that, for all \(p,q\in S\), \(r\in H\) with \(\Phi (p)=i\), (4)-(6), (8) and the following inequalities hold:

$$\begin{aligned}{} & {} {\widetilde{A}}_{p}+{\widetilde{B}}_{p}(K_{p}+\overline{K}_{1})~{\text {are}}~{\text {Metzler}}~{\text {matrices}}, \end{aligned}$$
(40)
$$\begin{aligned}{} & {} {\widetilde{A}}_{p}+{\widetilde{B}}_{p}(K_{p}+\overline{K}_{2})~\text {are}~\text {Metzler}~\text {matrices},\end{aligned}$$
(41)
$$\begin{aligned}{} & {} {\widetilde{A}}_{p}^{\textrm{T}}v_{p}+g_{p}+\overline{K}^{\textrm{T}}_{2} {\widetilde{B}}^{\textrm{T}}_{p}v_{p}+R_{1}+K^{\textrm{T}}_{p}R_{2} \nonumber \\{} & {} \qquad \quad + \overline{K}^{\textrm{T}}_{2}R_{2}-\lambda _{i}v_{p}\preceq 0,\end{aligned}$$
(42)
$$\begin{aligned}{} & {} v_{p}\prec R_{1}+K^{\textrm{T}}_{p}R_{2}+\overline{K}^{\textrm{T}}_{1}R_{2}, \end{aligned}$$
(43)

where \(g_{p}=K^{\textrm{T}}_{p}{\widetilde{B}}^{\textrm{T}}_{p}v_{p}\). Then, system (39) is FTS with respect to \((\sigma ,\varepsilon ,T_{f},\delta (t))\) under the \(\Phi \)DADT satisfying (9), and the system cost is given by (38).

Proof

Since \(\Delta K_{\delta (t)}\in [\overline{K}_{1},\overline{K}_{2}]\), we have \({\widetilde{A}}_{p}+{\widetilde{B}}_{p}(K_{p}+\overline{K}_{1})\preceq {\widetilde{A}}_{p}+{\widetilde{B}}_{p}(K_{p}+\Delta K_{p})\preceq {\widetilde{A}}_{p}+{\widetilde{B}}_{p}(K_{p}+\overline{K}_{2})\). From (40) and (41), it follows that \({\widetilde{A}}_{p}+{\widetilde{B}}_{p}(K_{p}+\Delta K_{p})\) are also Metzler matrices, so system (39) is positive. Moreover, since \({\widetilde{A}}_{p}^{\textrm{T}}v_{p}+g_{p}+\Delta K^{\textrm{T}}_{p}{\widetilde{B}}^{\textrm{T}}_{p}v_{p}+R_{1}+K^{\textrm{T}}_{p}R_{2}+\Delta K^{\textrm{T}}_{p}R_{2}\preceq {\widetilde{A}}_{p}^{\textrm{T}}v_{p}+g_{p}+\overline{K}^{\textrm{T}}_{2}{\widetilde{B}}^{\textrm{T}}_{p}v_{p}+R_{1}+K^{\textrm{T}}_{p}R_{2}+\overline{K}^{\textrm{T}}_{2}R_{2}\), condition (42) gives \({\widetilde{A}}_{p}^{\textrm{T}}v_{p}+g_{p}+\Delta K^{\textrm{T}}_{p}{\widetilde{B}}^{\textrm{T}}_{p}v_{p}+R_{1}+K^{\textrm{T}}_{p}R_{2}+\Delta K^{\textrm{T}}_{p}R_{2}\preceq \lambda _{i}v_{p}\). Similarly, since \(R_{1}+K^{\textrm{T}}_{p}R_{2}+\overline{K}^{\textrm{T}}_{1}R_{2}\preceq R_{1}+K^{\textrm{T}}_{p}R_{2}+\Delta K^{\textrm{T}}_{p}R_{2}\), condition (43) gives \(v_{p}\prec R_{1}+K^{\textrm{T}}_{p}R_{2}+\Delta K^{\textrm{T}}_{p}R_{2}\). Hence, (36) and (37) hold with \(K_{p}\) replaced by \(K_{p}+\Delta K_{p}\), and, according to Theorem 2, system (39) is FTS with the cost bound (38). \(\square \)

Remark 5

In Theorems 1-3, the conditions contain terms that are nonlinear in the unknowns \(\lambda _{i}\), \(v_{p}\), \(K_{p}\), and \(g_{p}\) (for example, the products \(\lambda _{i}v_{p}\) and \(g_{p}=K^{\textrm{T}}_{p}{\widetilde{B}}^{\textrm{T}}_{p}v_{p}\)). The following steps can be used to handle them.

Step 1: Based on the cost function under consideration, one can determine positive constant \(T_{i}\) and vectors \(\sigma \), \(\varepsilon \), \(R_1\), \(R_2\).

Step 2: Based on the system’s information, one can directly calculate the matrices \({\widetilde{A}}_p\), \({\widetilde{B}}_p\), \({\widetilde{C}}_r\), \(\overline{K}_{1}\), \(\overline{K}_{2}\).

Step 3: Appropriately select the parameters \(\zeta _{1}\), \(\zeta _{2}\), \(T_{l}\), \(\alpha \), \(\lambda _i\), \(\mu _i\) and \(\omega \).

Step 4: By adjusting the parameters \(\lambda _i\), \(\mu _i\) and \(\omega \) and solving the inequalities in Theorem 1 via linear programming, one obtains the solution \(v_p\) and the bound \(\tau ^{*}_{l\Phi _{i}}\) (a feasibility-check sketch is given after Step 5).

Step 5: By solving the inequalities of Theorem 2/3, one can obtain \(g_p\) and \(K_{p}\).
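To illustrate Steps 3-4, the sketch below (our illustration, not the authors' code) treats conditions (3)-(7) as a linear feasibility problem in the stacked vector \((v_{1},\ldots ,v_{s})\) once \(\lambda _{i},\mu _{i},\omega ,\zeta _{1},\zeta _{2}\) are fixed, and also implements formulas (9) and (10). Condition (6) is imposed here for every ordered pair \(p\ne q\), which is a conservative reading of the theorem; strict inequalities are handled with a small margin, and all function names and the dictionary-based interface are our own.

```python
# Linear-programming sketch of Steps 3-4 (ours, not the authors' code).  For
# fixed lambda_i, mu_i, omega, zeta_1, zeta_2, conditions (3)-(7) are linear
# in the stacked vector v = (v_1,...,v_s); linprog is used purely as a
# feasibility solver.  Condition (6) is imposed for every ordered pair p != q
# (a conservative reading), and strict inequalities get a small margin.
import numpy as np
from scipy.optimize import linprog
from scipy.special import gamma

def feasible_v(A_tilde, C_tilde, Phi, lam, mu, omega, z1, z2, sigma, eps, R1, margin=1e-6):
    """Return the vectors v_p (as rows) satisfying (3)-(7), or None if infeasible."""
    s_modes, n = len(A_tilde), len(R1)

    def block(p, coeff):
        """Embed an n x n coefficient matrix acting on v_p into the stacked variable."""
        M = np.zeros((n, s_modes * n))
        M[:, p * n:(p + 1) * n] = coeff
        return M

    A_ub, b_ub = [], []
    for p, Ap in enumerate(A_tilde):
        i = Phi[p]
        A_ub.append(block(p, Ap.T - lam[i] * np.eye(n))); b_ub.append(-R1 - margin)        # (3)
        A_ub.append(block(p, -np.eye(n)));                b_ub.append(-z1 * eps - margin)  # (4) lower
        A_ub.append(block(p, np.eye(n)));                 b_ub.append(z2 * sigma - margin) # (4) upper
        for Cr in C_tilde:
            A_ub.append(block(p, Cr.T - omega * np.eye(n))); b_ub.append(-margin * np.ones(n))  # (5)
        for q in range(s_modes):
            if q != p:
                A_ub.append(block(p, np.eye(n)) - mu[i] * block(q, np.eye(n)))
                b_ub.append(np.zeros(n))                                                   # (6)
        A_ub.append(block(p, np.eye(n)));                 b_ub.append(R1 - margin)         # (7)
    res = linprog(c=np.zeros(s_modes * n), A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=[(margin, None)] * (s_modes * n), method="highs")
    return res.x.reshape(s_modes, n) if res.success else None

def tau_star(T_i, lam_i, mu_i, omega, z1, z2, T_l, alpha):
    """Lower bound tau*_{l Phi_i} of formula (9); condition (8) requires a positive denominator."""
    g = gamma(alpha + 1.0)
    num = T_i * (np.log(mu_i) + (lam_i - 1) * (1 - alpha) / g)
    den = (np.log(z1 / z2) - T_i * np.log(omega) / T_l
           - (lam_i - 1) * (1 - alpha) * T_i / (g * T_l)
           - (lam_i - 1) * (1 - alpha + alpha * T_i) / g)
    return num / den

def cost_bound(Tf, T_l, tau_l, alpha, lam, mu, omega, z2):
    """Guaranteed cost bound J* of (10)/(38)."""
    g = gamma(alpha + 1.0)
    growth = omega ** (Tf / T_l) * mu ** (Tf / tau_l)
    Wf = (1 - alpha) * (Tf / T_l + Tf / tau_l + 1) + alpha * Tf
    return z2 * growth + z2 * lam * growth * Wf / g * np.exp(
        Tf / T_l * np.log(omega) + Tf / tau_l * np.log(mu) + (lam - 1) / g * Wf)
```

For Step 5 one would proceed in the same spirit, treating the products involving \(K_{p}\) in (36)-(37) (or (42)-(43)) as additional unknowns and recovering \(K_{p}\) afterwards; we do not sketch that step here.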

Remark 6

As is well known, the Caputo fractional derivative of a non-switched system depends on the entire state history. In fact, two research frameworks are used for switched fractional-order systems: either the fractional derivative depends on all previous states, or it depends only on the states after the most recent switching. There are many results for the first type [6, 7, 10]. However, some practical systems are better modeled by the second type, such as manual vehicle driving systems, vertical takeoff and landing helicopters, and birth-death processes. On the other hand, compared with the first framework, which is complex and cumbersome, the second one avoids a lot of redundant calculation and saves actual cost. Thus, the following example adopts the second type of structure.

4 A Numerical Example

Consider the PSFS (39) with the following parameters:

$$\begin{aligned} {\widetilde{A}}_{1}= & {} \left( \begin{array}{cc} 18.83 &{} 0.61 \\ 0.43 &{} -1.45 \\ \end{array} \right) , {\widetilde{A}}_{2}=\left( \begin{array}{cc} 13.78 &{} 0.18 \\ 0.29 &{} -2.62 \\ \end{array} \right) , {\widetilde{A}}_{3}=\left( \begin{array}{cc} 19.12 &{} 0.33 \\ 0.39 &{} -2.36 \\ \end{array} \right) ,\\ {\widetilde{B}}_{1}= & {} \left[ \begin{array}{cc} 20 &{} 0 \\ 0 &{} 0.5 \\ \end{array} \right] , {\widetilde{B}}_{2}=\left[ \begin{array}{cc} 15 &{} 0 \\ 0 &{} 0.6 \\ \end{array} \right] , {\widetilde{B}}_{3}=\left[ \begin{array}{cc} 2 &{} 0 \\ 0 &{} 1 \\ \end{array} \right] ,\\ {\widetilde{C}}_{1}= & {} {\widetilde{C}}_{2}={\widetilde{C}}_{3}=\left[ \begin{array}{cc} 1.59 &{} 0 \\ 0 &{} 1.59 \\ \end{array} \right] , \varepsilon =\left[ \begin{array}{cc} 0.01 \\ 0.02 \\ \end{array} \right] , \sigma =\left[ \begin{array}{cc} 0.8 \\ 2.9 \\ \end{array} \right] , R_{1}=\left[ \begin{array}{cc} 0.7 \\ 1.0 \\ \end{array} \right] , \end{aligned}$$

where \(({\widetilde{A}}_{i},{\widetilde{B}}_{i})\) (\(i=1,2,3\)) represents the ith subsystem and \({\widetilde{C}}_{i}\) (\(i=1,2,3\)) represents the ith impulse matrix. Clearly, the matrices \({\widetilde{A}}_{i}\) are unstable.

By Theorem 3, we design the following non-fragile controllers

$$\begin{aligned} \Delta K_{\delta (t)}= & {} 0.0005\left( \begin{array}{cc} 1+\sin t &{} 0 \\ 0 &{} 1+\sin t \\ \end{array} \right) ,\\ K_{1}= & {} \left( \begin{array}{cc} -1 &{} 0 \\ 0 &{} 1.5 \\ \end{array} \right) , K_{2}=\left( \begin{array}{cc} -1 &{} 0 \\ 0 &{} 2.0 \\ \end{array} \right) , K_{3}=\left( \begin{array}{cc} -2 &{} 0 \\ 0 &{} 1.6 \\ \end{array} \right) , \end{aligned}$$

then

$$\begin{aligned} {\overline{K}}_{1}=\left( \begin{array}{cc} 0 &{} 0 \\ 0 &{} 0 \\ \end{array} \right) , {\overline{K}}_{2}=\left( \begin{array}{cc} 0.001 &{} 0 \\ 0 &{} 0.001 \\ \end{array} \right) . \end{aligned}$$

Therefore, the closed-loop subsystem matrices are as follows:

$$\begin{aligned} {\hat{A}}_{1}=\left( \begin{array}{cc} -1.15 &{} 0.61 \\ 0.43 &{} -0.70 \\ \end{array} \right) , {\hat{A}}_{2}=\left( \begin{array}{cc} -1.21 &{} 0.18 \\ 0.29 &{} -1.42 \\ \end{array} \right) , {\hat{A}}_{3}=\left( \begin{array}{cc} -0.87 &{} 0.33 \\ 0.39 &{} -0.76 \\ \end{array} \right) . \end{aligned}$$

Let \(\alpha =0.9,T_{f}=8,T_{l}=2,\omega =1.6,\zeta _{1}=31,\zeta _{2}=1\). We then present several switching signal designs based on different groupings (\(\Phi ,\mathcal {O}\)) by choosing appropriate parameters.

The results are summarized in Table 1, from which the following observations can be made.

(1). When \(\mathcal {O}=\{1,2,3\}\) and \(\mathcal {O}=\{1\}\), we obtain the mode-dependent ADT and ADT strategies, respectively. At the same time, the relationship between the two strategies can be seen clearly.

(2). By choosing different grouping functions \(\Phi \), we obtain the switching signal in each column separately; that is to say, the stability conditions in each column are different and have their own virtues. For example, \(\mathcal {O}=\{1\}\) only focuses on the compensation among subsystems and \(\mathcal {O}=\{1,2,3\}\) only accounts for the differences among subsystems, while \(\mathcal {O}=\{1,2\}\) takes both into account.

Table 1 Comparison of switching signal designs under the different (\(\Phi ,\mathcal {O}\))
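To get a feel for the closed-loop behavior reported in Figs. 1-6, the following rough simulation sketch (ours; the switching schedule, impulse instants, step size, and initial state are illustrative assumptions, and this is not the scheme used to produce the figures) discretizes the restarted Caputo operator of Remark 6 with an explicit Grünwald–Letnikov-type rule, applies the impulse matrix of (39) at the impulse instants, and checks the FTS condition \(z^{\textrm{T}}(t)\varepsilon \le 1\) of Definition 5.

```python
# Rough simulation sketch (ours) of the closed-loop example.  Between two
# consecutive events (switching or impulse instants) the Caputo operator is
# restarted, as discussed in Remark 6, and discretized by an explicit
# Grunwald-Letnikov-type rule; at impulse instants the state is multiplied by
# the impulse matrix of (39).  Schedule, step size and initial state are
# illustrative assumptions.
import numpy as np

A_hat = [np.array([[-1.15, 0.61], [0.43, -0.70]]),      # closed-loop matrices of Sect. 4
         np.array([[-1.21, 0.18], [0.29, -1.42]]),
         np.array([[-0.87, 0.33], [0.39, -0.76]])]
C_imp = 1.59 * np.eye(2)
alpha, h, Tf = 0.9, 0.01, 8.0
switches = {2.0: 1, 4.0: 2, 6.0: 0}        # switching instant -> new mode index (assumed)
impulses = (1.0, 3.0, 5.0, 7.0)            # assumed impulse instants (average interval T_l = 2)

def gl_weights(m, alpha):
    """Grunwald-Letnikov coefficients c_0,...,c_m."""
    c = np.empty(m + 1)
    c[0] = 1.0
    for j in range(1, m + 1):
        c[j] = c[j - 1] * (1.0 - (1.0 + alpha) / j)
    return c

z = np.array([0.3, 0.2])                   # z(0)^T sigma = 0.82 <= 1 for sigma = (0.8, 2.9)^T
mode, hist, z0, traj = 0, [z], z, [z]      # hist: states since the last restart of the operator
for k in range(1, int(round(Tf / h)) + 1):
    t = k * h
    c = gl_weights(len(hist), alpha)
    mem = sum(c[j] * (hist[-j] - z0) for j in range(1, len(hist) + 1))
    z = z0 + h ** alpha * (A_hat[mode] @ hist[-1]) - mem   # explicit Caputo-GL step
    if any(abs(t - ti) < h / 2 for ti in impulses):
        z = C_imp @ z                      # impulse of system (39)
        hist, z0 = [z], z                  # restart the fractional operator at the event
    elif any(abs(t - ts) < h / 2 for ts in switches):
        mode = switches[min(switches, key=lambda ts: abs(ts - t))]
        hist, z0 = [z], z
    else:
        hist.append(z)
    traj.append(z)

eps = np.array([0.01, 0.02])
print(max(zt @ eps for zt in traj))        # stays well below 1 for this schedule
```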

5 Conclusion

In this paper, the guaranteed cost FTS of PSFS with D-disturbance and impulses is studied under the \(\Phi \)DADT strategy. The impulses can occur at any time, namely at switching instants as well as at non-switching instants, and the impulse instants are characterized by the average impulsive interval. In addition, both certain and robust controllers are designed to ensure the closed-loop system's finite-time stability with guaranteed cost. Finally, a numerical example is given to illustrate the validity of the conclusions. How to extend the results of this article to the guaranteed cost FTS of non-positive fractional-order switched systems is one of our subsequent works. On the other hand, a more general strategy (named “binary F-dependent ADT”), which covers the \(\Phi \)DADT, has recently been proposed [23]. Corresponding results based on the binary F-dependent ADT will be one of the directions of our future in-depth research.

Fig. 1 a Switching signal 1. b State response of the system under the signal 1

Fig. 2 a Switching signal 2. b State response of the system under the signal 2

Fig. 3 a Switching signal 3. b State response of the system under the signal 3

Fig. 4 a Switching signal 4. b State response of the system under the signal 4

Fig. 5 a Switching signal 5. b State response of the system under the signal 5

Fig. 6 Impulsive signal