1 Introduction

In the past few decades, tremendous advances have occurred in the study of Markov jump linear systems (MJLSs) [5, 8, 28, 34, 41, 48]. The motivation for concentrating on such systems is their widespread application in power systems [22, 26], biomedicine [23], aerospace [3], and networked control systems (NCSs) [25]. These systems can successfully describe the structural variations induced by external or internal discrete events such as random failures, repairs, changing subsystem interconnections and unexpected configuration conversions [2, 20]. MJLSs arise as a special class of both hybrid and stochastic systems. They consist of a set of continuous dynamics, the so-called modes, usually described by linear difference or differential equations, affected by discrete events governed by a Markov chain with a finite number of states.

To date, a large body of research has been dedicated to the stabilization [5, 6, 8, 15, 34, 41] and control [16, 23, 43, 48] problems of MJLSs. In these studies [5, 6, 8, 15, 16, 34, 41, 43, 48], one key assumption is that the controller is strongly synchronized with the system, i.e., the Markov process which orchestrates switching between the controller modes is exactly the same as the Markov chain governing the system dynamic variations. This assumption often fails in practice because the Markovian states of the system may not always be available to the controller instantly. In other words, the actual and the observed Markov chains may not be matched or synchronous. An example of this type of asynchronous switching is found in NCSs without time stamp information [4, 38].

One solution proposed for the case of inaccurate mode observations is the mode-independent controller [2, 22]. In a mode-independent design, all the system variations are neglected and the controller has a simple non-switching structure. Although the mode-independent controller simplifies the analysis and has a practical appeal when mode information is inaccessible, it is very conservative and cannot deal with the complex asynchronous phenomenon between the system and the controller. Unlike the mode-independent structure, the asynchronous scheme takes advantage of an observed Markov chain to deal with the system variations. In this case, the observed modes depend on the system modes according to some probabilities. Toward relaxing the assumption of perfectly synchronized controlled systems, different types of asynchronous phenomena have been considered by researchers [3, 9,10,11, 18, 21, 27, 30, 31, 35,36,37,38,39, 45,46,47]. These studies fall into two general categories: those dedicated to deterministic switched systems and those devoted to stochastic switched systems. In deterministic switched systems, the asynchronous phenomenon is generally modeled by lags between the modes of the controller and the system modes [18, 21, 27, 36, 37, 45,46,47]. This type of asynchronous phenomenon generally arises in mechanical [3, 31] or chemical systems [21]. For predetermined lags, stability problems are discussed in [9, 35], controllers are designed in [10, 31, 36, 37], and asynchronous filters are developed in [47]. Unlike deterministic switched systems, asynchronous switching in stochastic systems has not received much attention. Asynchrony in Markovian systems is discussed in [2, 22, 32] through mode-independent designs and in [45, 46] by assuming a random switching lag between the system and the controller governed by a Bernoulli distribution.
In [30, 45, 46], the Markov chains of the system and the controller (filter) are supposed to be completely independent. This assumption is conservative because the observed chain contains useful information on the original chain, and ignoring this information can lead to performance loss. This assumption is eliminated in [11, 38]; however, these studies are limited to state estimation problems, require perfectly known Markovian properties of the observed chain, and involve multiple design and slack variables which make the design overly complex.

The motivation of this study is to design a more practical control scheme for MJLSs such that the following two problems can be dealt with simultaneously:

  • The imperfect observation of the switching mechanism of an MJLS.

  • The imperfect implementation of the controller gains.

In practice, not only may the Markovian states of the system not always be available to the controller instantly, but the specifications of the observed chain, namely the transition rates (TRs), may not be obtained precisely and may include uncertainties. On the other hand, the controller gains face implementation limitations which may result in the implemented gains differing from the designed ones due to finite word length in digital systems, the inherent imprecision in analog systems, and analog-to-digital or digital-to-analog converters [12]. These factors can lead to a poor design or an undesired or even unstable response of the implemented controller.

The robust, non-fragile asynchronous controller proposed in this study is a solution to the above-mentioned problems. Such a scheme is applicable to more practical and realistic situations. It not only involves less computational effort when modeling the system’s Markov chain, but also minimizes the influence of unknown environmental disturbances that affect the observed mode information or the controller gains. To design the robust, non-fragile, asynchronous controller, the asynchrony is expressed through an additional Markov chain which is mismatched with the system’s Markov chain but depends on it probabilistically. This additional chain also accounts for inaccurate modeling in the form of uncertain but bounded TRs. The whole system is then viewed as a piecewise homogeneous MJLS, a special case of the non-homogeneous MJLSs [1] in which the TRs are time-varying but invariant within an interval [6, 8, 13, 42, 44]. The controller is also assumed to contain additive bounded uncertainties which represent the inaccuracies of the implementation procedure. In this framework, the analysis and controller synthesis are carried out with a multiple, piecewise Lyapunov function.

Compared with the previous works, this paper mainly has the following four advantages:

(i) Unlike the well-known mode-independent structures [2, 22, 32], the proposed controller is mode-dependent; it incorporates the information of the subsystems and their interactions in the multiple Lyapunov functions and is thus a less conservative design. (ii) The controller is designed based on an observation of the system’s switching signal and not on the actual switching signal of the system, which is usually unavailable. (iii) In comparison with other similar mode-dependent asynchronous design schemes [11, 38], this scheme removes the assumption of exactly known TRs of the observed Markov process. (iv) It leads to a simpler set of LMIs with fewer design variables compared with similar asynchronous schemes [11, 30, 38], owing to the non-homogeneous Markovian model utilized for describing the asynchrony. Simulation results and comparisons with the controller of [2] show the potential of the proposed method.

The remainder of the paper is organized as follows. In Sect. 2, the preliminaries are provided and the problem is formulated. In Sect. 3, the robust stochastic stabilization problem is tackled and a sufficient condition is derived. Then, the asynchronous non-fragile controller gains are designed and some important implementation issues of the proposed method are discussed. Section 4 presents a practical example and comparisons for a vertical takeoff and landing (VTOL) aircraft subject to faults. Finally, concluding remarks are given in Sect. 5.

Notation All the notations in the present paper are standard and can be found in the relevant literature of Markovian switching systems. Additionally, all the matrices contain real values with proper dimensions.

2 Preliminaries and Problem Formulation

Consider a complete probability space (\(\varOmega , F, \rho \)) satisfying the usual conditions, where \(\varOmega , F\) and \(\rho \) represent the sample space, the \(\sigma \)-algebra of events and the probability measure on F, respectively. The uncertain MJLS is described over this probability space by Eq. (1),

$$\begin{aligned} \left\{ {\begin{array}{lllll} {\dot{x}}(t)=A(r_t )x(t)+B(r_t )u(t), \\ x(t_0 )=x_0 ,{} r_{t_0 } =r_0 \\ \end{array}} \right. \end{aligned}$$
(1)

where \(x(t)\in {\mathbb {R}}^{n}\) is the system state vector of dimension n, \(u(t)\in {\mathbb {R}}^{m}\) is the controlled input vector of dimension m and \(x(t_0 )=x_0 \) is the initial state vector. The jumping parameter {\(r_{t}, t \ge \) 0} represents a time-homogeneous Markov chain taking discrete values in the finite set \({\underline{N}}=:\{1,2,\ldots ,N\}.\) Here \(r_{t_0 } =r_0 \) is the initial condition, and the Markov chain has a square transition rate matrix specified by

$$\begin{aligned} \Lambda =\left[ {{\begin{array}{lllll} {\lambda _{11} }&{}\quad {\lambda _{12} }&{}\quad {\ldots }&{}\quad {\lambda _{1N} } \\ {\lambda _{21} }&{}\quad {\lambda _{22} }&{}\quad {\ldots }&{}\quad {\lambda _{2N} } \\ \vdots &{}\quad \vdots &{} \quad \ddots &{}\quad \vdots \\ {\lambda _{N1} }&{}\quad {\lambda _{N2} }&{}\quad {\ldots }&{}\quad {\lambda _{NN} } \\ \end{array} }} \right] \end{aligned}$$
(2)

in which the transition probabilities are as follows, with \(h>0\) an infinitesimal time increment.

$$\begin{aligned} \Pr \left\{ {r_{t+h} =j|r_t =i} \right\} =\left\{ {\begin{array}{ll} \lambda _{ij} h+o(h) &{}\quad {i\ne j} \\ 1+\lambda _{ii} h+o(h)&{} \quad {i=j} \\ \end{array}} \right. . \end{aligned}$$
(3)

In Eq. (3), \(\lambda _{ij} \) denotes the transition rate from mode i at time t to mode j at time \(t+h\), subject to the following conditions:

$$\begin{aligned} \lambda _{ij}\ge & {} 0 \end{aligned}$$
(4)
$$\begin{aligned} \lambda _{ii}= & {} -\sum _{j=1,j\ne i}^N {\lambda _{ij} } \end{aligned}$$
(5)

Condition (4) means that the TRs are never negative. Condition (5) guarantees that the system moves from mode i to some mode j with probability one. It is assumed that the Markov chain is irreducible, i.e., it is possible to move from any mode to any other mode in a finite number of jumps.
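Conditions (4)–(5) and the resulting jump dynamics can be sketched in a few lines of Python. This is a minimal illustration only: the matrix values and function names are hypothetical and not taken from the paper.

```python
import random

def is_generator(L, tol=1e-9):
    """Check conditions (4)-(5): nonnegative off-diagonal TRs
    and zero row sums for a transition rate matrix."""
    n = len(L)
    for i in range(n):
        if any(L[i][j] < 0 for j in range(n) if j != i):
            return False
        if abs(sum(L[i])) > tol:
            return False
    return True

def simulate_chain(L, r0, t_end, rng):
    """Sample one path: the sojourn time in mode i is exponential with
    rate -lambda_ii, then the chain jumps to j with probability
    lambda_ij / (-lambda_ii), consistent with (3)."""
    t, r, path = 0.0, r0, [(0.0, r0)]
    n = len(L)
    while True:
        t += rng.expovariate(-L[r][r])
        if t >= t_end:
            break
        weights = [L[r][j] if j != r else 0.0 for j in range(n)]
        r = rng.choices(range(n), weights=weights)[0]
        path.append((t, r))
    return path

rng = random.Random(0)
Lam = [[-2.0, 2.0], [1.0, -1.0]]                 # hypothetical TR matrix of form (2)
print(is_generator(Lam))                          # True
print(is_generator([[-2.0, 1.0], [1.0, -1.0]]))  # False: row sum is not zero
```

Sampling exponential sojourn times and jumping with probability \(\lambda _{ij}/(-\lambda _{ii})\) is the standard path construction for a continuous-time chain and is used again in the numerical example.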

In Eq. (1), \(A(r_t )\) and \(B(r_t )\) are linear mode-dependent system matrices with appropriate dimensions. For this system, the following controller is assumed.

$$\begin{aligned} u(t)=K(\sigma _t ,t)x(t) \end{aligned}$$
(6)

The matrix \(K(\sigma _{t},t)\) is the mode-dependent, time-varying controller gain with compatible dimensions, designed to make the closed-loop system robustly stochastically stable. The parameter {\(\sigma _{t}, t \ge \) 0} is the Markov chain governing the switching between the candidate gains of \(K(\sigma _{t},t)\). It is a continuous-time, discrete-valued Markov process defined on the finite set \({\underline{M}}=:\left\{ {1,2,\ldots ,M} \right\} \) with the initial mode \(\sigma _{t_0 } =\sigma _0 ,\) a square generator matrix of the form (7) and elements given by (8).

$$\begin{aligned}&\displaystyle P^{r_t }=\left[ {{\begin{array}{lllll} {p_{11}^{r_t } }&{}\quad {p_{12}^{r_t } }&{}\quad {\ldots }&{} \quad {p_{1M}^{r_t } } \\ {p_{21}^{r_t } }&{}\quad {p_{22}^{r_t } }&{} \quad {\ldots }&{}\quad {p_{2M}^{r_t } } \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ {p_{M1}^{r_t } }&{}\quad {p_{M2}^{r_t } }&{}\quad {\ldots }&{} \quad {p_{MM}^{r_t } } \\ \end{array} }} \right] \end{aligned}$$
(7)
$$\begin{aligned}&\displaystyle \Pr \left\{ {\sigma _{t+h} =n|\sigma _t =m} \right\} =\left\{ {\begin{array}{ll} p_{mn}^{r_t } h+o(h) &{}\quad {m\ne n} \\ 1+p_{mm}^{r_t } h+o(h)&{} \quad {m=n} \\ \end{array}} \right. \end{aligned}$$
(8)

Here \(p_{mn}^{r_t } \ge 0\) with the condition \(p_{mm}^{r_t } =-\sum _{n=1,n\ne m}^M {p_{mn}^{r_t } } \) is the transition rate from mode m of the controller at time t to mode n at time \(t+h\). The Markov chain \(r_{t}\) is assumed to be independent of the \(\sigma \)-algebra generated by \(\sigma _{t}\).

The controller (6) has an asynchronous structure because the Markov chain governing the controller switching is different from the Markov process orchestrating the system modes. That is, the controller modes are mismatched with the modes of the system but depend on them according to certain probabilities. This dependency is captured by the conditional transition probabilities (8).

As mentioned before, it is generally difficult to determine the exact values of the conditional TRs (8). Thus, it is assumed that the controller chain is modeled with uncertain transition specifications. In this case, the exact values of the TR entries are not known, but their upper and lower bounds are available. Such an uncertain TR matrix is specified as \(P^{r_t }=\bar{{P}}^{r_t }+\Delta P^{r_t }\), where \(\bar{{P}}^{r_t }=[\bar{{p}}_{mn}^{r_t } ],\bar{{p}}_{mm}^{r_t } =-\sum _{n=1,n\ne m}^M {\bar{{p}}_{mn}^{r_t } } \) and \(\Delta P^{r_t }=[\Delta p_{mn}^{r_t } ], \Delta p_{mm}^{r_t } =-\sum _{n=1,n\ne m}^M {\Delta p_{mn}^{r_t } } \) denote the nominal TRs and the uncertain part of the TRs, respectively. It is supposed that the TR uncertainty is bounded by a maximum value of \(\zeta _{mn}^{r_t } >0\), i.e., \(\left| {\Delta p_{mn}^{r_t } } \right| \le \zeta _{mn}^{r_t } ,\, m\ne n.\)
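The admissibility of a perturbation \(\Delta P^{r_t }\) under these bounds can be checked mechanically. The following Python sketch, with made-up numbers, verifies the two constraints just stated: \(|\Delta p_{mn}|\le \zeta _{mn}\) for \(m\ne n\) and zero row sums.

```python
def admissible(dP, zeta, tol=1e-9):
    """Check that a TR perturbation Delta P satisfies |dp_mn| <= zeta_mn
    for m != n, and dp_mm = -sum_{n != m} dp_mn (zero row sums)."""
    M = len(dP)
    for m in range(M):
        if any(abs(dP[m][n]) > zeta[m][n] + tol for n in range(M) if n != m):
            return False
        if abs(sum(dP[m])) > tol:
            return False
    return True

# Hypothetical nominal TRs, uncertainty bounds, and a perturbation at the bounds
P_bar = [[-0.1, 0.1], [0.15, -0.15]]
zeta  = [[0.0, 0.02], [0.03, 0.0]]
dP    = [[-0.02, 0.02], [0.03, -0.03]]
P     = [[P_bar[m][n] + dP[m][n] for n in range(2)] for m in range(2)]
print(admissible(dP, zeta))   # True: the perturbed matrix P is still a generator
```

Because both the nominal matrix and the perturbation have zero row sums, the perturbed matrix \(P=\bar{{P}}+\Delta P\) automatically remains a valid generator.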

The proposed asynchronous control structure in which the controller Markov chain \(\sigma _{t}\) is an uncertain observation of the system Markov process \(r_{t}\) is shown in Fig. 1.

Fig. 1 Architecture of the asynchronous control scheme

The transition rates of the Markov process which governs the switching between the controller modes are not continuously time-varying, so this Markov process is not purely non-homogeneous. The time variation of the TRs of \(\sigma _{t}\) is due to their dependency on the signal \(r_{t}\); since \(r_{t}\) is a piecewise constant signal, the TRs of \(\sigma _{t}\) are piecewise constant functions of time t, i.e., they vary but remain invariant within each interval. Therefore, \(\sigma _{t }\) is a piecewise homogeneous Markov chain. Since the system of Eq. (1) is a time-homogeneous MJLS and the controller structure evolves with a piecewise homogeneous Markov chain, the closed-loop system involves two decoupled Markov processes and in general is a piecewise homogeneous Markov jump linear system.
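The piecewise homogeneous behavior of \(\sigma _t\) can be illustrated by a first-order (Euler) simulation in which the generator used for \(\sigma _t\) is re-selected whenever \(r_t\) jumps. All names and numbers below are hypothetical; the discretization is a rough approximation, not part of the paper's method.

```python
import random

def step(mode, gen, dt, rng):
    """One Euler step of a chain with generator `gen`: jump from `mode`
    to n with first-order probability gen[mode][n]*dt."""
    u, acc = rng.random(), 0.0
    for n in range(len(gen)):
        if n == mode:
            continue
        acc += gen[mode][n] * dt
        if u < acc:
            return n
    return mode

# Hypothetical generators: Lam drives r_t; P[i] is the generator of
# sigma_t while r_t = i, so the TRs of sigma_t are piecewise constant.
Lam = [[-2.0, 2.0], [1.0, -1.0]]
P = {0: [[-0.1, 0.1], [0.15, -0.15]],
     1: [[-0.2, 0.2], [0.1, -0.1]]}

rng, dt = random.Random(1), 1e-3
r, s = 0, 0
for _ in range(10000):          # simulate 10 s
    r = step(r, Lam, dt, rng)
    s = step(s, P[r], dt, rng)  # sigma_t uses the generator selected by r_t
print(r in (0, 1) and s in (0, 1))  # True
```

Between two jumps of `r`, the chain `s` evolves with a fixed generator, which is exactly the piecewise homogeneity described above.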

To include uncertainties in the controller, the controller gain is assumed to be \(K(\sigma _t ,t)=\bar{{K}}(\sigma _t )+\Delta K(\sigma _t ,t)\), where \(\bar{{K}}(\sigma _t )\) is the nominal gain to be designed and \(\Delta K(\sigma _t ,t)\) represents the time-varying, norm-bounded parametric uncertainty of the controller. \(\Delta K(\sigma _t ,t)\) is an unknown, mode-dependent matrix of the following form,

$$\begin{aligned} \Delta K(\sigma _t ,t)=D_K (\sigma _t )F_K (\sigma _t ,t)E_K (\sigma _t ) \end{aligned}$$
(9)

in which \(D_{K}(\sigma _{t})\) and \(E_{K}(\sigma _{t})\) are known real-valued constant matrices, and \(F_{K}(\sigma _{t},t)\) is an unknown time-varying matrix with Lebesgue measurable elements satisfying \(F_K^T (\sigma _t ,t)F_K (\sigma _t ,t)\le I.\)

Before obtaining the main results and designing the asynchronous controller, an important definition and lemma are recalled.

Definition

[2] For any initial mode \(r_{0}\), and any given initial state vector \(x_{0}\), the uncertain system of Eq. (1) with u(t) = 0 is said to be robustly stochastically stable, if for all admissible uncertainties the following condition holds

$$\begin{aligned} E\left[ {\int _0^\infty {\left\| {x(t)} \right\| ^{2}\hbox {d}t} |x_0 ,{}r_0 } \right] <\infty \end{aligned}$$
(10)

where E{\(\cdot {\vert }\cdot \)} is the expectation conditioned on the initial values \(x_{0}\) and \(r_{0}\).
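The expectation in (10) can be estimated empirically by Monte Carlo simulation, as is done in the numerical example of Sect. 4. The sketch below is purely illustrative: it uses a made-up scalar two-mode system with both modes stable and an Euler discretization, none of which comes from the paper.

```python
import random

def cost_one_run(a, Lam, x0, r0, dt, t_end, rng):
    """Euler-simulate xdot = a[r] x with two-mode Markov switching and
    accumulate the cost integral of ||x(t)||^2 from criterion (10)."""
    x, r, J = x0, r0, 0.0
    for _ in range(int(t_end / dt)):
        J += x * x * dt
        x += a[r] * x * dt
        # first-order probability of jumping out of mode r in [t, t+dt)
        if rng.random() < -Lam[r][r] * dt:
            r = 1 - r          # two-mode chain: jump to the other mode
    return J

# Hypothetical scalar system: mode dynamics a(1) = -1, a(2) = -0.5
a = [-1.0, -0.5]
Lam = [[-2.0, 2.0], [1.0, -1.0]]
rng = random.Random(42)
runs = [cost_one_run(a, Lam, 1.0, 0, 1e-2, 20.0, rng) for _ in range(200)]
est = sum(runs) / len(runs)
print(0.0 < est < 5.0)  # finite estimate, consistent with stochastic stability
```

A bounded sample average over many realizations is the empirical counterpart of the finiteness requirement in (10); an unstable closed loop would show the estimate growing with the simulation horizon.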

Lemma

[40] Let Y be a symmetric matrix, H and E be given matrices with appropriate dimensions and F satisfy \(F^{T}F\le I\); then the following relations hold:

  1. For any \(\varepsilon >0\), \(HFE+E^{T}F^{T}H^{T}\le \varepsilon HH^{T}+\varepsilon ^{-1}E^{T}E\).

  2. \( Y+HFE+E^{T}F^{T}H^{T}<0\) holds if and only if there exists a scalar \(\varepsilon > 0\) such that \(Y+\varepsilon HH^{T}+\varepsilon ^{-1}E^{T}E<0.\)
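The first inequality of the lemma is easy to verify numerically in the scalar case, where \(F^{T}F\le I\) reduces to \(f^{2}\le 1\) and the bound becomes \(2hfe\le \varepsilon h^{2}+\varepsilon ^{-1}e^{2}\) (an AM–GM-type inequality). The randomized Python check below is illustrative only.

```python
import random

rng = random.Random(7)
ok = True
for _ in range(1000):
    h = rng.uniform(-5, 5)
    e = rng.uniform(-5, 5)
    f = rng.uniform(-1, 1)        # satisfies F^T F <= I in the scalar case
    eps = rng.uniform(1e-3, 10)
    lhs = h * f * e + e * f * h   # scalar HFE + E^T F^T H^T
    rhs = eps * h * h + e * e / eps
    ok = ok and lhs <= rhs + 1e-12
print(ok)  # True
```

The slack `rhs - lhs` vanishes exactly when \(\varepsilon =|e|/|h|\) and \(f=\pm 1\), which is why a well-chosen \(\varepsilon \) reduces conservativeness (see Sect. 3.1.1).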

Having introduced the system and the controller structure, the problem can now be formulated: consider an MJLS and derive stabilizability conditions for the closed-loop system which involves an asynchronous controller. Additionally, design non-fragile, state-feedback gains which depend only on the observed chain such that the closed-loop system is robustly stochastically stable.

3 Main Results

The purpose of this section is to deal with the stabilization problem. A sufficient condition is given for the existence of the asynchronous, non-fragile, state-feedback controller, and then the gains (6) are designed such that the system (1) is stochastically stable over all admissible uncertainties in the system and controller. The condition as well as the feedback gains are expressed in terms of a set of coupled LMIs which can be solved systematically and efficiently.

Consider the control law (6); substituting the controller gains into Eq. (1) yields the dynamics of the closed-loop system described by

$$\begin{aligned} \left\{ {\begin{array}{lllll} {\dot{x}}(t)=\bar{{A}}(r_t ,\sigma _t ,t)x(t) \\ x(t_0 )=x_0 ,{} r_{t_0 } =r_0 ,{} \sigma _{t_0 } =\sigma _0 \\ \end{array}} \right. \end{aligned}$$
(11)

where

$$\begin{aligned} \bar{{A}}(r_t ,\sigma _t ,t)=A(r_t )+B(r_t )K(\sigma _t ,t) \end{aligned}$$
(12)

The upcoming theorem presents a sufficient stabilizability condition.

Remark 1

Hereafter, for notational convenience, \(r_{t }=i\) is used; it specifies mode i of the system. Thus, matrices are labeled as \(A(i), B(i), K(m), \Delta K(m,t), E_{K}(m), D_{K}(m)\) and \(F_{K}(m, t)\). The initial time is set to zero, \(t_{0}\) = 0. Additionally, the initial state vector \(x_{0 }\) and the initial modes \(r_{0}\) and \(\sigma _{0 }\) are supposed to be available.

Theorem

There exist controller gains \(K({ \sigma }_{t},t)\) such that the closed-loop system \({\dot{x}}(t)=\bar{{A}}(r_t ,\sigma _t ,t)x(t)\) with the initial conditions \(x(t_0 )=x_0 , r_{t_0 } =r_0 ,{} \sigma _{t_0 } =\sigma _0 ,\) is robustly stochastically stable, if there exist a set of square, symmetric, positive definite mode-dependent matrices X(m) and P(m), a set of mode-dependent matrices Y(m), V(m), Z(m) and a set of positive mode-dependent scalars \(\varepsilon _{K}(m),\varepsilon _p^i(m,n),\) such that the following set of constraints holds for all \(i\in {\underline{N}}\) and \(m\in {\underline{M}}.\)

$$\begin{aligned}&\displaystyle \left[ {{\begin{array}{lllll} {J(i,m)}&{} \quad {X(m)E_K^T (m)}&{} \quad {X(m)} \\ {E_K (m)X(m)}&{} \quad {-\varepsilon _K (m)I}&{} \quad 0 \\ {X(m)}&{} \quad 0&{} \quad {-Z(m)} \\ \end{array} }} \right] <0 \end{aligned}$$
(13)
$$\begin{aligned}&\displaystyle \left[ {{\begin{array}{lllll} {Q(i,m)}&{} \quad {S(m)} \\ {S^{T}(m)}&{} \quad {-R(m)} \\ \end{array} }} \right] <0 \end{aligned}$$
(14)

where

$$\begin{aligned} X(m)= & {} P^{-1}(m) \end{aligned}$$
(15)
$$\begin{aligned} V(m)= & {} Z^{-1}(m) \end{aligned}$$
(16)
$$\begin{aligned} J(i,m)= & {} X(m)A^{T}(i)+A(i)X(m)+Y^{T}(m)B^{T}(i)\nonumber \\&+\,B(i)Y(m)+\varepsilon _K (m)B(i)D_K (m)D_K^T (m)B^{T}(i) \end{aligned}$$
(17)
$$\begin{aligned} Q(i,m)= & {} -V(m)+\mathop \sum \limits _{n=1}^M {\bar{{p}}_{mn}^i P(n)} +\frac{1}{4}\mathop \sum \limits _{n=1}^M {\varepsilon _p^i (m,n)(\zeta _{mn}^i )^{2}I} \end{aligned}$$
(18)
$$\begin{aligned} S(m)= & {} \left[ P(m)-P(1),\ldots ,P(m)-P(m-1),\nonumber \right. \\&\left. P(m)-P(m+1),\ldots ,P(m)-P(M) \right] \end{aligned}$$
(19)
$$\begin{aligned} R(m)= & {} \hbox {diag}\left[ \varepsilon _p^i (m,1)I,\ldots ,\varepsilon _p^i (m,m-1)I,\nonumber \right. \\&\left. \varepsilon _p^i (m,m+1)I,\ldots ,\varepsilon _p^i (m,M)I \right] \end{aligned}$$
(20)

Then the stabilizer gain is obtained as \(K(m)=Y(m)X^{-1}(m)\).
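Once the LMIs (13)–(14) are found feasible by a solver, recovering each gain is a plain matrix computation. The Python sketch below uses hypothetical 2×2 solver outputs and a hand-rolled inverse for self-containment; in practice X(m) and Y(m) would come from an LMI solver such as the YALMIP setup used in Sect. 4.

```python
def inv2(X):
    """Inverse of a nonsingular 2x2 matrix."""
    (a, b), (c, d) = X
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Hypothetical LMI solver output for one controller mode m
X_m = [[2.0, 0.5], [0.5, 1.0]]    # X(m) = P(m)^{-1}, positive definite
Y_m = [[1.0, 0.0], [0.0, 1.0]]
K_m = matmul(Y_m, inv2(X_m))      # K(m) = Y(m) X(m)^{-1}
print(matmul(K_m, X_m))           # recovers Y(m) up to floating-point rounding
```

The substitution \(Y(m)=K(m)X(m)\) is what linearizes the synthesis condition (see the proof), so this inversion step simply undoes that change of variables.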

Proof

Construct the multiple quadratic Lyapunov candidate as

$$\begin{aligned} V(x(t),r_t ,\sigma _t )=x^{T}(t)P(\sigma _t )x(t) \end{aligned}$$
(21)

The function is multiple because of the variations of the dynamics and the controller gains. The P(m) denote symmetric, positive definite matrices.

The infinitesimal generator of the Lyapunov function is given by Eq. (22).

$$\begin{aligned} \hbox {L}V(x(t),i,m)= & {} \mathop {\lim }\limits _{h\rightarrow 0} \left( E[V(x(t+h),r_{t+h} ,\sigma _{t+h} )|x(t)=x(t),r_t =i,\sigma _t =m]\nonumber \right. \\&\left. -\,V(x(t),i,m) \right) /h \end{aligned}$$
(22)

Applying the law of total probability and using the property of the conditional expectation, the infinitesimal generator is written as Eq. (23).

$$\begin{aligned} \hbox {L}V(x(t),i,m)= & {} \mathop {\lim }\limits _{h\rightarrow 0} \frac{1}{h}\left( \sum _{j=1}^N \sum _{n=1}^M {\Pr \left\{ {r_{t+h} =j,\sigma _{t+h} =n|r_t =i,\sigma _t =m} \right\} }\nonumber \right. \\&\left. \times \, x^{T}(t+h)P(n)x(t+h)-x^{T}(t)P(m)x(t) \right) \end{aligned}$$
(23)

For the system of (1), the probabilities of both Markov chains are involved. Since the two chains are decoupled, the joint probability factorizes as \(\Pr ( r_{t+h}=j, \sigma _{t+h}=n \,|\, r_{t}=i, \sigma _{t}=m)=\Pr ( r_{t+h}=j \,|\, r_{t}=i)\Pr ( \sigma _{t+h}=n \,|\, r_{t}=i, \sigma _{t}=m)\). Thus, using (3) and (8), the infinitesimal generator of the Lyapunov function is computed as:

$$\begin{aligned} \hbox {L}V(x(t),i,m)= & {} x^{T}(t)\left[ \bar{{A}}^{T}(i,m,t)P(m)+P(m)\bar{{A}}(i,m,t)+\sum _{j=1,j\ne i}^N {\lambda _{ij} P(m)} +\lambda _{ii} P(m)\nonumber \right. \\&\left. +\sum _{n=1,n\ne m}^M {p_{mn}^i P(n)} +p_{mm}^i P(m) \right] x(t) \end{aligned}$$
(24)

By using the relations \(\sum _{n\in {\underline{M}}, n\ne m} {p_{mn}^i P(n)} + p_{mm}^i P(m) =\sum _{n\in {\underline{M}}} {p_{mn}^i P(n)} \) and \(\sum _{j\in {\underline{N}}, j\ne i} {\lambda _{ij} P(m)}+ \lambda _{ii} P(m)=\sum _{j\in {\underline{N}}} {\lambda _{ij} P(m)} ,\) Eq. (24) can be written as (25):

$$\begin{aligned} \hbox {L}V(x(t),i,m)=x^{T}(t)\left[ \bar{{A}}^{T}(i,m,t)P(m)+P(m)\bar{{A}}(i,m,t)+\sum _{j=1}^N {\lambda _{ij} P(m)} +\sum _{n=1}^M {p_{mn}^i P(n)} \right] x(t) \end{aligned}$$
(25)

Since \(\sum _{j=1}^N {\lambda _{ij} } P(m)=0,\) as a result of the total probability law and conditions (4) and (5), if the following inequality holds,

$$\begin{aligned} \bar{{A}}^{T}(i,m,t)P(m)+P(m)\bar{{A}}(i,m,t)+\sum _{n=1}^M {p_{mn}^i P(n)} <0 \end{aligned}$$
(26)

then there exist stabilizing state-feedback gains K(m) such that \(\hbox {L}V(x(t),i,m)<0.\) Following a line similar to the proof of Theorem 4 in Section 2.2 of [2], the closed-loop system is stochastically stable and the Definition is verified for the closed-loop system (11).

To derive the controller gains (6), replace \(\bar{{A}}(i,m,t)\) with \(A(i)+B(i)\left( K(m)+\Delta K(m,t) \right) \) in Eq. (26); thus, the following is obtained.

$$\begin{aligned}&A^{T}(i)P(m)+P(m)A(i)+K^{T}(m)B^{T}(i)P(m)\nonumber \\&\quad +\,P(m)B(i)K(m)+\Delta K^{T}(m,t)B^{T}(i)P(m)\nonumber \\&\quad +\, P(m)B(i)\Delta K(m,t)+\sum _{n=1}^M {p_{mn}^i P(n)} <0 \end{aligned}$$
(27)

Substituting the uncertain TRs of the observed chain, \(p_{mn}^{r_t }=\bar{{p}}_{mn}^{r_t }+\Delta p_{mn}^{r_t },\) and the uncertainty structure (9) into Eq. (27) turns it into Eq. (28).

$$\begin{aligned}&A^{T}(i)P(m)+P(m)A(i)+K^{T}(m)B^{T}(i)P(m)\nonumber \\&\quad +\,P(m)B(i)K(m)+E_K^T (m)F_K^T (m,t)D_K^T (m)B^{T}(i)P(m) \nonumber \\&\quad +\, P(m)B(i)D_K (m)F_K (m,t)E_K (m)\nonumber \\&\quad +\,\sum _{n=1}^M {\bar{{p}}_{mn}^i P(n)} +\sum _{n=1,n\ne m}^M {\Delta p_{mn}^i \left( {P(n)-P(m)} \right) } <0 \end{aligned}$$
(28)

Based on the Lemma, the following inequalities can be written for the uncertain parts of (28),

$$\begin{aligned}&P(m)B(i)D_K (m)F_K (m,t)E_K (m)+E_K^T (m)F_K^T (m,t)D_K^T (m)B^{T}(i)P(m)\nonumber \\&\quad \le \varepsilon _K (m)P(m)B(i)D_K (m)D_K^T (m)B^{T}(i)P(m)+\varepsilon _K^{-1} (m)E_K^T (m)E_K (m) \end{aligned}$$
(29)
$$\begin{aligned}&\sum _{n=1,n\ne m}^M {\Delta p_{_{mn} }^i \left( {P(n)-P(m)} \right) } \le \frac{1}{4}\sum _{n=1}^M {\varepsilon _p^i (m,n)(\zeta _{mn}^i )^{2}I} \nonumber \\&\quad +\,\sum _{n=1,n\ne m}^N {\varepsilon _p^i (m,n)^{-1}\left( {P(n)-P(m)} \right) ^{2}} \end{aligned}$$
(30)

where \(\varepsilon \)\(_{K}(m)\) and \(\varepsilon _p^i (m,n)\) specify the degree of robustness of the system.

Using the inequalities (29) and (30), Eq. (28) can be rewritten in the following form.

$$\begin{aligned}&A^{T}(i)P(m)+P(m)A(i)+K^{T}(m)B^{T}(i)P(m)\nonumber \\&\quad +\,P(m)B(i)K(m)+\varepsilon _K (m)P(m)B(i)D_K (m)D_K^T (m)B^{T}(i)P(m)\nonumber \\&\quad +\,\varepsilon _K^{-1} (m)E_K^T (m)E_K (m)\nonumber \\&\quad +\,\sum _{n=1}^M {\bar{{p}}_{mn}^i P(n)} +\frac{1}{4}\sum _{n=1}^M {\varepsilon _p^i (m,n)(\zeta _{mn}^i )^{2}I} \nonumber \\&\quad +\,\sum _{n=1,n\ne m}^M {\varepsilon _p^i (m,n)^{-1}\left( {P(n)-P(m)} \right) ^{2}} <0 \end{aligned}$$
(31)

Define \(V(m)=Z^{-1}(m)\) such that

$$\begin{aligned} \sum _{n=1}^M {\bar{{p}}_{mn}^i P(n)} +\frac{1}{4}\sum _{n=1}^M {\varepsilon _p^i (m,n)(\zeta _{_{mn} }^i )^{2}I} +\sum _{n=1}^M {\varepsilon _p^i (m,n)^{-1}\left( {P(n)-P(m)} \right) ^{2}} <V(m)\nonumber \\ \end{aligned}$$
(32)

By defining Q(i,m), S(m) and R(m) in the form of Eqs. (18), (19) and (20) and applying the Schur complement lemma to Eq. (32), the inequality (14) of the theorem is obtained.

Furthermore, Eq. (31) leads to (33) by using Eq. (32).

$$\begin{aligned}&A^{T}(i)P(m)+P(m)A(i)+K^{T}(m)B^{T}(i)P(m)\ \nonumber \\&\quad +\,P(m)B(i)K(m)+\varepsilon _K (m)P(m)B(i)D_K (m)D_K^T (m)B^{T}(i)P(m) \nonumber \\&\quad +\, \varepsilon _K^{-1} (m)E_K^T (m)E_K (m)+V(m)<0 \end{aligned}$$
(33)

The condition of Eq. (33) is nonlinear in P(m) and K(m). In order to find the controller gains, it is desirable to transform (33) into a linear form; let \(X(m)= P^{-1}(m)\). Pre- and post-multiplying (33) by X(m) provides (34).

$$\begin{aligned}&X(m)A^{T}(i)+A(i)X(m)+X(m)K^{T}(m)B^{T}(i)\nonumber \\&\quad +\,B(i)K(m)X(m)+\varepsilon _K (m)B(i)D_K (m)D_K^T (m)B^{T}(i)\nonumber \\&\quad +\,\varepsilon _K^{-1} (m)X(m)E_K^T (m)E_K (m)X(m)+X(m)V(m)X(m)<0 \end{aligned}$$
(34)

Let \(Y(m)=K(m)X(m)\); then the inequality (35) is obtained from (34).

$$\begin{aligned}&X(m)A^{T}(i)+A(i)X(m)+Y^{T}(m)B^{T}(i)\nonumber \\&\quad +\,B(i)Y(m)+\varepsilon _K(m)B(i)D_K (m)D_K^T (m)B^{T}(i)\nonumber \\&\quad +\,\varepsilon _K^{-1} (m)X(m)E_K^T (m) E_K (m)X(m)+X(m)V(m)X(m)<0 \end{aligned}$$
(35)

By defining J(i,m) as in (17) and using the Schur complement equivalence mentioned in the Lemma, the inequality (35) can be written in the form of inequality (13). Finally, the state-feedback gains are derived as \(K(m)=Y(m)X^{-1}(m)\), which completes the proof. \(\square \)

Remark 2

It should be noted that, due to the uncertainties in the system (1), the theorem provides only a sufficient condition for the stochastic stabilizability of the system. Thus, further work is needed to relax the conditions.

Remark 3

Compared with Markov jump systems, semi-Markov jump systems (S-MJLSs) are more practical stochastic models for real-world applications [7, 17, 29]. While MJLSs are characterized by a fixed matrix of transition rates, S-MJLSs are identified by time-dependent TRs with relaxed conditions on the sojourn-time probability distributions. The method of this study can certainly be extended to deal with the asynchronous switching phenomenon in the control of semi-Markov jump systems. In that case, the switching of the system and the controller could be modeled by two distinct semi-Markov processes. By considering both processes integrated in the closed-loop system, the controller can be readily designed.

3.1 Discussions

3.1.1 Design Parameters

A drawback of the proposed method is that the conditions depend on a number of parameters \(\varepsilon \) that must be suitably tuned. These parameters arise from the lemma used for dealing with the system uncertainty. According to [40], this lemma holds for any \(\varepsilon > 0\). Although these parameters could take any positive values, it is preferable to select them carefully. The reason is that these parameters determine the degree of robustness of the system, and improper values may increase conservativeness and even lead to infeasible LMI sets. There are two approaches for dealing with the parameters \(\varepsilon \). The first is to select them a priori to provide a prescribed degree of robustness. This approach is extensively used in robust controller design problems [40] and is also the one adopted in the current study. Its main advantage is that it provides the conditions for a fair comparison between multiple results. The second approach is to optimize the parameters \(\varepsilon \), which is addressed in [24].

3.1.2 Uncertainties

As mentioned before, the uncertainty in the proposed control structure appears both in the observed Markov chain and in the gains of the controller. Both uncertainties result from imperfect system information and modeling errors. The TR uncertainty is specified by the bound \(\zeta _{mn}^{r_t } \), and the controller gain uncertainty is specified by the matrices \(E_{K}(m), D_{K}(m)\) and \(F_{K}(m,t)\). Generally, these specifications are determined empirically as an admissible portion (for example, up to 20%) of the nominal values of the transition rates and the system gains, based on extensive statistical data gathered in practice. In this study, these bounds and matrices are supposed to be available a priori from the modeling procedure.

3.1.3 Number of the Controller Modes

There is no specific relation between the number of system modes and the number of controller modes; in other words, M and N need not be equal. Although in normal situations M and N are supposed to be equal, in a case where the mode information is incomplete and some modes are not observable, M may be less than N. Also, in noisy and disturbed situations, the number of observed modes may exceed the number of actual modes. In this study, it is assumed that both M and N are known in advance. Obtaining the Markov chain of the controller, or the problem of how to observe its parameters, is beyond the scope of this paper and may be a significant and interesting problem for further investigation.

3.1.4 Feasibility of the Controller from Implementation and Computational Point of Views

The asynchronous controller is highly feasible from the implementation point of view, but it is also subject to some technical limitations. These limitations concern the implementation of the state-feedback gains and the construction of the switching mechanism. Regarding feedback gain implementation, the proposed scheme faces difficulties similar to those that come up in the implementation of a normal control gain for a linear system, so this is not a major concern. Regarding the Markov chain construction and mode identification, the proposed controller is also feasible, and there exist plenty of studies dedicated to Markov chain implementations [33]. Remarkably, the presented switching controller is more feasible than the traditional control scheme for MJLSs since it does not require perfect access to the Markov chain of the system but depends only on the observed chain. From the computational point of view, the factors that affect the feasibility of the controller are the uncertainty bounds of the TRs and feedback gains involved in the LMI constraints. Although the LMIs are essentially convex constraints, easily solvable by optimization algorithms and software, large uncertainty bounds may reduce the feasibility of the LMIs.

4 Illustrative Example

In this section, simulation results are provided to test the effectiveness and the applicability of the proposed theory. For this purpose, a stabilizing non-fragile asynchronous controller is designed and applied to a vertical takeoff and landing (VTOL) vehicle extended from [14], which is modeled as an MJLS. The simulations are performed in MATLAB 2016b, with YALMIP [19] used for solving the LMIs, on a computer with an Intel® Core™ i7-6700HQ 2.60 GHz CPU and 16 GB RAM.

Due to the stochastic nature of Markovian systems, the simulation results cannot be convincing on the basis of a single realization; therefore, the results are obtained for 10 individual runs and also presented as the average of 10,000 Monte Carlo runs. Furthermore, to show the superiority of the proposed controller, the results are compared to those of the mode-independent controller of [2].

The VTOL states, \(x(t) = [x_{1}(t), x_{2}(t), x_{3}(t), x_{4}(t)]\), are defined in Table 1.

Table 1 VTOL states and state variables

The vertical takeoff–landing vehicle is a fault-prone system and can be represented by a Markovian jump linear model. In normal working conditions, the VTOL system matrices are A(1) and B(1). The fault scenario in this system is a 50% loss of effectiveness of the collective pitch control input \(u_{1}(t)\) on the vertical velocity \(x_{2}(t)\). Under this fault, the system matrices are A(2) (= A(1)) and B(2), in which the first element of the second row of the input matrix B(2) is half of that of B(1). Without loss of generality, only one type of fault is considered in the numerical example, so \(N = M = 2\). The system matrices are as follows.

$$\begin{aligned} A(1)&=A(2)=\begin{bmatrix} -0.0366 &{} 0.0271 &{} 0.0188 &{} -0.4555 \\ 0.0482 &{} -1.010 &{} 0.0024 &{} -4.0208 \\ 0.1002 &{} 0.3681 &{} -0.7070 &{} 1.4200 \\ 0 &{} 0 &{} 1 &{} 0 \end{bmatrix}, \\ B(1)&=\begin{bmatrix} 0.4422 &{} 0.1761 \\ 3.5446 &{} -7.5922 \\ -5.520 &{} 4.490 \\ 0 &{} 0 \end{bmatrix}, \quad B(2)=\begin{bmatrix} 0.4422 &{} 0.1761 \\ 1.7723 &{} -7.5922 \\ -5.520 &{} 4.490 \\ 0 &{} 0 \end{bmatrix} \end{aligned}$$
(36)

The TR matrix of the fault process is given in (37), and the nominal observed transition rate matrices in (38).

$$\begin{aligned}&\Lambda =\begin{bmatrix} -2 &{} 2 \\ 1 &{} -1 \end{bmatrix} \end{aligned}$$
(37)
$$\begin{aligned}&P^{1}=\begin{bmatrix} -0.1 &{} 0.1 \\ 0.15 &{} -0.15 \end{bmatrix}, \quad P^{2}=\begin{bmatrix} -0.2 &{} 0.2 \\ 0.1 &{} -0.1 \end{bmatrix} \end{aligned}$$
(38)
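As a hedged illustration (not the authors' MATLAB code), the two chains can be sampled jointly by Euler discretization of the generators: the system chain \(r_t\) jumps with the rates of (37), while the observed chain \(\sigma_t\) jumps with the rates \(P^{r_t}\) of (38), i.e., its generator depends on the current system mode. The step size, function name, and 0-based mode indexing are assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch: sample the fault chain r_t (generator Lambda, Eq. 37)
# and the observed chain sigma_t, whose transition rates P^{r_t} (Eq. 38)
# depend on the current system mode (piecewise-homogeneous observation model).
# Modes are 0-based here (mode 0 = paper's mode 1).
Lam = np.array([[-2.0, 2.0], [1.0, -1.0]])
P = [np.array([[-0.10, 0.10], [0.15, -0.15]]),   # P^1
     np.array([[-0.20, 0.20], [0.10, -0.10]])]   # P^2

def simulate_chains(T=10.0, dt=1e-3, r0=0, s0=0, rng=None):
    rng = np.random.default_rng(rng)
    n = int(T / dt)
    r = np.empty(n, dtype=int)
    s = np.empty(n, dtype=int)
    r[0], s[0] = r0, s0
    for k in range(1, n):
        # Euler discretization: over a step dt, the jump probability out of
        # the current mode is approximately (exit rate) * dt.
        r[k] = 1 - r[k-1] if rng.random() < -Lam[r[k-1], r[k-1]] * dt else r[k-1]
        # Observed-chain rates are selected by the current system mode r.
        s[k] = 1 - s[k-1] if rng.random() < -P[r[k-1]][s[k-1], s[k-1]] * dt else s[k-1]
    return r, s
```

The two-mode case allows the `1 - mode` flip; with more modes, the off-diagonal rates would determine the destination mode.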

It is assumed that the observed chain has uncertainties of up to 50% of the nominal values; that is, the upper and lower bounds of the TRs satisfy \(\left| {\Delta p_{mn} } \right| \le p_{mn} /2,\; m\ne n\).

A single mode evolution of the Markov process \(r_{t}\), which demonstrates the fault occurrence trajectory, is depicted in Fig. 2. The corresponding observed Markov chain \(\sigma_{t}\) is illustrated in Fig. 3.

The controller gain uncertainties in this example are assumed to be as in (39).

$$\begin{aligned} D_K (1)=D_K (2)=\begin{bmatrix} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 \end{bmatrix}, \quad E_B (1)=E_B (2)=0.1 I_{4} \end{aligned}$$
(39)
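A minimal sketch of how such a perturbation can be realized, assuming the standard norm-bounded factorization \(\Delta K = D_K F E_B\) with \(\Vert F\Vert \le 1\) (the sampling routine and its name are illustrative, not from the paper):

```python
import numpy as np

# Assumed uncertainty structure: implemented gain = K + D_K @ F @ E_B,
# where F is an arbitrary matrix with spectral norm at most 1 (Eq. 39 data).
D_K = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0]])
E_B = 0.1 * np.eye(4)

def perturbed_gain(K, rng=None):
    """Return one admissible perturbed gain K + D_K F E_B, ||F||_2 <= 1."""
    rng = np.random.default_rng(rng)
    F = rng.uniform(-1.0, 1.0, size=(4, 4))
    F /= max(1.0, np.linalg.norm(F, 2))   # enforce the norm bound on F
    return K + D_K @ F @ E_B
```

Since \(\Vert D_K\Vert_2 = 1\) and \(\Vert E_B\Vert_2 = 0.1\), the resulting gain perturbation is bounded by 0.1 in spectral norm.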

The uncontrolled states of the VTOL system are depicted in Fig. 4 for the initial conditions and modes \(x_{1}(0) = -1.2\), \(x_{2}(0) = 0.9\), \(x_{3}(0) = 0.2\), \(x_{4}(0) = -1\), \(r_{0} = \sigma_{0} = 1\). It is clear that all the states of the uncontrolled system are unstable.

Fig. 2
figure 2

Fault occurrence trajectory of the VTOL

Fig. 3
figure 3

States of the observed Markov process

Fig. 4
figure 4

Uncontrolled states of the VTOL system

By solving the conditions of the theorem in Sect. 3 with the prescribed degrees of robustness \(\varepsilon_{A}(1) = \varepsilon_{B}(1) = 0.5\), \(\varepsilon_{A}(2) = \varepsilon_{B}(2) = 0.1\), and \(\varepsilon_p^i (m,n) = 0.5\), the following controllers are obtained.

$$\begin{aligned} K(1)&=\begin{bmatrix} -1.1255 &{} -1.0359 &{} 1.5716 &{} 1.6765 \\ -0.1085 &{} 1.0903 &{} -0.1290 &{} -0.5368 \end{bmatrix}, \\ K(2)&=\begin{bmatrix} -0.7770 &{} 0.1750 &{} 0.7440 &{} 1.0035 \\ 0.5341 &{} 1.6649 &{} -0.1679 &{} -1.4685 \end{bmatrix} \end{aligned}$$
(40)
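To make the asynchronous mechanism concrete, the closed loop can be sketched as follows: the plant dynamics switch with the true mode \(r_t\), while the gain is selected by the observed mode \(\sigma_t\). This Python sketch uses forward-Euler integration with the matrices of (36) and the gains of (40); the step size, function names, and 0-based mode indexing are assumptions (the paper's simulations were done in MATLAB).

```python
import numpy as np

# System data of Eq. (36); note A(1) = A(2), so a single A suffices.
A = np.array([[-0.0366,  0.0271,  0.0188, -0.4555],
              [ 0.0482, -1.010,   0.0024, -4.0208],
              [ 0.1002,  0.3681, -0.7070,  1.4200],
              [ 0.0,     0.0,     1.0,     0.0   ]])
B = [np.array([[0.4422,  0.1761],
               [3.5446, -7.5922],
               [-5.520,  4.490 ],
               [0.0,     0.0   ]]),
     np.array([[0.4422,  0.1761],
               [1.7723, -7.5922],
               [-5.520,  4.490 ],
               [0.0,     0.0   ]])]
# Asynchronous gains of Eq. (40), indexed by the OBSERVED mode.
K = [np.array([[-1.1255, -1.0359,  1.5716,  1.6765],
               [-0.1085,  1.0903, -0.1290, -0.5368]]),
     np.array([[-0.7770,  0.1750,  0.7440,  1.0035],
               [ 0.5341,  1.6649, -0.1679, -1.4685]])]

def simulate_closed_loop(x0, r, s, dt=1e-3):
    """Forward-Euler integration of dx/dt = A x + B(r_t) K(sigma_t) x."""
    xs = np.empty((len(r), 4))
    xs[0] = x0
    for k in range(1, len(r)):
        x = xs[k-1]
        u = K[s[k-1]] @ x                          # gain driven by observed mode
        xs[k] = x + dt * (A @ x + B[r[k-1]] @ u)   # dynamics driven by true mode
    return xs
```

The mode index arrays `r` and `s` could come from any chain sampler; the point of the sketch is only that the two index streams need not agree at any instant.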

With the initial conditions mentioned above, the state trajectories of the controlled system and the corresponding control signals are illustrated in Figs. 5 and 6 for 10 individual runs.

Fig. 5
figure 5

Controlled states of the VTOL for 10 runs

Fig. 6
figure 6

Control signals of the VTOL for 10 runs

For a better understanding, the average state responses and the relevant control efforts are also shown in Figs. 7 and 8 for 10,000 Monte Carlo runs.

Fig. 7
figure 7

Average controlled states of the VTOL for 10,000 Monte Carlo runs

Fig. 8
figure 8

Average control signals of the VTOL for 10,000 Monte Carlo runs

For further investigation, define the settling time \(T_{s}\) by (41):

$$\begin{aligned} \left\| {x(t)} \right\| _2 \le 1.5\% \left\| {x(0)} \right\| _2 , t>T_s \end{aligned}$$
(41)
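Criterion (41) can be evaluated numerically as the first sample time after which the state norm never again leaves the 1.5% band; the following sketch (function name and sampling grid are illustrative) implements it:

```python
import numpy as np

def settling_time(t, x, tol=0.015):
    """Smallest T_s with ||x(t)||_2 <= tol * ||x(0)||_2 for all t > T_s (Eq. 41).

    t : 1-D array of sample times; x : array of shape (len(t), n_states).
    Returns np.inf if the trajectory has not settled by the end of the horizon.
    """
    norms = np.linalg.norm(x, axis=1)
    above = norms > tol * norms[0]        # samples still outside the band
    if not above.any():
        return t[0]                       # settled from the start
    last = int(np.flatnonzero(above)[-1]) # last sample outside the band
    return np.inf if last == len(t) - 1 else t[last + 1]
```

For a monotonically decaying trajectory \(x(t)=x(0)e^{-t}\), this returns \(T_s \approx \ln(1/0.015) \approx 4.2\).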

Statistics of the settling time and the approximated normal distribution are summarized in Fig. 9.

Fig. 9
figure 9

Statistics of the settling time of the controlled states for 10,000 Monte Carlo runs

The figures clearly demonstrate that, under the asynchronous controller, the states converge to the origin and the control signals are sufficiently smooth. Consequently, the proposed non-fragile controller can effectively manage the effect of the mismatched Markov chains of the system and the controller, as well as the controller uncertainties.

In order to show the superiority of the developed method over the mode-independent controller in the case of mismatched chains, the results are compared with those of the mode-independent controller reported in [2]. By solving the conditions for the non-fragile case, the controller (42) is obtained.

$$\begin{aligned} K=\begin{bmatrix} -0.4326 &{} -0.3214 &{} 0.5680 &{} 0.6607 \\ -0.1034 &{} 0.4026 &{} 0.0018 &{} -0.4363 \end{bmatrix} \end{aligned}$$
(42)

The average controlled states of 10,000 Monte Carlo runs are depicted in Fig. 10 with the relevant control efforts of Fig. 11. The histogram and the normal distribution of the settling time are also depicted in Fig. 12.

Notably, to provide a fair comparison, the uncertainty bounds and the robustness degrees are set equal to those of the asynchronous controller.

Fig. 10
figure 10

Average controlled states of the VTOL under the mode-independent control [2] for 10,000 Monte Carlo runs

Fig. 11
figure 11

Average control signals of the VTOL under the mode-independent control [2] for 10,000 Monte Carlo runs

Fig. 12
figure 12

Statistics of the settling time of the controlled states under the mode-independent control [2] for 10,000 Monte Carlo runs

As the figures show, the asynchronous controller achieves a faster response (smaller settling time) than the mode-independent controller. Table 2 summarizes the comparison results.

Table 2 Results of comparing the asynchronous controller with the mode-independent controller for the VTOL

The reason for the better performance of the proposed control scheme is that the mode-independent controller ignores the switching information of the dynamics, which leads to more conservative gains. Admittedly, the cost of this improvement is a larger number of LMIs to solve. Although the asynchronous controller is computationally more demanding, it is more likely to yield feasible results in complex asynchronous situations.

5 Conclusions

In this study, an asynchronous control scheme is proposed for continuous-time MJLSs. The proposed scheme includes an additional uncertain Markov chain as an observation of the original chain. This chain is slightly different from the original chain but depends on it according to certain probabilities. It also contains uncertain TR information to account for imperfect observation procedures. Such a scheme can handle practical situations in which real-time, exact, and precise detection of the system modes is not possible. The design also takes inaccuracies of the controller implementation into account and helps reduce the implementation cost. Here, the piecewise homogeneous Markov chain approach allows the results to be obtained in the form of LMIs, which are easily solvable. The proposed method for dealing with the asynchronous phenomenon of MJLSs is superior to previous techniques in terms of the conservativeness of the stability analysis conditions and the designed controller gains, because it provides a more realistic representation of the asynchronous switching. It is also more likely to yield feasible results than the previous methods. Remarkably, the proposed method can be extended to multi-objective problems such as asynchronous controller design for MJLSs subject to disturbances or noises. It can also be generalized to S-MJLSs with asynchronous switching between the system and controller modes.