1 Introduction

Due to their capability of modeling a class of hybrid systems whose structure is subject to random abrupt changes, Markov jump systems have attracted considerable research attention and have potential applications in manufacturing systems, chemical processes, power systems, networked control systems, and so on [1, 25]. Many interesting results have been reported in the literature, including stability and stabilization [10, 37], controller design [18, 21, 22], filter design [6, 11,12,13], and state estimation [33, 38].

Nonlinearity is very common in real applications, and repeated scalar nonlinearities cover several typical nonlinearities, such as the semi-linear function, sine function, saturation function, and hyperbolic tangent function [4]. Repeated scalar nonlinear systems are therefore widely used to model manufacturing systems, recurrent neural networks, cold rolling mills, and so on [7]. Accordingly, much research has been devoted to stability analysis and stabilization [4], control [8], filtering [9], and model reduction [5] of repeated scalar nonlinear systems. Due to their advantage in modeling abrupt changes in real applications, Markov jump systems with repeated scalar nonlinearities have also attracted considerable attention, and many instructive results have been reported, for example, output feedback control [26], stabilization [35], filter design [14, 15, 23, 36], and \(L_{2}-L_{\infty }\) tracking control [24].

On the other hand, with the development of network technology, many real systems are controlled through networks. However, the network bandwidth is limited, which restricts data transmission and makes network-based control inefficient. Traditionally, the data is transmitted through the network at fixed time intervals, which is often called the time-triggered communication scheme [2, 3]. In the time-triggered communication scheme, the data is transmitted at every time interval even if it has not changed or has changed little, which is easy to implement but often wastes network resources [39]. To solve this problem, the event-triggered communication scheme has been proposed in recent decades. In the event-triggered communication scheme, whether the newly sampled data should be transmitted is determined by a predefined event condition [16, 19]; the newly sampled data is transmitted only when this condition is violated. Consequently, the network communication load can be greatly reduced and network resources can be largely saved under the event-triggered communication scheme [27,28,29]. Event-triggered control and filtering for various systems have therefore become a hot research topic, and many event-triggered results have been reported. To name a few, the event-triggered \(H_{\infty }\) filtering for continuous-time and discrete-time Markov jump systems with time delays was studied in [30, 31] and [32, 34], respectively. In [32], an \(H_{\infty }\) performance criterion is derived, and a co-design method for the event detector and the \(H_{\infty }\) filter is given. The event-based \(H_{\infty }\) filtering for networked linear systems with communication delays has been studied in [17], where the linear system is converted into a time-delay linear system using a time interval analysis approach.
The problem of event-triggered state estimation is investigated in [40], where a novel state estimator is presented to estimate the networked states. The paper [20] addresses event-triggered fault detection filtering for discrete-time Markov jump systems. Although there are many event-triggered results on Markov jump systems, to the best of the authors' knowledge, the \(H_{\infty }\) filtering problem for Markov jump systems with repeated scalar nonlinearities under an event-triggered scheme has not been fully studied in the open literature. Results on this problem can be applied to many practical systems, which further motivates this study.

Motivated by the above discussion, we focus on event-triggered \(H_{\infty }\) filtering for discrete-time Markov jump systems with repeated scalar nonlinearities. The main contributions of this paper are threefold: (1) an event-triggered scheme for discrete-time Markov jump systems with repeated scalar nonlinearities is presented to reduce the waste of network resources; (2) based on the diagonally dominant Lyapunov function approach, a sufficient condition is presented which guarantees that the filtering error system is stochastically stable with a prescribed \(H_{\infty }\) performance; and (3) the parameters of the event-trigger and the \(H_{\infty }\) filter can be co-designed.

The rest of the paper is organized as follows. Section 2 formulates the problem under consideration. \(H_{\infty }\) filtering performance analysis and the co-design method of event-based condition and \(H_{\infty }\) filter are presented in Sect. 3. An illustrative example is given in Sect. 4, and we conclude the paper in Sect. 5.

Notations Throughout this paper, the superscripts “T” and “\(-1\)” stand for the transpose and the inverse of a matrix, respectively; \(R^{n}\) denotes the n-dimensional Euclidean space; \(R^{n\times m}\) is the set of all real matrices with n rows and m columns; \(P>0\) means that P is positive definite; I is the identity matrix with appropriate dimensions; the space of square-summable vector sequences over \([0,\infty )\) is denoted by \(\mathcal {L}_{2}[0,\infty )\); |x| represents the absolute value of x; \(\mathcal {E}\{\cdot \}\) denotes the expectation operator; and for a symmetric matrix, \(*\) denotes the matrix entries implied by symmetry.

2 Problem Formulation

2.1 System description

The framework of event-triggered \(H_{\infty }\) filtering for the discrete-time Markov jump system considered in this paper is shown in Fig. 1, where the plant is a discrete-time Markov jump system with repeated scalar nonlinearities.

Fig. 1

The framework of the event-triggered \(H_{\infty }\) filter

We suppose that the discrete-time Markov jump system with repeated scalar nonlinearities can be described as follows:

$$\begin{aligned} \left\{ \begin{aligned} x(k+1)&=A(r_{k})f(x(k))+B(r_{k})w(k)\\ y(k)&=C(r_{k})f(x(k))+D(r_{k})w(k)\\ z(k)&=E(r_{k})f(x(k)) \end{aligned} \right. , \end{aligned}$$
(1)

where \(x(k) \in \) \(R^{n}\) is the state of the plant; \(y(k) \in \) \(R^{m}\) represents the measurement output; \(z(k) \in \) \(R^{p}\) is the signal to be estimated; \(w(k) \in \) \(\mathcal {L}_{2}[0,\infty )\) is the disturbance input; and \(r_{k}\) represents a discrete-time homogeneous Markov chain, which takes values in a finite set S = {1,2,3,...,N} with the following mode transition probabilities:

$$\begin{aligned} P_{r}\{r(k+1)=j|r(k)=i\}=\pi _{ij}, \end{aligned}$$

where \(0 \le \pi _{ij} \le 1\), \(\forall i,j \in S\) and \(\sum _{j=1}^N \pi _{ij}\) \(= 1\), \(\forall i \in S\). f(x(k)) is the nonlinear function; for the vector \(x(k)=[x_{1}(k)~x_{2}(k)~\ldots ~x_{n}(k)]^\mathrm{T}\), we denote \(f(x(k))=[f(x_{1}(k))~f(x_{2}(k))~\ldots ~f(x_{n}(k))]^\mathrm{T}\). The function f(x(k)) satisfies the following assumption.

Assumption 1

[4] The nonlinear function f(x(k)) in (1) satisfies:

$$\begin{aligned} \forall x,y \in R, \quad |f(x) + f(y)| \le |x + y|. \end{aligned}$$
(2)
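Condition (2) is satisfied by the saturation, sine, and hyperbolic tangent functions mentioned in Sect. 1. As an informal numerical sanity check (not part of the formal development), (2) can be tested on a grid of sample points; the Python sketch below is illustrative only, and the grid and tolerance are our own choices:

```python
import math

def satisfies_assumption_1(f, samples, tol=1e-12):
    """Grid check of condition (2): |f(x) + f(y)| <= |x + y| for all
    sampled x, y (tol absorbs floating-point rounding)."""
    return all(abs(f(x) + f(y)) <= abs(x + y) + tol
               for x in samples for y in samples)

def saturation(x, limit=1.0):
    """Saturation nonlinearity: clamps x to [-limit, limit]."""
    return max(-limit, min(limit, x))

# Hypothetical sample grid over [-5, 5].
grid = [0.1 * i for i in range(-50, 51)]
```

On this grid, tanh, sin, and the saturation pass the check, while a non-odd function such as \(f(x) = x + 1\) fails already at \(x = y = 0\).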

For notational simplicity, in this paper, when \(r_{k} = i \in S\), a matrix \(M(r_{k})\) is denoted by \(M_{i}\); for example, \(A(r_{k})\) is denoted by \(A_{i}\), \(B(r_{k})\) by \(B_{i}\), and so on.
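To make the role of the transition probabilities \(\pi _{ij}\) concrete, the following Python sketch samples a mode trajectory of the Markov chain \(r_{k}\); the two-mode transition matrix is a hypothetical example, not a value from this paper:

```python
import random

def next_mode(mode, pi, rng):
    """Sample r(k+1) given r(k) = mode from row `mode` of the
    transition probability matrix pi = [pi_ij]."""
    u = rng.random()
    cumulative = 0.0
    for j, p in enumerate(pi[mode]):
        cumulative += p
        if u < cumulative:
            return j
    return len(pi[mode]) - 1  # guard against rounding

# Hypothetical two-mode chain; each row sums to 1, as required.
PI = [[0.7, 0.3],
      [0.4, 0.6]]

rng = random.Random(0)
trajectory = [0]
for _ in range(20):
    trajectory.append(next_mode(trajectory[-1], PI, rng))
```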

2.2 Event Detector

In Fig. 1, an event detector is employed between the plant and the filter to determine whether the current data should be transmitted to the filter or not. y(k) is the current measurement data, and \(y(s_{l})\) is the latest transmitted data. So the event-triggered scheme can be defined as follows:

$$\begin{aligned}{}[y(k) - y(s_{l})]^\mathrm{T} \varPhi _{i} [y(k) - y(s_{l})] \ge \varepsilon _{i} y^{T}(k) \varPhi _{i} y(k), \end{aligned}$$
(3)

where \(\varPhi _{i}\) is a positive-definite weighting matrix to be designed and \(\varepsilon _{i} \in \) [0,1) is a given scalar parameter. Obviously, if y(k) and \(y(s_{l})\) satisfy (3), y(k) will be transmitted to the filter.

Remark 1

Note that the event-triggered scheme (3) is a dynamic condition with adjustable parameters. The event-triggered parameter \(\varPhi _{i}\) is different for different jumping modes, which makes the scheme more practical.
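To illustrate how scheme (3) reduces transmissions, consider the scalar special case with weight \(\varPhi _{i} = \phi > 0\). The Python sketch below is illustrative only; the slowly varying signal and the value \(\varepsilon _{i} = 0.1\) are hypothetical choices:

```python
def transmitted_instants(measurements, epsilon, phi=1.0):
    """Scalar version of scheme (3): transmit y(k) when
    (y(k) - y(s_l))^2 * phi >= epsilon * y(k)^2 * phi,
    where y(s_l) is the latest transmitted value."""
    sent = [0]                  # assume the first sample is always sent
    y_last = measurements[0]
    for k in range(1, len(measurements)):
        y = measurements[k]
        if (y - y_last) ** 2 * phi >= epsilon * y ** 2 * phi:
            sent.append(k)
            y_last = y
    return sent

# Hypothetical slowly varying signal: most samples are not sent.
ys = [1.0 + 0.01 * k for k in range(100)]
sent = transmitted_instants(ys, epsilon=0.1)
```

Here only a small fraction of the 100 samples trigger a transmission, matching the bandwidth-saving motivation above.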

2.3 \(H_{\infty }\) Filter

Since the network bandwidth is limited, time delay is inevitable in network transmission. In this paper, we suppose that the network-induced delay at time instant k, denoted \(\tau _{k}\), is bounded and satisfies \(0< \tau _{k} < \tilde{\tau }\), where \(\tilde{\tau }\) is a positive integer. Taking the time delay into account, the output \(y(s_{l})\) reaches the filter at time instant \(s_{l} + \tau _{s_{l}}\), and considering the behavior of the zero-order hold (ZOH), we have

$$\begin{aligned} y_{f}(k)= y(s_{l})\quad k \in [s_{l} + \tau _{s_{l}}, s_{l+1} + \tau _{s_{l+1}} - 1]. \end{aligned}$$
(4)
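The holding behavior (4) can be mimicked in a few lines. In this Python sketch, the transmission instants \(s_{l}\) and delays \(\tau _{s_{l}}\) are hypothetical values chosen only to illustrate the intervals in (4):

```python
def zoh_filter_input(y, sent, delays, horizon):
    """Per (4): y_f(k) = y(s_l) for k in
    [s_l + tau_{s_l}, s_{l+1} + tau_{s_{l+1}} - 1]."""
    arrivals = [s + d for s, d in zip(sent, delays)]
    yf = {}
    for l, (s, a) in enumerate(zip(sent, arrivals)):
        end = arrivals[l + 1] - 1 if l + 1 < len(arrivals) else horizon - 1
        for k in range(a, end + 1):
            yf[k] = y[s]
    return yf

y = [float(k) for k in range(10)]      # hypothetical measurements
yf = zoh_filter_input(y, sent=[0, 4, 7], delays=[1, 2, 1], horizon=10)
```

In this example, y(0) arrives at k = 1 and is held until y(4) arrives at k = 6; y(7) arrives at k = 8 and is held for the rest of the horizon.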

The \(H_{\infty }\) filter used in this paper is supposed to be

$$\begin{aligned} \left\{ \begin{aligned} x_{f}(k+1)&=A_{fi}f(x_{f}(k))+B_{fi}y_{f}(k)\\ z_{f}(k)&=C_{fi}f(x_{f}(k))+D_{fi}y_{f}(k),\\ \end{aligned} \right. \end{aligned}$$
(5)

where \(x_{f}(k) \in R^{n}\) is the state vector of the filter, the \(y_{f}(k)\in R^{m}\) is the actual input of the filter, and \(z_{f}(k) \in R^{p}\) is the output of the filter. The matrices \(A_{fi}, B_{fi}, C_{fi}, D_{fi}\) are appropriate dimensional filter parameters to be determined.

Substituting (4) into (5), we have

$$\begin{aligned} \left\{ \begin{aligned} x_{f}(k+1)&=A_{fi}f(x_{f}(k))+B_{fi}y(s_{l})\\ z_{f}(k)&=C_{fi}f(x_{f}(k))+D_{fi}y(s_{l}),\quad k \in [s_{l} + \tau _{s_{l}}, s_{l+1} + \tau _{s_{l+1}} - 1]. \\ \end{aligned} \right. \end{aligned}$$
(6)

2.4 Time-Delay Modeling Based on Event-Triggered Scheme

Using methods similar to those in [30], we convert the networked discrete-time Markov jump system (1) under the event-triggered scheme (3) into a time-delay system, which simplifies the analysis and design. The following two cases should be considered:

Case A: If \(s_{l} + \tilde{\tau } + 1 \ge s_{l+1} + \tau _{s_{l+1}} - 1\), we define the function:

$$\begin{aligned} \tau (k) = k - s_{l} \quad k \in [s_{l} + \tau _{s_{l}}, s_{l+1} + \tau _{s_{l+1}} - 1]. \end{aligned}$$
(7)

Obviously,

$$\begin{aligned} \tau _{s_{l}} \le \tau (k) \le (s_{l+1} - s_{l}) + \tau _{s_{l+1}} - 1 \le 1 + \tilde{\tau }. \end{aligned}$$
(8)

Case B: If \(s_{l} + 1 + \tilde{\tau } < s_{l+1} + \tau _{s_{l+1}} - 1\), we should consider the following two intervals:

$$\begin{aligned}{}[s_{l} + \tau _{s_{l}}, s_{l} + \tilde{\tau }],\quad [s_{l} + \tilde{\tau } + d, s_{l} + 1 + \tilde{\tau } + d], \end{aligned}$$
(9)

where d is a positive integer, \(d \ge 1\).

As \(\tau _{s_{l}} \le \tilde{\tau }\), it can be easily shown that there exists a positive integer \(d_{M}\), such that

$$\begin{aligned} s_{l} + \tilde{\tau } +d_{M}< s_{l+1} + \tau _{s_{l+1}} - 1 < s_{l} + d_{M} + 1 + \tilde{\tau }, \end{aligned}$$
(10)

and \(y(s_{l})\), \(y(s_{l} + d)\) with \(d = 1, 2, 3,\ldots , d_{M}\) satisfy

$$\begin{aligned}{}[y(s_{l} + d) - y(s_{l})]^\mathrm{T} \varPhi _{i} [y(s_{l} + d) - y(s_{l})] \le \varepsilon _{i} y^{T}(s_{l} + d) \varPhi _{i} y(s_{l} + d). \end{aligned}$$
(11)

From (8)–(10), we can obtain:

$$\begin{aligned}{}[s_{l}+\tau _{s_{l}},s_{l+1}+\tau _{s_{l+1}}-1]&=[s_{l}+\tau _{s_{l}},s_{l}+\tilde{\tau }+1)\bigcup \left\{ \bigcup \limits _{d=1}^{d_{M}-1}[s_{l}+\tilde{\tau }+d,s_{l}+\tilde{\tau }+d+1)\right\} \\&\bigcup [s_{l}+d_{M}+\tilde{\tau },s_{l+1}+\tau _{s_{l+1}}-1]. \end{aligned}$$

Define function \(\tau (k)\) as

$$\begin{aligned} \tau (k)=\left\{ \begin{array}{ll} k-s_{l} &{}\quad k \in \varOmega _{1}\\ k-s_{l}-d &{}\quad k \in \varOmega _{2}\\ k-s_{l}-d_{M} &{}\quad k\in \varOmega _{3},\\ \end{array} \right. \end{aligned}$$
(12)

where

$$\begin{aligned} \varOmega _{1}&=[s_{l}+\tau _{s_{l}},s_{l}+\tilde{\tau }+1),\\ \varOmega _{2}&=[s_{l}+\tilde{\tau }+d,s_{l}+\tilde{\tau }+d+1),\quad d=1,2,3,\ldots d_{M}-1,\\ \varOmega _{3}&=[s_{l}+d_{M}+\tilde{\tau },s_{l+1}+\tau _{s_{l+1}}-1]. \end{aligned}$$

From (12), we have

$$\begin{aligned} \left\{ \begin{array}{ll} \tau _{s_{l}} \le \tau (k) \le 1 + \tilde{\tau } = \tau _{M}&{}\quad k \in \varOmega _{1}\\ \tau _{s_{l}} \le \tau (k) \le \tau _{M} &{}\quad k \in \varOmega _{2}\\ \tau _{s_{l}} \le \tau (k) \le \tau _{M} &{}\quad k\in \varOmega _{3}.\\ \end{array} \right. \end{aligned}$$
(13)

So we can obtain that

$$\begin{aligned} 0 \le \tau _{m} \le \tau _{s_{l}} \le \tau (k) \le \tau _{M} \end{aligned}$$
(14)

where \(\tau _{m} = \hbox {inf}\{ \tau _{s_{l}} \}\).

For Case A, \(k \in [s_{l} + \tau _{s_{l}}, s_{l+1} + \tau _{s_{l+1}} - 1]\), define an error vector \(e_{i}(k) = 0\). For Case B, we define

$$\begin{aligned} e_{i}(k)=\left\{ \begin{array}{ll} 0 &{}\quad k \in \varOmega _{1}\\ y(s_{l}+d)-y(s_{l}) &{}\quad k \in \varOmega _{2}\\ y(s_{l}+d_{M})-y(s_{l})&{}\quad k\in \varOmega _{3}.\\ \end{array} \right. \end{aligned}$$
(15)

From the definition of \(e_{i}(k)\) and the triggered scheme (3), we have

$$\begin{aligned} e_{i}^{T}(k) \varPhi _{i} e_{i}(k) \le \varepsilon _{i}y^{T}(k - \tau (k)) \varPhi _{i} y(k - \tau (k)). \end{aligned}$$
(16)

Utilizing \(\tau (k)\) and \(e_{i}(k)\), the filter (6) can be rewritten as

$$\begin{aligned} \left\{ \begin{aligned} x_{f}(k+1)&=A_{fi}f(x_{f}(k)) + B_{fi}y(k - \tau (k)) - B_{fi}e_{i}(k)\\ z_{f}(k)&=C_{fi}f(x_{f}(k))+D_{fi}y(k - \tau (k))- D_{fi} e_{i}(k), \\ \end{aligned} \right. \end{aligned}$$
(17)

where \(k \in [s_{l} + \tau _{s_{l}}, s_{l+1} + \tau _{s_{l+1}} - 1]\).

Define the new state vector \(\xi ^{T}(k) =\begin{bmatrix} x^{T}(k)&x_{f}^{T}(k)\end{bmatrix}\), \(e(k) = z(k) - z_{f}(k)\), and \(\widehat{w}^{T}(k) =\begin{bmatrix} w^{T}(k)&w^{T}(k - \tau (k))\end{bmatrix}\), and then, the following filtering error system can be obtained from (1) and (17),

$$\begin{aligned} \left\{ \begin{aligned} \xi (k+1)&= \bar{A_{i}}f(\xi (k)) + \bar{E_{i}} H f(\xi (k - \tau (k))) + \bar{B}_{wi}\widehat{w}(k) - \bar{B}_{ei}e_{i}(k)\\ e(k)&= \bar{C}_{i}f(\xi (k)) + \bar{F}_{i}Hf(\xi (k - \tau (k))) + \bar{D}_{i}\widehat{w}(k) + D_{fi}e_{i}(k), \\ \end{aligned} \right. \end{aligned}$$
(18)

where

$$\begin{aligned} \begin{aligned} \bar{A}_{i}&= \begin{bmatrix} A_{i} &{} 0 \\ 0 &{} A_{fi} \end{bmatrix}, \bar{E}_{i}=\begin{bmatrix} 0 \\ B_{fi}C_{i} \end{bmatrix}, H= \begin{bmatrix} I_{n} &{} 0 \\ \end{bmatrix}, \bar{B}_{wi}= \begin{bmatrix} B_{i} &{} 0 \\ 0 &{} B_{fi}D_{i} \end{bmatrix}, \bar{B}_{ei}= \begin{bmatrix} 0 \\ B_{fi}\\ \end{bmatrix}\\ \bar{C}_{i}&= \begin{bmatrix} E_{i} &{} -C_{fi} \\ \end{bmatrix}, \bar{D}_{i}= \begin{bmatrix} 0 &{} -D_{fi}D_{i} \\ \end{bmatrix}, \bar{F}_{i}=-D_{fi}C_{i}. \end{aligned} \end{aligned}$$

2.5 Event-Triggered \(H_{\infty }\) Filter Problem

Definition 1

The filtering error system (18) with \(\widehat{w}(k)=0\) is said to be stochastically stable if, for any initial condition, the following inequality holds

$$\begin{aligned} \mathcal {E} \left\{ \sum \limits _{k=0}^{\infty } \Vert \xi (k)\Vert ^{2}|\xi (0),r(0) \right\} < \infty . \end{aligned}$$
(19)

Definition 2

Given a scalar \(\gamma >0\), the filtering error system (18) is said to be stochastically stable with \(H_\infty \) performance \(\gamma \), if it is stochastically stable and, for any nonzero \(\widehat{w}(k)\in \mathcal {L}_2[0,\infty )\) under the zero initial condition, the following inequality holds

$$\begin{aligned} \Vert e(k)\Vert ^2_2<\gamma ^2\Vert \widehat{w}(k) \Vert ^2_2, \end{aligned}$$
(20)

where \(\Vert e(k)\Vert ^2_2=\sum ^{\infty }_{k=0}e^T(k)e(k)\) and \(\Vert \widehat{w}(k)\Vert ^2_2=\sum ^{\infty }_{l=0}\widehat{w}^T(l)\widehat{w}(l)\).

The objective of this paper is to design the \(H_\infty \) filter (17) such that the filtering error system (18) is stochastically stable with \(H_\infty \) performance \(\gamma \).

Before ending this section, we first introduce the following definition and lemma, which will help us in deriving the main results.

Definition 3

[4] A square matrix \(P = [p_{ij}] \in R^{n \times n}\) is called diagonally dominant if, for every i, it satisfies:

$$\begin{aligned} p_{ii} \ge \sum \limits _{j \ne i} |p_{ij}|. \end{aligned}$$
(21)

Lemma 1

[4] If \(P > 0\) is diagonally dominant, then for all nonlinear functions f(x(k)) satisfying (2) and all \(\varpi \), it holds that:

$$\begin{aligned} f^{T}(\varpi )P f(\varpi ) \le \varpi ^{T} P \varpi . \end{aligned}$$
(22)

Lemma 2

[4] A matrix \(P > 0\) is diagonally dominant if and only if there exists a symmetric matrix \(T = [t_{ij}] \in R^{n \times n}\) such that

$$\begin{aligned} \left\{ \begin{array}{ll} t_{ij} \ge 0, \quad p_{ij} + t_{ij} \ge 0,&{}\quad \forall i\ne j \\ p_{ii} \ge \sum \limits _{j \ne i}(p_{ij} + 2t_{ij}),&{}\quad \forall i. \end{array} \right. \end{aligned}$$
(23)
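Definition 3 is straightforward to check numerically, and Lemma 1 can be sanity-checked on random samples with a componentwise nonlinearity such as tanh, which satisfies (2). The following Python sketch uses a hypothetical \(3 \times 3\) matrix of our own choosing:

```python
import math
import random

def is_diagonally_dominant(P):
    """Definition 3: p_ii >= sum_{j != i} |p_ij| for every row i."""
    n = len(P)
    return all(P[i][i] >= sum(abs(P[i][j]) for j in range(n) if j != i)
               for i in range(n))

def quad(P, v):
    """Quadratic form v^T P v."""
    n = len(v)
    return sum(v[i] * P[i][j] * v[j] for i in range(n) for j in range(n))

# Hypothetical symmetric, strictly diagonally dominant matrix (hence P > 0).
P = [[2.0, -0.5, 0.5],
     [-0.5, 3.0, 1.0],
     [0.5, 1.0, 2.0]]

rng = random.Random(1)
# Lemma 1 check: f^T(v) P f(v) <= v^T P v with f = tanh componentwise.
lemma1_holds = all(
    quad(P, [math.tanh(x) for x in v]) <= quad(P, v) + 1e-9
    for v in ([rng.uniform(-3.0, 3.0) for _ in range(3)]
              for _ in range(200))
)
```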

3 Main Results

3.1 \(H_{\infty }\) filter performance analysis

In this subsection, we will discuss the \(H_{\infty }\) filter performance for the filtering error system (18).

Theorem 1

For given scalars \(\gamma > 0\), \(\tau _{M}> \tau _{m} > 0\), \(0 \le \varepsilon _{i} < 1\), the filtering error system (18) is stochastically stable with an \(H_{\infty }\) index \(\gamma \), if there exist diagonally dominant matrices \(P_{i} = [\begin{array}{cc} P_{1i} &{}\quad P_{2i} \\ P^{T}_{2i} &{}\quad P_{3i} \end{array} ]> 0\), \(\varPhi _{i} > 0\), \(Q_{1} > 0\), \(Q_{2} > 0\), \(Q_{3} > 0\), and \(Q_{4} > 0\) with appropriate dimensions such that

$$\begin{aligned} \begin{bmatrix} (1,1) &{} (1,2)\\ *&{} (2,2) \end{bmatrix}<0 \end{aligned}$$
(24)

and

$$\begin{aligned} \sum \limits _{j = 1}^{N} \pi _{ij}P_{j} \le P_{i}, \end{aligned}$$
(25)

with

$$\begin{aligned} (1,1)= & {} \begin{bmatrix} \varXi _{11} &{} H^{T}Q_{3} &{} 0 &{} 0 &{} 0 &{} 0\\ *&{} \varXi _{22} &{} Q_{4} &{} Q_{3}+Q_{4} &{} 0 &{} \varXi _{26}\\ *&{} *&{} -Q_{2}-Q_{4} &{} 0 &{} 0 &{} 0\\ *&{} *&{} *&{} -Q_{1}-Q_{3}-Q_{4} &{} 0 &{} 0\\ *&{} *&{} *&{} *&{} -\varPhi _{i} &{} 0\\ *&{} *&{}*&{} *&{} *&{} \varXi _{66} \end{bmatrix},\\ (1,2)= & {} \begin{bmatrix} \bar{A}^{T}_{i}P_{i} &{} \bar{A}^{T}_{i}H^{T}Q_{1} &{} \bar{A}^{T}_{i}H^{T}Q_{2} &{} \varXi _{110} &{} \varXi _{111} &{} \bar{C}^{T}_{i}\\ \bar{E}^{T}_{i}P_{i} &{} \bar{E}^{T}_{i}H^{T}Q_{1} &{} \bar{E}^{T}_{i}H^{T}Q_{2} &{} \varXi _{210} &{} \varXi _{211} &{} \bar{F}^{T}_{i} \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ -\bar{B}^{T}_{ei}P_{i} &{} -\bar{B}^{T}_{ei}H^{T}Q_{1} &{} -\bar{B}^{T}_{ei}H^{T}Q_{2} &{} \varXi _{510} &{} \varXi _{511} &{} D^{T}_{fi} \\ \bar{B}^{T}_{wi}P_{i} &{} \bar{B}^{T}_{wi}H^{T}Q_{1} &{} \bar{B}^{T}_{wi}H^{T}Q_{2} &{} \varXi _{610} &{} \varXi _{611} &{} \bar{D}^{T}_{i} \\ \end{bmatrix},\\ (2,2)= & {} \begin{bmatrix} -P_{i} &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ *&{} -Q_{1} &{} 0 &{} 0 &{} 0 &{} 0 \\ *&{} *&{} -Q_{2} &{} 0 &{} 0 &{} 0 \\ *&{} *&{} *&{} -Q_{3} &{} 0 &{} 0 \\ *&{} *&{} *&{} *&{} -Q_{4} &{} 0 \\ *&{} *&{} *&{} *&{} *&{} -I \\ \end{bmatrix}, \end{aligned}$$

and

$$\begin{aligned} \varXi _{11}= & {} -P_{i} - H^{T}Q_{3}H,\quad \varXi _{110} = \tau _{M}(\bar{A}_{i} - I)^{T}H^{T}Q_{3},\\ \varXi _{111}= & {} (\tau _{M}- \tau _{m})(\bar{A}_{i} - I)^{T}H^{T}Q_{4},\\ \varXi _{22}= & {} -2Q_{3}-2Q_{4} + \varepsilon _{i}C^{T}_{i}\varPhi _{i}C_{i},\quad \varXi _{26} = \varepsilon _{i}C^{T}_{i}\varPhi _{i}[0\quad D_{i}],\\ \varXi _{210}= & {} \tau _{M}\bar{E}^{T}_{i}H^{T}Q_{3},\quad \varXi _{211} = (\tau _{M}- \tau _{m})\bar{E}^{T}_{i} H^{T}Q_{4},\\ \varXi _{510}= & {} -\tau _{M}\bar{B}^{T}_{ei}H^{T}Q_{3},\quad \varXi _{511} = -(\tau _{M}- \tau _{m})\bar{B}^{T}_{ei} H^{T}Q_{4},\\ \varXi _{66}= & {} -\gamma ^{2}I + \varepsilon _{i}[0\quad D_{i}]^\mathrm{T}\varPhi _{i}[0\quad D_{i}],\\ \varXi _{610}= & {} \tau _{M}\bar{B}^{T}_{wi}H^{T}Q_{3},\quad \varXi _{611} = (\tau _{M}- \tau _{m})\bar{B}^{T}_{wi} H^{T}Q_{4}. \end{aligned}$$

Proof

For the filtering error system (18), construct the following Lyapunov functional:

$$\begin{aligned} V(x(k),r(k)) =\sum \limits _{l=1}^5 V_{l}(x(k),r(k)), \end{aligned}$$
(26)

where \(r(k)= i, i\in S\), with

$$\begin{aligned} \begin{aligned} V_{1}(x(k),r(k))&= \xi ^{T}(k) P_{i} \xi (k),\quad P_{i}> 0,\\ V_{2}(x(k),r(k))&= \sum \limits _{s = k- \tau _{M}}^{k} \xi ^{T}(s)H^{T}Q_{1}H\xi (s),\quad Q_{1}> 0,\\ V_{3}(x(k),r(k))&= \sum \limits _{s = k- \tau _{m}}^{k} \xi ^{T}(s)H^{T}Q_{2}H\xi (s),\quad Q_{2}> 0, \\ V_{4}(x(k),r(k))&= \tau _{M} \sum \limits _{s = -\tau _{M}+1}^{0}\sum \limits _{l = k +s-1}^{k-1} \delta ^{T}(l)H^{T}Q_{3}H\delta (l),\quad Q_{3}> 0, \\ V_{5}(x(k),r(k))&= (\tau _{M} - \tau _{m}) \sum \limits _{s = -\tau _{M}+1}^{-\tau _{m}}\sum \limits _{l = k +s-1}^{k-1} \delta ^{T}(l)H^{T}Q_{4}H\delta (l), \quad Q_{4} > 0, \\ \delta (l)&= \xi (l+1) - \xi (l).\\ \end{aligned} \end{aligned}$$

\(\square \)

When the disturbance \(\widehat{w}(k) = 0\), we consider the stochastic stability of the filtering error system (18). Let \(\mathcal {E}\{\cdot \}\) denote the mathematical expectation of the stochastic process. For \(r(k) = i\), \(r(k+1) = j\), we have

$$\begin{aligned} \mathcal {E}\{\varDelta V(k)\} = \mathcal {E}\{V(x(k+1),r(k+1)\mid x(k),r(k))\} - V(x(k),r(k)), \end{aligned}$$
(27)

with

$$\begin{aligned} \begin{aligned} \mathcal {E}\{\varDelta V_{1}(k)\}&= \xi ^{T}(k+1)\sum \limits _{j=1}^{N}\pi _{ij}P_{j}\xi (k+1) - \xi ^{T}(k)P_{i}\xi (k),\\ \mathcal {E}\{\varDelta V_{2}(k)\}&= \xi ^{T}(k+1)H^{T}Q_{1}H\xi (k+1) -\xi ^{T}(k-\tau _{M})H^{T}Q_{1}H\xi (k-\tau _{M}),\\ \mathcal {E}\{\varDelta V_{3}(k)\}&= \xi ^{T}(k+1)H^{T}Q_{2}H\xi (k+1) -\xi ^{T}(k-\tau _{m})H^{T}Q_{2}H\xi (k-\tau _{m}),\\ \mathcal {E}\{\varDelta V_{4}(k)\}&= \tau ^{2}_{M}\delta ^{T}(k)H^{T}Q_{3}H\delta (k)-\tau _{M}\sum \limits _{l=k-\tau _{M}}^{k-1}\delta ^{T}(l)H^{T}Q_{3}H\delta (l),\\ \mathcal {E}\{\varDelta V_{5}(k)\}&= (\tau _{M}-\tau _{m})^{2}\delta ^{T}(k)H^{T}Q_{4}H\delta (k) - (\tau _{M}-\tau _{m})\sum \limits _{l=k-\tau _{M}}^{k-1-\tau _{m}}\delta ^{T}(l)H^{T}Q_{4}H\delta (l).\\ \end{aligned} \end{aligned}$$

Note that

$$\begin{aligned} -\tau _{M}\sum \limits _{l=k-\tau _{M}}^{k-1}\delta ^{T}(l)H^{T}Q_{3}H\delta (l)&= -\tau _{M}\sum \limits _{l=k-\tau _{M}}^{k-\tau (k)-1}\delta ^{T}(l)H^{T}Q_{3}H\delta (l) \\&\qquad -\tau _{M}\sum \limits _{l=k-\tau (k)}^{k-1}\delta ^{T}(l)H^{T}Q_{3}H\delta (l),\\ -(\tau _{M}-\tau _{m})\sum \limits _{l=k-\tau _{M}}^{k-1-\tau _{m}}\delta ^{T}(l)H^{T}Q_{4}H\delta (l)&= -(\tau _{M}-\tau _{m})\sum \limits _{l=k-\tau _{M}}^{k-\tau (k)-1}\delta ^{T}(l)H^{T}Q_{4}H\delta (l) \\&\qquad -(\tau _{M}-\tau _{m})\sum \limits _{l=k-\tau (k)}^{k-1-\tau _{m}} \delta ^{T}(l)H^{T}Q_{4}H\delta (l).\\ \end{aligned}$$

According to the Jensen inequality, we have

$$\begin{aligned}&-\tau _{M}\sum \limits _{l=k-\tau _{M}}^{k-\tau (k)-1}\delta ^{T}(l)H^{T}Q_{3}H\delta (l) -\tau _{M}\sum \limits _{l=k-\tau (k)}^{k-1}\delta ^{T}(l)H^{T}Q_{3}H\delta (l)\nonumber \\&\quad \le -[\sum \limits _{l=k-\tau _{M}}^{k-\tau (k)-1}\delta (l)]^\mathrm{T}H^{T}Q_{3}H[\sum \limits _{l=k-\tau _{M}}^{k-\tau (k)-1}\delta (l)] -[\sum \limits _{l=k-\tau (k)}^{k-1}\delta (l)]^\mathrm{T}H^{T}Q_{3}H[\sum \limits _{l=k-\tau (k)}^{k-1}\delta (l)], \nonumber \\&-(\tau _{M}-\tau _{m})\sum \limits _{l=k-\tau _{M}}^{k-\tau (k)-1}\delta ^{T}(l)H^{T}Q_{4}H\delta (l)-(\tau _{M}-\tau _{m})\sum \limits _{l=k-\tau (k)}^{k-1-\tau _{m}}\delta ^{T}(l)H^{T}Q_{4}H\delta (l) \nonumber \\&\quad \le -[\sum \limits _{l=k-\tau _{M}}^{k-\tau (k)-1}\delta (l)]^\mathrm{T}H^{T}Q_{4}H[\sum \limits _{l=k-\tau _{M}}^{k-\tau (k)-1}\delta (l)] -[\sum \limits _{l=k-\tau (k)}^{k-1-\tau _{m}}\delta (l)]^\mathrm{T}H^{T}Q_{4}H[\sum \limits _{l=k-\tau (k)}^{k-1-\tau _{m}}\delta (l)]. \end{aligned}$$
(28)

Combining the event-triggered scheme (16), we have

$$\begin{aligned} \begin{aligned}&\mathcal {E}\{\varDelta V(k)\}\le \mathcal {E}\{\varDelta V_{1}(k)\} + \mathcal {E}\{\varDelta V_{2}(k)\} + \mathcal {E}\{\varDelta V_{3}(k)\} +\mathcal {E}\{\varDelta V_{4}(k)\}\\&\qquad +\mathcal {E}\{\varDelta V_{5}(k)\}+ \varepsilon _{i}y^{T}(k-\tau (k))\varPhi _{i}y(k-\tau (k)) - e^{T}_{i}(k)\varPhi _{i}e_{i}(k). \end{aligned} \end{aligned}$$
(29)

From Lemma 1, we know that

$$\begin{aligned} f^{T}(\xi (k))P_{i}f(\xi (k)) \le \xi ^{T}(k)P_{i}\xi (k). \end{aligned}$$
(30)

Clearly, from (27) to (30), we have

$$\begin{aligned} \begin{aligned}&\mathcal {E}\{\varDelta V(k)\}\le \varphi ^{T}(k)\varPi \varphi (k), \end{aligned} \end{aligned}$$
(31)

with

$$\begin{aligned} \varphi ^{T}(k)&= \begin{bmatrix} f^{T}(\xi (k))&f^{T}(\xi (k-\tau (k)))H^{T}&f^{T}(\xi (k-\tau _{m}))H^{T}&f^{T}(\xi (k-\tau _{M}))H^{T}&e^{T}_{i}(k) \end{bmatrix},\\ \varPi&= \varTheta + \varGamma ^{T}_{1}P_{i}\varGamma _{1} + \varGamma ^{T}_{1}H^{T}Q_{1}H\varGamma _{1} + \varGamma ^{T}_{1}H^{T}Q_{2}H\varGamma _{1}\\&\qquad + \tau ^{2}_{M}(\varGamma _{1}-\tilde{I})^{T}H^{T}Q_{3}H(\varGamma _{1}-\tilde{I})+ (\tau _{M}-\tau _{m})^{2}(\varGamma _{1}-\tilde{I})^{T}H^{T}Q_{4}H(\varGamma _{1}-\tilde{I}),\\ \varGamma _{1}&= \begin{bmatrix} \bar{A}_{i}&\bar{E}_{i}&0&0&-\bar{B}_{ei}\end{bmatrix},\quad \tilde{I}= \begin{bmatrix} I&0&0&0&0\end{bmatrix},\\ \varTheta&= \begin{bmatrix} \varXi _{11} &{} H^{T}Q_{3} &{} 0 &{} 0 &{} 0 \\ *&{} \varXi _{22} &{} Q_{4} &{} Q_{3}+Q_{4} &{} 0 \\ *&{} *&{} -Q_{2}-Q_{4} &{} 0 &{} 0 \\ *&{} *&{} *&{} -Q_{1}-Q_{3}-Q_{4} &{} 0 \\ *&{} *&{} *&{} *&{} -\varPhi _{i} \\ \end{bmatrix},\\ \varXi _{11}&= -P_{i} - H^{T}Q_{3}H,~ \varXi _{22} = -2Q_{3}-2Q_{4} + \varepsilon _{i}C^{T}_{i}\varPhi _{i}C_{i}. \end{aligned}$$

By using the Schur complement, (24) and (25) ensure that \(\varPi < 0\), which implies \(\mathcal {E}\{\varDelta V(k)\} < 0\). Similar to [32], we have

$$\begin{aligned} \begin{aligned} \mathcal {E}\{\varDelta V(k)\}&= \mathcal {E}\{V(x(k+1),r(k+1)\mid x(k),r(k))\} - V(x(k),r(k))\\&\le -\beta x^{T}(k)x(k), \end{aligned} \end{aligned}$$
(32)

where \(\beta = \hbox {inf} \{\lambda _{\min }(-\varPi )\}\) and \(\lambda _{\min }(-\varPi )\) denotes the minimal eigenvalue of \(-\varPi \).

From (32), for any \(T > 1\), we have

$$\begin{aligned} \mathcal {E}\{V(x(T+1),r(T+1))\} - \mathcal {E}\{V(x(0),r(0))\} \le -\beta \sum \limits _{k=0}^{T} \mathcal {E} \{x^{T}(k)x(k)\}. \end{aligned}$$

It then follows that

$$\begin{aligned} \sum \limits _{k=0}^{T} \mathcal {E} \{x^{T}(k)x(k)\}&\le \frac{1}{\beta }(\mathcal {E}\{V(x(0),r(0)) -\mathcal {E}\{V(x(T+1),r(T+1)) \})\\&\le \frac{1}{\beta } \mathcal {E}\{V(x(0),r(0))\}, \end{aligned}$$

which implies that

$$\begin{aligned} \sum \limits _{k=0}^{T} \mathcal {E} \{x^{T}(k)x(k)\} \le \frac{1}{\beta } \mathcal {E}\{V(x(0),r(0))\} <\infty . \end{aligned}$$

So according to Definition 1, the filtering error system (18) is stochastically stable.

Next, we show the \(H_{\infty }\) performance of the filtering error system (18). When \(\widehat{w}(k) \ne 0\), under the zero initial condition, we have

$$\begin{aligned} \mathcal {E}\{\varDelta V(k)\} \le \eta ^{T}(k)\varPsi \eta (k) - e^{T}(k)e(k) + \gamma ^{2}\widehat{w}^{T}(k)\widehat{w}(k), \end{aligned}$$
(33)

where

$$\begin{aligned} \eta ^{T}(k)&= \begin{bmatrix} \varphi ^{T}(k)&\widehat{w}^{T}(k)\end{bmatrix},\\ \varPsi&= \varTheta _{1} + \varGamma ^{T}_{2}P_{i}\varGamma _{2} + \varGamma ^{T}_{2}H^{T}Q_{1}H\varGamma _{2}+ \varGamma ^{T}_{2}H^{T}Q_{2}H\varGamma _{2}\\&\qquad + \tau ^{2}_{M}(\varGamma _{2}-\tilde{I}_{1})^{T}H^{T}Q_{3}H(\varGamma _{2}-\tilde{I}_{1})+ \varGamma ^{T}_{3}\varGamma _{3} \\&\qquad +(\tau _{M}-\tau _{m})^{2}(\varGamma _{2}-\tilde{I}_{1})^{T}H^{T}Q_{4}H(\varGamma _{2}-\tilde{I}_{1}), \\ \varGamma _{2}&= \begin{bmatrix} \bar{A}_{i}&\bar{E}_{i}&0&0&-\bar{B}_{ei}&\bar{B}_{wi}\end{bmatrix},~ \tilde{I}_{1}=\begin{bmatrix} I&0&0&0&0&0\end{bmatrix},\\ \varGamma _{3}&= \begin{bmatrix} \bar{C}_{i}&\bar{F}_{i}&0&0&D_{fi}&\bar{D}_{i} \end{bmatrix},\\ \varTheta _{1}&= \begin{bmatrix} \varXi _{11} &{} H^{T}Q_{3} &{} 0 &{} 0 &{} 0 &{} 0 \\ *&{} \varXi _{22} &{} Q_{4} &{} Q_{3}+Q_{4} &{} 0 &{} \varXi _{26} \\ *&{} *&{} -Q_{2}-Q_{4} &{} 0 &{} 0 &{} 0 \\ *&{} *&{} *&{} -Q_{1}-Q_{3}-Q_{4} &{} 0 &{} 0 \\ *&{} *&{} *&{} *&{} -\varPhi _{i} &{} 0 \\ *&{} *&{} *&{} *&{} *&{} \varXi _{66} \end{bmatrix},\\ \varXi _{22}&= -2Q_{3}-2Q_{4} + \varepsilon _{i}C^{T}_{i}\varPhi _{i}C_{i},\quad \varXi _{66} = -\gamma ^{2}I + \varepsilon _{i}\begin{bmatrix}0&D_{i}\end{bmatrix}^{T}\varPhi _{i}\begin{bmatrix}0&D_{i}\end{bmatrix}. \end{aligned}$$

By using the Schur complement, (24) guarantees \(\varPsi < 0\), so we have

$$\begin{aligned} \mathcal {E}\{\varDelta V(k)\} + e^{T}(k)e(k) - \gamma ^{2}\widehat{w}^{T}(k)\widehat{w}(k) \le \eta ^{T}(k)\varPsi \eta (k) <0. \end{aligned}$$
(34)

Summing (34) from \(k = 0\) to \(\infty \) under the zero initial condition yields

$$\begin{aligned} \mathcal {E}\left\{ \sum \limits _{k=0}^{\infty }\parallel e(k) \parallel ^{2} \right\} \le \gamma ^{2}\sum \limits _{k=0}^{\infty }\parallel \widehat{w}(k) \parallel ^{2}. \end{aligned}$$

Therefore, by Definition 2, the filtering error system (18) is stochastically stable with \(H_{\infty }\) performance index \(\gamma \). This completes the proof.

3.2 \(H_{\infty }\) Filter Design

In this subsection, we will discuss the \(H_{\infty }\) filter algorithm for the filtering error system (18).

Theorem 2

For given scalars \(0 \le \varepsilon _{i} <1\), \(\gamma >0\), and \(\tau _{M}>\tau _{m}>0\), the filtering error system (18) is stochastically stable with a guaranteed \(H_{\infty }\) performance \(\gamma \), if there exist matrices \(P_{i} = [\begin{array}{cc} P_{1i} &{}\quad P_{2i} \\ P^{T}_{2i} &{}\quad P_{3i} \end{array} ]= [p^{ab}_{i}]> 0\), the block \(P_{1i}> 0\), \(W_{i}>0\), \(\varPhi _{i}>0\), \(Q_{1}>0\), \(Q_{2}>0\), \(Q_{3}> 0\), \(Q_{4}>0\) and \(T = T^{T}=[t^{ab}_{i}]\), \(\bar{A}_{fi}\), \(\bar{B}_{fi}\), \(\bar{C}_{fi}\), \(\bar{D}_{fi}\) with appropriate dimensions such that

$$\begin{aligned}&P_{1i} - W_{i} > 0, \end{aligned}$$
(35)
$$\begin{aligned}&\begin{bmatrix} \tilde{(1,1)} &{} \tilde{(1,2)}\\ *&{} \tilde{(2,2)} \end{bmatrix}<0,\end{aligned}$$
(36)
$$\begin{aligned}&\sum \limits _{j=1}^{N}\pi _{ij}(P_{1j}-W_{j})\le P_{1i}-W_{i},\end{aligned}$$
(37)
$$\begin{aligned}&p^{aa}_{i} - \sum \limits _{b\ne a}(p^{ab}_{i} + 2t^{ab}_{i}) \ge 0, \quad \forall a,\end{aligned}$$
(38)
$$\begin{aligned}&t^{ab}_{i} \ge 0, \quad \forall a \ne b,\end{aligned}$$
(39)
$$\begin{aligned}&p^{ab}_{i} + t^{ab}_{i} \ge 0, \quad \forall a \ne b, \end{aligned}$$
(40)

with

$$\begin{aligned} \tilde{(1,1)}&=\begin{bmatrix} \bar{\varXi }_{11} & W_{i} & Q_{3} & 0 & 0 & 0 & 0 & 0\\ *& W_{i} & 0 & 0 & 0 & 0 & 0 & 0\\ *& *& \bar{\varXi }_{33} & Q_{4} & \bar{\varXi }_{35} & 0 & 0 & \bar{\varXi }_{38}\\ *& *& *& \bar{\varXi }_{44} & 0 & 0 & 0 & 0\\ *& *& *& *& \bar{\varXi }_{55} & 0 & 0 & 0\\ *& *& *& *& *& -\varPhi _{i} & 0 & 0\\ *& *& *& *& *& *& -\gamma ^{2}I & 0\\ *& *& *& *& *& *& *& \bar{\varXi }_{88} \end{bmatrix},\\ \tilde{(1,2)}&=\begin{bmatrix} A^{T}_{i}P_{1i} & A^{T}_{i}W_{i} & A^{T}_{i}Q_{1} & A^{T}_{i}Q_{2} & \bar{\varXi }_{113} & \bar{\varXi }_{114} & E^{T}_{i} \\ \bar{A}^{T}_{fi} & \bar{A}^{T}_{fi} & 0 & 0 & 0 & 0 & -\bar{C}^{T}_{fi} \\ \bar{C}^{T}_{i}\bar{B}^{T}_{fi} & \bar{C}^{T}_{i}\bar{B}^{T}_{fi} & 0 & 0 & 0 & 0 & -C^{T}_{i}\bar{D}^{T}_{fi} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \bar{B}^{T}_{fi} & \bar{B}^{T}_{fi} & 0 & 0 & 0 & 0 & \bar{D}^{T}_{fi} \\ B^{T}_{i}P_{1i} & B^{T}_{i}W_{i} & B^{T}_{i}Q_{1} & B^{T}_{i}Q_{2} & \bar{\varXi }_{713} & \bar{\varXi }_{714} & 0 \\ D^{T}_{fi}\bar{B}^{T}_{fi} & D^{T}_{fi}\bar{B}^{T}_{fi} & 0 & 0 & 0 & 0 & -D^{T}_{i}\bar{D}^{T}_{fi} \end{bmatrix},\\ \tilde{(2,2)}&=\begin{bmatrix} -P_{1i} & W_{i} & 0 & 0 & 0 & 0 & 0 \\ *& -W_{i} & 0 & 0 & 0 & 0 & 0 \\ *& *& -Q_{1} & 0 & 0 & 0 & 0 \\ *& *& *& -Q_{2} & 0 & 0 & 0 \\ *& *& *& *& -Q_{3} & 0 & 0 \\ *& *& *& *& *& -Q_{4} & 0 \\ *& *& *& *& *& *&-I \end{bmatrix},\\ \bar{\varXi }_{11}&= -P_{1i} - Q_{3},\quad \bar{\varXi }_{113} = \tau _{M}(A^{T}_{i}-I)Q_{3},\\ \bar{\varXi }_{114}&= (\tau _{M}-\tau _{m})(A^{T}_{i}-I)Q_{4},\quad \bar{\varXi }_{33} = -2Q_{3}-2Q_{4}+\varepsilon _{i}C^{T}_{i}\varPhi _{i}C_{i},\\ \bar{\varXi }_{35}&= Q_{3}+Q_{4},\quad \bar{\varXi }_{38} = \varepsilon _{i}C^{T}_{i}\varPhi _{i}D_{i},\\ \bar{\varXi }_{44}&= -Q_{2}-Q_{4},\quad \bar{\varXi }_{55} = -Q_{1}-Q_{3}-Q_{4},\\ \bar{\varXi }_{713}&= \tau _{M}B^{T}_{i}Q_{3},\quad \bar{\varXi }_{714} = (\tau _{M}-\tau _{m})B^{T}_{i}Q_{4},\\ \bar{\varXi }_{88}&= -\gamma ^{2}I + \varepsilon _{i}D^{T}_{i}\varPhi _{i}D_{i}. \end{aligned}$$

If the above conditions are feasible, the following filter parameters can be obtained:

$$\begin{aligned} A_{fi} = W^{-1}_{i}\bar{A}_{fi},\quad B_{fi}=W^{-1}_{i}\bar{B}_{fi},\quad C_{fi}=\bar{C}_{fi},\quad D_{fi}=\bar{D}_{fi}. \end{aligned}$$
(41)
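As a numerical sketch of the recovery step (41), the auxiliary variables returned by an LMI solver can be converted into the filter matrices without forming \(W^{-1}_{i}\) explicitly. The values below are placeholders, not the solver output of the example in Sect. 4:

```python
import numpy as np

# Hypothetical solver output for one mode i (placeholder values).
W_i = np.array([[2.0, 0.3, 0.1],
                [0.3, 1.5, 0.2],
                [0.1, 0.2, 1.8]])
A_bar = np.array([[-1.0, 0.2, 0.0],
                  [0.1, -0.8, 0.3],
                  [0.0, 0.2, -1.2]])
B_bar = np.array([[0.5], [0.3], [-0.2]])
C_bar = np.array([[1.0, -0.4, 0.2]])
D_bar = np.array([[0.1]])

# Recover the filter matrices as in (41); np.linalg.solve applies
# W_i^{-1} without computing the explicit inverse.
A_f = np.linalg.solve(W_i, A_bar)
B_f = np.linalg.solve(W_i, B_bar)
C_f = C_bar   # C_f equals C_bar directly
D_f = D_bar   # D_f equals D_bar directly
```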

Proof

According to Theorem 1, if (24) and (25) are feasible, the filtering error system (18) is stochastically stable with an \(H_{\infty }\) performance index \(\gamma \). Now we define \(J_{1i} = \hbox {diag}\{I, P_{2i}P^{-1}_{3i}\}\), \(J_{2i} = \hbox {diag}\{ J_{1i},I,I,I,I,I,J_{1i},I,I,I,I,I\}\). Pre- and post-multiply (22) by \(J_{2i}\) and \(J^{T}_{2i}\), respectively, and define new variables

$$\begin{aligned} \begin{aligned} W_{i}&= P_{2i}P^{-1}_{3i}P^{T}_{2i},\quad \bar{A}_{fi}= P_{2i}A_{fi}P^{-1}_{3i}P^{T}_{2i},\quad \bar{B}_{fi}= P_{2i}B_{fi},\\ \bar{C}_{fi}&=C_{fi}P^{-1}_{3i}P^{T}_{2i},\quad \bar{D}_{fi} = D_{fi}. \end{aligned} \end{aligned}$$


Then, (24) is equivalent to (36). In addition, according to the Schur complement, (37) holds, and the matrix \(P_{i} = \begin{bmatrix} P_{1i} & P_{2i} \\ P^{T}_{2i} & P_{3i} \end{bmatrix} > 0\) is equivalent to \(P_{1i} - P_{2i}P^{-1}_{3i}P^{T}_{2i} = P_{1i} - W_{i} >0\).

Note that \(P_{2i}\) and \(P_{3i}\) cannot be directly derived from condition (36); however, they are not needed, since the transfer function from \(y_{f}(k)\) to \(z_{f}(k)\) is given by

$$\begin{aligned} \begin{aligned} T_{zfy_{f}}&= C_{fi}(zI-A_{fi})^{-1}B_{fi}+D_{fi}\\ &= \bar{C}_{fi}P^{-T}_{2i}P_{3i}(zI-P^{-1}_{2i}\bar{A}_{fi}P^{-T}_{2i}P_{3i})^{-1}P^{-1}_{2i}\bar{B}_{fi}+\bar{D}_{fi}\\ &= \bar{C}_{fi}(zW_{i}-\bar{A}_{fi})^{-1}\bar{B}_{fi}+\bar{D}_{fi}\\ &= \bar{C}_{fi}(zI-W^{-1}_{i}\bar{A}_{fi})^{-1}W^{-1}_{i}\bar{B}_{fi}+\bar{D}_{fi}. \end{aligned} \end{aligned}$$
(42)
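The equality of the two realizations in (42) can be checked numerically at an arbitrary point \(z\). The sketch below uses randomly generated auxiliary variables (assumptions for illustration, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Hypothetical auxiliary variables (placeholders).
W = np.eye(n) + 0.1 * rng.standard_normal((n, n))
W = W @ W.T + n * np.eye(n)      # make W symmetric positive definite
A_bar = rng.standard_normal((n, n))
B_bar = rng.standard_normal((n, 1))
C_bar = rng.standard_normal((1, n))
D_bar = rng.standard_normal((1, 1))

# Realization recovered via (41).
A_f = np.linalg.solve(W, A_bar)  # W^{-1} A_bar
B_f = np.linalg.solve(W, B_bar)  # W^{-1} B_bar

z = 1.7 + 0.4j                   # arbitrary evaluation point
T1 = C_bar @ np.linalg.inv(z * W - A_bar) @ B_bar + D_bar
T2 = C_bar @ np.linalg.inv(z * np.eye(n) - A_f) @ B_f + D_bar
# Both expressions give the same transfer function value.
```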

Furthermore, from (38) to (40), for each \(i \in S\), one has

$$\begin{aligned} p^{aa}_{i} \ge \sum \limits _{b\ne a}(p^{ab}_{i}+2t^{ab}_{i})= \sum \limits _{b\ne a}(\mid p^{ab}_{i}+t^{ab}_{i}\mid + \mid -t^{ab}_{i}\mid )\ge \sum \limits _{b\ne a}\mid p^{ab}_{i}\mid . \end{aligned}$$
(43)

Therefore, according to Definition 3 and (43), we know that the positive-definite matrix \(P_{i}\) is diagonally dominant. This completes the proof.
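The diagonal dominance guaranteed by (43), \(p^{aa}_{i} \ge \sum _{b \ne a}\mid p^{ab}_{i}\mid \), is easy to verify numerically. A minimal sketch, using a hypothetical matrix rather than a solver output:

```python
import numpy as np

def is_diagonally_dominant(P, tol=1e-9):
    """Check p_aa >= sum_{b != a} |p_ab| for every row a."""
    diag = np.diag(P)
    off = np.sum(np.abs(P), axis=1) - np.abs(diag)
    return bool(np.all(diag >= off - tol))

# Hypothetical positive-definite matrix satisfying the dominance condition.
P = np.array([[ 2.0, -0.5,  0.4],
              [-0.5,  1.5,  0.6],
              [ 0.4,  0.6,  1.2]])
```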

4 Numerical Examples

In this section, we provide an example to illustrate the effectiveness of our proposed method.

Consider the system described by (1) with two modes, \(S=\{1,2\}\). The mode switching is governed by a Markov chain with the transition probability matrix

$$\begin{aligned} \varPi =\begin{bmatrix} 0.35 & 0.65 \\ 0.8 & 0.2 \\ \end{bmatrix}. \end{aligned}$$
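A mode trajectory such as the ones plotted later can be generated by sampling this transition probability matrix row by row; a minimal sketch:

```python
import numpy as np

# Transition probability matrix from the example (rows sum to one).
Pi = np.array([[0.35, 0.65],
               [0.80, 0.20]])

rng = np.random.default_rng(42)

def simulate_modes(Pi, r0=0, steps=150):
    """Sample a mode trajectory r(0), ..., r(steps-1) of the Markov chain."""
    modes = [r0]
    for _ in range(steps - 1):
        # Next mode is drawn from the row of Pi indexed by the current mode.
        modes.append(rng.choice(len(Pi), p=Pi[modes[-1]]))
    return np.array(modes)

modes = simulate_modes(Pi)  # trajectory over a 150-step horizon
```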

Mode 1:

$$\begin{aligned} \begin{aligned} A_{1}&=\begin{bmatrix} -1.2 & 0 & 0.5\\ -1.2 & -0.2 & 0.5\\ -0.5 & 0 & -0.6\\ \end{bmatrix}, B_{1}=\begin{bmatrix} 0.2 \\ 0.3 \\ 0.5 \\ \end{bmatrix}, C_{1}=\begin{bmatrix} 1.2 & 0.5 & 1.4 \\ \end{bmatrix},\\ D_{1}&=0.4, E_{1}=\begin{bmatrix} 0.4 & 1 & -0.3\\ \end{bmatrix}. \end{aligned} \end{aligned}$$

Mode 2:

$$\begin{aligned} \begin{aligned} A_{2}&=\begin{bmatrix} -0.9 & 0.4 & 0.8\\ -0.9 & -0.2 & 0.9\\ 0.5 & 0.1 & -1\\ \end{bmatrix}, B_{2}=\begin{bmatrix} 0.6 \\ 0.2 \\ 0.4 \\ \end{bmatrix}, C_{2}=\begin{bmatrix} 0.9 & -0.6 & -0.2 \\ \end{bmatrix},\\ D_{2}&=0.3, E_{2}=\begin{bmatrix} -0.5 & -0.9 & 0.2\\ \end{bmatrix}. \end{aligned} \end{aligned}$$

For this system, two cases are considered to show the effectiveness of the proposed method, that is, Case 1: \(\varepsilon _{1}=\varepsilon _{2}\) and Case 2: \(\varepsilon _{1}\ne \varepsilon _{2}\).

In Case 1, according to Theorem 2, with \(\tau _{m} = 1\) and \(\tau _{M}=7\), Table 1 shows the minimum \(H_{\infty }\) performance index \(\gamma \) for different triggered thresholds \(\varepsilon _{i}\). Moreover, for \(\gamma = 7\) and \(\tau _{m}=1\), Table 2 shows the maximum allowable delay \(\tau _{M}\). A larger \(\varepsilon _{i}\) yields a larger \(\gamma _{\min }\) and a smaller maximum allowable delay \(\tau _{M}\); hence, the triggered threshold \(\varepsilon _{i}\) affects both the network-induced delay and the \(H_{\infty }\) performance.

Table 1 \(\gamma _{\min }\) for different \(\varepsilon _{i}\) under \(\tau _{m}=1\) and \(\tau _{M}=7\) (Case 1)
Table 2 \(\tau _{M}\) for different \(\varepsilon _{i}\) (Case 1)
Fig. 2 Possible mode for Case 1

Fig. 3 Release instants and release interval for Case 1

Let \(\gamma = 6.5\), \(\varepsilon _{1}=\varepsilon _{2}=0.15\), \(\tau _{m} = 1\), and \(\tau _{M} = 5\). According to Theorem 2, the triggered matrices are \(\varPhi _{1}= 1.5167\) and \(\varPhi _{2}= 4.9145\), and the filter parameters are

$$\begin{aligned} \begin{aligned} A_{f1}&=\begin{bmatrix} -3.7163 & -0.9915 & 1.3592\\ 0.7667 & -1.5582& 1.7441\\ 0.4688 & 0.1203 &-0.7619\\ \end{bmatrix}, B_{f1}=\begin{bmatrix} 1.0510 \\ 1.9644 \\ -0.9003 \\ \end{bmatrix},\\ C_{f1}&=\begin{bmatrix} -0.5117&-2.1349&0.8781 \end{bmatrix}, D_{f1}=-0.0011,\\ A_{f2}&=\begin{bmatrix} -1.5614 & -0.2143 & 0.3419\\ 0.2219 & -2.5903& 0.7419\\ 0.5584 & 1.5587 &-4.3419\\ \end{bmatrix}, B_{f2}=\begin{bmatrix} 0.7957 \\ 1.0367 \\ -0.3314 \\ \end{bmatrix},\\ C_{f2}&=\begin{bmatrix} 2.2251&0.0576&-1.0018 \end{bmatrix}, D_{f2}=-0.0026. \end{aligned} \end{aligned}$$
Fig. 4 \(z(k)\) and its estimate \(z_{f}(k)\) for Case 1

Fig. 5 Estimation error \(e(k)\) for Case 1

Table 3 \(\tau _{M}\) for different \(\varepsilon _{i}\) (Case 2)
Fig. 6 Possible mode for Case 2

Fig. 7 Release instants and release interval for Case 2

Fig. 8 \(z(k)\) and its estimate \(z_{f}(k)\) for Case 2

We assume that the repeated scalar nonlinearity is \(f(x(k)) = \sin (x(k))\), which satisfies Assumption 1. The initial condition is \(x(0) = x_{f}(0) = \begin{bmatrix} 0&0&0 \end{bmatrix}^{T}\), and the disturbance input is \(w(k)= 0.5/(1 + k^{2})\). Figure 2 shows a possible system mode evolution, and Fig. 3 shows the event-triggered release instants and intervals. From Fig. 3, only 44 transmissions are triggered, in contrast to the 150 transmissions required by the time-triggered scheme, so the event-triggered scheme reduces the use of network bandwidth. \(z(k)\) and its estimate \(z_{f}(k)\) are depicted in Fig. 4, and Fig. 5 shows the response of the estimation error \(e(k)\).
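The transmission count reported above depends on the event condition. The sketch below counts triggers for a condition of the commonly used relative-error form \((y(k)-y(k_{s}))^{T}\varPhi (y(k)-y(k_{s})) \ge \varepsilon _{i}\, y^{T}(k)\varPhi y(k)\); this form, the scalar measurement sequence, and its parameters are illustrative assumptions, not necessarily the exact condition and data used in this paper:

```python
import numpy as np

def count_triggers(y, Phi, eps):
    """Count transmissions under the assumed relative-error trigger
    (y(k)-y(k_s))' Phi (y(k)-y(k_s)) >= eps * y(k)' Phi y(k)."""
    y_last = y[0]
    triggers = 1                     # the first sample is always sent
    for k in range(1, len(y)):
        e = y[k] - y_last
        if e * Phi * e >= eps * y[k] * Phi * y[k]:
            y_last = y[k]            # transmit and update the held value
            triggers += 1
    return triggers

# Illustrative scalar measurement sequence over a 150-step horizon.
k = np.arange(150)
y = np.sin(0.1 * k) * np.exp(-0.01 * k)
n_tr = count_triggers(y, Phi=1.5167, eps=0.15)
```

Fewer transmissions than the 150 time-triggered ones are expected, since slowly varying measurements rarely violate the threshold.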

In Case 2, we consider \(\varepsilon _{1}\ne \varepsilon _{2}\), where \(\varepsilon _{1}\) varies and \(\varepsilon _{2}\) is fixed. For given \(\gamma = 7\) and \(\tau _{m}=1\), according to Theorem 2, Table 3 shows the maximum allowable delay \(\tau _{M}\). Similar to Case 1, Table 3 shows that a larger \(\varepsilon _{1}\) yields a smaller maximum allowable delay \(\tau _{M}\).

Let \(\gamma = 6.5\), \(\varepsilon _{1} = 0.15\), \(\varepsilon _{2}=0.1\), \(\tau _{m} = 1\), and \(\tau _{M} = 5\). According to Theorem 2, the triggered matrices are \(\varPhi _{1} = 1.0053\) and \(\varPhi _{2}= 1.9517\), and the filter parameters are

$$\begin{aligned} \begin{aligned} A_{f1}&=\begin{bmatrix} -7.1169 & -2.3815 & 0.6318\\ -0.0365 & -4.6229 & 0.1938\\ 0.0326 & 0.0988 &-2.7159 \end{bmatrix}, B_{f1}=\begin{bmatrix} 0.7724 \\ 1.0031 \\ -2.6471 \end{bmatrix},\\ C_{f1}&=\begin{bmatrix} -0.0647&-4.6718&2.1571 \end{bmatrix}, D_{f1}=-0.3175,\\ A_{f2}&=\begin{bmatrix} -5.5449 & -2.1638 & 0.0812\\ -2.3391 & -6.0017 & 1.5384\\ 1.4473 & 0.0912 &-7.0946\\ \end{bmatrix}, B_{f2}=\begin{bmatrix} -1.1367 \\ 0.0488 \\ -2.5715 \\ \end{bmatrix},\\ C_{f2}&=\begin{bmatrix} 1.3641&0.0076&-3.6116 \end{bmatrix}, D_{f2}=-0.0791. \end{aligned} \end{aligned}$$

A possible system mode evolution is given in Fig. 6, and Fig. 7 shows the event-triggered release instants and intervals. From Fig. 7, only 41 transmissions are triggered, in contrast to the 150 transmissions required by the time-triggered scheme, so the proposed method reduces the use of network bandwidth. \(z(k)\) and its estimate \(z_{f}(k)\) are depicted in Fig. 8, and Fig. 9 shows the response of the estimation error \(e(k)\).

Fig. 9 Estimation error \(e(k)\) for Case 2

5 Conclusion

This paper has studied the event-triggered \(H_{\infty }\) filtering problem for discrete-time Markov jump systems with repeated scalar nonlinearities. To reduce the communication bandwidth utilization, a dynamic discrete event-triggered scheme is employed to determine whether the current sampled output signal should be transmitted. By using a diagonally-dominant-type Lyapunov function, sufficient conditions for stochastic stability with \(H_{\infty }\) performance are obtained for the filtering error system, and the \(H_{\infty }\) filter is designed based on these conditions. Finally, a numerical example illustrates the effectiveness of the proposed method. Future work may focus on event-triggered control or filtering for Markov jump systems under different network conditions, such as network attacks and asynchronous communication.