
1 Introduction

In recent years, sampled-data systems have attracted the attention of many researchers [1, 2] due to the rapid development of digital control systems and networked control systems. Most results on sampled-data systems adopt a periodically triggered control method [3]. Periodic sampling simplifies system modeling and analysis [4], but from the viewpoint of resource utilization it has clear limitations: when the system is running smoothly, periodic transmission wastes resources and bandwidth. At the same time, as the scale of such systems grows [5, 6], the amount of data transmitted over the network becomes large, so saving resources and bandwidth is necessary. From both aspects, the event-triggering mechanism shows its unique advantages [7]. Recently, research on networked control systems based on event-driven mechanisms has received increasing attention, and many results have been reported [8,9,10]. It is therefore worthwhile to analyze and design networked control systems based on the event-driven mechanism.

In an event-triggered mechanism, the transmission of data is governed by a predefined trigger algorithm [11]. The benefit of the mechanism therefore depends on the choice of the trigger algorithm and the corresponding parameter settings. At the same time, stability analysis under an event-triggered mechanism depends on the choice of the Lyapunov-Krasovskii functional: an appropriate functional, together with careful treatment of the associated integral terms, reduces the conservatism of the resulting conditions, and less conservative conditions make the proposed scheme more valuable.

Inspired by [12, 13], and in contrast to other aperiodic sampling approaches, we consider a nonperiodically sampled-data system and model it as a state-delay system. We then propose a more general event-triggered algorithm based on nonperiodic sampling. After tuning the corresponding parameters, different choices of the elements of \( \Theta \) can reduce the amount of transmitted sampled data. For the Lyapunov-Krasovskii functional, we choose a discontinuous (sampling-dependent) functional to reduce conservatism, and in handling some integral terms we apply the improved Jensen inequality [14] together with results from [15] to reduce it further.

2 Problem Formulation

Consider a class of linear systems:

$$ \dot{x}(t) = Ax(t) + Bu(t) $$
(1)

where \( x(t) \in R^{n} \) is the state vector, \( u(t) \in R^{m} \) is the control input, \( A \in R^{n \times n} \), \( B \in R^{n \times m} \) are known constant matrices with appropriate dimensions.

Similar to [13], this paper considers an event-triggered mechanism. Let \( r_{k} ,k = 1,2, \ldots \) denote the last released instant, and let the next released instant be \( r_{k + 1} = r_{k} + \sum\limits_{s = 0}^{{l_{k} }} {\Delta t_{s} } \) with \( 1 \le l_{k} < \infty ,\;l_{k} \in N \). We divide the time interval \( [r_{k} ,r_{k + 1} ) \) into the following subintervals:

$$ [r_{k} ,r_{k + 1} ) = \bigcup\limits_{d = - 1}^{l_{k} - 1} {I_{d}^{k} } $$
(2)

where \( I_{d}^{k} = [r_{k} + \sum\limits_{s = 0}^{d} {\Delta t_{s} } ,r_{k} + \sum\limits_{s = 0}^{d + 1} {\Delta t_{s} } ) \) for \( d \in [0,l_{k} - 1] \), with the convention \( \sum\limits_{s = 0}^{ - 1} {\Delta t_{s} } = 0 \) so that \( I_{ - 1}^{k} = [r_{k} ,r_{k} + \Delta t_{0} ) \). The trigger instants \( r_{k} \) satisfy \( 0 = r_{0} < r_{1} < \ldots < r_{k} < \ldots \) and \( 0 \le \underline{r} \le r_{k + 1} - r_{k} \le \bar{r} \) for all \( k \in N \).

The event-triggered algorithm proposed in this paper is:

$$ \varepsilon^{2} e^{T} (r_{k} + \sum\limits_{s = 0}^{d} {\Delta t_{s} } )\Omega _{1} e(r_{k} + \sum\limits_{s = 0}^{d} {\Delta t_{s} } ) \le x^{T} (r_{k} )\Theta \,\Omega _{2}\Theta x(r_{k} ) $$

where

\( e(r_{k} + \sum\limits_{s = 0}^{d} {\Delta t_{s} } ) = x(r_{k} + \sum\limits_{s = 0}^{d} {\Delta t_{s} } ) - x(r_{k} ) \), \( \Theta = diag\{ \sqrt {\sigma_{1} } ,\sqrt {\sigma_{2} } , \ldots ,\sqrt {\sigma_{n} } \} \) with \( \sigma_{i} > 0\;(i = 1,2, \ldots ,n) \), and \( \Omega _{1} > 0 \), \( \Omega _{2} > 0 \) are two weighting matrices.

Remark 1.

Notice that, compared with the traditional event-triggered algorithm, this algorithm introduces a diagonal matrix \( \Theta \) containing a different weighting factor \( \sigma_{i} \) for each component \( x_{i} \) of the latest transmitted sampled state \( x \). Choosing the elements of \( \Theta \) differently can reduce the amount of transmitted sampled data, so communication and computation resources are saved considerably. Moreover, if we take \( \Theta = diag\{ \sqrt \sigma ,\sqrt \sigma , \ldots ,\sqrt \sigma \} \) and \( \varepsilon = 1 \), this event-triggered algorithm reduces to the traditional one. Therefore, this event-triggered algorithm is more general than some existing ones.
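To make the mechanism concrete, the following minimal Python sketch evaluates the trigger condition at a single sampling instant. The function name, the identity weighting matrices in the usage lines, and the reading that a violation of the inequality causes a release are illustrative assumptions, not prescriptions from the scheme above.

```python
import numpy as np

def should_release(x_sample, x_released, Theta, Omega1, Omega2, eps):
    """Evaluate the proposed trigger condition at one sampling instant.

    While eps^2 * e' Omega1 e <= x(r_k)' Theta Omega2 Theta x(r_k) holds,
    the sample is not transmitted; a violation causes a release.
    """
    e = x_sample - x_released                 # sampling error e(.)
    lhs = eps**2 * e @ Omega1 @ e             # weighted error energy
    rhs = x_released @ Theta @ Omega2 @ Theta @ x_released
    return lhs > rhs                          # True -> transmit x_sample

# Usage with Theta = diag{sqrt(0.59), sqrt(0.47)} and identity weights:
Theta = np.diag(np.sqrt([0.59, 0.47]))
print(should_release(np.array([1.02, -0.48]), np.array([1.0, -0.5]),
                     Theta, np.eye(2), np.eye(2), eps=1.0))
```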

Similar to [13], define a time-varying delay \( \tau (t) \) as:

$$ \tau (t) = \left\{ {\begin{array}{*{20}l} {t - r_{k} ,t \in [r_{k} ,r_{k} + \Delta t_{0} )} \hfill \\ {t - r_{k} - \sum\limits_{s = 0}^{d} {\Delta t_{s} ,t \in [r_{k} + \sum\limits_{s = 0}^{d} {\Delta t_{s} } ,r_{k} + \sum\limits_{s = 0}^{d + 1} {\Delta t_{s} } )} } \hfill \\ \end{array} } \right. $$

where \( d \in [0,l_{k} - 1] \).

Then we have

$$ \begin{array}{*{20}c} {e(r_{k} + \sum\limits_{s = 0}^{d} {\Delta t_{s} } ) = e(t - \tau (t)) = e_{\tau } (t),} \\ {x(r_{k} ) = x\left( {t - \tau \left( t \right)} \right) - e(r_{k} + \sum\limits_{s = 0}^{d} {\Delta t_{s} } ) = x_{\tau } (t) - e_{\tau } (t)} \\ \end{array} $$

The event-triggered algorithm can then be rewritten as:

$$ \varepsilon^{2} e_{\tau }^{T} (t)\Omega _{1} e_{\tau } (t) \le [x_{\tau } (t) - e_{\tau } (t)]^{T}\Theta \,\Omega _{2}\Theta [x_{\tau } (t) - e_{\tau } (t)] $$

Considering the event-triggered mechanism, we design the controller as follows:

$$ u(t) = Kx(r_{k} ),t \in [r_{k} ,r_{k + 1} ) $$
(3)

where the control input \( u(t) \in R^{m} \) is held constant between releases, i.e., \( u(t) = u(r_{k} ) \) for \( t \in [r_{k} ,r_{k + 1} ) \). From the definition of \( e_{\tau } (t) \),

$$ x(r_{k} ) = x_{\tau } (t) - e_{\tau } (t) $$
(4)

Substituting (3) into (1) yields:

$$ \dot{x}(t) = Ax(t) + BKx(r_{k} ) $$
(5)

Then, substituting (4) into (5), the original model can be converted into:

$$ \dot{x}(t) = Ax(t) + A_{1} (x_{\tau } (t) - e_{\tau } (t)),t \in [r_{k} ,r_{k + 1} ) $$
(6)

where \( A_{1} = BK \).
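To show how model (6) behaves under this mechanism, here is a minimal Python simulation sketch using the matrices of Example 1 in Sect. 4 (so \( A_{1} = BK \) enters directly and \( K \) is not needed separately). The weighting matrices \( \Omega _{1} ,\Omega _{2} \), the aperiodic step distribution, the initial state, and the Euler step are illustrative assumptions, not values taken from the examples.

```python
import numpy as np

# Matrices of Example 1 (Sect. 4); A1 = B K enters directly.
A  = np.array([[0.0, 1.0], [0.0, -0.1]])
A1 = np.array([[0.0, 0.0], [-0.375, -1.15]])

eps    = 1.0                          # Example 1, Case 1
Theta  = np.diag([0.59, 0.47])        # Example 1, Case 1
Omega1 = np.eye(2)                    # assumed weighting matrices
Omega2 = np.eye(2)

rng = np.random.default_rng(0)
h = 1e-3                              # Euler integration step (assumed)
x = np.array([1.0, -1.0])             # initial state (assumed)
x_rel = x.copy()                      # last released state x(r_k)
t, t_sample, releases, samples = 0.0, 0.0, 0, 0

while t < 20.0:
    if t >= t_sample:                 # aperiodic sampling instant
        samples += 1
        e = x - x_rel
        if eps**2 * e @ Omega1 @ e > x_rel @ Theta @ Omega2 @ Theta @ x_rel:
            x_rel = x.copy()          # trigger violated: release new state
            releases += 1
        t_sample += rng.uniform(0.02, 0.1)   # assumed Delta t_s distribution
    x = x + h * (A @ x + A1 @ x_rel)  # Euler step of system (6)
    t += h

print(f"{releases} releases out of {samples} samples; final x = {x}")
```

In such a run, far fewer releases than samples indicates the saving in transmissions that the mechanism is designed to deliver.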

Lemma 1 [14].

For a given matrix \( R \in S_{ + }^{n} \) and any differentiable function \( x:[a,b] \to R^{n} \), the following inequality holds:

$$ \int_{a}^{b} {\dot{x}^{T} } (u)R\dot{x}(u)du \ge \frac{1}{b - a}\Omega ^{T} diag(R,3R)\Omega $$

where

$$ \Omega = \left[ {\begin{array}{*{20}l} {x(b) - x(a)} \hfill \\ {x(b) + x(a) - \frac{2}{b - a}\int_{a}^{b} {x(u)du} } \hfill \\ \end{array} } \right] $$
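As a quick numerical sanity check of Lemma 1 (not part of the development), one can compare both sides for a concrete differentiable function; the choice \( x(u) = [\sin u,\;u^{2} ]^{T} \) on \( [0,1] \) and the particular \( R \) below are arbitrary.

```python
import numpy as np

a, b = 0.0, 1.0
u = np.linspace(a, b, 20001)
du = u[1] - u[0]
x  = np.vstack([np.sin(u), u**2])            # x(u) in R^2, differentiable
dx = np.vstack([np.cos(u), 2.0 * u])         # its exact derivative

R = np.array([[2.0, 0.5], [0.5, 1.0]])       # a fixed R in S_+^2

def trapezoid(y):                            # trapezoid rule on the grid u
    return (0.5 * (y[..., 0] + y[..., -1]) + y[..., 1:-1].sum(axis=-1)) * du

lhs = trapezoid(np.einsum('in,ij,jn->n', dx, R, dx))   # int dx' R dx du

int_x = trapezoid(x)                          # componentwise integral of x
Omega = np.concatenate([x[:, -1] - x[:, 0],
                        x[:, -1] + x[:, 0] - 2.0 * int_x / (b - a)])
rhs = Omega @ np.block([[R, np.zeros((2, 2))],
                        [np.zeros((2, 2)), 3.0 * R]]) @ Omega / (b - a)

print(lhs >= rhs, float(lhs), float(rhs))     # Lemma 1: lhs >= rhs
```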

3 Stability Analysis

Theorem 1.

For given positive scalars \( \underline{r} \) and \( \bar{r} \) and a given gain matrix \( K \), if there exist symmetric matrices \( P > 0,\,Q > 0,\,\Omega > 0,\,Q_{1} > 0 \), matrices \( Q_{2} ,M_{1} ,M_{2} \in R^{n \times n} \), and \( N_{1j} ,N_{2j} ,N_{3j} \in R^{n \times n} \;(j = 1,2,3,4) \) such that the following inequalities hold:

$$ \left[ {\begin{array}{*{20}c} {\Pi _{11} } & * & * & * & * & * \\ {\Pi _{21} } & {\Pi _{22} } & * & * & * & * \\ {\Pi _{31} } & {\Pi _{32} } & {\Pi _{33} } & * & * & * \\ {\Pi _{41} } & {\Pi _{42} } & {\Pi _{43} } & {\Pi _{44} } & * & * \\ {rN_{11}^{T} } & {rN_{12}^{T} } & {rN_{13}^{T} } & {rN_{14}^{T} } & { - rQ} & * \\ {3rN_{21}^{T} } & {3rN_{22}^{T} } & {3rN_{23}^{T} } & {3rN_{24}^{T} } & 0 & { - 3rQ} \\ \end{array} } \right] < 0 $$
$$ \left[ {\begin{array}{*{20}c} {{\rm X}_{11} } & * & * & * & * \\ {{\rm X}_{21} } & {{\rm X}_{22} } & * & * & * \\ {{\rm X}_{31} } & {{\rm X}_{32} } & {{\rm X}_{33} } & * & * \\ {{\rm X}_{41} } & {{\rm X}_{42} } & {{\rm X}_{43} } & {{\rm X}_{44} } & * \\ {rAQ} & {rA_{1} Q} & { - rA_{1} Q} & 0 & { - rQ} \\ \end{array} } \right] < 0 $$

where

$$ \begin{aligned}\Pi _{11} & = A^{T} P + PA - N_{11} - N_{11}^{T} - N_{31} - N_{31}^{T} - 3N_{21} - 3N_{21}^{T} - 2M_{1} \\\Pi _{21} & = A_{1}^{T} P - N_{12} + N_{11}^{T} - N_{32} + N_{31}^{T} - 3N_{22} - 3N_{21}^{T} - M_{2} + M_{1} + rA_{1}^{T} N_{31}^{T} \\\Pi _{22} & = N_{12} + N_{12}^{T} + N_{32} + N_{32}^{T} - 3N_{22} - 3N_{22}^{T} + 2M_{2} + rN_{32} A_{1} + rA_{1}^{T} N_{32}^{T} \\ & \quad - rQ_{2} +\Theta \,\Omega \,\Theta \\\Pi _{31} & = A_{1}^{T} P - N_{13} - N_{11}^{T} - N_{33} - N_{31}^{T} - 3N_{23} + 3N_{21}^{T} + M_{2} - M_{1} - rA_{1}^{T} N_{31}^{T} \\\Pi _{32} & = N_{13} - N_{12}^{T} + N_{33} - N_{32}^{T} - 3N_{23} + 3N_{22}^{T} - 2M_{2} + rN_{33} A_{1} - rA_{1}^{T} N_{32}^{T} \\ & \quad + rQ_{2} -\Theta \,\Omega \,\Theta \\\Pi _{33} & = - N_{13} - N_{13}^{T} - N_{33} - N_{33}^{T} + 3N_{23} + 3N_{23}^{T} + 2M_{2} - rN_{33} A_{1} - rA_{1}^{T} N_{33}^{T} \\ & \quad - rQ_{2} - \varepsilon^{2}\Omega +\Theta \,\Omega \,\Theta \\ \end{aligned} $$
$$ \begin{aligned} \Pi _{41} & = - N_{14} - N_{34} - 3N_{24} + 6N_{21}^{T} + rA^{T} N_{31}^{T} \\\Pi _{42} & = N_{14} + N_{34} - 3N_{24} + 6N_{22}^{T} + rN_{34} A_{1} + rA^{T} N_{32}^{T} \\\Pi _{43} & = - N_{14} - N_{34} + 3N_{24} + 6N_{23}^{T} - rN_{34} A_{1} + rA^{T} N_{33}^{T} \\\Pi _{44} & = 6N_{24} + 6N_{24}^{T} + rN_{34} A + rA^{T} N_{34}^{T} - rQ_{1} \\ {\rm X}_{11} & = A^{T} P + PA - N_{11} - N_{11}^{T} - N_{31} - N_{31}^{T} - 3N_{21} - 3N_{21}^{T} - 2M_{1} + rA^{T} M_{1} \\ & \quad + 2rM_{1} A + rA^{T} M_{1} + rQ_{1} \\ {\rm X}_{21} & = A_{1}^{T} P - N_{12} + N_{11}^{T} - N_{32} + N_{31}^{T} - 3N_{22} - 3N_{21}^{T} - M_{2} + M_{1} + rA_{1}^{T} M_{1} \\ & \quad + rM_{2} A + rA_{1}^{T} M_{1} - rM_{1} A \\ {\rm X}_{22} & = N_{12} + N_{12}^{T} + N_{32} + N_{32}^{T} - 3N_{22} - 3N_{22}^{T} + 2M_{2} - rA_{1}^{T} M_{1} + rM_{2} A_{1} \\ & \quad - rM_{1} A_{1} + rA_{1}^{T} M_{2} + rQ_{2} \\ {\rm X}_{31} & = A_{1}^{T} P - N_{13} - N_{11}^{T} - N_{33} - N_{31}^{T} - 3N_{23} + 3N_{21}^{T} + M_{2} - M_{1} - rA_{1}^{T} M_{1} \\ & \quad - rM_{2} A - rA_{1}^{T} M_{1} + rM_{1} A \\ {\rm X}_{32} & = N_{13} - N_{12}^{T} + N_{33} - N_{32}^{T} - 3N_{23} + 3N_{22}^{T} - 2M_{2} + rA_{1}^{T} M_{1} - rM_{2} A_{1} \\ & \quad + rM_{1} A_{1} - rA_{1}^{T} M_{2} - rQ_{2} \\ {\rm X}_{33} & = - N_{13} - N_{13}^{T} - N_{33} - N_{33}^{T} + 3N_{23} + 3N_{23}^{T} + 2M_{2} - rA_{1}^{T} M_{1} + rM_{2} A_{1} \\ & \quad - rM_{1} A_{1} + rA_{1}^{T} M_{2} + rQ_{2} \\ {\rm X}_{41} & = - N_{14} - N_{34} - 3N_{24} + 6N_{21}^{T} \\ {\rm X}_{42} & = N_{14} + N_{34} - 3N_{24} + 6N_{22}^{T} \\ {\rm X}_{43} & = - N_{14} - N_{34} + 3N_{24} + 6N_{23}^{T} \\ \end{aligned} $$

Then the system (6) is asymptotically stable.

Proof.

Similar to [12], we select a Lyapunov-like functional:

$$ V(x(t),t) = V_{1} (x(t)) + V_{2} (x(t),t) + V_{3} (x(t),t) $$

where

$$ \begin{aligned} & V_{1} (x(t)) = x^{T} (t)Px(t) \\ & V_{2} (x(t),t) = 2(r_{k + 1} - t)(x^{T} (t)M_{1} + x^{T} (r_{k} )M_{2} )(x(t) - x(r_{k} )) + (r_{k + 1} - t)\int_{{r_{k} }}^{t} {\dot{x}^{T} (s)} Q\dot{x}(s)ds \\ & V_{3} (x(t),t) = (r_{k + 1} - t)\int_{{r_{k} }}^{t} {x^{T} (s)} Q_{1} x(s)ds + (r_{k + 1} - t)(t - r_{k} )x^{T} (r_{k} )Q_{2} x(r_{k} ) \\ \end{aligned} $$

Then define \( \xi (t) = \left[ {\begin{array}{*{20}c} {x^{T} (t)} & {x_{\tau }^{T} (t)} & {e_{\tau }^{T} (t)} & {\nu^{T} (t)} \\ \end{array} } \right]^{T} \) where \( \nu (t) = \frac{1}{{t - r_{k} }}\int_{{r_{k} }}^{t} {x(s)ds} . \)

Taking the derivative of \( V(x(t),t) \) along the trajectory of system (6) gives:

$$ \begin{aligned} & \dot{V}(x(t),t) = \dot{V}_{1} (x(t)) + \dot{V}_{2} (x(t),t) + \dot{V}_{3} (x(t),t) \\ & \dot{V}_{1} (x(t)) = x^{T} (t)(A^{T} P + PA)x(t) + 2x^{T} (t)PA_{1} x_{\tau } (t) - 2x^{T} (t)PA_{1} e_{\tau } (t) \\ & \dot{V}_{2} (x(t),t) = 2\xi^{T} (t){\rm Z}_{1} \xi (t) + (r_{k + 1} - t)\xi^{T} (t)(He({\rm Z}_{2} ) + {\rm Z}_{3} )\xi (t) - \int_{{r_{k} }}^{t} {\dot{x}^{T} } (s)Q\dot{x}(s)ds \\ & \dot{V}_{3} (x(t),t) = (r_{k + 1} - t)\xi^{T} (t)\Gamma_{1} \xi (t) + (r_{k + 1} - t)\xi^{T} (t)\Gamma_{2} \xi (t) - (t - r_{k} )\xi^{T} (t)\Gamma_{2} \xi (t) - \int_{{r_{k} }}^{t} {x^{T} (s)Q_{1} } x(s)ds \\ \end{aligned} $$

where

$$ {\rm Z}_{1} = \left[ {\begin{array}{*{20}c} { - M_{1} } & {M_{1} } & { - M_{1} } & 0 \\ { - M_{2} } & {M_{2} } & { - M_{2} } & 0 \\ {M_{2} } & { - M_{2} } & {M_{2} } & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} } \right] $$
$$ {\rm Z}_{2} = \left[ {\begin{array}{*{20}c} {A^{T} M_{1} + M_{1} A} & {M_{1} A_{1} - A^{T} M_{1} } & { - M_{1} A_{1} + A^{T} M_{1} } & 0 \\ {A_{1}^{T} M_{1} + M_{2} A} & { - A_{1}^{T} M_{1} + M_{2} A_{1} } & {A_{1}^{T} M_{1} - M_{2} A_{1} } & 0 \\ { - A_{1}^{T} M_{1} - M_{2} A} & {A_{1}^{T} M_{1} - M_{2} A_{1} } & { - A_{1}^{T} M_{1} + M_{2} A_{1} } & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} } \right] $$
$$ {\rm Z}_{3} = \left[ {\begin{array}{*{20}c} {A^{T} QA} & * & * & * \\ {A_{1}^{T} QA} & {A_{1}^{T} QA_{1} } & * & * \\ { - A_{1}^{T} QA} & { - A_{1}^{T} QA_{1} } & {A_{1}^{T} QA_{1} } & * \\ 0 & 0 & 0 & 0 \\ \end{array} } \right] $$
$$ \Gamma _{1} = \left[ {\begin{array}{*{20}c} {Q_{1} } & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \\ \end{array} } \right]\quad\Gamma _{2} = \left[ {\begin{array}{*{20}c} 0 & * & * & * \\ 0 & {Q_{2} } & * & * \\ 0 & { - Q_{2} } & {Q_{2} } & * \\ 0 & 0 & 0 & 0 \\ \end{array} } \right] $$

Integrating both sides of system (5) on \( [r_{k} ,t) \), we have

$$ x(t) - x(r_{k} ) = A\int_{{r_{k} }}^{t} {x(s)ds + (t - r_{k} )A_{1} } x(r_{k} ) $$
(7)

According to (7), for any \( N_{3} \in R^{4n \times n} \), the following identity holds:

$$ \begin{aligned} - 2\xi^{T} & (t)N_{3} (e_{1} - e_{2} + e_{3} )\xi (t) + 2(t - r_{k} )\xi^{T} (t)N_{3} Ae_{4} \xi (t) \\ & + 2(t - r_{k} )\xi^{T} (t)N_{3} A_{1} (e_{2} - e_{3} )\xi (t) = 0 \\ \end{aligned} $$
(8)

By Lemma 1, we have

$$ \begin{aligned} - \int_{{r_{k} }}^{t} {\dot{x}^{T} } (s)Q\dot{x}(s)ds \le - \frac{1}{{t - r_{k} }}\xi^{T} (t)(e_{1} - e_{2} + e_{3} )^{T} Q(e_{1} - e_{2} + e_{3} )\xi (t) \hfill \\ \quad \;\; - \frac{3}{{t - r_{k} }}\xi^{T} (t)(e_{1} + e_{2} - e_{3} - 2e_{4} )^{T} Q(e_{1} + e_{2} - e_{3} - 2e_{4} )\xi (t) \hfill \\ \end{aligned} $$

In addition, for any \( N_{1} ,N_{2} \in R^{4n \times n} \), the following inequality holds:

$$ \begin{aligned} - \int_{{r_{k} }}^{t} {\dot{x}^{T} } (s)Q\dot{x}(s)ds \le \;\xi^{T} (t)[ & (t - r_{k} )N_{1} Q^{ - 1} N_{1}^{T} - N_{1} (e_{1} - e_{2} + e_{3} ) - (e_{1} - e_{2} + e_{3} )^{T} N_{1}^{T} \\ & + 3(t - r_{k} )N_{2} Q^{ - 1} N_{2}^{T} - 3N_{2} (e_{1} + e_{2} - e_{3} - 2e_{4} ) \\ & - 3(e_{1} + e_{2} - e_{3} - 2e_{4} )^{T} N_{2}^{T} ]\xi (t) \\ \end{aligned} $$
(9)

where

\( e_{1} = \left[ {\begin{array}{*{20}c} I & 0 & 0 & 0 \\ \end{array} } \right],e_{2} = \left[ {\begin{array}{*{20}c} 0 & I & 0 & 0 \\ \end{array} } \right],e_{3} = \left[ {\begin{array}{*{20}c} 0 & 0 & I & 0 \\ \end{array} } \right],e_{4} = \left[ {\begin{array}{*{20}c} 0 & 0 & 0 & I \\ \end{array} } \right] \).

According to Jensen's inequality, we have the following inequality:

$$ - \int_{{r_{k} }}^{t} {x^{T} (s)Q_{1} } x(s)ds \le - (t - r_{k} )\nu^{T} (t)Q_{1} \nu (t) $$
(10)

From (8)–(10) and by the Schur complement lemma, since the resulting condition is affine in the elapsed time \( t - r_{k} \), Theorem 1 can be derived by checking \( r \in \{ \underline{r} ,\bar{r}\} \). This completes the proof.
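To make the Schur complement step explicit (a short expansion of our own, consistent with the block structure of the first LMI): for \( Q > 0 \) and \( r > 0 \),

$$ M + rN_{1} Q^{ - 1} N_{1}^{T} < 0\; \Leftrightarrow \;\left[ {\begin{array}{*{20}c} M & {rN_{1} } \\ {rN_{1}^{T} } & { - rQ} \\ \end{array} } \right] < 0 $$

which is exactly how the terms \( (t - r_{k} )N_{1} Q^{ - 1} N_{1}^{T} \) and \( 3(t - r_{k} )N_{2} Q^{ - 1} N_{2}^{T} \) from (9) generate the last two block rows and columns of the first LMI in Theorem 1.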

4 Numerical Examples

In this section, a numerical simulation is given to verify the results proposed in the previous section.

Example 1.

Consider the system in [12] with the parameter matrices:

$$ A = \left[ {\begin{array}{*{20}c} 0 & 1 \\ 0 & { - 0.1} \\ \end{array} } \right],\quad A_{1} = \left[ {\begin{array}{*{20}c} 0 & 0 \\ { - 0.375} & { - 1.15} \\ \end{array} } \right] $$

When \( \underline{r} = 0 \), the admissible upper bound \( \bar{r} \) can be calculated with the MATLAB LMI toolbox according to Theorem 1.
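Any semidefinite-programming front end can play the role of the LMI toolbox. As a minimal illustration of the workflow (deliberately not the full block LMIs of Theorem 1, which are lengthy), the Python/CVXPY sketch below checks a basic Lyapunov LMI for the closed loop of this example in the limit \( r \to 0 \); to reproduce the tables, one would assemble the full blocks of Theorem 1 in the same way and bisect on \( \bar{r} \).

```python
import numpy as np
import cvxpy as cp

# Example 1 matrices; in the limit r -> 0 the closed loop is dx/dt = (A + A1)x.
A  = np.array([[0.0, 1.0], [0.0, -0.1]])
A1 = np.array([[0.0, 0.0], [-0.375, -1.15]])
Acl = A + A1

# Basic Lyapunov LMI: find P = P' > 0 with Acl' P + P Acl < 0.
P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> 1e-6 * np.eye(2),
               Acl.T @ P + P @ Acl << -1e-6 * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, P.value)   # 'optimal' means the LMI is feasible
```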

(1) Case 1: Set the event-trigger parameters

$$ \varepsilon = 1,\Theta = diag\{ 0.59,0.47\} $$

The admissible upper bound \( \bar{r} \) and some results in [12, 16, 17] are shown in Table 1.

Table 1. Admissible upper bound \( \bar{r} \) under different schemes

From Table 1, it can be seen clearly that Theorem 1 is less conservative than the results in [12, 16, 17].

To further verify the effectiveness of the event-triggered algorithm, we carry out the following experiments.

(2) Case 2: We set different values of \( \varepsilon \) and run several simulations:

$$ \varepsilon_{1} = 10,\varepsilon_{2} = 15,\varepsilon_{3} = 30 $$

The admissible upper bounds corresponding to different \( \varepsilon \) are shown in Table 2.

Table 2. Admissible upper bound \( \bar{r} \) with different \( \varepsilon \)

From Table 2, we can see clearly that different values of \( \varepsilon \) reduce the conservatism to different degrees.

Example 2.

Consider the system in [12] with the parameter matrices:

$$ A = \left[ {\begin{array}{*{20}c} {0.05} & {0.6} & {0.1} \\ { - 3} & { - 2} & {0.1} \\ {0.1} & 0 & { - 2} \\ \end{array} } \right],\quad A_{1} = \left[ {\begin{array}{*{20}c} {0.05} & {0.05} & {0.4} \\ { - 1} & 1 & {0.05} \\ {0.5} & {0.05} & { - 0.9} \\ \end{array} } \right] $$
(1) Case 1: Set the event-trigger parameters

$$ \varepsilon = 15,\quad\Theta = diag\{ 0.59,0.47,0.51\} $$

The admissible upper bound and some results in [14, 17] are shown in Table 3.

Table 3. Admissible upper bound \( \bar{r} \) under different schemes

For this different simulation model, Table 3 shows that Theorem 1 is less conservative than the results in [14, 17].

(2) Case 2: Now we set \( \Theta = diag\{ 0.59,0.47,0.51\} \) and choose different values of \( \varepsilon \) for several simulations:

$$ \varepsilon_{1} = 20,\varepsilon_{2} = 25,\varepsilon_{3} = 30 $$

The admissible upper bounds \( \bar{r} \) corresponding to different \( \varepsilon \) are shown in Table 4.

Table 4. Admissible upper bound \( \bar{r} \) with different \( \varepsilon \)

From Table 4, we can see clearly that different values of \( \varepsilon \) reduce the conservatism to different degrees.

From Tables 1 and 3, it can be seen clearly that Theorem 1 is less conservative than the results in [16, 17]. From Tables 2 and 4, the event-triggering algorithm clearly helps to reduce conservatism.

5 Conclusion

In this paper, based on sampling-dependent stability for sampled-data systems, a more general event-triggering mechanism is considered. To reduce conservatism, we utilize a Lyapunov-like functional that includes an integral of the state, and we apply the improved Jensen inequality when bounding the derivative of this functional. Finally, a sampling-dependent stability theorem is derived, and its validity is verified by several simulation experiments.