1 Introduction

In the past few decades, researchers have become increasingly interested in neural networks because they can be applied to many real-world systems in various fields of science and engineering, such as combinatorial optimization, automatic control, information science, systems engineering, parallel computation, fault diagnosis, pattern recognition and signal processing [1,2,3,4,5]. As a special kind of neural network, a neutral neural network contains delays in both the state and the derivative of the state, and is generally known to exhibit more complicated dynamical characteristics. Many real-world systems can be aptly described by neutral-type neural networks.

It is well known that time delays are usually encountered in electronic implementations of neural networks due to the finite switching speed of amplifiers and communication times. Therefore, delayed neural networks have been proposed and have received a great deal of attention. In successful applications of neural networks, such as signal processing, pattern recognition and associative memory design, stability is a prerequisite. In hardware implementations of neural networks, some signal transmission delays are unavoidable and may cause undesirable dynamical behaviors such as instability and oscillation. Therefore, it is necessary to take time delays into consideration when studying the stability of neural networks. In the past decades, the stability analysis of delayed neural networks has been extensively investigated and many useful stability criteria have been established [5,6,7,8]. On the other hand, the stabilization of delayed neural networks has attracted considerable attention and several feedback stabilizing control methods have been proposed [9,10,11].

As an important class of hybrid systems, switched systems consist of a finite number of subsystems and a logical rule that orchestrates switching between these subsystems [12]. Owing to their success in practical applications [13, 14], switched systems have been studied extensively during the past decades. For recent progress, readers can refer to the survey papers [15,16,17,18,19,20] and the references therein. Ma et al. [14] investigated the stabilization of networked switched linear systems by an asynchronous switching delay system approach. Dong et al. [15] dealt with exponential stabilization and L2-gain for uncertain switched nonlinear systems with interval time-varying delay. Liu et al. [17] investigated the stability and stabilization of nonlinear switched systems by the average dwell time method. Arunkumar et al. [19] gave robust stability criteria for discrete-time switched neural networks with various activation functions. Wu et al. [20] investigated the global exponential stability of switched stochastic neural networks with time-varying delays. Shen et al. [21] considered the stability analysis of uncertain switched neural networks with time-varying delay. In [22], the stabilizability analysis and stabilizing switching signal design of switched Boolean networks were investigated via the semi-tensor product of matrices. The above discussion shows that it is significant to investigate switched neural networks with time delay and parametric uncertainty.

To the best of our knowledge, the problem of robust exponential stabilization for uncertain switched neutral neural networks with mixed time-varying delays has rarely been studied. This paper fills this gap by investigating the exponential stabilization problem for a class of uncertain switched neutral neural networks with mixed time-varying delays. By employing a new multiple Lyapunov-like functional and introducing free-weighting matrices, we develop novel sufficient conditions for the exponential stabilization of this class of systems. Moreover, a design scheme for the stabilizing feedback controllers is proposed to guarantee the exponential stability of the corresponding closed-loop systems. Finally, two numerical examples are presented to illustrate the developed results.

This paper is organized as follows. Section 2 gives the system description, an assumption and some lemmas. In Sect. 3, sufficient conditions for exponential stabilization are presented for a class of uncertain switched neutral neural networks with time-varying delay. Numerical examples are provided in Sect. 4 to illustrate the validity of the proposed method. Finally, conclusions and remarks are given in Sect. 5.

Notations: throughout this paper, \( R^{n} \) denotes the n-dimensional Euclidean space and \( R^{n \times m} \) the set of all \( n \times m \) real matrices; \( * \) represents the symmetric elements below the main diagonal of a symmetric matrix; \( M^{T} \) denotes the transpose of M; \( \left\| \cdot \right\| \) is the Euclidean norm of a vector; \( M > 0\;\left( { < 0,\; \le 0,\; \ge 0} \right) \) means that M is a symmetric positive definite (negative definite, negative semidefinite, positive semidefinite) matrix; I is an identity matrix of appropriate dimension; \( \lambda_{\hbox{max} } \left( M \right) \) and \( \lambda_{\hbox{min} } \left( M \right) \) stand for the maximal and minimal eigenvalues of a matrix M, respectively; \( \sup_{x \in [a,b]} f(x) \) denotes the least upper bound of the function \( f(x) \) on the interval \( [a,b] \).

2 Problem Formulation and Preliminaries

Consider the following uncertain switched neutral neural networks with mixed time-varying delays

$$ \begin{aligned} & \dot{x}(t) - C_{\sigma (t)} (t)\dot{x}(t - h(t)) = - A_{\sigma (t)} (t)x(t) + B_{\sigma (t)} (t)f(x(t)) + D_{\sigma (t)} (t)g(x(t - \tau (t)))\\ & + E_{\sigma (t)} (t)u(t), \\& x(t) = \phi (t),\quad \forall t \in [ - \bar{\tau },0], \\ \end{aligned} $$
(1)

where \( x(t) = [x_{1} (t),x_{2} (t), \ldots ,x_{n} (t)]^{T} \in R^{n} \) is the state vector of the neural networks associated with n neurons at time t. \( u(t) \in R^{n} \) is the control input.

$$ \begin{aligned} & A_{\sigma (t)} (t) = A_{\sigma (t)} + \Delta A_{\sigma (t)} (t),\quad B_{\sigma (t)} (t) = B_{\sigma (t)} + \Delta B_{\sigma (t)} (t),\quad C_{\sigma (t)} (t) = C_{\sigma (t)} + \Delta C_{\sigma (t)} (t), \\ & D_{\sigma (t)} (t) = D_{\sigma (t)} + \Delta D_{\sigma (t)} (t),\quad E_{\sigma (t)} (t) = E_{\sigma (t)} + \Delta E_{\sigma (t)} (t). \\ \end{aligned} $$

\( f(x(t)) = [f_{1} (x_{1} (t)),f_{2} (x_{2} (t)), \ldots ,f_{n} (x_{n} (t))]^{T} \) and \( g(x(t - \tau (t))) = [g_{1} (x_{1} (t - \tau (t))), \ldots ,g_{n} (x_{n} (t - \tau (t)))]^{T} \) denote the neuron activation functions with \( f(0) = 0,\;g(0) = 0. \) \( h(t) \) and \( \tau (t) \) denote the time-varying delays, which are differentiable everywhere and satisfy

$$ \begin{aligned} & 0 \le h(t) \le h_{1} ,\quad \dot{h}(t) \le h < 1, \\ & 0 \le \tau (t) \le \tau_{1} ,\quad \dot{\tau }(t) \le \tau < 1,\quad \bar{\tau } = \hbox{max} \left\{ {h_{1} ,\tau_{1} } \right\}, \\ \end{aligned} $$
(2)

for known constants \( \tau_{1} ,h_{1} ,h \) and \( \tau . \) The initial condition \( \phi (t) \) is a continuous vector-valued function on the interval \( [ - \bar{\tau },0] \). The right-continuous function \( \sigma (t):[0,\infty ) \to {\mathbb{N}} \triangleq \left\{ {1,2, \ldots ,N} \right\} \) is the switching signal. The matrix \( A_{\sigma (t)} = diag\{ a_{1,\sigma (t)} ,a_{2,\sigma (t)} , \ldots ,a_{n,\sigma (t)} \} \) is positive definite. \( A_{i} ,\;B_{i} ,\;C_{i} ,\;D_{i} ,\;E_{i} ,\;i \in {\mathbb{N}}, \) are known constant matrices with appropriate dimensions. \( B_{\sigma (t)} (t) \) and \( D_{\sigma (t)} (t) \) are the connection weight matrix and the delayed connection weight matrix, respectively. The matrices \( \Delta A_{\sigma (t)} (t), \) \( \Delta B_{\sigma (t)} (t), \) \( \Delta C_{\sigma (t)} (t), \) \( \Delta D_{\sigma (t)} (t) \) and \( \Delta E_{\sigma (t)} (t) \) are unknown time-varying matrices with appropriate dimensions which represent the parameter uncertainties and satisfy:

$$ \left[ {\begin{array}{*{20}l} {\Delta A_{i} (t)} \hfill & {\Delta B_{i} (t)} \hfill & {\Delta C_{i} (t)} \hfill & {\Delta D_{i} (t)} \hfill & {\Delta E_{i} (t)} \hfill \\ \end{array} } \right] = H_{i} F_{i} (t)\left[ {\begin{array}{*{20}l} {M_{1}^{i} } \hfill & {M_{2}^{i} } \hfill & {M_{3}^{i} } \hfill & {M_{4}^{i} } \hfill & {M_{5}^{i} } \hfill \\ \end{array} } \right], $$

where \( H_{i} ,\;M_{j}^{i} ,\;j = 1,2,3,4,5,\;i \in {\mathbb{N}}, \) are known real constant matrices with appropriate dimensions, and \( F_{i} (t) \) is an unknown real time-varying matrix with Lebesgue measurable elements satisfying \( F_{i}^{T} (t)F_{i} (t) \le I \).
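The norm-bounded uncertainty structure above can be illustrated with a small numerical sketch; the matrices \( H_{1} \), \( M_{1}^{1} \) and the sample \( F(t) \) below are illustrative values, not data from the paper:

```python
import numpy as np

# Illustrative nominal matrix and uncertainty structure (assumed values)
A1 = np.array([[2.0, 0.0], [0.0, 3.0]])   # nominal A_1
H1 = np.array([[0.1], [0.2]])             # known structure matrix H_1
M1 = np.array([[1.0, 0.5]])               # known structure matrix M_1^1

# A sample admissible F(t): here a 1x1 matrix with F^T F <= I
F = np.array([[np.sin(0.7)]])
assert np.max(np.linalg.eigvalsh(F.T @ F - np.eye(1))) <= 0

dA = H1 @ F @ M1    # Delta A_1(t) = H_1 F_1(t) M_1^1
A1_t = A1 + dA      # A_1(t) = A_1 + Delta A_1(t)
```

Note that the structure confines the uncertainty to the range of \( H_{1} \), with size bounded by \( \left\| {H_{1} } \right\|\left\| {M_{1}^{1} } \right\| \) regardless of \( F(t) \).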

Assumption 1

[23] For any \( i = 1,2, \ldots ,n, \) there exist constants \( l_{i}^{ - } \), \( l_{i}^{ + } \), \( \gamma_{i}^{ - } \), and \( \gamma_{i}^{ + } \), such that

$$ \begin{aligned} l_{i}^{ - } \le \frac{{f_{i} (\alpha ) - f_{i} (\beta )}}{\alpha - \beta } \le l_{i}^{ + } ,\quad i = 1,2, \ldots ,n, \hfill \\ \gamma_{i}^{ - } \le \frac{{g_{i} (\alpha ) - g_{i} (\beta )}}{\alpha - \beta } \le \gamma_{i}^{ + } ,\quad i = 1,2, \ldots ,n, \hfill \\ \end{aligned} $$
(3)

where \( \alpha ,\beta \in {\mathbb{R}},\alpha \ne \beta \).

For expression convenience, we denote

$$ \begin{aligned} L_{1} & = diag\left\{ {l_{1}^{ - } l_{1}^{ + } ,l_{2}^{ - } l_{2}^{ + } , \ldots ,l_{n}^{ - } l_{n}^{ + } } \right\},\quad \varGamma_{1} = diag\left\{ {\gamma_{1}^{ - } \gamma_{1}^{ + } ,\;\gamma_{2}^{ - } \gamma_{2}^{ + } ,\; \cdots ,\;\gamma_{n}^{ - } \gamma_{n}^{ + } } \right\}, \\ L_{2} & = diag\left\{ {\frac{{l_{1}^{ - } + l_{1}^{ + } }}{2},\;\frac{{l_{2}^{ - } + l_{2}^{ + } }}{2}, \ldots ,\frac{{l_{n}^{ - } + l_{n}^{ + } }}{2}} \right\}, \\ \varGamma_{2} & = diag\left\{ {\frac{{\gamma_{1}^{ - } + \gamma_{1}^{ + } }}{2},\;\frac{{\gamma_{2}^{ - } + \gamma_{2}^{ + } }}{2}, \ldots ,\frac{{\gamma_{n}^{ - } + \gamma_{n}^{ + } }}{2}} \right\}. \\ \end{aligned} $$
(4)
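For instance, the common activation \( \tanh \) satisfies Assumption 1 with \( l_{i}^{ - } = 0 \) and \( l_{i}^{ + } = 1 \). A minimal numerical sketch (assuming tanh activations, for illustration only) checks the sector condition (3) on random samples and builds the diagonal matrices of (4):

```python
import numpy as np

# Assumed sector bounds for tanh: l^- = 0, l^+ = 1 (n = 3 neurons)
l_minus = np.zeros(3)
l_plus = np.ones(3)

rng = np.random.default_rng(2)
for _ in range(200):
    a, b = rng.standard_normal(2) * 3.0
    if abs(a - b) < 1e-12:
        continue
    # Difference quotient of tanh lies in the sector [0, 1]
    slope = (np.tanh(a) - np.tanh(b)) / (a - b)
    assert -1e-12 <= slope <= 1.0 + 1e-12

# Diagonal matrices of (4) built from the bounds
L1 = np.diag(l_minus * l_plus)           # here the zero matrix
L2 = np.diag((l_minus + l_plus) / 2.0)   # here 0.5 * I
```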

Lemma 1

[24] For any\( x,y \in R^{n} \)and a positive definite matrix\( P \in R^{n \times n} \), the following matrix inequality holds:

$$ -\,2x^{T} y \le x^{T} Px + y^{T} P^{ - 1} y . $$
(5)
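A quick numerical check of this completion-of-squares bound, \( -2x^{T} y \le x^{T} Px + y^{T} P^{ - 1} y \), on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Random symmetric positive definite P
Msq = rng.standard_normal((n, n))
P = Msq @ Msq.T + n * np.eye(n)

for _ in range(100):
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    lhs = -2.0 * x @ y
    rhs = x @ P @ x + y @ np.linalg.solve(P, y)  # y^T P^{-1} y
    # Holds since lhs + rhs = ||P^{1/2} x + P^{-1/2} y||^2 >= 0
    assert lhs <= rhs + 1e-9
```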

Lemma 2

[23] For any matrices\( Q = Q^{T} ,\;H,\;E \)with appropriate dimensions, the inequality

$$ Q + HF(t)E + E^{T} F^{T} (t)H^{T} < 0, $$

holds for all\( F(t) \)satisfying\( F^{T} (t)F(t) \le I \), if and only if there exists a scalar\( \varepsilon > 0 \)such that the following inequality holds:

$$ Q + \varepsilon^{ - 1} HH^{T} + \varepsilon E^{T} E < 0. $$
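The sufficiency direction of Lemma 2 can be illustrated numerically: once \( Q + \varepsilon^{ - 1} HH^{T} + \varepsilon E^{T} E < 0 \) holds, \( Q + HF(t)E + E^{T} F^{T} (t)H^{T} < 0 \) for every admissible \( F(t) \). A sketch on random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
H = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))
eps = 1.0

# Choose Q so that Q + eps^{-1} H H^T + eps E^T E = -I < 0
S = H @ H.T / eps + eps * E.T @ E
Q = -(S + np.eye(n))
assert np.max(np.linalg.eigvalsh(Q + S)) < 0

# Then Q + H F E + E^T F^T H^T < 0 for every F with F^T F <= I
for _ in range(50):
    F = rng.standard_normal((n, n))
    F = F / max(1.0, np.linalg.norm(F, 2))  # enforce spectral norm <= 1
    M = Q + H @ F @ E + E.T @ F.T @ H.T
    assert np.max(np.linalg.eigvalsh(M)) < 0
```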

Definition 1

[18] For any \( T_{2} > T_{1} \ge 0, \) let \( N_{\sigma } (T_{1} ,T_{2} ) \) denote the switching number of \( \sigma (t) \) on an interval \( (T_{1} ,T_{2} ). \) If

$$ N_{\sigma } (T_{1} ,T_{2} ) \le N_{0} + (T_{2} - T_{1} )/\tau_{a} , $$
(6)

holds for given \( N_{0} \ge 0 \) and \( \tau_{a} > 0, \) then the constant \( \tau_{a} \) is called the average dwell time and \( N_{0} \) the chatter bound. Without loss of generality, we choose \( N_{0}=0 \) in this paper.
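A minimal sketch of checking condition (6) for a hypothetical switching sequence (the switching instants below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical switching instants on the interval (T1, T2)
switch_times = np.array([1.2, 2.9, 4.1, 6.0, 8.3])
T1, T2 = 0.0, 10.0
N0, tau_a = 0.0, 1.5   # chatter bound and candidate average dwell time

# N_sigma(T1, T2): number of switchings inside (T1, T2)
N_sigma = int(np.sum((switch_times > T1) & (switch_times < T2)))
bound = N0 + (T2 - T1) / tau_a
print(N_sigma, bound)  # 5 switchings vs bound 10/1.5 -> ADT condition holds
```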

Definition 2

[15] The switched system (1) with \( u(t) = 0 \) is said to be exponentially stable under switching signal \( \sigma (t) \) if there exist two constants \( k > 0 \) and \( \lambda > 0 \) such that

$$ \left\| {x(t)} \right\| \le ke^{{ - \lambda (t - t_{0} )}} \left\| \phi \right\|_{c} ,\quad \forall t \ge t_{0} , $$

where \( \left\| \phi \right\|_{c} = \sup_{{ - \bar{\tau } \le \theta \le 0}} \{ \left\| {x(t_{0} + \theta )} \right\|,\left\| {\dot{x}(t_{0} + \theta )} \right\|\} . \)

3 Main Results

In this section, we shall state and prove the main results.

3.1 Exponential Stability Analysis

Consider the following uncertain switched neutral neural networks (system (1) with \( u(t) = 0 \)):

$$ \begin{aligned} & \dot{x}(t) - C_{\sigma (t)} (t)\dot{x}(t - h(t)) = - A_{\sigma (t)} (t)x(t) + B_{\sigma (t)} (t)f(x(t)) + D_{\sigma (t)} (t)g(x(t - \tau (t))), \\ & x(t) = \phi (t),\quad \forall t \in [ - \bar{\tau },0], \\ \end{aligned} $$
(7)

Choose a Lyapunov–Krasovskii-like functional candidate:

$$ \begin{aligned} V_{\sigma (t)} (x(t)) & = V_{1\sigma (t)} (x(t)) + V_{2\sigma (t)} (x(t)) + V_{3\sigma (t)} (x(t)) + V_{4\sigma (t)} (x(t)) \\ & \quad + V_{5\sigma (t)} (x(t)) + V_{6\sigma (t)} (x(t)) + V_{7\sigma (t)} (x(t)), \\ \end{aligned} $$
(8)

where

$$ \begin{aligned} V_{1\sigma (t)} (x(t)) & = x^{T} (t)P_{\sigma (t)} x(t), \\ V_{2\sigma (t)} (x(t)) & = \int\limits_{{t - \tau_{1} }}^{t} {e^{\alpha (s - t)} } x^{T} (s)Q_{\sigma (t)} x(s)ds, \\ V_{3\sigma (t)} (x(t)) & = \int\limits_{{t - h_{1} }}^{t} {e^{\alpha (s - t)} } x^{T} (s)R_{\sigma (t)} x(s)ds, \\ V_{4\sigma (t)} (x(t)) & = \int\limits_{t - \tau (t)}^{t} {e^{\alpha (s - t)} } x^{T} (s)S_{\sigma (t)} x(s)ds, \\ V_{5\sigma (t)} (x(t)) & = \int\limits_{t - h(t)}^{t} {e^{\alpha (s - t)} } x^{T} (s)U_{\sigma (t)} x(s)ds, \\ V_{6\sigma (t)} (x(t)) & = \int\limits_{{ - \tau_{1} }}^{0} {\int\limits_{t + \theta }^{t} {e^{\alpha (s - t)} } } \dot{x}^{T} (s)\varLambda_{\sigma (t)} \dot{x}(s)dsd\theta , \\ V_{7\sigma (t)} (x(t)) & = \int\limits_{t - \tau (t)}^{t} {e^{\alpha (s - t)} } g^{T} (x(s))W_{\sigma (t)} g(x(s))ds. \\ \end{aligned} $$
(9)

First of all, we give the following lemma.

Lemma 3

Consider the system (7). For given constants\( \alpha > 0, \)\( \rho_{i} > 0 \), the following inequality holds:

$$ \dot{V}_{i} (x(t)) \le - \alpha V_{i} (x(t)), $$

if there exist matrices \( P_{i} > 0,\;Q_{i} > 0,\;R_{i} > 0,\;S_{i} > 0,\;U_{i} > 0,\;\varLambda_{i} > 0,\;G_{i1} > 0,\;G_{i2} > 0,\;W_{i} > 0,\;X_{i1} > 0 \) and \( X_{i2} > 0 \) such that the following matrix inequalities hold for all \( i \in {\mathbb{N}} \):

$$ \varSigma_{i} (t) = \left[ {\begin{array}{*{20}c} {\varXi_{i} (t)} & {\tau_{1} G_{i} } & {\tau_{1} X_{i} } \\ * & { - \tau_{1} e^{{\alpha \tau_{1} }} \varLambda_{i} } & 0 \\ * & * & { - \tau_{1} e^{{\alpha \tau_{1} }} \varLambda_{i} } \\ \end{array} } \right] < 0, $$
(10)

where

$$ \varXi_{i} (t) = \left[ {\begin{array}{*{20}c} {\varXi_{11}^{i} (t)} & 0 & 0 & {\varXi_{14}^{i} } & 0 & {\varXi_{16}^{i} (t)} & {\varGamma_{2} } & {\varXi_{18}^{i} (t)} & {\varXi_{19}^{i} (t)} & {\varXi_{1,10}^{i} (t)} \\ * & {\varXi_{22}^{i} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & {\varXi_{33}^{i} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & {\varXi_{44}^{i} } & {\varXi_{45}^{i} } & 0 & 0 & {\varGamma_{2} } & 0 & 0 \\ * & * & * & * & {\varXi_{55}^{i} } & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & { - I} & 0 & 0 & {\varXi_{69}^{i} (t)} & {\varXi_{6,10}^{i} (t)} \\ * & * & * & * & * & * & {\varXi_{77}^{i} } & 0 & 0 & 0 \\ * & * & * & * & * & * & * & {\varXi_{88}^{i} } & {\varXi_{89}^{i} (t)} & {\varXi_{8,10}^{i} (t)} \\ * & * & * & * & * & * & * & * & {\varXi_{99}^{i} } & {\varXi_{9,10}^{i} } \\ * & * & * & * & * & * & * & * & * & {\varXi_{10,10}^{i} (t)} \\ \end{array} } \right], $$
$$ \begin{aligned} \varXi_{11}^{i} (t) & = - P_{i} A_{i} (t) - A_{i}^{T} (t)P_{i} + \alpha P_{i} + Q_{i} + R_{i} + S_{i} + U_{i} + e^{{ - \alpha \tau_{1} }} \left( {G_{i1}^{T} + G_{i1} } \right) - L_{1} - \varGamma_{1} , \\ \varXi_{14}^{i} & = e^{{ - \alpha \tau_{1} }} \left( {G_{i2}^{T} - G_{i1} } \right),\quad \varXi_{16}^{i} (t) = P_{i} B_{i} (t) + L_{2} ,\quad \varXi_{18}^{i} (t) = P_{i} D_{i} (t), \\ \varXi_{19}^{i} (t) & = - \rho_{i} A_{i}^{T} (t)P_{i} ,\quad \varXi_{1,10}^{i} (t) = \rho_{i} A_{i}^{T} (t)P_{i} + P_{i} C_{i} (t),\quad \varXi_{22}^{i} = - (1 - h)e^{{ - \alpha h_{1} }} U_{i} , \\ \varXi_{33}^{i} & = - e^{{ - \alpha h_{1} }} R_{i} ,\;\;\varXi_{44}^{i} = - (1 - \tau )e^{{ - \alpha \tau_{1} }} S_{i} + e^{{ - \alpha \tau_{1} }} \left( { - G_{i2}^{T} - G_{i2} + X_{i1}^{T} + X_{i1} } \right) - \varGamma_{1} , \\ \varXi_{45}^{i} & = e^{{ - \alpha \tau_{1} }} \left( {X_{i2}^{T} - X_{i1} } \right),\quad \varXi_{55}^{i} = - e^{{ - \alpha \tau_{1} }} Q_{i} - e^{{ - \alpha \tau_{1} }} \left( {X_{i2}^{T} + X_{i2} } \right), \\ \varXi_{69}^{i} (t) & = \rho_{i} B_{i}^{T} (t)P_{i} ,\quad \varXi_{6,10}^{i} (t) = - \rho_{i} B_{i}^{T} (t)P_{i} ,\quad \varXi_{77}^{i} = W_{i} - I, \\ \varXi_{88}^{i} & = - (1 - \tau )e^{{ - \alpha \tau_{1} }} W_{i} - I,\quad \varXi_{89}^{i} (t) = \rho_{i} D_{i}^{T} (t)P_{i} ,\quad \varXi_{8,10}^{i} (t) = - \rho_{i} D_{i}^{T} (t)P_{i} , \\ \varXi_{99}^{i} & = \tau_{1} \varLambda_{i} -\,2\rho_{i} P_{i} ,\;\;\varXi_{9,10}^{i} = \rho_{i} P_{i} ,\quad \varXi_{10,10}^{i} (t) = -\,2\rho_{i} P_{i} C_{i} (t), \\ G_{i} & = \left[ {\begin{array}{*{20}l} {G_{i1}^{T} } \hfill & 0 \hfill & 0 \hfill & {G_{i2}^{T} } \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ \end{array} } \right]^{T} , \\ X_{i} & = \left[ {\begin{array}{*{20}l} 0 \hfill & 0 \hfill & 0 \hfill & {X_{i1}^{T} } \hfill & {X_{i2}^{T} } \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ \end{array} } \right]^{T} . \\ \end{aligned} $$

Proof

Assume \( \sigma (t_{k} ) = i \), \( \sigma (t_{k}^{ - } ) = j,\;\;i,j \in {\mathbb{N}}. \) When \( t \in [t_{k} ,t_{k + 1} ) \), we have \( \sigma (t) = i \). Along the trajectories of system (7), the time derivatives of \( V_{ki} (x(t)),\;k = 1,2, \ldots ,7, \) can be obtained:

$$ \begin{aligned} \dot{V}_{1i} (x(t)) & = 2x^{T} (t)P_{i} [C_{i} (t)\dot{x}(t - h(t)) - A_{i} (t)x(t) + B_{i} (t)f(x(t)) + D_{i} (t)g(x(t - \tau (t)))], \\ \dot{V}_{2i} (x(t)) & = x^{T} (t)Q_{i} x(t) - e^{{ - \alpha \tau_{1} }} x^{T} (t - \tau_{1} )Q_{i} x(t - \tau_{1} ) - \alpha V_{2i} (x(t)), \\ \dot{V}_{3i} (x(t)) & = x^{T} (t)R_{i} x(t) - e^{{ - \alpha h_{1} }} x^{T} (t - h_{1} )R_{i} x(t - h_{1} ) - \alpha V_{3i} (x(t)), \\ \dot{V}_{4i} (x(t)) & = x^{T} (t)S_{i} x(t) - (1 - \dot{\tau }(t))e^{ - \alpha \tau (t)} x^{T} (t - \tau (t))S_{i} x(t - \tau (t)) - \alpha V_{4i} (x(t)) \\ & \le x^{T} (t)S_{i} x(t) - (1 - \tau )e^{{ - \alpha \tau_{1} }} x^{T} (t - \tau (t))S_{i} x(t - \tau (t)) - \alpha V_{4i} (x(t)), \\ \dot{V}_{5i} (x(t)) & = x^{T} (t)U_{i} x(t) - (1 - \dot{h}(t))e^{ - \alpha h(t)} x^{T} (t - h(t))U_{i} x(t - h(t)) - \alpha V_{5i} (x(t)) \\ & \le x^{T} (t)U_{i} x(t) - (1 - h)e^{{ - \alpha h_{1} }} x^{T} (t - h(t))U_{i} x(t - h(t)) - \alpha V_{5i} (x(t)), \\ \dot{V}_{6i} (x(t)) & = - \alpha V_{6i} (x(t)) + \tau_{1} \dot{x}^{T} (t)\varLambda_{i} \dot{x}(t) - \int\limits_{{t - \tau_{1} }}^{t} {e^{\alpha (s - t)} } \dot{x}^{T} (s)\varLambda_{i} \dot{x}(s)ds \\ & \le - \alpha V_{6i} (x(t)) + \tau_{1} \dot{x}^{T} (t)\varLambda_{i} \dot{x}(t) - e^{{ - \alpha \tau_{1} }} \int\limits_{{t - \tau_{1} }}^{t} {\dot{x}^{T} } (s)\varLambda_{i} \dot{x}(s)ds, \\ \end{aligned} $$
$$ \begin{aligned} \dot{V}_{7i} (x(t)) & = - \alpha V_{7i} (x(t)) + g^{T} (x(t))W_{i} g(x(t))\; - (1 - \dot{\tau }(t))e^{ - \alpha \tau (t)}\\ & \times g^{T} (x(t - \tau (t)))W_{i} g(x(t - \tau (t))) \\ & \le - \alpha V_{7i} (x(t)) + g^{T} (x(t))W_{i} g(x(t)) \\ & \quad - (1 - \tau )e^{{ - \alpha \tau_{1} }} g^{T} (x(t - \tau (t)))W_{i} g(x(t - \tau (t))). \\ \end{aligned} $$
(11)

Let

$$ \begin{aligned} \xi (t) & = \left[ {x^{T} (t),\quad x^{T} (t - h(t)),\quad x^{T} (t - h_{1} ),\quad x^{T} (t - \tau (t)),\quad x^{T} (t - \tau_{1} ),\quad f^{T} (x(t)),} \right. \\ & \left. {\quad \quad g^{T} (x(t)), \quad g^{T} (x(t - \tau (t))),\;\;\dot{x}^{T} (t)\;,\;\dot{x}^{T} (t - h(t))} \right]^{T} . \\ \end{aligned} $$

Using the Newton–Leibniz formula, it follows that

$$ \begin{aligned} 2\xi^{T} (t)G_{i} \left[ {x(t) - x(t - \tau (t)) - \int\limits_{t - \tau (t)}^{t} {\dot{x}} (s)ds} \right] = 0, \hfill \\ 2\xi^{T} (t)X_{i} \left[ {x(t - \tau (t)) - x(t - \tau_{1} ) - \int\limits_{{t - \tau_{1} }}^{t - \tau (t)} {\dot{x}(s)} ds} \right] = 0, \hfill \\ \end{aligned} $$
(12)

where

$$ \begin{aligned} G_{i} & = \left[ {\begin{array}{*{20}l} {G_{i1}^{T} } \hfill & 0 \hfill & 0 \hfill & {G_{i2}^{T} } \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ \end{array} } \right]^{T} , \\ X_{i} & = \left[ {\begin{array}{*{20}l} 0 \hfill & 0 \hfill & 0 \hfill & {X_{i1}^{T} } \hfill & {X_{i2}^{T} } \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ \end{array} } \right]^{T} . \\ \end{aligned} $$

According to Lemma 1, one obtains

$$ \begin{aligned} -\,2\xi^{T} (t)G_{i} \int\limits_{t - \tau (t)}^{t} {\dot{x}} (s)ds & = \int\limits_{t - \tau (t)}^{t} { (-\,2)\xi^{T} (t)G_{i} \dot{x}(s)} ds \\ & \le \tau (t)\xi^{T} (t)G_{i} \varLambda_{i}^{ - 1} G_{i}^{T} \xi (t) + \int\limits_{t - \tau (t)}^{t} {\dot{x}^{T} } (s)\varLambda_{i} \dot{x}(s)ds \\ & \le \tau_{1} \xi^{T} (t)G_{i} \varLambda_{i}^{ - 1} G_{i}^{T} \xi (t) + \int\limits_{t - \tau (t)}^{t} {\dot{x}^{T} } (s)\varLambda_{i} \dot{x}(s)ds, \\ -\,2\xi^{T} (t)X_{i} \int_{{t - \tau_{1} }}^{t - \tau (t)} {\dot{x}} (s)ds & = \int\limits_{{t - \tau_{1} }}^{t - \tau (t)} { (-\,2)\xi^{T} (t)X_{i} \dot{x}(s)} ds \\ & \le (\tau_{1} - \tau (t))\xi^{T} (t)X_{i} \varLambda_{i}^{ - 1} X_{i}^{T} \xi (t) + \int\limits_{{t - \tau_{1} }}^{t - \tau (t)} {\dot{x}^{T} } (s)\varLambda_{i} \dot{x}(s)ds \\ & \le \tau_{1} \xi^{T} (t)X_{i} \varLambda_{i}^{ - 1} X_{i}^{T} \xi (t) + \int\limits_{{t - \tau_{1} }}^{t - \tau (t)} {\dot{x}^{T} } (s)\varLambda_{i} \dot{x}(s)ds. \\ \end{aligned} $$

So, it follows that

$$ \begin{aligned} \int\limits_{t - \tau (t)}^{t} {\dot{x}^{T} } (s)\varLambda_{i} \dot{x}(s)ds & \le \tau_{1} \xi^{T} (t)G_{i} \varLambda_{i}^{ - 1} G_{i}^{T} \xi (t) + 2\xi^{T} (t)G_{i} \int\limits_{t - \tau (t)}^{t} {\dot{x}} (s)ds, \\ \int\limits_{{t - \tau_{1} }}^{t - \tau (t)} {\dot{x}^{T} } (s)\varLambda_{i} \dot{x}(s)ds & \le \tau_{1} \xi^{T} (t)X_{i} \varLambda_{i}^{ - 1} X_{i}^{T} \xi (t) + 2\xi^{T} (t)X_{i} \int\limits_{{t - \tau_{1} }}^{t - \tau (t)} {\dot{x}} (s)ds. \\ \end{aligned} $$
(13)

From (13), we have

$$ \begin{aligned} - e^{{ - \alpha \tau_{1} }} \int\limits_{{t - \tau_{1} }}^{t} {\dot{x}^{T} } (s)\varLambda_{i} \dot{x}(s)ds & = - e^{{ - \alpha \tau_{1} }} \left[ {\int\limits_{t - \tau (t)}^{t} {\dot{x}^{T} } (s)\varLambda_{i} \dot{x}(s)ds + \int\limits_{{t - \tau_{1} }}^{t - \tau (t)} {\dot{x}^{T} } (s)\varLambda_{i} \dot{x}(s)ds} \right] \\ & \le e^{{ - \alpha \tau_{1} }} \left[ {2\xi^{T} (t)G_{i} \int_{t - \tau (t)}^{t} {\dot{x}} (s)ds + \tau_{1} \xi^{T} (t)G_{i} \varLambda_{i}^{ - 1} G_{i}^{T} \xi (t)} \right. \\ & \quad \left. { +\,2\xi^{T} (t)X_{i} \int_{{t - \tau_{1} }}^{t - \tau (t)} {\dot{x}} (s)ds + \tau_{1} \xi^{T} (t)X_{i} \varLambda_{i}^{ - 1} X_{i}^{T} \xi (t)} \right] \\ & = e^{{ - \alpha \tau_{1} }} \left\{ {2\xi^{T} (t)G_{i} [x(t) - x(t - \tau (t))] + 2\xi^{T} (t)X_{i} } \right. \\ & \times \left. { [x(t - \tau (t)) - x(t - \tau_{1} )]} \right. \\ & \quad \left. { +\,\tau_{1} \xi^{T} (t)G_{i} \varLambda_{i}^{ - 1} G_{i}^{T} \xi (t) + \tau_{1} \xi^{T} (t)X_{i} \varLambda_{i}^{ - 1} X_{i}^{T} \xi (t)} \right\}. \\ \end{aligned} $$
(14)

From Assumption 1, we get

$$ \begin{aligned} & (f_{i} (x_{i} (t)) - l_{i}^{ - } x_{i} (t))(l_{i}^{ + } x_{i} (t) - f_{i} (x_{i} (t))) \ge 0, \\ & (g_{i} (x_{i} (t)) - \gamma_{i}^{ - } x_{i} (t))(\gamma_{i}^{ + } x_{i} (t) - g_{i} (x_{i} (t))) \ge 0, \\ & (g_{i} (x_{i} (t - \tau (t))) - \gamma_{i}^{ - } x_{i} (t - \tau (t)))(\gamma_{i}^{ + } x_{i} (t - \tau (t))) - g_{i} (x_{i} (t - \tau (t)))) \ge 0. \\ \end{aligned} $$

So, it follows that

$$ \left[ {\begin{array}{*{20}c} {x(t)} \\ {f(x(t))} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - L_{1} } & {L_{2} } \\ * & { - I} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {x(t)} \\ {f(x(t))} \\ \end{array} } \right] \ge 0, $$
(15)
$$ \left[ {\begin{array}{*{20}c} {x(t)} \\ {g(x(t))} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - \varGamma_{1} } & {\varGamma_{2} } \\ * & { - I} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {x(t)} \\ {g(x(t))} \\ \end{array} } \right] \ge 0, $$
(16)
$$ \left[ {\begin{array}{*{20}c} {x(t - \tau (t))} \\ {g(x(t - \tau (t)))} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - \varGamma_{1} } & {\varGamma_{2} } \\ * & { - I} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {x(t - \tau (t))} \\ {g(x(t - \tau (t)))} \\ \end{array} } \right] \ge 0, $$
(17)

where \( L_{i} ,\varGamma_{i} ,\quad i = 1,2, \) are given by (4).

From (7), one obtains

$$ \begin{aligned} & 2\rho_{i} [\dot{x}^{T} (t) - \dot{x}^{T} (t - h(t))]P_{i} [ - \dot{x}(t) + C_{i} (t)\dot{x}(t - h(t)) - A_{i} (t)x(t) \\ & \quad \left. { +\,B_{i} (t)f(x(t)) + D_{i} (t)g(x(t - \tau (t)))} \right] = 0, \\ \end{aligned} $$
(18)

From (11), (14)–(18), we have

$$ \dot{V}_{i} (x(t)) + \alpha V_{i} (x(t)) \le \xi^{T} (t)\left[ {\varXi_{i} (t) + \tau_{1} e^{{ - \alpha \tau_{1} }} \left( {G_{i} \varLambda_{i}^{ - 1} G_{i}^{T} + X_{i} \varLambda_{i}^{ - 1} X_{i}^{T} } \right)} \right]\xi (t), $$
(19)

where

$$ \begin{array}{*{20}c} {\varXi_{i} (t) = \left[ {\begin{array}{*{20}c} {\varXi_{11}^{i} (t)} & 0 & 0 & {\varXi_{14}^{i} } & 0 & {\varXi_{16}^{i} (t)} & {\varGamma_{2} } & {\varXi_{18}^{i} (t)} & {\varXi_{19}^{i} (t)} & {\varXi_{1,10}^{i} (t)} \\ * & {\varXi_{22}^{i} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & {\varXi_{33}^{i} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & {\varXi_{44}^{i} } & {\varXi_{45}^{i} } & 0 & 0 & {\varGamma_{2} } & 0 & 0 \\ * & * & * & * & {\varXi_{55}^{i} } & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & { - I} & 0 & 0 & {\varXi_{69}^{i} (t)} & {\varXi_{6,10}^{i} (t)} \\ * & * & * & * & * & * & {\varXi_{77}^{i} } & 0 & 0 & 0 \\ * & * & * & * & * & * & * & {\varXi_{88}^{i} } & {\varXi_{89}^{i} (t)} & {\varXi_{8,10}^{i} (t)} \\ * & * & * & * & * & * & * & * & {\varXi_{99}^{i} } & {\varXi_{9,10}^{i} } \\ * & * & * & * & * & * & * & * & * & {\varXi_{10,10}^{i} (t)} \\ \end{array} } \right]} \\ \end{array} $$
$$ \begin{aligned} \varXi_{11}^{i} (t) & = - P_{i} A_{i} (t) - A_{i}^{T} (t)P_{i} + \alpha P_{i} + Q_{i} + R_{i} + S_{i} + U_{i} + e^{{ - \alpha \tau_{1} }} (G_{i1}^{T} + G_{i1} ) - L_{1} - \varGamma_{1} , \\ \varXi_{14}^{i} & = e^{{ - \alpha \tau_{1} }} (G_{i2}^{T} - G_{i1} ),\;\;\varXi_{16}^{i} (t) = P_{i} B_{i} (t) + L_{2} ,\;\;\varXi_{18}^{i} (t) = P_{i} D_{i} (t), \\ \varXi_{19}^{i} (t) & = - \rho_{i} A_{i}^{T} (t)P_{i} ,\;\;\varXi_{1,10}^{i} (t) = \rho_{i} A_{i}^{T} (t)P_{i} + P_{i} C_{i} (t),\;\;\varXi_{22}^{i} = - (1 - h)e^{{ - \alpha h_{1} }} U_{i} , \\ \varXi_{33}^{i} & = - e^{{ - \alpha h_{1} }} R_{i} ,\;\;\varXi_{44}^{i} = - (1 - \tau )e^{{ - \alpha \tau_{1} }} S_{i} + e^{{ - \alpha \tau_{1} }} ( - G_{i2}^{T} - G_{i2} + X_{i1}^{T} + X_{i1} ) - \varGamma_{1} , \\ \varXi_{45}^{i} & = e^{{ - \alpha \tau_{1} }} (X_{i2}^{T} - X_{i1} ),\;\;\varXi_{55}^{i} = - e^{{ - \alpha \tau_{1} }} Q_{i} - e^{{ - \alpha \tau_{1} }} (X_{i2}^{T} + X_{i2} ), \\ \varXi_{69}^{i} (t) & = \rho_{i} B_{i}^{T} (t)P_{i} ,\;\;\varXi_{6,10}^{i} (t) = - \rho_{i} B_{i}^{T} (t)P_{i} ,\;\;\varXi_{77}^{i} = W_{i} - I, \\ \end{aligned} $$
$$ \begin{aligned} \varXi_{88}^{i} & = - (1 - \tau )e^{{ - \alpha \tau_{1} }} W_{i} - I,\quad \varXi_{89}^{i} (t) = \rho_{i} D_{i}^{T} (t)P_{i} ,\quad \varXi_{8,10}^{i} (t) = - \rho_{i} D_{i}^{T} (t)P_{i} , \\ \varXi_{99}^{i} & = \tau_{1} \varLambda_{i} -\,2\rho_{i} P_{i} ,\quad \varXi_{9,10}^{i} = \rho_{i} P_{i} ,\quad \varXi_{10,10}^{i} (t) = -\,2\rho_{i} P_{i} C_{i} (t). \\ \end{aligned} $$

From (19), using the Schur complement and (10), we obtain

$$ \dot{V}_{i} (x(t)) \le - \alpha V_{i} (x(t)). $$

This completes the proof.□
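The differential inequality established by Lemma 3 implies, by a standard comparison argument, that \( V_{i} (x(t)) \le e^{ - \alpha (t - t_{k} )} V_{i} (x(t_{k} )) \) on each interval \( [t_{k} ,t_{k + 1} ) \). A forward-Euler sketch of this consequence, with an illustrative dissipation rate (not from the paper):

```python
import numpy as np

# If V'(t) <= -alpha V(t), then V(t) <= e^{-alpha (t - t_k)} V(t_k).
# Illustrative rate r(t) >= alpha; forward Euler with a small step.
alpha, dt = 0.5, 1e-3
t = np.arange(0.0, 5.0, dt)
V = np.empty_like(t)
V[0] = 2.0
for k in range(len(t) - 1):
    r = alpha + 0.2 * (1.0 + np.sin(t[k]))   # any rate >= alpha
    V[k + 1] = V[k] - dt * r * V[k]          # V' = -r V <= -alpha V

# The trajectory stays under the exponential envelope V(0) e^{-alpha t}
assert np.all(V <= V[0] * np.exp(-alpha * t) + 1e-6)
```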

Theorem 1

Under Assumption 1, for given constants\( \alpha > 0,\,\mu \ge 1 \), \( \rho_{i} > 0,\varepsilon_{i} > 0,\;i \in {\mathbb{N}} \), the system (7) is exponentially stable for any switching signal with average dwell time satisfying\( \tau_{a} > {{\left( {\ln \mu } \right)} \mathord{\left/ {\vphantom {{\left( {\ln \mu } \right)} \alpha }} \right. \kern-0pt} \alpha } \), if there exist symmetric positive definite matrices\( Y_{i} \), \( \bar{Q}_{i} \), \( \bar{R}_{i} \), \( \bar{S}_{i} \), \( \bar{U}_{i} \), \( \bar{\varLambda }_{i} \), \( W_{i} \)and any matrices\( \bar{G}_{i1} \), \( \bar{G}_{i2} \), \( \bar{X}_{i1} \)and\( \bar{X}_{i2} \)such that the following LMIs hold for all\( i,j \in {\mathbb{N}}\):

$$ \begin{array}{*{20}c} {\bar{\varSigma }_{i} = \left[ {\begin{array}{*{20}c} {\bar{\varXi }_{i} } & {\tau_{1} \bar{G}_{i} } & {\tau_{1} \bar{X}_{i} } & {(\bar{\varTheta }_{2}^{i} )^{T} } \\ * & { - \tau_{1} e^{{\alpha \tau_{1} }} \bar{\varLambda }_{i} } & 0 & 0 \\ * & * & { - \tau_{1} e^{{\alpha \tau_{1} }} \bar{\varLambda }_{i} } & 0 \\ * & * & * & { - \varepsilon_{i} I} \\ \end{array} } \right] < 0} \\ \end{array} , $$
(20)
$$ Y_{i} \le \mu Y_{j} ,\quad \bar{Q}_{i} \le \mu \bar{Q}_{j} ,\quad \bar{R}_{i} \le \mu \bar{R}_{j} ,\quad \bar{S}_{i} \le \mu \bar{S}_{j} ,\quad \bar{U}_{i} \le \mu \bar{U}_{j} ,\quad \bar{\varLambda }_{i} \le \mu \bar{\varLambda }_{j} ,\quad W_{i} \le \mu W_{j} , $$
(21)

where

$$ \begin{array}{*{20}c} {\bar{\varXi }_{i} = \left[ {\begin{array}{*{20}c} {\bar{\varXi }_{11}^{i} } & 0 & 0 & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{14}^{i} } & 0 & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{16}^{i} } & {Y_{i} \varGamma_{2} } & {D_{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{19}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{1,10}^{i} } \\ * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{22}^{i} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & { - e^{{ - \alpha h_{1} }} \bar{R}_{i} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & {\bar{\varXi }_{44}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{45}^{i} } & 0 & 0 & {Y_{i} \varGamma_{2} } & 0 & 0 \\ * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{55}^{i} } & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & { - I} & 0 & 0 & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{69}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{6,10}^{i} } \\ * & * & * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{77}^{i} } & 0 & 0 & 0 \\ * & * & * & * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{88}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{89}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{8,10}^{i} } \\ * & * & * & * & * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{99}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{9,10}^{i} } \\ * & * & * & * & * & * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{10,10}^{i} } \\ \end{array} } \right]} \\ \end{array} $$
$$ \begin{aligned} \bar{\varXi }_{11}^{i} & = - A_{i} Y_{i} - Y_{i} A_{i}^{T} + \alpha Y_{i} + \bar{Q}_{i} + \bar{R}_{i} + \bar{S}_{i} + \bar{U}_{i} + e^{{ - \alpha \tau_{1} }} (\bar{G}_{i1}^{T} + \bar{G}_{i1} ) - L_{1} Y_{i} - Y_{i} L_{1} \\ & \quad - \varGamma_{1} Y_{i} - Y_{i} \varGamma_{1} + 2I + \varepsilon_{i} H_{i} H_{i}^{T} , \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{14}^{i} & = e^{{ - \alpha \tau_{1} }} (\bar{G}_{i2}^{T} - \bar{G}_{i1} ),\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{16}^{i} = B_{i} + Y_{i} L_{2} ,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{19}^{i} = - \rho_{i} Y_{i} A_{i}^{T} + \varepsilon_{i} \rho_{i} H_{i} H_{i}^{T} , \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{1,10}^{i} & = \rho_{i} Y_{i} A_{i}^{T} + C_{i} Y_{i} - \varepsilon_{i} \rho_{i} H_{i} H_{i}^{T} ,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{22}^{i} = - (1 - h)e^{{ - \alpha h_{1} }} \bar{U}_{i} , \\ \bar{\varXi }_{44}^{i} & = - (1 - \tau )e^{{ - \alpha \tau_{1} }} \bar{S}_{i} + e^{{ - \alpha \tau_{1} }} ( - \bar{G}_{i2}^{T} - \bar{G}_{i2} + \bar{X}_{i1}^{T} + \bar{X}_{i1} ) - \varGamma_{1} Y_{i} - Y_{i} \varGamma_{1} + I, \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{45}^{i} & = e^{{ - \alpha \tau_{1} }} (\bar{X}_{i2}^{T} - \bar{X}_{i1} ),\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{55}^{i} = - e^{{ - \alpha \tau_{1} }} \bar{Q}_{i} - e^{{ - \alpha \tau_{1} }} (\bar{X}_{i2}^{T} + \bar{X}_{i2} ), \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{69}^{i} & = \rho_{i} B_{i}^{T} ,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{6,10}^{i} = - \rho_{i} B_{i}^{T} ,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{77}^{i} = W_{i} - I, \\ 
\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{88}^{i} & = - (1 - \tau )e^{{ - \alpha \tau_{1} }} W_{i} - I,\;\quad \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{89}^{i} = \rho_{i} D_{i}^{T} ,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{8,10}^{i} = - \rho_{i} D_{i}^{T} , \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{99}^{i} & = \tau_{1} \bar{\varLambda }_{i} -\,2\rho_{i} Y_{i} + \varepsilon_{i} \rho_{i}^{2} H_{i} H_{i}^{T} ,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{9,10}^{i} = \rho_{i} Y_{i} - \varepsilon_{i} \rho_{i}^{2} H_{i} H_{i}^{T} , \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{10,10}^{i} & = -\,2\rho_{i} C_{i} Y_{i} + \varepsilon_{i} \rho_{i}^{2} H_{i} H_{i}^{T} , \\ \end{aligned} $$
$$ \begin{aligned} \bar{G}_{i} & = \left[ {\begin{array}{*{20}l} {\bar{G}_{i1}^{T} } \hfill & 0 \hfill & 0 \hfill & {\bar{G}_{i2}^{T} } \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ \end{array} } \right]^{T} , \\ \bar{X}_{i} & = \left[ {\begin{array}{*{20}l} 0 \hfill & 0 \hfill & 0 \hfill & {\bar{X}_{i1}^{T} } \hfill & {\bar{X}_{i2}^{T} } \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ \end{array} } \right]^{T} , \\ \bar{\varTheta }_{2}^{i} & = \left[ {\begin{array}{*{20}l} { - M_{1}^{i} Y_{i} } \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & {M_{2}^{i} } \hfill & 0 \hfill & {M_{4}^{i} } \hfill & 0 \hfill & {M_{3}^{i} Y_{i} } \hfill \\ \end{array} } \right]. \\ \end{aligned} $$

Proof

Note that \( \varSigma_{i} (t) < 0 \) is not a standard LMI because of the parameter uncertainties; this is handled via the following approach. \( \varSigma_{i} (t) \) can be written as

$$ \varSigma_{i} (t) = \hat{\varSigma }_{i} + \Delta \hat{\varSigma }_{i} (t), $$
(22)

where

$$ \hat{\varSigma }_{i} = \left[ {\begin{array}{*{20}c} {\hat{\varXi }_{i} } & {\tau_{1} G_{i} } & {\tau_{1} X_{i} } \\ * & { - \tau_{1} e^{{\alpha \tau_{1} }} \varLambda_{i} } & 0 \\ * & * & { - \tau_{1} e^{{\alpha \tau_{1} }} \varLambda_{i} } \\ \end{array} } \right],\quad \Delta \hat{\varSigma }_{i} (t) = \left[ {\begin{array}{*{20}c} {\Delta \hat{\varXi }_{i} (t)} & 0 & 0 \\ * & 0 & 0 \\ * & * & 0 \\ \end{array} } \right], $$
$$ \begin{array}{*{20}c} {\hat{\varXi }_{i} = \left[ {\begin{array}{*{20}c} {\hat{\varXi }_{11}^{i} } & 0 & 0 & {\varXi_{14}^{i} } & 0 & {\hat{\varXi }_{16}^{i} } & {\varGamma_{2} } & {P_{i} D_{i} } & {\hat{\varXi }_{19}^{i} } & {\hat{\varXi }_{1,10}^{i} } \\ * & {\varXi_{22}^{i} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & { - e^{{ - \alpha h_{1} }} R_{i} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & {\varXi_{44}^{i} } & {\varXi_{45}^{i} } & 0 & 0 & {\varGamma_{2} } & 0 & 0 \\ * & * & * & * & {\varXi_{55}^{i} } & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & { - I} & 0 & 0 & {\hat{\varXi }_{69}^{i} } & {\hat{\varXi }_{6,10}^{i} } \\ * & * & * & * & * & * & {\varXi_{77}^{i} } & 0 & 0 & 0 \\ * & * & * & * & * & * & * & {\varXi_{88}^{i} } & {\hat{\varXi }_{89}^{i} } & {\hat{\varXi }_{8,10}^{i} } \\ * & * & * & * & * & * & * & * & {\varXi_{99}^{i} } & {\varXi_{9,10}^{i} } \\ * & * & * & * & * & * & * & * & * & {\hat{\varXi }_{10,10}^{i} } \\ \end{array} } \right]} \\ \end{array} , $$
$$ \begin{array}{*{20}c} {\Delta \hat{\varXi }_{i} (t) = \left[ {\begin{array}{*{20}c} {\Delta \hat{\varXi }_{11}^{i} (t)} & 0 & 0 & 0 & 0 & {\Delta \hat{\varXi }_{16}^{i} (t)} & 0 & {\Delta \hat{\varXi }_{18}^{i} (t)} & {\Delta \hat{\varXi }_{19}^{i} (t)} & {\Delta \hat{\varXi }_{1,10}^{i} (t)} \\ * & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & 0 & 0 & 0 & {\Delta \hat{\varXi }_{69}^{i} (t)} & {\Delta \hat{\varXi }_{6,10}^{i} (t)} \\ * & * & * & * & * & * & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & 0 & {\Delta \hat{\varXi }_{89}^{i} (t)} & {\Delta \hat{\varXi }_{8,10}^{i} (t)} \\ * & * & * & * & * & * & * & * & 0 & 0 \\ * & * & * & * & * & * & * & * & * & {\Delta \hat{\varXi }_{10,10}^{i} (t)} \\ \end{array} } \right]} \\ \end{array} , $$
$$ \begin{array}{*{20}c} \begin{aligned} \hat{\varXi }_{11}^{i} & = - P_{i} A_{i} - A_{i}^{T} P_{i} + \alpha P_{i} + Q_{i} + R_{i} + S_{i} + U_{i} + e^{{ - \alpha \tau_{1} }} (G_{i1}^{T} + G_{i1} ) - L_{1} - \varGamma_{1} , \\ \hat{\varXi }_{16}^{i} & = P_{i} B_{i} + L_{2} ,\;\quad \hat{\varXi }_{19}^{i} = - \rho_{i} A_{i}^{T} P_{i} ,\quad \hat{\varXi }_{1,10}^{i} = \rho_{i} A_{i}^{T} P_{i} + P_{i} C_{i} , \\ \;\hat{\varXi }_{69}^{i} & = \rho_{i} B_{i}^{T} P_{i} ,\quad \hat{\varXi }_{6,10}^{i} = - \rho_{i} B_{i}^{T} P_{i} ,\quad \hat{\varXi }_{89}^{i} = \rho_{i} D_{i}^{T} P_{i} , \\ \hat{\varXi }_{8,10}^{i} & = - \rho_{i} D_{i}^{T} P_{i} ,\;\;\hat{\varXi }_{10,10}^{i} = -\,2\rho_{i} P_{i} C_{i} , \\ \Delta \hat{\varXi }_{11}^{i} (t) & = - P_{i} \Delta A_{i} (t) - \Delta A_{i}^{T} (t)P_{i} ,\quad \Delta \hat{\varXi }_{16}^{i} (t) = P_{i} \Delta B_{i} (t),\quad \\ \Delta \hat{\varXi }_{18}^{i} (t) & = P_{i} \Delta D_{i} (t),\quad \Delta \hat{\varXi }_{19}^{i} (t) = - \rho_{i} \Delta A_{i}^{T} (t)P_{i} , \\ \Delta \hat{\varXi }_{1,10}^{i} (t) & = \rho_{i} \Delta A_{i}^{T} (t)P_{i} + P_{i} \Delta C_{i} (t),\quad \Delta \hat{\varXi }_{69}^{i} (t) = \rho_{i} \Delta B_{i}^{T} (t)P_{i} , \\ \Delta \hat{\varXi }_{6,10}^{i} (t) & = - \rho_{i} \Delta B_{i}^{T} (t)P_{i} ,\quad \Delta \hat{\varXi }_{89}^{i} (t) = \rho_{i} \Delta D_{i}^{T} (t)P_{i} , \\ \Delta \hat{\varXi }_{8,10}^{i} (t) & = - \rho_{i} \Delta D_{i}^{T} (t)P_{i} ,\quad \Delta \hat{\varXi }_{10,10}^{i} (t) = -\,2\rho_{i} P_{i} \Delta C_{i} (t). \\ \end{aligned} \\ \end{array} $$

The other parameters are the same as in (10). According to Assumption 1, \( \varSigma_{i} (t) \) can be rewritten as

$$ \varSigma_{i} (t) = \hat{\varSigma }_{i} + \left[ {\begin{array}{*{20}c} {\varTheta_{1}^{i} } \\ 0 \\ 0 \\ \end{array} } \right]F(t)\left[ {\begin{array}{*{20}c} {\varTheta_{2}^{i} } & 0 & 0 \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {(\varTheta_{2}^{i} )^{T} } \\ 0 \\ 0 \\ \end{array} } \right]F^{T} (t)\left[ {\begin{array}{*{20}c} {(\varTheta_{1}^{i} )^{T} } & 0 & 0 \\ \end{array} } \right], $$
(23)

where

$$ \begin{aligned} \varTheta_{1}^{i} = \left[ {\begin{array}{*{20}c} {H_{i}^{T} P_{i} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {\rho_{i} H_{i}^{T} P_{i} } & { - \rho_{i} H_{i}^{T} P_{i} } \\ \end{array} } \right]^{T} , \hfill \\ \varTheta_{2}^{i} = \left[ {\begin{array}{*{20}c} { - M_{1}^{i} } & 0 & 0 & 0 & 0 & {M_{2}^{i} } & 0 & {M_{4}^{i} } & 0 & {M_{3}^{i} } \\ \end{array} } \right]. \hfill \\ \end{aligned} $$

By Lemma 2 and \( F_{i}^{T} (t)F_{i} (t) \le I \), \( \varSigma_{i} (t) < 0 \) holds if and only if there exists a positive scalar \( \varepsilon_{i} \) such that

$$ \hat{\varSigma }_{i} + \varepsilon_{i} \left[ {\begin{array}{*{20}c} {\varTheta_{1}^{i} } \\ 0 \\ 0 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {(\varTheta_{1}^{i} )^{T} } & 0 & 0 \\ \end{array} } \right] + \varepsilon_{i}^{ - 1} \left[ {\begin{array}{*{20}c} {(\varTheta_{2}^{i} )^{T} } \\ 0 \\ 0 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\varTheta_{2}^{i} } & 0 & 0 \\ \end{array} } \right] < 0. $$
(24)
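The step from (23) to (24) is the standard Petersen-type bound used in Lemma 2: \( XFY + Y^{T} F^{T} X^{T} \le \varepsilon XX^{T} + \varepsilon^{ - 1} Y^{T} Y \) whenever \( F^{T} F \le I \). A minimal numerical illustration (the matrices below are arbitrary samples, not the paper's data):

```python
import numpy as np

# Arbitrary example data (not from the paper): X, Y, and an F with F^T F <= I.
X = np.array([[1.0, 0.2],
              [0.3, 0.5]])
Y = np.array([[0.4, -0.1],
              [0.2,  0.6]])
theta = 0.7
F = 0.8 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])  # ||F|| = 0.8, so F^T F <= I
eps = 1.3

# Petersen's bound: eps*X X^T + (1/eps)*Y^T Y - (X F Y + Y^T F^T X^T) >= 0.
gap = eps * X @ X.T + Y.T @ Y / eps - (X @ F @ Y + Y.T @ F.T @ X.T)
print(np.linalg.eigvalsh(gap).min() > 0)  # the gap is positive semidefinite
```

Here the gap equals \( (\sqrt{\varepsilon }X - \varepsilon^{-1/2} Y^{T} F^{T} )(\cdot )^{T} + \varepsilon^{ - 1} Y^{T} (I - F^{T} F)Y \ge 0 \), which is exactly why (24) dominates (23) for every admissible \( F(t) \).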

By the Schur complement, (24) is equivalent to the following inequality:

$$ \begin{array}{*{20}c} {\left[ {\begin{array}{*{20}c} {\hat{\varSigma }_{i} + \varepsilon_{i} \left[ {\begin{array}{*{20}c} {\varTheta_{1}^{i} } \\ 0 \\ 0 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {(\varTheta_{1}^{i} )^{T} } & 0 & 0 \\ \end{array} } \right]} & {\left[ {\begin{array}{*{20}c} {(\varTheta_{2}^{i} )^{T} } \\ 0 \\ 0 \\ \end{array} } \right]} \\ * & { - \varepsilon_{i} I} \\ \end{array} } \right] < 0} \\ \end{array} . $$
(25)
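The Schur complement step can be verified on a toy example (illustrative numbers only, not the paper's LMI data): the compressed form (24) is negative definite exactly when the augmented block matrix (25) is.

```python
import numpy as np

# Illustrative stand-ins for the blocks of (24)-(25).
Sigma_hat = np.array([[-3.0, 0.0],
                      [0.0, -3.0]])          # plays the role of hat-Sigma_i
Theta1 = np.array([[0.1], [0.2]])
Theta2 = np.array([[0.3, 0.1]])
eps = 1.0

# Form (24): hat-Sigma + eps*Theta1 Theta1^T + (1/eps)*Theta2^T Theta2 < 0.
compressed = Sigma_hat + eps * Theta1 @ Theta1.T + Theta2.T @ Theta2 / eps

# Form (25): augmented block matrix with -eps*I in the (2,2) corner.
augmented = np.block([[Sigma_hat + eps * Theta1 @ Theta1.T, Theta2.T],
                      [Theta2, -eps * np.eye(1)]])

# Both negativity tests agree (Schur complement equivalence).
print(np.linalg.eigvalsh(compressed).max() < 0,
      np.linalg.eigvalsh(augmented).max() < 0)
```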

Inequality (25) can be rewritten as

$$ \varOmega = \begin{array}{*{20}c} {\left[ {\begin{array}{*{20}c} {\hat{\varXi }_{i} + \varepsilon_{i} \varTheta_{1}^{i} (\varTheta_{1}^{i} )^{T} } & {\tau_{1} G_{i} } & {\tau_{1} X_{i} } & {(\varTheta_{2}^{i} )^{T} } \\ * & { - \tau_{1} e^{{\alpha \tau_{1} }} \varLambda_{i} } & 0 & 0 \\ * & * & { - \tau_{1} e^{{\alpha \tau_{1} }} \varLambda_{i} } & 0 \\ * & * & * & { - \varepsilon_{i} I} \\ \end{array} } \right] < 0} \\ \end{array} . $$
(26)

Set

$$ \begin{aligned}\varPi &= diag\left\{ {P_{i}^{ - 1} ,P_{i}^{ - 1} ,P_{i}^{ - 1} ,P_{i}^{ - 1} ,P_{i}^{ - 1} ,I,I,I,P_{i}^{ - 1} ,P_{i}^{ - 1} } \right\},\quad Y_{i} = P_{i}^{ - 1} ,\quad \bar{X}_{i2} = P_{i}^{ - 1} X_{i2} P_{i}^{ - 1} ,\\ \bar{Q}_{i} &= P_{i}^{ - 1} Q_{i} P_{i}^{ - 1}, \quad \bar{R}_{i} = P_{i}^{ - 1} R_{i} P_{i}^{ - 1} ,\\ \bar{S}_{i} &= P_{i}^{ - 1} S_{i} P_{i}^{ - 1} , \quad \bar{U}_{i} = P_{i}^{ - 1} U_{i} P_{i}^{ - 1} ,\\ \bar{\varLambda }_{i} &= P_{i}^{ - 1} \varLambda_{i} P_{i}^{ - 1} , \quad \bar{G}_{i1} = P_{i}^{ - 1} G_{i1} P_{i}^{ - 1} ,\\ \bar{G}_{i2}& = P_{i}^{ - 1} G_{i2} P_{i}^{ - 1} , \quad \bar{X}_{i1} = P_{i}^{ - 1} X_{i1} P_{i}^{ - 1} . \end{aligned}$$

Pre- and post-multiplying the left-hand side of (26) by \( \varPhi = diag\{ \varPi ,P_{i}^{ - 1} ,P_{i}^{ - 1} ,I\} \) yields the following matrix inequality:

$$ \varPhi \varOmega \varPhi = \begin{array}{*{20}c} {\left[ {\begin{array}{*{20}c} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{i} } & {\tau_{1} \varPi G_{i} P_{i}^{ - 1} } & {\tau_{1} \varPi X_{i} P_{i}^{ - 1} } & {\varPi (\varTheta_{2}^{i} )^{T} } \\ * & { - \tau_{1} P_{i}^{ - 1} e^{{\alpha \tau_{1} }} \varLambda_{i} P_{i}^{ - 1} } & 0 & 0 \\ * & * & { - \tau_{1} P_{i}^{ - 1} e^{{\alpha \tau_{1} }} \varLambda_{i} P_{i}^{ - 1} } & 0 \\ * & * & * & { - \varepsilon_{i} I} \\ \end{array} } \right] < 0} \\ \end{array} $$

where

$$ \begin{array}{*{20}c} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{i} = \left[ {\begin{array}{*{20}c} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{11}^{i} } & 0 & 0 & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{14}^{i} } & 0 & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{16}^{i} } & {Y_{i} \varGamma_{2} } & {D_{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{19}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{1,10}^{i} } \\ * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{22}^{i} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & { - e^{{ - \alpha h_{1} }} P_{i}^{ - 1} R_{i} P_{i}^{ - 1} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{44}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{45}^{i} } & 0 & 0 & {Y_{i} \varGamma_{2} } & 0 & 0 \\ * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{55}^{i} } & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & { - I} & 0 & 0 & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{69}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{6,10}^{i} } \\ * & * & * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{77}^{i} } & 0 & 0 & 0 \\ * & * & * & * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{88}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{89}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{8,10}^{i} } \\ * & * & * & * & * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{99}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi 
}_{9,10}^{i} } \\ * & * & * & * & * & * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{10,10}^{i} } \\ \end{array} } \right]} \\ \end{array} , $$
$$ \begin{aligned} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{11}^{i} & = - A_{i} Y_{i} - Y_{i} A_{i}^{T} + \alpha Y_{i} + \bar{Q}_{i} + \bar{R}_{i} + \bar{S}_{i} + \bar{U}_{i} + e^{{ - \alpha \tau_{1} }} (\bar{G}_{i1}^{T} + \bar{G}_{i1} ) \\ & \quad - Y_{i} L_{1} Y_{i} - Y_{i} \varGamma_{1} Y_{i} + \varepsilon_{i} H_{i} H_{i}^{T} , \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{14}^{i} & = e^{{ - \alpha \tau_{1} }} (\bar{G}_{i2}^{T} - \bar{G}_{i1} ),\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{16}^{i} = B_{i} + Y_{i} L_{2} ,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{19}^{i} = - \rho_{i} Y_{i} A_{i}^{T} + \varepsilon_{i} \rho_{i} H_{i} H_{i}^{T} , \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{1,10}^{i} & = \rho_{i} Y_{i} A_{i}^{T} + C_{i} Y_{i} - \varepsilon_{i} \rho_{i} H_{i} H_{i}^{T} ,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{22}^{i} = - (1 - h)e^{{ - \alpha h_{1} }} \bar{U}_{i} , \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{44}^{i} & = - (1 - \tau )e^{{ - \alpha \tau_{1} }} \bar{S}_{i} + e^{{ - \alpha \tau_{1} }} ( - \bar{G}_{i2}^{T} - \bar{G}_{i2} + \bar{X}_{i1}^{T} + \bar{X}_{i1} ) - Y_{i} \varGamma_{1} Y_{i} , \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{45}^{i} & = e^{{ - \alpha \tau_{1} }} (\bar{X}_{i2}^{T} - \bar{X}_{i1} ),\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{55}^{i} = - e^{{ - \alpha \tau_{1} }} \bar{Q}_{i} - e^{{ - \alpha \tau_{1} }} (\bar{X}_{i2}^{T} + \bar{X}_{i2} ), \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{69}^{i} & = \rho_{i} B_{i}^{T} ,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{6,10}^{i} = - \rho_{i} B_{i}^{T} ,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{77}^{i} = W_{i} - I, \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{88}^{i} & = - (1 - \tau )e^{{ - \alpha \tau_{1} }} W_{i} - I,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{89}^{i} = \rho_{i} D_{i}^{T} ,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{8,10}^{i} = - \rho_{i} D_{i}^{T} , \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{99}^{i} & = \tau_{1} \bar{\varLambda }_{i} - 2\rho_{i} Y_{i} + \varepsilon_{i} \rho_{i}^{2} H_{i} H_{i}^{T} ,\;\;\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{9,10}^{i} = \rho_{i} Y_{i} - \varepsilon_{i} \rho_{i}^{2} H_{i} H_{i}^{T} , \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{10,10}^{i} & = - 2\rho_{i} C_{i} Y_{i} + \varepsilon_{i} \rho_{i}^{2} H_{i} H_{i}^{T} , \\ \bar{G}_{i} & = [\bar{G}_{i1}^{T} \quad 0\quad 0\quad \bar{G}_{i2}^{T} \quad 0\quad 0\quad 0\quad 0\quad 0\quad 0]^{T} , \\ \bar{X}_{i} & = [0\quad 0\quad 0\quad \bar{X}_{i1}^{T} \quad \bar{X}_{i2}^{T} \quad 0\quad 0\quad 0\quad 0\quad 0]^{T} , \\ \bar{\varTheta }_{2}^{i} & = \left[ {\begin{array}{*{20}c} { - M_{1}^{i} Y_{i} } & 0 & 0 & 0 & 0 & {M_{2}^{i} } & 0 & {M_{4}^{i} } & 0 & {M_{3}^{i} Y_{i} } \\ \end{array} } \right]. \\ \end{aligned} $$

From \( L_{1} \ge 0 \), we obtain

$$(P_{i}^{ - 1} - I)L_{1} (P_{i}^{ - 1} - I) \ge 0.$$

So, noting that \( L_{1} \le I \), we have

$$ - P_{i}^{ - 1} L_{1} P_{i}^{ - 1} \le - P_{i}^{ - 1} L_{1} - \, L_{1} P_{i}^{ - 1} + I, $$

that is

$$ - Y_{i} L_{1} Y_{i} \le - Y_{i} L_{1} - L_{1} Y_{i} + I. $$
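The bound \( - Y_{i} L_{1} Y_{i} \le - Y_{i} L_{1} - L_{1} Y_{i} + I \) (valid when \( L_{1} \le I \), as for the sector bounds in the examples below) can be checked numerically; \( Y \) and \( L_{1} \) here are arbitrary sample matrices, not the paper's data:

```python
import numpy as np

# Sample data: Y = P^{-1} symmetric positive definite, and L1 with 0 <= L1 <= I.
Y = np.array([[2.0, 0.5],
              [0.5, 1.0]])
L1 = np.diag([0.5, 0.8])

# Gap of the bound: (-Y L1 - L1 Y + I) - (-Y L1 Y)
#                 = (Y - I) L1 (Y - I) + (I - L1)  >=  0.
gap = (-Y @ L1 - L1 @ Y + np.eye(2)) - (-Y @ L1 @ Y)
print(np.linalg.eigvalsh(gap).min() > 0)
```

The decomposition in the comment shows why both hypotheses are used: \( (Y - I)L_{1} (Y - I) \ge 0 \) needs \( L_{1} \ge 0 \), and \( I - L_{1} \ge 0 \) needs \( L_{1} \le I \).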

From (20), we have \( \varOmega < 0 \), and hence

$$\varSigma_{i} (t) < 0.$$

From Lemma 3, we have

$$ \dot{V}_{i} (x(t)) \le - \alpha V_{i} (x(t)). $$
(27)

From (27), it follows that

$$ V_{{\sigma (t_{k} )}} (x(t)) \le e^{{ - \alpha (t - t_{k} )}} V_{{\sigma (t_{k} )}} (x(t_{k} )),\quad t \in [t_{k} ,t_{k + 1} ). $$

Combining this with \( V_{{\sigma (t_{k} )}} (x(t_{k} )) \le \mu V_{{\sigma (t_{k - 1} )}} (x(t_{k} )) \) at the switching instants and iterating, we have

$$ \begin{aligned} V_{{\sigma (t_{k} )}} (x(t)) & \le e^{{ - \alpha (t - t_{k} )}} V_{{\sigma (t_{k} )}} (x(t_{k} )) \\ & \le e^{{ - \alpha (t - t_{k} )}} \mu V_{{\sigma (t_{k - 1} )}} (x(t_{k} )) \\ & \le \cdots \\ & \le \mu^{{N_{\sigma } (t_{0} ,t)}} e^{{ - \alpha (t - t_{0} )}} V_{{\sigma (t_{0} )}} (x(t_{0} )) \\ & \le \mu^{{\frac{{t - t_{0} }}{{T_{a} }}}} e^{{ - \alpha (t - t_{0} )}} V_{{\sigma (t_{0} )}} (t_{0} ) \\ & \le e^{{ - (\alpha - \frac{ln\mu }{{T_{a} }})(t - t_{0} )}} V_{{\sigma (t_{0} )}} (t_{0} ). \\ \end{aligned} $$
(28)
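The chain (28) can be illustrated numerically. The sketch below (illustrative values; periodic switching with period \( T_{a} \) is an assumption for the illustration, not part of the theorem) lets \( V \) decay at rate \( \alpha \) between switches, jump by a factor \( \mu \) at each switch, and checks the bound in the last line of (28) at every switching instant:

```python
import math

alpha, mu = 0.5, 2.5
Ta = 2.0                                    # dwell time, larger than ln(mu)/alpha ~ 1.8326
assert Ta > math.log(mu) / alpha

V0, t0 = 1.0, 0.0
V = V0
for k in range(1, 11):
    t = k * Ta
    V = V * math.exp(-alpha * Ta) * mu      # decay over one interval, then a mu-jump
    bound = math.exp(-(alpha - math.log(mu) / Ta) * (t - t0)) * V0
    assert V <= bound * (1 + 1e-9)          # estimate (28) holds at every switch
print(V < V0)  # net decay despite the mu-jumps
```

For periodic switching with \( t_{k} = kT_{a} \) the bound is attained with equality, which is why (28) is tight exactly at the average-dwell-time threshold.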

From (8), it is easy to see that there exist positive scalars a, b such that

$$ \begin{aligned} a\left\| {x(t)} \right\|^{2} \le V_{\sigma (t)} (x(t)), \hfill \\ V_{{\sigma (t_{0} )}} (t_{0} ) \le b\left\| \phi \right\|_{c}^{2} , \hfill \\ \end{aligned} $$
(29)

where

$$ \begin{aligned} a & = \mathop {\hbox{min} }\limits_{{i \in {\mathbb{N}}}} \left\{ {\lambda_{\hbox{min} } (P_{i} )} \right\}, \\ b & = \mathop {\hbox{max} }\limits_{{i \in {\mathbb{N}}}} \left\{ {\lambda_{\hbox{max} } (P_{i} )} \right\} + \tau_{1} \mathop {\hbox{max} }\limits_{{i \in {\mathbb{N}}}} \left\{ {\lambda_{\hbox{max} } (Q_{i} )} \right\} + h_{1} \mathop {\hbox{max} }\limits_{{i \in {\mathbb{N}}}} \left\{ {\lambda_{\hbox{max} } (R_{i} )} \right\}\\ &\quad + \tau_{1} \mathop {\hbox{max} }\limits_{{i \in {\mathbb{N}}}} \left\{ {\lambda_{\hbox{max} } (S_{i} )} \right\} + h_{1} \mathop {\hbox{max} }\limits_{{i \in {\mathbb{N}}}} \left\{ {\lambda_{\hbox{max} } (U_{i} )} \right\} \\ & \quad + (\tau_{1}^{2} /2)\mathop {\hbox{max} }\limits_{{i \in {\mathbb{N}}}} \left\{ {\lambda_{\hbox{max} } (\varLambda_{i} )} \right\} + \frac{4}{3}\tau_{1} \mathop {\hbox{max} }\limits_{{i \in {\mathbb{N}}}} \left( {(\gamma_{i}^{ + } + \gamma_{i}^{ - } )^{2} - \gamma_{i}^{ + } \gamma_{i}^{ - } } \right)\mathop {\hbox{max} }\limits_{{i \in {\mathbb{N}}}} \left\{ {\lambda_{\hbox{max} } (W_{i} )} \right\}. \\ \end{aligned} $$

From (27)–(29), we obtain

$$ \left\| {x(t)} \right\| \le \sqrt {\frac{b}{a}} e^{{ - \frac{1}{2}(\alpha - \frac{ln\mu }{{T_{a} }})(t - t_{0} )}} \left\| \phi \right\|_{c} . $$
(30)

This completes the proof.□

3.2 Exponential Stabilization of Uncertain Neutral Neural Networks

For switched neutral neural networks (1), we consider the following state feedback controller

$$ u(t) = K_{\sigma (t)} x(t). $$
(31)

Under the controller (31), the corresponding closed-loop system is given by

$$ \begin{aligned} & \dot{x}(t) - C_{\sigma (t)} (t)\dot{x}(t - h(t)) = - (A_{\sigma (t)} (t) - E_{\sigma (t)} (t)K_{\sigma (t)} )x(t) + B_{\sigma (t)} (t)f(x(t)) \\ & \quad \quad \quad \quad + D_{\sigma (t)} (t)g(x(t - \tau (t))), \\ & x(t) = \varphi (t),\quad \quad \forall t \in [ - \bar{\tau },0]. \\ \end{aligned} $$
(32)

Theorem 2

Under Assumption 1, for given constants \( \alpha > 0 \), \( \mu \ge 1 \), \( \rho_{i} > 0 \), \( \varepsilon_{i} > 0 \), \( i \in {\mathbb{N}} \), the switched neutral neural network (1) is exponentially stabilizable under the feedback controller (31) for any switching signal with average dwell time satisfying \( T_{a} > \left( {\ln \mu } \right)/\alpha \), if there exist symmetric positive definite matrices \( Y_{i} ,\;\bar{Q}_{i} ,\bar{R}_{i} ,\bar{S}_{i} ,\bar{U}_{i} ,\bar{\varLambda }_{i} ,W_{i} \) and any matrices \( Z_{i} ,\bar{G}_{i1} ,\bar{G}_{i2} ,\bar{X}_{i1} \) and \( \bar{X}_{i2} \) such that the following LMIs hold for all \( i,j \in {\mathbb{N}},i \ne j \):

$$ \tilde{\varSigma }_{i} = \left[ {\begin{array}{*{20}c} {\tilde{\varXi }_{i} } & {\tau_{1} \bar{G}_{i} } & {\tau_{1} \bar{X}_{i} } & {(\tilde{\varTheta }_{2}^{i} )^{T} } \\ * & { - \tau_{1} e^{{\alpha \tau_{1} }} \bar{\varLambda }_{i} } & 0 & 0 \\ * & * & { - \tau_{1} e^{{\alpha \tau_{1} }} \bar{\varLambda }_{i} } & 0 \\ * & * & * & { - \varepsilon_{i} I} \\ \end{array} } \right] < 0, $$
(33)
$$ Y_{i} \le \mu Y_{j} ,\bar{Q}_{i} \le \mu \bar{Q}_{j} ,\bar{R}_{i} \le \mu \bar{R}_{j} ,\bar{S}_{i} \le \mu \bar{S}_{j} ,\bar{U}_{i} \le \mu \bar{U}_{j} ,\bar{\varLambda }_{i} \le \mu \bar{\varLambda }_{j} ,W_{i} \le \mu W_{j} , $$
(34)

where

$$ \begin{array}{*{20}c} {\tilde{\varXi }_{i} = \left[ {\begin{array}{*{20}c} {\tilde{\varXi }_{11}^{i} } & 0 & 0 & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{14}^{i} } & 0 & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{16}^{i} } & {Y_{i} \varGamma_{2} } & {D_{i} } & {\tilde{\varXi }_{19}^{i} } & {\tilde{\varXi }_{1,10}^{i} } \\ * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{22}^{i} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & { - e^{{ - \alpha h_{1} }} \bar{R}_{i} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & {\bar{\varXi }_{44}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{45}^{i} } & 0 & 0 & {Y_{i} \varGamma_{2} } & 0 & 0 \\ * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{55}^{i} } & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & { - I} & 0 & 0 & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{69}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{6,10}^{i} } \\ * & * & * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{77}^{i} } & 0 & 0 & 0 \\ * & * & * & * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{88}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{89}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{8,10}^{i} } \\ * & * & * & * & * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{99}^{i} } & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{9,10}^{i} } \\ * & * & * & * & * & * & * & * & * & {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\varXi }_{10,10}^{i} } \\ \end{array} } \right]} \\ \end{array} , $$
(35)
$$ \begin{aligned} \tilde{\varXi }_{11}^{i} & = - A_{i} Y_{i} - Y_{i} A_{i}^{T} + \alpha Y_{i} + \bar{Q}_{i} + \bar{R}_{i} + \bar{S}_{i} + \bar{U}_{i} + e^{{ - \alpha \tau_{1} }} (\bar{G}_{i1}^{T} + \bar{G}_{i1} ) - L_{1} Y_{i} - Y_{i} L_{1} \\ & \quad - \varGamma_{1} Y_{i} - Y_{i} \varGamma_{1} + 2I + \varepsilon_{i} H_{i} H_{i}^{T} + E_{i} Z_{i} + Z_{i}^{T} E_{i}^{T} , \\ \tilde{\varXi }_{19}^{i} & = - \rho_{i} Y_{i} A_{i}^{T} + \varepsilon_{i} \rho_{i} H_{i} H_{i}^{T} + \rho_{i} Z_{i}^{T} E_{i}^{T} , \\ \tilde{\varXi }_{1,10}^{i} & = \rho_{i} Y_{i} A_{i}^{T} + C_{i} Y_{i} - \varepsilon_{i} \rho_{i} H_{i} H_{i}^{T} - \rho_{i} Z_{i}^{T} E_{i}^{T} , \\ \bar{\varXi }_{44}^{i} & = - (1 - \tau )e^{{ - \alpha \tau_{1} }} \bar{S}_{i} + e^{{ - \alpha \tau_{1} }} ( - \bar{G}_{i2}^{T} - \bar{G}_{i2} + \bar{X}_{i1}^{T} + \bar{X}_{i1} ) - \varGamma_{1} Y_{i} - Y_{i} \varGamma_{1} + I, \\ \tilde{\varTheta }_{2}^{i} & = \left[ {\begin{array}{*{20}c} { - M_{1}^{i} Y_{i} + M_{5}^{i} Z_{i} } & 0 & 0 & 0 & 0 & {M_{2}^{i} } & 0 & {M_{4}^{i} } & 0 & {M_{3}^{i} Y_{i} } \\ \end{array} } \right]. \\ \end{aligned} $$

and the other elements in (33) and (35) are given by (20). Moreover, the controller gains are constructed by

$$ K_{i} = Z_{i} Y_{i}^{ - 1} ,\quad i \in {\mathbb{N}}. $$

Proof

Consider the system (32). Applying Theorem 1 with \( A_{i} \left( t \right) \) replaced by \( A_{i} \left( t \right) - E_{i} \left( t \right)K_{i} \) and noting that \( K_{i} = Z_{i} Y_{i}^{ - 1} \), we obtain (33). This completes the proof.□

4 Numerical Examples

In this section, two examples are given to illustrate the effectiveness of the proposed approach.

Example 1

Consider uncertain switched neutral neural networks (1) composed of two subsystems with the following parameters:

$$ \begin{aligned} & A_{1} = \left[ {\begin{array}{*{20}c} { 0.6} & 0 \\ 0 & { 0.2} \\ \end{array} } \right],\quad A_{2} = \left[ {\begin{array}{*{20}c} { 0.3} & 0 \\ 0 & { 0.3} \\ \end{array} } \right],\quad B_{1} = \left[ {\begin{array}{*{20}c} {1.4} & 2 \\ {0.3} & {0.6} \\ \end{array} } \right], \\ & B_{2} = \left[ {\begin{array}{*{20}c} {0.5} & { -\,0.7} \\ {0.6} & {0.2} \\ \end{array} } \right],\quad C_{1} = \left[ {\begin{array}{*{20}c} { - 1.4} & 0 \\ 0 & { -\, 0.4} \\ \end{array} } \right],\quad C_{2} = \left[ {\begin{array}{*{20}c} {2.3} & 0 \\ 0 & {0.4} \\ \end{array} } \right], \\ \end{aligned} $$
$$ \begin{aligned} D_{1} = \left[ {\begin{array}{*{20}c} {1.6} & 0 \\ 0 & { -\,4.5} \\ \end{array} } \right],\quad D_{2} = \left[ {\begin{array}{*{20}c} {5.4} & 0 \\ 0 & { -\,3.4} \\ \end{array} } \right],\quad E_{1} = \left[ {\begin{array}{*{20}c} {6.7} & 0 \\ 0 & {3.9} \\ \end{array} } \right], \hfill \\ E_{2} = \left[ {\begin{array}{*{20}c} { -\,4.8} & 0 \\ 0 & {6.3} \\ \end{array} } \right],\quad H_{1} = \left[ {\begin{array}{*{20}c} 3 & 0 \\ 0 & 2 \\ \end{array} } \right],H_{2} = \left[ {\begin{array}{*{20}c} 3 & 0 \\ 0 & 2 \\ \end{array} } \right],\quad M_{ 1}^{ 1} = \left[ {\begin{array}{*{20}c} {0.01} & 0 \\ 0 & { -\,0.02} \\ \end{array} } \right], \hfill \\ \end{aligned} $$
$$ \begin{aligned} M_{1}^{2} & = \left[ {\begin{array}{*{20}c} {0.02} & 0 \\ 0 & { -\,0.03} \\ \end{array} } \right],\quad M_{2}^{1} = \left[ {\begin{array}{*{20}c} {0.01} & 0 \\ 0 & { -\,0.03} \\ \end{array} } \right],\quad M_{2}^{2} = \left[ {\begin{array}{*{20}c} {0.02} & 0 \\ 0 & { -\,0.07} \\ \end{array} } \right], \\ M_{3}^{1} & = \left[ {\begin{array}{*{20}c} { -\,0.02} & 0 \\ 0 & { -\,0.01} \\ \end{array} } \right],\quad M_{3}^{2} = \left[ {\begin{array}{*{20}c} { -\,0.04} & 0 \\ 0 & { -\,0.05} \\ \end{array} } \right],\quad M_{4}^{1} = \left[ {\begin{array}{*{20}c} {0.04} & 0 \\ 0 & {0.03} \\ \end{array} } \right], \\ M_{4}^{2} & = \left[ {\begin{array}{*{20}c} {0.02} & 0 \\ 0 & {0.03} \\ \end{array} } \right],\quad M_{5}^{1} = \left[ {\begin{array}{*{20}c} { -\,0.01} & 0 \\ 0 & { -\,0.07} \\ \end{array} } \right],\quad M_{5}^{2} = \left[ {\begin{array}{*{20}c} { -\,0.02} & 0 \\ 0 & { -\,0.06} \\ \end{array} } \right], \\ h(t) & = 0.3 + 0.3\sin (t),\quad \tau (t) = 0.2 + 0.1\cos (t). \\ \end{aligned} $$
$$ \begin{aligned} & f(x(t)) = [\tanh (0.08x_{1} (t)),\;\tanh (0.08x_{2} (t))]^{T} , \\ & g(x(t - \tau (t))) = [\tanh (0.12x_{1} (t - \tau (t))),\;\tanh (0.12x_{2} (t - \tau (t)))]^{T} . \\ \end{aligned} $$

Obviously, we have

$$ \begin{aligned} & L_{2} = diag\{ 0.04,\;0.04\} ,\quad L_{1} = diag\{ 0,\;0\} ,\quad \varGamma_{2} = diag\{ 0.06,\;0.06\} , \\ & \varGamma_{1} = diag\{ 0,\;0\} ,\quad h_{1} = 0.6,\quad \tau_{1} = 0.3,\quad \tau = 0.1,\quad h = 0.3. \\ \end{aligned} $$

We take \( \rho_{1} = 0.15,\;\rho_{2} = 0.24,\;\varepsilon_{1} = 1.3,\;\varepsilon_{2} = 1.5,\;\mu = 2.5,\;\alpha = 0.5. \)

By solving (33) and (34) with the MATLAB LMI Toolbox, the following feasible solutions are obtained:

$$ \begin{aligned} Y_{i} & = \left[ {\begin{array}{*{20}c} {1.8675} & { -\,0.0120} \\ { -\,0.0120} & {4.8814} \\ \end{array} } \right],\quad \bar{Q}_{i} = \left[ {\begin{array}{*{20}c} {1.7339} & {0.0030} \\ {0.0030} & {4.1020} \\ \end{array} } \right],\quad \bar{R}_{i} = \left[ {\begin{array}{*{20}c} {2.2873} & {0.0073} \\ { -\,0.0073} & {5.3746} \\ \end{array} } \right], \\ \bar{S}_{i} & = \left[ {\begin{array}{*{20}c} {1.7455} & {0.0028} \\ {0.0028} & {4.0158} \\ \end{array} } \right],\quad \bar{U}_{i} = \left[ {\begin{array}{*{20}c} {1.8994} & {0.0073} \\ {0.0073} & {4.9766} \\ \end{array} } \right],\quad \bar{\varLambda }_{i} = \left[ {\begin{array}{*{20}c} {0.9085} & { -\,0.0178} \\ { -\,0.0178} & {4.3986} \\ \end{array} } \right],\quad i = 1,2, \\ W & = \left[ {\begin{array}{*{20}c} {1.1534} & { -\,0.0037} \\ { -\,0.0037} & {0.6655} \\ \end{array} } \right],\quad \bar{G}_{11} = \left[ {\begin{array}{*{20}c} { -\,10.6204} & {0.0297} \\ {0.0297} & { -\,17.7516} \\ \end{array} } \right],\quad \bar{G}_{12} = \left[ {\begin{array}{*{20}c} {12.7180} & { -\,0.0400} \\ { -\,0.0400} & {20.2326} \\ \end{array} } \right], \\ \bar{G}_{21} & = \left[ {\begin{array}{*{20}c} { -\,13.1088} & {0.0383} \\ {0.0383} & { -\,21.3155} \\ \end{array} } \right],\quad \bar{G}_{22} = \left[ {\begin{array}{*{20}c} {12.7376} & { -\,0.0393} \\ { -\,0.0393} & {20.1208} \\ \end{array} } \right],\quad \bar{X}_{11} = \left[ {\begin{array}{*{20}c} { -\,12.2364} & {0.0349} \\ {0.0349} & { -\,19.1660} \\ \end{array} } \right], \\ \end{aligned} $$
$$ \begin{aligned} & \bar{X}_{12} = \left[ {\begin{array}{*{20}c} {12.7747} & { -\,0.0400} \\ { -\,0.0400} & {20.4491} \\ \end{array} } \right],\quad \bar{X}_{21} = \left[ {\begin{array}{*{20}c} { -\,12.9809} & {0.0400} \\ {0.0400} & { -\,20.9831} \\ \end{array} } \right],\quad \bar{X}_{22} = \left[ {\begin{array}{*{20}c} {12.7675} & { -\,0.0398} \\ { -\,0.0398} & {20.4090} \\ \end{array} } \right], \\ & Z_{1} = \left[ {\begin{array}{*{20}c} { -\,4.8428} & { -\,0.0410} \\ { -\,0.0410} & { -\,10.0260} \\ \end{array} } \right],\quad Z_{2} = \left[ {\begin{array}{*{20}c} {3.6953} & { -\,0.0183} \\ { -\,0.0183} & { -\,3.6446} \\ \end{array} } \right]. \\ \end{aligned} $$

Then the controller gains are given by

$$ K_{1} = \left[ {\begin{array}{*{20}c} { -\,2.5934} & { -\,0.0148} \\ { -\,0.0352} & { -\,2.0540} \\ \end{array} } \right],\quad K_{2} = \left[ {\begin{array}{*{20}c} {1.9788} & {0.0011} \\ { -\,0.0146} & { -\,0.7467} \\ \end{array} } \right]. $$

According to Theorem 2, the switched neutral neural network (1) is exponentially stabilizable under the feedback controller (31) for any switching signal with average dwell time satisfying \( T_{a} > (\ln \mu )/\alpha = 1.8326 \).
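As a quick sanity check on the printed solution (the values are rounded to four decimals, hence the loose tolerance; note \( Y_{1} = Y_{2} \) in the reported solution), the gains indeed satisfy \( K_{i} = Z_{i} Y_{i}^{ - 1} \) and the dwell-time threshold equals \( \ln \mu /\alpha = \ln 2.5/0.5 \approx 1.8326 \):

```python
import math
import numpy as np

# Values as printed above (rounded to four decimals).
Y = np.array([[1.8675, -0.0120], [-0.0120, 4.8814]])
Z1 = np.array([[-4.8428, -0.0410], [-0.0410, -10.0260]])
Z2 = np.array([[3.6953, -0.0183], [-0.0183, -3.6446]])
K1 = np.array([[-2.5934, -0.0148], [-0.0352, -2.0540]])
K2 = np.array([[1.9788, 0.0011], [-0.0146, -0.7467]])

# Controller gains from Theorem 2: K_i = Z_i * Y_i^{-1}.
print(np.allclose(Z1 @ np.linalg.inv(Y), K1, atol=1e-3))
print(np.allclose(Z2 @ np.linalg.inv(Y), K2, atol=1e-3))

# Average dwell time threshold with mu = 2.5, alpha = 0.5.
print(round(math.log(2.5) / 0.5, 4))
```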

Figure 1 shows the switching signal and the state trajectory of the closed-loop switched neutral neural network. From Fig. 1, it is easy to see that system (1) is exponentially stabilized by the feedback controller (31).

Fig. 1

State trajectory of the closed-loop switched neutral neural networks under switching signal \( \sigma (t) \)

Example 2

Consider uncertain switched neutral neural networks (1) composed of two subsystems with the following parameters:

$$ \begin{aligned} & A_{1} = \left[ {\begin{array}{*{20}c} {0.6} & 0 \\ 0 & {0.6} \\ \end{array} } \right],\quad A_{2} = \left[ {\begin{array}{*{20}c} {1.3} & 0 \\ 0 & {1.1} \\ \end{array} } \right],\quad B_{1} = \left[ {\begin{array}{*{20}c} {0.4} & {0.2} \\ {0.3} & {0.6} \\ \end{array} } \right],\quad B_{2} = \left[ {\begin{array}{*{20}c} {0.5} & { -\,0.1} \\ {0.3} & {0.2} \\ \end{array} } \right], \\ & C_{1} = C_{2} = \left[ {\begin{array}{*{20}c} 0 & 0 \\ 0 & 0 \\ \end{array} } \right],\quad D_{1} = \left[ {\begin{array}{*{20}c} 2 & 0 \\ 0 & { -\,4} \\ \end{array} } \right],\quad D_{2} = \left[ {\begin{array}{*{20}c} 5 & 0 \\ 0 & { -\,3} \\ \end{array} } \right],\quad E_{1} = \left[ {\begin{array}{*{20}c} 4 & 0 \\ 0 & 3 \\ \end{array} } \right], \\ & E_{2} = \left[ {\begin{array}{*{20}c} { -\,4.2} & 0 \\ 0 & {5.3} \\ \end{array} } \right],\quad H_{1} = \left[ {\begin{array}{*{20}c} 3 & 0 \\ 0 & 2 \\ \end{array} } \right],\quad H_{2} = \left[ {\begin{array}{*{20}c} 3 & 0 \\ 0 & 2 \\ \end{array} } \right],\quad M_{1}^{1} = \left[ {\begin{array}{*{20}c} {0.01} & 0 \\ 0 & { -\,0.02} \\ \end{array} } \right], \\ \end{aligned} $$
$$ \begin{aligned} & M_{1}^{2} = \left[ {\begin{array}{*{20}c} {0.02} & 0 \\ 0 & { -\,0.03} \\ \end{array} } \right],\quad M_{2}^{1} = \left[ {\begin{array}{*{20}c} {0.01} & 0 \\ 0 & { -\,0.03} \\ \end{array} } \right],\quad M_{2}^{2} = \left[ {\begin{array}{*{20}c} {0.02} & 0 \\ 0 & { -\,0.04} \\ \end{array} } \right], \\ & M_{3}^{1} = M_{3}^{2} = \left[ {\begin{array}{*{20}c} 0 & 0 \\ 0 & 0 \\ \end{array} } \right],\quad M_{4}^{1} = \left[ {\begin{array}{*{20}c} {0.04} & 0 \\ 0 & {0.03} \\ \end{array} } \right],\quad M_{4}^{2} = \left[ {\begin{array}{*{20}c} {0.02} & 0 \\ 0 & {0.03} \\ \end{array} } \right], \\ & M_{5}^{1} = \left[ {\begin{array}{*{20}c} { -\,0.01} & 0 \\ 0 & { -\,0.03} \\ \end{array} } \right],\quad M_{5}^{2} = \left[ {\begin{array}{*{20}c} { -\,0.02} & 0 \\ 0 & { -\,0.05} \\ \end{array} } \right], \\ & h(t) = 0.2 + 0.2\sin (t),\quad \tau (t) = 0.1 + 0.1\cos (t). \\ \end{aligned} $$
$$ \begin{aligned} & f(x(t)) = [tanh(x_{1} (t)),\;tanh(x_{2} (t))]^{T} , \\ & g(x(t - \tau (t)) = [tanh(0.2x_{1} (t - \tau (t)),\;\quad tanh(0.2x_{2} (t - \tau (t))]^{T} . \\ \end{aligned} $$

Obviously, we have

\( L_{1} = diag\{ 0,\;0\} \), \( L_{2} = diag\{ 0.5,\;0.5\} \), \( \varGamma_{1} = diag\{ 0,\;0\} \), \( \varGamma_{2} = diag\{ 0.1,\;0.1\} \), \( h_{1} = 0.6 \), \( \tau_{1} = 0.3 \), \( \tau = 0.1 \), \( h = 0.3 \).
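As a quick sanity check, the chosen delay functions are consistent with these constants when \( h_{1}, \tau_{1} \) are read as the delay upper bounds and \( h, \tau \) as the delay-derivative bounds (an interpretation inferred from the numbers above, not restated from the theorem):

```python
import numpy as np

t = np.linspace(0.0, 20.0, 200001)   # dense time grid
h_t = 0.2 + 0.2 * np.sin(t)          # neutral delay h(t)
tau_t = 0.1 + 0.1 * np.cos(t)        # discrete delay tau(t)
dh_t = 0.2 * np.cos(t)               # h'(t)
dtau_t = -0.1 * np.sin(t)            # tau'(t)

assert h_t.max() <= 0.6              # h(t)   <= h_1
assert tau_t.max() <= 0.3            # tau(t) <= tau_1
assert np.abs(dh_t).max() <= 0.3     # |h'(t)|   <= h
assert np.abs(dtau_t).max() <= 0.1   # |tau'(t)| <= tau
```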

We take \( \rho_{1} = 0.5,\;\rho_{2} = 0.3,\;\varepsilon_{1} = 1.3,\;\varepsilon_{2} = 1.6,\;\mu = 1.3,\;\;\alpha = 0.2. \)

By solving LMIs (33) and (34) with the MATLAB LMI Toolbox, the following feasible solutions are obtained:

$$ \begin{aligned} & Y_{i} = \left[ {\begin{array}{*{20}c} {6.1809} & {-0.0013} \\ {-0.0013} & {6.3102} \\ \end{array} } \right],\quad \bar{Q}_{i} = \left[ {\begin{array}{*{20}c} {7.0703} & {0.0015} \\ {0.0015} & {7.2179} \\ \end{array} } \right], \\ & \bar{R}_{i} = \left[ {\begin{array}{*{20}c} {4.4029} & {0.0021} \\ {-0.0021} & {4.4309} \\ \end{array} } \right],\quad \bar{S}_{i} = \left[ {\begin{array}{*{20}c} {6.3958} & {0.0016} \\ {0.0016} & {6.5090} \\ \end{array} } \right], \\ & \bar{U}_{i} = \left[ {\begin{array}{*{20}c} {3.8864} & {0.0021} \\ {0.0021} & {3.9144} \\ \end{array} } \right],\quad \bar{\varLambda }_{i} = \left[ {\begin{array}{*{20}c} {18.4132} & {0.0037} \\ {0.0037} & {19.3815} \\ \end{array} } \right],\quad i = 1,2, \\ \end{aligned} $$
$$ \begin{aligned} & W = \left[ {\begin{array}{*{20}c} {3.1623} & {-0.0492} \\ {-0.0492} & {1.1772} \\ \end{array} } \right],\quad \bar{G}_{11} = \left[ {\begin{array}{*{20}c} {-119.8959} & {-0.0021} \\ {-0.0021} & {-123.6164} \\ \end{array} } \right], \\ & \bar{G}_{12} = \left[ {\begin{array}{*{20}c} {115.2673} & {0.0057} \\ {0.0057} & {119.1783} \\ \end{array} } \right],\quad \bar{G}_{21} = \left[ {\begin{array}{*{20}c} {-119.0128} & {-0.0100} \\ {-0.0100} & {-121.4332} \\ \end{array} } \right], \\ & \bar{G}_{22} = \left[ {\begin{array}{*{20}c} {117.0148} & {0.0081} \\ {0.0081} & {119.2635} \\ \end{array} } \right],\quad \bar{X}_{11} = \left[ {\begin{array}{*{20}c} {-120.2690} & {-0.0106} \\ {-0.0106} & {-122.6040} \\ \end{array} } \right], \\ & \bar{X}_{12} = \left[ {\begin{array}{*{20}c} {117.9901} & {0.0086} \\ {0.0086} & {120.2179} \\ \end{array} } \right],\quad \bar{X}_{21} = \left[ {\begin{array}{*{20}c} {-117.9358} & {-0.0099} \\ {-0.0099} & {-120.3014} \\ \end{array} } \right], \\ & \bar{X}_{22} = \left[ {\begin{array}{*{20}c} {118.0324} & {0.0085} \\ {0.0085} & {120.3156} \\ \end{array} } \right],\quad Z_{1} = \left[ {\begin{array}{*{20}c} {-2.2834} & {-0.0389} \\ {-0.0389} & {-2.8536} \\ \end{array} } \right], \\ & Z_{2} = \left[ {\begin{array}{*{20}c} {4.7928} & {-0.0288} \\ {-0.0288} & {-3.7641} \\ \end{array} } \right]. \\ \end{aligned} $$

Then the controller gains are given by

$$ K_{1} = \left[ {\begin{array}{*{20}c} {-0.3694} & {-0.0062} \\ {-0.0064} & {-0.4522} \\ \end{array} } \right],\quad K_{2} = \left[ {\begin{array}{*{20}c} {0.7754} & {-0.0044} \\ {-0.0048} & {-0.5965} \\ \end{array} } \right]. $$
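These gains can be reproduced from the feasible solutions above. The explicit gain formula is not restated in this example, but the reported numbers are consistent with the standard change of variables \( K_{i} = Z_{i} Y_{i}^{-1} \); a short sketch under that assumption:

```python
import numpy as np

# Feasible solutions reported above (Y_i is the same for i = 1, 2)
Y = np.array([[ 6.1809, -0.0013],
              [-0.0013,  6.3102]])
Z1 = np.array([[-2.2834, -0.0389],
               [-0.0389, -2.8536]])
Z2 = np.array([[ 4.7928, -0.0288],
               [-0.0288, -3.7641]])

# Assumed gain reconstruction K_i = Z_i * Y_i^{-1}
K1 = Z1 @ np.linalg.inv(Y)
K2 = Z2 @ np.linalg.inv(Y)
print(np.round(K1, 4))  # matches the reported K_1 to four decimals
print(np.round(K2, 4))  # matches the reported K_2 to four decimals
```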

According to Theorem 2, the switched neutral neural network (1) is exponentially stabilizable under the feedback controller (31) for any switching signal with average dwell time satisfying \( T_{a} > 1.3118 \).
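The dwell-time threshold can be verified directly. Assuming the usual average dwell time condition \( T_{a} > \ln \mu /\alpha \) (the precise inequality appears in Theorem 2, which is not restated here), the chosen constants \( \mu = 1.3 \) and \( \alpha = 0.2 \) give exactly the bound above:

```python
import math

mu, alpha = 1.3, 0.2
Ta_min = math.log(mu) / alpha  # assumed bound ln(mu) / alpha
print(round(Ta_min, 4))        # 1.3118
```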

Figure 2 shows the switching signal and the state trajectory of the resulting closed-loop system; the state converges to zero, which confirms that the closed-loop system is exponentially stable.

Fig. 2 State trajectory of the closed-loop switched system under switching signal \( \sigma (t) \)

5 Conclusions

In this paper, the exponential stabilization of a class of uncertain switched neutral neural networks with mixed time-varying delays has been studied. Based on the multiple Lyapunov-like functional method and the average dwell time method, a set of sufficient conditions guaranteeing exponential stabilization of the uncertain switched neutral neural networks has been derived in terms of LMIs, and a design scheme for the stabilizing feedback controllers has been proposed. Finally, two numerical examples have been given to illustrate the effectiveness of the results.

In future work, the proposed methods will be extended to more general uncertain stochastic switched neural networks with interval and distributed time-varying delays.