1 Introduction

Because high-order neural networks have stronger approximation properties, faster convergence, greater storage capacity, and higher fault tolerance than lower-order neural networks, they have been studied intensively by numerous authors in recent years. In particular, extensive results on the existence and stability of equilibrium points of high-order neural networks are available in the literature [1, 2, 3]. By using M-matrix and linear matrix inequality (LMI) techniques, the authors in [4] and [5] investigated the global exponential stability and robust stability of the equilibrium point of high-order Hopfield-type neural networks without delays. For high-order Hopfield neural networks with constant time delays, existence and global asymptotic stability conditions were obtained in [2] based on the LMI approach, under the assumption that the activation functions are monotonically nondecreasing. Numerous articles have also been published on the dynamic behaviors of high-order neural networks with various time delays; see, for example, [6] and the references therein.

The convergence dynamics of impulsive neural networks have also been considered; see, for example, [7, 8, 9] and the references therein. The problem of exponential stability analysis for impulsive high-order Hopfield-type neural networks with time-varying delays was studied in [10], where the delays are not required to be differentiable. We note that the conditions in [10] were obtained from simple Lyapunov functionals, which may be somewhat conservative when applied to the exponential stability of delayed high-order neural networks without impulses. To the best of the authors' knowledge, few authors have considered high-order Hopfield neural networks with impulses; for instance, [10, 11] investigated impulsive high-order Hopfield neural networks with time delays. However, so far there has been very little work on neural networks with time delay in the leakage (or "forgetting") term [14, 15, 16, 17], owing to certain theoretical and technical difficulties [12]. In fact, time delay in the leakage term also has a great impact on the dynamics of neural networks: as pointed out by Gopalsamy [13], time delay in the stabilizing negative feedback term has a tendency to destabilize a system.

Motivated by the above discussions, this paper studies the global asymptotic stability of a class of impulsive high-order Hopfield-type neural networks with time-varying delays and a time delay in the leakage term. Based on Lyapunov stability theory, together with the LMI approach and the free-weighting matrix method, some less conservative delay-dependent sufficient conditions are presented for the global asymptotic stability of the equilibrium point of the considered neural networks. Two numerical examples are provided to demonstrate the effectiveness of the proposed stability criteria.

Notations

Throughout this paper, the superscript T denotes transposition, and the notation X ≥ Y (respectively, X > Y), where X and Y are symmetric matrices, means that X − Y is positive semi-definite (respectively, positive definite). I is the identity matrix of appropriate dimension; \(\parallel \cdot \parallel\) is the Euclidean vector norm and \(\Uptheta=\{1,2,\ldots, n\}\). For any interval \(J\subseteq R, \) set \(V\subseteq R^{k}(1\leq k \leq n), C(J,V)=\{\varphi:J \rightarrow V\) is continuous} and \(PC^{1}(J,V)=\{\varphi:J\rightarrow V\) is continuously differentiable everywhere except at a finite number of points t, at which \(\varphi(t^{+}), \varphi(t^{-}), \dot{\varphi}(t^{+})\) and \(\dot{\varphi}(t^{-})\) exist and \(\varphi(t^{+})=\varphi(t), \dot{\varphi}(t^{+})=\dot{\varphi}(t)\), where \(\dot{\varphi}\) denotes the derivative of \(\varphi \}\). \([\bullet]^\ast\) denotes the integer function. The notation * always denotes the symmetric block in a symmetric matrix. Matrices, if not explicitly stated, are assumed to have compatible dimensions.

2 Model description and preliminaries

Consider the following continuous-time high-order Hopfield-type neural network with time-varying delays and impulsive perturbations:

$$ \begin{aligned} \dot{x}_{i}(t) &= -c_{i}x_{i}(t-\sigma)+\sum_{j=1}^{n}a_{ij}\tilde{f}_{j}(x_{j}(t)) +\sum_{j=1}^{n}b_{ij}\tilde{g}_{j}(x_{j}(t-\tau_{j}(t)))+\sum_{j=1}^{n} \sum_{l=1}^{n} d_{ijl}\tilde{f}_{j} (x_{j}(t))\tilde{f}_{l} (x_{l}(t)) \\ &\quad+\sum_{j=1}^{n} \sum_{l=1}^{n}e_{ijl}\tilde{g}_{j}(x_{j}(t-\tau_{j}(t)))\tilde{g}_{l}(x_{l}(t-\tau_{l}(t))) +J_{i}\\ x_{i}(t)&= \phi_{i}(t), \quad t \in [-\bar{\tau}, 0], \\ \Updelta x_{i}(t_{k}) &= J_{ik}(x_{i}(t_{k}^{-})), \quad k \in {\mathbb{Z}}_{+} \end{aligned} $$
(1)

where x i (t) is the neural state, n denotes the number of neurons, ϕ i (t) is the initial condition, which is continuous on \([-\bar{\tau}, 0]\), and τ j (t) ≥ 0 represents the transmission delay, which satisfies \(\tau_{1} \leq \tau_{j}(t) \leq \tau_{2}, \bar{\tau}=\max\{\tau_{2}, \sigma\}. \) The first term on the right-hand side of system (1) is the leakage term of the neural network, and σ ≥ 0 denotes the leakage (or forgetting) delay. \(\tilde{f}_{i}(x_{i}(t))\) and \(\tilde{g}_{i}(x_{i}(t))\) are the neuron activation functions; c i  > 0 is a positive constant denoting the rate with which the ith cell resets its potential to the resting state when isolated from the other cells and inputs; a ij and b ij are the first-order synaptic weights of the neural network, and d ijl and e ijl are the second-order synaptic weights of the network. J i denotes the ith component of an external input introduced from outside the network to the ith cell. The activation functions in (1) are assumed to satisfy the following assumption:

Assumption 1

There exist constants μ j  > 0, ν j  > 0, γ ij and σ ij (i = 1, 2; \(j=1,2,\ldots,n\)) such that γ ij  < σ ij and

$$ \left\{\begin{array}{l} \left|\tilde{f}_{j}(s)\right| \leq \mu_{j}, \quad \gamma_{1j} \leq \frac{\tilde{f}_{j}(u)-\tilde{f}_{j}(v)}{u-v}\leq \sigma_{1j},\quad \hbox{for any}\, s, u, v \in {\mathbb{R}}, u \neq v, \\ \left|\tilde{g}_{j}(s)\right| \leq \nu_{j}, \quad \gamma_{2j} \leq \frac{\tilde{g}_{j}(u)-\tilde{g}_{j}(v)}{u-v}\leq \sigma_{2j},\quad \hbox{for any} s,u, v \in {\mathbb{R}}, u \neq v. \end{array}\right. $$
(2)
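
As a concrete illustration (an assumed example activation, not one prescribed in this paper), the hyperbolic tangent satisfies Assumption 1 with μ j  = ν j  = 1, γ 1j  = γ 2j  = 0 and σ 1j  = σ 2j  = 1. The following short Python check verifies the boundedness and the sector condition numerically on random samples.

```python
import numpy as np

# Numerical spot-check that tanh satisfies Assumption 1 with mu_j = nu_j = 1,
# gamma_{1j} = gamma_{2j} = 0 and sigma_{1j} = sigma_{2j} = 1 (an assumed example
# activation, not one prescribed by the paper).
rng = np.random.default_rng(0)
s = rng.uniform(-10, 10, 10_000)
u, v = rng.uniform(-10, 10, (2, 10_000))
mask = u != v
quot = (np.tanh(u[mask]) - np.tanh(v[mask])) / (u[mask] - v[mask])
print("boundedness |tanh(s)| <= 1:", bool(np.all(np.abs(np.tanh(s)) <= 1.0)))
print("sector condition 0 <= quotient <= 1:", bool(np.all((quot >= 0.0) & (quot <= 1.0 + 1e-12))))
```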

To carry out the model transformation given below, we make the following additional assumptions.

Assumption 2

For any \(i, j, l=1,2,\ldots, n, d_{ijl}+d_{ilj} \neq 0\) and e ijl  + e ilj  ≠ 0.

Assumption 3

The transmission delay τ(t) is time-varying and satisfies 0 ≤ τ 1 ≤ τ(t) ≤ τ 2 and \(\dot{\tau}(t)\leq \eta, \) where τ 1, τ 2 and η are constants.

Assumption 4

The leakage delay σ ≥ 0 is a constant.

Assumption 5

\({J_k(\cdot):\mathbb{R}^n\rightarrow\mathbb{R}^n, k\in \mathbb{Z}_+,}\) are continuous functions.

Assumption 6

The impulse times t k satisfy \(0=t_0 <t_{1}<\cdots <t_{k}\rightarrow\infty\) and \({\inf_{k\in \mathbb{Z}_+}\{t_k-t_{k-1}\}>0.}\)
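
For readers who wish to experiment with model (1) under Assumptions 1–6, the following Python sketch integrates the impulsive system with a simple forward Euler scheme and a history buffer for the leakage and transmission delays. Every parameter value below (the weights, delays, external inputs, impulse times, the impulse map and the tanh activations) is an illustrative assumption, not data taken from this paper.

```python
import numpy as np

# Forward-Euler sketch of the impulsive high-order network (1) for n = 2 neurons.
# All parameter values are illustrative assumptions.
n, dt, T = 2, 0.001, 10.0
sigma, tau_max = 0.1, 0.5                      # leakage delay and delay upper bound
C = np.array([1.0, 1.2])                       # decay rates c_i
A = 0.2 * np.ones((n, n))                      # first-order weights a_ij
B = 0.1 * np.ones((n, n))                      # delayed first-order weights b_ij
D = 0.05 * np.ones((n, n, n))                  # second-order weights d_ijl
E = 0.05 * np.ones((n, n, n))                  # delayed second-order weights e_ijl
J = np.array([0.1, -0.2])                      # external inputs J_i
f = g = np.tanh                                # bounded activations (cf. Assumption 1)
tau = lambda t: 0.3 + 0.2 * np.sin(t)          # time-varying delay tau_j(t)
impulse_times = np.arange(1.0, T, 1.0)         # impulse instants t_k (cf. Assumption 6)

steps = int(T / dt)
hist = int(max(sigma, tau_max) / dt) + 1       # history covering [-tau_bar, 0]
x = np.zeros((steps + hist, n))
x[:hist] = 0.5                                 # constant initial function phi_i

for k in range(hist, steps + hist):
    t = (k - hist) * dt
    x_leak = x[k - 1 - int(sigma / dt)]        # x(t - sigma)
    x_del = x[k - 1 - int(tau(t) / dt)]        # x(t - tau(t))
    fx, gx = f(x[k - 1]), g(x_del)
    high_f = np.einsum('ijl,j,l->i', D, fx, fx)     # sum_j sum_l d_ijl f_j f_l
    high_g = np.einsum('ijl,j,l->i', E, gx, gx)     # sum_j sum_l e_ijl g_j g_l
    x[k] = x[k - 1] + dt * (-C * x_leak + A @ fx + B @ gx + high_f + high_g + J)
    if np.any(np.abs(t - impulse_times) < dt / 2):  # impulsive jump at t = t_k
        x[k] = 0.5 * x[k]                           # assumed jump J_k(x) = -0.5 x

print("state after", T, "time units:", x[-1])
```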

3 Existence and uniqueness theorems

Denote an equilibrium point of the impulsive network (1) by the constant vector \({x^\ast =[x^\ast_1, x^\ast_2,\ldots, x^\ast_n]^T\in \mathbb{R}^n, }\) whose components \(x^\ast_i\) are governed by the algebraic system

$$ \begin{aligned} 0 &= -c_{i}x^\ast_{i}+\sum_{j=1}^{n}a_{ij}\tilde{f}_{j}(x^\ast_{j}) +\sum_{j=1}^{n}b_{ij}\tilde{g}_{j}(x^\ast_{j})+\sum_{j=1}^{n} \sum_{l=1}^{n} d_{ijl}\tilde{f}_{j} (x^\ast_{j})\tilde{f}_{l} (x^\ast_{l}) \\ &\quad+\sum_{j=1}^{n} \sum_{l=1}^{n}e_{ijl}\tilde{g}_{j}(x^\ast_{j})\tilde{g}_{l}(x^\ast_{l}) +J_{i}, i\in \Uptheta. \end{aligned} $$
(3)

As usual, the impulsive operator is viewed as a perturbation of the equilibrium point of system (1) without impulse effects; that is, the impulsive functions satisfy \({ J_{ik}(x^\ast_i)\equiv0, i\in\Uptheta, k \in \mathbb{Z}_{+}.}\)

Theorem 3.1

Under Assumption 1, the network (1) has a unique equilibrium point if the following inequality holds:

$$ \begin{aligned} &\sum_{i=1}^n \frac{1}{c^2_i}\sum_{j=1}^{n} \left\{ \left[|a_{ij}|+\left(\sum_{l=1}^{n}|d_{ijl}|\right)\mu_j+ \left(\sum_{l=1}^{n}|d_{ilj}|\mu_l\right)\right]\max\{|\gamma_{1j}|,|\sigma_{1j}|\}\right.\\ &\quad+\left. \left[|b_{ij}|+\left(\sum_{l=1}^{n}|e_{ijl}|\right)\nu_j +\left(\sum_{l=1}^{n}|e_{ilj}|\nu_l\right)\right]\max\{|\gamma_{2j}|,|\sigma_{2j}|\}\right\}^2<1. \end{aligned} $$
(4)

Proof

Consider a mapping \({\Upphi(u)=(\Upphi_1(u),\Upphi_2(u),\ldots,\Upphi_n(u))^T\in \mathbb{R}^n}\) as follows

$$ \Upphi_i(u)= \sum_{j=1}^{n}\frac{a_{ij}}{c_i}\tilde{f}_{j}(u_j) +\sum_{j=1}^{n}\frac{b_{ij}}{c_i}\tilde{g}_{j}(u_j)+\sum_{j=1}^{n} \sum_{l=1}^{n} \frac{d_{ijl}}{c_i}\tilde{f}_{j} (u_j)\tilde{f}_{l} (u_l) +\sum_{j=1}^{n} \sum_{l=1}^{n}\frac{e_{ijl}}{c_i}\tilde{g}_{j}(u_j)\tilde{g}_{l}(u_l) +\frac{J_{i}}{c_i}, $$

where \({u=(u_1,u_2,\ldots,u_n)\in \mathbb{R}^n.}\) Let \({v=(v_1,v_2,\ldots,v_n)\in \mathbb{R}^n;}\) then by Assumption 1 we get

$$ \begin{aligned} & |\Upphi_i(u)-\Upphi_i(v)| \\ &=\left| \sum_{j=1}^{n}\frac{a_{ij}}{c_i}[\tilde{f}_{j}(u_j)-\tilde{f}_{j}(v_j)]+ \sum_{j=1}^{n}\frac{b_{ij}}{c_i}[\tilde{g}_{j}(u_j)-\tilde{g}_{j}(v_j)]\right.\\ &\quad+\left. \sum_{j=1}^{n} \sum_{l=1}^{n} \frac{d_{ijl}}{c_i}[\tilde{f}_{j} (u_j)\tilde{f}_{l} (u_l)-\tilde{f}_{j} (v_j)\tilde{f}_{l} (v_l)]+\sum_{j=1}^{n} \sum_{l=1}^{n}\frac{e_{ijl}}{c_i}[\tilde{g}_{j}(u_j)\tilde{g}_{l}(u_l)-\tilde{g}_{j}(v_j)\tilde{g}_{l}(v_l)]\right|\\ &\leq \sum_{j=1}^{n}\left|\frac{a_{ij}}{c_i}\right|\left|\tilde{f}_{j}(u_j)-\tilde{f}_{j}(v_j)\right|+ \sum_{j=1}^{n}\left|\frac{b_{ij}}{c_i}\right|\left|\tilde{g}_{j}(u_j)-\tilde{g}_{j}(v_j)\right| \\ &\quad+\sum_{j=1}^{n} \sum_{l=1}^{n} \left|\frac{d_{ijl}}{c_i}\right|\left|\tilde{f}_{j} (u_j)\tilde{f}_{l} (u_l)-\tilde{f}_{j} (v_j)\tilde{f}_{l} (u_l)+\tilde{f}_{j} (v_j)\tilde{f}_{l} (u_l)-\tilde{f}_{j} (v_j)\tilde{f}_{l} (v_l)\right| \\ &\quad+\sum_{j=1}^{n} \sum_{l=1}^{n}\left|\frac{e_{ijl}}{c_i}\right|\left|\tilde{g}_{j}(u_j)\tilde{g}_{l}(u_l)-\tilde{g}_{j}(v_j)\tilde{g}_{l}(u_l) +\tilde{g}_{j}(v_j)\tilde{g}_{l}(u_l) -\tilde{g}_{j}(v_j)\tilde{g}_{l}(v_l)\right| \\ &\leq \sum_{j=1}^{n}\left|\frac{a_{ij}}{c_i}\right|\max\{|\gamma_{1j}|,|\sigma_{1j}|\} | u_j - v_j |+ \sum_{j=1}^{n}\left|\frac{b_{ij}}{c_i}\right|\max\{|\gamma_{2j}|,|\sigma_{2j}|\} | u_j - v_j | \\ &\quad+ \sum_{j=1}^{n} \sum_{l=1}^{n} \left|\frac{d_{ijl}}{c_i}\right|\mu_j \max\{|\gamma_{1j}|,|\sigma_{1j}|\} | u_j - v_j | + \sum_{j=1}^{n} \sum_{l=1}^{n} \left|\frac{d_{ijl}}{c_i}\right|\mu_j \max\{|\gamma_{1l}|,|\sigma_{1l}|\} | u_l - v_l | \\ &\quad+ \sum_{j=1}^{n} \sum_{l=1}^{n} \left|\frac{e_{ijl}}{c_i}\right|\nu_j \max\{|\gamma_{2j}|,|\sigma_{2j}|\} | u_j - v_j | + \sum_{j=1}^{n} \sum_{l=1}^{n} \left|\frac{e_{ijl}}{c_i}\right|\nu_j \max\{|\gamma_{2l}|,|\sigma_{2l}|\} | u_l - v_l | \\ &\leq \sum_{j=1}^{n}\frac{1}{c_i}\left(|a_{ij}|\max\{|\gamma_{1j}|,|\sigma_{1j}|\}+|b_{ij}|\max\{|\gamma_{2j}|,|\sigma_{2j}|\}\right)| u_j - v_j | \\ &\quad+ \sum_{j=1}^{n} \sum_{l=1}^{n} \frac{1}{c_i}\left(|d_{ijl}|\mu_j \max\{|\gamma_{1j}|,|\sigma_{1j}|\}+|e_{ijl}|\nu_j \max\{|\gamma_{2j}|,|\sigma_{2j}|\}\right)| u_j - v_j | \\ &\quad+ \sum_{j=1}^{n} \sum_{l=1}^{n} \frac{1}{c_i}\left(|d_{ijl}|\mu_j \max\{|\gamma_{1l}|,|\sigma_{1l}|\}+|e_{ijl}|\nu_j \max\{|\gamma_{2l}|,|\sigma_{2l}|\}\right)| u_l - v_l | \\ &= \frac{1}{c_i} \sum_{j=1}^{n}\left\{ |a_{ij}|\max\{|\gamma_{1j}|,|\sigma_{1j}|\}+|b_{ij}|\max\{|\gamma_{2j}|,|\sigma_{2j}|\}+ \left(\sum_{l=1}^{n}|d_{ijl}|\right)\mu_j \max\{|\gamma_{1j}|,|\sigma_{1j}|\}\right. 
\\ &\quad+\left.\left(\sum_{l=1}^{n}|e_{ijl}|\right)\nu_j \max\{|\gamma_{2j}|,|\sigma_{2j}|\} \right\}| u_j - v_j | \\ &\quad+ \sum_{j=1}^{n} \sum_{l=1}^{n} \frac{1}{c_i}\left(|d_{ilj}|\mu_l \max\{|\gamma_{1j}|,|\sigma_{1j}|\}+|e_{ilj}|\nu_l \max\{|\gamma_{2j}|,|\sigma_{2j}|\}\right)| u_j - v_j | \\ &= \frac{1}{c_i} \sum_{j=1}^{n}\left\{ |a_{ij}|\max\{|\gamma_{1j}|,|\sigma_{1j}|\}+|b_{ij}|\max\{|\gamma_{2j}|,|\sigma_{2j}|\}+ \left(\sum_{l=1}^{n}|d_{ijl}|\right)\mu_j \max\{|\gamma_{1j}|,|\sigma_{1j}|\} \right.\\ &\quad+\left(\sum_{l=1}^{n}|e_{ijl}|\right)\nu_j \max\{|\gamma_{2j}|,|\sigma_{2j}|\}+ \left(\sum_{l=1}^{n}|d_{ilj}|\mu_l\right) \max\{|\gamma_{1j}|,|\sigma_{1j}|\} \\ &\quad+\left.\left(\sum_{l=1}^{n}|e_{ilj}|\nu_l\right) \max\{|\gamma_{2j}|,|\sigma_{2j}|\} \right\}| u_j - v_j | \\ &= \frac{1}{c_i} \sum_{j=1}^{n}\left\{ \left[|a_{ij}|+\left(\sum_{l=1}^{n}|d_{ijl}|\right)\mu_j +\left(\sum_{l=1}^{n}|d_{ilj}|\mu_l\right)\right] \max\{|\gamma_{1j}|,|\sigma_{1j}|\}\right. \\ &\quad+ \left.\left[|b_{ij}|+\left(\sum_{l=1}^{n}|e_{ijl}|\right)\nu_j +\left(\sum_{l=1}^{n}|e_{ilj}|\nu_l\right)\right]\max\{|\gamma_{2j}|,|\sigma_{2j}|\}\right\}| u_j - v_j |, \end{aligned} $$

which implies that

$$ \begin{aligned} \|\Upphi(u)-\Upphi(v)\|^2&= \sum_{i=1}^n|\Upphi_i(u)-\Upphi_i(v)|^2 \\ &\leq \sum_{i=1}^n \frac{1}{c^2_i}\left(\sum_{j=1}^{n} \left\{ \left[|a_{ij}|+\left(\sum_{l=1}^{n}|d_{ijl}|\right)\mu_j +\left(\sum_{l=1}^{n}|d_{ilj}|\mu_l\right)\right] \max\{|\gamma_{1j}|,|\sigma_{1j}|\}\right.\right.\\ &\quad+ \left.\left.\left[|b_{ij}|+\left(\sum_{l=1}^{n}|e_{ijl}|\right)\nu_j +\left(\sum_{l=1}^{n}|e_{ilj}|\nu_l\right)\right] \max\{|\gamma_{2j}|,|\sigma_{2j}|\}\right\}| u_j - v_j |\right)^2 \\ &\leq\sum_{i=1}^n \frac{1}{c^2_i}\sum_{j=1}^{n} \left\{ \left[|a_{ij}|+\left(\sum_{l=1}^{n}|d_{ijl}|\right) \mu_j+\left(\sum_{l=1}^{n}|d_{ilj}|\mu_l\right)\right] \max\{|\gamma_{1j}|,|\sigma_{1j}|\}\right. \\ &\quad+ \left.\left[|b_{ij}|+\left(\sum_{l=1}^{n}|e_{ijl}|\right)\nu_j +\left(\sum_{l=1}^{n}|e_{ilj}|\nu_l\right)\right] \max\{|\gamma_{2j}|,|\sigma_{2j}|\}\right\}^2\sum_{j=1}^{n}| u_j - v_j |^2 \\ &= \hbar ||u-v||^2, \end{aligned} $$

where

$$ \begin{aligned} \hbar&=\sum_{i=1}^n \frac{1}{c^2_i}\sum_{j=1}^{n} \left\{ \left[|a_{ij}|+\left(\sum_{l=1}^{n}|d_{ijl}|\right)\mu_j+ \left(\sum_{l=1}^{n}|d_{ilj}|\mu_l\right)\right] \max\{|\gamma_{1j}|,|\sigma_{1j}|\}\right.\\ &\quad+ \left.\left[|b_{ij}|+\left(\sum_{l=1}^{n}|e_{ijl}|\right)\nu_j +\left(\sum_{l=1}^{n}|e_{ilj}|\nu_l\right)\right] \max\{|\gamma_{2j}|,|\sigma_{2j}|\}\right\}^2<1 \end{aligned} $$

in view of condition (4).

Thus, we obtain

$$ \|\Upphi(u)-\Upphi(v)\|\leq \sqrt{\hbar} ||u-v||. $$

It means that the mapping \({\Upphi: \mathbb{R}^n\rightarrow \mathbb{R}^n}\) is a contraction on \({\mathbb{R}^n}\) endowed with the Euclidean vector norm \(\|\cdot\|\); thus, there exists a unique fixed point \({u^\ast\in \mathbb{R}^n}\) such that \(\Upphi(u^\ast)=u^\ast\), which is the unique solution of system (3). The proof is complete.□
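
The contraction argument above is constructive: once condition (4) holds, the equilibrium can be approximated by iterating the mapping \(\Upphi\). The following Python sketch evaluates the contraction constant \(\hbar\) of (4) and then runs the Picard iteration for a small illustrative network; the network data, the bounds and the tanh activations are assumptions made only for illustration.

```python
import numpy as np

# Evaluate the constant hbar in condition (4) and iterate Phi, for an assumed network.
n = 2
c = np.array([2.0, 2.5])
A = 0.1 * np.ones((n, n)); B = 0.1 * np.ones((n, n))
D = 0.02 * np.ones((n, n, n)); E = 0.02 * np.ones((n, n, n))
J = np.array([0.3, -0.1])
mu = nu = np.ones(n)          # bounds on |f_j|, |g_j| (tanh is bounded by 1)
L1 = L2 = np.ones(n)          # max{|gamma|, |sigma|} for tanh (Lipschitz constant 1)

term = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        term[i, j] = ((abs(A[i, j]) + np.abs(D[i, j, :]).sum() * mu[j]
                       + (np.abs(D[i, :, j]) * mu).sum()) * L1[j]
                      + (abs(B[i, j]) + np.abs(E[i, j, :]).sum() * nu[j]
                         + (np.abs(E[i, :, j]) * nu).sum()) * L2[j])
hbar = float(((term ** 2).sum(axis=1) / c ** 2).sum())
print("hbar =", hbar, "(Theorem 3.1 requires hbar < 1)")

def Phi(u):
    # The mapping Phi defined in the proof; its unique fixed point is the equilibrium.
    fu, gu = np.tanh(u), np.tanh(u)
    hf = np.einsum('ijl,j,l->i', D, fu, fu)     # sum_j sum_l d_ijl f_j(u_j) f_l(u_l)
    hg = np.einsum('ijl,j,l->i', E, gu, gu)     # sum_j sum_l e_ijl g_j(u_j) g_l(u_l)
    return (A @ fu + B @ gu + hf + hg + J) / c

u = np.zeros(n)
for _ in range(200):                            # Picard (Banach) iteration
    u = Phi(u)
print("estimated equilibrium x* =", u)
```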

4 Global asymptotic stability results

Under Assumption 2, letting \(u_{i}(t)=x_{i}(t)-x_{i}^{*}\), \(f_{j}(u_{j}(t))=\tilde{f}_{j}(u_{j}(t)+x_{j}^{*})-\tilde{f}_{j}(x_{j}^{*})\) and \(g_{j}(u_{j}(t))=\tilde{g}_{j}(u_{j}(t)+x_{j}^{*})-\tilde{g}_{j}(x_{j}^{*})\), system (1) is transformed into the following form:

$$ \begin{aligned} \dot{u}_{i}(t)&= -c_{i}u_{i}(t-\sigma)+\sum_{j=1}^{n}a_{ij}f_{j}(u_{j}(t))+\sum_{j=1}^{n}b_{ij} g_{j}(u_{j}(t-\tau_{j}(t)))+\sum_{j=1}^{n}\sum_{l=1}^{n}d_{ijl} \left[\left(\tilde{f}_{j}(x_{j}(t))-\tilde{f}_{j}(x_{j}^{*})\right)\right.\\ & \left.\times \tilde{f}_{l}(x_{l}(t)) +\tilde{f}_{j}(x_{j}^{*})\left(\tilde{f}_{l}(x_{l}(t))-\tilde{f}_{l}(x_{l}^{*})\right)\right]+ \sum_{j=1}^{n}\sum_{l=1}^{n}e_{ijl} \left[\left(\tilde{g}_{j}(x_{j}(t-\tau_{j}(t)))-\tilde{g}_{j}(x_{j}^{*})\right)\right. \\ & \times\left. \tilde{g}_{l}(x_{l}(t-\tau_{l}(t))) +\tilde{g}_{j}(x_{j}^{*})\left(\tilde{g}_{l}(x_{l}(t-\tau_{l}(t))) -\tilde{g}_{l}(x_{l}^{*})\right)\right]\\ &= -c_{i}u_{i}(t-\sigma)+\sum_{j=1}^{n}a_{ij}f_{j}(u_{j}(t))+\sum_{j=1}^{n}b_{ij} g_{j}(u_{j}(t-\tau_{j}(t)))+\sum_{j=1}^{n}\sum_{l=1}^{n}d_{ijl} \left[f_{j}(u_{j}(t))\tilde{f}_{l}(x_{l}(t)) \right.\\ &\quad+\left.\tilde{f}_{j}(x_{j}^{*})f_{l}(u_{l}(t)) \right]+\sum_{j=1}^{n}\sum_{l=1}^{n}e_{ijl} \left[g_{j}(u_{j}(t-\tau_{j}(t)))\tilde{g}_{l}(x_{l}(t-\tau_{l}(t))) +\tilde{g}_{j}(x_{j}^{*})g_{l}(u_{l}(t-\tau_{l}(t)))\right] \\ &= -c_{i}u_{i}(t-\sigma)+\sum_{j=1}^{n} \left[a_{ij}+\sum_{l=1}^{n}\left(d_{ijl}\tilde{f}_{l}(x_{l}(t)) +d_{ilj}\tilde{f}_{l}(x_{l}^{*})\right)\right]f_{j}(u_{j}(t)) +\sum_{j=1}^{n}\left[b_{ij}\right. \\ &\quad+\left.\sum_{l=1}^{n}\left(e_{ijl}\tilde{g}_{l}(x_{l} (t-\tau_{l}(t)))+e_{ilj}\tilde{g}_{l}(x_{l}^{*})\right)\right] g_{j}(u_{j}(t-\tau_{j}(t))) \\ &= -c_{i}u_{i}(t-\sigma)+\sum_{j=1}^{n}\left[a_{ij}+\sum_{l=1}^{n}(d_{ijl} +d_{ilj})\xi_{ijl}(u_{l}(t)) \right]f_{j}(u_{j}(t)) \\ &\quad+\sum_{j=1}^{n}\left[b_{ij}+\sum_{l=1}^{n}(e_{ijl}+e_{ilj}) \theta_{ijl}(u_{l}(t-\tau_{l}(t))) \right]g_{j}(u_{j}(t-\tau_{j}(t))) \end{aligned} $$

where \(\xi_{ijl}(u_{l}(t))=(d_{ijl}\tilde{f}_{l}(x_{l}(t))+d_{ilj}\tilde{f}_{l}(x_{l}^{*}))/(d_{ijl}+d_{ilj})\) if it lies between \(\tilde{f}_{l}(x_{l}(t))\) and \(\tilde{f}_{l}(x_{l}^{*})\) and \(\theta_{ijl}(u_{l}(t-\tau_{l}(t)))= (e_{ijl}\tilde{g}_{l}(x_{l}(t-\tau_{l}(t)))+e_{ilj}\tilde{g}_{l}(x_{l}^{*}))/(e_{ijl}+e_{ilj})\) if it lies between \(\tilde{g}_{l}(x_{l}(t-\tau_{l}(t)))\) and \(\tilde{g}_{l}(x_{l}^{*})\).

If we denote \(u(t)=[u_{1}(t), u_{2}(t),\ldots, u_{n}(t)]^{T}, \, f(u(t))=[f_{1}(u_{1}(t)), f_{2}(u_{2}(t)),\ldots, f_{n}(u_{n}(t))]^{T}\), \(g(u(t-\tau(t)))=[g_{1}(u_{1}(t-\tau_{1}(t))), g_{2}(u_{2}(t-\tau_{2}(t))),\ldots, g_{n}(u_{n}(t-\tau_{n}(t)))]^{T}, C=\hbox{diag}\{c_{1}, c_{2}, \ldots, c_{n}\}\), A = (a ij ) n×n B = (b ij ) n×n , \(\mathcal{D}=[D_{1}, D_{2},\ldots, D_{n}]^{T}, \) where \(D_{i}=\hbox{diag}\{d_{i1}, d_{i2},\ldots, d_{in}\}, d_{ij}=[d_{ij1}+d_{i1j}, d_{ij2}+d_{i2j},\ldots, d_{ijn}+d_{inj}]^{T}\), \(\mathcal{E}=[E_{1}, E_{2}, \ldots, E_{n}]^{T}, \) where \(E_{i}=\hbox{diag}\{e_{i1}, e_{i2}, \ldots, e_{in}\}, e_{ij}=[e_{ij1}+e_{i1j}, e_{ij2}+e_{i2j}, \ldots, e_{ijn}+e_{inj}]^{T}\), \(\Upxi(u(t))=\hbox{diag}\{\xi_{1}^{T}(u(t)), \xi_{2}^{T}(u(t)), \ldots, \xi_{n}^{T}(u(t))\}_{n \times n^{3}}\) where \(\xi_{i}(u(t))=[\xi_{i1}^{T}(u(t)), \xi_{i2}^{T}(u(t)), \ldots, \xi_{in}^{T}(u(t))]^{T}\), \(\xi_{ij}(u(t))=[\xi_{ij1}^{T}(u_{1}(t)), \xi_{ij2}^{T}(u_{2}(t)), \ldots, \xi_{ijn}^{T}(u_{n}(t))]^{T}\) and \(\Uptheta(u(t-\tau(t)))=\hbox{diag}\{\theta_{1}^{T}(u(t-\tau(t))), \theta_{2}^{T}(u(t-\tau(t))), \ldots, \theta_{n}^{T}(u(t-\tau(t)))\}_{n \times n^{3}}\) where \(\theta_{i}(u(t-\tau(t)))=[\theta_{i1}^{T}(u(t-\tau(t))), \theta_{i2}^{T}(u(t-\tau(t))), \ldots, \theta_{in}^{T}(u(t-\tau(t)))]^{T}\), \(\theta_{ij}(u(t-\tau(t)))=[\theta_{ij1}^{T}(u_{1}(t-\tau_{1}(t))), \theta_{ij2}^{T}(u_{2}(t-\tau_{2}(t))), \ldots, \theta_{ijn}^{T}(u_{n}(t-\tau_{n}(t)))]^{T}\), then the above system can be written in the following vector-matrix form with impulsive perturbation:

$$ \begin{aligned} \dot{u}(t)&=-Cu(t-\sigma)+(A+\Upxi(u(t)){\mathcal{D}})f(u(t)) +(B+\Uptheta(u(t-\tau(t))){\mathcal{E}})g(u(t-\tau(t))) \\ \Updelta u(t_{k})&= J_{k}(u(t_{k}^{-})), \quad k \in {\mathbb{Z}}_{+}, \end{aligned} $$
(5)

or in equivalent form:

$$ \begin{aligned} \frac{d}{{\text d}t}\left[u(t)-C\int\limits_{t-\sigma}^{t}u(s)ds\right]&= -Cu(t)+(A+\Upxi(u(t)){\mathcal{D}})f(u(t))+(B+\Uptheta(u(t-\tau(t))){\mathcal{E}})g(u(t-\tau(t))) \\ \Updelta u(t_{k})&= J_{k}(u(t_{k}^{-})), \quad k \in {\mathbb{Z}}_{+}. \end{aligned} $$
(6)

Remark 4.1

System (6) can be rewritten in the following vector-matrix form:

$$ \begin{aligned} \frac{d}{{\text d}t}\left[u(t)-C\int\limits_{t-\sigma}^{t}u(s)ds\right]&= -Cu(t)+(A+\tilde{\Upxi}^{T}\tilde{{\mathcal{D}}})f(u(t))+(B+\tilde{\Uptheta}^{T}\tilde{{\mathcal{E}}})g(u(t-\tau(t))) \\ \Updelta u(t_{k})&= J_{k}(u(t_{k}^{-}))=-D_{k}\left\{u(t_{k}^{-})-C\int\limits_{t_{k}-\sigma}^{t_{k}}u(s)ds\right\}, \quad k \in {\mathbb{Z}}_{+} \end{aligned} $$
(7)

where \({\tilde{\Upxi}=\hbox{diag}\{\xi, \xi, \ldots, \xi\}_{n \times n}, \xi=(\xi_{1}, \xi_{2}, \ldots, \xi_{n})^{T}, \tilde{\mathcal{D}}=(D_{1}+D_{1}^{T}, D_{2}+D_{2}^{T}, \ldots, D_{n}+D_{n}^{T})^{T}, \tilde{\Uptheta}=\hbox{diag}\{\theta, \theta, \ldots, \theta\}_{n \times n}, \theta=(\theta_{1}, \theta_{2}, \ldots, \theta_{n})^{T}}\), and \({\tilde{\mathcal{E}}=(E_{1}+E_{1}^{T}, E_{2}+E_{2}^{T}, \ldots, E_{n}+E_{n}^{T})^{T}. }\) In fact, the entries ξ l and θ l in the above equality should be the ξ ijl (u(t)) and θ ijl (u(t − τ(t))) of (5), respectively. Obviously, these in general depend on the weights d ijl , d ilj and \(e_{ijl}, e_{ilj}, i,j,l=1,2,\ldots, n; \) therefore, the above form is identical to system (5) only if ξ ijl (u(t)) and θ ijl (u(t − τ(t))) are independent of the weights d ijl , d ilj and \(e_{ijl}, e_{ilj}, i,j,l=1,2,\ldots, n. \)

In order to obtain the main results, we need the following lemmas.

Lemma 4.2

[18] Let X, Y and P be real matrices of appropriate dimensions with P > 0. Then, for any positive scalar \(\epsilon\), the following matrix inequality holds:

$$ X^{T}Y+Y^{T}X \leq \epsilon^{-1}X^{T}P^{-1}X+\epsilon Y^{T}PY. $$
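
A quick numerical illustration of Lemma 4.2 is given below for randomly generated X, Y, a random P > 0 and an arbitrarily chosen \(\epsilon\) (all assumed only for illustration); the smallest eigenvalue of the difference between the right- and left-hand sides should be nonnegative up to rounding.

```python
import numpy as np

# Spot-check of Lemma 4.2 with random data (illustrative assumptions).
rng = np.random.default_rng(1)
n = 4
X, Y = rng.standard_normal((2, n, n))
R = rng.standard_normal((n, n))
P = R @ R.T + n * np.eye(n)                    # P = P^T > 0
eps = 0.7
lhs = X.T @ Y + Y.T @ X
rhs = (1 / eps) * X.T @ np.linalg.inv(P) @ X + eps * Y.T @ P @ Y
gap = rhs - lhs
print("min eigenvalue of RHS - LHS:", np.linalg.eigvalsh((gap + gap.T) / 2).min())
```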

Lemma 4.3

[19] For any real matrix M = M T > 0 of appropriate dimension and any vector function \({\omega(\cdot): [a, b] \rightarrow \mathbb{R}^{n}}\) such that the integrations concerned are well defined, the following inequality holds:

$$ \left[\int\limits_{a}^{b}\omega(s)ds\right]^{T} M \left[\int\limits_{a}^{b}\omega(s)ds\right] \leq (b-a) \int\limits_{a}^{b}\omega^{T}(s) M \omega(s) ds. $$
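
Similarly, Lemma 4.3 (a Jensen-type integral inequality) can be checked numerically on a discretized integral; the interval, the matrix M and the vector function ω used below are illustrative assumptions.

```python
import numpy as np

# Discretized spot-check of Lemma 4.3 (illustrative assumptions).
a, b, m, n = 0.0, 1.0, 400, 3
s = np.linspace(a, b, m)
ds = (b - a) / (m - 1)
rng = np.random.default_rng(0)
R = rng.standard_normal((n, n))
M = R @ R.T + n * np.eye(n)                                   # M = M^T > 0
omega = np.vstack([np.sin(3 * s), np.cos(2 * s), s ** 2]).T   # omega(s) in R^3

int_omega = omega.sum(axis=0) * ds                            # approx. of the integral of omega
lhs = int_omega @ M @ int_omega
rhs = (b - a) * sum(w @ M @ w for w in omega) * ds
print("LHS =", lhs, "<= RHS =", rhs, ":", bool(lhs <= rhs))
```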

5 Main results

In this section, we present a delay-dependent criterion for the asymptotic stability of system (7) with Assumptions 1 to 6.

Theorem 5.1

For given scalars τ2 > τ1 ≥ 0, σ and η, the equilibrium point of (7) is globally asymptotically stable if there exist a symmetric matrix \(P=\left[ \begin{array}{ccc} P_{11} & P_{12} & P_{13} \\ P_{12}^{T} & P_{22} & P_{23}\\ P_{13}^{T} & P_{23}^{T} & P_{33} \end{array} \right]>0, \) positive diagonal matrices \(Q_{l}, l=1,2,\ldots,5, R_{1}, S_{1}, S_{2}, T_{1}, T_{2}, T_{3}, Z_{1}, Z_{2}, Z_{3}, \Uplambda_{1}, \Uplambda_{2}, \) positive scalars \(\epsilon_{j}, j=1,2,\ldots,14, \) and a real matrix X such that the following LMIs hold:

$$ \left[ \begin{array}{cc} P& (I-D_{k}) \\ * & P \end{array} \right] \geq 0, \quad k \in {\mathbb{Z}}_{+} $$
(8)
$$ \Uppi=\left[ \begin{array}{cc} \Upomega &\Uppsi \\ \Uppsi^{T} & -\Upupsilon \end{array} \right]<0 $$
(9)

where \(\Upomega=[\Upomega_{ij}]_{12 \times 12}, \Uppsi=[\Uppsi_{ij}]_{12 \times 14}, \Upupsilon=\hbox{diag}\{\epsilon_{1}I,\epsilon_{2}I, \ldots, \epsilon_{14}I\}, \)

$$ \begin{aligned} \Upomega_{1,1}&= -2P_{11}C+P_{12}+P_{12}^{T}+\sum_{i=1}^{4}Q_{i}+\sigma^{2}R_{1} -\frac{1}{\tau_{2}}S_{1}-4\tau_{2}^{2}T_{1}-4(\tau_{2}-\tau_{1})^{2}T_{2} -4\sigma^{2}T_{3}\\ &-2\Upgamma_{1}\Upsigma_{1}Z_{1}-2\Upgamma_{2}\Upsigma_{2}Z_{2}, \quad \Upomega_{1,2}=\frac{1}{\tau_{2}}S_{1}, \quad\Upomega_{1,3}=P_{13}, \quad \Upomega_{1,4}=-P_{12}-P_{13}, \\ \Upomega_{1,5} &= P_{11}C-{\mathcal{P}}C, \quad \Upomega_{1,6}=Z_{1}(\Upsigma_{1}+\Upgamma_{1})+{\mathcal{P}}A,\quad \Upomega_{1,7} =Z_{2}(\Upsigma_{2}+\Upgamma_{2}), \quad \Upomega_{1,8}={\mathcal{P}}B,\\ \Upomega_{1,10}&= C^{T}P_{11}C-P_{12}C+4\sigma T_{3}, \quad \Upomega_{1,11}=-C^{T}P_{12}+P_{22}^{T}+4 \tau_{2} T_{1}, \\ \Upomega_{1,12}&=-C^{T}P_{13}+P_{23}+4(\tau_{2}-\tau_{1})T_{2},\quad \Upomega_{2,2}=-(1-\eta)Q_{4}-\frac{1}{\tau_{2}}S_{1} -\frac{1}{\tau_{2} -\tau_{1}}(\tau_{2}S_{1}+S_{2})\\ &\quad -\frac{1}{\tau_{2}-\tau_{1}}S_{2}-2\Upgamma_{2}\Upsigma_{2}Z_{3}, \quad \Upomega_{2,3}=\frac{1}{\tau_{2}-\tau_{1}}S_{2}, \quad \Upomega_{2,4}=\frac{1}{\tau_{2}-\tau_{1}}(\tau_{2}S_{1}+S_{2}),\\ \Upomega_{2,8}&=Z_{3}(\Upsigma_{2}+\Upgamma_{2}), \quad \Upomega_{3,3}=-Q_{1}-\frac{1}{\tau_{2}-\tau_{1}}S_{2}, \quad \Upomega_{3,10} =-P_{13}C, \quad \Upomega_{3,11}=P_{23}^{T}, \quad \Upomega_{3,12}=P_{33}^{T}, \\ \Upomega_{4,4}&= -Q_{2} -\frac{1}{\tau_{2}-\tau_{1}}(\tau_{2}S_{1}+S_{2}), \quad \Upomega_{4,10}=P_{12}C+P_{13}C, \quad \Upomega_{4,11}=-P_{22}^{T}-P_{23}^{T}, \\ \Upomega_{4,12}&=-P_{23}-P_{33}^{T}, \Upomega_{5,5}=-Q_{3}, \quad \Upomega_{5,6}=C^{T}\Uplambda_{1}, \quad \Upomega_{5,7}=C^{T}\Uplambda_{2}, \quad \Upomega_{5,9}=-C^{T}X, \\ \Upomega_{6,6}&= -2Z_{1}-\Uplambda_{1}A-A^{T}\Uplambda_{1}+\bar{\epsilon}{\mathcal{D}}^{T}{\mathcal{D}}, \quad \Upomega_{6,7}=-A^{T}\Uplambda_{2}, \quad \Upomega_{6,8}=-\Uplambda_{1}B, \quad \Upomega_{6,9}=A^{T}X, \\ \Upomega_{6,10}&=-A^{T}P_{11}C,\quad \Upomega_{6,11}=-A^{T}P_{12}, \quad \Upomega_{6,12}=-A^{T}P_{13}, \quad \Upomega_{7,7}=Q_{5}-2Z_{2}, \quad \Upomega_{7,8}=-\Uplambda_{2}B, \\ \Upomega_{8,8}&= -(1-\eta)Q_{5}-2Z_{3}+\tilde{\epsilon}{\mathcal{E}}{\mathcal{E}}, \quad \Upomega_{8,9}=B^{T}X, \quad \Upomega_{8,10}=-B^{T}P_{11}C, \quad \Upomega_{8,11}=-B^{T}P_{12}, \\ \Upomega_{8,12}&= -B^{T}P_{13}, \quad \Upomega_{9,9}=\tau_{2}^{2}(\tau_{2}-\tau_{1})S_{1}+\tau_{2}^{4}T_{1} +(\tau_{2}^{2}-\tau_{1}^{2})T_{2}+\sigma^{4}T_{3}+(\tau_{2}-\tau_{1}) ^{2}S_{2}-X-X^{T}, \\ \Upomega_{10,10}&= \frac{1}{\sigma}R_{1}-4T_{3}, \quad \Upomega_{11,11}=-4T_{1}, \quad\Upomega_{12,12}=-4T_{2}, \quad \Uppsi_{1,1}=\mu {\mathcal{P}}, \quad \Uppsi_{6,2}=\mu \Uplambda_{1}, \quad \Uppsi_{7,3}=\mu \Uplambda_{2}, \\ \Uppsi_{1,4}&=\mu {\mathcal{P}},\quad \Uppsi_{6,5}=\nu \Uplambda_{1}, \quad \Uppsi_{7,6}=\nu \Uplambda_{2}, \quad \Uppsi_{10,7}=\mu C^{T}P_{11}, \quad \Uppsi_{10,8}=\nu C^{T}P_{11}, \quad \Uppsi_{11,9}=\mu P_{12}^{T}, \\ \Uppsi_{11,10}&=\nu P_{12}^{T}, \quad \Uppsi_{12,11}=\mu P_{13}^{T}, \quad \Uppsi_{12,12}=\nu P_{13}^{T}, \quad \Uppsi_{9,13}=\mu X, \quad \Uppsi_{9,14}=\nu X, \\ {\mathcal{P}}&=P_{11}+\Upsigma_{1}\Uplambda_{1}+\Upsigma_{2}\Uplambda_{2}, \quad \bar{\epsilon}=\epsilon_{1}+\epsilon_{2}+\epsilon_{3} +\epsilon_{7}+\epsilon_{9}+\epsilon_{11}+\epsilon_{13}, \\ \tilde{\epsilon}&=\epsilon_{4}+\epsilon_{5}+\epsilon_{6} +\epsilon_{8}+\epsilon_{10}+\epsilon_{12}+\epsilon_{14}. \\ \end{aligned} $$
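
Before proceeding to the proof, we note that conditions (8) and (9) are standard LMI feasibility problems that can be handled by semidefinite programming solvers. The following sketch shows, for illustration only, how the impulse LMI (8) could be checked with the CVXPY package for hypothetical impulse matrices D k and a decision matrix P of compatible dimension; LMI (9) would be assembled block by block from the Ω and Ψ entries above and tested for negative definiteness in the same way.

```python
import numpy as np
import cvxpy as cp

# Feasibility sketch for LMI (8); the dimension and the impulse matrices D_k are
# hypothetical, chosen only for illustration.
n = 3
D_list = [0.4 * np.eye(n), np.diag([0.1, 0.3, 0.2])]       # assumed impulse gains D_k

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> 1e-6 * np.eye(n)]                      # P > 0
for Dk in D_list:
    M = np.eye(n) - Dk
    constraints.append(cp.bmat([[P, M], [M.T, P]]) >> 0)   # LMI (8) for each k

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI (8) feasible:", prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE))
```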

Proof

Construct a Lyapunov–Krasovskii functional in the form

$$ V(t,u(t))=V_{1}(t,u(t))+V_{2}(t,u(t))+V_{3}(t,u(t))+V_{4} (t,u(t))+V_{5}(t,u(t)) $$
(10)

where

$$ \begin{aligned} V_{1}(t,u(t)) &= \left[ \begin{array}{c} u(t)-C\int_{t-\sigma}^{t}u(s)ds \\ \int_{t-\tau_{2}}^{t}u(s)ds \\ \int_{t-\tau_{2}}^{t-\tau_{1}}u(s)ds \end{array} \right]^{T} \left[ \begin{array}{ccc} P_{11} & P_{12} & P_{13} \\ P_{12}^{T} & P_{22} & P_{23}\\ P_{13}^{T} & P_{23}^{T} & P_{33} \end{array} \right] \left[ \begin{array}{c} u(t)-C\int_{t-\sigma}^{t}u(s)ds \\ \int_{t-\tau_{2}}^{t}u(s)ds \\ \int_{t-\tau_{2}}^{t-\tau_{1}}u(s)ds \end{array} \right], \\ V_{2}(t,u(t)) &= 2 \sum_{i=1}^{n} \left\{ \lambda_{1i} \int\limits_{0}^{u_{i}(t)}(\sigma_{1i}s-f_{i}(s))ds+\lambda_{2i} \int\limits_{0}^{u_{i}(t)}(\sigma_{2i}s-g_{i}(s))ds\right\}, \\ V_{3}(t,u(t)) &= \int\limits_{t-\tau_{1}}^{t}u^{T}(s)Q_{1}u(s)ds +\int\limits_{t-\tau_{2}}^{t}u^{T}(s)Q_{2}u(s)ds+\int\limits_{t-\sigma} ^{t}u^{T}(s)Q_{3}u(s)ds \\ &\quad+\int\limits_{t-\tau(t)}^{t}u^{T}(s)Q_{4}u(s)ds+\int\limits_{t-\tau(t)} ^{t}g^{T}(u(s))Q_{5}g(u(s))ds, \\ V_{4}(t,u(t)) &= \sigma \int\limits_{-\sigma}^{0}\int\limits_{t+\theta}^{t}u^{T}(s)R_{1}u(s) dsd\theta+\tau_{2}(\tau_{2}-\tau_{1}) \int\limits_{-\tau_{2}}^{0}\int\limits_{t+\theta}^{t}\dot{u}^{T}(s)S_{1} \dot{u}(s)dsd\theta\\ &\quad+(\tau_{2}-\tau_{1})\int\limits_{-\tau_{2}}^{-\tau_{1}} \int\limits_{t+\theta}^{t}\dot{u}^{T}(s)S_{2} \dot{u}(s)dsd\theta \\ V_{5}(t,u(t)) &= 2\tau_{2}^{2}\int\limits_{t-\tau_{2}}^{t}\int\limits_{\theta}^{t} \int\limits_{\lambda}^{t}\dot{u}^{T}(s) T_{1}\dot{u}(s)dsd\theta d\lambda+2[\tau_{2}^{2}-\tau_{1}^{2}]\int\limits_{t-\tau_{2}} ^{t-\tau_{1}}\int\limits_{\theta}^{t}\int\limits_{\lambda}^{t}\dot{u}^{T}(s) T_{2}\dot{u}(s)dsd\theta d\lambda \\ &\quad+2\sigma^{2}\int\limits_{t-\sigma}^{t}\int\limits_{\theta}^{t}\int\limits_{\lambda}^{t} \dot{u}^{T}(s) T_{3}\dot{u}(s)dsd\theta d\lambda \\ \end{aligned} $$

Calculating the upper right derivative of V along the solution of (7) on each continuity interval \({[t_{k-1}, t_{k}), k \in \mathbb{Z}_{+}, }\) we get

$$ D^{+}(V_{1}(t,u(t))) = \left[ \begin{array}{c} u(t)-C\int_{t-\sigma}^{t}u(s)ds \\ \int_{t-\tau_{2}}^{t}u(s)ds \\ \int_{t-\tau_{2}}^{t-\tau_{1}}u(s)ds \end{array} \right]^{T} \left[ \begin{array}{ccc} P_{11} & P_{12} & P_{13} \\ P_{12}^{T} & P_{22} & P_{23}\\ P_{13}^{T} & P_{23}^{T} & P_{33} \end{array} \right] \left[ \begin{array}{c} \dot{u}(t)-Cu(t)+Cu(t-\sigma) \\ u(t)-u(t-\tau_{2}) \\ u(t-\tau_{1})-u(t-\tau_{2}) \end{array} \right], $$
(11)
$$ D^{+}(V_{2}(t,u(t))) = \left[\left(u^{T}(t)\Upsigma_{1}-f^{T}(u(t))\right)\Uplambda_{1}+\left(u^{T}(t)\Upsigma_{2} -g^{T}(u(t))\right)\Uplambda_{2}\right]\dot{u}(t), $$
(12)
$$ \begin{aligned} D^{+}(V_{3}(t,u(t))) &\leq u^{T}(t)Q_{1}u(t)-u^{T}(t-\tau_{1})Q_{1}u(t-\tau_{1}) +u^{T}(t)Q_{2}u(t)-u^{T}(t-\tau_{2})Q_{2}u(t-\tau_{2}) \\ &\quad+u^{T}(t)Q_{3}u(t)-u^{T}(t-\sigma)Q_{3}u(t-\sigma)+u^{T}(t)Q_{4}u(t) \\ &\quad-(1-\eta)u^{T}(t-\tau(t))Q_{4}u(t-\tau(t))+g^{T}(u(t))Q_{5}g(u(t)) \\ &\quad-(1-\eta)g^{T}(u(t-\tau(t)))Q_{5}g(u(t-\tau(t))) \end{aligned} $$
(13)
$$ \begin{aligned} D^{+}(V_{4}(t,u(t))) &= \sigma^{2}u^{T}(t)R_{1}u(t)-\sigma \int\limits_{t-\sigma}^{t}u^{T}(s)R_{1}u(s)ds+\tau_{2}^{2}(\tau_{2}-\tau_{1}) \dot{u}^{T}(t)S_{1}\dot{u}(t) \\ &\quad-\tau_{2}(\tau_{2}-\tau_{1})\int\limits_{t-\tau_{2}}^{t} \dot{u}^{T}(s)S_{1}\dot{u}(s)ds+(\tau_{2}-\tau_{1})^{2}\dot{u}^{T}(t)S_{2}\dot{u}(t)\\ &\quad-(\tau_{2}-\tau_{1})\int\limits_{t-\tau_{2}}^{t-\tau_{1}}\dot{u}^{T}(s)S_{2}\dot{u}(s)ds \end{aligned} $$
(14)
$$ \begin{aligned} D^{+}(V_{5}(t,u(t))) &= \dot{u}^{T}(t) \left[\tau_{2}^{4}T_{1} +(\tau_{2}^{2}-\tau_{1}^{2})^{2}T_{2}+\sigma^{4}T_{3}\right] \dot{u}(t)-2\tau_{2}^{2}\int\limits_{t-\tau_{2}}^{t} \int\limits_{\theta}^{t}\dot{u}^{T}(s)T_{1}\dot{u}(s)dsd\theta \\ &\quad-2(\tau_{2}^{2}-\tau_{1}^{2}) \int\limits_{t-\tau_{2}}^{t-\tau_{1}}\int\limits_{\theta}^{t} \dot{u}^{T}(s)T_{2}\dot{u}(s)dsd\theta-2\sigma^{2} \int\limits_{t-\sigma}^{t}\int\limits_{\theta}^{t}\dot{u}^{T}(s)T_{3} \dot{u}(s)dsd\theta. \end{aligned} $$
(15)

By Lemma 4.3, we have

$$ -\int\limits_{t-\sigma}^{t}u^{T}(s)R_{1}u(s)ds \leq -\frac{1}{\sigma}\left[\int\limits_{t-\sigma}^{t}u(s)ds\right]^{T}R_{1}\left[\int\limits_{t-\sigma}^{t}u(s)ds\right] $$
(16)
$$ -\int\limits_{t-\tau(t)}^{t}\dot{u}^{T}(s)S_{1}\dot{u}(s)ds \leq -\frac{1}{\tau_{2}}\left[u(t)-u(t-\tau(t))\right]^{T}S_{1}\left[u(t)-u(t-\tau(t))\right] $$
(17)
$$ \begin{aligned} -\int\limits_{t-\tau_{2}}^{t-\tau(t)}\dot{u}^{T}(s) (\tau_{2}S_{1}+S_{2})\dot{u}(s)ds &\leq -\frac{1}{\tau_{2}-\tau_{1}} \left[u(t-\tau(t))-u(t-\tau_{2})\right]^{T}(\tau_{2}S_{1}+S_{2})\\ &\quad \left[u(t-\tau(t))-u(t-\tau_{2})\right] \end{aligned} $$
(18)
$$ -\int\limits_{t-\tau(t)}^{t-\tau_{1}}\dot{u}^{T}(s) S_{2} \dot{u}(s)ds \leq -\frac{1}{\tau_{2}-\tau_{1}} \left[u(t-\tau_{1})-u(t-\tau(t))\right]^{T} S_{2} \left[u(t-\tau_{1})-u(t-\tau(t))\right] $$
(19)
$$ -\int\limits_{t-\tau_{2}}^{t}\int\limits_{\theta}^{t}\dot{u}^{T}(s) T_{1}\dot{u}(s)ds d\theta \leq -\frac{2}{\tau_{2}^{2}} \left[\tau_{2}u(t)-\int\limits_{t-\tau_{2}}^{t}u(s)ds\right]^{T}T_{1} \left[\tau_{2}u(t)-\int\limits_{t-\tau_{2}}^{t}u(s)ds\right] $$
(20)
$$ \begin{aligned} -\int\limits_{t-\tau_{2}}^{t-\tau_{1}}\int\limits_{\theta}^{t}\dot{u}^{T}(s) T_{2}\dot{u}(s)ds d\theta &\leq -\frac{2}{\tau_{2}^{2}-\tau_{1}^{2}} \left[(\tau_{2}-\tau_{1})u(t)-\int\limits_{t-\tau_{2}}^{t-\tau_{1}}u(s)ds\right]^{T}T_{2} \\ &\quad \left[(\tau_{2}-\tau_{1})u(t)-\int\limits_{t-\tau_{2}}^{t-\tau_{1}}u(s)ds\right] \end{aligned} $$
(21)
$$ -\int\limits_{t-\sigma}^{t}\int\limits_{\theta}^{t}\dot{u}^{T}(s) T_{3}\dot{u}(s)ds d\theta \leq -\frac{2}{\sigma^{2}} \left[\sigma u(t)-\int\limits_{t-\sigma}^{t}u(s)ds\right]^{T}T_{3} \left[\sigma u(t)-\int\limits_{t-\sigma}^{t}u(s)ds\right]. $$
(22)

Furthermore, one can infer from inequality (2) that the following matrix inequalities hold for any positive diagonal matrices Z k , (k = 1, 2, 3) with compatible dimensions

$$ 0 \leq -2u^{T}(t) \Upgamma_{1}\Upsigma_{1}Z_{1}u(t)-2f^{T}(u(t))Z_{1}f(u(t))+2u^{T}(t)Z_{1}(\Upsigma_{1}+\Upgamma_{1})f(u(t)) $$
(23)
$$ 0 \leq -2u^{T}(t) \Upgamma_{2}\Upsigma_{2}Z_{2}u(t)-2g^{T}(u(t))Z_{2}g(u(t))+2u^{T}(t)Z_{2}(\Upsigma_{2}+\Upgamma_{2})g(u(t)) $$
(24)
$$ 0 \leq -2u^{T}_{\tau} \Upgamma_{2}\Upsigma_{2}Z_{3}u_{\tau}-2g^{T}(u_{\tau})Z_{3}g(u_{\tau})+2u^{T}_{\tau}Z_{3}(\Upsigma_{2}+\Upgamma_{2})g(u_{\tau}) $$
(25)

The following equation holds for any real matrix X with a compatible dimension

$$ 0=\,2\dot{u}^{T}(t)X \left[-\dot{u}(t)-Cu(t-\sigma)+(A+\Upxi(u(t)){\mathcal{D}})f(u(t)) +(B+\Uptheta(u(t-\tau(t))){\mathcal{E}})g(u(t-\tau(t)))\right]. $$
(26)

By Lemma 4.2, the following inequalities hold for any positive scalars \(\epsilon_{i}, (i=1,2,\ldots,14)\)

$$ 2 u^{T}(t){\mathcal{P}} \Upxi(u(t)){\mathcal{D}}f(u(t)) \leq \epsilon_{1}^{-1}u^{T}(t){\mathcal{P}}\Upxi(u(t)) \Upxi(u(t))^{T} {\mathcal{P}} u(t)+\epsilon_{1} f^{T}(u(t)){\mathcal{D}}^{T} {\mathcal{D}} f(u(t)) $$
(27)
$$ -2f^{T}(u(t)) \Uplambda_{1} \Upxi(u(t)) {\mathcal{D}} f(u(t)) \leq \epsilon_{2}^{-1} f^{T}(u(t)) \Uplambda_{1} \Upxi(u(t)) \Upxi(u(t))^{T} \Uplambda_{1} f(u(t))+\epsilon_{2} f^{T}(u(t)){\mathcal{D}}^{T} {\mathcal{D}} f(u(t)) $$
(28)
$$ -2 g^{T}(u(t)) \Uplambda_{2} \Upxi(u(t)) {\mathcal{D}} f(u(t)) \leq \epsilon_{3}^{-1} g^{T}(u(t)) \Uplambda_{2} \Upxi(u(t)) \Upxi(u(t))^{T} \Uplambda_{2} g(u(t))+ \epsilon_{3} f^{T}(u(t)){\mathcal{D}}^{T} {\mathcal{D}} f(u(t)) $$
(29)
$$ 2 u^{T}(t) {\mathcal{P}} \Uptheta(u_{\tau}){\mathcal{E}} g(u_{\tau}) \leq \epsilon_{4}^{-1} u^{T}(t) {\mathcal{P}} \Uptheta(u_{\tau}) \Uptheta^{T}(u_{\tau}) {\mathcal{P}}^{T} u(t)+\epsilon_{4}g^{T}(u_{\tau}){\mathcal{E}}^{T}{\mathcal{E}} g(u_{\tau}) $$
(30)
$$ -2f^{T}(u(t)) \Uplambda_{1} \Uptheta(u_{\tau}) {\mathcal{E}} g(u_{\tau}) \leq \epsilon_{5}^{-1} f^{T}(u(t)) \Uplambda_{1} \Uptheta(u_{\tau}) \Uptheta^{T}(u_{\tau}) \Uplambda_{1} f(u(t))+\epsilon_{5} g^{T}(u_{\tau}){\mathcal{E}}^{T}{\mathcal{E}} g(u_{\tau}) $$
(31)
$$ -2 g^{T}(u(t)) \Uplambda_{2} \Uptheta(u_{\tau}) {\mathcal{E}} g(u_{\tau}) \leq \epsilon_{6}^{-1} g^{T}(u(t)) \Uplambda_{2} \Uptheta(u_{\tau}) \Uptheta^{T}(u_{\tau}) \Uplambda_{2} g(u(t))+\epsilon_{6} g^{T}(u_{\tau}){\mathcal{E}}^{T}{\mathcal{E}} g(u_{\tau}) $$
(32)
$$ \begin{aligned} &-2 \left(\int\limits_{t-\sigma}^{t}u(s)ds\right)^{T} C^{T}P_{11} \Upxi(u(t)) {\mathcal{D}} f(u(t)) \\ & \leq \epsilon_{7}^{-1} \left(\int\limits_{t-\sigma}^{t}u(s)ds\right)^{T} C^{T}P_{11}\Upxi(u(t)) \Upxi^{T}(u(t)) P_{11}C \left(\int\limits_{t-\sigma}^{t}u(s)ds\right) +\epsilon_{7} f^{T}(u(t)){\mathcal{D}}^{T} {\mathcal{D}} f(u(t)) \end{aligned} $$
(33)
$$ \begin{aligned} &-2 \left(\int\limits_{t-\sigma}^{t}u(s)ds\right)^{T} C^{T}P_{11} \Uptheta(u_{\tau}){\mathcal{E}}g(u_{\tau}) \\ & \leq \epsilon_{8}^{-1} \left(\int\limits_{t-\sigma}^{t}u(s)ds\right)^{T} C^{T}P_{11} \Uptheta(u_{\tau}) \Uptheta^{T}(u_{\tau}) P_{11}C \left(\int\limits_{t-\sigma}^{t}u(s)ds\right)+\epsilon_{8} g^{T}(u_{\tau}){\mathcal{E}}^{T}{\mathcal{E}} g(u_{\tau}) \end{aligned} $$
(34)
$$ \begin{aligned} & -2 \left(\int\limits_{t-\tau_{2}}^{t}u(s)ds\right)^{T} P_{12}^{T} \Upxi(u(t)) {\mathcal{D}} f(u(t)) \\ & \leq \epsilon_{9}^{-1} \left(\int\limits_{t-\tau_{2}}^{t}u(s)ds\right)^{T} P_{12}^{T} \Upxi(u(t)) \Upxi^{T}(u(t)) P_{12} \left(\int\limits_{t-\tau_{2}}^{t}u(s)ds\right)+ \epsilon_{9} f^{T}(u(t)) {\mathcal{D}}^{T} {\mathcal{D}} f(u(t)) \end{aligned} $$
(35)
$$ \begin{aligned} &-2 \left(\int\limits_{t-\tau_{2}}^{t}u(s)ds\right)^{T} P_{12}^{T} \Uptheta(u_{\tau}) {\mathcal{E}} g(u_{\tau}) \\ & \leq \epsilon_{10}^{-1} \left(\int\limits_{t-\tau_{2}}^{t}u(s)ds\right)^{T} P_{12}^{T} \Uptheta(u_{\tau}) \Uptheta^{T}(u_{\tau}) P_{12} \left(\int\limits_{t-\tau_{2}}^{t}u(s)ds\right)+\epsilon_{10} g^{T}(u_{\tau}){\mathcal{E}}^{T}{\mathcal{E}} g(u_{\tau}) \end{aligned} $$
(36)
$$ \begin{aligned} &-2\left(\int\limits_{t-\tau_{2}}^{t-\tau_{1}}u(s)ds\right)^{T} P_{13}^{T} \Upxi(u(t)) {\mathcal{D}} f(u(t)) \\ & \leq \epsilon_{11}^{-1} \left(\int\limits_{t-\tau_{2}}^{t-\tau_{1}}u(s)ds\right)^{T} P_{13}^{T} \Upxi(u(t)) \Upxi^{T}(u(t)) P_{13} \left(\int\limits_{t-\tau_{2}}^{t-\tau_{1}}u(s)ds\right)+\epsilon_{11}f^{T}(u(t)) {\mathcal{D}}^{T} {\mathcal{D}}f(u(t)) \end{aligned} $$
(37)
$$ \begin{aligned} &-2 \left(\int\limits_{t-\tau_{2}}^{t-\tau_{1}} u(s)ds\right)^{T} P_{13}^{T} \Uptheta(u_{\tau}) {\mathcal{E}} g(u_{\tau}) \\ & \leq \epsilon_{12}^{-1} \left(\int\limits_{t-\tau_{2}}^{t-\tau_{1}} u(s)ds\right)^{T} P_{13}^{T} \Uptheta(u_{\tau}) \Uptheta^{T}(u_{\tau}) P_{13} \left(\int\limits_{t-\tau_{2}}^{t-\tau_{1}} u(s)ds\right)+\epsilon_{12} g^{T}(u_{\tau}){\mathcal{E}}^{T}{\mathcal{E}} g(u_{\tau}) \end{aligned} $$
(38)
$$ -2 \dot{u}^{T}(t)X \Upxi(u(t)){\mathcal{D}}f(u(t)) \leq \epsilon_{13}^{-1}\dot{u}^{T}(t)X\Upxi(u(t)) \Upxi(u(t))^{T} X^{T} \dot{u}(t)+\epsilon_{13} f^{T}(u(t)){\mathcal{D}}^{T} {\mathcal{D}} f(u(t)) $$
(39)
$$ -2 \dot{u}^{T}(t)X \Uptheta(u_{\tau}) {\mathcal{E}} g(u_{\tau}) \leq \epsilon_{14}^{-1}\dot{u}^{T}(t)X \Uptheta(u_{\tau}) \Uptheta^{T}(u_{\tau}) X^{T}\dot{u}(t)+\epsilon_{14} g^{T}(u_{\tau}){\mathcal{E}}^{T}{\mathcal{E}} g(u_{\tau}). $$
(40)

Since

$$ \begin{aligned} \Upxi(u(t)) \Upxi^{T}(u(t))&=\hbox{diag} \left\{\sum_{j=1}^{n}\|\xi_{1j}(u(t))\|^{2}, \sum_{j=1}^{n}\|\xi_{2j}(u(t))\|^{2}, \ldots, \sum_{j=1}^{n}\|\xi_{nj}(u(t))\|^{2}\right\} \\ \Uptheta(u_{\tau}) \Uptheta^{T}(u_{\tau}) &=\hbox{diag} \left\{\sum_{j=1}^{n}\|\theta_{1j}(u_{\tau})\|^{2}, \sum_{j=1}^{n}\|\theta_{2j}(u_{\tau})\|^{2}, \ldots, \sum_{j=1}^{n}\|\theta_{nj}(u_{\tau})\|^{2}\right\} \end{aligned} $$

and \(\sum_{j=1}^{n}\|\xi_{ij}(u(t))\|^{2} \leq \sum_{l=1}^{n}\mu_{l}^{2}, \sum_{j=1}^{n}\|\theta_{ij}(u_{\tau})\|^{2} \leq \sum_{l=1}^{n}\nu_{l}^{2}, \) it follows that

$$ \Upxi(u(t)) \Upxi^{T}(u(t)) \leq \mu^{2}I, \quad \Uptheta(u_{\tau}) \Uptheta^{T}(u_{\tau}) \leq \nu^{2}I. $$
(41)

Substituting (11)–(41) into (10) and making some manipulations, we have

$$ \begin{aligned} D^{+}(V(t,u(t)))&\leq \zeta^{T}(t)\Upomega \zeta(t)+u^{T}(t)\left[\epsilon_{1}^{-1}\mu^{2}+\epsilon_{4}^{-1} \nu^{2}\right]{\mathcal{P}}^{2}u(t)+f^{T}(u(t))\left[\epsilon_{2}^{-1}\mu^{2}+\epsilon_{5}^{-1} \nu^{2}\right]\Uplambda_{1}^{2}f(u(t))\\ &\quad+g^{T}(u(t))\left[\epsilon_{3}^{-1}\mu^{2}+\epsilon_{6}^{-1} \nu^{2}\right]\Uplambda_{2}^{2}g(u(t)) +\left(\int\limits_{t-\sigma}^{t}u(s)ds\right)^{T} \left[\epsilon_{7}^{-1}\mu^{2}+\epsilon_{8}^{-1} \nu^{2}\right]C^{T}P_{11}^{2}C \\ &\quad \left(\int\limits_{t-\sigma}^{t}u(s)ds\right) +\left(\int\limits_{t-\tau_{2}}^{t}u(s)ds\right)^{T} \left[\epsilon_{9}^{-1}\mu^{2}+\epsilon_{10}^{-1} \nu^{2}\right]P_{12}P_{12}^{T}\left(\int\limits_{t-\tau_{2}}^{t}u(s)ds\right) \\ &\quad+\left(\int\limits_{t-\tau_{2}}^{t-\tau_{1}}u(s)ds\right)^{T} \left[\epsilon_{11}^{-1}\mu^{2}+\epsilon_{12}^{-1} \nu^{2}\right]P_{13}P_{13}^{T} \left(\int\limits_{t-\tau_{2}}^{t-\tau_{1}}u(s)ds\right) \\ &\quad+ \dot{u}^{T}(t)\left[\epsilon_{13}^{-1}\mu^{2}+\epsilon_{14}^{-1} \nu^{2}\right]XX^{T}\dot{u}(t), \end{aligned} $$

where

$$ \begin{aligned} \zeta^{T}(t) &= \left[u^{T}(t) \quad u^{T}(t-\tau(t)) \quad u^{T}(t-\tau_{1}) \quad u^{T}(t-\tau_{2}) \quad u^{T}(t-\sigma) \quad f^{T}(u(t)) \quad g^{T}(u(t)) \right.\\ &\quad \left.g^{T}(u(t-\tau(t))) \quad \dot{u}^{T}(t) \quad \left(\int\limits_{t-\sigma}^{t}u(s)ds\right)^{T} \quad \left(\int\limits_{t-\tau_{2}}^{t}u(s)ds\right)^{T} \quad \left(\int\limits_{t-\tau_{2}}^{t-\tau_{1}}u(s)ds\right)^{T} \right]. \end{aligned} $$

From Lemma 4.2 and the well-known Schur complements, D +(V(t,u(t))) < 0 holds if the LMI (9) is true for \({t \in [t_{k-1}, t_{k}), \quad k \in \mathbb{Z}_{+}. }\)

It follows from (9) that

$$ D^{+}(V(t,u(t))) \leq -\zeta^{T}(t) \Uppi^{*} \zeta(t), \quad t \in [t_{k-1}, t_{k}), \quad k \in {\mathbb{Z}}_{+}, $$
(42)

where \(\Uppi^{*}=-\Uppi. \) Suppose that \({t \in [t_{n-1}, t_{n}), n \in \mathbb{Z}_{+}. }\) Then, integrating inequality (42) at each interval [t k−1t k ),   1 ≤ k ≤ n − 1, we derive that

$$ \begin{aligned} V(t^{-}_1)&\leq V(0)-\int\limits_0^{t_1}\zeta^T(s) \Uppi^{*} \zeta(s)ds,\\ V(t^{-}_2)&\leq V(t_1)-\int\limits_{t_1}^{t_2}\zeta^T(s)\Uppi^{*} \zeta(s)ds,\\ &\vdots\\ V(t^{-}_{n-1})&\leq V(t_{n-2})-\int\limits_{t_{n-2}}^{t_{n-1}}\zeta^T(s)\Uppi^{*} \zeta(s)ds,\\ V(t)&\leq V(t_{n-1})-\int\limits_{t_{n-1}}^{t}\zeta^T(s)\Uppi^{*} \zeta(s)ds, \end{aligned} $$

which implies that

$$ V(t) \leq V(0)-\int\limits_0^t \zeta^T(s) \Uppi^{*} \zeta(s)ds+\sum_{0<t_k\leq t}[V(t_k)-V(t^{-}_k)],\quad t \geq 0. $$
(43)

In order to analyze inequality (43), we need to consider the change of V at impulse times. First, it follows from (8) that

$$ \begin{aligned} \left[ \begin{array}{cc} P &(I-D_k)P \\ \star & P \end{array}\right] \geq 0 & \Leftrightarrow \left[ \begin{array}{cc} I & 0 \\ 0 & P^{-1} \end{array}\right] \left[ \begin{array}{cc} P & (I-D_k)P \\ \star & P \end{array}\right] \left[ \begin{array}{cc} I & 0 \\ 0 & P^{-1} \end{array}\right] \geq 0 \\ &\Leftrightarrow \left[ \begin{array}{cc} P & I-D_k \\ \star & P^{-1} \end{array}\right] \geq 0 \\ &\Leftrightarrow P-(I-D_k)^TP(I-D_k)>0, \end{aligned} $$
(44)

in which the last equivalent relation is obtained by the well-known Schur complements. Secondly, from model (7), it can be obtained that

$$ \begin{aligned} u(t_k)-C\int\limits_{t_k-\sigma}^{t_k} u(s) ds&=u(t^{-}_k)-D_k\left[u(t^{-}_k)-C\int\limits_{t_k-\sigma}^{t_k} u(s) ds\right]-C\int\limits_{t_k-\sigma}^{t_k} u(s) ds \\ &=(I-D_k)\left[u(t^{-}_k)-C\int\limits_{t_k-\sigma}^{t_k} u(s) ds \right], \end{aligned} $$

which together with (44) yields

$$ \begin{aligned} V_1(t_k)&= \left[u(t_k)- C\int\limits_{t_k-\sigma}^{t_k} u(s) ds \right]^T P\left[u(t_k)- C\int\limits_{t_k-\sigma}^{t_k} u(s) ds\right]\\ &=\left[u(t^{-}_k)-C\int\limits_{t_k-\sigma}^{t_k} u(s) ds \right]^T (I-D_k)^T P(I-D_k)\left[u(t^{-}_k)- C\int\limits_{t_k-\sigma}^{t_k} u(s) ds \right]\\ &\leq\left[u(t^{-}_k)- C\int\limits_{t_k-\sigma}^{t_k} u(s) ds \right]^T P \left[u(t^{-}_k)-C\int\limits_{t_k-\sigma}^{t_k} u(s) ds \right]\\ &=V_1(t^{-}_k). \end{aligned} $$

Thus, we can deduce that

$$ V(t_k) \leq V(t^{-}_k), k\in {\mathbb{Z}}_+. $$

Substituting the above inequality into (43) yields

$$ V(t)+\int\limits_0^t \zeta^T(s)\Uppi^{*} \zeta(s)ds \leq V(0),\quad t\geq0. $$
(45)

Applying Lemma 4.3 and (45), we have

$$ \begin{aligned} \left\|C\int\limits_{t-\sigma}^{t} u(s) ds\right\|^2&= \left[C\int\limits_{t-\sigma}^{t} u(s) ds\right]^T \left[C\int\limits_{t-\sigma}^{t}u(s) ds \right]\\ &\leq \lambda_{\max}(C^2)\left[\int\limits_{t-\sigma}^t u(s)ds\right]^T \left[\int\limits_{t-\sigma}^t u(s)ds\right]\\ &\leq \frac{\lambda_{\max}(C^2)}{\lambda_{\min}(Q_3)}\left[\int\limits_{t-\sigma}^tu(s)ds\right]^T Q_3 \left[\int\limits_{t-\sigma}^tu(s)ds\right]\\ &\leq \sigma\frac{\lambda_{\max}(C^2)}{\lambda_{\min}(Q_3)} \int\limits_{t-\sigma}^tu^T(s)Q_3u(s)ds \\ &\leq\sigma\frac{\lambda_{\max}(C^2)}{\lambda_{\min}(Q_3)} V_3(t) \leq \sigma\frac{\lambda_{\max}(C^2)}{\lambda_{\min}(Q_3)} V(t)\leq \sigma\frac{\lambda_{\max}(C^2)}{\lambda_{\min}(Q_3)} V(0), t\geq0. \end{aligned} $$

Similarly,

$$ \begin{aligned} \left\|u(t)- C\int\limits_{t-\sigma}^{t} u(s) ds\right\|^2=&\left[u(t)- C\int\limits_{t-\sigma}^{t} u(s) ds\right]^T \left[u(t)- C\int\limits_{t-\sigma}^{t} u(s) ds\right]\\ &\leq \frac{V_1(t)}{\lambda_{\min}(P)}\leq \frac{V(t)}{\lambda_{\min}(P)} \leq \frac{V(0)}{\lambda_{\min}(P)}, t\geq0. \end{aligned} $$

Hence, it can be obtained that

$$ \begin{aligned} \left\|u(t)\right\|&\leq \left\|C\int\limits_{t-\sigma}^{t} u(s) ds\right\|+ \left\|u(t)-C\int\limits_{t-\sigma}^{t} u(s) ds\right\|\\ &\leq \sqrt{\sigma\frac{\lambda_{\max}(C^2)}{\lambda_{\min}(Q_3)} V(0)}+ \sqrt{\frac{V(0)}{\lambda_{\min}(P)}}<\infty, t\geq0, \end{aligned} $$

where

$$ \begin{aligned} V(0)&= \xi^{T}(0)P \xi(0)+\sum_{i=1}^{n} \left\{ \lambda_{1i} \int\limits_{0}^{u_{i}(0)}(\sigma_{1i}s-f_{i}(s))ds+\lambda_{2i} \int\limits_{0}^{u_{i}(0)}(\sigma_{2i}s-g_{i}(s))ds\right\} \\ &\quad + \int\limits_{-\tau_{1}}^{0}u^{T}(s)Q_{1}u(s)ds +\int\limits_{-\tau_{2}}^{0}u^{T}(s)Q_{2}u(s)ds+\int\limits_{-\sigma}^{0}u^{T}(s)Q_{3}u(s)ds\\ &\quad+\int\limits_{-\tau(0)}^{0}u^{T}(s)Q_{4}u(s)ds+\int\limits_{-\tau(0)}^{0}g^{T}(u(s))Q_{5}g(u(s))ds\\ &\quad +\sigma \int\limits_{-\sigma}^{0}\int\limits_{\theta}^{0}u^{T}(s)R_{1}u(s)dsd\theta+\tau_{2}(\tau_{2}-\tau_{1}) \int\limits_{-\tau_{2}}^{0}\int\limits_{\theta}^{0}\dot{u}^{T}(s)S_{1}\dot{u}(s)dsd\theta\\ &\quad +(\tau_{2}-\tau_{1})\int\limits_{-\tau_{2}}^{-\tau_{1}}\int\limits_{\theta}^{0}\dot{u}^{T}(s)S_{2} \dot{u}(s)dsd\theta+ 2\tau_{2}^{2}\int\limits_{-\tau_{2}}^{0}\int\limits_{\theta}^{0}\int\limits_{\lambda}^{0}\dot{u}^{T}(s) T_{1}\dot{u}(s)dsd\theta d\lambda \\ &\quad+2[\tau_{2}^{2}-\tau_{1}^{2}]\int\limits_{-\tau_{2}}^{-\tau_{1}} \int\limits_{\theta}^{0}\int\limits_{\lambda}^{0}\dot{u}^{T}(s) T_{2}\dot{u}(s)dsd\theta d\lambda +2\sigma^{2}\int\limits_{-\sigma}^{0}\int\limits_{\theta}^{0}\int\limits_{\lambda}^{0}\dot{u}^{T}(s) T_{3}\dot{u}(s)dsd\theta d\lambda \\ &\leq \left[2 \lambda_{\max}(P)+2 \lambda_{\max}(\Uplambda_{1}+\Uplambda_{2})\lambda_{\max}(\Upsigma_{1}-\Upsigma_{2}) \tau_{1}\lambda_{\max}(Q_{1})\tau_{2}\lambda_{\max}(Q_{2})+\sigma\lambda_{\max}(Q_{3})\right.\\ &\quad+\tau_{2}\lambda_{\max}(Q_{4})+\tau_{2}\lambda_{\max}(Q_{5})\lambda_{\max}(\Upsigma_{2}) +\sigma^{3}\lambda_{\max}(R_{1})+\tau_{2}^{3}(\tau_{2}-\tau_{1})\lambda_{\max}(S_{1})\\ &\quad+(\tau_{2}-\tau_{1})^{3}\lambda_{\max}(S_{2})+2\tau_{2}^{5}\lambda_{\max}(T_{1}) +[\tau_{2}^{2}-\tau_{1}^{2}](\tau_{2}-\tau_{1})^{3}\lambda_{\max}(T_{2})\\ &\quad \left.+2\sigma^{5}\lambda_{\max}(T_{3}) \right]\|\varphi\|^2_{\rho}<\infty. \end{aligned} $$

So the solution u(t) of model (7) is uniformly bounded on \([0,\infty).\) Thus, considering the continuity of the activation function f (i.e., Assumption 1), it can be deduced from system (6) that there exists some constant M > 0 such that \({\|\dot{u}(t)\|\leq M, t\in[t_{k-1},t_k), k\in \mathbb{Z}_+.}\) This implies that \({| \dot{u}_i(t) |\leq M, t\in[t_{k-1},t_k), k\in \mathbb{Z}_+,i\in \Uptheta,}\) where \(\dot{u}\) denotes the right-hand derivative of u at the impulsive times.

In the following, we shall prove \(||u(t)||\rightarrow0\) as \(t\rightarrow\infty.\) We first show that

$$ \|u(t_k) \|\rightarrow 0 \quad\hbox{as} \quad t_k\rightarrow \infty. $$
(46)

Obviously, it suffices to prove that \(|u_i(t_k)| \rightarrow 0\) as \(t_k \rightarrow \infty, i \in \Uptheta. \) First, note that \({|\dot{u}_i(t)| \leq M, t \in [t_{k-1},t_k), k\in \mathbb{Z}_+; }\) then for any \(\epsilon>0, \) there exists a \(\delta=\frac{\epsilon}{2M}>0\) such that for any \({t{'},t{''}\in [t_{k-1},t_k),k\in \mathbb{Z}_+}\), \(|t{'}-t{''}| < \delta\) implies

$$ |u_i(t{'})-u_i(t{''})|\leq M|t{'}-t{''}|< M\delta=\frac{\epsilon}{2}, \quad i\in \Uptheta. $$
(47)

By Assumption 6, we define \(\bar{\delta}=\min\{\delta, \frac{1}{2}\theta\},\) where \({\theta=\inf_{k\in \mathbb{Z}_+}\{t_{k}-t_{k-1}\}>0.}\) From (45), it can be obtained that

$$ \int\limits_0^t |u_i(s)|^2ds \leq \int\limits_0^t u^T(s)u(s)ds \leq \int\limits_0^t \zeta^T(s) \zeta(s) ds \leq \frac{1}{\lambda_{\min}(\Uppi^{*})}\int\limits_0^t \zeta^T(s) \Uppi^{*} \zeta(s)ds<\infty, t>0, $$

which implies

$$ \int\limits_{t_{k}}^{t_{k}+\bar{\delta}} |u_i(s)|^2ds \rightarrow 0 \quad \hbox{as} \quad t_k \rightarrow \infty. $$

Applying Lemma 4.3, we get

$$ \int\limits_{t_{k}}^{t_{k}+\bar{\delta}} |u_i(s)| ds \leq \sqrt{\bar{\delta}\int\limits_{t_{k}}^{t_{k}+\bar{\delta}} |u_i(s)|^2 ds} \rightarrow 0\, \hbox{as} \quad t_k \rightarrow \infty. $$
(48)

So, for the \(\epsilon\) given above, there exists a \(T=T(\epsilon)>0\) such that t k  > T implies

$$ \int\limits_{t_{k}}^{t_{k}+\bar{\delta}} |u_i(s)| ds <\frac{\epsilon}{2}\bar{\delta}. $$

From the continuity of |u i (t)| on \([t_{k},t_{k}+\bar{\delta}]\) and the integral mean value theorem, there exists some constant \(\xi_k\in[t_{k},t_{k}+\bar{\delta}]\) such that

$$ |u_i(\xi_k)|\bar{\delta}=\int\limits_{t_{k}}^{t_{k}+\bar{\delta}} |u_i(s)| ds <\frac{\epsilon}{2}\bar{\delta}, $$

which leads to

$$ |u_i(\xi_k)|\leq \frac{\epsilon}{2}. $$
(49)

Combining (47) with (49), one may deduce that for any \(\epsilon>0, \) there exists a \(T=T(\epsilon)>0\) such that t k  > T implies

$$ |u_i(t_k)|\leq |u_i(t_k)-u_i(\xi_k)|+|u_i(\xi_k)| \leq \frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon. $$

This completes the proof of (46).

Now we are in a position to prove that \(|u_i(t)|\rightarrow0\) as \(t \rightarrow\infty,i\in \Uptheta.\)

In fact, it follows from (47) that for any \(\epsilon>0,\) there exists a \(\delta=\frac{\epsilon}{2M}>0\) such that for any \({t{'},t{''}\in [t_{k-1},t_k),k\in \mathbb{Z}_+}\) and \(|t{'}-t{''}| < \delta\) implies

$$ |u_i(t{'})-u_i(t{''})|\leq \frac{\epsilon}{2}, i\in \Uptheta. $$
(50)

Since (46) holds, there exists a constant \(T_1=T_1(\epsilon)>0\) such that

$$ |u_i(t_k)|<\frac{\varepsilon}{2},\quad t_k>T_1. $$
(51)

In addition, applying the same argument as (48), we can deduce that

$$ \int\limits_t^{t+\bar{\delta}} |u_i(s)| ds \rightarrow 0 \quad\hbox{as} \quad t \rightarrow \infty, $$

where \({\bar{\delta}=\min\{\delta, \frac{1}{2}\theta\}, \theta=\inf_{k\in \mathbb{Z}_+}\{t_{k}-t_{k-1}\}>0.}\) So, for the \(\epsilon\) given above, there exists a constant \(T_2=T_2(\epsilon)>0\) such that

$$ \int\limits_{t-\bar{\delta}}^{t} |u_i(s)| ds <\frac{\epsilon}{2}\bar{\delta}, t>T_2. $$
(52)

Set \({T^\ast=\min\{t_l | t_l\geq \max\{T_1,T_2\}, l\in \mathbb{Z}_+\}.}\) Now, we claim that \(|u_i(t)|\leq \epsilon, t>T^\ast.\) In fact, for any \(t>T^\ast\) and without loss of generality assume that \(t\in [t_m,t_{m+1}), m\geq l.\) Now, we consider the following two cases:

  1. Case 1.

    \(t\in [t_m,t_m+\bar{\delta}].\)

    In this case, it is obvious from (50) and (51) that

    $$ |u_i(t)|\leq|u_i(t)-u_i(t_m)|+|u_i(t_m)|\leq \frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon. $$
  2. Case 2.

    \(t\in [t_m+\bar{\delta}, t_{m+1}).\)

    In this case, we know that u i (s) is continuous on \([ t-\bar{\delta},t]\subseteq [t_m,t_{m+1}).\) By the integral mean value theorem, there exists at least one point \(\tau_t\in[t-\bar{\delta},t]\) such that

    $$ \int_{t-\bar{\delta}}^{t} |u_i(s)| ds=|u_i(\tau_t)|\bar{\delta}, $$

    which together with (52) yields \(|u_i(\tau_t)|<\frac{\epsilon}{2}.\) Then, in view of \(\tau_t\in[t-\bar{\delta},t],\) we obtain

    $$ |u_i(t)|\leq|u_i(t)-u_i(\tau_t)|+|u_i(\tau_t)|\leq \frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon. $$

    So in either case we have proved that \(|u_i(t)|\leq \epsilon, t>T^\ast.\) Therefore, the zero solution of system (7) is globally asymptotically stable, which implies that model (1) has a unique equilibrium point which is globally asymptotically stable. This completes the proof.

When \(a_{ij}=d_{ijl}=0, i, j, l=1,2,\ldots,n, \) system (1) reduces to the following high-order Hopfield neural network with time-varying delays and impulsive perturbations:

$$ \begin{aligned} \dot{x}_{i}(t) &= -c_{i}x_{i}(t-\sigma) +\sum_{j=1}^{n}b_{ij}\tilde{g}_{j}(x_{j}(t-\tau_{j}(t)))+\sum_{j=1}^{n} \sum_{l=1}^{n}e_{ijl}\tilde{g}_{j}(x_{j}(t-\tau_{j}(t)))\tilde{g}_{l}(x_{l}(t-\tau_{l}(t))) +J_{i} \\ \Updelta x_{i}(t_{k}) &= J_{ik}(x_{i}(t_{k}^{-})), \quad k \in {\mathbb{Z}}_{+}, \end{aligned} $$
(53)

where \(i=1,2,\ldots,n. \) Before proceeding, we state the following assumption, which will be used in the next theorem.

Assumption 7

There exist constants ν j  > 0, γ j and σ j (\(j=1,2,\ldots,n\)) such that γ j  < σ j and

$$ \left|\tilde{g}_{j}(s)\right| \leq \nu_{j}, \quad \gamma_{j} \leq \frac{\tilde{g}_{j}(u)-\tilde{g}_{j}(v)}{u-v}\leq \sigma_{j},\quad \hbox{for any} s,u, v \in {\mathbb{R}}, u \neq v. $$
(54)

System (53) can be rewritten in the following form:

$$ \begin{aligned} \dot{u}(t)&=-Cu(t-\sigma)+(B+\Uptheta(u(t-\tau(t))){\mathcal{E}})g(u(t-\tau(t)))\\ \Updelta u(t_{k})&= J_{k}(u(t_{k}^{-})), \quad k \in {\mathbb{Z}}_{+}, \end{aligned} $$
(55)

or in equivalent form:

$$ \begin{aligned} \frac{d}{{\text d}t}\left[u(t)-C\int\limits_{t-\sigma}^{t}u(s)ds\right]&= -Cu(t)+(B+\Uptheta(u(t-\tau(t))){\mathcal{E}})g(u(t-\tau(t))) \\ \Updelta u(t_{k})&= J_{k}(u(t_{k}^{-})), \quad k \in {\mathbb{Z}}_{+}. \end{aligned} $$
(56)

Now, we present a global asymptotic stability result for the delayed high-order neural networks with time delay in the leakage term and impulsive perturbations.

Theorem 5.2

For given scalars τ2 > τ1 ≥ 0, σ and η, the equilibrium point of (56) is globally asymptotically stable if there exist a symmetric matrix \(P=\left[ \begin{array}{ccc} P_{11} & P_{12} & P_{13} \\ P_{12}^{T} & P_{22} & P_{23}\\ P_{13}^{T} & P_{23}^{T} & P_{33} \end{array} \right]>0,\) positive diagonal matrices \(Q_{l}, l=1,2,\ldots,5, R_{1}, S_{1}, S_{2}, T_{1}, T_{2}, T_{3}, Z_{1}, Z_{2}, Z_{3}, \Uplambda_{2}, \) positive scalars \(\epsilon_{j}, j=4,6,8,10,12,14, \) and a real matrix X such that the following LMIs hold:

$$ \left[ \begin{array}{cc} P& (I-D_{k}) \\ * & P \end{array} \right] \geq 0, \quad k \in {\mathbb{Z}}_{+} $$
(57)
$$ \hat{\Uppi}=\left[ \begin{array}{cc} \hat{\Upomega} &\hat{\Uppsi} \\ \hat{\Uppsi}^{T} & -\hat{\Upupsilon} \end{array} \right]<0 $$
(58)

where \(\hat{\Upomega}=[\hat{\Upomega}_{ij}]_{11 \times 11}, \hat{\Uppsi}=[\hat{\Uppsi}_{ij}]_{11 \times 6}, \hat{\Upupsilon}=\hbox{diag}\{\epsilon_{4}I,\epsilon_{6}I, \epsilon_{8}I, \epsilon_{10}I, \epsilon_{12}I, \epsilon_{14}I\}\),

$$ \begin{aligned} \hat{\Upomega}_{1,1}&= -2P_{11}C+P_{12}+P_{12}^{T}+\sum_{i=1}^{4}Q_{i}+\sigma^{2}R_{1} -\frac{1}{\tau_{2}}S_{1}-4\tau_{2}^{2}T_{1}-4(\tau_{2}-\tau_{1})^{2}T_{2} -4\sigma^{2}T_{3}\\ &\quad-2\Upgamma_{1}\Upsigma_{1}Z_{1}-2\Upgamma_{2}\Upsigma_{2}Z_{2}, \quad \hat{\Upomega}_{1,2}=\frac{1}{\tau_{2}}S_{1}, \quad \hat{\Upomega}_{1,3}=P_{13}, \quad \hat{\Upomega}_{1,4}=-P_{12}-P_{13}, \\ \hat{\Upomega}_{1,5} &= -2\Upsigma_{2}\Uplambda_{2}C, \quad \hat{\Upomega}_{1,6}=Z_{2}(\Upsigma_{2}+\Upgamma_{2}), \quad \hat{\Upomega}_{1,7}=P_{11}B+\Upsigma_{2}\Uplambda_{2}B,\\ \hat{\Upomega}_{1,9}&= C^{T}P_{11}C-P_{12}C+4\sigma T_{3}, \quad \hat{\Upomega}_{1,10}=-C^{T}P_{12}+P_{22}^{T}+4 \tau_{2} T_{1}, \\ \hat{\Upomega}_{1,11}&=-C^{T}P_{13}+P_{23}+4(\tau_{2}-\tau_{1})T_{2},\quad \hat{\Upomega}_{2,2}=-(1-\eta)Q_{4}-\frac{1}{\tau_{2}}S_{1} -\frac{1}{\tau_{2}-\tau_{1}}(\tau_{2}S_{1}+S_{2})\\ &\quad -\frac{1}{\tau_{2}-\tau_{1}}S_{2}-2\Upgamma_{2}\Upsigma_{2}Z_{3}, \quad \hat{\Upomega}_{2,3}=\frac{1}{\tau_{2}-\tau_{1}}S_{2}, \quad \hat{\Upomega}_{2,4}=\frac{1}{\tau_{2}-\tau_{1}}(\tau_{2}S_{1}+S_{2}),\\ \hat{\Upomega}_{2,7}&=Z_{3}(\Upsigma_{2}+\Upgamma_{2}), \quad \hat{\Upomega}_{3,3}=-Q_{1}-\frac{1}{\tau_{2}-\tau_{1}}S_{2}, \quad \hat{\Upomega}_{3,9}=-P_{13}C, \quad \Upomega_{3,10}=P_{23}^{T}, \quad \hat{\Upomega}_{3,11}=P_{33}^{T}, \\ \Upomega_{4,4}&= -Q_{2} -\frac{1}{\tau_{2}-\tau_{1}}(\tau_{2}S_{1}+S_{2}), \quad \hat{\Upomega}_{4,9}=P_{12}C+P_{13}C, \quad \hat{\Upomega}_{4,10}=-P_{22}^{T}-P_{23}^{T}, \\ \hat{\Upomega}_{4,11}&=-P_{23}-P_{33}^{T}, \hat{\Upomega}_{5,5}=-Q_{3}, \quad \hat{\Upomega}_{5,6}=C^{T}\Uplambda_{2}, \quad \hat{\Upomega}_{5,8}=-C^{T}X, \\ \hat{\Upomega}_{6,6}&=Q_{5}-2Z_{2}, \quad \hat{\Upomega}_{6,7}=-\Uplambda_{2}B, \quad \hat{\Upomega}_{7,7}= -(1-\eta)Q_{5}-2Z_{3}+\tilde{\epsilon}{\mathcal{E}}{\mathcal{E}}, \\ \hat{\Upomega}_{7,8}&=B^{T}X, \quad \hat{\Upomega}_{7,9}=-B^{T}P_{11}C, \quad \hat{\Upomega}_{7,10}=-B^{T}P_{12}, \quad \hat{\Upomega}_{7,11}= -B^{T}P_{13}, \\ \hat{\Upomega}_{8,8}&=\tau_{2}^{2}(\tau_{2}-\tau_{1})S_{1}+\tau_{2}^{4}T_{1} +(\tau_{2}^{2}-\tau_{1}^{2})T_{2}+\sigma^{4}T_{3}+(\tau_{2}-\tau_{1})^{2}S_{2}-X-X^{T},\\ \hat{\Upomega}_{9,9}&= \frac{1}{\sigma}R_{1}-4T_{3}, \quad \hat{\Upomega}_{10,10}=-4T_{1}, \quad \hat{\Upomega}_{11,11}=-4T_{2}, \quad \hat{\Uppsi}_{1,1}=\nu (P_{11}+\Upsigma_{2}\Uplambda_{2}), \\ \hat{\Uppsi}_{6,2}&=\nu \Uplambda_{2}, \quad \hat{\Uppsi}_{9,3}=\nu C^{T}P_{11}, \quad \hat{\Uppsi}_{10,4}=\nu P_{12}^{T}, \quad \hat{\Uppsi}_{11,5}=\nu P_{13}^{T}, \quad \hat{\Uppsi}_{8,6}=\nu X, \\ \tilde{\epsilon}&=\epsilon_{4}+\epsilon_{6} +\epsilon_{8}+\epsilon_{10}+\epsilon_{12}+\epsilon_{14}. \end{aligned} $$

Remark 5.3

The augmented Lyapunov–Krasovskii functional constructed in this paper, which contains triple-integral and leakage-delay terms, is new and more efficient than the one employed in [6].

Remark 5.4

Recently, a few authors have introduced triple-integral terms into the Lyapunov–Krasovskii functional when deriving stability results for neural networks; see, for example, [20]. However, the leakage delay was not included in those triple-integral terms. Motivated by this observation, we include the leakage delay in the triple integrals, which is one of the reasons for the reduced conservatism of our conditions. The free-weighting matrix method has also been applied to obtain less conservative stability conditions. In [6], the authors employed a few free-weighting matrices and obtained stability results that improve on earlier ones in the literature; nevertheless, there is still room for improvement over the results in [6]. Motivated by this, we introduce triple-integral terms for the interval time-varying delay into the Lyapunov–Krasovskii functional and derive less conservative stability results. This plays an important role in further reducing conservatism, and we obtain a better upper bound than that reported in [6].

When there is no leakage delay, that is, when σ = 0, model (7) becomes

$$ \frac{{\text d}u(t)}{{\text d}t}= -Cu(t)+(A+\tilde{\Upxi}^{T}\tilde{{\mathcal{D}}})f(u(t))+(B+\tilde{\Uptheta}^{T}\tilde{{\mathcal{E}}})g(u(t-\tau(t))). $$
(59)

For model (59), we have the following result.

Theorem 5.5

For given scalars τ2 > τ1 ≥ 0, σ and η, the equilibrium point of (59) is globally asymptotically stable if there exist a symmetric matrix \(P=\left[ \begin{array}{ccc} P_{11} & P_{12} & P_{13} \\ P_{12}^{T} & P_{22} & P_{23}\\ P_{13}^{T} & P_{23}^{T} & P_{33} \end{array} \right]>0\), positive diagonal matrices \(Q_{l}\ (l = 1, 2, 4)\), \(S_{1}, S_{2}, T_{1}, T_{2}\), positive scalars \(\epsilon_{j}, j=1,2,\ldots,12,\) and a real matrix X such that the following LMI holds:

$$ \Uppi=\left[ \begin{array}{cc} \tilde{\Upomega} & \tilde{\Uppsi} \\ \tilde{\Uppsi}^{T} & -\tilde{\Upupsilon} \end{array} \right]<0 $$
(60)

where \(\tilde{\Upomega}=[\tilde{\Upomega}_{ij}]_{10 \times 10}, \tilde{\Uppsi}=[\tilde{\Uppsi}_{ij}]_{10 \times 12}, \tilde{\Upupsilon}=\hbox{diag}\{\epsilon_{1}I,\epsilon_{2}I, \ldots, \epsilon_{12}I\}\),

$$ \begin{aligned} \tilde{\Upomega}_{1,1}&= -{\mathcal{P}}C-C{\mathcal{P}}+P_{12}+P_{12}^{T}+\sum_{i=1}^{4}Q_{i} -\frac{1}{\tau_{2}}S_{1}-4\tau_{2}^{2}T_{1}-4(\tau_{2}-\tau_{1})^{2}T_{2} \\ &\quad-2\Upgamma_{1}\Upsigma_{1}Z_{1}-2\Upgamma_{2}\Upsigma_{2}Z_{2}, \quad \tilde{\Upomega}_{1,2}=\frac{1}{\tau_{2}}S_{1}, \quad \tilde{\Upomega}_{1,3}=P_{13}, \quad \tilde{\Upomega}_{1,4}=-P_{12}-P_{13}, \\ \tilde{\Upomega}_{1,5}&=Z_{1}(\Upsigma_{1}+\Upgamma_{1})+{\mathcal{P}}A,\quad \tilde{\Upomega}_{1,6}=Z_{2}(\Upsigma_{2}+\Upgamma_{2}), \quad \tilde{\Upomega}_{1,7}={\mathcal{P}}B,\\ \tilde{\Upomega}_{1,9}&=-C^{T}P_{12}+P_{22}^{T}+4 \tau_{2} T_{1}, \quad \tilde{\Upomega}_{1,10}=-C^{T}P_{13}+P_{23}+4(\tau_{2}-\tau_{1})T_{2}, \\ \tilde{\Upomega}_{2,2}&=-(1-\eta)Q_{4}-\frac{1}{\tau_{2}}S_{1} -\frac{1}{\tau_{2}-\tau_{1}}(\tau_{2}S_{1}+S_{2}) -\frac{1}{\tau_{2}-\tau_{1}}S_{2}-2\Upgamma_{2}\Upsigma_{2}Z_{3}, \\ \tilde{\Upomega}_{2,3}&=\frac{1}{\tau_{2}-\tau_{1}}S_{2}, \quad \tilde{\Upomega}_{2,4}=\frac{1}{\tau_{2}-\tau_{1}}(\tau_{2}S_{1}+S_{2}), \quad \tilde{\Upomega}_{2,7}=Z_{3}(\Upsigma_{2}+\Upgamma_{2}), \\ \tilde{\Upomega}_{3,3}&=-Q_{1}-\frac{1}{\tau_{2}-\tau_{1}}S_{2}, \quad \tilde{\Upomega}_{3,9}=P_{23}^{T}, \quad \tilde{\Upomega}_{3,10}=P_{33}^{T}, \\ \tilde{\Upomega}_{4,4}&= -Q_{2} -\frac{1}{\tau_{2}-\tau_{1}}(\tau_{2}S_{1}+S_{2}), \quad \tilde{\Upomega}_{4,9}=-P_{22}^{T}-P_{23}^{T}, \quad \tilde{\Upomega}_{4,10}=-P_{23}-P_{33}^{T}, \\ \tilde{\Upomega}_{5,5}&= -2Z_{1}-\Uplambda_{1}A-A^{T}\Uplambda_{1}+\bar{\epsilon}{\mathcal{D}}^{T}{\mathcal{D}}, \quad \tilde{\Upomega}_{5,6}=-A^{T}\Uplambda_{2}, \quad \tilde{\Upomega}_{5,7}=-\Uplambda_{1}B, \quad \tilde{\Upomega}_{5,8}=A^{T}X, \\ \tilde{\Upomega}_{5,9}&=-A^{T}P_{12}, \quad \tilde{\Upomega}_{5,10}=-A^{T}P_{13}, \quad \tilde{\Upomega}_{6,6}=Q_{5}-2Z_{2}, \quad \tilde{\Upomega}_{6,7}=-\Uplambda_{2}B, \\ \tilde{\Upomega}_{7,7}&= -(1-\eta)Q_{5}-2Z_{3}+\tilde{\epsilon}{\mathcal{E}}^{T}{\mathcal{E}}, \quad \tilde{\Upomega}_{7,8}=B^{T}X, \quad \tilde{\Upomega}_{7,9}=-B^{T}P_{12}, \\ \tilde{\Upomega}_{7,10}&= -B^{T}P_{13}, \quad \tilde{\Upomega}_{8,8}=\tau_{2}^{2}(\tau_{2}-\tau_{1})S_{1}+\tau_{2}^{4}T_{1} +(\tau_{2}^{2}-\tau_{1}^{2})T_{2}+(\tau_{2}-\tau_{1})^{2}S_{2}-X-X^{T}, \\ \tilde{\Upomega}_{9,9}&=-4T_{1}, \quad \tilde{\Upomega}_{10,10}=-4T_{2}, \quad \tilde{\Uppsi}_{1,1}=\mu {\mathcal{P}}, \quad \tilde{\Uppsi}_{5,2}=\mu \Uplambda_{1}, \quad \tilde{\Uppsi}_{6,3}=\mu \Uplambda_{2}, \\ \tilde{\Uppsi}_{1,4}&=\nu {\mathcal{P}},\quad \tilde{\Uppsi}_{5,5}=\nu \Uplambda_{1}, \quad \tilde{\Uppsi}_{6,6}=\nu \Uplambda_{2}, \quad\tilde{\Uppsi}_{9,7}=\mu P_{12}^{T}, \\ \tilde{\Uppsi}_{9,8}&=\nu P_{12}^{T}, \quad \tilde{\Uppsi}_{10,9}=\mu P_{13}^{T}, \quad \tilde{\Uppsi}_{10,10}=\nu P_{13}^{T}, \quad \tilde{\Uppsi}_{8,11}=\mu X, \quad \tilde{\Uppsi}_{8,12}=\nu X, \\ {\mathcal{P}}&=P_{11}+\Upsigma_{1}\Uplambda_{1}+\Upsigma_{2}\Uplambda_{2}, \quad \bar{\epsilon}=\epsilon_{1}+\epsilon_{2}+\epsilon_{3} +\epsilon_{7}+\epsilon_{9}+\epsilon_{11}, \\ \tilde{\epsilon}&=\epsilon_{4}+\epsilon_{5}+\epsilon_{6} +\epsilon_{8}+\epsilon_{10}+\epsilon_{12}. \end{aligned} $$

When there is no leakage delay (σ = 0), model (56) becomes

$$ \frac{{\text d}u(t)}{{\text d}t}= -Cu(t)+(B+\tilde{\Uptheta}^{T}\tilde{{\mathcal{E}}})g(u(t-\tau(t))). $$
(61)

For model (61), we have the following result.

Theorem 5.6

For given scalars τ2 > τ1 ≥ 0, σ and η, the equilibrium point of (61) is globally asymptotically stable if there exist a symmetric matrix \(P=\left[ \begin{array}{ccc} P_{11} & P_{12} & P_{13} \\ P_{12}^{T} & P_{22} & P_{23}\\ P_{13}^{T} & P_{23}^{T} & P_{33} \end{array} \right]>0\), positive diagonal matrices \(Q_{l}\ (l = 1, 2, 4)\), \(S_{1}, S_{2}, T_{1}, T_{2}\), positive scalars \(\epsilon_{j}, j=4,6,10,12,14,\) and a real matrix X such that the following LMI holds:

$$ \hat{\Uppi}=\left[ \begin{array}{cc} \hat{\Upomega}& \hat{\Uppsi} \\ \hat{\Uppsi}^{T} & -\hat{\Upupsilon} \end{array} \right]<0 $$
(62)

where \(\hat{\Upomega}=[\hat{\Upomega}_{ij}]_{11 \times 11}, \hat{\Uppsi}=[\hat{\Uppsi}_{ij}]_{11 \times 6}, \hat{\Upupsilon}=\hbox{diag}\{\epsilon_{4}I,\epsilon_{6}I, \epsilon_{8}I, \epsilon_{10}I, \epsilon_{12}I, \epsilon_{14}I\}, \)

$$ \begin{aligned} \hat{\Upomega}_{1,1}&= -{\mathcal{P}}C-C{\mathcal{P}}+P_{12}+P_{12}^{T}+Q_{1}+Q_{2}+Q_{4} -\frac{1}{\tau_{2}}S_{1}-4\tau_{2}^{2}T_{1}-4(\tau_{2}-\tau_{1})^{2}T_{2} \\ &\quad-2\Upgamma_{2}\Upsigma_{2}Z_{2}, \quad \hat{\Upomega}_{1,2}=\frac{1}{\tau_{2}}S_{1}, \quad \hat{\Upomega}_{1,3}=P_{13}, \quad\hat{\Upomega}_{1,4}=-P_{12}-P_{13}, \\ \hat{\Upomega}_{1,5}&=Z_{2}(\Upsigma_{2}+\Upgamma_{2}), \quad \hat{\Upomega}_{1,6}=P_{11}B+\Upsigma_{2}\Uplambda_{2}B, \quad \hat{\Upomega}_{1,8}=-C^{T}P_{12}+P_{22}^{T}+4 \tau_{2} T_{1}, \\ \hat{\Upomega}_{1,9}&=-C^{T}P_{13}+P_{23}+4(\tau_{2}-\tau_{1})T_{2},\quad \hat{\Upomega}_{2,2}=-(1-\eta)Q_{4}-\frac{1}{\tau_{2}}S_{1} -\frac{1}{\tau_{2}-\tau_{1}}(\tau_{2}S_{1}+S_{2})\\ &\quad -\frac{1}{\tau_{2}-\tau_{1}}S_{2}-2\Upgamma_{2}\Upsigma_{2}Z_{3}, \quad \hat{\Upomega}_{2,3}=\frac{1}{\tau_{2}-\tau_{1}}S_{2}, \quad \hat{\Upomega}_{2,4}=\frac{1}{\tau_{2}-\tau_{1}}(\tau_{2}S_{1}+S_{2}),\\ \hat{\Upomega}_{2,6}&=Z_{3}(\Upsigma_{2}+\Upgamma_{2}), \quad \hat{\Upomega}_{3,3}=-Q_{1}-\frac{1}{\tau_{2}-\tau_{1}}S_{2}, \quad \hat{\Upomega}_{3,8}=P_{23}^{T}, \quad \hat{\Upomega}_{3,9}=P_{33}^{T}, \\ \hat{\Upomega}_{4,4}&= -Q_{2} -\frac{1}{\tau_{2}-\tau_{1}}(\tau_{2}S_{1}+S_{2}), \quad \hat{\Upomega}_{4,8}=-P_{22}^{T}-P_{23}^{T}, \quad \hat{\Upomega}_{4,9}=-P_{23}-P_{33}^{T}, \\ \hat{\Upomega}_{5,5}&= Q_{5}-2Z_{2}, \quad \hat{\Upomega}_{5,6}=-\Uplambda_{2}B, \quad \hat{\Upomega}_{6,6}= -(1-\eta)Q_{5}-2Z_{3}+\tilde{\epsilon}{\mathcal{E}}^{T}{\mathcal{E}}, \\ \hat{\Upomega}_{6,7}&=B^{T}X, \quad \hat{\Upomega}_{6,8}=-B^{T}P_{12}, \quad \hat{\Upomega}_{6,9}= -B^{T}P_{13}, \\ \hat{\Upomega}_{7,7}&=\tau_{2}^{2}(\tau_{2}-\tau_{1})S_{1}+\tau_{2}^{4}T_{1} +(\tau_{2}^{2}-\tau_{1}^{2})T_{2}+(\tau_{2}-\tau_{1})^{2}S_{2}-X-X^{T}, \\ \hat{\Upomega}_{8,8}&=-4T_{1}, \quad\hat{\Upomega}_{9,9}=-4T_{2}, \quad \hat{\Uppsi}_{1,1}=\nu (P_{11}+\Upsigma_{2}\Uplambda_{2}), \\ \hat{\Uppsi}_{5,2}&=\nu \Uplambda_{2}, \quad \hat{\Uppsi}_{8,3}=\nu P_{12}^{T}, \quad \hat{\Uppsi}_{9,4}=\nu P_{13}^{T}, \quad \hat{\Uppsi}_{7,5}=\nu X, \\ \tilde{\epsilon}&=\epsilon_{4}+\epsilon_{6} +\epsilon_{10}+\epsilon_{12}+\epsilon_{14}. \end{aligned} $$

6 Numerical examples

Example 4.1

Consider the delayed impulsive neural networks (5) with

$$ \begin{aligned} C&= \hbox{diag} \{1.3232, 1.1122, 1.6091\}, \quad A=\left[ \begin{array}{ccc} 0.8698 & -0.4190 & 0.0390 \\ -0.0688 & 0.5440 & 0.1518 \\ 0.1106 & 0.2315 & -0.4615 \end{array} \right] \\ B &= \left[ \begin{array}{ccc} -1.9429 & 0.7346 & -1.1934 \\ -0.4597 & -0.6754 & -0.3736 \\ -0.1670 & 1.5342 & -0.8716 \end{array} \right] \quad D_{1} = \left[ \begin{array}{ccc} 0.0134 & 0.0247 & -0.0052 \\ 0.0227 & 0.0062 & 0.0185 \\ -0.0319 & 0.0268 & 0.0196 \end{array} \right] \\ D_{2}&=\left[ \begin{array}{ccc} -0.0034 & 0.0138 & 0.0187 \\ -0.0221 & -0.0340 & -0.0243 \\ -0.0321 & 0.0510 & -0.0136 \end{array} \right] \quad D_{3} = \left[ \begin{array}{ccc} 0.0113 & -0.0412 & 0.0062 \\ 0.0177 & 0.0083 & -0.0134 \\ 0.0216 & 0.0093 & 0.0041 \end{array} \right] \\ E_{1} &= \left[ \begin{array}{ccc} 0.1290 & 0.0129 & -0.0258 \\ -0.0258 & 0.0516 & 0.0387 \\ 0.0129 & 0.0258 & 0.0516 \end{array} \right] \quad E_{2}=\left[ \begin{array}{ccc} 0.1300 & 0.0520 & -0.0546 \\ 0.0260 & 0.1820 & 0.0260 \\ -0.0260 & 0.0520 & 0.2080 \end{array} \right] \\ E_{3} &= \left[ \begin{array}{ccc} -0.1248 & 0.0208 & -0.0416 \\ 0.0208 & -0.0624 & -0.0208 \\ -0.0416 & -0.0208 & 0.1872 \end{array} \right] \quad D_{k}=0.8I, \quad\tilde{f}_{1}(x)=0.88 \tanh(0.2159x), \\ \tilde{f}_{2}(x)&=0.88 \tanh(0.4318x), \\ \tilde{f}_{3}(x)&=0.88 \tanh(0.1477x), \quad \tilde{g}_{1}(x)=0.88 \tanh(0.3523x), \quad \tilde{g}_{2}(x)=0.88 \tanh(0.4205x), \\ \tilde{g}_{3}(x)&= 0.88 \tanh(0.2614x). \end{aligned} $$

Obviously, Assumptions 1 and 2 are satisfied with

$$ \mu=\nu=0.88, \quad \Upsigma_{1}=\hbox{diag}\{0.19, 0.38, 0.13\}, \quad \Upsigma_{2}=\hbox{diag}\{0.31, 0.37, 0.23\}, \quad \Upgamma_{1}=\Upgamma_{2}=0. $$
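
As a quick numerical sanity check (ours, not part of the original derivation), the entries of Σ1 and Σ2 are simply the maximal slopes ab of activations of the form a tanh(bx):

```python
# Illustrative check: for f(x) = a*tanh(b*x) the Lipschitz constant is a*b,
# which reproduces the bounds Sigma_1 and Sigma_2 quoted above.
import numpy as np

a = 0.88
f_slopes = np.array([0.2159, 0.4318, 0.1477])   # slopes of f_1, f_2, f_3
g_slopes = np.array([0.3523, 0.4205, 0.2614])   # slopes of g_1, g_2, g_3

print(np.round(a * f_slopes, 2))   # [0.19 0.38 0.13] -> Sigma_1
print(np.round(a * g_slopes, 2))   # [0.31 0.37 0.23] -> Sigma_2
```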

Taking \(\sigma=0.1, \Updelta_{1}=0, \eta=1\) and \(\Updelta_{2}=0.7\), and solving the LMIs in Theorem 5.1 with the MATLAB LMI Control Toolbox, we obtain the following feasible solution:

$$ \begin{aligned} P_{11} &= \left[ \begin{array}{ccc} 17.2403 & -0.8533 & -2.4016 \\ -0.8533 & 35.3785 & 0.3160 \\ -2.4016 & 0.3160 & 23.3963 \end{array} \right] \quad P_{12}= \left[ \begin{array}{ccc} 1.1276 & 0.6467 & -1.1154 \\ -0.8119 & 2.4727 & -0.5851 \\ -0.2719 & 0.7354 & 2.9255 \end{array} \right] \\ P_{13} &= \left[ \begin{array}{ccc} 1.6135 & 0.3439 & 0.0918 \\ -0.5430 & 5.0347 & -0.1657 \\ 0.4043 & 1.0088 & 2.3753 \end{array} \right] \quad P_{22}= \left[ \begin{array}{ccc} 2.3520 & 0.0169 & -1.1796 \\ 0.0169 & 4.4274 & 0.2448 \\ -1.1796 & 0.2448 & 4.2203 \end{array} \right] \\ P_{23} &= \left[ \begin{array}{ccc} -1.4931 & 0.0348 & 0.0171 \\ -0.0689 & -3.6227 & -0.1245 \\ 0.0542 & -0.0461 & -2.1508 \end{array} \right] \quad P_{33}= \left[ \begin{array}{ccc} 3.4573 & 0.0783 & -0.1084 \\ 0.0783 & 7.8873 & 0.1241 \\ -0.1084 & 0.1241 & 5.3331 \end{array} \right] \\ L_{1} &= \left[ \begin{array}{ccc} 6.9801 & 0 & 0 \\ 0 & 5.0875 & 0 \\ 0 & 0 & 2.2756 \end{array} \right] \quad L_{2}= \left[ \begin{array}{ccc} 1.5292 & 0 & 0 \\ 0 & 2.4936 & 0 \\ 0 & 0 & 2.6118 \end{array} \right] \\ Q_{1} &= \left[ \begin{array}{ccc} 3.8340 & 0 & 0 \\ 0 & 8.3122 & 0 \\ 0 & 0 & 6.5979 \end{array} \right] \quad Q_{2}= \left[ \begin{array}{ccc} 2.7007 & 0 & 0 \\ 0 & 3.5844 & 0 \\ 0 & 0 & 5.6033 \end{array} \right] \\ Q_{3} &= \left[ \begin{array}{ccc} 15.8514 & 0 & 0 \\ 0 & 21.2684 & 0 \\ 0 & 0 & 25.0580 \end{array} \right] \quad Q_{4}= \left[ \begin{array}{ccc} 1.4616 & 0 & 0 \\ 0 & 2.3302 & 0 \\ 0 & 0 & 2.9593 \end{array} \right] \\ Q_{5} &= \left[ \begin{array}{ccc} 19.8221 & 0 & 0 \\ 0 & 19.5376 & 0 \\ 0 & 0 & 33.6038 \end{array} \right] \quad R_{1}= \left[ \begin{array}{ccc} 10.0896 & 0 & 0 \\ 0 & 17.1444 & 0 \\ 0 & 0 & 16.7234 \end{array} \right] \\ S_{1} &= \left[ \begin{array}{ccc} 7.3462 & 0 & 0 \\ 0 & 14.2564 & 0 \\ 0 & 0 & 6.0977 \end{array} \right] \quad S_{2}= \left[ \begin{array}{ccc} 0.8033 & 0 & 0 \\ 0 & 3.0533 & 0 \\ 0 & 0 & 1.2883 \end{array} \right] \\ T_{1} &= \left[ \begin{array}{ccc} 1.9413 & 0 & 0 \\ 0 & 5.1983 & 0 \\ 0 & 0 & 2.8887 \end{array} \right] \quad T_{2}= \left[ \begin{array}{ccc} 2.7176 & 0 & 0 \\ 0 & 6.4714 & 0 \\ 0 & 0 & 3.9232 \end{array} \right] \\ T_{3} &= \left[ \begin{array}{ccc} 16.1764 & 0 & 0 \\ 0 & 12.5327 & 0 \\ 0 & 0 & 15.4581 \end{array} \right] \quad Z_{1}= \left[ \begin{array}{ccc} 34.2672 & 0 & 0 \\ 0 & 37.6956 & 0 \\ 0 & 0 & 44.9374 \end{array} \right] \\ Z_{2} &= \left[ \begin{array}{ccc} 26.3512 & 0 & 0 \\ 0 & 27.0018 & 0 \\ 0 & 0 & 38.6477 \end{array} \right] \quad Z_{3}= \left[ \begin{array}{ccc} 126.9633 & 0 & 0 \\ 0 & 109.7197 & 0 \\ 0 & 0 & 125.5990 \end{array} \right] \\ X &= \left[ \begin{array}{ccc} 4.7477 & -0.2724 & -0.8058 \\ 0.0334 & 9.7679 & -0.0469 \\ -0.7055 & 0.3084 & 5.4820 \end{array} \right], \\ \epsilon_{1}&= 114.1212, \quad \epsilon_{2}=42.9172, \quad \epsilon_{3}=42.1433, \quad \epsilon_{4}=111.2413, \quad \epsilon_{5}=41.1892, \quad \epsilon_{6}=40.4159, \\ \epsilon_{7}&= 52.9226, \quad \epsilon_{8}=51.1170, \quad \epsilon_{9}=42.7997, \quad \epsilon_{10}=41.0721, \quad \epsilon_{11}=42.9904, \quad \epsilon_{12}=41.2624, \\ \epsilon_{13}&= 72.2590, \quad \epsilon_{14}=70.1599. \end{aligned} $$
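
The LMIs above were solved with the MATLAB LMI Control Toolbox. Purely as an illustration, the following minimal sketch shows how a feasibility problem with the same block shape as \(\Uppi=[\Upomega \; \Uppsi; \Uppsi^{T} \; -\Upupsilon]<0\) can be posed with the open-source CVXPY package; this is our substitution, not the toolchain of the paper, and the blocks below are simplified stand-ins rather than the full conditions of Theorem 5.1.

```python
# Minimal sketch (assumed setup): an LMI feasibility problem in CVXPY with the
# same 2x2 block structure as Pi = [[Omega, Psi], [Psi^T, -Upsilon]] < 0.
# The full conditions of Theorems 5.1-5.6 would be assembled block by block
# in the same way with cp.bmat.
import numpy as np
import cvxpy as cp

n = 3
C = np.diag([1.3232, 1.1122, 1.6091])        # leakage rates of Example 4.1
Sigma2 = np.diag([0.31, 0.37, 0.23])         # activation bounds from Assumption 2

P11 = cp.Variable((n, n), symmetric=True)    # one Lyapunov block
eps4 = cp.Variable(nonneg=True)              # one scalar multiplier
I = np.eye(n)
delta = 1e-6                                 # strictness margin for ">"/"<"

Omega = -C.T @ P11 - P11 @ C                 # simplified stand-in for the (1,1) block
Psi = P11 @ Sigma2                           # simplified stand-in coupling block
Pi = cp.bmat([[Omega, Psi],
              [Psi.T, -eps4 * np.eye(n)]])

constraints = [P11 >> delta * I, Pi << -delta * np.eye(2 * n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve(solver=cp.SCS)

print("feasible:", problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE))
print("P11 =\n", P11.value)
```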

We now discuss the same example for the nominal system (i.e., with the impulse term removed from the reported theorems). Table 1 shows the maximum allowable upper bound of the time delay τ2 for different fixed delays τ1 and σ.

Table 1 Maximum allowable upper bound τ2 for various values of σ with fixed τ1 and η = 1
Table 2 Maximum allowable upper bound τ2 for various values of σ with fixed τ1 and η = 1
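
The entries of Tables 1 and 2 can be obtained by searching for the largest τ2 at which the relevant LMIs remain feasible. The following generic bisection sketch illustrates one such search; it reflects our assumption about the procedure, and `lmi_feasible` is a hypothetical placeholder that assembles and solves the LMIs of the corresponding theorem (e.g., as in the CVXPY sketch above) for the given parameters.

```python
# Generic sketch (not the paper's code): bisection over tau2 to locate the
# maximum allowable upper bound reported in the tables.
def max_allowable_tau2(lmi_feasible, tau1, sigma, eta, hi=50.0, tol=1e-3):
    lo = tau1                                      # tau2 must exceed tau1
    if not lmi_feasible(lo + tol, tau1, sigma, eta):
        return None                                # infeasible even for the smallest delay
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lmi_feasible(mid, tau1, sigma, eta):
            lo = mid                               # feasible: try a larger bound
        else:
            hi = mid                               # infeasible: shrink the bound
    return lo
```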

In addition, the time-varying delay was chosen as \(\tau(t)=0.0335 \sin t+2\) in Theorem 1 of [6], corresponding to η = 0.0335. However, if we set \(\tau(t)=0.1131 \sin t+2310\), then by Theorem 5.5 we can verify that system (59) has a unique equilibrium point, which is globally asymptotically stable for the same η. Hence, the results presented in this paper are less conservative than those obtained in [6].

Example 4.2

Consider the delayed impulsive neural networks (18) with

$$ \begin{aligned} C&=\hbox{diag}\{1.5,1.8,1.2\}, \quad B=\left[ \begin{array}{ccc} 1.63 & 0.03 & -0.13 \\ -0.02 & 0.98 & 0.12 \\ 0.01 & -0.08 & 0.79 \end{array} \right], \quad E_{1}=\left[ \begin{array}{ccc} 0.09 & 0.01 & -0.01 \\ 0.02 & 0.04 & 0.03 \\ -0.01 & 0.02 & 0.04 \end{array} \right], \\ E_{2}&=\left[ \begin{array}{ccc} 0.05 & 0.02 & -0.02 \\ 0.01 & 0.07 & 0.01 \\ -0.01 & 0.02 & 0.08 \end{array} \right], \quad E_{3}=\left[ \begin{array}{ccc} 0.06 & -0.01 & 0.02 \\ -0.01 & 0.03 & 0.01 \\ 0.02 & 0.01 & 0.09 \end{array} \right], \\ \tilde{g}_{1}(x)&= 0.32 \tanh(2.1875x), \quad \tilde{g}_{2}(x)=0.32 \tanh(1.875x), \quad \tilde{g}_{3}(x)=0.32 \tanh(2.5x). \end{aligned} $$

Obviously, Assumptions 2 and 3 are satisfied with

$$ \nu=0.96, \quad \Upsigma_{2}=\hbox{diag}\{0.7,0.6,0.8\}, \quad \Upgamma_{2}=0. $$

Taking \(\sigma=0.1, \Updelta_{1}=0, \eta=1\) and \(\Updelta_{2}=0.4\), and solving the LMIs in Theorem 5.2 with the MATLAB LMI Control Toolbox, we obtain the following feasible solution:

$$ \begin{aligned} P_{11} &= \left[ \begin{array}{ccc} 9.9922 & 0.1362 & 0.2353 \\ 0.1362 & 24.1624 & 0.0452 \\ 0.2353 & 0.0452 & 33.5189 \end{array} \right] \quad P_{12}= \left[ \begin{array}{ccc} 3.0881 & 0.1002 & -0.1599 \\ -0.0387 & 7.2257 & 0.3217 \\ 0.7965 & -0.4158 & 7.0316 \end{array} \right] \\ P_{13} &= \left[ \begin{array}{ccc} 4.6915 & 0.0303 & -0.1965 \\ -0.0116 & 6.2722 & 0.4893 \\ 0.3055 & -0.0066 & 7.6997 \end{array} \right] \quad P_{22}= \left[ \begin{array}{ccc} 6.5362 & 0.3236 & 0.2186 \\ 0.3236 & 14.0224 & -0.1517 \\ 0.2186 & -0.1517 & 14.2731 \end{array} \right] \\ P_{23} &= \left[ \begin{array}{ccc} -3.8589 & 0.2214 & -0.1754 \\ -0.2006 & -5.5219 & 0.0946 \\ 0.2506 & -0.1626 & -7.6842 \end{array} \right] \quad P_{33}= \left[ \begin{array}{ccc} 7.8346 & -0.1926 & 0.4265 \\ -0.1926 & 17.0687 & 0.0545 \\ 0.4265 & 0.0545 & 17.7939 \end{array} \right] \\ L_{1} &= \left[ \begin{array}{ccc} 38.8900 & 0 & 0 \\ 0 & 38.8900 & 0 \\ 0 & 0 & 38.8900 \end{array} \right] \quad L_{2}= \left[ \begin{array}{ccc} 0.0360 & 0 & 0 \\ 0 & 2.8687 & 0 \\ 0 & 0 & 1.2729 \end{array} \right] \\ Q_{1} &= \left[ \begin{array}{ccc} 5.1989 & 0 & 0 \\ 0 & 11.3801 & 0 \\ 0 & 0 & 13.9093 \end{array} \right] \quad Q_{2}= \left[ \begin{array}{ccc} 0.1117 & 0 & 0 \\ 0 & 6.5120 & 0 \\ 0 & 0 & 3.6061 \end{array} \right] \\ Q_{3} &= \left[ \begin{array}{ccc} 6.1709 & 0 & 0 \\ 0 & 19.7847 & 0 \\ 0 & 0 & 11.2085 \end{array} \right] \quad Q_{4}= \left[ \begin{array}{ccc} 0.0724 & 0 & 0 \\ 0 & 4.8340 & 0 \\ 0 & 0 & 2.2166 \end{array} \right] \\ Q_{5} &= \left[ \begin{array}{ccc} 0.3394 & 0 & 0 \\ 0 & 16.4172 & 0 \\ 0 & 0 & 6.3795 \end{array} \right] \quad R_{1}= \left[ \begin{array}{ccc} 43.4857 & 0 & 0 \\ 0 & 19.9572 & 0 \\ 0 & 0 & 13.8792 \end{array} \right] \\ S_{1} &= \left[ \begin{array}{ccc} 11.5391 & 0 & 0 \\ 0 & 12.1150 & 0 \\ 0 & 0 & 13.1566 \end{array} \right] \quad S_{2}= \left[ \begin{array}{ccc} 0.3816 & 0 & 0 \\ 0 & 7.8574 & 0 \\ 0 & 0 & 11.0949 \end{array} \right] \\ T_{1} &= \left[ \begin{array}{ccc} 8.1451 & 0 & 0 \\ 0 & 14.7522 & 0 \\ 0 & 0 & 16.7977 \end{array} \right] \quad T_{2}= \left[ \begin{array}{ccc} 12.5145 & 0 & 0 \\ 0 & 16.5321 & 0 \\ 0 & 0 & 18.6317 \end{array} \right] \\ T_{3} &= \left[ \begin{array}{ccc} 141.7516 & 0 & 0 \\ 0 & 13.6979 & 0 \\ 0 & 0 & 12.0782 \end{array} \right] \quad Z_{2} = \left[ \begin{array}{ccc} 0.6482 & 0 & 0 \\ 0 & 24.3500 & 0 \\ 0 & 0 & 11.2151 \end{array} \right] \\ Z_{3}&= \left[ \begin{array}{ccc} 25.5316 & 0 & 0 \\ 0 & 32.1935 & 0 \\ 0 & 0 & 34.8652 \end{array} \right] \quad X = \left[ \begin{array}{ccc} 1.3547 & 0.0195 & 0.1148 \\ 0.0048 & 3.8938 & -0.1546 \\ 0.0322 & -0.1306 & 4.3937 \end{array} \right], \\ \epsilon_{4} &= 93.4792, \quad \epsilon_{6}=14.6164, \quad \epsilon_{8}=25.8435, \quad \epsilon_{10}=16.1415, \quad \epsilon_{12}=16.8014, \quad \epsilon_{14}=38.3909. \end{aligned} $$

We now discuss the same example for the nominal system (i.e., with the impulse term removed from the reported theorems). Table 2 shows the maximum allowable upper bound of the time delay τ2 for different fixed delays τ1 and σ.

In addition, the time-varying delay was chosen as \(\tau(t)=0.1023 \sin t+2\) in Theorem 1 of [6], corresponding to η = 0.1023. However, if we set \(\tau(t)=0.1023 \sin t+2795\), then by Theorem 5.6 we can verify that system (61) has a unique equilibrium point, which is globally asymptotically stable for the same η. Hence, the results presented in this paper are less conservative than those obtained in [6].

Remark 4.3

In the simulations, one finds that the high-order Hopfield neural network (5) with σ = 0 is globally asymptotically stable (see Fig. 1). If we take σ = 1.5, it is easy to check with the MATLAB LMI Toolbox that the LMIs (89) have no feasible solution, which implies that our results cannot guarantee the stability of the high-order Hopfield neural network (5) in Example 4.1. In this case, the simulations show that network (5) is indeed not stable (see Fig. 2). This demonstrates the advantage of the developed results. Similar observations hold for Example 4.2; see Figs. 3 and 4.

Figs. 1–4 State trajectories of the system
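
For readers who wish to reproduce qualitative plots of this kind, the following is a minimal simulation sketch (an assumed setup, not the code behind Figs. 1–4): forward-Euler integration of the leakage-free reduced model (61) with the matrices of Example 4.2, a constant delay, and the uncertainty term \(\tilde{\Uptheta}^{T}\tilde{{\mathcal{E}}}\) omitted for simplicity.

```python
# Minimal simulation sketch (assumed setup) of the reduced model (61):
#   du/dt = -C u(t) + B g(u(t - tau)),
# integrated by forward Euler with a constant delay and constant initial history.
import numpy as np

C = np.diag([1.5, 1.8, 1.2])
B = np.array([[ 1.63,  0.03, -0.13],
              [-0.02,  0.98,  0.12],
              [ 0.01, -0.08,  0.79]])
slopes = np.array([2.1875, 1.875, 2.5])
g = lambda u: 0.32 * np.tanh(slopes * u)      # activations of Example 4.2

dt, tau, T = 1e-3, 2.0, 20.0                  # step size, delay, horizon (assumed values)
lag, N = int(tau / dt), int(T / dt)

u = np.zeros((lag + N + 1, 3))
u[:lag + 1] = np.array([0.5, -0.3, 0.2])      # constant initial history on [-tau, 0]

for k in range(lag, lag + N):
    u[k + 1] = u[k] + dt * (-C @ u[k] + B @ g(u[k - lag]))   # u[k - lag] = u(t - tau)

print("state at t = T:", u[-1])               # expected to approach the equilibrium (origin)
```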

7 Conclusion

In this paper, we have investigated a class of high-order Hopfield neural networks with time delay in the leakage term under impulsive perturbations. A sufficient condition ensuring the global existence and uniqueness of the solution of these networks has been derived by using the contraction mapping theorem. Next, sufficient conditions for the global asymptotic stability of a class of high-order Hopfield-type neural networks with mixed delays and impulsive perturbations have been established. A new method has been proposed to obtain delay-dependent stability criteria by introducing an appropriate Lyapunov–Krasovskii functional that includes triple-integral terms. Less conservative results have been derived by applying the free-weighting matrix method and LMI techniques. Finally, the effectiveness of the proposed results has been demonstrated by two numerical examples.