1 Introduction

In the past few decades, neural networks have been applied in many fields, such as image and signal processing, pattern recognition, fixed-point computations, optimization, and other scientific areas [18]. Before considering these applications, it is an essential prerequisite to check whether the equilibrium points of the designed networks are stable, since the applicability of these networks depends heavily on the dynamic behavior of their equilibrium points. In applications of neural networks, it is well recognized that time delays naturally occur due to the finite switching speed of amplifiers and may cause performance degradation, oscillation, or even instability. Therefore, much attention has been paid to the delay-dependent stability analysis of neural networks with time delays [9–30], since delay-dependent stability criteria are generally less conservative than delay-independent ones when the delays are small.

In delay-dependent stability analysis, the most frequently used index for checking the conservatism of stability criteria is the maximum delay bound that guarantees the asymptotic stability of the concerned networks. Comparing maximum delay bounds with those reported in the existing literature is thus a standard way of demonstrating the superiority of delay-dependent stability criteria. It should be noted that stability criteria which provide larger delay bounds have an enlarged feasible region, so their applicability to areas such as state estimation, filtering, and synchronization is increased. Remarkable approaches in delay-dependent stability analysis include the Jensen inequality [31], model transformation [32, 33], the free-weighting-matrix technique [34], some new Lyapunov–Krasovskii functionals [19, 20, 35, 36], zero equalities [37], reciprocally convex optimization [38], and so on. In [29], a new activation function condition was proposed to reduce the conservatism of stability criteria for neural networks with time-varying delays.

Recently, since the delay-partitioning idea was first proposed by Gu [31], various methods for the stability analysis of neural networks with time delays have been proposed in [21–28]. In [21], by dividing the delay interval into two subintervals and utilizing different free-weighting matrices on each subinterval, improved delay-dependent stability criteria for neural networks with interval time-varying delays were proposed. In [23], by introducing a tuning parameter adjusting the delay interval, some new delay-partitioning stability criteria for neural networks with time-varying delay were introduced. Kwon et al. [26] proposed a new delay-partitioning method by constructing a different Lyapunov–Krasovskii functional on each delay subinterval. In [28], by utilizing the methods of [21], exponential stability analysis of neural networks with interval time-varying delays and general activation functions was conducted.

Another remarkable approach to reducing the conservatism of delay-dependent stability analysis is to use some new Lyapunov–Krasovskii functional. In this way, more information about the system can be utilized, which can enlarge the feasible region of the stability criteria. Since Lyapunov–Krasovskii functionals with triple-integral terms were proposed in [35, 36], many works (see [27, 28] and [30]) have studied the stability analysis of neural networks by employing such triple-integral terms. Very recently, by constructing quadruple-integral terms in the Lyapunov–Krasovskii functional, new delay-dependent stability criteria for neural networks with time-varying delays were reported in [30] based on quadratic convex combination. However, the methods proposed in [30] have some limitation in increasing the maximum delay bounds guaranteeing asymptotic stability, since the system states and activation functions were not fully utilized in the augmented vector. Thus, there is room for further improvement in reducing the conservatism of stability criteria.

Motivated by the above discussion, in this paper some improved delay-dependent stability criteria for neural networks with time-varying delays are proposed. By constructing a newly augmented Lyapunov–Krasovskii functional and proposing some new zero equalities which have not been reported before, a sufficient condition for the asymptotic stability of the considered neural networks is derived in terms of LMIs in Theorem 1. In Theorem 2, based on the results of Theorem 1 and [39, 40], further improved stability criteria are proposed by ensuring the positiveness of the Lyapunov–Krasovskii functional. Through two numerical examples which have been utilized in many previous works to check the conservatism of stability criteria, it is shown that the proposed criteria provide larger delay bounds than the recent existing results.

Notation

\(\mathbb{R}^{n}\) is the n-dimensional Euclidean space, and \(\mathbb{R}^{m \times n}\) denotes the set of all m×n real matrices. For symmetric matrices X and Y, \(X>Y\) (respectively, \(X \geq Y\)) means that the matrix \(X-Y\) is positive definite (respectively, nonnegative). \(X^{\perp}\) denotes a basis for the null space of X; \(I_{n}\), \(0_{n}\) and \(0_{mn}\) denote the n×n identity matrix, the n×n zero matrix, and the m×n zero matrix, respectively; ∥⋅∥ refers to the Euclidean vector norm or the induced matrix norm; \(\operatorname{diag} \{ \cdots\}\) denotes the block diagonal matrix. For a square matrix S, \(\operatorname{Sym}\{S\}\) means the sum of S and its transpose \(S^{T}\), i.e., \(\operatorname{Sym}\{S\}=S+S^{T}\). The symbol ⋆ represents the elements below the main diagonal of a symmetric matrix. \(X_{[f(t)]} \in\mathbb{R}^{m \times n}\) means that the elements of the matrix \(X_{[f(t)]}\) include the scalar value of f(t).
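For readers following along computationally, the \(\operatorname{Sym}\{\cdot\}\) operator and the orthogonal complement \(X^{\perp}\) can be realized in a few lines (a small illustrative sketch with an arbitrary example matrix, not taken from the paper):

```python
import numpy as np

S = np.array([[1.0, 2.0], [0.0, 3.0]])
SymS = S + S.T                         # Sym{S} = S + S^T, always symmetric
X = np.array([[1.0, 1.0, 0.0]])        # a 1 x 3 matrix of rank 1
# X^perp: columns form a basis of the null space of X (computed via SVD)
Xperp = np.linalg.svd(X)[2][np.linalg.matrix_rank(X):].T
print(np.allclose(X @ Xperp, 0))
```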

2 Problem statement and preliminaries

Consider the following neural networks with discrete time-varying delays:

$$\begin{aligned} \dot{y}(t) =&-A y (t) + W_0g\bigl(y(t)\bigr) \\ &{}+ W_1g\bigl(y \bigl(t-h(t)\bigr)\bigr)+b \end{aligned}$$
(1)

where \(y(t)= [y_{1} (t),\ldots,y_{n} (t) ]^{T} \in\mathbb{R}^{n}\) is the neuron state vector, n is the number of neurons in a neural network, \(g(y(t)) = [g_{1} (y_{1} (t)),\ldots,g_{n} (y_{n} (t)) ]^{T} \in\mathbb {R}^{n}\) is the neuron activation function, \(g(y(t-h(t))) = [g_{1} (y_{1} (t-h(t))),\ldots,g_{n} (y_{n} (t-h(t))) ]^{T} \in\mathbb{R}^{n}\), \(A=\operatorname{diag}\{a_{i}\} \in\mathbb{R}^{n \times n}\) is a positive diagonal matrix, \(W_{0}=(w^{0}_{ij})_{n \times n}\in\mathbb{R}^{n \times n}\) and \(W_{1}=(w^{1}_{ij})_{n \times n}\in\mathbb{R}^{n \times n}\) are the interconnection matrices representing the weight coefficients of the neurons, and \(b= [b_{1}, b_{2}, \ldots,b_{n} ]^{T}\in\mathbb{R}^{n }\) is a constant input vector.
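Although the analysis below is carried out via LMIs, the dynamics (1) are easy to explore numerically with a forward-Euler scheme that keeps a buffer of past states for the delayed term. The sketch below is only an illustration: it assumes a constant delay h, a tanh activation, and 2-neuron matrices invented for the example (none of these choices come from the paper).

```python
import numpy as np

def simulate(A, W0, W1, b, h, y0, T=10.0, dt=1e-3, g=np.tanh):
    """Forward-Euler integration of dy/dt = -A y + W0 g(y) + W1 g(y(t-h)) + b,
    using the constant function y0 as the initial history on [-h, 0]."""
    steps = int(round(T / dt))
    d = int(round(h / dt))                  # delay expressed in time steps
    traj = np.empty((steps + 1, len(y0)))
    traj[0] = y0
    for k in range(steps):
        y, y_del = traj[k], traj[max(k - d, 0)]
        traj[k + 1] = y + dt * (-A @ y + W0 @ g(y) + W1 @ g(y_del) + b)
    return traj

# illustrative 2-neuron data (an assumption, not from the paper)
A  = np.diag([2.0, 2.0])
W0 = np.array([[0.1, -0.2], [0.2, 0.1]])
W1 = np.array([[0.2, 0.1], [-0.1, 0.2]])
b  = np.array([0.5, -0.3])
traj = simulate(A, W0, W1, b, h=0.5, y0=np.array([1.0, -1.0]))
y_star = traj[-1]   # for a stable network the trajectory settles at an equilibrium
```

At an equilibrium \(y^{*}\) one has \(-Ay^{*} + (W_0+W_1)g(y^{*}) + b = 0\), which the settled state satisfies up to discretization error.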

The delay, h(t), is a time-varying continuous function satisfying

$$\begin{aligned} 0 \leq h(t) \leq h_U,\qquad \dot{h}(t) \leq h_{D}, \end{aligned}$$
(2)

where \(h_U\) is a known positive scalar and \(h_D\) is any constant.

The neuron activation functions satisfy the following assumption.

Assumption 1

The neuron activation functions \(g_{i}(\cdot)\), i=1,…,n, with \(g_{i}(0)=0\) are continuous, bounded, and satisfy

$$\begin{aligned} &{k^{-}_i \leq \frac{g_i (u)-g_i(v)}{u - v} \leq k^{+}_i,\quad u,v \in \mathbb{R},} \\ &{\quad u \neq v,\ i=1,\ldots,n,} \end{aligned}$$
(3)

where \(k^{+}_{i}\) and \(k^{-}_{i}\) are constants.

Remark 1

In Assumption 1, \(k^{+}_{i}\) and \(k^{-}_{i}\) are allowed to be positive, negative, or zero. As mentioned in [19], Assumption 1 describes the class of globally Lipschitz continuous and monotonically nondecreasing activation functions when \(k^{-}_{i}=0\) and \(k^{+}_{i}>0\), and the class of globally Lipschitz continuous and monotonically increasing activation functions when \(k^{+}_{i}>k^{-}_{i}>0\).

For simplicity, in the stability analysis of the neural network (1), the equilibrium point \(y^{*} = [y^{*}_{1},\ldots,y^{*}_{n} ]^{T}\), whose uniqueness has been reported in [11], is shifted to the origin by the transformation \(x(\cdot)=y(\cdot)-y^{*}\), which transforms system (1) into the following form:

$$\begin{aligned} \dot{x}(t) = -Ax(t) + W_0 f \bigl(x(t)\bigr) + W_1 f\bigl(x\bigl(t-h(t)\bigr)\bigr) \end{aligned}$$
(4)

where \(x(t) = [x_{1} (t),\ldots,x_{n} (t) ]^{T} \in\mathbb{R}^{n}\) is the state vector of the transformed system, \(f(x(t))=[f_{1}(x_{1}(t)),\ldots,f_{n}(x_{n}(t))]^{T}\), and \(f_{j} (x_{j} (t)) = g_{j} (x_{j} (t) + y^{*}_{j})-g_{j} (y^{*}_{j})\) with \(f_{j}(0)=0\) (j=1,…,n).
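The equivalence of (1) and (4) under the shift \(x(\cdot)=y(\cdot)-y^{*}\) can be spot-checked numerically. The sketch below uses illustrative random matrices (an assumption, not data from the paper) and locates \(y^{*}\) by a fixed-point iteration, which converges here because A dominates the weight matrices.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A  = np.diag([2.0, 2.5, 3.0])
W0 = 0.2 * rng.standard_normal((n, n))      # small weights so the iteration contracts
W1 = 0.2 * rng.standard_normal((n, n))
b  = rng.standard_normal(n)
g  = np.tanh

# locate the equilibrium y* of (1): y* = A^{-1}((W0 + W1) g(y*) + b)
y_star = np.zeros(n)
for _ in range(200):
    y_star = np.linalg.solve(A, (W0 + W1) @ g(y_star) + b)

f = lambda x: g(x + y_star) - g(y_star)     # shifted activation, f(0) = 0
x, x_del = rng.standard_normal(n), rng.standard_normal(n)
orig    = -A @ (x + y_star) + W0 @ g(x + y_star) + W1 @ g(x_del + y_star) + b
shifted = -A @ x + W0 @ f(x) + W1 @ f(x_del)
print(np.abs(orig - shifted).max())         # ~ 0: (1) and (4) describe the same dynamics
```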

It should be noted that the activation functions \(f_{i}(\cdot)\) (i=1,…,n) with \(f_{i}(0)=0\) satisfy the following condition [4]:

$$\begin{aligned} &{k^{-}_i \leq \frac{f_i (u)-f_i(v)}{u - v} \leq k^{+}_i,\quad u,v \in \mathbb{R},} \\ &{\quad u \neq v,\ i=1,\ldots,n.} \end{aligned}$$
(5)

If v=0 in (5), then we have

$$\begin{aligned} k^{-}_i \leq\frac{f_i (u)}{u} \leq k^{+}_i, \quad \forall u \neq0,\ i=1,\ldots,n, \end{aligned}$$
(6)

which is equivalent to

$$\begin{aligned} \bigl[f_i (u) - k^{-}_i u \bigr] \bigl[f_i (u) - k^{+}_i u \bigr] \leq0,\quad i=1,\ldots,n. \end{aligned}$$
(7)
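As a concrete example (the particular activation is an assumption for illustration only, not a choice made in the paper), tanh satisfies (6) with \(k^{-}_{i}=0\) and \(k^{+}_{i}=1\), so the quadratic sector condition (7) can be verified on a grid:

```python
import numpy as np

k_minus, k_plus = 0.0, 1.0            # sector bounds for tanh (example assumption)
u = np.linspace(-5.0, 5.0, 1001)
u = u[u != 0.0]                       # condition (7) is stated for u != 0
f = np.tanh(u)
lhs = (f - k_minus * u) * (f - k_plus * u)   # left-hand side of (7)
print(lhs.max())                      # non-positive everywhere
```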

The objective of this paper is to investigate the delay-dependent stability analysis of system (4) which will be conducted in Sect. 3 via some newly augmented Lyapunov–Krasovskii functionals.

Before deriving our main results, the following lemmas will be utilized in deriving the main results.

Lemma 1

[41]

For a positive definite matrix \(M\) and scalars \(h_U > h_L > 0\) such that the following integrations are well defined, it holds that

  1. (a)
    $$\begin{aligned} &{ -(h_U -h_L) \int ^{t-h_L}_{t-h_U} x^T (s) M x(s)\,ds } \\ &{\quad\leq - \biggl(\int^{t-h_L}_{t-h_U} x (s)\,ds \biggr)^T M \biggl(\int^{t-h_L}_{t-h_U} x (s) \,ds \biggr) ,} \end{aligned}$$
    (8)
  2. (b)
    $$\begin{aligned} &{-\bigl(\bigl(h^2_U -h^2_L\bigr)/2\bigr) \int^{t-h_L}_{t-h_U} \int^t_s x^T (u) M x(u)\,du\,ds} \\ &{\quad\leq - \biggl(\int^{t-h_L}_{t-h_U} \int ^t_s x (u) \,du\,ds \biggr)^T} \\ &{\qquad{}\times M \biggl(\int^{t-h_L}_{t-h_U} \int^t_s x (u) \,du\,ds \biggr) ,} \end{aligned}$$
    (9)
  3. (c)
    $$\begin{aligned} &{-\bigl(\bigl(h^3_U -h^3_L\bigr)/6\bigr)} \\ &{\qquad{}\times \int^{t-h_L}_{t-h_U} \int^t_s \int^t_u x^T (v) M x(v) \,dv\,du\,ds} \\ &{\quad\leq- \biggl(\int^{t-h_L}_{t-h_U} \int ^t_s \int^t_u x (v) \,dv\,du\,ds \biggr)^T} \\ &{\qquad{}\times M \biggl( \int^{t-h_L}_{t-h_U} \int^t_s \int^t_u x (v) \,dv\,du\,ds \biggr).} \end{aligned}$$
    (10)
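Lemma 1(a) is a Jensen-type inequality: the quadratic form of the integral is bounded by the (scaled) integral of the quadratic form. It can be spot-checked numerically with an arbitrary smooth trajectory and an illustrative positive definite M (both assumptions made only for this check):

```python
import numpy as np

def trapz(v, s):
    # simple trapezoidal rule along the last axis
    return np.sum((v[..., 1:] + v[..., :-1]) / 2 * np.diff(s), axis=-1)

hL, hU = 0.2, 1.0
s = np.linspace(-hU, -hL, 4001)                     # [t-hU, t-hL] with t = 0
x = np.vstack([np.sin(3 * s), s * np.cos(2 * s)])   # an arbitrary 2-D trajectory
M = np.array([[2.0, 0.5], [0.5, 1.0]])              # positive definite
quad = np.einsum('in,ij,jn->n', x, M, x)            # x^T(s) M x(s) sample-wise
lhs = (hU - hL) * trapz(quad, s)
ix = trapz(x, s)                                    # \int x(s) ds
print(lhs >= ix @ M @ ix)                           # (8) with both sides negated
```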

Lemma 2

[42]

Let \(\zeta\in\mathbb{R}^{n}\), \(\varPhi=\varPhi^{T} \in\mathbb{R}^{n \times n}\), and \(B \in\mathbb{R}^{m \times n}\) such that rank(B)<n. Then, the following statements are equivalent:

  1. (1)

    \(\zeta^T \varPhi\zeta<0\), \(\forall B\zeta=0\), \(\zeta\neq0\);

  2. (2)

    \((B^{\perp})^T \varPhi B^{\perp}<0\), where \(B^{\perp}\) is a right orthogonal complement of B.
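Lemma 2 is Finsler's lemma. A numerical sanity check with random data (purely illustrative): the Φ constructed below is indefinite on the whole space, yet statement (2) holds, and accordingly ζᵀΦζ < 0 for ζ in the null space of B.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2
B = rng.standard_normal((m, n))            # rank(B) = m < n (almost surely)
# right orthogonal complement: columns span the null space of B
Bperp = np.linalg.svd(B)[2][m:].T          # n x (n-m), so B @ Bperp ~ 0
Phi = 10.0 * B.T @ B - np.eye(n)           # symmetric, indefinite on R^n
stmt2 = np.linalg.eigvalsh(Bperp.T @ Phi @ Bperp).max() < 0   # statement (2)
zeta = Bperp @ rng.standard_normal(n - m)  # a sample zeta with B zeta = 0
stmt1 = zeta @ Phi @ zeta < 0              # statement (1) on this sample
print(stmt1, stmt2)
```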

Lemma 3

[39]

For symmetric matrices \(\varOmega>0\) and \(\varXi\) of appropriate dimensions, and a matrix Λ, the following two statements are equivalent:

  1. (1)

    \(\varXi- \varLambda^T \varOmega\varLambda<0\);

  2. (2)

    There exists a matrix of appropriate dimension Ψ such that

    $$\begin{aligned} \left [ \begin{array}{c@{\quad}c} \varXi+ \varLambda^T \varPsi+ \varPsi^T \varLambda& \varPsi^T \\ \varPsi & -\varOmega \end{array} \right ]<0. \end{aligned}$$
    (11)
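The direction (1) ⇒ (2) of Lemma 3 can be illustrated numerically: with the classical certificate Ψ = −ΩΛ, the Schur complement of −Ω in (11) reduces to Ξ − ΛᵀΩΛ, so (11) holds whenever statement (1) does. A small sketch with random data (an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 3
X = rng.standard_normal((k, k))
Omega = X @ X.T + np.eye(k)                 # Omega > 0
Lam = rng.standard_normal((k, n))
Xi = Lam.T @ Omega @ Lam - np.eye(n)        # makes statement (1) hold: Xi - Lam^T Omega Lam = -I
Psi = -Omega @ Lam                          # candidate certificate for statement (2)
top = Xi + Lam.T @ Psi + Psi.T @ Lam
block = np.block([[top, Psi.T], [Psi, -Omega]])
print(np.linalg.eigvalsh(block).max())      # negative: (11) is satisfied
```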

3 Main results

In this section, we propose new stability criteria for system (4). For the sake of simplicity of matrix and vector representation, \(e_{i}\ (i=1,2,\ldots,16) \in\mathbb{R}^{16n \times n}\) which will be used in Theorems 1 and 2 are defined as block entry matrices. (For example, \(e_{3}=[0_{n},0_{n}, I_{n}, \underbrace{0_{n},\ldots, 0_{n}}_{13}]^{T}\).) The other notation for some vectors and matrices is defined as:

$$\begin{aligned} &{\zeta(t)=\biggl[x^T(t),x^T\bigl(t-h(t) \bigr),x^T(t-h_U),} \\ &{\phantom{\zeta(t)=\biggl[}\dot{x}^T(t),\dot {x}^T(t-h_U),\int^t_{t-h(t)}x^T(s) \,ds,} \\ &{\phantom{\zeta(t)=\biggl[}\int^{t-h(t)}_{t-h_U}x^T(s)\,ds, \int^t_{t-h(t)}\int^t_s x^T(u)\,du\,ds,} \\ &{\phantom{\zeta(t)=\biggl[}\int^{t-h(t)}_{t-h_U}\int ^t_s x^T(u)\,du\,ds,f^T \bigl(x(t)\bigr),} \\ &{\phantom{\zeta(t)=\biggl[}f^T\bigl(x\bigl(t-h(t)\bigr)\bigr),f^T \bigl(x(t-h_U)\bigr),} \\ &{\phantom{\zeta(t)=\biggl[}\int^t_{t-h(t)}f^T\bigl(x(s) \bigr)\,ds, \int^{t-h(t)}_{t-h_U}f^T\bigl(x(s) \bigr)\,ds,} \\ &{\phantom{\zeta(t)=\biggl[}\int^t_{t-h(t)}\int^t_s f^T\bigl(x(u)\bigr)\,du\,ds,} \\ &{\phantom{\zeta(t)=\biggl[}\int^{t-h(t)}_{t-h_U}\int^t_s f^T\bigl(x(u)\bigr)\,du\,ds\biggr]^T,} \\ &{\alpha(t) = \biggl[x^T(t), x^T(t-h_U), \int^t_{t-h_U}x^T(s)\,ds,} \\ &{\phantom{\alpha(t) = \biggl[}\int ^t_{t-h_U}\int^t_s x^T(u)\,du\,ds, \int^t_{t-h_U}f^T\bigl(x(s) \bigr)\,ds,} \\ &{\phantom{\alpha(t) = \biggl[} \int^{t}_{t-h_U}\int^t_{s}f^T \bigl(x(u)\bigr)\,du\,ds\biggr]^T,} \\ &{\beta(t) = \bigl[x^T(t),\dot{x}^T(t),f^T \bigl(x(t)\bigr) \bigr]^T,} \\ &{\gamma(t,s) = \biggl[x^T(s),f^T\bigl(x(s) \bigr),\int^t_s \dot{x}^T(u)\,du,} \\ &{\phantom{\gamma(t,s) = \biggl[}\int^t_s x^T(u)\,du,\int ^t_s f^T\bigl(x(u)\bigr)\,du \biggr]^T,} \\ &{\varGamma= [-A, 0_n, 0_n, -I_n, 0_n, 0_n, 0_n, 0_n, 0_n, W_0, W_1,} \\ &{\phantom{\varGamma= [} 0_n, 0_n, 0_n, 0_n, 0_n ],} \\ &{\mathcal{P}_1 = \left [ \begin{array}{c@{\quad}c@{\quad}c} P_3 & P_1& 0_n\\ P_1& 0_n& 0_n \\ 0_n & 0_n& 0_n \end{array} \right ],\qquad \mathcal{P}_2 =\left [ \begin{array}{c@{\quad}c@{\quad}c} P_4 & P_2& 0_n\\ P_2& 0_n& 0_n \\ 0_n & 0_n& 0_n \end{array} \right ],} \\ &{\mathcal{P}_3 = \left [ \begin{array}{c@{\quad}c@{\quad}c}0_n & P_3& 0_n\\ P_3& 0_n& 0_n \\ 0_n & 0_n& 0_n \end{array} \right ],\qquad \mathcal{P}_4 =\left [ \begin{array}{c@{\quad}c@{\quad}c} 0_n & P_4& 0_n\\ P_4& 0_n& 0_n \\ 0_n & 0_n& 0_n \end{array} \right ],} \\ 
&{\varLambda_{[h(t)]} =\left [ \begin{array}{c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c} 0_n & h(t) I_n & 0_n & 0_n & (h_U - h(t))I_n & 0_n\\ 0_n & -I_n & 0_n & 0_n & 0_n & 0_n \\ 0_n & 0_n & 0_n & 0_n & -I_n & 0_n \\ I_n & 0_n & 0_n & 0_n & 0_n & 0_n \\ 0_n & 0_n & 0_n & I_n & 0_n & 0_n \\ 0_n & 0_n & I_n & 0_n & 0_n & 0_n \\ 0_n & 0_n & 0_n & 0_n & 0_n & I_n \\ \end{array} \right ]^T} \\ &{\phantom{\varLambda_{[h(t)]} =}{}\times [e_1,e_6,e_7,e_8,e_9,e_{15},e_{16}]^T,} \\ &{\varUpsilon_{1[h(t)]} = \operatorname{Sym}\bigl\{ h(t) e_1 \bigl(G_{33} e^T_4 + G_{34} e^T_1 + G_{35} e^T_{10}\bigr)\bigr\} ,} \\ &{\varUpsilon_{2[h(t)]}=e_1 \bigl(h_U h(t) P_3 + h_U \bigl(h_U-h(t)\bigr) P_4\bigr) e^T_1,} \\ &{\varXi_1=\operatorname{Sym}\bigl\{ [e_1,e_3,e_6+e_7, e_8+e_9,} \\ &{\phantom{\varXi_1=\operatorname{Sym}\bigl\{ [}e_{13}+e_{14}, e_{15}+e_{16}]\mathcal{R}} \\ &{\phantom{\varXi_1=}{}\times[e_4,e_5,e_1-e_3, h_U e_1-e_6 - e_7,e_{10}-e_{12},} \\ &{\phantom{\varXi_1=\times[}h_U e_{10}-e_{13}-e_{14}]^T \bigr\} ,} \\ &{\varXi_2=[e_1,e_4,e_{10}] \mathcal {N}[e_1,e_4,e_{10}]^T} \\ &{\phantom{\varXi_2=}{}-[e_3,e_5,e_{12}] \mathcal {N}[e_3,e_5,e_{12}]^T} \\ &{\phantom{\varXi_2=}{}+\operatorname{Sym}\bigl\{ e_{10}\varSigma e^T_4-e_1 K_m \varSigma e^T_4+e_1 K_p \varDelta e^T_4 } \\ &{\phantom{\varXi_2=}{}-e_{10} \varDelta e^T_4 \bigr\} ,} \\ &{\varXi_3 =[e_1,e_{10}] \left [ \begin{array}{c@{\quad}c}G_{11} & G_{12}\\ G^T_{12} & G_{22} \end{array} \right ] [e_1,e_{10}]^T} \\ &{\phantom{\varXi_3 =}{}-(1-h_D)[e_2,e_{11},e_1 - e_2,e_6, e_{13}] } \\ &{\phantom{\varXi_3 =}{}\times\mathcal {G}[e_2,e_{11},e_1 - e_2,e_6, e_{13}]^T} \\ &{\phantom{\varXi_3 =}{}+\operatorname{Sym}\bigl\{ e_6 \bigl(G_{13} e^T_4+G_{14} e^T_1+G_{15} e^T_{10}\bigr)} \\ &{\phantom{\varXi_3 =}{}+e_{13}\bigl(G_{23} e^T_4+G_{24} e^T_1+G_{25}e^T_{10} \bigr)} \\ &{\phantom{\varXi_3 =}{}+(-e_6) \bigl(G_{33}e^T_4+G_{34} e^T_1+G_{35} e^T_{10} \bigr)} \\ &{\phantom{\varXi_3 =}{}+e_8 \bigl(G^T_{34} e^T_4+G_{44} e^T_1+G_{45} 
e^T_{10} \bigr)} \\ &{\phantom{\varXi_3 =}{}+e_{15} \bigl(G^T_{35} e^T_4+G^T_{45} e^T_1+G_{55} e^T_{10} \bigr)\bigr\} ,} \\ &{\varXi_4=h^2_U [e_1,e_4, e_{10}] \mathcal{Q}_1 [e_1,e_4, e_{10}]^T} \\ &{\phantom{\varXi_4 =}{} -[e_6,e_1-e_2,e_{13},e_7,e_2-e_3,e_{14}]} \\ &{\phantom{\varXi_4 =}{}\times \left [ \begin{array}{c@{\quad}c} \mathcal{Q}_1+ \mathcal{P}_1& \mathcal{S}_1 \\ \mathcal{S}^T_1 & \mathcal{Q}_1+\mathcal{P}_2 \end{array} \right ]} \\ &{\phantom{\varXi_4 =}{}\times[e_6,e_1-e_2,e_{13},e_7,e_2-e_3,e_{14}]^T,} \\ &{\varXi_5=\bigl(\bigl(h^2_U/2 \bigr)^2\bigr) [e_1,e_4,e_{10}] \mathcal{Q}_2 [e_1,e_4,e_{10}]^T,} \\ &{\varXi_6 = \bigl(\bigl(h^3_U/6 \bigr)^2\bigr)e_4 Q_3 e^T_4- \bigl[\bigl(h^2_U/2\bigr) e_1-e_8-e_9 \bigr] } \\ &{\phantom{\varXi_6 =}{}\times Q_3 \bigl[\bigl(h^2_U/2\bigr) e_1-e_8-e_9\bigr]^T,} \\ &{\varXi_7 = e_1 (h_U P_1) e^T_1- e_2 (h_U P_1) e^T_2+e_2 (h_U P_2) e^T_2} \\ &{\phantom{\varXi_7 =}{} - e_3 (h_U P_2) e^T_3,} \\ &{\varTheta=\sum^3_{i=1} \bigl\{ -2 e_i K_m H_i K_p e^T_i} \\ &{\phantom{\varTheta=}{}+ \operatorname {Sym}\bigl\{ e_i (K_m+K_p) H_i e^T_{9+i} \bigr\} -2 e_{9+i} H_i e^T_{9+i}\bigr\} ,} \\ &{\varOmega=\sum^2_{i=1} \bigl\{ \operatorname{Sym}\bigl\{ \bigl[e_{9+i}-e_{10+i}-(e_i-e_{i+1})K_m \bigr]} \\ &{\phantom{\varOmega=}{}\times H_{i+3} \bigl[e_{9+i}-e_{10+i}-(e_{i}-e_{i+1})K_p \bigr]^T \bigr\} \bigr\} ,} \\ &{\varPi_{[h(t)]}=\sum^7_{i=1} \varXi_i + \varUpsilon_{1[h(t)]}+ \varUpsilon _{2[h(t)]}+ \varTheta+\varOmega,} \\ &{\varPhi_{[h(t)]} =\bigl(\varGamma^{\perp} \bigr)^T (\varPi_{[h(t)]} )\varGamma ^{\perp}} \\ &{\phantom{\varPhi_{[h(t)]} =}{} + \operatorname{Sym}\bigl\{ \bigl(\varGamma^{\perp}\bigr)^T \varLambda ^T_{[h(t)]}\varPsi\bigr\} .} \end{aligned}$$
(12)

Now, we have the following theorem.

Theorem 1

For given scalars \(h_U>0\) and \(h_D\), and diagonal matrices \(K_{p}=\operatorname{diag}\{k^{+}_{1},\ldots ,k^{+}_{n}\}\) and \(K_{m}=\operatorname{diag}\{k^{-}_{1},\ldots,k^{-}_{n}\}\), system (4) is asymptotically stable for \(0 \leq h(t) \leq h_U\) and \(\dot{h}(t) \leq h_{D}\), if there exist positive diagonal matrices \(\varSigma=\operatorname{diag}\{\sigma _{1},\ldots,\sigma_{n}\}\), \(\varDelta=\operatorname{diag}\{\delta_{1},\ldots,\delta_{n}\}\), \(H_{i}=\operatorname{diag}\{h_{1i},\ldots,h_{ni}\}\) (i=1,…,5), positive definite matrices \(\mathcal{R} \in\mathbb{R}^{6n \times6n}\), \(\mathcal{N} \in\mathbb{R}^{3n \times3n}\), \(\mathcal{G}=[G_{ij}] \in\mathbb{R}^{5n \times5n}\), \(\mathcal{Q}_{1} \in\mathbb{R}^{3n \times3n}\), \(\mathcal{Q}_{2} \in\mathbb{R}^{3n \times3n}\), \(Q_{3} \in\mathbb{R}^{n \times n}\), any matrices \(\mathcal{S}_{1}\in\mathbb{R}^{3n \times3n}\), \(\mathcal{S}_{2}\in\mathbb{R}^{3n \times3n}\), \(\varPsi\in\mathbb{R}^{6n \times15n}\), and any symmetric matrices \(P_{i} \in\mathbb{R}^{n \times n}\) (i=1,…,4) satisfying the following LMIs:

(13)
(14)
$$\begin{aligned} &{\left [ \begin{array}{c@{\quad}c}\mathcal{Q}_1+\mathcal{P}_1& \mathcal{S}_1\\ \mathcal {S}^T_1 & \mathcal{Q}_1+\mathcal{P}_2 \end{array} \right ] \geq0,} \end{aligned}$$
(15)
$$\begin{aligned} &{\left [ \begin{array}{c@{\quad}c}\mathcal{Q}_2+(4/h_U)\mathcal{P}_3& \mathcal{S}_2\\ \mathcal{S}^T_2 & \mathcal{Q}_2+(4/h_U)\mathcal{P}_4 \end{array} \right ] \geq0,} \end{aligned}$$
(16)

where \(\varPhi_{[h(t)]}\), \(\mathcal{P}_{i}\) (i=1,…,4), Θ, and Γ are defined in (12), and \(\varGamma^{\perp}\) is the right orthogonal complement of Γ.

Proof

Let us consider the following Lyapunov–Krasovskii functional candidate:

$$\begin{aligned} V(t) = & \sum^6_{i=1} V_i(t), \end{aligned}$$
(17)

where

$$\begin{aligned} &{V_1 (t) =\alpha^T(t) \mathcal{R} \alpha(t),} \\ &{V_2 (t) = \int^t_{t-h_U} \beta^T(s) \mathcal{N} \beta(s)\,ds} \\ &{\phantom{V_2 (t) =}{}+2 \sum ^{n}_{i=1} \biggl( \sigma_{i} \int ^{x_i (t)}_0 \bigl(f_i (s) - k^{-}_i s\bigr)\,ds } \\ &{\phantom{V_2 (t) =}{}+\delta_{i} \int ^{x_i (t)}_0 \bigl( k^{+}_i s - f_i (s)\bigr)\,ds \biggr),} \\ &{V_3 (t) = \int^t_{t-h(t)} \gamma^T(t,s) \mathcal{G} \gamma(t,s)\,ds,} \\ &{V_4 (t) = (h_U) \int^t_{t-h_U} \int^t_s \beta^T(u) \mathcal{Q}_1 \beta (u)\,du\,ds,} \\ &{V_5 (t) = \bigl(h^2_U/2\bigr)\int ^t_{t-h_U} \int^t_s \int^t_u \beta^T (v) \mathcal{Q}_2 \beta(v)\,dv\,du\,ds} \end{aligned}$$

and

$$\begin{aligned} &{V_6 (t)} \\ &{\quad = \bigl(h^3_U/6\bigr) } \\ &{\qquad{}\times\int ^t_{t-h_U} \int^t_s \int^t_u \int^t_v \dot{x}^T (\lambda) Q_3 \dot{x}(\lambda) \,d \lambda \,dv \,du \,ds.} \end{aligned}$$

It should be noted that

$$\begin{aligned} \alpha(t) =&\left [ \begin{array}{c}x(t) \\ x(t-h_U) \\ \int^t_{t-h_U}x(s)\,ds \\ \int^t_{t-h_U} \int^t_s x(u)\,du\,ds\\ \int^t_{t-h_U}f(x(s))\,ds\\ \int^{t}_{t-h_U}\int^t_s f(x(u))\,du\,ds \end{array} \right ] =\left [ \begin{array}{c}x(t) \\ x(t-h_U) \\ \int^t_{t-h(t)}x(s)\,ds+\int^{t-h(t)}_{t-h_U}x(s)\,ds \\ \int^t_{t-h(t)} \int^t_s x(u)\,du\,ds+\int^{t-h(t)}_{t-h_U} \int^t_s x(u)\,du\,ds\\ \int^t_{t-h(t)}f(x(s))\,ds+\int^{t-h(t)}_{t-h_U}f(x(s))\,ds\\ \int^{t}_{t-h(t)}\int^t_s f(x(u))\,du\,ds+\int^{t-h(t)}_{t-h_U}\int^t_s f(x(u))\,du\,ds \end{array} \right ] \\ =&[e_1, e_3, e_6+e_7, e_8+e_9, e_{13}+e_{14}, e_{15}+e_{16}]^T \zeta(t), \end{aligned}$$
(18)

and

$$\begin{aligned} \dot{\alpha}(t) =&\left [ \begin{array}{c}\dot{x}(t) \\ \dot{x}(t-h_U) \\ x(t)-x(t-h_U) \\ h_U x(t)-\int^t_{t-h_U}x(s)\,ds\\ f(x(t))-f(x(t-h_U)) \\ h_U f(x(t))- \int^t_{t-h_U}f(x(s))\,ds \end{array} \right ] =\left [ \begin{array}{c}\dot{x}(t) \\ \dot{x}(t-h_U) \\ x(t)-x(t-h_U) \\ h_U x(t)-\int^t_{t-h(t)}x(s)\,ds-\int^{t-h(t)}_{t-h_U}x(s)\,ds\\ f(x(t))-f(x(t-h_U)) \\ h_U f(x(t))- \int^t_{t-h(t)}f(x(s))\,ds -\int^{t-h(t)}_{t-h_U}f(x(s))\,ds \end{array} \right ] \\ = &[e_4, e_5, e_1-e_3, h_U e_1-e_6 - e_7, e_{10}-e_{12}, h_U e_{10}-e_{13}-e_{14}]^T \zeta(t). \end{aligned}$$
(19)

From (18) and (19), \(\dot{V}_{1}(t)\) can be represented as

$$\begin{aligned} \dot{V}_1 (t) = 2\alpha^T (t) \mathcal{R} \dot{\alpha}(t) = \zeta^T (t) \varXi_1 \zeta(t). \end{aligned}$$
(20)

Also, from the following equation,

$$\begin{aligned} \beta(t)=\left [ \begin{array}{c}x(t) \\ \dot{x}(t) \\ f(x(t)) \end{array} \right ]=[e_1, e_4, e_{10}]^T \zeta(t), \end{aligned}$$
(21)

the time-derivative of V 2(t) can be calculated as follows:

$$\begin{aligned} \dot{V}_2 (t) =&\beta^T (t) \mathcal{N} \beta(t)-\beta ^T(t-h_U)\mathcal{N} \beta(t-h_U) \\ &{}+2 \bigl[f\bigl(x(t)\bigr)-K_m x(t) \bigr]^T \varSigma\dot{x}(t) +2 \bigl[K_p x(t) - f\bigl(x(t)\bigr) \bigr]^T \varDelta\dot{x}(t) \\ =& \zeta^T (t) \varXi_2 \zeta(t). \end{aligned}$$
(22)

Calculation of \(\dot{V}_{3}(t)\) leads to

$$\begin{aligned} \dot{V}_3 (t) =& \frac{d}{dt} \left ( \int ^t_{t-h(t)} \left [ \begin{array}{c}x(s) \\ f(x(s)) \\ \int^t_s \dot{x}(u)\,du\\ \int^t_{s}x(u)\,du \\ \int^t_s f(x(u))\,du \end{array} \right ]^T\mathcal{G} \left [ \begin{array}{c}x(s) \\ f(x(s))\\ \int^t_s \dot{x}(u)\,du \\ \int^t_{s}x(u)\,du \\ \int^t_s f(x(u))\,du \end{array} \right ]\,ds \right ) \\ =& \left [ \begin{array}{c}x(s) \\ f(x(s)) \\ \int^t_s \dot{x}(u)\,du\\ \int^t_{s}x(u)\,du \\ \int^t_s f(x(u))\,du \end{array} \right ]^T \mathcal{G} \left.\left [ \begin{array}{c}x(s) \\ f(x(s))\\ \int^t_s \dot{x}(u)\,du \\ \int^t_{s}x(u)\,du \\ \int^t_s f(x(u))\,du \end{array} \right ]\right| _{s=t} \times\frac{d}{dt}(t) \\ &{} - \left [ \begin{array}{c}x(s) \\ f(x(s)) \\ \int^t_s \dot{x}(u)\,du\\ \int^t_{s}x(u)\,du \\ \int^t_s f(x(u))\,du \end{array} \right ]^T \mathcal{G} \left.\left [ \begin{array}{c}x(s) \\ f(x(s))\\ \int^t_s \dot{x}(u)\,du \\ \int^t_{s}x(u)\,du \\ \int^t_s f(x(u))\,du \end{array} \right ]\right| _{s=t-h(t)} \times\frac{d}{dt}\bigl(t-h(t)\bigr) \\ &{}+\int^t_{t-h(t)}\frac{d}{dt} \left ( \left [ \begin{array}{c}x(s) \\ f(x(s)) \\ \int^t_s \dot{x}(u)\,du\\ \int^t_{s}x(u)\,du \\ \int^t_s f(x(u))\,du \end{array} \right ]^T \mathcal{G} \left [ \begin{array}{c}x(s) \\ f(x(s))\\ \int^t_s \dot{x}(u)\,du \\ \int^t_{s}x(u)\,du \\ \int^t_s f(x(u))\,du \end{array} \right ] \right )\,ds \\ \leq&\left [ \begin{array}{c} x(t) \\ f(x(t)) \\ 0_n \\ 0_n \\ 0_n \end{array} \right ]^T \mathcal{G}\left [ \begin{array}{c} x(t) \\ f(x(t)) \\ 0_n \\ 0_n \\ 0_n \end{array} \right ]-(1-h_D) \left [ \begin{array}{c}x(t-h(t))\\ f(x(t-h(t))) \\ x(t) - x(t-h(t)) \\ \int^t_{t-h(t)} x(s)\,ds \\ \int^t_{t-h(t)} f(x(s))\,ds \end{array} \right ]^T\mathcal{G} \left [ \begin{array}{c}x(t-h(t))\\ f(x(t-h(t))) \\ x(t) - x(t-h(t)) \\ \int^t_{t-h(t)} x(s)\,ds \\ \int^t_{t-h(t)} f(x(s))\,ds \end{array} \right ] \\ &{}+\int^t_{t-h(t)}2 \left [ \begin{array}{c}x(s) \\ f(x(s)) \\ \int^t_s \dot{x}(u)\,du\\ \int^t_{s}x(u)\,du 
\\ \int^t_s f(x(u))\,du \end{array} \right ]^T \mathcal{G} \left [ \begin{array}{c}0_n \\ 0_n\\ \dot{x}(t) \\ x(t) \\ f(x(t)) \end{array} \right ]\,ds \\ =& \zeta^T (t) \bigl\{ [e_1, e_{10}] \left [ \begin{array}{c@{\quad}c}G_{11} & G_{12}\\ G^T_{12} & G_{22} \end{array} \right ] [e_1, e_{10}]^T \\ &{}-(1-h_D)[e_2, e_{11}, e_1 - e_2, e_6, e_{13}] \mathcal {G}[e_2, e_{11}, e_1 - e_2, e_6, e_{13}]^T \bigr\} \zeta(t) \\ &{}+2 \biggl(\int^t_{t-h(t)}x(s)\,ds \biggr) \bigl(G_{13} \dot{x}(t) + G_{14} x(t) + G_{15} f \bigl(x(t)\bigr) \bigr) \\ &{}+2 \biggl(\int^t_{t-h(t)}f\bigl(x(s)\bigr)\,ds \biggr) \bigl(G_{23} \dot{x}(t) + G_{24} x(t) + G_{25} f\bigl(x(t)\bigr) \bigr) \\ &{}+2 \biggl(h(t)x(t)-\int^t_{t-h(t)}x(s)\,ds \biggr)^T \bigl(G_{33} \dot {x}(t) + G_{34} x(t) + G_{35} f\bigl(x(t)\bigr) \bigr) \\ &{}+2 \biggl(\int^t_{t-h(t)} \int ^t_s x(u)\,du\,ds \biggr)^T \bigl(G^T_{34} \dot{x}(t) + G_{44} x(t) + G_{45} f\bigl(x(t)\bigr) \bigr) \\ &{}+2 \biggl(\int^t_{t-h(t)} \int ^t_s f\bigl(x(u)\bigr)\,du\,ds \biggr)^T \bigl(G^T_{35} \dot{x}(t) + G^T_{45} x(t) + G_{55} f\bigl(x(t)\bigr) \bigr) \\ =&\zeta^T (t) (\varXi_3 + \varUpsilon_{1[h(t)]} ) \zeta(t). \end{aligned}$$
(23)

Inspired by the work of [37], the following two zero equalities with any symmetric matrices P i (i=1,2) are considered:

$$\begin{aligned} &{0 = (h_U) \biggl\{ x^T (t) P_1 x(t) - x^T \bigl(t-h(t)\bigr) P_1 x\bigl(t-h(t)\bigr) - 2 \int ^t_{t-h(t)}x^T (s) P_1 \dot{x}(s)\,ds \biggr\} ,} \end{aligned}$$
(24)
$$\begin{aligned} &{0 = (h_U) \biggl\{ x^T \bigl(t-h(t)\bigr) P_2 x\bigl(t-h(t)\bigr) - x^T (t-h_U) P_2 x(t-h_U) - 2 \int^{t-h(t)}_{t-h_U}x^T (s) P_2 \dot{x}(s)\,ds \biggr\} .} \end{aligned}$$
(25)
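Equalities (24) and (25) are identities: the integrand \(2x^{T}(s)P_{1}\dot{x}(s)\) equals \(\frac{d}{ds}[x^{T}(s)P_{1}x(s)]\), so each braced expression telescopes to zero along any trajectory. A numerical check of (24) with an arbitrary trajectory (chosen purely for illustration):

```python
import numpy as np

hU, h = 1.0, 0.7                                  # h stands for h(t) at the instant t = 0
P1 = np.array([[1.0, 0.3], [0.3, 2.0]])           # any symmetric matrix
s = np.linspace(-h, 0.0, 20001)
x  = np.vstack([np.sin(2 * s) + 0.5, np.exp(s) * np.cos(s)])
xd = np.vstack([2 * np.cos(2 * s), np.exp(s) * (np.cos(s) - np.sin(s))])  # exact derivative
g = 2 * np.einsum('in,ij,jn->n', x, P1, xd)       # 2 x^T(s) P1 xdot(s)
integral = np.sum((g[1:] + g[:-1]) / 2 * np.diff(s))
residual = hU * (x[:, -1] @ P1 @ x[:, -1] - x[:, 0] @ P1 @ x[:, 0] - integral)
print(abs(residual))                              # ~ 0 up to quadrature error
```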

Furthermore, the following two zero equalities with symmetric matrices P i (i=3,4) are newly introduced:

$$\begin{aligned} &{0 =(h_U) \biggl\{ x^T (t) \bigl(h(t) P_3\bigr) x(t) - \int^t_{t-h(t)}x^T(s)P_3 x(s)\,ds - 2 \int^t_{t-h(t)} \int ^t_s x^T(u) P_3 x(u) \,du\,ds \biggr\} ,} \end{aligned}$$
(26)
$$\begin{aligned} &{0 =(h_U) \biggl\{ x^T (t) \bigl( \bigl(h_U - h(t)\bigr) P_4\bigr) x(t)- \int ^{t-h(t)}_{t-h_U}x^T(s) P_4 x(s) \,ds - 2 \int^{t-h(t)}_{t-h_U} \int^t_s x^T(u) P_4 x(u)\,du\,ds \biggr\} . } \end{aligned}$$
(27)

Summing the four zero equalities in Eqs. (24)–(27) leads to

$$\begin{aligned} 0 =& \zeta^T (t) (\varXi_7 + \varUpsilon_{2[h(t)]}) \zeta(t)-2h_U \int^t_{t-h(t)}x^T (s) P_1 \dot{x}(s)\,ds-2 h_U\int^{t-h(t)}_{t-h_U}x^T (s) P_2 \dot{x}(s)\,ds \\ &{}- h_U\int^t_{t-h(t)}x^T(s)P_3 x(s)\,ds - 2h_U \int^t_{t-h(t)} \int ^t_s x^T(u) P_3 x(u) \,du\,ds \\ &{}- h_U\int^{t-h(t)}_{t-h_U}x^T(s) P_4 x(s) \,ds - 2h_U \int^{t-h(t)}_{t-h_U} \int^t_s x^T(u) P_4 x(u)\,du\,ds. \end{aligned}$$
(28)

By calculating \(\dot{V}_{4}(t)\), it can be obtained that

$$\begin{aligned} \dot{V}_4 (t) =& h^2_U \beta^T (t) \mathcal{Q}_1 \beta(t)- (h_U) \int^t_{t-h_U} \beta^T (s) \mathcal{Q}_1 \beta(s)\,ds. \end{aligned}$$
(29)

With the consideration of the four integral terms in Eq. (28), if the inequality (15) holds, then an upper bound of the last term in Eq. (29) can be obtained by utilizing the reciprocally convex optimization approach [38]:

$$\begin{aligned} &{- (h_U) \int^t_{t-h_U} \beta^T (s) \mathcal{Q}_1 \beta(s)\,ds-2h_U \int^t_{t-h(t)}x^T (s) P_1 \dot{x}(s)\,ds-2 h_U\int^{t-h(t)}_{t-h_U}x^T (s) P_2 \dot{x}(s)\,ds} \\ &{\qquad{}- h_U\int^t_{t-h(t)}x^T(s)P_3 x(s)\,ds - h_U\int^{t-h(t)}_{t-h_U}x^T(s) P_4 x(s) \,ds} \\ &{\quad\leq-h_U \int^t_{t-h(t)} \beta^T (s)\left(\mathcal{Q}_1 + \underbrace{\left [ \begin{array}{c@{\quad}c@{\quad}c} P_3 & P_1& 0_n\\ P_1& 0_n& 0_n \\ 0_n & 0_n& 0_n \end{array} \right ]}_{\mathcal{P}_1} \right)\beta(s) -h_U \int^{t-h(t)}_{t-h_U} \beta^T (s)\left(\mathcal{Q}_1 + \underbrace{\left [ \begin{array}{c@{\quad}c@{\quad}c} P_4 & P_2& 0_n\\ P_2& 0_n& 0_n \\ 0_n & 0_n& 0_n \end{array} \right ]}_{\mathcal{P}_2} \right)\beta(s)} \\ &{\quad\leq -\left [ \begin{array}{c} \int^t_{t-h(t)}\beta(s)\,ds \\ \int^{t-h(t)}_{t-h_U}\beta (s)\,ds \end{array} \right ]^T \left [ \begin{array}{c@{\quad}c}\mathcal{Q}_1 +\mathcal{P}_1 & \mathcal{S}_1 \\ \mathcal{S}^T_1 & \mathcal{Q}_1 +\mathcal{P}_2 \end{array} \right ] \left [ \begin{array}{c} \int^t_{t-h(t)}\beta(s)\,ds \\ \int^{t-h(t)}_{t-h_U}\beta (s)\,ds \end{array} \right ] } \\ &{\quad= -\left [ \begin{array}{c} \int^t_{t-h(t)}x(s)\,ds \\ x(t)-x(t-h(t)) \\ \int^t_{t-h(t)}f(x(s))\,ds \\ \int^{t-h(t)}_{t-h_U}x(s)\,ds \\ x(t-h(t))-x(t-h_U) \\ \int^{t-h(t)}_{t-h_U}f(x(s))\,ds \end{array} \right ]^T\left [ \begin{array}{c@{\quad}c}\mathcal{Q}_1 +\mathcal{P}_1 & \mathcal{S}_1 \\ \mathcal{S}^T_1 & \mathcal{Q}_1 +\mathcal{P}_2 \end{array} \right ] \left [ \begin{array}{c} \int^t_{t-h(t)}x(s)\,ds \\ x(t)-x(t-h(t)) \\ \int^t_{t-h(t)}f(x(s))\,ds \\ \int^{t-h(t)}_{t-h_U}x(s)\,ds \\ x(t-h(t))-x(t-h_U) \\ \int^{t-h(t)}_{t-h_U}f(x(s))\,ds \end{array} \right ] } \\ &{\quad=-\zeta^T(t) \bigl\{ [e_6, e_1-e_2, ~e_{13}, e_7, e_2-e_3, e_{14}]\left [ \begin{array}{c@{\quad}c}\mathcal{Q}_1 +\mathcal{P}_1 & \mathcal{S}_1 \\ \mathcal{S}^T_1 & \mathcal{Q}_1 +\mathcal{P}_2 \end{array} \right ]} \\ &{\qquad{} \times[e_6, e_1-e_2, ~e_{13}, e_7, e_2-e_3, e_{14}] 
^T \bigr\} \zeta(t),} \end{aligned}$$
(30)

where \(\mathcal{S}_{1}\) is any matrix.
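The reciprocally convex bound invoked above rests on the following fact from [38]: if the coupled block matrix [M1, S; Sᵀ, M2] is positive semidefinite, then for every α in (0,1) the weighted diagonal diag((1/α)M1, (1/(1−α))M2) dominates it. A numerical spot-check with random positive semidefinite data (an illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 3
X = rng.standard_normal((2 * k, 2 * k))
W = X @ X.T                                  # random PSD block matrix [[M1, S], [S^T, M2]]
M1, S, M2 = W[:k, :k], W[:k, k:], W[k:, k:]
Z = np.zeros((k, k))
gaps = []
for alpha in (0.1, 0.5, 0.9):
    D = np.block([[M1 / alpha, Z], [Z, M2 / (1 - alpha)]])
    gaps.append(np.linalg.eigvalsh(D - W).min())
print(min(gaps))                             # nonnegative: the diagonal weighting dominates
```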

Thus, from (29) and (30) we have

$$\begin{aligned} &{ \dot{V}_4 (t)- 2h_U \int ^t_{t-h(t)}x^T (s) P_1 \dot{x}(s)\,ds-2 h_U\int^{t-h(t)}_{t-h_U}x^T (s) P_2 \dot{x}(s)\,ds} \\ &{\qquad{}- h_U\int^t_{t-h(t)}x^T(s)P_3 x(s)\,ds - h_U\int^{t-h(t)}_{t-h_U}x^T(s) P_4 x(s) \,ds} \\ &{\quad\leq\zeta^T (t) \varXi_4 \zeta(t).} \end{aligned}$$
(31)

By adding the two integral terms \(-2h_{U} \int^{t}_{t-h(t)} \int^{t}_{s} x^{T}(u) P_{3} x(u)\,du\,ds \) and \(-2h_{U} \int^{t-h(t)}_{t-h_{U}} \int^{t}_{s} x^{T}(u) P_{4} x(u)\,du\,ds \) from Eq. (28) to \(\dot{V}_{5}(t)\), if the inequality (16) holds, then it can be obtained that

$$\begin{aligned} &{\dot{V}_5 (t) -2h_U \int ^t_{t-h(t)} \int^t_s x^T(u) P_3 x(u)\,du\,ds-2h_U \int ^{t-h(t)}_{t-h_U} \int^t_s x^T(u) P_4 x(u)\,du\,ds} \\ &{\quad= \bigl(h^2_U/2\bigr)^2 \beta^T (t) \mathcal{Q}_2 \beta(t) - \bigl(h^2_U/2 \bigr) \int^t_{t-h_U} \int^t_s \beta^T (u) \mathcal{Q}_2 \beta(u)\,du\,ds } \\ &{\qquad{}-2h_U \int^t_{t-h(t)} \int^t_s x^T(u) P_3 x(u)\,du\,ds-2h_U \int^{t-h(t)}_{t-h_U} \int ^t_s x^T(u) P_4 x(u) \,du\,ds} \\ &{\quad=\bigl(h^2_U/2\bigr)^2 \beta^T (t) \mathcal{Q}_2 \beta(t)-\bigl(h^2_U/2 \bigr) \int^t_{t-h(t)}\int^t_s \beta^T (u) \left(\mathcal{Q}_2 + (4/h_U) \underbrace{\left [ \begin{array}{c@{\quad}c@{\quad}c}0_n & P_3& 0_n\\ P_3& 0_n& 0_n \\ 0_n & 0_n& 0_n \end{array} \right ]}_{\mathcal{P}_3} \right) \beta(u)\,du\,ds } \\ &{\qquad{} -\bigl(h^2_U/2\bigr) \int ^{t-h(t)}_{t-h_U}\int^t_s \beta^T (u)\left(\mathcal{Q}_2 + (4/h_U) \underbrace{\left [ \begin{array}{c@{\quad}c@{\quad}c}0_n & P_4& 0_n\\ P_4& 0_n& 0_n \\ 0_n & 0_n& 0_n \end{array} \right ]}_{\mathcal{P}_4} \right) \beta(u)\,du\,ds.} \end{aligned}$$
(32)

By utilizing Lemma 1 and the reciprocally convex optimization approach [38], it can be confirmed that

$$\begin{aligned} &{{- \bigl(h^2_U/2\bigr) \int^t_{t-h(t)}\int^t_s \beta^T (u) \bigl(\mathcal{Q}_2 + (4/h_U) \mathcal{P}_3 \bigr) \beta(u)\,du\,ds}} \\ &{{\qquad{} - \bigl(h^2_U/2\bigr) \int ^{t-h(t)}_{t-h_U}\int^t_s \beta^T (u)\bigl(\mathcal{Q}_2 + (4/h_U) \mathcal{P}_4 \bigr) \beta(u)\,du\,ds}} \\ &{{\quad\leq- \bigl(h^2_U/2\bigr) \bigl(2/h^2(t)\bigr) \biggl(\int^t_{t-h(t)} \int^t_s \beta(u) \,du\,ds \biggr)^T \bigl(\mathcal{Q}_2 + (4/h_U) \mathcal{P}_3 \bigr) \biggl(\int^t_{t-h(t)} \int^t_s \beta(u) \,du\,ds \biggr)}} \\ &{{\qquad{}- \bigl(h^2_U/2\bigr) \bigl(2/ \bigl(h^2_U - h^2(t)\bigr)\bigr)\biggl(\int ^{t-h(t)}_{t-h_U}\int^t_s \beta(u) \,du\,ds \biggr)^T \bigl(\mathcal{Q}_2 + (4/h_U)\mathcal{P}_4 \bigr) \biggl(\int ^{t-h(t)}_{t-h_U}\int^t_s \beta(u)\,du\,ds \biggr)}} \\ &{{\quad=- \left [ \begin{array}{c} \int^t_{t-h(t)} \int^t_s \beta(u)\,du\,ds \\ \int^{t-h(t)}_{t-h_U} \int^t_s \beta(u)\,du\,ds \end{array} \right ]^T \left [ \begin{array}{c@{\quad}c} \frac{1}{\eta(t)} (\mathcal{Q}_2 + (4/h_U)\mathcal{P}_3 ) & 0_{3n} \\ 0_{3n} & \frac{1}{1 - \eta(t)} (\mathcal{Q}_2 + (4/h_U)\mathcal {P}_4 ) \end{array} \right ]}} \\ &{{\qquad{} \times \left [ \begin{array}{c} \int^t_{t-h(t)} \int^t_s \beta(u)\,du\,ds \\ \int^{t-h(t)}_{t-h_U} \int^t_s \beta(u)\,du\,ds \end{array} \right ]}} \\ &{{\quad\leq- \left [ \begin{array}{c} \int^t_{t-h(t)} \int^t_s \beta(u)\,du\,ds \\ \int^{t-h(t)}_{t-h_U} \int^t_s \beta(u)\,du\,ds \end{array} \right ]^T \left [ \begin{array}{c@{\quad}c} \mathcal{Q}_2 + (4/h_U)\mathcal{P}_3 & \mathcal{S}_2 \\ \mathcal{S}^T_2 & \mathcal{Q}_2 + (4/h_U)\mathcal{P}_4 \end{array} \right ] \left [ \begin{array}{c} \int^t_{t-h(t)} \int^t_s \beta(u)\,du\,ds \\ \int^{t-h(t)}_{t-h_U} \int^t_s \beta(u)\,du\,ds \end{array} \right ],}} \end{aligned}$$
(33)

where \(\mathcal{S}_{2}\) is any matrix and \(\eta(t)=h^{2}(t)/h^{2}_{U}\).
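The last step of (33) is the reciprocally convex combination bound of [38]: if \(Q_{a}>0\), \(Q_{b}>0\) and \(S\) is chosen so that the block matrix with diagonal blocks \(Q_{a}, Q_{b}\) and off-diagonal block \(S\) is positive semidefinite, then \(\operatorname{diag}(\frac{1}{\alpha}Q_{a}, \frac{1}{1-\alpha}Q_{b})\) dominates that block matrix for every \(\alpha\in(0,1)\). The following sketch checks this numerically for randomly generated matrices (names and sizes are illustrative, not the paper's actual LMI variables):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_spd(n):
    # Random symmetric positive definite matrix.
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

n = 3
Qa, Qb = rand_spd(n), rand_spd(n)

# Choose S small enough that the block matrix [[Qa, S], [S^T, Qb]] >= 0 holds.
S = 0.1 * rng.standard_normal((n, n))
block = np.block([[Qa, S], [S.T, Qb]])
assert np.linalg.eigvalsh(block).min() >= 0

# Check diag(Qa/alpha, Qb/(1-alpha)) - block >= 0 on a grid of alpha,
# which is the inequality used to bound the 1/eta(t), 1/(1-eta(t)) terms.
for alpha in np.linspace(0.05, 0.95, 19):
    lhs = np.block([[Qa / alpha, np.zeros((n, n))],
                    [np.zeros((n, n)), Qb / (1.0 - alpha)]])
    gap = np.linalg.eigvalsh(lhs - block).min()
    assert gap >= -1e-9, (alpha, gap)
print("reciprocally convex bound holds on the grid")
```

This is why the delay-dependent weights \(1/\eta(t)\) and \(1/(1-\eta(t))\) in (33) can be replaced by the constant matrix involving \(\mathcal{S}_{2}\).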

It should be noted that

$$\begin{aligned} &{ \left [ \begin{array}{c} \int^t_{t-h(t)} \int^t_s \beta(u)\,du\,ds \\ \int^{t-h(t)}_{t-h_U} \int^t_s \beta(u)\,du\,ds \end{array} \right ] } \\ &{\quad= \left [ \begin{array}{c} \int^t_{t-h(t)} \int^t_s x(u)\,du\,ds \\ h(t) x(t) - \int^t_{t-h(t)} x(s)\,ds \\ \int^t_{t-h(t)} \int^t_s f(x(u))\,du\,ds \\ \int^{t-h(t)}_{t-h_U} \int^t_s x(u)\,du\,ds \\ (h_U - h(t)) x(t) - \int^{t-h(t)}_{t-h_U} x(s)\,ds \\ \int^{t-h(t)}_{t-h_U} \int^t_s f(x(u))\,du\,ds \end{array} \right ]} \\ &{\quad=\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 0_n & h(t) I_n & 0_n & 0_n & (h_U - h(t))I_n & 0_n\\ 0_n & -I_n & 0_n & 0_n & 0_n & 0_n \\ 0_n & 0_n & 0_n & 0_n & -I_n & 0_n \\ I_n & 0_n & 0_n & 0_n & 0_n & 0_n \\ 0_n & 0_n & 0_n & I_n & 0_n & 0_n \\ 0_n & 0_n & I_n & 0_n & 0_n & 0_n \\ 0_n & 0_n & 0_n & 0_n & 0_n & I_n \\ \end{array} \right ]^T \left [ \begin{array}{c} x(t) \\ \int^t_{t-h(t)}x(s)\,ds \\ \int^{t-h(t)}_{t-h_U}x(s)\,ds \\ \int^t_{t-h(t)} \int^t_s x(u)\,du\,ds \\ \int^{t-h(t)}_{t-h_U} \int^t_s x(u) \,du\,ds \\ \int^{t}_{t-h(t)}\int_s^t f(x(u)) \,du\,ds\\ \int^{t-h(t)}_{t-h_U}\int_s^t f(x(u)) \,du\,ds \end{array} \right ]} \\ &{\quad=\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 0_n & h(t) I_n & 0_n & 0_n & (h_U - h(t))I_n & 0_n\\ 0_n & -I_n & 0_n & 0_n & 0_n & 0_n \\ 0_n & 0_n & 0_n & 0_n & -I_n & 0_n \\ I_n & 0_n & 0_n & 0_n & 0_n & 0_n \\ 0_n & 0_n & 0_n & I_n & 0_n & 0_n \\ 0_n & 0_n & I_n & 0_n & 0_n & 0_n \\ 0_n & 0_n & 0_n & 0_n & 0_n & I_n \\ \end{array} \right ]^T [e_1, e_6, e_7, e_8, e_9, e_{15}, e_{16}]^T \zeta(t)} \\ &{\quad=\varLambda_{[h(t)]} \zeta(t).} \end{aligned}$$
(34)

From (32) to (34), the following inequality holds:

$$\begin{aligned} &{\dot{V}_5 (t) -2h_U \int^t_{t-h(t)} \int^t_s x^T(u) P_3 x(u)\,du\,ds-2h_U \int^{t-h(t)}_{t-h_U} \int ^t_s x^T(u) P_4 x(u) \,du\,ds} \\ &{\quad\leq \zeta^T (t) \biggl( \varXi_5 - \varLambda^T_{[h(t)]}\left [ \begin{array}{c@{\quad}c} \mathcal{Q}_2 + (4/h_U)\mathcal{P}_3 & \mathcal{S}_2 \\ \mathcal{S}^T_2 & \mathcal{Q}_2 + (4/h_U)\mathcal{P}_4 \end{array} \right ]\varLambda_{[h(t)]} \biggr) \zeta(t).} \end{aligned}$$
(35)

By utilizing (c) in Lemma 1, an upper bound of \(\dot{V}_{6}(t)\) can be obtained as

$$\begin{aligned} \dot{V}_6 (t) =& \bigl(h^3_U/6 \bigr)^2 \dot{x}^T (t) Q_3 \dot{x}(t)- \bigl(h^3_U/6\bigr) \int^t_{t-h_U} \int^t_s \int^t_u \dot{x}^T(v) Q_3 \dot {x}(v)\,dv\,du\,ds \\ \leq&\bigl(h^3_U/6\bigr)^2 \dot{x}^T (t) Q_3 \dot{x}(t)- \biggl(\int ^t_{t-h_U} \int^t_s \int^t_u \dot{x}(v) \,dv\,du\,ds \biggr)^T Q_3 \biggl(\int^t_{t-h_U} \int^t_s \int^t_u \dot{x}(v) \,dv\,du\,ds \biggr) \\ =&\bigl(h^3_U/6\bigr)^2 \dot{x}^T (t) Q_3 \dot{x}(t) \\ &{}- \biggl(\bigl(h^2_U/2\bigr)x(t)-\int ^t_{t-h(t)}\int^t_s x(u)\,du\,ds-\int^{t-h(t)}_{t-h_U}\int ^t_s x(u)\,du\,ds \biggr)^T \\ &{}\times Q_3 \biggl(\bigl(h^2_U/2 \bigr)x(t)-\int^t_{t-h(t)}\int^t_s x(u)\,du\,ds-\int^{t-h(t)}_{t-h_U}\int ^t_s x(u)\,du\,ds \biggr) \\ =& \zeta^T (t) \varXi_6 \zeta(t). \end{aligned}$$
(36)
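Inequality (c) in Lemma 1 used in (36) is a Jensen-type bound for triple integrals: for \(Q_{3}>0\), \((h^{3}/6)\int^{t}_{t-h}\int^{t}_{s}\int^{t}_{u}\dot{x}^{T}(v)Q_{3}\dot{x}(v)\,dv\,du\,ds\) is no smaller than the quadratic form of the triple integral of \(\dot{x}\) itself. A discretized numerical check on a sample signal (a sketch; the signal and the weight matrix are arbitrary choices, not from the paper):

```python
import numpy as np

# Grid on [t-h, t] with t = 0, h = 1.
h = 1.0
N = 200
v = np.linspace(-h, 0.0, N + 1)
dv = h / N

# A sample vector-valued derivative signal xdot(v) and an SPD weight Q.
xdot = np.stack([np.sin(3 * v), np.cos(2 * v) * v]).T      # shape (N+1, 2)
Q = np.array([[2.0, 0.3], [0.3, 1.0]])

# For fixed v, the (s, u)-region {t-h <= s <= u <= v} has area
# (v - (t-h))^2 / 2, so the triple integral reduces to a weighted
# single integral with weight w(v) = (v + h)^2 / 2 (here t = 0).
w = (v + h) ** 2 / 2.0
q = np.einsum('ij,jk,ik->i', xdot, Q, xdot)
I_quad = np.sum(w * q) * dv                    # triple int of xdot^T Q xdot
I_vec = (xdot * w[:, None]).sum(axis=0) * dv   # triple int of xdot

# Lemma-(c)-type bound: (h^3/6) * I_quad >= I_vec^T Q I_vec.
lhs = (h ** 3 / 6.0) * I_quad
rhs = I_vec @ Q @ I_vec
assert lhs >= rhs - 1e-9
print("triple-integral Jensen bound holds:", lhs >= rhs)
```

The bound is a Cauchy–Schwarz inequality over a measure of total mass \(h^{3}/6\), which is exactly the constant appearing in (36).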

From (6), for any positive diagonal matrices \(H_{i}=\operatorname{diag}\{h_{1i},\ldots,h_{ni}\}\) (\(i=1,2,3\)), the following inequality holds:

$$\begin{aligned} 0 \leq& -2\sum^n_{i=1}h_{i1} \bigl[f_i \bigl(x_i (t)\bigr) - k^{-}_i x_i (t) \bigr]\bigl[f_i \bigl(x_i(t) \bigr) - k^{+}_i x_i (t) \bigr] \\ &{}-2\sum^n_{i=1}h_{i2} \bigl[f_i \bigl(x_i \bigl(t-h(t)\bigr)\bigr)- k^{-}_i x_i \bigl(t-h(t)\bigr) \bigr] \bigl[f_i \bigl(x_i\bigl(t-h(t)\bigr)\bigr) -k^{+}_i x_i \bigl(t-h(t)\bigr) \bigr] \\ &{}-2\sum^n_{i=1}h_{i3} \bigl[f_i \bigl(x_i (t-h_U)\bigr)- k^{-}_i x_i (t-h_U) \bigr] \bigl[f_i \bigl(x_i(t-h_U)\bigr) -k^{+}_i x_i (t-h_U) \bigr] \\ =&\zeta^T(t) \varTheta\zeta(t), \end{aligned}$$
(37)

where Θ is defined in (12).
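The nonnegativity of each summand in (37) follows from the sector condition: \(k^{-}_{i}\leq f_{i}(s)/s\leq k^{+}_{i}\) implies \([f_{i}(s)-k^{-}_{i}s][f_{i}(s)-k^{+}_{i}s]\leq 0\). A quick numerical check, assuming the activation \(f(s)=0.4\tanh(s)\) with \(k^{-}=0\), \(k^{+}=0.4\) as in Example 2:

```python
import numpy as np

# Sector-bounded activation (assumed from K_m, K_p in Example 2):
# f(s) = 0.4*tanh(s) with k_minus = 0 and k_plus = 0.4.
k_minus, k_plus = 0.0, 0.4
f = lambda s: 0.4 * np.tanh(s)

s = np.linspace(-10.0, 10.0, 2001)
prod = (f(s) - k_minus * s) * (f(s) - k_plus * s)

# The product is nonpositive everywhere, so each term
# -2*h_{ij}*[f - k^- s][f - k^+ s] added in (37) is nonnegative.
assert np.all(prod <= 1e-12)
print("sector condition verified")
```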

Inspired by the work of [29], from (5), the following conditions hold:

$$\begin{aligned} \begin{aligned} {}&k^{-}_i \leq\frac{f_i (x_i (t)) - f_i (x_i (t-h(t)))}{ x_i (t) - x_i (t-h(t))} \leq k^{+}_i, \\ &k^{-}_i \leq\frac{f_i (x_i (t-h(t))) - f_i (x_i (t-h_U))}{ x_i (t-h(t)) - x_i (t-h_U)} \leq k^{+}_i, \\ &\quad i=1,\ldots,n. \end{aligned} \end{aligned}$$
(38)

For i=1,…,n, the above two conditions are equivalent to

$$\begin{aligned} &{ \bigl[ f_i \bigl(x_i (t)\bigr) - f_i \bigl(x_i \bigl(t-h(t)\bigr)\bigr)- k^{-}_i \bigl(x_i (t) - x_i \bigl(t-h(t)\bigr)\bigr) \bigr]} \\ &{\quad{}\times \bigl[ f_i \bigl(x_i (t)\bigr) - f_i \bigl(x_i \bigl(t-h(t)\bigr)\bigr)- k^{+}_i\bigl(x_i (t) - x_i \bigl(t-h(t)\bigr)\bigr) \bigr] \leq0,} \end{aligned}$$
(39)
$$\begin{aligned} &{ \bigl[ f_i \bigl(x_i \bigl(t-h(t)\bigr) \bigr) - f_i \bigl(x_i (t-h_U)\bigr)- k^{-}_i\bigl(x_i \bigl(t-h(t)\bigr) - x_i (t-h_U)\bigr) \bigr]} \\ &{\quad{}\times \bigl[ f_i \bigl(x_i \bigl(t-h(t)\bigr)\bigr) - f_i \bigl(x_i (t-h_U)\bigr)- k^{+}_i\bigl(x_i \bigl(t-h(t)\bigr) - x_i (t-h_U)\bigr) \bigr] \leq0.} \end{aligned}$$
(40)

Therefore, for any positive diagonal matrices \(H_{i}=\operatorname{diag}\{h_{1i},\ldots,h_{ni}\}\) (i=4,5), the following inequality holds:

$$\begin{aligned} 0 \leq& -2 \sum ^n_{i=1}\bigl\{ h_{i4}\bigl[ f_i \bigl(x_i (t)\bigr) - f_i \bigl(x_i \bigl(t-h(t)\bigr)\bigr) - k^{-}_i \bigl(x_i (t) - x_i \bigl(t-h(t)\bigr)\bigr)\bigr] \\ & {}\times\bigl[ f_i \bigl(x_i (t)\bigr) - f_i \bigl(x_i \bigl(t-h(t)\bigr)\bigr)- k^{+}_i\bigl(x_i (t) - x_i \bigl(t-h(t)\bigr)\bigr)\bigr] \bigr\} \\ &{} -2 \sum^n_{i=1}\bigl\{ h_{i5}\bigl[ f_i \bigl(x_i \bigl(t-h(t) \bigr)\bigr) - f_i \bigl(x_i (t-h_U)\bigr) {}- k^{-}_i\bigl(x_i \bigl(t-h(t)\bigr) - x_i (t-h_U)\bigr)\bigr] \\ &{}\times \bigl[ f_i \bigl(x_i \bigl(t-h(t)\bigr) \bigr) - f_i \bigl(x_i (t-h_U)\bigr) {}- k^{+}_i\bigl(x_i \bigl(t-h(t)\bigr) - x_i (t-h_U)\bigr)\bigr] \bigr\} \\ =& \zeta^T (t) \varOmega\zeta(t). \end{aligned}$$
(41)
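Conditions (39) and (40) are slope conditions between two sampled states rather than sector conditions at a single point: by the mean value theorem, the difference quotient of \(f_{i}\) between any two arguments lies in \([k^{-}_{i}, k^{+}_{i}]\), so each product is nonpositive. A numerical sanity check on random state pairs, again assuming \(f(s)=0.4\tanh(s)\), \(k^{-}=0\), \(k^{+}=0.4\):

```python
import numpy as np

rng = np.random.default_rng(1)
k_minus, k_plus = 0.0, 0.4
f = lambda s: 0.4 * np.tanh(s)   # slope of f lies in (0, 0.4]

# Random pairs standing in for (x_i(t), x_i(t-h(t))).
a = rng.uniform(-5.0, 5.0, 10000)
b = rng.uniform(-5.0, 5.0, 10000)

df = f(a) - f(b)
dx = a - b
prod = (df - k_minus * dx) * (df - k_plus * dx)

# The (39)-type product is nonpositive for every pair, so the weighted
# sum in (41) is a nonnegative quantity that can be added to dV/dt.
assert np.all(prod <= 1e-12)
print("slope condition verified on", len(a), "random pairs")
```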

From (17)–(41) and by applying the S-procedure [43], an upper bound of \(\dot{V}(t)\), where \(V(t)=\sum^{6}_{i=1} V_{i} (t)\), with the addition of (28) can be written as

$$\begin{aligned} &{\dot{V}(t) +\zeta^T (t) ( \varXi_7 + \varUpsilon_{2[h(t)]}) \zeta(t)-2h_U \int ^t_{t-h(t)}x^T (s) P_1 \dot{x}(s)\,ds-2 h_U\int^{t-h(t)}_{t-h_U}x^T (s) P_2 \dot{x}(s)\,ds} \\ &{\qquad{}- h_U\int^t_{t-h(t)}x^T(s)P_3 x(s)\,ds - 2h_U \int^t_{t-h(t)} \int ^t_s x^T(u) P_3 x(u) \,du\,ds} \\ &{\qquad{}- h_U\int^{t-h(t)}_{t-h_U}x^T(s) P_4 x(s) \,ds- 2h_U \int^{t-h(t)}_{t-h_U} \int^t_s x^T(u) P_4 x(u)\,du\,ds } \\ &{\quad\leq \zeta^T (t) \Biggl\{ \underbrace{\sum ^7_{i=1} \varXi_i + \varUpsilon _{1[h(t)]}+ \varUpsilon_{2[h(t)]}+\varTheta+\varOmega}_{\varPi _{[h(t)]}} } \\ &{\qquad{}-\varLambda^T_{[h(t)]} \left [ \begin{array}{c@{\quad}c} \mathcal{Q}_2 + (4/h_U)\mathcal{P}_3 & \mathcal{S}_2 \\ \mathcal{S}^T_2 & \mathcal{Q}_2 + (4/h_U)\mathcal{P}_4 \end{array} \right ]\varLambda_{[h(t)]} \Biggr\} \zeta(t).} \end{aligned}$$
(42)

By Lemma 2,

$$\zeta^T(t) \left(\varPi_{[h(t)]}-\varLambda^T_{[h(t)]} \left[ \begin{array}{c@{\quad}c} \mathcal{Q}_2 + (4/h_U)\mathcal{P}_3 & \mathcal{S}_2 \\ \mathcal{S}^T_2 & \mathcal{Q}_2 + (4/h_U)\mathcal{P}_4 \end{array} \right]\varLambda_{[h(t)]} \right) \zeta(t)<0 $$

for all ζ(t) satisfying 0=Γζ(t) is equivalent to

$$\begin{aligned} \bigl(\varGamma^{\perp} \bigr)^T \left(\varPi_{[h(t)]}-\varLambda^T_{[h(t)]} \left [ \begin{array}{c@{\quad}c} \mathcal{Q}_2 + (4/h_U)\mathcal{P}_3 & \mathcal{S}_2 \\ \mathcal{S}^T_2 & \mathcal{Q}_2 + (4/h_U)\mathcal{P}_4 \end{array} \right ]\varLambda_{[h(t)]} \right) \varGamma^{\perp}<0. \end{aligned}$$
(43)
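The equivalence underlying (43) is Finsler's lemma: a quadratic form is negative on the kernel of Γ if and only if its projection onto that kernel, via the right orthogonal complement \(\varGamma^{\perp}\), is negative definite. This can be illustrated numerically on a small hypothetical instance (all matrices below are made up for the demonstration, not the paper's \(\varPi_{[h(t)]}\) and Γ):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)

# Hypothetical instance: symmetric M, full-row-rank constraint Gamma.
n, m = 6, 2
Gamma = rng.standard_normal((m, n))
G_perp = null_space(Gamma)            # columns span ker(Gamma)

# Build M that is negative definite on ker(Gamma) but indefinite overall:
# start from -I and add a large positive rank-1 term outside the kernel.
g = Gamma[0] / np.linalg.norm(Gamma[0])
M = -np.eye(n) + 10.0 * np.outer(g, g)
M = (M + M.T) / 2

# Projected matrix test (the analogue of checking (43)).
proj = G_perp.T @ M @ G_perp
assert np.linalg.eigvalsh(proj).max() < 0

# Cross-check: zeta^T M zeta < 0 for random zeta in ker(Gamma).
for _ in range(100):
    zeta = G_perp @ rng.standard_normal(G_perp.shape[1])
    assert zeta @ M @ zeta < 0
print("projection test consistent with the constrained quadratic form")
```

Note that `M` itself has a large positive eigenvalue, so the constraint 0=Γζ(t) is essential: only the projected matrix needs to be negative definite.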

Then, by Lemma 3, the condition (43) is equivalent to the following inequality with any matrix \(\varPsi\in \mathbb {R}^{6n \times15n}\):

(44)

The above condition is affinely dependent on h(t). Therefore, if inequalities (13) and (14) hold, then inequality (44) is satisfied, which means that system (4) is asymptotically stable for \(0 \leq h(t) \leq h_{U}\) and \(\dot{h}(t) \leq h_{D}\). This completes our proof. □

Remark 2

Unlike the previous works [11–30], the new augmented vector ζ(t) defined in (12) was utilized in Theorem 1, which includes state vectors such as \(\int^{t}_{t-h(t)} \int^{t}_{s} f(x(u)) \,du \,ds\) and \(\int^{t-h(t)}_{t-h_{U}} \int^{t}_{s} f(x(u)) \,du \,ds\). These state vectors have not been utilized as elements of an augmented vector before. Furthermore, \(V_{1}\), \(V_{3}\), \(V_{4}\), and \(V_{5}\) have not been proposed in previous works on the stability analysis of neural networks with time-varying delays, which is the main difference between Theorem 1 and the methods in other literature. Thus, some new cross terms, which may play a role in reducing the conservatism of the stability condition, were considered in the stability criteria for (4).

Remark 3

It should be noted that the four zero equalities (24)–(27) are added to \(\dot{V}(t)\) as shown in (42). Inspired by the work of [37], the two zero equalities (24) and (25) are proposed and utilized in Theorem 1 to enhance the feasible region of the stability criterion. As presented in (24) and (25), the quadratic terms \(h_{U}(x^{T}(t)P_{1}x(t)-x^{T}(t-h(t))P_{1}x(t-h(t)))\) and \(h_{U}(x^{T}(t-h(t))P_{2}x(t-h(t))-x^{T}(t-h_{U})P_{2}x(t-h_{U}))\) play a role in enhancing the feasible region of the stability criterion. Also, as shown in (30), by merging the two integral terms \(-2h_{U} \int^{t}_{t-h(t)} \dot{x}^{T}(s) P_{1} x(s)\,ds\) and \(-2h_{U} \int^{t-h(t)}_{t-h_{U}} \dot{x}^{T}(s) P_{2} x(s)\,ds\) into the terms \(-h_{U}\int^{t}_{t-h(t)} \beta^{T}(s) \mathcal{Q}_{1} \beta(s) \,ds\) and \(-h_{U}\int^{t-h(t)}_{t-h_{U}} \beta^{T}(s) \mathcal{Q}_{1} \beta(s) \,ds\), respectively, the conservatism of the stability criterion can be reduced. Furthermore, the two zero equalities (26) and (27) are proposed for the first time to increase the feasible region of the criterion. These zero equalities can be obtained from the fact that \(\int^{t}_{t-h} \int^{t}_{s} \dot{f}(u)\,du\,ds=hf(t) - \int^{t}_{t-h}f(s)\,ds\) with \(f(t)=x^{T}(t)Px(t)\). The terms \(-h_{U} \int^{t}_{t-h(t)} x^{T}(s)P_{3} x(s)\,ds\) and \(-h_{U} \int^{t-h(t)}_{t-h_{U}}x^{T}(s)P_{4} x(s)\,ds\) are merged into the result of \(\dot{V}_{4}(t)\), and the terms \(-2h_{U} \int^{t}_{t-h(t)}\int^{t}_{s} \dot{x}^{T} (u) P_{3} x(u)\,du\,ds\) and \(-2h_{U} \int^{t-h(t)}_{t-h_{U}}\int^{t}_{s} \dot{x}^{T} (u) P_{4} x(u)\,du\,ds\) are merged into the result of \(\dot{V}_{5}(t)\) as shown in (32).

Remark 4

In the proposed Theorem 1, conditions guaranteeing the positiveness of V(t), such as \(\mathcal{R}>0\), \(\mathcal{N}>0\), Σ>0, Δ>0, \(\mathcal{G}>0\), \(\mathcal{Q}_{1}>0\), \(\mathcal{Q}_{2}>0\), and \(Q_{3}>0\), are included. These conditions guarantee the positiveness of each \(V_{i}(t)\ (i=1,\ldots,6)\). However, as mentioned in [39] and [40], by incorporating some functionals of V(t), the positiveness condition on V(t) can be relaxed, which will be introduced in Theorem 2.

For the sake of simplicity of the matrix and vector representation in Theorem 2, the block entry matrices \(\tilde{e}_{i}\ (i=1,\ldots,6) \in\mathbb{R}^{6n \times n}\), which will be used below, are defined analogously. (For example, \(\tilde{e}_{3} = [0_{n}, 0_{n}, I_{n}, 0_{n}, 0_{n}, 0_{n} ]^{T}\).) Assume that \(\mathcal{N}>0\), Σ>0, Δ>0, \(\mathcal{G}>0\), \(\mathcal{Q}_{1}>0\), \(\mathcal{Q}_{2}>0\), and \(Q_{3}>0\). Then V(t) has the following lower bound:

$$\begin{aligned} V(t) >& \alpha^T(t) \mathcal{R} \alpha(t) + \int^t_{t-h_U} \beta^T (s) \mathcal{N} \beta(s)\,ds \\ &{}+ h_U \int ^t_{t-h_U}\int^t_s \beta^T(u) \mathcal {Q}_1 \beta(u)\,du\,ds. \end{aligned}$$
(45)

By (a) in Lemma 1, the lower bound of \(\int^{t}_{t-h_{U}} \beta^{T} (s) \mathcal{N} \beta(s)\,ds\) can be obtained as

$$\begin{aligned} &{\int^t_{t-h_U} \beta^T (s) \mathcal{N} \beta(s)\,ds } \\ &{\quad\geq (1/h_U) \biggl(\int^t_{t-h_U} \beta(s)\,ds \biggr)^T\mathcal{N} \biggl(\int^t_{t-h_U} \beta(s)\,ds \biggr) } \\ &{\quad=(1/h_U) \left [ \begin{array}{c} \int^t_{t-h_U}x(s)\,ds \\ x(t)-x(t-h_U) \\ \int^t_{t-h_U} f(x(s))\,ds \end{array} \right ]^T} \\ &{\qquad{}\times\mathcal{N}\left [ \begin{array}{c} \int^t_{t-h_U}x(s)\,ds \\ x(t)-x(t-h_U) \\ \int^t_{t-h_U} f(x(s))\,ds \end{array} \right ] } \\ &{\quad=(1/h_U)\alpha^T(t) \bigl([ \tilde{e}_3, \tilde {e}_1-\tilde{e}_2, \tilde{e}_5]} \\ &{\qquad{}\times\mathcal{N}[\tilde{e}_3, \tilde{e}_1-\tilde{e}_2, \tilde{e}_5]^T \bigr) \alpha(t).} \end{aligned}$$
(46)
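The bound (46) is the standard single-integral Jensen inequality: for \(\mathcal{N}>0\), \(\int^{t}_{t-h}\beta^{T}(s)\mathcal{N}\beta(s)\,ds \geq (1/h)(\int^{t}_{t-h}\beta(s)\,ds)^{T}\mathcal{N}(\int^{t}_{t-h}\beta(s)\,ds)\). A discretized check on a sample signal (a sketch; signal and weight are arbitrary stand-ins for β and \(\mathcal{N}\)):

```python
import numpy as np

# Jensen bound: int b^T N b ds >= (1/h) (int b)^T N (int b) for N > 0.
h, N_pts = 2.0, 400
s = np.linspace(-h, 0.0, N_pts + 1)
ds = h / N_pts

# A sample 3-dimensional signal standing in for beta(s) = (x, xdot, f(x)).
b = np.stack([np.sin(s), np.cos(s), np.tanh(s)]).T

# A symmetric positive definite weight playing the role of N.
W = np.array([[1.0, 0.2, 0.0], [0.2, 2.0, 0.1], [0.0, 0.1, 1.5]])
assert np.linalg.eigvalsh(W).min() > 0

lhs = np.einsum('ij,jk,ik->i', b, W, b).sum() * ds   # int b^T W b ds
integral = b.sum(axis=0) * ds                        # int b ds
rhs = integral @ W @ integral / h
assert lhs >= rhs - 1e-9
print("Jensen lower bound holds:", lhs >= rhs)
```

The same Cauchy–Schwarz argument, applied to the double integral, gives the factor \(2/h_{U}\) in (47).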

By utilizing (b) in Lemma 1, a lower bound of the integral term \(h_{U} \int^{t}_{t-h_{U}}\int^{t}_{s} \beta^{T}(u) \mathcal{Q}_{1} \beta(u)\,du\,ds\) can be obtained as

$$\begin{aligned} &{h_U \int^t_{t-h_U}\int ^t_s \beta^T(u) \mathcal{Q}_1 \beta(u)\,du\,ds} \\ &{\quad \geq (2/h_U) \biggl(\int^t_{t-h_U} \int^t_s \beta(u)\,du\,ds \biggr)^T } \\ &{\qquad{}\times\mathcal{Q}_1 \biggl(\int^t_{t-h_U} \int^t_s \beta(u)\,du\,ds \biggr)} \\ &{\quad=(2/h_U) \left [ \begin{array}{c} \int^t_{t-h_U} \int^t_s x(u)\,du\,ds \\ h_U x(t) - \int^t_{t-h_U} x(s)\,ds \\ \int^t_{t-h_U} \int^t_s f(x(u))\,du\,ds \end{array} \right ]^T } \\ &{\qquad{}\times\mathcal{Q}_1\left [ \begin{array}{c} \int^t_{t-h_U} \int^t_s x(u)\,du\,ds \\ h_U x(t) - \int^t_{t-h_U} x(s)\,ds \\ \int^t_{t-h_U} \int^t_s f(x(u))\,du\,ds \end{array} \right ]} \\ &{\quad=(2/h_U) \alpha^T (t) \bigl([ \tilde{e}_4, h_U \tilde {e}_1- \tilde{e}_3, \tilde{e}_6]} \\ &{\qquad{}\times\mathcal{Q}_1[ \tilde{e}_4, h_U \tilde{e}_1- \tilde{e}_3, \tilde{e}_6]^T \bigr) \alpha(t).} \end{aligned}$$
(47)

Therefore, if the following inequality holds,

$$\begin{aligned} &{\mathcal{R}+(1/h_U)[\tilde{e}_3, \tilde{e}_1-\tilde{e}_2, \tilde{e}_5] \mathcal{N}[\tilde{e}_3, \tilde{e}_1-\tilde{e}_2, \tilde{e}_5]^T} \\ &{\quad{}+(2/h_U) [ \tilde{e}_4, h_U \tilde{e}_1- \tilde{e}_3, \tilde{e}_6]} \\ &{\quad{}\times\mathcal {Q}_1[ \tilde{e}_4, h_U \tilde{e}_1- \tilde{e}_3, \tilde{e}_6]^T>0,} \end{aligned}$$
(48)

then the lower bound of V(t) is guaranteed to be positive. Thus, by removing the positive definiteness requirement on the matrix \(\mathcal{R}\) and adding inequality (48) to the stability condition of Theorem 1, we obtain the following theorem.

Theorem 2

For given scalars \(h_{U}>0\) and \(h_{D}\), and diagonal matrices \(K_{p}=\operatorname{diag}\{k^{+}_{1},\ldots ,k^{+}_{n}\}\) and \(K_{m}=\operatorname{diag}\{k^{-}_{1},\ldots,k^{-}_{n}\}\), system (4) is asymptotically stable for \(0 \leq h(t) \leq h_{U}\) and \(\dot{h}(t) \leq h_{D}\), if there exist positive diagonal matrices \(\varSigma=\operatorname{diag}\{\sigma _{1},\ldots,\sigma_{n}\}\), \(\varDelta=\operatorname{diag}\{\delta_{1},\ldots,\delta_{n}\}\), \(H_{i}=\operatorname{diag}\{h_{1i},\ldots,h_{ni}\}\ (i=1,\ldots,5)\), symmetric matrices \(\mathcal{R} \in\mathbb{R}^{6n \times6n}\), positive definite matrices \(\mathcal{N} \in\mathbb{R}^{3n \times3n}\), \(\mathcal{G}=[G_{ij}] \in\mathbb{R}^{5n \times5n}\), \(\mathcal{Q}_{1} \in\mathbb{R}^{3n \times3n}\), \(\mathcal{Q}_{2} \in\mathbb{R}^{3n \times3n}\), \(Q_{3} \in\mathbb{R}^{n \times n}\), any matrices \(\mathcal{S}_{1}\in\mathbb{R}^{3n \times3n}\), \(\mathcal{S}_{2}\in\mathbb{R}^{3n \times3n}\), \(\varPsi\in\mathbb{R}^{6n \times15n}\), and any symmetric matrices \(P_{i} \in\mathbb{R}^{n \times n}\ (i=1,\ldots,4)\), satisfying the LMIs (13)–(16) and (48), where \(\varPhi_{[h(t)]}\), \(\mathcal{P}_{i}\ (i=1,\ldots,4)\), Θ, and Γ are defined in (12), and \(\varGamma^{\perp}\) is the right orthogonal complement of Γ.

Remark 5

When information about the upper bound of \(\dot{h}(t)\) is unavailable, Theorems 1 and 2 can still provide delay-dependent stability criteria for \(0 \leq h(t) \leq h_{U}\) by not considering \(V_{3}(t)\).

4 Numerical examples

In this section, two numerical examples are introduced to show the improvements achieved by the proposed methods. In the examples, MATLAB with YALMIP 3.0 and SeDuMi 1.3 is used to solve the LMI problems.

Example 1

Consider the neural networks (4) where

$$\begin{aligned} &{A=\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 1.2769& 0& 0& 0\\ 0& 0.6231 &0 &0\\ 0 &0& 0.9230 &0\\ 0 &0& 0 &0.4480 \end{array} \right ],} \\ &{W_0=\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} -0.0373& 0.4852& -0.3351& 0.2336\\ -1.6033& 0.5988 &-0.3224& 1.2352\\ 0.3394 &-0.0860& -0.3824& -0.5785\\ -0.1311& 0.3253 &-0.9534 &-0.5015 \end{array} \right ],} \\ &{W_1=\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 0.8674& -1.2405 &-0.5325& 0.0220\\ 0.0474& -0.9164 &0.0360& 0.9816\\ 1.8495& 2.6117& -0.3788& 0.8428\\ -2.0413& 0.5179 &1.1734 &-0.2775 \end{array} \right ], } \\ &{K_p=\operatorname{diag}\{0.1137, 0.1279, 0.7994, 0.2368 \},} \\ &{K_m=\operatorname{diag}\{0, 0, 0, 0\}.} \end{aligned}$$
(49)

When \(h_{D}\) is 0.1, 0.5, 0.9, or unknown (or larger than one), the maximum delay bounds obtained by applying Theorems 1 and 2 to system (4) with the parameters (49), together with the results of the recent works [20–23, 25, 26, 29], are listed in Table 1. It can be seen that the proposed Theorem 1 provides larger delay bounds than the methods based on the delay-partitioning approach. Also, as mentioned in Remark 4, Theorem 2 improves the feasible region of the stability criterion of Theorem 1.

Table 1 Delay bounds \(h_{U}\) with different \(h_{D}\) (Example 1)

Example 2

Consider the neural networks (4) with the parameters

$$\begin{aligned} \begin{aligned} {}&A = \left [ \begin{array}{c@{\quad}c} 2 & 0\\ 0&2 \end{array} \right ],\qquad W_0= \left [ \begin{array}{c@{\quad}c} 1 & 1\\ -1&-1 \end{array} \right ],\\ &W_1 = \left [ \begin{array}{c@{\quad}c} 0.88 & 1\\ 1 & 1 \end{array} \right ],\qquad K_p = \operatorname{diag}\{0.4, 0.8\}, \\ &K_m= \operatorname{diag}\{0, 0\}. \end{aligned} \end{aligned}$$
(50)

In Table 2, when \(h_{D}\) is 0.8, 0.9, or unknown (or larger than one), the maximum delay bounds obtained in Refs. [20, 21, 24, 26–30] are compared with the results of the proposed methods. From Table 2 it can be confirmed that Theorem 1 significantly increases the feasible region of the stability criterion. Also, one can see that Theorem 2 provides a larger feasible region than Theorem 1. For system (4) with the above parameters, Fig. 1 shows the state responses x(t) when h(t)=3.4504|sin(t)|, f 1(x 1(t))=0.4tanh(x 1(t)), f 2(x 2(t))=0.8tanh(x 2(t)), and x(0)=[1,−1]T. The figure shows that the state signal converges to zero, which verifies the effectiveness of Theorem 2.
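The decay of the state claimed above can be reproduced with a simple forward-Euler simulation of system (4) under the Example 2 data (a sketch; the constant initial history on \([-h_{U}, 0]\) and the step size are assumptions, not from the paper):

```python
import numpy as np

# System (4) with the Example 2 parameters (50):
# xdot(t) = -A x(t) + W0 f(x(t)) + W1 f(x(t - h(t))),
# h(t) = 3.4504*|sin(t)|, f = (0.4*tanh, 0.8*tanh), x(0) = [1, -1]^T.
A = np.array([[2.0, 0.0], [0.0, 2.0]])
W0 = np.array([[1.0, 1.0], [-1.0, -1.0]])
W1 = np.array([[0.88, 1.0], [1.0, 1.0]])

f = lambda x: np.array([0.4 * np.tanh(x[0]), 0.8 * np.tanh(x[1])])

dt, T = 1e-3, 60.0
h_max = 3.4504
n_hist = int(h_max / dt) + 2          # history buffer length
steps = int(T / dt)

# Constant initial condition on [-h_max, 0] (an assumption for the sketch).
traj = np.zeros((n_hist + steps, 2))
traj[:n_hist] = np.array([1.0, -1.0])

for k in range(steps):
    t = k * dt
    delay = int(round(3.4504 * abs(np.sin(t)) / dt))
    x = traj[n_hist + k - 1]
    x_del = traj[n_hist + k - 1 - delay]
    traj[n_hist + k] = x + dt * (-A @ x + W0 @ f(x) + W1 @ f(x_del))

# Consistent with Fig. 1, the state should decay toward the origin.
assert np.all(np.isfinite(traj))
assert np.linalg.norm(traj[-1]) < 1.0
print("norm of final state:", np.linalg.norm(traj[-1]))
```

This is only a numerical illustration of one trajectory; the LMI conditions of Theorem 2 are what certify asymptotic stability for all admissible delays.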

Fig. 1

State responses with h(t)=3.4504|sin(t)| (Example 2)

Table 2 Delay bounds \(h_{U}\) with different \(h_{D}\) (Example 2)

5 Conclusion

In this paper, two improved delay-dependent stability criteria for neural networks with time-varying delays have been proposed by the use of the Lyapunov stability theorem and the LMI framework. In Theorem 1, by constructing a newly augmented Lyapunov–Krasovskii functional and utilizing some new zero equalities and the techniques mentioned in Remarks 3 and 4, a sufficient condition for the asymptotic stability of the system was derived. By taking a lower bound of the Lyapunov–Krasovskii functional and utilizing the property of its positiveness, a further improved stability condition was derived in Theorem 2. Through two numerical examples dealt with in many previous works, the improvement in the feasible region of the two proposed stability criteria has been successfully verified. Based on the proposed Lyapunov–Krasovskii functional, future work will focus on state estimation [44], periodic solutions of neural networks [45], quasi-synchronization control for switched networks [46], dissipativity and quasi-synchronization with discontinuous activations and parameter mismatches [47], and so on.