1 Introduction

Over the past few decades, neural networks (NN) have been successfully applied in many scientific and engineering areas such as image processing, associative memory, pattern recognition and optimization [1, 2]. In practice, NN are typically realized by very large-scale integrated electronic circuits. Owing to the finite communication speed between neurons and the limited switching speed of electronic devices, time delays often arise in NN, which degrade performance and may even destabilize the system. Therefore, delay-dependent stability analysis of NN with delay has become a key problem over the past decades; see [3,4,5,6,7].

The Lyapunov–Krasovskii (LK) approach is the most widely investigated method for the stability analysis of NN with time delay. A variety of delay-dependent stability conditions have been proposed in the form of linear matrix inequalities (LMIs) [8,9,10,11,12,13]. The main focus of these works is to develop new stability criteria that provide the largest upper bound (LUB) of the delay by establishing a negative definite condition on the derivative of the LK functional (LKF). To obtain the LUB of the time-varying delay for NN, two issues are crucial: constructing an appropriate LKF and finding a tight bound on the quadratic integral terms appearing in the time derivative of the LKF.

In earlier works, the Jensen-based integral inequality (JBII) [14] was used extensively to bound such integral terms. To obtain less conservative stability conditions, several other integral inequalities have since been employed, such as the Wirtinger-based integral inequality (WBII) [15], the auxiliary function-based integral inequality (AFII) [16], the Bessel–Legendre-based integral inequality (BLII) [17] and the free-matrix-based integral inequality (FMII) [18]. These inequalities have contributed to improved stability criteria. A careful look reveals that they introduce additional quadratic terms, each involving a new state vector. It was shown in [19] that the LKF must be augmented by the states involved in the inequality in order to reduce conservatism.

A matrix-refined function was formulated in [20], in which slack variables are utilized to refine the Lyapunov matrix and thereby provide more flexibility. In [21,22,23,24], LKFs were proposed in which not all of the quadratic terms are required to be positive definite; it has been demonstrated that such relaxed conditions provide improved results. To exploit information on the delay and its derivative, a new type of LKF, known as the delay-product (DP) functional [25, 26], has been introduced, which contains the time-varying delay as a coefficient in the non-integral quadratic terms. In a similar fashion, new DP functionals have been proposed in [22, 27, 28] by modifying the WBII and FMII to exploit the advantages of single-integral state vectors.

Motivated by these works on new forms of LKF, this article focuses on the construction of a new delay-product LKF by modifying the non-orthogonal polynomial-based integral inequality (NPII) of [29]. The NPII is built on a non-orthogonal polynomial sequence: the auxiliary vector \(\{1, g(s), g^{2}(s)\}\) is non-orthogonal because \(\int _{a}^{b}g^{2}(s)\,\textrm{d}s\ne 0\). Hence, an additional cross-term appears in the NPII compared with orthogonal polynomial-type integral inequalities, and this cross-term is key to obtaining a less conservative stability condition. The improved reciprocally convex lemma of [19] and the second-order BLII are then jointly utilized to derive delay-dependent stability criteria for the delayed NN. To show the efficacy of the proposed stability conditions, two numerical examples are considered.

\(\textbf{Notations}\): \(\mathbb {R}^n\) and \(\mathbb {R}^{n \times m}\) denote the n-dimensional Euclidean space and the set of all real \((n \times m)\) matrices, respectively. \(Col(\cdot )\) and \({\text {diag}}(\cdot )\) stand for a column vector and a diagonal matrix, respectively. \(Q>0\) means that Q is a symmetric positive definite matrix. \(Sym\{N\}=N+N^\textrm{T}\). 0 and I are the zero and identity matrices of appropriate dimensions. For a given function \(x: [-\hslash , +\infty ) \rightarrow \mathbb {R}^n\), \(x_{t}(s)\) denotes \(x(t + s)\) for all \(s \in [-\hslash , 0]\) and all \(t \ge 0\).

2 System description and preliminaries

The NN with its equilibrium point shifted to the origin can be represented as:

$$\begin{aligned} \dot{x}(t)=-\textit{A}x(t)+\textit{B}f(\textit{W}x(t))+\textit{C}f(\textit{W}x(t-d_{t})) \end{aligned}$$
(1)

where \(x(t) \in \mathbb {R}^n\) is the state vector associated with the n neurons; \(\textit{A}= {\text {diag}}\{a_{1},\ldots ,a_{n-1}, a_{n}\} >0\) is the self-feedback matrix, while \(\textit{B}, \textit{C}\) and \(\textit{W} \in \mathbb {R}^{n \times n}\) are the known interconnection weight matrices of the neurons. The time-varying delay \(d_{t}\) (also written d(t)) satisfies, for given real scalars \(\nu _{1}\), \(\nu _{2}\) and \(\hslash \),

$$\begin{aligned} 0 \le d_{t} \le \hslash , \qquad -\nu _{1} \le \dot{d}_{t} \le \nu _{2} \end{aligned}$$
(2)

The neuron activation function is denoted by \(f(x(t))=col\{f_{1}(x_{1}(t)),\ldots ,f_{n}(x_{n}(t))\}\) and satisfies

$$\begin{aligned} \sigma _{i}^{-} \le \frac{f_{i}(r_{1})-f_{i}(r_{2})}{r_{1}-r_{2}}\le \sigma _{i}^{+}, r_{1} \not = r_{2}, i=1,\ldots ,n \end{aligned}$$
(3)

where \(\sigma _{i}^{-}\) and \(\sigma _{i}^{+}\) are known real scalars. Throughout this paper, let \(\varSigma _{1}={\text {diag}}\{\sigma _{1}^{-},\ldots ,\sigma _{n}^{-}\}\) and \(\varSigma _{2}={\text {diag}}\{\sigma _{1}^{+},\ldots ,\sigma _{n}^{+}\}\). The aim of this paper is to construct new functionals that yield improved stability criteria for the delayed NN (1). To this end, some key lemmas are recalled as follows. First, the improved reciprocally convex lemma is presented.

Lemma 1

[19] For matrices \(X_{i}>0\), any matrices \(S_{i}\), \(i=1,2\), and positive scalars \(\alpha , \beta \) satisfying \(\alpha +\beta =1\), the following inequality holds:

$$\begin{aligned} \begin{bmatrix} \frac{1}{\alpha } X_{1} &{} 0 \\ 0 &{} \frac{1}{\beta } X_{2} \end{bmatrix}&\ge \begin{bmatrix} (1+\beta )X_{1}-T_{1} &{} \beta S_{1}+\alpha S_{2} \\ *&{} (1+\alpha )X_{2}-T_{2} \end{bmatrix} \end{aligned}$$
(4)

where \(T_{1}= \beta S_{2}X_{2}^{-1}S_{2}^\textrm{T}\) and \(T_{2}= \alpha S_{1}^\textrm{T}X_{1}^{-1} S_{1}\). The next lemma presents the second-order BLII and the Jensen integral inequality (JII); before that, a brief numerical spot-check of (4) is sketched below.
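As an illustrative aside (not used in the subsequent analysis), inequality (4) can be spot-checked numerically. The following sketch, which assumes NumPy is available, draws random \(X_{1}, X_{2}>0\), arbitrary \(S_{1}, S_{2}\) and a random \(\alpha \in (0,1)\), and reports the most negative eigenvalue found for the difference of the two sides of (4); a non-negative value (up to round-off) is consistent with the lemma.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

def random_pd(n):
    # random symmetric positive definite matrix
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

worst = np.inf
for _ in range(1000):
    X1, X2 = random_pd(n), random_pd(n)
    S1, S2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    alpha = rng.uniform(0.05, 0.95)
    beta = 1.0 - alpha

    lhs = np.block([[X1 / alpha, np.zeros((n, n))],
                    [np.zeros((n, n)), X2 / beta]])
    T1 = beta * S2 @ np.linalg.inv(X2) @ S2.T
    T2 = alpha * S1.T @ np.linalg.inv(X1) @ S1
    off = beta * S1 + alpha * S2
    rhs = np.block([[(1 + beta) * X1 - T1, off],
                    [off.T, (1 + alpha) * X2 - T2]])

    worst = min(worst, np.linalg.eigvalsh(lhs - rhs).min())

print(f"most negative eigenvalue of (lhs - rhs): {worst:.2e}")
```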

Lemma 2

[14, 17, 30] For a continuously differentiable function \( \textit{w}: [a,b] \rightarrow \mathbb {R}^n \) and a constant matrix \(R \ge 0\), the following inequalities hold:

  1. (i)

    Second-order BLII:

    $$\begin{aligned}&\int ^b_a \dot{\textit{w}}^\textrm{T}(s) R \dot{\textit{w}}(s) \textrm{d}s \ge \frac{1}{b-a}[ \mathcal {V}_{1}^\textrm{T} R \mathcal {V}_{1}+ 3\mathcal {V}_{2}^\textrm{T} R \mathcal {V}_{2} \nonumber \\&\quad + 5\mathcal {V}_{3}^\textrm{T} R \mathcal {V}_{3}] \end{aligned}$$
    (5)
  2. (ii)

    JII:

    $$\begin{aligned} \int ^b_a \textit{w}^\textrm{T}(s) R \textit{w}(s) \textrm{d}s \ge \frac{1}{b-a} \vartheta _{0}^\textrm{T} R \vartheta _{0} \end{aligned}$$
    (6)

where \(\mathcal {V}_{1}= \textit{w}(b)-\textit{w}(a), \mathcal {V}_{2}=\textit{w}(a)+\textit{w}(b)-\frac{2}{(b-a)}\vartheta _{0}\), \(\mathcal {V}_{3}=\mathcal {V}_{1}+\frac{6}{(b-a)}\int _{a}^{b} \delta _{a}^{b}(s) \textit{w}(s)\textrm{d}s\), \(\vartheta _{0}=\int _{a}^{b} \textit{w}(s)\textrm{d}s\) and \(\delta _{a}^{b}(s)=-2\left( \frac{s-a}{b-a}\right) +1\).
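As a quick numerical illustration of Lemma 2 (an aside, not used in the proofs), the scalar case \(n=1\), \(R=1\) of (5) and (6) can be checked on a test function, for example \(w(s)=\sin (\pi s)\) on \([0,1]\); both inequalities are strict for this choice. A minimal NumPy sketch follows.

```python
import numpy as np

a, b = 0.0, 1.0
s = np.linspace(a, b, 200001)
w = np.sin(np.pi * s)              # test function w(s), n = 1, R = 1
wdot = np.pi * np.cos(np.pi * s)   # derivative of w

theta0 = np.trapz(w, s)                            # \vartheta_0 = int_a^b w(s) ds
delta = 1.0 - 2.0 * (s - a) / (b - a)              # \delta_a^b(s)
V1 = w[-1] - w[0]                                  # \mathcal{V}_1
V2 = w[0] + w[-1] - 2.0 / (b - a) * theta0         # \mathcal{V}_2
V3 = V1 + 6.0 / (b - a) * np.trapz(delta * w, s)   # \mathcal{V}_3

blii_lhs = np.trapz(wdot ** 2, s)
blii_rhs = (V1 ** 2 + 3 * V2 ** 2 + 5 * V3 ** 2) / (b - a)
jii_lhs = np.trapz(w ** 2, s)
jii_rhs = theta0 ** 2 / (b - a)

print(f"BLII: {blii_lhs:.4f} >= {blii_rhs:.4f}")   # approx. 4.9348 >= 4.8634
print(f"JII : {jii_lhs:.4f} >= {jii_rhs:.4f}")     # approx. 0.5000 >= 0.4053
```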

The next inequality to be recalled is used to bound the quadratic integral terms. It is derived from a polynomial sequence that is non-orthogonal in nature.

Lemma 3

[29] For real scalars a and b with \(b > a\), a differentiable function \(w: [a,b] \rightarrow \mathbb {R}^n\), a real matrix \(R > 0\) of dimension \(n \times n\) and matrices \(Z_{1}, Z_{2}, Z_{3}, L_{1}, L_{2}\) satisfying \(\mathcal {Z}=\begin{bmatrix} Z_{1} &{} Z_{2} &{} L_{1} \\ *&{} Z_{3} &{} L_{2} \\ *&{} *&{} R \end{bmatrix} \ge 0,~ \) the following inequality holds

$$\begin{aligned}&\int ^b_a \dot{w}^\textrm{T}(s) R \dot{w}(s) \textrm{d}s \ge \frac{1}{b-a} \mathcal {V}_{1}^\textrm{T} R \mathcal {V}_{1} + \mathcal {V}_{2}^\textrm{T} (L_{1}+L_{1}^\textrm{T}\nonumber \\&\quad -\frac{b-a}{3}Z_{1}) \mathcal {V}_{2} + \mathcal {V}_{4}^\textrm{T} [15(L_{2}+L_{2}^\textrm{T})-20(b-a)Z_{3}] \mathcal {V}_{4} \nonumber \\&\quad + 20 \mathcal {V}_{4}^\textrm{T} L_{2}\mathcal {V}_{1} \end{aligned}$$
(7)

where \(\mathcal {V}_{4}=\frac{4}{b-a} \int _{a}^{b}w(s)\textrm{d}s-\frac{8}{(b-a)^2}\int _{a}^{b} \int _{\theta }^{b} w(s)\textrm{d}s\textrm{d}\theta \) and \(\mathcal {V}_{1}, \mathcal {V}_{2}\) are as defined in Lemma 2.
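Inequality (7) can likewise be spot-checked numerically (an illustrative aside). In the scalar sketch below, \(Z_{1}=L_{1}R^{-1}L_{1}^\textrm{T}\), \(Z_{2}=L_{1}R^{-1}L_{2}^\textrm{T}\) and \(Z_{3}=L_{2}R^{-1}L_{2}^\textrm{T}\) are chosen so that \(\mathcal {Z}\ge 0\) holds by construction (it is then a Gram matrix), with \(n=1\), \(R=1\), \([a,b]=[0,1]\) and \(w(s)=e^{s}\); the values of \(L_{1}, L_{2}\) are arbitrary illustrative choices.

```python
import numpy as np

a, b = 0.0, 1.0
L1, L2 = 1.0, 0.1                       # free scalars (n = 1, R = 1)
Z1, Z2, Z3 = L1 * L1, L1 * L2, L2 * L2  # makes [[Z1,Z2,L1],[Z2,Z3,L2],[L1,L2,1]] >= 0

s = np.linspace(a, b, 100001)
w, wdot = np.exp(s), np.exp(s)

# cumulative trapezoid: cum[i] approximates int_a^{s_i} w ds
cum = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(s))))
inner = cum[-1] - cum                   # int_{s_i}^{b} w ds, used in V4

V1 = w[-1] - w[0]
V2 = w[0] + w[-1] - 2.0 / (b - a) * cum[-1]
V4 = 4.0 / (b - a) * cum[-1] - 8.0 / (b - a) ** 2 * np.trapz(inner, s)

lhs = np.trapz(wdot ** 2, s)
rhs = (V1 ** 2 / (b - a)
       + V2 ** 2 * (2 * L1 - (b - a) * Z1 / 3)
       + V4 ** 2 * (15 * 2 * L2 - 20 * (b - a) * Z3)
       + 20 * V4 * L2 * V1)
print(f"NPII: {lhs:.4f} >= {rhs:.4f}")  # approx. 3.19 >= 2.77
```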

Remark 1

The right-hand side of the NPII (7) contains four quadratic terms. The first three are similar to those of the BLII. The last one is the additional cross-term utilized in the NPII: \(\mathcal {V}_{4}^\textrm{T} L_{2}\mathcal {V}_{1}\) arises from the non-orthogonality of the polynomial sequence. This cross-term involves the states \(\frac{4}{b-a} \int _{a}^{b}w(s)\textrm{d}s\), \(\frac{8}{(b-a)^2}\int _{a}^{b} \int _{\theta }^{b} w(s)\textrm{d}s\textrm{d}\theta \), \(\textit{w}(b)\) and \(\textit{w}(a)\). The additional interaction among these states in the cross-term helps to reduce conservatism.

For a simpler presentation, the following notations are utilized.

$$\begin{aligned}&\hslash _{d}=\hslash -d_{t}, ~ x_{d}=x(t-d_{t}),~ x_{\hslash }=x(t-\hslash ), ~ \tilde{d}_{t}=1-\dot{d}_{t},~\\&w_{1}(t)= \frac{1}{d_{t}}\int _{-d_{t}}^{0} x_{t}(s)\textrm{d}s,~ w_{2}(t)\\&\quad = \frac{1}{\hslash _{d}}\int _{-\hslash }^{-d_{t}} x_{t}(s)\textrm{d}s \\&w_{3}(t)= \frac{1}{d_{t}}\int _{-d_{t}}^{0} \delta _{-d_{t}}^{0}(s)x_{t}(s)\textrm{d}s, ~ w_{4}(t)\\&\quad = \frac{1}{\hslash _{d}}\int _{-\hslash }^{-d_{t}}\delta _{-\hslash }^{-d_{t}}(s)x_{t}(s)\textrm{d}s \\&\varpi _{0}(t)=col[x(t), x_{d}, d_{t}w_{1}(t), \hslash _{d}w_{2}(t), d_{t}w_{3}(t), \hslash _{d}w_{4}(t)],~ \\&\varpi _{1}(s)=col[x(s), \dot{x}(s), f(\textit{W}x(s)), \int ^{t}_{s}\dot{x}(u)\textrm{d}u] \\&\zeta (t)=col[x(t), x_{d}, x_{\hslash }, \dot{x}_{d}, w_{1}(t), w_{2}(t), w_{3}(t),\\&\quad w_{4}(t),\dot{x}_{\hslash }, f(\textit{W}x(t)), f(\textit{W}x_{d}), f(\textit{W}x_{\hslash }),\\&\quad \int _{t-d_{t}}^{t} f(\textit{W}x(s))\textrm{d}s, \int _{t-\hslash }^{t-d_{t}}f(\textit{W}x(s))\textrm{d}s],\\&e_{p}=-\textit{A} e_{1}+\textit{B}e_{10}+\textit{C}e_{11}, e_{0}=0_{n\times 14n}\\ \end{aligned}$$

where \(e_{1}, e_{2},\ldots , e_{14} \in \mathbb {R}^{n \times 14n}\) are block entry matrices, for example, \(e_{2}=[0_{n \times n}, I_{n}, 0_{n \times 12n}]\).
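For readers who wish to implement the conditions that follow, the block entry matrices \(e_{i}\) are easy to generate programmatically; a minimal NumPy sketch (with the neuron count n as an assumed parameter) is given below.

```python
import numpy as np

def block_entry(i, n, blocks=14):
    """e_i in R^{n x 14n}: identity in the i-th n-column block, zeros elsewhere."""
    E = np.zeros((n, blocks * n))
    E[:, (i - 1) * n:i * n] = np.eye(n)
    return E

# example: e_2 = [0_{n x n}, I_n, 0_{n x 12n}] with n = 4
e2 = block_entry(2, n=4)
e0 = np.zeros((4, 14 * 4))   # e_0 = 0_{n x 14n}
```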

3 Main results

In this section, we construct a delay-product functional (DPF) using the NPII in (7).

Proposition 1

If the matrices \(0 < N_{1}, N_{2} \in \mathbb {R}^{n \times n}\) and the symmetric matrices \(Z_{i}, Y_{i}, L_{j}\) and \(M_{j} \in \mathbb {R}^{n \times n}\), \(i=1,2,3\), \(j=1,2\), satisfy the following LMIs

$$\begin{aligned} \begin{bmatrix} Z_{1} &{} Z_{2} &{} L_{1} \\ *&{} Z_{3} &{} L_{2} \\ *&{} *&{} N_{1} \end{bmatrix}> 0, \qquad \begin{bmatrix} Y_{1} &{} Y_{2} &{} M_{1}\\ *&{} Y_{3} &{} M_{2}\\ *&{} *&{} N_{2} \end{bmatrix} > 0; \end{aligned}$$
(8)

then the following functional qualifies as a DPF candidate:

$$\begin{aligned} V_{N}&= \int ^{t}_{t-d_{t}} \dot{x}^\textrm{T}(s) N_{1} \dot{x}(s) \textrm{d}s+ d(t)\vartheta _{4}^\textrm{T} (t) \mathcal {Z} \vartheta _{4} (t)\nonumber \\&\quad - \frac{1}{\hslash }[d_{t}\vartheta _{3}^\textrm{T} (t) \mathcal {L}(\hslash ) \vartheta _{3} (t)+\hslash _{d}(t)\vartheta _{5}^\textrm{T} (t) \mathcal {M}(\hslash ) \vartheta _{5} (t)]\nonumber \\&\quad +\int ^{t-d_{t}}_{t-\hslash } \dot{x}^\textrm{T}(s) N_{2} \dot{x}(s) \textrm{d}s + \hslash _{d}(t) \vartheta _{6}^\textrm{T} (t) \mathcal {Y} \vartheta _{6} (t) \end{aligned}$$
(9)

where

$$\begin{aligned} \mathcal {L}(\hslash )&=\begin{bmatrix} \frac{1}{\hslash }N_{1} &{}\quad 0 &{}\quad 10L_{2}^\textrm{T} \\ *&{}\quad L_{1}+L_{1}^\textrm{T} &{}\quad 0 \\ *&{}\quad *&{}\quad 15(L_{2}+L_{2}^\textrm{T}) \end{bmatrix},\\ \mathcal {M}(\hslash )&=\begin{bmatrix} \frac{1}{\hslash }N_{2} &{}\quad 0 &{}\quad 10M_{2}^\textrm{T} \\ *&{}\quad M_{1}+M_{1}^T &{}\quad 0 \\ *&{}\quad *&{}\quad 15(M_{2}+M_{2}^\textrm{T}) \end{bmatrix},\\ \mathcal {Y}&=\textrm{diag}\left\{ \frac{Y_{1}}{3}, 20Y_{3} \right\} , \mathcal {Z}=\textrm{diag}\left\{ \frac{Z_{1}}{3}, 20Z_{3} \right\} ,\\ \\ \vartheta _{3} (t)&=col\{ x(t)-x_{d}(t), x(t)+x_{d}(t)-2 w_{1}(t), 4 w_{3}(t) \}, \\ \vartheta _{4} (t)&=col\{x(t)+x_{d}(t)-2 w_{1}(t), 4 w_{3}(t) \} \\ \vartheta _{5} (t)&=col\{ x_{d}(t)-x_{\hslash }(t), x_{d}(t)+x_{\hslash }(t)-2 w_{2}(t), 4 w_{4}(t)\}, \\ \vartheta _{6} (t)&=col\{x_{d}(t)+x_{\hslash }(t)-2 w_{2}(t), 4 w_{4}(t) \} \end{aligned}$$

Proof

By applying the NPII of [29] (Lemma 3) to the two integral terms, one can write \(\int ^{t}_{t-d_{t}} \dot{x}^\textrm{T}(s) N_{1} \dot{x}(s) \textrm{d}s\) \(\ge \frac{1}{d_{t}}[ d_{t}\vartheta _{3}^\textrm{T} (t) \mathcal {L}(d_{t}) \vartheta _{3} (t)] - d_{t}\vartheta _{4}^\textrm{T} (t) \mathcal {Z} \vartheta _{4} (t)\), and \(\int ^{t-d_{t}}_{t-\hslash } \dot{x}^\textrm{T}(s) N_{2} \dot{x}(s)\textrm{d}s\ge - \hslash _{d}(t) \vartheta _{6}^\textrm{T} (t) \mathcal {Y} \vartheta _{6} (t)\) \(+\frac{1}{\hslash _{d}(t)}[\hslash _{d}(t)\vartheta _{5}^\textrm{T} (t) \mathcal {M}(\hslash _{d}(t)) \vartheta _{5} (t)] \)

Since \(\hslash \ge d_{t} \ge 0\), combining these bounds with the LMIs in (8) shows that \(V_{N}(t) > 0\). \(\square \)
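To indicate how conditions such as (8) are handled numerically, a minimal CVXPY sketch is given below. This is only an illustrative feasibility check of (8) under stated assumptions (the full criterion of Theorem 1 is not encoded): each LMI is represented by one symmetric matrix variable whose \(n\times n\) blocks play the roles of \(Z_{i}, L_{j}, N_{1}\) and \(Y_{i}, M_{j}, N_{2}\), respectively, and a small margin replaces the strict inequality.

```python
import cvxpy as cp
import numpy as np

n = 2                 # neuron count (illustrative)
eps = 1e-6            # margin approximating strict positive definiteness

# One symmetric variable per LMI in (8); named blocks are recovered as slices,
# e.g. Z1 = G1[:n, :n], L2 = G1[n:2*n, 2*n:], N1 = G1[2*n:, 2*n:].
G1 = cp.Variable((3 * n, 3 * n), symmetric=True)
G2 = cp.Variable((3 * n, 3 * n), symmetric=True)

constraints = [G1 >> eps * np.eye(3 * n), G2 >> eps * np.eye(3 * n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve(solver=cp.SCS)

N1 = G1[2 * n:, 2 * n:].value   # numerical N1 block, reusable in (9)
print(problem.status)           # "optimal" indicates that (8) is feasible
```

In the full criterion, these blocks would additionally enter the functional (9) and the LMIs (10) and (11) of Theorem 1, and the feasibility problem would be solved over all decision variables simultaneously.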

Remark 2

The new delay-product LKF is constructed by modifying the non-orthogonal polynomial-based integral inequality (NPII) introduced in [29]. The NPII is built on a non-orthogonal polynomial sequence: the auxiliary vector \(\{1, g(s), g^{2}(s)\}\) is non-orthogonal because \(\int _{a}^{b}g^{2}(s)\,\textrm{d}s\ne 0\). Hence, an additional cross-term appears in the NPII compared with orthogonal polynomial-type integral inequalities, and this cross-term is key to obtaining a less conservative stability condition.

Remark 3

The DPF (9) consists of integral terms and non-integral terms with delay-dependent coefficients. The non-integral terms are non-positive and play the role of providing a relaxed stability condition, while their derivatives produce cross-terms involving products of the delay and its derivative. Hence, both the relaxed condition and the delay-product structure make the DPF more effective in reducing conservatism.

By considering the DPF (9), we have the following stability criterion.

Theorem 1

For matrices \(0<P \in \mathbb {R}^{6n \times 6n}\), \(0<P_{i}\in \mathbb {R}^{4n \times 4n}\), \(0<N_{i},R_{i}\in \mathbb {R}^{n \times n}\), diagonal matrices \(0 < \varPi _{k}\), \(\varOmega _{j}, \varLambda _{j}\), and any matrices \(U_{i}\in \mathbb {R}^{3n \times 3n}\) and \(U_{i+2}\in \mathbb {R}^{n \times n}\), \(i=1,2\); \(j=1,2,3\); \(k=1,2,\ldots ,6\), with given scalars \(\hslash , \nu _{1}\) and \(\nu _{2}\), the delayed NN (1) is asymptotically stable if the inequalities in (8) and the following LMIs hold for \(\dot{d}_{t} \in [-\nu _{1}, \nu _{2}]\).

$$\begin{aligned}&\begin{bmatrix} {\Upsilon }(\dot{d}_{t},0)&{} E_{1}^\textrm{T} U_{2}&{} e_{13}^\textrm{T}U_4\\ *&{} -{\mathcal {R}}_{1}&{}0\\ *&{} *&{} -R_{2} \end{bmatrix}<0 \end{aligned}$$
(10)
$$\begin{aligned}&\begin{bmatrix} {\Upsilon }(\dot{d}_{t},\hslash )&{} E_{2}^\textrm{T} U_{1}^\textrm{T} &{} e_{14}^\textrm{T}U_3\\ *&{} -{\mathcal {R}}_{1}&{} 0\\ *&{} *&{} -R_{2} \end{bmatrix}<0 \end{aligned}$$
(11)
$$\begin{aligned} \text {where},~~ {\Upsilon }(\dot{d}_{t},{d}_{t})&=\varPhi _{0}(\dot{d}_{t},{d}_{t})+\varPhi _{1}(\dot{d}_{t})+\varPhi _{4}\nonumber \\&\quad +{\varPhi }_{2}(\dot{d}_{t},{d}_{t})+\varPhi _{3}({d}_{t}) \end{aligned}$$
(12)
$$\begin{aligned} \Phi _{0}(\dot{d}_{t},d_{t}){} & {} =Sym\{G_{0}^\textrm{T}(d_{t}) P G_{1}(\dot{d}_{t})\}-\tilde{d}_{t}G_{3}^\textrm{T} P_{1} G_{3}\nonumber \\{} & {} \quad + Sym\{G_{4}^\textrm{T}(d_{t}) P_{1} G_{7}\} +\tilde{d}_{t}G_{3}^\textrm{T} P_{2} G_{3}-G_{5}^\textrm{T} P_{2} G_{5}\nonumber \\{} & {} \quad + Sym\{G_{6}^\textrm{T}(d_{t}) P_{2} G_{7}\}+G_{2}^\textrm{T} P_{1} G_{2} \end{aligned}$$
(13)
$$\begin{aligned} \Phi _{1}(\dot{d}_{t}){} & {} =Sym\{[(e_{10}-\varSigma _{1}We_{1})^\textrm{T}\varPi _{1}\nonumber \\{} & {} \quad +(\varSigma _{2}We_{1}-e_{10})^\textrm{T} \varPi _{2}]We_{p} \nonumber \\{} & {} \quad +[\tilde{d}_{t}(e_{11}-\varSigma _{1}We_{2})^\textrm{T}\varPi _{3}\nonumber \\{} & {} \quad +\tilde{d}_{t}(\varSigma _{2}We_{2}-e_{11})^\textrm{T} \varPi _{4}]We_{5}\nonumber \\{} & {} \quad +[(e_{12}-\varSigma _{1}We_{3})^\textrm{T}\varPi _{5}\nonumber \\{} & {} \quad +(\varSigma _{2}We_{3}-e_{12})^\textrm{T} \varPi _{6}]We_{6}\} \end{aligned}$$
(14)
$$\begin{aligned} {\varPhi }_{2}(\dot{d}_{t},{d}_{t}){} & {} =e_{p}^\textrm{T} N_{1}e_{p}+\tilde{d}_{t}e_{4}^\textrm{T} (N_{2}-N_{1})e_{4}\nonumber \\{} & {} \quad -e_{9}^\textrm{T} N_{2}e_{9}-\frac{\dot{d}_{t}}{\hslash }E_{3}^\textrm{T} \mathcal {L}(\hslash )E_{3}+\frac{\dot{d}_{t}}{\hslash }E_{5}^\textrm{T} \mathcal {M}(\hslash )E_{5}\nonumber \\{} & {} \quad -\frac{1}{\hslash }Sym\{E_{3}^\textrm{T} \mathcal {L}(\hslash )({d}_{t}D_{31}+D_{30})\}\nonumber \\{} & {} \quad +\dot{d}_{t}E_{4}^\textrm{T} \mathcal {Z}E_{4}+Sym\{E_{4}^\textrm{T} \mathcal {Z}({d}_{t}D_{41}+D_{40})\}\nonumber \\{} & {} \quad -\frac{1}{\hslash }Sym\{E_{5}^\textrm{T} \mathcal {M}(\hslash )(\hslash _{d}(t)D_{51}+D_{50})\}\nonumber \\{} & {} \quad -\dot{d}_{t}E_{6}^\textrm{T} \mathcal {Y}E_{6}+Sym\{E_{6}^\textrm{T} \mathcal {Y}(\hslash _{d} (t)D_{61}+D_{60})\} \end{aligned}$$
(15)
$$\begin{aligned} \Phi _{3}({d}_{t}){} & {} =\hslash ^{2}(e_{p}^\textrm{T} {R}_{1}e_{p}+e_{10}^\textrm{T} R_{2} e_{10})\nonumber \\{} & {} \quad -(1+\beta )(E_{1}^\textrm{T} \mathcal {R}_{1}E_{1}+e_{13}^\textrm{T} R_{2} e_{13})\nonumber \\{} & {} \quad -(1+\alpha )(E_{2}^\textrm{T} \mathcal {R}_{1}E_{2}+e_{14}^\textrm{T} R_{2} e_{14})\nonumber \\{} & {} \quad -2E_{1}^\textrm{T}[\beta U_{1}+\alpha U_{2}]E_{2}-2e_{13}^\textrm{T}[\beta U_{3}+\alpha U_{4}]e_{14} \end{aligned}$$
(16)
$$\begin{aligned} \Phi _{4}{} & {} =\sum _{i=1}^{3}Sym\{(e_{9+i}-\varSigma _{1}We_{i})^\textrm{T} \varOmega _{i}(\varSigma _{2}We_{i}-e_{9+i})\}\nonumber \\{} & {} \quad +\sum _{i=1}^{2} Sym\{[(e_{9+i}-e_{10+i})-\varSigma _{1}W(e_{i}-e_{1+i})]^\textrm{T}\varLambda _{i}\nonumber \\{} & {} \quad \times [\varSigma _{2}W(e_{i}-e_{1+i})-(e_{9+i}-e_{10+i})]\}\nonumber \\{} & {} \quad +Sym\{[(e_{10}-e_{12})\nonumber \\{} & {} \quad -\varSigma _{1}W(e_{1}-e_{3})]^\textrm{T}\varLambda _{3}\nonumber \\{} & {} \quad \times [\varSigma _{2}W(e_{1}-e_{3})-(e_{10}-e_{12})]\} \end{aligned}$$
(17)
$$\begin{aligned} G_{0}&=[e_{1}^\textrm{T},e_{2}^\textrm{T},d_{t}e_{5}^\textrm{T},\hslash _{d}(t)e_{6}^\textrm{T},d_{t}e_{7}^\textrm{T},\hslash _{d}(t)e_{8}^\textrm{T}]^\textrm{T},~ \\ G_{1}&=[e_{p}^\textrm{T},\tilde{d}_{t}e_{4}^\textrm{T},e_{1}^\textrm{T}-\tilde{d}_{t}e_{2}^\textrm{T},\tilde{d}_{t}e_{2}^\textrm{T}-e_{3}^\textrm{T}, \\&\quad -e_{1}^\textrm{T}-\tilde{d}_{t}e_{2}^\textrm{T}+(1+\tilde{d}_{t})e_{5}^\textrm{T}-\dot{d}_{t}e_{7}^\textrm{T},\\&\quad -\tilde{d}_{t}e_{2}^\textrm{T}-e_{3}^\textrm{T}+(1+\tilde{d}_{t})e_{6}^\textrm{T}+\dot{d}_{t}e_{8}^\textrm{T}]^\textrm{T} \\ G_{2}&=[e_{p}^\textrm{T},e_{1}^\textrm{T},e_{10}^\textrm{T},e_{0}^\textrm{T}]^\textrm{T}, G_{3}=[e_{4}^\textrm{T},e_{2}^\textrm{T},e_{11}^\textrm{T},(e_{1}-e_{2})^\textrm{T}]^\textrm{T},\\ G_{4}&=[(e_{1}-e_{2})^\textrm{T},{d}_{t}e_{5}^\textrm{T},e_{13}^\textrm{T},{d}_{t}(e_{1}-e_{5})^\textrm{T}]^\textrm{T} \\ G_{5}&=[e_{9}^\textrm{T} ,e_{3}^\textrm{T},e_{12}^\textrm{T},(e_{1}-e_{3})^\textrm{T}]^\textrm{T},\\ G_{6}&=[(e_{2}-e_{3})^\textrm{T},\hslash _{d}(t)e_{6}^\textrm{T},e_{14}^\textrm{T},\hslash _{d}(t)(e_{1}-e_{6})^\textrm{T}]^\textrm{T},\\ G_{7}&=[e_{0}^\textrm{T},e_{0}^\textrm{T},e_{0}^\textrm{T},e_{p}^\textrm{T}]^\textrm{T} \end{aligned}$$
$$\begin{aligned} E_{1}&=Col\{e_{1}-e_{2}, e_{1}+e_{2}-2e_{5}, e_{1}-e_{2}+6e_{7}\}\\ E_{2}&=Col\{e_{2}-e_{3}, e_{2}+e_{3}-2e_{6}, e_{2}-e_{3}+6e_{8}\} \\ E_{3}&=[(e_{1}-e_{2})^\textrm{T}, (e_{1}+e_{2}-2e_{5})^\textrm{T}, 4 e_{7}^\textrm{T}]^\textrm{T},E_{4}\\&=[ (e_{1}+e_{2}-2e_{5})^\textrm{T}, 4 e_{7}^\textrm{T}]^\textrm{T},\\ E_{5}&=[(e_{2}-e_{3})^\textrm{T}, (e_{2}+e_{3}-2e_{6})^\textrm{T}, 4 e_{8}^\textrm{T}]^\textrm{T},E_{6}\\&=[ (e_{2}+e_{3}-2e_{6})^\textrm{T}, 4 e_{8}^\textrm{T}]^\textrm{T},\\ {D}_{30}&=[e_{0}^\textrm{T}, -2(e_{1}-\tilde{d}_{t}e_{2}-\dot{d}_{t}e_{5})^\textrm{T}, 4(-e_{1}-\tilde{d}_{t}e_{2}\\&\quad +(1+\tilde{d}_{t})e_{5}-2\dot{d}_{t}e_{7})^\textrm{T}]^\textrm{T}, \\ {D}_{31}&=[(e_{p}-\tilde{d}_{t}e_{4})^\textrm{T}, (e_{p}+\tilde{d}_{t}e_{4})^\textrm{T}, e_{0}^\textrm{T}]^\textrm{T},\\ {D}_{40}&=[-2(e_{1}-\tilde{d}_{t}e_{2}-\dot{d}_{t}e_{5})^\textrm{T},4(-e_{1}-\tilde{d}_{t}e_{2}\\&\quad +(1+\tilde{d}_{t})e_{5}-2\dot{d}_{t}e_{7})^\textrm{T}]^\textrm{T}, {D}_{41}=[(e_{p}+\tilde{d}_{t}e_{4})^\textrm{T}, e_{0}^\textrm{T}]^\textrm{T}, \\ {D}_{50}&=[e_{0}^\textrm{T}, -2(\tilde{d}_{t}e_{2}-e_{3}-\dot{d}_{t}e_{6})^\textrm{T}, 4(-\tilde{d}_{t}e_{2}-e_{3}\\&\quad +(1+\tilde{d}_{t})e_{6}+2\dot{d}_{t}e_{8})^\textrm{T}]^\textrm{T}, {D}_{51}\\&=[(\tilde{d}_{t}e_{4}-e_{9})^\textrm{T}, (\tilde{d}_{t}e_{4}+e_{9})^\textrm{T}, e_{0}^\textrm{T}]^\textrm{T}, \\ {D}_{60}&=[-2(\tilde{d}_{t}e_{2}-e_{3}+\dot{d}_{t}e_{6})^\textrm{T}, 4(-\tilde{d}_{t}e_{2}-e_{3}\\&\quad +(1+\tilde{d}_{t})e_{6}+2\dot{d}_{t}e_{8})^\textrm{T}]^\textrm{T}, {D}_{61}=[(\tilde{d}_{t}e_{4}+e_{9})^\textrm{T}, e_{0}^\textrm{T}]^\textrm{T} \end{aligned}$$

Proof

Consider the following LKF as:

$$\begin{aligned} {V}(t)=V_{N}(t)+\sum _{i=1}^{3}V_{i}(t), \end{aligned}$$
(18)

where \(V_{N}(t)\) is defined in Proposition 1 and \(V_{1}(t)=\varpi ^\textrm{T}_{0}(t)P\varpi _{0}(t)+\int _{t-{d}_{t}}^{t} \varpi ^\textrm{T}_{1}(s)P_{1}\varpi _{1}(s)\textrm{d}s\) \(+\int _{t-\hslash }^{t-{d}_{t}} \varpi ^\textrm{T}_{1}(s)P_{2}\varpi _{1}(s)\textrm{d}s\), \(V_{2}(t)=2\sum _{i=1}^{n}\int _{0}^{\textit{W}_{i}x(t)}[\pi _{1i}f_{i}^{-}(s)+\pi _{2i}f_{i}^{+}(s)]\textrm{d}s+2\sum _{i=1}^{n}\int _{0}^{\textit{W}_{i}x_{d}(t)}[\pi _{3i}f_{i}^{-}(s)+\pi _{4i}f_{i}^{+}(s)]\textrm{d}s\) \(+2\sum _{i=1}^{n}\int _{0}^{\textit{W}_{i}x_{\hslash }(t)}[\pi _{5i}f_{i}^{-}(s)+\pi _{6i}f_{i}^{+}(s)]\textrm{d}s\), and \(V_{3}(t)=\hslash \int _{-\hslash }^{0}\int _{t+u}^{t}\dot{x}^\textrm{T}(s)R_{1}\dot{x}(s)\textrm{d}s \textrm{d}u+\hslash \int _{-\hslash }^{0}\int _{t+u}^{t}f^\textrm{T}(\textit{W}x(s))R_{2}f(\textit{W}x(s))\textrm{d}s \textrm{d}u\).

Taking the derivative of (18) along the solution of the delayed NN (1), one can write

$$\begin{aligned} \dot{V}(t)&=\dot{V}_{N}(t)+\sum _{i=1}^3\dot{V}_{i}(t) \end{aligned}$$
(19)

where

$$\begin{aligned} \dot{V}_{1}(t)&=\zeta ^\textrm{T}(t)\varPhi _{0}(\dot{d}_{t},{d}_{t})\zeta (t) \end{aligned}$$
(20)
$$\begin{aligned} \dot{V}_{2}(t)&=\zeta ^\textrm{T}(t)\varPhi _{1}(\dot{d}_{t})\zeta (t)\end{aligned}$$
(21)
$$\begin{aligned} \dot{V}_{N}(t)&=\zeta ^\textrm{T}(t)\varPhi _{2}(\dot{d}_{t},{d}_{t})\zeta (t)\end{aligned}$$
(22)
$$\begin{aligned} \dot{V}_{3}(t)&=\zeta ^\textrm{T}(t)\hslash ^{2}(e_{p}^\textrm{T} {R}_{1}e_{p}+e_{10}^\textrm{T} R_{2} e_{10})\zeta (t)\nonumber \\&\quad -\hslash \int _{t-\hslash }^{t}\dot{x}^\textrm{T}(s)R_{1}\dot{x}(s)\textrm{d}s\nonumber \\&\quad -\hslash \int _{t-\hslash }^{t}f^\textrm{T}(\textit{W}x(s))R_{2}f(\textit{W}x(s))\textrm{d}s \end{aligned}$$
(23)

where \(\varPhi _{0}(\dot{d}_{t},{d}_{t})\), \(\varPhi _{1}(\dot{d}_{t})\) and \(\varPhi _{2}(\dot{d}_{t},{d}_{t})\) are defined in (13), (14) and (15), respectively. Now, the integral terms involving \(R_{1}\) and \(R_{2}\) in (23) can be bounded by utilizing Lemmas 1 and 2. Applying integral inequality (5) of Lemma 2 to the first integral, we obtain

$$\begin{aligned}&-\hslash \int _{t-\hslash }^{t}\dot{x}^\textrm{T}(s)R_{1}\dot{x}(s)\textrm{d}s \nonumber \\&\quad \le -\zeta (t)^\textrm{T}\left( \frac{\hslash }{{d}_{t}}E_{1}^\textrm{T} \mathcal {R}_1 E_{1}+\frac{\hslash }{\hslash _{d}(t)}E_{2}^\textrm{T} \mathcal {R}_1 E_{2}\right) \zeta (t) \end{aligned}$$
(24)

where \(\mathcal {R}_1={\text {diag}}\{R_{1}, 3R_{1}, 5R_{1}\}\) and \(E_{1}, E_{2}\) are defined above. Similarly using (6), we have

$$\begin{aligned}&-\hslash \int _{t-\hslash }^{t}f^\textrm{T}(\textit{W}x(s))R_{2}f(\textit{W}x(s))\textrm{d}s \nonumber \\&\quad \le -\zeta (t)^\textrm{T}\left( \frac{\hslash }{{d}_{t}}e_{13}^\textrm{T} {R}_2 e_{13}+\frac{\hslash }{\hslash _{d}(t)}e_{14}^\textrm{T} {R}_2 e_{14}\right) \zeta (t) \end{aligned}$$
(25)

Now, one can estimate the right-hand side (RHS) of (24) and (25) using Lemma 1 with \(\frac{{d}_{t}}{\hslash }=\alpha \) and \(\frac{\hslash _{d}(t)}{\hslash }=\beta \), and finally, substituting into (23), we have

$$\begin{aligned} \dot{V}_{3}(t) \le \zeta ^\textrm{T}(t) [\varPhi _{3}({d}_{t})+\Gamma ({d}_{t})]\zeta (t) \end{aligned}$$
(26)

where \(\varPhi _{3}({d}_{t})\) is defined in (16) and

$$\begin{aligned} \Gamma ({d}_{t})&=\beta E_{1}^\textrm{T} U_{2} \mathcal {R}_{1}^{-1}U_{2}^\textrm{T} E_{1}+\alpha E_{2}^\textrm{T} U_{1}^\textrm{T} \mathcal {R}_{1}^{-1} U_{1} E_{2}\nonumber \\&\quad +\beta e_{13}^\textrm{T} U_{4} {R}_{2}^{-1}U_{4}^\textrm{T} e_{13}+\alpha e_{14}^\textrm{T} U_{3}^\textrm{T} {R}_{2}^{-1}U_{3} e_{14}. \end{aligned}$$
(27)

Now, from (3) with \(\varOmega ={\text {diag}}\{\omega _{1}, \omega _{2},\ldots , \omega _{n}\}\ge 0\) and \(\varLambda ={\text {diag}}\{\lambda _{1}, \lambda _{2},\ldots , \lambda _{n}\}\ge 0\), the following inequalities hold for \(s, s_{1}, s_{2} \in \mathbb {R}\):

$$\begin{aligned} \varTheta (s,\varOmega )\ge 0, \qquad \varPsi (s_{1}, s_{2}, \varLambda )\ge 0 \end{aligned}$$
(28)

where

$$\begin{aligned} \varTheta (s,\varOmega )&=2[f(\textit{W}x(s))-\varSigma _{2}\textit{W}x(s)]^\textrm{T}\\&\quad \times \varOmega [\varSigma _{1}\textit{W}x(s)-f(\textit{W}x(s))]\\ \varPsi (s_{1}, s_{2}, \varLambda )&=2[f(\textit{W}x(s_{1}))-f(\textit{W}x(s_{2}))\\&\quad -\varSigma _{2}\textit{W}(x(s_{1})-x(s_{2}))]^\textrm{T} \\&\quad \times \varLambda [\varSigma _{1}\textit{W}(x(s_{1})-x(s_{2}))\\&\quad -f(\textit{W}x(s_{1}))+f(\textit{W}x(s_{2}))] \end{aligned}$$

Therefore, it follows from (28) that

$$\begin{aligned}&\varTheta (t,\varOmega _{1})\ge 0, \varTheta (t-{d}_{t},\varOmega _{2})\ge 0, \varTheta (t-\hslash ,\varOmega _{3})\ge 0,\\&\varPsi (t,t-{d}_{t},\varLambda _{1})\ge 0,\varPsi (t-{d}_{t}, t-\hslash , \varLambda _{2})\ge 0\\&\varPsi (t,t-\hslash ,\varLambda _{3})\ge 0 \end{aligned}$$

Hence, one can write

$$\begin{aligned} \zeta ^\textrm{T}(t) \varPhi _{4} \zeta (t) \ge 0 \end{aligned}$$
(29)

where \(\varPhi _{4}\) is defined in (17). Using (20), (21), (22), (26) and (29), the time derivative of V(t) along the solution of (1) satisfies

$$\begin{aligned} \dot{{V}}(t)\le \zeta ^\textrm{T}(t)[{\Upsilon }(\dot{d}_{t},{d}_{t})+\Gamma ({d}_{t})]\zeta (t) \end{aligned}$$
(30)

where \({\Upsilon }(\dot{d}_{t},{d}_{t})\) and \(\Gamma ({d}_{t})\) are defined in (12) and (27), respectively. The matrix \({\Upsilon }(\dot{d}_{t},{d}_{t})+\Gamma ({d}_{t})\) is affine in each of \(\dot{d}_{t}\) and \({d}_{t}\). Therefore, if this matrix is negative definite for all \({d}_{t}\in [0,\hslash ]\) and \(\dot{d}_{t}\in [-\nu _{1},\nu _{2}]\), then \(\dot{{V}}(t) < 0\); by convexity, it suffices to check the vertices \({d}_{t}\in \{0,\hslash \}\). Finally, using the Schur complement, the resulting conditions are transformed into the LMIs (10) and (11). \(\square \)
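The LUBs reported in the next section are computed in the usual way: for a fixed \(\nu \), the largest \(\hslash \) for which the LMIs of Theorem 1 remain feasible is located by bisection. A generic sketch follows; `lmi_feasible(h, nu)` is a hypothetical user-supplied routine that assembles (8), (10) and (11) for the given parameters and returns whether an SDP solver certifies feasibility.

```python
def largest_upper_bound(lmi_feasible, nu, h_lo=0.0, h_hi=10.0, tol=1e-4):
    """Bisection for the largest h such that lmi_feasible(h, nu) is True.

    Assumes feasibility is monotone in h (infeasible beyond the LUB) and that
    h_hi is already infeasible; both are assumptions of this sketch.
    """
    while h_hi - h_lo > tol:
        h_mid = 0.5 * (h_lo + h_hi)
        if lmi_feasible(h_mid, nu):
            h_lo = h_mid   # feasible: the LUB is at least h_mid
        else:
            h_hi = h_mid   # infeasible: the LUB lies below h_mid
    return h_lo

# usage (hypothetical): h_max = largest_upper_bound(lmi_feasible, nu=0.1)
```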

Table 1 LUB of delay \(\hslash \) for various values of \(\nu \)

4 Numerical examples

In this section, we compare the proposed criteria with existing ones in the literature by considering two numerical examples.

Example 1

Consider the delayed NN (1) with the delay \(d_{t}\) and the activation function f(x(t)) satisfying (2) and (3), respectively, \(W=I\), and

$$\begin{aligned} A&={\text {diag}}(1.2769, 0.6231, 0.9230, 0.4480), ~~\\ \varSigma _{1}&=0, ~ \varSigma _{2}={\text {diag}}(0.1137, 0.1279, 0.7994, 0.2368) \\ B&=\begin{bmatrix}-0.0373&{}\quad 0.4852&{}\quad -0.3351&{}\quad 0.2336\\ -1.6033&{}\quad 0.5988&{}\quad -0.3224&{}\quad 1.2352\\ 0.3394&{}\quad -0.0860&{}\quad -0.3824&{}\quad -0.5785\\ -0.1311&{}\quad 0.3253&{}\quad -0.9534&{}\quad -0.5015 \end{bmatrix},\\ C&=\begin{bmatrix} 0.8674&{}\quad -1.2405&{}\quad -0.5325&{}\quad 0.0220\\ 0.0474&{}\quad -0.9164&{}\quad 0.0360&{}\quad 0.9816\\ 1.8495&{}\quad 2.6117&{}\quad -0.3788&{}\quad 0.8428\\ -2.0413&{}\quad 0.5179&{}\quad 1.1734&{}\quad -0.2775 \end{bmatrix} \end{aligned}$$
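For additional intuition (independent of the LMI analysis), the delayed NN of this example can be simulated directly. The sketch below uses a forward-Euler scheme with a history buffer; the activation \(f_{i}(u)=\sigma _{i}^{+}\tanh (u)\) and the constant delay \(d_{t}\equiv 1\) are illustrative assumptions only (they satisfy (3) with \(\varSigma _{1}=0\) and (2) for any \(\hslash \ge 1\)), not choices prescribed by the analysis.

```python
import numpy as np

A = np.diag([1.2769, 0.6231, 0.9230, 0.4480])
Sigma2 = np.diag([0.1137, 0.1279, 0.7994, 0.2368])
B = np.array([[-0.0373,  0.4852, -0.3351,  0.2336],
              [-1.6033,  0.5988, -0.3224,  1.2352],
              [ 0.3394, -0.0860, -0.3824, -0.5785],
              [-0.1311,  0.3253, -0.9534, -0.5015]])
C = np.array([[ 0.8674, -1.2405, -0.5325,  0.0220],
              [ 0.0474, -0.9164,  0.0360,  0.9816],
              [ 1.8495,  2.6117, -0.3788,  0.8428],
              [-2.0413,  0.5179,  1.1734, -0.2775]])
W = np.eye(4)

f = lambda u: Sigma2 @ np.tanh(u)   # assumed activation, sector [0, Sigma2]
d, dt, T = 1.0, 1e-3, 30.0          # assumed constant delay, step size, horizon
lag = int(round(d / dt))

x = 0.5 * np.ones(4)                # constant initial history
hist = [x.copy()] * (lag + 1)       # stores x(t - d), ..., x(t)
for _ in range(int(T / dt)):
    x_delayed = hist[0]
    xdot = -A @ x + B @ f(W @ x) + C @ f(W @ x_delayed)
    x = x + dt * xdot
    hist.append(x.copy())
    hist.pop(0)

print(np.linalg.norm(x))            # a small value indicates decay toward the origin
```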

The LUB of the time delay \(\hslash \) for different \(\nu =(0.1, 0.5, 0.9)\) obtained with the criterion proposed in this paper is listed in Table 1 along with the existing results. Comparing Theorem 1 with the criteria listed in Table 1, one can find that the proposed method is less conservative than all the approaches listed there for the slow-varying delay \((\nu =0.1)\). For fast-varying delays \((\nu =0.5, 0.9)\), however, Theorem 1 is more conservative than Theorem 1 of [34], Proposition 1 of [29] and Proposition 1 of [33]. Nevertheless, the criterion proposed in Theorem 1 involves a smaller number of decision variables, which decreases the computational complexity.

Table 2 LUB of delay \(\hslash \) for various values of \(\nu \)

Example 2

Consider another delayed NN in the form of (1), where

$$\begin{aligned} A&={\text {diag}}(7.3458, 6.9987, 5.5949), B=0, C=I\\ W&=\begin{bmatrix}13.6014&{}\quad -2.9616&{}\quad -0.6936\\ 7.4736&{}\quad 21.6810&{}\quad 3.2100\\ 0.7920&{}\quad -2.6334&{}\quad -20.1300 \end{bmatrix} \end{aligned}$$

with \(\varSigma _{1}=0, \varSigma _{2}={\text {diag}}(0.368, 0.1795, 0.2876)\). Also, the time-varying delay and the activation function satisfy (2) and (3), respectively.

The LUB of the time-varying delay \(\hslash \) for various \(\nu =\nu _{1}=\nu _{2}\) obtained using the proposed criterion and the existing ones is listed in Table 2. The results obtained with Theorem 1 are better than all the works listed in Table 2 except Theorem 1 (N=1) of [34] for \(\nu =0.5\). It may be noted that the proposed method involves a smaller number of LMI decision variables. Therefore, the criteria introduced in this article reduce the computational burden and complexity.

5 Conclusion

In this paper, the stability analysis of the generalized delayed NN is studied utilizing the LKF method. First, a delay-product functional (DPF) is constructed using the cross-terms of the NPII. Then, an LKF is introduced based on the newly developed DPF. Finally, an LMI-based stability criterion is derived in which the information on the delay and its time derivative is fully utilized to obtain improved results. The effectiveness of the developed stability criterion is demonstrated by considering two numerical examples.