
1 Introduction

Analysis of the various dynamics of neural networks has recently become an attractive research topic because their qualitative properties can be exploited to solve practical real-world problems in combinatorial optimization, image processing, and control systems. When solving these types of problems with neural networks, one needs to design a neural system that possesses a unique and globally asymptotically stable equilibrium point; hence, the stability of neural networks must be addressed. The fact that neurons implemented by amplifiers have finite switching speeds results in time delays, which may have undesired effects on the dynamics of neural networks. Another problem is that the parameters of neural systems may involve uncertainties, which can also affect the equilibria of neural networks. For these reasons, a proper stability analysis must include the time delay in the states and the uncertainties in the network parameters in the mathematical model of neural networks. That is to say, the key requirement is establishing the robust stability of neural systems that also involve time delay. A review of the past literature shows that many researchers have published useful robust stability criteria for delayed neural systems (see references [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]). This paper uses the Lyapunov and homeomorphic mapping theorems to derive a novel condition for global robust asymptotic stability.

Notations: Let \(z = (z_{1},z_{2}, ..., z_{n})^{T}\). We will use \(|z|=(|{z_1}|,|{z_2}|,...,|{z_n}|)^{T}\). For a given matrix \(E=({e_{ij}})_{n\times n}\), we will use \(|E|=({|e_{ij}|})_{n\times n}\), and \({\lambda _m}(E)\) will represent the minimum eigenvalue of E. If \(E=E^T\), then \(E>0\) denotes that E is positive definite. \(E=({e_{ij}})_{n\times n}\) is a nonnegative matrix if \({e_{ij}}\ge 0, \forall i,j\). Assume that \(E=({e_{ij}})_{n\times n}\) and \(F=({f_{ij}})_{n\times n}\) are nonnegative matrices. In this case, \(E\preceq F\) will denote that \({e_{ij}}\le {f_{ij}}, \forall i,j\). For the vector z, we will use the norm \(||z||_2^2 = {\sum _{i=1}^{n}z_i^2}\), and for E, we use \(||E||_{2} = [\lambda _{\max }(E^{T}E)]^{1/2}\).
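As a purely illustrative aid (not part of the original notation), the following Python sketch evaluates these quantities for a small example; the particular numbers are arbitrary.

```python
import numpy as np

z = np.array([1.0, -2.0, 3.0])
E = np.array([[1.0, -2.0, 0.0],
              [0.5,  3.0, -1.0],
              [0.0,  1.0, 2.0]])

abs_z = np.abs(z)                                     # |z|: entrywise absolute values
abs_E = np.abs(E)                                     # |E|: entrywise absolute values
norm_z = np.sqrt(np.sum(z ** 2))                      # ||z||_2
norm_E = np.sqrt(np.linalg.eigvalsh(E.T @ E).max())   # ||E||_2 = [lambda_max(E^T E)]^{1/2}

# For nonnegative matrices E and F, the ordering "E preceq F" means e_ij <= f_ij for all i, j.
F = abs_E + 0.5
print(bool(np.all(abs_E <= F)))                       # True: abs_E is dominated by F entrywise
```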

2 Preliminaries

Consider the neural network of the mathematical form

$$\begin{aligned} {d{x_i}(t)\over dt}= & {} -{c_i}{x_i}(t)+\sum _{j=1}^{n}{a_{ij}}{{f_j}({x_j}(t))} +\sum _{j=1}^{n}{b_{ij}}{{f_j}({x_j}}(t-{\tau }))+{u_i},{\forall i} \end{aligned}$$
(1)

In the above equation, \(a_{ij}\) and \(b_{ij}\) are the interconnection parameters, \({c}_{i}\) are the neuron charging rates, \({x}_{i}(t)\) represents the state of neuron i, the functions \({f_i}({\cdot })\) are the nonlinear activation functions, \({\tau }\) represents the time delay, and \(u_i\) are the inputs.

Neural system (1) can be put into an equivalent system governed by the differential equation:

$$\begin{aligned} {\dot{x}}(t)=-C{x}(t)+A{f}({x}(t))+ B{f}({x}(t-{\tau ))}+{u} \end{aligned}$$
(2)

where \(C=diag({c_i})\), \(A=({a_{ij}})_{n\times n}\), \(B=({b_{ij}})_{n\times n}\), \(x(t)=({x}_{1}(t),{x}_{2}(t),...,{x}_{n}(t))^{T}\), \(u=({u}_{1},{u}_{2},...,{u}_{n})^{T}\), \(f(x(\cdot ))=({f_1}({x}_{1}(\cdot )),{f_2}({x}_{2}(\cdot )),...,{f_n}({x}_{n}(\cdot )))^{T}.\)

The functions \(f_i\) possess the following properties:

$$0 \le {{f_i}{(x)}-{f_i}{(\tilde{x})}\over x-\tilde{x}} \le {k_i},{\,\,}\forall i,{\,\,\,}\forall x, \tilde{x} \in R, x \ne {\tilde{x}} $$

with \({k_i}\) being positive constants. The functions satisfying the above conditions are denoted by \(f\in \mathcal{K}\).
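As an illustrative example (not taken from the original text), the function \(f_i(x)=\tanh ({k_i}x)\) belongs to \(\mathcal{K}\): by the mean value theorem, for \(x\ne \tilde{x}\),

$$ {{\tanh ({k_i}x)-\tanh ({k_i}\tilde{x})}\over {x-\tilde{x}}}={k_i}\,\mathrm{sech}^{2}({k_i}c)\in (0,{k_i}] $$

for some c between x and \(\tilde{x}\), so the slope condition holds with the constant \(k_i\).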

The matrices \(A=({a_{ij}})\), \(B=({b_{ij}})\) and \(C=diag({c_i}>0)\) in (1) are assumed to lie within the following intervals:

$$\begin{aligned}&{C_I}:=\{C:0 \preceq {\underline{C}} \preceq {C} \preceq {\overline{C}}, ~i.e.,~0<{{\underline{c}_i}} \le {c_i} \le {\overline{c}_i}\}\nonumber \\&{A_I}:=\{A :{\underline{A}} \preceq {A} \preceq {\overline{A}},~i.e.,~{{\underline{a}_{ij}}} \le {a_{ij}} \le {\overline{a}_{ij}}\}\\&{B_I}:=\{B :{\underline{B}} \preceq {B} \preceq {\overline{B}},~i.e.,~{{\underline{b}_{ij}}} \le {b_{ij}} \le {\overline{b}_{ij}}\}\nonumber \end{aligned}$$
(3)

We now introduce the following lemma, which is of great importance in obtaining our main result:

Lemma 1:

Let D be a positive diagonal matrix with n diagonal entries, let x be any real vector with n elements, and let \(A=(a_{ij})\) be any real \(n\times n\) matrix whose entries satisfy \({\underline{A}} \preceq {A} \preceq {\overline{A}}\). Then, the following inequality holds:

$$ {x^T}{A^T}DAx\le |{x^T}|[|{{A^*}^T}D{{A^*}}|+|{{A^*}^T}|D{A_*}+{{A_*}^T}D|{{A^*}}|+{{A_*}^T}D{{A_*}}]|x| $$

in which \({A^*}={\frac{1}{2}}~({\overline{A}}+{\underline{A}})\) and \({A_*}=\frac{1}{2}~({\overline{A}}-{\underline{A}})\).

Proof:

If \(A\in {A_I}\), then, \({a_{ij}}\) can be written as

$$\begin{aligned} {a_{ij}}={1\over 2}({\overline{a}_{ij}}+{{\underline{a}_{ij}}})+{1\over 2}{\sigma _{ij}}({\overline{a}_{ij}}- {{\underline{a}_{ij}}}),{\,\,} -1\le {\sigma _{ij}}\le 1, \forall i,j. \end{aligned}$$

Assume that \(\tilde{A}=({\tilde{a}_{ij}})_{n\times n}\) is a real constant matrix whose elements are defined as \( {\tilde{a}}_{ij}={1\over 2}{\sigma _{ij}}({\overline{a}_{ij}}- {{\underline{a}_{ij}}})\). Then, A can be written as

$$ A={1\over 2}({\overline{A}}+{\underline{A}})+{\tilde{A}}={A^*}+{\tilde{A}} $$

We can now express the following:

$$\begin{aligned} {x^T}{A^T}DAx= & {} {x^T}{({A^*}+{\tilde{A}})^T}D{({A^*}+{\tilde{A}})}x\\= & {} {x^T}({{A^*}^T}D{{A^*}}+{{A^*}^T}D{\tilde{A}}+{{\tilde{A}}^T}D{{A^*}}+{{\tilde{A}}^T}D{{\tilde{A}}})x\\\le & {} |{x^T}||{{A^*}^T}D{{A^*}}+{{A^*}^T}D{\tilde{A}}+{{\tilde{A}}^T}D{{A^*}}+{{\tilde{A}}^T}D{{\tilde{A}}}||x|\\\le & {} |{x^T}||{{A^*}^T}D{{A^*}}||x|+|{x^T}||{{A^*}^T}|D|{\tilde{A}}||x|\\&+\,|{x^T}||{{\tilde{A}}^T}|D|{{A^*}}||x|+|{x^T}||{{\tilde{A}}^T}|D|{{\tilde{A}}}||x| \end{aligned}$$

Since \( |{\tilde{a}}_{ij}| \le {1\over 2}({\overline{a}_{ij}}- {{\underline{a}_{ij}}}), {\forall } i,j \), it follows that \(|\tilde{A}| \preceq {A_*}\). Then, we obtain

$$\begin{aligned} {x^T}{A^T}DAx\le & {} |{x^T}||{{A^*}^T}D{{A^*}}||x|+|{x^T}||{{A^*}^T}|D|{{A_*}}||x|\\&+\,|{x^T}||{{{A_*}}^T}|D|{{A^*}}||x|+|{x^T}||{{{A_*}}^T}|D|{{{A_*}}}||x|\\= & {} |{x^T}|(|{{A^*}^T}D{{A^*}}|+|{{A^*}^T}|D{A_*}+{{A_*}^T}D|{{A^*}}|+{{A_*}^T}D{{A_*}})|x| \end{aligned}$$
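As a quick numerical sanity check of Lemma 1 (not part of the original proof), the following Python sketch samples admissible matrices from an illustrative interval family and verifies the stated bound; the interval bounds, the matrix D, and the dimension are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Illustrative interval bounds and a positive diagonal D (arbitrary values).
A_lower = rng.uniform(-1.0, 0.0, (n, n))
A_upper = A_lower + rng.uniform(0.0, 1.0, (n, n))
D = np.diag(rng.uniform(0.5, 2.0, n))
A_star = 0.5 * (A_upper + A_lower)      # A^* in the lemma
A_sub = 0.5 * (A_upper - A_lower)       # A_* in the lemma (nonnegative)

# Right-hand-side matrix of the bound in Lemma 1.
R = (np.abs(A_star.T @ D @ A_star)
     + np.abs(A_star.T) @ D @ A_sub
     + A_sub.T @ D @ np.abs(A_star)
     + A_sub.T @ D @ A_sub)

for _ in range(1000):
    sigma = rng.uniform(-1.0, 1.0, (n, n))   # entrywise uncertainty in [-1, 1]
    A = A_star + sigma * A_sub               # an admissible A with A_lower <= A <= A_upper
    x = rng.normal(size=n)
    assert x @ A.T @ D @ A @ x <= np.abs(x) @ R @ np.abs(x) + 1e-9
print("Lemma 1 bound held on all sampled cases.")
```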

Below are two lemmas and a fact that will be needed in the proofs:

Lemma 2

[1]: Let D be a positive diagonal matrix with n diagonal entries, let x be any real vector with n elements, and let \(A=(a_{ij})\) be any real \(n\times n\) matrix whose entries satisfy \({\underline{A}} \preceq {A} \preceq {\overline{A}}\). Then, the following inequality holds:

$$ {x^T}(DA+{A^T}D)x \le |{x^T}|S|x| $$

where \(S=(s_{ij})\) such that \(s_{ii}=2{d_i}{\overline{a}_{ii}}\) and \(s_{ij}=max(|{d_i}\overline{a}_{ij}+{d_j}\overline{a}_{ji}|, |{d_i}\underline{a}_{ij}+{d_j}\underline{a}_{ji}|)\) for \(i\ne j\).

Lemma 3

[2]: Let the map \(H (y)\in C^0\) possess the following two properties: \(H (y)\ne H (z)\) for all \(y\ne z\), and \(||H(y)||{\rightarrow }{\infty }\) as \(||y||{\rightarrow }{\infty }\), where \(y\in R^n\) and \(z\in R^n\). Then, H(y) is a homeomorphism of \(R^n\).

Fact 1:

If \(A=({a_{ij}})\) and \(B=({b_{ij}})\) satisfy (3), then A and B have bounded norms; that is, one can find positive real constants \(\epsilon \) and \(\varepsilon \) satisfying the following

$$\begin{aligned} {\Vert A\Vert _2}\le \epsilon {\,\,\,} \text{ and }{\,\,\,}{\Vert B\Vert _2}\le \varepsilon \end{aligned}$$
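One concrete (illustrative) choice of such constants follows from the Frobenius-norm bound: since every entry of A satisfies \(|a_{ij}|\le \max \{|{{\underline{a}_{ij}}}|,|{\overline{a}_{ij}}|\}\), one may take

$$ {\Vert A\Vert _2}\le {\Vert A\Vert _F}=\Big (\sum _{i,j}a_{ij}^{2}\Big )^{1/2}\le \Big (\sum _{i,j}\max \{{{\underline{a}_{ij}^{2}}},{\overline{a}_{ij}^{2}}\}\Big )^{1/2}=:\epsilon $$

and \(\varepsilon \) can be obtained analogously for B.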

3 Existence and Uniqueness Analysis

The following theorem presents the criterion which ensures that system (1) possesses a unique equilibrium point for each constant input:

Theorem 1:

Let the neuron activation functions belong to \(\mathcal{K}\), and assume that the uncertain network elements A, B and C are defined by (3). Then, the delayed neural network described by (1) possesses a unique equilibrium point for every input, if one can find a positive diagonal matrix \(D=diag({d_i}>0)\) satisfying the following condition

$${\varTheta }=2{{\underline{C}}D{K^{-1}}}-{D}-S-Q>0$$

where \(K=diag({k_i}>0)\), \(Q={{B_*}^T}D|{{B^*}}|+{{B_*}^T}D{{B_*}}+|{{B^*}^T}D{{B^*}}|+|{{B^*}^T}|D{B_*}\), \(S=(s_{ij})\) is the matrix whose diagonal elements are defined by \(s_{ii}=2{d_i}{\overline{a}_{ii}}\) and off-diagonal elements are defined by \(s_{ij}=max{(|{d_i}\overline{a}_{ij}+{d_j}\overline{a}_{ji}|, |{d_i}\underline{a}_{ij}+{d_j}\underline{a}_{ji}|)}\), the matrix \({B^*}\) included in Q is defined as \({B^*}={1\over 2}({\overline{B}}+{\underline{B}})\) and the nonnegative matrix \({B_*}\) included in Q is defined as \({B_*}={1\over 2}({\overline{B}}-{\underline{B}})\).

Proof:

Consider the map H(x) associated with neural network (2) (at an equilibrium, \(x(t)=x(t-\tau )\), so the delayed and nondelayed activation terms are evaluated at the same point):

$$\begin{aligned} H(x)=-Cx+Af(x)+Bf(x)+u \end{aligned}$$
(4)

For every equilibrium point \(x^*\) of (2), by the definition of an equilibrium point, the following must be satisfied

$$ -C{x^*}+Af(x^*)+Bf(x^*)+u=0 $$

Clearly, the solutions of \(H(x)=0\) are exactly the equilibrium points of (2). Thus, by virtue of Lemma 3, one can conclude that neural model (2) possesses a unique equilibrium point for every constant u if H(x) fulfills the conditions of Lemma 3. For any two vectors \(x\ne y\), using (4), we can write

$$ {H}({x})-{H}({y})=-C(x-y)+A({f}({x})-{f}({y}))+ B({f}({x})-{f}({y})) $$

Let \(H(x,y)={H}({x})-{H}({y})\) and \(f(x,y)={f}({x})-{f}({y})\). Then, the previous equation can be put in the form:

$$\begin{aligned} H(x,y)=-C(x-y)+Af(x,y)+ Bf(x,y) \end{aligned}$$
(5)

Since \(f\in \mathcal{K}\), if \(x\ne y\) and \({f}({x})={f}({y})\), (5) yields

$$ H(x,y)=-C(x-y) $$

in which \(C=diag({c_i}>0)\). Therefore, \(x-y\ne 0\) ensures that \(H(x,y)\ne 0\) in this case. Now let \(x-y\ne 0\) and \(f(x,y)\ne 0 \). For \(D=diag({d_i}>0)\), multiplying (5) by the nonzero vector \(2{f^T}(x,y)D\) leads to

$$\begin{aligned} 2{f^T}(x,y)D{H}(x,y)= & {} -2{f^T}(x,y)DC(x-y)\\&+\,2{f^T}(x,y)DA{f}(x,y)\\&+\,2{f^T}(x,y)DB{f}(x,y) \end{aligned}$$

The following can be written

$$ 2{f^T}(x,y)DA{f}(x,y)={f^T}(x,y)(DA+A^TD){f}(x,y) $$

Thus, one would obtain

$$\begin{aligned} 2{f^T}(x,y)D{H}(x,y)= & {} -2{f^T}(x,y)DC(x-y)\nonumber \\&+\,{f^T}(x,y)(DA+A^TD){f}(x,y)\nonumber \\&+\,2{f^T}(x,y)DB{f}(x,y) \end{aligned}$$
(6)

For activation functions in \( \mathcal{K}\), we have \(({f_i}({x_i})-{f_i}({y_i}))({x_i}-{y_i})\ge ({f_i}({x_i})-{f_i}({y_i}))^{2}/{k_i}\) for each i, from which the following can be derived

$$\begin{aligned} -2{f^T}(x,y)DC(x-y)\le & {} -2{f^T}(x,y)DC{K^{-1}}{f}(x,y)\nonumber \\\le & {} -2|{f^T}(x,y)|{{\underline{C}}D{K^{-1}}}|{f}(x,y)| \end{aligned}$$
(7)

Lemma 2 leads to

$$\begin{aligned} {f^T}(x,y)(DA+A^TD){f}(x,y)\le |{f^T}(x,y)|S|{f}(x,y)| \end{aligned}$$
(8)

It is worth noting that, since \((f(x,y)-Bf(x,y))^{T}D(f(x,y)-Bf(x,y))\ge 0\) for the positive diagonal matrix D,

$$\begin{aligned} 2{f^T}(x,y)DB{f}(x,y)\le & {} {f^T}(x,y){D}{f}(x,y)+{f^T}(x,y){B^{T}}DB{f}(x,y) \end{aligned}$$

By using Lemma 1, one would get

$$ {f^T}(x,y){B^{T}}DB{f}(x,y)\le |{f^T}(x,y)|Q|{f}(x,y)| $$

Thus

$$\begin{aligned} 2{f^T}(x,y)DB{f}(x,y)\le & {} {f^T}(x,y){D}{f}(x,y)+|{f^T}(x,y)|Q|{f}(x,y)| \end{aligned}$$
(9)

Using (7)–(9) in (6) will give the following

$$\begin{aligned} 2{f^T}(x,y)D{H}(x,y)\le & {} -\,2|{f^T}(x,y)|{{\underline{C}}D{K^{-1}}}|{f}(x,y)|\nonumber \\&+\,|{f^T}(x,y)|(S+D+Q)|{f}(x,y)|\nonumber \\= & {} -|{f}(x,y)|^{T}\varTheta |{f}(x,y)| \end{aligned}$$

Since \(\varTheta >0\), one can observe that

$$\begin{aligned} 2{f^T}(x,y)D{H}(x,y)\le & {} -{{\lambda _m}}(\varTheta )||{f}(x,y)||_2^2 \end{aligned}$$
(10)

Clearly, when \({f}(x,y)\ne 0 \) and \(\varTheta \) is positive definite, that is, \(\varTheta >0\), (10) leads to

$$\begin{aligned} 2{f^T}(x,y)D{H}(x,y)< & {} 0 \end{aligned}$$

which guarantees that \({H}({x})\ne {H}({y})\) whenever \({f}(x,y)\ne 0 \). Together with the case \({f}({x})={f}({y})\) treated above, this establishes \({H}({x})\ne {H}({y})\) for all \(x\ne y\).

Choosing \(y=0\), (10) will directly result in

$$ 2({f}({x})-{f}({0}))^{T}D({H}({x})-{H}({0}))\le -{{\lambda _m}}(\varTheta )||{f}({x})-{f}({0})||_2^2 $$

It follows from the above inequality that

$$ |2({f}({x})-{f}({0}))^{T}D({H}({x})-{H}({0}))|\ge {{\lambda _m}}(\varTheta )||{f}({x})-{f}({0})||_2^2 $$

yielding

$$ ||{H}({x})-{H}({0})||_{1}> {{{\lambda _m}}(\varTheta )||{f}({x})-{f}({0})||_2^2\over 2||D||_{\infty }||{f}({x})-{f}({0})||_{\infty }} $$

Using some basic properties of the vector norms, we can state

$$\begin{aligned} ||{H}({x})||_{1}> & {} {{{\lambda _m}}(\varTheta )(||{f}({x})||_2-||{f}({0})||_2)\over 2||D||_{\infty }}-||{H}({0})||_{1} \end{aligned}$$

Since \(||{H}({0})||_{1}\), \(||D||_{\infty }\), and \(||{f}({0})||_{2}\) are all finite, we conclude that \(||{H}({x})||\rightarrow \infty \) as \(||{x}||\rightarrow \infty \). Q.E.D.
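The condition of Theorem 1 (which reappears in Theorem 2 below) is easy to test numerically. The following Python sketch assembles S, Q, and \(\varTheta \) from given interval bounds and checks positive definiteness; the network data, the candidate \(D=I\), and the unit slope bounds are illustrative assumptions, and since \(\varTheta \) enters the proofs only through \(|f|^{T}\varTheta |f|\), the test is applied to its symmetric part.

```python
import numpy as np

def theta_matrix(C_lower, A_lower, A_upper, B_lower, B_upper, d, k):
    """Assemble Theta = 2*C_lower*D*K^{-1} - D - S - Q as in Theorem 1.

    d and k hold the diagonal entries of D and K; all data are illustrative."""
    n = len(d)
    S = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                S[i, i] = 2.0 * d[i] * A_upper[i, i]
            else:
                S[i, j] = max(abs(d[i] * A_upper[i, j] + d[j] * A_upper[j, i]),
                              abs(d[i] * A_lower[i, j] + d[j] * A_lower[j, i]))
    D = np.diag(d)
    B_star = 0.5 * (B_upper + B_lower)
    B_sub = 0.5 * (B_upper - B_lower)
    Q = (B_sub.T @ D @ np.abs(B_star) + B_sub.T @ D @ B_sub
         + np.abs(B_star.T @ D @ B_star) + np.abs(B_star.T) @ D @ B_sub)
    return 2.0 * np.diag(C_lower) @ D @ np.diag(1.0 / k) - D - S - Q

# Hypothetical two-neuron network data.
C_lower = np.array([3.0, 3.0])
A_lower = -0.1 * np.ones((2, 2)); A_upper = 0.1 * np.ones((2, 2))
B_lower = -0.1 * np.ones((2, 2)); B_upper = 0.1 * np.ones((2, 2))
Theta = theta_matrix(C_lower, A_lower, A_upper, B_lower, B_upper,
                     d=np.ones(2), k=np.ones(2))
eig_min = np.linalg.eigvalsh(0.5 * (Theta + Theta.T)).min()
print("lambda_min of sym(Theta):", eig_min,
      "-> condition holds" if eig_min > 0 else "-> condition fails")
```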

4 Stability Analysis

The stability of neural network (1) is studied in this section. Let \(x^*\) denote the equilibrium point of (1). Then, by means of the transformation \({z}_{i}(\cdot )={x}_{i}(\cdot )-{x_i^*}\), neural system (1) is transformed into a new model whose dynamics is governed by:

$$\begin{aligned} {\dot{z}_{i}(t)}= & {} -c_i{z}_{i}(t)+\sum _{ j=1}^{ n}{a_{ij}}{g_j}({z}_{j}(t))+\sum _{ j=1}^{ n}{b_{ij}}{g_j}({z}_{j}(t-{\tau ))} \end{aligned}$$
(11)

Note that \({g_i}({z}_{i}(\cdot ))={f_i}({z}_{i}(\cdot )+{x_i^*})-{f_i}({x_i^*})\). It is easy to observe that the functions \(g_i\) belong to the class \(\mathcal{K}\), that is, \(g\in \mathcal{K}\), and satisfy \({g_i}(0)=0\).

The vector-matrix form of neural system (11) is

$$\begin{aligned} {\dot{z}(t)}=-C{z}(t)+A{g}({z}(t))+ B{g}({z}(t-{\tau ))} \end{aligned}$$
(12)

In this transformed system, \(z(t)=({z}_{1}(t),{z}_{2}(t),...,{z}_{n}(t))^{T}\) is the state vector and \(g(z(\cdot ))=({g_1}({z}_{1}(\cdot )),{g_2}({z}_{2}(\cdot )),...,{g_n}({z}_{n}(\cdot )))^{T}\) is the vector of transformed activation functions.

The stability result is given as follows:

Theorem 2:

Let the neuron activation functions belong to \(\mathcal{K}\), and assume that the uncertain network elements A, B and C are given by (3). Then, the origin of the delayed neural system described by (11) is globally asymptotically stable if one can find a positive diagonal matrix \(D=diag({d_i}>0)\) satisfying the following condition

$${\varTheta }=2{{\underline{C}}D{K^{-1}}}-{D}-S-Q>0$$

where \(K=diag({k_i}>0)\), \(Q={{B_*}^T}D|{{B^*}}|+{{B_*}^T}D{{B_*}}+|{{B^*}^T}D{{B^*}}|+|{{B^*}^T}|D{B_*}\), \(S=(s_{ij})\) is the matrix whose diagonal elements are defined by \(s_{ii}=2{d_i}{\overline{a}_{ii}}\) and off-diagonal elements are defined by \(s_{ij}=max{(|{d_i}\overline{a}_{ij}+{d_j}\overline{a}_{ji}|, |{d_i}\underline{a}_{ij}+{d_j}\underline{a}_{ji}|)}\), the matrix \({B^*}\) included in Q is defined as \({B^*}={1\over 2}({\overline{B}}+{\underline{B}})\) and the nonnegative matrix \({B_*}\) included in Q is defined as \({B_*}={1\over 2}({\overline{B}}-{\underline{B}})\).

Proof:

The Lyapunov functional to be exploited for the proof of this theorem is chosen as:

$$\begin{aligned} V({z}(t))= & {} \sum _{ i=1}^{ n}({z_i^2}(t)+2{\gamma }{\int _{0}^{{z_i}(t)}}{d_i} {g_i}(s)ds)\\&+\,{\gamma } {\int _{t-\tau }^{t}} |{g^T}({z}(\zeta ))|Q|{g}({z}(\zeta ))|d \zeta +{\xi } {\int _{t-\tau }^{t}} {\Vert {g}({z}(\zeta ))\Vert _2^2}d \zeta \end{aligned}$$

where \(d_i\), \(\gamma \), and \(\xi \) are positive constants. The time derivative of V along the trajectories of (12) is obtained as

$$\begin{aligned} {\dot{V}}({z}(t))= & {} -2{z}^{T}(t)C{z}(t)+2{z}^{T}(t)A{g}({z}(t))+2{z}^{T}(t)B{g}({z}(t-{\tau ))}\nonumber \\&-\,2{\gamma }{g^T}({z}(t))DC{z}(t)+2{\gamma }{g^T}({z}(t))DA{g}({z}(t))\nonumber \\&+\,2{\gamma }{g^T}({z}(t))DB{g}({z}(t-{\tau ))}\nonumber \\&+\,{\gamma }|{g^T}({z}(t))|Q|{g}({z}(t))|-{\gamma }|{g^T}({z}(t-\tau ))|Q|{g}({z}(t-\tau ))|\nonumber \\&+\,{\xi }{\Vert {g}({z}(t))\Vert _2^2}-{\xi }{\Vert {g}({z}(t-\tau ))\Vert _2^2} \end{aligned}$$
(13)

Let \(\alpha ={\Vert A\Vert _2^2}{\Vert C^{-1}\Vert _2}\) and \(\beta ={\Vert B\Vert _2^2}{\Vert C^{-1}\Vert _2}\). We now observe the following inequalities:

$$\begin{aligned} 2{z}^{T}(t)A{g}({z}(t)) -{z}^{T}(t)C{z}(t)\le & {} \alpha {\Vert {g}({z}(t))\Vert _2^2} \end{aligned}$$
(14)
$$\begin{aligned} 2{z}^{T}(t)B{g}({z}(t-\tau )-{z}^{T}(t)C{z}(t)\le & {} {g^T}({z}(t-\tau )){B^T}{C^{-1}}B{g}({z}(t-\tau ))\nonumber \\\le & {} \beta {\Vert {g}({z}(t-\tau ))\Vert _2^2} \end{aligned}$$
(15)
$$\begin{aligned} 2{\gamma }{g^T}({z}(t))DA{g}({z}(t))= & {} {\gamma }{g^T}({z}(t))(DA+{A^T}D){g}({z}(t))\nonumber \\\le & {} {\gamma }|{g^T}({z}(t))|S|{g}({z}(t))| \end{aligned}$$
(16)
$$\begin{aligned} 2{\gamma }{g^T}({z}(t))DB{g}({z}(t-{\tau ))}\le & {} {\gamma }{g^T}({z}(t)){D}{g}({z}(t))\nonumber \\&+\,{\gamma }{g^T}({z}(t-\tau )){B^{T}}DB{g}({z}(t-\tau ))\nonumber \\\le & {} {\gamma }{g^T}({z}(t)){D}{g}({z}(t))\nonumber \\&+\,{\gamma }|{g^T}({z}(t-\tau ))|Q|{g}({z}(t-\tau ))| \end{aligned}$$
(17)
$$\begin{aligned} -2{\gamma }{g^T}({z}(t))DC{z}(t)\le & {} -2{\gamma }{g^T}({z}(t))DC{K^{-1}}{g}({z}(t)) \end{aligned}$$
(18)

Inserting (14)–(18) into (13) yields

$$\begin{aligned} {\dot{V}}({z}(t))\le & {} \alpha {\Vert {g}({z}(t))\Vert _2^2}+\beta {\Vert {g}({z}(t-\tau ))\Vert _2^2}\nonumber \\&-\,2{\gamma }{g^T}({z}(t))D{\underline{C}}{K^{-1}}{g}({z}(t))+{\gamma }|{g^T}({z}(t))|S|{g}({z}(t))|\nonumber \\&+\,{\gamma }{g^T}({z}(t)){D}{g}({z}(t))+{\gamma }|{g^T}({z}(t-\tau ))|Q|{g}({z}(t-\tau ))|\nonumber \\&+\,{\gamma }|{g^T}({z}(t))|Q|{g}({z}(t))|-{\gamma }|{g^T}({z}(t-\tau ))|Q|{g}({z}(t-\tau ))|\nonumber \\&+\,{\xi }{\Vert {g}({z}(t))\Vert _2^2}-{\xi }{\Vert {g}({z}(t-\tau ))\Vert _2^2} \end{aligned}$$

Taking \({\xi }={{\beta }}\) leads to

$$\begin{aligned} {\dot{V}}({z}(t))\le & {} {({{\beta }}+{{\alpha }})}{\Vert {g}({z}(t))\Vert _2^2}-2{\gamma }{g^T}({z}(t))D{\underline{C}}{K^{-1}}{g}({z}(t))\nonumber \\&+\,{\gamma }|{g^T}({z}(t))|S|{g}({z}(t))|+{\gamma }{g^T}({z}(t)){D}{g}({z}(t))\nonumber \\&+\,{\gamma }|{g^T}({z}(t))|Q|{g}({z}(t))|\nonumber \\= & {} {({{\beta }}+{{\alpha }})}{\Vert {g}({z}(t))\Vert _2^2}-{\gamma }|{g^T}({z}(t))|(2{\underline{C}}D{K^{-1}}-{D}-S-Q)|{g}({z}(t))|\nonumber \\= & {} {({{\beta }}+{{\alpha }})}{\Vert {g}({z}(t))\Vert _2^2}-{\gamma }|{g^T}({z}(t))|\varTheta |{g}({z}(t))| \end{aligned}$$
(19)

Since \(\varTheta >0\), (19) gives

$$\begin{aligned} {\dot{V}}({z}(t))\le & {} -\,({\gamma }{\lambda _m}({\varTheta })-({{\alpha }}+{{\beta }}))\Vert {g}({z}(t))\Vert _2^2 \end{aligned}$$

Thus

$${\gamma }>{{{({{\alpha }}+{{\beta }})}}\over {{{\lambda _m}}(\varTheta )} }$$

guarantees that \({\dot{V}}({z}(t))\) takes negative values for all \({g}({z}(t))\ne 0\).

Now let \({g}({z}(t))= 0\) and \({z}(t)\ne 0\), in which case

$$\begin{aligned} {\dot{V}}({z}(t))= & {} -2{z}^{T}(t)C{z}(t)+2{z}^{T}(t)B{g}({z}(t-{\tau ))}\\&-\,{\gamma }|{g^T}({z}(t-\tau ))|Q|{g}({z}(t-\tau ))|-{\xi }{\Vert {g}({z}(t-\tau ))\Vert _2^2} \end{aligned}$$

Then, the inequality

$$ -{z}^{T}(t)C{z}(t)+ 2{z}^{T}(t)B{g}({z}(t-{\tau ))}\le {\xi }{\Vert {g}({z}(t-\tau ))\Vert _2^2} $$

leads to

$$ {\dot{V}}({z}(t)) \le -{z}^{T}(t)C{z}(t)$$

which shows that \({\dot{V}}({z}(t))<0\) for all \({z}(t)\ne 0\) in this case. Finally, \({g}({z}(t))= {z}(t)=0\) leads to

$$\begin{aligned} {\dot{V}}({z}(t)) \le -{\xi }{\Vert {g}({z}(t-\tau ))\Vert _2^2} \end{aligned}$$

Clearly, \({\dot{V}}({z}(t))<0\) for all \({g}({z}(t-\tau ))\ne 0\). Hence, \({\dot{V}}({z}(t))=0\) if and only if \({z}(t)={g}({z}(t))={g}({z}(t-\tau ))=0\); in other words, \({\dot{V}}({z}(t))<0\) whenever the states and the delayed states are not all equal to zero. The radial unboundedness of the Lyapunov functional is easily checked by showing that \(V({z}(t))\rightarrow \infty \) as \(\Vert {z}(t)\Vert \rightarrow \infty \). Q.E.D.
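To complement the analysis, the following Python sketch (hypothetical parameter values chosen for illustration only; not taken from the original text) integrates the delayed system (2) with a simple Euler scheme and a constant initial history, illustrating convergence to the unique equilibrium predicted by Theorems 1 and 2.

```python
import numpy as np

# Hypothetical network data for model (2); illustrative values only.
n, tau, dt, T = 2, 1.0, 0.001, 20.0
C = np.diag([3.0, 3.0])
A = np.array([[0.1, -0.1], [0.05, 0.1]])
B = np.array([[-0.05, 0.1], [0.1, -0.1]])
u = np.array([0.5, -0.5])
f = np.tanh                                   # slope-bounded activation (k_i = 1)

steps = int(T / dt)
delay = int(tau / dt)
x = np.zeros((steps + 1, n))
x[:delay + 1] = np.array([1.0, -2.0])         # constant initial history on [-tau, 0]

for t in range(delay, steps):
    dx = -C @ x[t] + A @ f(x[t]) + B @ f(x[t - delay]) + u
    x[t + 1] = x[t] + dt * dx

print("final state:", x[-1])                  # trajectories settle near the equilibrium
```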

5 Conclusions

This work proposed an improved condition for the robust stability of neural networks with intervalized network parameters and a single time delay. To obtain the new robust stability condition, a new upper bound for the norm of the intervalized interconnection matrices was established. The homeomorphic mapping and Lyapunov stability theorems were then employed to derive the proposed stability condition by making use of this upper bound. The obtained result is applicable to all nondecreasing, slope-bounded activation functions and imposes constraints on the network parameters that do not involve the time delay.