1 Introduction

In recent years, dynamical neural networks have been widely used to solve various classes of engineering problems such as image and signal processing, associative memory, pattern recognition, parallel computation, control, and optimization. In such applications, the equilibrium and stability properties of the network are of great importance in the design of dynamical neural networks. It is known that in the VLSI implementation of neural networks, time delays are unavoidably encountered during the processing and transmission of signals, and these delays may affect the dynamics of the network. On the other hand, deviations in the network parameters may also affect its stability properties. Therefore, time delays and parameter uncertainties must both be taken into account when studying the stability of neural networks, which leads to the problem of robust stability of delayed neural networks. Recently, many conditions for the global robust stability of delayed neural networks have been reported [1–19]. In this paper, we present a new sufficient condition for the global robust asymptotic stability of neural networks with multiple time delays.

Consider the following neural network model:

$$\begin{aligned} {d{x_i}(t)\over dt}= & {} -{c_i}{x_i}(t)+\sum _{j=1}^{n}{a_{ij}}{{f_j}({x_j}}(t))+\sum _{j=1}^{n}{b_{ij}}{{f_j}({x_j}}(t-{\tau }_{ij}))+{u_i} \end{aligned}$$
(1)

where n is the number of neurons, \({x}_{i}(t)\) denotes the state of neuron i at time t, \({f_i}({\cdot })\) denotes the activation function, \(a_{ij}\) and \(b_{ij}\) denote the strengths of connectivity between neurons j and i at times t and \(t-{\tau _{ij}}\), respectively; \({\tau _{ij}}\) represents the time delay, \(u_i\) is the constant input to neuron i, and \({c}_{i}\) is the charging rate of neuron i.

The parameters \(a_{ij}\), \(b_{ij}\) and \({c}_{i}\) are assumed to satisfy the conditions

$$\begin{aligned}&{C_I}=[{\underline{C}},{\overline{C}}]= \{C=diag(c_i): 0<{{\underline{c}_i}}{\le }{c_i}{\le }{\overline{c}_i}, i=1,2,...,n\}\nonumber \\& {A_I}=[{\underline{A}},{\overline{A}}]=\{A=(a_{ij})_{n\times n}: {{\underline{a}_{ij}}} {\le }{a_{ij}}{\le } {\overline{a}_{ij}}, i,j=1,2,...,n\}\\& {B_I}=[{\underline{B}},{\overline{B}}]=\{B=(b_{ij})_{n\times n}: {{\underline{b}_{ij}}} {\le }{b_{ij}}{\le } {\overline{b}_{ij}}, i,j=1,2,...,n\}\nonumber \end{aligned}$$
(2)

The activation functions \(f_i\) are assumed to satisfy the condition:

$$ |{f_i}{(x)}-{f_i}{(y)}|{\le }{\ell _i}|x-y|,{\,}i=1,2,...,n, {\,\,\,}\forall x, y \in R, {x \ne }{y} $$

where \(\ell _i>0\) denotes a constant. This class of functions is denoted by \(f\in \mathcal{L}\).
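As a concrete instance, the standard sigmoidal activation \(\tanh\) belongs to \(\mathcal{L}\) with \(\ell _i=1\). The following is an illustrative numerical spot-check of the Lipschitz condition (not a proof); the sample size and tolerance are arbitrary choices:

```python
import numpy as np

# Spot-check that tanh satisfies |f(x) - f(y)| <= ell * |x - y| with ell = 1.
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = rng.normal(size=10_000)
ell = 1.0

# A small tolerance absorbs floating-point rounding.
assert np.all(np.abs(np.tanh(x) - np.tanh(y)) <= ell * np.abs(x - y) + 1e-12)
print("tanh satisfies the Lipschitz condition with ell = 1")
```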

The following lemma will play an important role in the proofs:

Lemma 1

[3]: Let A be any real matrix defined by

$$A\in {A_I}=[{\underline{A}},{\overline{A}}]=\{A=(a_{ij})_{n\times n}: {{\underline{a}_{ij}}} {\le }{a_{ij}}{\le } {\overline{a}_{ij}}, i,j=1,2,...,n\}$$

Let \(x =(x_{1},x_{2}, ..., x_{n})^{T} \) and \(y =(y_{1},y_{2}, ..., y_{n})^{T} \). Then, we have

$$ 2{x^T}Ay\le {\beta }\sum _{ i=1}^{ n}{x_i^2}+{1\over \beta }\sum _{ i=1}^{ n}{p_i}{y_i^2} $$

where \(\beta \) is any positive constant, and

$$ {p_i}=\sum _{ k=1}^{ n}({\hat{a}_{ki}}\sum _{ j=1}^{ n}{\hat{a}_{kj}}),{\,\,} i=1,2,...,n $$

with \( {\hat{a}_{ij}}=max\{|{\underline{a}_{ij}}|, |{\overline{a}_{ij}}|\}, i,j=1,2,...,n\).
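Lemma 1 can be sanity-checked numerically by drawing random matrices from the interval \([{\underline{A}},{\overline{A}}]\) and random vectors x, y. The interval bounds below are hypothetical, chosen only for the demonstration; random trials cannot prove the lemma, only fail to falsify it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A_lower = rng.uniform(-1.0, 0.0, (n, n))   # underline{A}, hypothetical
A_upper = rng.uniform(0.0, 1.0, (n, n))    # overline{A}, hypothetical
A_hat = np.maximum(np.abs(A_lower), np.abs(A_upper))

# p_i = sum_k a_hat_{ki} * (sum_j a_hat_{kj})
p = A_hat.T @ A_hat.sum(axis=1)

for _ in range(1000):
    A = rng.uniform(A_lower, A_upper)      # any A in the interval
    x = rng.normal(size=n)
    y = rng.normal(size=n)
    beta = rng.uniform(0.1, 10.0)          # any positive constant
    lhs = 2.0 * x @ A @ y
    rhs = beta * (x @ x) + (1.0 / beta) * (p @ y**2)
    assert lhs <= rhs + 1e-9
print("Lemma 1 inequality held in all random trials")
```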

2 Global Robust Stability Analysis

In this section, we present the following result:

Theorem 1

For the neural system (1), let the network parameters satisfy (2) and \(f\in \mathcal{L}\). Then, the neural network model (1) is globally robustly asymptotically stable if there exist positive constants \(\alpha \) and \(\beta \) such that

$$ {\varepsilon _i}=2{{\underline{c}_i}}-{\beta }-{1\over \beta }{p_i}{\ell _i^2}-\sum _{ j=1}^{ n}(\alpha {\ell _j} +{1\over \alpha }{\hat{b}_{ji}^2}{\ell _i})>0, {\,\,\,}{i=1,2,...,n} $$

where \({p_i}=\sum _{ j=1}^{ n}({\hat{a}_{ji}}\sum _{ k=1}^{ n}{\hat{a}_{jk}}), {\,\,\,}{i=1,2,...,n}\) and \( {\hat{a}_{ij}}=max\{|{\underline{a}_{ij}}|, |{\overline{a}_{ij}}|\}\) and \( {\hat{b}_{ij}}=max\{|{\underline{b}_{ij}}|, |{\overline{b}_{ij}}|\},\) \( {\,\,\,}{i,j=1,2,...,n}\).
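The condition of Theorem 1 is straightforward to evaluate from the interval bounds. The sketch below computes \(\varepsilon _i\) for a hypothetical two-neuron example (all parameter values and the choices \(\alpha =\beta =1\) are for illustration only):

```python
import numpy as np

n = 2
c_lower = np.array([10.0, 10.0])              # underline{c}_i, hypothetical
A_hat = np.array([[0.2, 0.1], [0.1, 0.2]])    # a_hat_{ij} = max(|a_low|, |a_up|)
B_hat = np.array([[0.3, 0.2], [0.2, 0.3]])    # b_hat_{ij}
ell = np.array([1.0, 1.0])                    # Lipschitz constants
alpha, beta = 1.0, 1.0                        # free positive constants

# p_i = sum_j a_hat_{ji} * (sum_k a_hat_{jk})
p = A_hat.T @ A_hat.sum(axis=1)

# eps_i = 2 c_lower_i - beta - (1/beta) p_i ell_i^2
#         - sum_j (alpha * ell_j + (1/alpha) * b_hat_{ji}^2 * ell_i)
eps = (2.0 * c_lower - beta - (p * ell**2) / beta
       - (alpha * ell.sum() + (ell / alpha) * (B_hat**2).sum(axis=0)))
print("eps =", eps)
assert np.all(eps > 0)  # Theorem 1 condition holds for this example
```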

Proof

In order to prove the existence and uniqueness of the equilibrium point of system (1), we consider the following mapping associated with system (1):

$$\begin{aligned} H(x)=-Cx+Af(x)+Bf(x)+u \end{aligned}$$
(3)

Clearly, if \(x^*\) is an equilibrium point of (1), then, \(x^*\) satisfies the equilibrium equation of (1):

$$ -C{x^*}+Af(x^*)+Bf(x^*)+u=0 $$

Hence, we can easily see that every solution of \(H(x)=0\) is an equilibrium point of (1). Therefore, system (1) possesses a unique equilibrium point for every input vector u if H(x) is a homeomorphism of \(R^n\). Now, let x, \(y\in R^n\) be two vectors such that \(x\ne y\). For H(x) defined by (3), we can write

$$\begin{aligned} {H}({x})-{H}({y})=-C(x-y)+A({f}({x})-{f}({y}))+ B({f}({x})-{f}({y})) \end{aligned}$$
(4)

For \(f\in \mathcal{L}\), first consider the case where \(x\ne y\) and \({f}({x})-{f}({y})=0\). In this case, we have

$$ {H}({x})-{H}({y})=-C(x-y) $$

from which \(x-y\ne 0\) implies that \({H}({x})\ne {H}({y})\) since C is a positive diagonal matrix. For \(f\in \mathcal{L}\), now, consider the case where \(x-y\ne 0\) and \({f}({x})-{f}({y})\ne 0\). In this case, multiplying both sides of (4) by \(2(x-y)^{T}\) results in

$$\begin{aligned} 2(x-y)^{T}(H(x)-H(y))= & {} -\,\, 2(x-y)^{T}C(x-y)+2(x-y)^{T}A(f(x)-f(y))\nonumber \\&+\,\, 2(x-y)^{T}B(f(x)-f(y))\nonumber \\= & {} -\,\, 2\sum _{i=1}^{ n}{c_i}({x_i}-{y_i})^{2}+2(x-y)^{T}A(f(x)-f(y))\nonumber \\&+\,\, 2\sum _{ i=1}^{ n}\sum _{ j=1}^{n}{b_{ij}}({x_i}-{y_i})({f_j}({x_j})-{f_j}({y_j})) \end{aligned}$$
(5)

We first note the following inequality:

$$\begin{aligned}&2\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{b_{ij}}({x_i}-{y_i})({f_j}({x_j})-{f_j}({y_j}))\nonumber \\\le & {} \sum _{ i=1}^{ n}\sum _{ j=1}^{ n}2|{b_{ij}}||{x_i}-{y_i}||{f_j}({x_j})-{f_j}({y_j})|\nonumber \\\le & {} \sum _{ i=1}^{ n}\sum _{ j=1}^{ n}2|{b_{ij}}|{\ell _j}|{x_i}-{y_i}||{x_j}-{y_j}|\nonumber \\\le & {} \sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{\ell _j}(\alpha ({x_i}-{y_i})^{2}+{1\over \alpha }{b_{ij}^2}({x_j}-{y_j})^{2})\nonumber \\= & {} \alpha \sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{\ell _j}({x_i}-{y_i})^{2} +{1\over \alpha }\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{b_{ij}^2}{\ell _j}({x_j}-{y_j})^{2}\nonumber \\= & {} \alpha \sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{\ell _j}({x_i}-{y_i})^{2} +{1\over \alpha }\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{b_{ji}^2}{\ell _i}({x_i}-{y_i})^{2}\nonumber \\\le & {} \sum _{ i=1}^{ n}\sum _{ j=1}^{ n}(\alpha {\ell _j} +{1\over \alpha }{\hat{b}_{ji}^2}{\ell _i})({x_i}-{y_i})^{2} \end{aligned}$$
(6)

For any positive constant \(\beta \), we can also write

$$\begin{aligned} 2(x-y)^{T}A(f(x)-f(y))\le {\beta }(x-y)^{T}(x-y)+{1\over \beta }(f(x)-f(y))^{T}{A^T}A(f(x)-f(y)) \end{aligned}$$
(7)

For \(f\in \mathcal{L}\), from Lemma 1, we can write

$$\begin{aligned} (f(x)-f(y))^{T}{A^T}A(f(x)-f(y))\le & {} \sum _{ i=1}^{ n}{p_i}({f_i}({x_i})-{f_i}({y_i}))^{2}\nonumber \\\le & {} \sum _{ i=1}^{ n}{p_i}{\ell _i^2}({x_i}-{y_i})^{2} \end{aligned}$$
(8)

Hence, in the light of (6)–(8), (5) takes the form:

$$\begin{aligned} 2(x-y)^{T}(H(x)-H(y))\le & {} -2\sum _{i=1}^{ n}{\underline{c}_i}({x_i}-{y_i})^{2}+{\beta }\sum _{i=1}^{ n}({x_i}-{y_i})^{2}\nonumber \\&\,+\,\,{1\over \beta }\sum _{ i=1}^{ n}{p_i}{\ell _i^2}({x_i}-{y_i})^{2}+\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}(\alpha {\ell _j} +{1\over \alpha }{\hat{b}_{ji}^2}{\ell _i})({x_i}-{y_i})^{2} \end{aligned}$$

which is equivalent to

$$\begin{aligned} 2(x-y)^{T}(H(x)-H(y))\le & {} -\sum _{ i=1}^{ n}(2{{\underline{c}_i}}-{\beta }-{1\over \beta }{p_i}{\ell _i^2}-\sum _{ j=1}^{ n}(\alpha {\ell _j} {+}{1\over \alpha }{\hat{b}_{ji}^2}{\ell _i}))({x_i}-{y_i})^{2}\nonumber \\= & {} -\sum _{ i=1}^{ n}{\varepsilon _i}({x_i}-{y_i})^{2}\le -{\varepsilon _m}\sum _{ i=1}^{ n}({x_i}-{y_i})^{2}\nonumber \\= & {} -{\varepsilon _m}{||x-y||_2^2} \end{aligned}$$
(9)

where \({\varepsilon _m}=\min \{{\varepsilon _i}: i=1,2,...,n\}\). Since \({\varepsilon _i}>0\) for all i by the hypothesis of Theorem 1, we have \({\varepsilon _m}>0\). Then, for \(x-y\ne 0\),

$$\begin{aligned} (x-y)^{T}({H}({x})-{H}({y}))< & {} 0 \end{aligned}$$

from which we can conclude that \({H}({x})\ne {H}({y})\) for all \(x\ne y\). In order to show that \(||{H}({x})||\rightarrow \infty \) as \(||{x}||\rightarrow \infty \), we let \(y=0\) in (9), which yields

$$\begin{aligned} x^{T}(H(x)-H(0))\le & {} -{{\varepsilon _m}}{||x||_2^2} \end{aligned}$$

from which, applying the Cauchy–Schwarz inequality \(|x^{T}(H(x)-H(0))|\le ||x||_{2}{\,}||H(x)-H(0)||_{2}\) together with \(||v||_{1}\ge ||v||_{2}\), it follows that \(||H(x)-H(0)||_{1} \ge {{\varepsilon _m}}{||x||_2}\). Using the triangle inequality \(||H(x)-H(0)||_{1}\le ||H(x)||_{1}+||H(0)||_{1}\), we obtain \(||H(x)||_{1}\ge {{\varepsilon _m}}{||x||_2}-||H(0)||_{1}\). Since \(||H(0)||_{1}\) is finite, it follows that \(||{H}({x})||\rightarrow \infty \) as \(||{x}||\rightarrow \infty \). This completes the proof of the existence and uniqueness of the equilibrium point of (1).

We will now prove the global asymptotic stability of the equilibrium point of system (1). We first shift the equilibrium point \(x^*\) of system (1) to the origin by means of the transformation \({z}_{i}(\cdot )={x}_{i}(\cdot )-{x_i^*},{\,\,\,}{i=1,2,...,n}\), which puts (1) in the form:

$$\begin{aligned} {\dot{z}_{i}(t)}=-c_i{z}_{i}(t)+\sum _{ j=1}^{ n}{a_{ij}}{g_j}({z}_{j}(t)) +\sum _{ j=1}^{ n}{b_{ij}}{g_j}({z}_{j}(t-{\tau _{ij}))} \end{aligned}$$
(10)

where \( {g_i}({z}_{i}(\cdot ))={f_i}({z}_{i}(\cdot )+{x_i^*})-{f_i}({x_i^*})\). Note that \(f\in \mathcal{L}\) implies that \(g\in \mathcal{L}\) with

$$ |{g_i}{(z)}|{\le }{\ell _i}|z|, {\,\,\,}\text{ and }{\,\,\,}{g_i}(0)=0,{\,\,\, }{i=1,2,...,n} $$

Since \(z(t)\rightarrow 0\) implies that \(x(t)\rightarrow x^*\), the asymptotic stability of \(z(t)=0\) is equivalent to that of \(x^*\). In order to prove the global asymptotic stability of \(z(t)=0\), we will employ the following positive definite Lyapunov functional:

$$\begin{aligned} V({z}(t))= & {} \sum _{ i=1}^{ n}{z_i^2}(t) +\sum _{i=1}^{n}\sum _{j=1}^{n}(\gamma +{1\over \alpha }{\ell _j}{\hat{b}}_{ij}^2){\int _{t-\tau _{ij}}^{t}} {z_j^2}(\xi )d \xi \end{aligned}$$

where \(\alpha \) and \(\gamma \) are some positive constants. The time derivative of the functional along the trajectories of system (10) is obtained as follows

$$\begin{aligned} \dot{V}({z}(t))= & {} -\,\,2\sum _{ i=1}^{ n}{c_i}{z^2_i}(t)+2\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{a_{ij}}{z_i}(t){g_j}({z}_{j}(t))\nonumber \\&+\,\,2\sum _{i=1}^{ n}\sum _{ j=1}^{ n}{b_{ij}}{z_i}(t){g_j}({z}_{j}(t-{\tau _{ij}))}\nonumber \\&+\,\,\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{1\over \alpha }{\ell _j}{{\hat{b}}_{ij}^2} {z_j^2}(t)-\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{1\over \alpha }{\ell _j}{{{\hat{b}}_{ij}^2}}{z_j^2}(t-\tau _{ij})\nonumber \\&+\,\,{\gamma }\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{z_j^2}(t)-{\gamma }\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{z_j^2}(t-\tau _{ij}) \end{aligned}$$
(11)

We have

$$\begin{aligned} -\sum _{ i=1}^{ n}{c_i}{z^2_i}(t){\le }-\sum _{ i=1}^{ n}{\underline{c}_i}{z^2_i}(t) \end{aligned}$$
(12)

For any positive constant \(\beta \), we can write

$$\begin{aligned} 2\sum _{ i=1}^{ n}\sum _{ j=1}^{n}{a_{ij}}{z_i}(t){g_j}({z}_{j}(t))&{\le }\,\, {\beta }{z^T}(t){z}(t)+{1\over \beta }{g^{T}}(z(t)){A^T}Ag(z(t)) \end{aligned}$$
(13)

From Lemma 1, we obtain:

$$\begin{aligned} {g^{T}}(z(t)){A^T}Ag(z(t))\le & {} \sum _{ i=1}^{ n}{p_i}{g_i^2}({z}_{i}(t)) \end{aligned}$$

Since \( |{g_i}{({z}_{i}(t))}|{\le }{\ell _i}|{z}_{i}(t)|,{\,\,}({i=1,2,...,n}) \), the above inequality can be written as

$$\begin{aligned} {g^{T}}(z(t)){A^T}Ag(z(t))\le & {} \sum _{ i=1}^{ n}{p_i}{\ell _i^2}{z^2_i}(t) \end{aligned}$$
(14)

Using (14) in (13) results in

$$\begin{aligned} 2{z^T}(t)Ag(z(t)){\le } {\beta }\sum _{ i=1}^{ n}{z^2_i}(t)+{1\over \beta }\sum _{ i=1}^{ n}{p_i}{\ell _i^2}{z^2_i}(t) \end{aligned}$$
(15)

We also note that

$$\begin{aligned} 2\sum _{ i=1}^{ n}\sum _{ j=1}^{n}{b_{ij}}{z_i}(t){g_j}({z}_{j}(t-{\tau _{ij}))}&{\le }&\,\, \sum _{i=1}^{ n}\sum _{ j=1}^{n}2|{b_{ij}}||{z_i}(t)||{g_j}({z}_{j}(t-{\tau _{ij}))}|\nonumber \\&{\le }&\,\, \sum _{ i=1}^{ n}\sum _{ j=1}^{n}2{\ell _j}|{b_{ij}}||{z_i}(t)||{z}_{j}(t-{\tau _{ij})}|\nonumber \\&{\le }&\,\, \sum _{ i=1}^{ n}\sum _{ j=1}^{n}{\ell _j}(\alpha {z_i^2}(t)+{1\over \alpha }{{\hat{b}}_{ij}^2}{z_j^2}(t-{\tau _{ij})}) \end{aligned}$$
(16)

where \(\alpha \) is a positive constant. Using (12), (15) and (16) in (11), we obtain

$$\begin{aligned} \dot{V}({z}(t))\le & {} -2\sum _{ i=1}^{ n}{\underline{c}_i}{z^2_i}(t)+\sum _{ i=1}^{ n}{\beta }{z^2_i}(t)+{1\over \beta }\sum _{ i=1}^{ n}{p_i}{\ell _i^2}{z_i^2}(t)\nonumber \\&+ \sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{\ell _j}\alpha {z_i^2}(t)+\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{1\over \alpha }{\ell _i}{{\hat{b}}_{ji}^2}{z_i^2}(t)\nonumber \\&+\,\,{\gamma }\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{z_j^2}(t)-{\gamma }\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{z_j^2}(t-\tau _{ij}) \end{aligned}$$

which can be written as

$$\begin{aligned} \dot{V}({z}(t))\le & {} -\sum _{ i=1}^{ n}(2{{\underline{c}_i}}-{\beta }-{1\over \beta }{p_i}{\ell _i^2}-\sum _{ j=1}^{ n}(\alpha {\ell _j} +{1\over \alpha }{\hat{b}_{ji}^2}{\ell _i})){z_i^2}(t)\nonumber \\&+\,\,{\gamma }\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{z_j^2}(t)-{\gamma }\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{z_j^2}(t-\tau _{ij})\nonumber \\= & {} -\sum _{ i=1}^{ n}{\varepsilon _i}{z_i^2}(t)+{\gamma }\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{z_j^2}(t)-{\gamma }\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{z_j^2}(t-\tau _{ij})\nonumber \\\le & {} -\sum _{ i=1}^{ n}{\varepsilon _m}{z_i^2}(t)+{\gamma }\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{z_j^2}(t)\nonumber \\= & {} -{\varepsilon _m}{||z(t)||_2^2}+n{\gamma }{||z(t)||_2^2}=-({{\varepsilon _m}}-n{\gamma }){||z(t)||_2^2} \end{aligned}$$
(17)

In (17), \({\gamma }<{{\varepsilon _m} \over n}\) implies that \(\dot{V}({z}(t))\) is negative definite for all \({z}(t)\ne 0\). Now let \({z}(t)=0\). Then, \(\dot{V}({z}(t))\) is of the form:

$$\begin{aligned} \dot{V}({z}(t))= & {} -{1\over \alpha }\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{\ell _j}{{\hat{b}}_{ij}^2}{z_j^2}(t-\tau _{ij})-\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{\gamma }{z_j^2}(t-\tau _{ij})\\\le & {} -\sum _{ i=1}^{ n}\sum _{ j=1}^{ n}{\gamma }{z_j^2}(t-\tau _{ij}) \end{aligned}$$

in which \(\dot{V}({z}(t))<0\) if at least one \({z_j}(t-\tau _{ij})\) is nonzero. Hence, \(\dot{V}({z}(t))=0\) if and only if \({z}(t)=0\) and \({z_j}(t-\tau _{ij})=0\) for all i, j, and \(\dot{V}({z}(t))<0\) otherwise. Also note that V(z(t)) is radially unbounded since \(V({z}(t))\rightarrow \infty \) as \(||{z}(t)||\rightarrow \infty \). Hence, the origin of system (10), or equivalently the equilibrium point of system (1), is globally asymptotically stable.
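The convergence established above can be illustrated by a rough forward-Euler simulation of system (1). All parameter values below are hypothetical and were chosen so that the condition of Theorem 1 holds (large \({\underline{c}_i}\) relative to the connection strengths); this is an illustration, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
C = np.diag([10.0, 10.0])                  # charging rates c_i
A = np.array([[0.2, -0.1], [0.1, 0.2]])    # instantaneous connections
B = np.array([[0.3, -0.2], [0.2, 0.3]])    # delayed connections
u = np.array([1.0, -1.0])                  # constant inputs
tau = np.array([[0.5, 1.0], [1.0, 0.5]])   # delays tau_ij
f = np.tanh                                # f in L with ell_i = 1

dt, T = 0.001, 5.0
steps = int(T / dt)
max_lag = int(tau.max() / dt)
hist = np.zeros((steps + max_lag + 1, n))
hist[:max_lag + 1] = rng.normal(size=n)    # constant initial history

for t in range(max_lag, max_lag + steps):
    x = hist[t]
    # delayed term: sum_j b_ij * f(x_j(t - tau_ij)), per neuron i
    delayed = np.array([
        sum(B[i, j] * f(hist[t - int(tau[i, j] / dt), j]) for j in range(n))
        for i in range(n)
    ])
    dx = -C @ x + A @ f(x) + delayed + u
    hist[t + 1] = x + dt * dx

print("final state:", hist[-1])
```

After the transient dies out, the final state approximately satisfies the equilibrium equation \(-Cx^*+Af(x^*)+Bf(x^*)+u=0\), consistent with convergence to the unique equilibrium point.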

3 Conclusions

By employing the homeomorphism mapping theorem and the Lyapunov stability theorem, we have derived a new result for the existence, uniqueness, and global robust stability of the equilibrium point of neural networks with multiple constant time delays under Lipschitz activation functions. The key contribution of this paper is the new relationship, given in Lemma 1, between the elements of the interconnection matrix in terms of the maximum absolute values of their interval bounds. The obtained condition is independent of the delay parameters and establishes a new relationship between the network parameters of the system.