Abstract
In this paper, the exponential stability of a class of competitive neural networks with multi-proportional delays is studied. First, through suitable transformations, the networks are equivalently turned into competitive neural networks with multi-constant delays and variable coefficients. By a fixed point theorem, the existence and uniqueness of the equilibrium point of the system is proved. Furthermore, by constructing an appropriate delay differential inequality, delay-independent and delay-dependent sufficient conditions for the exponential stability of the equilibrium point are obtained. Finally, several examples and their simulations are given to illustrate the effectiveness of the obtained results.
1 Introduction
In 1996, Meyer-Baese proposed the competitive neural network model in [1]. As one of the popular artificial neural networks, competitive neural networks have received significant attention. From the viewpoint of biology, human memory is of two kinds: short-term memory (STM) and long-term memory (LTM); STM presents fast neural activity and LTM presents unsupervised and slow synaptic modifications. Competitive neural networks contain two timescales, one dealing with the fast change of the state and the other with the slow change of the synapse caused by external stimulation. They are a kind of unsupervised learning neural network, which refers to the full interconnection between input and output of a single-layer neural network, and are widely used in optimization design, pattern recognition, signal processing, control theory and so on [1–3]. Dynamics of competitive neural networks with different time scales can be found in [4–10]. In both biological and man-made neural networks, the synapses between neurons inevitably exhibit time-delay effects, and the connection weights between neurons are time-varying, which may lead to oscillation, divergence, or even instability. At present, a variety of dynamic behaviors of competitive neural networks with delays have been studied, such as singular perturbation [1], periodicity [11], stability [4, 12–17], synchronization [8, 10, 18–22] and so on. The studies of dynamic behaviors of competitive neural networks with delays mainly focus on constant delays [4, 11, 12, 18], bounded time-varying delays [8, 13, 14, 16, 17, 19, 20], mixed delays (i.e. bounded time-varying delays and distributed delays) [15, 21, 22], etc.
It is well known that stability plays a very important role in the applications of competitive neural networks. Thus, various kinds of stability of competitive neural networks with delays have been widely studied and a great many results have been obtained (see [4, 12–17]). Global exponential stability of competitive neural networks with constant and time-varying delays was studied by constructing Lyapunov functionals in [4, 14], respectively. In [12], the existence and global exponential stability of the equilibrium of competitive neural networks with different time scales and multiple delays were discussed by the nonlinear Lipschitz measure method and by constructing a suitable Lyapunov functional. In [13], exponential stability of competitive neural networks with time-varying and distributed delays was studied by inequality techniques and properties of an M-matrix. In [15, 16], multistability of competitive neural networks with time-varying and distributed delays was studied by using inequality techniques. Global stability and convergence of the equilibrium point for delayed competitive neural networks with different time scales and discontinuous activations were investigated by employing the Leray-Schauder alternative theorem in multi-valued analysis, the linear matrix inequality technique and a generalized Lyapunov-like method in [17].
Different from the above-mentioned delays, a proportional delay is an unbounded time-varying delay. The proportional delay function \(\tau (t)=(1-q)t,\,0<q<1\), is a kind of unbounded delay which often arises in fields such as physics, biological systems and control theory. At the same time, owing to the differences between proportional delays and other delays, past results on the stability of neural networks with delays cannot be directly applied to neural networks with proportional delays. The class of proportional delay differential equations, to which neural networks with proportional delays belong, is an important kind of unbounded delay differential equation and is widely used in many fields, such as light absorption in star substance and nonlinear dynamic systems. Hence, research on the dynamic behaviors of neural networks with proportional delays has important theoretical and practical value. The dynamical behaviors of neural networks with proportional delays have been studied in [23–31]. In [23], dissipativity of a class of cellular neural networks (CNNs) with proportional delays was investigated by using inner product properties. In [24–26, 28], Zhou discussed the global exponential stability and asymptotic stability of CNNs with multi-proportional delays by employing matrix theory and constructing Lyapunov functionals, respectively. Delay-dependent exponential synchronization of recurrent neural networks (RNNs) with multiple proportional delays was studied in [28] by constructing an appropriate Lyapunov functional. The results on dynamical behaviors of neural networks with proportional delays in [24–26, 28] mainly rely on establishing appropriate Lyapunov functionals. It is well known that constructing new Lyapunov functionals is very difficult, and no general method is available.
At present, there are other research methods as well. For example, by constructing nonlinear delay differential inequalities, Zhou studied the global exponential stability of bidirectional associative memory neural networks with proportional delays in [27] and [29]. In [30], stability criteria for high-order networks with proportional delay were derived based on matrix measure and the Halanay inequality. In [31], new explicit conditions ensuring that the state trajectories of the system do not exceed a certain threshold over a pre-specified finite time interval were obtained by matrix inequalities.
The advantage of neural networks with proportional delays is that the network’s running time can be controlled according to the delays the network allows. Thus, it is not only theoretically interesting but also practically valuable to establish sufficient conditions for the stability of neural networks with proportional delays. Until now, no results on the exponential stability of competitive neural networks with proportional delays have been reported. Inspired by [27], the aim of this paper is to discuss the exponential stability of competitive neural networks with proportional delays. By a fixed point theorem, the existence and uniqueness of the equilibrium point of the system is proved. Furthermore, by constructing an appropriate delay differential inequality, delay-independent and delay-dependent sufficient conditions for the exponential stability of the equilibrium are obtained. Finally, two examples and their simulations are given to illustrate the effectiveness of the obtained results.
The rest of the paper is organized as follows. In Sect. 2, the model and preliminaries that will be used later are presented. Through transformations, competitive neural networks with multi-proportional delays can be turned into competitive neural networks with multi-constant delays and variable coefficients. In Sect. 3, some novel sufficient conditions are derived for the existence, uniqueness and stability of the equilibrium point. In Sect. 4, several examples and their simulations are given to show the effectiveness of the obtained results. In Sect. 5, conclusions are provided.
2 Model and Preliminaries
Consider the following competitive neural networks with multi-proportional delays
for \(t\ge 1\), \(i,j=1,2,\ldots ,n\), where \(x_{i}(t)\) denotes the current activity level of neuron i; \(m_{ij}(t)\) is the synaptic efficiency; \(a_{i}>0\) is the changing rate of neuron i; \(b_{ij}\) and \(c_{ij}\) are constants which denote the strength of connectivity between cells j and i at time t and the connection weight at time \(q_{j}t\), respectively; \(d_{j}\) is an arbitrary given constant; \(q_{j}\) is the proportional delay factor and satisfies \(0<q_{j}\le 1\), \(q_{j}t=t-(1-q_{j})t\), in which \((1-q_{j})t\) is the time delay function and \((1-q_{j})t\rightarrow +\infty \) as \(t\rightarrow +\infty \) for \(q_{j}\ne 1\); \(q=\min \limits _{1\le j\le n}\{q_{j}\}\); \(I_{i}\) denotes the external input; \(B_{i}>0\) is an external stimulus intensity; \(\varepsilon >0\) is the fast time scale determined by the STM. In this paper we take \(\varepsilon =1\) for convenience. \(f_{i}(x_{i}(t))\) is the nonlinear activation function.
Let \(s_{i}(t)=\sum \limits _{j=1}^{n}d_{j}m_{ij}(t)\), then (2.1) can be written as
for \(t\ge 1\), with the initial values
where \(\alpha =\sum \limits _{j=1}^{n}d_{j}^{2}>0\). \(x_{i0}\) and \(s_{i0}\), \(i=1,2,\ldots ,n\) are constants. \(x(0)=(x_{10},x_{20},\ldots ,x_{n0})^{T}\), and \(s(0)=(s_{10},s_{20},\ldots ,s_{n0})^{T}\) as \(t\in [q,1]\).
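The passage from \(m_{ij}(t)\) to the summed variable \(s_{i}(t)\) can be checked numerically. As a sketch only: the display equations are not reproduced here, so the LTM dynamics \(\dot{m}_{ij}(t)=-m_{ij}(t)+d_{j}f_{i}(x_{i}(t))\) assumed below is the standard Meyer-Baese form, not a quotation of (2.1). Under that assumption, \(s_{i}=\sum_{j}d_{j}m_{ij}\) satisfies \(\dot{s}_{i}=-s_{i}+\alpha f_{i}(x_{i})\) with \(\alpha =\sum_{j}d_{j}^{2}\):

```python
import math

# Assumed LTM dynamics (standard Meyer-Baese form, not quoted from (2.1)):
#   m_ij' = -m_ij + d_j * f(x_i)
# Then s_i = sum_j d_j * m_ij obeys  s_i' = -s_i + alpha * f(x_i),
# with alpha = sum_j d_j**2, since the d_j are constants.
d = [1.0, 0.5, 0.25]                     # arbitrary illustrative weights d_j
alpha = sum(dj * dj for dj in d)
f = lambda v: 0.4 * math.tanh(v)         # activation as in the later examples
x = lambda t: math.sin(t)                # a fixed external trajectory x_i(t)

h, T = 1e-4, 2.0                         # Euler step and horizon
m = [0.3, -0.1, 0.2]                     # arbitrary initial m_ij
s = sum(dj * mj for dj, mj in zip(d, m)) # matching initial s_i
t = 0.0
while t < T:
    fx = f(x(t))
    m = [mj + h * (-mj + dj * fx) for mj, dj in zip(m, d)]
    s = s + h * (-s + alpha * fx)
    t += h

# Integrating s_i directly agrees with the weighted sum of the m_ij.
assert abs(s - sum(dj * mj for dj, mj in zip(d, m))) < 1e-6
```

The agreement is exact step-by-step because the reduction is an algebraic identity, not an approximation.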
Assumption 1
\(f_{j}(\cdot )\) is bounded and satisfies the Lipschitz condition; that is, there exist constants \(A_{j}>0\) and \(L_{j}>0\) such that, for all \(\varsigma ,\zeta \in \mathbb {R}\),
Assumption 2
\(s_{i}(\cdot )\) is bounded; that is, there exists \(C_{i}>0\) such that
Remark 2.1
In (2.1), \((1-q_{j})t\rightarrow +\infty \) as \(q_{j}\ne 1,~t\rightarrow +\infty \), so those stability results in [4, 12–17] can not be directly applied to (2.1).
Consider the following transformations defined by
then (2.1) can be equivalently turned into competitive neural networks with multi-constant delays and variable coefficients (See, [25])
for \(t\ge 0\), with the initial values
where \(\tau =\max \limits _{1\le j\le n}\{\tau _{j}\}\), in which \(\tau _{j}=-\log q_{j}\ge 0\), \(\varphi _{i}(t)=x_{i0}\), \(\psi _{i}(t)=s_{i0}\), \(t\in [-\tau ,0]\). \(y(0)=(y_{10},y_{20},\ldots ,y_{n0})^{T},\) \(u(0)=(u_{10},u_{20},\ldots ,u_{n0})^{T}\).
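The substitution \(y_{i}(t)=x_{i}(\mathrm{e}^{t})\) converts each proportional delay into a constant one because \(q_{j}\mathrm{e}^{t}=\mathrm{e}^{t-\tau _{j}}\) with \(\tau _{j}=-\log q_{j}\), so \(x_{i}(q_{j}\mathrm{e}^{t})=y_{i}(t-\tau _{j})\). A quick numeric check, using an arbitrary illustrative trajectory in place of a true solution:

```python
import math

q = 0.5
tau = -math.log(q)                       # tau_j = -log(q_j)

x = lambda s: math.sin(s) + 0.3 * s      # arbitrary trajectory x_i(s)
y = lambda t: x(math.exp(t))             # substitution y_i(t) = x_i(e^t)

# Since q * e^t = e^(t - tau), we have x(q * e^t) = y(t - tau):
for t in (0.0, 1.0, 2.5):
    assert abs(y(t - tau) - x(q * math.exp(t))) < 1e-9
```

This is why the price of removing the unbounded delay is the variable coefficients in (2.6): the time rescaling is exponential.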
It follows from Assumption 2 that
namely, \(u_{i}(t)\) is bounded.
Remark 2.2
It is easy to verify that (2.2) and (2.6) have the same equilibria. Hence, to study the stability of the equilibrium of (2.2), we need only consider the stability of the equilibrium of (2.6).
3 Main Results
In this section, we shall establish some sufficient conditions to ensure the global exponential stability of system (2.6).
Theorem 3.1
Under Assumptions 1 and 2, if the following conditions
hold, then system (2.6) has a unique equilibrium point.
Proof
\({(y^*,u^*)^{T}}\) is said to be an equilibrium of system (2.6) if it satisfies the following equations
in which \({y^{*}}=(y_{1}^{*},y_{2}^{*},\ldots ,y_{n}^{*})^{T}\), \({u^{*}}=(u_{1}^{*},u_{2}^{*},\ldots ,u_{n}^{*})^{T}.\)
Define the mapping \(Q(\theta )=(F(\theta ),G(\theta ))^{T}\), where \(\theta ={(y,u)^{T}}\),
\({F(\theta )=(F_{1}(\theta ), F_{2}(\theta ), \ldots , F_{n}(\theta ))^{T}}\), \({G(\theta )=(G_{1}(\theta ), G_{2}(\theta ), \ldots , G_{n}(\theta ))^{T}}\),
in which
Then it follows from (3.3) and Assumption 1 that
where \(r=\max \{ r_{1},r_{2}\} \), in which \(r_{1}\) and \(r_{2}\) are respectively:
Then we have \(\theta ={{(y,u)}}^{T}\in [-r,r]^{2n} \Longrightarrow Q(\theta )=(F(\theta ),G(\theta ))^{T}\in [-r,r]^{2n}\). Because of the continuity of \(f_{j}(\cdot )\), the mapping \(Q:[-r,r]^{2n}\rightarrow [-r,r]^{2n}\) is continuous. By Brouwer’s fixed point theorem, there exists at least one fixed point \({(y^{*},u^{*})^{T}}\) of Q, i.e., an equilibrium point of system (2.6).
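Brouwer’s theorem is non-constructive, but under a contraction-type bound such as (3.1) the equilibrium can be located by plain fixed-point iteration. The toy two-neuron instance below is purely illustrative (all numbers are hypothetical, not the paper’s example), and it assumes equilibrium equations of the form \(a_{i}y_{i}=\sum_{j}(b_{ij}+c_{ij})f(y_{j})+B_{i}u_{i}+I_{i}\), \(u_{i}=\alpha f(y_{i})\), mirroring the structure of (2.6):

```python
import math

# Hypothetical two-neuron instance; the assumed equilibrium equations are
#   a_i y_i = sum_j (b_ij + c_ij) f(y_j) + B_i u_i + I_i,
#   u_i     = alpha * f(y_i).
f = lambda v: 0.4 * math.tanh(v)
a = [3.0, 4.0]
b = [[1.0, 0.0], [0.0, 1.0]]
c = [[1.0, 0.0], [0.0, 2.0]]
B = [1.0, 1.0]
I = [-1.0, 2.0]
alpha = 2.0

y, u = [0.0, 0.0], [0.0, 0.0]
for _ in range(200):                     # contraction => geometric convergence
    u = [alpha * f(yi) for yi in y]
    y = [(sum((b[i][j] + c[i][j]) * f(y[j]) for j in range(2))
          + B[i] * u[i] + I[i]) / a[i] for i in range(2)]

# Residuals of the assumed equilibrium equations are tiny at the fixed point.
for i in range(2):
    lhs = a[i] * y[i]
    rhs = sum((b[i][j] + c[i][j]) * f(y[j]) for j in range(2)) + B[i] * u[i] + I[i]
    assert abs(lhs - rhs) < 1e-8
    assert abs(u[i] - alpha * f(y[i])) < 1e-8
```

The iteration converges here because the small Lipschitz constant of f makes the map a contraction on the box, which is the same smallness condition that drives the uniqueness argument.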
Next we prove the uniqueness of the equilibrium \({(y^{*},u^{*})^{T}}\). Suppose system (2.6) has another equilibrium \({(y^{**},u^{**})^{T}}\); we show that necessarily \(y_{i}^{*}=y_{i}^{**}\) and \(u_{i}^{*}=u_{i}^{**}\) for \(i=1,2,\ldots ,n.\)
Case 1 If \(y^{*}\ne y^{**}\) and \(u^{*}= u^{**}\), then there is some index d such that \(y_{d}^{*}\ne y_{d}^{**}\) and \( y_{j}^{*}= y_{j}^{**}\) for \(j\ne d\), while \(u_{i}^{*}=u_{i}^{**}\) for \(i=1,2,\ldots ,n.\) From (3.1) and (3.2), we can get
which gives \(y_{d}^{*}=y_{d}^{**}\), a contradiction; so this case cannot occur.
Case 2 If \(y^{*}\ne y^{**}\) and \(u^{*}\ne u^{**}\), then there are components \(y_{d}^{*}\ne y_{d}^{**}\) of \(y^{*},\) \(y^{**}\) and \(u_{k}^{*}\ne u_{k}^{**}\) of \(u^{*},\) \(u^{**}\).
When \(d=k\), it follows from (3.2) that
which is a contradiction.
When \(d\ne k\), it follows from (3.2) that
By (3.1) and (3.9), we get \(|u_{k}^{*}- u_{k}^{**}|\le |y_{d}^{*}-y_{d}^{**}|<0\), which is a contradiction. In summary, the equilibrium of system (2.6) is unique. \(\square \)
Suppose
where at least one of \(K_{1}\) and \(K_{2}\) is positive; for example, \(K_{2}=0\) corresponds to \(u_{i}(t)=u_{i}^{*}\) for \(t\in [-\tau ,0].\)
Theorem 3.2
Under Assumptions 1 and 2, if (3.1) holds and there exist positive constants \(\eta \) and K such that
for \(t\ge 0\), \(i=1,2,\ldots ,n\), where \(K=\max \big \{K_{1},K_{2}\big \}>0\) and \(K_{1},K_{2}\) are given by (3.10), then the equilibrium point \({(y^*,u^*)^{T}}\) of system (2.6) is globally exponentially stable.
Proof
From Theorem 3.1, system (2.6) has a unique equilibrium point \({(y^*,u^*)^{T}}\); next we prove that it is globally exponentially stable. By (2.6), for \(t>0,\) we have
Define the following functions:
where \(\mu _{i},~\upsilon _{i}\in [0,+\infty )\). Noting (3.1), we have
where \(\varsigma =\min \{\varsigma _{1},\varsigma _{2}\}\), in which
It follows from (3.13) and (3.14) that \(\Phi _{i}(0)\ge \varsigma \) and \(\Psi _{i}(0)\ge \varsigma \). Obviously, \(\Phi _{i}(\mu _{i})\) and \(\Psi _{i}(\upsilon _{i})\) are continuous, and \(\Phi _{i}(\mu _{i})\rightarrow -\infty \), \(\Psi _{i}(\upsilon _{i})\rightarrow -\infty \) as \(\mu _{i}\rightarrow +\infty \), \(\upsilon _{i}\rightarrow +\infty .\) So there are \(\tilde{\mu }_{i}, \tilde{\upsilon }_{i}\in (0,+\infty )\) such that
Thus, there exists a constant \(\eta \), which satisfies \(0<\eta <\min \limits _{1\le i\le n}\big \{\tilde{\mu }_{i},\tilde{\upsilon }_{i}\big \}\), such that
Accordingly, define functions \(Y_{i}(t)\) and \(U_{i}(t)\) as follows
We obtain the following inequalities
and
By (3.10) and (3.18), we know that
We claim that
First, for \(d>1\), we prove that there are
Suppose that (3.23) does not hold, in the sense that there is one component among the \(Y_{i}\) (say \(Y_{k}\)) and a first time \(t_{1}>0\) such that
while \(Y_{i}(t)<dK\) for \(i\ne k\) and \(U_{i}(t)<dK\); then \(D^{+}Y_{k}(t_{1})\ge 0\). On the other hand, it follows from (3.17) and (3.19) that
which is a contradiction. Thus for \(t\in [-\tau ,+\infty )\), we have \(Y_{i}(t)<dK\). Letting \(d\rightarrow 1\), we obtain \(Y_{i}(t)\le K\). That is, claim (3.22) holds. Namely, the unique equilibrium \((y^{*},u^{*})^{T}\) of system (2.6) is globally exponentially stable. \(\square \)
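A practical point in the proof above is the choice of \(\eta\) below the first positive roots \(\tilde{\mu }_{i},\tilde{\upsilon }_{i}\) of \(\Phi _{i}\) and \(\Psi _{i}\); these roots can be located numerically by bisection. As a sketch only: the true \(\Phi _{i}\) is given by (3.13), and the concrete shape below, \(\Phi (\mu )=a-\mu -\beta -\gamma \mathrm{e}^{\mu \tau }\) with \(\Phi (0)>0\), is merely an assumed representative of that kind of function (strictly decreasing, positive at 0, tending to \(-\infty \)):

```python
import math

# Assumed representative shape (NOT the paper's (3.13)):
#   Phi(mu) = a - mu - beta - gamma * exp(mu * tau),  with Phi(0) > 0.
a, beta, gamma, tau = 3.0, 0.8, 0.8, 0.7
Phi = lambda mu: a - mu - beta - gamma * math.exp(mu * tau)

lo, hi = 0.0, 1.0
while Phi(hi) > 0:          # expand the bracket until a sign change is found
    hi *= 2.0
for _ in range(80):         # bisection down to machine precision
    mid = 0.5 * (lo + hi)
    if Phi(mid) > 0:
        lo = mid
    else:
        hi = mid

mu_root = 0.5 * (lo + hi)
assert mu_root > 0 and abs(Phi(mu_root)) < 1e-9
# Any eta with 0 < eta < mu_root then satisfies Phi(eta) > 0.
```

Since \(\Phi \) is strictly decreasing here, any \(\eta \) strictly below the root keeps the inequality strict, which is exactly the role \(\eta \) plays in the exponential decay rate.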
In view of the proof of Theorem 3.2, we obtain the following delay-dependent sufficient condition.
Theorem 3.3
Under Assumptions 1 and 2, if there exists a constant \(\eta >0\) such that the following conditions
hold, then system (2.6) has a unique equilibrium \((y^{*},u^{*})^{T}\). Moreover, there exists a positive constant K such that
hold, where \(K=\max \big \{K_{1},K_{2}\big \}>0\), \(K_{1}\) and \(K_{2}\) are given by (3.10), \(\tau =\max \limits _{1\le j\le n}\{\tau _{j}\}\), \(\tau _{j}=-\log q_{j}\ge 0\).
4 Illustrative Examples
In this section, several examples are given to show the effectiveness of the conditions given in this paper.
Example 4.1
Consider the following competitive neural networks
where \(A=\left( \begin{array}{cc} 3 &{} 0 \\ 0 &{} 4 \\ \end{array} \right) \), \(B=\left( \begin{array}{cc} 1 &{} 0 \\ 0 &{} 1 \\ \end{array} \right) , \) \(C=\left( \begin{array}{cc} 1 &{} 0 \\ 0 &{} 2 \\ \end{array} \right) , \) \(B^{\tau }=\left( \begin{array}{cc} 1 &{} 0 \\ 0 &{} 1 \\ \end{array} \right) , \) \(q=0.5\), \(d_{1}=d_{2}=1\), \(f_{i}(y_{i})=0.4\tanh (y_{i})\), \(i=1,2\), then \(L_{1}=L_{2}=0.4,~\alpha =d_{1}^{2}+d_{2}^{2}=2\).
Let \(s_{i}(t)=\sum \limits _{j=1}^{n}d_{j}m_{ij}(t)\); then system (4.1) can be turned into
Let \(y_{i}(t)=x_{i}(\text {e}^{t}),u_{i}(t)=s_{i}(\text {e}^{t}) \), (4.2) is equivalent to the following
We compute and obtain that
and
Therefore, by Theorems 3.1 and 3.2, system (4.2) has a unique equilibrium and it is globally exponentially stable. By Matlab, we obtain that the equilibrium of (4.2) is \((-0.5257,0.7516,-0.1884,0.2485)^{T}\); the Matlab simulation result is presented in Fig. 1.
Example 4.2
Consider the following competitive neural networks
where \(A=\left( \begin{array}{cc} 4 &{} 0 \\ 0 &{} 5 \\ \end{array} \right) \), \(B=\left( \begin{array}{cc} 2 &{} 0 \\ 0 &{} 1 \\ \end{array} \right) , \) \(C=\left( \begin{array}{cc} 2 &{} 0 \\ 0 &{} 5 \\ \end{array} \right) , \) \(B^{\tau }=\left( \begin{array}{cc} 1 &{} 0 \\ 0 &{} 1 \\ \end{array} \right) , \) \(q_{1}=0.4,\) \(q_{2}=0.6\), \(d_{1}=d_{2}=0.5\), \(f_{i}(x_{i})=0.5(\tanh (0.4x_{i})+\cos (0.4x_{i})),~i=1,2.\)
Let \(s_{i}(t)=\sum \limits _{j=1}^{n}d_{j}m_{ij}(t)\), system (4.4) can be turned into
Let \(y_{i}(t)=x_{i}(\text {e}^{t}),\) \(u_{i}(t)=s_{i}(\text {e}^{t})\), system (4.5) becomes as follows
By computing, \(\tau _{1}=-\log q_{1}=0.9163,\) \(\tau _{2}=-\log q_{2}=0.5108\), \(\tau =\max \{\tau _{1},\tau _{2}\}=0.9163\), \(L_{1}=L_{2}=0.4\), and \(\alpha =d_{1}^{2}+d_{2}^{2}=0.5\). Taking \(\eta =0.2\), we compute and obtain that
and
Then it follows from Theorems 3.1 and 3.3 that system (4.5) has a unique equilibrium and it is globally exponentially stable. Through Matlab, the equilibrium point of system (4.5) is \((1.2473,1.6426,0.3248,0.3422)^{T}\). The Matlab simulation result is presented in Fig. 2.
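The delay constants reported in Example 4.2 follow directly from \(\tau _{j}=-\log q_{j}\); a quick check of the stated values:

```python
import math

# Reproducing the delay computation of Example 4.2:
#   tau_j = -log(q_j),  tau = max_j tau_j.
q1, q2 = 0.4, 0.6
tau1, tau2 = -math.log(q1), -math.log(q2)
tau = max(tau1, tau2)

assert abs(tau1 - 0.9163) < 1e-4   # -ln 0.4 = 0.916290...
assert abs(tau2 - 0.5108) < 1e-4   # -ln 0.6 = 0.510825...
assert abs(tau - 0.9163) < 1e-4
```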
Example 4.3
Consider the following competitive neural networks
where \(\varepsilon =1\), \(A=\left( \begin{array}{cc} 2.2 &{} 0 \\ 0 &{} 2.2 \\ \end{array} \right) \), \(D=\left( \begin{array}{cc} -1 &{} 0.3 \\ 0.3 &{} -1 \\ \end{array} \right) , \) \(D^{\tau }=\left( \begin{array}{cc} -1.2 &{} 0.5 \\ 0.5 &{} -1.2 \\ \end{array} \right) , \) \(B=\left( \begin{array}{cc} -0.1 &{} 0 \\ 0 &{} 0.3 \\ \end{array} \right) , \) the activation is given by \(f_{i}(s)=0.01\sin (s)\) with \(L_{i}=0.01\). \(q=0.5\).
We compute and obtain that
Then it follows from Theorems 3.1 and 3.2 that system (4.7) has a unique equilibrium and it is globally exponentially stable. The Matlab simulation result is presented in Fig. 3. Except for the time delay and the activation function in Example 4.3, the other data are the same as in Example 4.1 of [32]. The activation function in [32] is discontinuous, while in this paper we choose a continuous and bounded one. The time delay in [32] is time-varying and bounded, while in this paper the time delay is a proportional one, which is unbounded and time-varying. Thus, the results obtained in [32] cannot be directly applied to Example 4.3 of this paper. In terms of the time delay terms, the results obtained in this paper are less conservative than the previous ones.
5 Conclusions
In this paper, by a fixed point theorem and by constructing a delay differential inequality, we discuss the global exponential stability of a class of competitive neural networks with multi-proportional delays. We obtain novel delay-independent and delay-dependent sufficient conditions which ensure the existence, uniqueness and global exponential stability of the equilibrium of the system. The method is to construct a delay differential inequality rather than a Lyapunov functional, so the resulting conditions can be easily checked. Different from prior works, the delays here are proportional delays, which are unbounded and time-varying. In terms of the time delay terms, the results obtained in this paper are less conservative than previous results.
References
Meyer-Baese A (1996) Singular perturbation analysis of competitive neural networks with different time scales. Neural Comput 8(8):1731–1742
Cohen MA, Grossberg S (1983) Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans Syst Man Cybern 13(5):815–826
Meyer-Baese A, Thummler V (2008) Local and global stability of an unsupervised competitive neural network. IEEE Trans Neural Netw 19(2):346–351
Lu H, He Z (2005) Global exponential stability of delayed competitive neural networks with different time scales. Neural Netw 18(3):243–250
Meyer-Baese A, Guillerno B, Liliana R (2013) Stochastic stability analysis of competitive neural networks with different time-scales. Neurocomputing 118:115–118
Meyer-Baese A, Pilyugin SS, Chen Y (2003) Global exponential stability of competitive neural networks with different time scales. IEEE Trans Neural Netw 14(3):716–719
Meyer-Baese A, Roberts R, Thummler V (2010) Local uniform stability of competitive neural networks with different time-scales under vanishing perturbations. Neurocomputing 73(4–6):770–775
Shi Y, Zhu P (2014) Synchronization of stochastic competitive neural networks with different time scales and reaction-diffusion terms. Neural Comput 26(9):2005–2024
Lu H, Amari S (2006) Global exponential stability of multi-time-scale competitive neural networks with nonsmooth functions. IEEE Trans Neural Netw 17(10):1152–1164
Zhu P, Shi Y (2014) Synchronization of memristive competitive neural networks with different time scales. Neural Comput Appl 25(5):1163–1168
Liu Y, Yang Y, Liang T, Li L (2014) Existence and global exponential stability of anti-periodic solutions for competitive neural networks with delays in the leakage terms on time scales. Neurocomputing 133:471–482
Gu H, Jiang H, Teng Z (2010) Existence and global exponential stability of equilibrium of competitive neural networks with different time scales and multiple delays. J Frank Inst 347(5):719–731
Nie X, Cao J (2008) Exponential stability of competitive neural networks with time-varying and distributed delays. Int J Sys Control Eng 222(6):583–594
Cui B, Chen J, Lou X (2008) New results on global exponential stability of competitive neural networks with different time scales and time-varying delay. Chin Phys B 17(5):1670–1671
Nie X, Cao J (2009) Multistability of competitive neural networks with time-varying and distributed delays. Nonlinear Anal 10(2):928–940
Nie X, Cao J, Fei S (2013) Multistability and instability of delayed competitive neural networks with nondecreasing piecewise linear activation functions. Neurocomputing 119:281–291
Duan L, Huang L (2014) Global dynamics of equilibrium point for delayed competitive neural networks with different time scales and discontinuous activations. Neurocomputing 123:318–324
Gan Q, Xu R, Kang X (2012) Synchronization of unknown chaotic delayed competitive neural networks with different time scales based on adaptive control and parameter identification. Nonlinear Dyn 67(3):1893–1902
Gan Q (2013) Synchronization of competitive neural networks with different time scales and time-varying delay based on delay partitioning approach. Int J Mach Learn Cybern 4(4):327–333
Gan Q, Hu R, Liang Y (2012) Adaptive synchronization for stochastic competitive neural networks with mixed time-varying delay. Commun Nonlinear Sci Numer Simul 17(9):3708–3715
Yang X, Cao J, Long Y, Wei R (2010) Adaptive lag synchronization for competitive neural networks with mixed delays and uncertain hybrid perturbations. IEEE Trans Neural Netw 21(10):1656–1667
Yang X, Huang C, Cao J (2012) An LMI approach for exponential synchronization of switched stochastic competitive neural networks with mixed delays. Neural Comput Appl 21(8):2033–2047
Zhou L (2013) Dissipativity of a class of cellular neural networks with proportional delays. Nonlinear Dyn 73(3):1895–1903
Zhou L, Chen X, Yang Y (2014) Asymptotic stability of cellular neural networks with multi-proportional delays. Appl Math Comput 229(1):457–466
Zhou L (2013) Delay-dependent exponential stability of cellular neural networks with multi-proportional delays. Neural Process Lett 38(3):347–359
Zhou L (2014) Global asymptotic stability of cellular neural networks with proportional delays. Nonlinear Dyn 77(1):41–47
Zhou L (2014) The global exponential stability of the bidirectional associative memory neural networks with proportional time delays. Acta Electron Sin 42(1):96–101 (in Chinese)
Zhou L (2015) Delay-dependent exponential synchronization of recurrent neural networks with multiple proportional delays. Neural Process Lett 42:619–632
Zhou L (2015) Novel global exponential stability criteria for hybrid BAM neural networks with proportional delays. Neurocomputing 165(5):99–106
Zheng C, Li N, Cao J (2015) Matrix measure based stability criteria for high-order networks with proportional delay. Neurocomputing 149:1149–1154
Hien LV, Son DT (2015) Finite-time stability of a class of non-autonomous neural networks with heterogeneous proportional delays. Appl Math Comput 14:14–23
Duan L, Huang L (2014) Global dynamics of equilibrium point for delayed competitive neural networks with different time scales and discontinuous activations. Neurocomputing 123:318–327
Acknowledgments
The author would like to thank the Associate Editor and the reviewers for their constructive and valuable comments and suggestions. The project is supported by the National Science Foundation of China (No. 61374009).
Zhou, L., Zhao, Z. Exponential stability of a class of competitive neural networks with multi-proportional delays. Neural Process Lett 44, 651–663 (2016). https://doi.org/10.1007/s11063-015-9486-6