Abstract
The research in Chaps. 4–10 is focused on the qualitative analysis of complex neural networks with delays. It is well known that the qualitative analysis of nonlinear dynamical systems is the foundation of controlling the systems. Therefore, in this chapter controller design problem will be studied for a class of stochastic Cohen-Grossberg neural networks with mode-dependent mixed time delays and Markovian switching, in which the neural dynamical networks will be stabilized. The contents in this chapter are from the research result in Zheng et al., (IEEE Trans. Neural Netw. Learn. Syst. 24(5):800–811, 2013, [1]).
The research in Chaps. 4–10 is focused on the qualitative analysis of complex neural networks with delays. It is well known that the qualitative analysis of nonlinear dynamical systems is the foundation of controlling such systems. Therefore, in this chapter the controller design problem is studied for a class of stochastic Cohen-Grossberg neural networks with mode-dependent mixed time delays and Markovian switching, so that the neural dynamical networks are stabilized. The contents of this chapter are based on the results in [1].
11.1 Introduction
In recent decades, neural networks have been successfully applied in various fields such as optimization, image processing, and associative memory design. In such applications, it is important to know the stability properties of the designed neural network; these properties include asymptotic stability and exponential stability. However, time delays inevitably exist in neural networks for various reasons [2]. The existence of time delays may lead to complex dynamic behaviors such as oscillation, divergence, chaos, instability, or other poor performance of the neural networks. Since neural networks usually have a spatial extent, there is a distribution of propagation delays over a period of time. In these circumstances, the signal propagation is not instantaneous and cannot be modeled with discrete-time delays [3]. A more appropriate way is to incorporate both discrete and continuously distributed time delays in the neural network model [2, 4]. Stability analysis for neural networks with delays has attracted increasing interest in recent years; see, for example, [5–21] and the references therein.
On the other hand, the stabilization issue has been an important focus of research in the control field, and several feedback stabilizing control design approaches have been proposed (see [7, 22–25]). Some interesting results [6, 26–35] on the stabilization of a wide range of different types of neural networks have been reported in the literature. For a class of discrete-time dynamic neural networks, reference [29] proposes two methods, namely the gradient projection and the minimum distance projection, to investigate stabilization. For a class of dynamic neural network systems with unknown nonlinearities, a global robust stabilizing controller is developed in [6] via Lyapunov stability and inverse optimality. For a class of linearly coupled stochastic neural networks, some results are derived in [31] on the design of the minimum number of controllers for pinning stabilization, which are expressed in terms of strict linear matrix inequalities (LMIs). For a class of neutral neural networks with varying delays, a novel criterion is obtained in [28] for global stabilization using the Razumikhin method. For a class of so-called standard neural network models with time delays, a few stabilization criteria are presented in [30] based on the Lyapunov–Krasovskii stability theory and the LMI approach. For a class of impulsive high-order Hopfield-type neural networks with time-varying delays, some stabilization criteria are reported in [26] by employing the Lyapunov–Razumikhin technique. Very recently, for a class of neural networks with various activation functions and time-varying continuously distributed delays, LMI-based delay-dependent conditions have been obtained in [27] for global exponential stabilization. Despite good progress on the stability analysis of delayed neural networks with various activation functions [36–38], the stabilization issue has not been fully explored in the existing studies.
Although the stabilization problem for some kinds of neural networks with or without time delays has been investigated by some authors, no results have been reported on the stabilization of stochastic Cohen-Grossberg neural networks with both Markovian jumping parameters and mixed mode-dependent time delays. As is well known, mode-dependent time delays are of practical significance, since the signal may switch between different modes and also propagate in a distributed way during a certain time period in the presence of a number of parallel pathways [24]. The purpose of this chapter is to deal with the control problem for a class of stochastic neural networks with mode-dependent delays [1]. By introducing a new Lyapunov–Krasovskii functional that accounts for the mode-dependent mixed delays, stochastic analysis is conducted in order to derive delay-dependent criteria for the exponential stabilization problem. The feedback stabilizing controller is designed to satisfy some exponential stability constraints on the closed-loop poles. The stabilization criteria are obtained in terms of LMIs, and hence the gain control matrix is easily determined with MATLAB's LMI Control Toolbox. Three numerical examples are carried out to demonstrate the feasibility of our delay-dependent stabilization criteria.
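The workflow of turning an LMI feasibility result into a verified stabilizing gain can be illustrated on a much simpler, hypothetical linear system; this is only a sketch of the machinery, not the mode-dependent LMIs (11.4)–(11.13) of this chapter. The matrices A, B and the gain K below are placeholders, and the Lyapunov inequality is checked numerically with SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical illustration of the LMI machinery (a plain linear system,
# NOT the chapter's mode-dependent LMIs): verify that a gain K makes
# Ac = A + B K satisfy the Lyapunov LMI  Ac^T P + P Ac < 0  for some P > 0.
A = np.array([[0.0, 1.0], [2.0, -1.0]])   # hypothetical open-loop matrix
B = np.array([[0.0], [1.0]])
K = np.array([[-6.0, -2.0]])              # hypothetical stabilizing gain

Ac = A + B @ K
# Solve Ac^T P + P Ac = -I for P; the solution is P > 0 iff Ac is Hurwitz.
P = solve_continuous_lyapunov(Ac.T, -np.eye(2))
assert np.all(np.linalg.eigvalsh(P) > 0)                    # P positive definite
lmi = Ac.T @ P + P @ Ac
assert np.all(np.linalg.eigvalsh((lmi + lmi.T) / 2) < 0)    # LMI holds
```

In the chapter itself this verification step is unnecessary because the LMI solver returns a certificate directly; the sketch only shows what feasibility of a Lyapunov-type LMI certifies.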
Throughout this chapter, the shorthand \(\mathrm{col}\{M_1,M_2,\ldots ,M_l\}\) denotes a column matrix with the matrices \(M_1,M_2,\ldots ,M_l.\) \(\left( \varOmega ,\mathcal {F},\{\mathcal {F}_t\}_{t\ge 0},\mathcal {P}\right) \) denotes a complete probability space with a filtration \(\{\mathcal {F}_t\}_{t\ge 0}\) satisfying the usual conditions, i.e., the filtration is right continuous and contains all \(\mathcal {P}\)-null sets. \(\mathcal {L}^p_{\mathcal {F}_0}([-h,0],\mathbb {R}^n)\) denotes the family of all \(\mathcal {F}_0\)-measurable \(\mathbb {C}\left( [-h,0];\mathbb {R}^n\right) \)-valued random variables \(\xi =\{\xi (\theta ):-h\le \theta \le 0\}\) such that \(\sup _{-h\le \theta \le 0}\mathbb {E}|\xi (\theta )|^p<\infty ,\) where \(\mathbb {E}\{\cdot \}\) stands for the mathematical expectation operator with respect to the given probability measure \(\mathcal {P}.\)
11.2 Problem Formulation and Preliminaries
We consider the following stochastic neural network with both feedback control law and Markovian jumping parameters described by
where \(x(t)=[x_1(t),\ldots ,x_n(t)]^T\) denotes the neuron state at time t, \(u(t)\in L_2([0,s),\mathbb {R}^m), \forall s> 0,\) is the control input vector of the neural network, \(\alpha (x(t),\eta _t)=\mathrm{diag}\{\alpha _1(x_1(t),\eta _t),\ldots ,\alpha _n(x_n(t),\eta _t)\}\) denotes the amplification function, \(\beta (x(t),\eta _t)=\mathrm{diag}\{\beta _1(x_1(t),\eta _t),\ldots ,\beta _n(x_n(t),\eta _t)\}\) denotes the appropriately behaved function such that the solution of the model given in (11.1) remains bounded, and \(f(x(t))=[f_1(x_1(t)),\ldots ,f_n(x_n(t))]^T,\) \(g(x(s))=[g_1(x_1(s)),\ldots ,g_n(x_n(s))]^T\) denote the activation functions, with \(f(x(t-\tau (t,\eta _t)))= \left[ f_1(x_1(t-\tau (t,\eta _t))),\ldots ,f_n (x_n(t-\tau (t,\eta _t)))\right] ^T.\) \(0\le \tau (t,\eta _t)\le \bar{\tau }(\eta _t)\le \bar{\tau }\) and \(0\le \upsilon (t,\eta _t)\le \bar{\upsilon }(\eta _t)\le \bar{\upsilon }\) are bounded and unknown delays. The matrices \(A(\eta _t),B(\eta _t),C(\eta _t)\in \mathbb {R}^{n\times n},D(\eta _t)\in \mathbb {R}^{n\times m}\) are the connection weight matrix, the discretely delayed connection weight matrix, the distributively delayed connection weight matrix and the control input weight matrix, respectively. \(E_j(\eta _t)\ (j=1,2,\ldots ,5)\) are known real constant matrices with appropriate dimensions, \(\omega (t)\) is a one-dimensional Brownian motion defined on the complete probability space \(\left( \varOmega ,\mathcal {F},\{\mathcal {F}_t\}_{t\ge 0},\mathcal {P}\right) \) with \(\mathbb {E}\{\mathrm{d}\omega (t)\}=0,\ \mathbb {E}\{[\mathrm{d}\omega (t)]^2\}=\mathrm{d}t,\) and \(\{\eta _t=\eta (t),t\ge 0\}\) is a homogeneous, finite-state Markovian process with right continuous trajectories, taking values in the finite set \(\wp =\{1,2,\ldots ,N\}\) on the given probability space \(\left( \varOmega ,\mathcal {F},\{\mathcal {F}_t\}_{t\ge 0},\mathcal {P}\right) \) with initial mode \(\eta _0\).
It is assumed that the initial condition of neural network (11.1) has the form \(x(t)=\varphi (t)\) for \(t\in [-\varpi ,0],\) where \(\varphi (t)=\left[ \varphi _1(t),\ldots ,\varphi _n(t)\right] ^T\), each function \(\varphi _j(t)\ (j=1,2,\ldots ,n)\) is continuous, and \(\varpi =\max \{\bar{\tau },\bar{\upsilon }\}.\) Let \(\aleph =[\pi _{\textit{ij}}]_{i,j\in \wp }\) denote the transition rate matrix, with transition probabilities given by
where \(\delta >0,\ \lim _{\delta \rightarrow 0^+}{\frac{o(\delta )}{\delta }}=0\) and \(\pi _{\textit{ij}}\) is the transition rate from mode i to mode j satisfying \(\pi _{\textit{ij}}\ge 0\) for \(i\ne j\) with \(\pi _{\textit{ii}}=-\sum ^N_{j=1,i\ne j} \pi _{\textit{ij}},i,j\in \wp \).
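A switching signal with the properties above can be sampled from a generator (transition-rate) matrix by drawing exponential sojourn times and jumping according to the off-diagonal rates; the two-mode rates used in this sketch are hypothetical, chosen only for illustration:

```python
import numpy as np

# Sample a right-continuous Markov switching signal eta_t from a generator
# matrix Pi with pi_ii = -sum_{j != i} pi_ij. Rates are hypothetical.
Pi = np.array([[-3.0, 3.0],
               [4.0, -4.0]])

def simulate_modes(Pi, T, mode=0, seed=0):
    rng = np.random.default_rng(seed)
    t, times, modes = 0.0, [0.0], [mode]
    while True:
        rate = -Pi[mode, mode]
        t += rng.exponential(1.0 / rate)        # sojourn in mode i ~ Exp(rate)
        if t >= T:
            return times, modes
        probs = Pi[mode].clip(min=0.0) / rate   # jump i -> j w.p. pi_ij / rate
        mode = rng.choice(len(Pi), p=probs)
        times.append(t); modes.append(mode)

times, modes = simulate_modes(Pi, T=10.0)
```

Each entry of `times` is a jump instant and `modes` records the mode entered there, matching the right-continuity of \(\eta_t\).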
For convenience, each possible value of \(\eta _t\) is denoted by \(i(i\in \wp )\) in the sequel. Then we have
We shall need the following definitions, assumptions, and lemmas.
Definition 11.1
([24, 27]) Given \(r>0\) and any initial condition \(\varphi \in \mathcal {L}^2_{\mathcal {F}_0}([-\varpi ,0],\mathbb {R}^n)\), with \(u(t,\eta _t)=0,\) the zero solution of system (11.1) is said to be r-exponentially stable in the mean square if there exists a positive scalar M such that every solution \(x(t,\varphi )\) of the system satisfies the following inequality:
Definition 11.2
([24, 27]) Given \(r>0\). The system (11.1) is said to be r-exponentially stabilizable in the mean square, if there is a feedback control law \(u(t,\eta _t)=\overline{U}(\eta _t)x(t),\) such that the following closed-loop system
is r-exponentially stable.
Assumption 11.3
([8]) Each \(\alpha _{\textit{ji}}(\cdot )\) is a continuous function and satisfies \(\bar{\alpha }_{\textit{ji}}\ge \alpha _{\textit{ji}}(\cdot )\ge \underline{\alpha }_{\textit{ji}}>0,\ j=1,2,\ldots ,n,i=1,2,\ldots ,N.\)
Here, we denote \(\underline{\alpha }_i=\mathrm{min}_{1\le j\le n}\{\underline{\alpha }_{\textit{ji}}\},\) \(\bar{\alpha }_i=\mathrm{max}_{1\le j\le n}\{\bar{\alpha }_{\textit{ji}}\}\) for simplicity.
Assumption 11.4
Each function \(\beta _{\textit{ji}}(\cdot )\) is locally Lipschitz continuous, \(\beta _{\textit{ji}}(0)=0\) and there exist constants \(\bar{\beta }_{\textit{ji}}>\underline{\beta }_{\textit{ji}}\ge 0\) such that
for any \(s\in \mathbb {R},\ j=1,2,\ldots ,n,i=1,2,\ldots ,N.\)
For simplicity, we denote \({\varPi }_i=\mathrm{diag}\{\bar{\beta }_{1i},\ldots ,\bar{\beta }_{\textit{ni}}\},\) \(\varGamma _i=\mathrm{diag}\{\underline{\beta }_{1i},\ldots ,\underline{\beta }_{\textit{ni}}\}.\)
Assumption 11.5
For \(j=1,2,\ldots ,n,\) \(f_j(0)=g_j(0)=0.\) Furthermore, there exist constants \(\varrho ^-_{j}, \varrho ^+_{j}, \psi ^-_{j}, \psi ^+_{j}\) such that \(\varrho ^-_{j}<\varrho ^+_{j}, \psi ^-_{j}<\psi ^+_{j}\) and
for any \(s\in \mathbb {R},\ j=1,2,\ldots ,n.\)
Remark 11.6
As pointed out in [24], the constants \(\varrho ^-_{j},\varrho ^+_{j}, \psi ^-_{j},\psi ^+_{j}\) in Assumption 11.5 are allowed to be positive, negative, or zero. Thus, the previously used Lipschitz conditions are just special cases of Assumption 11.5, and the activation functions here can be more general than those earlier forms.
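As a concrete illustration, the common activation \(f(s)=\tanh(s)\) satisfies a sector condition of this type with \(\varrho^-=0\) and \(\varrho^+=1\). The check below uses the form \((f(s)-\varrho^- s)(f(s)-\varrho^+ s)\le 0\), one standard way such conditions are written; this is a numerical sanity check, not part of the chapter's proof:

```python
import numpy as np

# Numerical check that f(s) = tanh(s) satisfies a sector condition of the
# Assumption 11.5 type with rho^- = 0 and rho^+ = 1, i.e.
# (f(s) - rho_minus*s) * (f(s) - rho_plus*s) <= 0 for all s.
rho_minus, rho_plus = 0.0, 1.0
s = np.linspace(-10.0, 10.0, 100001)
f = np.tanh(s)
sector = (f - rho_minus * s) * (f - rho_plus * s)
assert np.all(sector <= 1e-12)   # tanh lies in the sector [0, 1]
```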
For notational simplicity, we denote
Lemma 11.7
(Jensen integral inequality, see [39]) For any constant matrix \(M > 0\), any scalars a and b with \(a < b\), and a vector function \(\chi (t):[a,b]\rightarrow \mathbb {R}^n\) such that the integrals concerned are well defined, the following inequality holds:
where \(\Big <A,B\Big >=A^TB\) denotes the inner product.
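The inequality of Lemma 11.7 can be checked numerically on an arbitrary test function. The following sketch uses Riemann sums, for which the discrete analogue of the inequality holds exactly; the function \(\chi\) and the matrix M are chosen only for illustration:

```python
import numpy as np

# Numerical check of the Jensen integral inequality (Lemma 11.7):
#   (int_a^b chi dt)^T M (int_a^b chi dt) <= (b - a) int_a^b chi^T M chi dt
# for a symmetric positive definite M.
a, b, n = 0.0, 2.0, 200000
t = np.linspace(a, b, n, endpoint=False)
dt = (b - a) / n
chi = np.vstack([np.sin(3 * t), np.exp(-t)])     # test function chi(t) in R^2
M = np.array([[2.0, 0.5], [0.5, 1.0]])           # symmetric, M > 0

v = chi.sum(axis=1) * dt                         # Riemann sum of int chi dt
lhs = v @ M @ v
rhs = (b - a) * (np.einsum('it,ij,jt->t', chi, M, chi).sum() * dt)
assert lhs <= rhs
```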
Lemma 11.8
Assume that \(\nu ,\mu ,\underline{\vartheta },\bar{\vartheta }\) are real scalars such that \(\nu \le 1,\nu +\mu \le 4,\) and \(\underline{\vartheta }<\bar{\vartheta }.\) Let \(\vartheta :\mathbb {R}\rightarrow (\underline{\vartheta },\bar{\vartheta })\) be a real function. Then for any nonnegative scalars a, b, the following inequality holds
Proof
Without loss of generality, we assume that \(\nu \le \mu .\) First consider the case that \( a\le b.\) It is easy to see that \(\max \{-\nu a-\mu b,-\mu a-\nu b\}=-\mu a-\nu b.\) Therefore, we have
That is
Similarly, we can also conclude that the inequality (11.2) holds for \( a>b\). Now, the proof of Lemma 11.8 is completed.
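The key step of the proof, the evaluation of the maximum for \(a\le b\), can be sanity-checked numerically over random draws satisfying the lemma's hypotheses:

```python
import numpy as np

# Sanity check of the key step in the proof of Lemma 11.8: for
# 0 <= a <= b and nu <= mu (nu <= 1), the maximum
# max{-nu*a - mu*b, -mu*a - nu*b} is attained by -mu*a - nu*b.
rng = np.random.default_rng(1)
for _ in range(10000):
    a, b = np.sort(rng.uniform(0.0, 5.0, 2))      # 0 <= a <= b
    nu, mu = np.sort(rng.uniform(-1.0, 1.0, 2))   # nu <= mu <= 1
    assert -mu * a - nu * b >= -nu * a - mu * b - 1e-12
```

This matches the algebraic argument: \((-\mu a-\nu b)-(-\nu a-\mu b)=(\mu-\nu)(b-a)\ge 0\).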
Remark 11.9
If we set \(\nu =1,\mu =3,\) then Lemma 11.8 reduces to Lemma 3 of [40]. Thus, based on Lemma 11.8, we can obtain exponential stabilization conditions with less conservativeness.
11.3 Stabilization Result
As is well known, Itô's formula plays an important role in the stability analysis of stochastic systems, and we cite some related results here [41]. Consider a general stochastic system
on \(t\ge t_0\) with initial value \(x(t_0)=x_0\in \mathbb {R}^n,\) where \(f:\mathbb {R}^n\times \mathbb {R}^+\times \wp \rightarrow \mathbb {R}^n\) and \(g:\mathbb {R}^n\times \mathbb {R}^+\times \wp \rightarrow \mathbb {R}^{n\times m}.\) Let \(\mathbb {C}^{2,1}\big (\mathbb {R}^n\times \mathbb {R}^+,\mathbb {R}^+\big )\) denote the family of all nonnegative functions V(x, t, i) on \(\mathbb {R}^n\times \mathbb {R}^+\times \wp \) which are continuously differentiable in t and twice differentiable in x. Let \(\pounds \) be the weak infinitesimal generator of the random process \(\{x(t),\eta (t)\}_{t\ge 0}\) along the system (11.3) (see [24, 42, 43]), i.e.,
then, by the generalized Itô formula, one can get
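A switched SDE of the form (11.3) can be simulated with the Euler–Maruyama scheme; the following is a hypothetical two-mode scalar example whose drift and diffusion coefficients are placeholders, not taken from the chapter:

```python
import numpy as np

# Euler-Maruyama sketch of the general switched SDE (11.3),
#   dx = f(x, t, eta_t) dt + g(x, t, eta_t) dw(t),
# with a hypothetical 2-mode scalar example f(x,t,i) = -a_i x, g(x,t,i) = c_i x.
rng = np.random.default_rng(0)
a = np.array([1.0, 2.0])                    # hypothetical mode-wise drift rates
c = np.array([0.3, 0.1])                    # hypothetical mode-wise noise gains
Pi = np.array([[-2.0, 2.0], [1.0, -1.0]])   # hypothetical generator matrix

dt, T = 1e-3, 5.0
x, mode = 1.0, 0
for _ in range(int(T / dt)):
    # mode switch: P(eta jumps away from i during dt) ~ -pi_ii * dt
    if rng.random() < -Pi[mode, mode] * dt:
        probs = Pi[mode].clip(min=0.0) / -Pi[mode, mode]
        mode = rng.choice(2, p=probs)
    dw = rng.normal(0.0, np.sqrt(dt))       # Brownian increment, Var = dt
    x += -a[mode] * x * dt + c[mode] * x * dw
```

With \(2a_i > c_i^2\) in both modes, the sample path decays toward zero, consistent with mean square exponential stability of this toy system.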
Theorem 11.10
Given \(r>0\). For any given scalars \(\bar{\tau }_i>0,\) \(\bar{\upsilon }_i>0,\ \tau '_i,\ \upsilon '_i<1,\) consider the system (11.1) satisfying Assumptions 11.3–11.5 with \(\dot{\tau }_i(t)\le \tau '_i,\dot{\upsilon }_i(t)\le \upsilon '_i.\) The system (11.1) is globally r-exponentially stabilizable if there exist symmetric positive definite matrices \(P_i\in \mathbb {R}^{n\times n},\) symmetric nonnegative definite matrices \(Q_{\textit{ji}},R_i,M_i,S_{l},Z_i\ (j=1,\ldots ,4,\ l=1,\ldots ,9),\) positive diagonal matrices \(G_i,U_i,T_i,W_i,H,K,\) and real matrices \(X_i\) satisfying the following inequalities \((i=1,\ldots ,N)\)
where
with
and
and \(\bar{\pi }_{\textit{ij}}=\max \{\pi _{\textit{ij}},0\},\ \bar{M}_i=M_iX_i,\)
Furthermore, the feedback stabilizing control law is defined by \(u_i(t)=D^T_iX_ix(t).\)
Proof
From Assumption 11.3, we know that the amplification function \(\alpha _i(x(t))\) is nonlinear and satisfies \(\alpha _i(x(t))\alpha _i(x(t))\le \bar{\alpha }^2_iI.\) Following the approach of [15], pre- and postmultiplying the left-hand sides of inequalities (11.10) and (11.11) by \(\mathrm{diag}\{I\ \ I\ \ I\ \ I\ \ I\ \ I\ \ \alpha _i(x(t))\ \ I\ \ I\ \ I\ \ I\ \ I\}\), respectively, it follows that
where
with
For any \(j=1,2,\ldots ,n\), from Assumption 11.5 we obtain that
Therefore, the following matrix inequalities hold for any positive diagonal matrices \(U_i,T_i,W_i\),
Denoting
then system (11.1) can be rewritten as
Define the following Lyapunov–Krasovskii functional:
where
with \(P_i=\mathrm{diag}\{p_{1i},p_{2i},\ldots ,p_{\textit{ni}}\},\ H=\mathrm{diag}\{h_{1},h_{2},\ldots ,h_{n}\},\) \(K=\mathrm{diag}\{k_{1},k_{2},\ldots ,k_{n}\}.\)
For any \(\eta (t)=i\in \wp ,\) it can be shown that
where \(\alpha ^{-1}_i(x(t))=\mathrm{diag}\left\{ \alpha ^{-1}_{1i}(x_1(t)),\ \ldots ,\ \alpha ^{-1}_{\textit{ni}}(x_n(t))\right\} .\)
According to the definition of \(\rho _{\textit{il}}\) and Assumptions 11.3–11.5 we have that
Using the well-known It\(\hat{\mathrm{o}}\)’s differential formula [41, 44], we obtain
Based on Assumption 11.4, we obtain that
From Lemma 11.7, it follows that
For simplicity, we denote
When \(0<\tau _i(t)<\bar{\tau }_i,\) from Lemma 11.8 with \(\nu =1,\mu =3,\) one can obtain that
Obviously, from Lemma 11.7, inequality (11.36) holds when \(\tau _i(t)=0\) or \(\tau _i(t)=\bar{\tau }_i.\) Therefore, inequality (11.36) holds for any t with \(0\le \tau _i(t)\le \bar{\tau }_i.\)
On the other hand, by the Leibniz–Newton formula, we get
It is easy to see that the following equality holds for any positive diagonal matrices \(G_i\) with compatible dimensions
Considering that the feedback stabilizing control law is defined by \(u_i(t)=D^T_iX_ix(t),\) if we denote \(y_i(t)=X_ix(t),\) then for any symmetric nonnegative definite matrices \(M_i,\) we have
Noticing that the following equality holds
From [14], we have
By (11.4)–(11.9) and (11.14)–(11.43), we obtain
where
Next, we prove that the closed-loop system is exponentially stable in the mean square.
For convenience, we define
From (11.12) and (11.13) and the well-known Schur complement, it can easily be seen that \(\lambda _M>0.\) Furthermore, from (11.44) we have that
Similar to [45], from (11.18) and the definition of \(\vartheta _i(t)\), there exist positive scalars \(\varepsilon _1\) and \(\varepsilon _2\) such that
To prove the mean square exponential stability, we modify the Lyapunov function candidate (11.18) as \(\bar{V}(x(t),t,i)=e^{\textit{rt}}V(x(t),t,i),\) where r is chosen such that \(r(\varepsilon _1+\bar{\tau }\varepsilon _2e^{r \bar{\tau }})\le \lambda _M\).
Then, we have
Furthermore, by Dynkin's formula [14], for any \(\eta (t)=i\in \wp ,t>0,\) we obtain that
By changing the integration sequence, we get
Therefore we have
or
where \(\epsilon =\lambda ^{-1}_p(\varepsilon _1+\bar{\tau }\varepsilon _2+r\bar{\tau }^2\varepsilon _2e^{r\bar{\tau }}).\)
Consequently, the closed-loop system is exponentially stable in the mean square, and so the system (11.1) is r-exponentially stabilizable in the mean square. This completes the proof.
Remark 11.11
The Lyapunov functional (11.18) of this chapter fully uses the information about the amplification function and the mode-dependent time-varying delays, whereas [15, 20] only use the information about the delays when constructing their Lyapunov functionals. Therefore, the Lyapunov functional here is more general than those in [15, 20], and the stability criteria in this chapter may be less conservative.
Remark 11.12
When the time-varying delay \(\tau _i(t)\) is not differentiable or its derivative bound is unknown, the result in Theorem 11.10 is no longer applicable. In this case, by setting \(Q_{1i}=Q_{2i}=0\) in Theorem 11.10, one can obtain a result on the mean square exponential stability of system (11.1).
If there are no stochastic disturbances, that is \(E_j(\eta _t)=0\ (j=1,\ldots ,5)\), then the neural network (11.1) is simplified to
For system (11.46), by setting \(Z_i=S_6=S_8=0\) in Theorem 11.10 and deleting \(\int ^t_{t-\tau _i(t)}\sigma _i(s)\mathrm{d}\omega (s),\ \int ^{t-\tau _i(t)}_{t-\bar{\tau }_i}\sigma _i(s)\mathrm{d}\omega (s)\) from \(\zeta _i(t)\), we can get the following result of the mean square exponential stability.
Corollary 11.13
Given \(r>0\). For any given scalars \(\bar{\tau }_i>0,\) \(\bar{\upsilon }_i>0,\ \tau '_i,\ \upsilon '_i<1,\) consider the system (11.46) satisfying Assumptions 11.3–11.5 with \(\dot{\tau }_i(t)\le \tau '_i,\dot{\upsilon }_i(t)\le \upsilon '_i.\) The system (11.46) is globally r-exponentially stabilizable if there exist symmetric positive definite matrices \(P_i\in \mathbb {R}^{n\times n},\) symmetric nonnegative definite matrices \(Q_{\textit{ji}},R_i,M_i,S_{l}\ (j=1,\ldots ,4,\ l=1,\ldots ,5,7,9),\) positive diagonal matrices \(G_i,U_i,T_i,W_i,H,K,\) and real matrices \(X_i\) such that (11.4), (11.5), (11.7), (11.9) and the following inequalities hold,
where
\(i=1,\ldots ,N,\) and other parameters are defined in Theorem 11.10. Furthermore, the feedback stabilizing control law is defined by \(u_i(t)=D^T_iX_ix(t).\)
11.4 Illustrative Examples
In this section, we provide three numerical examples to demonstrate the feasibility of our delay-dependent stabilization criteria.
Example 11.14
Consider system (11.1) with \(N=2,\)
and
For this system without an external controller, Fig. 11.1a shows the time response of \(x_1(t)\) and \(x_2(t)\).
However, if we set
it is easy to see that Assumptions 11.3–11.5 are satisfied with \(\underline{\alpha }_i=0.4,\bar{\alpha }_i=1.2,\varPi _i=8I,\varGamma _i=7I,\) \(\bar{\varSigma }=I,\varSigma =F_1=F_3=0,F_2=F_4=0.5I,\) and \(\bar{\tau }=\bar{\tau }_i=0.4,\bar{\upsilon }=\bar{\upsilon }_i=0.6,\ i=1,2.\) Using the MATLAB LMI Toolbox, the LMIs (11.4)–(11.11) are found to be feasible, and the feedback control is
The simulation of the solution is given in Fig. 11.1b for \(t\in [-0.65,200]\). It is clear that both \(x_1(t)\) and \(x_2(t)\) converge exponentially to zero.
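Since the example's weight matrices and LMI-derived gain appear in the chapter's displayed equations, the following Euler discretization is only a hypothetical sketch of how such a closed-loop simulation can be reproduced. All matrices and the gain K are placeholders (only the amplification bounds 0.4 and 1.2 and the input matrix \([4\ \ 0]^T\) echo values stated in the examples):

```python
import numpy as np

# Sketch of simulating a controlled 2-neuron Cohen-Grossberg network of the
# form (11.1) (single mode, no diffusion term, Euler discretization).
# Matrices and the gain K are hypothetical; the real values of Example 11.14
# come from the chapter's LMI solution.
alpha = lambda x: 0.8 + 0.4 * np.cos(x)    # amplification within [0.4, 1.2]
beta = lambda x: 7.5 * x                   # behaved function, slope in [7, 8]
A = np.array([[1.0, -0.5], [0.3, 0.8]])    # hypothetical weight matrices
B = np.array([[0.4, 0.2], [-0.3, 0.5]])
D = np.array([[4.0], [0.0]])
K = np.array([[-3.0, 0.5]])                # hypothetical gain, u = K x

dt, tau, T = 1e-3, 0.4, 20.0
d = int(tau / dt)
hist = [np.array([1.0, -0.8])] * (d + 1)   # constant initial history
for _ in range(int(T / dt)):
    x, x_tau = hist[-1], hist[-1 - d]
    u = K @ x
    dx = -alpha(x) * (beta(x) - A @ np.tanh(x) - B @ np.tanh(x_tau) - D @ u)
    hist.append(x + dt * dx)
final = hist[-1]
```

With these placeholder values the controlled state decays to the origin, mirroring the qualitative behavior reported in Fig. 11.1b.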
Example 11.15
Consider system (11.46) with \(N=2,\)
and other parameters are defined in Example 11.14.
For this system without an external controller, Fig. 11.2a shows the time response of \(x_1(t)\) and \(x_2(t)\).
However, if we set \(D_1=D_2=[\ 4\ \ 0\ ]^T,\) it is easy to see that Assumptions 11.3–11.5 are satisfied. Using the MATLAB LMI Toolbox, the LMIs (11.4), (11.5), (11.7), (11.9), (11.47) and (11.48) are found to be feasible, and the feedback control is
The simulation of the solution is given in Fig. 11.2b for \(t\in [-0.65,200]\). It is clear that both \(x_1(t)\) and \(x_2(t)\) converge exponentially to zero.
Example 11.16
Consider system (11.46) with \(N=1,\)
and
For this system, Assumptions 11.3–11.5 are satisfied with \(\underline{\alpha }_i=\bar{\alpha }_i=1,\varPi _i=\varGamma _i=8I,\) \(\bar{\varSigma }=I,\varSigma =F_1=F_3=0,F_2=F_4=0.5I,\) and \(\bar{\tau }=\bar{\tau }_1=8.5,\bar{\upsilon }=\bar{\upsilon }_1=2.5.\) It is easy to verify that Theorem 1 of [27] admits no feasible solution. However, using the MATLAB LMI Toolbox, the LMIs (11.4), (11.5), (11.7), (11.9), (11.47) and (11.48) are found to be feasible with the following matrices:
and accordingly the feedback control is
Based on Example 11.16, it is easy to see that the obtained results are better than those in [27]. Hence, the proposed method is an improvement over the existing ones.
11.5 Summary
In this chapter, the problem of designing a feedback control law to exponentially stabilize a class of stochastic Cohen-Grossberg neural networks with both Markovian jumping parameters and mixed mode-dependent time delays has been studied. The mixed time delays consist of both discrete and distributed delays. Using a new Lyapunov–Krasovskii functional that accounts for the mode-dependent mixed delays, a new delay-dependent condition for the global exponential stabilization has been established in terms of linear matrix inequalities. Upon the feasibility of the LMI, all the control parameters can be easily computed and the design of a stabilizing controller can be accomplished.
References
C. Zheng, Q. Shan, H. Zhang, Z. Wang, On stabilization of stochastic Cohen-Grossberg neural networks with mode-dependent mixed time-delays and Markovian switching. IEEE Trans. Neural Netw. Learn. Syst. 24(5), 800–811 (2013)
Q. Song, Z. Wang, Neural networks with discrete and distributed time-varying delays: a general stability analysis. Chaos Solitons Fractals 37, 1538–1547 (2008)
X.F. Liao, K. Wong, C.G. Li, Global exponential stability for a class of generalized neural networks with distributed delays. Nonlinear Anal. Real World Appl. 5, 527–547 (2004)
C.H. Lien, L.Y. Chung, Global asymptotic stability for cellular neural networks with discrete and distributed time-varying delays. Chaos Solitons Fractals 34, 1213–1219 (2007)
Z. Liu, H. Zhang, Q. Zhang, Novel stability analysis for recurrent neural networks with multiple delays via line integral-type L-K functional. IEEE Trans. Neural Netw. 21(11), 1710–1718 (2010)
Z. Liu, S.C. Shih, Q. Wang, Global robust stabilizing control for a dynamic neural network system. IEEE Trans. Syst. Man Cybern. A 39(2), 426–436 (2009)
F.O. Souza, L.A. Mozelli, R.M. Palhares, On stability and stabilization of T-S Fuzzy time-delayed systems. IEEE Trans. Fuzzy Syst. 17(6), 1450–1455 (2009)
H. Zhang, Z. Wang, D. Liu, Robust stability analysis for interval Cohen-Grossberg neural networks with unknown time-varying delays. IEEE Trans. Neural Netw. 19(11), 1942–1955 (2008)
H. Zhang, Z. Liu, G.B. Huang, Z. Wang, Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay. IEEE Trans. Neural Netw. 21(1), 91–106 (2010)
H. Zhang, T. Ma, G.B. Huang, Z. Wang, Robust global exponential synchronization of uncertain chaotic delayed neural networks via dual-stage impulsive control. IEEE Trans. Syst. Man Cyber. B Cyber. 40(3), 831–844 (2010)
H. Zhang, Z. Wang, Global asymptotic stability of delayed cellular neural networks. IEEE Trans. Neural Netw. 18(3), 947–950 (2007)
H. Zhang, Z. Wang, D. Liu, Robust exponential stability of cellular neural networks with multiple time varying delays. IEEE Trans. Circuits Syst. II 54(8), 730–734 (2007)
H. Zhang, Z. Wang, D. Liu, Global asymptotic stability of recurrent neural networks with multiple time-varying delays. IEEE Trans. Neural Netw. 19(5), 855–873 (2008)
J. Tian, Y. Li, J. Zhao, S. Zhong, Delay-dependent stochastic stability criteria for Markovian jumping neural networks with mode-dependent time-varying delays and partially known transition rates. Appl. Math. Comput. 218, 5769–5781 (2012)
H. Zhang, Y. Wang, Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed time delays. IEEE Trans. Neural Netw. 19(2), 366–370 (2008)
Z.G. Wu, J.H. Park, H. Su, J. Chu, Stochastic stability analysis of piecewise homogeneous Markovian jump neural networks with mixed time-delays. J. Frankl. Inst. 349, 2136–2150 (2012)
S.M. Lee, O.M. Kwon, J.H. Park, A novel delay-dependent criterion for delayed neural networks of neutral type. Phys. Lett. A 374, 1843–1848 (2010)
O.M. Kwon, S.M. Lee, J.H. Park, Improved delay-dependent exponential stability for uncertain stochastic neural networks with time-varying delays. Phys. Lett. A 374, 1232–1241 (2010)
R. Rakkiyappan, P. Balasubramaniam, Dynamic analysis of Markovian jumping impulsive stochastic Cohen-Grossberg neural networks with discrete interval and distributed time-varying delays. Nonlinear Anal.: Hybrid Syst. 3, 408–417 (2009)
P. Balasubramaniam, R. Rakkiyappan, Delay-dependent robust stability analysis for Markovian jumping stochastic Cohen-Grossberg neural networks with discrete interval and distributed time-varying delays. Nonlinear Anal.: Hybrid Syst. 3, 207–214 (2009)
C. Vidhya, P. Balasubramaniam, Robust stability of uncertain Markovian jumping stochastic Cohen-Grossberg type BAM neural networks with time-varying delays and reaction diffusion terms. Neural Parallel Sci. Comput. 19, 181–196 (2011)
C. Hua, X. Guan, Output feedback stabilization for time-delay nonlinear interconnected systems using neural networks. IEEE Trans. Neural Netw. 19(4), 673–688 (2008)
V.N. Phat, New stabilization criteria for linear time-varying systems with state delay and norm-bounded uncertainties. IEEE Trans. Autom. Control 47(12), 2095–2098 (2002)
Z. Wang, Y. Liu, X. Liu, Exponential stabilization of a class of stochastic system with Markovian jump parameters and mode-dependent mixed time-delays. IEEE Trans. Autom. Control 55(7), 1656–1662 (2010)
E. Fridman, Descriptor discretized Lyapunov functional method: analysis and design. IEEE Trans. Autom. Control 51(5), 890–897 (2006)
X. Liu, Q. Wang, Impulsive stabilization of high-order Hopfield-type neural networks with time-varying delays. IEEE Trans. Neural Netw. 19(1), 71–79 (2008)
V.N. Phat, H. Trinh, Exponential stabilization of neural networks with various activation functions and mixed time-varying delays. IEEE Trans. Neural Netw. 21(7), 1180–1184 (2010)
J. Cao, S. Zhong, Y. Hu, Global stability analysis for a class of neural networks with varying delays and control input. Appl. Math. Comput. 189(2), 1480–1490 (2007)
K. Patan, Stability analysis and the stabilization of a class of discrete-time dynamic neural networks. IEEE Trans. Neural Netw. 18(3), 660–673 (2007)
M. Liu, Delayed standard neural network models for control systems. IEEE Trans. Neural Netw. 18(4), 1376–1391 (2007)
J. Lu, D.W.C. Ho, Z. Wang, Pinning stabilization of linearly coupled stochastic neural networks via minimum number of controllers. IEEE Trans. Neural Netw. 20(10), 1617–1629 (2009)
E.N. Sanchez, J.P. Perez, Input-to-state stabilization of dynamic neural networks. IEEE Trans. Syst. Man Cybern. A 33(4), 532–536 (2003)
Z. Chen, D. Zhao, Stabilization effect of diffusion in delayed neural networks systems with Dirichlet boundary conditions. J. Frankl. Inst. 348(10), 2884–2897 (2011)
X. Lou, B. Cui, On robust stabilization of a class of neural networks with time-varying delays, in Proceedings of IEEE International Conference on Computational Intelligence and Security, pp. 437–440 (2006)
K. Shi, H. Zhu, S. Zhong et al., Less conservative stability criteria for neural networks with discrete and distributed delays using a delay-partitioning approach. Neurocomputing 140, 273–282 (2014)
B. Chen, J. Wang, Global exponential periodicity and global exponential stability of a class of recurrent neural networks with various activation functions and time-varying delays. Neural Netw. 20(10), 1067–1080 (2007)
Z. Guo, L. Huang, LMI conditions for global robust stability of delayed neural networks with discontinuous neuron activations. Appl. Math. Comput. 215, 889–900 (2009)
H. Wu, Global exponential stability of Hopfield neural networks with delays and inverse Lipschitz neuron activations. Nonlinear Anal.: Real World Appl. 10, 2297–2306 (2009)
K. Gu, An integral inequality in the stability problem of time-delay systems, in Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, pp. 2805–2810 (2000)
C. Zheng, Q.H. Shan, Z. Wang, Improved stability results for stochastic Cohen-Grossberg neural networks with discrete and distributed delays. Neural Process. Lett. 35(2), 103–129 (2012)
L. Arnold, Stochastic Differential Equations: Theory and Applications (Wiley, New York, 1972)
X. Mao, Exponential stability of stochastic delay interval systems with Markovian switching. IEEE Trans. Autom. Control 47(10), 1604–1612 (2002)
C. Yuan, J. Lygeros, Stabilization of a class of stochastic differential equations with Markovian switching. Syst. Control Lett. 54, 819–833 (2005)
B. Øksendal, Stochastic Differential Equations: An Introduction with Applications (Springer, New York, 1985)
Y. Tang, J. Fang, Q. Miao, On the exponential synchronization of stochastic jumping chaotic neural networks with mixed delays and sector-bounded non-linearities. Neurocomputing 72, 1694–1701 (2009)
© 2016 Science Press, Beijing and Springer-Verlag Berlin Heidelberg
Wang, Z., Liu, Z., Zheng, C. (2016). Stabilization of Stochastic RNNs with Stochastic Delays. In: Qualitative Analysis and Control of Complex Neural Networks with Delays. Studies in Systems, Decision and Control, vol 34. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-47484-6_11
Print ISBN: 978-3-662-47483-9
Online ISBN: 978-3-662-47484-6