Abstract
This paper studies stochastic quasi-synchronization of delayed neural networks with parameter mismatches and stochastic perturbation mismatch via pinning impulsive control. By pinning selected nodes of the stochastic neural network at impulse times, an impulsive control scheme is proposed. Sufficient conditions are obtained to ensure that the error system converges to a small region in the mean square. A numerical example is provided to illustrate the effectiveness of the theoretical results.
1 Introduction
In the past few decades, neural networks have been drawing attention from researchers due to their wide applications in signal processing, pattern recognition, dynamic optimization, deep learning and so on [1,2,3,4,5,6]. As an important collective behavior, the synchronization of neural networks is a central topic because of its practical applications in biological systems [7,8,9,10,11,12,13].
As is well known, random noises widely exist in the signal transmission of neural networks due to environmental uncertainties, and they usually lead to stochastic perturbations and uncertainties in the dynamic evolution. Based on the theory of stochastic systems [14], many stability and synchronization results for neural networks with stochastic perturbations have been obtained [15,16,17]. In the hardware implementation of neural networks, it is impossible for neurons to respond and communicate simultaneously owing to time delays. To reduce the negative influence of time delays, the delay-independent method is a powerful tool for checking the stability and synchronization of neural networks by constructing Lyapunov–Krasovskii functionals, see [18,19,20]. On the other hand, in the implementation of neural network systems, parameter mismatches and stochastic perturbation mismatches are unavoidable. If the parameter mismatch and the stochastic perturbation mismatch are small, the stochastic synchronization error cannot converge to zero in the mean square as time increases, but we may show that the error fluctuates within a small bound around zero, or even around a nonzero mean value, in the mean square. Background material on parameter mismatches of systems can be found in [21,22,23].
Pinning control, an effective external control approach, has been widely used for a variety of purposes due to its low cost. It is characterized by adding controllers to only a small fraction of the network nodes [24,25,26,27]. For example, in [26], the authors proposed a variety of pinning control methods for cluster synchronization in an array of coupled neural networks together with a new event-triggered sampled-data transmission strategy. Impulsive control is an energy-saving control because the perturbations are input instantaneously at certain moments, and it has been applied efficiently in engineering, physics, and science [28, 29]. With the help of impulsive system theory, many synchronization results for dynamical networks with impulsive input have been obtained [30,31,32,33]. It is worth noting that the control cost can be further reduced by adding the impulsive controllers to only a small fraction of the network nodes, which combines the advantages of pinning control and impulsive control. Recently, some research has been devoted to the synchronization of delayed neural networks with pinning and impulsive controls [34, 35]. For instance, in [34], the authors proposed a new pinning impulsive control scheme to investigate the synchronization problem for a class of complex networks with time-varying delay.
Motivated by the above discussion, this paper focuses on stochastic quasi-synchronization of delayed neural networks under parameter mismatch and stochastic perturbation mismatch. Although the error system will not converge exponentially to zero in the mean square, some effective sufficient conditions are obtained under which the synchronization error is confined to a relatively small bound in the mean square via pinning impulsive control. The contributions of this paper are summarized as follows:
- (i)
By pinning certain selected nodes of the stochastic neural network at each impulsive time, an impulsive control strategy is proposed to achieve stochastic quasi-synchronization.
- (ii)
By establishing a new lemma for stochastic impulsive systems, stochastic quasi-synchronization criteria are derived to guarantee that the nodes of the stochastic neural network synchronize to the desired trajectory within a small region in the mean square.
- (iii)
If the bound of the time delay does not exceed the length of the impulsive interval, the delay-independent method is used to overcome the effects of time delay and impulses by constructing a Lyapunov–Krasovskii functional.
The rest of this paper is organized as follows. In Sect. 2, the stochastic neural network model is presented and some definitions and lemmas are provided. In addition, a new lemma is established, which plays an important role in the proofs of the obtained theorems. In Sect. 3, some stochastic quasi-synchronization criteria are obtained via the impulsive control technique. In Sect. 4, a numerical example is presented to illustrate our results. Finally, some conclusions are given in Sect. 5.
Notations: \(R^{n}\) and \(R^{n\times m}\) denote the n-dimensional Euclidean space and the set of \(n\times m\) real matrices. The superscript T denotes the transpose. \(I_{n}\) represents the \(n\times n\) identity matrix. \(\Vert \cdot \Vert \) denotes the Euclidean norm of a vector or a matrix. \(\lambda _{\max }(A)\) and \(\lambda _{\min }(A)\) represent the maximum and minimum eigenvalues of matrix A. \(diag\{\cdots \}\) stands for a diagonal matrix. For a real symmetric matrix X, the notation \(X>0\;(X<0)\) means that X is positive (negative) definite. \(\otimes \) represents the Kronecker product. Let \(\omega (t)=(\omega _{1}(t),\omega _{2}(t),\cdots ,\omega _{n}(t))^{T}\) be an n-dimensional Brownian motion on a complete probability space \((\Omega ,\mathcal {F}, P)\) with a natural filtration \(\{\mathcal {F}_{t}\}_{t\ge 0}\) satisfying the usual conditions.
2 Model Description and Preliminaries
Consider a stochastic neural network with delay and N coupled nodes. The dynamics of the ith neuron are described by the following form
where \(x_{i}(t)=(x_{i1}(t),x_{i2}(t),\cdots ,x_{in}(t))^{T}\) is the state vector of the ith neural network at time t; \(C_{1}=diag\{c_{11},c_{12},\cdots ,c_{1n}\}\) denotes the rates with which the ith cell resets its potential to the resting state when isolated from other cells and inputs; \(B_{1}=(b^{(1)}_{ij})_{n\times n},D_{1}=(d^{(1)}_{ij})_{n\times n}\in R^{n\times n}\) are the connection weight matrices; f(x) is the activation function satisfying \(f(x_{i}(t))=(f_{1}(x_{i1}(t)),f_{2}(x_{i2}(t)),\ldots ,f_{n}(x_{in}(t)))^{T}\); \(h_{1}(t,x_{i}(t),x_{i}(t-\tau (t)))=(h_{11}(x_{i1}(t),x_{i1}(t-\tau (t))),h_{12}(x_{i2}(t),x_{i2}(t-\tau (t))),\cdots ,h_{1n}(x_{in}(t),x_{in}(t-\tau (t))))^{T}\); \(\tau (t)\) is the transmittal delay, and there exist constants \(\tau \) and \(\sigma \) such that \(0<\tau (t)\le \tau \), \(\tau ^{'}(t)\le \sigma <1\); \(\Gamma =diag\{\gamma _{1},\gamma _{2},\cdots ,\gamma _{n}\}\) is the positive definite inner coupling matrix between two connected nodes i and j; and \(a_{ij}\) is defined as follows: if there is a connection from node j to node \(i\;(j\ne i)\), then \(a_{ij}\ne 0\); otherwise, \(a_{ij}=0\).
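The displayed system (1) was rendered as an image in the original and is missing here. Based on the definitions above, the drive network presumably takes the standard Itô form below; this is a hedged reconstruction, not the authors' exact display:

```latex
dx_{i}(t)=\Big[-C_{1}x_{i}(t)+B_{1}f(x_{i}(t))+D_{1}f(x_{i}(t-\tau(t)))
         +\sum_{j=1}^{N}a_{ij}\Gamma x_{j}(t)\Big]dt
         +h_{1}(t,x_{i}(t),x_{i}(t-\tau(t)))\,d\omega(t),\qquad i=1,2,\cdots,N.
```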
Let s(t) be the desired trajectory described by the following form:
where \(C_{2}=diag\{c_{21},c_{22},\cdots ,c_{2n}\}\), \(B_{2}=(b^{(2)}_{ij})_{n\times n}\), \(D_{2}=(d^{(2)}_{ij})_{n\times n}\in R^{n\times n}\); \(h_{2}(t,s(t),s(t-\tau (t)))=(h_{21}(s_{1}(t),s_{1}(t-\tau (t))),h_{22}(s_{2}(t),s_{2}(t-\tau (t))),\cdots ,h_{2n}(s_{n}(t),s_{n}(t-\tau (t))))^{T}\).
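Equation (2) is likewise missing from the extraction; given the mismatched parameters \(C_{2},B_{2},D_{2}\) and diffusion \(h_{2}\), the leader presumably evolves as (again a hedged reconstruction):

```latex
ds(t)=\big[-C_{2}s(t)+B_{2}f(s(t))+D_{2}f(s(t-\tau(t)))\big]dt
     +h_{2}(t,s(t),s(t-\tau(t)))\,d\omega(t).
```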
Define the error signal as \(e_{i}(t)=x_{i}(t)-s(t)\), \(i=1,2,\cdots ,N\), then we have the following error dynamical system
where \(\triangle C=C_{1}-C_{2}\), \(\triangle B=B_{1}-B_{2}\), \(\triangle D=D_{1}-D_{2}\) are parameter mismatch errors and \(\triangle h=h_{1}-h_{2}\) is stochastic perturbation mismatch error.
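The display of the error system (3) did not survive extraction. Subtracting the two reconstructed systems, and assuming the usual zero-row-sum convention \(\sum_{j=1}^{N}a_{ij}=0\) so that the coupling of \(s(t)\) cancels, the error dynamics would read, with \(g(e_{i}(t))=f(x_{i}(t))-f(s(t))\):

```latex
de_{i}(t)=\Big[-C_{1}e_{i}(t)-\triangle C\,s(t)+B_{1}g(e_{i}(t))+\triangle B\,f(s(t))
         +D_{1}g(e_{i}(t-\tau(t)))+\triangle D\,f(s(t-\tau(t)))
         +\sum_{j=1}^{N}a_{ij}\Gamma e_{j}(t)\Big]dt
         +\big[h_{1}(t,x_{i}(t),x_{i}(t-\tau(t)))-h_{2}(t,s(t),s(t-\tau(t)))\big]\,d\omega(t).
```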
Due to the parameter mismatches and the stochastic perturbation mismatches, the origin \(e_{i}=0\) is not an equilibrium point of the error system (3), which means that complete synchronization is impossible. However, via pinning impulsive control, stochastic quasi-synchronization with a relatively small error bound can be considered.
Let \(t_{k}\ge 0\) be impulsive moments satisfying \(0=t_{0}<t_{1}<\cdots<t_{k}<t_{k+1}<\cdots \), \(\lim \limits _{k\rightarrow +\infty }t_{k}=+\infty \) and \(\sup \limits _{k\ge 0}\{\triangle _{k}\}<+\infty \), where \(\triangle _{k}=t_{k+1}-t_{k}\). For \(t=t_{k}\), the node errors are arranged in the following two forms
and
where \(i_{s}\in \{1,2,\cdots ,N\}\), \(s=1,2,\cdots ,N\), and \(i_{u}\ne i_{v}\) for \(u\ne v\). Furthermore, if \(E\Vert e_{i_{s}}(t_{k})\Vert =E\Vert e_{i_{s+1}}(t_{k})\Vert \), then \(i_{s}<i_{s+1}\). To reach stochastic quasi-synchronization of the networks, the pinning impulsive control scheme is applied to the nodes. Let \(\delta (\cdot )\) be the Dirac delta function and let \(v_{k}\) denote the impulsive gain. If \(-1<v_{k}<1\), the first q nodes are chosen as pinned nodes according to arrangement (i). If \(v_{k}\ge 1\) or \(v_{k}\le -1\), the first q nodes are chosen as pinned nodes according to arrangement (ii). The set of pinned nodes is defined by \(\chi (t_{k})=\{i_{1},i_{2},\cdots ,i_{q}\}\subset \{1,2,\cdots ,N\}\) with \(\sharp \chi (t_{k})=q\). We design the pinning impulsive controller as follows:
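The controller display was lost in extraction. With the Dirac function \(\delta(\cdot)\), gain \(v_{k}\) and pinned set \(\chi(t_{k})\) defined above, a pinning impulsive controller of the usual form would be (a hedged reconstruction):

```latex
u_{i}(t)=\begin{cases}\sum\limits_{k=1}^{\infty}(v_{k}-1)\,e_{i}(t_{k})\,\delta(t-t_{k}), & i\in\chi(t_{k}),\\[2pt]
0, & i\notin\chi(t_{k}),\end{cases}
```

so that pinned nodes satisfy \(e_{i}(t_{k}^{+})=v_{k}e_{i}(t_{k})\), consistent with the factor \(v_{k}^{2}\) appearing in \(\xi_{k}\) of Theorem 1.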
With the help of impulsive control, the error dynamical system can be obtained in the following form:
where \(e_{i}(t^{+}_{k})=\lim \limits _{h\rightarrow 0^{+}}e_{i}(t_{k}+h)\) and \(e_{i}(t_{k})=\lim \limits _{h\rightarrow 0^{-}}e_{i}(t_{k}+h)\), i.e., \(e_{i}(t)\) is left-continuous at \(t=t_{k}\). The initial condition of \(e_{i}(t)\) is \(e_{i}(t)=\phi (t)\in PC_{\mathcal {F}_{t}}([-\tau ,0],R^{n})\), where \(PC_{\mathcal {F}_{t}}([-\tau ,0],R^{n})\) is the family of all \(\mathcal {F}_{t}\)-measurable, \(PC([-\tau ,0],R^{n})\)-valued random variables \(\phi \) satisfying \(\int ^{0}_{-\tau }E[|\phi (\theta )|^{2}]d\theta <\infty \), and \(PC([-\tau ,0],R^{n})\) is the family of piecewise continuous functions \(\phi \) with the norm \(\Vert \phi \Vert =\sup \limits _{-\tau \le \theta \le 0}\Vert \phi (\theta )\Vert \).
Remark 1
Based on the proposed scheme, the norms of the synchronization errors may vary with the impulse time \(t_{k}\), which implies that the pinned nodes may differ at different \(t_{k}\). In view of the control cost, if \(-1<v_{k}<1\), nodes with large error norms are chosen as pinned nodes. If \(v_{k}\le -1\) or \(v_{k}\ge 1\), nodes with small error norms are chosen as pinned nodes.
Definition 1
Let \(\Phi \) be a region in the phase space of system (2). The neural networks (1) and (2) are said to be uniformly stochastically quasi-synchronized with error bound \(\overline{\theta }>0\) if there exists a \(\tilde{t}\ge 0\) such that for \(t\ge \tilde{t}\), \(E[\Vert x_{i}(0)\Vert ^{2}],E[\Vert s(0)\Vert ^{2}]\in \Phi \), \(i=1,2,\cdots ,N\)
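The displayed inequality of Definition 1 is missing; consistent with the error bound \(\overline{\theta}\) and the mean-square setting, it presumably reads:

```latex
E\big[\Vert x_{i}(t)-s(t)\Vert^{2}\big]\le \overline{\theta},\qquad t\ge \tilde{t},\quad i=1,2,\cdots,N.
```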
Assumption 1
There exists a constant \(l>0\) such that for all \(x,y\in R^{n}\)
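The condition of Assumption 1 is missing from the extraction; since the constant \(l\) later enters Theorem 1 through the term \(\alpha_{1}^{-1}l^{2}L_{1}\), it is presumably the standard Lipschitz condition on the activation function:

```latex
\Vert f(x)-f(y)\Vert \le l\,\Vert x-y\Vert,\qquad \forall x,y\in R^{n}.
```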
Assumption 2
There exist matrices \(M_{1}\in R^{n\times n}>0,M_{2}\in R^{n\times n}>0\) such that for all \(x,y,u,v\in R^{n}\)
Assumption 3
There exists a constant \(\rho _{1}>0\) such that
Assumption 4
There exist constants \(\rho _{2}>0\), \(\rho _{3}>0\) such that
Consider the following impulsive stochastic equation with delay:
where \(x(t)=(x_{1}(t),x_{2}(t),\cdots ,x_{n}(t))^{T}\), \(F:[0,+\infty )\times R^{n}\times PC([-\tau ,0];R^{n})\rightarrow R^{n}\), \(G:[0,+\infty )\times R^{n}\times PC([-\tau ,0];R^{n})\rightarrow R^{n\times n}\), \(\triangle x(t_{k})=x(t^{+}_{k})-x(t_{k})\), \(I:[0,+\infty )\times R^{n}\rightarrow R^{n}\).
Let \(\mathcal {C}^{2}_{1}([-\tau ,\infty )\times R^{n};[0,+\infty ))\) be the family of all nonnegative functions \(V(t,\phi )\) on \([-\tau ,\infty )\times R^{n}\), V, \(V_{t}\), \(V_{x}\), \(V_{xx}\) are continuous on \((t_{k-1},t_{k}]\times R^{n}\). For each \(V\in \mathcal {C}^{2}_{1}([-\tau ,\infty )\times R^{n};[0,+\infty ))\), \(\phi =\{\phi (\theta ):-\tau \le \theta \le 0\}\in PC_{\mathcal {F}_{t}}([-\tau ,0];R^{n})\), an operator \(\mathcal {L}V:(t_{k-1},t_{k}]\times PC_{\mathcal {F}_{t}}([-\tau ,0];R^{n})\rightarrow [0,+\infty )\) associated with Eq. (6) is defined as the following form:
Lemma 1
Assume that \(V\in \mathcal {C}^{2}_{1}([-\tau ,\infty )\times R^{n};R^{+})\) and there exist constants \(d_{1}>0\), \(d_{2}>0\), \(\eta _{k}>0,k=1,2,\cdots \), \(\overline{\mu }\ge 0\), \(\hat{\mu }\ge 0\), \(\mu ,\delta \) such that
- (i)
\(d_{1}\Vert x\Vert ^{2}\le V(t,x)\le d_{2}\Vert x\Vert ^{2}\);
- (ii)
\(E[\mathcal {L}V(t,x(t))]\le \mu E[V(t,x(t))]+\overline{\mu }E[V(t,x(t-\tau (t)))]+\hat{\mu }\) for all \(t\in (t_{k-1},t_{k}]\);
- (iii)
\(E[V(t^{+}_{k},x(t_{k})+I_{k}(t_{k},x(t_{k})))]\le \eta _{k}E[V(t_{k},x(t_{k}))]\);
- (iv)
\(\ln \eta _{k}\le \delta \triangle _{k-1}\), \(k=1,2,\cdots \);
- (v)
\(\mu +\beta \overline{\mu }+\delta <0\),
then the zero solution of Eq. (6) converges exponentially to the small region \(\mathcal {K}=\{x(t)\in R^{n}|E[\Vert x(t)\Vert ^{2}]\le d\}\) in the mean square with exponent \(\lambda \), where \(\beta =\sup \limits _{1\le k<+\infty }\{\beta _{k}\}\), \(\beta _{k}=\max \{e^{\delta \triangle _{k-1}},e^{-\delta \triangle _{k-1}}\}\), \(\lambda \) is the unique positive solution of \(\lambda +\mu +\beta e^{\lambda \tau }\overline{\mu }+\delta =0\), \(d=\frac{-\beta \hat{\mu }}{\mu +\delta +\beta \overline{\mu }}\).
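Condition (v) gives \(\varphi(0)=\mu+\beta\overline{\mu}+\delta<0\), while \(\varphi\) is increasing and tends to \(+\infty\), so the exponent \(\lambda\) can be computed numerically. A minimal sketch with hypothetical parameter values (not taken from the paper):

```python
import math

def convergence_rate(mu, mu_bar, delta, beta, tau):
    """Solve lambda + mu + beta*mu_bar*exp(lambda*tau) + delta = 0 by bisection.

    Requires mu + beta*mu_bar + delta < 0 (condition (v) of Lemma 1),
    which guarantees a unique positive root.
    """
    phi = lambda lam: lam + mu + beta * mu_bar * math.exp(lam * tau) + delta
    assert phi(0.0) < 0.0, "condition (v) of Lemma 1 is violated"
    lo, hi = 0.0, 1.0
    while phi(hi) < 0.0:          # expand until the root is bracketed
        hi *= 2.0
    for _ in range(200):          # bisection on [lo, hi]
        mid = 0.5 * (lo + hi)
        if phi(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative (hypothetical) parameter values
lam = convergence_rate(mu=-3.0, mu_bar=0.5, delta=1.0, beta=1.2, tau=0.1)
```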
Proof
By the Itô formula, we can obtain
For \(t\in (t_{k-1},t_{k}]\), we choose \(\varepsilon >0\) such that \(t+\varepsilon \in (t_{k-1},t_{k}]\). Integrating the above inequality from t to \(t+\varepsilon \) and taking expectations on both sides of (8) yields
Let \(\varepsilon \rightarrow 0\), by (ii), it yields that for \(t\in (t_{k-1},t_{k}]\)
Let \(V(t)=V(t,x(t))\) and \(z(t)=e^{-\mu t}E[V(t)]\). For \(t\in (t_{k-1},t_{k}]\), we have
By (iii), we have
For \(t\in [0,t_{1}]\), integrating the inequality (11) from 0 to t, we obtain
and
For \(t\in (t_{1},t_{2}]\), by using the same method, we obtain
By induction, it yields that for \(t\in (t_{k-1},t_{k}]\)
which implies that for \(t>0\)
For \(t>s\), the impulsive points in [s, t) can be denoted by \(t_{i_{1}},t_{i_{2}},\cdots ,t_{i_{p}}\), and \(t_{i_{1}-1}\) is the impulsive point immediately preceding \(t_{i_{1}}\). If \(\delta \ge 0\), by (iv), we have
If \(\delta <0\), by the similar methods, we can conclude that the above inequality holds. It follows that
Let \(\varphi (\lambda )=\lambda +\mu +\beta \overline{\mu }e^{\lambda \tau }+\delta \). By (v), we see that \(\varphi (0)<0,\varphi (+\infty )=+\infty \) and \(\varphi ^{'}(\lambda )=1+\beta \overline{\mu }\tau e^{\lambda \tau }>0\), which means that \(\varphi (\lambda )=0\) has a unique positive solution \(\lambda \). Next, we can claim that for \(t\ge -\tau \)
Indeed, for \(t\in [-\tau ,0]\)
Thus we only need to prove that (20) holds for \(t>0\). Otherwise, there exists a \(\tilde{t}>0\) such that
Noting \(\varphi (\lambda )=0\), (20) and (21) yield
\(\square \)
Lemma 2
[36]. For any vectors \(x,y\in R^{n}\), constant \(\vartheta > 0\), and matrix \(\Xi \in R^{n\times n}>0\),
Lemma 3
([36] Schur complement). The linear matrix inequality
is equivalent to
where \(U_{11}=U^{T}_{11}\), and \(U_{22}=U^{T}_{22}\).
3 Stochastic Quasi-Synchronization in Mean Square
This section devotes to stochastic quasi-synchronization for stochastic neural networks by adding pinning impulsive control.
Theorem 1
Suppose Assumptions 1–4 hold and let \(\Theta =\{y\in R^{n}|E(\Vert y\Vert ^{2})\le \theta \}\) be the range of system (2). If there exist matrices \(P\in R^{n\times n}>0\), \(L_{i}\in R^{n\times n}>0\), \(i=1,2,3\) and constants \(\alpha _{1}>0\), \(\alpha _{2}>0\), \(\alpha _{3}>0\), \(\mu _{1}\), \(\mu _{2}\), \(\nu \) such that
and
then the error of system (3) can converge to the small region \(\mathcal {D}=\{(e_{1}(t),e_{2}(t),\cdots ,e_{N}(t))^{T}|E[\sum \limits ^{N}_{i=1}\Vert e_{i}(t)\Vert ^{2}]\le \frac{\overline{d}}{\lambda _{\min }(P)},e_{i}(t)\in R^{n},i=1,2,\cdots ,N\}\) in the mean square with exponent \(\lambda \), where \(\Xi =I_{N}\otimes (PC+C^{T}P+\alpha ^{-1}_{1}l^{2}L_{1}+\lambda _{\max }(P)M_{1})+A\otimes P\Gamma +(A\otimes P\Gamma )^{T}-\mu _{1}I_{N}\otimes P\), \(\xi _{k}=v^{2}_{k}-(v_{k}-1)(v_{k}+1)\frac{\lambda _{\max }(P)(N-q)}{\lambda _{\min }(P)N}\), \(\overline{\nu }=\sup \limits _{1\le k<\infty }\{\nu _{k}\}\), \(\nu _{k}=\max \{e^{\nu \triangle _{k-1}},e^{-\nu \triangle _{k-1}}\}\), \(\overline{d}=\frac{-\overline{\nu }\theta [\alpha ^{-1}_{3}\lambda _{\max }(L_{3})\rho ^{2}_{1}+ \lambda _{\max }(P)(\rho ^{2}_{2}+\rho ^{2}_{3})]}{\mu _{1}+\overline{\nu }\mu _{2}+\nu }\), and \(\lambda >0\) is the unique solution of \(\lambda +\mu _{1}+\overline{\nu }e^{\lambda \tau }\mu _{2}+\nu =0\).
Proof
Construct a Lyapunov function
For \(t\in (t_{k-1},t_{k}]\), by (7), we have
From Assumption 1 and Lemma 2, there exist \(\alpha _{1}>0\), \(\alpha _{2}>0\) and \(L_{1}\in R^{n\times n}>0\), \(L_{2}\in R^{n\times n}>0\) such that
and
In view of Assumption 2, we can obtain
Let \(e(t)=(e^{T}_{1}(t),e^{T}_{2}(t),\cdots ,e^{T}_{N}(t))^{T}\), then
Noting that the parameter mismatches and stochastic perturbation mismatches satisfy Assumption 3 and Assumption 4, it follows from Lemma 2 that there exist \(\alpha _{3}>0\) and \(L_{3}\in R^{n\times n}>0\) such that
and
Substituting (29)–(34) into (28) yields
where \(\Psi _{1}=PC+C^{T}P+\alpha _{1} PB_{1}L^{-1}_{1}B^{T}_{1}P+\alpha ^{-1}_{1}l^{2}L_{1}+\alpha _{2}PD_{1}L^{-1}_{2}D^{T}_{1}P+\lambda _{\max }(P)M_{1}+\alpha _{3}PL^{-1}_{3}P\) and \(\Psi _{2}=\alpha ^{-1}_{2}l^{2}L_{2}+\lambda _{\max }(P)M_{2}\). By (23), (24) and Lemma 3, we have
On the other hand, when \(t=t_{k}\), we have
If \(-1<v_{k}<1\), in view of the selection of pinned nodes in the set \(\chi (t_{k})\), we get
Then we have
where \(\xi _{k}=v^{2}_{k}-(v_{k}-1)(v_{k}+1)\frac{\lambda _{\max }(P)(N-q)}{\lambda _{\min }(P)N}\). For \(v_{k}\le -1\) or \(v_{k}\ge 1\), we can conclude that (39) holds by the same method. It follows from (23)–(26) and Lemma 1 that there exists \(\overline{l}>0\) such that
which implies that
Therefore, the error system (3) converges to the small region in the mean square with exponent \(\lambda \). \(\square \)
Remark 2
From the proof of Theorem 1, the quasi-synchronization criterion is related to the impulsive gain \(v_{k}\), the number of pinned nodes q and the impulsive interval \(t_{k+1}-t_{k}\): \(\xi _{k}\) depends on \(v_{k}\) and q at the impulsive time \(t_{k}\). On the other hand, the conditions of Theorem 1 characterize the relation between \(\xi _{k}\) and \(t_{k+1}-t_{k}\). Therefore, a suitable pinning impulsive controller can be determined by selecting the values of \(v_{k}\), q and \(t_{k+1}-t_{k}\).
Remark 3
According to Theorem 1, if \(-1<v_{k}<1\), we can conclude that the number of the pinned nodes is estimated as
Correspondingly, if \(v_{k}\le -1\) or \(v_{k}\ge 1\), we see that
Remark 4
The conditions for stochastic quasi-synchronization are (23)–(26) in Theorem 1. To reduce the computational burden, the MATLAB LMI toolbox can be used to determine \(\mu _{1}\) and \(\mu _{2}\) by fixing the values of \(\alpha _{i},i=1,2,3\). Taking \(P=L_{1}=L_{2}=L_{3}=I_{n},\alpha _{1}=\frac{l}{\sqrt{\lambda _{\max }(B_{1}B^{T}_{1})}},\alpha _{2}=\frac{l}{\sqrt{\lambda _{\max }(D_{1}D^{T}_{1})}},\alpha _{3}=1\), \(\mu _{1}=\lambda _{\max }(C_{1}+C^{T}_{1})+2l\sqrt{\lambda _{\max }(B_{1}B^{T}_{1})}+l\sqrt{\lambda _{\max }(D_{1}D^{T}_{1})} +\lambda _{\max }[(A\otimes \Gamma )+(A\otimes \Gamma )^{T}]+\lambda _{\max }(M_{1})+1\), \(\mu _{2}=l\sqrt{\lambda _{\max }(D_{1}D^{T}_{1})}+\lambda _{\max }(M_{2})\), \(\nu =\sup \limits _{1\le k<+\infty }\{\frac{\ln [v^{2}_{k}-(v_{k}-1)(v_{k}+1)\frac{\lambda _{\max }(P)(N-q)}{\lambda _{\min }(P)N}]}{\triangle _{k-1}}\}\) and \(\overline{d}=\frac{-\overline{\nu }\theta (\rho ^{2}_{1}+ \rho ^{2}_{2}+\rho ^{2}_{3})}{\mu _{1}+\overline{\nu }\mu _{2}+\nu }\), we can derive the following practical corollary.
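Since Remark 4 makes every constant explicit for \(P=L_{1}=L_{2}=L_{3}=I_{n}\), the corollary's condition can be checked numerically. The sketch below evaluates \(\mu_{1}\), \(\mu_{2}\), \(\nu\) and \(\overline{\nu}\) for hypothetical stand-in matrices (not the paper's example data):

```python
import numpy as np

def remark4_constants(C1, B1, D1, A, Gamma, M1, M2, l, v, q, N, dt):
    """Evaluate the explicit constants of Remark 4 for P = L1 = L2 = L3 = I_n.

    Here v plays the role of a constant impulsive gain v_k and dt of the
    impulsive interval t_k - t_{k-1}. Returns (mu1, mu2, nu, nu_bar).
    """
    eigmax = lambda M: float(np.max(np.linalg.eigvalsh(M)))
    sB = np.sqrt(eigmax(B1 @ B1.T))
    sD = np.sqrt(eigmax(D1 @ D1.T))
    AG = np.kron(A, Gamma)
    mu1 = (eigmax(C1 + C1.T) + 2 * l * sB + l * sD
           + eigmax(AG + AG.T) + eigmax(M1) + 1.0)
    mu2 = l * sD + eigmax(M2)
    # xi_k = v_k^2 - (v_k - 1)(v_k + 1)(N - q)/N, since lmax(P)/lmin(P) = 1
    xi = v ** 2 - (v - 1.0) * (v + 1.0) * (N - q) / N
    nu = np.log(xi) / dt
    nu_bar = max(xi, 1.0 / xi)        # max{e^{nu*dt}, e^{-nu*dt}}
    return mu1, mu2, nu, nu_bar

# Hypothetical two-node illustration
C1 = np.eye(2); B1 = 0.5 * np.eye(2); D1 = 0.5 * np.eye(2)
A = np.array([[-1.0, 1.0], [1.0, -1.0]]); Gamma = np.eye(2)
M1 = M2 = 0.1 * np.eye(2)
mu1, mu2, nu, nu_bar = remark4_constants(C1, B1, D1, A, Gamma, M1, M2,
                                         l=1.0, v=0.25, q=1, N=2, dt=0.02)
quasi_sync = mu1 + nu_bar * mu2 + nu < 0   # Corollary 1's condition
```

For these stand-in values the condition \(\mu _{1}+\overline{\nu }\mu _{2}+\nu <0\) of Corollary 1 is satisfied, driven by the strongly negative \(\nu\) that frequent impulses with \(v_{k}=0.25\) produce.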
Corollary 1
Suppose Assumptions 1–4 hold and let \(\Theta =\{y\in R^{n}|E(\Vert y\Vert ^{2})\le \theta \}\) be the range of system (2). If
then the error of system (3) can converge to the small region \(\mathcal {D}=\{(e_{1}(t),e_{2}(t),\cdots ,e_{N}(t))^{T}|E[\sum \limits ^{N}_{i=1}\Vert e_{i}(t)\Vert ^{2}]\le \overline{d},e_{i}(t)\in R^{n},i=1,2,\cdots ,N\}\) in the mean square.
Theorem 2
Suppose Assumptions 1–4 hold and \(t_{k}-t_{k-1}\ge \tau \). Let \(\Theta =\{y\in R^{n}|E(\Vert y\Vert ^{2})\le \theta \}\) be the range of system (2). If there exist matrices \(P\in R^{n\times n}>0\), \(L_{i}\in R^{n\times n}>0\), \(i=1,2,3\) and constants \(\alpha _{1}>0\), \(\alpha _{2}>0\), \(\alpha _{3}>0\), \(\mu _{1}\), \(\mu _{2}\) satisfying (23), (24), and if there exists a constant \(\lambda >0\) such that for \(k=1,2,\cdots \)
then the error of system (3) can converge to the small region \(\mathcal {D}=\{(e_{1}(t),e_{2}(t),\cdots ,e_{N}(t))^{T}|E[\sum \limits ^{N}_{i=1}\Vert e_{i}(t)\Vert ^{2}]\le \frac{\widetilde{b}}{\lambda _{\min }(P)},e_{i}(t)\in R^{n},i=1,2,\cdots ,N\}\) in the mean square with exponent \(\frac{\lambda }{\overline{\triangle }}\), where \(\rho =\mu _{1}+\frac{\mu _{2}}{1-\sigma }\), \(\widetilde{b}=\frac{\varepsilon (e^{-\lambda }-\hat{d})e^{\rho \overline{\triangle }}}{\rho (1-e^{-\lambda })}+\frac{\varepsilon }{\rho }(e^{\rho \overline{\triangle }}-1)\), \(\varepsilon =[\alpha ^{-1}_{3}\lambda _{\max }(L_{3})\rho ^{2}_{1}+\lambda _{\max }(P)(\rho ^{2}_{2}+\rho ^{2}_{3})]\theta \), \(\overline{\triangle }=\sup \limits _{k\ge 1}\{\triangle _{k-1}\}\).
Proof
Consider a Lyapunov–Krasovskii functional
where
Similar to the proof of Theorem 1, for \(t\in (t_{k},t_{k+1}]\), we see that
For \(t\in (t_{k},t_{k+1}]\), it yields
Thus
where \(\rho =\mu _{1}+\frac{\mu _{2}}{1-\sigma }\), \(\varepsilon =[\alpha ^{-1}_{3}\lambda _{\max }(L_{3})\rho ^{2}_{1}+\lambda _{\max }(P)(\rho ^{2}_{2}+\rho ^{2}_{3})]\theta \). It follows that for \(t\in (t_{k},t_{k+1}]\)
When \(t=t_{k}\), according to the proof of Theorem 1 and by (48), we can obtain
By (44), there exists a \(\bar{t}_{k}\in (t_{k-1},t_{k}]\) such that
It follows from (33) and the above inequality that
Substituting (34) and (36) into (28), we have
where \(\overline{b}=\inf \limits _{k\ge 1}\{\xi _{k}+\frac{\mu _{2}}{1-\sigma }\triangle _{k-1}\}\), which yields that
For \(t\in (t_{k},t_{k+1}]\), by (26) (33), we see that
where \(\widetilde{b}=\frac{\varepsilon (e^{-\lambda }-\overline{b})e^{\rho \triangle _{k}}}{\rho (1-e^{-\lambda })}+\frac{\varepsilon }{\rho }(e^{\rho \triangle _{k}}-1)\). This completes the proof. \(\square \)
4 Numerical Simulations
In this section, a numerical example is given to demonstrate our results. Consider the following neural network with stochastic perturbation as the leader:
where \(s(t)=(s_{1}(t),s_{2}(t))^{T}\), \(f(s(t))=(f_{1}(s_{1}(t)),f_{2}(s_{2}(t)))^{T}\), \(f_{1}(s_{1}(t))=\arctan s_{1}(t)\), \(f_{2}(s_{2}(t))=\arctan s_{2}(t)\),
Figure 1 depicts the trajectory of \((s_{1}(t),s_{2}(t))\) with initial value (0.2, 0.5). This is a chaotic attractor with stochastic perturbation, and the range is \(\Theta =\{s\in R^{2}|E[\Vert s\Vert ^{2}]\le 16\}\).
We assume the response neural networks take the following form:
where \(\Gamma =diag\{1.2,1.5\}\)
We design the pinning impulsive input with \(q=2\) pinned nodes, \(t_{k}=0.02k\) and \(v_{k}=0.25\). By simple calculation, we conclude that \(\mu _{1}+\overline{\nu }\mu _{2}+\nu \approx -13.8816<0\). By Corollary 1, we can estimate the convergence region \(\mathcal {D}=\{(e_{1},e_{2},e_{3},e_{4})|E[\sum \limits ^{4}_{i=1}\Vert e_{i}\Vert ^{2}]\le 0.098\}\). Figure 2 depicts the synchronization errors \(\Vert e_{i}\Vert \), \(i=1,2,3,4\), with initial value (0.2, 0.5). Figure 3 depicts the errors \(\Vert e_{i}\Vert \), \(i=1,2,3,4\), without pinning impulsive input.
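The example's system matrices were given as images and are not reproduced above, so the following Euler–Maruyama sketch of the pinned error dynamics uses hypothetical stand-in data; only \(\Gamma\), \(q=2\), \(t_{k}=0.02k\) and \(v_{k}=0.25\) are taken from the example, and the mismatch terms are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the example's (unreproduced) system matrices.
N, n = 4, 2                                  # followers, state dimension
C1 = np.diag([1.0, 1.0])
B1 = np.array([[2.0, -0.1], [-5.0, 3.0]])
D1 = np.array([[-1.5, -0.1], [-0.2, -2.5]])
A = np.array([[-2.0, 1.0, 1.0, 0.0],
              [1.0, -2.0, 0.0, 1.0],
              [1.0, 0.0, -2.0, 1.0],
              [0.0, 1.0, 1.0, -2.0]])        # zero row sums
Gamma = np.diag([1.2, 1.5])                  # inner coupling from the example
f = np.tanh                                  # stand-in activation
tau, dt = 0.1, 1e-3
q, vk = 2, 0.25                              # pinning data from the example
imp_every = int(round(0.02 / dt))            # impulses at t_k = 0.02k

delay = int(round(tau / dt))
e = rng.normal(size=(N, n))                  # error states e_i(0), one per row
hist = [e.copy()] * (delay + 1)              # delay buffer for e(t - tau)

for k in range(2000):
    e_tau = hist[0]
    # Euler-Maruyama step for the error dynamics (mismatch terms omitted)
    drift = -e @ C1.T + f(e) @ B1.T + f(e_tau) @ D1.T + A @ e @ Gamma.T
    noise = 0.1 * e * rng.normal(size=(N, n)) * np.sqrt(dt)
    e = e + drift * dt + noise
    if (k + 1) % imp_every == 0:             # impulsive instant t_k
        norms = np.linalg.norm(e, axis=1)
        pinned = np.argsort(-norms)[:q]      # |v_k| < 1: pin the largest errors
        e[pinned] *= vk                      # e_i(t_k^+) = v_k e_i(t_k)
    hist = hist[1:] + [e.copy()]

total_err = float(np.sum(np.linalg.norm(e, axis=1) ** 2))
```

The node-selection step mirrors arrangement (i): at each impulse the q nodes with the largest error norms are rescaled by \(v_{k}\).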
Remark 5
It is necessary to select suitable nodes when applying the pinning impulsive control scheme. In [24, 25], random nodes can be selected to control. However, since the expectation of the synchronization error \(e_{i}(t)\) may differ at the impulsive times \(t=t_{k}\), the pinned nodes are not invariant. Figure 2 implies that our pinning algorithm is more general than those in [24, 25, 35]. Figure 3 shows that the pinning impulsive input plays an important role in stochastic quasi-synchronization of delayed networks.
5 Conclusions
In this paper, stochastic quasi-synchronization is studied for leader-follower delayed neural networks using a pinning impulsive control scheme. First, by pinning selected nodes of the stochastic neural networks and establishing a new lemma for stochastic impulsive systems, a general criterion is obtained to ensure stochastic quasi-synchronization between the leader and the followers with two different topologies. Finally, an example is provided to illustrate the effectiveness of the obtained results.
References
Chua L, Yang L (1988) Cellular neural networks. IEEE Trans Circuits Syst 35:1257–1290
Venetianer P, Roska T (1998) Image compression by cellular neural networks. IEEE Trans Circuits Syst 45:205–215
Yu J, Yang X, Gao F, Tao D (2017) Deep multimodal distance metric learning using click constraints for image ranking. IEEE Trans Cybern 47:4014–4024
Yu J, Zhu C, Zhang J, Huang Q, Tao D (2019) Spatial pyramid-enhanced netVLAD with weighted triplet loss for place recognition. IEEE Trans Neural Netw. https://doi.org/10.1109/TNNLS.2019.2908982
Hong C, Yu J, Zhang J, Jin X, Lee K (2019) Multimodal face-pose estimation with multitask manifold deep learning. IEEE Trans Ind Inform 15:3952–3961
Yu J, Tao D, Wang M, Rui Y (2015) Learning to rank using user clicks and visual features for image retrieval. IEEE Trans Cybern 45:767–779
Xiong W, Ho D, Yu X (2016) Saturated finite interval iterative learning for tracking of dynamic systems with HNN-structural output. IEEE Trans Neural Netw 27:1578–1584
Uhlhaas P, Singer W (2006) Neural synchrony in brain disorders: relevance for cognitive dysfunctions and pathophysiology. Neuron 52:155–168
Wang X, Chen G (2003) Complex networks: small-world, scale-free, and beyond. IEEE Circuits Syst Mag 3:6–20
Yang S, Li C, Huang T (2016) Exponential stabilization and synchronization for fuzzy model of memristive neural networks by periodically intermittent control. Neural Netw 75:162–172
Lu W, Chen T (2004) Synchronization of coupled connected neural networks with delays. IEEE Trans Circuits Syst I(51):2491–2503
Cao J, Wan Y (2014) Matrix measure strategies for stability and synchronization of inertial bam neural network with time delays. Neural Netw 53:165–172
He W, Cao J (2009) Exponential synchronization of chaotic neural networks: a matrix measure approach. Nonlinear Dyn 55:55–65
Mao X (1997) Stochastic differential equations and applications. Ellis Horwood, Chichester
Wang Z, Shu H, Fang J, Liu X (2006) Robust stability for stochastic Hopfield neural networks with time delays. Nonlinear Anal Real World Appl 7:1119–1128
Shen B, Wang Z, Liu X (2011) Bounded H-infinity synchronization and state estimation for discrete time-varying stochastic complex networks over a finite-horizon. IEEE Trans Neural Netw 22:145–157
Feng L, Cao J, Liu L (2018) Stability analysis in a class of Markov switched stochastic Hopfield neural networks. Neural Process Lett. https://doi.org/10.1007/s11063-018-9912-7
Xiong L, Cheng J, Cao J, Liu Z (2018) Novel inequality with application to improve the stability criterion for dynamical systems with two additive time-varying delays. Appl Math Comput 321:672–688
Zhu Q, Cao J, Hayat T, Alsaadi F (2015) Robust stability of Markovian jump stochastic neural networks with time delays in leakage terms. Neural Process Lett 41:1–17
Xiao S, Lian H, Teo K, Zeng H, Zhang X (2018) A new Lyapunov functional approach to sampled-data synchronization control for delayed neural networks. J Frankl Inst 355:8857–8873
Shahverdiev E, Sivaprakasam S, Shore K (2002) Parameter mismatches and perfect anticipating synchronization in bi-directionally coupled external cavity laser diodes. Phys Rev E 66:017206
Jalnine A, Kim S (2002) Characterization of the parameter-mismatching effect on the loss of chaos synchronization. Phys Rev E 65:026210
Pan L, Cao J (2012) Stochastic quasi-synchronization for delayed dynamical networks via intermittent control. Commun Nonlinear Sci Numer Simul 17:1332–1343
Sun W, Wang S, Wang G, Wu Y (2015) Lag synchronization via pinning control between two coupled networks. Nonlinear Dyn 79:2659–2666
Li B (2016) Pinning adaptive hybrid synchronization of two general complex dynamical networks with mixed coupling. Appl Math Model 40:2983–2998
Li L, Ho D, Cao J, Lu J (2016) Pinning cluster synchronization in an array of coupled neural networks under event-based mechanism. Neural Netw 76:1–12
Yuan M, Luo X, Wang W, Li L, Peng H (2019) Pinning synchronization of coupled memristive recurrent neural networks with mixed time-varying delays and perturbations. Neural Process Lett 49:239–262
Yang T (2001) Impulsive control theory. Springer, Berlin
Pan L, Cao J (2011) Exponential stability of impulsive stochastic functional differential equations. J Math Anal Appl 382:672–685
Lu J, Ho DWC, Cao JD (2010) A unified synchronization criterion for impulsive dynamical networks. Automatica 46:1215–1221
Guan Z, Liu Z, Feng G, Wang Y (2010) Synchronization of complex dynamical networks with time-varying delays via impulsive distributed control. IEEE Trans Circuits Syst I(57):2182–2195
Liu B, Hill D (2011) Impulsive consensus for complex dynamical networks with nonidentical nodes and coupling time-delays. SIAM J Control Optim 49:315–338
Li Y (2017) Impulsive synchronization of stochastic neural networks via controlling partial states. Neural Process Lett 46:59–69
Wang X, Liu X, She K, Zhong S (2017) Pinning impulsive synchronization of complex dynamical networks with various time-varying delay sizes. Nonlinear Anal Hybrid Syst 26:307–318
He W, Qian F, Cao J (2017) Pinning-controlled synchronization of delayed neural networks with distributed-delay coupling via impulsive control. Neural Netw 85:1–9
Boyd S, Ghaoui L, Feron E, Balakrishnan V (1994) Linear matrix inequalities in system and control theory. SIAM, Philadelphia
This work was supported by the Natural Science Foundation of Guangdong Province of China under Grant 2015A030310425.
Pan, L. Stochastic Quasi-Synchronization of Delayed Neural Networks: Pinning Impulsive Scheme. Neural Process Lett 51, 947–962 (2020). https://doi.org/10.1007/s11063-019-10118-5