Abstract
In this paper, a feedback controller is proposed for the synchronization of memristive competitive neural networks with different time scales. By constructing a proper Lyapunov–Krasovskii functional and employing differential inclusion theory, a feedback controller is designed to achieve asymptotic synchronization of the coupled competitive neural networks. The proposed synchronization scheme is simple and easily implemented. A simulation example is given to show the effectiveness of the theoretical results.
1 Introduction
As a contraction of "memory" and "resistor", the memristor was introduced by Prof. Chua in 1971 [1]. He reasoned that the memristor is a fourth fundamental circuit element, providing conceptual symmetry with the resistor, inductor and capacitor. In 2008, a team at Hewlett-Packard Laboratories announced in Nature that they had built a practical memristor device [2, 3].
The memristor’s memory characteristic and nanometer dimensions have attracted much attention. Currently, many researchers attempt to build an electronic intelligence that can mimic the awesome power of the brain by means of this crucial electronic component, the memristor [2–10]. From previous work, it can be seen that the memristor exhibits features similar to those of neurons in the human brain. Because of this feature, the device can be used to build a new model of neural networks that emulates the human brain, with potential applications in next-generation computers and powerful brain-like neural computers.
There are some existing works on memristor-based nonlinear circuit networks [6–10] and neural networks [11–16]. Since Meyer-Bäse et al. proposed competitive neural networks with different time scales in [17], the synchronization problems of competitive neural networks have been intensively investigated [20–24]. However, very few works so far deal with synchronization control of memristor-based competitive neural networks. Motivated by the above discussions, in this paper we propose the following memristor-based competitive neural networks with different time scales:
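\[
\begin{aligned}
\text{STM:}\quad \varepsilon \dot{x}_i(t) &= -x_i(t) + \sum_{j=1}^{n} a_{ij}(x_i(t)) f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(x_i(t)) f_j\big(x_j(t-\tau (t))\big) + H_i s_i(t),\\
\text{LTM:}\quad \dot{s}_i(t) &= -s_i(t) + f_i(x_i(t)), \qquad i=1,2,\ldots ,n, \qquad\qquad (1)
\end{aligned}
\]

following the short-term memory (STM) and long-term memory (LTM) formulation of [17], written here in the standard form consistent with the notation defined below.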
where \(a_{ij}\) represents the connection weight between the \(i\)th neuron and the \(j\)th neuron, and \(b_{ij}\) denotes the synaptic weight of the delayed feedback.
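In the memristive setting these weights are state dependent and switch as

\[
a_{ij}(x_i(t)) = \begin{cases} \hat{a}_{ij}, & |x_i(t)|\le T_i,\\ \check{a}_{ij}, & |x_i(t)| > T_i,\end{cases}
\qquad
b_{ij}(x_i(t)) = \begin{cases} \hat{b}_{ij}, & |x_i(t)|\le T_i,\\ \check{b}_{ij}, & |x_i(t)| > T_i,\end{cases}
\]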
in which the switching jumps \(T_i > 0\) and \(\hat{a}_{ij}, \check{a}_{ij}, \hat{b}_{ij}, \check{b}_{ij}\) are constants, and \(\tau (t)\) is the time-varying transmission delay satisfying \(0\le \tau (t)\le \tau\). Here \(\varepsilon >0\) is the time scale of the STM state and \(n\) denotes the number of neurons. \(x(t)=\left( x_1(t),x_2(t),\ldots ,x_n(t)\right) ^{T}\), where \(x_i(t)\) is the current activity level of the \(i\)th neuron; \(f_j(x_j(t))\) is the output of the \(j\)th neuron, with \(f(x(t))=\left( f_{1}(x_1(t)),f_{2}(x_2(t)), \ldots ,f_{n}(x_n(t))\right) ^{T}\); \(s_{i}(t)\) is the synaptic efficiency, with \(s(t)=(s_{1}(t), s_{2}(t),\ldots , s_{n}(t))^{T}\); and \(H_{i}\) is the strength of the external stimulus.
Remark 1
The memristive competitive neural network model (1) is essentially a state-dependent nonlinear switching dynamical system, which generalizes the conventional competitive neural network.
2 Preliminaries
Throughout this paper, solutions of all the systems considered below are understood in the Filippov sense [25]. \(R ^{n}\) and \(R ^{n \times n}\) denote the \(n\)-dimensional Euclidean space and the set of all \(n\times n\) real matrices, respectively. \(P>0\) means that \(P\) is a real symmetric positive definite matrix. \([\cdot , \cdot ]\) represents an interval. \(C([-\tau , 0],R^{n})\) denotes the Banach space of continuous functions equipped with the norm \(\Vert \phi \Vert =\sup _{-\tau \le t\le 0}\left[ \sum \nolimits _{i=1}^{n}|\phi _i(t)|^{2}\right] ^{1/2}\) for \(\phi =(\phi _1(t),\phi _2(t),\ldots ,\phi _n(t))\in C([-\tau , 0],R^{n})\), and \(co[\underline{a}_i, \bar{a}_i]\) denotes the convex hull of \([\underline{a}_i, \bar{a}_i]\). For a vector \(x=\left( x_1,x_2,\ldots ,x_n\right) ^{T}\in R^{n}\), \(\Vert x\Vert\) denotes the Euclidean norm, \(\Vert x\Vert =\left[ \sum \nolimits _{i=1}^{n}|x_i|^{2}\right] ^{1/2}\).
Definition 1
Let \(E\subset R^{n}\). Then \(x\mapsto F(x)\) is called a set-valued map from \(E\hookrightarrow R^n\) if, for each point \(x\) of the set \(E\), there corresponds a nonempty set \(F(x)\subset R^n\).
Definition 2
For the system \(\frac{dx}{dt}= g(x)\), \(x\in R^n\), with discontinuous right-hand side, a set-valued map is defined as
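\[
F(x) = \bigcap _{\delta >0}\; \bigcap _{\mu (N)=0} \overline{co}\left[ g\big(B(x,\delta )\setminus N\big)\right],
\]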
where \(\overline{co}[E]\) is the closure of the convex hull of set \(E,\, B(x,\delta ) = \{y :\Vert y- x\Vert \le \delta \}\), and \(\mu (N )\) is the Lebesgue measure of set \(N\). A solution in the Filippov sense [25] of the Cauchy problem for this system with initial condition \(x(0) = x_0\) is an absolutely continuous function \(x(t),\, t\in [0, T]\), which satisfies \(x(0) = x_0\) and the differential inclusion:
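\[
\frac{dx}{dt} \in F(x(t)) \quad \text{for a.e. } t\in [0, T].
\]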
By applying the theories of set-valued maps and differential inclusions above, the memristor-based neural network (1) can be written as the following differential inclusion:
where \(\bar{a}_{ij}=\text{max} \left\{ \hat{a}_{ij}, \check{a}_{ij}\right\} ,\, \underline{a}_{ij}=\text{min}\left\{ \hat{a}_{ij}, \check{a}_{ij}\right\} ,\, \bar{b}_{ij}=\text{max} \left\{ \hat{b}_{ij}, \check{b}_{ij}\right\} ,\, \underline{b}_{ij}=\text{min}\left\{ \hat{b}_{ij}, \check{b}_{ij}\right\}\). From [25–27], there exist \(\tilde{a}_{ij}\in \text{co}[\check{a}_{ij}, \hat{a}_{ij}], \tilde{b}_{ij}\in \text{co}[\check{b}_{ij}, \hat{b}_{ij}]\), such that
Throughout this paper, we consider system (2) or (3) as the drive system; the corresponding response system is given as follows:
or equivalently, there exist \(\tilde{a}_{ij}\in \text{co}[\check{a}_{ij}, \hat{a}_{ij}], \tilde{b}_{ij}\in \text{co}[\check{b}_{ij}, \hat{b}_{ij}]\), such that
where \(y(t)\in R^n\) is the state vector of the response system, \(u(t)\) is the control input to be designed.
Let the errors be \(e(t) = y(t) - x(t)\) and \(h(t) = r(t) - s(t)\); then the error system is given as follows:
or equivalently, there exist \(\tilde{a}_{ij}\in \text{co}[\check{a}_{ij}, \hat{a}_{ij}], \tilde{b}_{ij}\in \text{co}[\check{b}_{ij}, \hat{b}_{ij}]\), such that
where \(g(e(t)) =f(y(t)) - f(x(t)),\, g(e(t-\tau (t))) =f(y(t-\tau (t))) - f(x(t-\tau (t)))\).
In our paper, the control inputs in the response system (4) or (5) are taken as follows:
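\[
u(t) = K_{1}e(t) + K_{2}e(t-\tau (t)),
\]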
where \(K_{1}\) and \(K_{2}\) are the controller gains to be determined.
Definition 3
The trivial solution of the error system (6) or (7) is said to be globally asymptotically stable if, for any given initial conditions, the solutions satisfy \(\lim _{t\rightarrow \infty }\Vert e(t)\Vert =0\) and \(\lim _{t\rightarrow \infty }\Vert h(t)\Vert =0\).
Throughout this paper, we make the following assumptions.
Assumption 1
There exists a diagonal matrix \(L = \text{diag}(l_1 , l_2, \ldots , l_n)\) satisfying
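the Lipschitz-type condition

\[
|f_j(u) - f_j(v)| \le l_j |u - v|
\]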
for all \(u, v \in R\), \(j = 1, 2, \ldots , n\).
Assumption 2
There exist positive constants \(\tau ,\gamma\) such that
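\[
0 \le \tau (t) \le \tau , \qquad \dot{\tau }(t) \le \gamma < 1.
\]

(The bound \(\dot{\tau }(t)\le \gamma <1\) is the standard condition needed to differentiate the delayed integral term in a Lyapunov–Krasovskii functional.)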
Lemma 1
For any vector \(x, y \in R^n\) and a positive constant \(a\), the following matrix inequality holds
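\[
2x^{T}y \le a\, x^{T}x + \frac{1}{a}\, y^{T}y,
\]

a Young-type inequality.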
3 Main results
Theorem 1
Under Assumptions 1 and 2, the two coupled delayed neural networks (2) and (4) or (3) and (5) can be synchronized with the control inputs (8), if there exist constants \(r_{1}, r_{2}, r_{3},\, r_{4}>0\), a diagonal matrix \(Q>0\), and gain matrices \(K_{1}, K_{2}\) such that
where \(T =\frac{2}{\varepsilon }I - \frac{2}{\varepsilon }K_{1} -\frac{r_{1}}{\varepsilon }(\tilde{B}L)^{T}(\tilde{B}L)- Q -\frac{2}{\varepsilon }\tilde{A}L -\frac{r_{2}}{\varepsilon }H^{T}H -\frac{r_{3}}{\varepsilon } K_{2}^{T}K_{2} -\frac{r_{4}}{\varepsilon }L^{T}L.\)
Proof
Consider the following Lyapunov–Krasovskii function for system (7) as
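\[
V(t) = e^{T}(t)e(t) + h^{T}(t)h(t) + \int _{t-\tau (t)}^{t} e^{T}(s)\,Q\, e(s)\,ds,
\]

a standard choice whose integral term generates the \(Q\) and delayed terms collected in \(T\).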
Then, it follows from (6)–(7) and Assumption 2 that
By Assumption 1 and Lemma 1, there exist positive scalars \(r_{1}, r_{2}, r_{3}, r_{4}>0\) such that
Substituting (11)–(15) into (10), we have
where \(I\) is the identity matrix of appropriate dimension.
It is easy to see that there exist real numbers \(r_{2}\) and \(r_{4}\) such that
Letting
From (16)–(18), it can be seen that
Moreover, in (19), the equality holds if and only if \(\Vert e(t)\Vert ^{2} +\Vert h(t)\Vert ^{2}=0\), i.e., \(\Vert e(t)\Vert ^{2}=0\) and \(\Vert h(t)\Vert ^{2}=0\). It can be concluded from Lyapunov stability theory that
According to Definition 3, the trivial solution of the error system (6) or (7) is globally asymptotically stable. We can conclude that the neural networks (2) and (4) or (3) and (5) are synchronized under the control inputs (8). The proof is complete.
Remark 2
When system (1) does not exhibit the memristive property, it reduces to a continuous system without switching jumps, and Theorem 1 in this paper is similar to Theorem 1 in [22–24].
Corollary 1
Under Assumptions 1 and 2, when \(\tau (t)= \tau >0\), systems (2) and (4) or (3) and (5) can be synchronized with the control inputs (8), if there exist constants \(r_{1}, r_{2}, r_{3},\, r_{4}>0\), a diagonal matrix \(Q>0\), and gain matrices \(K_{1}, K_{2}\) such that
where \(T =\frac{2}{\varepsilon }I - \frac{2}{\varepsilon }K_{1} -\frac{r_{1}}{\varepsilon }(\tilde{B}L)^{T}(\tilde{B}L)- Q -\frac{2}{\varepsilon }\tilde{A}L -\frac{r_{2}}{\varepsilon }H^{T}H -\frac{r_{3}}{\varepsilon } K_{2}^{T}K_{2} -\frac{r_{4}}{\varepsilon }L^{T}L.\)
Proof
We can obtain Corollary 1 directly from Theorem 1 by taking \(Q= \frac{I}{\varepsilon r_{1}} + \frac{I}{\varepsilon r_{3}}\).
4 Numerical example
In the following, we give some numerical simulations to illustrate the results above. Consider the following memristor-based competitive neural networks with different time scales:
Let \(\varepsilon =0.8,\, \tau (t) = 0.5\left| {\sin t} \right| ,\, f(x(t)) = \tanh (x(t)),\, H_{1} = 1.6,\, H_{2} = 0.3\), with initial values \(x_1(\theta )= -0.4,\, x_2(\theta ) = 0.5,\, s_1(\theta ) = 0.5,\, s_2(\theta ) = -0.3,\, \forall \theta \in [ -0.5,0]\).
with initial values \(y_{1}(\theta ) = 0.3,\, y_{2}(\theta ) = -0.5,\, r_{1}(\theta ) = 0.2,\, r_{2}(\theta ) = -0.6,\, \forall \theta \in [-0.5,0]\), and controller \(u(t)=K_{1}e(t)+K_{2}e(t-\tau (t))\) with \(K_{1}=\left( \begin{array}{cc} -1 & 0 \\ 0 & -1 \\ \end{array} \right) ,\, K_{2}=\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right)\).
Figure 1a–d depicts the synchronization errors of the state variables between the drive and response systems. In agreement with Theorem 1, the response system globally asymptotically synchronizes with the drive system under the controller \(u(t)\).
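To illustrate how such a drive-response simulation can be set up, the following minimal Python sketch integrates a memristive competitive network pair with a forward Euler scheme. Only \(\varepsilon\), \(\tau (t)\), \(f\), \(H_{i}\), the initial values, and the controller structure are taken from the example above; the weight matrices, switching jumps, and the (stronger) controller gains are illustrative assumptions, since the paper's numeric weight matrices are not reproduced here.

```python
import numpy as np

# Illustrative memristive weights: each a_ij(x_i), b_ij(x_i) switches between
# two constants depending on whether |x_i| exceeds the switching jump T_i.
# These numeric values are assumptions for this sketch.
T_jump  = np.array([1.0, 1.0])
A_hat   = np.array([[0.30, -0.10], [0.10, 0.20]])   # branch for |x_i| <= T_i
A_check = np.array([[0.25, -0.10], [0.10, 0.15]])   # branch for |x_i| >  T_i
B_hat   = np.array([[-0.20, 0.05], [0.05, -0.20]])
B_check = np.array([[-0.15, 0.05], [0.05, -0.15]])

eps = 0.8                        # STM time scale (from the example)
H   = np.array([1.6, 0.3])       # external stimulus strengths (from the example)
K1  = -4.0 * np.eye(2)           # illustrative gains, chosen stronger than the
K2  = 0.2 * np.eye(2)            # example's K1 = -I since our weights differ
f   = np.tanh

def weights(x):
    """Select the state-dependent (memristive) weight matrices row by row."""
    inside = (np.abs(x) <= T_jump)[:, None]
    return np.where(inside, A_hat, A_check), np.where(inside, B_hat, B_check)

dt, T_end = 0.001, 40.0
n = int(T_end / dt)
x = np.zeros((n + 1, 2)); s = np.zeros((n + 1, 2))   # drive STM / LTM states
y = np.zeros((n + 1, 2)); r = np.zeros((n + 1, 2))   # response STM / LTM states
x[0], s[0] = [-0.4, 0.5], [0.5, -0.3]                # drive initial values
y[0], r[0] = [0.3, -0.5], [0.2, -0.6]                # response initial values

for i in range(n):
    t = i * dt
    d = int(0.5 * abs(np.sin(t)) / dt)        # tau(t) = 0.5|sin t| in steps
    j = max(0, i - d)                         # constant history for theta < 0
    Ax, Bx = weights(x[i]); Ay, By = weights(y[i])
    u = K1 @ (y[i] - x[i]) + K2 @ (y[j] - x[j])   # u = K1 e(t) + K2 e(t-tau(t))
    # STM (fast) and LTM (slow) equations of the drive and response systems
    x[i+1] = x[i] + dt / eps * (-x[i] + Ax @ f(x[i]) + Bx @ f(x[j]) + H * s[i])
    s[i+1] = s[i] + dt * (-s[i] + f(x[i]))
    y[i+1] = y[i] + dt / eps * (-y[i] + Ay @ f(y[i]) + By @ f(y[j]) + H * r[i] + u)
    r[i+1] = r[i] + dt * (-r[i] + f(y[i]))

err0 = np.linalg.norm(np.concatenate([y[0] - x[0], r[0] - s[0]]))
errT = np.linalg.norm(np.concatenate([y[-1] - x[-1], r[-1] - s[-1]]))
```

With these assumed weights and gains the combined error norm \(\left( \Vert e(t)\Vert^2 + \Vert h(t)\Vert^2\right)^{1/2}\) decays toward zero over \(t\in [0,40]\), mirroring the behavior shown in Fig. 1 for the paper's own parameters.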
5 Conclusions
The memory property of the memristor enables us to build a new model of competitive neural networks with different time scales. By constructing a proper Lyapunov–Krasovskii functional and employing differential inclusion theory, a feedback controller is designed to achieve asymptotic synchronization of the coupled competitive neural networks. The proposed synchronization scheme is simple and easily implemented. A simulation example shows the effectiveness of the theoretical results.
References
Chua LO (1971) Memristor-the missing circuit element. IEEE Trans Circuit Theory CT-18:507–519
Strukov DB, Snider GS, Stewart GR, Williams RS (2008) The missing memristor found. Nature 453:80–83
Tour JM, He T (2008) The fourth element. Nature 453:42–43
Itoh M, Chua LO (2008) Memristor oscillators. Int J Bifurcat Chaos 18:3183–3206
Wang ZQ et al (2012) Synaptic learning and memory functions achieved using oxygen ion migration/diffusion in an amorphous InGaZnO memristor. Adv Funct Mater 22:2759–2765
Itoh M, Chua LO (2009) Memristor cellular automata and memristor discrete-time cellular neural networks. Int J Bifurcat Chaos 19:3605–3656
Corinto F, Ascoli A, Gilli M (2011) Nonlinear dynamics of memristor oscillators. IEEE Trans Circuits Syst I Regul Pap 58:1323–1336
Hu J, Wang J (2010) Global uniform asymptotic stability of memristor-based recurrent neural networks with time delays, In: 2010 International Joint Conference on Neural Networks, IJCNN 2010, Barcelona, Spain, pp. 1–8
Merrikh-Bayat F, Shouraki SB (2011) Memristor-based circuits for performing basic arithmetic operations. Procedia Comput Sci 3:128–132
Petras I (2010) Fractional-order memristor-based Chua's circuit. IEEE Trans Circuits Syst II Express Briefs 57:975–979
Yang X, Cao J, Yu W (2014) Exponential synchronization of memristive Cohen-Grossberg neural networks with mixed delays. Cogn Neurodyn. doi:10.1007/s11571-013-9277-6
Wu AL, Zeng ZG, Zhu XS, Zhang JN (2011) Exponential synchronization of memristor-based recurrent neural networks with time delays. Neurocomputing 74:3043–3050
Cao J, Wan Y (2014) Matrix measure strategies for stability and synchronization of inertial BAM neural network with time delays. Neural Netw 53:165–172
Wu AL, Wen SP, Zeng ZG (2012) Synchronization control of a class of memristor-based recurrent neural networks. Inf Sci 183:106–116
Cao J, Alofi AS, Al-Mazrooei A, Elaiw A (2013) Synchronization of switched interval networks and applications to chaotic neural networks. Abstract and Applied Analysis, Vol 2013. Article ID 940573, 11 pages
Zhang GD, Shen Y, Wang LM (2013) Global anti-synchronization of a class of chaotic memristive neural networks with time-varying delays. Neural Netw 46:1–8
Meyer-Bäse A, Ohl F, Scheich H (1996) Singular perturbation analysis of competitive neural networks with different time scales. Neural Comput 8:1731–1742
Wang G, Shen Y (2013) Exponential synchronization of coupled memristive neural networks with time delays. Neural Comput Appl. doi:10.1007/s00521-013-1349-3
Zhang GD, Shen Y (2013) New algebraic criteria for synchronization stability of chaotic memristive neural networks with time-varying delays. IEEE Trans Neural Netw Learn Syst 24:1701–1707
Meyer-Bäse A, Pilyugin SS, Chen Y (2003) Global exponential stability of competitive neural networks with different time scales. IEEE Trans Neural Netw 14:716–719
Yang X, Cao J, Long Y, Rui W (2010) Adaptive lag synchronization for competitive neural networks with mixed delays and uncertain hybrid perturbations. IEEE Trans Neural Netw 21:1656–1667
Gu H (2009) Adaptive synchronization for competitive neural networks with different time scales and stochastic perturbation. Neurocomputing 73:350–356
Lou X, Cui B (2007) Synchronization of competitive neural networks with different time scales. Physica A 380:563–576
Gan QT, Xu R, Kang XB (2012) Synchronization of unknown chaotic delayed competitive neural networks with different time scales based on adaptive control and parameter identification. Nonlinear Dyn 67:1893–1902
Filippov AF (1988) Differential equations with discontinuous right-hand sides. Kluwer, Dordrecht
Aubin JP, Cellina A (1984) Differential inclusions. Springer, Berlin
Clarke FH, Ledyaev YS, Stem RJ, Wolenski RR (1998) Nonsmooth analysis and control theory. Springer, New York
Acknowledgments
The authors thank the editor and the anonymous reviewers for their helpful comments and suggestions.
Shi, Y., Zhu, P. Synchronization of memristive competitive neural networks with different time scales. Neural Comput & Applic 25, 1163–1168 (2014). https://doi.org/10.1007/s00521-014-1598-9