1 Introduction

The memristor, a contraction of "memory resistor", was introduced by Prof. Chua in 1971 [1]. He reasoned that the memristor is a fourth fundamental circuit element, providing conceptual symmetry with the resistor, the inductor and the capacitor. In 2008, a team at Hewlett-Packard Laboratories announced in Nature that they had fabricated a practical memristor device [2, 3].

The memristor's memory characteristic and nanometer dimensions have attracted much attention. Currently, many researchers attempt to build electronic intelligence that can mimic the power of the brain by means of this crucial electronic component, the memristor [2–10]. Previous work shows that the memristor exhibits features similar to those of the neurons in the human brain. Because of this feature, we can apply this device to build a new model of neural networks to emulate the human brain, with potential applications in next-generation computers and powerful brain-like neural computers.

There are some existing works on memristor-based nonlinear circuit networks [6–10] and neural networks [11–16]. Since Meyer-Bäse et al. proposed competitive neural networks with different time scales in [17], the synchronization problems of competitive neural networks have been intensively investigated [20–24]. However, so far, there are very few works dealing with the synchronization control of memristor-based competitive neural networks. Motivated by the above discussions, in this paper, we propose the following memristor-based competitive neural networks with different time scales:

$$\begin{aligned} \text{STM}: \varepsilon \dot{x}_i(t)&= - x_i(t) +\sum \limits _{j = 1}^n {a_{ij}(x_i)f_j (x_j(t))}\nonumber \\&\quad +\sum \limits _{j = 1}^n{b_{ij}(x_i)f_j(x_j(t-\tau (t)))}+H_{i}s_{i}(t),\nonumber \\ \text{LTM}:\dot{s}_{i}(t)&= -s_{i}(t)+f_i(x_i(t)), \quad i=1,2,\ldots ,n, \end{aligned}$$
(1)

where \(a_{ij}\) represents the connection weight between the \(i\)th neuron and the \(j\)th neuron and \(b_{ij}\) denotes the synaptic weight of the delayed feedback:

$$\begin{aligned} a_{ij} (x_i)&= a_{ij}(x_i(t)) = \left\{ \begin{array}{ll} \hat{a}_{ij}, & | x_i(t) | > T_i , \\ \check{a}_{ij}, & | x_i(t) | \le T_i , \end{array} \right. \\ b_{ij} (x_i)&= b_{ij}(x_i(t)) = \left\{ \begin{array}{ll} \hat{b}_{ij}, & | x_i(t) | > T_i , \\ \check{b}_{ij}, & | x_i(t) | \le T_i , \end{array} \right. \end{aligned}$$

in which the switching jumps \(T_i > 0\) and \(\hat{a}_{ij}, \check{a}_{ij}, \hat{b}_{ij}, \check{b}_{ij}\) are all constants, and \(\tau (t)\) is the time-varying transmission delay satisfying \(0\le \tau (t)\le \tau\). Here \(\varepsilon >0\) is the time scale of the STM state and \(n\) denotes the number of neurons. \(x(t)=\left( x_1(t),x_2(t),\ldots ,x_n(t)\right) ^{T}\), where \(x_i(t)\) is the current activity level of the \(i\)th neuron. \(f_j(x_j(t))\) is the output of the \(j\)th neuron, \(f(x(t))=\left( f_{1}(x_1(t)),f_{2}(x_2(t)), \ldots ,f_{n}(x_n(t))\right) ^{T}\). \(s_{i}(t)\) is the synaptic efficiency, \(s(t)=(s_{1}(t), s_{2}(t),\ldots , s_{n}(t))^{T}\), and \(H_{i}\) is the strength of the external stimulus.
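For readers who wish to experiment with model (1), the following is a minimal sketch (in Python with NumPy; the function names and argument layout are our own, not from the paper) of how the state-dependent weights and the STM/LTM right-hand sides can be evaluated:

```python
import numpy as np

def switched_weight(x_i, T_i, w_hat, w_check):
    """Memristive weight: w_hat if |x_i| > T_i, otherwise w_check."""
    return w_hat if abs(x_i) > T_i else w_check

def stm_ltm_rhs(x, s, x_delayed, eps, H, T, A_hat, A_check, B_hat, B_check, f=np.tanh):
    """Right-hand sides of model (1) for the STM state x and LTM state s."""
    n = len(x)
    dx, ds = np.empty(n), np.empty(n)
    for i in range(n):
        # Row i of each weight matrix switches on |x_i| versus T_i
        a_i = [switched_weight(x[i], T[i], A_hat[i][j], A_check[i][j]) for j in range(n)]
        b_i = [switched_weight(x[i], T[i], B_hat[i][j], B_check[i][j]) for j in range(n)]
        dx[i] = (-x[i]
                 + sum(a_i[j] * f(x[j]) for j in range(n))
                 + sum(b_i[j] * f(x_delayed[j]) for j in range(n))
                 + H[i] * s[i]) / eps
        ds[i] = -s[i] + f(x[i])   # LTM: s_i' = -s_i + f_i(x_i)
    return dx, ds
```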

Remark 1

The memristive competitive neural network model (1) is essentially a state-dependent nonlinear switching dynamical system, which generalizes the conventional competitive neural network.

2 Preliminaries

Throughout this paper, solutions of all the systems considered below are intended in Filippov's sense [25]. \(R ^{n}\) and \(R ^{n \times n}\) denote the \(n\)-dimensional Euclidean space and the set of all \(n\times n\) real matrices, respectively. \(P>0\) means that \(P\) is a real symmetric positive definite matrix. \([\cdot , \cdot ]\) represents an interval, and \(co[\underline{a}_i, \bar{a}_i]\) denotes the convex hull of \([\underline{a}_i, \bar{a}_i]\). \(C([-\tau , 0],R^{n})\) is the Banach space of all continuous functions \(\phi =(\phi _1(t),\phi _2(t),\ldots ,\phi _n(t))\) equipped with the norm \(\Vert \phi \Vert =\sup _{-\tau \le t\le 0}\left[ \sum \nolimits _{i=1}^{n}|\phi _i(t)|^{2}\right] ^{1/2}\). For a vector \(x(t)=\left( x_1(t),x_2(t),\ldots ,x_n(t)\right) ^{T}\in R^{n}\), \(\Vert x\Vert\) denotes the Euclidean norm, \(\Vert x\Vert =\left[ \sum\nolimits _{i=1}^{n}|x_i(t)|^{2}\right] ^{1/2}\).

Definition 1

Let \(E\subset R^{n}\). The map \(x\mapsto F(x)\) is called a set-valued map from \(E\) to \(R^n\) if to each point \(x\) of the set \(E\) there corresponds a nonempty set \(F(x)\subset R^n\).

Definition 2

For the system \(\frac{dx}{dt}= g(x), x\in R^n\), with a discontinuous right-hand side, a set-valued map is defined as

$$\begin{aligned} \phi (x)=\bigcap \limits _{\delta >0}\bigcap \limits _{\mu (N)=0} \overline{co}[g(B(x,\delta ))\setminus N] \end{aligned}$$

where \(\overline{co}[E]\) is the closure of the convex hull of the set \(E\), \(B(x,\delta ) = \{y :\Vert y- x\Vert \le \delta \}\), and \(\mu (N )\) is the Lebesgue measure of the set \(N\). A solution in Filippov's sense [25] of the Cauchy problem for this system with initial condition \(x(0) = x_0\) is an absolutely continuous function \(x(t),\, t\in [0, T]\), which satisfies \(x(0) = x_0\) and the differential inclusion:

$$\begin{aligned} \frac{\text{d}x}{\text{d}t}\in \phi (x), \quad \text{for a.e. } t\in [0,T]. \end{aligned}$$

By applying the theories of set-valued maps and differential inclusions above, the memristor-based neural network (1) can be written as the following differential inclusion:

$$\begin{aligned} \text{STM}: \varepsilon \dot{x}_i(t) &\in - x_i(t) +\sum \limits _{j = 1}^n \text{co}[\underline{a}_{ij},\bar{a}_{ij}] f_j (x_j(t))\nonumber \\ &\quad +\sum \limits _{j = 1}^n\text{co}[\underline{b}_{ij},\bar{b}_{ij}]f_j(x_j(t-\tau (t))) +H_{i}s_{i}(t),\nonumber \\ \text{LTM}: \dot{s}_{i}(t)&= - s_{i}(t)+f_i(x_i(t)), \quad i=1,2,\ldots ,n, \end{aligned}$$
(2)

where \(\bar{a}_{ij}=\max \left\{ \hat{a}_{ij}, \check{a}_{ij}\right\} ,\, \underline{a}_{ij}=\min \left\{ \hat{a}_{ij}, \check{a}_{ij}\right\} ,\, \bar{b}_{ij}=\max \left\{ \hat{b}_{ij}, \check{b}_{ij}\right\} ,\, \underline{b}_{ij}=\min \left\{ \hat{b}_{ij}, \check{b}_{ij}\right\}\). Moreover, from [25–27], there exist \(\tilde{a}_{ij}\in \text{co}[\check{a}_{ij}, \hat{a}_{ij}], \tilde{b}_{ij}\in \text{co}[\check{b}_{ij}, \hat{b}_{ij}]\), such that

$$\begin{aligned}&\text{STM}: \varepsilon \dot{x}_i(t) = - x_i(t) +\sum \limits _{j = 1}^n \tilde{a}_{ij} f_j (x_j(t)) +\sum \limits _{j = 1}^n \tilde{b}_{ij}f_j(x_j(t-\tau (t))) +H_{i}s_{i}(t),\nonumber \\&\text{LTM}: \dot{s}_{i}(t)= - s_{i}(t)+f_i(x_i(t)),\quad i=1,2,\ldots ,n. \end{aligned}$$
(3)
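As a quick illustration of the interval endpoints appearing in (2)–(3), they are simply the elementwise maxima and minima of the two switching values. A short sketch, using the \(a_{ij}\) values from the example in Sect. 4:

```python
import numpy as np

# "hat"/"check" switching values of a_{ij} from the example in Sect. 4
A_hat   = np.array([[2.5, -0.10], [-0.15, 3.5]])
A_check = np.array([[2.0, -0.05], [-0.10, 3.0]])

A_upper = np.maximum(A_hat, A_check)   # \bar{a}_{ij}
A_lower = np.minimum(A_hat, A_check)   # \underline{a}_{ij}
# Any selection with A_lower <= A_tilde <= A_upper (elementwise) turns
# the inclusion (2) into the single-valued system (3).
```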

Throughout this paper, we consider system (2) or (3) as the drive system; the corresponding response system is as follows:

$$\begin{aligned}&\text{STM}: \varepsilon \dot{y}_i(t) \in - y_i(t) +\sum \limits _{j = 1}^n \text{co}[\underline{a}_{ij},\bar{a}_{ij}] f_j (y_j(t))\nonumber \\&+\sum \limits _{j = 1}^n\text{co}[\underline{b}_{ij},\bar{b}_{ij}]f_j(y_j(t-\tau (t))) +H_{i}r_{i}(t)+ u_{i}(t),\nonumber \\&\text{LTM}: \dot{r}_{i}(t)= - r_{i}(t)+f_i(y_i(t)), i=1,2,\ldots ,n. \end{aligned}$$
(4)

or equivalently, there exist \(\tilde{a}_{ij}\in \text{co}[\check{a}_{ij}, \hat{a}_{ij}], \tilde{b}_{ij}\in \text{co}[\check{b}_{ij}, \hat{b}_{ij}]\), such that

$$\begin{aligned}\text{STM}: \varepsilon \dot{y}_i(t)& = - y_i(t) +\sum \limits _{j = 1}^n \tilde{a}_{ij} f_j (y_j(t)) \\ &\quad +\sum \limits _{j = 1}^n \tilde{b}_{ij}f_j(y_j(t-\tau (t))) +H_{i}r_{i}(t)+ u_{i}(t),\nonumber \\ \text{LTM}: \dot{r}_{i}(t) &= - r_{i}(t)+f_i(y_i(t)), \quad i=1,2,\ldots ,n. \end{aligned}$$
(5)

where \(y(t)\in R^n\) and \(r(t)\in R^n\) are the state vectors of the response system and \(u(t)\) is the control input to be designed.

Let the errors be \(e(t) = y(t) - x(t)\) and \(h(t) = r(t) - s(t)\); then the error system is given as follows:

$$\begin{aligned}&\text{STM}: \varepsilon \dot{e}_i(t) \in - e_i(t) +\sum \limits _{j = 1}^n \text{co}[\underline{a}_{ij},\bar{a}_{ij}] g_j (e_j(t))\nonumber \\& \quad +\sum \limits _{j = 1}^n \text{co}[\underline{b}_{ij},\bar{b}_{ij}]g_j(e_j(t-\tau (t))) +H_{i}h_{i}(t)+ u_{i}(t),\nonumber \\&\text{LTM}: \dot{h}_{i}(t)= - h_{i}(t)+g_i(e_i(t)), \quad i=1,2,\ldots ,n, \end{aligned}$$
(6)

or equivalently, there exist \(\tilde{a}_{ij}\in \text{co}[\check{a}_{ij}, \hat{a}_{ij}], \tilde{b}_{ij}\in \text{co}[\check{b}_{ij}, \hat{b}_{ij}]\), such that

$$\begin{aligned}&\text{STM}: \varepsilon \dot{e}_i(t) =- e_i(t) +\sum \limits _{j = 1}^n \tilde{a}_{ij} g_j (e_j(t)) +\sum \limits _{j = 1}^n \tilde{b}_{ij}g_j(e_j(t-\tau (t))) +H_{i}h_{i}(t)+ u_{i}(t),\nonumber \\&\text{LTM}: \dot{h}_{i}(t)= - h_{i}(t)+g_i(e_i(t)), \quad i=1,2,\ldots ,n, \end{aligned}$$
(7)

where \(g(e(t)) =f(y(t)) - f(x(t)),\, g(e(t-\tau (t))) =f(y(t-\tau (t))) - f(x(t-\tau (t)))\).

In this paper, the control inputs in the response system (4) or (5) are taken as follows:

$$\begin{aligned} u(t)=K_{1}e(t)+K_{2}e(t-\tau (t)), \end{aligned}$$
(8)

where \(K_{1}\) and \(K_{2}\) are the controller gains to be determined.
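A sketch of controller (8) in code (the function name and NumPy types are our assumptions):

```python
import numpy as np

def control_input(e_now, e_delayed, K1, K2):
    """Feedback controller (8): u(t) = K1 e(t) + K2 e(t - tau(t))."""
    return K1 @ e_now + K2 @ e_delayed

# Example with the gains used in Sect. 4: K1 = -I, K2 = I
u = control_input(np.array([0.7, -1.0]), np.array([0.5, -0.8]),
                  -np.eye(2), np.eye(2))
```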

Definition 3

The trivial solution of system (6) or (7) is said to be globally asymptotically stable if, for any given initial conditions, the solutions satisfy

$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty }\Vert e(t)\Vert ^{2}=0,\, \mathop {\lim }\limits _{t \rightarrow \infty } \Vert h(t)\Vert ^{2}=0. \end{aligned}$$

Throughout this paper, we make the following assumptions.

Assumption 1

There exists a diagonal matrix \(L = \text{diag}(l_1 , l_2, \ldots , l_n)\) satisfying

$$\begin{aligned} 0 \le \frac{{f_j (u) - f_j (v)}}{{u - v}} \le l_j, \end{aligned}$$

for all \(u,v \in R,\, u \ne v,\, j = 1,2, \ldots ,n.\)

Assumption 2

There exist positive constants \(\tau ,\gamma\) such that

$$\begin{aligned} 0 \le \tau (t)\le \tau , \quad \dot{\tau }(t)\le \gamma < 1. \end{aligned}$$

Lemma 1

For any vectors \(x, y \in R^n\) and any positive constant \(a\), the following inequality holds:

$$\begin{aligned} 2x^T y \le a x^T x + a^{-1} y^T y. \end{aligned}$$
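Lemma 1 is the standard Young-type inequality; it follows from expanding a single square:

$$\begin{aligned} 0 \le \left( \sqrt{a}\,x - \tfrac{1}{\sqrt{a}}\,y\right) ^T\left( \sqrt{a}\,x - \tfrac{1}{\sqrt{a}}\,y\right) = a x^T x - 2x^T y + a^{-1} y^T y. \end{aligned}$$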

3 Main results

Theorem 1

Under Assumptions 1–2, the two coupled delayed neural networks (2) and (4) (or (3) and (5)) can be synchronized under the control inputs (8) if there exist constants \(r_{1}, r_{2}, r_{3}, r_{4}>0\), a diagonal matrix \(Q>0\) and gain matrices \(K_{1}, K_{2}\) such that

$$\begin{aligned} T>0, \end{aligned}$$

where \(T =\frac{2}{\varepsilon }I - \frac{2}{\varepsilon }K_{1} -\frac{r_{1}}{\varepsilon }(\tilde{B}L)^{T}(\tilde{B}L)- Q -\frac{2}{\varepsilon }\tilde{A}L -\frac{r_{2}}{\varepsilon }H^{T}H -\frac{r_{3}}{\varepsilon } K_{2}^{T}K_{2} -\frac{r_{4}}{\varepsilon }L^{T}L.\)
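Since the condition \(T>0\) is understood in the quadratic-form sense \(e^T T e > 0\), it can be checked numerically through the symmetric part of \(T\). A minimal sketch, assuming the constants and matrices have already been chosen (here \(H = \text{diag}(H_i)\)):

```python
import numpy as np

def theorem1_condition(eps, A_tilde, B_tilde, H, L, Q, K1, K2, r1, r2, r3, r4):
    """Check T > 0 from Theorem 1 via the eigenvalues of sym(T)."""
    n = K1.shape[0]
    I = np.eye(n)
    BL = B_tilde @ L
    T = ((2/eps)*I - (2/eps)*K1 - (r1/eps)*BL.T @ BL - Q
         - (2/eps)*A_tilde @ L - (r2/eps)*H.T @ H
         - (r3/eps)*K2.T @ K2 - (r4/eps)*L.T @ L)
    # e^T T e > 0 for all e != 0 iff the symmetric part of T is positive definite
    T_sym = 0.5 * (T + T.T)
    return np.linalg.eigvalsh(T_sym).min() > 0
```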

Proof

Consider the following Lyapunov–Krasovskii functional for system (7):

$$\begin{aligned} V(t,e(t))= e^T(t)e(t)+h^T(t)h(t) + \int \limits_{t-\tau (t)}^t e^T(s)Q e(s)d s. \end{aligned}$$
(9)

Then, it follows from (6)–(7) and Assumption 2 that

$$\begin{aligned}&\dot{V }(t,e(t)) \le 2 {e^T(t)}\frac{1}{\varepsilon } \left[ - e (t)+ \tilde{A} g(e(t)) + \tilde{B} g(e(t-\tau (t)))\right. \nonumber \\&\quad \left. + H h(t) + K_{1}e(t)+ K_{2}e(t-\tau (t))\right] \nonumber \\&\quad +2 h^T(t)\left[ - h(t)+ g(e(t))\right] + e^T(t)Q e(t) - (1-\gamma )e^T(t-\tau (t))Q e(t-\tau (t)) \nonumber \\&\quad = -\frac{2}{\varepsilon }e^T(t)e(t) + \frac{2}{\varepsilon }{e^T(t)}\tilde{A} g(e(t)) + \frac{2}{\varepsilon }{e^T(t)}\tilde{B} g(e(t-\tau (t))) + \frac{2}{\varepsilon }{e^T(t)} H h(t) \nonumber \\& \quad\quad + \frac{2}{\varepsilon }{e^T(t)}K_{1}e(t) + \frac{2}{\varepsilon }{e^T(t)}K_{2}e(t-\tau (t)) - 2 h^T(t) h(t)+ 2 h^T(t)g(e(t)) + e^T(t)Q e(t)\nonumber \\&\quad\quad -(1-\gamma ) e^T(t-\tau (t))Q e(t-\tau (t)). \end{aligned}$$
(10)

By Assumption 1 and Lemma 1, for any positive scalars \(r_{1}, r_{2}, r_{3}, r_{4}>0\), it follows that

$$\begin{aligned} e^T(t) \tilde{A} g(e(t))&\le e^T(t)\tilde{A}L e(t),\end{aligned}$$
(11)
$$\begin{aligned} 2 e^T(t)\tilde{B} g(e(t-\tau (t)))&\le \frac{1}{r_{1}}e^{T}(t-\tau (t))e(t-\tau (t)) + r_{1} e^T(t)(\tilde{B}L)^{T}(\tilde{B} L )e(t),\end{aligned}$$
(12)
$$\begin{aligned} 2 e^T(t) H h(t)&\le r_{2}e^T(t)H^{T}H e(t)+ \frac{1}{r_{2}}h^{T}(t) h(t),\end{aligned}$$
(13)
$$\begin{aligned} 2 e^T(t) K_{2} e(t-\tau (t))&\le r_{3} e^T(t) K_{2}^{T}K_{2}e(t) + \frac{1}{r_{3}}e^{T}(t-\tau (t))e(t-\tau (t)),\end{aligned}$$
(14)
$$\begin{aligned} 2h^T(t) g(e(t))&\le \frac{1}{r_{4}} h^T(t)h(t)+ r_{4}e^{T}(t)L^{T}L e(t). \end{aligned}$$
(15)

Substituting (11)–(15) into (10) we have

$$\begin{aligned}&\dot{V}(t,e(t)) \le - e^T(t)\left[ \frac{2}{\varepsilon }I - \frac{2}{\varepsilon }K_{1} - \frac{r_{1}}{\varepsilon }(\tilde{B}L)^{T}(\tilde{B}L)- Q - \frac{2}{\varepsilon }\tilde{A}L\right. \nonumber \\&\quad \left. - \frac{r_{2}}{\varepsilon }H^{T}H - \frac{r_{3}}{\varepsilon } K_{2}^{T}K_{2} - \frac{r_{4}}{\varepsilon }L^{T}L\right] e(t)\nonumber \\&\quad + e^{T}(t-\tau (t))\left[ \frac{I}{\varepsilon r_{1}} + \frac{I}{\varepsilon r_{3}} - (1-\gamma )Q\right] e(t-\tau (t))\nonumber \\&\quad + h^T(t)\left[ -2I + \frac{I}{\varepsilon r_{2}} + \frac{I}{ r_{4}} \right] h(t), \end{aligned}$$
(16)

where \(I\) is the identity matrix of appropriate dimension.

It is easy to see that there exist positive constants \(r_{2}\) and \(r_{4}\) such that

$$\begin{aligned} \frac{I}{\varepsilon r_{2}} + \frac{I}{r_{4}}- 2I < 0. \end{aligned}$$
(17)

For instance, \(r_{2} = 2/\varepsilon\) and \(r_{4} = 2\) give \(\frac{I}{\varepsilon r_{2}} + \frac{I}{r_{4}} - 2I = -I < 0\). Letting

$$\begin{aligned} & (1-\gamma ) Q= \frac{I}{\varepsilon r_{1}} + \frac{I}{\varepsilon r_{3}},\nonumber \\ &\lambda = \min \left\{ \lambda _{\min }(T),\, 2 -\frac{1}{\varepsilon r_{2}} - \frac{1}{ r_{4}}\right\} . \end{aligned}$$
(18)
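With \(Q\) fixed by the first line of (18), the decay rate \(\lambda\) can be computed directly. A small sketch, under the same assumptions as the numerical check of \(T>0\) above:

```python
import numpy as np

def decay_rate(T, eps, r2, r4):
    """lambda in (18): min of lambda_min(sym(T)) and 2 - 1/(eps*r2) - 1/r4."""
    T_sym = 0.5 * (T + T.T)
    return min(np.linalg.eigvalsh(T_sym).min(), 2 - 1.0/(eps*r2) - 1.0/r4)
```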

From (16)–(18), it can be seen that

$$\begin{aligned} \dot{V}(t,e(t))\le - \lambda (\Vert e(t)\Vert ^{2}+\Vert h(t)\Vert ^{2}). \end{aligned}$$
(19)

Moreover, in (19), equality holds if and only if \(\Vert e(t)\Vert ^{2} +\Vert h(t)\Vert ^{ 2}=0\), i.e., \(\Vert e(t)\Vert ^{2}=0\) and \(\Vert h(t)\Vert ^{2}=0\). It can be concluded from Lyapunov stability theory that

$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty }\Vert e(t)\Vert ^{2}=0,\, \mathop {\lim }\limits _{t \rightarrow \infty } \Vert h(t)\Vert ^{2}=0. \end{aligned}$$

According to Definition 3, the trivial solution of system (6) or (7) is globally asymptotically stable. We can conclude that the neural networks (2) and (4) (or (3) and (5)) can be synchronized under the control inputs (8). The proof is complete.

Remark 2

When system (1) does not exhibit the memristive (state-dependent switching) property, it reduces to a continuous system without switching jumps, and Theorem 1 in this paper is similar to Theorem 1 in [22–24].

Corollary 1

Under Assumptions 1–2, when \(\tau (t)= \tau >0\), systems (2) and (4) (or (3) and (5)) can be synchronized under the control inputs (8) if there exist constants \(r_{1}, r_{2}, r_{3}, r_{4}>0\), a diagonal matrix \(Q>0\) and gain matrices \(K_{1}, K_{2}\) such that

$$\begin{aligned} T>0, \end{aligned}$$

where \(T =\frac{2}{\varepsilon }I - \frac{2}{\varepsilon }K_{1} -\frac{r_{1}}{\varepsilon }(\tilde{B}L)^{T}(\tilde{B}L)- Q -\frac{2}{\varepsilon }\tilde{A}L -\frac{r_{2}}{\varepsilon }H^{T}H -\frac{r_{3}}{\varepsilon } K_{2}^{T}K_{2} -\frac{r_{4}}{\varepsilon }L^{T}L.\)

Proof

We can obtain Corollary 1 directly from Theorem 1 by taking \(Q= \frac{I}{\varepsilon r_{1}} + \frac{I}{\varepsilon r_{3}}\) (since \(\gamma = 0\) when \(\tau (t)\equiv \tau\)).

4 Numerical example

In the following, we give some numerical simulations to illustrate the results above. Consider the following memristor-based competitive neural networks with different time scales:

$$\begin{aligned}&\text{STM}: \varepsilon \dot{x}_i(t) = - x_i(t) +\sum \limits _{j = 1}^2 {a_{ij}(x_i)f_j (x_j(t))}\nonumber \\&\quad +\sum \limits _{j = 1}^2{b_{ij}(x_i)f_j(x_j(t-\tau (t)))} +H_{i}s_{i}(t),\nonumber \\&\text{LTM}: \dot{s}_{i}(t)=- s_{i}(t)+f_i(x_i(t)),\quad i=1,2, \end{aligned}$$
(20)

where \(\varepsilon =0.8,\, \tau (t) = 0.5\left| {\sin t} \right| ,\, f(x(t)) = \tanh (x(t)),\, H_{1} = 1.6,\, H_{2} = 0.3\), with initial values \(x_1(\theta )= -0.4,\, x_2(\theta ) = 0.5,\, s_1(\theta ) = 0.5,\, s_2(\theta ) = -0.3,\, \forall \theta \in [ -0.5,0]\), and

$$\begin{aligned} a_{11} (x_1)&= \left\{ \begin{array}{ll} 2.5, & \left| {x_1} \right| > 1, \\ 2.0, & \left| {x_1} \right| \le 1, \end{array} \right. \quad a_{12} (x_1 ) = \left\{ \begin{array}{ll} - 0.1, & \left| {x_1} \right| > 1, \\ - 0.05, & \left| {x_1} \right| \le 1, \end{array} \right. \quad a_{21} (x_2) = \left\{ \begin{array}{ll} - 0.15, & \left| {x_2} \right| > 1, \\ - 0.1, & \left| {x_2} \right| \le 1, \end{array} \right. \\ a_{22} (x_2)&= \left\{ \begin{array}{ll} 3.5, & \left| {x_2} \right| > 1, \\ 3.0, & \left| {x_2} \right| \le 1, \end{array} \right. \quad b_{11} (x_1) = \left\{ \begin{array}{ll} -2.0, & \left| {x_1} \right| > 1, \\ -1.5, & \left| {x_1} \right| \le 1, \end{array} \right. \quad b_{12} (x_1) = \left\{ \begin{array}{ll} - 0.5, & \left| {x_1} \right| > 1, \\ - 0.3, & \left| {x_1} \right| \le 1, \end{array} \right. \\ b_{21} (x_2)&= \left\{ \begin{array}{ll} - 0.3, & \left| {x_2} \right| > 1, \\ - 0.2, & \left| {x_2} \right| \le 1, \end{array} \right. \quad b_{22} (x_2) = \left\{ \begin{array}{ll} -2.0, & \left| {x_2} \right| > 1, \\ -1.5, & \left| {x_2} \right| \le 1. \end{array} \right. \end{aligned}$$

The corresponding response system is
$$\begin{aligned}&\text{STM}: \varepsilon \dot{y}_i(t) = - y_i(t) +\sum \limits _{j = 1}^2 \tilde{a}_{ij} f_j (y_j(t)) +\sum \limits _{j = 1}^2 \tilde{b}_{ij}f_j(y_j(t-\tau (t))) +H_{i}r_{i}(t)+ u_{i}(t),\nonumber \\&\text{LTM}: \dot{r}_{i}(t)= - r_{i}(t)+f_i(y_i(t)), \quad i=1,2, \end{aligned}$$
(21)

with initial values \(y_{1}(\theta ) = 0.3,\, y_{2}(\theta ) = -0.5,\, r_{1}(\theta ) = 0.2,\, r_{2}(\theta ) = -0.6,\, \forall \theta \in [-0.5,0]\), and controller \(u(t)=K_{1}e(t)+K_{2}e(t-\tau (t))\) with \(K_{1}=\left( \begin{array}{cc} -1 &{} 0 \\ 0 &{} -1 \\ \end{array} \right)\) and \(K_{2}=\left( \begin{array}{cc} 1 &{} 0 \\ 0 &{} 1 \\ \end{array} \right)\).
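A simulation of this example can be reproduced with a fixed-step Euler scheme and a history buffer for the delayed states. The sketch below is our reconstruction, not the authors' code: the step size and horizon are our choices, and the response-side weights are evaluated with the same state-dependent switching rule, which is one admissible selection of the inclusion (4):

```python
import numpy as np

dt, steps = 1e-3, 20000          # step size and horizon (our choices)
eps, tau_max = 0.8, 0.5
H = np.array([1.6, 0.3])
K1, K2 = -np.eye(2), np.eye(2)
T_jump = np.array([1.0, 1.0])    # switching thresholds T_i

A_hat   = np.array([[ 2.5, -0.10], [-0.15,  3.5]])
A_check = np.array([[ 2.0, -0.05], [-0.10,  3.0]])
B_hat   = np.array([[-2.0, -0.50], [-0.30, -2.0]])
B_check = np.array([[-1.5, -0.30], [-0.20, -1.5]])
f = np.tanh

def W(x, W_hat, W_check):
    # Row i switches on |x_i| versus T_jump[i], matching a_{ij}(x_i), b_{ij}(x_i)
    return np.where((np.abs(x) > T_jump)[:, None], W_hat, W_check)

buf = int(tau_max / dt)                        # history length covering tau_max
x_hist = np.tile([-0.4,  0.5], (buf + 1, 1))   # constant initial history on [-0.5, 0]
y_hist = np.tile([ 0.3, -0.5], (buf + 1, 1))
s = np.array([0.5, -0.3])
r = np.array([0.2, -0.6])

for k in range(steps):
    d = int(0.5 * abs(np.sin(k * dt)) / dt)    # delay tau(t) = 0.5|sin t| in steps
    x, y   = x_hist[-1], y_hist[-1]
    xd, yd = x_hist[-1 - d], y_hist[-1 - d]
    u = K1 @ (y - x) + K2 @ (yd - xd)          # controller (8)
    dx = (-x + W(x, A_hat, A_check) @ f(x) + W(x, B_hat, B_check) @ f(xd) + H * s) / eps
    dy = (-y + W(y, A_hat, A_check) @ f(y) + W(y, B_hat, B_check) @ f(yd) + H * r + u) / eps
    s, r = s + dt * (-s + f(x)), r + dt * (-r + f(y))
    x_hist = np.vstack([x_hist[1:], x + dt * dx])
    y_hist = np.vstack([y_hist[1:], y + dt * dy])

# e(t) = y - x and h(t) = r - s should decay toward zero, as in Fig. 1
print(np.abs(y_hist[-1] - x_hist[-1]), np.abs(r - s))
```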

Fig. 1 a Dynamical behavior of the synchronization error \(e_{1}(t)\). b Dynamical behavior of the synchronization error \(e_{2}(t)\). c Dynamical behavior of the synchronization error \(h_{1}(t)\). d Dynamical behavior of the synchronization error \(h_{2}(t)\)

Figure 1a–d depicts the synchronization errors of the state variables between the drive and response systems. As predicted by Theorem 1, the response system and the drive system are globally asymptotically synchronized under the controller \(u(t)\).

5 Conclusions

The memory property of the memristor enables us to build a new model of competitive neural networks with different time scales. By constructing a proper Lyapunov–Krasovskii functional and employing differential inclusion theory, a feedback controller is designed to achieve asymptotic synchronization of the coupled competitive neural networks. The proposed synchronization scheme is simple and easily realized. A simulation example is given to show the effectiveness of the theoretical results.