1 Introduction

Recently, memristor-based neural networks have been designed by replacing the resistors in primitive neural networks with memristors, because the hysteresis effects of memristors make such networks well suited to characterizing the nonvolatile feature of a memory cell, just as the neurons in the human brain do [17]. A memristive neural network can remember its past dynamical history, store a continuous set of states, and be “plastic” according to presynaptic and postsynaptic neuronal activity. Because of these features, studies of memristive neural networks would benefit many applications, such as associative memories [1], new classes of artificial neural systems [27], etc.

In addition, anti-synchronization control of neural networks [8, 9] plays an important role in many potential applications, e.g., non-volatile memories and neuromorphic devices that simulate learning and adaptive, spontaneous behavior. Moreover, the anti-synchronization analysis of memristive neural networks can provide a designer with an exciting variety of properties, richness of flexibility, and opportunities [10]. Therefore, anti-synchronization control of memristive neural networks is an important area of study.

Moreover, many researchers have concentrated on the dynamics of memristive systems with constant time delays [2, 3], time-varying delays [6], and distributed or bounded time-varying delays [11]. In [12], the authors studied mixed delays, which contain time-varying discrete delays and unbounded distributed delays. A proportional delay, in contrast, is an unbounded time-varying delay proportional to time, and it differs from the above types of delays. Proportional delays arise in practice, for example in Web quality-of-service (QoS) routing decisions. As an important mathematical model, the proportional delay system often arises in fields such as physics, biological systems, and control theory, and it has attracted many scholars’ interest [13–18]. To the best of the authors’ knowledge, few researchers have considered the dynamical behavior of anti-synchronization control for memristive neural networks with multiple proportional delays.

Motivated by the above discussions, our aim in this paper is to close this gap by dealing with the anti-synchronization problem for memristive neural networks with proportional delays. The main contributions of this paper are as follows. First, we study proportional delays in memristive neural networks for the first time. Second, the anti-synchronization control criteria for memristive neural networks complement and extend earlier publications. Last, by using the concept of Filippov solutions for differential equations with discontinuous right-hand sides, together with differential inclusion theory, some new criteria are derived to ensure anti-synchronization of memristive neural networks with multiple proportional delays; the proposed criteria are easy to verify and improve upon earlier published results.

2 Preliminaries

In this paper, solutions of all the systems considered in the following are intended in Filippov’s sense. \(co\{\hat{\xi },\check{\xi }\}\) denotes the closure of the convex hull generated by \(\hat{\xi }\) and \(\check{\xi }\). \([\cdot , \cdot ]\) represents an interval. For a continuous function \(k(t):\mathbb {R} \rightarrow \mathbb {R}\), \(D^{+}k(t)\) denotes the upper right Dini derivative, defined as \(D^{+}k(t)=\limsup _{h\rightarrow 0^{+}}\frac{1}{h}(k(t+h)-k(t))\).
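As a small numeric illustration (not part of the original analysis), for \(k(t)=|t|\) the upper right Dini derivative at \(t=0\) equals \(1\), since the forward difference quotient is identically \(1\) for \(h>0\):

```python
# Numeric sketch: upper right Dini derivative of k(t) = |t| at t = 0.
# The forward difference (k(h) - k(0)) / h tends to 1 as h -> 0+.
k = lambda t: abs(t)
vals = [(k(h) - k(0)) / h for h in (1e-1, 1e-3, 1e-6)]
print(vals)  # each value equals 1.0
```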

The Hopfield neural network model can be implemented in a circuit where the self-feedback connection weights and the connection weights are implemented by resistors. In [10, 19], the authors used memristors instead of resistors to build memristive Hopfield neural network models, where the time-varying delays are bounded. In the following, we describe a general class of memristive Hopfield neural networks with multiple proportional delays, where the delays are unbounded:

$$\begin{aligned} \dot{x}_{i}(t)=-x_{i}(t)+\sum _{j=1}^{n}a_{ij}(x_{i}(t))f_{j}(x_{j}(t)) +\sum _{j=1}^{n}b_{ij}(x_{i}(t))f_{j}(x_{j}(q_{ij}t)), \quad t\ge 0, \quad i=1,2,\ldots ,n, \end{aligned}$$
(1)

where

$$\begin{aligned} a_{ij}(x_{i}(t))= & {} \frac{M_{ij}}{\mathcal {C}{_{i}}}\times \mathrm{sgn}_{ij}, \\ b_{ij}(x_{i}(t))= & {} \frac{W_{ij}}{\mathcal {C}{_{i}}}\times \mathrm{sgn}_{ij}, \end{aligned}$$

and

$$\begin{aligned} \mathrm{sgn}_{ij}=\left\{ \begin{array}{rl} 1, \quad i\ne j,\\ -1, \quad i=j.\\ \end{array} \right. \end{aligned}$$

\(M_{ij}\) and \(W_{ij}\) denote the memductances of the memristors \(R_{ij}\) and \(\hat{R}_{ij}\), respectively, where \(R_{ij}\) represents the memristor between the neuron activation function \(f_{j}(x_{j}(t))\) and \(x_{i}(t)\), and \(\hat{R}_{ij}\) represents the memristor between \(f_{j}(x_{j}(q_{ij}t))\) and \(x_{i}(t)\). \(a_{ij}(x_{i}(t))\) and \(b_{ij}(x_{i}(t))\) represent the memristive synaptic connection weights, which denote the strength of connectivity between neurons \(j\) and \(i\) at times \(t\) and \(q_{ij}t\), respectively. Here \(q_{ij}\in (0,1]\) is a proportional delay factor, so that \(q_{ij}t=t-(1-q_{ij})t\), in which \((1-q_{ij})t\) corresponds to the time delay required in processing and transmitting a signal from the \(j\hbox {th}\) neuron to the \(i\hbox {th}\) neuron. The capacitance \(\mathcal {C}_{i}\) is constant, while the memductances \(M_{ij}\) and \(W_{ij}\) respond to changes in pinched hysteresis loops; hence \(a_{ij}(x_{i}(t))\) and \(b_{ij}(x_{i}(t))\) change as the pinched hysteresis loops change. According to the features of the memristor and its current–voltage characteristic,

$$\begin{aligned} a_{ij}(x_{i}(t))= & {} \left\{ \begin{array}{rl} \hat{a}_{ij}, \quad \mid x_{i}(t)\mid \le T,\\ \check{a}_{ij}, \quad \mid x_{i}(t)\mid > T,\\ \end{array} \right. \\ b_{ij}(x_{i}(t))= & {} \left\{ \begin{array}{rl} \hat{b}_{ij}, \quad \mid x_{i}(t)\mid \le T,\\ \check{b}_{ij}, \quad \mid x_{i}(t)\mid > T,\\ \end{array} \right. \end{aligned}$$

in which the switching jump \(T>0\) and \(\hat{a}_{ij}\), \(\check{a}_{ij},\, \hat{b}_{ij}\) and \(\check{b}_{ij}\) are constants, and \(\bar{a}_{ij}=\max \{\hat{a}_{ij}, \check{a}_{ij}\},\, \underline{a}_{ij}=\min \{\hat{a}_{ij}, \check{a}_{ij}\},\, \bar{b}_{ij}=\max \{\hat{b}_{ij}, \check{b}_{ij}\},\, \underline{b}_{ij}=\min \{\hat{b}_{ij}, \check{b}_{ij}\}\).
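As a sketch (the helper name below is hypothetical), the state-dependent switching of the weights can be coded as a simple piecewise function with switching jump \(T\):

```python
def memristive_weight(x_i, w_hat, w_check, T=1.0):
    """State-dependent synaptic weight: w_hat when |x_i| <= T, w_check otherwise.

    Mirrors the piecewise definitions of a_ij(x_i(t)) and b_ij(x_i(t))
    above, with switching jump T > 0.
    """
    return w_hat if abs(x_i) <= T else w_check

# Example: the weight jumps as the state crosses the threshold T = 1.
print(memristive_weight(0.5, 0.6, 0.8))  # 0.6 (|x_i| <= T)
print(memristive_weight(1.5, 0.6, 0.8))  # 0.8 (|x_i| > T)
```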

We give some definitions and assumptions which will be used in the following:

Definition 1

Suppose \(E\subseteq \mathbb {R}^{n}\); then \(x\rightarrow F(x)\) is called a set-valued map from \(E\) to \(\mathbb {R}^{n}\) if for each point \(x\in E\) there exists a nonempty set \(F(x)\subseteq \mathbb {R}^{n}\). A set-valued map \(F\) with nonempty values is said to be upper semicontinuous at \(x_{0}\in E\) if for any open set \(N\) containing \(F(x_{0})\) there exists a neighborhood \(M\) of \(x_{0}\) such that \(F(M)\subseteq N\). The map \(F(x)\) is said to have a closed (convex, compact) image if for each \(x\in E\), \(F(x)\) is closed (convex, compact).

Definition 2

(See [20]) For the system \(\dot{x}(t)=g(x)\), \(x\in \mathbb {R}^{n}\), with a discontinuous right-hand side, a set-valued map is defined as

$$\begin{aligned} G(t,x)=\bigcap _{\delta >0}\bigcap _{\mu (N)=0}co[g(B(x,\delta )\setminus N)], \end{aligned}$$

where \(co[E]\) is the closure of the convex hull of the set \(E\), \(B(x,\delta )=\{y:\Vert y-x\Vert \le \delta \}\), and \(\mu (N)\) is the Lebesgue measure of the set \(N\). A solution in the Filippov sense of the Cauchy problem for this system with initial condition \(x(0)=x_{0}\) is an absolutely continuous function \(x(t)\) which satisfies \(x(0)=x_{0}\) and the differential inclusion

$$\begin{aligned} \dot{x}(t)\in G(t,x). \end{aligned}$$

Assumption 1

The function \(f_{i}\) is odd and bounded, and satisfies a Lipschitz condition with Lipschitz constant \(L_{i}\), i.e.,

$$\begin{aligned} \mid f_{i}(x)-f_{i}(y)\mid \le L_{i}\mid x-y\mid , \end{aligned}$$

for all \(x,y\in \mathbb {R}\).

Assumption 2

For \( i,j=1,2,\ldots , n\),

$$\begin{aligned}&co\{\hat{a}_{ij},\check{a}_{ij}\}f_{j}(x_{j}(t))+co\{\hat{a}_{ij},\check{a}_{ij}\}f_{j}(y_{j}(t)) \\&\quad \subseteq co\{\hat{a}_{ij},\check{a}_{ij}\}(f_{j}(x_{j}(t))+f_{j}(y_{j}(t))),\\&co\{\hat{b}_{ij},\check{b}_{ij}\}f_{j}(x_{j}(t))+co\{\hat{b}_{ij},\check{b}_{ij}\}f_{j}(y_{j}(t)) \\&\quad \subseteq co\{\hat{b}_{ij},\check{b}_{ij}\}(f_{j}(x_{j}(t))+f_{j}(y_{j}(t))). \end{aligned}$$

The system (1) is a differential equation with a discontinuous right-hand side. Based on the theory of differential inclusions, if \(x_{i}(t)\) is a solution of (1) in the sense of Filippov, then system (1) can be rewritten as the following differential inclusion:

$$\begin{aligned}&dx_{i}(t)\in \bigg [{-}x_{i}(t)+\sum _{j=1}^{n}co\{a_{ij}(x_{i}(t))\} f_{j}(x_{j}(t)) +\sum _{j=1}^{n}co\{b_{ij}(x_{i}(t))\}f_{j}(x_{j}(q_{ij}t))\bigg ]dt, \nonumber \\&\quad t\ge 0, \quad i=1,2,\ldots , n, \end{aligned}$$
(2)

or equivalently, there exist \(a_{ij}(t)\in co\{a_{ij}(x_{i}(t))\}\) and \(b_{ij}(t)\in co\{b_{ij}(x_{i}(t))\}\), such that

$$\begin{aligned} dx_{i}(t)&= \bigg [{-}x_{i}(t)+\sum _{j=1}^{n}a_{ij}(t) f_{j}(x_{j}(t)) +\sum _{j=1}^{n}b_{ij}(t) f_{j}(x_{j}(q_{ij}t))\bigg ]dt, \nonumber \\&t\ge 0, \quad i=1,2,\ldots , n. \end{aligned}$$
(3)

Lemma 1

If Assumption 1 holds, then system (1) has at least one local solution \(x(t)\), and this local solution can be extended to the interval \([0,+\infty )\) in the sense of Filippov.

In this paper, we consider system (2) or (3) as the drive system and the corresponding response system is:

$$\begin{aligned}&dy_{i}(t)\in \bigg [{-}y_{i}(t)+\sum _{j=1}^{n}co\{a_{ij}(y_{i}(t))\} f_{j}(y_{j}(t)) +\sum _{j=1}^{n}co\{b_{ij}(y_{i}(t))\}f_{j}(y_{j}(q_{ij}t))\bigg ]dt, \nonumber \\&t\ge 0, \quad i=1,2,\ldots , n, \end{aligned}$$
(4)

or equivalently, there exist \(a_{ij}(t)\in co\{a_{ij}(y_{i}(t))\}\) and \(b_{ij}(t)\in co\{b_{ij}(y_{i}(t))\}\), such that

$$\begin{aligned} dy_{i}(t)&= \bigg [{-}y_{i}(t)+\sum _{j=1}^{n}a_{ij}(t) f_{j}(y_{j}(t)) +\sum _{j=1}^{n}b_{ij}(t) f_{j}(y_{j}(q_{ij}t))\bigg ]dt, \nonumber \\&t\ge 0, \quad i=1,2,\ldots , n. \end{aligned}$$
(5)

Let \(e(t)=(e_{1}(t),e_{2}(t),\ldots , e_{n}(t))^{T}\) be the anti-synchronization error, where \(e_{i}(t)=x_{i}(t)+y_{i}(t)\). According to Assumption 2, by using the theories of set-valued maps and differential inclusions, we obtain the following anti-synchronization error system:

$$\begin{aligned}&de_{i}(t) \in \bigg [{-}e_{i}(t)+\sum _{j=1}^{n}co\{a_{ij}(e_{i}(t))\} F_{j}(e_{j}(t)) +\sum _{j=1}^{n}co\{b_{ij}(e_{i}(t))\}F_{j}(e_{j}(q_{ij}t))\bigg ]dt, \nonumber \\&\quad t\ge 0, \quad i=1,2,\ldots , n, \end{aligned}$$
(6)

or equivalently, there exist \(a_{ij}(t)\in co\{a_{ij}(e_{i}(t))\}\) and \(b_{ij}(t)\in co\{b_{ij}(e_{i}(t))\}\), such that

$$\begin{aligned} de_{i}(t)&= \bigg [{-}e_{i}(t)+\sum _{j=1}^{n}a_{ij}(t) F_{j}(e_{j}(t)) +\sum _{j=1}^{n}b_{ij}(t) F_{j}(e_{j}(q_{ij}t))\bigg ]dt, \nonumber \\&t\ge 0, \quad i=1,2,\ldots , n, \end{aligned}$$
(7)

where \(F_{j}(e_{j}(t))=f_{j}(x_{j}(t))+f_{j}(y_{j}(t))\) and \(F_{j}(e_{j}(q_{ij}t))=f_{j}(x_{j}(q_{ij}t))+f_{j}(y_{j}(q_{ij}t))\).

Remark 1

The model of memristive neural networks with multiple proportional delays in (6) or (7) differs from the neural networks with multiple proportional delays in [13–18], so those stability results cannot be applied to it directly.

Remark 2

Equation (7) is a discontinuous system with proportional delays. Since proportional delays can be transformed into common time-varying delays, the discontinuous system (7) with proportional delays has a local solution. The proof is similar to the proof of the local existence theorem for Filippov solutions in [20] and to the proof of Theorem 6 in Ref. [21], so it is omitted here.

Remark 3

According to Assumption 1, the activation functions \(f_{j}\) are odd. Then \(F_{i}(e_{i}(t))\) possesses the following properties:

$$\begin{aligned} \mid F_{i}(e_{i}(t))\mid \, \le L_{i}\mid e_{i}(t)\mid , \end{aligned}$$
(8)

and

$$\begin{aligned} F_{i}(0)=f_{i}(x_{i}(t))+f_{i}(-x_{i}(t))=0, \quad i=1,2,\ldots ,n. \end{aligned}$$
(9)

We transform system (7) by

$$\begin{aligned} z_{i}(t)=e_{i}(e^{t}), \quad i=1,2,\ldots ,n, \end{aligned}$$
(10)

then we obtain the following system (see [16–18]):

$$\begin{aligned} \dot{z}_{i}(t)= & {} e^{t}\Big \{-z_{i}(t)+\sum _{j=1}^{n}a_{ij}(t)F_{j}(z_{j}(t)) +\sum _{j=1}^{n}b_{ij}(t)F_{j}(z_{j}(t-\tau _{ij}))\Big \}, \end{aligned}$$
(11)

where \(\tau _{ij}=-\log q_{ij}\ge 0,\, \tau =\max _{1\le i,j\le n}\{\tau _{ij}\}\).
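The substitution \(z_{i}(t)=e_{i}(e^{t})\) converts each proportional delay into the constant delay \(\tau _{ij}=-\log q_{ij}\), since \(e_{j}(q_{ij}e^{t})=e_{j}(e^{t-\tau _{ij}})=z_{j}(t-\tau _{ij})\). A minimal numeric check of this identity (with an arbitrary factor \(q\) and a stand-in trajectory) is:

```python
import math

q = 0.4                       # proportional delay factor, 0 < q <= 1
tau = -math.log(q)            # constant delay after the change of variables

e = lambda s: math.sin(s)     # arbitrary stand-in for an error trajectory
z = lambda t: e(math.exp(t))  # the substitution z(t) = e(e^t)

t = 1.3
# The proportional delay q * e^t becomes the constant delay tau in z-time.
assert abs(e(q * math.exp(t)) - z(t - tau)) < 1e-12
```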

3 Main Results

In this section, two anti-synchronization criteria are given under designed controllers for memristive neural networks with proportional delays.

Theorem 1

If there exist positive diagonal matrices \(P=diag \{P_{1}, P_{2}, \ldots , P_{n}\},\, K=diag\{k_{1}, k_{2}, \ldots , k_{n}\},\, N_{i}=diag\{n_{i1},n_{i2},\ldots ,n_{in}\}\) and a constant \(r>0\), such that the following inequality holds:

$$\begin{aligned}&-2PL^{-1}-2PKL^{-1}+P\bar{A}+\bar{A}^{T}P \nonumber \\&\quad +\, r^{-1}\sum _{i=1}^{n}PW_{i}N_{i}^{-1}W_{i}^{T}P+r\sum _{i=1}^{n}N_{i}Q_{i}^{-1}<0, \end{aligned}$$
(12)

where \(W_{i}\) is an \(n\times n\) matrix whose \(i\)th row is \((\overline{b}_{i1}, \overline{b}_{i2},\ldots , \overline{b}_{in})\) and whose other rows are zero, \(Q_{i}^{-1}=diag \big (q_{i1}^{-1}, q_{i2}^{-1}, \ldots , q_{in}^{-1}\big )\) for \(i=1,2, \ldots ,n\), \(L=diag(L_{1},L_{2},\ldots , L_{n})\), and \(\bar{A}=(\bar{a}_{ij})_{n\times n}\), then the drive system and the response system become anti-synchronized under the controller \(u_{i}(t)\),

$$\begin{aligned} u_{i}(t)=-k_{i}(y_{i}(t)+x_{i}(t)), \quad i=1,2\ldots ,n. \end{aligned}$$
(13)

Proof

The error system (7) under the controller (13) can be described by

$$\begin{aligned} de_{i}(t)&= \Bigg [{-}e_{i}(t)+\sum _{j=1}^{n}a_{ij}(t) F_{j}(e_{j}(t)) +\sum _{j=1}^{n}b_{ij}(t) F_{j}(e_{j}(q_{ij}t))-k_{i}e_{i}(t)\Bigg ]dt, \nonumber \\&t\ge 0, \quad i=1,2,\ldots , n. \end{aligned}$$
(14)

Construct the following Lyapunov functional:

$$\begin{aligned} V(e(t))= & {} \sum _{i=1}^{n}2P_{i}\int _{0}^{e_{i}(t)}F_{i}(s)ds +\sum _{i=1}^{n}\sum _{j=1}^{n}\frac{r}{q_{ij}}\int _{q_{ij}t}^{t}n_{ij}F_{j}^{2}(e_{j}(s))ds, \end{aligned}$$
(15)

where \(P_{i}>0, n_{ij}>0\), and \(r>0\).

According to (8), we get

$$\begin{aligned} F_{i}^{2}(e_{i}(\cdot ))\le L_{i}e_{i}(\cdot )\cdot F_{i}(e_{i}(\cdot )). \end{aligned}$$
(16)

From (15) and (16), if \(e(t)=(e_{1}(t), e_{2}(t), \ldots , e_{n}(t))^{T}=0\), i.e., \(e_{i}(t)=0\) for \(i=1,2,\ldots ,n\), then \(F_{i}(e_{i}(t))=0\) for \(i=1,2,\ldots ,n\), and thus \(V(e(t))=0\). Next we show that \(V(e(t))>0\) when \(e(t)\ne 0\). \(\square \)

In fact, if \(e(t)\ne 0\), there exists at least one index \(i\) such that \(e_{i}(t)\ne 0\). By the integral mean value theorem, \(\int _{0}^{e_{i}(t)}F_{i}(s)ds=F_{i}(\theta _{i})e_{i}(t)\), where \(\theta _{i}\) is a number between \(0\) and \(e_{i}(t)\). From (8), when \(e_{i}(t)>0\) we get \(\theta _{i}>0\), \(F_{i}(\theta _{i})\ge 0\), and \(F_{i}(\theta _{i})e_{i}(t)\ge 0\); when \(e_{i}(t)< 0\) we have \(\theta _{i}<0\), \(F_{i}(\theta _{i})\le 0\), and \(F_{i}(\theta _{i})e_{i}(t)\ge 0\). Thus \(\int _{0}^{e_{i}(t)}F_{i}(s)ds\ge 0\), and \(\sum _{i=1}^{n}2P_{i}\int _{0}^{e_{i}(t)}F_{i}(s)ds\ge 0\) for \(e(t)\ne 0\). Further, we prove that \(\sum _{i=1}^{n}2P_{i}\int _{0}^{e_{i}(t)}F_{i}(s)\,ds=0\) with \(e(t)\ne 0\) cannot hold. Assume \(\sum _{i=1}^{n}2P_{i}\,\int _{0}^{e_{i}(t)}F_{i}(s)ds=0\) with \(e(t)\ne 0\); then there must be numbers \(\theta _{i}\), \(i=1,2,\ldots ,n\), each between \(0\) and \(e_{i}(t)\), such that \(\sum _{i=1}^{n}2P_{i}\int _{0}^{e_{i}(t)}F_{i}(s)\,ds=\sum _{i=1}^{n}2P_{i}\,F_{i}(\theta _{i})e_{i}(t)=0\). Thus we obtain \(F_{i}(\theta _{i})=0\) or \(e_{i}(t)=0\) for each \(i=1,2,\ldots ,n\). If \(e_{i}(t)=0\) for \(i=1,2, \ldots , n\), then \(e(t)=0\), which contradicts \(e(t)\ne 0\). If \(F_{i}(\theta _{i})=0\) for \(i=1,2,\ldots ,n\), then \(F_{i}(\theta _{i})=f_{i}(x_{i}+\theta _{i})+f_{i}(y_{i})=0\), i.e., \(f_{i}(x_{i}+\theta _{i})=-f_{i}(y_{i})\) for \(i=1,2,\ldots ,n\). By Assumption 1, \(f_{i}(\theta _{i}+x_{i})\) would then be a constant function for \(\theta _{i}\in [0, e_{i}(t)]\) or \(\theta _{i}\in [e_{i}(t),0]\), which contradicts the nonlinearity of the activation function \(f_{i}(x_{i}(t))\). Thus \(\sum _{i=1}^{n}2P_{i}\int _{0}^{e_{i}(t)}F_{i}(s)ds >0\) when \(e(t)\ne 0\).

That is to say, the first term of \(V(e(t))\) is positive definite, and clearly the second term of \(V(e(t))\) is nonnegative. Hence \(V(e(t))>0\) for \(e(t)\ne 0\), and \(V(e(t))\) is positive definite.

Calculating the upper right Dini derivative of \(V(e(t))\) along the trajectory of system (14), we obtain

$$\begin{aligned} D^{+}V(e(t))= & {} 2\sum _{i=1}^{n}P_{i}F_{i}(e_{i}(t))\dot{e}_{i}(t) +\sum _{i=1}^{n}\sum _{j=1}^{n}\frac{rn_{ij}}{q_{ij}}\big [F^{2}_{j}(e_{j}(t))-F_{j}^{2}(e_{j}(q_{ij}t))q_{ij}\big ] \nonumber \\= & {} 2\sum _{i=1}^{n}P_{i}F_{i}(e_{i}(t))\Bigg [\!-e_{i}(t)+\sum _{j=1}^{n}a_{ij}(t)F_{j}(e_{j}(t)) \nonumber \\&+\sum _{j=1}^{n}b_{ij}(t)F_{j}(e_{j}(q_{ij}t))-k_{i}e_{i}(t)\Bigg ] \nonumber \\&+\sum _{i=1}^{n}\sum _{j=1}^{n}\frac{rn_{ij}}{q_{ij}}\big [F^{2}_{j}(e_{j}(t))-F^{2}_{j}(e_{j}(q_{ij}t))q_{ij}\big ] \nonumber \\= & {} -2F^{T}(e(t))Pe(t)+2F^{T}(e(t))PA(t)F(e(t)) \nonumber \\&+2\sum _{i=1}^{n}P_{i}F_{i}(e_{i}(t))\big [b_{i1}(t),b_{i2}(t),\ldots ,b_{in}(t)\big ]F(e(\overline{q}_{i}t)) \nonumber \\&-2F^{T}(e(t))PKe(t)+r\sum _{i=1}^{n}F^{T}(e(t))N_{i}Q_{i}^{-1}F(e(t)) \nonumber \\&-r\sum _{i=1}^{n}F^{T}(e(\overline{q}_{i}t))N_{i}F(e(\overline{q}_{i}t)), \end{aligned}$$
(17)

where

$$\begin{aligned} F(e(\overline{q}_{i}t))=\big (F_{1}(e_{1}(q_{i1}t)), F_{2}(e_{2}(q_{i2}t)), \ldots , F_{n}(e_{n}(q_{in}t))\big )^{T}, \end{aligned}$$

\(Q_{i}^{-1}=diag\big (q_{i1}^{-1}, q_{i2}^{-1}, \ldots , q_{in}^{-1}\big )\), and \(F(e(t))=\big (F_{1}(e_{1}(t)),\, F_{2}(e_{2}(t)), \ldots ,\, F_{n}(e_{n}(t))\big )^{T},\, P=diag(P_{1},P_{2},\ldots , P_{n}),\, A(t)\,=(a_{ij}(t))_{n\times n}\).

Note that the following estimate holds:

$$\begin{aligned}&2\sum _{i=1}^{n}P_{i}F_{i}(e_{i}(t))\big [\overline{b}_{i1},\overline{b}_{i2},\ldots ,\overline{b}_{in}\big ]F(e(\overline{q}_{i}t)) \nonumber \\&\quad =2\sum _{i=1}^{n}F^{T}(e(t))PW_{i}F(e(\overline{q}_{i}t)) \nonumber \\&\quad \le r^{-1}F^{T}(e(t))\left( \sum _{i=1}^{n}PW_{i}N_{i}^{-1}W_{i}^{T}P\right) F(e(t)) +r\sum _{i=1}^{n}F^{T}(e(\overline{q}_{i}t))N_{i}F(e(\overline{q}_{i}t)).\qquad \quad \end{aligned}$$
(18)

Substituting (18) into (17) yields

$$\begin{aligned}&D^{+}V(e(t))\le -2F^{T}(e(t))Pe(t)-2F^{T}(e(t))PKe(t) \nonumber \\&\quad +2F^{T}(e(t))P\bar{A}F(e(t)) \nonumber \\&\quad +r^{-1}F^{T}(e(t))\left( \sum _{i=1}^{n}PW_{i}N_{i}^{-1}W_{i}^{T}P\right) F(e(t)) \nonumber \\&\quad +r\sum _{i=1}^{n}F^{T}(e(\overline{q}_{i}t))N_{i}F(e(\overline{q}_{i}t)) \nonumber \\&\quad +r\sum _{i=1}^{n}F^{T}(e(t))N_{i}Q_{i}^{-1}F(e(t)) \nonumber \\&\quad -r \sum _{i=1}^{n}F^{T}(e(\overline{q}_{i}t))N_{i}F(e(\overline{q}_{i}t)) \nonumber \\&\quad =-2F^{T}(e(t))Pe(t)-2F^{T}(e(t))PKe(t) \nonumber \\&\quad +\,2F^{T}(e(t))P\bar{A}F(e(t))+F^{T}(e(t))\Bigg [r^{-1}\sum _{i=1}^{n}PW_{i}N_{i}^{-1}W_{i}^{T}P +\sum _{i=1}^{n}rN_{i}Q_{i}^{-1}\Bigg ]F(e(t)) \nonumber \\&\quad +\,2F^{T}(e(t))PL^{-1}F(e(t))-2F^{T}(e(t))PL^{-1}F(e(t)) \nonumber \\&\quad +\,2F^{T}(e(t))PKL^{-1}F(e(t))-2F^{T}(e(t))PKL^{-1}F(e(t)). \end{aligned}$$
(19)

From Assumption 1, we obtain

$$\begin{aligned} -\sum _{i=1}^{n}L_{i}e_{i}(t)F_{i}(e_{i}(t))\le -\sum _{i=1}^{n}F^{2}_{i}(e_{i}(t)), \end{aligned}$$
(20)

that is

$$\begin{aligned} -2F^{T}(e(t))e(t)\le -2F^{T}(e(t))L^{-1}F(e(t)). \end{aligned}$$
(21)

Thus we have

$$\begin{aligned} -2F^{T}(e(t))Pe(t)+2F^{T}(e(t))PL^{-1}F(e(t))\le 0, \end{aligned}$$
(22)

and

$$\begin{aligned} -2F^{T}(e(t))PKe(t)+2F^{T}(e(t))PKL^{-1}F(e(t))\le 0. \end{aligned}$$
(23)

Suppose \(F(e(t))\ne 0\); since \(F(0)=0\), this implies \(e(t)\ne 0\). From (19)–(23), we get

$$\begin{aligned} D^{+}V(e(t))\le & {} F^{T}(e(t))\Bigg [-2PL^{-1}-2PKL^{-1} +P\bar{A}+\bar{A}^{T}P\nonumber \\&+r^{-1}\sum _{i=1}^{n}PW_{i}N_{i}^{-1}W_{i}^{T}P +r\sum _{i=1}^{n}N_{i}Q_{i}^{-1}\Bigg ]F(e(t)). \end{aligned}$$
(24)

Thus, if (12) holds, then \(D^{+}V(e(t))\le 0\).

Now consider the case where \(F(e(t))=0\) and \(e(t)\ne 0\); then we have

$$\begin{aligned} D^{+}V(e(t))= & {} -\sum _{i=1}^{n}\sum _{j=1}^{n}rn_{ij}F_{j}^{2}(e_{j}(q_{ij}t)) \nonumber \\= & {} -r\sum _{i=1}^{n}F^{T}(e(\overline{q}_{i}t))N_{i}F(e(\overline{q}_{i}t)). \end{aligned}$$
(25)

If there exists at least one index \(i\) such that \(F(e(\overline{q}_{i}t))\ne 0\), we obtain \(D^{+}V(e(t))<0\).

Assume instead that \(F(e(\overline{q}_{i}t))=0\) for all \(i\). Since \(F(e(\overline{q}_{i}t))=(F_{1}(e_{1}(q_{i1}t)),F_{2}(e_{2}(q_{i2}t)),\ldots , F_{n}(e_{n}(q_{in}t)))^{T}\), we get \(F_{j}(e_{j}(q_{ij}t))=0\) for \(i,j=1,2,\ldots ,n\), i.e., \(F_{j}(e_{j}(q_{ij}t))=f_{j}(x_{j}(q_{ij}t))+f_{j}(y_{j}(q_{ij}t))=0\), so \(f_{j}(x_{j}(q_{ij}t))=-f_{j}(y_{j}(q_{ij}t))\). Because \(f_{j}\) is an odd function, we get \(x_{j}(q_{ij}t)=-y_{j}(q_{ij}t)\), so

$$\begin{aligned} e_{j}(q_{ij}t)=0, \quad i,j=1,2,\ldots ,n. \end{aligned}$$
(26)

By \(e(t)\ne 0\), we have \(e(\overline{q}_{i}t)\ne 0\), so there exists one index \(j\) such that \(e_{j}(q_{ij}t)\ne 0\), which contradicts (26). Thus we have proven that \(D^{+}V(e(t))<0\) for every \(e(t)\ne 0\).

Finally, let \(e(t)=0\), which implies \(F(e(t))=0\); then

$$\begin{aligned} D^{+}V(e(t))= & {} -\sum _{i=1}^{n}\sum _{j=1}^{n}rn_{ij}F_{j}^{2}(e_{j}(q_{ij}t)) \nonumber \\= & {} -r\sum _{i=1}^{n}F^{T}(e(\overline{q}_{i}t))N_{i}F(e(\overline{q}_{i}t)). \end{aligned}$$
(27)

If there exists one index \(i\) such that \(F(e(\overline{q}_{i}t))\ne 0\), we get \(D^{+}V(e(t))<0\), and \(D^{+}V(e(t))=0\) if and only if \(F(e(\overline{q}_{i}t))=0\) for \(i=1,2,\ldots ,n\). Hence \(D^{+}V(e(t))\) is negative definite, so the error \(e(t)\) converges to zero, i.e., the drive system and the response system are anti-synchronized. This completes the proof.

In the following, we give another anti-synchronization criterion, which is delay-independent.

Theorem 2

If there exist positive diagonal matrices \(P=diag \{P_{1}, P_{2}, \ldots , P_{n}\}\), \(K=diag\{k_{1}, k_{2}, \ldots , k_{n}\}\), \(N_{i}=diag\{n_{i1},n_{i2},\ldots ,n_{in}\}\) and a constant \(r>0\), such that the following inequality holds:

$$\begin{aligned}&P\bar{A}+\bar{A}^{T}P+\sum _{i=1}^{n}\big (rN_{i}+r^{-1}PW_{i}N_{i}^{-1}W_{i}^{T}P\big ) \nonumber \\&\quad -2PL^{-1}-2PKL^{-1}<0, \end{aligned}$$
(28)

where \(L=diag(L_{1},L_{2},\ldots , L_{n})\), \(\bar{A}=(\bar{a}_{ij})_{n\times n}\), and \(W_{i}\) is an \(n\times n\) matrix whose \(i\hbox {th}\) row is \((\overline{b}_{i1},\overline{b}_{i2},\ldots ,\overline{ b}_{in})\) and whose other rows are all zeros, \(i=1,2, \ldots ,n\). The error system under the controller \(u_{i}(t)\) can be described by

$$\begin{aligned} \dot{z}_{i}(t)= & {} e^{t}\Bigg \{{-}z_{i}(t)+\sum _{j=1}^{n}a_{ij}(t)F_{j}(z_{j}(t)) +\sum _{j=1}^{n}b_{ij}(t)F_{j}(z_{j}(t-\tau _{ij}))+u_{i}(t)\Bigg \},\quad \end{aligned}$$
(29)

and

$$\begin{aligned} u_{i}(t)=-k_{i}z_{i}(t), \quad i=1,2,\ldots ,n, \end{aligned}$$
(30)

then the drive system and the response system become anti-synchronized under the controller (30).

Proof

The system (29) under the controller (30) can be described by

$$\begin{aligned} \dot{z}_{i}(t)= & {} e^{t}\Bigg \{{-}z_{i}(t)+\sum _{j=1}^{n}a_{ij}(t)F_{j}(z_{j}(t)) +\sum _{j=1}^{n}b_{ij}(t)F_{j}(z_{j}(t-\tau _{ij}))-k_{i}z_{i}(t)\Bigg \},\qquad \end{aligned}$$
(31)

Consider the following Lyapunov functional:

$$\begin{aligned} V(z(t))= & {} 2\sum _{i=1}^{n}e^{-t}P_{i}\int _{0}^{z_{i}(t)}F_{i}(s)ds +\sum _{i=1}^{n}\sum _{j=1}^{n}\int _{t-\tau _{ij}}^{t}rn_{ij}F^{2}_{j}(z_{j}(s))ds, \end{aligned}$$
(32)

where \(P_{i}>0,\, n_{ij}>0,i,j=1,2,\ldots ,n\).

Since the first term of \(V(z(t))\) is positive definite in \(z(t)\) (by the argument in the proof of Theorem 1) and the second term \(\sum _{i=1}^{n}\sum _{j=1}^{n}\int _{t-\tau _{ij}}^{t}rn_{ij}F_{j}^{2}(z_{j}(s))ds\) is nonnegative, with \(V(0)\equiv 0\), \(V(z(t))\) is positive definite. Moreover, \(V(z(t))\le U(z(t))\), where

$$\begin{aligned} U(z(t))= & {} 2\sum _{i=1}^{n}P_{i}\int _{0}^{z_{i}(t)}F_{i}(s)ds+\sum _{i=1}^{n}\sum _{j=1}^{n}\int _{t-\tau _{ij}}^{t}rn_{ij}F^{2}_{j}(z_{j}(s))ds \end{aligned}$$

is positive definite. \(\square \)

We calculate the upper right Dini derivative of \(V(z(t))\) along the trajectories of system (31):

$$\begin{aligned} D^{+}V(z(t))= & {} -2\sum _{i=1}^{n}e^{-t}P_{i}\int _{0}^{z_{i}(t)}F_{i}(s)ds \nonumber \\&+2\sum _{i=1}^{n}e^{-t}P_{i}F_{i}(z_{i}(t))\dot{z}_{i}(t) \nonumber \\&+\sum _{i=1}^{n}\sum _{j=1}^{n}n_{ij}r\big [F_{j}^{2}(z_{j}(t))-F_{j}^{2}(z_{j}(t-\tau _{ij}))\big ] \nonumber \\&\le 2\sum _{i=1}^{n}e^{-t}P_{i}F_{i}(z_{i}(t))\dot{z}_{i}(t) \nonumber \\&+\sum _{i=1}^{n}\sum _{j=1}^{n}n_{ij}r\big [F_{j}^{2}(z_{j}(t))-F_{j}^{2}(z_{j}(t-\tau _{ij}))\big ] \nonumber \\= & {} 2\sum _{i=1}^{n}e^{-t}P_{i}F_{i}(z_{i}(t))\Bigg \{e^{t}\Bigg [-z_{i}(t)+\sum _{j=1}^{n}a_{ij}(t)F_{j}(z_{j}(t)) \nonumber \\&+\sum _{j=1}^{n}b_{ij}(t)F_{j}(z_{j}(t-\tau _{ij}))-k_{i}z_{i}(t)\Bigg ]\Bigg \} \nonumber \\&+\sum _{i=1}^{n}\sum _{j=1}^{n}n_{ij}r\big [F_{j}^{2}(z_{j}(t))-F_{j}^{2}(z_{j}(t-\tau _{ij}))\big ] \nonumber \\&\le -2F^{T}(z(t))Pz(t)+2F^{T}(z(t))P\bar{A}F(z(t)) \nonumber \\&+2\sum _{i=1}^{n}P_{i}F_{i}(z_{i}(t))\big [\overline{b}_{i1},\overline{b}_{i2},\ldots ,\overline{b}_{in}\big ]F(z(t-\overline{\tau }_{i})) \nonumber \\&-2F^{T}(z(t))PKz(t) \nonumber \\&+\sum _{i=1}^{n}\Big [rF^{T}(z(t))N_{i}F(z(t)) -rF^{T}(z(t-\overline{\tau }_{i}))N_{i}F(z(t-\overline{\tau }_{i}))\Big ], \end{aligned}$$
(33)

where \(F(z(t-\overline{\tau }_{i}))=(F_{1}(z_{1}(t-\tau _{i1})),F_{2}(z_{2}(t-\tau _{i2})),\, \ldots ,F_{n}(z_{n}(t-\tau _{in})))^{T},\, i=1,2,\ldots ,n\).

The following condition holds:

$$\begin{aligned}&2\sum _{i=1}^{n}P_{i}F_{i}(z_{i}(t))[\overline{b}_{i1},\overline{b}_{i2},\ldots ,\overline{b}_{in}]F(z(t-\overline{\tau }_{i})) \nonumber \\&\quad =2\sum _{i=1}^{n}F^{T}(z(t))PW_{i}F(z(t-\overline{\tau }_{i})) \nonumber \\&\quad \le r^{-1}F^{T}(z(t))\left( \sum _{i=1}^{n}PW_{i}N_{i}^{-1}W_{i}^{T}P\right) F(z(t)) \nonumber \\&\quad +r\sum _{i=1}^{n}F^{T}(z(t-\overline{\tau }_{i}))N_{i}F(z(t-\overline{\tau }_{i})), \end{aligned}$$
(34)

then we get

$$\begin{aligned}&D^{+}V(z(t))\le -2F^{T}(z(t))Pz(t)-2F^{T}(z(t))PKz(t) \nonumber \\&\quad +2F^{T}(z(t))P\bar{A}F(z(t)) \nonumber \\&\quad +r^{-1}F^{T}(z(t)) \Bigg [\sum _{i=1}^{n}PW_{i}N_{i}^{-1}W_{i}^{T}P\Bigg ]F(z(t)) \nonumber \\&\quad +r\sum _{i=1}^{n}F^{T}(z(t-\overline{\tau }_{i}))N_{i}F(z(t-\overline{\tau }_{i})) \nonumber \\&\quad +\sum _{i=1}^{n}[rF^{T}(z(t))N_{i}F(z(t))-rF^{T}(z(t-\overline{\tau }_{i}))N_{i}F(z(t-\overline{\tau }_{i}))] \nonumber \\&\quad \le F^{T}(z(t))\Bigg [P\bar{A}+\bar{A}^{T}P+\sum _{i=1}^{n}(rN_{i}+r^{-1}PW_{i}N_{i}^{-1}W_{i}^{T}P) \nonumber \\&\quad -2PL^{-1}-2PKL^{-1}\Bigg ]F(z(t)). \end{aligned}$$
(35)

If \(P\bar{A}+\bar{A}^{T}P+\sum _{i=1}^{n}\big (rN_{i}+r^{-1}PW_{i}N_{i}^{-1}W_{i}^{T}P\big ) -2PL^{-1}-2PKL^{-1}<0\), then, similar to the proof of Theorem 1, we get \(D^{+}V(z(t))<0\) for any \(z(t)\ne 0\), and \(D^{+}V(z(t))=0\) if and only if \(z(t)=F(z(t))=F(z(t-\overline{\tau }_{i}))=0\), \(i=1,2,\ldots ,n\). This completes the proof.

4 Illustrative Example

In this section, two numerical examples are given to illustrate the effectiveness of the results obtained above.

Example 1

We consider a two-dimensional memristive neural network as follows:

$$\begin{aligned} \left\{ \begin{array}{rl} \dot{e}_{1}(t)=-e_{1}(t)+a_{11}(e_{1}(t))F(e_{1}(t))\\ +a_{12}(e_{1}(t))F(e_{2}(t))+b_{11}(e_{1}(t))F(e_{1}(q_{11}t))\\ +b_{12}(e_{1}(t))F(e_{2}(q_{12}t))-k_{1}e_{1}(t),\\ \dot{e}_{2}(t)=-e_{2}(t)+a_{21}(e_{2}(t))F(e_{1}(t))\\ +a_{22}(e_{2}(t))F(e_{2}(t))+b_{21}(e_{2}(t))F(e_{1}(q_{21}t))\\ +b_{22}(e_{2}(t))F(e_{2}(q_{22}t))-k_{2}e_{2}(t),\\ \end{array} \right. \end{aligned}$$
(36)

where

$$\begin{aligned} a_{11}(e_{1}(t))= & {} \left\{ \begin{array}{rl} 0.6,\quad \mid e_{1}(t)\mid \le 1,\\ 0.8,\quad \mid e_{1}(t)\mid > 1,\\ \end{array} \right. \\ a_{12}(e_{1}(t))= & {} \left\{ \begin{array}{rl} 0.2,\quad \mid e_{1}(t)\mid \le 1,\\ 0.3,\quad \mid e_{1}(t)\mid > 1,\\ \end{array} \right. \\ a_{21}(e_{2}(t))= & {} \left\{ \begin{array}{rl} 0.5,\quad \mid e_{2}(t)\mid \le 1,\\ 0.7,\quad \mid e_{2}(t)\mid > 1,\\ \end{array} \right. \\ a_{22}(e_{2}(t))= & {} \left\{ \begin{array}{rl} 0.1,\quad \mid e_{2}(t)\mid \le 1,\\ 0.3,\quad \mid e_{2}(t)\mid > 1,\\ \end{array} \right. \\ b_{11}(e_{1}(t))= & {} \left\{ \begin{array}{rl} 0.3,\quad \mid e_{1}(t)\mid \le 1,\\ 0.5,\quad \mid e_{1}(t)\mid > 1,\\ \end{array} \right. \\ b_{12}(e_{1}(t))= & {} \left\{ \begin{array}{rl} 0.8,\quad \mid e_{1}(t)\mid \le 1,\\ 1,\quad \mid e_{1}(t)\mid > 1,\\ \end{array} \right. \\ b_{21}(e_{2}(t))= & {} \left\{ \begin{array}{rl} 0.4,\quad \mid e_{2}(t)\mid \le 1,\\ 0.5,\quad \mid e_{2}(t)\mid > 1,\\ \end{array} \right. \\ b_{22}(e_{2}(t))= & {} \left\{ \begin{array}{rl} 0.1,\quad \mid e_{2}(t)\mid \le 1,\\ 0.2,\quad \mid e_{2}(t)\mid > 1.\\ \end{array} \right. \\ \end{aligned}$$

We take the activation function \(f(e_{i}(t))=\frac{1}{2}\big (\mid e_{i}(t)+1\mid -\mid e_{i}(t)-1\mid \big )\). Obviously, \(f(e_{i}(t))\) is odd, bounded, and Lipschitz continuous, with the Lipschitz constants \(L_{1}=L_{2}=0.1\), and we take \(r=1\).
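The stated oddness and boundedness of this activation can be checked numerically; the sketch below is only an illustration:

```python
# f(e) = 0.5 * (|e + 1| - |e - 1|): the piecewise-linear activation of Example 1.
f = lambda e: 0.5 * (abs(e + 1) - abs(e - 1))

for e in (-3.0, -0.5, 0.0, 0.7, 2.5):
    assert abs(f(e) + f(-e)) < 1e-12  # odd: f(-e) = -f(e)
    assert abs(f(e)) <= 1.0           # bounded: |f(e)| <= 1
```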

The remaining parameter matrices are

$$\begin{aligned} {Q}= & {} \left( \begin{array}{cc} 0.2&{} \quad 0.4\\ 0.4&{} \quad 0.2\\ \end{array} \right) , \quad \bar{A}=\left( \begin{array}{cc} 0.8&{} \quad 0.3\\ 0.7&{} \quad 0.3\\ \end{array} \right) , \\ \bar{B}= & {} \left( \begin{array}{cc} 0.5&{} \quad 1\\ 0.5&{} \quad 0.2\\ \end{array} \right) , \quad {K}=\left( \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad 1\\ \end{array} \right) . \end{aligned}$$

We choose

$$\begin{aligned} {P} = \left( \begin{array}{cc} 5&{}\quad 0\\ 0&{}\quad 5\\ \end{array} \right) , {N}=\left( \begin{array}{cc} 1&{}\quad 0\\ 0&{}\quad 1\\ \end{array} \right) . \end{aligned}$$

Then we get \(-2PL^{-1}-2PKL^{-1}+P\bar{A}+\bar{A}^{T}P+r^{-1}\sum _{i=1}^{n}PW_{i}N_{i}^{-1}W_{i}^{T}P +r\sum _{i=1}^{n}N_{i}Q_{i}^{-1}<0\), which satisfies the condition of Theorem 1. We randomly choose two initial conditions for \(e_{1}(t)\) and \(e_{2}(t)\). The simulation results are depicted in Figs. 1 and 2, which show the evolution of the errors \(e_{1}(t)\) and \(e_{2}(t)\) for the controlled system (36) in Example 1. The simulation results confirm the effectiveness of Theorem 1.
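The matrix condition of Theorem 1 can be verified numerically. The NumPy sketch below (an illustration, using the transpose \(W_{i}^{T}\) as in the estimate (18)) assembles the left-hand side from the example's parameters and checks that it is negative definite:

```python
import numpy as np

P = 5.0 * np.eye(2)                       # P = diag(5, 5)
K = np.eye(2)                             # controller gains k_1 = k_2 = 1
N = [np.eye(2), np.eye(2)]                # N_1 = N_2 = I
L_inv = np.diag([10.0, 10.0])             # L = diag(0.1, 0.1), so L^{-1} = diag(10, 10)
A_bar = np.array([[0.8, 0.3], [0.7, 0.3]])
B_bar = np.array([[0.5, 1.0], [0.5, 0.2]])
Q = np.array([[0.2, 0.4], [0.4, 0.2]])    # proportional delay factors q_ij
r = 1.0

M = -2 * P @ L_inv - 2 * P @ K @ L_inv + P @ A_bar + A_bar.T @ P
for i in range(2):
    W_i = np.zeros((2, 2))
    W_i[i] = B_bar[i]                      # i-th row of B_bar, other rows zero
    Q_i_inv = np.diag(1.0 / Q[i])          # diag(q_i1^{-1}, q_i2^{-1})
    M += (1 / r) * P @ W_i @ np.linalg.inv(N[i]) @ W_i.T @ P + r * N[i] @ Q_i_inv

# Negative definiteness: all eigenvalues of the symmetric part are negative.
print(np.linalg.eigvalsh((M + M.T) / 2).max() < 0)  # True
```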

Fig. 1
figure 1

(Color online) The error curves \(e_{1}(t)\) of system (36) with multi-proportional delays and different initial values

Fig. 2
figure 2

(Color online) The error curves \(e_{2}(t)\) of system (36) with multi-proportional delays and different initial values
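The convergence shown in Figs. 1 and 2 can be reproduced qualitatively with a simple forward-Euler scheme. The sketch below is an illustration only: it uses \(F_{j}(e)=0.1\tanh (e)\) as a hypothetical surrogate satisfying \(|F_{j}(e)|\le L_{j}|e|\) with \(L_{j}=0.1\), gains \(k_{1}=k_{2}=1\), and reads the proportionally delayed state \(e_{j}(q_{ij}t)\) from the stored trajectory grid:

```python
import numpy as np

# Hypothetical surrogate for F_j satisfying |F_j(e)| <= 0.1 |e|.
def F(e):
    return 0.1 * np.tanh(e)

# State-dependent switched weights of Example 1 (switching threshold T = 1).
def a(i, j, ei):
    A_hat = [[0.6, 0.2], [0.5, 0.1]]; A_chk = [[0.8, 0.3], [0.7, 0.3]]
    return A_hat[i][j] if abs(ei) <= 1 else A_chk[i][j]

def b(i, j, ei):
    B_hat = [[0.3, 0.8], [0.4, 0.1]]; B_chk = [[0.5, 1.0], [0.5, 0.2]]
    return B_hat[i][j] if abs(ei) <= 1 else B_chk[i][j]

q = [[0.2, 0.4], [0.4, 0.2]]   # proportional delay factors q_ij
k = [1.0, 1.0]                 # controller gains k_i

dt, T_end = 0.01, 20.0
n_steps = int(T_end / dt)
e = np.zeros((n_steps + 1, 2))
e[0] = [2.0, -1.5]             # arbitrary initial values

for m in range(n_steps):
    t = m * dt
    de = np.zeros(2)
    for i in range(2):
        s = -e[m, i] - k[i] * e[m, i]
        for j in range(2):
            md = int(q[i][j] * t / dt)   # e_j(q_ij * t) from the stored grid
            s += a(i, j, e[m, i]) * F(e[m, j]) + b(i, j, e[m, i]) * F(e[md, j])
        de[i] = s
    e[m + 1] = e[m] + dt * de

print(abs(e[-1]).max())  # error norm after T_end = 20 (small: errors decay)
```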

Example 2

In order to illustrate Theorem 2, we consider the following two-dimensional memristive neural network:

$$\begin{aligned} \dot{z}_{i}(t)= & {} e^{t}\Bigg \{-z_{i}(t)+\sum _{j=1}^{2}a_{ij}(t)F_{j}(z_{j}(t)) +\sum _{j=1}^{2}b_{ij}(t)F_{j}(z_{j}(t-\tau _{ij}))\Bigg \}, \end{aligned}$$
(37)

where \(\tau _{ij}=0.5\), \(r=1\), and the other parameters are the same as those in Example 1. We verified that \(P\bar{A}+\bar{A}^{T}P+\sum _{i=1}^{n}\big (rN_{i}+r^{-1}PW_{i}N_{i}^{-1}W_{i}^{T}P\big )-2PL^{-1}-2PKL^{-1}<0\), which satisfies the condition of Theorem 2. From Figs. 3 and 4, we see that the curves converge, which shows the effectiveness of Theorem 2; the drive system and the response system become anti-synchronized.

Fig. 3
figure 3

(Color online) The error curves \(z_{1}(t)\) of system (37) with constant delay \(\tau _{ij}=0.5\) and different initial values

Fig. 4
figure 4

(Color online) The error curves \(z_{2}(t)\) of system (37) with constant delay \(\tau _{ij}=0.5\) and different initial values

5 Conclusion

In this paper, we adopted differential inclusion theory to handle memristive neural networks with multiple proportional delays. In particular, new sufficient conditions were derived for the anti-synchronization control of memristive neural networks with multiple proportional delays; these conditions differ from existing ones and also complement and extend earlier publications. Finally, two numerical examples were given to illustrate the effectiveness of the proposed results.