Introduction

The memristor was originally postulated by Chua in 1971 (Chua 1971; Chua and Kang 1976) and fabricated by scientists on the Hewlett-Packard (HP) research team (Strukov et al. 2008; Tour and He 2008). It has been proposed as a synapse emulator because of its similar transmission characteristics and particular device advantages, such as nanoscale size and low energy dissipation, which are significant for designing and optimizing neuromorphic circuits (Strukov et al. 2008; Tour and He 2008; Cantley et al. 2011; Kim et al. 2011). Therefore, one can use memristors to build memristor-based neural networks that emulate the human brain. In recent years, the dynamics of memristor-based recurrent neural networks have attracted increasing attention (Hu and Wang 2010; Wen et al. 2012a, b; Wu et al. 2012; Wen and Zeng 2012; Wang and Shen 2013; Zhang et al. 2012; Wu and Zeng 2012a, b; Guo et al. 2013). In Hu and Wang (2010), the dynamics of memristor-based recurrent neural networks were analyzed and global uniform asymptotic stability was established by constructing proper Lyapunov functions and using differential inclusion theory. Subsequently, the stability and synchronization control of memristor-based recurrent neural networks were further investigated (Wen et al. 2012a, b; Wu et al. 2012; Wen and Zeng 2012; Wang and Shen 2013; Zhang et al. 2012; Wu and Zeng 2012a, b; Guo et al. 2013). As is well known, memristor-based neural networks exhibit state-dependent nonlinear switching behaviors because of abrupt changes at certain instants during the dynamical process, which makes their stabilization more complicated to study. Recently, therefore, many researchers have turned their attention to constructing general memristor-based neural networks and analyzing their dynamic behavior (Wen et al. 2012a, b; Wu et al. 2012). In this paper, we focus on the general memristor-based neural networks constructed in Wen et al. (2012a, b) and Wu et al. (2012).

In the implementation of artificial memristive neural networks, time delays are unavoidable due to the finite switching speeds of amplifiers and may cause undesirable dynamic behavior such as oscillation, instability, and chaos (He et al. 2013a, b; Wang et al. 2012). On the other hand, the state of an electronic network is often subject to instantaneous perturbations and experiences abrupt changes at certain instants, that is, impulsive effects. Therefore, a memristive neural network model with delays and impulsive effects describes the evolution of the system more accurately. During the last few years, there has been increasing interest in the stability problem for delayed impulsive neural networks (Lu et al. 2010; Hu et al. 2012; Chen and Zheng 2009; Yang and Xu 2007; Hu et al. 2010; Liu and Liu 2007; Liu et al. 2011; Lu et al. 2011, 2012; Guan et al. 2006; Yang and Xu 2005; Zhang et al. 2006). In Liu et al. (2011), synchronization of nonlinear stochastic dynamical networks was investigated using a pinning impulsive strategy. In Guan et al. (2006), a new class of hybrid impulsive models was introduced and asymptotic stability properties were obtained using the "average dwell time" concept. In general, there are two kinds of impulsive effects in dynamical systems. An impulsive sequence is said to be destabilizing if the impulsive effects can suppress the stability of the dynamical system; conversely, it is said to be stabilizing if it can enhance the stability. The stability of neural networks with stabilizing or destabilizing impulses has been studied in many papers (Lu et al. 2010; Hu et al. 2012; Chen and Zheng 2009; Yang and Xu 2007; Hu et al. 2010; Liu and Liu 2007; Liu et al. 2011; Lu et al. 2011, 2012; Guan et al. 2006; Yang and Xu 2005; Zhang et al. 2006). When the impulsive effects are stabilizing, the frequency of the impulses should not be too low; most of the literature (Lu et al. 2010; Hu et al. 2012; Chen and Zheng 2009; Yang and Xu 2007; Hu et al. 2010; Liu and Liu 2007; Liu et al. 2011; Lu et al. 2011, 2012; Guan et al. 2006; Yang and Xu 2005) investigates only this case and uses an upper bound on the impulse intervals to guarantee the frequency of the impulses. When the impulsive effects are destabilizing, a lower bound on the impulsive intervals can be used to ensure that the impulses do not occur too frequently; for instance, Yang and Xu (2005) and Zhang et al. (2006) consider this kind of impulsive effect. In all of this literature, it is implicitly assumed that destabilizing and stabilizing impulses occur separately. In practice, however, many electronic and biological systems are subject to instantaneous disturbances of time-varying strength, so both destabilizing and stabilizing impulses may coexist in a real system.

Motivated by the above discussion, and in contrast to previous works, in this paper we formulate memristive neural networks with time-varying impulses, in which destabilizing and stabilizing impulses are considered simultaneously, and study their global exponential stability. Lower and upper bounds on the destabilizing and stabilizing impulsive intervals, respectively, are introduced to describe impulsive sequences in which the destabilizing impulses do not occur too frequently and the frequency of the stabilizing impulses is not too low. By using differential inclusion theory and the Lyapunov method, we obtain sufficient criteria under which the stability of delayed memristor-based neural networks with time-varying impulses is guaranteed.

The organization of this paper is as follows. The model and preliminaries are introduced in "Model description and preliminaries" section. Algebraic conditions for global exponential stability are derived in "Main results" section. Numerical simulations are given in "Numerical example" section. Finally, conclusions are drawn in "Conclusions" section.

Model description and preliminaries

Model description

Several memristor-based recurrent neural networks have been constructed, such as those in Hu and Wang (2010), Wen et al. (2012a, b), Wu et al. (2012), Wen and Zeng (2012), Wang and Shen (2013), Zhang et al. (2012), Wu and Zeng (2012a, b), Guo et al. (2013). Based on these works, in this paper we consider a more general class of memristive neural networks with time-varying impulses, described by the following equations:

$$\left\{ {\begin{array}{l} \frac{{dx_{i} \left( t \right)}}{dt} = - d_{i} \left( {x_{i} } \right)x_{i} \left( t \right) + \sum\limits_{j = 1}^{n} {a_{ij} \left( {x_{i} } \right)} f_{j} \left( {x_{j} } \right) + \sum\limits_{j = 1}^{n} {b_{ij} \left( {x_{i} } \right)} g_{j} \left( {x_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right) + I_{i} , \hfill \\ \qquad\qquad t \ge 0,\quad t \ne t_{k} ,\quad i = 1,2, \ldots ,n, \hfill \\ x_{i} \left( {t_{k}^{ + } } \right) = \alpha_{k} x_{i} \left( {t_{k}^{ - } } \right),\quad k \in N_{ + } \hfill \\ \end{array} } \right.$$
(1)

where \(x_{i} \left( t \right)\) is the state variable of the ith neuron, \(d_{i} \left( {x_{i} } \right)\) is the ith self-feedback connection weight, and \(a_{ij} \left( {x_{i} } \right)\) and \(b_{ij} \left( {x_{i} } \right)\) are the connection weights without and with time delays, respectively. \(I_{i}\) is the ith external constant input. \(f_{i} \left( \cdot \right)\) and \(g_{i} \left( \cdot \right)\) are the ith activation functions without and with time delays, satisfying Assumption 2 below. The time delay \(\tau_{ij} \left( t \right)\) is a bounded function, i.e., \(0 \le \tau_{ij} \left( t \right) \le \tau\), where \(\tau \ge 0\) is a constant. \(\left\{ {t_{1} ,t_{2} ,t_{3} , \ldots } \right\}\) is a strictly increasing sequence of impulsive moments, and \(\alpha_{k} \in R\) represents the strength of the impulses. We assume that \(x_{i} \left( t \right)\) is right continuous at \(t = t_{k}\), i.e., \(x_{i} \left( {t_{k} } \right) = x_{i} \left( {t_{k}^{ + } } \right) = \alpha_{k} x_{i} \left( {t_{k}^{ - } } \right)\). Therefore, the solutions of (1) are piecewise right-continuous functions with discontinuities at \(t = t_{k}\) for \(k \in N_{ + }\).
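To make the switching and impulse mechanism concrete, the following is a minimal simulation sketch of model (1), assuming a constant delay \(\tau_{ij}(t) \equiv \tau\) and an explicit Euler scheme; the helper name `simulate` and all of its arguments are our illustrative choices, not part of the paper.

```python
import numpy as np

def simulate(d_hat, d_chk, a_hat, a_chk, b_hat, b_chk, T, I, f, g,
             tau, alpha, t_imp, x0, t_end, dt=1e-3):
    """Euler sketch of (1): row i of the coefficient arrays switches with
    |x_i(t)| vs T_i, and x(t_k^+) = alpha_k * x(t_k^-) at impulse times."""
    hist = max(int(np.ceil(tau / dt)), 1)                # delay buffer length
    buf = np.tile(np.asarray(x0, float), (hist + 1, 1))  # constant initial history
    traj, k = [buf[-1].copy()], 0
    for s in range(int(t_end / dt)):
        cur, dly = buf[-1], buf[0]                       # x(t) and x(t - tau)
        low = np.abs(cur) < T                            # memristive switching rule
        d = np.where(low, d_hat, d_chk)
        A = np.where(low[:, None], a_hat, a_chk)
        B = np.where(low[:, None], b_hat, b_chk)
        new = cur + dt * (-d * cur + A @ f(cur) + B @ g(dly) + I)
        if k < len(t_imp) and (s + 1) * dt >= t_imp[k]:
            new = alpha[k] * new                         # impulsive reset
            k += 1
        buf = np.vstack([buf[1:], new])
        traj.append(new.copy())
    return np.array(traj)
```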

Remark 1

The parameter \(\alpha_{k}\) in the equality \(x_{i} (t_{k}^{ + } ) = \alpha_{k} x_{i} (t_{k}^{ - } )\) describes the influence of the impulses on the absolute value of the state. When \(\left| {\alpha_{k} } \right| > 1\), the absolute value of the state is enlarged, so the impulses may be viewed as destabilizing; when \(\left| {\alpha_{k} } \right| < 1\), the absolute value of the state is reduced, so the impulses may be viewed as stabilizing.

Remark 2

In this paper, both stabilizing and destabilizing impulses are considered in the model simultaneously. We assume that the impulsive strengths of the destabilizing impulses take values from a finite set \(\left\{ {\mu_{1} ,\mu_{2} , \ldots ,\mu_{N} } \right\}\) and those of the stabilizing impulses from \(\left\{ {\gamma_{1} ,\gamma_{2} , \ldots ,\gamma_{M} } \right\}\), where \(\left| {\mu_{i} } \right| > 1\), \(0 < \left| {\gamma_{j} } \right| < 1\), for \(i = 1,2, \ldots ,N\), \(j = 1,2, \ldots ,M\). We let \(t_{ik \uparrow }\) and \(t_{jk \downarrow }\) denote the activation times of the destabilizing impulses with strength \(\mu_{i}\) and of the stabilizing impulses with strength \(\gamma_{j}\), respectively. The following assumption enforces a lower bound on the destabilizing and an upper bound on the stabilizing impulsive intervals, respectively.

Assumption 1

$$\inf_{k} \left\{ {t_{ik \uparrow } - t_{i(k - 1) \uparrow } } \right\} = \xi_{i} ,\quad \sup_{k} \left\{ {t_{jk \downarrow } - t_{j(k - 1) \downarrow } } \right\} = \zeta_{j}$$

where \(t_{ik \uparrow } ,t_{jk \downarrow } \in \left\{ {t_{1} ,t_{2} ,t_{3} , \ldots } \right\}\).
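As a quick illustration (ours, not the paper's), if the recorded impulse times of each class are available as sorted arrays, the bounds \(\xi_{i}\) and \(\zeta_{j}\) of Assumption 1 can be read off as the extreme inter-impulse gaps:

```python
import numpy as np

def interval_bounds(t_up, t_down):
    """xi: smallest destabilizing gap; zeta: largest stabilizing gap."""
    xi = np.min(np.diff(t_up))
    zeta = np.max(np.diff(t_down))
    return xi, zeta

# e.g. interval_bounds([0.0, 0.5, 1.0, 1.5], [0.09, 0.29, 0.49]) -> (0.5, 0.2)
```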

Assumption 2

For \(i = 1,2, \ldots ,n\) and all \(\alpha ,\beta \in R\), \(\alpha \ne \beta\), the neuron activation functions \(f_{i} \left( {x_{i} } \right),g_{i} \left( {x_{i} } \right)\) in (1) are bounded and satisfy

$$0 \le \frac{{f_{i} \left( \alpha \right) - f_{i} \left( \beta \right)}}{{\alpha-\beta}} \le k_{i} ,\quad 0 \le \frac{{g_{i} \left( \alpha \right) - g_{i} \left( \beta \right)}}{\alpha - \beta } \le l_{i}$$
(2)

where k i , l i are nonnegative constants.
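For instance, the saturating activation \(f\left( a \right) = \frac{1}{2}\left( {\left| {a + 1} \right| - \left| {a - 1} \right|} \right)\) used in the numerical example later satisfies (2) with \(k_{i} = l_{i} = 1\); a brute-force numerical check of the difference quotient (the grid and range below are arbitrary illustration choices) might look as follows.

```python
import numpy as np

f = lambda a: 0.5 * (np.abs(a + 1.0) - np.abs(a - 1.0))

a = np.linspace(-5.0, 5.0, 1001)
b = a[:, None]                                   # all pairs (alpha, beta)
diff = a - b
quot = np.divide(f(a) - f(b), diff, out=np.zeros_like(diff), where=diff != 0)
assert 0.0 <= quot.min() and quot.max() <= 1.0   # sector condition with k_i = l_i = 1
```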

Preliminaries

For convenience, we first make the following preparations. \(R^{ + }\) and \(R^{n}\) denote the set of nonnegative real numbers and the n-dimensional Euclidean space, respectively. For \(x \in R^{n}\) and \(X \in R^{{n \times n}}\), \(\left| x \right|\) denotes the Euclidean vector norm and \(\left\| X \right\| = \sqrt {\lambda_{ \hbox{max} } \left( {X^{T} X} \right)}\) the induced matrix norm. \(\lambda_{ \hbox{min} } \left( \cdot \right)\) and \(\lambda_{ \hbox{max} } \left( \cdot \right)\) denote the minimum and maximum eigenvalues of the corresponding matrix, respectively. For a continuous function \(f\left( t \right):R \to R\), the upper right Dini derivative is defined as \(D^{ + } f\left( t \right) = \overline{{\mathop {lim}\limits_{{h \to 0^{ + } }} }} \left( {1/h} \right)\left( {f\left( {t + h} \right) - f\left( t \right)} \right)\). \(N_{ + }\) denotes the set of positive integers.

As is well known, the memristor is a switchable device. It follows from its construction and fundamental circuit laws that the memristance is nonlinear and time-varying; the typical current–voltage characteristic of a memristor is shown in Fig. 1. According to the piecewise linear model (Hu and Wang 2010; Wen et al. 2012a, b) and previous work (Wu et al. 2012; Wen and Zeng 2012; Wang and Shen 2013; Zhang et al. 2012; Wu and Zeng 2012a, b; Guo et al. 2013), we let

$$d_{i} \left( {x_{i} } \right) = \left\{ {\begin{array}{*{20}c} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{d}_{i} ,\left| {x_{i} \left( t \right)} \right| < T_{i} } \\ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{d}_{i} ,\left| {x_{i} \left( t \right)} \right| > T_{i} } \\ \end{array} } \right.\quad a_{ij} \left( {x_{i} } \right) = \left\{ {\begin{array}{*{20}c} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{a}_{ij} ,\left| {x_{i} \left( t \right)} \right| < T_{i} } \\ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a}_{ij} ,\left| {x_{i} \left( t \right)} \right| > T_{i} } \\ \end{array} } \right.\quad b_{ij} \left( {x_{i} } \right) = \left\{ {\begin{array}{*{20}c} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{b}_{ij} ,\left| {x_{i} \left( t \right)} \right| < T_{i} } \\ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{b}_{ij} ,\left| {x_{i} \left( t \right)} \right| > T_{i} } \\ \end{array} } \right.,$$

Here, \(T_{i} > 0\) are the memristive switching thresholds, and \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{d}_{i} > 0, \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{d}_{i} > 0, \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{a}_{ij} ,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a}_{ij} , \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{b}_{ij} , \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{b}_{ij}\), \(i, j = 1, 2,\ldots,n\), are constants determined by the memristances.

Fig. 1

The typical current–voltage characteristic of a memristor: a pinched hysteresis loop

For \(i,j = 1,2, \ldots ,n\), let \(\overline{d}_{i} = { \hbox{max} }\left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{d}_{i} ,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{d}_{i} } \right),\) \(\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} = { \hbox{min} }\left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{d}_{i} ,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{d}_{i} } \right),\) \(\bar{a}_{ij} = { \hbox{max} }\left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{a}_{ij} ,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a}_{ij} } \right),\) \(\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{a}_{ij} = { \hbox{min} }\left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{a}_{ij} ,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a}_{ij} } \right),\) \(\bar{b}_{ij} = { \hbox{max} }\left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{b}_{ij} ,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{b}_{ij} } \right),\) \(\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{b}_{ij} = { \hbox{min} }\left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{b}_{ij} ,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{b}_{ij} } \right),\) and let \(Co[\underline{\xi}_{i} ,\overline{\xi}_{i} ]\) denote the convex hull of \(\{\underline{\xi}_{i} ,\overline{\xi}_{i}\}\). Clearly, in this paper we have \(Co[\underline{\xi}_{i} ,\overline{\xi}_{i} ] = [\underline{\xi}_{i} ,\overline{\xi}_{i} ]\).

Now, following the literature (Hu and Wang 2010; Wen et al. 2012a, b) and applying the theories of set-valued maps and differential inclusions, we obtain from (1)

$$\frac{{dx_{i} \left( t \right)}}{dt} \in - Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} ,\bar{d}_{i} ]x_{i} \left( t \right) + \sum\limits_{j = 1}^{n} {Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{a}_{ij} ,\bar{a}_{ij} ]} f_{j} \left( {x_{j} } \right) + \sum\limits_{j = 1}^{n} {Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{b}_{ij} ,\bar{b}_{ij} ]} g_{j} \left( {x_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right) + I_{i} ,\quad t \ge 0,\ i = 1,2, \ldots ,n.$$
(3)

Or, equivalently, for \(i,j = 1,2, \ldots ,n,\) there exist

$$\mathop d\limits^{ * }_{i} \in Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} ,\bar{d}_{i} ],\;\;\mathop a\limits^{ * }_{ij} \in Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{a}_{ij} ,\bar{a}_{ij} ],\;\;\mathop b\limits^{ * }_{ij} \in Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{b}_{ij} ,\bar{b}_{ij} ]$$

such that

$$\frac{{dx_{i} \left( t \right)}}{dt} = - \mathop d\limits^{ * }_{i} x_{i} \left( t \right) + \sum\limits_{j = 1}^{n} {\mathop a\limits^{ * }_{ij} } f_{j} \left( {x_{j} } \right) + \sum\limits_{j = 1}^{n} {\mathop b\limits^{ * }_{ij} }\, g_{j} \left( {x_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right) + I_{i} , t \ge 0.$$
(4)

Definition 1

A constant vector \(x^{*} = (x_{1}^{*} ,x_{2}^{*} , \ldots ,x_{n}^{*} )^{T}\) is said to be an equilibrium point of network (1) if, for \(i,j = 1,2, \ldots ,n,\)

$$0\in { - }Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} ,\bar{d}_{i} ]x_{i}^{*} { + }\sum\limits_{\text{j = 1}}^{\text{n}} {Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{a}_{ij} ,\bar{a}_{ij} ]} {\text{f}}_{j} \left( {x_{j}^{*} } \right){ + }\sum\limits_{\text{j = 1}}^{\text{n}} {Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{b}_{ij} ,\bar{b}_{ij} ]} {\text{g}}_{j} \left( {x_{j}^{*} } \right){\text{ + I}}_{\text{i}} ,$$
(5)

Or, equivalently, for \(i,j = 1,2, \ldots ,n,\) there exist

$$\mathop d\limits^{ * }_{i} \in Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} ,\bar{d}_{i} ],\;\;\mathop a\limits^{ * }_{ij} \in Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{a}_{ij} ,\bar{a}_{ij} ],\;\;\mathop b\limits^{ * }_{ij} \in Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{b}_{ij} ,\bar{b}_{ij} ]$$

such that

$$0 = - \mathop d\limits^{ * }_{i} x_{i}^{*} { + }\sum\limits_{\text{j = 1}}^{\text{n}} {\mathop a\limits^{ * }_{ij} } {\text{f}}_{\text{j}} \left( {x_{j}^{*} } \right){ + }\sum\limits_{\text{j = 1}}^{\text{n}} {\mathop b\limits^{ * }_{ij} } {\text{g}}_{\text{j}} \left( {x_{j}^{*} } \right){\text{ + I}}_{\text{i}} .$$
(6)

If \(x^{*} = (x_{1}^{*} ,x_{2}^{*} , \ldots ,x_{n}^{*} )^{T}\) is an equilibrium point of network (1), then by letting \(y_{i} \left( t \right) = x_{i} \left( t \right) - x_{i}^{*}\), \(i = 1,2, \ldots ,n\), we have

$$\frac{{{\text{d}}y_{i} \left( t \right)}}{\text{dt}} \in - {\text{ Co}}[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} ,\bar{d}_{i} ]{\text{y}}_{\text{i}} \left( t \right){ + }\sum\limits_{\text{j = 1}}^{\text{n}} {Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{a}_{ij} ,\bar{a}_{ij} ]} {\bar{\text{f}}}_{j} \left( {y_{j} } \right){ + }\sum\limits_{\text{j = 1}}^{\text{n}} {Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{b}_{ij} ,\bar{b}_{ij} ]} \bar{g}_{j} \left( {y_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right),$$
(7)

Or, equivalently, there exist \(\mathop d\limits^{ * }_{i} \in Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} ,\bar{d}_{i} ],\;\mathop a\limits^{ * }_{ij} \in Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{a}_{ij} ,\bar{a}_{ij} ],\;\mathop b\limits^{ * }_{ij} \in Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{b}_{ij} ,\bar{b}_{ij} ]\) such that

$$\frac{{dy_{i} \left( t \right)}}{dt} = - \mathop d\limits^{ * }_{i} y_{\text{i}} \left( t \right){ + }\sum\limits_{\text{j = 1}}^{\text{n}} {\mathop a\limits^{ * }_{ij} } {\bar{\text{f}}}_{j} \left( {y_{j} } \right){ + }\sum\limits_{\text{j = 1}}^{\text{n}} {\mathop b\limits^{ * }_{ij} } {\bar{\text{g}}}_{j} \left( {y_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right),$$
(8)

Obviously, \(f_{i} \left( {x_{i} } \right),g_{i} \left( {x_{i} } \right)\) satisfy Assumption 2, and we can easily see that \(\bar{f}_{i} \left( {y_{i} } \right),\bar{g}_{i} \left( {y_{i} } \right)\) also satisfy the following assumption:

Assumption 3

For \(i = 1,2, \ldots ,n\) and all \(\alpha ,\beta \in R\), \(\alpha \ne \beta\), the neuron activation functions \(\bar{f}_{i} (y_{i} ),\bar{g}_{i} (y_{i} )\) in (7) and (8) are bounded and satisfy

$$0 \le \frac{{\bar{f}_{i} (\alpha ) - \bar{f}_{i} (\beta )}}{\alpha - \beta } \le k_{i} ,\quad 0 \le \frac{{\bar{g}_{i} (\alpha ) - \bar{g}_{i} (\beta )}}{\alpha - \beta } \le l_{i}$$
(9)

where \(k_{i} ,l_{i}\), \(i = 1,2, \ldots ,n\), are nonnegative constants.

Definition 2

If there exist constants \(\gamma > 0\), \(M\left( \gamma \right) > 0\), and \(T_{0} > 0\) such that, for any initial values,

$$\left| {y\left( t \right)} \right| \le M\left( \gamma \right)e^{ - \gamma t} , \quad {\text{for all}}\,\,t \ge T_{0} ,$$

then system (8) is said to be exponentially stable with exponential convergence rate γ.

For further deriving the global exponential stability conditions, the following lemmas are needed.

Lemma 1

Filippov (1960) Under Assumption 2, there is at least one local solution x(t) of system (1) with the initial condition \(\phi \left( s \right) = \left( {\phi_{1} \left( s \right),\phi_{2} \left( s \right), \ldots \phi_{n} \left( s \right)} \right)^{T}\), \(s \in [ - \tau ,0]\), which is essentially bounded. Moreover, this local solution x(t) can be extended to the interval \([0, + \infty )\) in the sense of Filippov.

Under Assumption 2, since \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{d}_{i} > 0, \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{d}_{i} > 0, \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{a}_{ij} , \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a}_{ij} , \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{b}_{ij} , \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{b}_{ij} , I_{i}\) are all constants, it follows from the references (Wu et al. 2012; Hu et al. 2012) that, in order to study the memristive neural network (1), we can turn to the qualitative analysis of the relevant differential inclusion (3).

Lemma 2

Bainov and Simeonov (1989) Let \(0 \le \tau_{i} \left( t \right) \le \tau\), let \(F\left( {t,u,u_{1} ,u_{2} , \ldots ,u_{m} } \right):R^{ + } \times R \times \cdots \times R \to R\) be nondecreasing in \(u_{i}\) for each fixed \(\left( {t,u,u_{1} , \ldots ,u_{i - 1} ,u_{i + 1} , \ldots u_{m} } \right)\), \(i = 1,2, \ldots ,m\), and let \(I_{k} \left( u \right):R \to R\) be nondecreasing in u. Suppose that

$$\left\{ \begin{aligned} &D^{ + } u\left( t \right) \le F\left( {t,u,u\left( {t - \tau_{1} \left( t \right)} \right), \ldots ,u\left( {t - \tau_{m} \left( t \right)} \right)} \right) \hfill \\ &u\left( {t_{k}^{ + } } \right) \le I_{k} u\left( {t_{k}^{ - } } \right),\quad k \in N_{ + } \hfill \\ \end{aligned} \right.$$

and

$$\left\{ \begin{aligned}& D^{ + } v\left( t \right) > F\left( {t,v,v\left( {t - \tau_{1} \left( t \right)} \right), \ldots ,v\left( {t - \tau_{m} \left( t \right)} \right)} \right) \hfill \\ &v\left( {t_{k}^{ + } } \right) \ge I_{k} v\left( {t_{k}^{ - } } \right),\quad k \in N_{ + } \hfill \\ \end{aligned} \right.$$

Then \(u\left( t \right) \le v\left( t \right)\) for \(- \tau \le t \le 0\) implies that \(u\left( t \right) \le v\left( t \right)\) for \(t \ge 0\).

In the following section, we analyze the global exponential stability of system (1).

Main results

The main results of the paper are given in the following theorem.

Theorem 1

Consider the memristive neural network (1) and suppose that Assumptions 1 and 2 hold. Then the memristive neural network (1) with time-varying impulses is globally exponentially stable if the following inequality holds:

$$\alpha - Rq > 0,$$

where

$$R = \prod\nolimits_{i = 1}^{N} {\prod\limits_{j = 1}^{M} {\left| {\frac{{\mu_{i} }}{{\gamma_{j} }}} \right|^{2} } } ,\,\alpha = - \left( {p + \sum\limits_{i = 1}^{N} {\frac{{2\ln \left| {\mu_{i} } \right|}}{{\xi_{i} }} + \sum\limits_{j = 1}^{M} {\frac{{2\ln \left| {\gamma_{j} } \right|}}{{\zeta_{j} }}} } } \right),$$
$$p = - \mathop { \hbox{min} }\limits_{1 \le i \le n} \left\{ {\underline{d}_{i} } \right\} + 2\sum\limits_{j = 1}^{n} {\sum\limits_{k = 1}^{n} {\frac{1}{{\underline{d}_{j} }}\mathop a\limits^{ * * 2}_{jk} k_{k}^{2} } } ,\,q = 2\sum\limits_{j = 1}^{n} {\sum\limits_{k = 1}^{n} {\frac{1}{{\underline{d}_{j} }}} } \mathop b\limits^{ * * 2}_{jk} l_{k}^{2} ,$$

here,

$$\mathop a\limits^{ * * }_{jk} = { \hbox{max} }\left\{ {\left| {\bar{a}_{jk} } \right|,\left| {\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{a}_{jk} } \right|} \right\},\;\mathop b\limits^{ * * }_{jk} = { \hbox{max} }\left\{ {\left| {\bar{b}_{jk} } \right|,\left| {\underline{b}_{jk} } \right|} \right\},\;\;\;j,k = 1,2, \ldots ,n.$$
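As a sanity-check sketch (ours, not the paper's), the quantities of Theorem 1 can be computed directly from the memristive bounds; here `a_lo, a_hi` stand for \(\underline{a}_{jk} ,\bar{a}_{jk}\) (similarly for b and d), `k, l` are the sector constants of Assumption 2, and `mu, gamma, xi, zeta` collect the impulse parameters, all assumed to be NumPy arrays.

```python
import numpy as np

def theorem1_condition(d_low, a_lo, a_hi, b_lo, b_hi, k, l, mu, gamma, xi, zeta):
    """Return (alpha - R*q > 0, p, q, R, alpha) for Theorem 1."""
    a2 = np.maximum(np.abs(a_lo), np.abs(a_hi)) ** 2      # (a**_{jk})^2
    b2 = np.maximum(np.abs(b_lo), np.abs(b_hi)) ** 2      # (b**_{jk})^2
    p = -d_low.min() + 2.0 * np.sum(a2 * k**2 / d_low[:, None])
    q = 2.0 * np.sum(b2 * l**2 / d_low[:, None])
    R = np.prod((np.abs(mu)[None, :] / np.abs(gamma)[:, None]) ** 2)
    alpha = -(p + np.sum(2.0 * np.log(np.abs(mu)) / xi)
                + np.sum(2.0 * np.log(np.abs(gamma)) / zeta))
    return alpha - R * q > 0.0, p, q, R, alpha
```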

Proof

We choose a Lyapunov functional for system (7) or (8) as

$$V\left( t \right) = \frac{1}{2}\sum\limits_{i = 1}^{n} {} y_{i}^{2} \left( t \right).$$
(10)

The upper right Dini derivative of V(t) along the trajectories of system (7) or (8) is

$$\begin{aligned} D^{ + } V\left( t \right) &= \sum\limits_{i = 1}^{n} {y_{i} \left( t \right)y_{i}' \left( t \right)} \\ & = \sum\limits_{i = 1}^{n} {y_{i} \left( t \right)\left\{ { - \mathop d\limits^{ * }_{i} y_{i} \left( t \right) + \sum\limits_{j = 1}^{n} {\mathop a\limits^{ * }_{ij} } \bar{f}_{j} \left( {y_{j} } \right) + \sum\limits_{j = 1}^{n} {\mathop b\limits^{ * }_{ij} } \bar{g}_{j} \left( {y_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right)} \right\}} \\ &\in \sum\limits_{i = 1}^{n} {y_{i} \left( t \right)\left\{ { - Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} ,\bar{d}_{i} ]y_{i} \left( t \right) + \sum\limits_{j = 1}^{n} {Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{a}_{ij} ,\bar{a}_{ij} ]} \bar{f}_{j} \left( {y_{j} } \right) + \sum\limits_{j = 1}^{n} {Co[\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{b}_{ij} ,\bar{b}_{ij} ]} \bar{g}_{j} \left( {y_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right)} \right\}} \\ \end{aligned}$$
(11)

From Assumption 3, we have

$$0 \le \mathop {\sup }\limits_{{y_{i} \ne 0}} \frac{{\bar{f}_{i} \left( {y_{i} } \right)}}{{y_{i} }} \le k_{i} ,\quad 0 \le \mathop {\sup }\limits_{{y_{i} \ne 0}} \frac{{\bar{g}_{i} \left( {y_{i} } \right)}}{{y_{i} }} \le l_{i}$$
(12)

where \(k_{i} ,l_{i}\), \(i = 1,2, \ldots ,n\), are nonnegative constants. By (11) and (12), we get

$$\begin{aligned} D^{ + } V\left( t \right) &\le \sum\limits_{i = 1}^{n} {\left\{ { - \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} y_{i}^{2} \left( t \right) + \left| {y_{i} \left( t \right)} \right|\sum\limits_{j = 1}^{n} {\mathop a\limits^{ * * }_{ij} } k_{j} \left| {y_{j} \left( t \right)} \right| + \left| {y_{i} \left( t \right)} \right|\sum\limits_{j = 1}^{n} {\mathop b\limits^{ * * }_{ij} } l_{j} \left| {y_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right|} \right\}} \hfill \\ &= \sum\limits_{i = 1}^{n} {\left\{ { - \frac{1}{2}\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} y_{i}^{2} \left( t \right) + \left| {y_{i} \left( t \right)} \right|\sum\limits_{j = 1}^{n} {\mathop a\limits^{ * * }_{ij} } k_{j} \left| {y_{j} \left( t \right)} \right| - \frac{1}{2}\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} y_{i}^{2} \left( t \right) + \left| {y_{i} \left( t \right)} \right|\sum\limits_{j = 1}^{n} {\mathop b\limits^{ * * }_{ij} } l_{j} \left| {y_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right|} \right\}} \hfill \\ \end{aligned}$$
(13)

By mean-value inequality, we have

$$\begin{aligned} D^{ + } V\left( t \right) &\le \sum\limits_{i = 1}^{n} {\left\{ { - \frac{1}{4}\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} y_{i}^{2} \left( t \right) + \frac{1}{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} }}\left( {\sum\limits_{j = 1}^{n} {\mathop a\limits^{ * * }_{ij} } k_{j} \left| {y_{j} \left( t \right)} \right|} \right)^{2} - \frac{1}{4}\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} y_{i}^{2} \left( t \right) + \frac{1}{{\underline{d}_{i} }}\left( {\sum\limits_{j = 1}^{n} {\mathop b\limits^{ * * }_{ij} } l_{j} \left| {y_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right|} \right)^{2} } \right\}} \hfill \\ &= \sum\limits_{i = 1}^{n} {\left\{ { - \frac{1}{2}\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} y_{i}^{2} \left( t \right) + \frac{1}{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} }}\left( {\sum\limits_{j = 1}^{n} {\mathop a\limits^{ * * }_{ij} } k_{j} \left| {y_{j} \left( t \right)} \right|} \right)^{2} + \frac{1}{{\underline{d}_{i} }}\left( {\sum\limits_{j = 1}^{n} {\mathop b\limits^{ * * }_{ij} } l_{j} \left| {y_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right|} \right)^{2} } \right\}} \hfill \\ \end{aligned}$$
(14)

By Cauchy–Schwarz inequality, we obtain

$$\left( {\sum\limits_{j = 1}^{n} {\mathop a\limits^{ * * }_{ij} } k_{j} \left| {y_{j} (t)} \right|} \right)^{2} \le \sum\limits_{j = 1}^{n} {\mathop a\limits^{ * * 2}_{ij} } k_{j}^{2} \sum\limits_{j = 1}^{n} {\left| {y_{j} \left( t \right)} \right|^{2} } = \sum\limits_{k = 1}^{n} {\mathop a\limits^{ * * 2}_{ik} } k_{k}^{2} \sum\limits_{j = 1}^{n} {y_{j}^{2} \left( t \right)}$$
(15)

and

$$\left( {\sum\limits_{j = 1}^{n} {\mathop b\limits^{ * * }_{ij} } l_{j} \left| {y_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right|} \right)^{2} \le \sum\limits_{j = 1}^{n} {\mathop b\limits^{ * * 2}_{ij} } l_{j}^{2} \sum\limits_{j = 1}^{n} {\left| {y_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right|^{2} } = \sum\limits_{k = 1}^{n} {\mathop b\limits^{ * * 2}_{ik} } l_{k}^{2} \sum\limits_{j = 1}^{n} {y_{j}^{2} \left( {t - \tau_{ij} \left( t \right)} \right)}$$
(16)

It follows from (15) and (16) that

$$\begin{aligned} D^{ + } V\left( t \right) &\le \sum\limits_{i = 1}^{n} {\left\{ {{ - }\frac{1}{2}\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} y_{i}^{2} \left( t \right){ + }\frac{1}{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} }}\sum\limits_{\text{k = 1}}^{\text{n}} {\mathop a\limits^{ * * 2}_{ik} } {\text{k}}_{k}^{2} \sum\limits_{\text{j = 1}}^{\text{n}} {y_{j}^{2} \left( t \right)} { + }\frac{1}{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} }}\sum\limits_{\text{k = 1}}^{\text{n}} {\mathop b\limits^{ * * 2}_{ik} } l_{k}^{2} \sum\limits_{\text{j = 1}}^{\text{n}} {y_{j}^{2} \left( {t - \tau_{ij} \left( t \right)} \right)} } \right\}} \\ &= \sum\limits_{i = 1}^{n} {\left[ {{ - }\frac{1}{2}\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{i} + \sum\limits_{j = 1}^{n} {\sum\limits_{k = 1}^{n} {\frac{1}{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{d}_{j} }}\mathop a\limits^{ * * 2}_{jk} {\text{k}}_{k}^{2} } } } \right]} y_{i}^{2} \left( t \right) + \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {\sum\limits_{k = 1}^{n} {\frac{1}{{\underline{\text{d}}_{j} }}} } } \mathop b\limits^{ * * 2}_{jk} l_{k}^{2} y_{i}^{2} \left( {t - \tau_{ij} \left( t \right)} \right) \\ &\le pV\left( t \right) + qV\left( {t - \tau \left( t \right)} \right) \\ \end{aligned}$$
(17)

where \(p = - \mathop { \hbox{min} }\limits_{1 \le i \le n} \left\{ {\underline{d}_{i} } \right\} + 2\sum\limits_{j = 1}^{n} {\sum\limits_{k = 1}^{n} {\frac{1}{{\underline{d}_{j} }}\mathop a\limits^{ * * 2}_{jk} k_{k}^{2} } }\), \(q = 2\sum\limits_{j = 1}^{n} {\sum\limits_{k = 1}^{n} {\frac{1}{{\underline{\text{d}}_{j} }}} } \mathop b\limits^{ * * 2}_{jk} l_{k}^{2}\), \(t \in (t_{k - 1} ,t_{k} ],k \in N_{ + }\). For t = t k , from the second equation of (1), we get

$$V\left( {t_{k}^{ + } } \right) = \alpha_{k}^{2} V\left( {t_{k}^{ - } } \right)$$
(18)

For any σ > 0, let \(\upsilon \left( t \right)\) be the unique solution of the following impulsive delay system:

$$\left\{ \begin{aligned}& \dot{\upsilon } = p\upsilon \left( t \right) + q\upsilon \left( {t - \tau \left( t \right)} \right) + \sigma ,\quad t \ne t_{k} \hfill \\ &\upsilon \left( {t_{k} } \right) = \alpha_{k}^{2} \upsilon \left( {t_{k}^{ - } } \right),t = t_{k},\quad k \in N_{ + } \hfill \\ &\upsilon \left( s \right) = \left| {\phi \left( s \right)} \right|^{2} , \quad- \tau \le s \le 0 \hfill \\ \end{aligned} \right.$$
(19)

Note that \(V\left( s \right) \le \left| {\phi \left( s \right)} \right|^{2} = \upsilon \left( s \right)\) for \(- \tau \le s \le 0\). Then it follows from (17), (18) and Lemma 2 that

$$\upsilon \left( t \right) \ge V\left( t \right) \ge 0,\quad t \ge 0$$
(20)

By the formula for the variation of parameters, it follows from (19) that

$$\upsilon \left( t \right) = W\left( {t,0} \right)\upsilon \left( 0 \right) + \int\limits_{0}^{t} {W\left( {t,s} \right)[q\upsilon \left( {s - \tau \left( s \right)} \right) + \sigma ]} ds$$

where \(W\left( {t,s} \right)\), \(t,s \ge 0\), is the Cauchy matrix of the linear system

$$\left\{ \begin{array}{l} \dot{y}\left( t \right) = py\left( t \right),\quad t \ne t_{k} \hfill \\ y\left( {t_{k} } \right) = \alpha_{k}^{2} y\left( {t_{k}^{ - } } \right),\quad t = t_{k} ,\quad k \in N_{ + } \hfill \\ \end{array} \right.$$
(21)

According to the representation of the Cauchy matrix, one can obtain the following estimation

$$W\left( {t,s} \right) = e^{{p\left( {t - s} \right)}} \prod\limits_{{s < t_{k} \le t}} {\alpha_{k}^{2} } .$$
(22)

For any \(t > s \ge 0\), let \(N_{i}\) denote the number of destabilizing impulses with strength \(\mu_{i}\) and \(M_{j}\) the number of stabilizing impulses with strength \(\gamma_{j}\) in the interval \((s, t)\). From Assumption 1 we easily get \(N_{i} \le \frac{t - s}{{\xi_{i} }} + 1\) and \(M_{j} \ge \frac{t - s}{{\zeta_{j} }} - 1\), so it follows from (22) that

$$\begin{aligned} W\left( {t,s} \right) &\le e^{{p\left( {t - s} \right)}} \prod\limits_{i = 1}^{N} {\prod\limits_{j = 1}^{M} {\left| {\mu_{i} } \right|^{{2\frac{t - s}{{\xi_{i} }} + 2}} \left| {\gamma_{j} } \right|^{{2\frac{t - s}{{\zeta_{j} }} - 2}} } } \hfill \\ &\le \prod\limits_{i = 1}^{N} {\prod\limits_{j = 1}^{M} {\left| {\mu_{i} } \right|^{2} \left| {\gamma_{j} } \right|^{ - 2} } } e^{{\left( {p + \sum\limits_{i = 1}^{N} {\frac{{2\ln \left| {\mu_{i} } \right|}}{{\xi_{i} }} + \sum\limits_{j = 1}^{M} {\frac{{2\ln \left| {\gamma_{j} } \right|}}{{\zeta_{j} }}} } } \right)\left( {t - s} \right)}} \hfill \\ &= R e^{{ - \alpha \left( {t - s} \right)}} \hfill \\ \end{aligned}$$
(23)

where \(R = \prod\nolimits_{i = 1}^{N} {\prod\nolimits_{j = 1}^{M} {\left| {\frac{{\mu_{i} }}{{\gamma_{j} }}} \right|^{2} } }\) and \(\alpha = - \left( {p + \sum\nolimits_{i = 1}^{N} {\frac{{2\ln \left| {\mu_{i} } \right|}}{{\xi_{i} }} + \sum\nolimits_{j = 1}^{M} {\frac{{2\ln \left| {\gamma_{j} } \right|}}{{\zeta_{j} }}} } } \right)\). Let \(\eta = R\sup_{ - \tau \le s \le 0} \left| {\phi \left( s \right)} \right|^{2}\); then we get

$$\upsilon \left( t \right) \le \eta e^{ - \alpha t} + \int_{0}^{t} {R e^{{ - \alpha \left( {t - s} \right)}} \left[ {q\upsilon \left( {s - \tau \left( s \right)} \right) + \sigma } \right]} ds$$
(24)

Let \(h\left( \upsilon \right) = \upsilon - \alpha + Rqe^{\upsilon \tau }\). It follows from \(\alpha - Rq > 0\) that \(h\left( 0 \right) < 0\), \(\mathop {\lim }\limits_{{\upsilon \to + \infty }} h\left( \upsilon \right) = + \infty\), and \(h'\left( \upsilon \right) > 0\). Therefore there is a unique \(\lambda > 0\) such that \(h\left( \lambda \right) = 0\). On the other hand, it is obvious that \(R^{ - 1} \alpha - q > 0\). Hence,

$$\upsilon \left( t \right) = \left| {\phi \left( t \right)} \right|^{2} \le \eta \le \eta e^{ - \lambda t} + \frac{\sigma }{{R^{ - 1} \alpha - q}}, \quad- \tau \le t \le 0$$
(25)

So it suffices to prove

$$\upsilon \left( t \right) \le \eta e^{ - \lambda t} + \frac{\sigma }{{R^{ - 1} \alpha - q}},\quad {\text{ for}}\,t > 0.$$

Suppose, on the contrary, that there exists t > 0 such that

$$\upsilon \left( t \right) \ge \eta e^{ - \lambda t} + \frac{\sigma }{{R^{ - 1} \alpha - q}}.$$

We set

$$t^{*} = \inf \left\{ {t > 0;\upsilon \left( t \right) \ge \eta e^{ - \lambda t} + \frac{\sigma }{{R^{ - 1} \alpha - q}}} \right\}.$$

Then \(t^{*} > 0\) and \(\upsilon \left( {t^{*} } \right) \ge \eta e^{{ - \lambda t^{*} }} + \frac{\sigma }{{R^{ - 1} \alpha - q}}\). Thus, for \(t \in \left( {0,t^{*} } \right)\),

$$\upsilon \left( t \right) \le \eta e^{ - \lambda t} + \frac{\sigma }{{R^{ - 1} \alpha - q}}.$$

From (24) and (25), one observes that

$$\begin{aligned} \upsilon \left( {t^{*} } \right) &\le \eta e^{{ - \alpha t^{*} }} + \int\limits_{0}^{{t^{*} }} {R e^{{ - \alpha \left( {t^{*} - s} \right)}} } \left[ {q\upsilon \left( {s - \tau \left( s \right)} \right) + \sigma } \right]ds \\ &\le e^{{ - \alpha t^{*} }} \left\{ {\eta + \frac{\sigma }{{R^{ - 1} \alpha - q}} + \int\limits_{0}^{{t^{*} }} {R e^{\alpha s} } \left[ {q\left( {\eta e^{{ - \lambda \left( {s - \tau \left( s \right)} \right)}} + \frac{\sigma }{{R^{ - 1} \alpha - q}}} \right) + \sigma } \right]ds} \right\} \\ &< \eta e^{{ - \lambda t^{*} }} + \frac{\sigma }{{R^{ - 1} \alpha - q}} \\ \end{aligned}$$
(26)

This is a contradiction. Thus \(\upsilon (t) \le \eta e^{ - \lambda t} + \frac{\sigma }{{R^{ - 1} \alpha - q}}\) holds for t > 0. Letting \(\sigma \to 0\), we get from (20) that \(V\left( t \right) \le \upsilon \left( t \right) \le \eta e^{ - \lambda t}\). By Definition 2, the solution y(t) of the memristive neural network (1) is exponentially stable. This completes the proof.
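The convergence rate λ above is the unique positive root of \(h\left( \upsilon \right) = \upsilon - \alpha + Rqe^{\upsilon \tau } = 0\) and can be found numerically; a simple bisection sketch (the tolerance and bracket growth below are arbitrary choices of ours) follows.

```python
import math

def decay_rate(alpha, R, q, tau, tol=1e-10):
    """Unique positive root of h(v) = v - alpha + R*q*exp(v*tau)."""
    h = lambda v: v - alpha + R * q * math.exp(v * tau)
    assert h(0.0) < 0.0        # holds whenever alpha - R*q > 0
    lo, hi = 0.0, 1.0
    while h(hi) < 0.0:         # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```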

In order to clearly show the influence of the stabilizing and destabilizing impulses, we assume that both are time-invariant, i.e., for \(i = 1,2 \ldots ,N\), \(j = 1,2, \ldots ,M\): \(\mu_{i} = \mu ,\;\gamma_{j} = \gamma ,\;\xi_{i} = \xi ,\;\zeta_{j} = \zeta ,\;t_{ik \uparrow } = t_{k \uparrow } ,\;t_{jk \downarrow } = t_{k \downarrow }\). Then we get the following corollary.

Corollary 1

Consider the memristive neural network (1) and suppose that Assumptions 1 and 2 hold. Then the memristive neural network (1) with time-invariant impulses is globally exponentially stable if the following inequality holds:

$$\alpha - Rq > 0,$$

where

$$R = \left| {\frac{\mu }{\gamma }} \right|^{2} , \,\alpha = - \left( {p + \frac{2\ln \left| \mu \right|}{\xi } + \frac{2\ln \left| \gamma \right|}{\zeta }} \right),$$
$$p = - \mathop { \hbox{min} }\limits_{1 \le i \le n} \left\{ {\underline{d}_{i} } \right\} + 2\sum\limits_{j = 1}^{n} {\sum\limits_{k = 1}^{n} {\frac{1}{{\underline{\text{d}}_{j} }}\mathop a\limits^{ * * 2}_{jk} {\text{k}}_{k}^{2} } } ,q = 2\sum\limits_{j = 1}^{n} {\sum\limits_{k = 1}^{n} {\frac{1}{{\underline{\text{d}}_{j} }}} } \mathop b\limits^{ * * 2}_{jk} l_{k}^{2} .$$

Here, \(\mathop a\limits^{ * * }_{jk} = { \hbox{max} }\{ \left| {\bar{a}_{jk} } \right|,\left| {\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{a}_{jk} } \right|\} ,\) \(\mathop b\limits^{ * * }_{jk} = { \hbox{max} }\{ \left| {\bar{b}_{jk} } \right|,\left| {\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{b}_{jk} } \right|\} ,\) \(j,k = 1,2, \ldots ,n\).

Proof

Corollary 1 can be proved in the same way as Theorem 1, so the proof is omitted here.
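Solving \(\alpha - Rq > 0\) of Corollary 1 for ζ gives the largest admissible stabilizing interval, \(\zeta < \frac{ - 2\ln \left| \gamma \right|}{Rq + p + 2\ln \left| \mu \right|/\xi }\), whenever the denominator is positive; the following small sketch of this bound is our rearrangement, with the numbers of the example below as a check.

```python
import math

def max_zeta(p, q, mu, gamma, xi):
    """Largest stabilizing interval zeta allowed by Corollary 1."""
    R = (abs(mu) / abs(gamma)) ** 2
    denom = R * q + p + 2.0 * math.log(abs(mu)) / xi
    if denom <= 0.0:
        return math.inf        # alpha - R*q > 0 then holds for every zeta > 0
    return -2.0 * math.log(abs(gamma)) / denom

print(max_zeta(-0.7532, 0.4436, 1.2, 0.9, 0.5))   # ~0.2755, as in the example
```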

Numerical example

In this section, we present an example to illustrate the effectiveness of our results. Consider the two-dimensional memristive neural network

$$\left\{ {\begin{array}{l} \frac{{dx_{1} \left( t \right)}}{dt} = - d_{1} \left( {x_{1} } \right)x_{1} \left( t \right) + a_{11} \left( {x_{1} } \right)f_{1} \left( {x_{1} } \right) + a_{12} \left( {x_{1} } \right)f_{2} \left( {x_{2} } \right) \\ \quad+ b_{11} \left( {x_{1} } \right)g_{1} \left( {x_{1} \left( {t - \tau \left( t \right)} \right)} \right) + b_{12} \left( {x_{1} } \right)g_{2} \left( {x_{2} \left( {t - \tau \left( t \right)} \right)} \right) + I_{1} \\ \frac{{dx_{2} \left( t \right)}}{dt} = - d_{2} \left( {x_{2} } \right)x_{2} \left( t \right) + a_{21} \left( {x_{2} } \right)f_{1} \left( {x_{1} } \right) + a_{22} \left( {x_{2} } \right)f_{2} \left( {x_{2} } \right) \\ \quad+ b_{21} \left( {x_{2} } \right)g_{1} \left( {x_{1} \left( {t - \tau \left( t \right)} \right)} \right) + b_{22} \left( {x_{2} } \right)g_{2} \left( {x_{2} \left( {t - \tau \left( t \right)} \right)} \right) + I_{2} \\ \end{array} } \right.$$
(27)

where
$$\begin{aligned} d_{1} \left( {x_{1} } \right) &= \left\{ \begin{array}{ll} 1.2, & \left| {x_{1} \left( t \right)} \right| < 1 \\ 1, & \left| {x_{1} \left( t \right)} \right| > 1 \\ \end{array} \right. & d_{2} \left( {x_{2} } \right) &= \left\{ \begin{array}{ll} 1, & \left| {x_{2} \left( t \right)} \right| < 1 \\ 1.2, & \left| {x_{2} \left( t \right)} \right| > 1 \\ \end{array} \right. \\ a_{11} \left( {x_{1} } \right) &= \left\{ \begin{array}{ll} \frac{1}{6}, & \left| {x_{1} \left( t \right)} \right| < 1 \\ - \frac{1}{6}, & \left| {x_{1} \left( t \right)} \right| > 1 \\ \end{array} \right. & a_{12} \left( {x_{1} } \right) &= \left\{ \begin{array}{ll} \frac{1}{5}, & \left| {x_{1} \left( t \right)} \right| < 1 \\ - \frac{1}{5}, & \left| {x_{1} \left( t \right)} \right| > 1 \\ \end{array} \right. \\ a_{21} \left( {x_{2} } \right) &= \left\{ \begin{array}{ll} \frac{1}{5}, & \left| {x_{2} \left( t \right)} \right| < 1 \\ - \frac{1}{5}, & \left| {x_{2} \left( t \right)} \right| > 1 \\ \end{array} \right. & a_{22} \left( {x_{2} } \right) &= \left\{ \begin{array}{ll} \frac{1}{8}, & \left| {x_{2} \left( t \right)} \right| < 1 \\ - \frac{1}{8}, & \left| {x_{2} \left( t \right)} \right| > 1 \\ \end{array} \right. \\ b_{11} \left( {x_{1} } \right) &= \left\{ \begin{array}{ll} \frac{1}{4}, & \left| {x_{1} \left( t \right)} \right| < 1 \\ - \frac{1}{4}, & \left| {x_{1} \left( t \right)} \right| > 1 \\ \end{array} \right. & b_{12} \left( {x_{1} } \right) &= \left\{ \begin{array}{ll} \frac{1}{6}, & \left| {x_{1} \left( t \right)} \right| < 1 \\ - \frac{1}{6}, & \left| {x_{1} \left( t \right)} \right| > 1 \\ \end{array} \right. \\ b_{21} \left( {x_{2} } \right) &= \left\{ \begin{array}{ll} \frac{1}{7}, & \left| {x_{2} \left( t \right)} \right| < 1 \\ - \frac{1}{7}, & \left| {x_{2} \left( t \right)} \right| > 1 \\ \end{array} \right. & b_{22} \left( {x_{2} } \right) &= \left\{ \begin{array}{ll} \frac{1}{3}, & \left| {x_{2} \left( t \right)} \right| < 1 \\ - \frac{1}{3}, & \left| {x_{2} \left( t \right)} \right| > 1 \\ \end{array} \right. \end{aligned}$$

Therefore,

$$\underline{D} = \left( {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right),\;\;\mathop A\limits^{ * * } = \left( {\begin{array}{*{20}c} \frac{1}{6} & \frac{1}{5} \\ \frac{1}{5} & \frac{1}{8} \\ \end{array} } \right),\;\;\mathop B\limits^{ * * } = \left( {\begin{array}{*{20}c} \frac{1}{4} & \frac{1}{6} \\ \frac{1}{7} & \frac{1}{3} \\ \end{array} } \right).$$

Let \(\tau \left( t \right) = 1.8 + 0.5{ \sin }\left( t \right)\), \(I_{1} = I_{2} = 0\), \(f_{i} \left( a \right) = g_{i} \left( a \right) = \frac{1}{2}\left( {\left| {a + 1} \right| - \left| {a - 1} \right|} \right).\) By simple calculation, we get \(k_{i} = l_{i} = 1,\;i = 1,2,\;p = - 0.7532,\;q = 0.4436\). For the time-varying impulses, we choose the impulsive strengths of the destabilizing impulses \(\mu_{1} = \mu_{2} = 1.2\), the impulsive strengths of the stabilizing impulses \(\gamma_{1} = \gamma_{2} = \cdots = 0.9\), and the lower bound of the destabilizing impulsive intervals \(\xi_{1} = \xi_{2} = 0.5\). According to Corollary 1, the neural network (27) can be stabilized if the maximum impulsive interval ζ of the stabilizing impulsive sequence is not more than 0.2755. If we let \(t_{k \uparrow } - t_{{\left( {k - 1} \right) \uparrow }} = 0.5\) and \(t_{k \downarrow } - t_{{\left( {k - 1} \right) \downarrow }} = 0.2\), the whole impulse sequence can be described by \(t_{0 \uparrow } = 0,\) \(t_{k \uparrow } = t_{{\left( {k - 1} \right) \uparrow }} + 0.5\) for the destabilizing impulses and \(t_{0 \downarrow } = 0.09,\) \(t_{k \downarrow } = t_{{\left( {k - 1} \right) \downarrow }} + 0.2\) for the stabilizing impulses. The corresponding trajectories of the impulsive neural network (27) are plotted in Fig. 2, where one observes that, when \(\zeta < 0.2755\), the neural network (27) is stabilized.

Fig. 2

State trajectories of the memristive neural network (27) under different conditions: a without impulses (blue); b with stabilizing impulses whose maximum impulsive interval is 0.2 (green). (Color figure online)
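A reproduction sketch of this experiment, reusing the hypothetical `simulate` helper from the "Model description" section; the initial state, horizon, and the delay frozen at its upper bound \(\tau = 2.3\) are our illustration choices, not settings from the paper.

```python
import numpy as np

sat = lambda a: 0.5 * (np.abs(a + 1.0) - np.abs(a - 1.0))

d_hat, d_chk = np.array([1.2, 1.0]), np.array([1.0, 1.2])
a_hat = np.array([[1/6, 1/5], [1/5, 1/8]]); a_chk = -a_hat
b_hat = np.array([[1/4, 1/6], [1/7, 1/3]]); b_chk = -b_hat
T, I = np.ones(2), np.zeros(2)

# destabilizing impulses (alpha = 1.2) every 0.5; stabilizing (0.9) every 0.2
t_up = np.arange(0.5, 20.0, 0.5)
t_dn = np.arange(0.09, 20.0, 0.2)
t_imp = np.sort(np.concatenate([t_up, t_dn]))
alpha = np.where(np.isin(t_imp, t_up), 1.2, 0.9)

traj = simulate(d_hat, d_chk, a_hat, a_chk, b_hat, b_chk, T, I, sat, sat,
                tau=2.3, alpha=alpha, t_imp=t_imp, x0=[2.0, -1.5], t_end=20.0)
print(np.abs(traj[-1]))        # should be near zero: (27) is stabilized
```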

Conclusions

In this paper, we investigated the exponential stability problem for a class of general memristor-based neural networks with time-varying delays and time-varying impulses. To investigate the dynamic properties of the system, we turned, under the framework of Filippov solutions, to the qualitative analysis of a relevant differential inclusion. By using the Lyapunov method, stability conditions were obtained. A numerical example was given to illustrate the effectiveness of the theoretical results.