1 Introduction

As an extension of integer-order calculus, fractional-order calculus has drawn much attention from researchers. Many dynamical processes can be described by fractional differential equations, for example in electrochemistry [1], diffusion [2], viscoelastic materials [3] and control [4]. Recently, fractional-order neural networks have been studied. Kaslik studied the stability, bifurcations and chaos of fractional-order Hopfield neural networks [5]. Ref. [6] investigated the global Mittag–Leffler stability and synchronization of a class of fractional-order memristor-based neural networks. Owing to the finite speed of signal transmission between neurons, time delay exists in almost every neural network, and it can also affect the dynamical behavior of neural networks. Thus, time delay is unavoidable in the analysis of neural networks. Time-delayed fractional-order neural networks have also been studied. For example, Chen discussed the existence, uniqueness and stability of the equilibrium point of a fractional-order delayed neural network [7]; Stamova investigated the global Mittag–Leffler stability and synchronization of impulsive fractional-order cellular neural networks with time-varying delays [8]. More results on fractional-order neural networks can be found in Refs. [9–12].

Bidirectional associative memory (BAM) neural networks have attracted many studies owing to their applications in fields such as signal processing [13], image processing [14] and pattern recognition [15]. In 1987, Kosko introduced BAM neural networks [16], which are composed of neurons arranged in two layers, the U-layer and the V-layer. The neurons in one layer are fully interconnected to those in the other layer, while there are no interconnections among neurons within the same layer. Stability plays an important role in the study of neural networks and is a prerequisite for their application. There have also been many results on the stability of BAM neural networks in recent years. For instance, in [17], Sakthivel analyzed the stability of a class of delayed stochastic BAM neural networks with Markovian jumping parameters and impulses. The global robust stability of BAM neural networks with multiple time delays under parameter uncertainties was studied by Feng et al. [18]. Exponential stability via impulsive control was investigated in [19] for Markovian jumping stochastic BAM neural networks with mode-dependent probabilistic time-varying delays. Ref. [20] discussed the stabilization of BAM neural networks with time-varying delays in the leakage terms by using sampled-data control. More results can be found in [21–25] and the references therein.

So far, the study of BAM neural networks has mainly focused on networks involving only the first derivative of the states. As is well known, memory is one of the main features of BAM neural networks [16]. A particularly attractive property of fractional-order derivatives is that they are nonlocal and have weakly singular kernels; hence, their major advantage is the description of memory [26, 27]. However, only a few studies have focused on fractional-order BAM neural networks [28, 29]. Cao investigated the finite-time stability of fractional-order BAM neural networks with distributed delay in [28]. Ref. [29] studied the uniform stability of fractional-order BAM neural networks with delays in the leakage terms. Instead of the Lyapunov approach, inequality techniques play the main role in these two papers [28, 29]. Most results on the stability of integer-order BAM neural networks are obtained by constructing Lyapunov functions [17–25], and some results on the stability of fractional-order systems via the Lyapunov function approach have been published [30–33]. Based on the Lyapunov function approach, this paper is devoted to presenting sufficient criteria for the asymptotic stability of fractional-order BAM neural networks.

In addition, many real-world systems suddenly receive external disturbances that make them undergo abrupt changes in a very short time; this phenomenon is called an impulse. Dynamical systems with impulses are neither purely continuous-time nor purely discrete-time and exhibit a combination of continuous and discrete characteristics. Clearly, such short-time disturbances affect the dynamics of the systems. There are some results on integer-order impulsive network systems (see, for example, [34–36]). Since the presence of delays and impulses frequently results in instability, bifurcation and chaos in neural networks, it is necessary to study delay and impulse effects on the stability of networks. Some results on integer-order BAM neural networks with delay and impulse effects have been obtained [17, 22, 37, 38], but impulsive fractional-order BAM neural networks with time delay have not yet been investigated.

Motivated by the above discussions, we study the asymptotic stability of a class of impulsive fractional-order BAM neural networks with time delay. We study the fractional-order BAM neural networks by employing both the fractional Barbalat’s lemma [33] and Razumikhin-type stability theorems [39]. Some stability criteria are obtained to ensure that the equilibrium point of the system is globally asymptotically stable.

The rest of this paper is organized as follows: In Sect. 2, we introduce some definitions and lemmas that are necessary for presenting our results. The main results on stability conditions for fractional-order BAM neural networks are presented in Sect. 3. Then, two examples are given to demonstrate the effectiveness of our results in Sect. 4. Conclusions are finally drawn in Sect. 5.

2 Model description and preliminaries

In this section, some definitions and lemmas about fractional calculus are introduced, which will be used in deriving the main results. Then, the time-delayed fractional-order BAM model with impulsive effects is presented.

There are several definitions of the fractional derivative, such as the Riemann–Liouville derivative (R–L derivative), the Caputo derivative and the Grünwald–Letnikov derivative (G–L derivative).

The R–L fractional operator often plays an important role in the stability analysis of fractional-order systems. Moreover, the R–L derivative is continuous in the order \(\alpha\) and is a natural generalization of the classical derivative [40]. Consequently, we choose the R–L derivative in this paper.

Definition 1

[4] The fractional integral of order \(\alpha\) for a function w(t) is defined as

$$\begin{aligned} D^{-\alpha }_{t_{0}}w(t)=\frac{1}{\Gamma (\alpha )} \int _{t_{0}}^{t}(t-\tau )^{\alpha -1}w(\tau ){\text{d}}\tau \end{aligned}$$
(1)

where \(t\geqslant t_{0}\) and \(\alpha >0\).
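For intuition, the integral in (1) can be evaluated numerically by treating w as piecewise constant on a grid and integrating the kernel \((t-\tau )^{\alpha -1}\) exactly over each subinterval. The following sketch is not part of the paper's method; the helper name, grid size and test functions are our own choices.

```python
import math

def frac_integral(w, alpha, t, n=2000):
    """Approximate the R-L fractional integral D^{-alpha} w at time t (t0 = 0),
    taking w piecewise constant (left endpoint) and integrating the kernel
    (t - s)^(alpha - 1) exactly over each subinterval."""
    h = t / n
    total = 0.0
    for j in range(n):
        tau = j * h
        # exact integral of (t - s)^(alpha-1) over [tau, tau + h];
        # max(..., 0.0) guards against a tiny negative rounding residue
        kernel = ((t - tau) ** alpha - max(t - tau - h, 0.0) ** alpha) / alpha
        total += w(tau) * kernel
    return total / math.gamma(alpha)

# For w(t) = 1 the kernel integrals telescope, so the rule reproduces the
# known value t^alpha / Gamma(alpha + 1) up to rounding.
approx = frac_integral(lambda s: 1.0, 0.5, 1.0)
exact = 1.0 / math.gamma(1.5)
```

For non-constant w the rule is first-order accurate in the grid spacing, which is sufficient for illustration.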

Definition 2

[4] The R–L fractional derivative with order \(\alpha\) for a continuous function x(t) is defined as follows:

$$\begin{aligned} {}^{RL}D^{\alpha}_{t_{0},t}x(t)&=\frac{{\text{d}}^{m}}{\text{d}t^{m}} \left[D^{-(m-\alpha )}_{t_{0},t}x(t)\right] \\&= \frac{1}{\Gamma (m-\alpha )}\frac{{\text{d}}^{m}}{{\text{d}}t^{m}} \int _{t_{0}}^{t}(t-\tau )^{m-\alpha -1}x(\tau ){\text{d}}\tau \end{aligned}$$
(2)

in which \(m-1<\alpha <m,\ m\in Z^{+}.\)

Without loss of generality, the order \(\alpha\) of the R–L derivative is taken as \(0<\alpha <1\) (i.e., \(m=1\) in Definition 2) in the rest of this paper. For simplicity, we denote the R–L derivative \(^{RL}D^{\alpha }_{t_{0},t}x(t)\) by \(D^{\alpha }x(t)\). Some properties of the R–L derivative are listed in the following lemma.
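Numerically, \(D^{\alpha }\) is usually approximated by the equivalent Grünwald–Letnikov difference, whose binomial weights satisfy a simple recursion; for the continuous functions considered here the G–L and R–L derivatives coincide. A minimal sketch (the helper names, step size and test function \(x(t)=t\) are our own illustrative choices):

```python
import math

def gl_weights(alpha, n):
    # c_j = (-1)^j * binom(alpha, j), via c_0 = 1 and
    # c_j = (1 - (alpha + 1)/j) * c_{j-1}
    c = [1.0]
    for j in range(1, n + 1):
        c.append((1.0 - (alpha + 1.0) / j) * c[-1])
    return c

def gl_derivative(samples, alpha, h):
    """Grunwald-Letnikov estimate of D^alpha x at the last grid point,
    given uniform samples x(t0), x(t0 + h), ..., x(t0 + k h)."""
    k = len(samples) - 1
    c = gl_weights(alpha, k)
    return sum(c[j] * samples[k - j] for j in range(k + 1)) / h ** alpha

# Closed form for comparison: D^{1/2} t = t^{1/2} / Gamma(3/2).
h = 0.001
samples = [j * h for j in range(1001)]   # x(t) = t on [0, 1]
approx = gl_derivative(samples, 0.5, h)
exact = 1.0 / math.gamma(1.5)            # value at t = 1
```

The approximation error is of first order in h, so the estimate should land close to the closed-form value.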

Lemma 1

[4] If \(w(t), u(t)\in C^{1}[t_{0},b]\), and \(\alpha >0, \beta >0\), then

$$\begin{aligned}1. \quad&D^{\alpha }D^{-\beta }w(t)=D^{\alpha -\beta }w(t)\\ 2. \quad&D^{-\alpha }D^{\alpha }w(t)=w(t)\\ 3. \quad&D^{\alpha }(w(t)\pm u(t))=D^{\alpha }w(t)\pm D^{\alpha }u(t)\\ \end{aligned}$$

Consider the following time-delayed fractional-order BAM neural network with impulsive effects:

$$\begin{aligned} \left\{ \begin{array}{l} D^{\alpha }x_{i}(t)=-a_{i}x_{i}(t)+\sum \nolimits _{j=1}^{m}c_{ij}f_{j} \left( y_{j}\left( t-\tau ^{(1)}_{ji}\right) \right) +I_{i}\\ \Delta x_{i}(t_{k})=\gamma ^{(1)}_{k}(x_{i}(t_{k}))\quad i=1,2,\ldots ,n;k=1,2,\ldots \\ D^{\alpha }y_{j}(t)=-b_{j}y_{j}(t)+\sum \nolimits _{i=1}^{n}d_{ji}g_{i} \left( x_{i}\left( t-\tau ^{(2)}_{ij}\right) \right) +J_{j}\\ \Delta y_{j}(t_{k})=\gamma ^{(2)}_{k}(y_{j}(t_{k}))\quad j=1,2,\ldots ,m;k=1,2,\ldots \\ \end{array}\right. \end{aligned}$$
(3)

A BAM network has two layers, \(U=\{x_{1}, x_{2},\ldots, x_{n}\}\) and \(V=\{y_{1}, y_{2},\ldots, y_{m}\}\), where \(x_{i}(t)\) and \(y_{j}(t)\) denote the membrane voltages of the ith neuron in the U-layer and the jth neuron in the V-layer, respectively. \(a_{i}>0\) and \(b_{j}>0\) denote the decay coefficients of the neurons \(x_{i}\) and \(y_{j}\), respectively; \(f_{j}\) and \(g_{i}\) denote the transfer functions of the neurons; \(c_{ij}\) and \(d_{ji}\) denote the connection strengths between neurons \(x_{i}\) and \(y_{j}\); \(I_{i}\) and \(J_{j}\) denote the external inputs to the U-layer and V-layer, respectively. In addition, \(\Delta x_{i}(t_{k})=x_{i}(t_{k}^{+})-x_{i}(t_{k}^{-})\) and \(\Delta y_{j}(t_{k})=y_{j}(t_{k}^{+})-y_{j}(t_{k}^{-})\). The impulsive moments \(\{t_{k}|k=1,2,3,\ldots \}\) satisfy \(0\leqslant t_{0}<t_{1}<t_{2}<\cdots<t_{k}<\cdots\) with \(t_{k}\rightarrow +\infty\) as \(k\rightarrow +\infty\), and \(x(t_{k}^{+})=\lim _{t\rightarrow t_{k}^{+}}x(t)\), \(x(t_{k}^{-})=x(t_{k})\).

Lemma 2

[33] (Fractional Barbalat’s lemma) If \(\int _{t_{0}}^{t}w(s){\text{d}}s\) has a finite limit as \(t\rightarrow +\infty\) and \(D^{\alpha }w(t)\) is bounded, then \(w(t)\rightarrow 0\) as \(t\rightarrow +\infty\), where \(0<\alpha <1\).

Lemma 3

[39] Suppose that \(\omega _1, \omega _2\): \(R\rightarrow R\) are continuous strictly increasing functions with \(\omega _1(0)=\omega _2(0)=0\), so that \(\omega _1(s)\) and \(\omega _2(s)\) are positive for \(s>0\). If there exists a continuously differentiable function V(t, x) such that \(\omega _1(\parallel x(t)\parallel )\leqslant V(t,x(t))\leqslant \omega _2(\parallel x(t)\parallel )\) holds, and there exist two constants \(q>p>0\) such that for any given \(t_0\in R\) the R–L fractional derivative of V along the solution x(t) of the R–L system \(D^{\alpha }x(t)=f(t,x(t),x(t-\tau ))\) satisfies

$$\begin{aligned} D^{\alpha }V(t,x(t))\leqslant -qV(t,x(t)) +p\sup _{-\tau \leqslant \theta \leqslant 0}V(t+\theta , x(t+\theta )) \end{aligned}$$

for \(t\geqslant t_0\), then R–L system \(D^{\alpha }x(t)=f(t,x(t),x(t-\tau ))\) is globally asymptotically stable.

Furthermore, the transfer functions \(f_{j}, g_{i}\) and the impulsive operators are assumed to satisfy the following assumptions:

(H1) The functions \(f_{j},g_{i}\ (i=1,2,\ldots ,n;\ j=1,2,\ldots ,m)\) are Lipschitz continuous; that is, there exist positive constants \(F_{j},G_{i}\) such that

$$\begin{aligned}&\mid f_{j}(x)-f_{j}(y)\mid \leqslant F_{j}\mid x-y\mid ,\nonumber \\&\mid g_{i}(x)-g_{i}(y)\mid \leqslant G_{i}\mid x-y\mid ,\nonumber \\&\quad \forall x,y\in R. \end{aligned}$$
(4)

(H2) The impulsive operators \(\gamma ^{(1)}_{k}(x_{i}(t_{k}))\) and \(\gamma ^{(2)}_{k}(y_{j}(t_{k}))\) satisfy

$$\begin{aligned} \left\{ \begin{array}{l} \gamma ^{(1)}_{k}(x_{i}(t_{k}))=-\lambda _{ik}^{(1)}(x_{i}(t_{k})-x^{*}) \quad i=1,2,\ldots ,n;k=1,2,\ldots \\ \gamma ^{(2)}_{k}(y_{j}(t_{k}))=-\lambda _{jk}^{(2)}(y_{j}(t_{k})-y^{*})\quad j=1,2,\ldots ,m;k=1,2,\ldots \\ \end{array}\right. \end{aligned}$$
(5)

where \(\lambda _{ik}^{(1)}\in (0,2)\ (i=1,2,\ldots ,n;\ k=1,2,\ldots )\) and \(\lambda _{jk}^{(2)}\in (0,2)\ (j=1,2,\ldots ,m;\ k=1,2,\ldots )\), so that \(|1-\lambda _{ik}^{(1)}|<1\) and \(|1-\lambda _{jk}^{(2)}|<1\).
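Substituting (5) into the jump condition of (3) gives \(x_{i}(t_{k}^{+})-x^{*}=(1-\lambda _{ik}^{(1)})(x_{i}(t_{k})-x^{*})\), so each impulse shrinks the distance to the equilibrium whenever \(|1-\lambda _{ik}^{(1)}|<1\). A quick numerical check (the sample values of \(x^{*}\) and \(\lambda\) are our own):

```python
def impulse(x, x_star, lam):
    """One impulse from (H2): x(t_k^+) = x(t_k) - lam * (x(t_k) - x_star)."""
    return x - lam * (x - x_star)

# With |1 - lam| < 1 the gap to the equilibrium contracts at every impulse.
x_star = 0.25
for lam in (0.7, 0.8, 1.5):
    gap_before = abs(1.0 - x_star)
    gap_after = abs(impulse(1.0, x_star, lam) - x_star)
    assert gap_after < gap_before
```

The equilibrium itself is a fixed point of the impulse map, consistent with \((x^*,y^*)\) remaining an equilibrium of the impulsive system.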

3 Main results

In this section, we will state our main results in the following theorems.

Theorem 1

Suppose that (H1) and (H2) hold. Then the equilibrium \((x^*,y^*)\) of system (3) is globally asymptotically stable if \(\hat{\xi }^{(1)}>0\) and \(\hat{\xi }^{(2)}>0\), where

$$\begin{aligned} \hat{\xi }^{(1)}= & {} \min _{i}\{\xi _{i}^{(1)}\}, \hat{\xi }^{(2)}=\min _{j}\{\xi _{j}^{(2)}\},\\ \xi _{i}^{(1)}= & {} G_{i}\left\{ \frac{a_{i}}{G_{i}}-\sum _{j=1}^{m}|d_{ji}|\right\} , \xi _{j}^{(2)}=F_{j}\left\{ \frac{b_{j}}{F_{j}}-\sum _{i=1}^{n}|c_{ij}|\right\} . \end{aligned}$$
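Since \(\xi _{i}^{(1)}=a_{i}-G_{i}\sum _{j}|d_{ji}|\) and \(\xi _{j}^{(2)}=b_{j}-F_{j}\sum _{i}|c_{ij}|\), the criterion involves only column sums of the connection matrices and is easy to check numerically. A sketch of such a check (the helper name is ours; the data are the Case 1 parameters of Example 1 below):

```python
def theorem1_margins(a, b, C, D, F, G):
    """Return (min_i xi_i^(1), min_j xi_j^(2)) with
    xi_i^(1) = a_i - G_i * sum_j |d_ji|,  xi_j^(2) = b_j - F_j * sum_i |c_ij|.
    C is n x m, D is m x n."""
    n, m = len(a), len(b)
    xi1 = [a[i] - G[i] * sum(abs(D[j][i]) for j in range(m)) for i in range(n)]
    xi2 = [b[j] - F[j] * sum(abs(C[i][j]) for i in range(n)) for j in range(m)]
    return min(xi1), min(xi2)

# Example 1, Case 1 data: both margins are positive, so Theorem 1 applies.
a, b = [1.5, 0.5, 0.5], [0.5, 1.5]
C = [[-0.05, 0.1], [0.02, 0.2], [-0.01, 1.0]]
D = [[-0.05, 0.1, 0.02], [0.7, -0.01, 0.01]]
xi1, xi2 = theorem1_margins(a, b, C, D, [1.0, 1.0], [1.0, 1.0, 1.0])
```

Theorem 1 applies exactly when both returned margins are positive.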

Proof

By the transformation \(x_{i}(t)=u_{i}(t)+x_{i}^*,\ y_{j}(t)=v_{j}(t)+y_{j}^*\), which shifts the equilibrium point to the origin, Eq. (3) is converted into:

$$\begin{aligned} \left\{ \begin{array}{l} D^{\alpha }u_{i}(t)=-a_{i}u_{i}(t)+\sum \nolimits _{j=1}^{m}c_{ij} \left( f_{j}\left( y_{j}\left( t-\tau ^{(1)}_{ji}\right) \right) -f_{j}(y_{j}^*)\right) \\ u_{i}(t_{k}^{+})=\left( 1-\lambda _{ik}^{(1)}\right) u_{i}(t_{k}^{-})\quad i=1,2,\ldots ,n;k=1,2,\ldots \\ D^{\alpha }v_{j}(t)=-b_{j}v_{j}(t)+\sum \nolimits _{i=1}^{n}d_{ji} \left( g_{i}\left( x_{i}\left( t-\tau ^{(2)}_{ij}\right) \right) -g_{i}(x_{i}^*)\right) \\ v_{j}(t_{k}^{+})=\left( 1-\lambda _{jk}^{(2)}\right) v_{j}(t_{k}^{-})\quad j=1,2,\ldots ,m;k=1,2,\ldots \\ \end{array}\right. \end{aligned}$$
(6)

Consider a Lyapunov function defined by

$$\begin{aligned} V(t)&=D^{-(1-\alpha )}_{t_{0}}\left( \sum _{i=1}^{n}|u_{i}(t)| +\sum _{j=1}^{m}|v_{j}(t)|\right) +\sum _{i=1}^{n}\sum _{j=1}^{m} F_{j}|c_{ij}|\int _{t-\tau _{ij}^{(1)}}^{t}|v_{j}(s)|{\text{d}}s\\&\quad +\sum _{j=1}^{m}\sum _{i=1}^{n}G_{i}| d_{ji}|\int _{t-\tau _{ji}^{(2)}}^{t}|u_{i}(s)|{\text{d}}s \end{aligned}$$

When \(t\ne t_{k}\), calculating the derivative of V(t) along the solutions of system (6) and using the definition of the R–L derivative together with Lemma 1, we obtain

$$\begin{aligned} \dot{V}(t)&=\frac{{\text{d}}\left\{ D^{-(1-\alpha )}_{t_{0}} \left( \sum _{i=1}^{n}\left|{u_{i}(t)}\right| +\sum _{j=1}^{m}\left| v_{j}(t)\right| \right) \right\} }{{\text{d}}t}\\&\quad +\frac{{\text{d}}\left\{ \sum _{i=1}^{n}\sum _{j=1}^{m}F_{j}\left| c_{ij}\right| \int _{t-\tau _{ij}^{(1)}}^{t}\left| v_{j}(s)\right| {\text{d}}s +\sum _{j=1}^{m}\sum _{i=1}^{n}G_{i}\left| d_{ji}\right| \int _{t-\tau _{ji}^{(2)}}^{t}\left| u_{i}(s)\right| {\text{d}}s\right\} }{{\text{d}}t}\\&=D^{\alpha }\left\{ \sum _{i=1}^{n}\left| u_{i}(t)\right| +\sum _{j=1}^{m}\left| v_{j}(t)\right| \right\} \\&\quad +\sum _{i=1}^{n}\sum _{j=1}^{m}F_{j}\left| c_{ij}\right| \left( \left| v_{j}(t)\right| -\left| v_{j}\left( t-\tau ^{(1)}_{ij}\right) \right| \right) +\sum _{j=1}^{m}\sum _{i=1}^{n}G_{i}|d_{ji}|\left( |u_{i}(t)|-\left| u_{i} \left( t-\tau ^{(2)}_{ji}\right) \right| \right) \\&\leqslant \sum _{i=1}^{n}{\text{sgn}}(u_{i}(t))D^{\alpha }u_{i}(t)+\sum _{j=1}^{m}{\text{sgn}}(v_{j}(t))D^{\alpha }v_{j}(t)\\&\quad +\sum _{i=1}^{n} \sum _{j=1}^{m}F_{j}\left| c_{ij}\right| \left( \left| v_{j}(t)\right| -\left| v_{j} \left( t-\tau ^{(1)}_{ij}\right) \right| \right) +\sum _{j=1}^{m}\sum _{i=1}^{n}G_{i}\left| d_{ji}\right| \left( \left| u_{i}(t)\right| -\left| u_{i}\left( t-\tau ^{(2)}_{ji}\right) \right| \right) \\&\leqslant \sum _{i=1}^{n}{\text{sgn}}(u_{i}(t))\left\{ -a_{i}u_{i}(t)+\sum _{j=1}^{m}c_{ij} \left( f_{j}\left( y_{j}\left( t-\tau ^{(1)}_{ji}\right) \right) -f_{j}(y_{j}^*)\right) \right\} \\&\quad +\sum _{j=1}^{m}{\text{sgn}}(v_{j}(t))\left\{ -b_{j}v_{j}(t)+\sum _{i=1}^{n}d_{ji} \left( g_{i}\left( x_{i}\left( t-\tau ^{(2)}_{ij}\right) \right) -g_{i}\left( x_{i}^*\right) \right) \right\} \\&\quad +\sum _{i=1}^{n}\sum _{j=1}^{m}F_{j}\left| c_{ij}\right| \left( \left| v_{j}(t)\right| -\left| v_{j} \left( t-\tau ^{(1)}_{ij}\right) \right| \right) +\sum _{j=1}^{m}\sum _{i=1}^{n}G_{i}\left| d_{ji}\right| \left( \left| u_{i}(t)\right| -\left| u_{i} \left( t-\tau ^{(2)}_{ji}\right) \right| \right) \\&\leqslant -\sum _{i=1}^{n}a_{i}\left| u_{i}(t)\right| -\sum _{j=1}^{m}b_{j}\left| v_{j}(t)\right| \\&\quad +\sum _{i=1}^{n}\sum _{j=1}^{m}F_{j}\left| c_{ij}\right| \left| v_{j} \left( t-\tau ^{(1)}_{ij}\right) \right| +\sum _{j=1}^{m}\sum _{i=1}^{n}G_{i}\left| d_{ji}\right| \left| u_{i} \left( t-\tau ^{(2)}_{ji}\right) \right| \\&\quad +\sum _{i=1}^{n}\sum _{j=1}^{m}F_{j}\left| c_{ij}\right| \left( \left| v_{j}(t)\right| -\left| v_{j} \left( t-\tau ^{(1)}_{ij}\right) \right| \right) +\sum _{j=1}^{m}\sum _{i=1}^{n}G_{i}\left| d_{ji}\right| \left( \left| u_{i}(t)\right| -\left| u_{i}\left( t-\tau ^{(2)}_{ji}\right) \right| \right) \\&\leqslant -\sum _{i=1}^{n}a_{i}\left| u_{i}(t)\right| -\sum _{j=1}^{m}b_{j}\left| v_{j}(t)\right| +\sum _{i=1}^{n}\sum _{j=1}^{m}F_{j}\left| c_{ij}\right| \left| v_{j}(t)\right| +\sum _{j=1}^{m}\sum _{i=1}^{n}G_{i}\left| d_{ji}\right| \left| u_{i}(t)\right| \\&\leqslant \sum _{i=1}^{n}G_{i}\left\{ -\frac{a_{i}}{G_{i}}+\sum _{j=1}^{m}\left| d_{ji}\right| \right\} \left| u_{i}(t)\right| +\sum _{j=1}^{m}F_{j}\left\{ -\frac{b_{j}}{F_{j}}+\sum _{i=1}^{n}\left| c_{ij}\right| \right\} \left| v_{j}(t)\right| \\&=-\sum _{i=1}^{n}\xi ^{(1)}_{i}\left| u_{i}(t)\right| -\sum _{j=1}^{m}\xi ^{(2)}_{j}\left| v_{j}(t)\right| \\&\leqslant -\hat{\xi }^{(1)}\sum _{i=1}^{n}\left| u_{i}(t)\right| -\hat{\xi }^{(2)}\sum _{j=1}^{m}\left| v_{j}(t)\right| \\ \end{aligned}$$

Then, for any \(t\in [t_{k-1},t_{k})\), we have

$$\begin{aligned} V(t)+\int _{t_{k-1}}^{t}\left( \hat{\xi }^{(1)}\sum _{i=1}^{n}\left| u_{i}(s)\right| +\hat{\xi }^{(2)}\sum _{j=1}^{m}\left| v_{j}(s)\right| \right) {\text{d}}s\leqslant V(t_{k-1}^{+}). \end{aligned}$$

On the other hand, from (5), one has

$$\begin{aligned} V(t_{k}^{+})&=D^{-(1-\alpha )}_{t_{0}}\left( \sum _{i=1}^{n}\left| u_{i}(t_{k}^{+})\right| +\sum _{j=1}^{m}\left| v_{j}(t_{k}^{+})\right| \right) \\&\quad +\sum _{i=1}^{n}\sum _{j=1}^{m}F_{j}\left| c_{ij}\right| \int _{t_{k}^{+}-\tau _{ij}^{(1)}}^{t_{k}^{+}}\left| v_{j}(s)\right| {\text{d}}s +\sum _{j=1}^{m}\sum _{i=1}^{n}G_{i}\left| d_{ji}\right| \int _{t_{k}^{+}-\tau _{ji}^{(2)}}^{t_{k}^{+}}\left| u_{i}(s)\right| {\text{d}}s\\&=D^{-(1-\alpha )}_{t_{0}}\left( \sum _{i=1}^{n}\left| 1-\lambda _{ik}^{(1)}\right| \left| u_{i}(t_{k}^{-})\right| +\sum _{j=1}^{m}\left| 1-\lambda _{jk}^{(2)}\right| \left| v_{j}(t_{k}^{-})\right| \right) \\&\quad +\sum _{i=1}^{n}\sum _{j=1}^{m}F_{j}\left| c_{ij}\right| \int _{t_{k}^{+}-\tau _{ij}^{(1)}}^{t_{k}^{+}}\left| v_{j}(s)\right| {\text{d}}s +\sum _{j=1}^{m}\sum _{i=1}^{n}G_{i}\left| d_{ji}\right| \int _{t_{k}^{+}-\tau _{ji}^{(2)}}^{t_{k}^{+}}\left| u_{i}(s)\right| {\text{d}}s\\&< D^{-(1-\alpha )}_{t_{0}}\left( \sum _{i=1}^{n}\left| u_{i}(t_{k}^{-})\right| +\sum _{j=1}^{m}\left| v_{j}(t_{k}^{-})\right| \right) \\&\quad +\sum _{i=1}^{n}\sum _{j=1}^{m}F_{j}\left| c_{ij}\right| \int _{t_{k}^{+}-\tau _{ij}^{(1)}}^{t_{k}^{+}}\left| v_{j}(s)\right| {\text{d}}s +\sum _{j=1}^{m}\sum _{i=1}^{n}G_{i}\left| d_{ji}\right| \int _{t_{k}^{+}-\tau _{ji}^{(2)}}^{t_{k}^{+}}\left| u_{i}(s)\right| {\text{d}}s\\&=V(t_{k}^{-}) \end{aligned}$$

Let \(\hat{\xi }=\min \{\hat{\xi }^{(1)},\hat{\xi }^{(2)}\}>0\) and \(U(t)=\sum _{i=1}^{n}\left| u_{i}(t)\right| +\sum _{j=1}^{m}\left| v_{j}(t)\right|\). Then, for all \(t\in [t_{k-1},t_{k})\), \(V(t)\leqslant -\hat{\xi }\int_{t_{k-1}}^{t}U(s){\text{d}}s+ V(t_{k-1}^{+})\leqslant -\hat{\xi }\int_{t_{k-1}}^{t}U(s){\text{d}}s+ V(t_{k-1}^{-})\leqslant -\hat{\xi }\int_{t_{k-2}}^{t}U(s){\text{d}}s+ V(t_{k-2}^{-})\leqslant \cdots \leqslant -\hat{\xi }\int_{t_{0}}^{t}U(s){\text{d}}s+ V(t_{0}).\)

Thus

$$\begin{aligned} V(t)+\hat{\xi }\int_{t_{0}}^{t}U(s){\text{d}}s\leqslant V(t_{0}). \end{aligned}$$

Since \(V(t)\geqslant 0\), the nondecreasing function \(\int _{t_{0}}^{t}U(s){\text{d}}s\) is bounded above by \(V(t_{0})/\hat{\xi }\) and therefore has a finite limit as \(t\rightarrow +\infty\). Moreover, according to Eq. (6), \(\mid D^{\alpha }u_{i}(t)\mid\) and \(\mid D^{\alpha }v_{j}(t)\mid\) are bounded, so \(D^{\alpha }U(t)\) is bounded. From the fractional Barbalat’s lemma (Lemma 2), it follows that \(U(t)\rightarrow 0\), that is, \(\sum _{i=1}^{n}|u_{i}(t)|\rightarrow 0\) and \(\sum _{j=1}^{m}|v_{j}(t)|\rightarrow 0\) as \(t\rightarrow +\infty\). Therefore, the equilibrium \((x^*,y^*)\) of system (3) is globally asymptotically stable. This completes the proof. \(\square\)

Corollary 1

Suppose that (H1) and (H2) hold. Then the equilibrium \((x^*,y^*)\) of system (3) is globally asymptotically stable if

$$\begin{aligned} \rho _{1}=\max _{1\leqslant i\leqslant n}\left\{ \frac{G_{i}\sum _{j=1}^{m}|d_{ji}|}{a_{i}}\right\} <1, \quad \rho _{2}=\max _{1\leqslant j\leqslant m}\left\{ \frac{F_{j}\sum _{i=1}^{n}|c_{ij}|}{b_{j}}\right\} <1. \end{aligned}$$
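The ratios \(\rho _{1}, \rho _{2}\) are the Theorem 1 margins rewritten in relative form, so the check is equally mechanical. A sketch (the helper name is ours; the data are again the Case 1 parameters of Example 1):

```python
def corollary1_ratios(a, b, C, D, F, G):
    """rho_1 = max_i G_i sum_j |d_ji| / a_i, rho_2 = max_j F_j sum_i |c_ij| / b_j;
    Corollary 1 requires both ratios to be < 1.  C is n x m, D is m x n."""
    n, m = len(a), len(b)
    rho1 = max(G[i] * sum(abs(D[j][i]) for j in range(m)) / a[i] for i in range(n))
    rho2 = max(F[j] * sum(abs(C[i][j]) for i in range(n)) / b[j] for j in range(m))
    return rho1, rho2

# Example 1, Case 1: rho_1 = 0.5 and rho_2 = 1.3/1.5, both below 1.
rho1, rho2 = corollary1_ratios([1.5, 0.5, 0.5], [0.5, 1.5],
                               [[-0.05, 0.1], [0.02, 0.2], [-0.01, 1.0]],
                               [[-0.05, 0.1, 0.02], [0.7, -0.01, 0.01]],
                               [1.0, 1.0], [1.0, 1.0, 1.0])
```

Both ratios below 1 is equivalent to both Theorem 1 margins being positive.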

Proof

Since \(\rho _{1}<1\) implies \(a_{i}-G_{i}\sum _{j=1}^{m}|d_{ji}|>0\) for every i, and \(\rho _{2}<1\) implies \(b_{j}-F_{j}\sum _{i=1}^{n}|c_{ij}|>0\) for every j, all the conditions of Theorem 1 hold; hence the equilibrium \((x^*,y^*)\) is globally asymptotically stable. \(\square\)

Theorem 2

Suppose that (H1) and (H2) hold. Then the equilibrium \((x^*,y^*)\) of system (3) is globally asymptotically stable if \(q>p>0\), where

$$\begin{aligned} q=\min \{\hat{a},\ \hat{b}\},\ \hat{a}=\min _{i}\{a_{i}\},\ \hat{b}=\min _{j}\{b_{j}\}, \end{aligned}$$

and

$$\begin{aligned} p=\max \{\hat{d},\hat{c}\}, \hat{c}=\max _{j}\{c^{*}_{j}F_{j}\}, \hat{d}=\max _{i}\{d^{*}_{i}G_{i}\}, c^{*}_{j}=\max _{i}\{|c_{ij}|\}, d^{*}_{i}=\max _{j}\{|d_{ji}|\}. \end{aligned}$$
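The constants q and p of Theorem 2 are likewise directly computable from the network data. A sketch (the helper name is ours; the values in the usage example are computed directly from the Case 2 matrices of Example 1 below):

```python
def theorem2_qp(a, b, C, D, F, G):
    """q = min{min_i a_i, min_j b_j};  p = max{max_i d_i^* G_i, max_j c_j^* F_j}
    with c_j^* = max_i |c_ij| and d_i^* = max_j |d_ji|.  C is n x m, D is m x n."""
    n, m = len(a), len(b)
    q = min(min(a), min(b))
    c_star = [max(abs(C[i][j]) for i in range(n)) for j in range(m)]
    d_star = [max(abs(D[j][i]) for j in range(m)) for i in range(n)]
    p = max(max(d_star[i] * G[i] for i in range(n)),
            max(c_star[j] * F[j] for j in range(m)))
    return q, p

# Example 1, Case 2 data: q = 0.5 > p = 0.45, so Theorem 2 applies.
q, p = theorem2_qp([1.5, 0.5, 0.5], [0.5, 1.2],
                   [[-0.1, 0.45], [0.3, 0.45], [0.3, 0.4]],
                   [[0.45, -0.45, 0.3], [-0.1, -0.4, 0.45]],
                   [1.0, 1.0], [1.0, 1.0, 1.0])
```

Theorem 2 applies exactly when the returned pair satisfies q > p > 0.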

Proof

Based on Eq. (6), consider the following Lyapunov function:

$$\begin{aligned} V(t)=\sum _{i=1}^{n}|u_{i}(t)|+\sum _{j=1}^{m}|v_{j}(t)|. \end{aligned}$$

When \(t\ne t_{k}\), calculating the fractional derivative of V(t) along the solutions of system (6), one has

$$\begin{aligned} D^{\alpha }V(t)&=\sum _{i=1}^{n}D^{\alpha }|u_{i}(t)|+\sum _{j=1}^{m}D^{\alpha }|v_{j}(t)|\\&\leqslant \sum _{i=1}^{n}{\text{sgn}}(u_{i}(t))D^{\alpha }u_{i}(t)+\sum _{j=1}^{m}{\text{sgn}}(v_{j}(t))D^{\alpha }v_{j}(t)\\&=\sum _{i=1}^{n}{\text{sgn}}(u_{i}(t))\left( -a_{i}u_{i}(t)+\sum _{j=1}^{m}c_{ij} \left( f_{j}\left( y_{j} \left( t-\tau ^{(1)}_{ji}\right) \right) -f_{j}\left( y_{j}^*\right) \right) \right) \\&\quad +\sum _{j=1}^{m}{\text{sgn}}(v_{j}(t))\left( -b_{j}v_{j}(t)+\sum _{i=1}^{n}d_{ji} \left( g_{i}\left( x_{i}\left( t-\tau ^{(2)}_{ij}\right) \right) -g_{i}\left( x_{i}^*\right) \right) \right) \\&\leqslant \sum _{i=1}^{n}\left( -a_{i}|u_{i}(t)|+\sum _{j=1}^{m}|c_{ij}|F_{j}\left| v_{j} \left( t-\tau ^{(1)}_{ij}\right) \right| \right) \\&\quad +\sum _{j=1}^{m}\left( -b_{j}|v_{j}(t)|+\sum _{i=1}^{n}|d_{ji}|G_{i}\left| u_{i} \left( t-\tau ^{(2)}_{ji}\right) \right| \right) \\&\leqslant -\hat{a}\sum _{i=1}^{n}|u_{i}(t)|-\hat{b}\sum _{j=1}^{m}|v_{j}(t)|\\&\quad +\sum _{i=1}^{n}\sum _{j=1}^{m}|c_{ij}|F_{j}\left| v_{j} \left( t-\tau ^{(1)}_{ij}\right) \right| +\sum _{j=1}^{m}\sum _{i=1}^{n}|d_{ji}|G_{i}\left| u_{i} \left( t-\tau ^{(2)}_{ji}\right) \right| \\&\leqslant -qV(t)+\sum _{j=1}^{m}c_{j}^{*}F_{j}\left| v_{j} \left( t-\tau ^{(1)}_{ij}\right) \right| +\sum _{i=1}^{n}d_{i}^{*}G_{i}\left| u_{i} \left( t-\tau ^{(2)}_{ji}\right) \right| \\&\leqslant -qV(t)+\hat{c}\sum _{j=1}^{m}\left| v_{j}\left( t-\tau ^{(1)}_{ij}\right) \right| +\hat{d}\sum _{i=1}^{n}\left| u_{i}\left( t-\tau ^{(2)}_{ji}\right) \right| \\&\leqslant -qV(t)+p\bar{V}(t) \end{aligned}$$

where \(\bar{V}(t)=\sup _{t-\tau ^{*}\leqslant s\leqslant t}V(s)\) and \(\tau ^{*}\) denotes the maximum of all the delays \(\tau ^{(1)}_{ij}, \tau ^{(2)}_{ji}\).

On the other hand, by (H2), we have

$$\begin{aligned} V(t_{k}^{+})&=\sum _{i=1}^{n}|u_{i}(t_{k}^{+})|+\sum _{j=1}^{m}|v_{j}(t_{k}^{+})|\\ &=\sum _{i=1}^{n}|1-\lambda _{ik}^{(1)}||u_{i}(t_{k}^{-})|+\sum _{j=1}^{m}|1-\lambda _{jk}^{(2)}||v_{j}(t_{k}^{-})|\\ &<\sum _{i=1}^{n}|u_{i}(t_{k})|+\sum _{j=1}^{m}|v_{j}(t_{k})|\\ &=V(t_{k}^{-}) \end{aligned}$$

From Lemma 3, the equilibrium \((x^*,y^*)\) of system (3) is globally asymptotically stable. This completes the proof. \(\square\)

Remark 1

Because the R–L derivative is continuous in the order \(\alpha\), when \(\alpha =1\) the fractional-order BAM neural network reduces to the first-order model. From the proofs of the above results, it is obvious that both Theorems 1 and 2 still hold when \(\alpha =1\).

Remark 2

The sufficient conditions in Theorems 1 and 2 are independent of each other, as will be verified in the numerical simulations.

4 Numerical simulations

In this section, we illustrate our results with two examples.

Two low-dimensional cases in Example 1 are used to compare the criteria of Theorems 1 and 2, and a higher-dimensional network is considered in Example 2 to illustrate the usefulness of our results. The algorithm used to simulate R–L fractional-order neural networks can be found in Ref. [33].

Example 1

Case 1 Consider the following impulsive fractional-order BAM neural network with five neurons, three in the first field and two in the second, described by:

$$\begin{aligned} \left\{ \begin{array}{l} D^{\alpha }x_{i}(t)=-a_{i}x_{i}(t)+\sum \nolimits _{j=1}^{2}c_{ij} \tanh (y_{j}(t-0.1))+I_{i}\\ \Delta x_{i}(t_{k})=-0.7(x_{i}(t_{k})-x^{*}) \quad i=1,2,3;k=1,2,\ldots \\ D^{\alpha }y_{j}(t)=-b_{j}y_{j}(t)+\sum \nolimits _{i=1}^{3} d_{ji}\tanh (x_{i}(t-0.1))+J_{j}\\ \Delta y_{j}(t_{k})=-0.8(y_{j}(t_{k})-y^{*})\quad j=1,2; k=1,2,\ldots \\ \end{array}\right. \end{aligned}$$

In this case, set \(a_{1}=1.5, a_{2}=0.5, a_{3}=0.5, b_{1}=0.5, b_{2}=1.5\), \(C=[-0.05, 0.1; 0.02, 0.2; -0.01, 1]\), \(D=[-0.05, 0.1, 0.02;0.7, -0.01, 0.01]\), and \(I_{i}=J_{j}=0\ (i=1,2,3;j=1,2)\). Let \(F_{j}=G_{i}=1\ (i=1,2,3;j=1,2)\). By some simple computations, one has \(\xi _{1}^{(1)}=0.75, \xi _{2}^{(1)}=0.39, \xi _{3}^{(1)}=0.47\) and \(\xi _{1}^{(2)}=0.42, \xi _{2}^{(2)}=0.2\). It is easy to check that \(\hat{\xi }^{(1)}=0.39>0\) and \(\hat{\xi }^{(2)}=0.2>0\), so the conditions of Theorem 1 are satisfied and the equilibrium point of system (3) is globally asymptotically stable. Figures 1 and 2 show the trajectories of the variables \(x_{i}(t)\) and \(y_{j}(t)\) of system (3).
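For reference, trajectories like those in Figs. 1 and 2 can be generated with a Grünwald–Letnikov discretization in the spirit of [33]. The sketch below is a simplification, not the paper's exact algorithm: the impulse instants \(t_{k}=k\), the initial history, the step size and the handling of the GL memory across impulses are all our own choices.

```python
import numpy as np

def simulate_case1(alpha=0.98, h=0.01, T=20.0):
    """Explicit Grunwald-Letnikov scheme for Example 1, Case 1 (I = J = 0,
    so the equilibrium is the origin; impulses rescale the state toward it)."""
    a = np.array([1.5, 0.5, 0.5]); b = np.array([0.5, 1.5])
    C = np.array([[-0.05, 0.1], [0.02, 0.2], [-0.01, 1.0]])   # n x m
    D = np.array([[-0.05, 0.1, 0.02], [0.7, -0.01, 0.01]])    # m x n
    steps = int(round(T / h))
    d = int(round(0.1 / h))                  # delay of 0.1 in grid steps
    per = int(round(1.0 / h))                # assumed impulse period t_k = k
    # GL weights c_j = (-1)^j binom(alpha, j)
    c = np.empty(steps + 1); c[0] = 1.0
    for j in range(1, steps + 1):
        c[j] = (1.0 - (alpha + 1.0) / j) * c[j - 1]
    x = np.zeros((steps + 1, 3)); y = np.zeros((steps + 1, 2))
    x[0] = [0.5, -0.4, 0.3]; y[0] = [0.6, -0.5]   # constant initial history
    for k in range(1, steps + 1):
        xd = x[max(k - 1 - d, 0)]; yd = y[max(k - 1 - d, 0)]
        fx = -a * x[k - 1] + C @ np.tanh(yd)
        fy = -b * y[k - 1] + D @ np.tanh(xd)
        # x_k = h^alpha * f - sum_{j=1}^{k} c_j x_{k-j}  (GL memory term)
        x[k] = h ** alpha * fx - c[1:k + 1] @ x[k - 1::-1]
        y[k] = h ** alpha * fy - c[1:k + 1] @ y[k - 1::-1]
        if k % per == 0:          # impulse: u -> (1 - 0.7) u, v -> (1 - 0.8) v
            x[k] *= 0.3; y[k] *= 0.2
    return x[-1], y[-1]
```

Under these assumptions, both layers should settle near the origin, consistent with the conclusion of Theorem 1.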

Fig. 1

Time response of state variables x(t) in system (3)

Fig. 2

Time response of state variables y(t) in system (3)

Case 2 In this case, set \(a_{1}=1.5, a_{2}=0.5, a_{3}=0.5, b_{1}=0.5, b_{2}=1.2\), \(C=[-0.1, 0.45;0.3, 0.45;0.3, 0.4]\), \(D=[0.45, -0.45, 0.3;-0.1, -0.4, 0.45]\), and \(I_{i}=J_{j}=0\ (i=1,2,3;j=1,2)\). Let \(F_{j}=G_{i}=1\ (i=1,2,3;j=1,2)\). By some simple computations, one has \(\hat{c}=\hat{d}=0.45\) and \(\hat{a}=\hat{b}=0.5\). It is easy to check that \(q=0.5>p=0.45>0\), so the conditions of Theorem 2 are satisfied and the equilibrium point of system (3) is globally asymptotically stable. Figures 3 and 4 show the trajectories of the variables \(x_{i}(t)\) and \(y_{j}(t)\) of system (3).

Fig. 3

Time response of state variables x(t) in system (3)

Fig. 4

Time response of state variables y(t) in system (3)

Remark 3

In Case 1 of Example 1, it is easy to get that \(\hat{a}=\hat{b}=0.5, \hat{c}=1\) and \(\hat{d}=0.7\), so \(q=0.5<p=1\), which does not meet the criteria of Theorem 2. On the other hand, in Case 2, one has \(\xi _{1}^{(1)}=0.95, \xi _{2}^{(1)}=-0.35, \xi _{3}^{(1)}=-0.25\) and \(\xi _{1}^{(2)}=-0.2, \xi _{2}^{(2)}=-0.1\); it is easy to check that \(\hat{\xi }^{(1)}=-0.35<0\) and \(\hat{\xi }^{(2)}=-0.2<0\), which does not meet the criteria of Theorem 1. Thus, the sufficient conditions in Theorems 1 and 2 are independent of each other.

Example 2

Consider an impulsive fractional-order BAM neural network with two hundred neurons, one hundred in the first field and one hundred in the second. Under the conditions of Theorems 1 and 2, we select suitable higher-dimensional coefficients \(a_{i}, b_{j}\) and matrices C and D; the other parameters are the same as in Example 1. The time responses of the state variables are shown in Figs. 5 and 6.

Fig. 5

Time response of state variables x(t) in a higher-dimensional BAM neural network

Fig. 6

Time response of state variables y(t) in a higher-dimensional BAM neural network

Remark 4

The uniform stability of fractional-order BAM neural networks was investigated in Ref. [29], where time delay was taken into account but impulsive effects were not considered. In comparison, this paper studies the asymptotic stability of time-delayed fractional-order BAM neural networks with impulsive effects, and the numerical simulations above support the theoretical results.

5 Conclusion

In this paper, two sufficient conditions were obtained to ensure that impulsive fractional-order BAM neural networks are globally asymptotically stable. Derived by employing the fractional Barbalat’s lemma and Razumikhin-type stability theorems, the new criteria are easy to test in practice. Furthermore, the methods employed in this paper are useful for studying other time-delayed fractional-order neural systems. Finally, two examples were given to show that the two sufficient conditions are independent of each other. We would like to point out that there are many results on practical applications of integer-order BAM neural networks in engineering science, whereas there are few on practical applications of fractional-order BAM networks; this will be part of our future work.