1 Introduction

In recent decades, various artificial neural network models, such as Hopfield neural networks (de Castro and Valle 2020), recurrent neural networks (Achouri and Aouiti 2022), and cellular neural networks (Xu et al. 2021), have been widely investigated due to their extensive applications in combinatorial optimization, filtering, signal processing, and so on. These applications are strongly affected by the dynamical behaviors of neural networks, especially stability and synchronization. Therefore, the investigation of the stability and synchronization of neural networks has become an attractive subject, and a great deal of excellent results have been reported; see, for example, (Chen et al. 2020; Wang et al. 2021; Xiao et al. 2021; Liu et al. 2021).

Time delays, induced by the limited speed of transmission between neurons, are ubiquitous in neural networks. What is more, time delays may produce complicated behaviors such as oscillation, bifurcation, or chaos (Song and Peng 2006; Aouiti et al. 2020). Thus, it is significant and unavoidable to study the stability and synchronization of delayed neural networks. Zhang and Zeng (2021) provided several criteria to check the stability and synchronization of reaction–diffusion neural networks with general time-varying delays. By Lyapunov–Krasovskii functions and linear matrix inequalities, the asymptotic stability of recurrent neural networks with multiple discrete delays and distributed delays was addressed in Cao et al. (2006). The exponential stability of Clifford-valued inertial neural networks with mixed delays was studied by means of pseudo almost periodic function theory and some inequality theories in Aouiti and Ben Gharbia (2020). In Sheng et al. (2021), the finite-time stability of competitive neural networks with discrete time-varying delays was discussed by adopting comparison strategies and inequality techniques. By now, a variety of results on the stability and synchronization of neural networks have been derived.

It is worth noting that the above analysis of neural networks focuses on integer-order calculus. Fractional calculus (Podlubny 1999; Kilbas et al. 2006), as an extension of integer-order calculus, has received considerable attention due to its broad applications in many fields such as viscoelasticity (Koeller 1984), bioengineering (Debnath 2003), fluid mechanics (Tripathi 2011), and so on. Compared with traditional integer-order systems, fractional-order models can depict multifarious processes and phenomena more accurately because of their memory and hereditary properties. Owing to these superiorities, many researchers have attempted to combine fractional calculus with neural networks, leading to fractional-order neural networks. Moreover, the dynamical behavior of fractional neural networks has become a noticeable subject, and numerous results have been reported (Fan et al. 2018; Huang et al. 2020; Zhang et al. 2018; Xiao and Zhong 2019; Pratap et al. 2018; Chen et al. 2018).

Obviously, it is of considerable importance to investigate the stability of fractional neural networks. In Liu et al. (2017), the properties of activation functions and M-matrices were utilized to study the Mittag–Leffler stability of fractional recurrent neural networks. In Zhang and Zhang (2020) and Chen et al. (2021), the Lyapunov–Krasovskii function method was used to study the asymptotic stability of fractional neural networks with time delays. Yao et al. (2021) considered the exponential stability of fractional-order fuzzy cellular neural networks with multiple delays by the Laplace transform method and complex function theory. Some criteria for the finite-time stability of fractional inertial neural networks with time-varying delays were proposed via Lyapunov–Krasovskii functions and analytical techniques in Aouiti et al. (2022). In Li et al. (2021), based on the sign function and activation functions, the uniform stability of complex-valued fractional neural networks with linear impulses and fixed time delays was discussed.

In addition, the synchronization of fractional neural networks is another hot topic in recent years. In Li et al. (2022) and Bai et al. (2022), the exponential synchronization and secure synchronization of fractional complex neural networks were investigated by employing the Lyapunov–Krasovskii function method. You et al. (2020) studied the Mittag–Leffler synchronization of discrete-time complex fractional neural networks with time delay by applying the Lyapunov direct method and designing a suitable controller. Du and Lu (2021) utilized a new fractional-order Gronwall inequality to explore the finite-time synchronization of fractional memristor-based neural networks with time delay. The quasi-uniform synchronization of fractional neural networks with leakage and discrete delays was discussed via the Laplace transform and several analytical techniques in Zhang et al. (2021).

Evidently, there are several effective methods for demonstrating the stability and synchronization of neural networks, such as the Laplace transform (Yao et al. 2021; Zhang et al. 2021), the linear programming approach (Yang et al. 2020), and the Lyapunov direct method (Zhang and Zhang 2020). Among them, the Lyapunov direct method is the most frequently used in the existing literature (Zhang and Wu 2022). For fractional systems, the main difficulty is how to construct an appropriate Lyapunov–Krasovskii function, because similar tools from integer-order calculus cannot be generalized to fractional calculus easily. On the other hand, most previous works treated the time delay as a single time-varying delay or as constant time delays (Yao et al. 2021). However, a delay may not remain unchanged during transmission within a neuron, and the delays between different neurons may differ. In view of this, it is necessary and meaningful to explore effective methods to investigate the stability and synchronization of fractional neural networks with various types of time delays. However, as far as we know, such investigations are rare (Syed Ali et al. 2021; Wu et al. 2022).

Motivated by the above analysis, we probe the asymptotic stability and synchronization of Riemann–Liouville fractional neural networks with multiple time-varying delays and distributed delays. The main contributions of this paper can be summarized as follows:

  • We introduce multiple time-varying delays and distributed delays into fractional neural networks simultaneously. Compared with previous works, the model is closer to actual applications.

  • The uniqueness of the equilibrium point and the asymptotic stability and synchronization of the considered system are established. In addition, the obtained results are expressed as matrix inequalities, which are concise and feasible to check.

  • To avoid the difficulty of calculating the fractional-order derivative of a function, two Lyapunov–Krasovskii functions containing fractional integral terms are constructed, whose first-order derivatives can be computed directly.

  • Based on the relationship between the stability and synchronization of fractional systems, two sufficient conditions for the synchronization of the considered system are proposed.

This paper is organized as follows. In Sect. 2, some preliminaries and the fractional neural network model are described. The asymptotic stability and synchronization criteria of the considered system are studied in Sect. 3. In Sect. 4, four numerical examples are given to check the validity and feasibility of the proposed results. Some conclusions and further works are summarized in Sect. 5.

2 Preliminaries and model description

At present, there are several definitions of fractional-order derivatives, such as the Riemann–Liouville and Caputo definitions. The two definitions have their own advantages. The Caputo derivative is more applicable in practical engineering, because its initial conditions take the same form as those of integer-order systems. However, the Caputo derivative requires the function to be n-times differentiable, while the Riemann–Liouville derivative only requires the function to be continuous. On the other hand, compared with the Caputo derivative, the Riemann–Liouville derivative can be considered a natural generalization of the integer-order derivative, because it is a continuous operator from integer order to arbitrary order. Therefore, the Riemann–Liouville derivative is applied in this paper. Some preliminaries and the fractional neural network model are introduced in this section.

Definition 1

The Riemann–Liouville fractional integral of order p for the function u(t) is defined as

$$\begin{aligned} ^{R}_{t_{0}}D^{-p}_{t}u(t)=\frac{1}{\Gamma (p)}\int _{t_{0}}^{t}(t-s)^{p-1}u(s) \textrm{d}s, \quad p>0, \end{aligned}$$

where \(\Gamma (\cdot )\) is the Gamma function and \(\Gamma (p)=\int _{0}^{\infty }t^{p-1}e^{-t}\textrm{d}t\).

Definition 2

The Riemann–Liouville fractional derivative of order q for the function u(t) is defined as

$$\begin{aligned} ^{R}_{t_{0}}D^{q}_{t}u(t)=\frac{1}{\Gamma (n-q)}\frac{\textrm{d}^n}{\textrm{d}t^n}\int _{t_{0}}^{t}(t-s)^{n-q-1}u(s)\textrm{d}s, n-1\le q <n\in \textrm{Z}^+. \end{aligned}$$
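For numerical work, the Riemann–Liouville derivative of a sampled function can be approximated by the Grünwald–Letnikov scheme, which agrees with Definition 2 under mild smoothness assumptions. The following Python sketch (the function names, the uniform grid of step h, and the test problem are our own illustrative choices) computes the weights \(w_j=(-1)^j\binom{q}{j}\) recursively and evaluates the discrete convolution:

```python
import numpy as np
from math import gamma

def gl_weights(q, N):
    """Grunwald-Letnikov weights w_j = (-1)^j * binom(q, j), computed by
    the recursion w_0 = 1, w_j = w_{j-1} * (1 - (q + 1) / j)."""
    w = np.ones(N + 1)
    for j in range(1, N + 1):
        w[j] = w[j - 1] * (1.0 - (q + 1.0) / j)
    return w

def gl_derivative(u, q, h):
    """Approximate the order-q (0 < q < 1) Riemann-Liouville derivative of
    the samples u[j] = u(t0 + j*h) on the same grid via
    D^q u(t_n) ~ h^{-q} * sum_{j=0}^{n} w_j * u(t_{n-j})."""
    N = len(u) - 1
    w = gl_weights(q, N)
    d = np.empty(N + 1)
    for n in range(N + 1):
        d[n] = np.dot(w[:n + 1], u[n::-1]) / h**q
    return d

# sanity check: for u(t) = t on [0, 1], D^q t = t^(1-q) / Gamma(2 - q)
h, q = 1e-3, 0.5
t = np.arange(0.0, 1.0 + h, h)
approx = gl_derivative(t, q, h)
exact = t**(1 - q) / gamma(2 - q)
print(np.max(np.abs(approx - exact)))  # small error; the scheme is first order
```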

In this paper, we investigate the Riemann–Liouville fractional neural networks with multiple time-varying delays and distributed delays described by

$$\begin{aligned} ^{R}_{t_{0}}D^{\alpha }_{t}u_i(t)= & {} -a_iu_i(t)+\sum _{j=1}^{n}b_{ij}f_j(u_j(t)) +\sum _{k=1}^{K}\sum _{j=1}^{n}c_{ij}^k g_j(u_j(t-\varrho _k(t)))\nonumber \\{} & {} +\sum _{j=1}^{n}h_{ij}\int ^{t}_{-\infty }\psi _j(t-s)r_j(u_j(s))\textrm{d}s+I_i, \quad t>0, \end{aligned}$$
(1)

where \(0<\alpha <1, i=1,2,\cdots ,n\); \(u_i(t)\) denotes the state of the ith neuron; \(b_{ij}, c^k_{ij}, h_{ij}\) represent the neuron interconnection parameters; \(a_{i}>0\) is a constant; \(f_j, g_j, r_j\) are neuron activation functions with \(f_j(0)=g_j(0)=r_j(0)=0\); \(\varrho _k(t)\) is the time-varying delay satisfying \(\varrho _k(t)\le \varrho _k\) and \(\dot{\varrho }_{k}(t)\le \gamma _k<1\); and \(I_i\) is the external input.

The initial conditions of system (1) are given by

$$\begin{aligned} _{0}D^{-(1-\alpha )}_{t}u_i(t)=\phi _i(t),~~t \le 0. \end{aligned}$$

Assumption 1

(A1) The delay kernel \(\psi _j \in C([0,+\infty ),\textrm{R})\) is a nonnegative function satisfying the following conditions:

$$\begin{aligned}{} & {} (a)~\int ^{+\infty }_{0}\psi _j(s)\textrm{d}s=1;\\{} & {} (b)~\int ^{+\infty }_{0}s\psi _j(s)\textrm{d}s<\infty . \end{aligned}$$

In this paper, the delay kernel \(\psi _j(t)\) is given by \(\psi _j(t)=e^{-t}.\)
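For this choice, conditions (a) and (b) of (A1) can be verified directly:

$$\begin{aligned} \int ^{+\infty }_{0}e^{-s}\textrm{d}s=1, \qquad \int ^{+\infty }_{0}se^{-s}\textrm{d}s=\Gamma (2)=1<\infty . \end{aligned}$$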

Assumption 2

(A2) The neuron activation functions \(f_{j}(\cdot ), g_{j}(\cdot ), r_{j}(\cdot )\) are continuous and satisfy the Lipschitz condition, which means that the following inequalities hold

$$\begin{aligned}{} & {} (a)|f_{j}(u_1)-f_{j}(u_2)|\le l^1_{j}|u_1-u_2 |;\\{} & {} (b)~|g_{j}(u_1)-g_{j}(u_2)|\le l^2_{j}|u_1-u_2|;\\{} & {} (c)~|r_{j}(u_1)-r_{j}(u_2)|\le l^3_{j}|u_1-u_2|, \end{aligned}$$

where \(u_1, u_2 \in \textrm{R}\) and \(l^1_{j}, l^2_{j}, l^3_{j}\) are Lipschitz constants.

Assumption 2* (A2*) The neuron activation functions \(g_{j}(\cdot ), r_{j}(\cdot )\) are continuous and satisfy the Lipschitz condition, which means that the following inequalities hold

$$\begin{aligned}{} & {} (a)|g_{j}(u_1)-g_{j}(u_2)|\le l^2_{j}|u_1-u_2 |;\\{} & {} (b)|r_{j}(u_1)-r_{j}(u_2)|\le l^3_{j}|u_1-u_2|, \end{aligned}$$

where \(u_1, u_2 \in \textrm{R}\) and \(l^2_{j}, l^3_{j}\) are Lipschitz constants. In particular, the neuron activation function \(f_{j}(\cdot )\) is monotonically increasing, bounded, and Lipschitz continuous, that is,

$$\begin{aligned}{} & {} (c)~0\le \frac{f_{j}(u_1)-f_{j}(u_2)}{u_1-u_2}\le l^1_{j}, \end{aligned}$$

where \(l^1_{j}\) is a positive constant.

Lemma 1

If \(\alpha> \beta >0\), then the following property holds for an integrable function u(t):

$$\begin{aligned} ^{R}_{t_{0}}D^{\alpha }_{t}(^{R}_{t_{0}}D^{-\beta }_{t}u(t))= ^{R}_{t_{0}}D^{\alpha -\beta }_{t}u(t). \end{aligned}$$

Lemma 2

Let \(u(t)\in \textrm{R}^n\) be a vector of differentiable functions. Then the following inequality holds:

$$\begin{aligned} \frac{1}{2}\;^{R}_{t_{0}}D^{\alpha }_{t}(u^{\textrm{T}}(t)Pu(t)) \le u^{\textrm{T}}(t)P\;^{R}_{t_{0}}D^{\alpha }_{t}u(t),~~0<\alpha <1, \end{aligned}$$

where \(P\in \textrm{R}^{n \times n}\) is a constant, square, symmetric, and positive definite matrix.

Lemma 3

For any \(x, y \in \textrm{R}^{n}\) and \(\epsilon >0\), the following inequality holds:

$$\begin{aligned} 2x^{\textrm{T}}y \le \epsilon x^{\textrm{T}}x+\frac{1}{\epsilon }y^{\textrm{T}}y. \end{aligned}$$
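Lemma 3 is the matrix form of Young's inequality; it follows by expanding a perfect square:

$$\begin{aligned} 0\le \Big (\sqrt{\epsilon }\,x-\frac{1}{\sqrt{\epsilon }}\,y\Big )^{\textrm{T}}\Big (\sqrt{\epsilon }\,x-\frac{1}{\sqrt{\epsilon }}\,y\Big )=\epsilon x^{\textrm{T}}x-2x^{\textrm{T}}y+\frac{1}{\epsilon }y^{\textrm{T}}y. \end{aligned}$$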

Lemma 4

For constant matrices \(\Omega _{11}, \Omega _{12}, \Omega _{22}\), where \(\Omega _{11}=\Omega ^{\textrm{T}}_{11}\) and \(\Omega _{22}=\Omega ^{\textrm{T}}_{22}\), the following two conditions are equivalent:

$$\begin{aligned}{} & {} (a)~ \Omega =\left( \begin{array}{cc} \Omega _{11} &{} \Omega _{12} \\ \Omega ^\textrm{T}_{12} &{} \Omega _{22} \\ \end{array} \right)>0;\\{} & {} (b)~ \Omega _{22}>0,~\Omega _{11}-\Omega _{12}\Omega _{22}^{-1}\Omega ^\textrm{T}_{12}>0. \end{aligned}$$
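Lemma 4 is the standard Schur complement lemma. As a quick numerical illustration (a sketch only; the block size and the random construction are our own choices), the equivalence can be spot-checked with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# build a random symmetric positive definite test matrix and split into blocks
X = rng.standard_normal((2 * n, 2 * n))
Omega = X @ X.T + 2 * n * np.eye(2 * n)
O11, O12, O22 = Omega[:n, :n], Omega[:n, n:], Omega[n:, n:]

cond_a = bool(np.all(np.linalg.eigvalsh(Omega) > 0))        # condition (a)
schur = O11 - O12 @ np.linalg.inv(O22) @ O12.T              # Schur complement
cond_b = bool(np.all(np.linalg.eigvalsh(O22) > 0)
              and np.all(np.linalg.eigvalsh(schur) > 0))    # condition (b)
print(cond_a, cond_b)  # the two conditions always agree
```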

Definition 3

A constant vector \(u^*=(u_1^*, u_2^*, \cdots , u_n^*)^\textrm{T}\in \textrm{R}^n\) is an equilibrium point of system (1) if and only if \(u^*\) satisfies the following equations:

$$\begin{aligned} 0= & {} -a_iu_i^*+\sum _{j=1}^{n}b_{ij}f_j(u_j^*)+\sum _{k=1}^{K}\sum _{j=1}^{n}c^k_{ij}g_j(u_j^*)\\{} & {} +\sum _{j=1}^{n}h_{ij}\int ^{t}_{-\infty }\psi _j(t-s)r_j(u_j^*)\textrm{d}s+I_i,~~ i=1, 2, \cdots , n. \end{aligned}$$

3 Main results

In this section, several sufficient conditions for the asymptotic stability and synchronization of Riemann–Liouville fractional delayed neural networks are derived.

3.1 Asymptotic stability criteria

Theorem 1

Suppose that (A1) and (A2) hold, and let \(u=(\tilde{u}_1, \tilde{u}_2, \cdots , \tilde{u}_n)^\textrm{T} \in {\mathbb {B}}\), where \({\mathbb {B}}\) is a Banach space endowed with the norm \(\Vert u\Vert _1=\sum _{i=1}^{n}|\tilde{u}_i|\). Then there exists a unique equilibrium point \(u^*\) of system (1) if the following inequality holds:

$$\begin{aligned} \rho =\sum _{i=1}^{n}\bigg (\max _{1 \le j \le n}\frac{|b_{ij}|l^1_{j}+|h_{ij}|l^3_{j}}{a_j}+\sum _{k=1}^{K}\max _{1 \le j \le n}\frac{|c^k_{ij}|l^2_{j}}{a_j}\bigg )<1. \end{aligned}$$
(2)

Proof

Let \(u=(\tilde{u}_1, \tilde{u}_2, \cdots , \tilde{u}_n)^\textrm{T}=(a_1u_1, a_2u_2, \cdots , a_nu_n)^\textrm{T}\in \textrm{R}^n\). Construct a mapping \(\varphi : {\mathbb {B}} \rightarrow {\mathbb {B}}, \varphi (u)=(\varphi _1(u), \varphi _2(u), \cdots , \varphi _n(u))^\textrm{T}\) with

$$\begin{aligned} \varphi _i(u) =\sum _{j=1}^{n}b_{ij}f_j\left( \frac{\tilde{u}_j}{a_j}\right) +\sum _{k=1}^{K} \sum _{j=1}^{n}c^k_{ij} g_j\left( \frac{\tilde{u}_j}{a_j}\right) +\sum _{j=1}^{n}h_{ij}\int ^{t} _{-\infty }\psi _j(t-s)r_j\left( \frac{\tilde{u}_j}{a_j}\right) \textrm{d}s+I_i. \end{aligned}$$

For any two different points \(\ell =(\ell _1, \ell _2, \cdots , \ell _n)^\textrm{T}, \jmath =(\jmath _1, \jmath _2, \cdots , \jmath _n)^\textrm{T}\), we have

$$\begin{aligned} |\varphi _i(\ell )-\varphi _i(\jmath )|\le & {} \Bigg |\sum _{j=1}^{n}b_{ij} f_j\left( \frac{\ell _j}{a_j}\right) -\sum _{j=1}^{n}b_{ij} f_j\left( \frac{\jmath _j}{a_j}\right) \Bigg |\\{} & {} +\Bigg |\sum _{k=1}^{K}\sum _{j=1}^{n}c^k_{ij} g_j\left( \frac{\ell _j}{a_j}\right) -\sum _{k=1}^{K}\sum _{j=1}^{n}c^k_{ij}g_j\left( \frac{\jmath _j}{a_j}\right) \Bigg |\\{} & {} +\Bigg |\sum _{j=1}^{n}h_{ij}\int ^{t}_{-\infty }\psi _j(t-s)r_j\left( \frac{\ell _j}{a_j}\right) \textrm{d}s -\sum _{j=1}^{n}h_{ij}\int ^{t}_{-\infty }\psi _j(t-s)r_j\left( \frac{\jmath _j}{a_j}\right) \textrm{d}s\Bigg |\\\le & {} \sum _{j=1}^{n}\frac{|b_{ij}|l^1_{j}+|h_{ij}|l^3_{j}}{a_j}|\ell _j-\jmath _j|+\sum _{k=1}^{K}\sum _{j=1}^{n}\frac{|c^k_{ij}|l^2_{j}}{a_j}|\ell _j-\jmath _j|. \end{aligned}$$

Then, we can get

$$\begin{aligned} \Vert \varphi (\ell )-\varphi (\jmath ) \Vert _1= & {} \sum _{i=1}^{n}|\varphi _i(\ell )-\varphi _i(\jmath )|\\\le & {} \sum _{i=1}^{n}\sum _{j=1}^{n}\frac{|b_{ij}|l^1_{j}+|h_{ij}|l^3_{j}}{a_j}|\ell _j-\jmath _j |+\sum _{i=1}^{n}\sum _{k=1}^{K}\sum _{j=1}^{n}\frac{|c^k_{ij}|l^2_{j}}{a_j} |\ell _j-\jmath _j|\\\le & {} \sum _{i=1}^{n}\left( \max _{1 \le j \le n}\frac{|b_{ij}|l^1_{j}+|h_{ij} |l^3_{j}}{a_j}\right) \sum _{j=1}^{n}|\ell _j-\jmath _j|\\{} & {} +\sum _{i=1}^{n}\sum _{k=1}^{K}\left( \max _{1 \le j \le n}\frac{|c^k_{ij}|l^2_{j}}{a_j}\right) \sum _{j=1}^{n}|\ell _j-\jmath _j|\\= & {} \sum _{i=1}^{n}\left( \max _{1 \le j \le n}\frac{|b_{ij}|l^1_{j}+|h_{ij}|l^3_{j}}{a_j}+\sum _{k=1}^{K}\max _{1 \le j \le n}\frac{|c^k_{ij}|l^2_{j}}{a_j}\right) \Vert \ell -\jmath \Vert _1. \end{aligned}$$

So \(\Vert \varphi (\ell )-\varphi (\jmath )\Vert _1 \le \rho \Vert \ell -\jmath \Vert _1\). Based on (2), we find that the mapping \(\varphi \) is a contraction mapping on \({\mathbb {B}}\). Hence, there exists a unique fixed point \(\tilde{u}^*\in {\mathbb {B}}\) such that \(\varphi (\tilde{u}^*)=\tilde{u}^*\). Namely,

$$\begin{aligned} \tilde{u}_i^*=\sum _{j=1}^{n}b_{ij}f_j\left( \frac{\tilde{u}_j^*}{a_j}\right) +\sum _{k=1}^{K} \sum _{j=1}^{n}c^k_{ij} g_j\left( \frac{\tilde{u}_j^*}{a_j}\right) +\sum _{j=1}^{n}h_{ij}\int ^{t}_ {-\infty }\psi _j(t-s)r_j\left( \frac{\tilde{u}_j^*}{a_j}\right) \textrm{d}s+I_i. \end{aligned}$$

Letting \(u_i^*=\frac{\tilde{u}_i^*}{a_i}\), we have

$$\begin{aligned} 0=-a_iu_i^* +\sum _{j=1}^{n}b_{ij}f_j(u_j^*)+\sum _{k=1}^{K}\sum _{j=1}^{n}c^k_{ij} g_j(u_j^*)+\sum _{j=1}^{n}h_{ij}\int ^{t}_{-\infty }\psi _j(t-s)r_j(u_j^*)\textrm{d}s+I_i. \end{aligned}$$

According to Definition 3, the theorem is proved. \(\square \)
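As a practical note, condition (2) is straightforward to evaluate numerically. A minimal Python sketch (the function name and array conventions are our own) is:

```python
import numpy as np

def contraction_rho(a, B, Cs, H, l1, l2, l3):
    """Evaluate the contraction constant rho in (2).
    a: self-feedback gains a_i, shape (n,); B, H: (n, n) matrices;
    Cs: list of K delayed-connection matrices C_k, each (n, n);
    l1, l2, l3: Lipschitz constants, each shape (n,)."""
    a, l1, l2, l3 = map(np.asarray, (a, l1, l2, l3))
    # row-wise maxima over j of (|b_ij| l1_j + |h_ij| l3_j) / a_j
    term_bh = np.max((np.abs(B) * l1 + np.abs(H) * l3) / a, axis=1)
    # sum over k of row-wise maxima of |c^k_ij| l2_j / a_j
    term_c = sum(np.max(np.abs(C) * l2 / a, axis=1) for C in Cs)
    return float(np.sum(term_bh + term_c))  # rho < 1 gives uniqueness
```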

By the transformation \(\mu _i(t)=u_i(t)-u_i^{*}\), we can rewrite system (1) into the following vector form

$$\begin{aligned} ^{R}_{t_{0}}D^{\alpha }_{t}\mu (t)= & {} -A\mu (t)+Bf(\mu (t))+\sum _{k=1}^{K}C_kg(\mu (t-\varrho _k(t)))\nonumber \\{} & {} +H\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s, \end{aligned}$$
(3)

where \(\mu (t)=[\mu _1(t),\mu _2(t),\cdots ,\mu _n(t)]^\textrm{T};\) \(A=\textrm{diag}[a_1,a_2,\cdots ,a_n],\) \(B=[b_{ij}], C_k=[c^k_{ij}],\) \(H=[h_{ij}],\) \(\psi (t-s)=\textrm{diag}[\psi _1(t-s), \psi _2(t-s), \cdots , \psi _n(t-s)]\) and \(f(\mu (t))=\big [f_1(\mu _1(t)), f_2(\mu _2(t)), \cdots , f_n(\mu _n(t))\big ]^\textrm{T},\) \(g(\mu (t-\varrho _k(t)))=\big [g_1(\mu _1(t-\varrho _k(t))), g_2(\mu _2(t-\varrho _k(t))), \cdots , g_n(\mu _n(t-\varrho _k(t)))\big ]^\textrm{T},\) \(r(\mu (t))=\big [r_1(\mu _1(t)), r_2(\mu _2(t)), \cdots , r_n(\mu _n(t))\big ]^\textrm{T}.\)

Theorem 2

Suppose that (A1), (A2), and the condition of Theorem 1 hold. Then system (3) is asymptotically stable if there exist a positive definite matrix P, positive definite diagonal matrices \(M, G_k\), and positive scalars \(\delta _k, q_j\) such that the following inequality holds:

$$\begin{aligned} \Xi _1=\left( \begin{array}{cccccc} S &{} PB &{} \sqrt{\frac{1}{\delta _1(1-\gamma _1)}}PC_1 &{} \cdots &{} \sqrt{\frac{1}{\delta _K(1-\gamma _K)}}PC_K &{} PH \\ * &{} M &{} 0 &{} \cdots &{} 0 &{} 0 \\ * &{} * &{} G_1 &{} \cdots &{} 0 &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ * &{} * &{} * &{} \cdots &{} G_K &{} 0 \\ * &{} * &{} * &{} \cdots &{} * &{} Q\\ \end{array} \right) >0, \end{aligned}$$
(4)

where \(S=2PA-L_1ML_1-L_3QL_3-\sum _{k=1}^{K}\delta _k L_2G_kL_2, L_1=\textrm{diag}[l^1_1, l^1_2, \cdots , l^1_n], L_2=\textrm{diag}[l^2_1, l^2_2, \cdots , l^2_n], L_3=\textrm{diag}[l^3_1, l^3_2, \cdots , l^3_n], Q=\textrm{diag}[q_1, q_2, \cdots , q_n].\)

Proof

Consider the following Lyapunov–Krasovskii function

$$\begin{aligned} V(t)=V_1(t)+V_2(t)+V_3(t), \end{aligned}$$

where

$$\begin{aligned}{} & {} V_1(t)=^{R}_{t_{0}}D^{\alpha -1}_{t}\mu ^{\textrm{T}}(t)P\mu (t),\\{} & {} V_2(t)=\sum _{k=1}^{K}\delta _k\int _{-\varrho _k(t)}^{0}g^{\textrm{T}}(\mu (t+s))G_kg(\mu (t+s))\textrm{d}s,\\{} & {} V_3(t)=\sum _{j=1}^{n}q_j\int _{0}^{\infty }\psi _j(\eta )\int _{t-\eta }^{t}(r_j(\mu _j(\xi )))^2\textrm{d} \xi \textrm{d} \eta . \end{aligned}$$

Next, computing the derivative of V(t) with the help of Lemmas 1 and 2 yields

$$\begin{aligned} \dot{V}_1(t)= & {} ^{R}_{t_{0}}D^{\alpha }_{t}\mu ^{\textrm{T}}(t)P\mu (t)\le 2\mu ^{\textrm{T}}(t)P\ ^{R}_{t_{0}}D^{\alpha }_{t}\mu (t)\nonumber \\= & {} 2\mu ^{\textrm{T}}(t)P\bigg [-A\mu (t)+Bf(\mu (t)) +\sum _{k=1}^{K}C_kg(\mu (t-\varrho _k(t)))\nonumber \\{} & {} +H\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\bigg ]\nonumber \\= & {} 2\mu ^{\textrm{T}}(t)(-PA)\mu (t)+2\mu ^{\textrm{T}}(t)PBf(\mu (t))+2\sum _ {k=1}^{K}\mu ^{\textrm{T}}(t)PC_kg(\mu (t-\varrho _k(t)))\nonumber \\{} & {} +2\mu ^{\textrm{T}}(t)PH\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s, \end{aligned}$$
(5)
$$\begin{aligned} \dot{V}_2(t)= & {} \sum _{k=1}^{K}\delta _k g^{\textrm{T}}(\mu (t))G_kg(\mu (t))\nonumber \\{} & {} -\sum _{k=1}^{K}\delta _k (1-\dot{\varrho }_k(t))g^{\textrm{T}}(\mu (t-\varrho _k(t)))G_kg(\mu (t-\varrho _k(t)))\nonumber \\\le & {} \sum _{k=1}^{K}\delta _k g^{\textrm{T}}(\mu (t))G_kg(\mu (t))\nonumber \\{} & {} -\sum _{k=1}^{K}\delta _k (1-\gamma _k)g^{\textrm{T}}(\mu (t-\varrho _k(t)))G_kg(\mu (t-\varrho _k(t))). \end{aligned}$$
(6)

Based on the integral form of the Cauchy–Schwarz inequality

$$\begin{aligned} \bigg (\int u(\tau )v(\tau )\textrm{d}\tau \bigg )^2\le \bigg (\int u^2(\tau )\textrm{d}\tau \bigg )\bigg (\int v^2(\tau )\textrm{d}\tau \bigg ), \end{aligned}$$

we have

$$\begin{aligned} \dot{V}_3(t)= & {} \sum _{j=1}^{n}q_j\int _{0}^{\infty }\psi _j(\eta )(r_j(\mu _j(t)))^2\textrm{d} \eta -\sum _{j=1}^{n}q_j\int _{0}^{\infty }\psi _j(\eta )(r_j(\mu _j(t-\eta )))^2\textrm{d} \eta \nonumber \\\le & {} r^\textrm{T}(\mu (t))Qr(\mu (t)) -\sum _{j=1}^{n}q_j\int _{0}^{\infty }\psi _j(\eta )\textrm{d}\eta \int _{0}^{\infty }\psi _j(\eta )(r_j (\mu _j(t-\eta )))^2\textrm{d}\eta \nonumber \\\le & {} \mu ^\textrm{T}(t)L_3QL_3\mu (t)\nonumber \\{} & {} -\bigg (\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\bigg )^\textrm{T}Q\bigg (\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\bigg ). \end{aligned}$$
(7)

According to Lemma 3, for a positive definite diagonal matrix M, we have

$$\begin{aligned}{} & {} 2\mu ^{\textrm{T}}(t)PBf(\mu (t))\nonumber \\{} & {} \quad =2\mu ^{\textrm{T}}(t)PBM^{-\frac{1}{2}}M^{\frac{1}{2}}f(\mu (t))\nonumber \\{} & {} \quad \le \mu ^{\textrm{T}}(t)PBM^{-1}B^{\textrm{T}}P\mu (t)+f^{\textrm{T}}(\mu (t))Mf(\mu (t)), \end{aligned}$$
(8)
$$\begin{aligned}{} & {} 2\sum _{k=1}^{K}\mu ^{\textrm{T}}(t)PC_kg(\mu (t-\varrho _k(t)))\nonumber \\{} & {} \quad =\sum _{k=1}^{K}2\mu ^{\textrm{T}}(t)PC_kG_k^{-\frac{1}{2}}G_k^{\frac{1}{2}}g(\mu (t-\varrho _k(t)))\nonumber \\{} & {} \quad \le \sum _{k=1}^{K}\frac{1}{\delta _k(1-\gamma _k)}\mu ^{\textrm{T}}(t)PC_kG_k^{-1}C_k^{\textrm{T}}P\mu (t)\nonumber \\{} & {} \qquad +\sum _{k=1}^{K}\delta _k(1-\gamma _k)g^{\textrm{T}}(\mu (t-\varrho _k(t))) G_kg(\mu (t-\varrho _k(t))), \end{aligned}$$
(9)
$$\begin{aligned}{} & {} 2\mu ^{\textrm{T}}(t)PH\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\nonumber \\{} & {} \quad =2\mu ^{\textrm{T}}(t)PHQ^{-\frac{1}{2}}Q^{\frac{1}{2}}\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\nonumber \\{} & {} \quad \le \mu ^{\textrm{T}}(t)PHQ^{-1}H^{\textrm{T}}P\mu (t)\nonumber \\{} & {} \qquad +\bigg (\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\bigg )^{\textrm{T}} Q\big (\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\big ). \end{aligned}$$
(10)

Hence, from (5)–(10), we can get

$$\begin{aligned} \dot{V}(t)= & {} \dot{V}_1(t)+\dot{V}_2(t)+\dot{V}_3(t)\\\le & {} 2\mu ^{\textrm{T}}(t)(-PA)\mu (t)+ \mu ^{\textrm{T}}(t)PBM^{-1}B^{\textrm{T}}P\mu (t)+f^{\textrm{T}}(\mu (t))Mf(\mu (t))\\{} & {} +\sum _{k=1}^{K}\frac{1}{\delta _k(1-\gamma _k)}\mu ^{\textrm{T}}(t)PC_kG_k^{-1}C_k^{\textrm{T}}P\mu (t)+\mu ^{\textrm{T}} (t)PHQ^{-1}H^{\textrm{T}}P\mu (t)\\{} & {} +\sum _{k=1}^{K} \delta _k g^{\textrm{T}}(\mu (t))G_kg(\mu (t))+ \mu ^\textrm{T}(t)L_3QL_3\mu (t)\\\le & {} \mu ^{\textrm{T}}(t)\bigg [-2PA+ L_3QL_3+L_1ML_1+\sum _{k=1}^{K}\delta _k L_2G_kL_2+PBM^{-1}B^{\textrm{T}}P \\{} & {} +PHQ^{-1}H^{\textrm{T}}P+\sum _{k=1}^{K}\frac{1}{\delta _k(1-\gamma _k)}PC_kG_k^{-1}C_k^{\textrm{T}}P\bigg ]\mu (t) \\= & {} -\mu ^{\textrm{T}}(t)\Lambda \mu (t), \end{aligned}$$

where \(\Lambda =2PA-L_1ML_1- L_3QL_3-\sum _{k=1}^{K} \delta _k L_2G_kL_2-PBM^{-1}B^{\textrm{T}}P-PHQ^{-1}H^{\textrm{T}}P- \sum _{k=1}^{K}\frac{1}{\delta _k(1-\gamma _k)}PC_kG_k^{-1}C_k^{\textrm{T}}P\). So \(\dot{V}(t)\) is negative definite if \(\Lambda >0\). By Lemma 4, \(\Lambda >0\) is equivalent to \(\Xi _1>0\), and the theorem is proved. \(\square \)
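In practice, once the scalars \(\delta _k\) are fixed, (4) becomes a linear matrix inequality in \(P, M, G_k, Q\) and can be checked by any SDP solver. The following Python/CVXPY sketch for \(K=1\) (the data are those of Example 1 below; fixing \(\delta _1=1\), \(\gamma _1=0\), and the strict-feasibility margin eps are our own illustrative choices, and \(2PA\) is read in its symmetric form \(PA+AP\), valid here since \(A\) is diagonal) illustrates the setup:

```python
import numpy as np
import cvxpy as cp

A = np.diag([2.1, 2.1])
B = np.array([[0.1, 0.17], [0.3, 0.2]])
C = np.array([[0.3, -0.1], [-0.2, 0.1]])
H = np.array([[0.2, -0.1], [0.0, -0.2]])
n, delta, gamma1, eps = 2, 1.0, 0.0, 1e-6
L1 = L2 = L3 = np.eye(n)  # unit Lipschitz constants

P = cp.Variable((n, n), symmetric=True)
m, g, q = (cp.Variable(n, nonneg=True) for _ in range(3))
M, G, Q = cp.diag(m), cp.diag(g), cp.diag(q)  # positive definite diagonal

S = P @ A + A @ P - L1 @ M @ L1 - L3 @ Q @ L3 - delta * L2 @ G @ L2
c1 = np.sqrt(1.0 / (delta * (1.0 - gamma1)))
Z = np.zeros((n, n))
blocks = [[S,    P @ B, c1 * (P @ C), P @ H],
          [None, M,     Z,            Z],
          [None, None,  G,            Z],
          [None, None,  None,         Q]]

# assemble Xi_1 as a symmetric variable tied block-by-block to (4)
Xi = cp.Variable((4 * n, 4 * n), symmetric=True)
cons = [Xi >> eps * np.eye(4 * n), P >> eps * np.eye(n)]
for i in range(4):
    for j in range(i, 4):
        cons.append(Xi[i*n:(i+1)*n, j*n:(j+1)*n] == blocks[i][j])

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print(prob.status)  # 'optimal' indicates that (4) is feasible
print(P.value)
```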

Remark 1

It is worth noting that the matrices M and \(G_k\) are required to be positive definite and diagonal in Theorem 2. In the following theorem, this constraint is removed with the help of another Lyapunov–Krasovskii function.

Theorem 3

Suppose that (A1), (A2*), and the condition of Theorem 1 hold. Then system (3) is asymptotically stable if there exist positive definite matrices \(P, G_k\) and positive scalars \(\delta _k, w_j, q_j\) such that the following inequality holds:

$$\begin{aligned} \Xi _2=\left( \begin{array}{cccccc} \Phi _1 &{} \Phi _3 &{} -PC_1 &{} \cdots &{} -PC_K &{} -PH \\ * &{} \Phi _2 &{} -WC_1 &{} \cdots &{} -WC_K &{} -WH \\ * &{} * &{} \delta _1(1-\gamma _1)G_1 &{} \cdots &{} 0 &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ * &{} * &{} * &{} \cdots &{} \delta _K(1-\gamma _K)G_K &{} 0\\ * &{} * &{} * &{} \cdots &{} * &{} Q\\ \end{array} \right) >0, \end{aligned}$$
(11)

where \(\Phi _1=2PA- L_3QL_3-\sum _{k=1}^{K}\delta _k L_2G_kL_2, \Phi _2=2WAL_1^{-1}, \Phi _3=-(L_1^TWB+PB)\) and \(W=\textrm{diag}[w_1, w_2, \cdots , w_n]\).

Proof

Consider the following Lyapunov–Krasovskii function

$$\begin{aligned} V(t)=V_1(t)+V_2(t)+V_3(t)+V_4(t), \end{aligned}$$

where

$$\begin{aligned}{} & {} V_1(t)=^{R}_{t_0}D^{\alpha -1}_{t}\mu ^{\textrm{T}}(t)P\mu (t),\\{} & {} V_2(t)=\sum _{k=1}^{K}\delta _k\int _{-\varrho _k(t)}^{0}g^{\textrm{T}}(\mu (t+s))G_kg(\mu (t+s))\textrm{d}s ,\\{} & {} V_3(t)= \sum _{j=1}^{n}q_j\int _{0}^{\infty }\psi _j(\eta )\int _{t-\eta }^{t}(r_j(\mu _j(\xi )))^2\textrm{d}\xi \textrm{d}\eta ,\\{} & {} V_4(t)=2\sum _{j=1}^{n}w_j\int _{0}^{t}f_j(\mu _j(s))\,^{R}_{t_{0}}D^{\alpha }_{t}\mu _j(s)\textrm{d}s. \end{aligned}$$

Similar to the proof of Theorem 2, computing the derivative of \(V_4(t)\) yields

$$\begin{aligned} \dot{V}_4(t)= & {} 2f^\textrm{T}(\mu (t))W\ _{t_0}^{R}D^{\alpha }_{t}\mu (t)\nonumber \\= & {} 2f^\textrm{T}(\mu (t))W\bigg [-A\mu (t)+Bf(\mu (t))+\sum _{k=1}^{K}C_kg(\mu (t-\varrho _k(t)))\nonumber \\{} & {} +H\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\bigg ]\nonumber \\\le & {} -2f^\textrm{T}(\mu (t))WAL_1^{-1}f(\mu (t))+2f^\textrm{T}(\mu (t))WBf(\mu (t))\nonumber \\{} & {} +2\sum _{k=1}^{K}f^\textrm{T}(\mu (t))WC_kg(\mu (t-\varrho _k(t)))\nonumber \\{} & {} +2f^T(\mu (t))WH\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s. \end{aligned}$$
(12)

Combining (5)–(7) and (12), we have

$$\begin{aligned} \dot{V}(t)= & {} \dot{V_1}(t)+\dot{V_2}(t)+\dot{V_3}(t)+\dot{V_4}(t)\\\le & {} \mu ^{\textrm{T}}(t)(-2PA+ L_3QL_3+\sum _{k=1}^{K}\delta _k L_2G_kL_2)\mu (t)\\{} & {} +2\mu ^{\textrm{T}}(t)(L_1^\textrm{T}WB+PB)f(\mu (t))+2\sum _{k=1}^{K}\mu ^{\textrm{T}}(t)PC_kg(\mu (t-\varrho _k(t))) \\{} & {} +2\mu ^{\textrm{T}}(t)PH\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s-2f^\textrm{T}(\mu (t))WAL_1^{-1}f(\mu (t))\\{} & {} +2\sum _{k=1}^{K}f^\textrm{T}(\mu (t))WC_kg(\mu (t-\varrho _k(t)))\\{} & {} +2f^\textrm{T}(\mu (t))WH\int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\\{} & {} -\sum _{k=1}^{K}\delta _k(1-\gamma _k)g^{\textrm{T}}(\mu (t-\varrho _k(t)))G_kg(\mu (t-\varrho _k(t)))\\{} & {} - \left( \int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\right) ^\textrm{T}Q\left( \int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\right) \\= & {} - \left( \begin{array}{cccc} \mu (t) \\ f(\mu (t)) \\ g(\mu (t-\varrho _1(t)))\\ \vdots \\ g(\mu (t-\varrho _K(t)))\\ \int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\\ \end{array} \right) ^\textrm{T}\Xi _2 \left( \begin{array}{cccccc} \mu (t) \\ f(\mu (t)) \\ g(\mu (t-\varrho _1(t)))\\ \vdots \\ g(\mu (t-\varrho _K(t)))\\ \int ^{t}_{-\infty }\psi (t-s)r(\mu (s))\textrm{d}s\\ \end{array} \right) .\\ \end{aligned}$$

Therefore, system (3) is asymptotically stable under condition (11). This completes the proof. \(\square \)

Remark 2

Obviously, the constraint that the matrices \(G_k\) be positive definite and diagonal has been relaxed to positive definiteness in Theorem 3. However, the restriction (A2*) is stricter than (A2). Therefore, the two sufficient conditions in Theorems 2 and 3 can be chosen according to the practical situation. In the following part, we discuss the synchronization problem of system (1) based on the relationship between stability and synchronization.

3.2 Asymptotic synchronization criteria

Taking system (1) as the drive system, the response system can be defined as

$$\begin{aligned} ^{R}_{t_{0}}D^{\alpha }_{t}v_i(t)= & {} -a_iv_i(t)+\sum _{j=1}^{n}b_{ij}f_j(v_j(t))+\sum _{k=1}^{K}\sum _ {j=1}^{n}c^k_{ij}g_j(v_j(t-\varrho _k(t)))\nonumber \\{} & {} +\sum _{j=1}^{n}h_{ij}\int ^{t}_{-\infty }\psi _j(t-s)r_j(v_j(s))\textrm{d}s+I_i+z_i(t), \end{aligned}$$
(13)

where \(z_i(t)\) is a suitable controller. In this subsection, we investigate the synchronization between (1) and (13).

Define the error \(e_i(t)=v_i(t)-u_i(t)\). Then, we can get the following error system between (1) and (13)

$$\begin{aligned} ^{R}_{t_{0}}D^{\alpha }_{t}e_i(t)= & {} -a_ie_i(t)+\sum _{j=1}^{n}b_{ij}\bar{f}_j(e_j(t))+ \sum _{k=1}^{K}\sum _{j=1}^{n}c^k_{ij}\bar{g}_j(e_j(t-\varrho _k(t)))\nonumber \\{} & {} +\sum _{j=1}^{n}h_{ij}\int ^{t}_{-\infty }\psi _j(t-s)\bar{r}_j(e_j(s))\textrm{d}s+z_i(t), \end{aligned}$$
(14)

where \(\bar{f}_j(e_j(t))=f_j(v_j(t))-f_j(u_j(t)), \bar{g}_j(e_j(t-\varrho _k(t)))=g_j(v_j(t-\varrho _k(t)))-g_j(u_j(t-\varrho _k(t))), \bar{r}_j(e_j(t))=r_j(v_j(t))-r_j(u_j(t))\). The control law \(z_i(t)\) is adopted as \(z_i(t)=-\sigma _{i}e_i(t)\) with \(\sigma _{i}\in \textrm{R}\). For convenience, transforming the above system into vector form yields

$$\begin{aligned} ^{R}_{t_{0}}D^{\alpha }_{t}e(t)= & {} -(A+\bar{\sigma })e(t)+B\bar{f}(e(t))+\sum _{k=1}^ {K}C_k\bar{g}(e(t-\varrho _k(t)))\nonumber \\{} & {} +H\int ^{t}_{-\infty }\psi (t-s)\bar{r}(e(s))\textrm{d}s, \end{aligned}$$
(15)

where \(\bar{\sigma }=\textrm{diag}[\sigma _{1}, \sigma _{2}, \cdots , \sigma _{n}]\). It is easy to find that the synchronization between systems (1) and (13) is equivalent to the asymptotic stability of system (15) (Hu et al. 2018).

Theorem 4

Suppose that (A1), (A2), and the condition of Theorem 1 hold. Then systems (1) and (13) achieve synchronization if the following inequality holds:

$$\begin{aligned} \Xi _3=\left( \begin{array}{cccccc} \bar{S} &{} PB &{} \sqrt{\frac{1}{\delta _1(1-\gamma _1)}}PC_1 &{} \cdots &{} \sqrt{\frac{1}{\delta _K(1-\gamma _K)}}PC_K &{} PH \\ * &{} M &{} 0 &{} \cdots &{} 0 &{} 0 \\ * &{} * &{} G_1 &{} \cdots &{} 0 &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ * &{} * &{} * &{} \cdots &{} G_K &{} 0 \\ * &{} * &{} * &{} \cdots &{} * &{} Q\\ \end{array} \right) >0, \end{aligned}$$
(16)

where \(\bar{S}=2P(A+\bar{\sigma })-L_1ML_1- L_3QL_3-\sum _{k=1}^{K}\delta _k L_2G_kL_2.\)

Theorem 5

Suppose that (A1), (A2*), and the condition of Theorem 1 hold. Then systems (1) and (13) achieve synchronization if the following inequality holds:

$$\begin{aligned} \Xi _4=\left( \begin{array}{cccccc} \bar{\Phi }_1 &{} \Phi _3 &{} -PC_1 &{} \cdots &{} -PC_K &{} -PH \\ * &{} \bar{\Phi }_2 &{} -WC_1 &{} \cdots &{} -WC_K &{} -WH \\ * &{} * &{} \delta _1(1-\gamma _1)G_1 &{} \cdots &{} 0 &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ * &{} * &{} * &{} \cdots &{} \delta _K(1-\gamma _K)G_K &{} 0\\ * &{} * &{} * &{} \cdots &{} * &{} Q\\ \end{array} \right) >0, \end{aligned}$$
(17)

where \(\bar{\Phi }_1=2P(A+\bar{\sigma })- L_3QL_3-\sum _{k=1}^{K}\delta _k L_2G_kL_2, \bar{\Phi }_2=2W(A+\bar{\sigma })L_1^{-1}\).

Remark 3

The theorems obtained in this paper reveal the relationship between the asymptotic stability and the synchronization of fractional neural networks.

Remark 4

Compared with Hu et al. (2018), the neural networks in this paper are more applicable due to their multiple time-varying delays and distributed delays. Some inequality techniques are used to deal with the time delays.

4 Numerical simulations

In this section, four examples are given to show the correctness of the proposed results using the Matlab LMI toolbox and the predictor–corrector algorithm (Bhalekar and Daftardar-Gejji 2011).
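For reference, the following Python sketch implements the classical Adams–Bashforth–Moulton predictor–corrector scheme for a scalar Caputo-type fractional ODE, the scheme on which the cited delay-equation algorithm builds; the delay terms, the distributed-delay kernel, and the Riemann–Liouville initial conditions of (1) need extra bookkeeping that is omitted here, and all names and the test problem are our own illustrative choices:

```python
import numpy as np
from math import gamma

def fde_pc(f, alpha, u0, T, N):
    """Adams-Bashforth-Moulton predictor-corrector for the scalar Caputo
    fractional ODE D^alpha u(t) = f(t, u(t)), u(0) = u0, 0 < alpha < 1."""
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    u = np.zeros(N + 1)
    fv = np.zeros(N + 1)               # cached values f(t_j, u_j)
    u[0], fv[0] = u0, f(t[0], u0)
    c1 = h**alpha / gamma(alpha + 1)   # predictor constant
    c2 = h**alpha / gamma(alpha + 2)   # corrector constant
    for n in range(N):
        j = np.arange(n + 1)
        # predictor weights b_j = (n + 1 - j)^alpha - (n - j)^alpha
        b = (n + 1 - j)**alpha - (n - j)**alpha
        u_pred = u0 + c1 * np.dot(b, fv[:n + 1])
        # corrector weights a_j for j = 0..n (the weight of the new node is 1)
        a = np.empty(n + 1)
        a[0] = n**(alpha + 1) - (n - alpha) * (n + 1)**alpha
        if n >= 1:
            k = np.arange(1, n + 1)
            a[1:] = ((n - k + 2)**(alpha + 1) + (n - k)**(alpha + 1)
                     - 2 * (n - k + 1)**(alpha + 1))
        u[n + 1] = u0 + c2 * (np.dot(a, fv[:n + 1]) + f(t[n + 1], u_pred))
        fv[n + 1] = f(t[n + 1], u[n + 1])
    return t, u

# test problem: D^0.6 u = -u, u(0) = 1, whose solution is E_0.6(-t^0.6)
t, u = fde_pc(lambda t, u: -u, 0.6, 1.0, 10.0, 1000)
```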

Example 1

Consider the following two-dimensional fractional-order delayed neural networks

$$\begin{aligned} ^{R}_{0}D^{\alpha }_{t}\mu (t)= & {} -A\mu (t)+B\textrm{sin}(\mu (t))+C\textrm{tanh}(\mu (t-0.5))\nonumber \\{} & {} +H\int ^{t}_{-\infty }\psi (t-s)\textrm{cos}(\mu (s))\textrm{d}s. \end{aligned}$$
(18)

The parameters of (18) are set as

$$\begin{aligned} A=\left( \begin{array}{ccc} 2.1 &{} 0 \\ 0 &{} 2.1\\ \end{array} \right) , B=\left( \begin{array}{ccc} 0.1 &{} 0.17\\ 0.3 &{} 0.2\\ \end{array} \right) , C=\left( \begin{array}{ccc} 0.3 &{} -0.1\\ -0.2 &{} 0.1\\ \end{array} \right) , H=\left( \begin{array}{ccc} 0.2 &{} -0.1\\ 0 &{} -0.2\\ \end{array} \right) . \end{aligned}$$

Obviously, the neuron activation functions satisfy the condition A2 and \(l^1_j=l^2_j=l^3_j=1\). It can be verified that

$$\begin{aligned} \rho =\sum _{i=1}^{2}\left( \max _{1 \le j \le 2}\frac{|b_{ij}|l^1_{j}+|h_{ij}|l^3_{j}}{a_j}+\max _{1 \le j \le 2}\frac{|c_{ij} |l^2_{j}}{a_j}\right) =\frac{1.37}{2.1}<1. \end{aligned}$$

So, there exists a unique equilibrium point for system (18).

According to Theorem 2, taking positive scalars \(\delta =q_1=q_2=1\), we can obtain a positive definite matrix P and positive definite diagonal matrices M, G by the Matlab LMI toolbox, which illustrates the asymptotic stability of system (18):

$$\begin{aligned}{} & {} P=\left( \begin{array}{cc} 1.6534 &{} -0.0024 \\ -0.0024 &{} 1.6433 \\ \end{array} \right) , M=\left( \begin{array}{cc} 1.9915 &{} 0 \\ 0 &{} 1.9810 \\ \end{array} \right) , G=\left( \begin{array}{cc} 1.9915 &{} 0 \\ 0 &{} 1.9810 \\ \end{array} \right) . \end{aligned}$$
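These reported values can be checked directly against (4): with \(\delta =q_1=q_2=1\), \(\gamma _1=0\) (the delay is constant), and \(L_1=L_2=L_3=I\), the block matrix \(\Xi _1\) can be assembled and its eigenvalues inspected. A short NumPy sketch (the layout is our own; the entries above are rounded, so the check is indicative):

```python
import numpy as np

A = np.diag([2.1, 2.1])
B = np.array([[0.1, 0.17], [0.3, 0.2]])
C = np.array([[0.3, -0.1], [-0.2, 0.1]])
H = np.array([[0.2, -0.1], [0.0, -0.2]])
P = np.array([[1.6534, -0.0024], [-0.0024, 1.6433]])
M = np.diag([1.9915, 1.9810])
G = np.diag([1.9915, 1.9810])
Q = np.eye(2)                       # q_1 = q_2 = 1

S = 2 * P @ A - M - Q - G           # L_1 = L_2 = L_3 = I, delta = 1
Z = np.zeros((2, 2))
Xi1 = np.block([[S,         P @ B, P @ C, P @ H],
                [(P @ B).T, M,     Z,     Z],
                [(P @ C).T, Z,     G,     Z],
                [(P @ H).T, Z,     Z,     Q]])
print(np.linalg.eigvalsh(Xi1))      # all eigenvalues should be positive
```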

To further verify the correctness of the above analysis, the state trajectories of system (18) under different initial conditions and different fractional-order conditions are simulated by the predictor–corrector algorithm in Figs. 1 and 2. It is easy to find that the trajectories of the states \(u_1(t), u_2(t)\) are asymptotically stable.

Fig. 1

Trajectories of states under different initial conditions. The fractional order is set as \(\alpha =0.6\)

Fig. 2

Trajectories of states under different fractional-order conditions. The initial condition is set as [0.8, 0.7]

Example 2

Consider the following two-dimensional fractional-order delayed neural networks

$$\begin{aligned} ^{R}_{0}D^{\alpha }_{t}\mu (t)= & {} -A\mu (t)+B\textrm{tanh}(\mu (t))+C\textrm{tanh}(\mu (t-0.5))\nonumber \\{} & {} +H\int ^{t}_{-\infty }\psi (t-s)\textrm{cos}(\mu (s))\textrm{d}s. \end{aligned}$$
(19)

The parameters of (19) are set as

$$\begin{aligned} A=\left( \begin{array}{cc} 1.8 &{} 0 \\ 0 &{} 1.8 \\ \end{array} \right) , B=\left( \begin{array}{cc} -0.1 &{} 0 \\ 0 &{} -0.1 \\ \end{array} \right) , C=\left( \begin{array}{cc} 0.3 &{} -0.2 \\ -0.2 &{} 0.3 \\ \end{array} \right) , H=\left( \begin{array}{cc} 0.1 &{} -0.2 \\ 0 &{} -0.1 \\ \end{array} \right) . \end{aligned}$$

Similarly, the condition of Theorem 1 can be verified as follows:

$$\begin{aligned} \rho =\sum _{i=1}^{2}\left( \max _{1 \le j \le 2}\frac{|b_{ij}|l^1_{j}+|h_{ij}|l^3_{j}}{a_j}+\max _{1 \le j \le 2}\frac{|c_{ij}|l^2 _{j}}{a_j}\right) =\frac{1}{1.8}<1. \end{aligned}$$

So, there exists a unique equilibrium point for system (19).

Then, we can find positive scalars \(\delta =q_j=\omega _j=1, j=1,2\), and positive definite matrices P, G that satisfy the inequality (11):

$$\begin{aligned} P=\left( \begin{array}{cc} 2.3456 &{} -0.0017 \\ -0.0017 &{} 2.3473 \\ \end{array} \right) , G=\left( \begin{array}{cc} 4.0327 &{} -0.0027 \\ -0.0027 &{} 4.0354 \\ \end{array} \right) . \end{aligned}$$

Therefore, we can conclude that system (19) is asymptotically stable based on Theorem 3. The trajectories of the states \(u_1(t), u_2(t)\) under different initial conditions and different fractional-order conditions are given in Figs. 3 and 4, which verify the above analysis.

Fig. 3

Trajectories of states under different initial conditions. The fractional order is set as \(\alpha =0.6\)

Fig. 4

Trajectories of states under different fractional-order conditions. The initial condition is set as [0.8, 0.7]

Example 3

Consider the following two-dimensional fractional-order delayed neural networks

$$\begin{aligned} ^{R}_{0}D^{\alpha }_{t}\mu (t)= & {} -A\mu (t)+B\textrm{sin}(\mu (t))+C\textrm{tanh}(\mu (t-0.5))\nonumber \\{} & {} +H\int ^{t}_{-\infty }\psi (t-s)\textrm{cos}(\mu (s))\textrm{d}s. \end{aligned}$$
(20)

The parameters of (20) are set as

$$\begin{aligned} A=\left( \begin{array}{cc} 1.7 &{} 0 \\ 0 &{} 1.7 \\ \end{array} \right) , B=\left( \begin{array}{cc} 0.4 &{} -0.1 \\ 0.3 &{} -0.2 \\ \end{array} \right) , C=\left( \begin{array}{cc} 0.6 &{} 0.1 \\ -0.2 &{} 0.4 \\ \end{array} \right) , H=\left( \begin{array}{cc} 0.3 &{} -0.2 \\ 0 &{} -0.1 \\ \end{array} \right) . \end{aligned}$$

Then, taking (20) as the drive system, the error system between (20) and its response system of the form (13) can be written as

$$\begin{aligned} ^{R}_{t_{0}}D^{\alpha }_{t}e(t)= & {} -(A+\bar{\sigma })e(t)+B\textrm{sin} (e(t))+C\textrm{tanh}(e(t-0.5))\nonumber \\{} & {} +H\int ^{t}_{-\infty }\psi (t-s)\textrm{cos}(e(s))\textrm{d}s+z(t), \end{aligned}$$
(21)

where the control gain matrix is chosen as

$$\begin{aligned} \bar{\sigma }=\left( \begin{array}{cc} -0.2 &{} 0 \\ 0 &{} 0.05 \\ \end{array} \right) . \end{aligned}$$

Based on Theorem 4, the positive definite matrix P and the positive definite diagonal matrices M, G can be calculated by Matlab as follows:

$$\begin{aligned}{} & {} P=\left( \begin{array}{cc} 4.4674 &{} 0.1959 \\ 0.1959 &{} 5.2716\\ \end{array} \right) , M=\left( \begin{array}{cc} 2.3818 &{} 0 \\ 0 &{} 4.9722 \\ \end{array} \right) , G=\left( \begin{array}{cc} 2.8494 &{} 0 \\ 0 &{} 5.4527 \\ \end{array} \right) . \end{aligned}$$

So, the error system (21) is asymptotically stable, which means that the drive system (20) and its response system achieve synchronization. Figure 5 presents the trajectories of \(e_1(t), e_2(t)\) under different fractional-order conditions. We can find that the time responses of the states tend to zero, which illustrates the correctness of the above analysis.

Fig. 5

Trajectories of states under different fractional-order conditions. The parameters are set as \(\alpha =0.6\) and \(\alpha =0.8\) respectively

Example 4

Consider the following two-dimensional fractional-order delayed neural networks

$$\begin{aligned} ^{R}_{0}D^{\alpha }_{t}\mu (t)= & {} -A\mu (t)+B\textrm{tanh}(\mu (t))+C\textrm{tanh}(\mu (t-0.5))\nonumber \\{} & {} +H\int ^{t}_{-\infty }\psi (t-s)\textrm{cos}(\mu (s))\textrm{d}s. \end{aligned}$$
(22)

The parameters of (22) are set as

$$\begin{aligned} A=\left( \begin{array}{cc} 1.5 &{} 0 \\ 0 &{} 1.5 \\ \end{array} \right) , B=\left( \begin{array}{cc} 0.2 &{} -0.1 \\ -0.5 &{} 0.3 \\ \end{array} \right) , C=\left( \begin{array}{cc} -0.1 &{} -0.2 \\ -0.3 &{} 0.17 \\ \end{array} \right) , H=\left( \begin{array}{cc} 0.3 &{} -0.21 \\ 0.1 &{} -0.2 \\ \end{array} \right) . \end{aligned}$$

Similarly, we can get the error system between (22) and its response system, that is,

$$\begin{aligned} ^{R}_{t_{0}}D^{\alpha }_{t}e(t)= & {} -(A+\bar{\sigma })e(t)+B\textrm{tanh}(e(t))+C\textrm{tanh}(e(t-0.5))\nonumber \\{} & {} +H\int ^{t}_{-\infty }\psi (t-s)\textrm{cos}(e(s))\textrm{d}s+z(t), \end{aligned}$$
(23)

where the control gain matrix is chosen as

$$\begin{aligned} \bar{\sigma }=\left( \begin{array}{cc} -0.13 &{} 0 \\ 0 &{} -0.08 \\ \end{array} \right) . \end{aligned}$$

So, there exist positive definite matrices P, G that satisfy the inequality (17):

$$\begin{aligned} P=\left( \begin{array}{cc} 1.9251 &{} -0.0358 \\ -0.0358 &{} 1.8965 \\ \end{array} \right) , G=\left( \begin{array}{cc} 2.1986 &{} 0.0930 \\ 0.0930 &{} 1.8433 \\ \end{array} \right) . \end{aligned}$$

Based on Theorem 5, we can conclude that the error system (23) is asymptotically stable, which means that the drive system (22) and its response system achieve synchronization. The synchronization trajectories of \(e_1(t), e_2(t)\) in Fig. 6 further verify the accuracy of the above analysis.

Fig. 6

Trajectories of states under different fractional-order conditions. The parameters are set as \(\alpha =0.6\) and \(\alpha =0.8\) respectively

5 Conclusion

As we know, various types of time delays are inevitable in the implementation of fractional neural networks. Recently, the dynamical analysis of fractional delayed neural networks has received considerable attention. In view of this, we consider fractional neural networks with both multiple time-varying delays and distributed delays, and then investigate their asymptotic stability and synchronization. First, by Banach's fixed point theorem, the existence and uniqueness of the equilibrium point of the considered system are studied. Then, two sufficient conditions are derived to ensure the asymptotic stability of the addressed model by the integer-order Lyapunov direct method, which avoids calculating the fractional-order derivative of Lyapunov–Krasovskii functions. Furthermore, the synchronization criteria are presented as our main results. Finally, numerical simulations are carried out with the LMI toolbox and the predictor–corrector algorithm to check the effectiveness of the obtained results.

In the future, we plan to study the dynamical behaviors, including stability, synchronization, and bifurcation, of fractional memristive complex-valued neural networks with time delays and impulsive effects.