Introduction

In the past decades, there has been increasing research interest in analyzing the dynamic behaviors of neural networks due to their widespread applications (Guo and Li 2012; Manivannan and Samidurai 2016; Mattia and Sanchez-Vives 2012; Yu et al. 2013, and the references therein). In many applications, complex signals are involved and complex-valued neural networks are preferable. In recent years, the complex-valued neural network has become an emerging field of research from both theoretical and practical points of view. The major advantage of complex-valued neural networks is that they open up new capabilities and higher performance for the designed network. Accordingly, increasing attention has been paid to the dynamical behavior of complex-valued neural networks, which have found applications in different areas such as pattern classification (Nait-Charif 2010), associative memory (Tanaka and Aihara 2009) and optimization (Jiang 2008). These applications require the network to remain stable at its equilibrium point. Therefore, stability is the most important dynamical property of complex-valued neural networks.

In real-life situations, a time delay often occurs owing to the finite switching speed of amplifiers, and it also appears in the electronic implementation of neural networks during signal transmission; such delays may cause dynamical behaviors such as instability, bifurcation and oscillation (Alofi et al. 2015; Cao and Li 2017; Hu et al. 2014; Huang et al. 2017). Thus, many authors have taken constant time delays (Hu and Wang 2012; Subramanian and Muthukumar 2016) and time-varying delays (Chen et al. 2017; Gong et al. 2015) into account in the stability analysis of complex-valued neural networks.

In addition, a time delay in the leakage term of a system, called the leakage delay, is a considerable factor affecting the dynamics for the worse, and it has been incorporated into the stability analysis of neural networks. The aforementioned results (Bao et al. 2016; Chen et al. 2017; Gong et al. 2015; Hu and Wang 2012; Subramanian and Muthukumar 2016) concern the dynamical behavior of complex-valued neural networks with constant or time-varying delays and do not consider the leakage effect. Although the leakage delay has been extensively studied for real-valued neural networks (Lakshmanan et al. 2013; Li and Cao 2016; Sakthivel et al. 2015; Xie et al. 2016), complex-valued neural networks with leakage delay have rarely been considered in the literature (Chen and Song 2013; Chen et al. 2016; Gong et al. 2015). For example, by constructing appropriate Lyapunov–Krasovskii functionals, Gong et al. (2015) and Chen et al. (2016) studied the global \(\mu \)-stability problem for continuous-time and discrete-time complex-valued neural networks, respectively, with leakage time delay and unbounded time-varying delays. By employing a combination of fixed-point theory, a Lyapunov–Krasovskii functional and the free-weighting-matrix method, the existence, uniqueness and global stability of the equilibrium point of complex-valued neural networks with both leakage delay and time delay on time scales were established in Chen and Song (2013).

Furthermore, a neural network model with two successive time-varying delays was introduced in Zhao et al. (2008), where the authors correctly pointed out that, since signal transmissions may traverse several network segments whose transmission conditions differ from one another, successive delays with different properties can arise; it is therefore not rational to lump the two time delays into one. Thus, it is more reasonable to model neural networks with additive time-varying delays. In Shao and Han (2012), the authors discussed stability and stabilization for continuous-time systems with two additive time-varying input delays arising from networked control systems. The problem of stability criteria for neural networks with two additive time-varying delay components was addressed in Tian and Zhong (2012) by using both the reciprocally convex and the convex polyhedron approaches. Rakkiyappan et al. (2015) studied the passivity and passification problem for a class of memristor-based recurrent neural networks with additive time-varying delays. In the previous literature, however, additive time-varying delays appear only in real-valued neural networks.

However, it is worth noting that in those existing results the time-varying delay considered in complex-valued neural networks is usually a single delay. Stability of complex-valued neural networks with additive time-varying delays has not been studied, which motivates our research interest. To the best of the authors' knowledge, the global asymptotic stability analysis of complex-valued neural networks with leakage delays and additive time-varying delays has not been considered in the literature and remains a topic for further investigation.

In this paper, the main contributions are given as follows:

  • For the first time, the global asymptotic stability of complex-valued neural networks with leakage delays and additive time-varying delay components is established.

  • A suitable Lyapunov–Krasovskii functional is constructed using the full information on the additive time-varying delays and the leakage delay.

  • A new type of complex-valued triple integral inequality is introduced to estimate the upper bound of the derivative of the Lyapunov–Krasovskii functional.

  • Based on the model transformation technique, sufficient conditions for the global asymptotic stability of the proposed neural networks are obtained in linear matrix inequality form, which can be checked numerically with the YALMIP toolbox in MATLAB.

  • Finally, three illustrative examples are provided to show the effectiveness of the proposed criteria.

The rest of this paper is organized as follows: In the "Problem formulation and preliminaries" section, the model of the complex-valued neural networks with leakage delay and additive time-varying delays is presented, and some preliminaries are briefly outlined. In the "Main result" section, sufficient conditions are derived to ascertain the global asymptotic stability of the complex-valued neural networks with leakage delay and additive time-varying delays by the Lyapunov–Krasovskii functional method. Three numerical examples are given to show the effectiveness of the obtained conditions in the "Numerical example" section. Finally, conclusions are drawn in the "Conclusion" section.

Notations

The notation used throughout this paper is fairly standard. \(\mathbb {C}^n\) and \(\mathbb {C}^{m \times n}\) denote the set of n-dimensional complex vectors and of \(m \times n\) complex matrices, respectively. The superscripts T and \(*\) denote matrix transposition and complex conjugate transposition, respectively; i denotes the imaginary unit, that is, \(i=\sqrt{-1}\). For any matrix P, \(P>0\) \((P<0)\) means P is a positive definite (negative definite) matrix. For a complex number \(z=x+iy\), the notation \(|z|=\sqrt{x^2+y^2}\) stands for the modulus of z and \(\Vert z\Vert =\sqrt{z^*z}\); \( \text{ diag }\{\cdot \}\) stands for a block-diagonal matrix. If \(A \in \mathbb {C}^{n\times n}\), we denote by \(\Vert A\Vert \) its operator norm, i.e., \(\Vert A\Vert =\sup \{\Vert Ax\Vert :\Vert x\Vert =1\}=\sqrt{\lambda _{\max }(A^*A)}\). The notation \(\star \) always denotes the conjugate transpose of the corresponding block in a Hermitian matrix.
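As a quick sanity check of the operator-norm identity above, the following MATLAB fragment (ours, not part of the paper) verifies \(\Vert A\Vert =\sqrt{\lambda _{\max }(A^*A)}\) on a concrete complex matrix:

```matlab
% Numeric check of ||A|| = sqrt(lambda_max(A^*A)); in MATLAB, A' is
% the conjugate transpose, matching A^* in the text.
A = [1+2i, 3-1i; 0.5i, -2];
opNorm  = norm(A);                        % largest singular value of A
eigForm = sqrt(max(real(eig(A'*A))));     % sqrt of largest eigenvalue of A^*A
assert(abs(opNorm - eigForm) < 1e-12)     % the two expressions agree
```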

Problem formulation and preliminaries

In this paper, we consider a model of complex-valued neural networks with leakage delay and two additive time-varying delay components, which can be described by

$$\begin{aligned} \dot{u}(t)= -Au(t-\sigma )+Bg(u(t))+Cg(u(t-\tau _1(t)-\tau _2(t)))+J, \end{aligned}$$
(1)

where \(u(t) = (u_1(t), u_2(t),\ldots , u_n(t))^T \in \mathbb {C}^n\) is the state vector of the neural network with n neurons at time t; \(A=diag\{a_1,a_2,\ldots ,a_n\} \in \mathbb {R}^{n \times n}\) with \(a_j >0\; (j=1,2,\ldots ,n) \) is the self-feedback connection weight matrix; \(B=(b_{jk})_{n \times n} \in \mathbb {C}^{n \times n}\) and \(C=(c_{jk})_{n \times n} \in \mathbb {C}^{n \times n}\) are the connection weight matrix and the delayed connection weight matrix, respectively; \(g(u(t)) = (g_1(u_1(\cdot)), g_2(u_2(\cdot)),\ldots , g_n(u_n(\cdot)))^T \in \mathbb{C}^n\) is the complex-valued neuron activation function; \(J=(J_1,J_2,\ldots ,J_n)^T\) is the external input vector; \(\sigma \) denotes the leakage delay; \(\tau _1(t)\) and \(\tau _2(t)\) are two time-varying delays satisfying \(0 \le \tau _1(t)\le \tau _1\), \(\dot{\tau }_1(t)\le \mu _1\), \(0 \le \tau _2(t)\le \tau _2\), \(\dot{\tau }_2(t)\le \mu _2\), \(\tau (t)= \tau _1(t)+\tau _2(t)\), \(\tau = \tau _1+\tau _2\), \(\mu =\mu _1+\mu _2\), where \(\mu _1,\mu _2,\mu \) are less than one. The initial condition associated with the complex-valued neural network (1) is given by

$$\begin{aligned} u(s)= \phi (s),\quad s \in [-\rho ,0],\; \text{ where } \;\rho = \max \{\sigma ,\tau \},\, \phi \in C( [-\rho ,0],D). \end{aligned}$$

It means that u(s) is continuous and satisfies (1). Here \(C( [-\rho ,0],D)\) denotes the space of continuous functions mapping \([-\rho ,0]\) into \(D \subseteq \mathbb {C}^n.\)

Assumption 2.1

Let \(g_j(\cdot )\) satisfy the Lipschitz continuity condition in the complex domain; that is, for each \(j=1,2,\ldots ,n\), there exists a positive constant \(\hat{F}_j\) such that, for all \(u_1,u_2 \in \mathbb {C}\),

$$\begin{aligned} |g_j(u_1) -g_j(u_2)| \le \hat{F}_j|u_1-u_2|. \end{aligned}$$
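For instance, the split-type activation \(g_j(u)=\tanh (\mathrm {Re}\,u)+i\tanh (\mathrm {Im}\,u)\) (an illustration of ours, not one of the activations used in the examples below) satisfies Assumption 2.1 with \(\hat{F}_j=1\): writing \(u_k=x_k+iy_k\),

$$\begin{aligned} |g_j(u_1) -g_j(u_2)|^2 = |\tanh x_1-\tanh x_2|^2+|\tanh y_1-\tanh y_2|^2 \le |x_1-x_2|^2+|y_1-y_2|^2=|u_1-u_2|^2, \end{aligned}$$

since \(\tanh \) is 1-Lipschitz on \(\mathbb {R}\).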

Definition 2.2

The vector \(\hat{u} \in \mathbb {C}^n\) is said to be an equilibrium point of the complex-valued neural network (1) if it satisfies the following condition

$$\begin{aligned} -A \hat{u}+(B+C)g(\hat{u}) +J=0. \end{aligned}$$

Theorem 2.3

(Existence of equilibrium point) Under Assumption 2.1, there exists an equilibrium point \(\hat{u} \in \mathbb {C}^n\) of system (1), i.e., a point satisfying

$$\begin{aligned} -A \hat{u}+(B+C)g(\hat{u}) +J=0. \end{aligned}$$

Proof

Since the activation functions of system (1) are bounded, there exist constants \(M_i\) such that

$$\begin{aligned} |g_i(u_i)| \le M_i \text{ for } \text{ any } u_i \in \mathbb {C},\; i=1,2,\ldots ,n. \end{aligned}$$

Let \(M=\left(\sum \limits _{i=1}^{n}M_i^2\right)^{\frac{1}{2}}\). Then \(\Vert g(u)\Vert \le M\) for \(u=(u_1,u_2,\ldots ,u_n) \in \mathbb {C}^n.\) We denote \(\mathcal {A}=\{u \in \mathbb {C}^n:\Vert u\Vert \le \Vert A^{-1}\Vert (\Vert B+C\Vert M+\Vert J\Vert )\}\) and define the map \(H:\mathbb {C}^n \rightarrow \mathbb {C}^n\) by

$$\begin{aligned} H(u)=A^{-1}(Bg(u)+Cg(u)+J). \end{aligned}$$

Since H is a continuous map, using the bound \(\Vert g(u)\Vert \le M\) we obtain

$$\begin{aligned} \Vert H(u)\Vert \le \Vert A^{-1}\Vert (\Vert B+C\Vert M+\Vert J\Vert ). \end{aligned}$$

Therefore, from the definition of \(\mathcal {A}\), H maps \(\mathcal {A}\) into itself. By Brouwer's fixed-point theorem, there exists a fixed point \(\hat{u}\) of H, which satisfies

$$\begin{aligned} A^{-1}(Bg(\hat{u})+C g(\hat{u})+J)= \hat{u}. \end{aligned}$$

Premultiplying both sides by the matrix A and rearranging gives

$$\begin{aligned} -A\hat{u} +Bg(\hat{u})+C g(\hat{u})+J= 0. \end{aligned}$$

That is, by Definition 2.2, \(\hat{u}\) is an equilibrium point of (1). Hence, the proof is completed. \(\square \)
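Brouwer's theorem only guarantees existence. When the map \(H\) is additionally a contraction, i.e., \(\Vert A^{-1}\Vert \,\Vert B+C\Vert \max _j \hat{F}_j<1\), the equilibrium can be computed by Picard iteration. Below is a minimal MATLAB sketch of ours under this stronger assumption, using the network data of Example 4.1 with an illustrative input J of our choosing:

```matlab
% Hypothetical Picard iteration u <- H(u) for the equilibrium of (1).
% Convergence assumes ||A^{-1}||*||B+C||*max_j F_j < 1, which holds here:
% ||A^{-1}|| = 1/9 and ||B+C|| is well below 9 for a 1-Lipschitz activation.
A = [9 0; 0 9];
B = [1-1i, -1-1i; 2-1i, 2-5i];
C = [1-1i, 1-1i; 1+1i, -1-1i];
J = [0.5-0.2i; -0.1+0.3i];                  % illustrative external input (ours)
g = @(u) tanh(real(u)) + 1i*tanh(imag(u));  % bounded, 1-Lipschitz activation
u = zeros(2,1);
for k = 1:100
    u = A \ (B*g(u) + C*g(u) + J);          % u <- H(u) = A^{-1}((B+C)g(u)+J)
end
disp(norm(-A*u + (B+C)*g(u) + J))           % residual of Definition 2.2, ~0
```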

For convenience, we shift the equilibrium point \(\hat{u}\) to the origin by letting \(z(t)=u(t)-\hat{u}.\) Then system (1) can be written as

$$\begin{aligned} \dot{z}(t)= -Az(t-\sigma )+Bf(z(t))+Cf(z(t-\tau _1(t)-\tau _2(t))). \end{aligned}$$
(2)

where \(f(z(t))=g(z(t)+\hat{u})-g(\hat{u}).\) By the model transformation of the leakage term, system (2) has the following equivalent form:

$$\begin{aligned} \frac{d}{dt}\left[ z(t)-A\int \limits _{t-\sigma }^{t}z(s)ds\right] = -Az(t)+Bf(z(t))+Cf(z(t-\tau _1(t)-\tau _2(t))). \end{aligned}$$
(3)
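The equivalence follows from the Leibniz integral rule: differentiating the leakage integral term and substituting (2) gives

$$\begin{aligned} \frac{d}{dt}\left[ z(t)-A\int \limits _{t-\sigma }^{t}z(s)ds\right] = \dot{z}(t)-Az(t)+Az(t-\sigma ) = -Az(t)+Bf(z(t))+Cf(z(t-\tau _1(t)-\tau _2(t))). \end{aligned}$$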

In the following, we introduce a relevant assumption and lemmas to facilitate the presentation of the main results in the ensuing sections.

Assumption 2.4

Let \(f_j(\cdot )\) satisfy the Lipschitz continuity condition in the complex domain; that is, for each \(j=1,2,\ldots ,n\), there exists a positive constant \(F_j\) such that, for all \(z_1,z_2 \in \mathbb {C}\),

$$\begin{aligned} |f_j(z_1) -f_j(z_2)| \le F_j|z_1-z_2|, \end{aligned}$$

where \(F_j\) is called the Lipschitz constant. Moreover, define \(\Gamma =\text{ diag }\{F_1^2,F_2^2,\ldots ,F_n^2\}.\)

Lemma 2.5

(Velmurugan et al. 2015) (Schur complement) The Hermitian matrix

$$\begin{aligned} \Omega = \left( \begin{array}{cc} \Omega _{11} &{} \Omega _{12} \\ \Omega _{12}^* &{} \Omega _{22} \end{array} \right) >0, \end{aligned}$$

where \(\Omega _{11}= \Omega _{11}^*\) and \(\Omega _{22}= \Omega _{22}^*\), is equivalent to either of the following conditions:

$$\begin{aligned} \Omega _{22}>0, \quad \Omega _{11}-\Omega _{12}\Omega _{22}^{-1} \Omega _{12}^*>0, \\ \Omega _{11}>0, \quad \Omega _{22}-\Omega _{12}^*\Omega _{11}^{-1} \Omega _{12}>0. \end{aligned}$$
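The lemma is easy to illustrate numerically. The following MATLAB fragment (ours) checks that positive definiteness of a Hermitian block matrix coincides with positive definiteness of \(\Omega _{22}\) together with its Schur complement:

```matlab
% Numeric illustration of the Schur complement lemma (Hermitian case).
O11 = [4 1i; -1i 3];  O12 = [1 0.5i; -0.5 1];  O22 = [5 2; 2 6];
Omega = [O11, O12; O12', O22];                  % Hermitian block matrix
c1 = all(real(eig(Omega)) > 0);                 % Omega > 0 ?
c2 = all(real(eig(O22)) > 0) && ...
     all(real(eig(O11 - O12*(O22\O12'))) > 0);  % O22 > 0 and Schur complement > 0 ?
disp([c1 c2])                                   % the two flags always agree
```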

Lemma 2.6

For any constant Hermitian matrix \(M\in \mathbb {C}^{n \times n}\) with \(M >0 \) and any vector function \(z(s) : [a,b] \rightarrow \mathbb {C}^{n}\) with scalars \(a < b\), the following inequalities hold:

  (i) \(\left( \int \limits _{a}^{b}z(s)ds\right) ^* M\left( \int \limits _{a}^{b}z(s)ds\right) \le (b-a) \int \limits _{a}^{b}z^*(s) M z(s)ds\)

  (ii) \(\left( \int \limits _{a}^{b}\int \limits _{s}^{b}z(\theta )d\theta ds\right) ^* M\left( \int \limits _{a}^{b}\int \limits _{s}^{b}z(\theta )d\theta ds\right) \le \frac{(b-a)^2}{2} \int \limits _{a}^{b}\int \limits _{s}^{b}z^*(\theta ) M z(\theta )d\theta ds\)

  (iii) \(\left( \int \limits _{a}^{b}\int \limits _{s}^{b}\int \limits _{\theta }^{b}z(\gamma )d\gamma d\theta ds\right) ^* M\left( \int \limits _{a}^{b}\int \limits _{s}^{b}\int \limits _{\theta }^{b}z(\gamma )d\gamma d\theta ds\right) \le \frac{(b-a)^3}{6} \int \limits _{a}^{b}\int \limits _{s}^{b}\int \limits _{\theta }^{b}z^*(\gamma ) M z(\gamma )d\gamma d\theta ds.\)

Proof

The proof of the complex-valued Jensen inequality (i) is given in Chen and Song (2013); it remains to prove (ii) and (iii).

From (i), the following inequality holds:

$$\begin{aligned} \left( \int \limits _{s}^{b}z(\theta )d\theta \right) ^* M\left( \int \limits _{s}^{b}z(\theta )d\theta \right) \le (b-s) \int \limits _{s}^{b}z^*(\theta ) M z(\theta )d\theta . \end{aligned}$$

By the Schur complement lemma (Velmurugan et al. 2015), the above inequality becomes

$$\begin{aligned} \left[ \begin{array}{cc} \int \limits _{s}^{b}z^*(\theta ) M z(\theta )d\theta &{} \int \limits _{s}^{b}z^*(\theta )d\theta \\ \int \limits _{s}^{b}z(\theta )d\theta &{} (b-s)M^{-1} \end{array} \right] \ge 0. \end{aligned}$$
(4)

Integrating (4) from a to b, we have

$$\begin{aligned} \left[ \begin{array}{cc} \int \limits _{a}^{b}\int \limits _{s}^{b}z^*(\theta ) M z(\theta )d\theta ds &{}\left( \int \limits _{a}^{b} \int \limits _{s}^{b}z(\theta )d\theta ds\right) ^* \\ \int \limits _{a}^{b}\int \limits _{s}^{b}z(\theta )d\theta ds &{} \int \limits _{a}^{b}(b-s)M^{-1}ds \end{array} \right] \ge 0,\nonumber \\ \left[ \begin{array}{cc} \int \limits _{a}^{b}\int \limits _{s}^{b}z^*(\theta ) M z(\theta )d\theta ds &{}\left( \int \limits _{a}^{b} \int \limits _{s}^{b}z(\theta )d\theta ds \right) ^* \\ \int \limits _{a}^{b}\int \limits _{s}^{b}z(\theta )d\theta ds &{} \frac{(b-a)^2}{2}M^{-1} \end{array} \right] \ge 0. \end{aligned}$$
(5)

By using the Schur complement lemma, inequality (5) is equivalent to

$$\begin{aligned} \left( \int \limits _{a}^{b}\int \limits _{s}^{b}z(\theta )d\theta ds\right) ^* M\left( \int \limits _{a}^{b}\int \limits _{s}^{b}z(\theta )d\theta ds\right) \le \frac{(b-a)^2}{2} \int \limits _{a}^{b}\int \limits _{s}^{b}z^*(\theta ) M z(\theta )d\theta ds. \end{aligned}$$

This completes the proof of (ii). Inequality (iii) can be derived by applying the same procedure as in the proof of (ii); it is therefore omitted. \(\square \)
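The inequalities can also be spot-checked numerically. The following MATLAB fragment (ours; any smooth test function works) evaluates both sides of (ii) by nested adaptive quadrature:

```matlab
% Numeric spot-check of Lemma 2.6 (ii) for a concrete complex-valued z(.).
a = 0; b = 1;
M = [2, 1-1i; 1+1i, 3];                  % Hermitian positive definite weight
z = @(t) [exp(1i*t); t + 1i*t.^2];       % arbitrary smooth test function

% Left-hand side: v = int_a^b int_s^b z(theta) dtheta ds, then v'*M*v.
inner = @(s) integral(z, s, b, 'ArrayValued', true);
v   = integral(inner, a, b, 'ArrayValued', true);
lhs = real(v' * M * v);

% Right-hand side: (b-a)^2/2 * int_a^b int_s^b z'*M*z dtheta ds.
q   = @(s) integral(@(th) real(z(th)' * M * z(th)), s, b, 'ArrayValued', true);
rhs = (b - a)^2 / 2 * integral(q, a, b, 'ArrayValued', true);

fprintf('lhs = %.6f <= rhs = %.6f\n', lhs, rhs)   % the inequality holds
```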

Main result

In this section, by utilizing a Lyapunov–Krasovskii functional and integral inequalities, we present a delay-dependent stability criterion for the complex-valued neural networks with leakage delays and additive time-varying delays (3) in terms of a linear matrix inequality.

Theorem 3.1

Under Assumption 2.4, the complex-valued neural network (3) is globally asymptotically stable if there exist positive definite Hermitian matrices J, M, N, O, P, Q, R, S, T, U, V, W, X, Y and a positive diagonal matrix G such that the following linear matrix inequality holds:

$$\begin{aligned} \varTheta =\left[ \begin{array}{cccccccccccc} \varTheta _{1,1} &{} 0 &{} 0&{} W &{} X &{} 0 &{} PB &{} PC &{}\varTheta _{1,9} &{} 0 &{} \tau _1 M &{} \tau _2 N\\ \star &{} \varTheta _{2,2} &{} 0 &{} 0&{} 0&{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \star &{} \star &{} \varTheta _{3,3} &{} 0&{} 0&{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \star &{}\star &{} \star &{}\varTheta _{4,4} &{} 0 &{} 0&{} 0&{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \star &{}\star &{} \star &{} \star &{}\varTheta _{5,5} &{} 0 &{} 0&{} 0&{} 0 &{} 0 &{} 0 &{} 0 \\ \star &{}\star &{} \star &{} \star &{} \star &{} \varTheta _{6,6} &{}\varTheta _{6,7} &{} \varTheta _{6,8}&{} 0&{} 0 &{} 0 &{} 0 \\ \star &{}\star &{} \star &{} \star &{} \star &{} \star &{} \varTheta _{7,7} &{}\varTheta _{7,8} &{} \varTheta _{7,9}&{} 0&{} 0 &{} 0 \\ \star &{}\star &{} \star &{} \star &{} \star &{} \star &{} \star &{}\varTheta _{8,8} &{} \varTheta _{8,9}&{} 0 &{} 0 &{} 0 \\ \star &{}\star &{} \star &{} \star &{} \star &{} \star &{} \star &{} \star &{}\varTheta _{9,9} &{} 0 &{} 0 &{} 0 \\ \star &{}\star &{} \star &{} \star &{} \star &{} \star &{} \star &{} \star &{} \star &{}\varTheta _{10,10} &{} 0 &{} 0 \\ \star &{}\star &{} \star &{} \star &{} \star &{} \star &{} \star &{} \star &{} \star &{} \star &{}\varTheta _{11,11} &{} 0 \\ \star &{}\star &{} \star &{} \star &{} \star &{} \star &{} \star &{} \star &{} \star &{} \star &{} \star &{}\varTheta _{12,12} \end{array} \right] <0, \end{aligned}$$
(6)

where \(\varTheta _{1,1}=-PA-A^*P+Q+R+S+T+U+\sigma ^2Y-W-X-\sigma ^2O+\frac{\sigma ^6}{36}J+G\Gamma -\tau _1^2M-\tau _2^2N\), \(\varTheta _{1,9}= A^*PA+\sigma O\), \(\varTheta _{2,2}=-(1-\mu _1)Q \), \(\varTheta _{3,3}=-(1-\mu _2)R\), \(\varTheta _{4,4}=-W-S\), \(\varTheta _{5,5}= -X-T\), \(\varTheta _{6,6}=-U+\tau _1^2A^*WA+\tau _2^2A^*XA+\frac{\tau _1^4}{4} A^*MA+\frac{\tau _2^4}{4} A^*NA+\frac{\sigma ^4}{4} A^*OA\), \(\varTheta _{6,7}=-\tau _1^2A^*WB-\tau _2^2A^*XB-\frac{\tau _1^4}{4} A^*MB-\frac{\tau _2^4}{4} A^*NB-\frac{\sigma ^4}{4} A^*OB\), \(\varTheta _{6,8}=-\tau _1^2A^*WC-\tau _2^2A^*XC-\frac{\tau _1^4}{4} A^*MC-\frac{\tau _2^4}{4} A^*NC-\frac{\sigma ^4}{4} A^*OC\), \(\varTheta _{7,7}=\tau _1^2B^*WB+\tau _2^2B^*XB+\frac{\tau _1^4}{4} B^*MB+\frac{\tau _2^4}{4} B^*NB+\frac{\sigma ^4}{4} B^*OB+V-G\), \(\varTheta _{7,8}=\tau _1^2B^*WC+\tau _2^2B^*XC+\frac{\tau _1^4}{4} B^*MC+\frac{\tau _2^4}{4} B^*NC+\frac{\sigma ^4}{4} B^*OC\), \(\varTheta _{7,9}=-B^*PA \), \(\varTheta _{8,8}=\tau _1^2C^*WC+\tau _2^2C^*XC+\frac{\tau _1^4}{4} C^*MC+\frac{\tau _2^4}{4} C^*NC+\frac{\sigma ^4}{4} C^*OC-(1-\mu )V\), \(\varTheta _{8,9}=-C^*PA\), \(\varTheta _{9,9}=-Y-O\), \(\varTheta _{10,10}=-J\), \(\varTheta _{11,11}=-M\), \(\varTheta _{12,12}=-N\).

Proof

Consider the following Lyapunov–Krasovskii functional

$$\begin{aligned} V(t)=\sum \limits _{i=1}^{5}V_i(t), \end{aligned}$$
(7)

where

$$\begin{aligned} V_1(t)=&\left[ z(t)-A\int \limits _{t-\sigma }^{t}z(s)ds\right] ^*P\left[ z(t)-A\int \limits _{t-\sigma }^{t}z(s)ds\right] ,\\ V_2(t)=&\int \limits _{t-\tau _1(t)}^{t}z^*(s)Qz(s)ds+\int \limits _{t-\tau _2(t)}^{t}z^*(s)Rz(s)ds+\int \limits _{t-\tau _1}^{t}z^*(s)Sz(s)ds+\int \limits _{t-\tau _2}^{t}z^*(s)Tz(s)ds\\&+\,\int \limits _{t-\sigma }^{t}z^*(s)Uz(s)ds+\int \limits _{t-\tau (t)}^{t}f^*(z(s))Vf(z(s))ds,\\ V_3(t)=&\tau _1\int \limits _{-\tau _1}^{0}\int \limits _{t+\theta }^{t}\dot{z}^*(s) W \dot{z}(s)ds d \theta + \tau _2\int \limits _{-\tau _2}^{0}\int \limits _{t+\theta }^{t}\dot{z}^*(s) X \dot{z}(s)ds d \theta + \sigma \int \limits _{-\sigma }^{0}\int \limits _{t+\theta }^{t} z^*(s) Y z(s)ds d \theta ,\\ V_4(t)=&\frac{\tau _1^2}{2}\int \limits _{t-\tau _1}^{t}\int \limits _{\theta }^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) M \dot{z}(s)ds d \gamma d \theta + \frac{\tau _2^2}{2}\int \limits _{t-\tau _2}^{t}\int \limits _{\theta }^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) N \dot{z}(s)ds d \gamma d \theta ,\\ V_5(t)=&\frac{\sigma ^2}{2}\int \limits _{t-\sigma }^{t}\int \limits _{\theta }^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) O \dot{z}(s)ds d \gamma d \theta + \frac{\sigma ^3}{6}\int \limits _{t-\sigma }^{t}\int \limits _{\theta }^{t}\int \limits _{\gamma }^{t}\int \limits _{\delta }^{t} z^*(s) J z(s)ds d \delta d \gamma d \theta . \end{aligned}$$

Taking the time derivative of V(t) along the trajectories of system (3), it follows that

$$\begin{aligned} \dot{V}_1(t)&=\,2\left[ z(t)-A\int \limits _{t-\sigma }^{t}z(s)ds\right] ^*P[-Az(t)+Bf(z(t))+Cf(z(t-\tau _1(t)-\tau _2(t)))],\nonumber \\ &=-2z^*(t)PA z(t)+2z^*(t)PB f(z(t))+2z^*(t)PCf(z(t-\tau _1(t)-\tau _2(t)))+2\int \limits _{t-\sigma }^{t}z^*(s)dsA^*PA z(t)\nonumber \\&\,-2\int \limits _{t-\sigma }^{t}z^*(s)dsA^*PB f(z(t))-2\int \limits _{t-\sigma }^{t}z^*(s)dsA^*PC f(z(t-\tau _1(t)-\tau _2(t))), \end{aligned}$$
(8)
$$\begin{aligned} {\dot{V}_2(t) \le }&{ z^*(t)[Q+R+S+T+U]z(t)-\,(1-\mu _1)z^*(t-\tau _1(t))Qz(t-\tau _1(t))-\,(1-\mu _2)z^*(t-\tau _2(t))R}\nonumber \\&\,{z(t-\tau _2(t))-z^*(t-\tau _1)Sz(t-\tau _1)-\,z^*(t-\tau _2)Tz(t-\tau _2)-z^*(t-\sigma )Uz(t-\sigma )}\nonumber \\&\,{+f^*(z(t))Vf(z(t))-\,(1-\mu )f^*(z(t-\tau (t)))Vf(z(t-\tau (t))),}\end{aligned}$$
(9)
$$\begin{aligned} \dot{V}_3(t) &=\dot{z}^*(t)(\tau _1^2W+\tau _2^2 X) \dot{z}(t) +\sigma ^2 z^*(t)Yz(t) -\tau _1 \int \limits _{t-\tau _1}^{t}\dot{z}^*(s) W \dot{z}(s) ds -\tau _2 \int \limits _{t-\tau _2}^{t}\dot{z}^*(s) X \dot{z}(s) ds \nonumber \\&-\sigma \int \limits _{t-\sigma }^{t} z^*(s) Y z(s) ds,\end{aligned}$$
(10)
$$\begin{aligned} {\dot{V}_4(t)}&=\,{\frac{\tau _1^2}{2} \int \limits _{t-\tau _1}^{t}\int \limits _{\gamma }^{t}\dot{z}^*(t) M \dot{z}(t) ds d\gamma -\frac{\tau _1^2}{2} \int \limits _{t-\tau _1}^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) M \dot{z}(s) ds d\gamma +\frac{\tau _2^2}{2} \int \limits _{t-\tau _2}^{t}\int \limits _{\gamma }^{t}\dot{z}^*(t) N \dot{z}(t) ds d\gamma }\nonumber \\&{-\frac{\tau _2^2}{2} \int \limits _{t-\tau _2}^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) N \dot{z}(s) ds d\gamma }, \nonumber \\ &=\,\dot{z}^*(t)\left( \frac{\tau _1^4}{4}M+\frac{\tau _2^4}{4} N\right) \dot{z}(t) -\frac{\tau _1^2}{2} \int \limits _{t-\tau _1}^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) M \dot{z}(s) ds d\gamma -\frac{\tau _2^2}{2} \int \limits _{t-\tau _2}^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) N \dot{z}(s) ds d\gamma ,\end{aligned}$$
(11)
$$\begin{aligned} {\dot{V}_5(t)}&={\frac{\sigma ^2}{2} \int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\dot{z}^*(t) O \dot{z}(t) ds d\gamma -\frac{\sigma ^2}{2} \int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) O \dot{z}(s) ds d\gamma +\frac{\sigma ^3}{6} \int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\int \limits _{\delta }^{t} z^*(t) J z(t) ds d \delta d\gamma } \nonumber \\&{-\frac{\sigma ^3}{6} \int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\int \limits _{\delta }^{t} z^*(s) J z(s) ds d \delta d\gamma }, \nonumber \\ &=\frac{\sigma ^4}{4}\dot{z}^*(t)O \dot{z}(t)+ \frac{\sigma ^6}{36}z^*(t)J z(t) -\frac{\sigma ^2}{2} \int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) O \dot{z}(s) ds d\gamma -\frac{\sigma ^3}{6} \int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\int \limits _{\delta }^{t} z^*(s) J z(s) ds d \delta d\gamma . \end{aligned}$$
(12)

Applying Lemma 2.6 (i) to the integral terms of (10) produces

$$\begin{aligned} -\tau _1 \int \limits _{t-\tau _1}^{t}\dot{z}^*(s) W \dot{z}(s) ds \le&- \int \limits _{t-\tau _1}^{t}\dot{z}^*(s) ds W \int \limits _{t-\tau _1}^{t}\dot{z}(s) ds =-[z(t)-z(t-\tau _1)]^*W[z(t)-z(t-\tau _1)], \end{aligned}$$
(13)
$$\begin{aligned} -\tau _2 \int \limits _{t-\tau _2}^{t}\dot{z}^*(s) X \dot{z}(s) ds \le&- \int \limits _{t-\tau _2}^{t}\dot{z}^*(s) ds X \int \limits _{t-\tau _2}^{t}\dot{z}(s) ds =-[z(t)-z(t-\tau _2)]^*X[z(t)-z(t-\tau _2)],\end{aligned}$$
(14)
$$\begin{aligned} -\sigma \int \limits _{t-\sigma }^{t} z^*(s) Y z(s) ds \le&- \int \limits _{t-\sigma }^{t} z^*(s) ds Y \int \limits _{t-\sigma }^{t} z(s) ds. \end{aligned}$$
(15)

From (3) and (13)–(15), it follows that

$$\begin{aligned} \dot{V}_3(t) \le&z^*(t-\sigma )[\tau _1^2 A^*WA+\tau _2^2 A^*XA]z(t-\sigma )- z^*(t-\sigma )[\tau _1^2 A^*WB+\tau _2^2 A^*XB]f(z(t))- z^*(t-\sigma ) \nonumber \\&\,\,[\tau _1^2 A^*WC+\tau _2^2 A^*XC]f(z(t-\tau (t))) - f^*(z(t))[\tau _1^2 B^*WA+\tau _2^2B^*XA]z(t-\sigma ) +\,f^*(z(t))\nonumber \\&\, [\tau _1^2 B^*WB+\tau _2^2 B^*XB]f(z(t))+\,f^*(z(t))[\tau _1^2 B^*WC+\tau _2^2 B^*XC]f(z(t-\tau (t)))\nonumber \\&\,-f^*(z(t-\tau (t)))[\tau _1^2 C^*WA+\tau _2^2 C^*XA]z(t-\sigma )+ f^*(z(t-\tau (t)))[\tau _1^2 C^*WB+\tau _2^2 C^*XB]f(z(t))\nonumber \\&\,+ f^*(z(t-\tau (t)))[\tau _1^2 C^*WC+\tau _2^2 C^*XC]f(z(t-\tau (t)))+\,z^*(t)[\sigma ^2Y-W-X]z(t)-z^*(t-\tau _1)\nonumber \\&\, \times Wz(t-\tau _1) +\,z^*(t)Wz(t-\tau _1)+\,z^*(t-\tau _1)Wz(t)-z^*(t-\tau _2)Xz(t-\tau _2)+\,z^*(t)Xz(t-\tau _2)\nonumber \\&+\,z^*(t-\tau _2)Xz(t)-\int \limits _{t-\sigma }^{t}z^*(s)ds Y \int \limits _{t-\sigma }^{t}z(s)ds. \end{aligned}$$
(16)

By using Lemma 2.6 (ii), the integral terms in (11) can be estimated as follows:

$$\begin{aligned} -\frac{\tau _1^2}{2} \int \limits _{t-\tau _1}^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) M \dot{z}(s) ds d\gamma \le&-\int \limits _{t-\tau _1}^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) ds d\gamma M \times \int \limits _{t-\tau _1}^{t}\int \limits _{\gamma }^{t}\dot{z}(s) ds d\gamma ,\nonumber \\ =&-\left( \tau _1z(t)-\int \limits _{t-\tau _1}^{t}z(s)d s\right) ^* \times M\left( \tau _1z(t)-\int \limits _{t-\tau _1}^{t}z(s)d s\right) , \end{aligned}$$
(17)
$$\begin{aligned} -\frac{\tau _2^2}{2} \int \limits _{t-\tau _2}^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) N \dot{z}(s) ds d\gamma \le&-\int \limits _{t-\tau _2}^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) ds d\gamma N \int \limits _{t-\tau _2}^{t}\int \limits _{\gamma }^{t}\dot{z}(s) ds d\gamma ,\nonumber \\ =&-\left( \tau _2z(t)-\int \limits _{t-\tau _2}^{t}z(s)d s\right) ^* N \left( \tau _2z(t)-\int \limits _{t-\tau _2}^{t}z(s)d s\right) . \end{aligned}$$
(18)

Then, substituting (17) and (18) into (11) yields

$$\begin{aligned} \dot{V}_4(t) \le&\, z^*(t-\sigma )\left[ \frac{\tau _1^4}{4}A^*MA+\frac{\tau _2^4}{4} A^*NA\right] z(t-\sigma )- z^*(t-\sigma )\left[ \frac{\tau _1^4}{4} A^*MB+\frac{\tau _2^4}{4} A^*NB\right] f(z(t))- z^*(t-\sigma ) \nonumber \\&\,\times \left[ \frac{\tau _1^4}{4} A^*MC+\frac{\tau _2^4}{4} A^*NC\right] f(z(t-\tau (t))) - f^*(z(t))\left[ \frac{\tau _1^4}{4} B^*MA+\frac{\tau _2^4}{4}B^*NA\right] z(t-\sigma ) +f^*(z(t))\nonumber \\&\, \times \left[ \frac{\tau _1^4}{4} B^*MB+\frac{\tau _2^4}{4} B^*NB\right] f(z(t))+f^*(z(t))\left[ \frac{\tau _1^4}{4} B^*MC+\frac{\tau _2^4}{4} B^*NC\right] f(z(t-\tau (t))) \nonumber \\&\,-f^*(z(t-\tau (t)))\left[ \frac{\tau _1^4}{4} C^*MA+\frac{\tau _2^4}{4} C^*NA\right] z(t-\sigma )+ f^*(z(t-\tau (t)))\left[ \frac{\tau _1^4}{4} C^*MB+\frac{\tau _2^4}{4} C^*NB\right] \nonumber \\& f(z(t))+ f^*(z(t-\tau (t)))\left[ \frac{\tau _1^4}{4} C^*MC+\frac{\tau _2^4}{4} C^*NC\right] f(z(t-\tau (t)))-z^*(t)[\tau _1^2M+\tau _2^2N]z(t)\nonumber \\&\,- \int \limits _{t-\tau _1}^{t}z^*(s)ds\, M \int \limits _{t-\tau _1}^{t}z(s)ds+\tau _1 z^*(t)M \int \limits _{t-\tau _1}^{t}z(s)ds+\tau _1\int \limits _{t-\tau _1}^{t}z^*(s)ds\, M z(t)\nonumber \\&\,- \int \limits _{t-\tau _2}^{t}z^*(s)ds\, N \int \limits _{t-\tau _2}^{t}z(s)ds+\tau _2 z^*(t)N \int \limits _{t-\tau _2}^{t}z(s)ds+\tau _2\int \limits _{t-\tau _2}^{t}z^*(s)ds\, N z(t). \end{aligned}$$
(19)

Similarly, by using Lemma 2.6 (ii) and (iii), the integral terms in \(\dot{V}_5(t)\) can be estimated as follows:

$$\begin{aligned} -\frac{\sigma ^2}{2} \int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) O \dot{z}(s) ds d\gamma \le&-\int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\dot{z}^*(s) ds d\gamma O \times \int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\dot{z}(s) ds d\gamma ,\nonumber \\ =&-\left( \sigma z(t)-\int \limits _{t-\sigma }^{t}z(s)d s\right) ^* \times O\left( \sigma z(t)-\int \limits _{t-\sigma }^{t}z(s)ds\right) , \end{aligned}$$
(20)
$$\begin{aligned} -\frac{\sigma ^3}{6} \int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\int \limits _{\delta }^{t} z^*(s) J z(s) ds d \delta d\gamma \le&-\int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\int \limits _{\delta }^{t} z^*(s) ds d \delta d\gamma J \int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\int \limits _{\delta }^{t} z(s) ds d \delta d\gamma . \end{aligned}$$
(21)

Therefore, combining (12) with (20) and (21), we get

$$\begin{aligned} \dot{V}_5(t)\le&\, z^*(t-\sigma )\left[ \frac{\sigma ^4}{4}A^*OA\right] z(t-\sigma )- z^*(t-\sigma )\left[ \frac{\sigma ^4}{4} A^*OB\right] f(z(t))- z^*(t-\sigma ) \left[ \frac{\sigma ^4}{4} A^*OC\right] f(z(t-\tau (t)))\nonumber \\&- f^*(z(t))\left[ \frac{\sigma ^4}{4} B^*OA\right] z(t-\sigma ) +f^*(z(t)) \left[ \frac{\sigma ^4}{4} B^*OB\right] f(z(t))+\,f^*(z(t))\left[ \frac{\sigma ^4}{4} B^*OC\right] f(z(t-\tau (t))) \nonumber \\&-\,f^*(z(t-\tau (t)))\left[ \frac{\sigma ^4}{4} C^*OA\right] z(t-\sigma )+ f^*(z(t-\tau (t)))\left[ \frac{\sigma ^4}{4} C^*OB\right] f(z(t))+\,f^*(z(t-\tau (t)))\nonumber \\& \left[ \frac{\sigma ^4}{4} C^*OC\right] f(z(t-\tau (t)))-z^*(t)\left[ \sigma ^2O+\frac{\sigma ^6}{36}J\right] z(t)-\int \limits _{t-\sigma }^{t}z^*(s)ds\, O \int \limits _{t-\sigma }^{t}z(s)ds\nonumber \\&+\sigma z^*(t) O \int \limits _{t-\sigma }^{t}z(s)ds+\sigma \int \limits _{t-\sigma }^{t}z^*(s)ds\, O z(t) -\int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\int \limits _{\delta }^{t}z^*(s)ds d \delta d \gamma \, J \int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\int \limits _{\delta }^{t}z(s)ds d \delta d \gamma . \end{aligned}$$
(22)

Moreover, based on Assumption 2.4, for any \(p=1,2,\ldots ,n,\) we have

$$\begin{aligned} |f_p(z_p(t))| \le F_p |z_p(t)|. \end{aligned}$$
(23)

Let \(G=\text{ diag }\{s_1,s_2,\ldots ,s_n\}>0\). From (23), it can be seen that

$$\begin{aligned} s_pf_p^*(z_p(t)) f_p(z_p(t))-s_p F_p^2 z^*_p(t) z_p(t) \le 0,\quad \forall p=1,2,\ldots ,n. \end{aligned}$$

Thus,

$$\begin{aligned} f^*(z(t))G f(z(t))- z^*(t)G\Gamma z(t) \le 0. \end{aligned}$$
(24)

Combining (8), (9), (16), (19), (22) and (24), one can deduce that

$$\begin{aligned} \dot{V}(t)\le \zeta ^*(t)\varTheta \zeta (t), \end{aligned}$$

where

$$\begin{aligned} \zeta ^*(t)=\left[ z^*(t)\quad z^*(t-\tau _1(t))\quad z^*(t-\tau _2(t))\quad z^*(t-\tau _1)\quad z^*(t-\tau _2)\quad z^*(t-\sigma )\quad f^*(z(t))\right. \\ \left. f^*(z(t-\tau (t)))\quad \int \limits _{t-\sigma }^{t}z^*(s)ds \quad \int \limits _{t-\sigma }^{t}\int \limits _{\gamma }^{t}\int \limits _{\delta }^{t}z^*(s)ds d\delta d \gamma \quad \int \limits _{t-\tau _1}^{t}z^*(s)ds \quad \int \limits _{t-\tau _2}^{t}z^*(s)ds\right] . \end{aligned}$$

If (6) holds, then we get

$$\begin{aligned} \dot{V}(t)<0. \end{aligned}$$

Hence, the complex-valued neural network (3) is globally asymptotically stable. This completes the proof. \(\square \)
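The feasibility of (6) can be checked directly in YALMIP. The fragment below (ours) shows only the declaration pattern and the solver call; assembling the \(12\times 12\) block matrix \(\varTheta \) from the blocks listed in Theorem 3.1 is mechanical, and `build_theta` is a hypothetical helper standing in for that assembly:

```matlab
% Sketch: posing LMI (6) in YALMIP; build_theta is a hypothetical helper
% that assembles Theta from the blocks of Theorem 3.1.
n = 2;                                        % network dimension
names = {'J','M','N','O','P','Q','R','S','T','U','V','W','X','Y'};
D = struct();
for k = 1:numel(names)                        % complex Hermitian decision matrices
    D.(names{k}) = sdpvar(n, n, 'hermitian', 'complex');
end
G = diag(sdpvar(n, 1));                       % positive diagonal matrix G
Theta = build_theta(D, G);                    % 12n-by-12n block matrix of (6)
Cons = [Theta <= -1e-8*eye(12*n)];            % Theta < 0
for k = 1:numel(names)
    Cons = [Cons, D.(names{k}) >= 1e-8*eye(n)];
end
sol = optimize(Cons, [], sdpsettings('verbose', 0));
feasible = (sol.problem == 0);                % 0 means a feasible point was found
```

A complete, self-contained instance of this pattern for the simpler LMI (28) is given after Example 4.3.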

Remark 3.2

In Dey et al. (2010), the problem of asymptotic stability for continuous-time systems with additive time-varying delays is investigated by utilizing free matrix variables. The authors Wu et al. (2009) addressed the stability problem for a class of uncertain systems with two successive delay components. By using a convex polyhedron method, delay-dependent stability criteria are established in Shao and Han (2011) for neural networks with two additive time-varying delay components. However, when constructing the Lyapunov–Krasovskii functional, those results do not adequately use the full information about the additive time-varying delays \(\tau _1(t)\), \(\tau _2(t)\) and \(\tau (t)\), which is inevitably conservative to some extent. Subsequently, the authors Cheng et al. (2014) utilized the full information about the additive time-varying delays in the constructed Lyapunov–Krasovskii functional and studied the delay-dependent stability of a real-valued continuous-time system, which gives less conservative results. Inspired by the above, in the present paper we also make full use of the information on the additive time-varying delays \(\tau _1(t)\), \(\tau _2(t)\) and \(\tau (t)\) in the stability analysis of complex-valued neural networks.

Remark 3.3

In the existing literature, many researchers have studied the stability problem of neural networks and proposed good results; see, for example, Lakshmanan et al. (2013); Sakthivel et al. (2015); Xie et al. (2016) and the references therein. Most of these results are founded on the inequality \( \left( \int \limits _{a}^{b}z(s)ds\right) ^T M\left( \int \limits _{a}^{b}z(s)ds\right) \le (b-a) \int \limits _{a}^{b}z^T(s) M z(s)ds.\) Chen and Song (2013) and Velmurugan et al. (2015) studied the stability and passivity analysis of complex-valued neural networks with the help of the complex-valued Jensen inequality \( \left( \int \limits _{a}^{b}z(s)ds\right) ^* M\left( \int \limits _{a}^{b}z(s)ds\right) \le (b-a) \int \limits _{a}^{b}z^*(s) M z(s)ds. \) Based on the above analysis and discussion, in this paper the single integral terms are handled with the complex-valued Jensen inequality. Moreover, we introduce the double integral inequality \(\left( \int \limits _{a}^{b}\int \limits _{s}^{b}z(\theta )d\theta ds\right) ^* M\left( \int \limits _{a}^{b}\int \limits _{s}^{b}z(\theta )d\theta ds\right) \) \(\le \frac{(b-a)^2}{2} \int \limits _{a}^{b}\int \limits _{s}^{b}z^*(\theta ) M z(\theta )d\theta ds\) as well as the triple integral inequality \(\left( \int \limits _{a}^{b}\int \limits _{s}^{b}\int \limits _{\theta }^{b}z(\gamma )d\gamma d\theta ds\right) ^* M \left( \int \limits _{a}^{b}\int \limits _{s}^{b}\int \limits _{\theta }^{b}z(\gamma )d\gamma d\theta ds\right) \le \frac{(b-a)^3}{6} \int \limits _{a}^{b}\int \limits _{s}^{b}\int \limits _{\theta }^{b}z^*(\gamma ) M z(\gamma )d\gamma d\theta ds\) for estimating the derivative of the Lyapunov–Krasovskii functional of the complex-valued neural networks.

Remark 3.4

In the following, we discuss the global asymptotic stability criterion for complex-valued neural networks with additive time-varying delays only, that is, with no leakage delay \((i.e., \sigma =0)\) in (3); then system (3) becomes

$$\begin{aligned} \dot{z}(t)=&-Az(t)+Bf(z(t))+Cf(z(t-\tau _1(t)-\tau _2(t))),\nonumber \\ z(s)=\,&\phi (s), \quad s \in [-\tau ,0]. \end{aligned}$$
(25)

Then, according to Theorem 3.1, we have the following corollary for the delay-dependent global asymptotic stability of system (25).

Corollary 3.5

Given scalars \(\tau _1\), \(\tau _2\), \(\mu _1\) and \(\mu _2\), the equilibrium point of the complex-valued neural network (25) with additive time-varying delays is globally asymptotically stable if there exist positive definite Hermitian matrices M, N, P, Q, R, S, T, V, W, X and a positive diagonal matrix G such that the following linear matrix inequality holds:

$$\begin{aligned} \tilde{\varTheta }=\left[ \begin{array}{ccccccccc} \tilde{\varTheta }_{1,1} &{} 0 &{} 0&{} W &{} X &{}\tilde{\varTheta }_{1,6} &{} \tilde{\varTheta }_{1,7}&{} \tau _1 M &{} \tau _2 N\\ \star &{} \tilde{\varTheta }_{2,2} &{} 0 &{} 0&{} 0&{} 0 &{} 0 &{} 0 &{} 0 \\ \star &{} \star &{}\tilde{\varTheta }_{3,3} &{} 0&{} 0&{} 0 &{} 0 &{} 0 &{} 0 \\ \star &{}\star &{} \star &{}\tilde{\varTheta }_{4,4} &{} 0 &{} 0&{} 0&{} 0 &{} 0 \\ \star &{}\star &{} \star &{} \star &{}\tilde{\varTheta }_{5,5} &{} 0 &{} 0&{} 0&{} 0 \\ \star &{}\star &{} \star &{} \star &{} \star &{} \tilde{\varTheta }_{6,6} &{}\tilde{\varTheta }_{6,7} &{} 0 &{} 0 \\ \star &{}\star &{} \star &{} \star &{} \star &{} \star &{} \tilde{\varTheta }_{7,7} &{} 0 &{} 0 \\ \star &{}\star &{} \star &{} \star &{} \star &{} \star &{} \star &{}\tilde{\varTheta }_{8,8} &{} 0\\ \star &{}\star &{} \star &{} \star &{} \star &{} \star &{} \star &{} \star &{}\tilde{\varTheta }_{9,9} \end{array} \right] <0, \end{aligned}$$
(26)

where \(\tilde{\varTheta }_{1,1}=-PA-A^*P+Q+R+S+T-W-X+G\Gamma -\tau _1^2M-\tau _2^2N+\tau _1^2A^*WA+\tau _2^2A^*XA+\frac{\tau _1^4}{4}A^*MA+\frac{\tau _2^4}{4}A^*NA,\) \(\tilde{\varTheta }_{1,6}=PB-\tau _1^2A^*WB-\tau _2^2A^*XB-\frac{\tau _1^4}{4} A^*MB-\frac{\tau _2^4}{4} A^*NB,\) \(\tilde{\varTheta }_{1,7}=PC-\tau _1^2A^*WC-\tau _2^2A^*XC-\frac{\tau _1^4}{4} A^*MC-\frac{\tau _2^4}{4} A^*NC,\) \(\tilde{\varTheta }_{2,2}=-(1-\mu _1)Q,\) \(\tilde{\varTheta }_{3,3}=-(1-\mu _2)R,\) \(\tilde{\varTheta }_{4,4}=-W-S,\) \(\tilde{\varTheta }_{5,5}= -X-T,\) \(\tilde{\varTheta }_{6,6}=\tau _1^2B^*WB+\tau _2^2B^*XB+\frac{\tau _1^4}{4} B^*MB+\frac{\tau _2^4}{4} B^*NB+V-G,\) \(\tilde{\varTheta }_{6,7}=\tau _1^2B^*WC+\tau _2^2B^*XC+\frac{\tau _1^4}{4} B^*MC+\frac{\tau _2^4}{4} B^*NC,\) \(\tilde{\varTheta }_{7,7}=\tau _1^2C^*WC+\tau _2^2C^*XC+\frac{\tau _1^4}{4} C^*MC+\frac{\tau _2^4}{4} C^*NC-(1-\mu )V\), \(\tilde{\varTheta }_{8,8}=-M,\) \(\tilde{\varTheta }_{9,9}=-N.\)

Proof

The proof follows immediately from that of Theorem 3.1 by setting \(\sigma =0\), \(U=0,\) \(Y=0\), \(O=0\) and \(J=0\); hence it is omitted. \(\square \)

Remark 3.6

When \(\tau _1(t)=0\) or \(\tau _2(t)=0\), the complex-valued neural network with additive time-varying delays (25) reduces to one with a single time-varying delay. Without loss of generality, assume that \(\tau _2(t)=0\); then (25) becomes

$$\begin{aligned} \dot{z}(t)=&-Az(t)+Bf(z(t))+Cf(z(t-\tau _1(t))),\nonumber \\ z(s)=\,&\phi (s), \quad s \in [-\tau _1,0]. \end{aligned}$$
(27)

By letting \(\tau _2=0\) and \(R=T=X=N=0\) in Corollary 3.5, we can easily obtain a sufficient condition for the global asymptotic stability of the complex-valued neural network with a time-varying delay (27), which is summarized in the following corollary.

Corollary 3.7

Given scalars \(\tau _1\) and \(\mu _1\), the equilibrium point of the complex-valued neural network (27) is globally asymptotically stable if there exist positive definite Hermitian matrices M, P, Q, S, V, W and a positive diagonal matrix G such that the following linear matrix inequality holds:

$$\begin{aligned} \hat{\varTheta }=\left[ \begin{array}{cccccc} \hat{\varTheta }_{1,1} &{} 0 &{} W&{} \hat{\varTheta }_{1,4} &{} \hat{\varTheta }_{1,5}&{} \tau _1 M \\ \star &{}-(1-\mu _1)Q &{} 0 &{} 0&{} 0&{} 0\\ \star &{} \star &{}-W-S &{} 0&{} 0&{} 0 \\ \star &{}\star &{} \star &{}\hat{\varTheta }_{4,4} &{} \hat{\varTheta }_{4,5} &{} 0 \\ \star &{}\star &{} \star &{} \star &{}\hat{\varTheta }_{5,5} &{} 0\\ \star &{}\star &{} \star &{} \star &{} \star &{} -M \end{array} \right] <0, \end{aligned}$$
(28)

where \(\hat{\varTheta }_{1,1}=-PA-A^*P+Q+S-W+G\Gamma -\tau _1^2M+\tau _1^2A^*WA+\frac{\tau _1^4}{4}A^*MA\), \(\hat{\varTheta }_{1,4}=PB-\tau _1^2A^*WB-\frac{\tau _1^4}{4} A^*MB\), \(\hat{\varTheta }_{1,5}=PC-\tau _1^2A^*WC-\frac{\tau _1^4}{4} A^*MC,\) \(\hat{\varTheta }_{4,4}=\tau _1^2B^*WB+\frac{\tau _1^4}{4} B^*MB+V-G\), \(\hat{\varTheta }_{4,5}=\tau _1^2B^*WC+\frac{\tau _1^4}{4} B^*MC\), \(\hat{\varTheta }_{5,5}=\tau _1^2C^*WC+\frac{\tau _1^4}{4} C^*MC-(1-\mu _1)V\), \(\hat{\varTheta }_{6,6}=-M.\)

Remark 3.8

In Liu and Chen (2016), the global exponential stability of complex-valued neural networks with asynchronous time delays is established by decomposing the complex-valued network into its real and imaginary parts and constructing an equivalent real-valued system. The authors Xu et al. (2014) derived exponential stability conditions for a class of complex-valued neural networks with time-varying delays and unbounded delays by utilizing the vector Lyapunov–Krasovskii functional method, the homeomorphism mapping lemma and matrix theory. In Liu and Chen (2016) and Xu et al. (2014), the authors addressed stability results for complex-valued neural networks with constant or time-varying delays by separating the activation functions into their real and imaginary parts. However, when the activation functions cannot be expressed by separating their real and imaginary parts, the stability results proposed in Liu and Chen (2016) and Xu et al. (2014) cannot be applied. It should be mentioned that the stability criterion proposed in this paper is valid regardless of whether the activation functions can be expressed by separating their real and imaginary parts. Thus, the delay-dependent stability condition derived in this paper is more general than those in the existing literature (Liu and Chen 2016; Xu et al. 2014).

Numerical example

In this section, we give three numerical examples to demonstrate the derived main results.

Example 4.1

Consider a two-dimensional complex-valued neural network (3) with the following parameters:

$$\begin{aligned} A=\left[ \begin{array}{cc} 9 &{} 0\\ 0 &{} 9 \end{array}\right] , B=\left[ \begin{array}{cc} 1-i &{} -1-i\\ 2-i &{} 2-5i \end{array}\right] , C=\left[ \begin{array}{cc} 1-i &{} 1-i \\ 1+i &{} -1-i \end{array}\right] . \end{aligned}$$

The activation function is chosen as \(f(z(t))=\frac{1-e^{-x(t)}}{1+e^{-x(t)}}+i \frac{1}{1+e^{-y(t)}}\), where \(z(t)=x(t)+iy(t)\). The time-varying delays are taken as \(\tau _1(t)=0.2\sin t+0.2\), \(\tau _2(t)=0.5\cos t+0.8\), which satisfy \(\tau _1=0.4,\) \(\tau _2=1.3,\) \(\mu _1=0.2\) and \(\mu _2=0.5.\) Taking \(\Gamma =diag\{0.5 , 0.5\}\) and \(\sigma =0.08\) and using the YALMIP toolbox in MATLAB, we find the following feasible solutions to the linear matrix inequality (6):

$$\begin{aligned}&P=\left[ \begin{array}{cc} 1.5815 &{}-0.1675 + 0.0342i\\ -0.1675 - 0.0342i &{} 1.0867 \end{array}\right] ,\quad Q=\left[ \begin{array}{cc} 0.2537 &{} -0.1143 + 0.0229i\\ -0.1143 - 0.0229i &{} 0.0772 \end{array}\right] ,\\&R=\left[ \begin{array}{cc} 0.2538 &{} -0.1144 + 0.0229i\\ -0.1144 - 0.0229i &{} 0.0773 \end{array}\right] ,\quad S=\left[ \begin{array}{cc} 0.2507 &{} -0.1129 + 0.0226i\\ -0.1129 - 0.0226i &{} 0.0764 \end{array}\right] ,\\&T=\left[ \begin{array}{cc} 0.2533 &{} -0.1141 + 0.0229i\\ -0.1141 - 0.0229i &{} 0.0771 \end{array}\right] ,\quad U=\left[ \begin{array}{cc} 0.8848 &{} -0.2728 + 0.0567i\\ -0.2728 - 0.0567i &{} 0.4261 \end{array}\right] ,\\&V=\left[ \begin{array}{cc} 7.0635 &{} -1.2995 + 0.8830i\\ -1.2995 - 0.8830i &{} 3.1009 \end{array}\right] ,\quad W=\left[ \begin{array}{cc} 0.0061 &{} -0.0028 + 0.0006i\\ -0.0028 - 0.0006i &{} 0.0017 \end{array}\right] ,\\&J=\left[ \begin{array}{cc} 87.0921 &{} -0.0011 + 0.0002i\\ -0.0011 - 0.0002i &{} 87.0905 \end{array}\right] ,\quad X= 10^{-03} \times \left[ \begin{array}{cc} 0.5620 &{} -0.2517 + 0.0505i\\ -0.2517 - 0.0505i &{} 0.1735 \end{array}\right] , \\&M=\left[ \begin{array}{cc} 0.2831 &{} -0.1315 + 0.0264i\\ -0.1315 - 0.0264i &{} 0.0802 \end{array}\right] ,\quad Y=10^{-03} \left[ \begin{array}{cc} 1.1409 &{} -0.1424 + 0.0287i\\ -0.1424 - 0.0287i &{} 0.7563 \end{array}\right] , \\&N=\left[ \begin{array}{cc} 0.0026 &{} -0.0012 + 0.0002i\\ -0.0012 - 0.0002i&{} 0.0007 \end{array}\right] ,\quad O= 10^{-02} \times \left[ \begin{array}{cc} 0.8791 &{} 0.5352 - 0.0968i\\ 0.5352 + 0.0968i &{} 1.4391 \end{array}\right] , \end{aligned}$$

and \(G=diag\{9.6363 , 7.7007\}.\) According to Theorem 3.1, the complex-valued neural network with leakage delay and additive time-varying delays (3) is globally asymptotically stable. Figures 1 and 2 show the time responses of the real and imaginary parts of system (3) with 21 initial conditions, respectively. The phase trajectory of the real parts of system (3) is given in Fig. 3; similarly, the phase trajectory of the imaginary parts is given in Fig. 4. Also, Figs. 5 and 6 depict the real and imaginary parts of the states of the considered complex-valued neural network (3) with \(\sigma =1\) under the same 21 initial conditions. It is easy to check that the unique equilibrium point of system (3) is then unstable, which implies that the leakage delay cannot be ignored when analyzing the stability of complex-valued neural networks.
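The trajectories in Figs. 1 and 2 can be reproduced with a simple fixed-step Euler scheme. Below is a minimal MATLAB sketch of ours; the step size, horizon and constant initial history are illustrative choices:

```matlab
% Hypothetical Euler simulation of system (3) with the data of Example 4.1.
A = [9 0; 0 9];
B = [1-1i, -1-1i; 2-1i, 2-5i];
C = [1-1i, 1-1i; 1+1i, -1-1i];
f = @(z) (1-exp(-real(z)))./(1+exp(-real(z))) + 1i./(1+exp(-imag(z)));
tau = @(t) (0.2*sin(t)+0.2) + (0.5*cos(t)+0.8);   % tau1(t) + tau2(t) <= 1.7
sigma = 0.08;

h = 1e-3; T = 10; t = 0:h:T; N = numel(t);
nh = ceil((sigma + 1.7)/h);                % history buffer covers both delays
z = [(0.5+0.5i)*ones(2,nh), zeros(2,N)];   % constant initial history (ours)
for k = nh:(nh+N-1)
    zs = z(:, k - round(sigma/h));              % z(t - sigma)
    zt = z(:, k - round(tau(t(k-nh+1))/h));     % z(t - tau(t))
    z(:, k+1) = z(:, k) + h*(-A*zs + B*f(z(:,k)) + C*f(zt));
end
plot(t, real(z(1, nh+1:end)))              % Re z_1(t) settles, cf. Fig. 1
```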

Fig. 1

State trajectories of real parts of the system (3) in Example 4.1

Fig. 2

State trajectories of imaginary parts of the system (3) in Example 4.1

Fig. 3

State trajectories of neural networks (3) between real subspace \([Re(z_1),Re(z_2)]\)

Fig. 4

State trajectories of neural networks (3) between imaginary subspace \([Im(z_1(t)),Im(z_2(t))]\)

Fig. 5

Time response of real parts of the system (3) when \(\sigma =1\)

Fig. 6

Time response of imaginary parts of the system (3) when \(\sigma =1\)

Example 4.2

Consider the following two-dimensional complex-valued neural network (25) with additive time-varying delays:

$$\begin{aligned} \dot{z}(t)= -Az(t)+Bf(z(t))+Cf(z(t-\tau _1(t)-\tau _2(t))), \end{aligned}$$

where

$$\begin{aligned} A=\left[ \begin{array}{cc} 10&{} 0\\ 0&{} 10 \end{array}\right] , B=\left[ \begin{array}{cc} 2-i &{}-1-3i\\ 2-2i &{} 2-i \end{array}\right] , C=\left[ \begin{array}{cc} 2-i &{} 1-i \\ 1+i &{}-3-i \end{array}\right] . \end{aligned}$$

The additive time-varying delays are taken as \(\tau _1(t)=0.5\sin (0.2t)\) and \(\tau _2(t)=0.4\cos (0.6t)\). Choose the nonlinear activation function as \(f(z) =\tanh z\) with \(\Gamma =\text{ diag }\{0.5,0.5\}\), \(\tau _1=0.5\), \(\tau _2=0.4\), \(\mu _1=0.1\), \(\mu _2=0.24.\) Using the YALMIP toolbox in MATLAB with the above parameters, the linear matrix inequality (26) is feasible. From Figs. 7 and 8, we find that the real and imaginary parts of the state trajectories converge to the zero equilibrium point for different initial conditions, respectively. The phase trajectories of the real and imaginary parts of system (25) are depicted in Figs. 9 and 10, respectively. By Corollary 3.5, we can conclude that the proposed neural network (25) is globally asymptotically stable.

Fig. 7

Trajectories of the real parts x(t) of the states z(t) for the neural network (25)

Fig. 8

Trajectories of the imaginary parts y(t) of the states z(t) for the neural network (25)

Fig. 9

State trajectories of neural networks (25) between real subspace \([x_1(t),x_2(t)]\)

Fig. 10

State trajectories of neural networks (25) between imaginary subspace \([y_1(t),y_2(t)]\)

Example 4.3

Consider the following two-dimensional complex-valued neural network (27) with a time-varying delay:

$$\begin{aligned} \dot{z}(t)=&-Az(t)+Bf(z(t))+Cf(z(t-\tau _1(t))), \end{aligned}$$

where

$$\begin{aligned} A=\left[ \begin{array}{cc} 8&{} 0\\ 0&{} 8 \end{array}\right] , B=\left[ \begin{array}{cc} -1-2i &{} 1-3i\\ 2-3i&{} 4-i \end{array}\right] , C=\left[ \begin{array}{cc} 2-2i &{} 1-i \\ 3+i &{} 1-i \end{array}\right] . \end{aligned}$$

Choose the nonlinear activation function as \(f(z) =\tanh z\) with \(\Gamma =\text{ diag }\{0.5,0.5\}\). The time-varying delay is chosen as \(\tau _1(t)=0.1\sin t+0.6\), which satisfies \(\tau _1=0.7\), \(\mu _1=0.1.\) By employing the MATLAB YALMIP toolbox, we can find the following feasible solutions to the linear matrix inequality (28), which guarantee the global asymptotic stability of the equilibrium point:

$$\begin{aligned}&P=\left[ \begin{array}{cc} 49.3881 &{} -14.0945 +18.3994i\\ -14.0945 -18.3994i &{} 37.0857 \end{array}\right] ,\quad Q=\left[ \begin{array}{cc} 51.1062 &{} -19.9936 +26.6048i\\ -19.9936 -26.6048i &{} 39.9656 \end{array}\right] ,\\&S=\left[ \begin{array}{cc} 44.6015 &{} -17.1853 +23.1041i\\ -17.1853 -23.1041i &{} 34.9374 \end{array}\right] ,\quad V=10^{02}\times \left[ \begin{array}{cc} 1.7782 &{} -0.0575 - 0.4004i\\ -0.0575 + 0.4004i&{} 0.7124 \end{array}\right] ,\\&W=\left[ \begin{array}{cc} 10.7229 &{} -4.4289 + 5.6768i\\ -4.4289 - 5.6768i &{} 7.4946 \end{array}\right] ,\quad M=\left[ \begin{array}{cc} 38.5826 &{} -8.7800 +11.8910i\\ -8.7800 -11.8910i &{} 31.9817 \end{array}\right] , \end{aligned}$$

\( G=diag\{ 311.1505, 230.5390\}. \) Figures 11 and 12, respectively, display the state trajectories of the complex-valued neural network (27), whose real and imaginary parts converge to the origin for 21 randomly selected initial conditions. The phase trajectories of the real and imaginary parts of system (27) are drawn in Figs. 13 and 14, respectively.
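For reference, the feasibility of (28) with the above data can be verified by the following self-contained YALMIP script (a sketch of ours; the strictness margin \(10^{-6}\) and the default solver are arbitrary choices):

```matlab
% Feasibility check of LMI (28) in Corollary 3.7 for the data of Example 4.3.
n = 2;
A = [8 0; 0 8];
B = [-1-2i, 1-3i; 2-3i, 4-1i];
C = [2-2i, 1-1i; 3+1i, 1-1i];
Gamma = diag([0.5 0.5]);
tau1 = 0.7; mu1 = 0.1;

P = sdpvar(n,n,'hermitian','complex');  Q = sdpvar(n,n,'hermitian','complex');
S = sdpvar(n,n,'hermitian','complex');  V = sdpvar(n,n,'hermitian','complex');
W = sdpvar(n,n,'hermitian','complex');  M = sdpvar(n,n,'hermitian','complex');
G = diag(sdpvar(n,1));                  % positive diagonal matrix

T11 = -P*A - A'*P + Q + S - W + G*Gamma - tau1^2*M ...
      + tau1^2*(A'*W*A) + tau1^4/4*(A'*M*A);
T14 = P*B - tau1^2*(A'*W*B) - tau1^4/4*(A'*M*B);
T15 = P*C - tau1^2*(A'*W*C) - tau1^4/4*(A'*M*C);
T44 = tau1^2*(B'*W*B) + tau1^4/4*(B'*M*B) + V - G;
T45 = tau1^2*(B'*W*C) + tau1^4/4*(B'*M*C);       % the cross block Theta_{4,5}
T55 = tau1^2*(C'*W*C) + tau1^4/4*(C'*M*C) - (1-mu1)*V;
Z = zeros(n);

Theta = [T11,     Z,          W,     T14,  T15,  tau1*M;
         Z',     -(1-mu1)*Q,  Z,     Z,    Z,    Z;
         W',      Z',        -W-S,   Z,    Z,    Z;
         T14',    Z',         Z',    T44,  T45,  Z;
         T15',    Z',         Z',    T45', T55,  Z;
         tau1*M', Z',         Z',    Z',   Z',  -M];

e = 1e-6;
Cons = [P>=e*eye(n), Q>=e*eye(n), S>=e*eye(n), V>=e*eye(n), ...
        W>=e*eye(n), M>=e*eye(n), G>=e*eye(n), Theta<=-e*eye(6*n)];
sol = optimize(Cons, [], sdpsettings('verbose',0));
disp(sol.problem == 0)    % 1: feasible, confirming global asymptotic stability
```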

Fig. 11

Time responses of real parts of the system (27) with 21 initial conditions

Fig. 12

Time responses of imaginary parts of the system (27) with 21 initial conditions

Fig. 13

Phase trajectories of real parts of the proposed system (27)

Fig. 14

Phase trajectories of imaginary parts of the proposed system (27)

Conclusion

In this paper, the global asymptotic stability of complex-valued neural networks with leakage delay and additive time-varying delays has been studied. Sufficient conditions have been proposed to ascertain the global asymptotic stability of the addressed neural networks based on an appropriate Lyapunov–Krasovskii functional involving triple integral terms. The main results are formulated as complex-valued linear matrix inequalities, which can be easily solved by the YALMIP toolbox in MATLAB. Three numerical examples have been presented to illustrate the effectiveness of the theoretical results.