1 Introduction

In the natural world, many practical systems can be modeled by complex dynamical networks (CDNs), such as the Internet, food webs, electric power grids, scientific citation networks and social networks. A CDN usually contains a large number of interconnected nodes, in which each node represents an element with certain dynamics and each edge represents the relationship between a pair of nodes. Owing to their wide and potential applications in various fields, CDNs have attracted much attention across many areas of science and engineering during the past few decades [1–4].

It is very common that natural systems exhibit collective cooperative behaviors among their constituents. Synchronization, as a typical collective behavior, is a significant and interesting phenomenon in CDNs, which not only explains many natural phenomena, such as the synchronized flashing of fireflies and the spread of an epidemic, but also has many potential applications in image processing, secure communication, synchronous information exchange in the Internet, genetic regulatory processes, as well as the synchronous transfer of digital signals in communication networks. Up to now, much effort has been devoted to the study of synchronization in large-scale networks [5–18]. In [5], the authors showed that the synchronizability of a scale-free dynamical network is robust against random removal of nodes but fragile to the removal of specific nodes. In [6], the authors investigated the locally and globally adaptive synchronization of an uncertain complex dynamical network. The problem of globally exponential synchronization of impulsive dynamical networks was investigated in [7]. Pinning synchronization problems in CDNs were considered in [8–11]. Based on the input delay method, sampled-data synchronization problems for CDNs were investigated in [12–15]. By utilizing a periodically intermittent method, the synchronization problem of dynamical networks was dealt with in [16, 17]. Non-fragile synchronization control for complex networks was discussed in [18].

As is well known, owing to the finite speed of information transmission and processing among the units, time-delayed coupling is ubiquitous in real-world networks, such as communication networks, biological neural networks, epidemiological models, electric power grids, and so on. In order to give a more precise description of a practical dynamical network, time-delayed coupling should therefore be taken into account, and much attention has been paid to the synchronization problem of CDNs with time-delayed couplings. In the study of synchronization in CDNs with coupling delays, one of the fundamental problems is to find the maximum upper bound of the delay for which the network still synchronizes by itself; this can be regarded as the delay-dependent synchronization stability problem. In [19], both continuous- and discrete-time network models with constant coupling delays were considered, and synchronization criteria were derived for both delay-independent and delay-dependent asymptotic stability. In [20], the authors developed several new delay-dependent synchronization stability criteria for general complex dynamical network models with coupling delays. In [21], the authors introduced a new Lyapunov–Krasovskii functional (LKF) based on a delay-fractioning technique to derive improved synchronization stability conditions for complex networks with constant coupling delays. In [22], local and global synchronization problems in general complex dynamical networks with delay coupling were analyzed, and some simple synchronization criteria were given in terms of linear matrix inequalities (LMIs). By using the free-weighting matrix technique, the synchronization problem for general complex dynamical networks with time-varying delays both in the network couplings and in the dynamical nodes was investigated in [23]. By using a piecewise analysis method and the convexity of matrix functions, the synchronization stability problem was investigated in [24] for general CDNs with interval time-varying delays in the dynamical nodes and in the coupling term. Furthermore, the piecewise analysis method was used to study the synchronization problem for continuous complex dynamical networks with non-delayed and delayed coupling in [25]. If the delayed complex dynamical network cannot achieve asymptotic synchronization by itself, the authors in [26] proposed a local linear feedback strategy to deal with the problem. In [27], by using a simple local linear feedback control strategy and the reciprocally convex combination approach, the problem of synchronization in complex dynamical networks with interval time-varying coupling delays was considered. In [28], the authors dealt with the synchronization of both continuous- and discrete-time CDNs by constructing a novel LKF and using the optimal partitioning approach and the reciprocally convex combination technique. By choosing a suitable LKF and utilizing Finsler’s lemma, some new synchronization criteria for fuzzy CDNs with interval time-varying delays were established in [29]. The authors in [30] considered the synchronization stability problem of a class of neutral-type CDNs with interval time-varying coupling delays and a pair of nonlinear constraints. However, the results in [23–27] for dynamical networks with interval time-varying coupling delays are still conservative to some extent, which leaves room for further improvement.

It is known that real-world dynamical networks usually contain a large number of nodes. If the number of nodes is large enough, the resulting computational burden becomes huge. On the other hand, the more decision variables the proposed conditions involve, the higher the computational complexity. Therefore, from the viewpoint of practical application, it is of great importance to find new synchronization conditions for CDNs with time-varying coupling delays that are less conservative and computationally inexpensive.

Motivated by the above discussion, this paper further considers the synchronization stability problem for a general complex dynamical network with an interval time-varying coupling delay and a delay in the dynamical nodes. By developing a variable delay-partitioning approach, both the information of the variable subinterval delay and the lower and upper bounds of the delay can be taken fully into account. By constructing different LKFs on the two subintervals and using the reciprocally convex approach, some new and improved delay-dependent synchronization stability conditions are proposed in terms of LMIs, which can be solved effectively by the MATLAB LMI Toolbox. Numerical examples are given to demonstrate the effectiveness and reduced conservatism of the obtained results.

Notations: Throughout this paper, \( R^{n} \) denotes the n-dimensional Euclidean space and \( R^{m \times n} \) is the set of all \( m \times n \) real matrices. The notation \( P > 0 \) (respectively, \( P < 0 \)) for \( P \in R^{n \times n} \) means that the matrix P is real symmetric positive definite (respectively, negative definite). The superscript “T” represents the transpose. The symmetric terms in a symmetric matrix are denoted by *. Matrices, if their dimensions are not explicitly stated, are assumed to have compatible dimensions for algebraic operations.

2 Problem formulation

Consider a delayed CDN consisting of N identical nodes, in which each node is an n-dimensional dynamical subsystem

$$ \dot{x}_{i} (t) = f(x_{i} (t),x_{i} (t - \tau (t))) + c_{1} \sum\limits_{j = 1}^{N} {g_{ij} \varGamma_{1} x_{j} (t)} + c_{2} \sum\limits_{j = 1}^{N} {g_{ij} \varGamma_{2} x_{j} (t - \tau (t))} ,\quad i = 1,2, \ldots ,N, $$
(1)

where \( x_{i} = (x_{i1} ,x_{i2} , \ldots ,x_{in} )^{\text{T}} \in R^{n} \) is the state vector of the ith node, and \( f( \cdot ) \in R^{n} \) is a continuously differentiable vector function. The constants \( c_{l} > 0\;(l = 1,2) \) denote the strengths of the non-delayed coupling and the time-delayed coupling, respectively. \( \tau (t) \) represents the time-varying coupling delay, which satisfies

$$ \tau_{1} \le \tau (t) \le \tau_{2} , $$
(2)

where \( \tau_{1} \) and \( \tau_{2} \) are known positive constants. \( \varGamma_{l} = (\gamma_{lij} )_{n \times n} \in R^{n \times n} \;(l = 1,2) \) are the constant inner-coupling matrix and the time-delayed inner-coupling matrix, respectively. \( G = (g_{ij} ) \in R^{N \times N} \) is the coupling configuration matrix, where \( g_{ij} \) is defined as follows: if there is a connection between node i and node j \( (i \ne j) \), then \( g_{ij} > 0 \); otherwise, \( g_{ij} = 0 \). The diagonal elements of G are defined by \( g_{ii} = - \sum\nolimits_{j = 1,j \ne i}^{N} {g_{ij} } ,\;i = 1,2, \ldots ,N \).
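For concreteness, the diagonal rule above can be sketched in a few lines of Python; the adjacency pattern used here is a hypothetical illustration, not one of the networks considered later.

```python
import numpy as np

def coupling_matrix(adjacency):
    """Build G from a nonnegative (possibly asymmetric) adjacency matrix:
    g_ij = a_ij for i != j, and each diagonal entry is minus the sum of the
    off-diagonal entries in its row, so every row of G sums to zero."""
    A = np.asarray(adjacency, dtype=float)
    G = A.copy()
    np.fill_diagonal(G, 0.0)
    np.fill_diagonal(G, -G.sum(axis=1))
    return G

# Hypothetical 4-node adjacency pattern (illustration only).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 0]])
G = coupling_matrix(A)
assert np.allclose(G.sum(axis=1), 0.0)          # zero row sums by construction
```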

Remark 1

The coupling configuration matrix G represents the topological structure of the network. It should be noted that the coupling configuration matrix was assumed to be symmetric in [23–25], which is quite restrictive in practice. In our network model (1), however, the coupling configuration matrix G does not need to be symmetric. Moreover, non-delayed coupling and delayed coupling coexist in our network model, which means that information is exchanged among the nodes not only at time t but also at time \( t - \tau (t) \). Such a phenomenon is widespread in the real world; for example, in the stock market, the decision-making of a single trader is influenced by that of others at time t as well as at time \( t - \tau (t) \). In conclusion, the network model considered here is more general than those in [23–27].

Similar to Zhou et al. [27], suppose that network (1) is connected in the sense that there are no isolated clusters, that is, G is an irreducible matrix. According to the analysis in [22], since the row sums of G are all zero and G is irreducible, zero is an eigenvalue of G with multiplicity 1. For simplicity, we assume that G has \( v - 1\;(v \le N) \) distinct nonzero eigenvalues \( \lambda_{2} , \ldots ,\lambda_{v} \).
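The zero row sums and the nonzero eigenvalues \( \lambda_{2} , \ldots ,\lambda_{v} \) can be inspected numerically, as in the following minimal sketch; the coupling matrix used here is a small hypothetical example, and for an asymmetric G the nonzero eigenvalues may be complex.

```python
import numpy as np

# Hypothetical asymmetric coupling matrix with zero row sums (illustration only).
G = np.array([[-2.0,  1.0,  1.0],
              [ 2.0, -3.0,  1.0],
              [ 1.0,  0.0, -1.0]])
assert np.allclose(G.sum(axis=1), 0.0)

eigvals = np.linalg.eigvals(G)
zero_mask = np.isclose(eigvals, 0.0, atol=1e-9)
assert zero_mask.sum() == 1                  # zero eigenvalue with multiplicity 1
nonzero_eigs = eigvals[~zero_mask]           # the eigenvalues lambda_2, ..., lambda_v
print(nonzero_eigs)                          # here: approximately -2 and -4
```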

Definition 1

([27]) The delayed dynamical network (1) is said to achieve asymptotic synchronization if

$$ x_{1} (t) = \cdots = x_{N} (t) = s(t)\;{\text{as}}\;t \to \infty , $$
(3)

where \( s(t) \) is a solution of an isolated node, satisfying \( \dot{s}(t) = f(s(t),s(t - \tau (t))) \).

Normally, synchronization of the network requires \( x_{i} (t) - s(t) \to 0 \) as \( t \to \infty \), \( i = 1,2, \ldots ,N \). Let \( S(t) = (s(t),s(t), \ldots ,s(t))^{\text{T}} \) be the synchronization state of network (1). From Definition 1, when network (1) achieves asymptotic synchronization, \( x_{i} (t) - s(t) \to 0 \) and the synchronization state \( S(t) \) is asymptotically stable in the state space. Conversely, if the synchronization state \( S(t) \) is asymptotically stable, then \( x_{i} (t) - s(t) \to 0 \) and the time-varying delayed network (1) realizes asymptotic synchronization. Therefore, the asymptotic stability of the synchronization state is equivalent to the asymptotic synchronization of the network.

To proceed further, the following lemmas are needed, which play an important role in the derivation of main results.

Lemma 1

Consider the delayed dynamical network (1). If the following \( v - 1 \) linear time-varying delayed differential equations are asymptotically stable about their zero solutions,

$$ \dot{\eta }_{k} (t) = (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} )\eta_{k} (t) + (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} )\eta_{k} (t - \tau (t)),\quad k = 2, \ldots ,v, $$
(4)

where \( J_{1} (t) \) is the Jacobian of \( f(x(t),x(t - \tau (t))) \) with respect to \( x(t) \) evaluated at \( s(t) \) and \( J_{2} (t) \) is the Jacobian of \( f(x(t),x(t - \tau (t))) \) with respect to \( x(t - \tau (t)) \) evaluated at \( s(t - \tau (t)) \), then the asymptotic synchronization of network (1) is achieved.

Proof

Let \( e_{i} (t) = x_{i} (t) - s(t)\;(i = 1,2, \ldots ,N) \) be the synchronization error state. According to Definition 1, the synchronization of the delayed complex dynamical network (1) is equivalent to \( e_{i} (t) \to 0 \) as \( t \to \infty \). The error dynamics is given by

$$ \dot{e}_{i} (t) = f(x_{i} (t),x_{i} (t - \tau (t))) - f(s(t),s(t - \tau (t))) + c_{1} \sum\limits_{j = 1}^{N} {g_{ij} \varGamma_{1} e_{j} (t)} + c_{2} \sum\limits_{j = 1}^{N} {g_{ij} \varGamma_{2} e_{j} (t - \tau (t))} . $$
(5)

Because \( f( \cdot ) \) is a continuously differentiable vector function, by linearizing the error system (5) and letting \( e(t) = (e_{1} (t),e_{2} (t), \ldots ,e_{N} (t)) \), we can obtain

$$ \dot{e}(t) = J_{1} (t)e(t) + J_{2} (t)e(t - \tau (t)) + c_{1} \varGamma_{1} e(t)G^{\text{T}} + c_{2} \varGamma_{2} e(t - \tau (t))G^{\text{T}} . $$
(6)

By matrix theory and similarly to Lu and Ho [22], the matrix \( G^{\text{T}} \) admits the Jordan decomposition \( G^{\text{T}} = \varPhi J\varPhi^{ - 1} \), where \( J = {\text{diag}}\{ J_{1} ,J_{2} , \ldots ,J_{v} \} \) is a block diagonal matrix and \( J_{k} \) is the Jordan block corresponding to the eigenvalue \( \lambda_{k} \) of G with multiplicity \( m_{k} \). Furthermore, letting \( \eta (t) = e(t)\varPhi \), we have

$$ \dot{\eta }(t) = J_{1} (t)\eta (t) + J_{2} (t)\eta (t - \tau (t)) + c_{1} \varGamma_{1} \eta (t)J + c_{2} \varGamma_{2} \eta (t - \tau (t))J, $$
(7)

where \( \eta (t) = (\eta_{1} (t),\eta_{2} (t), \ldots ,\eta_{v} (t)) \), \( \eta_{k} (t) = (\eta_{k1} (t),\eta_{k2} (t), \ldots ,\eta_{{km_{k} }} (t)) \). From the assumption that \( \lambda_{1} = 0 \) with multiplicity 1, we have \( \eta_{1} (t) = 0 \). For \( k = 2, \ldots ,v \), it holds that

$$ \dot{\eta }_{k} (t) = (J_{1} (t) + c_{1} \lambda_{k}\Gamma _{1} )\eta_{k} (t) + (J_{2} (t) + c_{2} \lambda_{k}\Gamma _{2} )\eta_{k} (t - \tau (t)). $$

Therefore, if the linear time-varying delayed differential systems (4) are asymptotically stable about their zero solutions, the dynamical network (1) achieves synchronization. This completes the proof.
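As a numerical cross-check of the transformation used in this proof, the following sketch assumes that \( G^{\text{T}} \) happens to be diagonalizable, so the Jordan form reduces to an eigendecomposition; it verifies \( G^{\text{T}} = \varPhi J\varPhi^{ - 1} \) and that the change of variables \( \eta (t) = e(t)\varPhi \) turns right-multiplication by \( G^{\text{T}} \) into right-multiplication by J. The matrices below are illustrative only.

```python
import numpy as np

# Hypothetical coupling matrix with zero row sums (same illustrative G as above).
G = np.array([[-2.0,  1.0,  1.0],
              [ 2.0, -3.0,  1.0],
              [ 1.0,  0.0, -1.0]])

# Assuming G^T is diagonalizable, the Jordan form J is diagonal and Phi collects
# the eigenvectors of G^T.
eigvals, Phi = np.linalg.eig(G.T)
J = np.diag(eigvals)
assert np.allclose(Phi @ J @ np.linalg.inv(Phi), G.T)    # G^T = Phi J Phi^{-1}

# For an "error" matrix e(t) of shape n x N, eta(t) = e(t) Phi turns the coupling
# term e(t) G^T into eta(t) J, which is what decouples the modes in the proof.
n, N = 3, G.shape[0]
e = np.random.default_rng(1).standard_normal((n, N))
eta = e @ Phi
assert np.allclose((e @ G.T) @ Phi, eta @ J)
```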

Lemma 2

([31]) For any constant matrix \( Z = Z^{\text{T}} > 0 \), scalars \( h_{2} > h_{1} > 0 \), and vector function \( x(s) \) such that the integrations concerned are well defined, the following inequality holds:

$$ - \int_{{t - h_{2} }}^{{t - h_{1} }} {x^{\text{T}} (s)Zx(s){\text{d}}s} \le - \frac{1}{{(h_{2} - h_{1} )}}\int_{{t - h_{2} }}^{{t - h_{1} }} {x^{\text{T}} (s){\text{d}}s} Z\int_{{t - h_{2} }}^{{t - h_{1} }} {x(s){\text{d}}s} . $$
(8)
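A quick numerical spot-check of this inequality can be carried out for an arbitrary smooth vector function; the function, matrix and integration limits below are illustrative choices only.

```python
import numpy as np

# Numerical spot-check of Lemma 2 for an arbitrary smooth vector function x(s).
Z = np.array([[2.0, 0.5], [0.5, 1.0]])                  # Z = Z^T > 0
x = lambda s: np.array([np.sin(s), np.cos(2.0 * s)])
h1, h2, t = 0.2, 1.0, 0.0
s_grid = np.linspace(t - h2, t - h1, 2001)
xs = np.array([x(s) for s in s_grid])

lhs = -np.trapz([xi @ Z @ xi for xi in xs], s_grid)     # -integral of x^T Z x
int_x = np.trapz(xs, s_grid, axis=0)                    # integral of x
rhs = -(int_x @ Z @ int_x) / (h2 - h1)
assert lhs <= rhs + 1e-8                                # inequality (8)
```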

Lemma 3

([32]) Let \( f_{1} ,f_{2} , \ldots ,f_{N} :\;R^{m} \mapsto R \) have positive values in an open subset D of \( R^{m} \) . Then, the reciprocally convex combination of \( f_{i} \) over D satisfies

$$ \mathop {\hbox{min} }\limits_{{\{ \alpha_{i} |\alpha_{i} > 0,\sum\limits_{i} {\alpha_{i} = 1} \} }} \sum\limits_{i} {\frac{1}{{\alpha_{i} }}f_{i} (t)} = \sum\limits_{i} {f_{i} (t)} + \mathop {\hbox{max} }\limits_{{g_{i,j} (t)}} \sum\limits_{i \ne j} {g_{i,j} (t)} $$
(9)

subject to

$$ \left\{ {g_{i,j} :\;R^{m} \mapsto R,g_{j,i} (t) \triangleq g_{i,j} (t),\left[ {\begin{array}{*{20}c} {f_{i} (t)} & {g_{i,j} (t)} \\ {g_{j,i} (t)} & {f_{j} (t)} \\ \end{array} } \right] \ge 0} \right\}. $$
(10)
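The two-term special case of Lemma 3 is the one invoked later in the proofs; it says that if \( f_{1} ,f_{2} > 0 \) and the 2 × 2 block condition holds with a scalar g, then \( f_{1} /\alpha + f_{2} /(1 - \alpha ) \ge f_{1} + f_{2} + 2g \) for all \( \alpha \in (0,1) \). A numerical illustration with arbitrary positive values:

```python
import numpy as np

# Two-term case of Lemma 3: if f1, f2 > 0 and [[f1, g], [g, f2]] >= 0, then
# f1/a + f2/(1 - a) >= f1 + f2 + 2 g for every a in (0, 1).
f1, f2 = 1.3, 0.7
g = 0.5 * np.sqrt(f1 * f2)                       # makes the 2 x 2 block PSD
assert np.all(np.linalg.eigvalsh(np.array([[f1, g], [g, f2]])) >= 0)

a = np.linspace(0.01, 0.99, 99)
lhs = f1 / a + f2 / (1.0 - a)                    # reciprocally convex combination
assert np.all(lhs >= f1 + f2 + 2.0 * g - 1e-12)  # lower bound from Lemma 3
```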

Since the outer-coupling matrix G is not assumed to be symmetric, the eigenvalues \( \lambda_{k} \;(2 \le k \le v) \) may be nonzero complex numbers and \( \eta_{k} (t) \) should be treated as complex vectors. To avoid complex arithmetic, similar to [22, 27], let \( \lambda_{k} = \alpha_{k} + j\beta_{k} \) and let \( \eta_{k} = u_{k} + jv_{k} \) be the solution of system (4), where j is the imaginary unit. Here, \( \alpha_{k} \) and \( \beta_{k} \) are the real and imaginary parts of the complex number \( \lambda_{k} \), and \( u_{k} \) and \( v_{k} \) are the real and imaginary parts of the complex vector \( \eta_{k} \), respectively. Then one has

$$ \dot{u}_{k} (t) = (J_{1} (t) + c_{1} \alpha_{k} \varGamma_{1} )u_{k} (t) - c_{1} \beta_{k} \varGamma_{1} v_{k} (t) + (J_{2} (t) + c_{2} \alpha_{k} \varGamma_{2} )u_{k} (t - \tau (t)) - c_{2} \beta_{k} \varGamma_{2} v_{k} (t - \tau (t)), $$
(11)
$$ \dot{v}_{k} (t) = (J_{1} (t) + c_{1} \alpha_{k} \varGamma_{1} )v_{k} (t) + c_{1} \beta_{k} \varGamma_{1} u_{k} (t) + (J_{2} (t) + c_{2} \alpha_{k} \varGamma_{2} )v_{k} (t - \tau (t)) + c_{2} \beta_{k} \varGamma_{2} u_{k} (t - \tau (t)). $$
(12)

Letting

$$ w_{k} (t) = \left[ {\begin{array}{*{20}c} {u_{k} (t)} \\ {v_{k} (t)} \\ \end{array} } \right], $$
$$ \varLambda_{k} (t) = \left[ {\begin{array}{*{20}c} {J_{2} (t) + c_{2} \alpha_{k} \varGamma_{2} } & { - c_{2} \beta_{k} \varGamma_{2} } \\ {c_{2} \beta_{k} \varGamma_{2} } & {J_{2} (t) + c_{2} \alpha_{k} \varGamma_{2} } \\ \end{array} } \right], $$

and

$$ \bar{J}_{k} (t) = \left[ {\begin{array}{*{20}c} {J_{1} (t) + c_{1} \alpha_{k} \varGamma_{1} } & { - c_{1} \beta_{k} \varGamma_{1} } \\ {c_{1} \beta_{k} \varGamma_{1} } & {J_{1} (t) + c_{1} \alpha_{k} \varGamma_{1} } \\ \end{array} } \right], $$

it follows from (11) and (12) that

$$ \dot{w}_{k} (t) = \bar{J}_{k} (t)w_{k} (t) + \varLambda_{k} (t)w_{k} (t - \tau (t)),\quad k = 2, \ldots ,v. $$
(13)

Clearly, \( w_{k} (t) \) is a real vector, i.e., \( w_{k} (t) \in R^{2n} \). The synchronization problem of the delayed complex network (1) has thus been converted into the asymptotic stability problem of the zero solution of system (13). Therefore, our attention will focus on deriving delay-dependent stability criteria for system (13) with an interval time-varying delay.
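To make the construction of system (13) concrete, the following minimal sketch assembles \( \bar{J}_{k} \) and \( \varLambda_{k} \) from the Jacobians, the inner-coupling matrices and a complex eigenvalue \( \lambda_{k} \), and verifies the real embedding against the corresponding complex multiplication. The constant Jacobians, coupling strengths and eigenvalue used here are illustrative assumptions only, not values from the paper.

```python
import numpy as np

def real_embedding(M, c, lam, Gamma):
    """Real 2n x 2n matrix acting on stacked real/imaginary parts [u; v],
    equivalent to multiplication by the complex matrix M + c*lam*Gamma."""
    a, b = lam.real, lam.imag
    return np.block([[M + c * a * Gamma, -c * b * Gamma],
                     [c * b * Gamma,      M + c * a * Gamma]])

# Illustrative constant Jacobians, couplings and eigenvalue (not from the paper).
J1 = np.array([[-1.0, 0.0], [0.0, -2.0]])
J2 = np.array([[-0.5, 0.0], [0.1, -0.3]])
Gamma1 = Gamma2 = np.eye(2)
c1 = c2 = 0.5
lam_k = -1.5 + 0.8j

J_bar_k  = real_embedding(J1, c1, lam_k, Gamma1)   # multiplies w_k(t)
Lambda_k = real_embedding(J2, c2, lam_k, Gamma2)   # multiplies w_k(t - tau(t))

# Consistency check: the embedding reproduces complex multiplication.
u, v = np.array([1.0, -0.5]), np.array([0.3, 2.0])
complex_res = (J1 + c1 * lam_k * Gamma1) @ (u + 1j * v)
real_res = J_bar_k @ np.concatenate([u, v])
assert np.allclose(real_res[:2], complex_res.real)
assert np.allclose(real_res[2:], complex_res.imag)
```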

3 Main results

In this section, we propose several delay-dependent stability conditions for the delayed system (13), which guarantee the asymptotic synchronization of the considered network (1).

Theorem 1

For given scalars \( \tau_{2} > \tau_{1} > 0 \) and \( 0 < \alpha < 1 \), if there exist matrices \( P_{k} > 0 \), \( Q_{kj} > 0 \), \( R_{kj} > 0 \) and \( S_{i} \) \( (k = 2, \ldots ,v,\;i = 1,2,\;j = 1,2) \) with appropriate dimensions such that the following LMIs hold:

$$ \left[ {\begin{array}{*{20}c} {R_{k2} } & {S_{i} } \\ * & {R_{k2} } \\ \end{array} } \right] \ge 0,\quad i = 1,2, $$
(14)
$$ \varOmega_{ki} = \left[ {\begin{array}{*{20}c} {\varOmega_{11} } & {R_{k1} } & {\varOmega_{13} } & 0 \\ * & {\varOmega_{22} } & {\varOmega_{23} } & {S_{i} } \\ * & * & {\varOmega_{33} } & {\varOmega_{34} } \\ * & * & * & {\varOmega_{44} } \\ \end{array} } \right] < 0, $$
(15)

where

$$ \begin{aligned} \varOmega_{11} & = P_{k} \bar{J}_{k} (t) + \bar{J}_{k}^{\text{T}} (t)P_{k} + Q_{k1} + \bar{J}_{k}^{\text{T}} (t)\varTheta_{ki} \bar{J}_{k} (t) - R_{k1} , \\ \varOmega_{13} & = P_{k} \varLambda_{k} (t) + \bar{J}_{k}^{\text{T}} (t)\varTheta_{ki} \varLambda_{k} (t),\quad \varOmega_{22} = Q_{k2} - Q_{k1} - R_{k1} - R_{k2} , \\ \varOmega_{23} & = R_{k2} - S_{i} ,\quad \varOmega_{33} = \varLambda_{k}^{\text{T}} (t)\varTheta_{ki} \varLambda_{k} (t) - 2R_{k2} + S_{i} + S_{i}^{\text{T}} , \\ \varOmega_{34} & = R_{k2} - S_{i} ,\quad \varOmega_{44} = - Q_{k2} - R_{k2} ,\quad \tau_{\delta } = \tau_{1} + \alpha (\tau_{2} - \tau_{1} ), \\ \varTheta_{k1} & = \tau_{1}^{2} R_{k1} + \alpha^{2} (\tau_{2} - \tau_{1} )^{2} R_{k2} ,\quad \varTheta_{k2} = \tau_{\delta }^{2} R_{k1} + (1 - \alpha )^{2} (\tau_{2} - \tau_{1} )^{2} R_{k2} , \\ \end{aligned} $$

then the asymptotic synchronization of delayed complex dynamical network (1) is achieved.
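Although the paper solves the LMIs with the MATLAB LMI Toolbox, the structure of conditions (14)–(15) can also be illustrated with a small CVXPY sketch. The code below checks feasibility for a single mode k under the simplifying assumption that the Jacobians are time-invariant, so that \( \bar{J}_{k} \) and \( \varLambda_{k} \) are constant matrices; strict inequalities are approximated with a small margin \( \varepsilon \), and the function name, tolerance and solver choice are our own illustrative conventions. Theorem 1 requires feasibility for every \( k = 2, \ldots ,v \).

```python
import numpy as np
import cvxpy as cp

def theorem1_feasible(Jbar, Lam, tau1, tau2, alpha, eps=1e-6):
    """Feasibility test of LMIs (14)-(15) for a single mode k, assuming constant
    Jacobians so that Jbar (= J-bar_k) and Lam (= Lambda_k) are fixed 2n x 2n
    numerical matrices."""
    m = Jbar.shape[0]
    tdelta = tau1 + alpha * (tau2 - tau1)
    sym = lambda X: 0.5 * (X + X.T)          # symmetrize (exact in theory)

    P  = cp.Variable((m, m), symmetric=True)
    Q1 = cp.Variable((m, m), symmetric=True)
    Q2 = cp.Variable((m, m), symmetric=True)
    R1 = cp.Variable((m, m), symmetric=True)
    R2 = cp.Variable((m, m), symmetric=True)
    S  = [cp.Variable((m, m)), cp.Variable((m, m))]                         # S_1, S_2

    Theta = [tau1 ** 2 * R1 + (alpha * (tau2 - tau1)) ** 2 * R2,            # Theta_k1
             tdelta ** 2 * R1 + ((1 - alpha) * (tau2 - tau1)) ** 2 * R2]    # Theta_k2

    I = np.eye(m)
    cons = [P >> eps * I, Q1 >> eps * I, Q2 >> eps * I,
            R1 >> eps * I, R2 >> eps * I]
    Z = np.zeros((m, m))
    for i in range(2):
        Th, Si = Theta[i], S[i]
        cons.append(sym(cp.bmat([[R2, Si], [Si.T, R2]])) >> 0)              # LMI (14)
        O11 = P @ Jbar + Jbar.T @ P + Q1 + Jbar.T @ Th @ Jbar - R1
        O13 = P @ Lam + Jbar.T @ Th @ Lam
        O22 = Q2 - Q1 - R1 - R2
        O23 = R2 - Si
        O33 = Lam.T @ Th @ Lam - 2 * R2 + Si + Si.T
        O34 = R2 - Si
        O44 = -Q2 - R2
        Omega = cp.bmat([[O11,   R1,    O13,   Z   ],
                         [R1,    O22,   O23,   Si  ],
                         [O13.T, O23.T, O33,   O34 ],
                         [Z,     Si.T,  O34.T, O44 ]])
        cons.append(sym(Omega) << -eps * np.eye(4 * m))                     # LMI (15)

    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)                # any SDP-capable solver works here
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
```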

Proof

Divide the delay interval \( [\tau_{1} ,\;\tau_{2} ] \) into two segments, \( [\tau_{1} ,\;\tau_{\delta } ] \) and \( [\tau_{\delta } ,\;\tau_{2} ] \). It suffices to show that the conditions of Theorem 1 guarantee asymptotic stability of system (13) in each of the two cases \( \tau_{1} \le \tau (t) \le \tau_{\delta } \) and \( \tau_{\delta } \le \tau (t) \le \tau_{2} \).

Case 1

When \( \tau (t) \in [\tau_{1} ,\;\tau_{\delta } ] \), we consider the following Lyapunov–Krasovskii functional candidate

$$ \begin{aligned} V_{k1} (t) & = w_{k}^{\text{T}} (t)P_{k} w_{k} (t) + \int_{{t - \tau_{1} }}^{t} {w_{k}^{\text{T}} (s)Q_{k1} w_{k} (s){\text{d}}s} + \int_{{t - \tau_{\delta } }}^{{t - \tau_{1} }} {w_{k}^{\text{T}} (s)Q_{k2} w_{k} (s){\text{d}}s} \\ & \quad + \tau_{1} \int_{{ - \tau_{1} }}^{0} {\int_{t + \theta }^{t} {\dot{w}_{k}^{\text{T}} (s)R_{k1} \dot{w}_{k} (s){\text{d}}s{\text{d}}\theta } } + (\tau_{\delta } - \tau_{1} )\int_{{ - \tau_{\delta } }}^{{ - \tau_{1} }} {\int_{t + \theta }^{t} {\dot{w}_{k}^{\text{T}} (s)R_{k2} \dot{w}_{k} (s){\text{d}}s{\text{d}}\theta } } . \\ \end{aligned} $$
(16)

Taking the time derivative of \( V_{k1} (t) \) with respect to t along the trajectories of system (13) yields

$$ \begin{aligned} \dot{V}_{k1} (t) & = 2w_{k}^{\text{T}} (t)P_{k} \dot{w}_{k} (t) + w_{k}^{\text{T}} (t)Q_{k1} w_{k} (t) + w_{k}^{\text{T}} (t - \tau_{1})(Q_{k2} - Q_{k1})w_{k} (t - \tau_{1} ) \\ & \quad - w_{k}^{\text{T}} (t - \tau_{\delta } )Q_{k2} w_{k} (t - \tau_{\delta } ) + \dot{w}_{k}^{\text{T}} (t)(\tau_{1}^{2} R_{k1} + (\tau_{\delta } - \tau_{1} )^{2} R_{k2} )\dot{w}_{k} (t) \\ & \quad - \tau_{1} \int_{{t - \tau_{1} }}^{t} {\dot{w}_{k}^{\text{T}} (s)R_{k1} \dot{w}_{k} (s){\text{d}}s} - (\tau_{\delta } - \tau_{1} )\int_{{t - \tau_{\delta } }}^{{t - \tau_{1} }} {\dot{w}_{k}^{\text{T}} (s)R_{k2} \dot{w}_{k} (s){\text{d}}s} \\ & = 2w_{k}^{\text{T}} (t)P_{k} \left( {\bar{J}_{k} (t)w_{k} (t) + \varLambda_{k} (t)w_{k} (t - \tau (t))} \right) + w_{k}^{\text{T}} (t)Q_{k1} w_{k} (t) \\ & \quad + w_{k}^{\text{T}} (t - \tau_{1} )(Q_{k2} - Q_{k1} )w_{k} (t - \tau_{1} ) - w_{k}^{\text{T}} (t - \tau_{\delta } )Q_{k2} w_{k} (t - \tau_{\delta } ) \\ & \quad + \left( {\bar{J}_{k} (t)w_{k} (t) + \varLambda_{k} (t)w_{k} (t - \tau (t))} \right)^{\text{T}} \varTheta_{k1} \left( {\bar{J}_{k} (t)w_{k} (t) + \varLambda_{k} (t)w_{k} (t - \tau (t))} \right) \\ & \quad - \tau_{1} \int_{{t - \tau_{1} }}^{t} {\dot{w}_{k}^{\text{T}} (s)R_{k1} \dot{w}_{k} (s){\text{d}}s} - (\tau_{\delta } - \tau_{1} )\int_{{t - \tau_{\delta } }}^{{t - \tau_{1} }} {\dot{w}_{k}^{\text{T}} (s)R_{k2} \dot{w}_{k} (s){\text{d}}s.} \\ \end{aligned} $$
(17)

Using Lemma 2, one has

$$ - \tau_{1} \int_{{t - \tau_{1} }}^{t} {\dot{w}_{k}^{\text{T}} (s)R_{k1} \dot{w}_{k} (s){\text{d}}s} \le - \left[ {w_{k} (t) - w_{k} (t - \tau_{1} )} \right]^{\text{T}} R_{k1} \left[ {w_{k} (t) - w_{k} (t - \tau_{1} )} \right]. $$
(18)

According to Lemma 3, if (14) is satisfied, one can get

$$ \begin{aligned} - (\tau_{\delta } - \tau_{1} )\int_{{t - \tau_{\delta } }}^{{t - \tau_{1} }} {\dot{w}_{k}^{\text{T}} (s)R_{k2} \dot{w}_{k} (s){\text{d}}s} \hfill \\ \quad = - (\tau_{\delta } - \tau_{1} )\int_{t - \tau (t)}^{{t - \tau_{1} }} {\dot{w}_{k}^{\text{T}} (s)R_{k2} \dot{w}_{k} (s){\text{d}}s - (\tau_{\delta } - \tau_{1} )} \int_{{t - \tau_{\delta } }}^{t - \tau (t)} {\dot{w}_{k}^{\text{T}} (s)R_{k2} \dot{w}_{k} (s){\text{d}}s} \hfill \\ \quad \le - \frac{{\tau_{\delta } - \tau_{1} }}{{\tau (t) - \tau_{1} }}\int_{t - \tau (t)}^{{t - \tau_{1} }} {\dot{w}_{k}^{\text{T}} (s){\text{d}}s} R_{k2} \int_{t - \tau (t)}^{{t - \tau_{1} }} {\dot{w}_{k} (s){\text{d}}s} - \frac{{\tau_{\delta } - \tau_{1} }}{{\tau_{\delta } - \tau (t)}}\int_{{t - \tau_{\delta } }}^{t - \tau (t)} {\dot{w}_{k}^{\text{T}} (s){\text{d}}s} R_{k2} \int_{{t - \tau_{\delta } }}^{t - \tau (t)} {\dot{w}_{k} (s){\text{d}}s} \hfill \\ \quad \le - \left[ \begin{aligned} w_{k} (t - \tau_{1} ) - w_{k} (t - \tau (t)) \hfill \\ w_{k} (t - \tau (t)) - w_{k} (t - \tau_{\delta } ) \hfill \\ \end{aligned} \right]^{\text{T}} \left[ {\begin{array}{*{20}c} {R_{k2} } & {S_{i} } \\ * & {R_{k2} } \\ \end{array} } \right]\left[ \begin{aligned} w_{k} (t - \tau_{1} ) - w_{k} (t - \tau (t)) \hfill \\ w_{k} (t - \tau (t)) - w_{k} (t - \tau_{\delta } ) \hfill \\ \end{aligned} \right]. \hfill \\ \end{aligned} $$
(19)

From (17) to (19), one can obtain

$$ \dot{V}_{k1} (t) \le \chi_{k1}^{\text{T}} (t)\varOmega_{k1} \chi_{k1} (t), $$
(20)

where \( \chi_{k1} (t) = (w_{k}^{\text{T}} (t),w_{k}^{\text{T}} (t - \tau_{1} ),w_{k}^{\text{T}} (t - \tau (t)),w_{k}^{\text{T}} (t - \tau_{\delta } ))^{\text{T}} \). Therefore, if the LMI in (15) with \( i = 1 \) holds, then \( \dot{V}_{k1} (t) < 0 \), which implies that system (13) is asymptotically stable for \( \tau (t) \in [\tau_{1} ,\;\tau_{\delta } ] \).

Case 2

When \( \tau (t) \in [\tau_{\delta } ,\;\tau_{2} ] \), we consider the following Lyapunov–Krasovskii functional candidate

$$ \begin{aligned} V_{k2} (t) & = w_{k}^{\text{T}} (t)P_{k} w_{k} (t) + \int_{{t - \tau_{\delta } }}^{t} {w_{k}^{\text{T}} (s)Q_{k1} w_{k} (s){\text{d}}s} + \int_{{t - \tau_{2} }}^{{t - \tau_{\delta } }} {w_{k}^{\text{T}} (s)Q_{k2} w_{k} (s){\text{d}}s} \\ & \quad + \tau_{\delta } \int_{{ - \tau_{\delta } }}^{0} {\int_{t + \theta }^{t} {\dot{w}_{k}^{\text{T}} (s)R_{k1} \dot{w}_{k} (s){\text{d}}s{\text{d}}\theta } } + (\tau_{2} - \tau_{\delta } )\int_{{ - \tau_{2} }}^{{ - \tau_{\delta } }} {\int_{t + \theta }^{t} {\dot{w}_{k}^{\text{T}} (s)R_{k2} \dot{w}_{k} (s){\text{d}}s{\text{d}}\theta } } . \\ \end{aligned} $$
(21)

Defining \( \chi_{k2} (t) = (w_{k}^{\text{T}} (t),w_{k}^{\text{T}} (t - \tau_{\delta } ),w_{k}^{\text{T}} (t - \tau (t)),w_{k}^{\text{T}} (t - \tau_{2} ))^{\text{T}} \) and using a proof process similar to that of Case 1, if (15) with \( i = 2 \) is satisfied, then system (13) is asymptotically stable according to Lyapunov stability theory. By Lemma 1, the asymptotic stability of the zero solution of system (13) guarantees the synchronization of dynamical network (1). Thus, the asymptotic synchronization of network (1) is achieved. This completes the proof.

Remark 2

In [24, 25, 27], the delay-decomposition approach divided the delay interval into two equidistant subintervals, and a single LKF was chosen to obtain less conservative results. In our study, by contrast, the delay interval \( [\tau_{1} ,\tau_{2} ] \) is first partitioned into two variable subintervals, \( [\tau_{1} ,\;\tau_{\delta } ] \) and \( [\tau_{\delta } ,\;\tau_{2} ] \), where \( \tau_{\delta } = \tau_{1} + \alpha (\tau_{2} - \tau_{1} ) \) and \( 0 < \alpha < 1 \) is a tunable parameter, and a slightly different LKF is constructed on each subinterval. This treatment differs from [24, 25, 27]. Although both approaches are effective in reducing conservatism, the conditions derived from the former become more complicated and their computational cost grows as the number of delay-decomposition segments increases, whereas the latter yields simple conditions of slightly different forms with low computational complexity. It is also worth mentioning that, as the tunable parameter \( \alpha \) changes, the calculated maximum allowable upper bound on \( \tau_{2} \) may change as well. The merit and reduced conservatism of our approach will be demonstrated by numerical examples in the next section.

In Theorem 1, it has been supposed that \( \tau_{1} > 0 \). When \( \tau_{1} = 0 \), the following corollary is easily established following the same line as in the proof of Theorem 1.

Corollary 1

For given scalars \( \tau_{2} > 0 \) and \( 0 < \alpha < 1 \), if there exist matrices \( P_{k} > 0 \), \( Q_{kj} > 0 \), \( R_{kj} > 0 \) and \( S_{i} \;(k = 2, \ldots ,v,\;i = 1,2,\;j = 1,2) \) with appropriate dimensions such that (14) and the following LMIs hold:

$$ \hat{\varOmega }_{k1} = \left[ {\begin{array}{*{20}c} {\varSigma_{11} } & {\varSigma_{12} } & {S_{1} } \\ * & {\varSigma_{22} } & {R_{k2} - S_{1} } \\ * & * & { - Q_{k2} - R_{k2} } \\ \end{array} } \right] < 0, $$
(22)
$$ \hat{\varOmega }_{k2} = \left[ {\begin{array}{*{20}c} {\hat{\varOmega }_{11} } & {R_{k1} } & {\hat{\varOmega }_{13} } & 0 \\ * & {\varOmega_{22} } & {\varOmega_{23} } & {S_{i} } \\ * & * & {\hat{\varOmega }_{33} } & {\varOmega_{34} } \\ * & * & * & {\varOmega_{44} } \\ \end{array} } \right] < 0, $$
(23)

where

$$ \begin{aligned} \varSigma_{11} & = P_{k} \bar{J}_{k} (t) + \bar{J}_{k}^{\text{T}} (t)P_{k} + \bar{J}_{k}^{\text{T}} (t)\hat{\varTheta }_{k1} \bar{J}_{k} (t) + Q_{k2} - R_{k2} , \\ \varSigma_{12} & = P_{k} \varLambda_{k} (t) + \bar{J}_{k}^{\text{T}} (t)\hat{\varTheta }_{k1} \varLambda_{k} (t) + R_{k2} - S_{1} ,\quad \varSigma_{22} = \varLambda_{k}^{\text{T}} (t)\hat{\varTheta }_{k1} \varLambda_{k} (t) - 2R_{k2} + S_{1} + S_{1}^{\text{T}} , \\ \hat{\varOmega }_{11} & = P_{k} \bar{J}_{k} (t) + \bar{J}_{k}^{\text{T}} (t)P_{k} + Q_{k1} + \bar{J}_{k}^{\text{T}} (t)\hat{\varTheta }_{k2} \bar{J}_{k} (t) - R_{k1} ,\quad \hat{\varOmega }_{13} = P_{k} \varLambda_{k} (t) + \bar{J}_{k}^{\text{T}} (t)\hat{\varTheta }_{k2} \varLambda_{k} (t), \\ \hat{\varOmega }_{33} & = \varLambda_{k}^{\text{T}} (t)\hat{\varTheta }_{k2} \varLambda_{k} (t) - 2R_{k2} + S_{i} + S_{i}^{\text{T}} ,\quad \hat{\varTheta }_{k1} = \alpha^{2} \tau_{2}^{2} R_{k2} ,\quad \hat{\varTheta }_{k2} = (1 - \alpha )^{2} \tau_{2}^{2} (R_{k1} + R_{k2} ), \\ \end{aligned} $$

and the other terms have the same forms as those in Theorem 1, then the asymptotic synchronization of delayed complex dynamical network (1) is achieved.

Proof

Setting \( \tau_{1} = 0 \), for \( \tau (t) \in [0,\;\tau_{\delta } ] \), the integral terms \( \int_{{t - \tau_{1} }}^{t} {w_{k}^{\text{T}} (s)Q_{k1} w_{k} (s){\text{d}}s} \) and \( \tau_{1} \int_{{ - \tau_{1} }}^{0} {\int_{t + \theta }^{t} {\dot{w}_{k}^{\text{T}} (s)R_{k1} \dot{w}_{k} (s){\text{d}}s{\text{d}}\theta } } \) vanish from the Lyapunov–Krasovskii functional, so all results still hold after removing the terms involving \( Q_{k1} \) and \( R_{k1} \). When \( \tau (t) \in [\tau_{\delta } ,\;\tau_{2} ] \), the proof proceeds in a similar way to that of Theorem 1 and is omitted here.

In addition, if the outer-coupling matrix G is symmetric, i.e., \( G = G^{\text{T}} \), we can easily obtain the following corollaries according to Theorem 1 and Corollary 1.

Corollary 2

Suppose \( G = G^{\text{T}} \) , for given scalars \( \tau_{2} > \tau_{1} > 0 \), \( 0 < \alpha < 1 \), if there exist matrices \( P_{k} > 0 \), \( Q_{kj} > 0 \), \( R_{kj} > 0 \) and \( S_{i} (k = 2, \ldots ,v,i = 1,2,j = 1,2) \) with appropriate dimensions such that (14) and the following LMIs hold

$$ \tilde{\varOmega }_{ki} = \left[ {\begin{array}{*{20}c} {\tilde{\varOmega }_{11} } & {R_{k1} } & {\tilde{\varOmega }_{13} } & 0 \\ * & {\varOmega_{22} } & {\varOmega_{23} } & {S_{i} } \\ * & * & {\tilde{\varOmega }_{33} } & {\varOmega_{34} } \\ * & * & * & {\varOmega_{44} } \\ \end{array} } \right] < 0, $$
(24)

where

$$ \begin{aligned} \tilde{\varOmega }_{11} & = P_{k} (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} ) + (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} )^{\text{T}} P_{k} + Q_{k1} \\ & \quad + (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} )^{\text{T}} \varTheta_{ki} (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} ) - R_{k1} , \\ \tilde{\varOmega }_{13} & = P_{k} (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} ) + (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} )^{\text{T}} \varTheta_{ki} (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} ), \\ \tilde{\varOmega }_{33} & = (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} )^{\text{T}} \varTheta_{ki} (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} ) - 2R_{k2} + S_{i} + S_{i}^{\text{T}} , \\ \end{aligned} $$

and the other terms have the same forms as those in Theorem 1, then the asymptotic synchronization of delayed complex dynamical network (1) is achieved.

Corollary 3

Suppose \( G = G^{\text{T}} \). For given scalars \( \tau_{2} > 0 \) and \( 0 < \alpha < 1 \), if there exist matrices \( P_{k} > 0 \), \( Q_{kj} > 0 \), \( R_{kj} > 0 \) and \( S_{i} \;(k = 2, \ldots ,v,\;i = 1,2,\;j = 1,2) \) with appropriate dimensions such that (14) and the following LMIs hold:

$$ \bar{\varOmega }_{k1} = \left[ {\begin{array}{*{20}c} {\bar{\varSigma }_{11} } & {\bar{\varSigma }_{12} } & {S_{1} } \\ * & {\bar{\varSigma }_{22} } & {R_{k2} - S_{1} } \\ * & * & { - Q_{k2} - R_{k2} } \\ \end{array} } \right] < 0, $$
(25)
$$ \bar{\varOmega }_{k2} = \left[ {\begin{array}{*{20}c} {\bar{\varOmega }_{11} } & {R_{k1} } & {\bar{\varOmega }_{13} } & 0 \\ * & {\varOmega_{22} } & {\varOmega_{23} } & {S_{i} } \\ * & * & {\bar{\varOmega }_{33} } & {\varOmega_{34} } \\ * & * & * & {\varOmega_{44} } \\ \end{array} } \right] < 0, $$
(26)

where

$$ \begin{aligned} \bar{\varSigma }_{11} & = P_{k} (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} ) + (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} )^{\text{T}} P_{k} + Q_{k2} - R_{k2} \\ & \quad + (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} )^{\text{T}} \hat{\varTheta }_{k1} (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} ), \\ \bar{\varSigma }_{12} & = P_{k} (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} ) + (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} )^{\text{T}} \hat{\varTheta }_{k1} (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} ) + R_{k2} - S_{1} , \\ \bar{\varSigma }_{22} & = (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} )^{\text{T}} \hat{\varTheta }_{k1} (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} ) - 2R_{k2} + S_{1} + S_{1}^{\text{T}} , \\ \end{aligned} $$
$$ \begin{aligned} \bar{\varOmega }_{11} & = P_{k} (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} ) + (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} )^{\text{T}} P_{k} + Q_{k1} - R_{k1} \\ & \quad + (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} )^{\text{T}} \hat{\varTheta }_{ki} (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} ), \\ \bar{\varOmega }_{13} & = P_{k} (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} ) + (J_{1} (t) + c_{1} \lambda_{k} \varGamma_{1} )^{\text{T}} \hat{\varTheta }_{ki} (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} ), \\ \bar{\varOmega }_{33} & = (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} )^{\text{T}} \hat{\varTheta }_{ki} (J_{2} (t) + c_{2} \lambda_{k} \varGamma_{2} ) - 2R_{k2} + S_{i} + S_{i}^{\text{T}} , \\ \end{aligned} $$

and the other terms have the same forms as those in Theorem 1 and Corollary 1, then the asymptotic synchronization of delayed complex dynamical network (1) is achieved.

Remark 3

Since actual networks may have a great number of nodes, it is important to take the computational burden into account when establishing synchronization conditions; otherwise, the conditions will be difficult to use in practical applications. In the proof of Theorem 1, when the delay varies within each variable subinterval, the reciprocally convex approach, which achieves the same performance as approaches based on the integral inequality lemma but with far fewer decision variables, is adopted to handle the cross terms \( - (\tau_{\delta } - \tau_{1} )\int_{{t - \tau_{\delta } }}^{{t - \tau_{1} }} {\dot{w}_{k}^{\text{T}} (s)R_{k2} \dot{w}_{k} (s){\text{d}}s} \) and \( - (\tau_{2} - \tau_{\delta } )\int_{{t - \tau_{2} }}^{{t - \tau_{\delta } }} {\dot{w}_{k}^{\text{T}} (s)R_{k2} \dot{w}_{k} (s){\text{d}}s} \). Compared with the free-weighting matrix method [24, 25], the number of decision variables in Theorem 1 is dramatically reduced. Table 1 compares the numbers of decision variables involved in Corollary 2 with some recently reported results in [23–25, 27], which shows that our method has lower computational complexity.

Table 1 No. of decision variables by different methods

Remark 4

In this paper, the variable delay-partitioning method can lead to a reduction in conservatism if a suitable dividing point, determined by \( \alpha \), is chosen. To seek an appropriate \( \alpha \) that yields the maximum upper bound on \( \tau_{2} \) for a given lower bound \( \tau_{1} \) of the time delay, we put forward a simple algorithm as follows.

Algorithm 1 (maximizing \( \tau_{2} \) for a given \( \tau_{1} \))

Step 1: For given \( \tau_{1} \), choose an upper bound on \( \tau_{2} \) from the existing literature and select it as the initial value \( \tau_{2} (0) \) of \( \tau_{2} \).

Step 2: Set appropriate step lengths \( \tau_{{2,{\text{step}}}} \) and \( \alpha_{\text{step}} \) for \( \tau_{2} \) and \( \alpha \), respectively. Set a counter \( k = 1 \). Let \( \tau_{2} = \tau_{2} (0) + \tau_{{2,{\text{step}}}} \) and the initial value \( \alpha_{0} = \alpha_{\text{step}} \).

Step 3: Let \( \alpha = k\alpha_{\text{step}} \). If the LMIs in (14) and (15) are feasible, go to step 4; otherwise, go to step 5.

Step 4: Let \( \tau_{2} (0) = \tau_{2} \), \( \alpha_{0} = \alpha_{\text{step}} \), \( k = 1 \), and \( \tau_{2} = \tau_{2} (0) + \tau_{{2,{\text{step}}}} \), go to step 3.

Step 5: Let \( k = k + 1 \), if \( k\alpha_{\text{step}} < 1 \), then go to step 3; otherwise, stop.
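A compact sketch of Algorithm 1 is given below; the function and parameter names are illustrative, and `feasible(tau1, tau2, alpha)` stands for any routine that tests the LMIs (14)–(15) for all modes k (for instance, by wrapping the `theorem1_feasible` sketch given after Theorem 1).

```python
def max_tau2(tau1, tau2_init, feasible, tau2_step=0.01, alpha_step=0.05):
    """Grid search of Algorithm 1: starting from a tau2 known to work (e.g. a
    bound reported in the literature), enlarge tau2 as long as some alpha in
    (0, 1) keeps the LMIs of Theorem 1 feasible."""
    best_tau2, best_alpha = tau2_init, None
    tau2 = tau2_init + tau2_step
    while True:
        found_alpha = None
        alpha = alpha_step
        while alpha < 1.0:                       # steps 3 and 5: sweep alpha
            if feasible(tau1, tau2, alpha):
                found_alpha = alpha
                break
            alpha += alpha_step
        if found_alpha is None:                  # no alpha works: stop (step 5)
            return best_tau2, best_alpha
        best_tau2, best_alpha = tau2, found_alpha
        tau2 += tau2_step                        # step 4: accept and enlarge tau2
```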

4 Numerical examples

In this section, four numerical examples are given to demonstrate the effectiveness and reduced conservatism of the proposed method.

Example 1

([25]) Consider a 5-node complex dynamical network in which each node is a simple three-dimensional linear delayed system

$$ \left\{ \begin{array}{l} \dot{x}_{1} (t) = - x_{1} (t) - x_{1} (t - \tau (t)) \hfill \\ \dot{x}_{2} (t) = - 2x_{2} (t) + x_{1} (t - \tau (t)) - x_{2} (t - \tau (t)) \hfill \\ \dot{x}_{3} (t) = - 3x_{3} (t) - x_{3} (t - \tau (t)) \hfill \\ \end{array} \right. $$

which is asymptotically stable at the equilibrium point \( s(t) = 0 \), and its Jacobian matrices are

$$ J_{1} (t) = \left[ {\begin{array}{*{20}c} { - 1} & 0 & 0 \\ 0 & { - 2} & 0 \\ 0 & 0 & { - 3} \\ \end{array} } \right],\quad J_{2} (t) = \left[ {\begin{array}{*{20}c} { - 1} & 0 & 0 \\ 1 & { - 1} & 0 \\ 0 & 0 & { - 1} \\ \end{array} } \right]. $$

For simplicity, we suppose that the coupling strength is \( c_{1} = c_{2} = c \), the inner-coupling matrix is \( \varGamma_{1} = \varGamma_{2} = I_{3} \), and the outer-coupling matrix is

$$ G = \left[ {\begin{array}{*{20}c} { - 2} & 1 & 0 & 0 & 1 \\ 1 & { - 3} & 1 & 1 & 0 \\ 0 & 1 & { - 2} & 1 & 0 \\ 0 & 1 & 1 & { - 3} & 1 \\ 1 & 0 & 0 & 1 & { - 2} \\ \end{array} } \right]. $$

By simple calculation, the nonzero eigenvalues of G are \( \lambda_{1} = - 1.382 \), \( \lambda_{2} = - 2.382 \), \( \lambda_{3} = - 3.618 \), and \( \lambda_{4} = - 4.618 \).

For comparison with the results in [25], Table 2 lists the corresponding maximum upper bounds of \( \tau_{2} \) for various \( \tau_{1} \) and \( c \). From Table 2, it is clear that our results are significantly better than those in [25]; that is, much larger upper bounds of \( \tau_{2} \) can be obtained in this paper. Moreover, the calculated maximum allowable upper bound on \( \tau_{2} \) may change as the tunable parameter \( \alpha \) changes, so a larger upper bound of \( \tau_{2} \) can be acquired by adjusting \( \alpha \). Figure 1 shows the state response of the dynamical network for \( c = 0.5 \) and \( \tau (t) = 0.3 + 0.532\left| {\cos (t)} \right| \) under initial conditions chosen randomly in \( [ - 2,2] \). Clearly, the synchronization of the network is achieved under the above conditions, which verifies the effectiveness of the proposed method.
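For readers who wish to reproduce a figure of this type, the following rough simulation sketch integrates the network of Example 1 with a fixed-step Euler scheme; the step size, history handling and random initial conditions are our own illustrative choices and are not specified in the paper.

```python
import numpy as np

def f(x, xd):
    # Node dynamics of Example 1; x = x_i(t), xd = x_i(t - tau(t)).
    return np.array([-x[0] - xd[0],
                     -2.0 * x[1] + xd[0] - xd[1],
                     -3.0 * x[2] - xd[2]])

G = np.array([[-2, 1, 0, 0, 1],
              [ 1,-3, 1, 1, 0],
              [ 0, 1,-2, 1, 0],
              [ 0, 1, 1,-3, 1],
              [ 1, 0, 0, 1,-2]], dtype=float)
c, dt, T = 0.5, 1e-3, 20.0
tau = lambda t: 0.3 + 0.532 * abs(np.cos(t))

steps = int(T / dt)
X = np.zeros((steps + 1, 5, 3))
X[0] = np.random.default_rng(0).uniform(-2, 2, size=(5, 3))   # random initial states

for k in range(steps):
    t = k * dt
    lag = min(k, int(round(tau(t) / dt)))     # constant history before t = 0
    Xd = X[k - lag]                           # delayed node states x_j(t - tau(t))
    local = np.array([f(X[k, i], Xd[i]) for i in range(5)])
    coupling = c * (G @ X[k]) + c * (G @ Xd)  # Gamma_1 = Gamma_2 = I_3, c_1 = c_2 = c
    X[k + 1] = X[k] + dt * (local + coupling)
# All trajectories X[:, i, :] should approach zero, as in Fig. 1.
```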

Table 2 Maximum allowable τ2 for different τ1 and c for Example 1

Fig. 1 State response of the network in Example 1

Example 2

([24, 27]) Consider a lower-dimensional dynamical network consisting of 5 nodes, in which each node is a simple three-dimensional linear system

$$ \left\{ \begin{array}{l} \dot{x}_{1} (t) = - x_{1} (t) \hfill \\ \dot{x}_{2} (t) = - 2x_{2} (t) \hfill \\ \dot{x}_{3} (t) = - 3x_{3} (t) \hfill \\ \end{array} \right. $$

which is asymptotically stable at the equilibrium point \( s(t) = 0 \), and its Jacobian matrices are

$$ J_{1} (t) = \left[ {\begin{array}{*{20}c} { - 1} & 0 & 0 \\ 0 & { - 2} & 0 \\ 0 & 0 & { - 3} \\ \end{array} } \right],\quad J_{2} (t) = 0. $$

Assume that the constant inner-coupling matrix is \( \varGamma_{1} = 0 \), the time-delayed inner-coupling matrix is \( \varGamma_{2} = I_{3} \), and the outer-coupling matrix is the same as that in Example 1.

The purpose of this example is to calculate the maximum allowable \( \tau_{2} \) such that the considered network model (1) achieves asymptotic synchronization for given \( \tau_{1} \) and \( c \). The comparison between the results obtained in this paper and those in [23, 24, 27] is listed in Table 3. It is clear that our results are less conservative than those in [23, 24, 27]. Furthermore, a larger upper bound of \( \tau_{2} \) can be achieved by adjusting the tunable parameter \( \alpha \). Figure 2 depicts the state response of the dynamical network for \( c = 0.5 \) and \( \tau (t) = 0.5 + 1.035\left| {\sin t} \right| \) under initial conditions chosen randomly in \( [ - 2,2] \). It shows that all the states converge to zero under the above conditions, which implies that the synchronization of network (1) is obtained.

Table 3 Maximum allowable τ2 for various τ1 and c for Example 2

Fig. 2 State response of the network in Example 2

Example 3

([26]) Consider a lower-dimensional dynamical network with five nodes, in which each node is a simple two-dimensional linear system

$$ \left\{ \begin{aligned} \dot{x}_{1} (t) = - 4x_{1} (t) \hfill \\ \dot{x}_{2} (t) = - 5x_{2} (t) \hfill \\ \end{aligned} \right. $$

which is asymptotically stable at the equilibrium point \( s(t) = 0 \), and its Jacobian matrices are

$$ J_{1} (t) = \left[ {\begin{array}{*{20}c} { - 4} & 0 \\ 0 & { - 5} \\ \end{array} } \right],\quad J_{2} (t) = \left[ {\begin{array}{*{20}c} 0 & 0 \\ 0 & 0 \\ \end{array} } \right]. $$

We assume that the constant inner-coupling matrix is \( \varGamma_{1} = 0 \), the time-delayed inner-coupling matrix is \( \varGamma_{2} = I_{2} \), and the outer-coupling matrix is

$$ G = \left[ {\begin{array}{*{20}c} { - 5} & 3 & 1 & 1 & 0 \\ 0 & { - 4} & 3 & 1 & 0 \\ 1 & 0 & { - 1} & 0 & 0 \\ 0 & 1 & 0 & { - 2} & 1 \\ 1 & 0 & 0 & 2 & { - 3} \\ \end{array} } \right]. $$

This outer-coupling matrix is asymmetric. The nonzero eigenvalues of G are \( \lambda_{1} = - 1.0756 \), \( \lambda_{2} = - 4.8 + 1.2118j \), \( \lambda_{3} = - 4.8 - 1.2118j \), and \( \lambda_{4} = - 4.3243 \).

For different lower bounds \( \tau_{1} \) and coupling strengths \( c \), the corresponding maximum upper bounds of \( \tau_{2} \) obtained by the method in this paper and by that in [26] are listed in Table 4. Table 4 shows that the proposed method leads to less conservative results. Moreover, a different upper bound of \( \tau_{2} \) can be obtained as the tunable parameter \( \alpha \) changes. Figure 3 depicts the state response of the dynamical network for \( c = 0.6 \) and \( \tau (t) = 0.3 + 0.9\left| {\sin (t)} \right| \) under initial conditions chosen randomly in \( [ - 2,2] \). As seen in Fig. 3, the states of network (1) are asymptotically stable at the zero equilibrium point under the above conditions. The numerical simulation results show the validity of our theoretical analysis.

Table 4 Maximum allowable τ2 for different τ1 and c for Example 3

Fig. 3 State response of the network in Example 3

Example 4

Consider a higher-dimensional network with 50 nodes, where each node is the following delayed system

$$ \left\{ \begin{aligned} \dot{x}_{1} (t) = x_{2} (t) - x_{1} (t - \tau (t)) \hfill \\ \dot{x}_{2} (t) = - x_{1} (t) - x_{2} (t)(1 + x_{2} (t))^{2} + 0.5x_{1} (t - \tau (t)) - 0.1x_{2} (t - \tau (t)) \hfill \\ \end{aligned} \right. $$

which is asymptotically stable at \( s(t) = 0 \) and \( s(t - \tau (t)) = 0 \), and its Jacobian matrices are

$$ J_{1} (t) = \left[ {\begin{array}{*{20}c} 0 & 1 \\ { - 1} & { - 1} \\ \end{array} } \right],\quad J_{2} (t) = \left[ {\begin{array}{*{20}c} { - 1} & 0 \\ {0.5} & { - 0.1} \\ \end{array} } \right] $$

Assume that the coupling strength is \( c_{1} = c_{2} = c \), the inner-coupling matrix is \( \varGamma_{1} = \varGamma_{2} = I_{2} \), and the outer-coupling matrix is defined as

$$ G = \left[ {\begin{array}{*{20}c} { - 1} & 1 & 0 & 0 & \cdots & 0 \\ 1 & { - 1} & 0 & 0 & \cdots & 0 \\ 0 & 0 & { - 1} & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \cdots & \vdots \\ 0 & 0 & 0 & 0 & { - 1} & 1 \\ 0 & 0 & 0 & 0 & 1 & { - 1} \\ \end{array} } \right]_{50 \times 50} . $$

The nonzero eigenvalues of G are \( \lambda_{i} = - 2\;(i = 1, \ldots ,25) \). For different values of the coupling strength c and lower bound \( \tau_{1} \), the maximum delay bounds \( \tau_{2} \) that guarantee the asymptotic stability of the synchronized state are calculated by Theorem 1 and listed in Table 5. Table 5 indicates that the tunable parameter \( \alpha \) is useful in reducing conservatism. Figure 4 shows the state response of the dynamical network for \( c = 0.5 \) and \( \tau (t) = 0.5 + 0.52\left| {\cos t} \right| \) under initial conditions chosen randomly in \( [ - 5,5] \). As shown in Fig. 4, the trajectories of the states converge to zero and synchronization is achieved under the above conditions.
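The block structure of this G makes its spectrum easy to verify numerically; the following short sketch (our own illustration) builds the 50 × 50 matrix from 25 identical 2 × 2 blocks and confirms that its only nonzero eigenvalue is −2 with multiplicity 25.

```python
import numpy as np

# The 50 x 50 coupling matrix of Example 4 is block diagonal with 25 identical
# 2 x 2 blocks, so its nonzero eigenvalue is -2 with multiplicity 25.
block = np.array([[-1.0, 1.0], [1.0, -1.0]])
G50 = np.kron(np.eye(25), block)
eigs = np.linalg.eigvalsh(G50)                 # G50 is symmetric
assert np.isclose(eigs.min(), -2.0) and np.isclose(eigs.max(), 0.0)
assert np.sum(np.isclose(eigs, -2.0)) == 25    # lambda_i = -2, i = 1, ..., 25
```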

Table 5 Maximum allowable τ2 for different τ1 and c for Example 4

Fig. 4 State response of the network in Example 4

5 Conclusion

This paper has addressed the synchronization stability problem for a general complex dynamical network with an interval time-varying coupling delay and a delay in the dynamical nodes. Based on the variable delay-partitioning approach, both the information of the variable subinterval delay and the lower and upper bounds of the delay are taken fully into account. By choosing different Lyapunov–Krasovskii functionals for the two subintervals and using the reciprocally convex approach, some improved delay-dependent synchronization stability conditions are proposed in terms of a set of linear matrix inequalities. Numerical examples show the validity of the theoretical results.