
1 Introduction

Since the master–slave (drive–response) concept was proposed by Pecora and Carroll in their pioneering work [1], researchers have devoted considerable time and effort to the master–slave synchronization of chaotic neural networks with time delays. Time delays are unavoidable when neurons transmit signals in real applications. Moreover, the presence of time delays can cause complicated and unstable behavior in neural networks. To make the results more general, many articles [2, 3] take time delays into account when analyzing chaotic neural networks. Compared with [2], the authors of [3] considered time-varying delays instead of constant time delays and obtained less conservative results.

So far, various control schemes have been applied to achieve the synchronization of chaotic neural networks, including impulsive control [4], adaptive control [5], state feedback control [6], sampled-data control [7], and pinning control [8].

Among these control schemes, sampled-data control has enjoyed widespread adoption owing to its outstanding advantages: a sampled-data controller occupies little communication channel capacity and has strong resistance to interference, so it can accomplish the control task more efficiently. Thanks to the input delay approach [9], the discrete term can be handled more easily. In [10], the authors used Lyapunov stability theory, the input delay approach, and linear matrix inequality (LMI) techniques to derive sufficient exponential synchronization conditions; however, that article did not take the signal transmission delay into account. To the best of the authors' knowledge, little literature has investigated master–slave synchronization schemes for neural networks with discrete and distributed time-varying delays using sampled-data control in the presence of a constant input delay, and the problem remains challenging.

From the foregoing discussion, the major thrust of this paper is the problem of master–slave synchronization for neural networks with mixed time-varying delays (discrete and distributed delays) using sampled-data control in the presence of a constant input delay. The desired sampled-data controller is obtained by solving a set of LMIs derived from Lyapunov functionals. The input delay approach is also employed. Moreover, a decomposition of the delay interval makes our results less conservative. The proposed synchronization control scheme is verified through simulation results.

Notation: The notation used in this article is defined as follows. \( R^{n} \) and \( R^{m \times n} \) denote the \( n \)-dimensional Euclidean space and the set of all \( m \times n \) real matrices, respectively. The notation \( X > Y \) (\( X \ge Y \)), where \( X \) and \( Y \) are symmetric matrices, means that \( X - Y \) is positive definite (positive semidefinite). \( I \) and \( 0 \) represent the identity matrix and a zero matrix, respectively. The superscript "T" denotes matrix transposition, and \( {\text{diag}}\{ \ldots \} \) stands for a block diagonal matrix. \( \left\| \cdot \right\| \) denotes the Euclidean norm of a vector or the spectral norm of a matrix. For a matrix \( B \) and two symmetric matrices \( A \) and \( C \), \( \left[ {\begin{array}{*{20}c} A & B \\ * & C \\ \end{array} } \right] \) denotes a symmetric matrix, where "*" stands for the entries implied by symmetry. If the dimensions of matrices are not explicitly stated, they are assumed to be compatible with the operations involved.

2 Model and Preliminaries

Consider a neural network with mixed delays as follows:

$$ \dot{x}(t) = - Cx(t) + Af(x(t)) + Bf(x(t - \tau (t))) + D\int\limits_{t - \sigma (t)}^{t} {f(x(\theta )){\text{d}}\theta } + J $$
(1)

where \( x(t) = [x_{1} (t),x_{2} (t), \ldots ,x_{n} (t)]^{T} \in R^{n} \) and \( f(x(t)) = [f_{1} (x_{1} (t)),f_{2} (x_{2} (t)), \ldots ,f_{n} (x_{n} (t))]^{T} \) are the state vector and the neuron activation function, respectively; \( C = {\text{diag}}\{ c_{1} ,c_{2} , \ldots ,c_{n} \} \) is a diagonal matrix with positive entries; \( A = (a_{ij} )_{n \times n} \), \( B = (b_{ij} )_{n \times n} \) and \( D = (d_{ij} )_{n \times n} \) are the connection weight matrix, the discretely delayed connection weight matrix and the distributively delayed connection weight matrix, respectively; \( \tau (t) \) denotes the time-varying delay and satisfies \( 0 \le \tau (t) \le \tau \), \( \dot{\tau }(t) \le u \); \( \sigma (t) \) is the distributed delay and is assumed to satisfy \( 0 \le \sigma (t) \le \sigma \). Here \( \tau \), \( u \) and \( \sigma \) are constants.

With regard to the neuron activation function, the following hypotheses will come into play.

Assumption 1

There exist constants \( L_{i}^{ - } \), \( L_{i}^{ + } \), \( i = 1,2, \ldots ,n \), such that the activation function \( f( \cdot ) \) satisfies \( L_{i}^{ - } \le \frac{{f_{i} (\vartheta_{1} ) - f_{i} (\vartheta_{2} )}}{{\vartheta_{1} - \vartheta_{2} }} \le L_{i}^{ + } \) for all \( \vartheta_{1} ,\vartheta_{2} \) with \( \vartheta_{1} \ne \vartheta_{2} \).
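For example, the common choice \( f_{i} (s) = \tanh (s) \) satisfies Assumption 1 with \( L_{i}^{ - } = 0 \) and \( L_{i}^{ + } = 1 \), since all of its difference quotients lie in \( (0,1) \). A quick numerical spot check (illustrative only):

```python
import numpy as np

def sector_bounds_hold(f, L_minus, L_plus, n_pairs=10_000, seed=0):
    """Check L- <= (f(a)-f(b))/(a-b) <= L+ on random pairs a != b."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(-5.0, 5.0, n_pairs)
    b = rng.uniform(-5.0, 5.0, n_pairs)
    mask = np.abs(a - b) > 1e-9                     # keep only pairs with a != b
    q = (f(a[mask]) - f(b[mask])) / (a[mask] - b[mask])
    return bool(np.all(q >= L_minus - 1e-12) and np.all(q <= L_plus + 1e-12))

print(sector_bounds_hold(np.tanh, 0.0, 1.0))  # tanh is slope-restricted in [0, 1]
```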

In this paper, system (1) is taken as the master system, and the corresponding slave system is designed as

$$ \dot{y}(t) = - Cy(t) + Af(y(t)) + Bf(y(t - \tau (t))) + D\int\limits_{t - \sigma (t)}^{t} {f(y(\theta )){\text{d}}\theta } + J + u(t) $$
(2)

where \( A \), \( B \), \( C \) and \( D \) are the matrices in (1), and \( u(t) \in R^{n} \) is the control input to be designed.

The synchronization error is defined as \( e(t) = y(t) - x(t) \); the error system can then be written as

$$ \dot{e}(t) = - Ce(t) + Ag(e(t)) + Bg(e(t - \tau (t))) + D\int\limits_{t - \sigma (t)}^{t} {g(e(\theta )){\text{d}}\theta } + u(t) $$
(3)

where \( g(e(t)) = f(y(t)) - f(x(t)) \).

In this paper, \( t_{k} \) denotes the updating instant of the zero-order hold (ZOH), and the updating signal (successfully transmitted from the sampler to the controller and then to the ZOH) at instant \( t_{k} \) is assumed to have experienced a constant transmission delay \( \eta \). The sampling intervals are bounded and satisfy \( t_{k + 1} - t_{k} = h_{k} \le h \), where \( h \) denotes the largest sampling interval. Thus we obtain \( t_{k + 1} - t_{k} + \eta \le h + \eta \le d \).

The main aim of this paper is to achieve the synchronization of the master system (1) and slave system (2) together with the following sampled-data controller

$$ u(t) = Ke(t_{k} - \eta ),\quad t_{k} \le t < t_{k + 1} ,\quad k = 0,1,2, \ldots $$
(4)

where \( K \) is the sampled data feedback controller gain matrix to be determined.

Applying control law (4) to the error system (3) yields

$$ \dot{e}(t) = - Ce(t) + Ag(e(t)) + Bg(e(t - \tau (t))) + D\int\limits_{t - \sigma (t)}^{t} {g(e(\theta )){\text{d}}\theta } + Ke(t_{k} - \eta ) $$
(5)

Define \( d(t) = t - t_{k} + \eta \) for \( t_{k} \le t < t_{k + 1} \); then \( 0 \le d(t) \le d \), and the error system can be rewritten as

$$ \dot{e}(t) = - Ce(t) + Ag(e(t)) + Bg(e(t - \tau (t))) + D\int\limits_{t - \sigma (t)}^{t} {g(e(\theta )){\text{d}}\theta } + Ke(t - d(t)) $$
(6)
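Following the input delay approach, the piecewise-constant control signal is modeled as delayed state feedback with the sawtooth delay \( d(t) \), which resets to \( \eta \) at each sampling instant and grows with slope one in between. The sketch below illustrates this construction; the sampling sequence and the bounds \( \eta \) and \( h \) are our own illustrative choices:

```python
import numpy as np

eta, h = 0.01, 0.21                 # transmission delay and sampling bound (illustrative)
rng = np.random.default_rng(1)
hk = rng.uniform(0.05, h, size=50)  # aperiodic intervals with t_{k+1} - t_k = h_k <= h
tk = np.concatenate(([0.0], np.cumsum(hk)))

def d_of_t(t):
    """Artificial input delay d(t) = t - t_k + eta for t in [t_k, t_{k+1})."""
    k = np.searchsorted(tk, t, side="right") - 1
    return t - tk[k] + eta

ts = np.linspace(0.0, tk[-1] - 1e-9, 5000)
ds = np.array([d_of_t(t) for t in ts])
d_bound = h + eta
print(ds.min() >= eta, ds.max() <= d_bound)  # sawtooth stays inside [eta, h + eta]
```

Inside each sampling interval, \( e(t - d(t)) = e(t_{k} - \eta ) \) holds exactly, which is how (5) turns into (6).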

Next, we briefly introduce the lemmas used in this paper.

Lemma 1

(Jensen inequality) [11] For any matrix \( \omega > 0 \), scalars \( \alpha \) and \( \beta \) \( (\beta > \alpha ) \), and a vector function \( \phi :[\alpha ,\beta ] \to R^{n} \) such that the integrals concerned are well defined, we have

$$ (\beta - \alpha )\int\limits_{\alpha }^{\beta } {\phi (\gamma )^{T} \omega \phi (\gamma ){\text{d}}\gamma } \ge \left[ {\int\limits_{\alpha }^{\beta } {\phi (\gamma ){\text{d}}\gamma } } \right]^{T} \omega \left[ {\int\limits_{\alpha }^{\beta } {\phi (\gamma ){\text{d}}\gamma } } \right] $$
(7)
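Lemma 1 is the continuous counterpart of a discrete inequality, \( N\sum\nolimits_{i} \phi_{i}^{T} \omega \phi_{i} \ge \left( \sum\nolimits_{i} \phi_{i} \right)^{T} \omega \left( \sum\nolimits_{i} \phi_{i} \right) \), which follows from the Cauchy–Schwarz inequality in the \( \omega \)-weighted inner product; a Riemann-sum check of (7) reduces to exactly this. A small numerical sketch (random data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 4, 200
M = rng.standard_normal((n, n))
omega = M @ M.T + n * np.eye(n)      # symmetric positive definite weight
phi = rng.standard_normal((N, n))    # samples of phi on a grid over [alpha, beta]

# N * sum phi_i' omega phi_i   vs   (sum phi_i)' omega (sum phi_i)
lhs = N * sum(p @ omega @ p for p in phi)
s = phi.sum(axis=0)
rhs = s @ omega @ s
print(lhs >= rhs)  # discrete Jensen: Cauchy–Schwarz in the omega inner product
```

The grid spacing cancels from both sides, so the discrete check is scale-free.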

Lemma 2

(Extended Wirtinger inequality) [12] For any matrix \( Z > 0 \), if \( \varphi (t) \in \omega [a,b) \) and \( \varphi (a) = 0 \), then the following inequality holds:

$$ \int\limits_{a}^{b} {\varphi (\zeta )^{T} Z\varphi (\zeta ){\text{d}}\zeta \le } \frac{{4(b - a)^{2} }}{{\pi^{2} }}\int\limits_{a}^{b} {\dot{\varphi }(\zeta )^{T} Z\dot{\varphi }(\zeta ){\text{d}}\zeta } $$
(8)
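The inequality can be sanity-checked numerically with a scalar \( Z \) and a smooth \( \varphi \) satisfying \( \varphi (a) = 0 \); the interval and test function below are arbitrary choices of ours:

```python
import numpy as np

a, b = 0.0, 1.3                      # arbitrary interval (illustrative)
t = np.linspace(a, b, 20_001)
dt = t[1] - t[0]
phi = (t - a) ** 2                   # smooth test function with phi(a) = 0
dphi = 2 * (t - a)                   # its exact derivative

lhs = (phi ** 2).sum() * dt                                   # ~ ∫ phi^T Z phi dt, Z = 1
rhs = 4 * (b - a) ** 2 / np.pi ** 2 * (dphi ** 2).sum() * dt  # ~ Wirtinger bound
print(lhs <= rhs)
```

Equality is approached by \( \varphi (t) = \sin \left( \frac{\pi (t - a)}{2(b - a)} \right) \), so the constant \( 4(b - a)^{2}/\pi^{2} \) cannot be improved.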

Lemma 3

Let the constant matrix \( Y \in R^{n \times n} \) be symmetric positive definite, let the positive scalar \( d \) satisfy \( 0 \le d(t) \le d \), and suppose the vector-valued function \( \dot{y}:[ - d,0] \to R^{n} \) exists; then the integral term \( - d\int\limits_{t - d}^{t} {\dot{y}^{T} (\zeta )Y\dot{y}(\zeta )} {\text{d}}\zeta \) satisfies

$$ - d\int\limits_{t - d}^{t} {\dot{y}^{T} (\zeta )Y\dot{y}(\zeta )} {\text{d}}\zeta \le \left[ {\begin{array}{*{20}c} {y(t)} \\ {y(t - d(t))} \\ {y(t - d)} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - Y} & Y & 0 \\ * & { - 2Y} & Y \\ * & * & { - Y} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {y(t)} \\ {y(t - d(t))} \\ {y(t - d)} \\ \end{array} } \right] $$
(9)

3 Main Results

In this section, a synchronization criterion is presented which ensures that the slave system (2) is synchronized with the master system (1). First, we divide the delay interval into three parts, namely \( \left[ { - \eta ,0} \right] \), \( \left[ { - \left( {\eta + \frac{h}{2}} \right), - \eta } \right] \), and \( \left[ { - \left( {\eta + h} \right), - \left( {\eta + \frac{h}{2}} \right)} \right] \).

Next, we list some notation used in the sequel.

$$\begin{aligned} L_{1} &= {\text{diag}}\left\{ {L_{1}^{ - } L_{1}^{ + } ,L_{2}^{ - } L_{2}^{ + } , \ldots ,L_{n}^{ - } L_{n}^{ + } } \right\}, \\ L_{2} &= {\text{diag}}\left\{ {\frac{{L_{1}^{ - } + L_{1}^{ + } }}{2},\frac{{L_{2}^{ - } + L_{2}^{ + } }}{2}, \ldots \frac{{L_{n}^{ - } + L_{n}^{ + } }}{2}} \right\}\end{aligned} $$

Theorem 1

Given a scalar \( \gamma > 0 \), if there exist matrices \( P > 0,\;R_{1} > 0,\;R_{2} > 0,\;R_{3} > 0,\;Z_{1} > 0,\;Z_{2} > 0,\;Z_{3} > 0,\;Z_{4} > 0,\;Z_{5} > 0,\;Z_{6} > 0,\;Q > 0,\;W > 0 \), matrices \( G_{1} ,\;G_{2} ,\;G,\;F \), and diagonal matrices \( M \ge 0,\;V_{1} > 0,\;V_{2} > 0 \) such that

$$ \varXi_{1} = \left[ {\begin{array}{*{20}c} {\varSigma_{11} } & {\varSigma_{12} } & {R_{3} } & 0 & {\varSigma_{15} } & {Z_{2} } & 0 & 0 & {\varSigma_{19} } & {GB} & {GD} \\ * & {\varSigma_{22} } & 0 & 0 & {\varSigma_{25} } & 0 & 0 & 0 & {\gamma GA + L_{3} M} & {\gamma GB} & {\gamma GD} \\ * & * & {\varSigma_{33} } & {R_{3} } & 0 & 0 & 0 & 0 & 0 & {V_{2} F_{2} } & 0 \\ * & * & * & {\varSigma_{44} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & {\varSigma_{55} } & {\frac{{\pi^{2} }}{4}W} & {Z_{4} } & {Z_{4} } & {GA} & {GB} & {GD} \\ * & * & * & * & * & {\varSigma_{66} } & {Z_{6} } & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & {\varSigma_{77} } & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & {\varSigma_{88} } & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & {\varSigma_{99} } & 0 & 0 \\ * & * & * & * & * & * & * & * & * & { - V_{2} } & 0 \\ * & * & * & * & * & * & * & * & * & * & { - Q} \\ \end{array} } \right] < 0 $$
(10)
$$ \varXi_{2} = \left[ {\begin{array}{*{20}c} {\varSigma_{11} } & {\varSigma_{12} } & {R_{3} } & 0 & {\varSigma_{15} } & {Z_{2} } & 0 & 0 & {\varSigma_{19} } & {GB} & {GD} \\ * & {\varSigma_{22} } & 0 & 0 & {\varSigma_{25} } & 0 & 0 & 0 & {\gamma GA + L_{3} M} & {\gamma GB} & {\gamma GD} \\ * & * & {\varSigma_{33} } & {R_{3} } & 0 & 0 & 0 & 0 & 0 & {V_{2} F_{2} } & 0 \\ * & * & * & {\varSigma_{44} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & {\hat{\varSigma }_{55} } & {\varSigma_{56} } & {Z_{6} } & {Z_{4} } & {GA} & {GB} & {GD} \\ * & * & * & * & * & {\varSigma_{66} } & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & {\varSigma_{77} } & {Z_{4} } & 0 & 0 & 0 \\ * & * & * & * & * & * & * & {\varSigma_{88} } & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & {\varSigma_{99} } & 0 & 0 \\ * & * & * & * & * & * & * & * & * & { - V_{2} } & 0 \\ * & * & * & * & * & * & * & * & * & * & { - Q} \\ \end{array} } \right] < 0 $$
(11)
$$ \begin{aligned} \varSigma_{11} & = R_{1} + R_{2} - R_{3} + Z_{1} + Z_{2} - F_{1} V_{1} - 2GC,\quad \varSigma_{12} = P - \gamma GC - G - L_{3} M \\ \varSigma_{15} & = F - GC,\quad \varSigma_{19} = V_{1} F_{2} + GA,\quad \varSigma_{22} = \tau^{2} R_{3} + \eta^{2} Z_{2} + \frac{{h^{2} }}{4}Z_{4} + \frac{{h^{2} }}{4}Z_{6} + h^{2} W - 2\gamma G \\ \varSigma_{25} & = G - \gamma F,\quad \varSigma_{33} = (u - 1)R_{1} - 2R_{3} - F_{1} V_{2} ,\quad \varSigma_{44} = - R_{2} - R_{3} \\ \varSigma_{55} & = - 2Z_{4} - \frac{{\pi^{2} }}{4}W + 2F,\quad \hat{\varSigma }_{55} = - 2Z_{6} - \frac{{\pi^{2} }}{4}W + 2F \\ \varSigma_{56} & = Z_{6} + \frac{{\pi^{2} }}{4}W,\quad \varSigma_{66} = - Z_{1} - Z_{2} + Z_{5} - Z_{6} - \frac{{\pi^{2} }}{4}W \\ \varSigma_{77} & = Z_{3} - Z_{4} - Z_{5} - Z_{6} ,\quad \varSigma_{88} = - Z_{3} - Z_{4} ,\quad \varSigma_{99} = \sigma^{2} Q - V_{1} \\ \end{aligned} $$

then the slave system (2) is synchronized with the master system (1). Furthermore, the sampled-data controller gain is given by \( K = G^{ - 1} F \).

Proof

Construct a discontinuous Lyapunov functional for the error system (6)

$$ V(t) = \sum\limits_{j = 1}^{7} {V_{j} (t)} ,\quad t \in [t_{k} ,t_{k + 1} ) $$
(12)
$$ \begin{aligned} V_{1} (t) & = e(t)^{T} Pe(t) + 2\sum\limits_{i = 1}^{n} {m_{i} \int\limits_{0}^{{e_{i} (t)}} {(g_{i} (\theta ) - L_{i} \theta ){\text{d}}\theta } } \\ V_{2} (t) & = \int\limits_{t - \tau (t)}^{t} {e(\theta )^{T} R_{1} e(\theta ){\text{d}}\theta } + \int\limits_{t - \tau }^{t} {e(\theta )^{T} R_{2} e(\theta ){\text{d}}\theta } + \tau \int\limits_{ - \tau }^{0} {\int\limits_{t + \xi }^{t} {\dot{e}(\theta )^{T} R_{3} \dot{e}(\theta ){\text{d}}\theta {\text{d}}\xi } } \\ V_{3} (t) & = \int\limits_{t - \eta }^{t} {e(\theta )^{T} Z_{1} e(\theta ){\text{d}}\theta } + \eta \int\limits_{ - \eta }^{0} {\int\limits_{t + \xi }^{t} {\dot{e}(\theta )^{T} Z_{2} \dot{e}(\theta ){\text{d}}\theta {\text{d}}\xi } } \\ V_{4} (t) & = \int\limits_{t - (h + \eta )}^{{t - \left( {\eta + \frac{h}{2}} \right)}} {e(\theta )^{T} Z_{3} e(\theta ){\text{d}}\theta } + \frac{h}{2}\int\limits_{ - (h + \eta )}^{{ - \left( {\eta + \frac{h}{2}} \right)}} {\int\limits_{t + \xi }^{t} {\dot{e}(\theta )^{T} Z_{4} \dot{e}(\theta ){\text{d}}\theta {\text{d}}\xi } } \\ V_{5} (t) & = \int\limits_{{t - \left( {\eta + \frac{h}{2}} \right)}}^{t - \eta } {e(\theta )^{T} Z_{5} e(\theta ){\text{d}}\theta } + \frac{h}{2}\int\limits_{{ - \left( {\eta + \frac{h}{2}} \right)}}^{ - \eta } {\int\limits_{t + \xi }^{t} {\dot{e}(\theta )^{T} Z_{6} \dot{e}(\theta ){\text{d}}\theta {\text{d}}\xi } } \\ V_{6} (t) & = \sigma \int\limits_{ - \sigma }^{0} {\int\limits_{t + \xi }^{t} {g(e(\theta ))^{T} Qg(e(\theta )){\text{d}}\theta {\text{d}}\xi } } \\ V_{7} (t) & = (d - \eta )^{2} \int\limits_{{t_{k} - \eta }}^{t} {\dot{e}(\theta )^{T} W\dot{e}(\theta ){\text{d}}} \theta - \frac{{\pi^{2} }}{4}\int\limits_{{t_{k} - \eta }}^{t - \eta } {(e(\theta ) - e(t_{k} - \eta ))^{T} W(e(\theta ) - e(t_{k} - \eta )){\text{d}}\theta } \\ \end{aligned} $$

\( V_{7} (t) \) can be rewritten as

$$\begin{aligned} V_{7} (t) =&\, (d - \eta )^{2} \int\limits_{t - \eta }^{t} {\dot{e}(\theta )^{T} W\dot{e}(\theta ){\text{d}}} \theta + (d - \eta )^{2} \int\limits_{{t_{k} - \eta }}^{t - \eta } {\dot{e}(\theta )^{T} W\dot{e}(\theta ){\text{d}}} \theta \\&\, - \frac{{\pi^{2} }}{4}\int\limits_{{t_{k} - \eta }}^{t - \eta } {(e(\theta ) - e(t_{k} - \eta ))^{T} W(e(\theta ) - e(t_{k} - \eta )){\text{d}}\theta } \end{aligned}$$
where \( M = {\text{diag}}\{ m_{1} ,m_{2} , \ldots ,m_{n} \} \ge 0 \).

From Lemma 2, we can infer \( V_{7} (t) \ge 0 \). Furthermore, \( V_{7} (t) \) vanishes at \( t = t_{k} \). Therefore, we conclude that \( \mathop {\lim }\nolimits_{{t \to t_{k}^{ - } }} V(t) \ge V(t_{k} ) \).

Next, we compute the derivative of \( V(t) \) along the trajectory of system (6)

$$ \begin{aligned} \dot{V}_{1} (t) & = 2e(t)^{T} P\dot{e}(t) + 2(g(e(t))^{T} - L_{3} e(t)^{T} )M\dot{e}(t) \\ \dot{V}_{2} (t) & \le e(t)^{T} R_{1} e(t) + (u - 1)e(t - \tau (t))^{T} R_{1} e(t - \tau (t)) + e(t)^{T} R_{2} e(t) - e(t - \tau )^{T} R_{2} e(t - \tau ) \\ & \quad + \tau^{2} \dot{e}(t)^{T} R_{3} \dot{e}(t) - \tau \int\limits_{t - \tau (t)}^{t} {\dot{e}(\theta )^{T} R_{3} \dot{e}(\theta )} {\text{d}}\theta - \tau \int\limits_{t - \tau }^{t - \tau (t)} {\dot{e}(\theta )^{T} R_{3} \dot{e}(\theta )} {\text{d}}\theta \\ \end{aligned} $$

According to Lemma 1

$$ - \tau \int\limits_{t - \tau }^{t} {\dot{e}(\theta )^{T} R_{3} \dot{e}(\theta )} {\text{d}}\theta \le \left[ {\begin{array}{*{20}c} {e(t)} \\ {e(t - \tau )} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - R_{3} } & {R_{3} } \\ * & { - R_{3} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {e(t)} \\ {e(t - \tau )} \\ \end{array} } \right] $$
(13)

Consequently, the following inequality holds

$$ \begin{aligned} \dot{V}_{2} (t) & \le e(t)^{T} R_{1} e(t) + (u - 1)e(t - \tau (t))^{T} R_{1} e(t - \tau (t)) + e(t)^{T} R_{2} e(t) - e(t - \tau )^{T} R_{2} e(t - \tau ) + \tau^{2} \dot{e}(t)^{T} R_{3} \dot{e}(t) \\ & \quad + \left[ {\begin{array}{*{20}c} {e(t)} \\ {e(t - \tau (t))} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - R_{3} } & {R_{3} } \\ * & { - R_{3} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {e(t)} \\ {e(t - \tau (t))} \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {e(t - \tau (t))} \\ {e(t - \tau )} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - R_{3} } & {R_{3} } \\ * & { - R_{3} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {e(t - \tau (t))} \\ {e(t - \tau )} \\ \end{array} } \right] \\ \dot{V}_{3} (t) & \le e(t)^{T} Z_{1} e(t) - e(t - \eta )^{T} Z_{1} e(t - \eta ) + \eta^{2} \dot{e}(t)^{T} Z_{2} \dot{e}(t) + \left[ {\begin{array}{*{20}c} {e(t)} \\ {e(t - \eta )} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - Z_{2} } & {Z_{2} } \\ * & { - Z_{2} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {e(t)} \\ {e(t - \eta )} \\ \end{array} } \right] \\ \dot{V}_{4} (t) & = e\left( {t - \left( {\eta + \frac{h}{2}} \right)} \right)^{T} Z_{3} e\left( {t - \left( {\eta + \frac{h}{2}} \right)} \right) - e(t - (h + \eta ))^{T} Z_{3} e(t - (h + \eta )) + \left( {\frac{h}{2}} \right)^{2} \dot{e}(t)^{T} Z_{4} \dot{e}(t) - \int\limits_{t - (h + \eta )}^{{t - \left( {\eta + \frac{h}{2}} \right)}} {\dot{e}(\theta )^{T} Z_{4} \dot{e}(\theta ){\text{d}}\theta } \\ \dot{V}_{5} (t) & = e(t - \eta )^{T} Z_{5} e(t - \eta ) - e\left( {t - \left( {\eta + \frac{h}{2}} \right)} \right)^{T} Z_{5} e\left( {t - \left( {\eta + \frac{h}{2}} \right)} \right) + \left( {\frac{h}{2}} \right)^{2} \dot{e}(t)^{T} Z_{6} \dot{e}(t) - \int\limits_{{t - \left( {\eta + \frac{h}{2}} \right)}}^{t - \eta } {\dot{e}(\theta )^{T} Z_{6} \dot{e}(\theta ){\text{d}}\theta } \\ \end{aligned} $$

According to Lemma 1 and Lemma 3, if \( d(t) \in \left[ { - (h + \eta ), - \left( {\eta + \frac{h}{2}} \right)} \right] \), then the following inequalities hold

$$ - \int\limits_{t - (h + \eta )}^{{t - \left( {\eta + \frac{h}{2}} \right)}} {\dot{e}(\theta )^{T} Z_{4} \dot{e}(\theta ){\text{d}}\theta } \le \left[ {\begin{array}{*{20}c} {e\left( {t - \left( {\eta + \frac{h}{2}} \right)} \right)} \\ {e(t - d(t))} \\ {e(t - (h + \eta ))} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - Z_{4} } & {Z_{4} } & 0 \\ * & { - 2Z_{4} } & {Z_{4} } \\ * & * & { - Z_{4} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {e\left( {t - \left( {\eta + \frac{h}{2}} \right)} \right)} \\ {e(t - d(t))} \\ {e(t - (h + \eta ))} \\ \end{array} } \right] $$
(14)
$$ - \int\limits_{{t - \left( {\eta + \frac{h}{2}} \right)}}^{t - \eta } {\dot{e}(\theta )^{T} Z_{6} \dot{e}(\theta ){\text{d}}\theta } \le \left[ {\begin{array}{*{20}c} {e(t - \eta )} \\ {e\left( {t - \left( {\eta + \frac{h}{2}} \right)} \right)} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - Z_{6} } & {Z_{6} } \\ * & { - Z_{6} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {e(t - \eta )} \\ {e\left( {t - \left( {\eta + \frac{h}{2}} \right)} \right)} \\ \end{array} } \right] $$
(15)

If \( d(t) \in \left[ { - \left( {\eta + \frac{h}{2}} \right), - \eta } \right] \), we have similar inequalities.

$$ \begin{aligned} \dot{V}_{6} (t) & = \sigma^{2} g(e(t))^{T} Qg(e(t)) - \sigma \int\limits_{t - \sigma }^{t} {g(e(\theta ))^{T} Qg(e(\theta ))} {\text{d}}\theta \\ & \le \sigma^{2} g(e(t))^{T} Qg(e(t)) - \left[ {\int\limits_{t - \sigma (t)}^{t} {g(e(\theta )){\text{d}}\theta } } \right]^{T} Q\left[ {\int\limits_{t - \sigma (t)}^{t} {g(e(\theta )){\text{d}}\theta } } \right] \\ \dot{V}_{7} (t) & \le (d - \eta )^{2} \dot{e}(t)^{T} W\dot{e}(t) + \frac{{\pi^{2} }}{4}\left[ {\begin{array}{*{20}c} {e(t - \eta )} \\ {e(t_{k} - \eta )} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - W} & W \\ * & { - W} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {e(t - \eta )} \\ {e(t_{k} - \eta )} \\ \end{array} } \right] \\ \end{aligned} $$

Based on the error system (6), for any appropriately dimensioned matrices \( G_{1} \) and \( G_{2} \), the following equations are true

$$ \begin{aligned} & 0 = 2\left[ {e(t)^{T} G_{1} + e(t_{k} - \eta )^{T} G_{1} + \dot{e}(t)^{T} G_{2} } \right]\Big[ - \dot{e}(t) - Ce(t) + Ag(e(t)) \nonumber\\ &\quad\quad + Bg(e(t - \tau (t))) + D\int\limits_{t - \sigma (t)}^{t} {g(e(\theta )){\text{d}}\theta } + Ke(t - d(t)) \Big] \end{aligned} $$
(16)

where \( G_{1} \) and \( G_{2} \) are defined as \( G_{1} = G,\;G_{2} = \gamma G. \)

Besides, we can obtain from Assumption 1 that for \( j = 1,2,3, \ldots ,n: \)

$$ 0 \le \left[ {\begin{array}{*{20}c} {e(t)} \\ {g(e(t))} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - L_{j}^{ - } L_{j}^{ + } e_{j} e_{j}^{T} } & { - \frac{{L_{j}^{ - } + L_{j}^{ + } }}{2}e_{j} e_{j}^{T} } \\ * & {e_{j} e_{j}^{T} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {e(t)} \\ {g(e(t))} \\ \end{array} } \right] $$
(17)

where \( e_{j} \) stands for the unit column vector with a 1 in its \( j \)th entry and zeros elsewhere. Therefore, for any diagonal matrices \( V_{1} > 0 \) and \( V_{2} > 0 \), the following inequality can be derived.

$$ 0 \le \left\{ {\left[ {\begin{array}{*{20}c} {e(t)} \\ {g(e(t))} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - L_{1} V_{1} } & {L_{2} V_{1} } \\ * & { - V_{1} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {e(t)} \\ {g(e(t))} \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {e(t - \tau (t))} \\ {g(e(t - \tau (t)))} \\ \end{array} } \right]^{T} \left[ {\begin{array}{*{20}c} { - L_{1} V_{2} } & {L_{2} V_{2} } \\ * & { - V_{2} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {e(t - \tau (t))} \\ {g(e(t - \tau (t)))} \\ \end{array} } \right]} \right\} $$
(18)

Now substituting (16) and (18) into \( \dot{V}(t) \), and letting \( K = G^{ - 1} F \), the following inequality is obtained

$$ \dot{V}(t) \le \chi (t)^{T} (\varXi_{k} )\chi (t),\quad k = 1,2 $$
(19)

where

$$ \chi (t) = \left[ {\begin{array}{*{20}c} {e(t)^{T} } & {\dot{e}(t)^{T} } & {e(t - \tau (t))^{T} } & {e(t - \tau )^{T} } & {e(t - d(t))^{T} } & {e(t - \eta )^{T} } & {e\left( {t - \left( {\eta + \frac{h}{2}} \right)} \right)^{T} } \\ \end{array} } \right.\left. {\begin{array}{*{20}c} {e(t - (\eta + h))^{T} } & {g(e(t))^{T} } & {g(e(t - \tau (t)))^{T} } & {\int\limits_{t - \sigma (t)}^{t} {g(e(\theta ))^{T} {\text{d}}\theta } } \\ \end{array} } \right]^{T} $$

Thus, based on (10) and (11), we conclude that \( \dot{V}(t) \le - \zeta \left\| {e(t)} \right\|^{2} \) for some sufficiently small scalar \( \zeta > 0 \).

According to the Lyapunov stability theory, it can be inferred that the slave system (2) is synchronized with the master system (1). This completes the proof.

Remark 1

A new synchronization criterion for the master system (1) and slave system (2) is introduced in Theorem 1 by constructing a novel Lyapunov functional. The sufficient conditions are expressed as LMIs, which can be checked efficiently with the Matlab LMI control toolbox.
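The substitution \( F = GK \) is what keeps conditions (10)–(11) linear in the unknowns; once an LMI solver returns \( G \) and \( F \), the gain is recovered by solving the linear system \( GK = F \). A minimal sketch, where \( G \) and \( F \) are hypothetical placeholders rather than actual solver output for the example in Sect. 4:

```python
import numpy as np

# hypothetical LMI solver output (placeholder values, for illustration only)
G = np.array([[2.0, 0.3],
              [0.1, 1.5]])
F = np.array([[-11.0,  0.5],
              [  1.2, -9.0]])

K = np.linalg.solve(G, F)      # K = G^{-1} F without forming the inverse explicitly
print(np.allclose(G @ K, F))   # sanity check: G K = F
```

Using `solve` instead of explicitly inverting \( G \) is numerically preferable when \( G \) is ill-conditioned.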

Remark 2

Thanks to the term \( V_{7} (t) \) and the decomposition of the delay interval, the sawtooth structure of the sampling-induced input delay is exploited properly, which significantly improves existing results.

4 Numerical Example

In this simulation, we choose the activation functions as \( f_{1} (s) = f_{2} (s) = \tanh (s) \). The parameters of the master system (1) and slave system (2) are given by

$$ \begin{aligned}& C = \left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right],\quad A = \left[ {\begin{array}{*{20}c} {1.8} & { - 0.15} \\ { - 5.2} & {3.5} \\ \end{array} } \right],\quad B = \left[ {\begin{array}{*{20}c} { - 1.7} & { - 0.12} \\ { - 0.26} & { - 2.5} \\ \end{array} } \right],\\& D = \left[ {\begin{array}{*{20}c} {0.6} & {0.15} \\ { - 2} & { - 0.12} \\ \end{array} } \right]\end{aligned} $$

It is clear that \( L_{1} = 0 \), \( L_{2} = 0.5I \).

We suppose that \( J = 0 \), the discrete delay \( \tau (t) = \frac{{{\text{e}}^{t} }}{{{\text{e}}^{t} + 1}} \), and the distributed delay \( \sigma (t) = 0.5\sin^{2} (t) \). The other parameters are \( \tau = 1 \), \( u = 0.25 \) and \( \sigma = 0.5 \). The initial values of the master and slave systems are \( x(0) = \left[ {\begin{array}{*{20}c} {0.3} & {0.4} \\ \end{array} } \right]^{T} \) and \( y(0) = \left[ {\begin{array}{*{20}c} { - 0.2} & {0.7} \\ \end{array} } \right]^{T} \), respectively. The chaotic behaviors of the master system and of the slave system without control are shown in Figs. 1 and 2, respectively.

Fig. 1
figure 1

Chaotic behavior of the master system (1)

Fig. 2
figure 2

Chaotic behavior of the slave system with \( u(t) = 0 \)
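The master trajectory in Fig. 1 can be approximated by a fixed-step Euler scheme that keeps the full state history for the discrete and distributed delay terms. The sketch below uses the parameters above; the step size, the horizon, and the constant pre-history \( x(\theta ) = x(0) \) for \( \theta \le 0 \) are our own illustrative choices:

```python
import numpy as np

C = np.eye(2)
A = np.array([[1.8, -0.15], [-5.2, 3.5]])
B = np.array([[-1.7, -0.12], [-0.26, -2.5]])
D = np.array([[0.6, 0.15], [-2.0, -0.12]])
f = np.tanh

dt, T = 0.001, 20.0
n_steps = int(T / dt)

x = np.zeros((n_steps + 1, 2))
x[0] = [0.3, 0.4]                            # initial condition from Sect. 4

for k in range(n_steps):
    t = k * dt
    tau = np.e**t / (np.e**t + 1)            # discrete delay tau(t)
    sigma = 0.5 * np.sin(t) ** 2             # distributed delay sigma(t)
    kd = max(0, k - int(round(tau / dt)))    # index of x(t - tau(t)); clamped pre-history
    ks = max(0, k - int(round(sigma / dt)))  # start index of the distributed window
    dist = f(x[ks:k + 1]).sum(axis=0) * dt   # Riemann sum for the distributed term
    dx = -C @ x[k] + A @ f(x[k]) + B @ f(x[kd]) + D @ dist
    x[k + 1] = x[k] + dt * dx                # explicit Euler step

print(np.isfinite(x).all())                  # trajectory stays bounded (chaotic attractor)
```

Plotting `x[:, 0]` against `x[:, 1]` should trace out an attractor of the kind shown in Fig. 1; a smaller step size tightens the approximation of the delayed terms.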

Employing Theorem 1, Table 1 shows the relationship between the transmission delay \( \eta \) and the maximum sampling interval \( h \). From Table 1, the largest sampling interval is \( h = 0.21 \) for the constant delay \( \eta = 0.01 \). Solving LMIs (10) and (11), the controller gain is obtained as

Table 1 Maximum sampling interval \( h \) for different \( \eta \)
$$ K = \left[ {\begin{array}{*{20}c} { - 5.7352} & {0.0702} \\ {1.1632} & { - 6.6532} \\ \end{array} } \right] $$

Based on this controller gain, the response curves of the error system (6) and the control input (4) are exhibited in Figs. 3 and 4, respectively. The numerical simulations clearly demonstrate that the designed controller achieves master–slave synchronization.

Fig. 3
figure 3

State responses of error system

Fig. 4
figure 4

Control input \( u(t) \)

5 Conclusions

In this paper, the problem of master–slave synchronization has been studied for chaotic neural networks with discrete and distributed time-varying delays in the presence of a constant input delay. Based on Lyapunov stability theory, the input delay method, and the decomposition of the delay interval, we construct a new Lyapunov functional and derive less conservative results. Finally, numerical simulations demonstrate the advantage and effectiveness of the obtained results.