1 Introduction

Based on physical symmetry arguments, Chua [7] predicted that, besides the resistor, the capacitor and the inductor, there should be a fourth fundamental two-terminal circuit element, called the memristor (a contraction of memory resistor), which is defined by a nonlinear relationship between charge and flux linkage. About 40 years later, researchers at Hewlett–Packard Laboratories [31] realized the memristor as a TiO\(_2\) nanocomponent. The memristor is a two-terminal passive device whose value depends on the magnitude and polarity of the voltage applied to it and on the length of time that the voltage has been applied. In other words, the memristor has a variable resistance and exhibits memory characteristics. Owing to these properties, the memristor has many promising applications, such as device modeling and signal processing; one of them is to emulate synaptic behavior. As is well known, artificial neural networks can be realized by nonlinear circuits in which the connection weights are implemented by fixed-value resistors that are supposed to represent the strength of the synaptic connections between neurons in the brain. However, the strength of a synapse changes according to the Hebbian learning rule, whereas the resistance of a fixed resistor is invariable [30]. To simulate the artificial neural network of the human brain better, the resistors are replaced by memristors, which leads to a new model of neural networks: memristor-based neural networks.

In the last two decades, synchronization of chaos has been studied extensively. In the seminal paper [26], Pecora and Carroll first showed that two chaotic trajectories starting from different initial conditions can be synchronized. Since then, researchers around the world have been actively engaged in discovering different possible synchronization scenarios of chaos and have presented many types of synchronization approaches, owing to their potential applications in secure communication, biological networks, chemical reactions, biological neural networks, information processing, etc. [1, 12, 13, 19–22, 27, 28, 34–37, 46–52]. Recently, some achievements on synchronization control of memristor-based neural networks have been obtained. For instance, based on the drive-response concept, differential inclusion theory and the Lyapunov functional method, Wu et al. [39] derived a delay-dependent feedback controller achieving exponential synchronization of memristor-based recurrent neural networks with time delays by use of linear matrix inequalities (LMIs). Wang et al. [32] established several sufficient conditions guaranteeing exponential synchronization of coupled memristive neural networks with time delays, also by the LMI approach. Mathiyalagan et al. [23] proposed feedback controller gains guaranteeing exponential synchronization of delayed impulsive memristive BAM neural networks by using a time-varying Lyapunov function and LMI techniques. Wang et al. [33] obtained a sufficient condition guaranteeing exponential synchronization of a class of memristive chaotic neural networks with mixed delays and parametric uncertainties by using the comparison principle and LMIs. Song et al. [30] designed two kinds of feedback controllers and gave a new algebraic criterion and a matrix-dependent condition ensuring exponential synchronization in the p-th moment of stochastic memristive neural networks by means of stochastic differential inclusion theory. Jiang et al. [10] presented several memoryless controllers guaranteeing exponential synchronization of a class of memristor-based recurrent neural networks by use of LMIs. Jiang et al. [14] proposed several sufficient conditions for finite-time stability and finite-time synchronization of a class of memristor-based neural networks under a designed controller by utilizing matrix-norm inequalities and LMIs.

However, the results of [4, 10, 14, 30, 32, 33, 38–40, 44, 45] on synchronization or anti-synchronization control of delayed memristor-based neural networks were obtained under the following typical assumption:

$$\begin{aligned} \,[\underline{A},\bar{A}]f(x(t))-[\underline{A},\bar{A}]f(y(t))\subseteq [\underline{A},\bar{A}](f(x(t))-f(y(t))). \end{aligned}$$

As was pointed out in [41, 42], this assumption holds only when f(x(t)) and f(y(t)) have different signs or \(f(x(t))f(y(t))=0.\) Hence, these results are of little use in theory or in engineering applications, and establishing effective synchronization conditions remains challenging.

During the past decade, the Jensen inequality [11] has been used intensively in the context of time-delay and sampled-data systems, since it is an appropriate tool for deriving tractable stability conditions expressed in terms of LMIs. Recently, Fang and Park [8] introduced a multiple integral form of the Jensen inequality. To reduce conservatism, Seuret and Gouaisbaut [29] presented a Wirtinger-based integral inequality which encompasses the Jensen one and significantly improves existing results; Park et al. [24] and Lee et al. [17] established double and multiple integral forms of the Wirtinger-based integral inequality, respectively, which encompass the Jensen ones [8, 11] and yield tighter lower bounds for double and multiple integral terms. To reduce conservatism further, Zeng et al. [43] very recently developed a free-matrix-based inequality which encompasses the Wirtinger-based inequality [29] and is tighter than existing ones.
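For reference, in their single-integral forms the two bounds read as follows, with \(R>0,\) \(\omega\) differentiable, \(\nu _0=\omega (b)-\omega (a)\) and \(\nu _1=\omega (b)+\omega (a)-\frac{2}{b-a}\int ^{b}_{a}\omega (s)\mathrm{d}s\): the Jensen inequality gives

$$\begin{aligned} \int ^{b}_{a}\dot{\omega }(s)^TR\dot{\omega }(s)\mathrm{d}s\ge \frac{1}{b-a}\nu _0^TR\nu _0, \end{aligned}$$

while the Wirtinger-based inequality [29] adds a nonnegative correction term,

$$\begin{aligned} \int ^{b}_{a}\dot{\omega }(s)^TR\dot{\omega }(s)\mathrm{d}s\ge \frac{1}{b-a}\nu _0^TR\nu _0+\frac{3}{b-a}\nu _1^TR\nu _1, \end{aligned}$$

which corresponds to the \(\imath =0\) case of Lemma 2 below applied to \(\dot{\omega }.\)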

Motivated by the above discussion, in this paper we study the synchronization problem for a class of delayed memristive chaotic neural networks. First, based on the Wirtinger-based double integral inequality (see Lemma 2), two novel inequalities (see Lemma 1) are proposed, which are multiple integral forms of the Wirtinger-based integral inequality [29]. Next, by applying the reciprocally convex combination approach for the high order case (see Lemma 6) and a free-matrix-based inequality (see Lemma 5), novel delay-dependent conditions are established to achieve synchronization of the memristive chaotic neural networks. All the results are based on dividing the interval that bounds the activation function into two subintervals of equal length. Finally, a numerical example is provided to demonstrate the effectiveness of the theoretical results.

Notation: Throughout this paper, solutions of all the systems considered below are intended in Filippov’s sense, and \([a, b]\) represents the interval \(a\le t\le b.\) \(W^T\) and \(W^{-1}\) denote the transpose and the inverse of a square matrix W, respectively. \(W>0\ (<0)\) denotes a positive (negative) definite symmetric matrix; \(I_{n}\) and \(0_{n}\) denote the \(n\)-dimensional identity matrix and zero matrix, respectively; \(0_{m\times n}\) denotes the \(m\times n\) zero matrix; the symbol “*” denotes a block that is readily inferred by symmetry. The shorthand \(\mathrm{col}\{M_1,M_2,\ldots ,M_k\}\) denotes a column matrix composed of the matrices \(M_1,M_2,\ldots ,M_k;\) \(\mathrm{sym}(A)\) is defined as \(A+A^T;\) \(\mathrm{diag}\{\cdot \}\) stands for a diagonal or block-diagonal matrix; \(\mathbb {N}=\{1,2,\ldots ,n\}.\) For \(\chi>0,\ \mathcal {C}\big ([-\chi ,0];\mathbb {R}^n\big )\) denotes the family of continuous functions \(\phi\) from \([-\chi ,0]\) to \(\mathbb {R}^n\) with the norm \(||\phi ||=\sup _{-\chi \le s\le 0}|\phi (s)|.\) Matrices, if not explicitly stated, are assumed to have compatible dimensions.

2 Problem description

The memristor-based recurrent neural network can be implemented by very large scale integration circuits in which the connection weights are implemented by memristors. By Kirchhoff’s current law, a general class of memristor-based recurrent neural networks can be written in the form:

$$\begin{aligned} \dot{x}_i(t)= & -c_ix_i(t)+\sum ^n_{j=1}a_{ij}(x_j(t))f_j(x_j(t)) \nonumber \\&+\sum ^n_{j=1}b_{ij}(x_j(t))f_j(x_j(t-\tau (t)))+J_i,\ \ t>0,\end{aligned}$$
(1)

with initial conditions \(x_i(t)=\varphi _i(t)\in \mathcal {C}\big ((-\infty ,0];\mathbb {R}\big ),\) where \(i\in \mathbb {N},\) \(n\) corresponds to the number of units in the neural network, \(x_i(t)\) is the voltage of the capacitor \(\mathbf {C}_i\), \(c_i>0\) represents the rate with which the i-th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs at time t, \(f_j(\cdot )\) is the feedback function, and \(\tau (t)\) is the discrete transmission time-varying delay satisfying \(0\le \tau (t)\le \bar{\tau }\) for a real constant \(\bar{\tau }.\) \(a_{ij}(x_j(t)),b_{ij}(x_j(t))\) represent the memristive synaptic weights, and

$$\begin{aligned} a_{ij}(x_j(t))=\frac{\mathbf {M}_{ij}}{\mathbf {C}_i}\mathrm{sgn}_{ij},\quad b_{ij}(x_j(t))=\frac{\widetilde{\mathbf {M}}_{ij}}{\mathbf {C}_i}\mathrm{sgn}_{ij},\ J_i=\frac{\mathbf {J}_i}{\mathbf {C}_i},\quad \mathrm{sgn}_{ij}=\bigg \{\begin{array}{ll} 1,& i\ne j,\\ -1,& i=j,\end{array} \end{aligned}$$

where \(\mathbf {M}_{ij},\widetilde{\mathbf {M}}_{ij}\) denote the memductances of the memristors \(\mathbf {R}_{ij},\widetilde{\mathbf {R}}_{ij}\) respectively. \(\mathbf {R}_{ij}\) denotes the memristor between the continuous feedback function \(f_j(x_j(t))\) and \(x_i(t)\); \(\widetilde{\mathbf {R}}_{ij}\) denotes the memristor between the feedback function \(f_j(x_j(t-\tau (t)))\) and \(x_i(t)\). \(\mathbf {J}_i\) denotes the external input or bias from outside the neural network at time t.

As is well known, \(a_{ij}(x_j(t)),b_{ij}(x_j(t))\) change as the pinched hysteresis loops change [5]. According to the feature of the memristor and its current–voltage characteristics, we have

$$\begin{aligned} a_{ij}(x_j(t))=&\bigg \{ \begin{array}{ll} \acute{a}_{ij},& |x_j(t)|<T_j,\\ \grave{a}_{ij},& |x_j(t)|>T_j,\end{array}&b_{ij}(x_j(t))=&\bigg \{ \begin{array}{ll} \acute{b}_{ij},& |x_j(t)|<T_j,\\ \grave{b}_{ij},& |x_j(t)|>T_j, \end{array} \end{aligned}$$

and \(a_{ij}(\pm T_j)=\acute{a}_{ij}\) or \(\grave{a}_{ij},\) \(b_{ij}(\pm T_j)=\acute{b}_{ij}\) or \(\grave{b}_{ij},\) where the switching jumps \(T_j>0\) and \(\acute{a}_{ij},\grave{a}_{ij},\acute{b}_{ij},\grave{b}_{ij}\ (i,j\in \mathbb {N})\) are known constants. For notational convenience, we denote \(\underline{a}_{ij}=\min \{\acute{a}_{ij},\grave{a}_{ij}\},\bar{a}_{ij}=\max \{\acute{a}_{ij},\grave{a}_{ij}\},\) \(\underline{b}_{ij}=\min \{\acute{b}_{ij},\grave{b}_{ij}\},\bar{b}_{ij}=\max \{\acute{b}_{ij},\grave{b}_{ij}\}.\)

To ensure the existence of the equilibrium point of system (1), the following assumption is given.

Assumption 1

The neural activation functions are bounded, \(f_j(\pm T_j)=0,\) and they satisfy the following Lipschitz condition with Lipschitz constants \(\sigma _j>0:\)

$$\begin{aligned} |f_j(u)-f_j(v)|\le \sigma _j|u-v|,\ u,v\in \mathbb {R},\quad j\in \mathbb {N}. \end{aligned}$$

For notational simplicity, we denote \(\Sigma =\mathrm{diag}\{\sigma _1,\sigma _2,\ldots ,\sigma _n\}.\)

Throughout this paper, we will use the following definitions:

Definition 1

([3]) Suppose \(E \subseteq \mathbb {R}^n.\) Then \(x\rightarrow F(x)\) is called a set-valued map if, for each point \(x\in E,\) there corresponds a nonempty set \(F(x)\subseteq \mathbb {R}^n.\) A set-valued map F with nonempty values is said to be upper semi-continuous at \(x_0\in E\) if, for any open set N containing \(F(x_0),\) there exists a neighborhood M of \(x_0\) such that \(F(M)\subseteq N.\) The map F(x) is said to have a closed (convex, compact) image if, for each \(x\in E,\ F(x)\) is closed (convex, compact).

Definition 2

([9]) For the system \(\dot{x}(t)=F(x),\ x\in \mathbb {R}^n\) with a discontinuous right-hand side, a set-valued map is defined as follows:

$$\begin{aligned} \Phi (x)=\bigcap _{\delta>0}\bigcap _{\mu (M)=0}\mathrm{co}\{F(B(x,\delta )\backslash M)\}, \end{aligned}$$

where \(\mathrm{co}\{G\}\) is the closure of the convex hull of the set \(G,\ B(x,\delta )=\{y:\Vert y-x\Vert \le \delta \},\) and \(\mu (E)\) is the Lebesgue measure of the set E. A solution in Filippov’s sense of the Cauchy problem for this system with initial condition \(x(0)=x_0\) is an absolutely continuous function \(x(t),\ t\in [0,T],\ 0 < T\le \infty ,\) which satisfies \(x(0)=x_0\) and the differential inclusion:

$$\begin{aligned} \dot{x}(t)\in \Phi (x),\ \ \mathrm{for\ almost\ all}\ t\in [0,T]. \end{aligned}$$
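For instance, for the scalar system \(\dot{x}(t)=-\mathrm{sgn}(x(t)),\) the construction above yields \(\Phi (x)=\{-\mathrm{sgn}(x)\}\) for \(x\ne 0\) and \(\Phi (0)=[-1,1];\) hence the trajectory that reaches the origin and stays there is a Filippov solution, although the system has no classical solution through \(x=0.\)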

Since system (1) is a differential equation with a discontinuous right-hand side, its solutions in the conventional sense may not exist. Based on the theory of differential inclusions and the definition of a Filippov solution [9], model (1) can be rewritten as the following differential inclusion:

$$\begin{aligned} \dot{x}_i(t)\in \Phi (x_i)= & -c_ix_i(t)+\sum ^n_{j=1}[\underline{a}_{ij},\bar{a}_{ij}]f_j(x_j(t)) \\&+\sum ^n_{j=1}[\underline{b}_{ij},\bar{b}_{ij}]f_j(x_j(t-\tau (t)))+J_i,\ \ t>0. \end{aligned}$$

Obviously, \(\Phi (s)\) is an upper semi-continuous, closed, convex and bounded set-valued map for all \(s\in \mathbb {R}\). Hence, by [2, 18], there exists a Filippov solution of model (1) under Assumption 1.

System (1) is considered as the drive system. Based on the drive-response concept for the synchronization of coupled chaotic systems, initially proposed by Pecora and Carroll [26], the corresponding response system of (1) is given in the following form:

$$\begin{aligned} \dot{y}_i(t)= & -c_iy_i(t)+\sum ^n_{j=1}{a}_{ij}(y_j(t))f_j(y_j(t)) \nonumber \\&+\sum ^n_{j=1}{b}_{ij}(y_j(t))f_j(y_j(t-\tau (t)))+u_i(t)+J_i,\ \ t>0, \end{aligned}$$
(2)

with initial conditions \(y_i(t)=\phi _i(t)\in \mathcal {C}\big ((-\infty ,0];\mathbb {R}\big ),\) where \(u_i(t)\in \mathbb {R}\) is the state feedback controller designed to achieve synchronization between the drive system and the response system.

Let \(e_i(t)=y_i(t)-x_i(t)\) denote the synchronization error. From the theory of differential inclusions, the synchronization error system is obtained as follows:

$$\begin{aligned} \dot{e}_i(t)= & -c_ie_i(t)+\sum ^n_{j=1}\big [{a}_{ij}(y_j(t))f_j(y_j(t))-a_{ij}(x_j(t))f_j(x_j(t))\big ]\nonumber \\&+\sum ^n_{j=1}\big [{b}_{ij}(y_j(t))f_j(y_j(t-\tau (t)))-{b}_{ij}(x_j(t))f_j(x_j(t-\tau (t)))\big ] \nonumber \\&+u_i(t),\ \ t>0, \nonumber \\ e_i(t)= & \varphi _i(t)-\phi _i(t),\ \ t\in (-\infty ,0]. \end{aligned}$$
(3)

In this paper, the control input vector with state feedback is designed as follows:

$$\begin{aligned} u(t)={G_{1}}e(t)+{G_{2}}e(t-{\tau }(t)), \end{aligned}$$
(4)

where \(u(t)=(u_1(t),u_2(t),\ldots ,u_n(t))^T\in \mathbb {R}^n,\ e(t)=(e_1(t),e_2(t),\ldots ,e_n(t))^T\in \mathbb {R}^n.\) For convenience, we denote \(g_j(e_j(t))=f_j(y_j(t))-f_j(x_j(t)).\) From Assumption 1, we obtain that

$$\begin{aligned} |g_j(u)-g_j(v)|\le \sigma _j|u-v|,\ |g_j(u)|\le \sigma _j|u|,\ g_j(0)=0,\ j\in \mathbb {N} \end{aligned}$$
(5)

for any \(u,v\in \mathbb {R}.\)
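To make the role of the control law (4) concrete, the following minimal Python sketch (our illustration, not part of the original design) evaluates \(u(t)\) from a sampled error trajectory, reading off the delayed error \(e(t-\tau (t))\) by linear interpolation:

```python
import numpy as np

def control_input(G1, G2, t_hist, e_hist, t, tau):
    """u(t) = G1 e(t) + G2 e(t - tau(t)), cf. (4).

    t_hist : increasing sample times covering [t - tau_bar, t]
    e_hist : array of shape (len(t_hist), n) with the sampled error e(s)
    tau    : callable time-varying delay with 0 <= tau(t) <= tau_bar
    """
    n = e_hist.shape[1]
    # interpolate each component of e at t and at the delayed instant t - tau(t)
    e_now = np.array([np.interp(t, t_hist, e_hist[:, j]) for j in range(n)])
    e_del = np.array([np.interp(t - tau(t), t_hist, e_hist[:, j]) for j in range(n)])
    return G1 @ e_now + G2 @ e_del
```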

Based on the Wirtinger-based integral inequality [29], the Wirtinger-based double integral inequality [24] and the Wirtinger-based multiple integral inequality [17], we introduce the following Wirtinger-based multiple integral inequalities:

Lemma 1

(See Appendix I for a proof) Let \(\imath\) be a non-negative integer and \(X\in \mathbb {R}^{n\times n}\) a positive definite matrix. Then for any continuous function \(\vartheta :[a, b]\rightarrow \mathbb {R}^n,\) the following inequality holds:

$$\begin{aligned} \Theta _\imath (a,b,\vartheta ,X)\ge & \frac{(\imath +1)!}{(b-a)^{\imath +1}}\rho _\imath (a,b,\vartheta )^TX\rho _\imath (a,b,\vartheta ) \nonumber \\&+\frac{(\imath +3)\imath !}{(b-a)^{\imath +1}}\bar{\rho }_\imath (a,b,\vartheta )^TX\bar{\rho }_\imath (a,b,\vartheta ), \end{aligned}$$
(6)

where

$$\begin{aligned} \Theta _\imath (a,b,\vartheta ,X)= & \int ^{b}_{a}\int ^{v_1}_{a}\cdots \int ^{v_\imath }_{a}{\vartheta }(s)^TX{\vartheta }(s)\mathrm{d}s\mathrm{d}v_\imath \ldots \mathrm{d}v_1,\ \ \Theta _0(a,b,\vartheta ,X) \\= & \int ^{b}_{a}{\vartheta }(s)^TX{\vartheta }(s)\mathrm{d}s;\\ \rho _\imath (a,b,\vartheta )= & \int ^{b}_{a}\int ^{v_1}_{a}\cdots \int ^{v_\imath }_{a}{\vartheta }(s)\mathrm{d}s\mathrm{d}v_\imath \ldots \mathrm{d}v_1,\ \ \rho _0(a,b,\vartheta )=\int ^{b}_{a}{\vartheta }(s)\mathrm{d}s;\\ \bar{\rho }_\imath (a,b,\vartheta )= & \rho _\imath (a,b,\vartheta )-\frac{\imath +2}{b-a}\rho _{\imath +1}(a,b,\vartheta ). \end{aligned}$$

In particular, if \(\vartheta :[a, b]\rightarrow \mathbb {R}^n\) is differentiable with a continuous derivative, then for any positive integer i and any positive definite matrix \(X\in \mathbb {R}^{n\times n},\) the following inequality holds:

$$\begin{aligned} \Theta _i(a,b,\dot{\vartheta },X)\ge & \frac{(i+1)!}{(b-a)^{i+1}}\hbar _i(a,b,\vartheta )^TX\hbar _i(a,b,\vartheta ) \nonumber \\&+\frac{(i+3)i!}{(b-a)^{i+1}}\xi _i(a,b,\vartheta )^TX\xi _i(a,b,\vartheta ), \end{aligned}$$
(7)

where

$$\begin{aligned} \hbar _i(a,b,\vartheta )= & -\frac{(b-a)^i}{i!}\vartheta (a)+\rho _{i-1}(a,b,\vartheta ),\\ \xi _i(a,b,\vartheta )= & \frac{(b-a)^i}{(i+1)!}\vartheta (a)+\rho _{i-1}(a,b,\vartheta )-\frac{i+2}{b-a}\rho _i(a,b,\vartheta ). \end{aligned}$$

Lemma 2

(Seuret et al. [29], Park et al. [24], Lee et al. [17]) Let \(\imath\) be a non-negative integer and \(X\in \mathbb {R}^{n\times n}\) a positive definite matrix. Then for any continuous function \(\vartheta :[a, b]\rightarrow \mathbb {R}^n\), the following inequality holds:

$$\begin{aligned} \Omega _\imath (a,b,\vartheta ,X)\ge & \frac{(\imath +1)!}{(b-a)^{\imath +1}}\beta _\imath (a,b,\vartheta )^TX\beta _\imath (a,b,\vartheta ) \\&+\frac{(\imath +3)\imath !}{(b-a)^{\imath +1}}\bar{\beta }_\imath (a,b,\vartheta )^TX\bar{\beta }_\imath (a,b,\vartheta ), \end{aligned}$$

where

$$\begin{aligned} \Omega _\imath (a,b,\vartheta ,X)= & \int ^{b}_{a}\int ^{b}_{v_1}\ldots \int ^{b}_{v_\imath }{\vartheta }(s)^TX{\vartheta }(s)\mathrm{d}s\mathrm{d}v_\imath \ldots \mathrm{d}v_1,\ \ \Omega _0(a,b,\vartheta ,X)\\= & \int ^{b}_{a}{\vartheta }(s)^TX{\vartheta }(s)\mathrm{d}s;\\ \beta _\imath (a,b,\vartheta )= & \int ^{b}_{a}\int ^{b}_{v_1}\ldots \int ^{b}_{v_\imath }{\vartheta }(s)\mathrm{d}s\mathrm{d}v_\imath \ldots \mathrm{d}v_1,\ \ \beta _0(a,b,\vartheta )=\int ^{b}_{a}{\vartheta }(s)\mathrm{d}s;\\ \bar{\beta }_\imath (a,b,\vartheta )= & \beta _\imath (a,b,\vartheta )-\frac{\imath +2}{b-a}\beta _{\imath +1}(a,b,\vartheta ). \end{aligned}$$

In particular, if \(\vartheta :[a, b]\rightarrow \mathbb {R}^n\) is differentiable with a continuous derivative, then for any positive integer i and any positive definite matrix \(X\in \mathbb {R}^{n\times n},\) the following inequality holds:

$$\begin{aligned} \Omega _i(a,b,\dot{\vartheta },X)\ge\, & \frac{(i+1)!}{(b-a)^{i+1}}\psi _i(a,b,\vartheta )^TX\psi _i(a,b,\vartheta ) \\&+\frac{(i+3)i!}{(b-a)^{i+1}}\omega _i(a,b,\vartheta )^TX\omega _i(a,b,\vartheta ), \end{aligned}$$

where

$$\begin{aligned} \psi _i(a,b,\vartheta )= & \frac{(b-a)^i}{i!}\vartheta (b)-\beta _{i-1}(a,b,\vartheta ),\\ \omega _i(a,b,\vartheta )= & -\frac{(b-a)^i}{(i+1)!}\vartheta (b)-\beta _{i-1}(a,b,\vartheta )+\frac{i+2}{b-a}\beta _i(a,b,\vartheta ). \end{aligned}$$

Similar to the result of Ref. [17], we have the following conclusion:

Lemma 3

(See Appendix II for a proof) Let \(\lambda :[a, b]\rightarrow \mathbb {R}^n\) be continuous, i be a positive integer and c be a scalar with \(a<c<b,\) and let \(\rho _i(a,b,\lambda )\) and \(\beta _i(a,b,\lambda )\) be as defined in Lemmas 1 and 2, respectively. Then the following relations hold:

$$\begin{aligned} \beta _{i-1}(a,b,\lambda )&=\beta _{i-1}(a,c,\lambda )+\sum ^i_{j=1}\frac{(c-a)^{i-j}}{(i-j)!}\beta _{j-1}(c,b,\lambda ),\end{aligned}$$
(8)
$$\begin{aligned} \rho _{i-1}(a,b,\lambda )&=\rho _{i-1}(c,b,\lambda )+\sum ^i_{j=1}\frac{(b-c)^{i-j}}{(i-j)!}\rho _{j-1}(a,c,\lambda ). \end{aligned}$$
(9)
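Identities (8) and (9) can likewise be confirmed numerically. The sketch below checks the \(i=2\) case for a scalar test function, using the Fubini-type weight forms \(\beta _1(a,b,\lambda )=\int ^{b}_{a}(s-a)\lambda (s)\mathrm{d}s\) and \(\rho _1(a,b,\lambda )=\int ^{b}_{a}(b-s)\lambda (s)\mathrm{d}s\) (an illustration, not a proof):

```python
import numpy as np
from scipy.integrate import quad

lam = lambda s: np.sin(3.0 * s)                 # scalar test function
a, c, b = 0.0, 0.4, 1.0

beta0 = lambda lo, hi: quad(lam, lo, hi)[0]
beta1 = lambda lo, hi: quad(lambda s: (s - lo) * lam(s), lo, hi)[0]
rho0 = beta0
rho1 = lambda lo, hi: quad(lambda s: (hi - s) * lam(s), lo, hi)[0]

# identity (8), i = 2: beta_1(a,b) = beta_1(a,c) + (c-a) beta_0(c,b) + beta_1(c,b)
print(np.isclose(beta1(a, b), beta1(a, c) + (c - a) * beta0(c, b) + beta1(c, b)))
# identity (9), i = 2: rho_1(a,b) = rho_1(c,b) + (b-c) rho_0(a,c) + rho_1(a,c)
print(np.isclose(rho1(a, b), rho1(c, b) + (b - c) * rho0(a, c) + rho1(a, c)))
```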

In order to obtain our results, we also need the following lemmas.

Lemma 4

(Chen et al. [6]) Under Assumption 1, the following inequalities hold for \(i,j\in \mathbb {N}\)

$$\begin{aligned}&|[\underline{a}_{ij},\bar{a}_{ij}]f_j(x_j(t))-[\underline{a}_{ij},\bar{a}_{ij}]f_j(y_j(t))|\le \hat{a}_{ij}\sigma _j|x_j(t)-y_j(t)|,\\&|[\underline{b}_{ij},\bar{b}_{ij}]f_j(x_j(t-\tau (t)))-[\underline{b}_{ij},\bar{b}_{ij}]f_j(y_j(t-\tau (t)))| \\&\le \hat{b}_{ij}\sigma _j|x_j(t-\tau (t))-y_j(t-\tau (t))|, \end{aligned}$$

where \(\hat{a}_{ij}=\max \{|\underline{a}_{ij}|,|\bar{a}_{ij}|\},\hat{b}_{ij}=\max \{|\underline{b}_{ij}|,|\bar{b}_{ij}|\}.\) That is, for any \(\zeta _{ij}(x_j(t)),\zeta _{ij}(y_j(t))\in [\underline{a}_{ij},\bar{a}_{ij}],\ \varsigma _{ij}(x_j(t)),\) \(\varsigma _{ij}(y_j(t))\in [\underline{b}_{ij},\bar{b}_{ij}],\) we have

$$\begin{aligned}&|\zeta _{ij}(x_j(t))f_j(x_j(t))-\zeta _{ij}(y_j(t))f_j(y_j(t))|\le \hat{a}_{ij}\sigma _j|x_j(t)-y_j(t)|,\\&|\varsigma _{ij}(x_j(t))f_j(x_j(t-\tau (t)))-\varsigma _{ij}(y_j(t))f_j(y_j(t-\tau (t)))|\\&\le \hat{b}_{ij}\sigma _j|x_j(t-\tau (t))-y_j(t-\tau (t))|. \end{aligned}$$

Lemma 5

(Zeng et al. [43]) Let \(\vartheta :[a, b]\rightarrow \mathbb {R}^n\) be differentiable with a continuous derivative. For symmetric matrices \(R\in \mathbb {R}^{n\times n},\ X,Z\in \mathbb {R}^{3n\times 3n}\) and any matrices \(Y\in \mathbb {R}^{3n\times 3n},\ M,N\in \mathbb {R}^{3n\times n}\) such that

$$\begin{aligned} \left[ \begin{array}{lll} X&Y&M \\ *&Z&N \\ *&*&R \end{array}\right] \ge 0 \end{aligned}$$

the following inequality holds:

$$\begin{aligned} -\int ^{b}_{a}\dot{\vartheta }(s)^TR\dot{\vartheta }(s)\mathrm{d}s\le &\, \varpi ^T\Big \{( b-a)\Big (X+\frac{1}{3}Z\Big ) \nonumber \\&+\mathrm{sym}\big \{M(I\ \ -I\ \ \ 0)+N(-I\ \ -I\ \ \ 2I)\big \}\Big \}\varpi , \end{aligned}$$
(10)

where \(\varpi =\mathrm{col}\Big \{\vartheta ( b),\vartheta (a),\frac{1}{ b-a}\int ^{b}_{a}{\vartheta }(s)\mathrm{d}s\Big \}.\)
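A feasible instance of the block condition in Lemma 5 can be manufactured through a Schur complement: with \(R>0\) and \(M,N\) chosen freely, setting \(\big [\begin{array}{ll}X&Y\\ *&Z\end{array}\big ]=\mathrm{col}\{M,N\}R^{-1}\big (\mathrm{col}\{M,N\}\big )^T+\epsilon I\) makes the \(7n\times 7n\) block matrix positive semidefinite. The following sketch (illustrative only) builds such an instance and verifies (10) for a random polynomial \(\vartheta\):

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(1)
n, a, b = 2, 0.0, 1.0
C = rng.standard_normal((n, 3))
th = lambda s: C @ np.array([1.0, s, s * s])          # theta(s)
dth = lambda s: C @ np.array([0.0, 1.0, 2.0 * s])     # d theta / ds

Rm = rng.standard_normal((n, n)); R = Rm @ Rm.T + n * np.eye(n)   # R > 0
M = rng.standard_normal((3 * n, n)); N = rng.standard_normal((3 * n, n))
G = np.vstack([M, N])
top = G @ np.linalg.solve(R, G.T) + 1e-6 * np.eye(6 * n)          # Schur construction
X, Y, Z = top[:3 * n, :3 * n], top[:3 * n, 3 * n:], top[3 * n:, 3 * n:]

I0, O = np.eye(n), np.zeros((n, n))
E1 = np.hstack([I0, -I0, O])              # (I  -I  0)
E2 = np.hstack([-I0, -I0, 2 * I0])        # (-I -I  2I)
vec_int = lambda f: np.array([quad(lambda s: f(s)[j], a, b)[0] for j in range(n)])
w = np.concatenate([th(b), th(a), vec_int(th) / (b - a)])         # varpi
lhs = -quad(lambda s: dth(s) @ R @ dth(s), a, b)[0]
Phi = (b - a) * (X + Z / 3.0) + (M @ E1 + E1.T @ M.T) + (N @ E2 + E2.T @ N.T)
print(lhs <= w @ Phi @ w + 1e-9)                                  # inequality (10)
```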

Lemma 6

(High order reciprocally convex combination, Kim et al. [25], Lee et al. [17]) Given a positive integer k and two positive scalars p, q such that \(p+q=1.\) Assume that nonnegative definite symmetric constant matrices \(X,Y\in \mathbb {R}^{n\times n}\) and real constant matrices \(S_i\in \mathbb {R}^{n\times n}\) \((i=1,2,\ldots ,k)\) satisfy

$$\begin{aligned} \left[ \begin{array}{ll} \Big (\begin{array}{ll} k\\ i\end{array}\Big )X &S_i \\ *&\Big ( \begin{array}{ll} k\\ i\end{array} \Big )Y \end{array}\right] \ge 0,\ \ \forall \ i=1,2,\ldots ,k, \end{aligned}$$

then for any two vectors \(\alpha ,\beta \in \mathbb {R}^{n},\) the following inequality holds

$$\begin{aligned} \frac{1}{p^k}\alpha ^TX\alpha +\frac{1}{q^k}\beta ^TY\beta \ge \bigg [\begin{array}{ll} \alpha \\ \beta \end{array}\bigg ]^T\bigg [\begin{array}{ll} X& \sum _{i=1}^kS_i \\ *&Y \end{array}\bigg ]\bigg [\begin{array}{ll} \alpha \\ \beta \end{array}\bigg ], \end{aligned}$$

where \(\Big (\begin{array}{ll} k\\ i\end{array}\Big )=\frac{k!}{i!(k-i)!}.\)
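In particular, setting \(k=1\) in Lemma 6 recovers the well-known reciprocally convex combination lemma: if \(\bigg [\begin{array}{ll} X &S_1 \\ *&Y \end{array}\bigg ]\ge 0,\) then

$$\begin{aligned} \frac{1}{p}\alpha ^TX\alpha +\frac{1}{q}\beta ^TY\beta \ge \bigg [\begin{array}{ll} \alpha \\ \beta \end{array}\bigg ]^T\bigg [\begin{array}{ll} X&S_1 \\ *&Y \end{array}\bigg ]\bigg [\begin{array}{ll} \alpha \\ \beta \end{array}\bigg ]. \end{aligned}$$

In delay analysis this is typically applied with \(p=\tau (t)/\bar{\tau }\) and \(q=1-p\) to merge bounds over the subintervals \([t-\tau (t),t]\) and \([t-\bar{\tau },t-\tau (t)].\)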

3 Main results

Before presenting our results, for notational simplicity we denote \(g(e(t))=(g_1(e_1(t)),g_2(e_2(t)),\ldots ,g_n(e_n(t)))^T,\) \(e_{t}=e(t),e_{\tau }=e(t-\tau (t)),e_{\bar{\tau }}=e(t-\bar{\tau }),\) and introduce the following vector:

$$\begin{aligned} \eth (t)=\mathrm{col}\Bigg \{&e_{t},\ e_{\tau },\ e_{\bar{\tau }},\ g(e_{t}),\ g(e_{\tau }),\ g(e_{\bar{\tau }}),\ \dot{e}(t),\ \int _{t-\tau (t)}^{t} g(e_s) \mathrm{d}s,\ \int _{t-{\bar{\tau }}}^{t-\tau (t)} g(e_s)\mathrm{d}s,\\&\frac{1}{\tau (t)}\int _{t-\tau (t)}^{t} e_s \mathrm{d}s,\ \frac{1}{{\bar{\tau }}-\tau (t)}\int _{t-{\bar{\tau }}}^{t-\tau (t)} e_s \mathrm{d}s,\ \frac{1}{\tau (t)}\beta _1(t-\tau (t),t,e_t),\ \\&\frac{1}{{\bar{\tau }}-\tau (t)}\beta _1(t-{\bar{\tau }},t-\tau (t),e_t),\\&\ldots ,\ \frac{1}{\tau (t)}\beta _{m-1}(t-\tau (t),t,e_t),\ \frac{1}{{\bar{\tau }}-\tau (t)}\beta _{m-1}(t-{\bar{\tau }},t-\tau (t),e_t),\ \\&\frac{1}{\tau (t)}\rho _1(t-\tau (t),t,e_t),\\&\frac{1}{{\bar{\tau }}-\tau (t)}\rho _1(t-{\bar{\tau }},t-\tau (t),e_t),\ \ldots ,\ \frac{1}{\tau (t)}\rho _{m-1}(t-\tau (t),t,e_t),\ \\&\frac{1}{{\bar{\tau }}-\tau (t)}\rho _{m-1}(t-{\bar{\tau }},t-\tau (t),e_t)\Bigg \}. \end{aligned}$$

From the mean-value theorem for integrals, it follows that

$$\begin{aligned}&\lim _{\tau (t)\rightarrow 0^{^+}}\frac{1}{\tau (t)}\int _{t-\tau (t)}^{t} e_s \mathrm{d}s=e_{t},\ \ \lim _{\tau (t)\rightarrow \bar{\tau }^-}\frac{1}{{\bar{\tau }}-\tau (t)}\int _{t-{\bar{\tau }}}^{t-\tau (t)} e_s \mathrm{d}s=e_{\bar{\tau }},\\&\lim _{\tau (t)\rightarrow 0^{^+}}\frac{1}{\tau (t)}\rho _i(t-\tau (t),t,e_t)=\lim _{\tau (t)\rightarrow 0^{^+}}\frac{1}{\tau (t)}\beta _i(t-\tau (t),t,e_t)=0,\\&\lim _{\tau (t)\rightarrow \bar{\tau }^-}\frac{1}{{\bar{\tau }}-\tau (t)}\rho _i(t-{\bar{\tau }},t-\tau (t),e_t)=\lim _{\tau (t)\rightarrow \bar{\tau }^-}\frac{1}{{\bar{\tau }}-\tau (t)}\beta _i(t-{\bar{\tau }},t-\tau (t),e_t)=0 \end{aligned}$$

for any positive integer i. Therefore \(\eth (t)\) is well defined even when \(\tau (t)=0\) or \(\tau (t)=\bar{\tau }\).

Let \(\eta _j\in \mathbb {R}^{n\times (4m+7)n}\) denote the block-row matrices \(\eta _j=\big [\begin{array}{lll} 0_{n\times (j-1)n}&I_n&0_{n\times (4m+7-j)n}\end{array}\big ],\ j=1,2,\ldots ,4m+7.\) When the control feedback matrices \(G_1,G_2\) are given, we have the following synchronization result for the error system (3).

Theorem 1

(See Appendix IV for a proof) Suppose that Assumption 1 holds, and let a positive integer m, constant scalars \(\mu ,\bar{\tau }>0\) and the control law (4) be given. The controlled slave system (2) is globally asymptotically synchronized with the master system (1) for any \(0\le \tau (t)\le \bar{\tau },\) \(\dot{\tau }(t)\le \mu ,\) if there exist positive definite matrices \(K,\mathcal {Q},\mathcal {R},\mathcal {S},U_i,M_i\ (i=1,2,\ldots ,m),Y,\mathcal {Z},\) positive diagonal matrices \(P,W_j,L_j\ (j=1,2,\ldots ,6),\) positive scalars \(\varepsilon _1,\varepsilon _2,\) real symmetric matrices \(D_1,D_2,\) and real matrices \(H_k,E_k\ (k=1,\ldots ,5),\ \mathcal {X},\mathcal {Y},\Phi _{il},\Psi _{il}\ (i=1,2,\ldots ,m;\ l=1,2,\ldots ,i)\) such that the following LMIs hold

$$\begin{aligned} \mathbb {X}=&\bigg [\begin{array}{ll} \mathcal {R} &\mathcal {X} \\ *&\mathcal {R} \end{array}\bigg ]\ge 0,\end{aligned}$$
(11)
$$\begin{aligned} \mathbb {Z}=&\bigg [\begin{array}{ll} \mathcal {Z}+\mathcal {D}_1 &\mathcal {Y} \\ *& \mathcal {Z}+\mathcal {D}_2 \end{array}\bigg ]\ge 0,\end{aligned}$$
(12)
$$\begin{aligned}&\left[ \begin{array}{lll} H_1 &H_{2} &H_4\\ *& H_3& H_5 \\ *&*& Y\end{array}\right] \ge 0,\end{aligned}$$
(13)
$$\begin{aligned}&\left[ \begin{array}{lll} E_1 &E_{2} &E_4\\ *& E_3& E_5 \\ *&*& Y\end{array}\right] \ge 0,\end{aligned}$$
(14)
$$\begin{aligned}&\left[ \begin{array}{ll} \Big (\begin{array}{ll} i\\ l\end{array}\Big )\mathbb {U}_i &\Phi _{il} \\ *& \Big (\begin{array}{ll} i\\ l\end{array}\Big )\mathbb {U}_i(t)\end{array}\right] \ge 0,\end{aligned}$$
(15)
$$\begin{aligned}&\left[ \begin{array}{ll} \Big (\begin{array}{ll} i\\ l\end{array}\Big )\mathbb {M}_i &\Psi _{il} \\ *& \Big (\begin{array}{ll} i\\ l\end{array}\Big )\mathbb {M}_i(t)\end{array}\right] \ge 0,\ \ \ \ i=1,2,\ldots ,m;\ l=1,2,\ldots ,i; \end{aligned}$$
(16)
$$\begin{aligned}&\left[ \begin{array}{lll} \Xi +\overline{\Xi }+\widetilde{\Xi }+\Xi _p & n\mathcal {P} &n\mathcal {P} \\ *& -n\varepsilon _1I& 0 \\ *&*& -n\varepsilon _2I\end{array}\right] <0,\ \ \ p=1,2, \end{aligned}$$
(17)

where

$$\begin{aligned} \mathcal {D}_1=&\bigg [\begin{array}{ll} 0&D_1 \\ D_1&0 \end{array}\bigg ],\quad \mathcal {D}_2=\bigg [\begin{array}{ll} 0&D_2 \\ D_2&0 \end{array}\bigg ],\quad \mathbb {U}_i=\bigg [\begin{array}{ll} U_i &0 \\ 0& \frac{i+2}{i}U_i\end{array}\bigg ],\quad \mathbb {U}_i(t)=\bigg [\begin{array}{ll} U_i(t) &0 \\ 0& \frac{i+2}{i}U_i(t)\end{array}\bigg ],\\ \mathbb {M}_i=&\,\mathrm{diag}\Big \{M_i,\frac{i+2}{i}M_i\Big \},\quad \mathbb {M}_i(t)=\mathrm{diag}\Big \{M_i(t),\frac{i+2}{i}M_i(t)\Big \},\quad \mathcal {P} =\big [\begin{array}{lll}0_{n\times 6n}&P&0_{n\times 4mn}\end{array}\big ]^T,\\ \Xi =&\mathrm{sym}\big \{\eta _1^TK\eta _7\big \}-\mathrm{sym}\big \{\eta _7^TP(\eta _7+C\eta _1)\big \}+\varepsilon _1\eta _1^T\Sigma \hat{A}\Sigma \eta _1 +\varepsilon _2\eta _2^T\Sigma \hat{B}\Sigma \eta _2\\\quad&+\big (\begin{array}{ll} \eta _1^T&\eta _4^T \end{array}\big ) \big (\bar{\tau }^2\mathcal {R}+\mathcal {S}\big )\mathrm{col}\{\eta _1,\eta _4\}-\big (\begin{array}{ll} \eta _3^T&\eta _6^T \end{array}\big )\mathcal {S}\mathrm{col}\{\eta _3,\eta _6\} -\eta _r^T\mathbb {X}\eta _r\\&+\bar{\tau }^2\big (\begin{array}{ll} \eta _1^T&\eta _7^T \end{array}\big ) \mathcal {Z}\mathrm{col}\{\eta _1,\eta _7\}+\bar{\tau }\eta _1^TD_1\eta _1-\bar{\tau }\eta _2^T(D_1-D_2)\eta _2 -\bar{\tau }\eta _3^TD_2\eta _3-\eta _b^T\mathbb {Z}\eta _b \\&+\bar{\tau }\eta _7^TY\eta _7+\sum ^m_{i=1}\bigg (\frac{\bar{\tau }^i}{i!}\bigg )^2\eta _7^T\big (U_i+M_i\big )\eta _7 -\sum ^m_{i=1}\Big \{\big (\eta ^\phi _{i-1}\big )^T\mathcal {U}_i(t)\eta ^\phi _{i-1} \\&+\big (\eta ^\psi _{i-1}\big )^T\mathcal {M}_i(t)\eta ^\psi _{i-1}\Big \}\\&+\big (\begin{array}{lll} \eta _1^T&\eta _2^T&\eta _{10}^T \end{array}\big )\Big \{\tau (t)\Big (H_1+\frac{1}{3}H_3\Big ) \\&+\mathrm{sym}\big \{H_4(I \ \ -I \ \ 0 )+H_5( -I\ \ -I\ \ 2I)\big \}\Big \}\mathrm{col}\{\eta _1,\eta _2,\eta _{10}\}\\&+\big (\eta _2^T \ \ \eta _3^T\ \ \eta _{11}^T \big )\Big \{[\bar{\tau }-\tau (t)]\Big (E_1+\frac{1}{3}E_3\Big ) \\&+\mathrm{sym}\big \{E_4(I \ \ -I \ \ 0 )+E_5( -I\ \ -I\ \ 2I)\big \}\Big \}\mathrm{col}\{\eta _2,\eta _3,\eta _{11}\}, \end{aligned}$$
$$\begin{aligned} \overline{\Xi }=\,&\mathrm{sym}\big \{\eta _7^TP(G_1\eta _1+G_2\eta _2)\big \},\\ \widetilde{\Xi }=\,&\big (\begin{array}{ll} \eta _1^T&\eta _4^T \end{array}\big )\mathcal {Q}\mathrm{col}\{\eta _1,\eta _4\} -(1-\mu )\big (\begin{array}{ll} \eta _2^T&\eta _5^T \end{array}\big )\mathcal {Q}\mathrm{col}\{\eta _2,\eta _5\},\\ \Xi _1=\,&-(\eta _4+\Sigma \eta _1)^TW_1\eta _4 -(\eta _5+\Sigma \eta _2)^TW_2\eta _5-(\eta _6+\Sigma \eta _3)^TW_3\eta _6 \\&-[\eta _4-\eta _5+\Sigma (\eta _1-\eta _2)]^TW_4(\eta _4-\eta _5)-[\eta _5-\eta _6+\Sigma (\eta _2-\eta _3)]^TW_5(\eta _5-\eta _6) \\&-[\eta _4-\eta _6+\Sigma (\eta _1-\eta _3)]^TW_6(\eta _4-\eta _6),\\ \Xi _2=\,&-\eta _4^TL_1(\eta _4-\Sigma \eta _1) -\eta _5^TL_2(\eta _5-\Sigma \eta _2)-\eta _6^TL_3(\eta _6-\Sigma \eta _3) \\&-(\eta _4-\eta _5)^TL_4[\eta _4-\eta _5-\Sigma (\eta _1-\eta _2)]-(\eta _5-\eta _6)^TL_5[\eta _5-\eta _6-\Sigma (\eta _2-\eta _3)] \\&-(\eta _4-\eta _6)^TL_6[\eta _4-\eta _6-\Sigma (\eta _1-\eta _3)], \end{aligned}$$

with

$$\begin{aligned}&U_i(t)=\sum ^m_{j=i}\frac{[\bar{\tau }-\tau (t)]^{j-i}}{(j-i)!}U_j,\ M_i(t)=\sum ^m_{j=i}\frac{[\tau (t)]^{j-i}}{(j-i)!}M_j,\ \mathcal {U}_i(t)=\bigg [\begin{array}{ll} \mathbb {U}_i &\sum ^i_{p=1}\Phi _{ip} \\ *&\mathbb {U}_i(t)\end{array}\bigg ],\\&\eta _b=\mathrm{col}\big \{\tau (t)\eta _{10},\ \eta _1-\eta _2,\ [\bar{\tau }-\tau (t)]\eta _{11},\ \eta _2-\eta _3\big \},\\&\eta _r=\mathrm{col}\big \{\tau (t)\eta _{10},\ \eta _8,\ [\bar{\tau }-\tau (t)]\eta _{11},\ \eta _9\big \},\\&\eta ^\beta _i=[\bar{\tau }-\tau (t)]\eta _{9+2i}+\tau (t)\sum ^i_{j=1}\frac{[\bar{\tau }-\tau (t)]^{i-j}}{(i-j)!}\eta _{8+2j},\ \mathcal {M}_i(t)=\bigg [\begin{array}{ll} \mathbb {M}_i &\sum ^i_{p=1}\Psi _{ip} \\ *&\mathbb {M}_i(t)\end{array}\bigg ],\\&\eta ^\phi _i=\mathrm{col}\bigg \{\frac{[\bar{\tau }-\tau (t)]^{i}}{i!}\eta _2-[\bar{\tau }-\tau (t)]\eta _{9+2i},\ -\frac{[\bar{\tau }-\tau (t)]^{i}}{(i+1)!}\eta _2-[\bar{\tau }-\tau (t)]\eta _{9+2i}+(i+2)\eta _{11+2i},\\&\qquad \frac{[\tau (t)]^{i}}{i!}\eta _1-\tau (t)\eta _{8+2i},\ -\frac{[\tau (t)]^{i}}{(i+1)!}\eta _1-\tau (t)\eta _{8+2i}+(i+2)\eta _{10+2i}\bigg \},\\&\eta ^\psi _i=\mathrm{col}\bigg \{-\frac{[\tau (t)]^{i}}{i!}\eta _2+\tau (t)\eta _{6+2m+2i},\ \frac{[\tau (t)]^{i}}{(i+1)!}\eta _2+\tau (t)\eta _{6+2m+2i}-(i+2)\eta _{8+2m+2i},\\&\qquad -\frac{[\bar{\tau }-\tau (t)]^{i}}{i!}\eta _3+[\bar{\tau }-\tau (t)]\eta _{7+2m+2i},\ \frac{[\bar{\tau }-\tau (t)]^{i}}{(i+1)!}\eta _3+[\bar{\tau }-\tau (t)]\eta _{7+2m+2i}-(i+2)\eta _{9+2m+2i}\bigg \}. \end{aligned}$$

Remark 1

By dividing the interval that bounds the activation function into two equal subintervals (see (40) and (51) in the Appendix), the information of the activation function is taken into account more fully. This may lead to more effective results.

In order to show how to design the estimate gain matrices \(G_1,G_2\), we propose the following result.

Theorem 2

Suppose that Assumption 1 holds, and let a positive integer m and constant scalars \(\mu\) and \(\bar{\tau }>0\) be given. The controlled slave system (2) is globally asymptotically synchronized with the master system (1) for any \(0\le \tau (t)\le \bar{\tau },\) \(\dot{\tau }(t)\le \mu ,\) if there exist positive definite matrices \(K,\mathcal {Q},\mathcal {R},\mathcal {S},U_i,M_i\ (i=1,2,\ldots ,m),Y,\mathcal {Z},\) positive diagonal matrices \(P,W_j,L_j\ (j=1,2,\ldots ,6),\) positive scalars \(\varepsilon _1,\varepsilon _2,\) real symmetric matrices \(D_1,D_2,\) and real matrices \(P_1,P_2,H_k,E_k\ (k=1,\ldots ,5),\ \mathcal {X},\mathcal {Y},\Phi _{il},\Psi _{il}\ (i=1,2,\ldots ,m;\ l=1,2,\ldots ,i)\) such that inequalities (11)–(16) and the following LMIs hold

$$\begin{aligned}&\left[ \begin{array}{lll} \Xi +\widehat{\Xi }+\widetilde{\Xi }+\Xi _l & n\mathcal {P} &n\mathcal {P} \\ *& -n\varepsilon _1I& 0 \\ *&*& -n\varepsilon _2I\end{array}\right] <0,\ \ \ l=1,2, \end{aligned}$$
(18)

where \(\widehat{\Xi }=\mathrm{sym}\big \{\eta _7^T(P_1\eta _1+P_2\eta _2)\big \},\) and the other parameters are all the same as those defined in Theorem 1. Moreover, the estimation gain matrices are \(G_l=P^{-1}P_l,l=1,2.\)

Proof

Let \(P_l=PG_l\ (l=1,2);\) then, following the same lines as in the proof of Theorem 1, we can conclude the result of Theorem 2. \(\square\)
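The feasibility problems in Theorems 1 and 2 are solved in Sect. 4 with the Matlab LMI Control Toolbox, but any semidefinite-programming front end serves the same purpose. The following Python/CVXPY sketch shows the pattern on a deliberately simplified Lyapunov-type LMI that stands in for (11)–(18), whose full block structure is too large to reproduce here; the matrix A, the tolerance eps and the choice of the SCS solver are our own illustrative assumptions:

```python
import cvxpy as cp
import numpy as np

# Stand-in data: a Hurwitz matrix playing the role of the closed-loop error dynamics.
A = np.array([[-2.0, 0.5], [0.3, -1.5]])
n = A.shape[0]
eps = 1e-6                                           # strictness tolerance (illustrative)

P = cp.Variable((n, n), symmetric=True)              # decision variable, analogous to K, P, ...
constraints = [P >> eps * np.eye(n),                 # P > 0
               A.T @ P + P @ A << -eps * np.eye(n)]  # sym{PA} < 0
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)                                   # 'optimal' means the LMI set is feasible
print(P.value)
```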

Remark 2

Note that the inequality (10) of Lemma 5 contains a set of slack variables, which provide extra freedom and may make it less conservative than others in the literature. It is worth mentioning that the Wirtinger-based inequality of Lemma 2 [29], which has been shown to be tighter than the well-known Jensen inequality, is a special case of (10). Thus Theorems 1 and 2, which are established by means of Lemma 5, may lead to less conservative results.

When the time-varying delay \(\tau (t)\) is not differentiable or \(\dot{\tau }(t)\) is unknown, the results in Theorems 1 and 2 are no longer applicable. For this case, from the proofs of Theorems 1 and 2, one can obtain the following corollaries.

Corollary 1

Suppose that Assumption 1 holds, and let a positive integer m, a constant scalar \(\bar{\tau }>0\) and the control law (4) be given. The controlled slave system (2) is globally asymptotically synchronized with the master system (1) for any \(0\le \tau (t)\le \bar{\tau },\) if there exist positive definite matrices \(K,\mathcal {R},\mathcal {S}, U_i,M_i\ (i=1,2,\ldots ,m),Y,\mathcal {Z},\) positive diagonal matrices \(P,W_j,L_j\ (j=1,2,\ldots ,6),\) positive scalars \(\varepsilon _1,\varepsilon _2,\) real symmetric matrices \(D_1,D_2,\) and real matrices \(H_k,E_k\ (k=1,\ldots ,5),\ \mathcal {X},\mathcal {Y},\Phi _{il},\Psi _{il}\ (i=1,2,\ldots ,m;\ l=1,2,\ldots ,i)\) such that (11)–(16) and the following LMIs hold

$$\begin{aligned} \left[ \begin{array}{lll} \Xi +\overline{\Xi }+\Xi _l & n\mathcal {P} &n\mathcal {P} \\ *& -n\varepsilon _1I& 0 \\ *&*& -n\varepsilon _2I\end{array}\right] <0,\ \ \ l=1,2, \end{aligned}$$
(19)

where the parameters are all the same as those defined in Theorem 1.

Corollary 2

Suppose that Assumption 1 holds, and let a positive integer m and a constant scalar \(\bar{\tau }>0\) be given. The controlled slave system (2) is globally asymptotically synchronized with the master system (1) for any \(0\le \tau (t)\le \bar{\tau },\) if there exist positive definite matrices \(K,\mathcal {R},\mathcal {S}, U_i,M_i\ (i=1,2,\ldots ,m),Y,\mathcal {Z},\) positive diagonal matrices \(P,W_j,L_j\ (j=1,2,\ldots ,6),\) positive scalars \(\varepsilon _1,\varepsilon _2,\) real symmetric matrices \(D_1,D_2,\) and real matrices \(P_1,P_2,H_k,E_k\ (k=1,\ldots ,5),\ \mathcal {X},\mathcal {Y},\Phi _{il},\Psi _{il}\ (i=1,2,\ldots ,m;\ l=1,2,\ldots ,i)\) such that (11)–(16) and the following LMIs hold

$$\begin{aligned} \left[ \begin{array}{lll} \Xi +\widehat{\Xi }+\Xi _l & n\mathcal {P} &n\mathcal {P} \\ *& -n\varepsilon _1I& 0 \\ *&*& -n\varepsilon _2I\end{array}\right] <0,\ \ \ l=1,2, \end{aligned}$$
(20)

where the parameters are all the same as those defined in Theorem 2. Moreover, the estimation gain matrices are \(G_l=P^{-1}P_l,l=1,2.\)

4 Illustrative example

A simulation example will be given in this section to illustrate the effectiveness of the developed approach.

Example 1

Consider the 3-dimensional delayed memristor-based neural network (1) with the following parameters:

$$\begin{aligned} \tau (t)=&0.3+0.3\cos (t),&f_i(v)=&\tanh (|v|-1),\ J_i=0,&c_i=&1.5,\ \ i=1,2,3,\\ a_{11}(v)=&\bigg \{\begin{array}{ll} 1+\frac{\pi }{4},& |v|<1,\\ 0.9+\frac{\pi }{4},& |v|>1,\end{array},&a_{12}(v)=&\bigg \{\begin{array}{ll} -1.1,& |v|<1,\\ 0.9,& |v|>1,\end{array}&a_{13}(v)=&\bigg \{\begin{array}{ll} 1.0,& |v|<1,\\ 1.5,& |v|>1,\end{array} \\ a_{21}(v)=&\bigg \{\begin{array}{ll} 0.2,& |v|<1,\\ 1.1,& |v|>1,\end{array}&a_{22}(v)=&\bigg \{\begin{array}{ll} 0.9+\frac{\pi }{4},& |v|<1,\\ 1+\frac{\pi }{4},& |v|>1,\end{array}&a_{23}(v)=&\bigg \{\begin{array}{ll} -0.6,& |v|<1,\\ -1.3,& |v|>1,\end{array}\\ a_{31}(v)=&\bigg \{\begin{array}{ll} 1.0,& |v|<1,\\ 0.9,& |v|>1,\end{array},&a_{32}(v)=&\bigg \{\begin{array}{ll} -1.1,& |v|<1,\\ 0.9,& |v|>1,\end{array}&a_{33}(v)=&\bigg \{\begin{array}{ll} 1+\frac{\pi }{4},& |v|<1,\\ 0.9+\frac{\pi }{4},& |v|>1,\end{array} \\ b_{11}(v)=&\bigg \{\begin{array}{ll} -\sqrt{2}-0.1,& |v|<1,\\ -\sqrt{2},& |v|>1,\end{array}&b_{12}(v)=&\bigg \{\begin{array}{ll} 1.1,& |v|<1,\\ -0.9,& |v|>1,\end{array}&b_{13}(v)=&\bigg \{\begin{array}{ll} -0.8,& |v|<1,\\ 1.0,& |v|>1,\end{array} \end{aligned}$$
$$\begin{aligned} b_{21}(v)=&\bigg \{\begin{array}{ll} -1.1,& |v|<1,\\ 0.9,& |v|>1,\end{array}&b_{22}(v)=&\bigg \{\begin{array}{ll} -\sqrt{2},& |v|<1,\\ -\sqrt{2}-0.1,& |v|>1,\end{array}&b_{23}(v)=&\bigg \{\begin{array}{ll} -1.0,& |v|<1,\\ 0.8,& |v|>1,\end{array}\\ b_{31}(v)=&\bigg \{\begin{array}{ll} -0.8,& |v|<1,\\ 1.0,& |v|>1,\end{array}&b_{32}(v)=&\bigg \{\begin{array}{ll} 1.0,& |v|<1,\\ -0.9,& |v|>1,\end{array}&b_{33}(v)=&\bigg \{\begin{array}{ll} -\sqrt{2}-0.1,& |v|<1,\\ -\sqrt{2},& |v|>1,\end{array} \end{aligned}$$

It is easy to verify that the activation functions satisfy Assumption 1 with \(\Sigma =I_3,\) and that \(\mu =0.3,\bar{\tau }=0.6.\) Setting \(m=2\) and resorting to the Matlab LMI Control Toolbox, we find that the LMIs in Theorem 2 are feasible, with the following control gain matrices:

$$\begin{aligned} G_{1}=\left[ \begin{array}{lll} 0.5194 & -2.4359& -0.0946 \\ 1.2318 & 2.7617 & 0.1058\\ -1.8931 & 0.0582 & 2.1028 \end{array}\right] ,\ \ \ G_{2}=\left[ \begin{array}{lll} -2.6240 & 0.6905 & 0.5419\\ -1.3279 & -1.8124 & 0.0941\\ -2.0183& 1.5187& 1.9018 \end{array}\right] . \end{aligned}$$
Fig. 1 Chaotic attractor of Example 1

Fig. 2 The phase trajectories of \(t-x_1(t)-y_1(t)\)

Fig. 3 The phase trajectories of \(t-x_2(t)-y_2(t)\)

Fig. 4 The phase trajectories of \(t-x_3(t)-y_3(t)\)

Fig. 5 The error state of \(t-e_1(t)-e_2(t)-e_3(t)\)

Figure 1 shows that the neural network model has a chaotic attractor for the initial values \(x_1(t)=0.5,\,x_2(t)=-0.5,\,x_3(t)=1.5,\ t\in [-1,0].\) The initial values of the response system are taken as \(y_1(t)=-0.5,\,y_2(t)=2.5,\,y_3(t)=-0.3,\ t\in [-1,0].\) Figures 2, 3 and 4 depict the phase trajectories of the drive system and the response system, and Fig. 5 shows the error states. As Figs. 2, 3, 4 and 5 illustrate, the response system (2) synchronizes with the drive system (1).
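For readers who wish to reproduce these simulations, the following forward-Euler sketch integrates the drive system (1) and the controlled response system (2) with the parameters of Example 1; the step size, horizon, constant extension of the initial history over \([-\bar{\tau },0]\) and the Euler scheme itself are our own choices, not prescribed by the paper:

```python
import numpy as np

h, T = 1e-3, 30.0                                # step size and horizon (illustrative)
steps = int(T / h)
tau = lambda t: 0.3 + 0.3 * np.cos(t)            # time-varying delay, tau_bar = 0.6
f = lambda v: np.tanh(np.abs(v) - 1.0)           # activation f_i, with T_j = 1
c = 1.5
p4, s2 = np.pi / 4, np.sqrt(2)

# State-dependent weights: first matrix applies where |v_j| < 1, second where |v_j| > 1.
A_in  = np.array([[1 + p4, -1.1, 1.0], [0.2, 0.9 + p4, -0.6], [1.0, -1.1, 1 + p4]])
A_out = np.array([[0.9 + p4, 0.9, 1.5], [1.1, 1 + p4, -1.3], [0.9, 0.9, 0.9 + p4]])
B_in  = np.array([[-s2 - 0.1, 1.1, -0.8], [-1.1, -s2, -1.0], [-0.8, 1.0, -s2 - 0.1]])
B_out = np.array([[-s2, -0.9, 1.0], [0.9, -s2 - 0.1, 0.8], [1.0, -0.9, -s2]])
W = lambda Win, Wout, v: np.where(np.abs(v)[None, :] < 1.0, Win, Wout)  # a_ij(v_j), b_ij(v_j)

G1 = np.array([[0.5194, -2.4359, -0.0946], [1.2318, 2.7617, 0.1058], [-1.8931, 0.0582, 2.1028]])
G2 = np.array([[-2.6240, 0.6905, 0.5419], [-1.3279, -1.8124, 0.0941], [-2.0183, 1.5187, 1.9018]])

buf = int(0.6 / h) + 1                           # history buffer covering [-tau_bar, 0]
x = np.tile([0.5, -0.5, 1.5], (steps + buf, 1))  # drive state with constant initial history
y = np.tile([-0.5, 2.5, -0.3], (steps + buf, 1)) # response state

for k in range(buf - 1, steps + buf - 1):
    t = (k - buf + 1) * h
    d = k - int(round(tau(t) / h))               # index of the delayed instant t - tau(t)
    u = G1 @ (y[k] - x[k]) + G2 @ (y[d] - x[d])  # control law (4)
    # note: the delayed weights b_ij depend on the *current* state, cf. system (1)
    dx = -c * x[k] + W(A_in, A_out, x[k]) @ f(x[k]) + W(B_in, B_out, x[k]) @ f(x[d])
    dy = -c * y[k] + W(A_in, A_out, y[k]) @ f(y[k]) + W(B_in, B_out, y[k]) @ f(y[d]) + u
    x[k + 1] = x[k] + h * dx
    y[k + 1] = y[k] + h * dy

print("final synchronization error:", np.abs(y[-1] - x[-1]))
```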

It is easy to see that the above activation functions do not satisfy the assumptions in [4, 10, 14, 38, 39, 44, 45]. Therefore, the conditions of [4, 10, 14, 38, 39, 44, 45] fail to verify the synchronization of this example.

5 Conclusion

This paper deals with the synchronization problem for memristive chaotic neural networks with time-varying delays. Based on our proposed multiple integral forms of the Wirtinger-based integral inequality and the reciprocally convex combination approach for the high order case, several novel delay-dependent conditions are established to achieve synchronization of the memristive chaotic neural networks with time-varying delays, and the control gain matrices are obtained explicitly. A numerical example demonstrates the effectiveness of the theoretical results.