1 Introduction

The memristor was first proposed in 1971 [1] as a device defined by a nonlinear relationship between the charge q and the magnetic flux \(\varphi \). Shortly afterwards, this new device was employed in a circuit system, which caused a great response worldwide [2]. However, owing to the particularity of this circuit element, a long time elapsed between its theoretical proposal and its physical implementation: a device exhibiting memristive behavior was not available until 2008, and this milestone breakthrough drew ever-increasing attention to memristors [3].

Recent research has showcased unprecedented worldwide interest in the memristor. As shown by S. Williams and his coworkers, solid-state memristors can be used to realize crossbar latches, which may substitute for transistors in future computers. The basis of this meaningful application lies in the nonvolatile nature of the memristor: the amount of charge that has passed through the device determines its resistance. This discovery promises applications in next-generation memory technology, and in particular in next-generation computers, which could start up instantly with no need for "booting time".

Neural networks can be seen as a powerful tool for dealing with practical problems [4,5,6,7,8,9,10], and these interesting results have attracted more and more researchers' attention [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. Among them, fixed-time synchronization of memristive systems was considered in [9], finite-time synchronization of coupled models was explored in [11], and, drawing support from passivity theory, the relevant control criteria were addressed in [25].

However, as a practical matter, the effects caused by external interference are inevitable: small fluctuations in the environment (temperature, humidity, air pressure, air flow, electric field, magnetic field, etc.) may destroy the stability of a system. Moreover, friction, lubrication, forces, elastic deformation, and other fluctuations within the system may also be important sources of instability for a control system. This constitutes another motivation of this paper.

It is worth mentioning that very few results have been devoted to the dynamic analysis of memristive systems [26,27,28,29,30,31,32]. Among them, dissipativity findings for a stochastic memristive model were addressed in [26], while [27] and [32] were interested in memristive models with Markovian jumps. Most of the existing results employ differential inclusion theory and the Filippov solution. However, the special characteristics of the memristive model may render the parameters incompatible for different initial values. To overcome this shortcoming, a new robust algorithm is proposed, whereby the target model can be treated as a class of systems with uncertain parameters; this constitutes another main element of this brief.

According to the above analysis, this paper studies the exponential synchronization (ES) of a stochastic memristive model. The contributions of this brief can be summarized as follows: (i) the effect brought about by stochastic disturbances is considered; (ii) considering the specificity of the memristor, the target network is translated into a model with uncertain parameters, so that the derived findings can also be employed to treat this kind of issue.

Notations: \(*\) denotes a term induced by symmetry in a matrix. \((\varvec{\Omega }, \mathcal {F}, \mathcal {P})\) is a probability space, where \( \varvec{\Omega }\) is the sample space, \(\mathcal {F}\) is the \(\sigma \)-algebra of subsets of the sample space, and \(\mathcal {P} \) is the probability measure on \(\mathcal {F}\); \(\mathbb {E}\) refers to the expectation operator with respect to the probability measure \(\mathcal {P} \). Set \(\overline{a}_{ij}=\max \{ a_{ij}^{\star } , a_{ij}^{\star \star } \}\), \(\underline{a}_{ij}=\min \{ a_{ij}^{\star } , a_{ij}^{\star \star } \}\), \(a^+_{ij}=\frac{1}{2}(\overline{a}_{ij}+\underline{a}_{ij})\), \(a^-_{ij}=\frac{1}{2}(\overline{a}_{ij}-\underline{a}_{ij})\), \(\overline{b}_{ij}=\max \{ b_{ij}^{\star } , b_{ij}^{\star \star } \}\), \(\underline{b}_{ij}=\min \{ b_{ij}^{\star }, b_{ij}^{\star \star } \}\), \(b^+_{ij}=\frac{1}{2}(\overline{b}_{ij}+\underline{b}_{ij})\), \(b^-_{ij}=\frac{1}{2}(\overline{b}_{ij}-\underline{b}_{ij})\).
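
These midpoint/half-range quantities recur throughout the paper; they amount to the following elementary computation. A minimal helper, provided only for the numerical checks later in this brief (not part of the original analysis):

```python
import numpy as np

def midpoint_halfrange(W_star, W_2star):
    """Return (W^+, W^-): elementwise midpoint and half-range of two levels."""
    hi, lo = np.maximum(W_star, W_2star), np.minimum(W_star, W_2star)
    return (hi + lo) / 2, (hi - lo) / 2
```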

2 Model Description and Preliminaries

The stochastic memristive model to be studied is given in the following form:

$$\begin{aligned} \begin{aligned} d x( t)&= \Big [ - H x(t )+ A(t) f(x(t ))+ B(t) f(x(t-\delta (t))) \Big ]dt\\&\quad +\, \Big [ H_1 x(t)+ H_2 x(t-\delta (t))+H_3 f(x(t )) +H_4 f(x(t-\delta (t))) \Big ] d\omega (t), \end{aligned} \end{aligned}$$
(1)

where x(t) signifies the neuron state vector; \(H=\text {diag}(h_{ 1},h_{ 2},\cdots ,h_{ n})>0\); \(H_1\), \(H_2\), \(H_3\) and \(H_4\) are connection matrices; \(A(t )=(a_{ij}(t ))_{n \times n}\) and \(B(t )=(b_{ij}(t ))_{n \times n}\) are the connection weight matrices; f(x(t)) denotes the neuron activation function at time t; \(\omega (t)\) is a one-dimensional Brownian motion on \((\varvec{\Omega }, \mathcal {F}, \mathcal {P})\) subject to \(\mathbb {E}\{d\omega (t)\}=0\), \(\mathbb {E}\{d\omega (t)^2\}=dt\); and \(\delta (t)\) is the time-varying delay restricted by

$$\begin{aligned} 0<\delta (t)\le \tau , \ \ \ \ \dot{\delta }(t)\le \mu , \end{aligned}$$

where \(\tau >0\) and \(\mu >0\) are scalars, and

$$\begin{aligned} \begin{aligned} a_{ij}(x_i(t))=\left\{ \begin{array}{l} a_{ij}^{\star }, \ \ \ | x_i(t)|\le \digamma _i,\\ a_{ij}^{\star \star }, \ | x_i(t)|> \digamma _i, \end{array} \right. \ \ \ b_{ij}(x_i(t))=\left\{ \begin{array}{l} b_{ij}^{\star }, \ \ \ |x_i(t)|\le \digamma _i,\\ b_{ij}^{\star \star }, \ |x_i(t)| > \digamma _i, \end{array} \right. \end{aligned} \end{aligned}$$
(2)

\(a_{ij}^\star , a_{ij}^{\star \star }, b_{ij}^\star , b_{ij}^{\star \star }\) are known scalars, and \(\digamma _i>0\) are the switching thresholds.
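
Since the weights in (2) switch with the state, any direct simulation of (1) has to re-evaluate A and B at every step and read the delayed state from a history buffer. The following Euler–Maruyama sketch illustrates this; all numerical values (weight levels, thresholds, delay, activation) are placeholders chosen for illustration, not parameters taken from the paper:

```python
import numpy as np

n, dt, T = 3, 1e-3, 10.0
rng = np.random.default_rng(0)

# placeholder parameters, for illustration only
H = np.diag([3.0, 1.0, 2.0])
A_star, A_sstar = -1.0 * np.eye(n), -1.2 * np.eye(n)   # a_ij^*, a_ij^**
B_star, B_sstar = 0.3 * np.eye(n), 0.2 * np.eye(n)     # b_ij^*, b_ij^**
H1 = H2 = H3 = H4 = 0.1 * np.eye(n)
thresh = np.ones(n)                                    # thresholds in (2)
f = np.tanh                                            # activation function
tau = 0.3
delta = lambda t: 0.2 + 0.1 * np.sin(t)                # 0 < delta(t) <= tau

def switched(x, lo, hi):
    # row i of the weight matrix follows |x_i|, as in (2)
    return np.where((np.abs(x) > thresh)[:, None], hi, lo)

buf = int(tau / dt) + 1                                # history buffer length
x_hist = np.tile(rng.standard_normal(n), (buf, 1))

for k in range(int(T / dt)):
    t = k * dt
    x = x_hist[-1]
    x_del = x_hist[-1 - int(round(delta(t) / dt))]     # delayed state
    A = switched(x, A_star, A_sstar)
    B = switched(x, B_star, B_sstar)
    drift = -H @ x + A @ f(x) + B @ f(x_del)
    diffu = H1 @ x + H2 @ x_del + H3 @ f(x) + H4 @ f(x_del)
    # one-dimensional Brownian increment, shared by all components
    x_new = x + drift * dt + diffu * np.sqrt(dt) * rng.standard_normal()
    x_hist = np.roll(x_hist, -1, axis=0)
    x_hist[-1] = x_new
```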

Lemma 2.1

For real matrices \(\underline{A }\in \mathbb {R}^{n \times n}\), \(\bar{A} \in \mathbb {R}^{n \times n}\) with \(A(t)\in [\underline{A }, \bar{A}]\), there exist matrices G, H, and F(t) such that:

$$\begin{aligned} A(t)= \frac{1}{2}(\underline{A } + \bar{A})+G F(t)H, \end{aligned}$$

and

$$\begin{aligned} F^T(t) F(t)\le I. \end{aligned}$$

Proof

Let \(\underline{A }=(\underline{a}_{ij})_{n \times n}\), \(\bar{A}=(\bar{a}_{ij})_{n \times n}\), and \(A(t)=(a_{ij}(t))_{n \times n}\). Since \(A(t)\in [\underline{A }, \bar{A }]\), one has

$$\begin{aligned} \underline{a}_{ij} \le a_{ij}(t ) \le \bar{a}_{ij}. \end{aligned}$$

Let \(\psi (x)=a_{ij}^{+}+a_{ij}^{-}x\), which implies that \(\psi (-1)=\underline{a}_{ij} \) and \(\psi ( 1)=\bar{a}_{ij} \). Then, by the Intermediate Value Theorem, there exists \(F_{ij}(t)\in [-1,1]\) such that:

$$\begin{aligned} \psi (F_{ij}(t)) =a_{ij}(t), \end{aligned}$$

i.e.,

$$\begin{aligned} a_{ij}(t)=a_{ij}^{+} + a_{ij}^{-}F_{ij}(t). \end{aligned}$$

Let \(G=(G_1, G_2, \cdots , G_n)\), \(H=(H_1^T, H_2^T, \cdots , H_n^T)^T\), \(F(t)=\text {diag}\{F_{11}(t), \cdots , F_{1n}(t), \cdots , F_{ n1}(t), \cdots , F_{nn}(t)\}\), where

$$\begin{aligned} \begin{aligned}&G_i=\zeta _i \Big ( (a^-_{i1})^{1-\lambda }, (a^-_{i2})^{1-\lambda }, \cdots , (a^-_{in})^{1-\lambda } \Big ),\\&H_i= \text {diag}\{ (a^-_{i1})^{\lambda }, (a^-_{i2})^{\lambda }, \cdots , (a^-_{in})^{\lambda } \}, \end{aligned} \end{aligned}$$

where \(\lambda \in [0,1]\) and \(\zeta _i\in \mathbb {R}^n\) is the column vector whose ith element is 1 and whose other elements are 0. A direct computation gives \((G F(t) H)_{ij}=a^-_{ij}F_{ij}(t)\); hence \(F^T(t)F(t)\le I\) and \(A(t)=\frac{1}{2}(\underline{A} +\bar{A} )+G F(t) H\). \(\square \)

Thus, system (1) can be recast as:

$$\begin{aligned} \begin{aligned} d x( t)&= \Big [ - H x(t )+ \Big ( A^+ + M_a\Theta _1(t )N_a \Big ) f(x(t ))\\&\quad +\, \Big (B^+ + M_b\Theta _2(t )N_b \Big ) f(x(t-\delta (t))) \Big ]dt\\&\quad +\, \Big [ H_1 x(t)+ H_2 x(t-\delta (t))+H_3 f(x(t )) +H_4 f(x(t-\delta (t))) \Big ] d\omega (t), \end{aligned} \end{aligned}$$
(3)

where

$$\begin{aligned} \begin{aligned}&M_a=\Big (\sqrt{a^-_{11}}\zeta _1 ,\cdots , \sqrt{a^-_{1n}}\zeta _1 ,\cdots ,\sqrt{a^-_{n1}}\zeta _n ,\cdots ,\sqrt{a^-_{nn}}\zeta _n\Big )_{n\times n^2}\\&N_a=\Big (\sqrt{a^-_{11}}\zeta _1 ,\cdots , \sqrt{a^-_{1n}}\zeta _n ,\cdots ,\sqrt{a^-_{n1}}\zeta _1 ,\cdots ,\sqrt{a^-_{nn}}\zeta _n\Big )^T_{n^2\times n },\\&M_b=\Big (\sqrt{b^-_{11}}\zeta _1 ,\cdots , \sqrt{b^-_{1n}}\zeta _1 ,\cdots ,\sqrt{b^-_{n1}}\zeta _n ,\cdots ,\sqrt{b^-_{nn}}\zeta _n\Big )_{n\times n^2}\\&N_b=\Big (\sqrt{b^-_{11}}\zeta _1 ,\cdots , \sqrt{b^-_{1n}}\zeta _n ,\cdots ,\sqrt{b^-_{n1}}\zeta _1 ,\cdots ,\sqrt{b^-_{nn}}\zeta _n\Big )^T_{n^2\times n }, \end{aligned} \end{aligned}$$

where \(\zeta _i\in \mathbb {R}^n\) is, as above, the column vector whose ith element is 1 and all other elements are 0; besides, \(\Theta _i^T(t)\Theta _i(t)\le I\), \(i=1,2\).
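
Taking \(\lambda =1/2\) in the proof of Lemma 2.1, the matrices G and H reduce exactly to this \(M_a\), \(N_a\) pair, with \(\Theta _1(t)\) collecting the \(F_{ij}(t)\). The block structure can also be verified numerically; a minimal sketch with an arbitrary placeholder matrix of half-ranges:

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
a_minus = rng.uniform(0.1, 1.0, (n, n))        # placeholder half-ranges a_ij^-

# M_a: column (i*n + j) is sqrt(a_ij^-) * zeta_i
# N_a: row    (i*n + j) is sqrt(a_ij^-) * zeta_j^T
Ma = np.zeros((n, n * n))
Na = np.zeros((n * n, n))
for i in range(n):
    for j in range(n):
        Ma[i, i * n + j] = np.sqrt(a_minus[i, j])
        Na[i * n + j, j] = np.sqrt(a_minus[i, j])

Theta = np.diag(rng.uniform(-1.0, 1.0, n * n))  # Theta^T Theta <= I
lhs = Ma @ Theta @ Na
rhs = a_minus * np.diag(Theta).reshape(n, n)    # a_ij^- * F_ij entrywise
assert np.allclose(lhs, rhs)                    # the identity holds
```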

To reach the synchronization goal, the response system is designed as:

$$\begin{aligned} \begin{aligned} d y( t)&= \Big [ - H y(t )+ A(t) f(y(t )) + B(t) f(y(t-\delta (t)))+u(t) \Big ]dt\\&\quad +\, \Big [ H_1 y(t)+ H_2 y (t-\delta (t))+H_3 f(y(t )) +H_4 f(y(t-\delta (t))) \Big ] d\omega (t), \end{aligned} \end{aligned}$$
(4)

where u(t) is the controller designed to reach the synchronization goal. Then, repeating the above analysis, one can easily see that the response system (4) can be further rewritten as:

$$\begin{aligned} \begin{aligned} d y( t)&= \Big [ - H y(t )+ \Big (A^+ + M_a \Theta _3(t )N_a \Big ) f(y(t ))\\&\quad +\, \Big (B^+ + M_b\Theta _4(t )N_b \Big ) f(y(t-\delta (t)))+u(t) \Big ]dt\\&\quad +\, \Big [ H_1 y(t)+ H_2 y(t-\delta (t))+H_3 f(y(t )) +H_4 f(y(t-\delta (t))) \Big ] d\omega (t), \end{aligned} \end{aligned}$$
(5)

where \(\Theta _k^T(t )\Theta _k(t )\le I\), \(k=3,4\).

Define the synchronization error as:

$$\begin{aligned} \theta (t )=y(t )-x(t ), \end{aligned}$$

then the error system can be described as:

$$\begin{aligned} \begin{aligned} d \theta ( t)&= \Big [ - H \theta (t )+ \Big (A^+ + M_a\Theta _3(t )N_a \Big ) g(\theta (t ))\\&\quad +\, \Big ( B^+ + M_b\Theta _4(t )N_b \Big ) g(\theta (t-\delta (t))) +u(t)\\&\quad +\, \Big ( M_a\Theta _3(t )N_a-M_a\Theta _1(t )N_a\Big ) f(x(t))\\&\quad +\, \Big ( M_b\Theta _4(t )N_b-M_b\Theta _2(t )N_b\Big ) f(x(t-\delta (t))) \Big ]dt\\&\quad +\, \Big [ H_1 \theta (t)+ H_2 \theta (t-\delta (t))+H_3 g(\theta (t )) +H_4 g(\theta (t-\delta (t))) \Big ] d\omega (t), \end{aligned} \end{aligned}$$
(6)

where \(g(\theta (\cdot ))=f(y( \cdot ))-f(x( \cdot ))\), the initial condition of (6) is given by:

$$\begin{aligned} \theta ( s) = \varphi (s), \ \ \ s\in [-\tau ,0]. \end{aligned}$$
(7)

Before moving on, the following two new state variables for the error memristive neural network (6) are presented:

$$\begin{aligned} \begin{aligned} \psi (t)&= - H \theta (t )+ \Big (A^+ + M_a\Theta _3(t )N_a \Big ) g(\theta (t ))\\&\quad + \Big (B^+ + M_b\Theta _4(t )N_b \Big ) g(\theta (t-\delta (t))) +u(t)\\&\quad +\, \Big ( M_a\Theta _3(t )N_a-M_a\Theta _1(t )N_a\Big ) f(x(t))\\&\quad +\, \Big ( M_b\Theta _4(t )N_b-M_b\Theta _2(t )N_b\Big ) f(x(t-\delta (t))),\\ \phi (t)&= H_1 \theta (t)+ H_2 \theta (t-\delta (t))+H_3 g(\theta (t )) +H_4 g(\theta (t-\delta (t))) , \end{aligned} \end{aligned}$$

then the error model (6) can be rewritten compactly as:

$$\begin{aligned} d \theta ( t) = \psi (t) dt+ \phi (t) d\omega (t). \end{aligned}$$
(8)

\((A_1)\): For all \(x_1, x_2 \in \mathbb {R}\) with \(x_1\ne x_2\), the neuron activation functions \(f_i(\cdot )\) satisfy:

$$\begin{aligned} \beta _i^-\le \frac{f_i(x_1)-f_i(x_2)}{x_1-x_2}\le \beta _i^+, \ \ |f_i(\cdot ) |\le F_i. \end{aligned}$$

where \(\beta _i^-\), \(\beta _i^+\), \(F_i>0\) are scalars. \(\square \)

Definition 2.1

The trivial solution of (6) is said to be exponentially stable (ES) in the mean square if there exist constants \(\gamma > 0\), \( \varrho >0\), such that

$$\begin{aligned} \mathbb {E} \{ \Vert \theta (t)\Vert ^2 \} \le \gamma e^{-\varrho t} \sup _{s\in [-\tau , 0]} \mathbb {E} \{ \Vert \varphi (s)\Vert ^2 \} \end{aligned}$$

is true.

Lemma 2.2

([33]) Let \(\mathfrak {U}\), \(\mathfrak {H}\) be matrices and \(\mathfrak {R}\) a symmetric matrix. Then

$$\begin{aligned} \mathfrak {R}+\mathfrak {U}\mathfrak {F}\mathfrak {H}+(\mathfrak {U}\mathfrak {F}\mathfrak {H})^T<0 \end{aligned}$$

holds for all \(\mathfrak {F}\) satisfying \(\mathfrak {F}^T\mathfrak {F}\le I\) if and only if

$$\begin{aligned} \mathfrak {R}+\epsilon \mathfrak {U}\mathfrak {U}^T+\epsilon ^{-1}\mathfrak {H}^T\mathfrak {H}<0 \end{aligned}$$

holds for some scalar \(\epsilon >0\).

Lemma 2.3

([33]) Given matrices \(\Delta _1,\Delta _2,\Delta _3\) where \(\Delta _1=\Delta _1^T\), \(0<\Delta _2=\Delta _2^T\), then

$$\begin{aligned} \Delta _1+\Delta _3^T\Delta _2^{-1}\Delta _3<0, \end{aligned}$$

if and only if

$$\begin{aligned} \begin{aligned} \begin{pmatrix}\Delta _1&{}\quad \Delta _3^T\\ \Delta _3&{}\quad -\Delta _2\end{pmatrix}<0 \ \ \ \ \text {or} \ \ \ \ \ \begin{pmatrix}-\Delta _2&{}\quad \Delta _3\\ \Delta _3^T&{}\quad \Delta _1\end{pmatrix}<0. \end{aligned} \end{aligned}$$
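
Lemma 2.3 is the standard Schur complement; as a quick numerical sanity check (random data, illustration only, not part of the original analysis), the equivalence can be probed as follows:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
X = rng.standard_normal((n, n))
D2 = np.eye(n) + X @ X.T                         # Delta_2 = Delta_2^T > 0
D3 = rng.standard_normal((n, n))
# choose Delta_1 so that Delta_1 + Delta_3^T Delta_2^{-1} Delta_3 = -I < 0
D1 = -D3.T @ np.linalg.inv(D2) @ D3 - np.eye(n)

neg_def = lambda M: bool(np.all(np.linalg.eigvalsh(M) < 0))
block = np.block([[D1, D3.T], [D3, -D2]])        # first block form in Lemma 2.3
assert neg_def(D1 + D3.T @ np.linalg.inv(D2) @ D3) == neg_def(block)
```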

3 Main Conclusions

This section specializes in the ES control of the memristive model with stochastic terms. To reach this goal, the following control strategy is adopted:

$$\begin{aligned} u(t)=-k \theta (t )-\Gamma \text {sign}(\theta (t )), \end{aligned}$$
(9)

where k is the feedback gain matrix to be designed and \(\Gamma =\text {diag}\{r_1, r_2, \cdots , r_n\}\).
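
In a simulation, (9) is evaluated pointwise along the error trajectory. A minimal sketch (the gain K is a placeholder; the \(r_i\) values anticipate Example 1 below):

```python
import numpy as np

def controller(theta, K, r):
    # u(t) = -k*theta(t) - Gamma*sign(theta(t)), cf. (9), with Gamma = diag(r)
    return -K @ theta - r * np.sign(theta)

K = 5.0 * np.eye(3)                  # placeholder feedback gain k
r = np.array([4.5, 3.5, 4.5])        # gains satisfying (11) in Example 1
u = controller(np.array([0.2, -0.1, 0.05]), K, r)
```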

Theorem 3.1

Under \((A_1)\), the trivial solution of (6) is ES in the mean square if there exist positive definite matrices P, Q, a matrix R, diagonal matrices \(M_1>0\), \(M_2>0\), a matrix \(N= (N_1^T, N_2^T, N_3^T, N_4^T, N_5^T)^T\), and constants \(\varsigma _1>0\), \(\varsigma _2>0\), such that:

$$\begin{aligned} \begin{aligned}&\Omega =\left[ \begin{array}{ccccccccccc} \Omega _{11}&{}\quad \Omega _{12}&{}\quad \Omega _{13}&{}\quad \Omega _{14} &{}\quad \Omega _{15}&{}\quad PM_a&{}\quad PM_b\\ *&{}\quad \Omega _{22} &{}\quad \Omega _{23}&{}\quad \Omega _{24}&{}\quad \Omega _{25}&{}\quad 0&{}\quad 0\\ *&{}\quad *&{}\quad \Omega _{33}&{}\quad \Omega _{34}&{}\quad \Omega _{35}&{}\quad 0&{}\quad 0\\ *&{}\quad *&{}\quad *&{}\quad \Omega _{44}&{}\quad \Omega _{45}&{}\quad 0&{}\quad 0\\ *&{}\quad *&{}\quad *&{}\quad *&{} \quad \Omega _{55}&{}\quad 0&{}\quad 0\\ *&{}\quad *&{}\quad *&{}\quad *&{}\quad *&{}\quad -\varsigma _1 I&{}\quad 0\\ *&{}\quad *&{}\quad *&{}\quad *&{}\quad *&{}\quad *&{}\quad - \varsigma _2 I \end{array} \right] <0, \end{aligned} \end{aligned}$$
(10)

where

$$\begin{aligned} \begin{aligned}&\Omega _{11}=-2PH-2R+Q-\beta _1M_1\beta _2 + N_1 H_1+ H_1^T N_1^T, \ \ \Omega _{12}= N_1 H_2+ H_1^TN_2^T,\\&\Omega _{13}=-N_1+H_1^TN_3^T,\ \ \Omega _{14}= PA^++\frac{1}{2}M_1(\beta _1+\beta _2)+N_1H_3+H_1^TN_4^T, \\&\Omega _{15}= PB^+ +N_1H_4+H_1^TN_5^T, \ \ \Omega _{22}=-(1-\mu )Q-\beta _1M_2\beta _2+ N_2H_2+H_2^TN_2^T, \\&\Omega _{23}=-N_2+H_2^TN_3^T,\ \ \Omega _{24}= N_2H_3+H_2^TN_4^T, \\&\Omega _{25}= \frac{1}{2}M_2(\beta _1+\beta _2)+N_2H_4+H_2^TN_5^T, \ \ \Omega _{33}=P- N_3-N_3^T, \ \ \Omega _{34}=N_3H_3-N_4^T, \\&\Omega _{35}=N_3H_4-N_5^T, \ \ \Omega _{44}=-M_1+ N_4H_3+H_3^TN_4^T+\varsigma _1N_a^TN_a, \\&\Omega _{45}=N_4H_4+H_3^TN_5^T, \ \ \Omega _{55}= -M_2+ N_5H_4+H_4^TN_5^T+\varsigma _2N_b^TN_b, \\&\eta ^T(t )=\Big (\theta ^T(t ), \theta ^T(t- \delta (t) ), \phi ^T(t), g^T(\theta (t )), g^T(\theta (t-\delta (t)) ) \Big ),\\&\beta _1=\text {diag}\Big (\beta _1^-,\beta _2^-,\cdots ,\beta _n^-\Big ), \ \ \ \beta _2=\text {diag}\Big (\beta _1^+,\beta _2^+,\cdots ,\beta _n^+\Big ). \end{aligned} \end{aligned}$$

The parameters in (9) are subject to the following restriction:

$$\begin{aligned} r_i\ge \sum _{j=1}^n \Big ( |\bar{a}_{ij} -\underline{a}_{ij} | +|\bar{b}_{ij} -\underline{b}_{ij} |\Big )F_j, \end{aligned}$$
(11)

and the control gain is then obtained as \(k=P^{-1}R\).

Proof

Consider the following Lyapunov functional:

$$\begin{aligned} V(t )= \theta ^T(t)P\theta (t)+\int _{t-\delta (t)}^t \theta ^T(s)Q\theta (s)ds. \end{aligned}$$
(12)

Then, by means of Itô’s differential formula, one has:

$$\begin{aligned} d V(t )= \mathcal {L} V(t)dt + 2 \theta ^T(t)P \phi (t) d\omega (t), \end{aligned}$$
(13)

where

$$\begin{aligned} \begin{aligned} \mathcal {L} V(t)&\le 2\theta ^T(t)P \psi (t)+ \phi ^T(t)P\phi (t)+ \theta ^T(t)Q\theta (t)-(1-\mu )\theta ^T(t-\delta (t))Q\theta (t-\delta (t))\\&= 2 \theta ^T(t)P \Big [ - H \theta (t )+ \Big (A^+ + M_a\Theta _3(t )N_a \Big ) g(\theta (t ))\\&\quad +\, \Big (B^+ + M_b\Theta _4(t )N_b \Big ) g(\theta (t-\delta (t))) \\&\quad +\, \Big ( M_a\Theta _3(t )N_a-M_a\Theta _1(t )N_a\Big ) f(x(t))\\&\quad +\, \Big ( M_b\Theta _4(t )N_b-M_b\Theta _2(t )N_b\Big ) f(x(t-\delta (t)))\\&\quad -\, k \theta (t )-\Gamma \text {sign}(\theta (t )) \Big ]+ \phi ^T(t)P\phi (t)+ \theta ^T(t)Q\theta (t)\\&\quad -\, (1-\mu )\theta ^T(t-\delta (t))Q\theta (t-\delta (t)), \end{aligned} \end{aligned}$$
(14)

considering that:

$$\begin{aligned} \begin{aligned}&2\theta ^T(t)P \Big [ \Big ( M_a\Theta _3(t )N_a-M_a\Theta _1(t )N_a\Big ) f(x(t))\\&\qquad + \Big ( M_b\Theta _4(t )N_b-M_b\Theta _2(t )N_b\Big ) f(x(t-\delta (t))) -\Gamma \text {sign}(\theta (t )) \Big ]\\&\quad \le 2\sum _{i=1}^n | \theta _i(t)| p_i \bigg [ \sum _{j=1}^n \Big ( |\bar{a}_{ij} -\underline{a}_{ij} | +|\bar{b}_{ij} -\underline{b}_{ij} |\Big )F_j-r_i \bigg ]\\&\quad \le 0, \end{aligned} \end{aligned}$$
(15)

thus, a tighter estimate of (14) can be obtained as:

$$\begin{aligned} \begin{aligned} \mathcal {L} V(t)&\le \, 2 \theta ^T(t)P \Big [ - (H+k) \theta (t )+ \Big (A^+ + M_a\Theta _3(t )N_a \Big ) g(\theta (t ))\\&\quad +\, \Big (B^+ + M_b\Theta _4(t )N_b \Big ) \\&\quad \times \, g(\theta (t-\delta (t))) \Big ] + \phi ^T(t)P\phi (t)+ \theta ^T(t)Q\theta (t)\\&\quad -\,(1-\mu )\theta ^T(t-\delta (t))Q \theta (t-\delta (t)). \end{aligned} \end{aligned}$$
(16)

Moreover, for any matrix N, the following identity holds:

$$\begin{aligned} 2 \eta ^T(t)N \Big ( H_1 \theta (t)+ H_2 \theta (t-\delta (t))+H_3 g(\theta (t )) +H_4 g(\theta (t-\delta (t))) -\phi (t) \Big )=0. \end{aligned}$$
(17)

Drawing support from \((A_1)\) and the diagonal matrices \(M_1>0\), \(M_2>0\), one has:

$$\begin{aligned} \begin{aligned}&\theta ^T(t )\beta _1M_1\beta _2 \theta (t )-\theta ^T(t )M_1(\beta _1+\beta _2)g(\theta (t ))+g^T(\theta (t ))M_1g(\theta (t ))\le 0,\\&\theta ^T(t-\delta (t) )\beta _1M_2\beta _2 \theta (t-\delta (t) )-\theta ^T(t-\delta (t) )M_2(\beta _1+\beta _2)g(\theta (t-\delta (t)) )\\&\quad +g^T(\theta (t-\delta (t)) )M_2g(\theta (t-\delta (t)) )\le 0. \end{aligned} \end{aligned}$$
(18)

Then, substituting (16)–(18) into (13) yields:

$$\begin{aligned} d V (t )\le \Big [\eta ^T(t)\tilde{\Omega } \eta (t)\Big ]dt+ 2\theta ^T(t)P\phi (t)d\omega (t), \end{aligned}$$
(19)

where

$$\begin{aligned} \begin{aligned} \tilde{\Omega }=\left[ \begin{array}{ccccccccccc} \Omega _{11}&{}\quad \Omega _{12}&{}\quad \Omega _{13}&{}\quad \tilde{\Omega }_{14} &{}\quad \tilde{\Omega }_{15}\\ *&{}\quad \Omega _{22} &{}\quad \Omega _{23}&{}\quad \Omega _{24}&{}\quad \Omega _{25}\\ *&{}\quad *&{}\quad \Omega _{33}&{}\quad \Omega _{34}&{}\quad \Omega _{35}\\ *&{}\quad *&{}\quad *&{}\quad \tilde{\Omega }_{44}&{}\quad \Omega _{45}\\ *&{}\quad *&{}\quad *&{}\quad *&{}\quad \tilde{\Omega }_{55} \end{array} \right] , \end{aligned} \end{aligned}$$

with

$$\begin{aligned} \begin{aligned} \tilde{\Omega }_{14}&= PA^+ + P M_a\Theta _3(t)N_a+\frac{1}{2}M_1(\beta _1+\beta _2)+N_1H_3+H_1^TN_4^T,\\ \tilde{\Omega }_{15}&= PB^+ + P M_b\Theta _4(t)N_b+N_1H_4+H_1^TN_5^T, \\ \tilde{\Omega }_{44}&=-M_1+ N_4H_3+ H_3^TN_4^T, \\ \tilde{\Omega }_{55}&= -M_2+ N_5H_4+H_4^TN_5^T, \end{aligned} \end{aligned}$$

then, taking account of \(\tilde{\Omega }_{14}\) and \(\tilde{\Omega }_{15}\), \(\tilde{\Omega }\) can be further decomposed as:

$$\begin{aligned} \begin{aligned} \tilde{\Omega }&=\left[ \begin{array}{ccccccccccc} \Omega _{11}&{}\quad \Omega _{12}&{}\quad \Omega _{13}&{}\quad \Omega _{14} &{}\quad \Omega _{15}\\ *&{}\quad \Omega _{22} &{}\quad \Omega _{23}&{}\quad \Omega _{24}&{}\quad \Omega _{25}\\ *&{}\quad *&{}\quad \Omega _{33}&{}\quad \Omega _{34}&{}\quad \Omega _{35}\\ *&{}\quad *&{}\quad *&{}\quad \tilde{\Omega }_{44}&{}\quad \Omega _{45}\\ *&{}\quad *&{}\quad *&{}\quad *&{}\quad \tilde{\Omega }_{55} \end{array} \right] \\&\quad +\, \begin{pmatrix}0\\ 0\\ 0\\ N_a^T \\ 0 \end{pmatrix}\Theta _3^T(t) \begin{pmatrix}M_a^TP\\ 0\\ 0\\ 0\\ 0 \end{pmatrix}^T+(\cdot )+\begin{pmatrix}0\\ 0\\ 0\\ 0\\ N_b^T \end{pmatrix}\Theta _4^T(t) \begin{pmatrix}M_b^TP\\ 0\\ 0\\ 0\\ 0 \end{pmatrix}^T+(\cdot ), \end{aligned} \end{aligned}$$
(20)

by Lemmas 2.2 and 2.3, one can see that for some constants \(\varsigma _1>0\), \(\varsigma _2>0\), condition (10) on \( \Omega \) ensures

$$\begin{aligned} \tilde{\Omega }<0, \end{aligned}$$

thus, (19) yields:

$$\begin{aligned} \begin{aligned} d V (t )&\le -\lambda _{\min }(-\Omega ) ( \Vert \theta (t)\Vert ^2+ \Vert \theta (t-\delta (t))\Vert ^2) dt + 2\theta ^T(t)P\phi (t)d\omega (t). \end{aligned} \end{aligned}$$
(21)

Moreover, based on the expression of V(t), one can conclude that there exist two scalars \(\rho _1>0\), \(\rho _2>0\), such that:

$$\begin{aligned} V(t )\le \rho _1 \Vert \theta (t)\Vert ^2+ \rho _2\int _{t-\tau }^t \Vert \theta (s)\Vert ^2 ds, \end{aligned}$$
(22)

where \(\rho _1=\lambda _{\max }(P)\), \(\rho _2=\lambda _{\max }(Q)\).

Also, there must exist a constant \(\beta >0\) such that:

$$\begin{aligned} \beta (\rho _1+\tau \rho _2 e^{\beta \tau } )\le \lambda _{\min }(-\Omega ), \end{aligned}$$
(23)

therefore,

$$\begin{aligned} \begin{aligned} d(e^{\beta t}V(t))&= \beta e^{\beta t}V(t)dt+e^{\beta t} d V(t)\\&\le e^{\beta t} \Big (\beta \rho _1 \Vert \theta (t)\Vert ^2+ \beta \rho _2\int _{t-\tau }^t \Vert \theta (s)\Vert ^2 ds\\&\quad -\, \lambda _{\min }(-\Omega ) \Vert \theta (t)\Vert ^2 -\lambda _{\min }(-\Omega ) \Vert \theta (t-\delta (t))\Vert ^2 \Big )dt\\&\quad +\, 2e^{\beta t} \theta ^T(t)P\phi (t)d\omega (t). \end{aligned} \end{aligned}$$
(24)

Thus, integrating both sides of (24) from 0 to \(T>0\) and taking the mathematical expectation gives:

$$\begin{aligned} \begin{aligned} \mathbb {E}( e^{\beta T}V(T))&\le V(0)+\beta \rho _1 \mathbb {E} \Big ( \int _0^T e^{\beta t} \Vert \theta (t)\Vert ^2 dt \Big ) + \beta \rho _2 \mathbb {E} \Big ( \int _0^T \int _{t-\tau }^t e^{\beta t}\Vert \theta (s)\Vert ^2 ds dt\Big )\\&\quad -\,\lambda _{\min }(-\Omega ) \mathbb {E} \Big ( \int _0^T e^{\beta t} \Vert \theta (t)\Vert ^2 dt \Big )\\&\quad -\,\lambda _{\min }(-\Omega ) \mathbb {E} \Big ( \int _0^T e^{\beta t} \Vert \theta (t-\delta (t))\Vert ^2 dt\Big ), \end{aligned} \end{aligned}$$
(25)

Meanwhile,

$$\begin{aligned} \begin{aligned} \int _{0}^T \int _{t-\tau }^t e^{\beta t}\Vert \theta (s)\Vert ^2 ds dt&\le \int _{-\tau }^T \bigg (\int _{s \vee 0 }^{(s+\tau )\wedge T}e^{\beta t}dt \bigg ) \Vert \theta (s)\Vert ^2 ds\\&\le \int _{-\tau }^T \tau e^{\beta (s +\tau )} \Vert \theta (s)\Vert ^2 ds\\&\le \tau e^{\beta \tau }\int _{0}^T e^{\beta t} \Vert \theta (t)\Vert ^2 dt + \tau e^{\beta \tau } \int _{-\tau }^0 \Vert \theta (s)\Vert ^2 ds\\&\le \tau e^{\beta \tau }\int _{0}^T e^{\beta t} \Vert \theta (t)\Vert ^2 dt + \tau ^2 e^{\beta \tau } \sup _{-\tau \le s \le 0} \Vert \varphi (s)\Vert ^2,\\ V(0)&\le \Big (\rho _1+\tau \rho _2 \Big ) \sup _{-\tau \le s \le 0} \mathbb {E} (\Vert \varphi (s)\Vert ^2), \end{aligned} \end{aligned}$$
(26)

thus, considering the estimates in (26), (25) can be further developed as:

$$\begin{aligned} \begin{aligned} \mathbb {E}( e^{\beta T}V(T))&\le \Big (\beta \rho _1-\lambda _{\min }(-\Omega )+\beta \rho _2 \tau e^{\beta \tau }\Big ) \mathbb {E} \Big ( \int _0^T e^{\beta t} \Vert \theta (t)\Vert ^2 dt \Big )\\&\quad +\, \Big (\rho _1+\tau \rho _2 +\beta \rho _2 \tau ^2 e^{\beta \tau }\Big ) \sup _{-\tau \le s \le 0} \mathbb {E} (\Vert \varphi (s)\Vert ^2)\\&\quad -\, \lambda _{\min }(-\Omega ) \mathbb {E} \Big ( \int _0^T e^{\beta t} \Vert \theta (t-\delta (t))\Vert ^2 dt\Big )\\&\le \Big (\rho _1+\tau \rho _2 +\beta \rho _2 \tau ^2 e^{\beta \tau }\Big ) \sup _{-\tau \le s \le 0} \mathbb {E} (\Vert \varphi (s)\Vert ^2) , \end{aligned} \end{aligned}$$
(27)

as a result, one can see that:

$$\begin{aligned} \begin{aligned} \lambda _{\min }(P)\mathbb {E}( e^{\beta T} \Vert \theta (T )\Vert ^2) \le \mathbb {E}( e^{\beta T} V(T))\le \Big (\rho _1+\tau \rho _2 +\beta \rho _2 \tau ^2 e^{\beta \tau }\Big ) \sup _{-\tau \le s \le 0} \mathbb {E} (\Vert \varphi (s)\Vert ^2), \end{aligned} \end{aligned}$$
(28)

i.e.,

$$\begin{aligned} \begin{aligned} \mathbb {E}( \Vert \theta (t )\Vert ^2)&\le \frac{ \rho _1+\tau \rho _2 +\beta \rho _2 \tau ^2 e^{\beta \tau } }{\lambda _{\min }(P)} e^{- \beta t} \sup _{-\tau \le s \le 0} \mathbb {E} (\Vert \varphi (s)\Vert ^2). \end{aligned} \end{aligned}$$
(29)

Thus, based on Definition 2.1, one can conclude that the error system is ES in the mean square. The proof is thus completed. \(\square \)

If the stochastic terms are removed from (1), it degenerates into a memristive model of the form:

$$\begin{aligned} \dot{x}( t) = - H x(t )+ A(t) f(x(t ))+ B(t) f(x(t-\delta (t))), \end{aligned}$$
(30)

then repeating the above analysis gives:

$$\begin{aligned} \dot{ x}( t) = - H x(t )+ \Big (A^+ + M_a\Theta _1(t )N_a \Big ) f(x(t )) + \Big (B^+ + M_b\Theta _2(t )N_b \Big ) f(x(t-\delta (t))) . \end{aligned}$$
(31)

To reach the ES goal, the response model can be described by:

$$\begin{aligned} \dot{ y}( t) = - H y(t )+ A(t) f(y(t )) + B(t) f(y(t-\delta (t)))+u(t) , \end{aligned}$$
(32)

or, equivalently:

$$\begin{aligned} \dot{ y}( t) = - H y(t )+ \Big (A^+ + M_a\Theta _3(t )N_a \Big ) f(y(t )) + \Big (B^+ + M_b\Theta _4(t )N_b \Big ) f(y(t-\delta (t)))+u(t) , \end{aligned}$$
(33)

then, arguing as above, the error system can be written as:

$$\begin{aligned} \begin{aligned} \dot{\theta }( t)&= - H \theta (t )+ \Big (A^+ + M_a\Theta _3(t )N_a \Big ) g(\theta (t ))\\&\quad +\, \Big (B^+ + M_b\Theta _4(t )N_b \Big ) g(\theta (t-\delta (t))) +u(t)\\&\quad +\, \Big ( M_a\Theta _3(t )N_a-M_a\Theta _1(t )N_a\Big ) f(x(t))\\&\quad +\, \Big ( M_b\Theta _4(t )N_b-M_b\Theta _2(t )N_b\Big ) f(x(t-\delta (t))) , \end{aligned} \end{aligned}$$
(34)

where u(t) has the same expression as defined in (9).

Corollary 3.1

Under \((A_1)\), if there exist positive definite matrices P, Q, a matrix R, diagonal matrices \(M_1>0\), \(M_2>0\), and constants \(\varsigma _1>0\), \(\varsigma _2>0\), such that:

$$\begin{aligned} \begin{aligned}&\Pi =\left[ \begin{array}{ccccccccccc} \Pi _{11}&{}\quad 0 &{}\quad PA^+ +\frac{1}{2}M_1(\beta _1+\beta _2) &{}\quad PB^+ &{}\quad PM_a&{}\quad PM_b\\ *&{}\quad \Pi _{22} &{}\quad 0&{}\quad \Pi _{24}&{}\quad 0&{}\quad 0\\ *&{}\quad *&{}\quad \Pi _{33}&{}\quad 0&{}\quad 0&{}\quad 0\\ *&{}\quad *&{}\quad *&{}\quad \Pi _{44}&{}\quad 0&{}\quad 0\\ *&{}\quad *&{}\quad *&{}\quad *&{}\quad -\varsigma _1 I&{}\quad 0\\ *&{}\quad *&{}\quad *&{}\quad *&{}\quad *&{}\quad - \varsigma _2 I \end{array} \right] <0, \end{aligned} \end{aligned}$$
(35)

where

$$\begin{aligned} \begin{aligned} \Pi _{11}&=-2P H -2R+Q-\beta _1M_1\beta _2 , \ \ \ \Pi _{22}=-(1-\mu )Q-\beta _1M_2\beta _2 , \\ \Pi _{24}&= \frac{1}{2}M_2(\beta _1+\beta _2) , \ \ \ \Pi _{33}=-M_1 +\varsigma _1N_a^TN_a, \\ \Pi _{44}&= -M_2 +\varsigma _2N_b^TN_b, \\ \chi ^T(t )&=\Big (\theta ^T(t ), \theta ^T(t- \delta (t) ), g^T(\theta (t )), g^T(\theta (t-\delta (t)) ) \Big ). \end{aligned} \end{aligned}$$

then the trivial solution of (34) is ES. Besides, the parameters in (9) are subject to:

$$\begin{aligned} r_i\ge \sum _{j=1}^n \Big ( |\bar{a}_{ij} -\underline{a}_{ij} | +|\bar{b}_{ij} -\underline{b}_{ij} |\Big )F_j, \end{aligned}$$
(36)

and \(k=P^{-1}R\).

Proof

By taking \(\phi (t)=0\), \(N=0\) in Theorem 3.1, the conclusion can be easily derived. \(\square \)
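
For completeness, LMI (35) can be checked numerically with an SDP solver. Below is a minimal cvxpy sketch; note the assumptions: cvxpy with the SCS solver is assumed to be installed, all model data are placeholders (not the values of Example 1), R is taken symmetric as in Example 1, and \(-2PH\) is read in the symmetric sense \(-PH-HP\). Feasibility of course depends on the data.

```python
import numpy as np
import cvxpy as cp

n, n2 = 3, 9
rng = np.random.default_rng(4)
Hm = np.diag([3.0, 1.0, 2.0])                 # H
Ap = -np.eye(n)                               # placeholder midpoint matrix A^+
Bp = 0.3 * np.eye(n)                          # placeholder midpoint matrix B^+
Ma_ = 0.1 * rng.random((n, n2)); Na_ = 0.1 * rng.random((n2, n))
Mb_ = 0.1 * rng.random((n, n2)); Nb_ = 0.1 * rng.random((n2, n))
b1 = np.zeros((n, n))                         # beta_1 = diag(beta_i^-)
b2 = np.eye(n)                                # beta_2 = diag(beta_i^+)
mu = 0.1

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
R = cp.Variable((n, n), symmetric=True)
M1 = cp.diag(cp.Variable(n, nonneg=True))
M2 = cp.diag(cp.Variable(n, nonneg=True))
s1 = cp.Variable(nonneg=True)
s2 = cp.Variable(nonneg=True)

Z, Zw, I2 = np.zeros((n, n)), np.zeros((n, n2)), np.eye(n2)
P11 = -P @ Hm - Hm @ P - 2 * R + Q - b1 @ M1 @ b2   # -2PH read symmetrically
P13 = P @ Ap + 0.5 * M1 @ (b1 + b2)
P14 = P @ Bp
P22 = -(1 - mu) * Q - b1 @ M2 @ b2
P24 = 0.5 * M2 @ (b1 + b2)
P33 = -M1 + s1 * (Na_.T @ Na_)
P44 = -M2 + s2 * (Nb_.T @ Nb_)

Pi = cp.bmat([
    [P11,         Z,     P13, P14, P @ Ma_,   P @ Mb_],
    [Z,           P22,   Z,   P24, Zw,        Zw],
    [P13.T,       Z,     P33, Z,   Zw,        Zw],
    [P14.T,       P24.T, Z,   P44, Zw,        Zw],
    [(P @ Ma_).T, Zw.T,  Zw.T, Zw.T, -s1 * I2, np.zeros((n2, n2))],
    [(P @ Mb_).T, Zw.T,  Zw.T, Zw.T, np.zeros((n2, n2)), -s2 * I2],
])

eps, dim = 1e-4, 4 * n + 2 * n2
cons = [P >> eps * np.eye(n), Q >> eps * np.eye(n), Pi << -eps * np.eye(dim)]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
if prob.status == cp.OPTIMAL:
    k = np.linalg.solve(P.value, R.value)     # control gain k = P^{-1}R
```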

Remark 3.1

Reference [34] paid attention to the anti-synchronization control of stochastic memristive neural networks; however, from its main conclusions, one can see that the control algorithm is tied to the dimension n of the target model, which is unreasonable considering that a memristive model is a very large-scale system. In [31], a stochastic memristive system was considered, but the model discussed there is independent of delayed terms; as a result, the derived conclusions also ignore the effects driven by delays. Besides, [26, 35] concentrated on discrete-time stochastic memristive neural networks. What should be pointed out is that the above results are derived based on differential inclusion theory, while in this note, by Lemma 2.1, the memristive system is translated into a model with uncertain parameters.

4 Numerical Examples

One example and some simulation diagrams are furnished to illustrate the validity of the proposed findings.

Example 1

Consider a stochastic memristive model of the form (1), in which the parameters are given by:

$$\begin{aligned} \begin{aligned}&H=\begin{pmatrix}3&{}\quad 0&{}\quad 0\\ 0&{}\quad 1&{}\quad 0\\ 0&{}\quad 0&{}\quad 2\end{pmatrix}, \ \ A(t)=\begin{pmatrix}a_{11}(x_1(t) )&{}\quad a_{12}(x_1(t) )&{}\quad a_{13}(x_1(t) )\\ a_{21}(x_2(t) )&{}\quad a_{22}(x_2(t) )&{}\quad a_{23}(x_2(t) )\\ a_{31}(x_3(t) )&{}\quad a_{32}(x_3(t) )&{}\quad a_{33}(x_3(t) )\end{pmatrix}, \\&B(t)=\begin{pmatrix}b_{11}(x_1(t) )&{}\quad b_{12}(x_1(t) )&{}\quad b_{13}(x_1(t) )\\ b_{21}(x_2(t) )&{}\quad b_{22}(x_2(t) )&{}\quad b_{23}(x_2(t) ) \\ b_{31}(x_3(t) ) &{}\quad b_{32}(x_3(t) ) &{}\quad b_{33}(x_3(t) )\end{pmatrix},\ \ H_i=\begin{pmatrix}1&{}\quad 0&{}\quad 0\\ 0&{}\quad 1&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\end{pmatrix}, \ i=1,2,3,4, \end{aligned} \end{aligned}$$

where

$$\begin{aligned}&a_{11}(x_1(t))=\left\{ \begin{array}{l} -\,1.3, \ | x_1(t) |\le 1,\\ -\,1.6, \ | x_1(t) |> 1, \end{array}\right. \ \ a_{12}(x_1(t))=\left\{ \begin{array}{l} 0.3, \ \ \ \ \ | x_1(t) |\le 1,\\ -\,0.15, \ | x_1(t) |> 1, \end{array}\right. \\&a_{13}(x_1(t))=\left\{ \begin{array}{l} 0.2, \ \ \ | x_1(t) |\le 1,\\ 0.1, \ \ \ | x_1(t) |> 1, \end{array}\right. \ \ a_{21}(x_2(t))=\left\{ \begin{array}{l} 0.4, \ \ \ \ \ \ | x_2(t) |\le 1,\\ 0.52, \ \ \ \ | x_2(t) |> 1, \end{array} \right. \\&a_{22}(x_2(t))=\left\{ \begin{array}{l} -\,1.2, \ \ \ \ | x_2(t) |\le 1,\\ -\,0.9, \ \ \ \ | x_2(t) |> 1, \end{array}\right. \ \ a_{23}(x_2(t))=\left\{ \begin{array}{l} 0.4, \ \ \ | x_2(t) |\le 1,\\ 0.2, \ \ \ | x_2(t) |> 1, \end{array}\right. \\&a_{31}(x_3(t))=\left\{ \begin{array}{l} 0.1, \ \ \ \ \ \ \ | x_3(t) |\le 1,\\ 0.15, \ \ \ \ \ | x_3(t) |> 1, \end{array}\right. \ \ a_{32}(x_3(t))=\left\{ \begin{array}{l} 0.1, \ \ \ | x_3(t) |\le 1,\\ 0.2, \ \ \ | x_3(t) |> 1, \end{array}\right. \\&a_{33}(x_3(t))=\left\{ \begin{array}{l} -\,2, \ \ \ \ \ \ \ | x_3(t) |\le 1,\\ -\,1.6, \ \ \ \ | x_3(t) |> 1, \end{array}\right. \ \ b_{11}(x_1(t ))=\left\{ \begin{array}{l} -\,0.3, \ | x_1(t ) |\le 1,\\ -\,0.4, \ | x_1(t ) |> 1, \end{array}\right. \\&b_{12}(x_1(t ))=\left\{ \begin{array}{l} -\,0.2, \ \ \ \ | x_1(t ) |\le 1,\\ -\,0.3, \ \ \ \ | x_1(t ) |> 1, \end{array}\right. \ \ b_{13}(x_1(t ))=\left\{ \begin{array}{l} 0.3, \ \ \ | x_1(t ) |\le 1,\\ 0.35, \ | x_1(t ) |> 1, \end{array}\right. \\&b_{21}(x_2(t ))=\left\{ \begin{array}{l} 0.4, \ \ \ \ \ \ | x_2(t ) |\le 1,\\ 0.5, \ \ \ \ \ \ | x_2(t ) |> 1, \end{array} \right. \ \ b_{22}(x_2(t ))=\left\{ \begin{array}{l} 0.5, \ \ \ | x_2(t ) |\le 1,\\ 0.4, \ \ \ | x_2(t ) |> 1. \end{array} \right. \\&b_{23}(x_2(t ))=\left\{ \begin{array}{l} 0.2, \ \ \ \ \ \ | x_2(t ) |\le 1,\\ 0.16, \ \ \ \ | x_2(t ) |> 1, \end{array} \right. \ \ b_{31}(x_3(t ))=\left\{ \begin{array}{l} 0.2, \ \ \ | x_3(t ) |\le 1,\\ 0.12, \ | x_3(t ) |> 1. \end{array} \right. \\&b_{32}(x_3(t ))=\left\{ \begin{array}{l} -\,0.22, \ | x_3(t ) |\le 1,\\ -\,0.3, \ \ \ | x_3(t ) |> 1, \end{array} \right. \ \ b_{33}(x_3(t ))=\left\{ \begin{array}{l} -\,1.6, \ \ | x_3(t ) |\le 1,\\ -\,1.2, \ \ | x_3(t ) |> 1. \end{array} \right. \end{aligned}$$

Thus, the above parameters imply:

$$\begin{aligned} \begin{aligned}&M_a=\begin{pmatrix}\sqrt{0.15}&{}\quad \sqrt{0.225}&{}\quad \sqrt{0.05}&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.06}&{}\quad \sqrt{0.15}&{}\quad \sqrt{0.1}&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.025}&{}\quad \sqrt{0.05}&{}\quad \sqrt{0.2}\end{pmatrix}, \\&N_a=\begin{pmatrix}\sqrt{0.15}&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.06}&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.025}&{}\quad 0&{}\quad 0\\ 0&{}\quad \sqrt{0.225}&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.15}&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.05}&{}\quad 0\\ 0&{}\quad 0&{}\quad \sqrt{0.05}&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.1}&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.2}\end{pmatrix}^T, \\&M_b=\begin{pmatrix}\sqrt{0.05}&{}\quad \sqrt{0.05}&{}\quad \sqrt{0.025}&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.05}&{}\quad \sqrt{0.05}&{}\quad \sqrt{0.02}&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.04}&{}\quad \sqrt{0.04}&{}\quad \sqrt{0.2}\end{pmatrix}, \\&N_b=\begin{pmatrix}\sqrt{0.05}&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.05}&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.04}&{}\quad 0&{}\quad 0\\ 0&{}\quad \sqrt{0.05}&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.05}&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.04}&{}\quad 0\\ 0&{}\quad 0&{}\quad \sqrt{0.025}&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.02}&{}\quad 0&{}\quad 0&{}\quad \sqrt{0.2}\end{pmatrix}^T. \end{aligned} \end{aligned}$$

Now we evaluate the synchronization performance of the target model. The delay is given by \(\delta (t)=0.2+0.1\sin (t)\); a straightforward calculation gives \(\tau =0.3\) and \(\mu =0.1\). Let \(f(s)= \tanh (s)+3\); then, according to assumption (\(A_1\)), one can take \(\beta _j^-=0\), \(\beta _j^+=1\) and \(F_j=4\). With the above parameters, a simple calculation bounds the control parameters in (9) as:

$$\begin{aligned} \begin{aligned} r_1&\ge \sum _{j=1}^3 \Big ( |\bar{a}_{1j} -\underline{a}_{1j} | +|\bar{b}_{1j} -\underline{b}_{1j} |\Big )F_j = 4.4,\\ r_2&\ge \sum _{j=1}^3 \Big ( |\bar{a}_{2j} -\underline{a}_{2j} | +|\bar{b}_{2j} -\underline{b}_{2j} |\Big )F_j = 3.44,\\ r_3&\ge \sum _{j=1}^3 \Big ( |\bar{a}_{3j} -\underline{a}_{3j} | +|\bar{b}_{3j} -\underline{b}_{3j} |\Big )F_j = 4.44. \end{aligned} \end{aligned}$$

For the numerical simulations, we choose \(r_1= 4.5\), \(r_2= 3.5\), \(r_3=4.5\).
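
These bounds can be reproduced directly from the switching levels in (2); a short check with the values of this example:

```python
import numpy as np

# switching levels a_ij^*, a_ij^** and b_ij^*, b_ij^** from Example 1
A1 = np.array([[-1.3, 0.30, 0.2], [0.40, -1.2, 0.4], [0.10, 0.10, -2.0]])
A2 = np.array([[-1.6, -0.15, 0.1], [0.52, -0.9, 0.2], [0.15, 0.20, -1.6]])
B1 = np.array([[-0.3, -0.2, 0.30], [0.4, 0.50, 0.20], [0.20, -0.22, -1.6]])
B2 = np.array([[-0.4, -0.3, 0.35], [0.5, 0.40, 0.16], [0.12, -0.30, -1.2]])
F = 4.0  # |f_j| <= F_j = 4 for f(s) = tanh(s) + 3

r_min = ((np.abs(A1 - A2) + np.abs(B1 - B2)) * F).sum(axis=1)
print(r_min)  # [4.4  3.44 4.44], matching the bounds in (11)
```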

Moreover, by solving (10), part of a feasible solution is obtained as:

$$\begin{aligned} \begin{aligned}&P =\begin{pmatrix} 0.4352 &{}\quad 0.0082 &{}\quad 0.0295 \\ 0.0082 &{}\quad 0.5470 &{}\quad 0.0387 \\ 0.0295 &{}\quad 0.0387 &{}\quad 0.4223 \end{pmatrix}, \ \ \ \ \ Q=\begin{pmatrix} 3.1968 &{}\quad -\,0.1283 &{}\quad 0.0536 \\ -\,0.1283 &{}\quad 3.2082 &{}\quad 0.0191 \\ 0.0536 &{}\quad 0.0191 &{}\quad 3.2023 \end{pmatrix},\\&R=\begin{pmatrix} 1.7505 &{}\quad 0.3122 &{}\quad -\,0.0029 \\ 0.3122 &{}\quad 2.9008 &{}\quad -\,0.0054 \\ -\,0.0029 &{}\quad -\,0.0054 &{}\quad 2.8540 \end{pmatrix}, \end{aligned} \end{aligned}$$

thus, k can be calculated as:

$$\begin{aligned} \begin{aligned} k =P^{-1}R=\begin{pmatrix} 4.0346 &{}\quad 0.6539 &{}\quad -\,0.4604 \\ 0.5341 &{}\quad 5.3318 &{}\quad -\,0.4869 \\ -\,0.3375 &{}\quad -\,0.5475 &{}\quad 6.8345 \end{pmatrix}, \end{aligned} \end{aligned}$$

According to the derived control gains, the simulation results are shown in Figs. 1, 2, 3, 4, 5 and 6. The dynamic behaviors of the drive and response systems are given in Figs. 3, 4 and 5, while Fig. 6 depicts the time response of the synchronization error \(\theta _i(t)\), which tends to zero as t increases. Thus, the obtained control gains ensure that the error system converges to zero exponentially.
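
For reference, a closed-loop drive-response simulation in the spirit of Figs. 3, 4, 5 and 6 can be sketched as follows. It reuses A1, A2, B1, B2 from the previous snippet together with the gain k and the \(r_i\) obtained above; the step size, horizon, noise realization, and initial data are arbitrary illustrative choices:

```python
import numpy as np

# assumes A1, A2, B1, B2 from the previous snippet
f = lambda s: np.tanh(s) + 3.0
H = np.diag([3.0, 1.0, 2.0])
K = np.array([[4.0346, 0.6539, -0.4604],        # gain k = P^{-1}R from above
              [0.5341, 5.3318, -0.4869],
              [-0.3375, -0.5475, 6.8345]])
r = np.array([4.5, 3.5, 4.5])
delta = lambda t: 0.2 + 0.1 * np.sin(t)

def weights(x):
    inner = (np.abs(x) <= 1.0)[:, None]         # threshold is 1, cf. (2)
    return np.where(inner, A1, A2), np.where(inner, B1, B2)

dt, T, rng = 1e-3, 10.0, np.random.default_rng(3)
buf = int(0.3 / dt) + 1                          # history covers tau = 0.3
xh = np.tile(np.array([0.5, -0.3, 0.8]), (buf, 1))
yh = np.tile(np.array([-0.6, 0.9, -0.2]), (buf, 1))

for step in range(int(T / dt)):
    t = step * dt
    d = int(round(delta(t) / dt))
    x, xd, y, yd = xh[-1], xh[-1 - d], yh[-1], yh[-1 - d]
    (Ax, Bx), (Ay, By) = weights(x), weights(y)
    u = -K @ (y - x) - r * np.sign(y - x)        # controller (9)
    dw = np.sqrt(dt) * rng.standard_normal()     # shared Brownian increment
    gx = x + xd + f(x) + f(xd)                   # H_1 = ... = H_4 = I here
    gy = y + yd + f(y) + f(yd)
    x = x + (-H @ x + Ax @ f(x) + Bx @ f(xd)) * dt + gx * dw
    y = y + (-H @ y + Ay @ f(y) + By @ f(yd) + u) * dt + gy * dw
    xh = np.roll(xh, -1, axis=0); xh[-1] = x
    yh = np.roll(yh, -1, axis=0); yh[-1] = y

print(np.abs(yh[-1] - xh[-1]))                   # error should be near zero
```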

Fig. 1
figure 1

Chaotic behavior of the drive system

Fig. 2
figure 2

Chaotic behavior of the response system without the controller

Fig. 3
figure 3

Time-domain behavior of the state variables \(x_1(t)\), \(y_1(t)\)

Fig. 4
figure 4

Time-domain behavior of the state variables \(x_2(t)\), \(y_2(t)\)

Fig. 5
figure 5

Time-domain behavior of the state variables \(x_3(t)\), \(y_3(t)\)

Fig. 6
figure 6

State responses of \(\theta (t)\)

5 Conclusion

This paper introduced an ES control methodology for a class of stochastic memristive neural networks. Within the framework of the Lyapunov–Krasovskii functional, stochastic stability theory, and the free-weighting-matrix method, some brand-new solvability criteria were established to achieve the ES goal for the target memristive systems. Considering the special characteristics of memristive systems, a new robust control algorithm was proposed, whereby the target model can be treated as a system with uncertain parameters. Finally, the derived findings were confirmed by a simulation example.