1 Introduction

Since being proposed by Pecora and Carroll [1], chaotic synchronization has been studied and applied in many areas, such as chemical reactions, biological systems, secure communication, and information processing. A variety of approaches have been proposed to investigate chaos synchronization, including adaptive control [2, 3], optimal control [4], and sliding mode control [5,6,7]. Accordingly, many kinds of synchronization have been explored, involving lag synchronization [8], projective synchronization [9], anti-synchronization [10], burst synchronization [11], phase synchronization [12], hybrid synchronization [13], etc.

In the past decades, the dynamical behaviors of neural networks have attracted much attention, because they explain some neurophysiological phenomena well and have potential applications in many fields [14,15,16]. In fact, the intrinsic time delays in realistic neuronal systems can be associated with response delay and propagation delay in cell loops [17], which makes synchronization stability under appropriate control schemes a central concern. Further guidance on predicting the collapse of synchronization and pattern stability [18, 19] is given at the end of the paper. In practical engineering, it is desirable to realize synchronization in finite time. To this end, finite-time stability theory was brought forward [20,21,22,23], in which finite-time control techniques play a key role. Research shows that finite-time control techniques demonstrate better robustness and disturbance rejection [24]. In fact, finite-time synchronization means optimality in convergence time. Recently, combining the advantages of finite-time control techniques and finite-time stability theorems, finite-time synchronization of complex networks was raised [25].

In real applications, almost all network systems are subject to random uncertainties, e.g., stochastic forces and noisy measurements. Therefore, networks with noise perturbations have aroused the interest of researchers [26, 27], especially neural networks in which only a single node is considered to suffer noise disturbance. However, in an actual neural network, more than one node is subject to noise disturbance; indeed, all the nodes may be. That is why stochastic synchronization has become one of the focused subjects [28,29,30]. In most of these works, time-delay is hardly taken into account, although time-delay always exists between neurons when communication is implemented within one neural network or between neural networks. Existing results show that, in some systems, time-delay often causes oscillation or divergence [31, 32]. Therefore, the dynamical behaviors of neural networks with time-delay have been widely studied in recent years, especially the control of synchronization and stability. Accordingly, many schemes have been proposed for realizing chaos synchronization of time-delay neural networks [33,34,35,36], in which noise disturbance is rarely considered. In real applications, time-delay and noise disturbance often coexist in neural networks, which causes more complex dynamic behaviors and makes the networks more difficult to control.

Motivated by the above discussion, we consider a neural network with time-delay and noise disturbance modeled by a vector-form Wiener process. This kind of network is more practical in the real world. Using properties of the Wiener process and inequality techniques, suitable controllers are designed to ensure the finite-time stochastic synchronization of time-delay neural networks with noise disturbance, and the factors affecting the convergence speed are identified. Several cases are given via numerical simulations to demonstrate the impact of these factors on the synchronization time.

The rest of this paper is arranged as follows. Section 2 describes the system and some relevant preliminaries. In Sect. 3, the finite-time stochastic synchronization of time-delay neural networks is realized, sufficient conditions are given, and some factors affecting the convergence speed are obtained via theoretical analysis. In Sect. 4, numerical simulations are presented to verify the theoretical results. Section 5 draws some conclusions and gives future research directions.

2 System description and some preliminaries

In this section, some relevant preliminaries are described for discussing the finite-time stochastic synchronization of time-delay neural networks with noise disturbance. The time-delay neural network consisting of N nodes is considered as follows:

$$\begin{aligned} \dot{x}(t)=Ax(t)+Bf(x(t))+Cg(x(t-\tau )), \end{aligned}$$
(1)

where \(x(t)=(x_1 (t),x_2 (t),\ldots ,x_N (t))^{T}\) is the state vector of the neural network, \(x_i (t)\) is the state variable of the ith node, \(\tau \) is the time-delay, and \(A,B,C\in R^{N\times N}\) are constant matrices.

$$\begin{aligned} f(x)= & {} (f_1 (x_1 (t)),f_2 (x_2 (t)),\ldots ,f_N (x_N (t)))^{T}\\&\in R^{N}, \\ g(x)= & {} (g_1 (x_1 (t)),g_2 (x_2 (t)),\ldots ,g_N (x_N (t)))^{T}\\&\in R^{N}, \end{aligned}$$

are continuously differentiable nonlinear vector functions. In this work, the time-delay \(\tau \) is assumed to be constant.
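As a concrete illustration, system (1) with a constant initial history on \([-\tau ,0]\) can be integrated by a simple Euler scheme, reading the delayed state from the stored trajectory. The sketch below uses small illustrative matrices and \(\tanh \) activations; none of the numerical values are taken from the paper.

```python
import numpy as np

def simulate_drive(A, B, C, f, g, x0, tau, dt=1e-3, T=10.0):
    """Euler scheme for dx/dt = A x + B f(x) + C g(x(t - tau)).

    The initial history x(t) = x0 for t in [-tau, 0] is assumed constant
    (a common choice; the paper only requires a constant delay tau).
    """
    steps = int(round(T / dt))
    d = int(round(tau / dt))               # delay expressed in steps
    x = np.empty((steps + 1, len(x0)))
    x[0] = x0
    for k in range(steps):
        xd = x[k - d] if k >= d else x0    # delayed state x(t - tau)
        x[k + 1] = x[k] + dt * (A @ x[k] + B @ f(x[k]) + C @ g(xd))
    return x

# toy 2-node example (illustrative values only)
A = -np.eye(2)
B = np.array([[0.5, 0.1], [0.0, 0.4]])
C = np.array([[0.2, 0.0], [0.1, 0.1]])
x = simulate_drive(A, B, C, np.tanh, np.tanh, np.array([0.3, -0.2]), tau=1.0)
print(x[-1])
```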

To obtain the main result of this paper, system (1) is taken as the drive system, and the slave system is considered as

$$\begin{aligned} \dot{y}(t)= & {} Ay(t)+Bf(y(t))+Cg(y(t-\tau ))\nonumber \\&+\,\delta (y(t)-x(t))\dot{W}(t)+U, \end{aligned}$$
(2)

where the noise term \(\delta (y(t)-x(t))\dot{W}(t)\) describes the coupling process influenced by environmental fluctuations, \(\delta \) is the noise intensity matrix, \(\dot{W}(t)\) is an N-dimensional white noise, and \(U=(U_1,U_2,\ldots ,U_N )^{T}\) is the controller vector to be designed.

Definition 1

[28] It is said that finite-time stochastic synchronization between systems (2) and (1) is achieved if, for any initial states x(0), y(0), there exists a finite time

$$\begin{aligned} T_0 =\inf \left\{ {T:x_i (t)-y_i (t)=0,\,\forall t\ge T} \right\} , \end{aligned}$$

such that, for all \(t\ge T_0 \), we have

$$\begin{aligned}&P\left\{ {\left| {x_i (t,x_i (0))-y_i (t,y_i (0))} \right| =0} \right\} \\&\quad =1 \quad (i=1,2,\ldots ,N), \end{aligned}$$

where \(T_0 \) is called the stochastic time.

For the n-dimensional stochastic differential equation

$$\begin{aligned} \hbox {d}x=f(x)\hbox {d}t+g(x)\hbox {d}W(t), \end{aligned}$$
(3)

where \(x\in R^{n}\) is the state vector, and \(f:R^{n}\rightarrow R^{n}\) and \(g:R^{n}\rightarrow R^{n\times m}\) are continuous functions satisfying \(f(0)=0\) and \(g(0)=0\), it is assumed that Eq. (3) has a unique global solution denoted by \(x(t,x(0))\,(0\le t<\infty )\), where x(0) is the initial state.

For each \(V\in C^{2,1}(R^{n}\times R_+,R_+ )\), the operator \(\mathcal{L}V\) [28] associated with Eq. (3) is defined as

$$\begin{aligned} \mathcal{L}V=\frac{\partial V}{\partial x}\cdot f+\frac{1}{2}\hbox {trace}\left[ {g^{T}\cdot \frac{\partial ^{2}V}{\partial x^{2}}\cdot g} \right] , \end{aligned}$$
(4)

where \(\frac{\partial V}{\partial x}=\left( {\frac{\partial V}{\partial x_1 },\frac{\partial V}{\partial x_2 },\ldots ,\frac{\partial V}{\partial x_n }} \right) \) and \(\frac{\partial ^{2}V}{\partial x^{2}}=\left( {\frac{\partial ^{2}V}{\partial x_i \partial x_j }} \right) _{n\times n} \quad (i,j=1,2,\ldots ,n)\).
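For the common choice \(V(x)=x^{T}x\), formula (4) reduces to \(\mathcal{L}V=2x^{T}f(x)+\hbox {trace}(g^{T}(x)g(x))\), since \(\partial V/\partial x=2x^{T}\) and \(\partial ^{2}V/\partial x^{2}=2I\). A small numeric check of this special case, with hypothetical drift and diffusion functions, is:

```python
import numpy as np

def LV_quadratic(x, f, g):
    """Operator (4) applied to V(x) = x^T x:
    LV = 2 x^T f(x) + trace(g(x)^T g(x)),
    because dV/dx = 2 x^T and d^2V/dx^2 = 2 I."""
    return 2.0 * x @ f(x) + np.trace(g(x).T @ g(x))

# example drift/diffusion (hypothetical, for illustration; f(0)=0, g(0)=0)
f = lambda x: -x
g = lambda x: 0.5 * np.diag(x)        # maps R^n -> R^{n x n}
x = np.array([1.0, 2.0])
print(LV_quadratic(x, f, g))          # 2*(-1 - 4) + (0.25 + 1.0) = -8.75
```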

Assumption 1

For system (1), it is assumed that \(f_i \) and \(g_i\,(i=1,2,\ldots ,N)\) satisfy the Lipschitz condition; i.e., there are positive constants \(L_f\), \(L_g\) such that

$$\begin{aligned} \left| {f_i (x_i )-f_i (y_i )} \right|\le & {} L_f \left| {x_i -y_i } \right| , \\ \left| {g_i (x_i )-g_i (y_i )} \right|\le & {} L_g \left| {x_i -y_i } \right| \end{aligned}$$

for all \(x_i,y_i \in R\,(i=1,2,\ldots ,N)\).
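For instance, the activation \(f(x)=\tanh (x)\) used later in Sect. 4 satisfies Assumption 1 with \(L_f =1\), since \(|\tanh '(x)|=1-\tanh ^{2}(x)\le 1\). A quick numerical spot-check:

```python
import numpy as np

# Spot-check that f(x) = tanh(x) is Lipschitz with constant 1:
# |tanh(a) - tanh(b)| <= |a - b| because |tanh'(x)| <= 1.
rng = np.random.default_rng(0)
a, b = rng.uniform(-5, 5, 1000), rng.uniform(-5, 5, 1000)
ratios = np.abs(np.tanh(a) - np.tanh(b)) / np.abs(a - b)
print(ratios.max())  # stays below 1
```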

Because the state of a concrete system changes far faster than the environmental fluctuations, the following assumption is imposed on the noise intensity function.

Assumption 2

The noise intensity function \(\delta (y(t)-x(t))\) satisfies a Lipschitz-type condition; namely, there exists a positive constant q such that

$$\begin{aligned}&\hbox {trace}(\delta ^{T}(y(t)-x(t))\delta (y(t)-x(t))) \nonumber \\&\quad \le 2q(y(t)-x(t))^{T}(y(t)-x(t)). \end{aligned}$$
(5)

Moreover, \(\delta (0)\equiv 0\).

Lemma 1

[37] For Eq. (3), define \(T_0 (x_0 )=\inf \{T\ge 0:x(t,x_0 )=0,\,\forall t\ge T\}\), and assume that Eq. (3) has a unique global solution. If there is a positive definite, twice continuously differentiable, radially unbounded Lyapunov function \(V:R^{n}\rightarrow R^{+}\) and real numbers \(k>0\), \(0<\rho <1\), such that

$$\begin{aligned} \mathcal {L}V(x)\le -k(V(x))^{\rho }, \end{aligned}$$
(6)

then the origin of system (3) is globally stochastically finite-time stable, and

$$\begin{aligned} E\left[ {T_0 (x_0 )} \right] \le \frac{(V(x_0 ))^{1-\rho }}{k(1-\rho )}. \end{aligned}$$
(7)
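In the deterministic special case \(g\equiv 0\), inequality (6) becomes \(\dot{V}\le -kV^{\rho }\), and the bound (7) is tight for \(\dot{V}=-kV^{\rho }\), whose solution reaches zero at exactly \(t^{*}=V_0^{1-\rho }/(k(1-\rho ))\). The sketch below integrates this scalar equation and compares the hitting time with (7); the parameter values are illustrative.

```python
import numpy as np

# Deterministic special case of Lemma 1 (g = 0): dV/dt = -k V^rho with
# 0 < rho < 1 reaches V = 0 at exactly t* = V0^(1-rho) / (k (1 - rho)).
k, rho, V0 = 2.0, 0.5, 4.0
t_star = V0 ** (1 - rho) / (k * (1 - rho))   # = 2.0

# forward-Euler integration until V hits zero
dt, V, t = 1e-5, V0, 0.0
while V > 0.0:
    V += dt * (-k * V ** rho)
    t += dt
print(t)  # close to t_star = 2.0
```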

Lemma 2

[38] Suppose that \(0<r\le 1\) and a, b are positive numbers. Then the inequality

$$\begin{aligned} (a+b)^{r}\le a^{r}+b^{r} \end{aligned}$$

holds.
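A quick numerical spot-check of Lemma 2 on random samples:

```python
import numpy as np

# Spot-check Lemma 2: (a + b)^r <= a^r + b^r for a, b > 0 and 0 < r <= 1.
rng = np.random.default_rng(1)
a, b = rng.uniform(0.01, 10, 1000), rng.uniform(0.01, 10, 1000)
for r in (0.1, 0.5, 0.99, 1.0):
    assert np.all((a + b) ** r <= a ** r + b ** r + 1e-12)
print("Lemma 2 holds on all samples")
```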

Lemma 3

[39] Suppose that \(\Sigma _1 \), \(\Sigma _2 \), \(\Sigma _3 \) are real matrices with appropriate dimensions, \(\Sigma _3 =(\Sigma _3 )^{T}>0\), and \(s>0\) is a scalar. Then the following inequality

$$\begin{aligned} \Sigma _1^T \Sigma _2 +\Sigma _2^T \Sigma _1 \le s\Sigma _1^T \Sigma _3 \Sigma _1 +s^{-1}\Sigma _2^T \Sigma _3^{-1} \Sigma _2 \end{aligned}$$
(8)

holds.

Corollary

In Lemma 3, if \(\Sigma _3 \) is chosen as the identity matrix with appropriate dimension, inequality (8) can be simplified as

$$\begin{aligned} \Sigma _1^T \Sigma _2 +\Sigma _2^T \Sigma _1 \le s\Sigma _1^T \Sigma _1 +s^{-1}\Sigma _2^T \Sigma _2. \end{aligned}$$
(9)
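Inequality (9) follows from expanding \((s^{1/2}\Sigma _1 -s^{-1/2}\Sigma _2 )^{T}(s^{1/2}\Sigma _1 -s^{-1/2}\Sigma _2 )\ge 0\). The sketch below verifies positive semidefiniteness of the gap on a random instance:

```python
import numpy as np

# Gap of inequality (9): s*S1'S1 + (1/s)*S2'S2 - S1'S2 - S2'S1
# equals (sqrt(s) S1 - S2/sqrt(s))' (sqrt(s) S1 - S2/sqrt(s)) >= 0.
rng = np.random.default_rng(2)
S1, S2, s = rng.standard_normal((3, 3)), rng.standard_normal((3, 3)), 0.7
gap = s * S1.T @ S1 + S2.T @ S2 / s - S1.T @ S2 - S2.T @ S1
print(np.linalg.eigvalsh(gap).min())  # >= 0 (up to rounding)
```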

3 Main result

In this section, the finite-time stochastic synchronization of time-delay neural networks with noise disturbance is investigated based on the above preliminaries. Let \(e(t)=y(t)-x(t)\); then the error system is obtained as

$$\begin{aligned} \dot{e}(t)= & {} Ae(t)+B[f(y(t))-f(x(t))]\nonumber \\&+\,C[g(y(t-\tau ))-g(x(t-\tau ))]\nonumber \\&+\,\delta (e(t))\dot{W}(t)+U. \end{aligned}$$
(10)

The main result is given as the following theorem.

Theorem 1

Suppose that I is the identity matrix of appropriate order. If there exist constants \(q\), \(s>0\), \(k_1 \), \(k_2 \) satisfying the following two conditions:

(i) \(A+A^{T}+L_f (B+B^{T})+(k_2 +q-2k_1 +sL_g )I\le 0\),

(ii) \(s^{-1}L_g C^{T}C-k_2 I\le 0\),

then the finite-time stochastic synchronization between systems (1) and (2) can be obtained under the feedback control

$$\begin{aligned} U= & {} -k_1 e(t)-\eta \mathrm{sign}(e(t))\left| {e(t)} \right| ^{\theta }\nonumber \\&-\eta \left( \int _{t-\tau }^t {k_2 e^{T}(v)} e(v)\mathrm{d}v\right) ^{\frac{1+\theta }{2}}\cdot \frac{e(t)}{\left\| {e(t)} \right\| ^{2}}, \end{aligned}$$
(11)

where \(k_1 \),\(k_2 \) are the control strengths, \(\eta >0\),\(0<\theta <1\) and

$$\begin{aligned} \mathrm{sign}(e(t))= & {} \mathrm{diag}(\mathrm{sign}(e_1 (t)),\mathrm{sign}(e_2 (t)),\\&\ldots ,\mathrm{sign}(e_N (t))),\\ \left| {e(t)} \right| ^{\theta }= & {} (\left| {e_1 (t)} \right| ^{\theta },\left| {e_2 (t)} \right| ^{\theta },\ldots ,\left| {e_N (t)} \right| ^{\theta })^{T}. \end{aligned}$$

The finite time is estimated by \(E\left[ {T_0 (x_0 )} \right] \le T=t_0 +\frac{(V(x_0 ))^{\frac{1-\theta }{2}}}{\eta (1-\theta )}\).

Note: Here \(\left\| \cdot \right\| \) denotes the Euclidean norm.
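A discrete-time version of the controller (11) can be sketched as follows. The step size, the Riemann-sum treatment of the integral term, and the guard against division by \(\left\| e(t)\right\| ^{2}=0\) are implementation choices, not part of the paper; the parameter values follow the later simulations.

```python
import numpy as np

def controller(e_hist, dt, tau, k1=20.0, k2=2.0, eta=6.0, theta=0.01):
    """Discrete version of controller (11).

    e_hist : array of error vectors sampled on [t - tau, t], e_hist[-1] = e(t).
    The integral term is approximated by a left Riemann sum over the window.
    """
    e = e_hist[-1]
    integral = k2 * np.sum(np.sum(e_hist[:-1] ** 2, axis=1)) * dt
    U = -k1 * e - eta * np.sign(e) * np.abs(e) ** theta
    norm2 = float(e @ e)
    if norm2 > 1e-12:                   # avoid 0/0 once synchronized
        U -= eta * integral ** ((1 + theta) / 2) * e / norm2
    return U

# example call on a short constant error history (illustrative)
dt, tau = 1e-3, 1.0
e_hist = np.tile(np.array([0.5, -0.2]), (int(round(tau / dt)) + 1, 1))
print(controller(e_hist, dt, tau))
```

Note that each term of U opposes the current error, so the control effort has the opposite sign of each error component.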

Proof

Firstly, substitute (11) into (10), and the error system can be obtained as

$$\begin{aligned} \dot{e}(t)= & {} Ae(t)+B(f(y(t))-f(x(t)))\nonumber \\&+\,C(g(y(t-\tau ))-g(x(t-\tau )))+\delta (e(t))\dot{W}(t)\nonumber \\&-\,k_1 e(t)-\eta \mathrm{sign}(e(t))\left| {e(t)} \right| ^{\theta }\nonumber \\&-\,\eta \left( \int _{t-\tau }^t {k_2 e^{T}(v)} e(v)dv\right) ^{\frac{1+\theta }{2}}\cdot \frac{e(t)}{\left\| {e(t)} \right\| ^{2}}. \end{aligned}$$
(12)

Secondly, Lyapunov function is chosen as

$$\begin{aligned} V(t)=e^{T}(t)e(t)+\int _{t-\tau }^t {k_2 e^{T}(v)e(v)\mathrm{d}v}. \end{aligned}$$
(13)

Applying the operator \(\mathcal{L}\) defined in Eq. (4) to the function V along the error system (12), we get

$$\begin{aligned} \mathcal{L}V(t)= & {} e^{T}(t)\dot{e}(t)+\dot{e}^{T}(t)e(t)+k_2 e^{T}(t)e(t)\\&-\,k_2 e^{T}(t-\tau )e(t-\tau )\\&+\,\frac{1}{2}\mathrm{trace}(\delta ^{T}(e(t))\delta (e(t)))\\= & {} e^{T}(t)\left[ Ae(t)+B(f(y(t))-f(x(t)))\right. \\&+\,C(g(y(t-\tau ))-g(x(t-\tau )))\\&-\,k_1 e(t)-\eta \mathrm{sign}(e(t))\left| {e(t)} \right| ^{\theta }\\&-\,\left. \eta \left( \int _{t-\tau }^t {k_2 e^{T}(v)} e(v)dv\right) ^{\frac{1+\theta }{2}}\cdot \frac{e(t)}{\left\| {e(t)} \right\| ^{2}}\right] \\&+\,\left[ Ae(t)+B(f(y(t))-f(x(t)))\right. \\&+\,C(g(y(t-\tau ))-g(x(t-\tau )))\\&-\,k_1 e(t)-\eta \mathrm{sign}(e(t))\left| {e(t)} \right| ^{\theta }\\&\left. -\,\eta \left( \int _{t-\tau }^t {k_2 e^{T}(v)} e(v)\mathrm{d}v\right) ^{\frac{1+\theta }{2}}\cdot \frac{e(t)}{\left\| {e(t)} \right\| ^{2}}\right] ^{T}e(t)\\&+\,k_2 e^{T}(t)e(t)-k_2 e^{T}(t-\tau )e(t-\tau )\\&+\,\frac{1}{2}\mathrm{trace}(\delta ^{T}(e(t))\delta (e(t)))\\\le & {} e^{T}(t)[A+A^{T}+L_f (B+B^{T})\\&+\,(k_2 +q-2k_1 )I]e(t)+L_g [e^{T}(t)(Ce(t-\tau ))\\&+\,(Ce(t-\tau ))^{T}e(t)]-k_2 e^{T}(t-\tau )e(t-\tau )\\&-\,\eta e^{T}(t)\mathrm{sign}(e(t))\left| {e(t)} \right| ^{\theta }\\&-\,[\eta \mathrm{sign}(e(t))\left| {e(t)} \right| ^{\theta }]^{T}e(t)\\&-\,2\eta \left( \int _{t-\tau }^t k_2 e^{T}(v)e(v)\mathrm{d}v\right) ^{\frac{1+\theta }{2}}. \end{aligned}$$

According to the Corollary of Lemma 3, there is a scalar \(s>0\) such that

$$\begin{aligned}&e^{T}(t)Ce(t-\tau )+(Ce(t-\tau ))^{T}e(t)\le se^{T}(t)e(t)\nonumber \\&\quad +\,s^{-1}e^{T}(t-\tau )C^{T}Ce(t-\tau ). \end{aligned}$$
(14)

Meanwhile, it is noticed that

$$\begin{aligned}&-\eta e^{T}(t)\mathrm{sign}(e(t))\left| {e(t)} \right| ^{\theta }\nonumber \\&\quad -\,\eta [\mathrm{sign}(e(t))\left| {e(t)} \right| ^{\theta }]^{T}e(t)=-2\eta \left| {e^{T}(t)} \right| \left| {e(t)} \right| ^{\theta }\nonumber \\&\quad \le -\,2\eta \left| {e^{T}(t)e(t)} \right| ^{\frac{1+\theta }{2}}. \end{aligned}$$
(15)

Therefore, in line with Lemma 2 and the conditions in Theorem 1, we have

$$\begin{aligned} \mathcal{L}V(t)\le & {} e^{T}(t)[A+A^{T}+L_f (B+B^{T})\\&+\,(k_2 +q-2k_1 +sL_g)I]e(t) \\&+\,e^{T}(t-\tau )(s^{-1}L_g C^{T}C-k_2 I)e(t-\tau )\\&-\,2\eta \left| {e^{T}(t)e(t)} \right| ^{\frac{1+\theta }{2}}\\&-\,2\eta \left( {\int _{t-\tau }^t {k_2 e^{T}(v)e(v)dv} } \right) ^{\frac{1+\theta }{2}}\\\le & {} -2\eta \left| {e^{T}(t)e(t)} \right| ^{\frac{1+\theta }{2}}\\&-\,2\eta \left( {\int _{t-\tau }^t {k_2 e^{T}(v)e(v)dv} } \right) ^{\frac{1+\theta }{2}}\\\le & {} -2\eta \left( {\left| {e^{T}(t)e(t)} \right| {+}\int _{t-\tau }^t {k_2 e^{T}(v)e(v)dv} } \right) ^{\frac{1{+}\theta }{2}} \\= & {} -2\eta V(t)^{\frac{1+\theta }{2}}. \end{aligned}$$

On the basis of Lemma 1, the trivial solution of (12) is globally stochastically finite-time stable. It means that the finite-time synchronization of systems (1) and (2) is achieved for almost all initial data, and the finite time is estimated by

$$\begin{aligned} E\left[ {T_0 } \right] \le T=\frac{(V(0))^{\frac{1-\theta }{2}}}{\eta (1-\theta )}, \end{aligned}$$
(16)

where \(V(0)=(1+k_2 \tau )\sum _{i=1}^N {e_i^2 (0)} \) for a constant initial error history on \([-\tau ,0]\). Theorem 1 is proved.

Remark 1

From conditions (i) and (ii) of Theorem 1, it is known that, for noise of any intensity, there are sufficiently large positive constants \(k_1 \) and \(k_2 \) such that finite-time synchronization of the neural networks is obtained in probability. That is to say, this kind of synchronization is robust to noise perturbations.

Remark 2

In the light of the Itô formula, it is obvious that the decay rate of the function V(t) depends on the magnitude of \(\mathcal{L}V\). That is, the bound on \(\mathcal{L}V\) dominates the convergence speed of the error system (10), and hence the synchronization time of systems (1) and (2).

Remark 3

From Eq. (16), it is easy to see that, for fixed initial values, the convergence time of the proposed algorithm is closely related to the protocol parameters \(\eta \) and \(\theta \). The following conclusions can be further obtained: (a) For a fixed value of \(\theta \), the synchronization time decreases as \(\eta \) increases. (b) If \(\eta \) is fixed, the synchronization time becomes longer as \(\theta \) increases when \(1-2/\ln V(0)<\theta <1\), while it becomes shorter as \(\theta \) increases when \(0<\theta <1-2/\ln V(0)\); this follows from the derivative of T with respect to \(\theta \). This result depends on the initial values of the systems. Therefore, in the following numerical simulations, we only consider the influence of \(\eta \) on the synchronization time.
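The threshold \(\theta ^{*}=1-2/\ln V(0)\) in conclusion (b) can be checked numerically by minimizing \(T(\theta )\) from Eq. (16) over a grid; the values of \(\eta \) and \(V(0)\) below are illustrative (\(V(0)>e^{2}\) is needed for \(\theta ^{*}>0\)).

```python
import numpy as np

# Check of Remark 3: T(theta) = V0^((1-theta)/2) / (eta (1 - theta)) is
# minimized at theta* = 1 - 2/ln(V0), provided V0 > e^2 so that theta* > 0.
eta, V0 = 6.0, 50.0
theta = np.linspace(0.01, 0.99, 9801)           # grid step 1e-4
T = V0 ** ((1 - theta) / 2) / (eta * (1 - theta))
theta_star = 1 - 2 / np.log(V0)
print(theta[np.argmin(T)], theta_star)  # both near 0.489
```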

4 Numerical simulations

In this section, numerical simulations implemented in MATLAB are presented to verify the feasibility and effectiveness of the proposed scheme. To this end, we consider the following neural network with a single time-delay [40]:

$$\begin{aligned} \dot{x}_i (t)= & {} -a_i x_i (t)+\sum _{j=1}^N {b_{ij} } f(x_j (t))\nonumber \\&+\sum _{j=1}^N {c_{ij} } f(x_j (t-\tau ))\nonumber \\&+I_i, \quad (i=1,2,\ldots ,N), \end{aligned}$$
(17)

where \(a_i >0\), \(b_{ij}\) and \(c_{ij}\) are real numbers, \(\tau >0\) is the time-delay, and \(I_i \,(i=1,2,\ldots ,N)\) are external inputs. The input–output transfer function f(x) is selected as \(\tanh (x)\), which gives \(L_f =L_g =1\). Let

$$\begin{aligned}&x=(x_1 (t),x_2 (t),\ldots ,x_N (t))^{T},\\&f(x)=(f(x_1 (t)),f(x_2 (t)),\ldots ,f(x_N (t)))^{T},\\&f(x(t-\tau ))=(f(x_1 (t-\tau )),f(x_2 (t-\tau )),\\&\quad \ldots ,f(x_N (t-\tau )))^{T},\\&I=(I_1,I_2,\ldots ,I_N )^{T},\\&A=\mathrm{diag}(a_i )_{N\times N},\\&B=(b_{ij} )_{N\times N},\\&C=(c_{ij} )_{N\times N}, \end{aligned}$$

and then system (17) can be rewritten as

$$\begin{aligned} \dot{x}(t)=-Ax(t)+Bf(x(t))+Cf(x(t-\tau ))+I,\nonumber \\ \end{aligned}$$
(18)

which is taken as the master system and the corresponding slave system is

$$\begin{aligned} \dot{y}(t)= & {} -Ay(t)+Bf(y(t))+Cf(y(t-\tau ))+I\nonumber \\&+\,\delta (e(t))\dot{W}(t)+U, \end{aligned}$$
(19)

where controller vector U is taken as (11).

In the simulations, the time-delay is \(\tau =1\) and the control strengths are \(k_1 =20\), \(k_2 =2\). The initial values are taken randomly from the interval \([-1.0, 1.0]\). In addition, we choose \(\delta (e(t))=\sqrt{2\delta _0 }e(t)\), which satisfies \(\hbox {trace}(\delta ^{T}\delta )\le 2\delta _0 e^{T}(t)e(t)\), i.e., Assumption 2 with \(q=\delta _0 \). To investigate the effect of the number of nodes on the synchronization time, we select neural networks with two, five and ten nodes, respectively. Furthermore, to compare the convergence rates of the systems, we consider the total error function

$$\begin{aligned} E(t)=\sqrt{\sum _{i=1}^N {e_i^2 (t)} }. \end{aligned}$$
(20)
Fig. 1

The time evolution curves of the drive system (18) and the response system (19) without controller, \(N=2\), \(\eta =6.0\), \(\theta =0.01\), (a) \(x_1,y_1\); (b) \(x_2,y_2 \)

Case 1 When \(N=2\), we take \(A=\left( {{\begin{array}{cc} 1&{} 0 \\ 0&{} 1 \\ \end{array} }} \right) \), \(B=\left( {{\begin{array}{cc} {3.0}&{} {5.0} \\ {0.1}&{} {2.0} \\ \end{array} }} \right) \), \(C=\left( {{\begin{array}{cc} {-2.5}&{} {0.2} \\ {0.1}&{} {-1.5} \\ \end{array} }} \right) \), \(\theta =0.01\) and \(\eta =6.0\). Figure 1 shows the time evolution curves of the drive system (18) and the response system (19) without the controller, from which it can be seen that the two systems gradually separate from each other over time. To demonstrate the main result, the error dynamics of systems (18) and (19) with controller (11) are simulated under several conditions. Figure 2 depicts the error dynamics of systems (18) and (19) when \(N=2\), \(\theta =0.01\) and \(\eta =6.0\), which shows that stochastic synchronization can be realized in finite time for the 2-node neural network. Figure 3 gives the evolution of the total error function E(t) in (20) with the same protocol parameter values as Fig. 2.

Fig. 2

The error dynamics of systems (18) and (19) with controller, \(N=2\), \(\eta =6.0\), \(\theta =0.01\)

Case 2 When \(N=5\), the matrices are chosen as

$$\begin{aligned} A= & {} \left( {{\begin{array}{ccccc} 1&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 1&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 1&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 1&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 1 \\ \end{array} }} \right) ,\\ B= & {} \left( {{\begin{array}{ccccc} {3.0}&{} {5.0}&{} {4.0}&{} {6.0}&{} {3.0} \\ {0.1}&{} {2.0}&{} {3.0}&{} {4.0}&{} {1.0} \\ {0.3}&{} {0.8}&{} {1.0}&{} {2.5}&{} {5.0} \\ {0.4}&{} {0.9}&{} {0.6}&{} {4.0}&{} {8.0} \\ {0.2}&{} {0.7}&{} {0.8}&{} {0.6}&{} {5.0} \\ \end{array} }} \right) ,\\ C= & {} \left( {{\begin{array}{ccccc} {2.5}&{} {0.2}&{} {0.1}&{} {0.3}&{} {0.1} \\ {0.1}&{} {-1.5}&{} {0.4}&{} {0.5}&{} {0.3} \\ {0.7}&{} {1.0}&{} {-0.5}&{} {0.2}&{} {0.5} \\ {0.6}&{} {2.0}&{} {0.7}&{} {-3.5}&{} {0.1} \\ {0.5}&{} {0.8}&{} {0.6}&{} {0.8}&{} {-4.5} \\ \end{array} }} \right) . \end{aligned}$$

Figure 4 traces the error evolution of systems (18) and (19) when \(\theta =0.01\) and \(\eta =6.0\), which shows that finite-time stochastic synchronization can be obtained when the neural network possesses five nodes. Figure 5 depicts the dynamical behavior of the total error function E(t) in (20) with the same protocol parameter values as Fig. 4.

Case 3 When \(N=10\), A is taken as the identity matrix of order 10 and the matrices B, C are chosen as

$$\begin{aligned} B= & {} \left( {{\begin{array}{cccccccccc} {8.0}&{} {2.0}&{} {1.0}&{} {3.0}&{} {1.0}&{} 0&{} 0&{} {2.0}&{} 0&{} {1.0} \\ {0.1}&{} {6.5}&{} {2.0}&{} {4.0}&{} {1.0}&{} 0&{} 0&{} {2.0}&{} {1.0}&{} {3.0} \\ {0.3}&{} {0.6}&{} {5.0}&{} {2.0}&{} {4.0}&{} {1.0}&{} 0&{} {1.0}&{} 0&{} {2.0} \\ {0.5}&{} {0.9}&{} {0.2}&{} {4.0}&{} {8.0}&{} {3.0}&{} {2.0}&{} {1.0}&{} 0&{} {1.0} \\ {0.2}&{} {0.7}&{} {0.8}&{} {0.6}&{} {6.0}&{} {5.0}&{} {2.0}&{} {4.0}&{} {3.0}&{} 0 \\ {0.1}&{} {0.3}&{} {0.4}&{} 0&{} {0.3}&{} {5.5}&{} {8.0}&{} {3.0}&{} 0&{} {1.0} \\ 0&{} {0.8}&{} {0.9}&{} {0.5}&{} {0.2}&{} {0.2}&{} {7.0}&{} {2.0}&{} {5.0}&{} {1.0} \\ 0&{} {0.8}&{} {0.9}&{} {0.3}&{} {0.1}&{} 0&{} {0.3}&{} {3.5}&{} {3.0}&{} {2.0} \\ 0&{} {0.8}&{} {0.2}&{} {0.5}&{} {0.2}&{} {0.1}&{} {0.3}&{} {0.5}&{} {2.0}&{} {3.0} \\ 0&{} {0.5}&{} {0.6}&{} {0.3}&{} {0.8}&{} {0.5}&{} 0&{} 0&{} {0.2}&{} {3.0} \\ \end{array} }} \right) ,\\ C= & {} \left( {{\begin{array}{cccccccccc} {-7.5}&{} {0.1}&{} {0.2}&{} {0.9}&{} {0.8}&{} {0.6}&{} {0.5}&{} {0.6}&{} {0.4}&{} {0.2} \\ {0.7}&{} {-6.0}&{} {0.1}&{} {1.0}&{} {0.8}&{} {0.7}&{} {0.5}&{} {0.6}&{} {0.3}&{} 0 \\ {0.5}&{} {0.8}&{} {-4.5}&{} {0.2}&{} {0.4}&{} {0.5}&{} {0.6}&{} {0.4}&{} 0&{} {0.1} \\ {0.6}&{} {0.7}&{} {0.9}&{} {-3.5}&{} {0.1}&{} {1.0}&{} {0.5}&{} {1.2}&{} {0.7}&{} {1.0} \\ {0.5}&{} {0.8}&{} {0.6}&{} {0.8}&{} {-5.5}&{} {0.6}&{} {0.9}&{} {1.0}&{} {0.5}&{} 0 \\ {0.2}&{} {0.6}&{} {0.4}&{} {0.7}&{} {0.8}&{} {-5.0}&{} {3.0}&{} {2.0}&{} {1.0}&{} {0.8} \\ {0.4}&{} {0.3}&{} {0.6}&{} {0.9}&{} {0.7}&{} {0.9}&{} {-6.5}&{} {0.3}&{} {0.2}&{} {0.1} \\ {0.3}&{} {0.4}&{} {0.5}&{} {0.6}&{} {0.7}&{} {0.8}&{} {0.1}&{} {-3.0}&{} {2.0}&{} {0.3} \\ 0&{} {0.2}&{} {0.4}&{} {0.8}&{} {0.7}&{} {0.6}&{} {0.3}&{} {0.5}&{} {-1.5}&{} {1.0} \\ {0.1}&{} {0.3}&{} {0.2}&{} {0.4}&{} {0.8}&{} {0.7}&{} {0.6}&{} {0.5}&{} {0.1}&{} {-2.5} \\ \end{array} }} \right) . \end{aligned}$$

Figure 6 draws the time evolution of the error dynamics of systems (18) and (19) when \(\theta =0.01\) and \(\eta =6.0\), from which we know that finite-time stochastic synchronization can be obtained for the 10-node neural network. Figure 7 pictures the dynamics of the total error function E(t) in (20) with \(N=10\), \(\theta =0.01\), \(\eta =6.0\). Furthermore, comparing Figs. 3, 5 and 7, it is obvious that the total error function of a neural network with more nodes converges more slowly than that of one with fewer nodes.

To examine the stochastic synchronization time as a function of the parameter \(\eta \), Fig. 8 describes the evolution of E(t) over time t when \(N=2\), \(\theta =0.01\) and \(\eta \) takes different values, which indicates that the time required to realize finite-time stochastic synchronization decreases as \(\eta \) increases. This phenomenon is consistent with Remark 3.

Fig. 3

The time evolution curve of E(t) in (20) with controller, \(N=2\), \(\eta =6.0\), \(\theta =0.01\)

Fig. 4

The error dynamics of systems (18) and (19) with controller, \(N=5\), \(\eta =6.0\), \(\theta =0.01\)

Fig. 5

The time evolution curve of E(t) in (20) with controller, \(N=5\), \(\eta =6.0\), \(\theta =0.01\)

Fig. 6

Error dynamics of systems (18) and (19) with controller, \(N=10\), \(\eta =6.0\), \(\theta =0.01\). (a) Evolution curve of \(e_i (i=1,2,\ldots ,10)\), (b) enlargement of the corresponding portion in (a) for \(t\in [0,200]\) and \(e_i \in [-10,1]\)

Fig. 7

The time evolution curve of E(t) in (20) with controller, \(N=10\), \(\eta =6.0\), \(\theta =0.01\)

Fig. 8

The evolution of E(t) in (20) along time t for \(N=2\), \(\theta =0.01\) and different values of \(\eta \)

5 Conclusions and future works

In this paper, based on the finite-time stability theory of stochastic differential equations, stochastic synchronization of time-delay neural networks with noise disturbance is achieved in finite time via suitable controllers. This result is not only obtained by theoretical analysis but also verified by numerical simulations. Furthermore, factors affecting the convergence rate are described. When the number of nodes in the neural network is fixed, a larger \(\eta \) helps improve the convergence rate. Under the same conditions, the convergence time is positively correlated with the number of nodes: the fewer the nodes, the less time is required to achieve stochastic synchronization.

Because the discussed neural networks take into account both time-delay and noise disturbance, the model is attractive and practical for understanding the dynamic behavior of neural networks, and it will be helpful for their applications.

In this work, the time-delay is assumed to be constant. For systems with time-varying delay, adaptive control [41, 42] can be used to realize the finite-time stochastic synchronization of neural networks. Our next work is to investigate this problem to further understand the complexity mechanisms of neural systems and to avoid unfavorable phenomena as much as possible.